{"title": "Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks", "url": "https://openreview.net/forum?id=aVh9KRZdRk", "detail_url": "https://openreview.net/forum?id=aVh9KRZdRk", "authors": "Tianyu He,Darshil Doshi,Aritra Das,Andrey Gromov", "tags": "NIPS 2024,Oral", "abstract": "Large language models can solve tasks that were not present in the training set. This capability is believed to be due to in-context learning and skill composition. In this work, we study the emergence of in-context learning and skill composition in a collection of modular arithmetic tasks. Specifically, we consider a finite collection of linear modular functions $z = a x + b y \\text{ mod } p$ labeled by the vector $(a, b) \\in \\mathbb{Z}_p^2$. We use some of these tasks for pre-training and the rest for out-of-distribution testing. We empirically show that a GPT-style transformer exhibits a transition from in-distribution to out-of-distribution generalization as the number of pre-training tasks increases. We find that the smallest model capable of out-of-distribution generalization requires two transformer blocks, while for deeper models, the out-of-distribution generalization phase is *transient*, necessitating early stopping. Finally, we perform an interpretability study of the pre-trained models, revealing highly structured representations in both attention heads and MLPs; and discuss the learned algorithms. Notably, we find an algorithmic shift in deeper models, as we go from few to many in-context examples.", "pdf": "https://openreview.net/pdf/5737b58d308dafc16130635934df4276a7a574aa.pdf"} {"title": "Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes", "url": "https://openreview.net/forum?id=REIK4SZMJt", "detail_url": "https://openreview.net/forum?id=REIK4SZMJt", "authors": "Spencer Rooke,Zhaoze Wang,Ronald W Di Tullio,Vijay Balasubramanian", "tags": "NIPS 2024,Oral", "abstract": "Many animals learn cognitive maps of their environment - a simultaneous representation of context, experience, and position. Place cells in the hippocampus, named for their explicit encoding of position, are believed to be a neural substrate of these maps, with place cell \"remapping\" explaining how this system can represent different contexts. Briefly, place cells alter their firing properties, or \"remap\", in response to changes in experiential or sensory cues. Substantial sensory changes, produced, e.g., by moving between environments, cause large subpopulations of place cells to change their tuning entirely. While many studies have looked at the physiological basis of remapping, we lack explicit calculations of how the contextual capacity of the place cell system changes as a function of place field firing properties. Here, we propose a geometric approach to understanding population level activity of place cells. Using known firing field statistics, we investigate how changes to place cell firing properties affect the distances between representations of different environments within firing rate space. Using this approach, we find that the number of contexts storable by the hippocampus grows exponentially with the number of place cells, and calculate this exponent for environments of different sizes. We identify a fundamental trade-off between high resolution encoding of position and the number of storable contexts. 
This trade-off is tuned by place cell width, which might explain the change in firing field scale along the dorsal-ventral axis of the hippocampus. We demonstrate that clustering of place cells near likely points of confusion, such as boundaries, increases the contextual capacity of the place system within our framework and conclude by discussing how our geometric approach could be extended to include other cell types and abstract spaces.", "pdf": "https://openreview.net/pdf/9753767cc23ca7180fd4278699c23a3b28c99199.pdf"} {"title": "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction", "url": "https://openreview.net/forum?id=gojL67CfS8", "detail_url": "https://openreview.net/forum?id=gojL67CfS8", "authors": "Keyu Tian,Yi Jiang,Zehuan Yuan,BINGYUE PENG,Liwei Wang", "tags": "NIPS 2024,Oral", "abstract": "We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines the autoregressive learning on images as coarse-to-fine \"next-scale prediction\" or \"next-resolution prediction\", diverging from the standard raster-scan \"next-token prediction\". This simple, intuitive methodology allows autoregressive (AR) transformers to learn visual distributions fast and generalize well: VAR, for the first time, makes GPT-style AR models surpass diffusion transformers in image generation. On the ImageNet 256x256 benchmark, VAR significantly improves the AR baseline, improving Frechet inception distance (FID) from 18.65 to 1.73 and inception score (IS) from 80.4 to 350.2, with around 20x faster inference speed. It is also empirically verified that VAR outperforms the Diffusion Transformer (DiT) in multiple dimensions including image quality, inference speed, data efficiency, and scalability. Scaling up VAR models exhibits clear power-law scaling laws similar to those observed in LLMs, with linear correlation coefficients near -0.998 as solid evidence. VAR further showcases zero-shot generalization ability in downstream tasks including image in-painting, out-painting, and editing. These results suggest VAR has initially emulated the two important properties of LLMs: Scaling Laws and zero-shot task generalization. We have released all models and codes to promote the exploration of AR/VAR models for visual generation and unified learning.", "pdf": "https://openreview.net/pdf/1366e6f25deff9942d17a853f81351d6caa8dcdf.pdf"} {"title": "Cracking the Code of Juxtaposition: Can AI Models Understand the Humorous Contradictions", "url": "https://openreview.net/forum?id=bCMpdaQCNW", "detail_url": "https://openreview.net/forum?id=bCMpdaQCNW", "authors": "Zhe Hu,Tuo Liang,Jing Li,Yiren Lu,Yunlai Zhou,Yiran Qiao,Jing Ma,Yu Yin", "tags": "NIPS 2024,Oral", "abstract": "Recent advancements in large vision language models have demonstrated remarkable proficiency across a wide range of tasks. \nYet, these models still struggle with understanding the nuances of human humor through juxtaposition, particularly when it involves nonlinear narratives that underpin many jokes and humor cues. This paper investigates this challenge by focusing on comics with contradictory narratives, where each comic consists of two panels that create a humorous contradiction. We introduce the YesBut benchmark, which comprises tasks of varying difficulty aimed at assessing AI's capabilities in recognizing and interpreting these comics, ranging from literal content comprehension to deep narrative reasoning. 
Through extensive experimentation and analysis of recent commercial or open-sourced large vision language models, we assess their capability to comprehend the complex interplay of the narrative humor inherent in these comics. Our results show that even the state-of-the-art models still struggle with this task. Our findings offer insights into the current limitations and potential improvements for AI in understanding human creative expressions.", "pdf": "https://openreview.net/pdf/1f618d0020c8650176d91ef4418ef3cea6151adb.pdf"} {"title": "Human Expertise in Algorithmic Prediction", "url": "https://openreview.net/forum?id=wpGJ2AX6SZ", "detail_url": "https://openreview.net/forum?id=wpGJ2AX6SZ", "authors": "Rohan Alur,Manish Raghavan,Devavrat Shah", "tags": "NIPS 2024,Oral", "abstract": "We introduce a novel framework for incorporating human expertise into algorithmic predictions. Our approach leverages human judgment to distinguish inputs which are *algorithmically indistinguishable*, or \"look the same\" to predictive algorithms. We argue that this framing clarifies the problem of human-AI collaboration in prediction tasks, as experts often form judgments by drawing on information which is not encoded in an algorithm's training data. Algorithmic indistinguishability yields a natural test for assessing whether experts incorporate this kind of \"side information\", and further provides a simple but principled method for selectively incorporating human feedback into algorithmic predictions. We show that this method provably improves the performance of any feasible algorithmic predictor and precisely quantify this improvement. We find empirically that although algorithms often outperform their human counterparts *on average*, human judgment can improve algorithmic predictions on *specific* instances (which can be identified ex-ante). In an X-ray classification task, we find that this subset constitutes nearly 30% of the patient population. Our approach provides a natural way of uncovering this heterogeneity and thus enabling effective human-AI collaboration.", "pdf": "https://openreview.net/pdf/4f5dc6075a84c5c600343c682e95020208b5f943.pdf"} {"title": "Learning diffusion at lightspeed", "url": "https://openreview.net/forum?id=y10avdRFNK", "detail_url": "https://openreview.net/forum?id=y10avdRFNK", "authors": "Antonio Terpin,Nicolas Lanzetti,Mart\u00edn Gadea,Florian Dorfler", "tags": "NIPS 2024,Oral", "abstract": "Diffusion regulates numerous natural processes and the dynamics of many successful generative models. Existing models to learn the diffusion terms from observational data rely on complex bilevel optimization problems and model only the drift of the system.\nWe propose a new simple model, JKOnet*, which bypasses the complexity of existing architectures while presenting significantly enhanced representational capabilities: JKOnet* recovers the potential, interaction, and internal energy components of the underlying diffusion process. JKOnet* minimizes a simple quadratic loss and outperforms other baselines in terms of sample efficiency, computational complexity, and accuracy. 
Additionally, JKOnet* provides a closed-form optimal solution for linearly parametrized functionals, and, when applied to predict the evolution of cellular processes from real-world data, it achieves state-of-the-art accuracy at a fraction of the computational cost of all existing methods.\nOur methodology is based on the interpretation of diffusion processes as energy-minimizing trajectories in the probability space via the so-called JKO scheme, which we study via its first-order optimality conditions.", "pdf": "https://openreview.net/pdf/71e85a95e3f40ebd277c5df65f9dff3c748e2ddb.pdf"} {"title": "Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning", "url": "https://openreview.net/forum?id=9O2sVnEHor", "detail_url": "https://openreview.net/forum?id=9O2sVnEHor", "authors": "Raffaele Paolino,Sohir Maskey,Pascal Welke,Gitta Kutyniok", "tags": "NIPS 2024,Oral", "abstract": "We introduce $r$-loopy Weisfeiler-Leman ($r$-$\\ell$WL), a novel hierarchy of graph isomorphism tests and a corresponding GNN framework, $r$-$\\ell$MPNN, that can count cycles up to length $r{+}2$. Most notably, we show that $r$-$\\ell$WL can count homomorphisms of cactus graphs. This extends 1-WL, which can only count homomorphisms of trees and, in fact, is incomparable to $k$-WL for any fixed $k$. We empirically validate the expressive and counting power of $r$-$\\ell$MPNN on several synthetic datasets and demonstrate the scalability and strong performance on various real-world datasets, particularly on sparse graphs.", "pdf": "https://openreview.net/pdf/160b0368f27f6ae00575a4abc8d44870237c95f9.pdf"} {"title": "Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought", "url": "https://openreview.net/forum?id=pC44UMwy2v", "detail_url": "https://openreview.net/forum?id=pC44UMwy2v", "authors": "Qiguang Chen,Libo Qin,Jiaqi WANG,Jingxuan Zhou,Wanxiang Che", "tags": "NIPS 2024,Oral", "abstract": "Chain-of-Thought (CoT) reasoning has emerged as a promising approach for enhancing the performance of large language models (LLMs) on complex reasoning tasks. Recently, a series of studies attempt to explain the mechanisms underlying CoT, aiming to deepen the understanding of its efficacy. Nevertheless, the existing research faces two major challenges: (1) a lack of quantitative metrics to assess CoT capabilities and (2) a dearth of guidance on optimizing CoT performance. Motivated by this, in this work, we introduce a novel reasoning boundary framework (RBF) to address these challenges. To solve the lack of quantification, we first define a reasoning boundary (RB) to quantify the upper-bound of CoT and establish a combination law for RB, enabling a practical quantitative approach applicable to various real-world CoT tasks. To address the lack of optimization, we propose three categories of RBs. We further optimize these categories with combination laws focused on RB promotion and reasoning path optimization for CoT improvement. Through extensive experiments on 27 models and 5 tasks, the study validates the existence and rationality of the proposed framework. Furthermore, it explains the effectiveness of 10 CoT strategies and guides optimization from two perspectives. We hope this work can provide a comprehensive understanding of the boundaries and optimization strategies for reasoning in LLMs. 
Our code and data are available at https://github.com/LightChen233/reasoning-boundary.", "pdf": "https://openreview.net/pdf/47a165ca745dea00bf9fe4ba52210932fb6d1787.pdf"} {"title": "Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity", "url": "https://openreview.net/forum?id=qf2uZAdy1N", "detail_url": "https://openreview.net/forum?id=qf2uZAdy1N", "authors": "Philip Amortila,Dylan J Foster,Nan Jiang,Akshay Krishnamurthy,Zakaria Mhammedi", "tags": "NIPS 2024,Oral", "abstract": "Real-world applications of reinforcement learning often involve environments where agents operate on complex, high-dimensional observations, but the underlying (``latent'') dynamics are comparatively simple. However, beyond restrictive settings\n such as tabular latent dynamics, the fundamental statistical requirements and algorithmic principles for *reinforcement learning under latent dynamics* are poorly\n understood.\n\n This paper addresses the question of reinforcement learning under *general latent dynamics* from a\n statistical and algorithmic perspective. On the statistical side, our main negative\nresult shows that *most* well-studied settings for reinforcement learning with function approximation become intractable when composed with rich observations; we complement this with a positive result, identifying *latent pushforward coverability* as a\ngeneral condition that enables statistical tractability. Algorithmically, we develop provably efficient *observable-to-latent* reductions ---that is, reductions that transform an arbitrary algorithm for the\n latent MDP into an algorithm that can operate on rich observations--- in two settings: one where the agent has access to hindsight\nobservations of the latent dynamics (Lee et al., 2023) and one\nwhere the agent can estimate *self-predictive* latent models (Schwarzer et al., 2020). Together, our results serve as a\n first step toward a unified statistical and algorithmic theory for\nreinforcement learning under latent dynamics.", "pdf": "https://openreview.net/pdf/17710a946394531d22cd1cf32e0a7fd7bac1e6ac.pdf"} {"title": "Generalization Error Bounds for Two-stage Recommender Systems with Tree Structure", "url": "https://openreview.net/forum?id=m1a4CrRJR7", "detail_url": "https://openreview.net/forum?id=m1a4CrRJR7", "authors": "Jin Zhang,Ze Liu,Defu Lian,Enhong Chen", "tags": "NIPS 2024,Oral", "abstract": "Two-stage recommender systems play a crucial role in efficiently identifying relevant items and personalizing recommendations from a vast array of options. This paper, based on an error decomposition framework, analyzes the generalization error for two-stage recommender systems with a tree structure, which consist of an efficient tree-based retriever and a more precise yet time-consuming ranker. We use the Rademacher complexity to establish the generalization upper bound for various tree-based retrievers using beam search, as well as for different ranker models under a shifted training distribution. 
Both theoretical insights and practical experiments on real-world datasets indicate that increasing the number of branches in tree-based retrievers and harmonizing distributions across stages can enhance the generalization performance of two-stage recommender systems.", "pdf": "https://openreview.net/pdf/0573ad42adbbc93100e6c898b23c116d78de695b.pdf"} {"title": "Aligner: Efficient Alignment by Learning to Correct", "url": "https://openreview.net/forum?id=kq166jACVP", "detail_url": "https://openreview.net/forum?id=kq166jACVP", "authors": "Jiaming Ji,Boyuan Chen,Hantao Lou,Donghai Hong,Borong Zhang,Xuehai Pan,Tianyi Qiu,Juntao Dai,Yaodong Yang", "tags": "NIPS 2024,Oral", "abstract": "With the rapid development of large language models (LLMs) and ever-evolving practical requirements, finding an efficient and effective alignment method has never been more critical. However, the tension between the complexity of current alignment methods and the need for rapid iteration in deployment scenarios necessitates the development of a model-agnostic alignment approach that can operate under these constraints. In this paper, we introduce Aligner, a novel and simple alignment paradigm that learns the correctional residuals between preferred and dispreferred answers using a small model. Designed as a model-agnostic, plug-and-play module, Aligner can be directly applied to various open-source and API-based models with only one-off training, making it suitable for rapid iteration. Notably, Aligner can be applied to any powerful, large-scale upstream model. Moreover, it can even iteratively bootstrap the upstream models using corrected responses as synthetic human preference data, breaking through the model's performance ceiling. Our experiments demonstrate performance improvements by deploying the same Aligner model across 11 different LLMs, evaluated on the 3H dimensions (helpfulness, harmlessness, and honesty). Specifically, Aligner-7B has achieved an average improvement of 68.9\\% in helpfulness and 23.8\\% in harmlessness across the tested LLMs while also effectively reducing hallucination. On the Alpaca-Eval leaderboard, stacking Aligner-2B on GPT-4 Turbo improved its LC Win Rate from 55.0\\% to 58.3\\%, surpassing GPT-4 Omni's 57.5\\% Win Rate (community report).", "pdf": "https://openreview.net/pdf/80ca837e0c7f9e0d8dbf5b1edefbdf611c8ded34.pdf"} {"title": "Questioning the Survey Responses of Large Language Models", "url": "https://openreview.net/forum?id=Oo7dlLgqQX", "detail_url": "https://openreview.net/forum?id=Oo7dlLgqQX", "authors": "Ricardo Dominguez-Olmedo,Moritz Hardt,Celestine Mendler-D\u00fcnner", "tags": "NIPS 2024,Oral", "abstract": "Surveys have recently gained popularity as a tool to study large language models. By comparing models\u2019 survey responses to those of different human reference populations, researchers aim to infer the demographics, political opinions, or values best represented by current language models. In this work, we critically examine language models' survey responses on the basis of the well-established American Community Survey by the U.S. Census Bureau. Evaluating 43 different language models using de-facto standard prompting methodologies, we establish two dominant patterns. First, models' responses are governed by ordering and labeling biases, for example, towards survey responses labeled with the letter \u201cA\u201d. 
Second, when adjusting for these systematic biases through randomized answer ordering, models across the board trend towards uniformly random survey responses, irrespective of model size or training data. As a result, models consistently appear to better represent subgroups whose aggregate statistics are closest to uniform for the survey under consideration, leading to potentially misguided conclusions about model alignment.", "pdf": "https://openreview.net/pdf/6a9813651d8de7fdc565ddb5dacecf057526a29a.pdf"} {"title": "Stochastic Taylor Derivative Estimator: Efficient amortization for arbitrary differential operators", "url": "https://openreview.net/forum?id=J2wI2rCG2u", "detail_url": "https://openreview.net/forum?id=J2wI2rCG2u", "authors": "Zekun Shi,Zheyuan Hu,Min Lin,Kenji Kawaguchi", "tags": "NIPS 2024,Oral", "abstract": "Optimizing neural networks with losses that contain high-dimensional and high-order differential operators\n is expensive with back-propagation due to $\mathcal{O}(d^{k})$ scaling of the derivative tensor size and the $\mathcal{O}(2^{k-1}L)$ scaling in the computation graph, where $d$ is the dimension of the domain, $L$ is the number of ops in the forward computation graph, and $k$ is the derivative order. In previous works, the polynomial scaling in $d$ was addressed by amortizing the computation over the optimization process via randomization. Separately, the exponential scaling in $k$ for univariate functions ($d=1$) was addressed with high-order auto-differentiation (AD). In this work, we show how to efficiently perform arbitrary contraction of the derivative tensor of arbitrary order for multivariate functions, by properly constructing the input tangents to univariate high-order AD, which can be used to efficiently randomize any differential operator.\n When applied to Physics-Informed Neural Networks (PINNs), our method provides >1000$\times$ speed-up and >30$\times$ memory reduction over randomization with first-order AD, and we can now solve 1-million-dimensional PDEs in 8 minutes on a single NVIDIA A100 GPU. This work opens the possibility of using high-order differential operators in large-scale problems.", "pdf": "https://openreview.net/pdf/525882bf51a6cb819e7762a437a606419814f5c7.pdf"} {"title": "Do Finetti: On Causal Effects for Exchangeable Data", "url": "https://openreview.net/forum?id=4rCZeCZAON", "detail_url": "https://openreview.net/forum?id=4rCZeCZAON", "authors": "Siyuan Guo,Chi Zhang,Karthika Mohan,Ferenc Husz\u00e1r,Bernhard Sch\u00f6lkopf", "tags": "NIPS 2024,Oral", "abstract": "We study causal effect estimation in a setting where the data are not i.i.d.$\ $(independent and identically distributed). We focus on exchangeable data satisfying an assumption of independent causal mechanisms. Traditional causal effect estimation frameworks, e.g., relying on structural causal models and do-calculus, are typically limited to i.i.d. data and do not extend to more general exchangeable generative processes, which naturally arise in multi-environment data. To address this gap, we develop a generalized framework for exchangeable data and introduce a truncated factorization formula that facilitates both the identification and estimation of causal effects in our setting. To illustrate potential applications, we introduce a causal P\u00f3lya urn model and demonstrate how intervention propagates effects in exchangeable data settings. 
Finally, we develop an algorithm that performs simultaneous causal discovery and effect estimation given multi-environment data.", "pdf": "https://openreview.net/pdf/8f348634669f055ea725df69d4de4fac31b49194.pdf"} {"title": "LLM Evaluators Recognize and Favor Their Own Generations", "url": "https://openreview.net/forum?id=4NJBV6Wp0h", "detail_url": "https://openreview.net/forum?id=4NJBV6Wp0h", "authors": "Arjun Panickssery,Samuel R. Bowman,Shi Feng", "tags": "NIPS 2024,Oral", "abstract": "Self-evaluation using large language models (LLMs) has proven valuable not only in benchmarking but also in methods like reward modeling, constitutional AI, and self-refinement. But new biases are introduced due to the same LLM acting as both the evaluator and the evaluatee. One such bias is self-preference, where an LLM evaluator scores its own outputs higher than others\u2019 while human annotators consider them of equal quality. But do LLMs actually recognize their own outputs when they give those texts higher scores, or is it just a coincidence? In this paper, we investigate if self-recognition capability contributes to self-preference. We discover that, out of the box, LLMs such as GPT-4 and Llama 2 have non-trivial accuracy at distinguishing themselves from other LLMs and humans. By finetuning LLMs, we discover a linear correlation between self-recognition capability and the strength of self-preference bias; using controlled experiments, we show that the causal explanation resists straightforward confounders. We discuss how self-recognition can interfere with unbiased evaluations and AI safety more generally.", "pdf": "https://openreview.net/pdf/17f3e3ce067de145352b0881a5a5a351cfcceac4.pdf"} {"title": "Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs", "url": "https://openreview.net/forum?id=pGEY8JQ3qx", "detail_url": "https://openreview.net/forum?id=pGEY8JQ3qx", "authors": "Matthew Zurek,Yudong Chen", "tags": "NIPS 2024,Oral", "abstract": "We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\widetilde{O}\left(SA\frac{\mathsf{H}}{\varepsilon^2} \right)$, where $\mathsf{H}$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S,A,\mathsf{H}$, and $\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. We also initiate the study of sample complexity in general (multichain) average-reward MDPs. We argue a new transient time parameter $\mathsf{B}$ is necessary, establish an $\widetilde{O}\left(SA\frac{\mathsf{B} + \mathsf{H}}{\varepsilon^2} \right)$ complexity bound, and prove a matching (up to log factors) minimax lower bound. Both results are based on reducing the average-reward MDP to a discounted MDP, which requires new ideas in the general setting. 
To optimally analyze this reduction, we develop improved bounds for $\\gamma$-discounted MDPs, showing that $\\widetilde{O}\\left(SA\\frac{\\mathsf{H}}{(1-\\gamma)^2\\varepsilon^2} \\right)$ and $\\widetilde{O}\\left(SA\\frac{\\mathsf{B} + \\mathsf{H}}{(1-\\gamma)^2\\varepsilon^2} \\right)$ samples suffice to learn $\\varepsilon$-optimal policies in weakly communicating and in general MDPs, respectively. Both these results circumvent the well-known minimax lower bound of $\\widetilde{\\Omega}\\left(SA\\frac{1}{(1-\\gamma)^3\\varepsilon^2} \\right)$ for $\\gamma$-discounted MDPs, and establish a quadratic rather than cubic horizon dependence for a fixed MDP instance.", "pdf": "https://openreview.net/pdf/2ff245e09d2ec82378e2aa6ffea57a9ec01c043c.pdf"} {"title": "Learning Formal Mathematics From Intrinsic Motivation", "url": "https://openreview.net/forum?id=uNKlTQ8mBD", "detail_url": "https://openreview.net/forum?id=uNKlTQ8mBD", "authors": "Gabriel Poesia,David Broman,Nick Haber,Noah Goodman", "tags": "NIPS 2024,Oral", "abstract": "How did humanity coax mathematics from the aether? We explore the Platonic view that mathematics can be discovered from its axioms---a game of conjecture and proof. We describe an agent that jointly learns to pose challenging problems for itself (conjecturing) and solve them (theorem proving). Given a mathematical domain axiomatized in dependent type theory, we first combine methods for constrained decoding and type-directed synthesis to sample valid conjectures from a language model. Our method guarantees well-formed conjectures by construction, even as we start with a randomly initialized model. We use the same model to represent a policy and value function for guiding proof search. Our agent targets generating hard but provable conjectures --- a moving target, since its own theorem proving ability also improves as it trains. We propose novel methods for hindsight relabeling on proof search trees to significantly improve the agent's sample efficiency in both tasks. Experiments on 3 axiomatic domains (propositional logic, arithmetic and group theory) demonstrate that our agent can bootstrap from only the axioms, self-improving in generating true and challenging conjectures and in finding proofs.", "pdf": "https://openreview.net/pdf/42d3b14720041d447c657071a08de640733954a0.pdf"} {"title": "Identification and Estimation of the Bi-Directional MR with Some Invalid Instruments", "url": "https://openreview.net/forum?id=S2P6KPLtm8", "detail_url": "https://openreview.net/forum?id=S2P6KPLtm8", "authors": "Feng Xie,Zhen Yao,Lin Xie,Yan Zeng,Zhi Geng", "tags": "NIPS 2024,Oral", "abstract": "We consider the challenging problem of estimating causal effects from purely observational data in the bi-directional Mendelian randomization (MR), where some invalid instruments, as well as unmeasured confounding, usually exist. \nTo address this problem, most existing methods attempt to find proper valid instrumental variables (IVs) for the target causal effect by expert knowledge or by assuming that the causal model is a one-directional MR model. \nAs such, in this paper, we first theoretically investigate the identification of the bi-directional MR from observational data. 
In particular, we provide necessary and sufficient conditions under which valid IV sets are correctly identified such that the bi-directional MR model is identifiable, including the causal directions of a pair of phenotypes (i.e., the treatment and outcome).\nMoreover, based on the identification theory, we develop a cluster fusion-like method to discover valid IV sets and estimate the causal effects of interest.\nWe theoretically demonstrate the correctness of the proposed algorithm.\nExperimental results show the effectiveness of our method for estimating causal effects in both one-directional and bi-directional MR models.", "pdf": "https://openreview.net/pdf/7864b4bc0bd0c32d66af795cacadc545cbdd6432.pdf"} {"title": "Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models", "url": "https://openreview.net/forum?id=V0oJaLqY4E", "detail_url": "https://openreview.net/forum?id=V0oJaLqY4E", "authors": "Sangwoong Yoon,Himchan Hwang,Dohyun Kwon,Yung-Kyun Noh,Frank C. Park", "tags": "NIPS 2024,Oral", "abstract": "We present a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when the number of generation time steps is small. Similar to how IRL trains a policy based on the reward function learned from expert demonstrations, we train (or fine-tune) a diffusion model using the log probability density estimated from training data. \nSince we employ an energy-based model (EBM) to represent the log density, our approach boils down to the joint training of a diffusion model and an EBM. Our IRL formulation, named Diffusion by Maximum Entropy IRL (DxMI), is a minimax problem that reaches equilibrium when both models converge to the data distribution. The entropy maximization plays a key role in DxMI, facilitating the exploration of the diffusion model and ensuring the convergence of the EBM. We also propose Diffusion by Dynamic Programming (DxDP), a novel reinforcement learning algorithm for diffusion models, as a subroutine in DxMI. DxDP makes the diffusion model update in DxMI efficient by transforming the original problem into an optimal control formulation where value functions replace back-propagation in time. Our empirical studies show that diffusion models fine-tuned using DxMI can generate high-quality samples in as few as 4 and 10 steps. Additionally, DxMI enables the training of an EBM without MCMC, stabilizing EBM training dynamics and enhancing anomaly detection performance.", "pdf": "https://openreview.net/pdf/fbd48eb1b53fd48de22ddd59edf0d18875315635.pdf"} {"title": "Improving Environment Novelty Quantification for Effective Unsupervised Environment Design", "url": "https://openreview.net/forum?id=UdxpjKO2F9", "detail_url": "https://openreview.net/forum?id=UdxpjKO2F9", "authors": "Jayden Teoh,Wenjun Li,Pradeep Varakantham", "tags": "NIPS 2024,Oral", "abstract": "Unsupervised Environment Design (UED) formalizes the problem of autocurricula through interactive training between a teacher agent and a student agent. The teacher generates new training environments with high learning potential, curating an adaptive curriculum that strengthens the student's ability to handle unseen scenarios. Existing UED methods mainly rely on *regret*, a metric that measures the difference between the agent's optimal and actual performance, to guide curriculum design. 
Regret-driven methods generate curricula that progressively increase environment complexity for the student but overlook environment *novelty* \u2014 a critical element for enhancing an agent's generalizability. Measuring environment novelty is especially challenging due to the underspecified nature of environment parameters in UED, and existing approaches face significant limitations. To address this, this paper introduces the *Coverage-based Evaluation of Novelty In Environment* (CENIE) framework. CENIE proposes a scalable, domain-agnostic, and curriculum-aware approach to quantifying environment novelty by leveraging the student's state-action space coverage from previous curriculum experiences. We then propose an implementation of CENIE that models this coverage and measures environment novelty using Gaussian Mixture Models. By integrating both regret and novelty as complementary objectives for curriculum design, CENIE facilitates effective exploration across the state-action space while progressively increasing curriculum complexity. Empirical evaluations demonstrate that augmenting existing regret-based UED algorithms with CENIE achieves state-of-the-art performance across multiple benchmarks, underscoring the effectiveness of novelty-driven autocurricula for robust generalization.", "pdf": "https://openreview.net/pdf/395c3c5df43310736f6134ab07ff32330b2a8f45.pdf"} {"title": "Enhancing Preference-based Linear Bandits via Human Response Time", "url": "https://openreview.net/forum?id=aIPwlkdOut", "detail_url": "https://openreview.net/forum?id=aIPwlkdOut", "authors": "Shen Li,Yuyang Zhang,Zhaolin Ren,Claire Liang,Na Li,Julie Shah", "tags": "NIPS 2024,Oral", "abstract": "Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. Although binary choices are simple and widely used, they provide limited information about preference strength. To address this, we leverage human response times, which are inversely related to preference strength, as an additional signal. We propose a computationally efficient method that combines choices and response times to estimate human utility functions, grounded in the EZ diffusion model from psychology. Theoretical and empirical analyses show that for queries with strong preferences, response times complement choices by providing extra information about preference strength, leading to significantly improved utility estimation. We incorporate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that using response times significantly accelerates preference learning compared to choice-only approaches. Additional materials, such as code, slides, and talk video, are available at https://shenlirobot.github.io/pages/NeurIPS24.html.", "pdf": "https://openreview.net/pdf/b32d10afd0c5117bb0b9ac42cf07b7786e40cbd9.pdf"} {"title": "Scale Equivariant Graph Metanetworks", "url": "https://openreview.net/forum?id=8Fxqn1tZM1", "detail_url": "https://openreview.net/forum?id=8Fxqn1tZM1", "authors": "Ioannis Kalogeropoulos,Giorgos Bouritsas,Yannis Panagakis", "tags": "NIPS 2024,Oral", "abstract": "This paper pertains to an emerging machine learning paradigm: learning higher-order functions, i.e. functions whose inputs are functions themselves, particularly when these inputs are Neural Networks (NNs). 
With the growing interest in architectures that process NNs, a recurring design principle has permeated the field: adhering to the permutation symmetries arising from the connectionist structure of\nNNs. However, are these the sole symmetries present in NN parameterizations? Zooming into most practical activation functions (e.g. sine, ReLU, tanh) answers this question negatively and gives rise to intriguing new symmetries, which we collectively refer to as scaling symmetries, that is, non-zero scalar multiplications and divisions of weights and biases. In this work, we propose Scale Equivariant Graph MetaNetworks - ScaleGMNs, a framework that adapts the Graph Metanetwork (message-passing) paradigm by incorporating scaling symmetries and thus rendering neuron and edge representations equivariant to valid scalings. We introduce novel building blocks, of independent technical interest, that allow for equivariance or invariance with respect to individual scalar multipliers or their product and use them in all components of ScaleGMN. Furthermore, we prove that, under certain expressivity conditions, ScaleGMN can simulate the forward and backward pass of any input feedforward neural network. Experimental results demonstrate that our method advances the state-of-the-art performance for several datasets and activation functions, highlighting the power of scaling symmetries as an inductive bias for NN processing. The source code is publicly available at https://github.com/jkalogero/scalegmn.", "pdf": "https://openreview.net/pdf/6d3b36cd5d6e1acb5d27b18b7da7333f5c075e0e.pdf"} {"title": "CAT3D: Create Anything in 3D with Multi-View Diffusion Models", "url": "https://openreview.net/forum?id=TFZlFRl9Ks", "detail_url": "https://openreview.net/forum?id=TFZlFRl9Ks", "authors": "Ruiqi Gao,Aleksander Holynski,Philipp Henzler,Arthur Brussee,Ricardo Martin Brualla,Pratul P. Srinivasan,Jonathan T. Barron,Ben Poole", "tags": "NIPS 2024,Oral", "abstract": "Advances in 3D reconstruction have enabled high-quality 3D capture, but require a user to collect hundreds to thousands of images to create a 3D scene. We present CAT3D, a method for creating anything in 3D by simulating this real-world capture process with a multi-view diffusion model. Given any number of input images and a set of target novel viewpoints, our model generates highly consistent novel views of a scene. These generated views can be used as input to robust 3D reconstruction techniques to produce 3D representations that can be rendered from any viewpoint in real-time. CAT3D can create entire 3D scenes in as little as one minute, and outperforms existing methods for single image and few-view 3D scene creation.", "pdf": "https://openreview.net/pdf/a17526d158b6388ba1714b7d1decfdd7ec50e8da.pdf"} {"title": "Stylus: Automatic Adapter Selection for Diffusion Models", "url": "https://openreview.net/forum?id=3Odq2tGSpp", "detail_url": "https://openreview.net/forum?id=3Odq2tGSpp", "authors": "Michael Luo,Justin Wong,Brandon Trabucco,Yanping Huang,Joseph E. Gonzalez,Zhifeng Chen,Russ Salakhutdinov,Ion Stoica", "tags": "NIPS 2024,Oral", "abstract": "Beyond scaling base models with more data or parameters, fine-tuned adapters provide an alternative way to generate high fidelity, custom images at reduced costs. As such, adapters have been widely adopted by open-source communities, accumulating a database of over 100K adapters\u2014most of which are highly customized with insufficient descriptions. 
To generate high quality images, this paper explores the problem of matching the prompt to a set of relevant adapters, building on recent work that highlights the performance gains of composing adapters. We introduce Stylus, which efficiently selects and automatically composes task-specific adapters based on a prompt's keywords. Stylus outlines a three-stage approach that first summarizes adapters with improved descriptions and embeddings, retrieves relevant adapters, and then further assembles adapters based on prompts' keywords by checking how well they fit the prompt. To evaluate Stylus, we developed StylusDocs, a curated dataset featuring 75K adapters with pre-computed adapter embeddings. In our evaluation on popular Stable Diffusion checkpoints, Stylus achieves greater CLIP/FID Pareto efficiency and is twice as preferred, with humans and multimodal models as evaluators, over the base model.", "pdf": "https://openreview.net/pdf/b41be568e09a4892b988b18214b6686115e4ccb9.pdf"} {"title": "The Sample-Communication Complexity Trade-off in Federated Q-Learning", "url": "https://openreview.net/forum?id=6YIpvnkjUK", "detail_url": "https://openreview.net/forum?id=6YIpvnkjUK", "authors": "Sudeep Salgia,Yuejie Chi", "tags": "NIPS 2024,Oral", "abstract": "We consider the problem of Federated Q-learning, where $M$ agents aim to collaboratively learn the optimal Q-function of an unknown infinite horizon Markov Decision Process with finite state and action spaces. We investigate the trade-off between sample and communication complexity for the widely used class of intermittent communication algorithms. We first establish the converse result, where we show that any Federated Q-learning that offers a linear speedup with respect to the number of agents in sample complexity needs to incur a communication cost of at least $\Omega(\frac{1}{1-\gamma})$, where $\gamma$ is the discount factor. We also propose a new Federated Q-learning algorithm, called Fed-DVR-Q, which is the first Federated Q-learning algorithm to simultaneously achieve order-optimal sample and communication complexities. Thus, together these results provide a complete characterization of the sample-communication complexity trade-off in Federated Q-learning.", "pdf": "https://openreview.net/pdf/aa89287b43d0d38cc8ef9cd412964652a0b005cb.pdf"} {"title": "Guiding a Diffusion Model with a Bad Version of Itself", "url": "https://openreview.net/forum?id=bg6fVPVs3s", "detail_url": "https://openreview.net/forum?id=bg6fVPVs3s", "authors": "Tero Karras,Miika Aittala,Tuomas Kynk\u00e4\u00e4nniemi,Jaakko Lehtinen,Timo Aila,Samuli Laine", "tags": "NIPS 2024,Oral", "abstract": "The primary axes of interest in image-generating diffusion models are image quality, the amount of variation in the results, and how well the results align with a given condition, e.g., a class label or a text prompt. The popular classifier-free guidance approach uses an unconditional model to guide a conditional model, leading to simultaneously better prompt alignment and higher-quality images at the cost of reduced variation. These effects seem inherently entangled, and thus hard to control. We make the surprising observation that it is possible to obtain disentangled control over image quality without compromising the amount of variation by guiding generation using a smaller, less-trained version of the model itself rather than an unconditional model. 
This leads to significant improvements in ImageNet generation, setting record FIDs of 1.01 for 64x64 and 1.25 for 512x512, using publicly available networks. Furthermore, the method is also applicable to unconditional diffusion models, drastically improving their quality.", "pdf": "https://openreview.net/pdf/9173da6000cdac7dc5129691366a29747954b7ef.pdf"} {"title": "RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation", "url": "https://openreview.net/forum?id=r5spnrY6H3", "detail_url": "https://openreview.net/forum?id=r5spnrY6H3", "authors": "Changli Wu,Qi Chen,Jiayi Ji,Haowei Wang,Yiwei Ma,You Huang,Gen Luo,Hao Fei,Xiaoshuai Sun,Rongrong Ji", "tags": "NIPS 2024,Oral", "abstract": "3D Referring Expression Segmentation (3D-RES) aims to segment 3D objects by correlating referring expressions with point clouds. However, traditional approaches frequently encounter issues like over-segmentation or mis-segmentation, due to insufficient emphasis on spatial information of instances. In this paper, we introduce a Rule-Guided Spatial Awareness Network (RG-SAN) by utilizing solely the spatial information of the target instance for supervision. This approach enables the network to accurately depict the spatial relationships among all entities described in the text, thus enhancing the reasoning capabilities. The RG-SAN consists of the Text-driven Localization Module (TLM) and the Rule-guided Weak Supervision (RWS) strategy. The TLM initially locates all mentioned instances and iteratively refines their positional information. The RWS strategy, acknowledging that only target objects have supervised positional information, employs dependency tree rules to precisely guide the core instance\u2019s positioning. Extensive testing on the ScanRefer benchmark has shown that RG-SAN not only establishes new performance benchmarks, with an mIoU increase of 5.1 points, but also exhibits significant improvements in robustness when processing descriptions with spatial ambiguity. All codes are available at https://github.com/sosppxo/RG-SAN.", "pdf": "https://openreview.net/pdf/074c8caaa0b5feabaad18b25db6c0ee86ed09863.pdf"} {"title": "VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time", "url": "https://openreview.net/forum?id=5zSCSE0k41", "detail_url": "https://openreview.net/forum?id=5zSCSE0k41", "authors": "Sicheng Xu,Guojun Chen,Yu-Xiao Guo,Jiaolong Yang,Chong Li,Zhenyu Zang,Yizhong Zhang,Xin Tong,Baining Guo", "tags": "NIPS 2024,Oral", "abstract": "We introduce VASA, a framework for generating lifelike talking faces with appealing visual affective skills (VAS) given a single static image and a speech audio clip. Our premiere model, VASA-1, is capable of not only generating lip movements that are exquisitely synchronized with the audio, but also producing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness. \nThe core innovations include a diffusion-based holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos.\nThrough extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively. 
Our method delivers high video quality with realistic facial and head dynamics and also supports the online generation of 512$\times$512 videos at up to 40 FPS with negligible starting latency.\nIt paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.", "pdf": "https://openreview.net/pdf/ccbb9d0f4688567aed95ad757cf65f0dd4538631.pdf"} {"title": "Learning rigid-body simulators over implicit shapes for large-scale scenes and vision", "url": "https://openreview.net/forum?id=QDYts5dYgq", "detail_url": "https://openreview.net/forum?id=QDYts5dYgq", "authors": "Yulia Rubanova,Tatiana Lopez-Guevara,Kelsey R Allen,William F Whitney,Kim Stachenfeld,Tobias Pfaff", "tags": "NIPS 2024,Oral", "abstract": "Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games. Rigid interactions are notoriously hard to model: small changes to the initial state or the simulation parameters can lead to large changes in the final state. Recently, learned simulators based on graph networks (GNNs) were developed as an alternative to hand-designed simulators like MuJoCo and Bullet. They are able to accurately capture dynamics of real objects directly from real-world observations. However, current state-of-the-art learned simulators operate on meshes and scale poorly to scenes with many objects or detailed shapes. Here we present SDF-Sim, the first learned rigid-body simulator designed for scale. We use learned signed-distance functions (SDFs) to represent the object shapes and to speed up distance computation. We design the simulator to leverage SDFs and avoid the fundamental bottleneck of the previous simulators associated with collision detection.\nFor the first time in the literature, we demonstrate that we can scale the GNN-based simulators to scenes with hundreds of objects and up to 1.1 million nodes, where mesh-based approaches run out of memory. Finally, we show that SDF-Sim can be applied to real-world scenes by extracting SDFs from multi-view images.", "pdf": "https://openreview.net/pdf/a025a4908402e558708ed28771812dd10af193dd.pdf"} {"title": "Neural Pfaffians: Solving Many Many-Electron Schr\u00f6dinger Equations", "url": "https://openreview.net/forum?id=HRkniCWM3E", "detail_url": "https://openreview.net/forum?id=HRkniCWM3E", "authors": "Nicholas Gao,Stephan G\u00fcnnemann", "tags": "NIPS 2024,Oral", "abstract": "Neural wave functions accomplished unprecedented accuracies in approximating the ground state of many-electron systems, though at a high computational cost. Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently. Enforcing the permutation antisymmetry of electrons in such generalized neural wave functions remained challenging as existing methods require discrete orbital selection via non-learnable hand-crafted algorithms. This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules. We achieve this by relying on Pfaffians rather than Slater determinants. The Pfaffian allows us to enforce the antisymmetry on arbitrary electronic systems without any constraint on electronic spin configurations or molecular structure. Our empirical evaluation finds that a single neural Pfaffian calculates the ground state and ionization energies with chemical accuracy across various systems. 
On the TinyMol dataset, we outperform the `gold-standard' CCSD(T) CBS reference energies by 1.9m$E_h$ and reduce energy errors compared to previous generalized neural wave functions by up to an order of magnitude.", "pdf": "https://openreview.net/pdf/c766b139548380a74ad7a69a3c638798a81d5de3.pdf"} {"title": "DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices", "url": "https://openreview.net/forum?id=Pezt0xttae", "detail_url": "https://openreview.net/forum?id=Pezt0xttae", "authors": "Yongzhe Jia,Xuyun Zhang,Hongsheng Hu,Kim-Kwang Raymond Choo,Lianyong Qi,Xiaolong Xu,Amin Beheshti,Wanchun Dou", "tags": "NIPS 2024,Oral", "abstract": "Federated learning (FL) has emerged as a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaboratively optimize a global model without sharing their private data. However, existing FL frameworks suffer from efficacy deterioration due to the system heterogeneity inherent in edge computing, especially in the presence of domain shifts across local data. \nIn this paper, we propose a heterogeneous FL framework, DapperFL, to enhance model performance across multiple domains. In DapperFL, we introduce a dedicated Model Fusion Pruning (MFP) module to produce personalized compact local models for clients to address the system heterogeneity challenges. The MFP module prunes local models with fused knowledge obtained from both local and remaining domains, ensuring robustness to domain shifts. Additionally, we design a Domain Adaptive Regularization (DAR) module to further improve the overall performance of DapperFL. The DAR module employs regularization generated by the pruned model, aiming to learn robust representations across domains. Furthermore, we introduce a specific aggregation algorithm for aggregating heterogeneous local models with tailored architectures and weights. We implement DapperFL on a real-world FL platform with heterogeneous clients. Experimental results on benchmark datasets with multiple domains demonstrate that DapperFL outperforms several state-of-the-art FL frameworks by up to 2.28%, while achieving significant model volume reductions ranging from 20% to 80%. Our code is available at: https://github.com/jyzgh/DapperFL.", "pdf": "https://openreview.net/pdf/40235b2ea6b49d81841886f194bd9d4a2897ff15.pdf"} {"title": "DenoiseRep: Denoising Model for Representation Learning", "url": "https://openreview.net/forum?id=OycU0bAus6", "detail_url": "https://openreview.net/forum?id=OycU0bAus6", "authors": "zhengrui Xu,Guan'an Wang,Xiaowen Huang,Jitao Sang", "tags": "NIPS 2024,Oral", "abstract": "The denoising model has been proven a powerful generative model but has seen little exploration in discriminative tasks. Representation learning is important in discriminative tasks, which is defined as *\"learning representations (or features) of the data that make it easier to extract useful information when building classifiers or other predictors\"*. In this paper, we propose a novel Denoising Model for Representation Learning (*DenoiseRep*) to improve feature discrimination with joint feature extraction and denoising. *DenoiseRep* views each embedding layer in a backbone as a denoising layer, processing the cascaded embedding layers as if we are recursively denoising features step-by-step. This unifies the frameworks of feature extraction and denoising, where the former progressively embeds features from low-level to high-level, and the latter recursively denoises features step-by-step. 
After that, *DenoiseRep* fuses the parameters of feature extraction and denoising layers, and *theoretically demonstrates* its equivalence before and after the fusion, thus making feature denoising computation-free. *DenoiseRep* is a label-free algorithm that incrementally improves features and is also complementary to labels when available. Experimental results on various discriminative vision tasks, including re-identification (Market-1501, DukeMTMC-reID, MSMT17, CUHK-03, vehicleID), image classification (ImageNet, CUB200, Oxford-Pet, Flowers), object detection (COCO), and image segmentation (ADE20K), show stability and impressive improvements. We also validate its effectiveness on the CNN (ResNet) and Transformer (ViT, Swin, VMamba) architectures.", "pdf": "https://openreview.net/pdf/ccc22185c7b5ceeab3929bff884d84473546f5d7.pdf"} {"title": "Optimal Parallelization of Boosting", "url": "https://openreview.net/forum?id=rtz4df9IF1", "detail_url": "https://openreview.net/forum?id=rtz4df9IF1", "authors": "Arthur da Cunha,Mikael M\u00f8ller H\u00f8gsgaard,Kasper Green Larsen", "tags": "NIPS 2024,Oral", "abstract": "Recent works on the parallel complexity of Boosting have established strong lower bounds on the tradeoff between the number of training rounds $p$ and the total parallel work per round $t$.\nThese works have also presented highly non-trivial parallel algorithms that shed light on different regions of this tradeoff.\nDespite these advancements, a significant gap persists between the theoretical lower bounds and the performance of these algorithms across much of the tradeoff space.\nIn this work, we essentially close this gap by providing both improved lower bounds on the parallel complexity of weak-to-strong learners, and a parallel Boosting algorithm whose performance matches these bounds across the entire $p$ vs. $t$ compromise spectrum, up to logarithmic factors.\nUltimately, this work settles the parallel complexity of Boosting algorithms that are nearly sample-optimal.", "pdf": "https://openreview.net/pdf/b88f812c42a45b79e5e8663c27463c4580ab45a6.pdf"} {"title": "Decompose, Analyze and Rethink: Solving Intricate Problems with Human-like Reasoning Cycle", "url": "https://openreview.net/forum?id=NPKZF1WDjZ", "detail_url": "https://openreview.net/forum?id=NPKZF1WDjZ", "authors": "Shangzi Xue,Zhenya Huang,Jiayu Liu,Xin Lin,Yuting Ning,Binbin Jin,Xin Li,Qi Liu", "tags": "NIPS 2024,Oral", "abstract": "In this paper, we introduce DeAR (_Decompose-Analyze-Rethink_), a framework that iteratively builds a reasoning tree to tackle intricate problems within a single large language model (LLM). Unlike approaches that extend or search for rationales, DeAR is characterized by 1) adopting a tree-based question decomposition manner to plan the organization of rationales, which mimics the logical planning inherent\nin human cognition; 2) globally updating the rationales at each reasoning step through natural language feedback. Specifically, the _Decompose_ stage decomposes the question into simpler sub-questions, storing them as new nodes; the _Analyze_ stage generates and self-checks rationales for sub-questions at each node level; and the _Rethink_ stage updates parent-node rationales based on feedback from their child nodes. 
By generating and updating the reasoning process from a more global perspective, DeAR constructs more adaptive and accurate logical structures for complex problems, facilitating timely error correction compared to rationale-extension and search-based approaches such as Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT). We conduct extensive experiments on three reasoning benchmarks, including ScienceQA, StrategyQA, and GSM8K, which cover a variety of reasoning tasks, demonstrating that our approach significantly reduces logical errors and enhances performance across various LLMs. Furthermore, we validate that DeAR is an efficient method that achieves a superior trade-off between accuracy and reasoning time compared to ToT and GoT.", "pdf": "https://openreview.net/pdf/48641218f9362ec9ed75e6482a2030d00757c6d8.pdf"} {"title": "Bayesian-guided Label Mapping for Visual Reprogramming", "url": "https://openreview.net/forum?id=135eKqDoRR", "detail_url": "https://openreview.net/forum?id=135eKqDoRR", "authors": "Chengyi Cai,Zesheng Ye,Lei Feng,Jianzhong Qi,Feng Liu", "tags": "NIPS 2024,Oral", "abstract": "*Visual reprogramming* (VR) leverages the intrinsic capabilities of pretrained vision models by adapting their input or output interfaces to solve downstream tasks whose labels (i.e., downstream labels) might be totally different from the labels associated with the pretrained models (i.e., pretrained labels). \nWhen adapting the output interface, label mapping methods transform the pretrained labels to downstream labels by establishing a gradient-free one-to-one correspondence between the two sets of labels.\nHowever, in this paper, we reveal that one-to-one mappings may overlook the complex relationship between pretrained and downstream labels. Motivated by this observation, we propose a ***B**ayesian-guided **L**abel **M**apping* (BLM) method. \nBLM constructs an iteratively-updated probabilistic label mapping matrix, with each element quantifying a pairwise relationship between pretrained and downstream labels.\nThe assignment of values to the constructed matrix is guided by Bayesian conditional probability, considering the joint distribution of the downstream labels and the labels predicted by the pretrained model on downstream samples. Experiments conducted on both pretrained vision models (e.g., ResNeXt) and vision-language models (e.g., CLIP) demonstrate the superior performance of BLM over existing label mapping methods. The success of BLM also offers a probabilistic lens through which to understand and analyze the effectiveness of VR.\nOur code is available at https://github.com/tmlr-group/BayesianLM.", "pdf": "https://openreview.net/pdf/5bd51ea14b1857a137832007130aaf712c5b6a63.pdf"} {"title": "Policy Learning from Tutorial Books via Understanding, Rehearsing and Introspecting", "url": "https://openreview.net/forum?id=Ddak3nSqQM", "detail_url": "https://openreview.net/forum?id=Ddak3nSqQM", "authors": "Xiong-Hui Chen,Ziyan Wang,Yali Du,Shengyi Jiang,Meng Fang,Yang Yu,Jun Wang", "tags": "NIPS 2024,Oral", "abstract": "When humans need to learn a new skill, we can acquire knowledge through written books, including textbooks, tutorials, etc. However, current research on decision-making, like reinforcement learning (RL), has primarily required numerous real interactions with the target environment to learn a skill, while failing to utilize the existing knowledge already summarized in text. The success of Large Language Models (LLMs) sheds light on utilizing the knowledge behind such books. 
In this paper, we discuss a new policy learning problem called Policy Learning from tutorial Books (PLfB), built upon the shoulders of LLM systems, which aims to leverage rich resources such as tutorial books to derive a policy network. Inspired by how humans learn from books, we solve the problem via a three-stage framework: Understanding, Rehearsing, and Introspecting (URI). In particular, URI first rehearses decision-making trajectories based on the knowledge derived from understanding the books, and then introspects on the imaginary dataset to distill a policy network. \n We build two benchmarks for PLfB based on Tic-Tac-Toe and Football games. In experiments, URI's policy achieves at least a 44% net win rate against GPT-based agents without any real data; in the Football game, a complex scenario, URI's policy beats the built-in AI with a 37% win rate, while the GPT-based agent achieves only a 6\% winning rate. The project page is at https://plfb-football.github.io.", "pdf": "https://openreview.net/pdf/f4d95b3399a1323142228b0362d42345119de142.pdf"} {"title": "GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation", "url": "https://openreview.net/forum?id=SSCtCq2MH2", "detail_url": "https://openreview.net/forum?id=SSCtCq2MH2", "authors": "Junhao Cai,Yuji Yang,Weihao Yuan,Yisheng HE,Zilong Dong,Liefeng Bo,Hui Cheng,Qifeng Chen", "tags": "NIPS 2024,Oral", "abstract": "This paper studies the problem of estimating physical properties (system identification) through visual observations. To facilitate geometry-aware guidance in physical property estimation, we introduce a novel hybrid framework that leverages 3D Gaussian representation to not only capture explicit shapes but also enable the simulated continuum to render object masks as 2D shape surrogates during training. We propose a new dynamic 3D Gaussian framework based on motion factorization to recover the object as 3D Gaussian point sets across different time states. Furthermore, we develop a coarse-to-fine filling strategy to generate the density fields of the object from the Gaussian reconstruction, allowing for the extraction of object continuums along with their surfaces and the integration of Gaussian attributes into these continuums. In addition to the extracted object surfaces, the Gaussian-informed continuum also enables the rendering of object masks during simulations, serving as 2D-shape guidance for physical property estimation. Extensive experimental evaluations demonstrate that our pipeline achieves state-of-the-art performance across multiple benchmarks and metrics. Additionally, we illustrate the effectiveness of the proposed method through real-world demonstrations, showcasing its practical utility. Our project page is at https://jukgei.github.io/project/gic.", "pdf": "https://openreview.net/pdf/35d3fb34ac9b1b65eb96b7a01480e9b13895a855.pdf"} {"title": "PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression", "url": "https://openreview.net/forum?id=YvA8UF0I37", "detail_url": "https://openreview.net/forum?id=YvA8UF0I37", "authors": "Vladimir Malinovskii,Denis Mazur,Ivan Ilin,Denis Kuznedelev,Konstantin Pavlovich Burlachenko,Kai Yi,Dan Alistarh,Peter Richt\u00e1rik", "tags": "NIPS 2024,Oral", "abstract": "There has been significant interest in \"extreme\" compression of large language models (LLMs), i.e., to 1-2 bits per parameter, which allows such models to be executed efficiently on resource-constrained devices. 
\nExisting work has focused on improved one-shot quantization techniques and weight representations; yet, purely post-training approaches are reaching diminishing returns in terms of the accuracy-vs-bit-width trade-off. State-of-the-art quantization methods such as QuIP# and AQLM include fine-tuning (part of) the compressed parameters over a limited amount of calibration data; however, such fine-tuning techniques over compressed weights often make exclusive use of straight-through estimators (STE), whose performance is not well-understood in this setting. \nIn this work, we question the use of STE for extreme LLM compression, showing that it can be sub-optimal, and perform a systematic study of quantization-aware fine-tuning strategies for LLMs.\nWe propose PV-Tuning - a representation-agnostic framework that generalizes and improves upon existing fine-tuning strategies, and provides convergence guarantees in restricted cases.\nOn the practical side, when used for 1-2 bit vector quantization, PV-Tuning outperforms prior techniques for highly-performant models such as Llama and Mistral. \nUsing PV-Tuning, we achieve the first Pareto-optimal quantization for Llama-2 family models at 2 bits per parameter.", "pdf": "https://openreview.net/pdf/a41bd553618c035e26d1f1f6a8ebd19108274f50.pdf"} {"title": "RL-GPT: Integrating Reinforcement Learning and Code-as-policy", "url": "https://openreview.net/forum?id=LEzx6QRkRH", "detail_url": "https://openreview.net/forum?id=LEzx6QRkRH", "authors": "Shaoteng Liu,Haoqi Yuan,Minda Hu,Yanwei Li,Yukang Chen,Shu Liu,Zongqing Lu,Jiaya Jia", "tags": "NIPS 2024,Oral", "abstract": "Large Language Models (LLMs) have demonstrated proficiency in utilizing various tools by coding, yet they face limitations in handling intricate logic and precise control. In embodied tasks, high-level planning is amenable to direct coding, while low-level actions often necessitate task-specific refinement, such as Reinforcement Learning (RL). To seamlessly integrate both modalities, we introduce a two-level hierarchical framework, RL-GPT, comprising a slow agent and a fast agent. The slow agent analyzes actions suitable for coding, while the fast agent executes coding tasks. This decomposition effectively focuses each agent on specific tasks, proving highly efficient within our pipeline. Our approach outperforms traditional RL methods and existing GPT agents, demonstrating superior efficiency. In the Minecraft game, it rapidly obtains diamonds within a single day on an RTX3090. Additionally, it achieves SOTA performance across all designated MineDojo tasks.", "pdf": "https://openreview.net/pdf/8489e6d14edc65b16f5f04f6773edb790ac430a4.pdf"} {"title": "Statistical Efficiency of Distributional Temporal Difference Learning", "url": "https://openreview.net/forum?id=eWUM5hRYgH", "detail_url": "https://openreview.net/forum?id=eWUM5hRYgH", "authors": "Yang Peng,Liangyu Zhang,Zhihua Zhang", "tags": "NIPS 2024,Oral", "abstract": "Distributional reinforcement learning (DRL) has achieved empirical success in various domains.\nOne of the core tasks in the field of DRL is distributional policy evaluation, which involves estimating the return distribution $\eta^\pi$ for a given policy $\pi$.\nDistributional temporal difference learning has accordingly been proposed, which\nis an extension of temporal difference (TD) learning in classic RL.\nIn the tabular case, Rowland et al. [2018] and Rowland et al. 
[2023] proved the asymptotic convergence of two instances of distributional TD, namely categorical temporal difference learning (CTD) and quantile temporal difference learning (QTD), respectively.\nIn this paper, we go a step further and analyze the finite-sample performance of distributional TD.\nTo facilitate theoretical analysis, we propose a non-parametric distributional TD learning (NTD).\nFor a $\gamma$-discounted infinite-horizon tabular Markov decision process,\nwe show that for NTD we need $\widetilde O\left(\frac{1}{\varepsilon^{2p}(1-\gamma)^{2p+1}}\right)$ iterations to achieve an $\varepsilon$-optimal estimator with high probability, when the estimation error is measured by the $p$-Wasserstein distance.\nThis sample complexity bound is minimax optimal (up to logarithmic factors) in the case of the $1$-Wasserstein distance.\nTo achieve this, we establish a novel Freedman's inequality in Hilbert spaces, which may be of independent interest.\nIn addition, we revisit CTD, showing that the same non-asymptotic convergence bounds hold for CTD in the case of the $p$-Wasserstein distance.", "pdf": "https://openreview.net/pdf/3002a75ebfe6a386efc8dee88d8a2382d1d837e1.pdf"} {"title": "Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering", "url": "https://openreview.net/forum?id=R8SolCx62K", "detail_url": "https://openreview.net/forum?id=R8SolCx62K", "authors": "Dongxiao He,Lianze Shan,Jitao Zhao,Hengrui Zhang,Zhen Wang,Weixiong Zhang", "tags": "NIPS 2024,Oral", "abstract": "Graph Contrastive Learning (GCL) has emerged as a powerful approach for generating graph representations without the need for manual annotation. Most advanced GCL methods fall into three main frameworks: node discrimination, group discrimination, and bootstrapping schemes, all of which achieve comparable performance. However, the underlying mechanisms and factors that contribute to their effectiveness are not yet fully understood. In this paper, we revisit these frameworks and reveal a common mechanism\u2014representation scattering\u2014that significantly enhances their performance. Our discovery highlights an essential feature of GCL and unifies these seemingly disparate methods under the concept of representation scattering. To leverage this insight, we introduce Scattering Graph Representation Learning (SGRL), a novel framework that incorporates a new representation scattering mechanism designed to enhance representation diversity through a center-away strategy. Additionally, considering the interconnected nature of graphs, we develop a topology-based constraint mechanism that integrates graph structural properties with representation scattering to prevent excessive scattering. We extensively evaluate SGRL across various downstream tasks on benchmark datasets, demonstrating its efficacy and superiority over existing GCL methods. Our findings underscore the significance of representation scattering in GCL and provide a structured framework for harnessing this mechanism to advance graph representation learning. 
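As a rough illustration of SGRL's description, a center-away scattering objective with a topology-based constraint might look like the following; both loss terms are illustrative guesses, not the released implementation (linked below):

```python
import torch
import torch.nn.functional as F

def scattering_loss(z: torch.Tensor, adj: torch.Tensor, lam: float = 0.1):
    """Center-away scattering with a topology constraint, loosely following
    the SGRL abstract. z: (n, d) node embeddings; adj: (n, n) adjacency.
    Both terms are illustrative guesses, not the paper's exact losses."""
    z = F.normalize(z, dim=1)
    center = z.mean(dim=0, keepdim=True)
    # Scatter: minimize similarity between each embedding and the center.
    away = F.cosine_similarity(z, center, dim=1).mean()
    # Topology constraint: keep linked nodes similar to avoid over-scattering.
    smooth = (adj * (z @ z.t())).sum() / adj.sum().clamp(min=1)
    return away - lam * smooth
```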
The code of SGRL is at https://github.com/hedongxiao-tju/SGRL.", "pdf": "https://openreview.net/pdf/e21a9b3822e99ccaefbd6f6562cd41ff019e09ba.pdf"} {"title": "You Only Cache Once: Decoder-Decoder Architectures for Language Models", "url": "https://openreview.net/forum?id=25Ioxw576r", "detail_url": "https://openreview.net/forum?id=25Ioxw576r", "authors": "Yutao Sun,Li Dong,Yi Zhu,Shaohan Huang,Wenhui Wang,Shuming Ma,Quanlu Zhang,Jianyong Wang,Furu Wei", "tags": "NIPS 2024,Oral", "abstract": "We introduce a decoder-decoder architecture, YOCO, for large language models, which only caches key-value pairs once. It consists of two components, i.e., a cross-decoder stacked upon a self-decoder. The self-decoder efficiently encodes global key-value (KV) caches that are reused by the cross-decoder via cross-attention. The overall model behaves like a decoder-only Transformer, although YOCO only caches once. The design substantially reduces GPU memory demands, yet retains global attention capability. Additionally, the computation flow enables prefilling to early exit without changing the final output, thereby significantly speeding up the prefill stage. Experimental results demonstrate that YOCO achieves favorable performance compared to Transformer in various settings of scaling up model size and number of training tokens. We also extend YOCO to 1M context length with near-perfect needle retrieval accuracy. The profiling results show that YOCO improves inference memory, prefill latency, and throughput by orders of magnitude across context lengths and model sizes.", "pdf": "https://openreview.net/pdf/c001fdfd3a2894f8c62da3eef3be8317b3800c61.pdf"} {"title": "Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation", "url": "https://openreview.net/forum?id=cFqAANINgW", "detail_url": "https://openreview.net/forum?id=cFqAANINgW", "authors": "Jingchang Chen,Hongxuan Tang,Zheng Chu,Qianglong Chen,Zekun Wang,Ming Liu,Bing Qin", "tags": "NIPS 2024,Oral", "abstract": "Despite recent progress made by large language models in code generation, they still struggle with programs that meet complex requirements. Recent work utilizes plan-and-solve decomposition to decrease the complexity and leverage self-tests to refine the generated program. Yet, planning deep-inside requirements in advance can be challenging, and the tests need to be accurate to accomplish self-improvement. To this end, we propose FunCoder, a code generation framework incorporating the divide-and-conquer strategy with functional consensus. Specifically, FunCoder recursively branches off sub-functions as smaller goals during code generation, represented by a tree hierarchy. These sub-functions are then composited to attain more complex objectives. Additionally, we designate functions via a consensus formed by identifying similarities in program behavior, mitigating error propagation. FunCoder outperforms state-of-the-art methods by +9.8% on average in HumanEval, MBPP, xCodeEval and MATH with GPT-3.5 and GPT-4. Moreover, our method demonstrates superiority on smaller models: With FunCoder, StableCode-3b surpasses GPT-3.5 by +18.6% and achieves 97.7% of GPT-4's performance on HumanEval. 
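The functional-consensus idea can be made concrete with a small sketch: run every candidate implementation of a sub-function on shared inputs and keep the one whose input/output behavior is most common. Everything below (signatures, error handling, the example inputs) is an illustrative simplification of what the FunCoder abstract describes:

```python
from collections import Counter

def functional_consensus(candidates, test_inputs):
    """Select among candidate implementations of the same function by
    behavioral similarity: candidates agreeing on all shared inputs form a
    cluster, and the largest cluster wins. A simplification of the
    consensus mechanism sketched in the FunCoder abstract."""
    def behavior(fn):
        outs = []
        for x in test_inputs:
            try:
                outs.append(repr(fn(x)))
            except Exception:
                outs.append("<error>")
        return tuple(outs)

    behaviors = [behavior(fn) for fn in candidates]
    majority, _ = Counter(behaviors).most_common(1)[0]
    return candidates[behaviors.index(majority)]

# e.g. three candidate absolute-value implementations; the two that agree win:
pick = functional_consensus(
    [lambda x: abs(x), lambda x: (x * x) ** 0.5, lambda x: x],
    test_inputs=[-2.0, 0.0, 3.0])
```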
Further analysis reveals that our proposed dynamic function decomposition is capable of handling complex requirements, and the functional consensus prevails over self-testing in correctness evaluation.", "pdf": "https://openreview.net/pdf/d6fd653a659d95ce4466896d76af521361a4e0ef.pdf"} {"title": "DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs", "url": "https://openreview.net/forum?id=mp8u2Pcmqz", "detail_url": "https://openreview.net/forum?id=mp8u2Pcmqz", "authors": "Haokun Lin,Haobo Xu,Yichen Wu,Jingzhi Cui,Yingtao Zhang,Linzhan Mou,Linqi Song,Zhenan Sun,Ying Wei", "tags": "NIPS 2024,Oral", "abstract": "Quantization of large language models (LLMs) faces significant challenges, particularly due to the presence of outlier activations that impede efficient low-bit representation. Traditional approaches predominantly address Normal Outliers, which are activations across all tokens with relatively large magnitudes. However, these methods struggle with smoothing Massive Outliers that display significantly larger values, which leads to significant performance degradation in low-bit quantization. In this paper, we introduce DuQuant, a novel approach that utilizes rotation and permutation transformations to more effectively mitigate both massive and normal outliers. First, DuQuant starts by constructing the rotation matrix, using specific outlier dimensions as prior knowledge, to redistribute outliers to adjacent channels by block-wise rotation. Second, we further employ a zigzag permutation to balance the distribution of outliers across blocks, thereby reducing block-wise variance. A subsequent rotation further smooths the activation landscape, enhancing model performance. DuQuant simplifies the quantization process and excels in managing outliers, outperforming the state-of-the-art baselines across various sizes and types of LLMs on multiple tasks, even with 4-bit weight-activation quantization. Our code is available at https://github.com/Hsu1023/DuQuant.", "pdf": "https://openreview.net/pdf/e940d83a63794869ac25c4a08c075cc76b1ebdef.pdf"} {"title": "Not All Tokens Are What You Need for Pretraining", "url": "https://openreview.net/forum?id=0NMzBwqaAJ", "detail_url": "https://openreview.net/forum?id=0NMzBwqaAJ", "authors": "Zhenghao Lin,Zhibin Gou,Yeyun Gong,Xiao Liu,yelong shen,Ruochen Xu,Chen Lin,Yujiu Yang,Jian Jiao,Nan Duan,Weizhu Chen", "tags": "NIPS 2024,Oral", "abstract": "Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that ''Not all tokens in a corpus are equally important for language model training''. Our initial analysis examines the token-level training dynamics of language models, revealing distinct loss patterns for different tokens. Leveraging these insights, we introduce a new language model called Rho-1. Unlike traditional LMs that learn to predict every next token in a corpus, Rho-1 employs Selective Language Modeling (SLM), which selectively trains on useful tokens that align with the desired distribution. This approach involves scoring training tokens using a reference model, and then training the language model with a focused loss on tokens with higher scores. When continually pretraining on the 15B OpenWebMath corpus, Rho-1 yields an absolute improvement in few-shot accuracy of up to 30% in 9 math tasks. 
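A minimal sketch of the Selective Language Modeling objective as the Rho-1 abstract describes it, scoring tokens by excess loss over a reference model and training only on the top-scoring fraction; `keep_ratio` and the exact scoring rule are illustrative choices:

```python
import torch
import torch.nn.functional as F

def slm_loss(logits, ref_logits, targets, keep_ratio=0.6):
    """Selective Language Modeling sketch: train only on tokens whose loss
    exceeds a reference model's by the largest margin. logits/ref_logits:
    (B, T, V); targets: (B, T). keep_ratio is an illustrative choice."""
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    ref_ce = F.cross_entropy(ref_logits.transpose(1, 2), targets, reduction="none")
    score = (ce - ref_ce).detach()                 # excess loss per token
    k = max(1, int(keep_ratio * score.numel()))
    threshold = score.flatten().topk(k).values.min()
    mask = (score >= threshold).float()            # keep only the top fraction
    return (ce * mask).sum() / mask.sum()
```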
After fine-tuning, Rho-1-1B and 7B achieved state-of-the-art results of 40.6% and 51.8% on the MATH dataset, respectively - matching DeepSeekMath with only 3% of the pretraining tokens. Furthermore, when continually pretraining on 80B general tokens, Rho-1 achieves a 6.8% average improvement across 15 diverse tasks, increasing both the data efficiency and performance of language model pre-training.", "pdf": "https://openreview.net/pdf/479db135fe05befa88285a35b9f23c2e1122fa8f.pdf"} {"title": "Achieving Optimal Clustering in Gaussian Mixture Models with Anisotropic Covariance Structures", "url": "https://openreview.net/forum?id=ge8GZn8Gtu", "detail_url": "https://openreview.net/forum?id=ge8GZn8Gtu", "authors": "Xin Chen,Anderson Ye Zhang", "tags": "NIPS 2024,Oral", "abstract": "We study clustering under anisotropic Gaussian Mixture Models (GMMs), where covariance matrices from different clusters are unknown and are not necessarily the identity matrix. We analyze two anisotropic scenarios: homogeneous, with identical covariance matrices, and heterogeneous, with distinct matrices per cluster. For these models, we derive minimax lower bounds that illustrate the critical influence of covariance structures on clustering accuracy. To solve the clustering problem, we consider a variant of Lloyd's algorithm, adapted to estimate and utilize covariance information iteratively. We prove that the adjusted algorithm not only achieves the minimax optimality but also converges within a logarithmic number of iterations, thus bridging the gap between theoretical guarantees and practical efficiency.", "pdf": "https://openreview.net/pdf/43a0e0281aa6e1dcadbd067c201ceb2c07c5bf4c.pdf"} {"title": "Return of Unconditional Generation: A Self-supervised Representation Generation Method", "url": "https://openreview.net/forum?id=clTa4JFBML", "detail_url": "https://openreview.net/forum?id=clTa4JFBML", "authors": "Tianhong Li,Dina Katabi,Kaiming He", "tags": "NIPS 2024,Oral", "abstract": "Unconditional generation -- the problem of modeling data distribution without relying on human-annotated labels -- is a long-standing and fundamental challenge in generative models, creating the potential of learning from large-scale unlabeled data. In the literature, the generation quality of an unconditional method has been much worse than that of its conditional counterpart. This gap can be attributed to the lack of semantic information provided by labels. In this work, we show that one can close this gap by generating semantic representations in the representation space produced by a self-supervised encoder. These representations can be used to condition the image generator. This framework, called Representation-Conditioned Generation (RCG), provides an effective solution to the unconditional generation problem without using labels. Through comprehensive experiments, we observe that RCG significantly improves unconditional generation quality: e.g., it achieves a new state-of-the-art FID of 2.15 on ImageNet 256x256, largely reducing the previous best of 5.91 by a relative 64%. Our unconditional results are situated in the same tier as the leading class-conditional ones. We hope these encouraging observations will attract the community's attention to the fundamental problem of unconditional generation. 
Code is available at [https://github.com/LTH14/rcg](https://github.com/LTH14/rcg).", "pdf": "https://openreview.net/pdf/5eb9f339be4769dbc0a7ac40c1b8e020626b9052.pdf"} {"title": "Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs", "url": "https://openreview.net/forum?id=Vi8AepAXGy", "detail_url": "https://openreview.net/forum?id=Vi8AepAXGy", "authors": "Shengbang Tong,Ellis L Brown II,Penghao Wu,Sanghyun Woo,ADITHYA JAIRAM IYER,Sai Charitha Akula,Shusheng Yang,Jihan Yang,Manoj Middepogu,Ziteng Wang,Xichen Pan,Rob Fergus,Yann LeCun,Saining Xie", "tags": "NIPS 2024,Oral", "abstract": "We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hinders accurate sensory grounding in real-world scenarios. Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations, offering new insights into different models and architectures\u2014self-supervised, strongly supervised, or combinations thereof\u2014based on experiments with over 15 vision models. We critically examine existing MLLM benchmarks, addressing the difficulties involved in consolidating and interpreting results from various tasks. To further improve visual grounding, we propose spatial vision aggregator (SVA), a dynamic and spatially-aware connector that integrates vision features with LLMs while reducing the number of tokens. Additionally, we discuss the curation of high-quality visual instruction-tuning data from publicly available sources, emphasizing the importance of distribution balancing. Collectively, Cambrian-1 not only achieves state-of-the-art performances but also serves as a comprehensive, open cookbook for instruction-tuned MLLMs. We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes. We hope our release will inspire and accelerate advancements in multimodal systems and visual representation learning.", "pdf": "https://openreview.net/pdf/6e2bfbfc4a63dae9ce2226db223d05c1152a1fb8.pdf"} {"title": "MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making", "url": "https://openreview.net/forum?id=EKdk4vxKO4", "detail_url": "https://openreview.net/forum?id=EKdk4vxKO4", "authors": "Yubin Kim,Chanwoo Park,Hyewon Jeong,Yik Siu Chan,Xuhai Xu,Daniel McDuff,Hyeonhoon Lee,Marzyeh Ghassemi,Cynthia Breazeal,Hae Won Park", "tags": "NIPS 2024,Oral", "abstract": "Foundation models are becoming valuable tools in medicine. Yet despite their promise, the best way to leverage Large Language Models (LLMs) in complex medical tasks remains an open question. We introduce a novel multi-agent framework, named **M**edical **D**ecision-making **Agents** (**MDAgents**) that helps to address this gap by automatically assigning a collaboration structure to a team of LLMs. The assigned solo or group collaboration structure is tailored to the medical task at hand, a simple emulation inspired by the way real-world medical decision-making processes are adapted to tasks of different complexities. We evaluate our framework and baseline methods using state-of-the-art LLMs across a suite of real-world medical knowledge and clinical diagnosis benchmarks, including a comparison of\nLLMs\u2019 medical complexity classification against human physicians. 
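A toy sketch of the adaptive routing the MDAgents abstract describes: classify the task's complexity first, then answer solo or convene a group. Prompts, roles, and the two-level complexity split are illustrative stand-ins, not the paper's protocol:

```python
def mdagents_route(query: str, ask) -> str:
    """Route a medical query through a solo LLM or a small group discussion
    depending on assessed complexity, in the spirit of the MDAgents abstract.
    `ask` is any callable mapping a prompt string to a response string."""
    level = ask(f"Rate this medical query's complexity (low/high): {query}")
    if "low" in level.lower():
        return ask(query)  # a solo answer suffices for simple tasks
    # Group collaboration for harder tasks, synthesized by a moderator.
    roles = ("cardiologist", "radiologist", "general practitioner")
    drafts = [ask(f"As a {role}, answer: {query}") for role in roles]
    return ask("As a moderator, synthesize one answer from:\n" + "\n".join(drafts))
```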
MDAgents achieved the **best performance in seven out of ten** benchmarks on tasks requiring an understanding of medical knowledge and multi-modal reasoning, showing a significant **improvement of up to 4.2\\%** ($p$ < 0.05) compared to previous methods' best performances. Ablation studies reveal that MDAgents effectively determines medical complexity to optimize for efficiency and accuracy across diverse medical tasks. Notably, the combination of moderator review and external medical knowledge in group collaboration resulted in an average accuracy **improvement of 11.8\\%**. Our code can be found at https://github.com/mitmedialab/MDAgents.", "pdf": "https://openreview.net/pdf/9993edbaf6679577c07aeae6b39fe0a546abaca1.pdf"} {"title": "Graph Diffusion Transformers for Multi-Conditional Molecular Generation", "url": "https://openreview.net/forum?id=cfrDLD1wfO", "detail_url": "https://openreview.net/forum?id=cfrDLD1wfO", "authors": "Gang Liu,Jiaxin Xu,Tengfei Luo,Meng Jiang", "tags": "NIPS 2024,Oral", "abstract": "Inverse molecular design with diffusion models holds great potential for advancements in material and drug discovery. Despite success in unconditional molecule generation, integrating multiple properties such as synthetic score and gas permeability as condition constraints into diffusion models remains unexplored. We present the Graph Diffusion Transformer (Graph DiT) for multi-conditional molecular generation. Graph DiT has a condition encoder to learn the representation of numerical and categorical properties and utilizes a Transformer-based graph denoiser to achieve molecular graph denoising under conditions. Unlike previous graph diffusion models that add noise separately on the atoms and bonds in the forward diffusion process, we propose a graph-dependent noise model for training Graph DiT, designed to accurately estimate graph-related noise in molecules. We extensively validate the Graph DiT for multi-conditional polymer and small molecule generation. Results demonstrate our superiority across metrics from distribution learning to condition control for molecular properties. A polymer inverse design task for gas separation with feedback from domain experts further demonstrates its practical utility. The code is available at https://github.com/liugangcode/Graph-DiT.", "pdf": "https://openreview.net/pdf/46c02e1bf7e313ee41cca4c78d39825812de8c3d.pdf"} {"title": "MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model", "url": "https://openreview.net/forum?id=x7pjdDod6Z", "detail_url": "https://openreview.net/forum?id=x7pjdDod6Z", "authors": "Minghua Liu,Chong Zeng,Xinyue Wei,Ruoxi Shi,Linghao Chen,Chao Xu,Mengqi Zhang,Zhaoning Wang,Xiaoshuai Zhang,Isabella Liu,Hongzhi Wu,Hao Su", "tags": "NIPS 2024,Oral", "abstract": "Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. Specifically, instead of using a triplane representation, we store features in 3D sparse voxels and combine transformers with 3D convolutions to leverage an explicit 3D structure and projective bias. In addition to sparse-view RGB input, we require the network to take input and generate corresponding normal maps. 
The input normal maps can be predicted by 2D diffusion models, significantly aiding the guidance and refinement of geometry learning. Moreover, by combining Signed Distance Function (SDF) supervision with surface rendering, we directly learn to generate high-quality meshes without the need for complex multi-stage training processes. By incorporating these explicit 3D biases, MeshFormer can be trained efficiently and deliver high-quality textured meshes with fine-grained geometric details. It can also be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks. **Videos are available at https://meshformer3d.github.io/**", "pdf": "https://openreview.net/pdf/0137993914b1c34b105ba8ce5545d99389e3b12a.pdf"} {"title": "Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework", "url": "https://openreview.net/forum?id=tnh4LK72yj", "detail_url": "https://openreview.net/forum?id=tnh4LK72yj", "authors": "Zhongchao Yi,Zhengyang Zhou,Qihe Huang,Yanjiang Chen,Liheng Yu,Xu Wang,Yang Wang", "tags": "NIPS 2024,Oral", "abstract": "Spatiotemporal learning has become a pivotal technique to enable urban intelligence. Traditional spatiotemporal models mostly focus on a specific task by assuming the same distribution between training and testing sets. However, given that urban systems are usually dynamic and multi-sourced with imbalanced data distributions, current task-specific models fail to generalize to new urban conditions and adapt to new domains without explicitly modeling interdependencies across various dimensions and types of urban data. To this end, we argue that it is essential to propose a Continuous Multi-task Spatio-Temporal learning framework (CMuST) to empower collective urban intelligence, which reforms urban spatiotemporal learning from single-domain to cooperative multi-dimensional and multi-task learning. Specifically, CMuST proposes a new multi-dimensional spatiotemporal interaction network (MSTI) to allow cross-interactions between context and main observations as well as self-interactions within spatial and temporal aspects to be exposed, which is also the core for capturing task-level commonality and personalization. To ensure continuous task learning, a novel Rolling Adaptation training scheme (RoAda) is devised, which not only preserves task uniqueness by constructing data summarization-driven task prompts, but also harnesses correlated patterns among tasks by iterative model behavior modeling. We further establish a benchmark of three cities for multi-task spatiotemporal learning, and empirically demonstrate the superiority of CMuST via extensive evaluations on these datasets. Impressive improvements over existing SOTA methods are achieved on both few-shot streaming data and new-domain tasks. Code is available at https://github.com/DILab-USTCSZ/CMuST.", "pdf": "https://openreview.net/pdf/97148ef3439d4c09aeb2847ed85a61ab7bd105d9.pdf"} {"title": "HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning", "url": "https://openreview.net/forum?id=qEpi8uWX3N", "detail_url": "https://openreview.net/forum?id=qEpi8uWX3N", "authors": "Chunlin Tian,Zhan Shi,Zhijiang Guo,Li Li,Cheng-zhong Xu", "tags": "NIPS 2024,Oral", "abstract": "Adapting Large Language Models (LLMs) to new tasks through fine-tuning has been made more efficient by the introduction of Parameter-Efficient Fine-Tuning (PEFT) techniques, such as LoRA. 
However, these methods often underperform compared to full fine-tuning, particularly in scenarios involving complex datasets. This issue becomes even more pronounced in complex domains, highlighting the need for improved PEFT approaches that can achieve better performance. Through a series of experiments, we have uncovered two critical insights that shed light on the training and parameter inefficiency of LoRA. Building on these insights, we have developed HydraLoRA, a LoRA framework with an asymmetric structure that eliminates the need for domain expertise. Our experiments demonstrate that HydraLoRA outperforms other PEFT approaches, even those that rely on domain knowledge during the training and inference phases. Code is available at https://github.com/Clin0212/HydraLoRA.", "pdf": "https://openreview.net/pdf/60e4bb51758f975380df1586e785d29a101c7f4a.pdf"} {"title": "SeeA*: Efficient Exploration-Enhanced A* Search by Selective Sampling", "url": "https://openreview.net/forum?id=mSaqxZVZW8", "detail_url": "https://openreview.net/forum?id=mSaqxZVZW8", "authors": "Dengwei Zhao,Shikui Tu,Lei Xu", "tags": "NIPS 2024,Oral", "abstract": "Monte-Carlo tree search (MCTS) and reinforcement learning contributed crucially to the success of AlphaGo and AlphaZero, and A$^*$ is a tree search algorithm among the most well-known ones in the classical AI literature. MCTS and A$^*$ both perform heuristic search and are mutually beneficial. Efforts have been made toward the renaissance of A$^*$ from three possible aspects, two of which have been confirmed by studies in recent years, while the third is about the OPEN list that consists of open nodes of A$^*$ search, but still lacks deep investigation. This paper aims at the third, i.e., developing the Sampling-exploration enhanced A$^*$ (SeeA$^*$) search by constructing a dynamic subset of OPEN through a selective sampling process, such that the node with the best heuristic value in this subset instead of in OPEN is expanded. Nodes with the best heuristic values in OPEN are most probably picked into this subset, but sometimes may not be included, which enables SeeA$^*$ to explore other promising branches. Three sampling techniques are presented for comparative investigations. Moreover, under an assumption on the distribution of prediction errors, we have theoretically shown the superior efficiency of SeeA$^*$ over A$^*$ search, particularly when the accuracy of the guiding heuristic function is insufficient. Experimental results on retrosynthetic planning in organic chemistry, logic synthesis in integrated circuit design, and the classical Sokoban game empirically demonstrate the efficiency of SeeA$^*$, in comparison with the state-of-the-art heuristic search algorithms.", "pdf": "https://openreview.net/pdf/fa5dedfe169ea46edcf332d8d7d9b5256b506793.pdf"} {"title": "Improved Distribution Matching Distillation for Fast Image Synthesis", "url": "https://openreview.net/forum?id=tQukGCDaNT", "detail_url": "https://openreview.net/forum?id=tQukGCDaNT", "authors": "Tianwei Yin,Micha\u00ebl Gharbi,Taesung Park,Richard Zhang,Eli Shechtman,Fredo Durand,William T. 
Freeman", "tags": "NIPS 2024,Oral", "abstract": "Recent approaches have shown promise in distilling expensive diffusion models into efficient one-step generators.\nAmongst them, Distribution Matching Distillation (DMD) produces one-step generators that match their teacher in distribution, i.e., the distillation process does not enforce a one-to-one correspondence with the sampling trajectories of their teachers.\nHowever, to ensure stable training in practice, DMD requires an additional regression loss computed using a large set of noise--image pairs, generated by the teacher with many steps of a deterministic sampler.\nThis is not only computationally expensive for large-scale text-to-image synthesis, but it also limits the student's quality, tying it too closely to the teacher's original sampling paths.\nWe introduce DMD2, a set of techniques that lift this limitation and improve DMD training.\nFirst, we eliminate the regression loss and the need for expensive dataset construction.\nWe show that the resulting instability is due to the \"fake\" critic not estimating the distribution \nof generated samples with sufficient accuracy and propose a two time-scale update rule as a remedy.\nSecond, we integrate a GAN loss into the distillation procedure, discriminating between generated samples and real images.\nThis lets us train the student model on real data, thus mitigating the imperfect \"real\" score estimation from the teacher model, and thereby enhancing quality.\nThird, we introduce a new training procedure that enables multi-step sampling in the student, and\naddresses the training--inference input mismatch of previous work, by simulating inference-time generator samples during training. \nTaken together, our improvements set new benchmarks in one-step image generation, with FID scores of 1.28 on ImageNet-64\u00d764 and 8.35 on zero-shot COCO 2014, surpassing the original teacher despite a 500X reduction in inference cost.\nFurther, we show our approach can generate megapixel images by distilling SDXL, demonstrating exceptional visual quality among few-step methods, and surpassing the teacher. \nWe release our code and pretrained models.", "pdf": "https://openreview.net/pdf/3c7ea6adb0b86f707c8c396aa752165bc482e55b.pdf"} {"title": "E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection", "url": "https://openreview.net/forum?id=47loYmzxep", "detail_url": "https://openreview.net/forum?id=47loYmzxep", "authors": "Jiaqing Zhang,Mingxiang Cao,Weiying Xie,Jie Lei,DaixunLi,Wenbo Huang,Yunsong Li,Xue Yang", "tags": "NIPS 2024,Oral", "abstract": "Multimodal image fusion and object detection are crucial for autonomous driving. While current methods have advanced the fusion of texture details and semantic information, their complex training processes hinder broader applications. Addressing this challenge, we introduce E2E-MFD, a novel end-to-end algorithm for multimodal fusion detection. E2E-MFD streamlines the process, achieving high performance with a single training phase. It employs synchronous joint optimization across components to avoid suboptimal solutions associated with individual tasks. Furthermore, it implements a comprehensive optimization strategy in the gradient matrix for shared parameters, ensuring convergence to an optimal fusion detection configuration. 
Our extensive testing on multiple public datasets reveals E2E-MFD's superior capabilities, showcasing not only visually appealing image fusion but also impressive detection outcomes, such as a 3.9\\% and 2.0\\% $\\text{mAP}_{50}$ increase on horizontal object detection dataset M3FD and oriented object detection dataset DroneVehicle, respectively, compared to state-of-the-art approaches.", "pdf": "https://openreview.net/pdf/b861f70a3f6d0b0377a6c809e5aeb3cc2bb8a6ba.pdf"} {"title": "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map", "url": "https://openreview.net/forum?id=Y8YVCOMEpz", "detail_url": "https://openreview.net/forum?id=Y8YVCOMEpz", "authors": "Yuhong Chou,Man Yao,Kexin Wang,Yuqi Pan,Rui-Jie Zhu,Jibin Wu,Yiran Zhong,Yu Qiao,Bo XU,Guoqi Li", "tags": "NIPS 2024,Oral", "abstract": "Various linear complexity models, such as Linear Transformer (LinFormer), State Space Model (SSM), and Linear RNN (LinRNN), have been proposed to replace the conventional softmax attention in Transformer structures. However, the optimal design of these linear models is still an open question. In this work, we attempt to answer this question by finding the best linear approximation to softmax attention from a theoretical perspective. We start by unifying existing linear complexity models as the linear attention form and then identify three conditions for the optimal linear attention design: (1) Dynamic memory ability; (2) Static approximation ability; (3) Least parameter approximation. We find that none of the current linear models meet all three conditions, resulting in suboptimal performance. Instead, we propose Meta Linear Attention (MetaLA) as a solution that satisfies these conditions. Our experiments on Multi-Query Associative Recall (MQAR) task, language modeling, image classification, and Long-Range Arena (LRA) benchmark demonstrate that MetaLA is more effective than the existing linear models.", "pdf": "https://openreview.net/pdf/6115a7c6711108daff03a490bc177f2d26b8446b.pdf"} {"title": "Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery", "url": "https://openreview.net/forum?id=C4NbtYnyQg", "detail_url": "https://openreview.net/forum?id=C4NbtYnyQg", "authors": "Haonan Lin,Wenbin An,Jiahao Wang,Yan Chen,Feng Tian,Mengmeng Wang,QianYing Wang,Guang Dai,Jingdong Wang", "tags": "NIPS 2024,Oral", "abstract": "Recent advancements have shown promise in applying traditional Semi-Supervised Learning strategies to the task of Generalized Category Discovery (GCD). Typically, this involves a teacher-student framework in which the teacher imparts knowledge to the student to classify categories, even in the absence of explicit labels. Nevertheless, GCD presents unique challenges, particularly the absence of priors for new classes, which can lead to the teacher's misguidance and unsynchronized learning with the student, culminating in suboptimal outcomes. In our work, we delve into why traditional teacher-student designs falter in generalized category discovery as compared to their success in closed-world semi-supervised learning. We identify inconsistent pattern learning as the crux of this issue and introduce FlipClass\u2014a method that dynamically updates the teacher to align with the student's attention, instead of maintaining a static teacher reference. Our teacher-attention-update strategy refines the teacher's focus based on student feedback, promoting consistent pattern recognition and synchronized learning across old and new classes. 
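A heavily simplified sketch of the "flipped" update the FlipClass abstract describes, nudging the teacher's attention parameters toward the student's instead of keeping a static teacher; selecting parameters by the substring "attn" and the interpolation rate are illustrative assumptions:

```python
import torch

@torch.no_grad()
def align_teacher_attention(teacher, student, tau: float = 0.05):
    """Move the teacher's attention parameters a small step toward the
    student's, so both focus on consistent patterns. The 'attn' name filter
    and tau are illustrative, not the paper's exact update rule."""
    student_params = dict(student.named_parameters())
    for name, p in teacher.named_parameters():
        if "attn" in name:
            # teacher <- (1 - tau) * teacher + tau * student
            p.lerp_(student_params[name], tau)
```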
Extensive experiments on a spectrum of benchmarks affirm that FlipClass significantly surpasses contemporary GCD methods, establishing new standards for the field.", "pdf": "https://openreview.net/pdf/2b0097d679b2b1297e2351cac3b7369e7b84e150.pdf"} {"title": "NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction", "url": "https://openreview.net/forum?id=8qu52Fl1Dt", "detail_url": "https://openreview.net/forum?id=8qu52Fl1Dt", "authors": "Zixuan Gong,Guangyin Bao,Qi Zhang,Zhongwei Wan,Duoqian Miao,Shoujin Wang,Lei Zhu,Changwei Wang,Rongtao Xu,Liang Hu,Ke Liu,Yu Zhang", "tags": "NIPS 2024,Oral", "abstract": "Reconstruction of static visual stimuli from non-invasive brain activity (fMRI) has achieved great success, owing to advanced deep learning models such as CLIP and Stable Diffusion. However, the research on fMRI-to-video reconstruction remains limited since decoding the spatiotemporal perception of continuous visual experiences is formidably challenging. We contend that the key to addressing these challenges lies in accurately decoding both high-level semantics and low-level perception flows, as perceived by the brain in response to video stimuli. To this end, we propose NeuroClips, an innovative framework to decode high-fidelity and smooth video from fMRI. NeuroClips utilizes a semantics reconstructor to reconstruct video keyframes, guiding semantic accuracy and consistency, and employs a perception reconstructor to capture low-level perceptual details, ensuring video smoothness. During inference, it adopts a pre-trained T2V diffusion model injected with both keyframes and low-level perception flows for video reconstruction. Evaluated on a publicly available fMRI-video dataset, NeuroClips achieves smooth high-fidelity video reconstruction of up to 6s at 8FPS, gaining significant improvements over state-of-the-art models in various metrics, e.g., a 128% improvement in SSIM and an 81% improvement in spatiotemporal metrics. Our project is available at https://github.com/gongzix/NeuroClips.", "pdf": "https://openreview.net/pdf/258f5ea41fed74143053a220d1c9971bc970b99a.pdf"} {"title": "The Road Less Scheduled", "url": "https://openreview.net/forum?id=0XeNkkENuI", "detail_url": "https://openreview.net/forum?id=0XeNkkENuI", "authors": "Aaron Defazio,Xingyu Alice Yang,Ahmed Khaled,Konstantin Mishchenko,Harsh Mehta,Ashok Cutkosky", "tags": "NIPS 2024,Oral", "abstract": "Existing learning rate schedules that do not require specification of the optimization stopping step $T$ are greatly outperformed by learning rate schedules that depend on $T$. We propose an approach that avoids the need for this stopping time by eschewing the use of schedules entirely, while exhibiting state-of-the-art performance compared to schedules across a wide family of problems ranging from convex problems to large-scale deep learning problems. Our Schedule-Free approach introduces no additional hyper-parameters over standard optimizers with momentum. Our method is a direct consequence of a new theory we develop that unifies scheduling and iterate averaging. An open source implementation of our method is available at https://github.com/facebookresearch/schedule_free. 
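Written out from this description, a minimal schedule-free SGD loop looks roughly as follows: the gradient is taken at an interpolation between a base iterate and a running average, and the average is what is returned; treat it as a sketch rather than the reference implementation linked above:

```python
import numpy as np

def schedule_free_sgd(grad, x0, lr=0.1, beta=0.9, steps=1000):
    """Sketch of a schedule-free SGD loop: no learning-rate schedule and no
    stopping-time-dependent hyper-parameters beyond the momentum-like beta.
    Written from the abstract's description, not the released code."""
    z = x = np.asarray(x0, dtype=float)   # z: base iterate, x: running average
    for t in range(1, steps + 1):
        y = (1 - beta) * z + beta * x     # gradient point interpolates z and x
        z = z - lr * grad(y)              # plain SGD step on the base iterate
        x = (1 - 1 / t) * x + (1 / t) * z # uniform iterate averaging
    return x

# e.g. minimizing f(w) = 0.5 * ||w||^2, whose gradient is w:
w_star = schedule_free_sgd(lambda w: w, x0=np.ones(3))
```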
Schedule-Free AdamW is the core algorithm behind our winning entry to the MLCommons 2024 AlgoPerf Algorithmic Efficiency Challenge Self-Tuning track.", "pdf": "https://openreview.net/pdf/6c9eff74f240a8115542beea292c058b239a8712.pdf"} {"title": "Convolutional Differentiable Logic Gate Networks", "url": "https://openreview.net/forum?id=4bKEFyUHT4", "detail_url": "https://openreview.net/forum?id=4bKEFyUHT4", "authors": "Felix Petersen,Hilde Kuehne,Christian Borgelt,Julian Welzel,Stefano Ermon", "tags": "NIPS 2024,Oral", "abstract": "With the increasing inference cost of machine learning models, there is a growing interest in models with fast and efficient inference. \nRecently, an approach for learning logic gate networks directly via a differentiable relaxation was proposed. Logic gate networks are faster than conventional neural network approaches because their inference only requires logic gate operators such as NAND, OR, and XOR, which are the underlying building blocks of current hardware and can be efficiently executed. We build on this idea, extending it by deep logic gate tree convolutions, logical OR pooling, and residual initializations. This allows scaling logic gate networks up by over one order of magnitude and utilizing the paradigm of convolution. On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.", "pdf": "https://openreview.net/pdf/550935e8b4e775076ce2310d9d089be095ad0708.pdf"} {"title": "SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning", "url": "https://openreview.net/forum?id=uDD44NROOt", "detail_url": "https://openreview.net/forum?id=uDD44NROOt", "authors": "Huy Hoang,Tien Anh Mai,Pradeep Varakantham", "tags": "NIPS 2024,Poster", "abstract": "We focus on offline imitation learning (IL), which aims to mimic an expert's behavior using demonstrations without any interaction with the environment. One of the main challenges in offline IL is the limited support of expert demonstrations, which typically cover only a small fraction of the state-action space. While it may not be feasible to obtain numerous expert demonstrations, it is often possible to gather a larger set of sub-optimal demonstrations. For example, in treatment optimization problems, there are varying levels of doctor treatments available for different chronic conditions. These range from treatment specialists and experienced general practitioners to less experienced general practitioners. Similarly, when robots are trained to imitate humans in routine tasks, they might learn from individuals with different levels of expertise and efficiency. \n\nIn this paper, we propose an offline IL approach that leverages the larger set of sub-optimal demonstrations while effectively mimicking expert trajectories. Existing offline IL methods based on behavior cloning or distribution matching often face issues such as overfitting to the limited set of expert demonstrations or inadvertently imitating sub-optimal trajectories from the larger dataset. Our approach, which is based on inverse soft-Q learning, learns from both expert and sub-optimal demonstrations. It assigns higher importance (through learned weights) to aligning with expert demonstrations and lower importance to aligning with sub-optimal ones. A key contribution of our approach, called SPRINQL, is transforming the offline IL problem into a convex optimization over the space of Q functions. 
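A loose sketch of the weighting idea: an inverse-soft-Q-style objective that aligns more strongly with expert state-action pairs than with sub-optimal ones. The concrete terms below (mean-Q gaps with fixed weights) are illustrative placeholders, not SPRINQL's convex formulation:

```python
import torch

def weighted_alignment_loss(q_net, expert_batch, subopt_batch,
                            w_expert: float = 1.0, w_sub: float = 0.2):
    """Reward matching expert demonstrations more than sub-optimal ones, in
    the spirit of the SPRINQL abstract. Batches are (states, actions) pairs;
    the mean-Q terms and fixed weights are illustrative placeholders."""
    def q_values(batch):
        states, actions = batch
        return q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Push Q up on expert actions, and less strongly on sub-optimal ones.
    return -(w_expert * q_values(expert_batch).mean()
             + w_sub * q_values(subopt_batch).mean())
```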
Through comprehensive experimental evaluations, we demonstrate that the SPRINQL algorithm achieves state-of-the-art (SOTA) performance on offline IL benchmarks. Code is available at https://github.com/hmhuy0/SPRINQL.", "pdf": "https://openreview.net/pdf/21f890aa8acefa4c5640a534a16533bb251a5681.pdf"} {"title": "Gradient Guidance for Diffusion Models: An Optimization Perspective", "url": "https://openreview.net/forum?id=X1QeUYBXke", "detail_url": "https://openreview.net/forum?id=X1QeUYBXke", "authors": "Yingqing Guo,Hui Yuan,Yukang Yang,Minshuo Chen,Mengdi Wang", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have demonstrated empirical successes in various applications and can be adapted to task-specific needs via guidance. This paper studies a form of gradient guidance for adapting a pre-trained diffusion model towards optimizing user-specified objectives. We establish a mathematical framework for guided diffusion to systematically study its optimization theory and algorithmic design. Our theoretical analysis identifies a strong link between guided diffusion models and optimization: gradient-guided diffusion models are essentially sampling solutions to a regularized optimization problem, where the regularization is imposed by the pre-training data. As for guidance design, directly bringing in the gradient of an external objective function as guidance would jeopardize the structure in generated samples. We investigate a modified form of gradient guidance based on a forward prediction loss, which leverages the information in pre-trained score functions and provably preserves the latent structure. We further consider an iteratively fine-tuned version of gradient-guided diffusion where guidance and score network are both updated with newly generated samples. This process mimics a first-order optimization iteration in expectation, for which we prove an $\tilde{\mathcal{O}}(1/K)$ convergence rate to the global optimum when the objective function is concave. Our code is released at https://github.com/yukang123/GGDMOptim.git.", "pdf": "https://openreview.net/pdf/f1a0fd98ecfdc9b4afa72ce8adc61e3dea16e2ca.pdf"} {"title": "Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models", "url": "https://openreview.net/forum?id=ncYGjx2vnE", "detail_url": "https://openreview.net/forum?id=ncYGjx2vnE", "authors": "Ali Behrouz,Michele Santacatterina,Ramin Zabih", "tags": "NIPS 2024,Poster", "abstract": "Modeling multivariate time series is a well-established problem with a wide range of applications from healthcare to financial markets. It, however, is challenging as it requires methods to (1) have high expressive power of representing complicated dependencies along the time axis to capture both long-term progression and seasonal patterns, (2) capture the inter-variate dependencies when it is informative, (3) dynamically model the dependencies of variate and time dimensions, and (4) have efficient training and inference for very long sequences. Traditional State Space Models (SSMs) are classical approaches for univariate time series modeling due to their simplicity and expressive power to represent linear dependencies. They, however, have fundamentally limited expressive power to capture non-linear dependencies, are slow in practice, and fail to model the inter-variate information flow. 
Despite recent attempts to improve the expressive power of SSMs by using deep structured SSMs, the existing methods are either limited to univariate time series, fail to model complex patterns (e.g., seasonal patterns), fail to dynamically model the dependencies of variate and time dimensions, and/or are input-independent. We present Chimera, an expressive variation of the 2-dimensional SSMs with careful design of parameters to maintain high expressive power while keeping the training complexity linear. Using two SSM heads with different discretization processes and input-dependent parameters, Chimera is provably able to learn long-term progression, seasonal patterns, and desirable dynamic autoregressive processes. To improve the efficiency of complex 2D recurrence, we present a fast training using a new 2-dimensional parallel selective scan. Our experimental evaluation shows the superior performance of Chimera on extensive and diverse benchmarks, including ECG and speech time series classification, long-term and short-term time series forecasting, and time series anomaly detection.", "pdf": "https://openreview.net/pdf/293e7ef70612d586ad3576a085191e54b2c0eb16.pdf"} {"title": "A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation", "url": "https://openreview.net/forum?id=s3icZC2NLq", "detail_url": "https://openreview.net/forum?id=s3icZC2NLq", "authors": "Heyang Zhao,Jiafan He,Quanquan Gu", "tags": "NIPS 2024,Poster", "abstract": "The exploration-exploitation dilemma has been a central challenge in reinforcement learning (RL) with complex model classes. In this paper, we propose a new algorithm, Monotonic Q-Learning with Upper Confidence Bound (MQL-UCB) for RL with general function approximation. Our key algorithmic design includes (1) a general deterministic policy-switching strategy that achieves low switching cost, (2) a monotonic value function structure with carefully controlled function class complexity, and (3) a variance-weighted regression scheme that exploits historical trajectories with high data efficiency. MQL-UCB achieves minimax optimal regret of $\tilde{O}(d\sqrt{HK})$ when $K$ is sufficiently large and near-optimal policy switching cost of $\tilde{O}(dH)$, with $d$ being the eluder dimension of the function class, $H$ being the planning horizon, and $K$ being the number of episodes. \n Our work sheds light on designing provably sample-efficient and deployment-efficient Q-learning with nonlinear function approximation.", "pdf": "https://openreview.net/pdf/b3423ead9010a96399c1d7d679491e9c48a0fd4f.pdf"} {"title": "VQ-Map: Bird's-Eye-View Map Layout Estimation in Tokenized Discrete Space via Vector Quantization", "url": "https://openreview.net/forum?id=bKuxygBW2Y", "detail_url": "https://openreview.net/forum?id=bKuxygBW2Y", "authors": "Yiwei Zhang,Jin Gao,Fudong Ge,Guan Luo,Bing Li,Zhaoxiang Zhang,Haibin Ling,Weiming Hu", "tags": "NIPS 2024,Poster", "abstract": "Bird's-eye-view (BEV) map layout estimation requires an accurate and full understanding of the semantics for the environmental elements around the ego car to make the results coherent and realistic. Due to the challenges posed by occlusion, unfavourable imaging conditions and low resolution, *generating* the BEV semantic maps corresponding to corrupted or invalid areas in the perspective view (PV) has become appealing recently. *The question is how to align the PV features with the generative models to facilitate the map estimation.* 
In this paper, we propose to utilize a generative model similar to the Vector Quantized-Variational AutoEncoder (VQ-VAE) to acquire prior knowledge for the high-level BEV semantics in the tokenized discrete space. Thanks to the obtained BEV tokens accompanied by a codebook embedding encapsulating the semantics for different BEV elements in the groundtruth maps, we are able to directly align the sparse backbone image features with the obtained BEV tokens from the discrete representation learning via a specialized token decoder module, and finally generate high-quality BEV maps with the BEV codebook embedding serving as a bridge between PV and BEV. We evaluate the BEV map layout estimation performance of our model, termed VQ-Map, on both the nuScenes and Argoverse benchmarks, achieving 62.2/47.6 mean IoU for surround-view/monocular evaluation on nuScenes, as well as 73.4 IoU for monocular evaluation on Argoverse, which all set a new record for this map layout estimation task. The code and models are available on \url{https://github.com/Z1zyw/VQ-Map}.", "pdf": "https://openreview.net/pdf/685c7f5fa23644eff84f69db3233d4fb61bc6c4e.pdf"} {"title": "On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks", "url": "https://openreview.net/forum?id=3LZHatxUa9", "detail_url": "https://openreview.net/forum?id=3LZHatxUa9", "authors": "Jiong Zhu,Gaotang Li,Yao-An Yang,Jing Zhu,Xuehao Cui,Danai Koutra", "tags": "NIPS 2024,Poster", "abstract": "Heterophily, or the tendency of connected nodes in networks to have different class labels or dissimilar features, has been identified as challenging for many Graph Neural Network (GNN) models. While the challenges of applying GNNs for node classification when class labels display strong heterophily are well understood, it is unclear how heterophily affects GNN performance in other important graph learning tasks where class labels are not available. In this work, we focus on the link prediction task and systematically analyze the impact of heterophily in node features on GNN performance. We first introduce formal definitions of homophilic and heterophilic link prediction tasks, and present a theoretical framework that highlights the different optimizations needed for the respective tasks. We then analyze how different link prediction encoders and decoders adapt to varying levels of feature homophily and introduce designs for improved performance. Based on our definitions, we identify and analyze six real-world benchmarks spanning from homophilic to heterophilic link prediction settings, with graphs containing up to 30M edges. Our empirical analysis on a variety of synthetic and real-world datasets confirms our theoretical insights and highlights the importance of adopting learnable decoders and GNN encoders with ego- and neighbor-embedding separation in message passing for link prediction tasks beyond homophily.", "pdf": "https://openreview.net/pdf/7c0d24d8c5b940086df83fb002c3e92da763b36b.pdf"} {"title": "Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation", "url": "https://openreview.net/forum?id=7G362fgJFd", "detail_url": "https://openreview.net/forum?id=7G362fgJFd", "authors": "Xin Yuan,Michael Maire", "tags": "NIPS 2024,Poster", "abstract": "We develop a neural network architecture which, trained in an unsupervised manner as a denoising diffusion model, simultaneously learns to both generate and segment images.
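A minimal sketch of the codebook lookup at the core of the VQ-VAE-style tokenization used by VQ-Map above; the array shapes are illustrative assumptions, and the token decoder and PV-BEV alignment are not shown:

```python
import numpy as np

def vector_quantize(features, codebook):
    """Nearest-codebook-entry lookup: the discrete-token step.

    Each feature vector is replaced by (the index of) its closest
    codebook embedding. `features` is (N, d), `codebook` is (K, d).
    """
    # squared distances between every feature and every code: (N, K)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = d2.argmin(axis=1)          # discrete BEV-style tokens
    quantized = codebook[tokens]        # embeddings passed downstream
    return tokens, quantized
```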
Learning is driven entirely by the denoising diffusion objective, without any annotation or prior knowledge about regions during training. A computational bottleneck, built into the neural architecture, encourages the denoising network to partition an input into regions, denoise them in parallel, and combine the results. Our trained model generates both synthetic images and, by simple examination of its internal predicted partitions, semantic segmentations of those images. Without fine-tuning, we directly apply our unsupervised model to the downstream task of segmenting real images via noising and subsequently denoising them. Experiments demonstrate that our model achieves accurate unsupervised image segmentation and high-quality synthetic image generation across multiple datasets.", "pdf": "https://openreview.net/pdf/0b0e26bd5cb8b993746d295c433c593d7ad86d9c.pdf"} {"title": "Probabilistic Decomposed Linear Dynamical Systems for Robust Discovery of Latent Neural Dynamics", "url": "https://openreview.net/forum?id=XPhSbybD73", "detail_url": "https://openreview.net/forum?id=XPhSbybD73", "authors": "Yenho Chen,Noga Mudrik,Kyle A. Johnsen,Sankaraleengam Alagapan,Adam Shabti Charles,Christopher John Rozell", "tags": "NIPS 2024,Poster", "abstract": "Time-varying linear state-space models are powerful tools for obtaining mathematically interpretable representations of neural signals. For example, switching and decomposed models describe complex systems using latent variables that evolve according to simple locally linear dynamics. However, existing methods for latent variable estimation are not robust to dynamical noise and system nonlinearity due to noise-sensitive inference procedures and limited model formulations. This can lead to inconsistent results on signals with similar dynamics, limiting the model's ability to provide scientific insight. In this work, we address these limitations and propose a probabilistic approach to latent variable estimation in decomposed models that improves robustness against dynamical noise. Additionally, we introduce an extended latent dynamics model to improve robustness against system nonlinearities. We evaluate our approach on several synthetic dynamical systems, including an empirically-derived brain-computer interface experiment, and demonstrate more accurate latent variable inference in nonlinear systems with diverse noise conditions. Furthermore, we apply our method to a real-world clinical neurophysiology dataset, illustrating the ability to identify interpretable and coherent structure where previous models cannot.", "pdf": "https://openreview.net/pdf/97fd4685ad572113a49942a0e71937b3db55efb0.pdf"} {"title": "Implicit Regularization of Decentralized Gradient Descent for Sparse Regression", "url": "https://openreview.net/forum?id=MlADRQI0Wf", "detail_url": "https://openreview.net/forum?id=MlADRQI0Wf", "authors": "Tongle Wu,Ying Sun", "tags": "NIPS 2024,Poster", "abstract": "We consider learning a sparse model from linear measurements taken by a network of agents. Different from existing decentralized methods designed based on the LASSO regression with explicit $\\ell_1$ norm regularization, we exploit the implicit regularization of decentralized optimization method applied to an over-parameterized nonconvex least squares formulation without penalization. 
Our first result shows that despite nonconvexity, if the network connectivity is good, the well-known decentralized gradient descent algorithm (DGD) with small initialization and early stopping can compute the statistically optimal solution. Sufficient conditions on the initialization scale, choice of step size, network connectivity, and stopping time are further provided to achieve convergence. Our result recovers the convergence rate of gradient descent in the centralized setting, showing its tightness. \nBased on the analysis of DGD, we further propose a communication-efficient version, termed T-DGD, by truncating the iterates before transmission. In the high signal-to-noise ratio (SNR) regime, we show that T-DGD achieves comparable statistical accuracy to DGD, while the communication cost is logarithmic in the number of parameters. Numerical results are provided to validate the effectiveness of DGD and T-DGD for sparse learning through implicit regularization.", "pdf": "https://openreview.net/pdf/c2c69e05224053f3049709bd80a96662992b6366.pdf"} {"title": "Universal Exact Compression of Differentially Private Mechanisms", "url": "https://openreview.net/forum?id=CgGjT8EG8A", "detail_url": "https://openreview.net/forum?id=CgGjT8EG8A", "authors": "Yanxiao Liu,Wei-Ning Chen,Ayfer Ozgur,Cheuk Ting Li", "tags": "NIPS 2024,Poster", "abstract": "To reduce the communication cost of differential privacy mechanisms, we introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer while ensuring local differential privacy. Unlike previous simulation-based local differential privacy mechanisms, PPR exactly preserves the joint distribution of the data and the output of the original local randomizer. Hence, the PPR-compressed privacy mechanism retains all desirable statistical properties of the original privacy mechanism such as unbiasedness and Gaussianity. Moreover, PPR achieves a compression size within a logarithmic gap from the theoretical lower bound. Using the PPR, we give a new order-wise trade-off between communication, accuracy, central and local differential privacy for distributed mean estimation. Experimental results on distributed mean estimation show that PPR consistently gives a better trade-off between communication, accuracy and central differential privacy compared to the coordinate subsampled Gaussian mechanism, while also providing local differential privacy.", "pdf": "https://openreview.net/pdf/bc8db1e9cf2899d281127d72d1993d71ead0af3c.pdf"} {"title": "Learning Representations for Hierarchies with Minimal Support", "url": "https://openreview.net/forum?id=HFS800reZK", "detail_url": "https://openreview.net/forum?id=HFS800reZK", "authors": "Benjamin Rozonoyer,Michael Boratko,Dhruvesh Patel,Wenlong Zhao,Shib Sankar Dasgupta,Hung Le,Andrew McCallum", "tags": "NIPS 2024,Poster", "abstract": "When training node embedding models to represent large directed graphs (digraphs), it is impossible to observe all entries of the adjacency matrix during training. As a consequence, most methods employ sampling. For very large digraphs, however, this means many (most) entries may be unobserved during training. In general, observing every entry would be necessary to uniquely identify a graph; however, if we know the graph has a certain property, some entries can be omitted - for example, only half the entries would be required for a symmetric graph.
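A hedged sketch of one agent's truncated decentralized gradient step in the spirit of T-DGD above; the plain least-squares parameterization and the top-k truncation rule are illustrative assumptions, whereas the paper analyzes an over-parameterized reformulation:

```python
import numpy as np

def t_dgd_step(X_local, y_local, w_all, W_mix, i, lr, k):
    """One agent's schematic truncated decentralized gradient step.

    Mix neighbors' iterates with a doubly-stochastic weight matrix, take
    a local least-squares gradient step, then keep only the k
    largest-magnitude coordinates before transmission.
    """
    w_mixed = W_mix[i] @ w_all                      # consensus averaging over agents
    grad = X_local.T @ (X_local @ w_mixed - y_local) / len(y_local)
    w_new = w_mixed - lr * grad
    keep = np.argsort(np.abs(w_new))[-k:]           # truncate: top-k entries survive
    w_sparse = np.zeros_like(w_new)
    w_sparse[keep] = w_new[keep]
    return w_sparse
```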
\nIn this work, we develop a novel framework to identify a subset of entries required to uniquely distinguish a graph among all transitively-closed DAGs. We give an explicit algorithm to compute the provably minimal set of entries, and demonstrate empirically that one can train node embedding models with greater efficiency and performance, provided the energy function has an appropriate inductive bias. We achieve robust performance on synthetic hierarchies and a larger real-world taxonomy, observing improved convergence rates in a resource-constrained setting while reducing the set of training examples by as much as 99%.", "pdf": "https://openreview.net/pdf/98c7ccf6ef86019ffc994aba434e5c6603739459.pdf"} {"title": "OwMatch: Conditional Self-Labeling with Consistency for Open-world Semi-Supervised Learning", "url": "https://openreview.net/forum?id=rle9X7DQuH", "detail_url": "https://openreview.net/forum?id=rle9X7DQuH", "authors": "Shengjie Niu,Lifan Lin,Jian Huang,Chao Wang", "tags": "NIPS 2024,Poster", "abstract": "Semi-supervised learning (SSL) offers a robust framework for harnessing the potential of unannotated data. Traditionally, SSL mandates that all classes possess labeled instances. However, the emergence of open-world SSL (OwSSL) introduces a more practical challenge, wherein unlabeled data may encompass samples from unseen classes. This scenario leads to misclassification of unseen classes as known ones, consequently undermining classification accuracy. To overcome this challenge, this study revisits two methodologies from self-supervised and semi-supervised learning, self-labeling and consistency, tailoring them to address the OwSSL problem. Specifically, we propose an effective framework called _OwMatch_, combining conditional self-labeling and open-world hierarchical thresholding. Theoretically, we analyze the estimation of class distribution on unlabeled data through rigorous statistical analysis, thus demonstrating that OwMatch can ensure the unbiasedness of the label assignment estimator with reliability. Comprehensive empirical analyses demonstrate that our method yields substantial performance enhancements across both known and unknown classes in comparison to previous studies. Code is available at [https://github.com/niusj03/OwMatch](https://github.com/niusj03/OwMatch).", "pdf": "https://openreview.net/pdf/3dcbcaa02ca1db047267a26a4853ed26ee59bd15.pdf"} {"title": "Fair Allocation in Dynamic Mechanism Design", "url": "https://openreview.net/forum?id=bEunGps83o", "detail_url": "https://openreview.net/forum?id=bEunGps83o", "authors": "Alireza Fallah,Michael Jordan,Annie S Ulichney", "tags": "NIPS 2024,Poster", "abstract": "We consider a dynamic mechanism design problem where an auctioneer sells an indivisible good to two groups of buyers in every round, for a total of $T$ rounds. The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group. We begin by studying the static case ($T=1$) and establish that the optimal mechanism involves two types of subsidization: one that increases the overall probability of allocation to all buyers, and another that favors the group which otherwise has a lower probability of winning the item. We then extend our results to the dynamic case by characterizing a set of recursive functions that determine the optimal allocation and payments in each round. 
Notably, our results establish that in the dynamic case, the seller, on one hand, commits to a participation reward to incentivize truth-telling, and, on the other hand, charges an entry fee for every round. Moreover, the optimal allocation once more involves subsidization in favor of one group, where the extent of subsidization depends on the difference in future utilities for both the seller and buyers when allocating the item to one group versus the other. Finally, we present an approximation scheme to solve the recursive equations and determine an approximately optimal and fair allocation efficiently.", "pdf": "https://openreview.net/pdf/8365b7cc74e6acf8ccffc75743d5ba8d7745188d.pdf"} {"title": "Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models", "url": "https://openreview.net/forum?id=MN7d0S2i1d", "detail_url": "https://openreview.net/forum?id=MN7d0S2i1d", "authors": "Puqian Wang,Nikos Zarifis,Ilias Diakonikolas,Jelena Diakonikolas", "tags": "NIPS 2024,Poster", "abstract": "A single-index model (SIM) is a function of the form $\sigma(\mathbf{w}^{\ast} \cdot \mathbf{x})$, where\n$\sigma: \mathbb{R} \to \mathbb{R}$ is a known link function and $\mathbf{w}^{\ast}$ is a hidden unit vector. \nWe study the task of learning SIMs in the agnostic (a.k.a. adversarial label noise) model \nwith respect to the $L^2_2$-loss under the Gaussian distribution. \nOur main result is a sample and computationally efficient agnostic proper learner \nthat attains $L^2_2$-error of $O(\mathrm{OPT})+\epsilon$, where $\mathrm{OPT}$ is the optimal loss. The sample complexity of our algorithm is \n$\tilde{O}(d^{\lceil k^{\ast}/2\rceil}+d/\epsilon)$, where \n$k^{\ast}$ is the information-exponent of $\sigma$ \ncorresponding to the degree of its first non-zero Hermite coefficient. \nThis sample bound nearly matches known CSQ lower bounds, even in the realizable setting. \nPrior algorithmic work in this setting had focused \non learning in the realizable case or in the presence \nof semi-random noise. Prior computationally efficient robust learners required \nsignificantly stronger assumptions on the link function.", "pdf": "https://openreview.net/pdf/cf0991dda9a6419627e0a2ad5fa255be8c831ebe.pdf"} {"title": "Once Read is Enough: Domain-specific Pretraining-free Language Models with Cluster-guided Sparse Experts for Long-tail Domain Knowledge", "url": "https://openreview.net/forum?id=manHbkpIW6", "detail_url": "https://openreview.net/forum?id=manHbkpIW6", "authors": "Fang Dong,Mengyi Chen,Jixian Zhou,Yubin Shi,Yixuan Chen,Mingzhi Dong,Yujiang Wang,Dongsheng Li,Xiaochen Yang,Rui Zhu,Robert P. Dick,Qin Lv,Fan Yang,Tun Lu,Ning Gu,Li Shang", "tags": "NIPS 2024,Poster", "abstract": "Language models (LMs) only pretrained on a general and massive corpus usually cannot attain satisfactory performance on domain-specific downstream tasks, and hence, applying domain-specific pretraining to LMs is a common and indispensable practice.\nHowever, domain-specific pretraining can be costly and time-consuming, hindering LMs' deployment in real-world applications.\nIn this work, we consider the inability to memorize domain-specific knowledge embedded in the general corpus with rare occurrences and long-tail distributions as the leading cause for pretrained LMs' inferior downstream performance.
\nAnalysis of Neural Tangent Kernels (NTKs) reveals that those long-tail data are commonly overlooked in the model's gradient updates and, consequently, are not effectively memorized, leading to poor domain-specific downstream performance.\nBased on the intuition that data with similar semantic meaning are closer in the embedding space, we devise a Cluster-guided Sparse Expert (CSE) layer to actively learn long-tail domain knowledge typically neglected in previous pretrained LMs.\nDuring pretraining, a CSE layer efficiently clusters domain knowledge together and assigns long-tail knowledge to designated extra experts. CSE is also a lightweight structure that only needs to be incorporated in several deep layers.\nWith our training strategy, we found that during pretraining, data of long-tail knowledge gradually form isolated, outlier clusters in an LM's representation spaces, especially in deeper layers. Our experimental results show that only pretraining CSE-based LMs is enough to achieve superior performance to regularly pretrained-finetuned LMs on various downstream tasks, implying the prospects of domain-specific-pretraining-free language models.", "pdf": "https://openreview.net/pdf/b28c3a4f4f5da3bb75eb2cc6852c1eb990371e11.pdf"} {"title": "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models", "url": "https://openreview.net/forum?id=JhqyeppMiD", "detail_url": "https://openreview.net/forum?id=JhqyeppMiD", "authors": "Yuancheng Xu,Jiarui Yao,Manli Shu,Yanchao Sun,Zichu Wu,Ning Yu,Tom Goldstein,Furong Huang", "tags": "NIPS 2024,Poster", "abstract": "Vision-Language Models (VLMs) excel in generating textual responses from visual inputs, but their versatility raises security concerns. This study takes the first step in exposing VLMs' susceptibility to data poisoning attacks that can manipulate responses to innocuous, everyday prompts. We introduce Shadowcast, a stealthy data poisoning attack where poison samples are visually indistinguishable from benign images with matching texts. Shadowcast demonstrates effectiveness in two attack types. The first is a traditional Label Attack, tricking VLMs into misidentifying class labels, such as confusing Donald Trump for Joe Biden. The second is a novel Persuasion Attack, leveraging VLMs' text generation capabilities to craft persuasive and seemingly rational narratives for misinformation, such as portraying junk food as healthy. We show that Shadowcast effectively achieves the attacker's intentions using as few as 50 poison samples. Crucially, the poisoned samples demonstrate transferability across different VLM architectures, posing a significant concern in black-box settings. Moreover, Shadowcast remains potent under realistic conditions involving various text prompts, training data augmentation, and image compression techniques. This work reveals how poisoned VLMs can disseminate convincing yet deceptive misinformation to everyday, benign users, emphasizing the importance of data integrity for responsible VLM deployments.
Our code is available at: https://github.com/umd-huang-lab/VLM-Poisoning.", "pdf": "https://openreview.net/pdf/9d686ad4b89c927c71ccff3e7ea68ea1b6c0dce2.pdf"} {"title": "Multi-Instance Partial-Label Learning with Margin Adjustment", "url": "https://openreview.net/forum?id=NnAi0L5H8J", "detail_url": "https://openreview.net/forum?id=NnAi0L5H8J", "authors": "Wei Tang,Yin-Fang Yang,Zhaofei Wang,Weijia Zhang,Min-Ling Zhang", "tags": "NIPS 2024,Poster", "abstract": "Multi-instance partial-label learning (MIPL) is an emerging learning framework where each training sample is represented as a multi-instance bag associated with a candidate label set. Existing MIPL algorithms often overlook the margins for attention scores and predicted probabilities, leading to suboptimal generalization performance. A critical issue with these algorithms is that the highest prediction probability of the classifier may appear on a non-candidate label. In this paper, we propose an algorithm named MIPLMA, i.e., Multi-Instance Partial-Label learning with Margin Adjustment, which adjusts the margins for attention scores and predicted probabilities. We introduce a margin-aware attention mechanism to dynamically adjust the margins for attention scores and propose a margin distribution\nloss to constrain the margins between the predicted probabilities on candidate and non-candidate label sets. Experimental results demonstrate the superior performance of MIPLMA over existing MIPL algorithms, as well as other well-established multi-instance learning algorithms and partial-label learning algorithms.", "pdf": "https://openreview.net/pdf/6d7eb1b41514181cec8475f2ea9d3edf24e6cd56.pdf"} {"title": "Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization", "url": "https://openreview.net/forum?id=GN2GXjPyN8", "detail_url": "https://openreview.net/forum?id=GN2GXjPyN8", "authors": "Xiangxin Zhou,Dongyu Xue,Ruizhe Chen,Zaixiang Zheng,Liang Wang,Quanquan Gu", "tags": "NIPS 2024,Poster", "abstract": "Antibody design, a crucial task with significant implications across various disciplines such as therapeutics and biology, presents considerable challenges due to its intricate nature. In this paper, we tackle antigen-specific antibody sequence-structure co-design as an optimization problem towards specific preferences, considering both rationality and functionality. Leveraging a pre-trained conditional diffusion model that jointly models sequences and structures of antibodies with equivariant neural networks, we propose direct energy-based preference optimization to guide the generation of antibodies with both rational structures and considerable binding affinities to given antigens. Our method involves fine-tuning the pre-trained diffusion model using a residue-level decomposed energy preference. Additionally, we employ gradient surgery to address conflicts between various types of energy, such as attraction and repulsion. 
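A minimal PCGrad-style sketch of the gradient-surgery step mentioned in the antibody-design record above; projecting conflicting gradients onto each other's normal plane is the generic technique, and the paper's residue-level energy decomposition is not reproduced here:

```python
import numpy as np

def gradient_surgery(g_attract, g_repulse):
    """Resolve a conflict between two energy gradients (schematic).

    When the two gradients have negative inner product, each is
    projected onto the normal plane of the other before summation,
    removing the directly conflicting components.
    """
    g1, g2 = g_attract.copy(), g_repulse.copy()
    if g1 @ g2 < 0:                                   # conflict detected
        g1 = g1 - (g1 @ g2) / (g2 @ g2) * g2          # strip component along g2
        g2 = g2 - (g_attract @ g2) / (g_attract @ g_attract) * g_attract
    return g1 + g2
```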
Experiments on the RAbD benchmark show that our approach effectively optimizes the energy of generated antibodies and achieves state-of-the-art performance in designing high-quality antibodies with low total energy and high binding affinity simultaneously, demonstrating its superiority.", "pdf": "https://openreview.net/pdf/1707cccb06a5edc814908e30e85b89e886aed8f5.pdf"} {"title": "Deep Support Vectors", "url": "https://openreview.net/forum?id=5WoYFypPv0", "detail_url": "https://openreview.net/forum?id=5WoYFypPv0", "authors": "Junhoo Lee,Hyunho Lee,Kyomin Hwang,Nojun Kwak", "tags": "NIPS 2024,Poster", "abstract": "Deep learning has achieved tremendous success. However, unlike SVMs, which provide direct decision criteria and can be trained with a small dataset, it still has significant weaknesses due to its requirement for massive datasets during training and the black-box nature of its decision criteria. This paper addresses these issues by identifying support vectors in deep learning models. To this end, we propose the DeepKKT condition, an adaptation of the traditional Karush-Kuhn-Tucker (KKT) condition for deep learning models, and confirm that Deep Support Vectors (DSVs) generated using this condition exhibit properties similar to traditional support vectors. This allows us to apply our method to few-shot dataset distillation problems and alleviate the black-box characteristics of deep learning models. Additionally, we demonstrate that the DeepKKT condition can transform conventional classification models into generative models with high fidelity, particularly as latent generation models using class labels as latent variables. We validate the effectiveness of DSVs using common datasets (ImageNet, CIFAR10 and CIFAR100) on the general architectures (ResNet and ConvNet), proving their practical applicability.", "pdf": "https://openreview.net/pdf/c34cbd4c21b4871ff90d03acc5b73b7af13721a3.pdf"} {"title": "Balancing Context Length and Mixing Times for Reinforcement Learning at Scale", "url": "https://openreview.net/forum?id=VaJ4XOW7Ey", "detail_url": "https://openreview.net/forum?id=VaJ4XOW7Ey", "authors": "Matthew Riemer,Khimya Khetarpal,Janarthanan Rajendran,Sarath Chandar", "tags": "NIPS 2024,Poster", "abstract": "Due to the recent remarkable advances in artificial intelligence, researchers have begun to consider challenging learning problems such as learning to generalize behavior from large offline datasets or learning online in non-Markovian environments. Meanwhile, recent advances in both of these areas have increasingly relied on conditioning policies on large context lengths. A natural question is whether there is a limit to the performance benefits of increasing the context length if the computation needed is available. In this work, we establish a novel theoretical result that links the context length of a policy to the time needed to reliably evaluate its performance (i.e., its mixing time) in large-scale partially observable reinforcement learning environments that exhibit latent sub-task structure. This analysis underscores a key tradeoff: when we extend the context length, our policy can more effectively model non-Markovian dependencies, but this comes at the cost of potentially slower policy evaluation and as a result slower downstream learning. Moreover, our empirical results highlight the relevance of this analysis when leveraging Transformer-based neural networks.
This perspective will become increasingly pertinent as the field scales towards larger and more realistic environments, opening up a number of potential future directions for improving the way we design learning agents.", "pdf": "https://openreview.net/pdf/0d2f1e3d4565423b45b2830d8dcae8ea0d71fa8d.pdf"} {"title": "MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution", "url": "https://openreview.net/forum?id=qevq3FZ63J", "detail_url": "https://openreview.net/forum?id=qevq3FZ63J", "authors": "Wei Tao,Yucheng Zhou,Yanlin Wang,Wenqiang Zhang,Hongyu Zhang,Yu Cheng", "tags": "NIPS 2024,Poster", "abstract": "In software development, resolving the emergent issues within GitHub repositories is a complex challenge that involves not only the incorporation of new code but also the maintenance of existing code.\nLarge Language Models (LLMs) have shown promise in code generation but face difficulties in resolving GitHub issues, particularly at the repository level. \nTo overcome this challenge, we empirically study the reason why LLMs fail to resolve GitHub issues and analyze the major factors. \nMotivated by the empirical findings, we propose a novel LLM-based **M**ulti-**A**gent framework for **G**itHub **I**ssue re**S**olution, **MAGIS**, consisting of four agents customized for software evolution: Manager, Repository Custodian, Developer, and Quality Assurance Engineer agents. \nThis framework leverages the collaboration of various agents in the planning and coding process to unlock the potential of LLMs to resolve GitHub issues. \nIn experiments, we employ the SWE-bench benchmark to compare MAGIS with popular LLMs, including GPT-3.5, GPT-4, and Claude-2. \nMAGIS can resolve **13.94%** of GitHub issues, significantly outperforming the baselines.\nSpecifically, MAGIS achieves an eight-fold increase in the resolved ratio over the direct application of GPT-4, the advanced LLM.", "pdf": "https://openreview.net/pdf/160f5e4c2c7ce5f4555901cb61fa6bd97dbfbd5c.pdf"} {"title": "NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention", "url": "https://openreview.net/forum?id=4xDxVQHsbZ", "detail_url": "https://openreview.net/forum?id=4xDxVQHsbZ", "authors": "Tianyi Zhang,Jonah Wonkyu Yi,Bowen Yao,Zhaozhuo Xu,Anshumali Shrivastava", "tags": "NIPS 2024,Poster", "abstract": "Large Language Model (LLM) inference on Central Processing Units (CPU) is challenging due to the vast quantities of Multiply-Add (MAD) matrix operations in the attention computations. This paper highlights a rare gem in modern CPUs, Single-Instruction-Multiple-Data (SIMD) registers, which allow for ultra-low-latency lookups in a batch. We leverage this unique capability to propose NoMAD-Attention, an efficient attention algorithm that replaces MAD operations with in-register lookups. Through hardware-aware algorithmic designs, NoMAD-Attention achieves the computation of attention scores using repeated fast accesses to SIMD registers. NoMAD-Attention works with pre-trained attention-based LLMs without model finetuning.
Extensive empirical evaluations demonstrate that NoMAD-Attention maintains the quality of the original LLMs well and speeds up the 4-bit quantized LLaMA-7B-based model by up to $2 \times$ at 16k context length.", "pdf": "https://openreview.net/pdf/68372dd1d74a348f9569575a9907e59741292fab.pdf"} {"title": "Navigating the Effect of Parametrization for Dimensionality Reduction", "url": "https://openreview.net/forum?id=eYNYnYle41", "detail_url": "https://openreview.net/forum?id=eYNYnYle41", "authors": "Haiyang Huang,Yingfan Wang,Cynthia Rudin", "tags": "NIPS 2024,Poster", "abstract": "Parametric dimensionality reduction methods have gained prominence for their ability to generalize to unseen datasets, an advantage that traditional non-parametric approaches typically lack. Despite their growing popularity, there remains a prevalent misconception among practitioners about the equivalence in performance between parametric and non-parametric methods. Here, we show that these methods are not equivalent -- parametric methods retain global structure but lose significant local details. To explain this, we provide evidence that parameterized approaches lack the ability to repulse negative samples, and the choice of loss function also has an impact.\nAddressing these issues, we developed a new parametric method, ParamRepulsor, that incorporates Hard Negative Mining and a loss function that applies a strong repulsive force. This new method achieves state-of-the-art performance on local structure preservation for parametric methods without sacrificing the fidelity of global structural representation. Our code is available at https://github.com/hyhuang00/ParamRepulsor.", "pdf": "https://openreview.net/pdf/dd9ebeee6f173ea24fa48be291e3625217634dd4.pdf"} {"title": "$\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$", "url": "https://openreview.net/forum?id=ZfBuhzE556", "detail_url": "https://openreview.net/forum?id=ZfBuhzE556", "authors": "Junkang Wu,Yuexiang Xie,Zhengyi Yang,Jiancan Wu,Jinyang Gao,Bolin Ding,Xiang Wang,Xiangnan He", "tags": "NIPS 2024,Poster", "abstract": "Direct Preference Optimization (DPO) has emerged as a compelling approach for training Large Language Models (LLMs) to adhere to human preferences. However, the performance of DPO is sensitive to the fine-tuning of its trade-off parameter $\beta$, as well as to the quality of the preference data. We analyze the impact of $\beta$ and data quality on DPO, uncovering that optimal $\beta$ values vary with the informativeness of pairwise data. Addressing the limitations of static $\beta$ values, we introduce a novel framework that dynamically calibrates $\beta$ at the batch level, informed by data quality considerations. Additionally, our method incorporates $\beta$-guided data filtering to safeguard against the influence of outliers. Through empirical evaluation, we demonstrate that our dynamic $\beta$ adjustment technique significantly improves DPO's performance across a range of models and datasets, offering a more robust and adaptable training paradigm for aligning LLMs with human feedback.
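A hedged sketch of a batch-level dynamic-$\beta$ DPO loss in the spirit of the record above; the tanh-based calibration rule and its hyperparameters are illustrative assumptions, not the paper's formula, and the $\beta$-guided outlier filtering is omitted:

```python
import torch
import torch.nn.functional as F

def dynamic_beta_dpo_loss(pi_logps_w, pi_logps_l, ref_logps_w, ref_logps_l,
                          beta0=0.1, alpha=0.5):
    """DPO loss with a batch-level, data-quality-aware beta (schematic).

    The batch's mean reward margin rescales the trade-off parameter
    before the standard DPO objective is applied, so more informative
    (larger-margin) batches receive a larger effective beta.
    """
    margin = (pi_logps_w - ref_logps_w) - (pi_logps_l - ref_logps_l)
    beta = beta0 * (1.0 + alpha * torch.tanh(margin.mean().detach()))
    return -F.logsigmoid(beta * margin).mean()
```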
The code is available at \url{https://anonymous.4open.science/r/beta-DPO-EE6C}.", "pdf": "https://openreview.net/pdf/30536c86d3ed63ada9ccbfca8f6fbea2d6282296.pdf"} {"title": "Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling", "url": "https://openreview.net/forum?id=CMgxAaRqZh", "detail_url": "https://openreview.net/forum?id=CMgxAaRqZh", "authors": "Yiran Zhao,Wenyue Zheng,Tianle Cai,Do Xuan Long,Kenji Kawaguchi,Anirudh Goyal,Michael Shieh", "tags": "NIPS 2024,Poster", "abstract": "Safety of Large Language Models (LLMs) has become a central issue given their rapid progress and wide applications. Greedy Coordinate Gradient (GCG) has been shown to be effective in constructing prompts containing adversarial suffixes to break presumably safe LLMs, but the optimization of GCG is time-consuming and limits its practicality. To reduce the time cost of GCG and enable more comprehensive studies of LLM safety, in this work, we study a new algorithm called $\texttt{Probe sampling}$ to accelerate the GCG algorithm. At the core of the algorithm is a mechanism that dynamically determines how similar a smaller draft model's predictions are to the target model's predictions for prompt candidates. When the target model is similar to the draft model, we rely heavily on the draft model to filter out a large number of potential prompt candidates to reduce the computation time. Probe sampling achieves up to a $5.6$ times speedup using Llama2-7b-chat and leads to equal or improved attack success rate (ASR) on AdvBench. Furthermore, probe sampling is also able to accelerate other prompt optimization techniques and adversarial attack methods, leading to acceleration of $1.8\times$ for AutoPrompt, $2.4\times$ for APE and $2.4\times$ for AutoDAN.", "pdf": "https://openreview.net/pdf/c8b4a1521c3825d5fc77d1bc75f534885da21586.pdf"} {"title": "Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers", "url": "https://openreview.net/forum?id=EXuv4tVNa3", "detail_url": "https://openreview.net/forum?id=EXuv4tVNa3", "authors": "Chau Pham,Bryan A. Plummer", "tags": "NIPS 2024,Poster", "abstract": "Multi-Channel Imaging (MCI) poses an array of challenges for encoding useful feature representations not present in traditional images. For example, images from two different satellites may both contain RGB channels, but the remaining channels can be different for each imaging source. Thus, MCI models must support a variety of channel configurations at test time. Recent work has extended traditional visual encoders for MCI, such as Vision Transformers (ViT), by supplementing pixel information with an encoding representing the channel configuration. However, these methods treat each channel equally, i.e., they do not consider the unique properties of each channel type, which can result in needless and potentially harmful redundancies in the learned features. For example, if RGB channels are always present, the other channels can focus on extracting information that cannot be captured by the RGB channels. To this end, we propose DiChaViT, which aims to enhance the diversity in the learned features of MCI-ViT models. This is achieved through a novel channel sampling strategy that encourages the selection of more distinct channel sets for training. Additionally, we employ regularization and initialization techniques to increase the likelihood that new information is learned from each channel.
Many of our improvements are architecture-agnostic and can be incorporated into new architectures as they are developed. Experiments on both satellite and cell microscopy datasets (CHAMMI, JUMP-CP, and So2Sat) show that DiChaViT yields a 1.5-5.0% gain over the state of the art. Our code is publicly available at https://github.com/chaudatascience/diverse_channel_vit.", "pdf": "https://openreview.net/pdf/19191cda99db12be6bc8912fc1698da138cab1c6.pdf"} {"title": "SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion", "url": "https://openreview.net/forum?id=89AUi5L1uA", "detail_url": "https://openreview.net/forum?id=89AUi5L1uA", "authors": "Lu Han,Xu-Yang Chen,Han-Jia Ye,De-Chuan Zhan", "tags": "NIPS 2024,Poster", "abstract": "Multivariate time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare. Recent studies have highlighted the advantages of channel independence to resist distribution drift but neglect channel correlations, limiting further enhancements. Several methods utilize mechanisms like attention or mixers to address this by capturing channel correlations, but they either introduce excessive complexity or rely too heavily on the correlation to achieve satisfactory results under distribution drifts, particularly with a large number of channels. Addressing this gap, this paper presents an efficient MLP-based model, the Series-cOre Fused Time Series forecaster (SOFTS), which incorporates a novel STar Aggregate-Redistribute (STAR) module. Unlike traditional approaches that manage channel interactions through distributed structures, \textit{e.g.}, attention, STAR employs a centralized strategy to improve efficiency and reduce reliance on the quality of each channel. It aggregates all series to form a global core representation, which is then dispatched and fused with individual series representations to facilitate channel interactions effectively. SOFTS achieves superior performance over existing state-of-the-art methods with only linear complexity. The broad applicability of the STAR module across different forecasting models is also demonstrated empirically. We have made our code publicly available at https://github.com/Secilia-Cxy/SOFTS.", "pdf": "https://openreview.net/pdf/c8f5e1f12b1143b1e273394867caf779b33c0a82.pdf"} {"title": "SEEV: Synthesis with Efficient Exact Verification for ReLU Neural Barrier Functions", "url": "https://openreview.net/forum?id=nWMqQHzI3W", "detail_url": "https://openreview.net/forum?id=nWMqQHzI3W", "authors": "Hongchao Zhang,Zhizhen Qin,Sicun Gao,Andrew Clark", "tags": "NIPS 2024,Poster", "abstract": "Neural Control Barrier Functions (NCBFs) have shown significant promise in enforcing safety constraints on nonlinear autonomous systems. State-of-the-art exact approaches to verifying safety of NCBF-based controllers exploit the piecewise-linear structure of ReLU neural networks; however, such approaches still rely on enumerating all of the activation regions of the network near the safety boundary, thus incurring high computation cost. In this paper, we propose a framework for Synthesis with Efficient Exact Verification (SEEV). Our framework consists of two components, namely (i) an NCBF synthesis algorithm that introduces a novel regularizer to reduce the number of activation regions at the safety boundary, and (ii) a verification algorithm that exploits tight over-approximations of the safety conditions to reduce the cost of verifying each piecewise-linear segment.
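A schematic of a STAR-style aggregate-redistribute block as described in the SOFTS record above; the layer sizes and mean pooling are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class STARSketch(nn.Module):
    """Schematic STar Aggregate-Redistribute block.

    All series embeddings are pooled into one global "core", which is
    then redistributed, concatenated with each series, and fused by a
    linear layer, giving centralized channel interaction.
    """
    def __init__(self, d_series, d_core):
        super().__init__()
        self.encode = nn.Linear(d_series, d_core)
        self.fuse = nn.Linear(d_series + d_core, d_series)

    def forward(self, x):                                # x: (batch, channels, d_series)
        core = self.encode(x).mean(dim=1, keepdim=True)  # aggregate: (batch, 1, d_core)
        core = core.expand(-1, x.size(1), -1)            # redistribute to every channel
        return self.fuse(torch.cat([x, core], dim=-1))   # fuse: (batch, channels, d_series)
```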
Our simulations show that SEEV significantly improves verification efficiency while maintaining the CBF quality across various benchmark systems and neural network structures. Our code is available at https://github.com/HongchaoZhang-HZ/SEEV.", "pdf": "https://openreview.net/pdf/8c8be656daa65c9db0d7eaaf0f5e2cbcf3137202.pdf"} {"title": "Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees", "url": "https://openreview.net/forum?id=ZIpdu0cHYu", "detail_url": "https://openreview.net/forum?id=ZIpdu0cHYu", "authors": "Sijia Chen,Yibo Wang,Yi-Feng Wu,Qing-Guo Chen,Zhao Xu,Weihua Luo,Kaifu Zhang,Lijun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Tool-augmented large language models (LLMs) leverage tools, often in the form of APIs, to improve their reasoning capabilities on complex tasks. This enables them to act as intelligent agents interacting with the real world. The recently introduced ToolLLaMA model by Qin et al. [2023] utilizes the depth-first search-based decision tree (DFSDT) mechanism for multi-step reasoning with $16000+$ real-world APIs, effectively enhancing the performance of tool-augmented LLMs compared to traditional chain reasoning mechanisms. However, their approach only employs successful paths from decision trees (also called inference trees) for supervised fine-tuning (SFT), missing out on the potential learning opportunities from failed paths. Inspired by this, we propose an inference trajectory optimization framework based on preference learning to address this limitation. We first introduce a novel method for constructing preference data from tree-like expert trajectories, which leverages the previously ignored failed explorations in the decision trees. Specifically, we generate a step-wise preference dataset, ToolPreference, from the ToolBench dataset for tool learning. In the subsequent training phase, we first fine-tune the LLM with successful tool-usage expert trajectories and then apply direct preference optimization (DPO) with ToolPreference to update the LLM's policy, resulting in our ToolPrefer-LLaMA (TP-LLaMA) model. This approach not only enhances the utilization of original expert data but also broadens the learning space of the model. Our experiments demonstrate that by obtaining insights from errors in inference trees, TP-LLaMA significantly outperforms the baselines across almost all test scenarios by a large margin and exhibits better generalization capabilities with unseen APIs. At the same time, TP-LLaMA has also demonstrated superior reasoning efficiency compared to the baselines, making it more suitable for complex tool-usage reasoning tasks.", "pdf": "https://openreview.net/pdf/74ee6f313ee1667abf207c714f9e3e241341d853.pdf"} {"title": "A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints", "url": "https://openreview.net/forum?id=uZi7H5Ac0X", "detail_url": "https://openreview.net/forum?id=uZi7H5Ac0X", "authors": "Liuyuan Jiang,Quan Xiao,Victor M. Tenorio,Fernando Real-Rojas,Antonio Marques,Tianyi Chen", "tags": "NIPS 2024,Poster", "abstract": "Interest in bilevel optimization has grown in recent years, partially due to its relevance for challenging machine-learning problems. Several exciting recent works have been centered around developing efficient gradient-based algorithms that can solve bilevel optimization problems with provable guarantees. 
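A minimal sketch of mining step-wise preference pairs from an inference tree, in the spirit of the ToolPreference construction in the record above; the node fields used here (.children, .on_success_path, .action, .prefix) are hypothetical, not the ToolBench schema:

```python
def tree_to_preference_pairs(node):
    """Harvest step-wise (chosen, rejected) pairs from a decision tree.

    Whenever a node has both a child on a successful path and a sibling
    child that failed, the shared prefix plus the successful step becomes
    "chosen" and the failed step becomes "rejected".
    """
    pairs = []
    good = [c for c in node.children if c.on_success_path]
    bad = [c for c in node.children if not c.on_success_path]
    for g in good:
        for b in bad:
            pairs.append({"prompt": node.prefix, "chosen": g.action, "rejected": b.action})
    for child in node.children:                 # recurse into the whole tree
        pairs.extend(tree_to_preference_pairs(child))
    return pairs
```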
However, the existing literature mainly focuses on bilevel problems either without constraints, or featuring only simple constraints that do not couple variables across the upper and lower levels, excluding a range of complex applications. Our paper studies this challenging but less explored scenario and develops a (fully) first-order algorithm, which we term BLOCC, to tackle BiLevel Optimization problems with Coupled Constraints. We establish rigorous convergence theory for the proposed algorithm and demonstrate its effectiveness on two well-known real-world applications: support vector machine (SVM)-based model training and infrastructure planning in transportation networks.", "pdf": "https://openreview.net/pdf/13a0f27075bedab8b79d901ed72ef74c635ac09c.pdf"} {"title": "CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework", "url": "https://openreview.net/forum?id=v6W55lCkhN", "detail_url": "https://openreview.net/forum?id=v6W55lCkhN", "authors": "Yiyang Zhao,Yunzhuo Liu,Bo Jiang,Tian Guo", "tags": "NIPS 2024,Poster", "abstract": "This work presents a novel approach to neural architecture search (NAS) that aims to increase carbon efficiency for the model design process. The proposed framework CE-NAS addresses the key challenge of the high carbon cost associated with NAS by exploring the carbon emission variations of energy and energy differences of different NAS algorithms. At a high level, CE-NAS leverages a reinforcement-learning agent to dynamically adjust GPU resources based on carbon intensity, predicted by a time-series transformer, to balance energy-efficient sampling and energy-intensive evaluation tasks. Furthermore, CE-NAS leverages a recently proposed multi-objective optimizer to effectively reduce the NAS search space. We demonstrate the efficacy of CE-NAS in lowering carbon emissions while achieving SOTA results for both NAS datasets and open-domain NAS tasks. For example, on the HW-NasBench dataset, CE-NAS reduces carbon emissions by up to 7.22X while maintaining a search efficiency comparable to vanilla NAS. For open-domain NAS tasks, CE-NAS achieves SOTA results with 97.35% top-1 accuracy on CIFAR-10 with only 1.68M parameters and a carbon consumption of 38.53 lbs of CO2. On ImageNet, our searched model achieves 80.6% top-1 accuracy with a 0.78 ms TensorRT latency using FP16 on NVIDIA V100, consuming only 909.86 lbs of CO2, making it comparable to other one-shot-based NAS baselines. Our code is available at https://github.com/cake-lab/CE-NAS.", "pdf": "https://openreview.net/pdf/1e1daf62c7b574a8a94781af5ea3ed13da72701b.pdf"} {"title": "Fairness-Aware Estimation of Graphical Models", "url": "https://openreview.net/forum?id=WvWS8goWyR", "detail_url": "https://openreview.net/forum?id=WvWS8goWyR", "authors": "Zhuoping Zhou,Davoud Ataee Tarzanagh,Bojian Hou,Qi Long,Li Shen", "tags": "NIPS 2024,Poster", "abstract": "This paper examines the issue of fairness in the estimation of graphical models (GMs), particularly Gaussian, Covariance, and Ising models. These models play a vital role in understanding complex relationships in high-dimensional data. However, standard GMs can result in biased outcomes, especially when the underlying data involves sensitive characteristics or protected groups. To address this, we introduce a comprehensive framework designed to reduce bias in the estimation of GMs related to protected attributes.
Our approach involves the integration of the pairwise graph disparity error and a tailored loss function into a nonsmooth multi-objective optimization problem, striving to achieve fairness across different sensitive groups while maintaining the effectiveness of the GMs. Experimental evaluations on synthetic and real-world datasets demonstrate that our framework effectively mitigates bias without undermining GMs' performance.", "pdf": "https://openreview.net/pdf/3cbfdb839c78a76a277d4d32e573fc2186d4fc53.pdf"} {"title": "Toward Efficient Inference for Mixture of Experts", "url": "https://openreview.net/forum?id=stXtBqyTWX", "detail_url": "https://openreview.net/forum?id=stXtBqyTWX", "authors": "Haiyang Huang,Newsha Ardalani,Anna Sun,Liu Ke,Shruti Bhosale,Hsien-Hsin S. Lee,Carole-Jean Wu,Benjamin Lee", "tags": "NIPS 2024,Poster", "abstract": "Mixture-of-Experts (MoE) models have recently gained traction, achieving state-of-the-art performance in a wide range of tasks in computer vision and natural language processing. They effectively expand the model capacity while incurring a minimal increase in computation cost during training. However, deploying such models for inference is difficult due to their large model size and complex communication pattern. In this work, we provide a characterization of two MoE workloads, namely Language Modeling (LM) and Machine Translation (MT), and identify their sources of inefficiencies at deployment. We propose three optimization techniques to mitigate sources of inefficiencies, namely (1) Dynamic gating, (2) Expert Buffering, and (3) Expert load balancing. We show that dynamic gating improves maximum throughput by 6.21-11.55$\times$ for LM, 5.75-10.98$\times$ for MT Encoder and 2.58-5.71$\times$ for MT Decoder.\nIt also reduces memory usage by up to 1.36$\times$ for LM and up to 1.1$\times$ for MT. We further propose Expert Buffering, a new caching mechanism that only keeps hot, active experts in GPU memory while buffering the rest in CPU memory. This reduces static memory allocation by 1.47$\times$. Finally, we propose a load balancing methodology that provides additional robustness to the workload. Our code is available at https://github.com/hyhuang00/moe_inference.", "pdf": "https://openreview.net/pdf/b9888255233cbfec88dd7c0bc9b48c48b33bf0ec.pdf"} {"title": "KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization", "url": "https://openreview.net/forum?id=pNnvzQsS4P", "detail_url": "https://openreview.net/forum?id=pNnvzQsS4P", "authors": "Tianyi Zhang,Jonah Wonkyu Yi,Zhaozhuo Xu,Anshumali Shrivastava", "tags": "NIPS 2024,Poster", "abstract": "Efficient deployment of Large Language Models (LLMs) requires batching multiple requests together to improve throughput. As batch size, context length, or model size increases, the size of the key and value (KV) cache quickly becomes the main contributor to GPU memory usage and the bottleneck of inference latency and throughput. Quantization has emerged as an effective technique for KV cache compression, but existing methods still fail at very low bit widths. Currently, KV cache quantization is performed per-channel or per-token independently. Our analysis shows that distinct channels of a key/value activation embedding are highly interdependent, and the joint entropy of multiple channels grows at a slower rate than the sum of their marginal entropies, which implies that per-channel independent quantization is sub-optimal.
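A hedged sketch of the expert-buffering idea from the MoE-inference record above, with a simple LRU policy standing in for the paper's exact caching mechanism; `experts` is assumed to map expert ids to torch modules:

```python
class ExpertBuffer:
    """Keep hot experts on GPU, buffer cold ones in CPU memory (schematic).

    Only up to `gpu_capacity` experts are resident on the GPU at once;
    a least-recently-used expert is evicted back to CPU memory when a
    cold expert must be brought in.
    """
    def __init__(self, experts, gpu_capacity, device="cuda"):
        self.experts = experts          # dict: expert_id -> nn.Module (initially on CPU)
        self.capacity = gpu_capacity
        self.device = device
        self.resident = []              # expert ids currently on GPU, oldest first

    def fetch(self, expert_id):
        if expert_id in self.resident:
            self.resident.remove(expert_id)          # refresh LRU position
        else:
            if len(self.resident) >= self.capacity:  # evict the coldest expert
                cold = self.resident.pop(0)
                self.experts[cold].to("cpu")
            self.experts[expert_id].to(self.device)
        self.resident.append(expert_id)
        return self.experts[expert_id]
```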
To mitigate this sub-optimality, we propose Coupled Quantization (CQ), which couples multiple key/value channels together for quantization to exploit their interdependence and encode the activations in a more information-efficient manner. Extensive experiments reveal that CQ compares favorably with existing baselines in preserving model quality, and improves inference throughput by 1.4-3.5$\times$ relative to the uncompressed baseline. Furthermore, we demonstrate that CQ can preserve model quality reasonably well with the KV cache quantized down to 1 bit.", "pdf": "https://openreview.net/pdf/cc83819e3c2ee5e47a2a7f0f28eb98ada7deb1ce.pdf"} {"title": "WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks", "url": "https://openreview.net/forum?id=J6NByZlLNj", "detail_url": "https://openreview.net/forum?id=J6NByZlLNj", "authors": "Jun Xia,Zhihao Yue,Yingbo Zhou,Zhiwei Ling,Yiyu Shi,Xian Wei,Mingsong Chen", "tags": "NIPS 2024,Poster", "abstract": "Due to the increasing popularity of Artificial Intelligence (AI), more and more backdoor attacks are designed to mislead Deep Neural Network (DNN) predictions by manipulating training samples or processes. Although backdoor attacks have been investigated in various scenarios, they still suffer from the problems of both low fidelity of poisoned samples and non-negligible transfer in latent space, which make them easily identifiable by existing backdoor detection algorithms. To overcome this weakness, this paper proposes a novel frequency-based backdoor attack method named WaveAttack, which obtains high-frequency image features through Discrete Wavelet Transform (DWT) to generate highly stealthy backdoor triggers. By introducing an asymmetric frequency obfuscation method, our approach adds an adaptive residual to the training and inference stages to improve the impact of triggers, thus further enhancing the effectiveness of WaveAttack. Comprehensive experimental results show that WaveAttack can not only achieve higher effectiveness than state-of-the-art backdoor attack methods, but also outperform them in the fidelity of images (i.e., by up to 28.27\% improvement in PSNR, 1.61\% improvement in SSIM, and 70.59\% reduction in IS). Our code is available at https://github.com/BililiCode/WaveAttack.", "pdf": "https://openreview.net/pdf/b8863e81ef74693919a2a6ff884da8764bc43f8b.pdf"} {"title": "Fully Explicit Dynamic Gaussian Splatting", "url": "https://openreview.net/forum?id=g8pyTkxyIV", "detail_url": "https://openreview.net/forum?id=g8pyTkxyIV", "authors": "Junoh Lee,Changyeon Won,Hyunjun Jung,Inhwan Bae,Hae-Gon Jeon", "tags": "NIPS 2024,Poster", "abstract": "3D Gaussian Splatting has shown fast and high-quality rendering results in static scenes by leveraging dense 3D prior and explicit representations. Unfortunately, the benefits of the prior and representation do not extend to novel view synthesis for dynamic motions. Ironically, this is because the main barrier is the reliance on them, which requires increasing training and rendering times to account for dynamic motions. \nIn this paper, we design Explicit 4D Gaussian Splatting (Ex4DGS).\nOur key idea is to first separate static and dynamic Gaussians during training, and to explicitly sample positions and rotations of the dynamic Gaussians at sparse timestamps. The sampled positions and rotations are then interpolated to represent both spatially and temporally continuous motions of objects in dynamic scenes as well as reducing computational cost.
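A minimal sketch of coupled (grouped) channel quantization in the spirit of the CQ record above; the group size, one bit per channel, and the tiny k-means codebook fit are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def coupled_quantize(kv, group=4, bits=1, iters=10, seed=0):
    """Jointly quantize coupled channel groups of a KV activation (schematic).

    Channels are grouped, and each group of values is encoded with one
    shared codebook learned by plain k-means, exploiting inter-channel
    dependence. `kv` is (T, C) with C divisible by `group`; `bits` is
    the budget per channel, so each group gets a 2**(bits*group) codebook.
    """
    rng = np.random.default_rng(seed)
    T, C = kv.shape
    blocks = kv.reshape(T * (C // group), group)     # each row: one coupled group
    K = 2 ** (bits * group)                          # codebook size for a group
    centers = blocks[rng.choice(len(blocks), size=K, replace=False)]
    for _ in range(iters):                           # plain k-means refinement
        d2 = ((blocks[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d2.argmin(1)
        for k in np.unique(assign):
            centers[k] = blocks[assign == k].mean(0)
    codes = ((blocks[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    return codes.reshape(T, C // group), centers
```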
\nAdditionally, we introduce a progressive training scheme and a point-backtracking technique that improves Ex4DGS's convergence. We initially train Ex4DGS using short timestamps and progressively extend timestamps, which makes it work well with only a few point clouds. Point-backtracking is used to quantify the cumulative error of each Gaussian over time, enabling the detection and removal of erroneous Gaussians in dynamic scenes. Comprehensive experiments on various scenes demonstrate the state-of-the-art rendering quality of our method, achieving fast rendering of 62 fps on a single 2080Ti GPU.", "pdf": "https://openreview.net/pdf/0381a18f5cdf57d1b8cc805a21ced8ccfa4a6239.pdf"} {"title": "Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling", "url": "https://openreview.net/forum?id=iWlqbNE8P7", "detail_url": "https://openreview.net/forum?id=iWlqbNE8P7", "authors": "Zijie Huang,Wanjia Zhao,Jingdong Gao,Ziniu Hu,Xiao Luo,Yadi Cao,Yuanzhou Chen,Yizhou Sun,Wei Wang", "tags": "NIPS 2024,Poster", "abstract": "Learning complex physical dynamics purely from data is challenging due to the intrinsic properties of systems to be satisfied. Incorporating physics-informed priors, such as in Hamiltonian Neural Networks (HNNs), achieves high-precision modeling for energy-conservative systems. However, real-world systems often deviate from strict energy conservation and follow different physical priors. To address this, we present a framework that achieves high-precision modeling for a wide range of dynamical systems from the numerical aspect, by enforcing Time-Reversal Symmetry (TRS) via a novel regularization term. It helps preserve energies for conservative systems while serving as a strong inductive bias for non-conservative, reversible systems. While TRS is a domain-specific physical prior, we present the first theoretical proof that the TRS loss can universally improve modeling accuracy by minimizing higher-order Taylor terms in ODE integration, which is numerically beneficial to various systems regardless of their properties, even for irreversible systems. By integrating the TRS loss within neural ordinary differential equation models, the proposed model TREAT demonstrates superior performance on diverse physical systems. It achieves a significant 11.5% MSE improvement in a challenging chaotic triple-pendulum scenario, underscoring TREAT's broad applicability and effectiveness.", "pdf": "https://openreview.net/pdf/5dc1a3884cb257f2b8d5cacac17a2f7d915c8408.pdf"} {"title": "Adaptive Sampling for Efficient Softmax Approximation", "url": "https://openreview.net/forum?id=XsNA2b8GPz", "detail_url": "https://openreview.net/forum?id=XsNA2b8GPz", "authors": "Tavor Baharav,Ryan Kang,Colin Sullivan,Mo Tiwari,Eric Sager Luxenberg,David Tse,Mert Pilanci", "tags": "NIPS 2024,Poster", "abstract": "The softmax function is ubiquitous in machine learning and optimization applications. Computing the full softmax evaluation of a matrix-vector product can be computationally expensive in high-dimensional settings. In many applications, however, it is sufficient to calculate only the top few outputs of the softmax function. In this work, we present an algorithm, dubbed AdaptiveSoftmax, that adaptively computes the top k softmax values more efficiently than the full softmax computation, with probabilistic guarantees. We demonstrate the sample efficiency improvements afforded by AdaptiveSoftmax on real and synthetic data to corroborate our theoretical results.
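A hedged sketch of a time-reversal-symmetry regularizer in the spirit of the TREAT record above; plain Euler integration stands in for the paper's neural-ODE solvers, and `f` is any learned vector field:

```python
import torch

def trs_loss(f, x0, t_grid):
    """Time-Reversal Symmetry regularizer for a learned ODE (schematic).

    Roll the learned dynamics f forward along t_grid, then integrate
    backward in time from the final state and penalize the mismatch
    with the forward trajectory.
    """
    xs = [x0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):          # forward-in-time pass
        xs.append(xs[-1] + (t1 - t0) * f(xs[-1]))
    x_back = xs[-1]
    loss = 0.0
    for i in range(len(t_grid) - 2, -1, -1):             # backward-in-time pass
        dt = t_grid[i + 1] - t_grid[i]
        x_back = x_back - dt * f(x_back)                 # reverse Euler step
        loss = loss + ((x_back - xs[i]) ** 2).mean()     # mismatch with forward states
    return loss / (len(t_grid) - 1)
```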
AdaptiveSoftmax yields a >10x gain over full softmax computation on most datasets, with up to a 30x improvement for Mistral7B evaluated on the Wikitext dataset. The adaptive method we propose for estimating the partition function (the softmax denominator) is of independent interest and can be used in other applications such as kernel density estimation.", "pdf": "https://openreview.net/pdf/e188b661e6b0a37452f6813bf9348a9472d23a63.pdf"} {"title": "MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering", "url": "https://openreview.net/forum?id=yppcLFeZgy", "detail_url": "https://openreview.net/forum?id=yppcLFeZgy", "authors": "YIZHEN LUO,Zikun Nie,Massimo Hong,Suyuan Zhao,Hao Zhou,Zaiqing Nie", "tags": "NIPS 2024,Poster", "abstract": "Studying protein mutations within amino acid sequences holds tremendous significance in life sciences. Protein language models (PLMs) have demonstrated strong capabilities in broad biological applications. However, due to architectural design and lack of supervision, PLMs model mutations implicitly with evolutionary plausibility, which is not sufficient for them to serve as explainable and engineerable tools in real-world studies. To address these issues, we present MutaPLM, a unified framework for interpreting and navigating protein mutations with protein language models. MutaPLM introduces a protein *delta* network that captures explicit protein mutation representations within a unified feature space, and a transfer learning pipeline with a chain-of-thought (CoT) strategy to harvest protein mutation knowledge from biomedical texts. We also construct MutaDescribe, the first large-scale protein mutation dataset with rich textual annotations, which provides cross-modal supervision signals. Through comprehensive experiments, we demonstrate that MutaPLM excels at providing human-understandable explanations for mutational effects and prioritizing novel mutations with desirable properties. Our code, model, and data are open-sourced at https://github.com/PharMolix/MutaPLM.", "pdf": "https://openreview.net/pdf/6ba89a23eb0008a9e5fa6007a9fcb9c765216d9f.pdf"} {"title": "Cascade Speculative Drafting for Even Faster LLM Inference", "url": "https://openreview.net/forum?id=lZY9u0ijP7", "detail_url": "https://openreview.net/forum?id=lZY9u0ijP7", "authors": "Ziyi Chen,Xiaocong Yang,Jiacheng Lin,Chenkai Sun,Kevin Chang,Jie Huang", "tags": "NIPS 2024,Poster", "abstract": "Introduced to enhance the efficiency of large language model (LLM) inference, speculative decoding operates by having a smaller model generate a draft. A larger target model then reviews this draft to align with its output, and any acceptance by the target model results in a reduction in the number of target model runs, ultimately improving efficiency. However, the drafting process in speculative decoding includes slow autoregressive generation and allocates equal time to generating tokens, irrespective of their importance. These inefficiencies collectively contribute to the suboptimal performance of speculative decoding. To further improve LLM inference, we introduce Cascade Speculative Drafting (CS Drafting), a speculative execution algorithm that incorporates two types of cascades. The *Vertical Cascade* eliminates autoregressive generation from neural models, while the *Horizontal Cascade* optimizes time allocation in drafting for improved efficiency.
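To convey the flavor of the adaptive top-k softmax estimation in the AdaptiveSoftmax abstract above (the paper's actual algorithm carries probabilistic guarantees this sketch does not), a crude two-stage scheme estimates every logit from a few sampled coordinates and spends the remaining budget on exact logits for a shortlist:

```python
# Sketch: approximating the top-k of softmax(A @ q) without the full
# matrix-vector product. The fixed two-stage budget split is an assumption.
import numpy as np

rng = np.random.default_rng(0)
A, q = rng.standard_normal((5000, 512)), rng.standard_normal(512)

def adaptive_topk(A, q, k=5, probe=32, shortlist=50):
    n, d = A.shape
    idx = rng.choice(d, size=probe, replace=False)      # cheap coordinate sample
    rough = A[:, idx] @ q[idx] * (d / probe)            # unbiased logit estimates
    cand = np.argpartition(rough, -shortlist)[-shortlist:]
    exact = A[cand] @ q                                 # refine only the shortlist
    top = cand[np.argsort(exact)[-k:]]
    # Partition function: exact over the shortlist, estimated over the rest.
    rest = np.setdiff1d(np.arange(n), cand)
    Z = np.exp(exact).sum() + np.exp(rough[rest]).sum()
    return top, np.exp(A[top] @ q) / Z

print(adaptive_topk(A, q)[0])
print(np.argsort(A @ q)[-5:])   # compare against the exact top-5
```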
Combining both cascades, CS Drafting achieves greater speedup compared to the baselines in our experiments, while preserving the same output distribution as the target model. Our code is publicly available at https://github.com/lfsszd/CS-Drafting.", "pdf": "https://openreview.net/pdf/7d6cf3bf7fac4f5e70a8ef98098a47f541169c34.pdf"} {"title": "Quantum Deep Equilibrium Models", "url": "https://openreview.net/forum?id=CWhwKb0Q4k", "detail_url": "https://openreview.net/forum?id=CWhwKb0Q4k", "authors": "Philipp Schleich,Marta Skreta,Lasse Bj\u00f8rn Kristensen,Rodrigo Vargas-Hernandez,Alan Aspuru-Guzik", "tags": "NIPS 2024,Poster", "abstract": "The feasibility of variational quantum algorithms, the most popular counterpart of neural networks on noisy, near-term quantum hardware, is highly impacted by the circuit depth of the involved parametrized quantum circuits (PQCs). Higher depth increases expressivity, but also results in a detrimental accumulation of errors. Furthermore, the number of parameters involved in the PQC significantly influences the performance through the necessary number of measurements to evaluate gradients, which scales linearly with the number of parameters.\n Motivated by this, we look at deep equilibrium models (DEQs), which mimic an infinite-depth, weight-tied network using a fraction of the memory by employing a root solver to find the fixed points of the network. In this work, we present Quantum Deep Equilibrium Models (QDEQs): a training paradigm that learns parameters of a quantum machine learning model given by a PQC using DEQs. To our knowledge, no work has yet explored the application of DEQs to QML models. We apply QDEQs to find the parameters of a quantum circuit in two settings: the first involves classifying MNIST-4 digits with 4 qubits; the second extends it to 10 classes of MNIST, FashionMNIST and CIFAR. We find that QDEQ is not only competitive with comparable existing baseline models, but also achieves higher performance than a network with 5 times more layers. This demonstrates that the QDEQ paradigm can be used to develop significantly shallower quantum circuits for a given task, something which is essential for the utility of near-term quantum computers. \n Our code is available at \\url{https://github.com/martaskrt/qdeq}.", "pdf": "https://openreview.net/pdf/ee5a435e0f6ba173f8dfa3a37a2e0d0fbba4d37d.pdf"} {"title": "GraphMorph: Tubular Structure Extraction by Morphing Predicted Graphs", "url": "https://openreview.net/forum?id=hW5QWiCctl", "detail_url": "https://openreview.net/forum?id=hW5QWiCctl", "authors": "Zhao Zhang,Ziwei Zhao,Dong Wang,Liwei Wang", "tags": "NIPS 2024,Poster", "abstract": "Accurately restoring topology is both challenging and crucial in tubular structure extraction tasks, such as blood vessel segmentation and road network extraction. Diverging from traditional approaches based on pixel-level classification, our proposed method, named GraphMorph, focuses on branch-level features of tubular structures to achieve more topologically accurate predictions. GraphMorph comprises two main components: a Graph Decoder and a Morph Module. Utilizing multi-scale features extracted from an image patch by the segmentation network, the Graph Decoder facilitates the learning of branch-level features and generates a graph that accurately represents the tubular structure in this patch.
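In the spirit of the speculative decoding loop that the CS Drafting abstract above builds on, here is a bare-bones draft-then-verify step; `draft_model` and `target_model` are hypothetical stand-ins returning greedy next tokens, and greedy matching replaces the usual rejection-sampling acceptance rule.

```python
# Sketch: one draft-then-verify round of speculative decoding with greedy
# acceptance. A real implementation scores all drafted positions in a single
# batched target pass; the per-position loop here is only for clarity.
def speculative_step(prefix, draft_model, target_model, gamma=4):
    draft = list(prefix)
    for _ in range(gamma):
        draft.append(draft_model(draft))     # cheap autoregressive drafting
    verified = list(prefix)
    for pos in range(len(prefix), len(draft)):
        tok = target_model(draft[:pos])      # target's greedy next token
        verified.append(tok)
        if tok != draft[pos]:                # first mismatch ends the round
            break
    return verified

drafter = lambda seq: seq[-1]                   # toy drafter: repeat last token
target = lambda seq: seq[-1] + (len(seq) % 2)   # toy target dynamics
print(speculative_step([1, 2, 3], drafter, target))
```

In this picture, the Vertical Cascade would replace `draft_model` itself with progressively cheaper drafters (down to a statistical model), while the Horizontal Cascade would vary `gamma` according to token importance.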
The Morph Module processes two primary inputs: the graph and the centerline probability map, provided by the Graph Decoder and the segmentation network, respectively. Employing a novel SkeletonDijkstra algorithm, the Morph Module produces a centerline mask that aligns with the predicted graph. Furthermore, we observe that employing centerline masks predicted by GraphMorph significantly reduces false positives in the segmentation task, which is achieved by a simple yet effective post-processing strategy. The efficacy of our method in the centerline extraction and segmentation tasks has been substantiated through experimental evaluations across various datasets. Source code will be released soon.", "pdf": "https://openreview.net/pdf/1c697327e7b2306dcb3fa000f7f7712661179b29.pdf"} {"title": "Rapid Plug-in Defenders", "url": "https://openreview.net/forum?id=UMPedMhKWm", "detail_url": "https://openreview.net/forum?id=UMPedMhKWm", "authors": "Kai Wu,Yujian Betterest Li,Jian Lou,Xiaoyu Zhang,Handing Wang,Jing Liu", "tags": "NIPS 2024,Poster", "abstract": "In the realm of daily services, the deployment of deep neural networks underscores the paramount importance of their reliability. However, the vulnerability of these networks to adversarial attacks, primarily evasion-based, poses a concerning threat to their functionality. Common methods for enhancing robustness involve heavy adversarial training or leveraging learned knowledge from clean data, both necessitating substantial computational resources. This inherent time-intensive nature severely limits the agility of large foundational models to swiftly counter adversarial perturbations. To address this challenge, this paper focuses on the \\textbf{Ra}pid \\textbf{P}lug-\\textbf{i}n \\textbf{D}efender (\\textbf{RaPiD}) problem, aiming to rapidly counter adversarial perturbations without altering the deployed model. Drawing inspiration from the generalization and the universal computation ability of pre-trained transformer models, we propose a novel method termed \\textbf{CeTaD} (\\textbf{C}onsidering Pr\\textbf{e}-trained \\textbf{T}ransformers \\textbf{a}s \\textbf{D}efenders) for RaPiD, optimized for efficient computation. \\textbf{CeTaD} strategically fine-tunes the normalization layer parameters within the defender using a limited set of clean and adversarial examples. Our evaluation centers on assessing \\textbf{CeTaD}'s effectiveness, transferability, and the impact of different components in scenarios involving one-shot adversarial examples. The proposed method is capable of rapidly adapting to various attacks and different application scenarios without altering the target model and clean training data. We also explore the influence of varying training data conditions on \\textbf{CeTaD}'s performance. 
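The Morph Module's SkeletonDijkstra in the GraphMorph abstract above searches a centerline probability map; as a hedged approximation, a plain grid Dijkstra with cost -log p(pixel) traces a minimum-cost centerline between two predicted graph nodes (the skeleton-aware details are the paper's own).

```python
# Sketch: Dijkstra on a centerline probability map, edge cost = -log p(pixel),
# tracing a centerline between two predicted graph nodes.
import heapq
import numpy as np

def trace_centerline(prob, start, goal, eps=1e-6):
    h, w = prob.shape
    cost = -np.log(np.clip(prob, eps, 1.0))
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                            # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[nr, nc] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [], goal
    while node != start:                        # walk predecessors back
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

prob = np.full((5, 5), 0.05); prob[2, :] = 0.95   # a bright horizontal vessel
print(trace_centerline(prob, (2, 0), (2, 4)))
```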
Notably, \\textbf{CeTaD} exhibits adaptability across differentiable service models and demonstrates the potential of continuous learning.", "pdf": "https://openreview.net/pdf/cc7da26977f10358e4e756a435a638d2ad7405d3.pdf"} {"title": "Ordered Momentum for Asynchronous SGD", "url": "https://openreview.net/forum?id=U2Mx0hSRwA", "detail_url": "https://openreview.net/forum?id=U2Mx0hSRwA", "authors": "Chang-Wei Shi,Yi-Rui Yang,Wu-Jun Li", "tags": "NIPS 2024,Poster", "abstract": "Distributed learning is essential for training large-scale deep models.\nAsynchronous SGD (ASGD) and its variants are commonly used distributed learning methods, particularly in scenarios where the computing capabilities of workers in the cluster are heterogeneous.\nMomentum has been acknowledged for its benefits in both optimization and generalization in deep model training. However, existing works have found that naively incorporating momentum into ASGD can impede convergence.\nIn this paper, we propose a novel method called ordered momentum (OrMo) for ASGD. In OrMo, momentum is incorporated into ASGD by organizing the gradients in order based on their iteration indexes. We theoretically prove the convergence of OrMo with both constant and delay-adaptive learning rates for non-convex problems. To the best of our knowledge, this is the first work to establish the convergence analysis of ASGD with momentum without dependence on the maximum delay. Empirical results demonstrate that OrMo can achieve better convergence performance compared with ASGD and other asynchronous methods with momentum.", "pdf": "https://openreview.net/pdf/70e16903503ce6fa76e9df2a300c8f95295f2509.pdf"} {"title": "Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization", "url": "https://openreview.net/forum?id=eAqcVZx30k", "detail_url": "https://openreview.net/forum?id=eAqcVZx30k", "authors": "Wei Liu,Zhiying Deng,Zhongyu Niu,Jun Wang,Haozhao Wang,YuanKai Zhang,Ruixuan Li", "tags": "NIPS 2024,Poster", "abstract": "An important line of research in the field of explainability is to extract a small subset of crucial rationales from the full input. The most widely used criterion for rationale extraction is the maximum mutual information (MMI) criterion. However, in certain datasets, there are spurious features that are non-causally correlated with the label and also attain high mutual information, complicating the loss landscape of MMI. Although some penalty-based methods have been developed to penalize the spurious features (e.g., invariance penalty, intervention penalty, etc.) to help MMI work better, these are merely remedial measures. \nIn the optimization objectives of these methods, spurious features are still distinguished from plain noise, which hinders the discovery of causal rationales. \nThis paper aims to develop a new criterion that treats spurious features as plain noise, allowing the model to work on datasets rich in spurious features as if it were working on clean datasets, thereby making rationale extraction easier.\nWe theoretically observe that removing either plain noise or spurious features from the input does not alter the conditional distribution of the remaining components relative to the task label. However, significant changes in the conditional distribution occur only when causal features are eliminated.\nBased on this discovery, the paper proposes a criterion for \\textbf{M}aximizing the \\textbf{R}emaining \\textbf{D}iscrepancy (MRD).
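CeTaD, summarized above, fine-tunes only the normalization-layer parameters of a pre-trained transformer used as a plug-in defender. The PyTorch snippet below shows the generic freeze-all-but-LayerNorm pattern this implies; the encoder here is a stand-in, not the paper's defender architecture.

```python
# Sketch: freezing everything except LayerNorm affine parameters, the
# parameter-efficient recipe behind RaPiD-style rapid adaptation.
import torch.nn as nn

def freeze_all_but_norm(model: nn.Module):
    for p in model.parameters():
        p.requires_grad = False                 # freeze the whole network
    for m in model.modules():
        if isinstance(m, nn.LayerNorm):         # re-enable norm affines only
            for p in m.parameters():
                p.requires_grad = True

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
freeze_all_but_norm(encoder)
trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable: {trainable}/{total}")        # a tiny fraction of the weights
```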
Experiments on six widely used datasets show that our MRD criterion improves rationale quality (measured by the overlap with human-annotated rationales) by up to $10.4\\%$ as compared to several recent competitive MMI variants. Code: \\url{https://github.com/jugechengzi/Rationalization-MRD}.", "pdf": "https://openreview.net/pdf/2081235c67059bc2acdc065ef0fcce259f7eb1b5.pdf"} {"title": "Are More LLM Calls All You Need? Towards the Scaling Properties of Compound AI Systems", "url": "https://openreview.net/forum?id=m5106RRLgx", "detail_url": "https://openreview.net/forum?id=m5106RRLgx", "authors": "Lingjiao Chen,Jared Quincy Davis,Boris Hanin,Peter Bailis,Ion Stoica,Matei Zaharia,James Zou", "tags": "NIPS 2024,Poster", "abstract": "Many recent state-of-the-art results in language tasks were achieved using compound systems that perform multiple Language Model (LM) calls and aggregate their responses. However, there is little understanding of how the number of LM calls -- e.g., when asking the LM to answer each question multiple times and taking a majority vote -- affects such a compound system's performance. In this paper, we initiate the study of scaling properties of compound inference systems. We analyze, theoretically and empirically, how the number of LM calls affects the performance of Vote and Filter-Vote, two of the simplest compound system designs, which aggregate LM responses via majority voting, optionally applying LM filters. We find, surprisingly, that across multiple language tasks, the performance of both Vote and Filter-Vote can first increase but then decrease as a function of the number of LM calls. Our theoretical results suggest that this non-monotonicity is due to the diversity of query difficulties within a task: more LM calls lead to higher performance on \"easy\" queries, but lower performance on \"hard\" queries, and non-monotone behavior can emerge when a task contains both types of queries. This insight then allows us to compute, from a small number of samples, the number of LM calls that maximizes system performance, and define an analytical scaling model for both systems. Experiments show that our scaling model can accurately predict the performance of Vote and Filter-Vote systems and thus find the optimal number of LM calls to make.", "pdf": "https://openreview.net/pdf/933389eba2d451a433d83e7d55975efe12b0a17b.pdf"} {"title": "Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient", "url": "https://openreview.net/forum?id=aBmiyi7iA7", "detail_url": "https://openreview.net/forum?id=aBmiyi7iA7", "authors": "Vu C. Dinh,Lam Si Tung Ho,Cuong V. Nguyen", "tags": "NIPS 2024,Poster", "abstract": "We analyze the error rates of the Hamiltonian Monte Carlo algorithm with leapfrog integrator for Bayesian neural network inference. We show that due to the non-differentiability of activation functions in the ReLU family, leapfrog HMC for networks with these activation functions has a large local error rate of $\\Omega(\\epsilon)$ rather than the classical error rate of $\\mathcal{O}(\\epsilon^3)$. This leads to a higher rejection rate of the proposals, making the method inefficient. 
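A minimal Vote system from the compound-AI abstract above: query the LM several times and return the majority answer. `toy_lm` is a hypothetical stand-in for an LM client.

```python
# Sketch: the Vote compound system, majority voting over repeated LM calls.
import random
from collections import Counter

def vote(question, call_lm, n_calls=7):
    answers = [call_lm(question) for _ in range(n_calls)]   # repeated LM calls
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_calls                          # plus agreement rate

random.seed(0)
toy_lm = lambda q: "42" if random.random() < 0.6 else "41"  # 60%-accurate toy LM
print(vote("6 * 7 = ?", toy_lm))
```

The non-monotonicity result has a direct reading here: with p(correct) above 1/2 more calls help, while on hard queries with p(correct) below 1/2 the same vote amplifies errors.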
We then verify our theoretical findings through empirical simulations as well as experiments on a real-world dataset that highlight the inefficiency of HMC inference on ReLU-based neural networks compared to analytical networks.", "pdf": "https://openreview.net/pdf/942c535cbf05e39d929d1238b3c761fba01fa6da.pdf"} {"title": "A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning", "url": "https://openreview.net/forum?id=4OJdZhcwBb", "detail_url": "https://openreview.net/forum?id=4OJdZhcwBb", "authors": "Jacob Adkins,Michael Bowling,Adam White", "tags": "NIPS 2024,Poster", "abstract": "The performance of modern reinforcement learning algorithms critically relies on tuning ever-increasing numbers of hyperparameters. Often, small changes in a hyperparameter can lead to drastic changes in performance, and different environments require very different hyperparameter settings to achieve state-of-the-art performance reported in the literature. We currently lack a scalable and widely accepted approach to characterizing these complex interactions. This work proposes a new empirical methodology for studying, comparing, and quantifying the sensitivity of an algorithm\u2019s performance to hyperparameter tuning for a given set of environments. We then demonstrate the utility of this methodology by assessing the hyperparameter sensitivity of several commonly used normalization variants of PPO. The results suggest that several algorithmic performance improvements may, in fact, be a result of an increased reliance on hyperparameter tuning.", "pdf": "https://openreview.net/pdf/8eff860c8489c171cf2a36abbbeddb2bfad51ac8.pdf"} {"title": "TransVIP: Speech to Speech Translation System with Voice and Isochrony Preservation", "url": "https://openreview.net/forum?id=ZpVTRQVX5b", "detail_url": "https://openreview.net/forum?id=ZpVTRQVX5b", "authors": "Chenyang Le,Yao Qian,Dongmei Wang,Long Zhou,Shujie LIU,Xiaofei Wang,Midia Yousefi,Yanmin Qian,Jinyu Li,Michael Zeng", "tags": "NIPS 2024,Poster", "abstract": "There is a rising interest and trend in research towards directly translating speech from one language to another, known as end-to-end speech-to-speech translation. However, most end-to-end models struggle to outperform cascade models, i.e., a pipeline framework that concatenates speech recognition, machine translation and text-to-speech models. The primary challenges stem from the inherent complexities involved in direct translation tasks and the scarcity of data. In this study, we introduce a novel model framework TransVIP that leverages diverse datasets in a cascade fashion yet facilitates end-to-end inference through joint probability. Furthermore, we propose two separate encoders to preserve the speaker\u2019s voice characteristics and isochrony from the source speech during the translation process, making it highly suitable for scenarios such as video dubbing.
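For reference alongside the HMC result above, the snippet below is the standard leapfrog integrator. With a smooth potential, as in the toy Gaussian check, the discretization error stays small; the paper's point is that a `grad_U` coming from a ReLU network has kinks that degrade the local error from O(eps^3) to Omega(eps).

```python
# Sketch: the leapfrog integrator used by HMC proposals.
import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    q, p = q.copy(), p.copy()
    p -= 0.5 * eps * grad_U(q)              # initial half step for momentum
    for i in range(n_steps):
        q += eps * p                        # full step for position
        if i < n_steps - 1:
            p -= eps * grad_U(q)            # interior full momentum steps
    p -= 0.5 * eps * grad_U(q)              # final half step for momentum
    return q, p

# Toy check with Gaussian potential U(q) = 0.5 * q @ q, so grad_U(q) = q.
H = lambda q, p: 0.5 * (q @ q + p @ p)
q0, p0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
q1, p1 = leapfrog(q0, p0, lambda q: q, eps=0.1, n_steps=10)
print(abs(H(q1, p1) - H(q0, p0)))           # tiny energy error for smooth U
```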
Our experiments on the French-English language pair demonstrate that our model outperforms the current state-of-the-art speech-to-speech translation model.", "pdf": "https://openreview.net/pdf/b8d1936c6491d6c912b49703bfa1f9d232db22ca.pdf"} {"title": "Adaptive Labeling for Efficient Out-of-distribution Model Evaluation", "url": "https://openreview.net/forum?id=uuQQwrjMzb", "detail_url": "https://openreview.net/forum?id=uuQQwrjMzb", "authors": "Daksh Mittal,Yuanzhe Ma,Shalmali Joshi,Hongseok Namkoong", "tags": "NIPS 2024,Poster", "abstract": "Datasets often suffer severe selection bias; clinical labels are only available on patients for whom doctors ordered medical exams. To assess model performance outside the support of available data, we present a computational framework for adaptive labeling, providing cost-efficient model evaluations under severe distribution shifts. We formulate the problem as a Markov Decision Process over states defined by posterior beliefs on model performance. Each batch of new labels incurs a \u201cstate transition\u201d to sharper beliefs, and we choose batches to minimize uncertainty on model performance at the end of the label collection process. Instead of relying on high-variance REINFORCE policy gradient estimators that do not scale, our adaptive labeling policy is optimized using path-wise policy gradients computed by auto-differentiating through simulated roll-outs. Our framework is agnostic to different uncertainty quantification approaches and highlights the virtue of planning in adaptive labeling. On synthetic and real datasets, we empirically demonstrate even a one-step lookahead policy substantially outperforms active learning-inspired heuristics.", "pdf": "https://openreview.net/pdf/4eefea64ae0afe13d30727e1893f662df3e3b799.pdf"} {"title": "NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping", "url": "https://openreview.net/forum?id=y6qhVtFG77", "detail_url": "https://openreview.net/forum?id=y6qhVtFG77", "authors": "Yamin Li,Ange Lou,Ziyuan Xu,SHENGCHAO ZHANG,Shiyu Wang,Dario J. Englot,Soheil Kolouri,Daniel Moyer,Roza G Bayrak,Catie Chang", "tags": "NIPS 2024,Poster", "abstract": "Functional magnetic resonance imaging (fMRI) is an indispensable tool in modern neuroscience, providing a non-invasive window into whole-brain dynamics at millimeter-scale spatial resolution. However, fMRI is constrained by issues such as high operation costs and immobility. With the rapid advancements in cross-modality synthesis and brain decoding, the use of deep neural networks has emerged as a promising solution for inferring whole-brain, high-resolution fMRI features directly from electroencephalography (EEG), a more widely accessible and portable neuroimaging modality. Nonetheless, the complex projection from neural activity to fMRI hemodynamic responses and the spatial ambiguity of EEG pose substantial challenges both in modeling and interpretability. Relatively few studies to date have developed approaches for EEG-fMRI translation, and although they have made significant strides, the inference of fMRI signals in a given study has been limited to a small set of brain areas and to a single condition (i.e., either resting-state or a specific task). The capability to predict fMRI signals in other brain areas, as well as to generalize across conditions, remain critical gaps in the field. 
To tackle these challenges, we introduce a novel and generalizable framework: NeuroBOLT, i.e., Neuro-to-BOLD Transformer, which leverages multi-dimensional representation learning from temporal, spatial, and spectral domains to translate raw EEG data to the corresponding fMRI activity signals across the brain. Our experiments demonstrate that NeuroBOLT effectively reconstructs unseen resting-state fMRI signals from primary sensory, high-level cognitive areas, and deep subcortical brain regions, achieving state-of-the-art accuracy with the potential to generalize across varying conditions and sites, which significantly advances the integration of these two modalities.", "pdf": "https://openreview.net/pdf/7637d729304c503ef5c555a139062365ae9005dc.pdf"} {"title": "Bayesian Adaptive Calibration and Optimal Design", "url": "https://openreview.net/forum?id=m906PS5G9x", "detail_url": "https://openreview.net/forum?id=m906PS5G9x", "authors": "Rafael Oliveira,Dino Sejdinovic,David Howard,Edwin V. Bonilla", "tags": "NIPS 2024,Poster", "abstract": "The process of calibrating computer models of natural phenomena is essential for applications in the physical sciences, where plenty of domain knowledge can be embedded into simulations and then calibrated against real observations. Current machine learning approaches, however, mostly rely on rerunning simulations over a fixed set of designs available in the observed data, potentially neglecting informative correlations across the design space and requiring a large amount of simulations. Instead, we consider the calibration process from the perspective of Bayesian adaptive experimental design and propose a data-efficient algorithm to run maximally informative simulations within a batch-sequential process. At each round, the algorithm jointly estimates the parameters posterior distribution and optimal designs by maximising a variational lower bound of the expected information gain. The simulator is modelled as a sample from a Gaussian process, which allows us to correlate simulations and real data with the unknown calibration parameters. We show the benefits of our method when compared to related approaches across synthetic and real-data problems.", "pdf": "https://openreview.net/pdf/1a3b58749c0c68a59d59059df64282b1a3adc0a4.pdf"} {"title": "FineStyle: Fine-grained Controllable Style Personalization for Text-to-image Models", "url": "https://openreview.net/forum?id=1SmXUGzrH8", "detail_url": "https://openreview.net/forum?id=1SmXUGzrH8", "authors": "Gong Zhang,Kihyuk Sohn,Meera Hahn,Humphrey Shi,Irfan Essa", "tags": "NIPS 2024,Poster", "abstract": "Few-shot fine-tuning of text-to-image (T2I) generation models enables people to create unique images in their own style using natural language without requiring extensive prompt engineering. However, fine-tuning with only a handful of image-text pairs, as few as one, prevents fine-grained control of style attributes at generation time. In this paper, we present FineStyle, a few-shot fine-tuning method that allows enhanced controllability for style-personalized text-to-image generation. To overcome the lack of training data for fine-tuning, we propose a novel concept-oriented data scaling that amplifies the number of image-text pairs, each of which focuses on different concepts (e.g., objects) in the style reference image. We also identify the benefit of parameter-efficient adapter tuning of key and value kernels of cross-attention layers.
Extensive experiments show the effectiveness of FineStyle at following fine-grained text prompts and delivering visual quality faithful to the specified style, measured by CLIP scores and human raters.", "pdf": "https://openreview.net/pdf/75bf1ad3580ab645399dbf37996275aa30130566.pdf"} {"title": "Linking In-context Learning in Transformers to Human Episodic Memory", "url": "https://openreview.net/forum?id=AYDBFxNon4", "detail_url": "https://openreview.net/forum?id=AYDBFxNon4", "authors": "Li Ji-An,Corey Yishan Zhou,Marcus K. Benna,Marcelo G Mattar", "tags": "NIPS 2024,Poster", "abstract": "Understanding connections between artificial and biological intelligent systems can reveal fundamental principles of general intelligence. While many artificial intelligence models have a neuroscience counterpart, such connections are largely missing in Transformer models and the self-attention mechanism. Here, we examine the relationship between interacting attention heads and human episodic memory. We focus on induction heads, which contribute to in-context learning in Transformer-based large language models (LLMs). We demonstrate that induction heads are behaviorally, functionally, and mechanistically similar to the contextual maintenance and retrieval (CMR) model of human episodic memory. Our analyses of LLMs pre-trained on extensive text data show that CMR-like heads often emerge in the intermediate and late layers, qualitatively mirroring human memory biases. The ablation of CMR-like heads suggests their causal role in in-context learning. Our findings uncover a parallel between the computational mechanisms of LLMs and human memory, offering valuable insights into both research fields.", "pdf": "https://openreview.net/pdf/0bd34b9f6d0eaba66fd4ba873e8e84a2bdd91e14.pdf"} {"title": "AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment", "url": "https://openreview.net/forum?id=G0yxFmP87g", "detail_url": "https://openreview.net/forum?id=G0yxFmP87g", "authors": "Yonggan Fu,Zhongzhi Yu,Junwei Li,Jiayi Qian,Yongan Zhang,Xiangchi Yuan,Dachuan Shi,Roman Yakunin,Yingyan Celine Lin", "tags": "NIPS 2024,Poster", "abstract": "Motivated by the transformative capabilities of large language models (LLMs) across various natural language tasks, there has been a growing demand to deploy these models effectively across diverse real-world applications and platforms. However, the challenge of efficiently deploying LLMs has become increasingly pronounced due to the varying application-specific performance requirements and the rapid evolution of computational platforms, which feature diverse resource constraints and deployment flows. These varying requirements necessitate LLMs that can adapt their structures (depth and width) for optimal efficiency across different platforms and application specifications. To address this critical gap, we propose AmoebaLLM, a novel framework designed to enable the instant derivation of LLM subnets of arbitrary shapes, which achieve the accuracy-efficiency frontier and can be extracted immediately after a one-time fine-tuning. In this way, AmoebaLLM significantly facilitates rapid deployment tailored to various platforms and applications. 
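One standard way to operationalize the induction heads discussed in the episodic-memory abstract above is a prefix-matching score: how much attention position i pays to the token immediately following an earlier occurrence of its current token. This diagnostic is a common convention in interpretability work, not the paper's CMR fitting procedure.

```python
# Sketch: prefix-matching score for one head. attn[i, j] is the attention
# weight from query position i to key position j; tokens is the id sequence.
import numpy as np

def prefix_matching_score(attn, tokens):
    n = len(tokens)
    score, count = 0.0, 0
    for i in range(1, n):
        targets = [j + 1 for j in range(i - 1)          # position right after
                   if tokens[j] == tokens[i]]           # an earlier match
        if targets:
            score += attn[i, targets].sum()
            count += 1
    return score / max(count, 1)    # high value => induction-head-like behavior

tokens = [5, 9, 7, 5, 9, 7, 5]      # repeated "5 9 7" pattern
attn = np.full((7, 7), 1e-3)
for i, j in [(3, 1), (4, 2), (5, 3), (6, 4)]:
    attn[i, j] = 0.9                # attends to the token after a prior match
print(prefix_matching_score(attn, tokens))
```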
Specifically, AmoebaLLM integrates three innovative components: (1) a knowledge-preserving subnet selection strategy that features a dynamic-programming approach for depth shrinking and an importance-driven method for width shrinking; (2) a shape-aware mixture of LoRAs to mitigate gradient conflicts among subnets during fine-tuning; and (3) an in-place distillation scheme with loss-magnitude balancing as the fine-tuning objective. Extensive experiments validate that AmoebaLLM not only sets new standards in LLM adaptability but also successfully delivers subnets that achieve state-of-the-art trade-offs between accuracy and efficiency.", "pdf": "https://openreview.net/pdf/6cccf970913f515d37d602734d01d0c947705492.pdf"} {"title": "N-agent Ad Hoc Teamwork", "url": "https://openreview.net/forum?id=q7TxGUWlhD", "detail_url": "https://openreview.net/forum?id=q7TxGUWlhD", "authors": "Caroline Wang,Arrasy Rahman,Ishan Durugkar,Elad Liebman,Peter Stone", "tags": "NIPS 2024,Poster", "abstract": "Current approaches to learning cooperative multi-agent behaviors assume relatively restrictive settings. In standard fully cooperative multi-agent reinforcement learning, the learning algorithm controls *all* agents in the scenario, while in ad hoc teamwork, the learning algorithm usually assumes control over only a *single* agent in the scenario. However, many cooperative settings in the real world are much less restrictive. For example, in an autonomous driving scenario, a company might train its cars with the same learning algorithm, yet once on the road, these cars must cooperate with cars from another company. Towards expanding the class of scenarios that cooperative learning methods may optimally address, we introduce $N$*-agent ad hoc teamwork* (NAHT), where a set of autonomous agents must interact and cooperate with dynamically varying numbers and types of teammates. This paper formalizes the problem, and proposes the *Policy Optimization with Agent Modelling* (POAM) algorithm. POAM is a policy gradient, multi-agent reinforcement learning approach to the NAHT problem, that enables adaptation to diverse teammate behaviors by learning representations of teammate behaviors. Empirical evaluation on tasks from the multi-agent particle environment and StarCraft II shows that POAM improves cooperative task returns compared to baseline approaches, and enables out-of-distribution generalization to unseen teammates.", "pdf": "https://openreview.net/pdf/b2a493d4f38a4116108b0ba02a974d3b686c5421.pdf"} {"title": "Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling", "url": "https://openreview.net/forum?id=BNnZwbZGpm", "detail_url": "https://openreview.net/forum?id=BNnZwbZGpm", "authors": "Junyi Li,Heng Huang", "tags": "NIPS 2024,Poster", "abstract": "Bilevel Optimization has experienced significant advancements recently with the introduction of new efficient algorithms. Mirroring the success in single-level optimization, stochastic gradient-based algorithms are widely used in bilevel optimization. However, a common limitation in these algorithms is the presumption of independent sampling, which can lead to increased computational costs due to the unique hyper-gradient structure in bilevel problems. To address this challenge, we study the example-selection strategy for bilevel optimization in this work. More specifically, we introduce a without-replacement sampling based algorithm which achieves a faster convergence rate compared to its counterparts that rely on independent sampling. 
Beyond the standard bilevel optimization formulation, we extend our discussion to conditional bilevel optimization and also two special cases: minimax and compositional optimization. Finally, we validate our algorithms over both synthetic and real-world applications. Numerical results clearly showcase the superiority of our algorithms.", "pdf": "https://openreview.net/pdf/02a1d8edd1255179e52012fdd5a11e5b2a4e5acc.pdf"} {"title": "FedGTST: Boosting Global Transferability of Federated Models via Statistics Tuning", "url": "https://openreview.net/forum?id=QXkFC7D6p4", "detail_url": "https://openreview.net/forum?id=QXkFC7D6p4", "authors": "Evelyn Ma,Chao Pan,S. Rasoul Etesami,Han Zhao,Olgica Milenkovic", "tags": "NIPS 2024,Poster", "abstract": "The performance of Transfer Learning (TL) significantly depends on effective pretraining, which not only requires extensive amounts of data but also substantial computational resources. As a result, in practice, it is challenging to successfully perform TL at the level of individual model developers. Federated Learning (FL) addresses these challenges by enabling collaboration among individual clients through an indirect expansion of the available dataset, distribution of the computation burden across different entities, and privacy-preserving communication mechanisms. Despite several attempts to devise effective transferable FL approaches, several important issues remain unsolved. First, existing methods in this setting primarily focus on optimizing transferability within their local client domains, thereby ignoring transferability over the global learning domain. Second, most approaches focus on analyzing indirect transferability metrics, which does not allow for accurate assessment of the final target loss and extent of transferability. To address these issues, we introduce two important FL features into the model. The first boosts transferability via an exchange protocol between the clients and the server that includes information about cross-client Jacobian (gradient) norms. The second feature promotes an increase of the average of the Jacobians of the clients at the server side, which is subsequently used as a local regularizer that reduces the cross-client Jacobian variance. A rigorous analysis of our transferable federated algorithm, termed FedGTST (Federated Global Transferability via Statistics Tuning), reveals that increasing the averaged Jacobian norm across clients and reducing its variance ensures tight control of the target loss. This insight leads to the first known upper bound on the target loss of transferable federated learning in terms of the source loss and source-target domain discrepancy. Extensive experimental results on datasets including MNIST \u2192 MNIST-M and CIFAR10 \u2192 SVHN suggest that FedGTST significantly outperforms other relevant baselines, such as FedSR. For example, on the second source-target dataset pair, we improve the accuracy of FedSR by 9.8% and that of FedIIR by 7.6% when the backbone used is LeNet.", "pdf": "https://openreview.net/pdf/5654ab819d08c8951c309ec6e440949b8155196b.pdf"} {"title": "SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning", "url": "https://openreview.net/forum?id=PnlCHQrM69", "detail_url": "https://openreview.net/forum?id=PnlCHQrM69", "authors": "Yangruibo Ding,Jinjun Peng,Marcus J. 
Min,Gail Kaiser,Junfeng Yang,Baishakhi Ray", "tags": "NIPS 2024,Poster", "abstract": "Code Large Language Models (Code LLMs) have excelled at tasks like code completion but often miss deeper semantics such as execution effects and dynamic states. This paper aims to bridge the gap between Code LLMs' reliance on static text data and the need for semantic understanding for complex tasks like debugging and program repair. We introduce a novel strategy, _monologue reasoning_, to train Code LLMs to reason about comprehensive semantics, encompassing high-level functional descriptions, local execution effects of individual statements, and overall input/output behavior, thereby linking static code text with dynamic execution states.\nWe begin by collecting PyX, a clean Python corpus of fully executable code samples with functional descriptions and test cases. \nWe propose training Code LLMs not only to write code but also to understand code semantics by reasoning about key properties, constraints, and execution behaviors using natural language, mimicking human verbal debugging, i.e., rubber-duck debugging. This approach led to the development of SemCoder, a Code LLM with only 6.7B parameters, which shows competitive performance with GPT-3.5-turbo on code generation and execution reasoning tasks. SemCoder achieves 79.3% on HumanEval (GPT-3.5-turbo: 76.8%), 63.6% on CRUXEval-I (GPT-3.5-turbo: 50.3%), and 63.9% on CRUXEval-O (GPT-3.5-turbo: 59.0%). We also study the effectiveness of SemCoder's monologue-style execution reasoning compared to concrete scratchpad reasoning, showing that our approach integrates semantics from multiple dimensions more smoothly. Finally, we demonstrate the potential of applying learned semantics to improve Code LLMs' debugging and self-refining capabilities. Our data, code, and models are available at: https://github.com/ARiSE-Lab/SemCoder.", "pdf": "https://openreview.net/pdf/1e07d8e51eb5f904f2122b7a56ed6151e47c5cc0.pdf"} {"title": "On $f$-Divergence Principled Domain Adaptation: An Improved Framework", "url": "https://openreview.net/forum?id=xSU27DgWEr", "detail_url": "https://openreview.net/forum?id=xSU27DgWEr", "authors": "Ziqiao Wang,Yongyi Mao", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised domain adaptation (UDA) plays a crucial role in addressing distribution shifts in machine learning. In this work, we improve the theoretical foundations of UDA proposed in Acuna et al. (2021) by refining their $f$-divergence-based discrepancy and additionally introducing a new measure, $f$-domain discrepancy ($f$-DD). By removing the absolute value function and incorporating a scaling parameter, $f$-DD obtains novel target error and sample complexity bounds, allowing us to recover previous KL-based results and bridging the gap between algorithms and theory presented in Acuna et al. (2021). Using a localization technique, we also develop a fast-rate generalization bound.
Empirical results demonstrate the superior performance of $f$-DD-based learning algorithms over previous works in popular UDA benchmarks.", "pdf": "https://openreview.net/pdf/e6f6280a04e2892629381753602bc9e403e994ea.pdf"} {"title": "Improved Generation of Adversarial Examples Against Safety-aligned LLMs", "url": "https://openreview.net/forum?id=8hBc843g1p", "detail_url": "https://openreview.net/forum?id=8hBc843g1p", "authors": "Qizhang Li,Yiwen Guo,Wangmeng Zuo,Hao Chen", "tags": "NIPS 2024,Poster", "abstract": "Adversarial prompts (or say, adversarial examples) generated using gradient-based methods exhibit outstanding performance in performing automatic jailbreak attacks against safety-aligned LLMs. Nevertheless, due to the discrete nature of texts, the input gradient of LLMs struggles to precisely reflect the magnitude of loss change that results from token replacements in the prompt, leading to limited attack success rates against safety-aligned LLMs, even in the *white-box* setting. In this paper, we explore a new perspective on this problem, suggesting that it can be alleviated by leveraging innovations inspired by transfer-based attacks that were originally proposed for attacking *black-box* image classification models. For the first time, we adapt the ideas of effective methods among these transfer-based attacks, *i.e.*, Skip Gradient Method and Intermediate Level Attack, into gradient-based adversarial prompt generation and achieve significant performance gains without introducing obvious computational cost. Meanwhile, by discussing mechanisms behind the gains, new insights are drawn, and proper combinations of these methods are also developed. Our empirical results show that 87% of the query-specific adversarial suffixes generated by the developed combination can induce Llama-2-7B-Chat to produce the output that exactly matches the target string on AdvBench. This match rate is 33% higher than that of a very strong baseline known as GCG, demonstrating advanced discrete optimization for adversarial prompt generation against LLMs. In addition, without introducing obvious cost, the combination achieves >30% absolute increase in attack success rates compared with GCG when generating both query-specific (38% ->68%) and universal adversarial prompts (26.68% -> 60.32%) for attacking the Llama-2-7B-Chat model on AdvBench.\nCode at: https://github.com/qizhangli/Gradient-based-Jailbreak-Attacks.", "pdf": "https://openreview.net/pdf/993ca7e4d8e5f38ab0bcc37328fa57307b6f0ea9.pdf"} {"title": "Multi-model Ensemble Conformal Prediction in Dynamic Environments", "url": "https://openreview.net/forum?id=J1Y70keorq", "detail_url": "https://openreview.net/forum?id=J1Y70keorq", "authors": "Erfan Hajihashemi,Yanning Shen", "tags": "NIPS 2024,Poster", "abstract": "Conformal prediction is an uncertainty quantification method that constructs a prediction set for a previously unseen datum, ensuring the true label is included with a predetermined coverage probability. Adaptive conformal prediction has been developed to address data distribution shifts in dynamic environments. However, the efficiency of prediction sets varies depending on the learning model used. Employing a single fixed model may not consistently offer the best performance in dynamic environments with unknown data distribution shifts.
To address this issue, we introduce a novel adaptive conformal prediction framework, where the model used for creating prediction sets is selected \u2018on the fly\u2019 from multiple candidate models. The proposed algorithm is proven to achieve strongly adaptive regret over all intervals while maintaining valid coverage. Experiments on both real and synthetic datasets corroborate that the proposed approach consistently yields more efficient prediction sets while maintaining valid coverage, outperforming alternative methods.", "pdf": "https://openreview.net/pdf/e8099609fd67117212f5fdae1d419adf13be51f9.pdf"} {"title": "Disentangled Representation Learning in Non-Markovian Causal Systems", "url": "https://openreview.net/forum?id=uLGyoBn7hm", "detail_url": "https://openreview.net/forum?id=uLGyoBn7hm", "authors": "Adam Li,Yushu Pan,Elias Bareinboim", "tags": "NIPS 2024,Poster", "abstract": "Considering various data modalities, such as images, videos, and text, humans perform causal reasoning using high-level causal variables, as opposed to operating at the low, pixel level from which the data comes. \nIn practice, most causal reasoning methods assume that the data is described as granularly as the underlying causal generative factors, which is often violated in various AI tasks. \nThis mismatch translates into a lack of guarantees in various tasks such as generative modeling, decision-making, fairness, and generalizability, to cite a few. \nIn this paper, we acknowledge this issue and study the problem of causal disentangled representation learning from a combination of data gathered from various heterogeneous domains and assumptions in the form of a latent causal graph. To the best of our knowledge, the proposed work is the first to consider i) non-Markovian causal settings, where there may be unobserved confounding, ii) arbitrary distributions that arise from multiple domains, and iii) a relaxed version of disentanglement. Specifically, we introduce graphical criteria that allow for disentanglement under various conditions. Building on these results, we develop an algorithm that returns a causal disentanglement map, highlighting which latent variables can be disentangled given the combination of data and assumptions. The theory is corroborated by experiments.", "pdf": "https://openreview.net/pdf/8350116f8253990dda7ce413729df73f9a61f109.pdf"} {"title": "Cal-DPO: Calibrated Direct Preference Optimization for Language Model Alignment", "url": "https://openreview.net/forum?id=57OQXxbTbY", "detail_url": "https://openreview.net/forum?id=57OQXxbTbY", "authors": "Teng Xiao,Yige Yuan,Huaisheng Zhu,Mingxiao Li,Vasant G Honavar", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of aligning large language models (LLMs) with human preference data. Contrastive preference optimization has shown promising results in aligning LLMs with available preference data by optimizing the implicit reward associated with the policy. However, the contrastive objective focuses mainly on the relative values of implicit rewards associated with two responses while ignoring their actual values, resulting in suboptimal alignment with human preferences. To address this limitation, we propose calibrated direct preference optimization (Cal-DPO), a simple yet effective algorithm. We show that substantial improvement in alignment with the given preferences can be achieved simply by calibrating the implicit reward to ensure that the learned implicit rewards are comparable in scale to the ground-truth rewards.
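As a simplified, hedged illustration of selecting the model 'on the fly' from the multi-model conformal abstract above, the sketch below runs the standard adaptive conformal inference (ACI) miscoverage update and, each round, picks the candidate whose recent intervals were narrowest; the paper's actual selection rule and its strongly adaptive regret guarantee differ.

```python
# Sketch: online conformal prediction with on-the-fly model selection.
# ACI update: alpha_t <- alpha_t + gamma * (alpha - err_t). Width-based
# selection is an assumption of this sketch.
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma = 0.1, 0.02
models = [lambda x: 2.0 * x, lambda x: 2.0 * x + np.sin(5 * x)]  # candidates
alphas = [alpha for _ in models]         # per-model miscoverage levels
scores = [[1.0] for _ in models]         # per-model calibration scores
widths = [1.0 for _ in models]           # EMA of interval widths

for t in range(500):
    x = rng.uniform(-1, 1)
    y = 2.0 * x + 0.1 * rng.standard_normal()
    m = int(np.argmin(widths))                            # narrowest model wins
    q = np.quantile(scores[m], np.clip(1 - alphas[m], 0, 1))
    err = float(abs(y - models[m](x)) > q)                # 1 if y not covered
    alphas[m] += gamma * (alpha - err)                    # ACI step
    for i, f in enumerate(models):
        scores[i].append(abs(y - f(x)))                   # update calibration
        qi = np.quantile(scores[i], np.clip(1 - alphas[i], 0, 1))
        widths[i] = 0.9 * widths[i] + 0.1 * 2 * qi
print(widths)   # the well-specified candidate should end up selected
```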
We demonstrate the theoretical advantages of Cal-DPO over existing approaches. The results of our experiments on a variety of standard benchmarks show that Cal-DPO remarkably improves off-the-shelf methods.", "pdf": "https://openreview.net/pdf/deafc46d7f78e13f7390612cd2ea92bd3459b277.pdf"} {"title": "Stochastic contextual bandits with graph feedback: from independence number to MAS number", "url": "https://openreview.net/forum?id=t8iosEWoyd", "detail_url": "https://openreview.net/forum?id=t8iosEWoyd", "authors": "Yuxiao Wen,Yanjun Han,Zhengyuan Zhou", "tags": "NIPS 2024,Poster", "abstract": "We consider contextual bandits with graph feedback, a class of interactive learning problems with richer structures than vanilla contextual bandits, where taking an action reveals the rewards for all neighboring actions in the feedback graph under all contexts. Unlike the multi-armed bandits setting where a growing literature has painted a near-complete understanding of graph feedback, much remains unexplored in the contextual bandits counterpart. In this paper, we make inroads into this inquiry by establishing a regret lower bound $\\Omega(\\sqrt{\\beta_M(G) T})$, where $M$ is the number of contexts, $G$ is the feedback graph, and $\\beta_M(G)$ is our proposed graph-theoretic quantity that characterizes the fundamental learning limit for this class of problems. Interestingly, $\\beta_M(G)$ interpolates between $\\alpha(G)$ (the independence number of the graph) and $\\mathsf{m}(G)$ (the maximum acyclic subgraph (MAS) number of the graph) as the number of contexts $M$ varies. We also provide algorithms that achieve near-optimal regret for important classes of context sequences and/or feedback graphs, such as transitively closed graphs that find applications in auctions and inventory control. In particular, with many contexts, our results show that the MAS number essentially characterizes the statistical complexity for contextual bandits, as opposed to the independence number in multi-armed bandits.", "pdf": "https://openreview.net/pdf/513ccbfe70b63a8134c688af5c125c0ddad739c2.pdf"} {"title": "OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step", "url": "https://openreview.net/forum?id=vAOgaPvgYr", "detail_url": "https://openreview.net/forum?id=vAOgaPvgYr", "authors": "Owen M Dugan,Donato M. Jim\u00e9nez Benet\u00f3,Charlotte Loh,Zhuo Chen,Rumen Dangovski,Marin Soljacic", "tags": "NIPS 2024,Poster", "abstract": "Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still face challenges in accurately performing complex arithmetic operations. Language model systems often enable LLMs to generate code for arithmetic operations to achieve accurate calculations. However, this approach compromises speed and security, and fine-tuning risks the language model losing prior capabilities. We propose a framework that enables exact arithmetic in *a single autoregressive step*, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. We use the hidden states of a LLM to control a symbolic architecture that performs arithmetic. Our implementation using Llama 3 with OccamNet as a symbolic model (OccamLlama) achieves 100\\% accuracy on single arithmetic operations ($+,-,\\times,\\div,\\sin{},\\cos{},\\log{},\\exp{},\\sqrt{}$), outperforming GPT 4o with and without a code interpreter. 
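A rough rendering of the calibration idea in the Cal-DPO abstract above: keep DPO's contrastive term but add a term anchoring the absolute scale of the implicit rewards. The squared-error anchor to given ground-truth rewards below is an assumed form for illustration, not the paper's precise objective.

```python
# Sketch: DPO's contrastive loss plus a calibration term on the implicit
# rewards r = beta * (logpi - logref). The squared-error anchor is assumed.
import torch
import torch.nn.functional as F

def cal_dpo_loss(logp_w, logp_l, ref_w, ref_l, r_w, r_l, beta=0.1, lam=1.0):
    rw = beta * (logp_w - ref_w)          # implicit reward, chosen response
    rl = beta * (logp_l - ref_l)          # implicit reward, rejected response
    contrastive = -F.logsigmoid(rw - rl)  # standard DPO term (relative values)
    calibration = (rw - r_w) ** 2 + (rl - r_l) ** 2   # anchor absolute scale
    return (contrastive + lam * calibration).mean()

# Toy batch of summed log-probs under the policy and reference models.
logp_w, logp_l = torch.tensor([-5.0]), torch.tensor([-9.0])
ref_w, ref_l = torch.tensor([-6.0]), torch.tensor([-7.0])
print(cal_dpo_loss(logp_w, logp_l, ref_w, ref_l,
                   r_w=torch.tensor([0.5]), r_l=torch.tensor([-0.5])))
```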
Furthermore, OccamLlama outperforms GPT 4o with and without a code interpreter on average across a range of mathematical problem solving benchmarks, demonstrating that OccamLLMs can excel in arithmetic tasks, even surpassing much larger models. Code is available at https://github.com/druidowm/OccamLLM.", "pdf": "https://openreview.net/pdf/2f805a9041d7d2e112fd00bc3259fa9079805498.pdf"} {"title": "Sample Complexity of Interventional Causal Representation Learning", "url": "https://openreview.net/forum?id=XL9aaXl0u6", "detail_url": "https://openreview.net/forum?id=XL9aaXl0u6", "authors": "Emre Acart\u00fcrk,Burak Var\u0131c\u0131,Karthikeyan Shanmugam,Ali Tajer", "tags": "NIPS 2024,Poster", "abstract": "Consider a data-generation process that transforms low-dimensional _latent_ causally-related variables to high-dimensional _observed_ variables. Causal representation learning (CRL) is the process of using the observed data to recover the latent causal variables and the causal structure among them. Despite the multitude of identifiability results under various interventional CRL settings, the existing guarantees apply exclusively to the _infinite-sample_ regime (i.e., infinite observed samples). This paper establishes the first sample-complexity analysis for the finite-sample regime, in which the interactions between the number of observed samples and probabilistic guarantees on recovering the latent variables and structure are established. This paper focuses on _general_ latent causal models, stochastic _soft_ interventions, and a linear transformation from the latent to the observation space. The identifiability results ensure graph recovery up to ancestors and latent variables recovery up to mixing with parent variables. Specifically, ${\\cal O}((\\log \\frac{1}{\\delta})^{4})$ samples suffice for latent graph recovery up to ancestors with probability $1 - \\delta$, and ${\\cal O}((\\frac{1}{\\epsilon}\\log \\frac{1}{\\delta})^{4})$ samples suffice for latent causal variables recovery that is $\\epsilon$ close to the identifiability class with probability $1 - \\delta$.", "pdf": "https://openreview.net/pdf/3cd848f730138b8b2afd1dcc6c71c80ba6f6a6a1.pdf"} {"title": "On the Complexity of Teaching a Family of Linear Behavior Cloning Learners", "url": "https://openreview.net/forum?id=4SAR7IRqmB", "detail_url": "https://openreview.net/forum?id=4SAR7IRqmB", "authors": "Shubham Kumar Bharti,Stephen Wright,Adish Singla,Jerry Zhu", "tags": "NIPS 2024,Poster", "abstract": "We study optimal teaching for a family of Behavior Cloning learners that learn using a linear hypothesis class. In this setup, a knowledgeable teacher can demonstrate a dataset of state and action tuples and is required to teach an optimal policy to an entire family of BC learners using the smallest possible dataset. We analyze the linear family and design a novel teaching algorithm called `TIE' that achieves the instance optimal Teaching Dimension for the entire family. However, we show that this problem is NP-hard for action spaces with $|\\mathcal{A}| > 2$ and provide an efficient approximation algorithm with a $\\log(|\\mathcal{A}| - 1)$ guarantee on the optimal teaching size. 
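The single-step routing in the OccamLLM abstract above sends an LLM hidden state to a symbolic unit that performs exact arithmetic instead of generating digit tokens. The sketch below fakes that routing with an untrained linear head; the head, the two-operand setup, and the op table are illustrative assumptions.

```python
# Sketch: hidden state -> choose an exact symbolic op -> apply it to parsed
# operands. The linear "router" here is untrained and purely illustrative.
import math
import torch

OPS = {0: lambda a, b: a + b, 1: lambda a, b: a - b,
       2: lambda a, b: a * b, 3: lambda a, b: a / b,
       4: lambda a, b: math.sqrt(a)}          # last op ignores b

router = torch.nn.Linear(64, len(OPS))        # would be trained on LLM states

def occam_step(hidden_state, a, b):
    op_id = int(router(hidden_state).argmax())  # one forward pass, no sampling
    return OPS[op_id](a, b)                     # exact arithmetic, no tokens

h = torch.randn(64)
print(occam_step(h, 12.0, 3.4))   # exact to float precision, whichever op fires
```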
We present empirical results to demonstrate the effectiveness of our algorithm and compare it to various baselines in different teaching environments.", "pdf": "https://openreview.net/pdf/efe5b979cd3307122d731db58d556f69dc10e559.pdf"} {"title": "Towards a \"Universal Translator\" for Neural Dynamics at Single-Cell, Single-Spike Resolution", "url": "https://openreview.net/forum?id=nRRJsDahEg", "detail_url": "https://openreview.net/forum?id=nRRJsDahEg", "authors": "Yizi Zhang,Yanchen Wang,Donato M. Jim\u00e9nez-Benet\u00f3,Zixuan Wang,Mehdi Azabou,Blake Aaron Richards,Renee Tung,Olivier Winter,International Brain Laboratory,Eva L Dyer,Liam Paninski,Cole Lincoln Hurwitz", "tags": "NIPS 2024,Poster", "abstract": "Neuroscience research has made immense progress over the last decade, but our understanding of the brain remains fragmented and piecemeal: the dream of probing an arbitrary brain region and automatically reading out the information encoded in its neural activity remains out of reach. In this work, we build towards a first foundation model for neural spiking data that can solve a diverse set of tasks across multiple brain areas. We introduce a novel self-supervised modeling approach for population activity in which the model alternates between masking out and reconstructing neural activity across different time steps, neurons, and brain regions. To evaluate our approach, we design unsupervised and supervised prediction tasks using the International Brain Laboratory repeated site dataset, which is comprised of Neuropixels recordings targeting the same brain locations across 48 animals and experimental sessions. The prediction tasks include single-neuron and region-level activity prediction, forward prediction, and behavior decoding. We demonstrate that our multi-task-masking (MtM) approach significantly improves the performance of current state-of-the-art population models and enables multi-task learning. 
We also show that by training on multiple animals, we can improve the generalization ability of the model to unseen animals, paving the way for a foundation model of the brain at single-cell, single-spike resolution.", "pdf": "https://openreview.net/pdf/e7664eef58345c56b265804ba3f72932b5f88c14.pdf"} {"title": "Simple and Effective Masked Diffusion Language Models", "url": "https://openreview.net/forum?id=L4uaAR4ArM", "detail_url": "https://openreview.net/forum?id=L4uaAR4ArM", "authors": "Subham Sekhar Sahoo,Marianne Arriola,Aaron Gokaslan,Edgar Mariano Marroquin,Alexander M Rush,Yair Schiff,Justin T Chiu,Volodymyr Kuleshov", "tags": "NIPS 2024,Poster", "abstract": "While diffusion models excel at generating high-quality images, prior work reports a significant performance gap between diffusion and autoregressive (AR) methods in language modeling.\nIn this work, we show that simple masked discrete diffusion is more performant than previously thought.\nWe apply an effective training recipe that improves the performance of masked diffusion models and derive a simplified, Rao-Blackwellized objective that results in additional improvements.\nOur objective has a simple form—it is a mixture of classical masked language modeling losses—and can be used to train encoder-only language models that admit efficient samplers, including ones that can generate arbitrary lengths of text semi-autoregressively like a traditional language model.\nOn language modeling benchmarks, a range of masked diffusion models trained with modern engineering practices achieves a new state-of-the-art among diffusion models, and approaches AR perplexity. We provide the code, along with a blog post and video tutorial on the project page: https://s-sahoo.com/mdlm", "pdf": "https://openreview.net/pdf/3a7ac707cefd8a4120d5e11741324aa678d7ce77.pdf"} {"title": "A Bayesian Approach for Personalized Federated Learning in Heterogeneous Settings", "url": "https://openreview.net/forum?id=hilGwNabqB", "detail_url": "https://openreview.net/forum?id=hilGwNabqB", "authors": "Disha Makhija,Joydeep Ghosh,Nhat Ho", "tags": "NIPS 2024,Poster", "abstract": "Federated learning (FL), through its privacy-preserving collaborative learning approach, has significantly empowered decentralized devices. However, constraints in either data and/or computational resources among participating clients introduce several challenges in learning, including the inability to train large model architectures, heightened risks of overfitting, and more. In this work, we present a novel FL framework grounded in Bayesian learning to address these challenges. Our approach involves training personalized Bayesian models at each client tailored to the unique complexities of the clients' datasets and efficiently collaborating across these clients. By leveraging Bayesian neural networks and their uncertainty quantification capabilities, our local training procedure robustly learns from small datasets. And the novel collaboration procedure utilizing priors in the functional (output) space of the networks facilitates collaboration across models of varying sizes, enabling the framework to adapt well in heterogeneous data and computational settings. Furthermore, we present a differentially private version of the algorithm, accompanied by formal differential privacy guarantees that apply without any assumptions on the learning algorithm. 
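The "mixture of classical masked language modeling losses" in the masked-diffusion abstract above can be sketched as: sample a mask rate t, mask that fraction of tokens, and weight the cross-entropy on masked positions by 1/t. The 1/t weight corresponds to a linear noise schedule and is an assumption of this sketch.

```python
# Sketch: a masked diffusion LM objective as a t-weighted mixture of masked
# LM losses (linear schedule assumed).
import torch
import torch.nn.functional as F

def mdlm_loss(model, tokens, mask_id, eps=1e-3):
    b, n = tokens.shape
    t = torch.rand(b, 1).clamp(min=eps)            # one mask rate per sequence
    masked = torch.rand(b, n) < t                  # mask each token w.p. t
    noisy = torch.where(masked, torch.full_like(tokens, mask_id), tokens)
    logits = model(noisy)                          # (b, n, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), tokens, reduction="none")
    ce = torch.where(masked, ce, torch.zeros_like(ce))
    return ((1.0 / t) * ce).sum(-1).mean() / n     # 1/t weight on masked CE

# Toy "model": embedding + linear head, just to show the call shape. The mask
# id 99 is reserved and never appears in the data tokens below.
vocab, mask_id = 100, 99
model = torch.nn.Sequential(torch.nn.Embedding(vocab, 32), torch.nn.Linear(32, vocab))
print(mdlm_loss(model, torch.randint(0, 99, (4, 16)), mask_id))
```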
Through experiments on popular FL datasets, we demonstrate that our approach outperforms strong baselines in both homogeneous and heterogeneous settings, and under strict privacy constraints.", "pdf": "https://openreview.net/pdf/eae39129243dc6c4b8c87d448599e80d0b9fce05.pdf"} {"title": "Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling", "url": "https://openreview.net/forum?id=qZSwlcLMCS", "detail_url": "https://openreview.net/forum?id=qZSwlcLMCS", "authors": "Jiatao Gu,Ying Shen,Shuangfei Zhai,Yizhe Zhang,Navdeep Jaitly,Joshua M. Susskind", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have emerged as a powerful tool for generating high-quality images from textual descriptions. Despite their successes, these models often exhibit limited diversity in the sampled images, particularly when sampling with a high classifier-free guidance weight. To address this issue, we present Kaleido, a novel approach that enhances the diversity of samples by incorporating autoregressive latent priors. Kaleido integrates an autoregressive language model that encodes the original caption and generates latent variables, serving as abstract and intermediary representations for guiding and facilitating the image generation process.\nIn this paper, we explore a variety of discrete latent representations, including textual descriptions, detection bounding boxes, object blobs, and visual tokens. These representations diversify and enrich the input conditions to the diffusion models, enabling more diverse outputs.\nOur experimental results demonstrate that Kaleido effectively broadens the diversity of the generated image samples from a given textual description while maintaining high image quality. Furthermore, we show that Kaleido adheres closely to the guidance provided by the generated latent variables, demonstrating its capability to effectively control and direct the image generation process.", "pdf": "https://openreview.net/pdf/6bf6fdec8dd6eed85f84157b0809edad3641d855.pdf"} {"title": "FairWire: Fair Graph Generation", "url": "https://openreview.net/forum?id=V0JvwCQlJe", "detail_url": "https://openreview.net/forum?id=V0JvwCQlJe", "authors": "Oyku Deniz Kose,Yanning Shen", "tags": "NIPS 2024,Poster", "abstract": "Machine learning over graphs has recently attracted growing attention due to its ability to analyze and learn complex relations within critical interconnected systems. However, the disparate impact that is amplified by the use of biased graph structures in these algorithms has raised significant concerns for their deployment in real-world decision systems. In addition, while synthetic graph generation has become pivotal for privacy and scalability considerations, the impact of generative learning algorithms on structural bias has not yet been investigated. Motivated by this, this work focuses on the analysis and mitigation of structural bias for both real and synthetic graphs. Specifically, we first theoretically analyze the sources of structural bias that result in disparity for the predictions of dyadic relations. To alleviate the identified bias factors, we design a novel fairness regularizer that offers a versatile use. Faced with the bias amplification in graph generation models brought to light in this work, we further propose a fair graph generation framework, FairWire, by leveraging our fair regularizer design in a generative model. 
Experimental results on real-world networks validate that the proposed tools herein deliver effective structural bias mitigation for both real and synthetic graphs.", "pdf": "https://openreview.net/pdf/1f5eea2983c84e12175cd2b978aa11cb3f7ce158.pdf"} {"title": "Apathetic or Empathetic? Evaluating LLMs' Emotional Alignments with Humans", "url": "https://openreview.net/forum?id=pwRVGRWtGg", "detail_url": "https://openreview.net/forum?id=pwRVGRWtGg", "authors": "Jen-tse Huang,Man Ho LAM,Eric John Li,Shujie Ren,Wenxuan Wang,Wenxiang Jiao,Zhaopeng Tu,Michael Lyu", "tags": "NIPS 2024,Poster", "abstract": "Evaluating Large Language Models\u2019 (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation covers seven LLMs, both commercial and open-source, spanning a range of model sizes and featuring the latest iterations, such as GPT-4, Mixtral-8x22B, and LLaMA-3.1. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, i.e., EmotionBench, are publicly available at https://github.com/CUHK-ARISE/EmotionBench.", "pdf": "https://openreview.net/pdf/4d6e71e0ca7fffae0c70fd69763ea99167e3d197.pdf"} {"title": "Scaling transformer neural networks for skillful and reliable medium-range weather forecasting", "url": "https://openreview.net/forum?id=aBP01akha9", "detail_url": "https://openreview.net/forum?id=aBP01akha9", "authors": "Tung Nguyen,Rohan Shah,Hritik Bansal,Troy Arcomano,Romit Maulik,Veerabhadra Kotamarthi,Ian Foster,Sandeep Madireddy,Aditya Grover", "tags": "NIPS 2024,Poster", "abstract": "Weather forecasting is a fundamental problem for anticipating and mitigating the impacts of climate change. Recently, data-driven approaches for weather forecasting based on deep learning have shown great promise, achieving accuracies that are competitive with operational systems. However, those methods often employ complex, customized architectures without sufficient ablation analysis, making it difficult to understand what truly contributes to their success. Here we introduce Stormer, a simple transformer model that achieves state-of-the-art performance on weather forecasting with minimal changes to the standard transformer backbone. We identify the key components of Stormer through careful empirical analyses, including weather-specific embedding, randomized dynamics forecast, and pressure-weighted loss. At the core of Stormer is a randomized forecasting objective that trains the model to forecast the weather dynamics over varying time intervals. During inference, this allows us to produce multiple forecasts for a target lead time and combine them to obtain better forecast accuracy. 
On WeatherBench 2, Stormer performs competitively at short to medium-range forecasts and outperforms current methods beyond 7 days, while requiring orders-of-magnitude less training data and compute. Additionally, we demonstrate Stormer\u2019s favorable scaling properties, showing consistent improvements in forecast accuracy with increases in model size and training tokens. Code and checkpoints are available at https://github.com/tung-nd/stormer.", "pdf": "https://openreview.net/pdf/2fdb23d735460d3e9df36b9d966b324b7a000548.pdf"} {"title": "A theoretical case-study of Scalable Oversight in Hierarchical Reinforcement Learning", "url": "https://openreview.net/forum?id=3tj3A26wsV", "detail_url": "https://openreview.net/forum?id=3tj3A26wsV", "authors": "Tom Yan,Zachary Chase Lipton", "tags": "NIPS 2024,Poster", "abstract": "A key source of complexity in next-generation AI models is the size of model outputs, making it time-consuming to parse and provide reliable feedback on. To ensure such models are aligned, we will need to bolster our understanding of scalable oversight and how to scale up human feedback. To this end, we study the challenges of scalable oversight in the context of goal-conditioned hierarchical reinforcement learning. Hierarchical structure is a promising entrypoint into studying how to scale up human feedback, which in this work we assume can only be provided for model outputs below a threshold size. In the cardinal feedback setting, we develop an apt sub-MDP reward and algorithm that allows us to acquire and scale up low-level feedback for learning with sublinear regret. In the ordinal feedback setting, we show the necessity of both high- and low-level feedback, and develop a hierarchical experimental design algorithm that efficiently acquires both types of feedback for learning. Altogether, our work aims to consolidate the foundations of scalable oversight, formalizing and studying the various challenges thereof.", "pdf": "https://openreview.net/pdf/a143b9d7d28c1a6e8cdea9a18adc4fa9293ed1a7.pdf"} {"title": "Causal Imitation for Markov Decision Processes: a Partial Identification Approach", "url": "https://openreview.net/forum?id=KHX0dKXdqH", "detail_url": "https://openreview.net/forum?id=KHX0dKXdqH", "authors": "Kangrui Ruan,Junzhe Zhang,Xuan Di,Elias Bareinboim", "tags": "NIPS 2024,Poster", "abstract": "Imitation learning enables an agent to learn from expert demonstrations when the performance measure is unknown and the reward signal is not specified. Standard imitation methods do not generally apply when the learner and the expert's sensory capabilities mismatch and demonstrations are contaminated with unobserved confounding bias. To address these challenges, recent advancements in causal imitation learning have been pursued. However, these methods often require access to underlying causal structures that might not always be available, posing practical challenges.\nIn this paper, we investigate robust imitation learning within the framework of canonical Markov Decision Processes (MDPs) using partial identification, allowing the agent to achieve expert performance even when the system dynamics are not uniquely determined from the confounded expert demonstrations. Specifically, first, we theoretically demonstrate that when unobserved confounders (UCs) exist in an MDP, the learner is generally unable to imitate expert performance. 
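The inference trick in the Stormer abstract above, producing multiple forecasts for one lead time from a model trained on varying intervals, can be sketched as follows. The `step` function is a stand-in for the learned dynamics model; the interval set and averaging rule are illustrative assumptions.

```python
# Toy sketch of combining variable-interval rollouts at inference, in the
# spirit of Stormer's randomized dynamics objective (illustrative only).
import numpy as np

def step(state, hours):
    """Stand-in for a learned model forecasting `hours` ahead (fixed per dt)."""
    rng = np.random.default_rng(hours)  # seeded so each dt is one fixed map
    d = state.shape[-1]
    return state @ (np.eye(d) + 0.01 * hours * rng.standard_normal((d, d)))

def forecast(state, lead_time=24, intervals=(6, 12, 24)):
    preds = []
    for dt in intervals:
        if lead_time % dt == 0:
            x = state
            for _ in range(lead_time // dt):  # roll out in dt-hour steps
                x = step(x, dt)
            preds.append(x)
    return np.mean(preds, axis=0)             # average the homogeneous rollouts

print(forecast(np.ones((4, 8))).shape)  # (4, 8)
```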
We then explore imitation learning in partially identifiable settings --- either transition distribution or reward function is non-identifiable from the available data and knowledge. Augmenting the celebrated GAIL method (Ho \\& Ermon, 2016), our analysis leads to two novel causal imitation algorithms that can obtain effective policies guaranteed to achieve expert performance.", "pdf": "https://openreview.net/pdf/44332f130be85fa6cd9ebf2c17a3b40392bccbae.pdf"} {"title": "Learning from Uncertain Data: From Possible Worlds to Possible Models", "url": "https://openreview.net/forum?id=v9RqRFSLQ2", "detail_url": "https://openreview.net/forum?id=v9RqRFSLQ2", "authors": "Jiongli Zhu,Su Feng,Boris Glavic,Babak Salimi", "tags": "NIPS 2024,Poster", "abstract": "We introduce an efficient method for learning linear models from uncertain data, where uncertainty is represented as a set of possible variations in the data, leading to predictive multiplicity. Our approach leverages abstract interpretation and zonotopes, a type of convex polytope, to compactly represent these dataset variations, enabling the symbolic execution of gradient descent on all possible worlds simultaneously. We develop techniques to ensure that this process converges to a fixed point and derive closed-form solutions for this fixed point. Our method provides sound over-approximations of all possible optimal models and viable prediction ranges. We demonstrate the effectiveness of our approach through theoretical and empirical analysis, highlighting its potential to reason about model and prediction uncertainty due to data quality issues in training data.", "pdf": "https://openreview.net/pdf/87fd59808dc5ed78d3e3e6ef14d35b6e060362d8.pdf"} {"title": "Adaptive Exploration for Data-Efficient General Value Function Evaluations", "url": "https://openreview.net/forum?id=HC6iqpPt3L", "detail_url": "https://openreview.net/forum?id=HC6iqpPt3L", "authors": "Arushi Jain,Josiah P. Hanna,Doina Precup", "tags": "NIPS 2024,Poster", "abstract": "General Value Functions (GVFs) (Sutton et al., 2011) represent predictive knowledge in reinforcement learning. Each GVF computes the expected return for a given policy, based on a unique reward. Existing methods relying on fixed behavior policies or pre-collected data often face data efficiency issues when learning multiple GVFs in parallel using off-policy methods. To address this, we introduce *GVFExplorer*, which adaptively learns a single behavior policy that efficiently collects data for evaluating multiple GVFs in parallel. Our method optimizes the behavior policy by minimizing the total variance in return across GVFs, thereby reducing the required environmental interactions. We use an existing temporal-difference-style variance estimator to approximate the return variance. We prove that each behavior policy update decreases the overall mean squared error in GVF predictions. 
We empirically demonstrate our method's performance in tabular and nonlinear function approximation settings, including MuJoCo environments with stationary and non-stationary reward signals, showing that it optimizes data usage and reduces prediction errors across multiple GVFs.", "pdf": "https://openreview.net/pdf/20c5e327d236868140f6e856c42c6b8592a50482.pdf"} {"title": "One-Layer Transformer Provably Learns One-Nearest Neighbor In Context", "url": "https://openreview.net/forum?id=WDX45LNZXE", "detail_url": "https://openreview.net/forum?id=WDX45LNZXE", "authors": "Zihao Li,Yuan Cao,Cheng Gao,Yihan He,Han Liu,Jason Matthew Klusowski,Jianqing Fan,Mengdi Wang", "tags": "NIPS 2024,Poster", "abstract": "Transformers have achieved great success in recent years. Interestingly, transformers have shown a particularly strong in-context learning capability -- even without fine-tuning, they are still able to solve unseen tasks well purely based on task-specific prompts. In this paper, we study the capability of one-layer transformers in learning the one-nearest neighbor prediction rule. Under a theoretical framework where the prompt contains a sequence of labeled training data and unlabeled test data, we show that, although the loss function is nonconvex, when trained with gradient descent, a single softmax attention layer can successfully learn to behave like a one-nearest neighbor classifier. Our result gives a concrete example of how transformers can be trained to implement nonparametric machine learning algorithms, and sheds light on the role of softmax attention in transformer models.", "pdf": "https://openreview.net/pdf/69e3d7430a05e0d5696f5dbe23746ff3a22096e9.pdf"} {"title": "SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation", "url": "https://openreview.net/forum?id=65UoJ0z7Kp", "detail_url": "https://openreview.net/forum?id=65UoJ0z7Kp", "authors": "Yixia Li,Boya Xiong,Guanhua Chen,Yun Chen", "tags": "NIPS 2024,Poster", "abstract": "Out-of-distribution (OOD) detection is crucial for the safe deployment of neural networks. Existing CLIP-based approaches perform OOD detection by devising novel scoring functions or sophisticated fine-tuning methods. In this work, we propose SeTAR, a novel, training-free OOD detection method that leverages selective low-rank approximation of weight matrices in vision-language and vision-only models. SeTAR enhances OOD detection via post-hoc modification of the model's weight matrices using a simple greedy search algorithm. Based on SeTAR, we further propose SeTAR+FT, a fine-tuning extension optimizing model performance for OOD detection tasks. Extensive evaluations on ImageNet1K and Pascal-VOC benchmarks show SeTAR's superior performance, reducing the relative false positive rate by up to 18.95\\% and 36.80\\% compared to zero-shot and fine-tuning baselines. Ablation studies further validate our approach's effectiveness, robustness, and generalizability across different model backbones. 
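The mechanism behind "One-Layer Transformer Provably Learns One-Nearest Neighbor In Context" above can be illustrated in a few lines: softmax attention over labeled in-context examples, with similarity scores sharpened by a large inverse temperature, behaves like a soft 1-NN predictor. The squared-distance score and the value of beta are illustrative assumptions.

```python
# Softmax attention as a soft 1-nearest-neighbor predictor (illustration).
# As beta grows, the attention weights concentrate on the closest labeled
# example, recovering the 1-NN prediction rule in the limit.
import numpy as np

def attention_1nn(X_train, y_train, x_query, beta=50.0):
    scores = -beta * np.sum((X_train - x_query) ** 2, axis=1)  # -beta * ||x - x_i||^2
    w = np.exp(scores - scores.max())
    w /= w.sum()                       # softmax over in-context examples
    return w @ y_train                 # attention-weighted label average

rng = np.random.default_rng(0)
X, y = rng.standard_normal((20, 2)), rng.integers(0, 2, 20).astype(float)
x = rng.standard_normal(2)
soft = attention_1nn(X, y, x)
hard = y[np.argmin(np.sum((X - x) ** 2, axis=1))]  # exact 1-NN label
print(soft, hard)  # the soft prediction approaches the 1-NN label as beta grows
```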
Our work offers a scalable, efficient solution for OOD detection, setting a new state-of-the-art in this area.", "pdf": "https://openreview.net/pdf/1f941a564d6513eccbbda6e05d521e80daf9ffcc.pdf"} {"title": "OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations", "url": "https://openreview.net/forum?id=MzNjnbgcPN", "detail_url": "https://openreview.net/forum?id=MzNjnbgcPN", "authors": "Yao Shu,Jiongfeng Fang,Ying Tiffany He,Fei Richard Yu", "tags": "NIPS 2024,Poster", "abstract": "First-order optimization (FOO) algorithms are pivotal in numerous computational domains, such as reinforcement learning and deep learning. However, their application to complex tasks often entails significant optimization inefficiency due to their need for many sequential iterations to converge. In response, we introduce first-order optimization expedited with approximately parallelized iterations (OptEx), the first general framework that enhances the time efficiency of FOO by leveraging parallel computing to directly mitigate its requirement of many sequential iterations for convergence. To achieve this, OptEx utilizes a kernelized gradient estimation that is based on the history of evaluated gradients to predict the gradients required by the next few sequential iterations in FOO, which helps to break the inherent iterative dependency and hence enables the approximate parallelization of iterations in FOO. We further establish theoretical guarantees for the estimation error of our kernelized gradient estimation and the iteration complexity of SGD-based OptEx, confirming that the estimation error diminishes to zero as the history of gradients accumulates and that our SGD-based OptEx enjoys an effective acceleration rate of \u0398(\u221aN) over standard SGD given a parallelism of N, in terms of the sequential iterations required for convergence. Finally, we provide extensive empirical studies, including synthetic functions, reinforcement learning tasks, and neural network training on various datasets, to underscore the substantial efficiency improvements achieved by our OptEx in practice.", "pdf": "https://openreview.net/pdf/6355a6b19af5c8832921ce57986888808909ddc1.pdf"} {"title": "FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding", "url": "https://openreview.net/forum?id=nExI4FuKWD", "detail_url": "https://openreview.net/forum?id=nExI4FuKWD", "authors": "Dong Jing,Xiaolong He,Yutian Luo,Nanyi Fei,Guoxing Yang,Wei Wei,Huiwen Zhao,Zhiwu Lu", "tags": "NIPS 2024,Poster", "abstract": "Contrastive Language-Image Pre-training (CLIP) achieves impressive performance on tasks like image classification and image-text retrieval by learning on large-scale image-text datasets. However, CLIP struggles with dense prediction tasks due to its poor grasp of fine-grained details. Although existing works pay attention to this issue, they achieve limited improvements and usually sacrifice the important visual-semantic consistency. To overcome these limitations, we propose FineCLIP, which keeps the global contrastive learning to preserve the visual-semantic consistency and further enhances the fine-grained understanding through two innovations: 1) A real-time self-distillation scheme that facilitates the transfer of representation capability from global to local features. 2) A semantically-rich regional contrastive learning paradigm with generated region-text pairs, boosting the local representation capabilities with abundant fine-grained knowledge. 
\nBoth cooperate to fully leverage diverse semantics and multi-grained complementary information.\nTo validate the superiority of our FineCLIP and the rationality of each design, we conduct extensive experiments on challenging dense prediction and image-level tasks. \nAll the observations demonstrate the effectiveness of FineCLIP.", "pdf": "https://openreview.net/pdf/e77b9bf69974b22ae77ee4209dc907d97148cbdd.pdf"} {"title": "On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games", "url": "https://openreview.net/forum?id=QgMC8ftbNd", "detail_url": "https://openreview.net/forum?id=QgMC8ftbNd", "authors": "Awni Altabaa,Zhuoran Yang", "tags": "NIPS 2024,Poster", "abstract": "In sequential decision-making problems, the *information structure* describes the causal dependencies between system variables, encompassing the dynamics of the environment and the agents' actions. Classical models of reinforcement learning (e.g., MDPs, POMDPs) assume a restricted and highly regular information structure, while more general models like predictive state representations do not explicitly model the information structure. By contrast, real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables, requiring a rich and flexible representation of information structure. In this paper, we formalize a novel reinforcement learning model which explicitly represents the information structure.\nWe then use this model to carry out an information-structural analysis of the statistical complexity of general sequential decision-making problems, obtaining a characterization via a graph-theoretic quantity of the DAG representation of the information structure. We prove an upper bound on the sample complexity of learning a general sequential decision-making problem in terms of its information structure by exhibiting an algorithm achieving the upper bound. This recovers known tractability results and gives a novel perspective on reinforcement learning in general sequential decision-making problems, providing a systematic way of identifying new tractable classes of problems.", "pdf": "https://openreview.net/pdf/48f357d190f35342349977f6fc217aacfb61f634.pdf"} {"title": "SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection", "url": "https://openreview.net/forum?id=QNieOPt4fg", "detail_url": "https://openreview.net/forum?id=QNieOPt4fg", "authors": "Liangxin Liu,Xuebo Liu,Derek F. Wong,Dongfang Li,Ziyi Wang,Baotian Hu,Min Zhang", "tags": "NIPS 2024,Poster", "abstract": "Instruction tuning (IT) is crucial to tailoring large language models (LLMs) towards human-centric interactions. Recent advancements have shown that the careful selection of a small, high-quality subset of IT data can significantly enhance the performance of LLMs. Despite this, common approaches often rely on additional models or data, which increases costs and limits widespread adoption. In this work, we propose a novel approach, termed $\\textit{SelectIT}$, that capitalizes on the foundational capabilities of the LLM itself. Specifically, we exploit the intrinsic uncertainty present in LLMs to more effectively select high-quality IT data, without the need for extra resources. Furthermore, we introduce a curated IT dataset, the $\\textit{Selective Alpaca}$, created by applying SelectIT to the Alpaca-GPT4 dataset. 
Empirical results demonstrate that IT using Selective Alpaca leads to substantial model ability enhancement. The robustness of SelectIT has also been corroborated in various foundation models and domain-specific tasks. Our findings suggest that longer and more computationally intensive IT data may serve as superior sources of IT, offering valuable insights for future research in this area. Data, code, and scripts are freely available at https://github.com/Blue-Raincoat/SelectIT.", "pdf": "https://openreview.net/pdf/9ee81561e94050705f358e4b646c204f4ac6cb24.pdf"} {"title": "Aligning Large Language Models with Representation Editing: A Control Perspective", "url": "https://openreview.net/forum?id=yTTomSJsSW", "detail_url": "https://openreview.net/forum?id=yTTomSJsSW", "authors": "Lingkai Kong,Haorui Wang,Wenhao Mu,Yuanqi Du,Yuchen Zhuang,Yifei Zhou,Yue Song,Rongzhi Zhang,Kai Wang,Chao Zhang", "tags": "NIPS 2024,Poster", "abstract": "Aligning large language models (LLMs) with human objectives is crucial for real-world applications. However, fine-tuning LLMs for alignment often suffers from unstable training and requires substantial computing resources. Test-time alignment techniques, such as prompting and guided decoding, do not modify the underlying model, and their performance remains dependent on the original model's capabilities. To address these challenges, we propose aligning LLMs through representation editing. The core of our method is to view a pre-trained autoregressive LLM as a discrete-time stochastic dynamical system. To achieve alignment for specific objectives, we introduce external control signals into the state space of this language dynamical system. We train a value function directly on the hidden states according to the Bellman equation, enabling gradient-based optimization to obtain the optimal control signals at test time. Our experiments demonstrate that our method outperforms existing test-time alignment techniques while requiring significantly fewer resources compared to fine-tuning methods. 
Our code is available at [https://github.com/Lingkai-Kong/RE-Control](https://github.com/Lingkai-Kong/RE-Control).", "pdf": "https://openreview.net/pdf/5b01199621eef2e71cc22c61871a279fc51beeba.pdf"} {"title": "Nearly Tight Black-Box Auditing of Differentially Private Machine Learning", "url": "https://openreview.net/forum?id=cCDMXXiamP", "detail_url": "https://openreview.net/forum?id=cCDMXXiamP", "authors": "Meenatchi Sundaram Muthu Selva Annamalai,Emiliano De Cristofaro", "tags": "NIPS 2024,Poster", "abstract": "This paper presents an auditing procedure for the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box threat model that is substantially tighter than prior work.\nThe main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters.\nFor models trained on MNIST and CIFAR-10 at theoretical $\\varepsilon=10.0$, our auditing procedure yields empirical estimates of $\\varepsilon_{emp} = 7.21$ and $6.95$, respectively, on a 1,000-record sample and $\\varepsilon_{emp} = 6.48$ and $4.96$ on the full datasets.\nBy contrast, previous audits were only (relatively) tight in stronger white-box models, where the adversary can access the model's inner parameters and insert arbitrary gradients.\nOverall, our auditing procedure can offer valuable insight into how the privacy analysis of DP-SGD could be improved and detect bugs and DP violations in real-world implementations.\nThe source code needed to reproduce our experiments is available from https://github.com/spalabucr/bb-audit-dpsgd.", "pdf": "https://openreview.net/pdf/4b8080bdff94b173112c6cc0c6042066baef4b32.pdf"} {"title": "Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training", "url": "https://openreview.net/forum?id=Gug7wc0BSs", "detail_url": "https://openreview.net/forum?id=Gug7wc0BSs", "authors": "Pihe Hu,Shaolong Li,Zhuoran Li,Ling Pan,Longbo Huang", "tags": "NIPS 2024,Poster", "abstract": "Deep Multi-agent Reinforcement Learning (MARL) relies on neural networks with numerous parameters in multi-agent scenarios, often incurring substantial computational overhead. Consequently, there is an urgent need to expedite training and enable model compression in MARL. This paper proposes the utilization of dynamic sparse training (DST), a technique proven effective in deep supervised learning tasks, to alleviate the computational burdens in MARL training. However, a direct adoption of DST fails to yield satisfactory MARL agents, leading to breakdowns in value learning within deep sparse value-based MARL models. Motivated by this challenge, we introduce an innovative Multi-Agent Sparse Training (MAST) framework aimed at simultaneously enhancing the reliability of learning targets and the rationality of sample distribution to improve value learning in sparse models. Specifically, MAST incorporates the Soft Mellowmax Operator with a hybrid TD-($\\lambda$) schema to establish dependable learning targets. Additionally, it employs a dual replay buffer mechanism to enhance the distribution of training samples. Building upon these aspects, MAST utilizes gradient-based topology evolution to exclusively train multiple MARL agents using sparse networks. 
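The black-box auditing result above ("Nearly Tight Black-Box Auditing of Differentially Private Machine Learning") rests on converting a membership-inference attack's error rates into an empirical privacy estimate. Below is the standard hypothesis-testing conversion for (eps, delta)-DP, given as a sketch; the paper's full procedure additionally crafts worst-case initial parameters and uses confidence intervals, which are omitted here.

```python
# Standard conversion from attack false-positive/false-negative rates to an
# empirical epsilon estimate under (eps, delta)-DP (sketch, point estimate only).
import math

def empirical_epsilon(fpr, fnr, delta=1e-5):
    eps1 = math.log((1 - delta - fpr) / fnr) if fnr > 0 else float("inf")
    eps2 = math.log((1 - delta - fnr) / fpr) if fpr > 0 else float("inf")
    return max(eps1, eps2, 0.0)

# e.g., a distinguishing attack with 5% false positives and 30% false negatives
print(empirical_epsilon(fpr=0.05, fnr=0.30, delta=1e-5))  # ~= 2.64
```

A tighter empirical estimate (closer to the theoretical epsilon) indicates a tighter audit, which is the sense in which the paper's black-box estimates of 6.48 and 4.96 at theoretical epsilon = 10 are "nearly tight".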
Our comprehensive experimental investigation across various value-based MARL algorithms on multiple benchmarks demonstrates, for the first time, significant reductions in redundancy of up to $20\\times$ in Floating Point Operations (FLOPs) for both training and inference, with less than 3% performance degradation.", "pdf": "https://openreview.net/pdf/bf8bb00ab8e48a246aea7bd4371261f2f92f54dd.pdf"} {"title": "Guiding Neural Collapse: Optimising Towards the Nearest Simplex Equiangular Tight Frame", "url": "https://openreview.net/forum?id=z4FaPUslma", "detail_url": "https://openreview.net/forum?id=z4FaPUslma", "authors": "Evan Markou,Thalaiyasingam Ajanthan,Stephen Gould", "tags": "NIPS 2024,Poster", "abstract": "Neural Collapse (NC) is a recently observed phenomenon in neural networks that characterises the solution space of the final classifier layer when trained until zero training loss. Specifically, NC suggests that the final classifier layer converges to a Simplex Equiangular Tight Frame (ETF), which maximally separates the weights corresponding to each class. By duality, the penultimate layer feature means also converge to the same simplex ETF. Since this simple symmetric structure is optimal, our idea is to utilise this property to improve convergence speed. Specifically, we introduce the notion of \\textit{nearest simplex ETF geometry} for the penultimate layer features at any given training iteration, by formulating it as a Riemannian optimisation. Then, at each iteration, the classifier weights are implicitly set to the nearest simplex ETF by solving this inner-optimisation, which is encapsulated within a declarative node to allow backpropagation. Our experiments on synthetic and real-world architectures on classification tasks demonstrate that our approach accelerates convergence and enhances training stability.", "pdf": "https://openreview.net/pdf/ac02c11fa162633bf19fadb27beddf13e3c58e97.pdf"} {"title": "Local Anti-Concentration Class: Logarithmic Regret for Greedy Linear Contextual Bandit", "url": "https://openreview.net/forum?id=rblaF2euXQ", "detail_url": "https://openreview.net/forum?id=rblaF2euXQ", "authors": "Seok-Jin Kim,Min-hwan Oh", "tags": "NIPS 2024,Poster", "abstract": "We study the performance guarantees of exploration-free greedy algorithms for the linear contextual bandit problem. \nWe introduce a novel condition, named the \\textit{Local Anti-Concentration} (LAC) condition, which enables a greedy bandit algorithm to achieve provable efficiency. \nWe show that the LAC condition is satisfied by a broad class of distributions, including Gaussian, exponential, uniform, Cauchy, and Student's~$t$ distributions, along with other exponential family distributions and their truncated variants. \nThis significantly expands the class of distributions under which greedy algorithms can perform efficiently. \nUnder our proposed LAC condition, we prove that the cumulative expected regret of the greedy algorithm for the linear contextual bandit is bounded by $\\mathcal{O}(\\operatorname{poly} \\log T)$. 
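The simplex equiangular tight frame at the heart of "Guiding Neural Collapse" above has a well-known closed form, sketched below: scale a centered identity and rotate it into the ambient space. The random rotation and dimensions are arbitrary choices for illustration.

```python
# Constructing a K-class simplex equiangular tight frame (ETF) in d >= K
# dimensions: the target geometry that classifier weights collapse to.
import numpy as np

def simplex_etf(K, d):
    assert d >= K
    # Orthonormal columns give an arbitrary rotation into d dimensions.
    U, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((d, K)))
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M  # columns are the K maximally-separated class directions

M = simplex_etf(K=4, d=10)
G = M.T @ M  # Gram matrix: 1 on the diagonal, -1/(K-1) off the diagonal
print(np.round(G, 3))
```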
\nOur results establish the widest range of distributions known to date that allow a sublinear regret bound for greedy algorithms, further achieving a sharp poly-logarithmic regret.", "pdf": "https://openreview.net/pdf/4de6a468dddbfb975396e4c31c95c83e157b2eae.pdf"} {"title": "MambaSCI: Efficient Mamba-UNet for Quad-Bayer Patterned Video Snapshot Compressive Imaging", "url": "https://openreview.net/forum?id=U4WeoyRHPd", "detail_url": "https://openreview.net/forum?id=U4WeoyRHPd", "authors": "Zhenghao Pan,Haijin Zeng,Jiezhang Cao,Yongyong Chen,Kai Zhang,Yong Xu", "tags": "NIPS 2024,Poster", "abstract": "Color video snapshot compressive imaging (SCI) employs computational imaging techniques to capture multiple sequential video frames in a single Bayer-patterned measurement. With the increasing popularity of quad-Bayer pattern in mainstream smartphone cameras for capturing high-resolution videos, mobile photography has become more accessible to a wider audience. However, existing color video SCI reconstruction algorithms are designed based on the traditional Bayer pattern. When applied to videos captured by quad-Bayer cameras, these algorithms often result in color distortion and ineffective demosaicing, rendering them impractical for primary equipment. To address this challenge, we propose the MambaSCI method, which leverages the Mamba and UNet architectures for efficient reconstruction of quad-Bayer patterned color video SCI. To the best of our knowledge, our work presents the first algorithm for quad-Bayer patterned SCI reconstruction, and also the initial application of the Mamba model to this task. Specifically, we customize Residual-Mamba-Blocks, which residually connect the Spatial-Temporal Mamba (STMamba), Edge-Detail-Reconstruction (EDR) module, and Channel Attention (CA) module. Respectively, STMamba is used to model long-range spatial-temporal dependencies with linear complexity, EDR is for better edge-detail reconstruction, and CA is used to compensate for the missing channel information interaction in Mamba model. Experiments demonstrate that MambaSCI surpasses state-of-the-art methods with lower computational and memory costs. PyTorch style pseudo-code for the core modules is provided in the supplementary materials. Code is at https://github.com/PAN083/MambaSCI.", "pdf": "https://openreview.net/pdf/72c3ea7ea0eeba8e9718d04e2061a016d54bfee0.pdf"} {"title": "KnowGPT: Knowledge Graph based Prompting for Large Language Models", "url": "https://openreview.net/forum?id=PacBluO5m7", "detail_url": "https://openreview.net/forum?id=PacBluO5m7", "authors": "Qinggang Zhang,Junnan Dong,Hao Chen,Daochen Zha,Zailiang Yu,Xiao Huang", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in many real-world applications. Nonetheless, LLMs are often criticized for their tendency to produce hallucinations, wherein the models fabricate incorrect statements on tasks beyond their knowledge and perception. To alleviate this issue, graph retrieval-augmented generation (GraphRAG) has been extensively explored which leverages the factual knowledge in knowledge graphs (KGs) to ground the LLM's responses in established facts and principles. However, most state-of-the-art LLMs are closed-source, making it challenging to develop a prompting framework that can efficiently and effectively integrate KGs into LLMs with hard prompts only. 
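To ground the "Local Anti-Concentration" result above ("Logarithmic Regret for Greedy Linear Contextual Bandit"), here is a minimal exploration-free greedy bandit: per-arm ridge regression with the arm chosen purely by the current estimate. The Gaussian contexts supply the implicit exploration that conditions like LAC formalize; all constants below are illustrative.

```python
# Exploration-free greedy linear contextual bandit (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
d, n_arms, T = 5, 3, 2000
theta = rng.standard_normal((n_arms, d))          # unknown arm parameters
A = np.stack([np.eye(d)] * n_arms)                # per-arm ridge Gram matrices
b = np.zeros((n_arms, d))

regret = 0.0
for t in range(T):
    x = rng.standard_normal(d)                    # Gaussian contexts satisfy LAC
    est = np.array([np.linalg.solve(A[k], b[k]) @ x for k in range(n_arms)])
    k = int(np.argmax(est))                       # purely greedy choice
    r = theta[k] @ x + 0.1 * rng.standard_normal()
    A[k] += np.outer(x, x)                        # ridge regression update
    b[k] += r * x
    regret += (theta @ x).max() - theta[k] @ x
print(f"average regret: {regret / T:.4f}")        # shrinks as T grows
```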
Existing KG-enhanced LLMs generally suffer from three critical issues (a huge search space, high API costs, and laborious prompt engineering) that impede their widespread application in practice. To this end, we introduce a novel **Know**ledge **Gr**aph based **P**romp**T**ing framework, namely **KnowGPT**, to enhance LLMs with domain knowledge. KnowGPT contains a knowledge extraction module to extract the most informative knowledge from KGs, and a context-aware prompt construction module to automatically convert extracted knowledge into effective prompts. Experiments on three benchmarks demonstrate that KnowGPT significantly outperforms all competitors. Notably, KnowGPT achieves a 92.6% accuracy on the OpenbookQA leaderboard, comparable to human-level performance.", "pdf": "https://openreview.net/pdf/4ec9739895ff72a71118e7b64bb98e28f109616b.pdf"} {"title": "Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments", "url": "https://openreview.net/forum?id=Y1rOWS2Z4i", "detail_url": "https://openreview.net/forum?id=Y1rOWS2Z4i", "authors": "Siddharth Nayak,Adelmo Morrison Orozco,Marina Ten Have,Jackson Zhang,Vittal Thirumalai,Darren Chen,Aditya Kapoor,Eric Robinson,Karthik Gopalakrishnan,James Harrison,Anuj Mahajan,brian ichter,Hamsa Balakrishnan", "tags": "NIPS 2024,Poster", "abstract": "The ability of Language Models (LMs) to understand natural language makes them a powerful tool for parsing human instructions into task plans for autonomous robots. Unlike traditional planning methods that rely on domain-specific knowledge and handcrafted rules, LMs generalize from diverse data and adapt to various tasks with minimal tuning, acting as a compressed knowledge base. However, LMs in their standard form face challenges with long-horizon tasks, particularly in partially observable multi-agent settings. We propose an LM-based Long-Horizon Planner for Multi-Agent Robotics (LLaMAR), a cognitive architecture for planning that achieves state-of-the-art results in long-horizon tasks within partially observable environments. LLaMAR employs a plan-act-correct-verify framework, allowing self-correction from action execution feedback without relying on oracles or simulators. Additionally, we present MAP-THOR, a comprehensive test suite encompassing household tasks of varying complexity within the AI2-THOR environment. Experiments show that LLaMAR achieves a 30\\% higher success rate than other state-of-the-art LM-based multi-agent planners in MAP-THOR and Search \\& Rescue tasks. Code can be found at [https://github.com/nsidn98/LLaMAR](https://github.com/nsidn98/LLaMAR)", "pdf": "https://openreview.net/pdf/c6dfb94fb019cc2a364a5a1bc89c8064812a935d.pdf"} {"title": "Cost-efficient Knowledge-based Question Answering with Large Language Models", "url": "https://openreview.net/forum?id=pje1Y71jad", "detail_url": "https://openreview.net/forum?id=pje1Y71jad", "authors": "Junnan Dong,Qinggang Zhang,Chuang Zhou,Hao Chen,Daochen Zha,Xiao Huang", "tags": "NIPS 2024,Poster", "abstract": "Knowledge-based question answering (KBQA) is widely used in many scenarios that necessitate domain knowledge. Large language models (LLMs) bring opportunities to KBQA, but their costs are significantly higher and they lack domain-specific knowledge from pre-training. We are motivated to combine LLMs and prior small models on knowledge graphs (KGMs) for both inferential accuracy and cost saving. 
However, this remains challenging, since accuracy and cost are two distinct metrics that are not readily combined in a single optimization. Model selection is also laborious, since different models excel at different kinds of knowledge. To this end, we propose Coke, a novel cost-efficient strategy for KBQA with LLMs, modeled as a tailored multi-armed bandit problem to minimize calls to LLMs within limited budgets. We first formulate the accuracy expectation with a cluster-level Thompson Sampling for either KGMs or LLMs. A context-aware policy is optimized to further distinguish the expert model subject to the question semantics. The overall decision is bounded by the cost regret according to historical expenditure on failures. Extensive experiments showcase the superior performance of Coke, which moves the Pareto frontier with up to 20.89% saving of GPT-4 fees while achieving a 2.74% higher accuracy on the benchmark datasets.", "pdf": "https://openreview.net/pdf/0c4dc789433d497c2f2c0f0da165be3b5c9f715b.pdf"} {"title": "Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes", "url": "https://openreview.net/forum?id=da0ZJatRCN", "detail_url": "https://openreview.net/forum?id=da0ZJatRCN", "authors": "Syrine Belakaria,Benjamin Letham,Jana Doppa,Barbara E Engelhardt,Stefano Ermon,Eytan Bakshy", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of active learning for global sensitivity analysis of expensive black-box functions. Our aim is to efficiently learn the importance of different input variables, e.g., in vehicle safety experimentation, we study the impact of the thickness of various components on safety objectives. Since function evaluations are expensive, we use active learning to prioritize experimental resources where they yield the most value. We propose novel active learning acquisition functions that directly target key quantities of derivative-based global sensitivity measures (DGSMs) under Gaussian process surrogate models.\nWe showcase the first application of active learning directly to DGSMs, and develop tractable uncertainty reduction and information gain acquisition functions for these measures. Through comprehensive evaluation on synthetic and real-world problems, our study demonstrates how these active learning acquisition strategies substantially enhance the sample efficiency of DGSM estimation, particularly with limited evaluation budgets. Our work paves the way for more efficient and accurate sensitivity analysis in various scientific and engineering applications.", "pdf": "https://openreview.net/pdf/e7cbe6f405410f66193a63a86f7ddaae0e3eb870.pdf"} {"title": "Divergences between Language Models and Human Brains", "url": "https://openreview.net/forum?id=DpP5F3UfKw", "detail_url": "https://openreview.net/forum?id=DpP5F3UfKw", "authors": "Yuchen Zhou,Emmy Liu,Graham Neubig,Michael J. Tarr,Leila Wehbe", "tags": "NIPS 2024,Poster", "abstract": "Do machines and humans process language in similar ways? Recent research has hinted at the affirmative, showing that human neural activity can be effectively predicted using the internal representations of language models (LMs). Although such results are thought to reflect shared computational principles between LMs and human brains, there are also clear differences in how LMs and humans represent and use language. 
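The bandit formulation in the Coke abstract above can be made concrete with a toy cost-aware Thompson Sampling loop choosing between a cheap KG model and a costly LLM under a budget. This is a simplification: the context-aware policy and cost-regret bound from the paper are omitted, and all numbers are invented for illustration.

```python
# Toy cost-aware Thompson Sampling between a cheap KG model and a costly LLM,
# in the spirit of Coke (simplified; not the paper's full policy).
import numpy as np

rng = np.random.default_rng(0)
true_acc = {"kgm": 0.70, "llm": 0.85}   # hypothetical per-model accuracies
cost = {"kgm": 0.0, "llm": 1.0}         # hypothetical per-call costs
alpha = {m: 1.0 for m in true_acc}      # Beta posterior: successes + 1
beta = {m: 1.0 for m in true_acc}       # Beta posterior: failures + 1
budget, spent, correct = 200.0, 0.0, 0

for q in range(1000):
    affordable = [m for m in true_acc if spent + cost[m] <= budget]
    # Thompson draw: sample a plausible accuracy for each affordable model
    draws = {m: rng.beta(alpha[m], beta[m]) for m in affordable}
    m = max(draws, key=draws.get)
    ok = rng.random() < true_acc[m]     # simulate answering the question
    alpha[m] += ok
    beta[m] += 1 - ok
    spent += cost[m]
    correct += ok
print(f"accuracy={correct / 1000:.3f}, LLM budget used={spent:.0f}/{budget:.0f}")
```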
In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories. Using an LLM-based data-driven approach, we identify two domains that LMs do not capture well: social/emotional intelligence and physical commonsense. We validate these findings with human behavioral experiments and hypothesize that the gap is due to insufficient representations of social/emotional and physical knowledge in LMs. Our results show that fine-tuning LMs on these domains can improve their alignment with human brain responses.", "pdf": "https://openreview.net/pdf/3f5c514423f1a9678561f73def188118d5bcf7d3.pdf"} {"title": "Covariate Shift Corrected Conditional Randomization Test", "url": "https://openreview.net/forum?id=Me5esZTRqW", "detail_url": "https://openreview.net/forum?id=Me5esZTRqW", "authors": "Bowen Xu,Yiwen Huang,Chuan Hong,Shuangning Li,Molei Liu", "tags": "NIPS 2024,Poster", "abstract": "Conditional independence tests are crucial across various disciplines in determining the independence of an outcome variable $Y$ from a treatment variable $X$, conditioning on a set of confounders $Z$. The Conditional Randomization Test (CRT) offers a powerful framework for such testing by assuming known distributions of $X \\mid Z$; it controls the Type-I error exactly, allowing for the use of flexible, black-box test statistics. In practice, testing for conditional independence often involves using data from a source population to draw conclusions about a target population. This can be challenging due to covariate shift---differences in the distribution of $X$, $Z$, and surrogate variables, which can affect the conditional distribution of $Y \\mid X, Z$---rendering traditional CRT approaches invalid. To address this issue, we propose a novel Covariate Shift Corrected Pearson Chi-squared Conditional Randomization (csPCR) test. This test adapts to covariate shifts by integrating importance weights and employing the control variates method to reduce variance in the test statistics and thus enhance power. Theoretically, we establish that the csPCR test controls the Type-I error asymptotically. Empirically, through simulation studies, we demonstrate that our method not only maintains control over Type-I errors but also exhibits superior power, confirming its efficacy and practical utility in real-world scenarios where covariate shifts are prevalent. Finally, we apply our methodology to a real-world dataset to assess the impact of a COVID-19 treatment on the 90-day mortality rate among patients.", "pdf": "https://openreview.net/pdf/74507c2335d3c2e774426629dc05a2f7ad13d3bb.pdf"} {"title": "Pretrained Transformer Efficiently Learns Low-Dimensional Target Functions In-Context", "url": "https://openreview.net/forum?id=uHcG5Y6fdB", "detail_url": "https://openreview.net/forum?id=uHcG5Y6fdB", "authors": "Kazusato Oko,Yujin Song,Taiji Suzuki,Denny Wu", "tags": "NIPS 2024,Poster", "abstract": "Transformers can efficiently learn in-context from example demonstrations. Most existing theoretical analyses studied the in-context learning (ICL) ability of transformers for linear function classes, where it is typically shown that the minimizer of the pretraining loss implements one gradient descent step on the least squares objective. 
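The csPCR test described above ("Covariate Shift Corrected Conditional Randomization Test") combines the usual CRT resampling of X given Z with importance weighting. Here is a bare-bones skeleton under simplifying assumptions: the conditional law of X given Z and the importance weights are taken as known, and the control-variates variance reduction from the paper is omitted.

```python
# Skeleton of a covariate-shift-corrected conditional randomization test:
# resample X | Z under the known model and reweight the statistic by
# importance weights w = p_target / p_source (assumed known here).
import numpy as np

def cs_crt_pvalue(X, Y, Z, sample_x_given_z, statistic, weights, n_resample=500):
    rng = np.random.default_rng(0)
    t_obs = np.sum(weights * statistic(X, Y, Z))
    t_null = np.empty(n_resample)
    for b in range(n_resample):
        Xb = sample_x_given_z(Z, rng)          # fresh draw from X | Z
        t_null[b] = np.sum(weights * statistic(Xb, Y, Z))
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resample)

# Toy example: X | Z ~ N(Z, 1) and Y depends on X, so the test should reject.
rng = np.random.default_rng(1)
Z = rng.standard_normal(300)
X = Z + rng.standard_normal(300)
Y = 0.5 * X + rng.standard_normal(300)
w = np.ones(300)                               # no shift in this toy example
stat = lambda X, Y, Z: X * Y                   # simple covariance-style statistic
print(cs_crt_pvalue(X, Y, Z, lambda Z, r: Z + r.standard_normal(Z.shape), stat, w))
```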
However, this simplified linear setting arguably does not demonstrate the statistical efficiency of ICL, since the pretrained transformer does not outperform directly solving linear regression on the test prompt. \nIn this paper, we study ICL of a nonlinear function class via transformer with nonlinear MLP layer: given a class of \\textit{single-index} target functions $f_*(\\boldsymbol{x}) = \\sigma_*(\\langle\\boldsymbol{x},\\boldsymbol{\\beta}\\rangle)$, where the index features $\\boldsymbol{\\beta}\\in\\mathbb{R}^d$ are drawn from a $r$-dimensional subspace, we show that a nonlinear transformer optimized by gradient descent (with a pretraining sample complexity that depends on the \\textit{information exponent} of the link functions $\\sigma_*$) learns $f_*$ in-context with a prompt length that only depends on the dimension of the distribution of target functions $r$; in contrast, any algorithm that directly learns $f_*$ on test prompt yields a statistical complexity that scales with the ambient dimension $d$. Our result highlights the adaptivity of the pretrained transformer to low-dimensional structures of the function class, which enables sample-efficient ICL that outperforms estimators that only have access to the in-context data.", "pdf": "https://openreview.net/pdf/ebe3fdc5e357d327b920801545a353f902eefb86.pdf"} {"title": "An effective framework for estimating individualized treatment rules", "url": "https://openreview.net/forum?id=G7L65B2P0y", "detail_url": "https://openreview.net/forum?id=G7L65B2P0y", "authors": "Joowon Lee,Jared Davis Huling,Guanhua Chen", "tags": "NIPS 2024,Poster", "abstract": "Estimating individualized treatment rules (ITRs) is fundamental in causal inference, particularly for precision medicine applications. Traditional ITR estimation methods rely on inverse probability weighting (IPW) to address confounding factors and $L_{1}$-penalization for simplicity and interpretability. However, IPW can introduce statistical bias without precise propensity score modeling, while $L_1$-penalization makes the objective non-smooth, leading to computational bias and requiring subgradient methods. In this paper, we propose a unified ITR estimation framework formulated as a constrained, weighted, and smooth convex optimization problem. The optimal ITR can be robustly and effectively computed by projected gradient descent. Our comprehensive theoretical analysis reveals that weights that balance the spectrum of a `weighted design matrix' improve both the optimization and likelihood landscapes, yielding improved computational and statistical estimation guarantees. In particular, this is achieved by distributional covariate balancing weights, which are model-free alternatives to IPW. Extensive simulations and applications demonstrate that our framework achieves significant gains in both robustness and effectiveness for ITR learning against existing methods.", "pdf": "https://openreview.net/pdf/3b195f1ab7b8f324455c2d592ed796416f102aeb.pdf"} {"title": "DiMSUM: Diffusion Mamba - A Scalable and Unified Spatial-Frequency Method for Image Generation", "url": "https://openreview.net/forum?id=KqbLzSIXkm", "detail_url": "https://openreview.net/forum?id=KqbLzSIXkm", "authors": "Hao Phung,Quan Dao,Trung Tuan Dao,Hoang Phan,Dimitris N. 
Metaxas,Anh Tuan Tran", "tags": "NIPS 2024,Poster", "abstract": "We introduce a novel state-space architecture for diffusion models, effectively harnessing spatial and frequency information to enhance the inductive bias towards local features in input images for image generation tasks. While state-space networks, including Mamba, a revolutionary advancement in recurrent neural networks, typically scan input sequences from left to right, they face difficulties in designing effective scanning strategies, especially in the processing of image data. Our method demonstrates that integrating wavelet transformation into Mamba enhances the local structure awareness of visual inputs and better captures long-range relations of frequencies by disentangling them into wavelet subbands, representing both low- and high-frequency components. These wavelet-based outputs are then processed and seamlessly fused with the original Mamba outputs through a cross-attention fusion layer, combining both spatial and frequency information to optimize the order awareness of state-space models which is essential for the details and overall quality of image generation. Besides, we introduce a globally-shared transformer to supercharge the performance of Mamba, harnessing its exceptional power to capture global relationships. Through extensive experiments on standard benchmarks, our method demonstrates superior results compared to DiT and DIFFUSSM, achieving faster training convergence and delivering high-quality outputs. The codes and pretrained models are released at https://github.com/VinAIResearch/DiMSUM.git.", "pdf": "https://openreview.net/pdf/0ae8bfdeeec0ac6c1b9be00728313d0eee7040d2.pdf"} {"title": "Rule Based Rewards for Language Model Safety", "url": "https://openreview.net/forum?id=QVtwpT5Dmg", "detail_url": "https://openreview.net/forum?id=QVtwpT5Dmg", "authors": "Tong Mu,Alec Helyar,Johannes Heidecke,Joshua Achiam,Andrea Vallone,Ian D Kivlichan,Molly Lin,Alex Beutel,John Schulman,Lilian Weng", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning based fine-tuning of large language models (LLMs) on human preferences has been shown to enhance both their capabilities and safety behavior.\n However, in cases related to safety, without precise instructions to human annotators, the data collected may cause the model to become overly cautious, or to respond in an undesirable style, such as being judgmental.\n Additionally, as model capabilities and usage patterns evolve, there may be a costly need to add or relabel data to modify safety behavior. \n We propose a novel preference modeling approach that utilizes AI feedback and only requires a small amount of human data. \n Our method, Rule Based Rewards (RBR), uses a collection of rules for desired or undesired behaviors (e.g. 
refusals should not be judgmental) along with an LLM grader.\n In contrast to prior methods using AI feedback, our method uses fine-grained, composable, LLM-graded few-shot prompts as reward directly in RL training, resulting in greater control, accuracy, and ease of updating.\n We show that RBRs are an effective training method, achieving an F1 score of 97.1, compared to a human-feedback baseline of 91.7, resulting in much higher safety-behavior accuracy by better balancing usefulness and safety.", "pdf": "https://openreview.net/pdf/e963b11386699f5b75503a72861c8a01fb09a180.pdf"} {"title": "Alias-Free Mamba Neural Operator", "url": "https://openreview.net/forum?id=gUEBXGV8JM", "detail_url": "https://openreview.net/forum?id=gUEBXGV8JM", "authors": "Jianwei Zheng,LiweiNo,Ni Xu,Junwei Zhu,XiaoxuLin,Xiaoqin Zhang", "tags": "NIPS 2024,Poster", "abstract": "Benefiting from booming deep learning techniques, neural operators (NO) are considered an ideal alternative to traditional, computationally expensive methods for solving Partial Differential Equations (PDEs).\nYet despite this remarkable progress, current solutions pay little attention to holistic function features--both global and local information--during the process of solving PDEs.\nMoreover, the meticulously designed kernel integrations needed for desirable performance often incur a severe computational burden, such as GNO with $O(N(N-1))$, FNO with $O(N\log N)$, and Transformer-based NO with $O(N^2)$.\nTo counteract this dilemma, we propose a Mamba neural operator with $O(N)$ computational complexity, namely MambaNO.\nFunctionally, MambaNO achieves a clever balance between global integration, facilitated by the state space model of Mamba that scans the entire function, and local integration, engaged with an alias-free architecture. We prove a property of continuous-discrete equivalence to show the capability of\nMambaNO in approximating operators arising from universal PDEs to desired accuracy. MambaNOs are evaluated on a diverse set of benchmarks with possibly multi-scale solutions and set new state-of-the-art scores, yet with fewer parameters and better efficiency.", "pdf": "https://openreview.net/pdf/a1a1561a826925c5f0083b9694af193271f8b359.pdf"} {"title": "On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions", "url": "https://openreview.net/forum?id=x7usmidzxj", "detail_url": "https://openreview.net/forum?id=x7usmidzxj", "authors": "Yusu Hong,Junhong Lin", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study Adam in non-convex smooth scenarios with potentially unbounded gradients and affine variance noise. We consider a general noise model which governs affine variance noise, bounded noise, and sub-Gaussian noise. We show that Adam with a specific hyper-parameter setup can find a stationary point with a $\mathcal{O}(\text{poly}(\log T)/\sqrt{T})$ rate in high probability under this general noise model, where $T$ denotes the total number of iterations, matching the lower bound of stochastic first-order algorithms up to logarithmic factors. 
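The wavelet decomposition that the DiMSUM record above feeds to its state-space backbone can be illustrated with one level of a 2-D Haar transform, which splits an image into low- and high-frequency subbands. The averaging-based Haar variant and the toy image are illustrative choices, not the paper's exact transform.

```python
# One level of a 2-D Haar wavelet transform: splits an image into LL, LH,
# HL, HH subbands (low/high frequencies along each axis), the kind of
# decomposition DiMSUM uses to expose frequency structure (sketch).
import numpy as np

def haar2d(img):
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages (low frequency)
    d = (img[0::2, :] - img[1::2, :]) / 2   # row details (high frequency)
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

img = np.random.default_rng(0).random((8, 8))
for name, band in zip(["LL", "LH", "HL", "HH"], haar2d(img)):
    print(name, band.shape)  # each subband is (4, 4)
```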
We also provide a probabilistic convergence result for Adam under a generalized smooth condition which allows unbounded smoothness parameters and has been illustrated empirically to capture the smooth property of many practical objective functions more accurately.", "pdf": "https://openreview.net/pdf/53b61c417a7761bfb6f0d648f3d93f54a1153174.pdf"} {"title": "Ad Auctions for LLMs via Retrieval Augmented Generation", "url": "https://openreview.net/forum?id=Ujo8V7iXmR", "detail_url": "https://openreview.net/forum?id=Ujo8V7iXmR", "authors": "MohammadTaghi Hajiaghayi,Sebastien Lahaie,Keivan Rezaei,Suho Shin", "tags": "NIPS 2024,Poster", "abstract": "In the field of computational advertising, the integration of ads into the outputs of large language models (LLMs) presents an opportunity to support these services without compromising content integrity. This paper introduces novel auction mechanisms for ad allocation and pricing within the textual outputs of LLMs, leveraging retrieval-augmented generation (RAG). We propose a \\emph{segment auction} where an ad is probabilistically retrieved for each discourse segment (paragraph, section, or entire output) according to its bid and relevance, following the RAG framework, and priced according to competing bids. We show that our auction maximizes logarithmic social welfare, a new notion of welfare that balances allocation efficiency and fairness, and we characterize the associated incentive-compatible pricing rule. These results are extended to multi-ad allocation per segment. An empirical evaluation validates the feasibility and effectiveness of our approach over several ad auction scenarios, and exhibits inherent tradeoffs in metrics as we allow the LLM more flexibility to allocate ads.", "pdf": "https://openreview.net/pdf/1a43bddd10e8ed2f5ca0b3de382ea2aca7da548b.pdf"} {"title": "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models", "url": "https://openreview.net/forum?id=VzOgnDJMgh", "detail_url": "https://openreview.net/forum?id=VzOgnDJMgh", "authors": "Jinghan Jia,Jiancheng Liu,Yihua Zhang,Parikshit Ram,Nathalie Baracaldo,Sijia Liu", "tags": "NIPS 2024,Poster", "abstract": "The need for effective unlearning mechanisms in large language models (LLMs) is increasingly urgent, driven by the necessity to adhere to data regulations and foster ethical generative AI practices. LLM unlearning is designed to reduce the impact of undesirable data influences and associated model capabilities without diminishing the utility of the model if unrelated to the information being forgotten. Despite growing interest, much of the existing research has focused on varied unlearning method designs to boost effectiveness and efficiency. However, the inherent relationship between model weights and LLM unlearning has not been extensively examined. In this paper, we systematically explore how model weights interact with unlearning processes in LLMs and we design the weight attribution-guided LLM unlearning method, WAGLE, which unveils the interconnections between 'influence' of weights and 'influence' of data to forget and retain in LLM generation. By strategically guiding the LLM unlearning across different types of unlearning methods and tasks, WAGLE can erase the undesired content, while maintaining the performance of the original tasks. 
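The segment auction from "Ad Auctions for LLMs via Retrieval Augmented Generation" above retrieves one ad per discourse segment with probability tied to bid and relevance. The sketch below implements only that probabilistic allocation; the payment shown is a simple second-price-style placeholder, not the paper's incentive-compatible pricing rule.

```python
# Toy segment auction for RAG-based ad insertion: for each segment, retrieve
# one ad with probability proportional to bid * relevance. Payment below is
# a placeholder, not the paper's incentive-compatible rule.
import numpy as np

rng = np.random.default_rng(2)
bids = np.array([1.0, 2.5, 0.8])         # hypothetical advertiser bids
relevance = np.array([0.6, 0.2, 0.9])    # e.g., RAG retrieval scores

def segment_auction(bids, relevance):
    scores = bids * relevance
    p = scores / scores.sum()            # allocation probabilities
    winner = rng.choice(len(bids), p=p)
    runner_up = np.sort(scores)[-2]      # second-highest score
    payment = runner_up / relevance[winner]  # placeholder per-impression price
    return winner, payment

for seg in range(3):                     # one auction per discourse segment
    w, pay = segment_auction(bids, relevance)
    print(f"segment {seg}: ad {w} shown, pays {pay:.2f}")
```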
Our extensive experiments show that WAGLE boosts unlearning performance across a range of LLM unlearning methods, such as gradient difference and (negative) preference optimization; applications such as fictitious unlearning (TOFU benchmark), malicious use prevention (WMDP benchmark), and copyrighted information removal; and models including Zephyr-7b-beta and Llama2-7b. To the best of our knowledge, our work offers the first principled method for attributing and pinpointing the influential weights that enhance LLM unlearning. It stands in contrast to previous methods that either lack weight attribution or rely on simpler weight attribution techniques.", "pdf": "https://openreview.net/pdf/f8e6edd72761c9672749d9c628233d2df16aae08.pdf"} {"title": "Who\u2019s Gaming the System? A Causally-Motivated Approach for Detecting Strategic Adaptation", "url": "https://openreview.net/forum?id=PXGY9Fz8vC", "detail_url": "https://openreview.net/forum?id=PXGY9Fz8vC", "authors": "Trenton Chang,Lindsay Warrenburg,Sae-Hwan Park,Ravi B Parikh,Maggie Makar,Jenna Wiens", "tags": "NIPS 2024,Poster", "abstract": "In many settings, machine learning models may be used to inform decisions that impact individuals or entities who interact with the model. Such entities, or *agents,* may *game* model decisions by manipulating their inputs to the model to obtain better outcomes and maximize some utility. We consider a multi-agent setting where the goal is to identify the \u201cworst offenders:\u201d agents that are gaming most aggressively. However, identifying such agents is difficult without knowledge of their utility function. Thus, we introduce a framework in which each agent\u2019s tendency to game is parameterized via a scalar. We show that this gaming parameter is only partially identifiable. By recasting the problem as a causal effect estimation problem where different agents represent different \u201ctreatments,\u201d we prove that a ranking of all agents by their gaming parameters is identifiable. We present empirical results in a synthetic data study validating the usage of causal effect estimation for gaming detection and show in a case study of diagnosis coding behavior in the U.S. that our approach highlights features associated with gaming.", "pdf": "https://openreview.net/pdf/7f31354db4587aad2bba879b475c0a1ac5a5c57e.pdf"} {"title": "Achieving $\\tilde{O}(1/\\epsilon)$ Sample Complexity for Constrained Markov Decision Process", "url": "https://openreview.net/forum?id=psG4LXlDNs", "detail_url": "https://openreview.net/forum?id=psG4LXlDNs", "authors": "Jiashuo Jiang,Yinyu Ye", "tags": "NIPS 2024,Poster", "abstract": "We consider the reinforcement learning problem for the constrained Markov decision process (CMDP), which plays a central role in satisfying safety or resource constraints in sequential learning and decision-making. In this problem, we are given finite resources and an MDP with unknown transition probabilities. At each stage, we take an action, collecting a reward and consuming some resources, all of which are assumed to be unknown and must be learned over time. In this work, we take the first step towards deriving optimal problem-dependent guarantees for CMDP problems. 
We derive a logarithmic regret bound, which translates into a $O(\\frac{1}{\\Delta\\cdot\\epsilon}\\cdot\\log^2(1/\\epsilon))$ sample complexity bound, with $\\Delta$ being a problem-dependent parameter, yet independent of $\\epsilon$. Our sample complexity bound improves upon the state-of-the-art $O(1/\\epsilon^2)$ sample complexity for CMDP problems established in the previous literature, in terms of the dependency on $\\epsilon$. To achieve this advance, we develop a new framework for analyzing CMDP problems. To be specific, our algorithm operates in the primal space and we resolve the primal LP for the CMDP problem at each period in an online manner, with \\textit{adaptive} remaining resource capacities. The key elements of our algorithm are: i) a characterization of the instance hardness via the LP basis, ii) an elimination procedure that identifies one optimal basis of the primal LP, and iii) a resolving procedure that is adaptive to the remaining resources and sticks to the characterized optimal basis.", "pdf": "https://openreview.net/pdf/d266944c7c83f38bc65d9643812af49872f309c1.pdf"} {"title": "Scaling Laws in Linear Regression: Compute, Parameters, and Data", "url": "https://openreview.net/forum?id=PH7sdEanXP", "detail_url": "https://openreview.net/forum?id=PH7sdEanXP", "authors": "Licong Lin,Jingfeng Wu,Sham M. Kakade,Peter Bartlett,Jason D. Lee", "tags": "NIPS 2024,Poster", "abstract": "Empirically, large-scale deep learning models often satisfy a neural scaling law: the test error of the trained model improves polynomially as the model size and data size grow. However, conventional wisdom suggests the test error consists of approximation, bias, and variance errors, where the variance error increases with model size. This disagrees with the general form of neural scaling laws, which predict that increasing model size monotonically improves performance.\n\nWe study the theory of scaling laws in an infinite-dimensional linear regression setup. Specifically, we consider a model with $M$ parameters as a linear function of sketched covariates. The model is trained by one-pass stochastic gradient descent (SGD) using $N$ data. Assuming the optimal parameter satisfies a Gaussian prior and the data covariance matrix has a power-law spectrum of degree $a>1$, we show that the reducible part of the test error is $\\Theta(M^{-(a-1)} + N^{-(a-1)/a})$. The variance error, which increases with $M$, is dominated by the other errors due to the implicit regularization of SGD, thus disappearing from the bound. 
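The reducible-error scaling $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$ from "Scaling Laws in Linear Regression" above is easy to probe numerically; a small sketch follows, with all constants set to 1 purely for illustration.

```python
import numpy as np

def reducible_error(M, N, a=2.0):
    """Up-to-constants form of the bound Theta(M^{-(a-1)} + N^{-(a-1)/a})."""
    return M ** -(a - 1) + N ** (-(a - 1) / a)

for M, N in [(1e3, 1e6), (1e4, 1e6), (1e4, 1e8)]:
    print(f"M={M:.0e}, N={N:.0e} -> error ~ {reducible_error(M, N):.2e}")
# Increasing M monotonically shrinks the bound: the variance term that would
# grow with M is absorbed by SGD's implicit regularization, per the abstract.
```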
Our theory is consistent with the empirical neural scaling laws and is verified by numerical simulations.", "pdf": "https://openreview.net/pdf/0f2b588586b8f9357aca924f3353b0c13b102112.pdf"} {"title": "Learning Image Priors Through Patch-Based Diffusion Models for Solving Inverse Problems", "url": "https://openreview.net/forum?id=HGnxhHz6ss", "detail_url": "https://openreview.net/forum?id=HGnxhHz6ss", "authors": "Jason Hu,Bowen Song,Xiaojian Xu,Liyue Shen,Jeffrey A Fessler", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models can learn strong image priors from the underlying data distribution and use them to solve inverse problems,\nbut the training process is computationally expensive and requires large amounts of data.\nSuch bottlenecks prevent most existing works from being feasible for high-dimensional and high-resolution data such as 3D images.\nThis paper proposes a method to learn an efficient data prior for the entire image by training diffusion models only on patches of images.\nSpecifically, we propose a patch-based position-aware diffusion inverse solver, called PaDIS, where we obtain the score function of the whole image through scores of patches and their positional encoding and utilize this as the prior for solving inverse problems.\nFirst, we show that this diffusion model achieves improved memory and data efficiency\nwhile still maintaining the capability to generate entire images via positional encoding.\nAdditionally, the proposed PaDIS model is highly flexible and can be plugged into different diffusion inverse solvers (DIS).\nWe demonstrate that the proposed PaDIS approach enables solving various inverse problems in both natural and medical image domains, including CT reconstruction, deblurring, and super-resolution, given only patch-based priors.\nNotably, PaDIS outperforms previous DIS methods trained on entire image priors in the case of limited training data, demonstrating the data efficiency of our proposed approach by learning patch-based priors.", "pdf": "https://openreview.net/pdf/5c1849cec489253b53dd5ced49cd88613b54d884.pdf"} {"title": "Amortized Fourier Neural Operators", "url": "https://openreview.net/forum?id=a6em980M9x", "detail_url": "https://openreview.net/forum?id=a6em980M9x", "authors": "Zipeng Xiao,Siqi Kou,Zhongkai Hao,Bokai Lin,Zhijie Deng", "tags": "NIPS 2024,Poster", "abstract": "Fourier Neural Operators (FNOs) have shown promise for solving partial differential equations (PDEs).\nTypically, FNOs employ separate parameters for different frequency modes to specify tunable kernel integrals in Fourier space, which, however, results in an undesirably large number of parameters when solving high-dimensional PDEs. \nA workaround is to abandon the frequency modes exceeding a predefined threshold, but this limits the FNOs' ability to represent high-frequency details and poses non-trivial challenges for hyper-parameter specification. \nTo address these issues, we propose the AMortized Fourier Neural Operator (AM-FNO), where an amortized neural parameterization of the kernel function is deployed to accommodate arbitrarily many frequency modes using a fixed number of parameters. \nWe introduce two implementations of AM-FNO, based on the recently developed Kolmogorov\u2013Arnold Network (KAN) and on Multi-Layer Perceptrons (MLPs) equipped with orthogonal embedding functions, respectively. 
\nWe extensively evaluate our method on diverse datasets from various domains and observe up to 31\\% average improvement compared to competing neural operator baselines.", "pdf": "https://openreview.net/pdf/ac3e9bb4adc6f5e7eda9fb232b311cc5daf2ded2.pdf"} {"title": "Retrieval-Augmented Diffusion Models for Time Series Forecasting", "url": "https://openreview.net/forum?id=dRJJt0Ji48", "detail_url": "https://openreview.net/forum?id=dRJJt0Ji48", "authors": "Jingwei Liu,Ling Yang,Hongyan Li,Shenda Hong", "tags": "NIPS 2024,Poster", "abstract": "While time series diffusion models have received considerable attention in many recent works, the performance of existing models remains highly unstable. Factors limiting time series diffusion models include insufficient time series datasets and the absence of guidance. To address these limitations, we propose a Retrieval-Augmented Time series Diffusion model (RATD). The framework of RATD consists of two parts: an embedding-based retrieval process and a reference-guided diffusion model. In the first part, RATD retrieves the time series that are most relevant to historical time series from the database as references. The references are utilized to guide the denoising process in the second part. Our approach allows leveraging meaningful samples within the database to aid in sampling, thus maximizing the utilization of datasets. Meanwhile, this reference-guided mechanism also compensates for the deficiencies of existing time series diffusion models in terms of guidance. Experiments and visualizations on multiple datasets demonstrate the effectiveness of our approach, particularly in complicated prediction tasks. Our code is available at https://github.com/stanliu96/RATD", "pdf": "https://openreview.net/pdf/e87ce496bc882c66ee7f20b01c0a67af85c06f6f.pdf"} {"title": "MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems", "url": "https://openreview.net/forum?id=VR2RdSxtzs", "detail_url": "https://openreview.net/forum?id=VR2RdSxtzs", "authors": "Bin Lei,Yi Zhang,Shan Zuo,Ali Payani,Caiwen Ding", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in large language models, such as GPT-4, have demonstrated remarkable capabilities in processing standard queries. Despite these advancements, their performance substantially declines on advanced mathematical problems requiring complex, multi-step logical reasoning. To enhance their inferential capabilities, current research has delved into prompt engineering, exemplified by methodologies such as the Tree of Thought and Graph of Thought.\nNonetheless, these existing approaches encounter two significant limitations. Firstly, their effectiveness in tackling complex mathematical problems is somewhat constrained. Secondly, the necessity to design distinct prompts for individual problems hampers their generalizability.\nIn response to these limitations, this paper introduces the Multi-Agent System for Condition Mining (MACM) prompting method. 
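The first stage of RATD ("Retrieval-Augmented Diffusion Models for Time Series Forecasting" above) is an embedding-based nearest-neighbour lookup over a database of historical series; here is a minimal cosine-similarity sketch with made-up embeddings standing in for the paper's actual encoder and database.

```python
import numpy as np

def retrieve_references(query_emb, db_embs, k=3):
    """Return indices of the k database series most similar to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity against every stored series
    return np.argsort(-sims)[:k]       # top-k most relevant references

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 64))       # stand-in embeddings of historical series
query = rng.normal(size=64)
print(retrieve_references(query, db))  # these references would guide the denoising process
```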
It not only resolves intricate mathematical problems but also demonstrates strong generalization capabilities across various mathematical contexts.\nWith the assistance of MACM, the accuracy of GPT-4 Turbo on the most challenging level five mathematical problems in the MATH dataset increases from $\\mathbf{54.68\\\\%} \\text{ to } \\mathbf{76.73\\\\%}$.", "pdf": "https://openreview.net/pdf/361f70e303eefe87ee3e42f46fb2d3d21347df37.pdf"} {"title": "Transformers as Game Players: Provable In-context Game-playing Capabilities of Pre-trained Models", "url": "https://openreview.net/forum?id=pRQmRaonxf", "detail_url": "https://openreview.net/forum?id=pRQmRaonxf", "authors": "Chengshuai Shi,Kun Yang,Jing Yang,Cong Shen", "tags": "NIPS 2024,Poster", "abstract": "The in-context learning (ICL) capability of pre-trained models based on the transformer architecture has received growing interest in recent years. While theoretical understanding has been obtained for ICL in reinforcement learning (RL), the previous results are largely confined to the single-agent setting. This work proposes to further explore the in-context learning capabilities of pre-trained transformer models in competitive multi-agent games, i.e., in-context game-playing (ICGP). Focusing on the classical two-player zero-sum games, theoretical guarantees are provided to demonstrate that pre-trained transformers can provably learn to approximate Nash equilibrium in an in-context manner for both decentralized and centralized learning settings. As a key part of the proof, constructive results are established to demonstrate that the transformer architecture is sufficiently rich to realize celebrated multi-agent game-playing algorithms, in particular, decentralized V-learning and centralized VI-ULCB.", "pdf": "https://openreview.net/pdf/a739b11c92fd5cfc39cc60917a0bafb7c9f5b8cf.pdf"} {"title": "Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration", "url": "https://openreview.net/forum?id=O0nBMRlkc8", "detail_url": "https://openreview.net/forum?id=O0nBMRlkc8", "authors": "Junyang Wang,Haiyang Xu,Haitao Jia,Xi Zhang,Ming Yan,Weizhou Shen,Ji Zhang,Fei Huang,Jitao Sang", "tags": "NIPS 2024,Poster", "abstract": "Mobile device operation tasks are increasingly becoming a popular multi-modal AI application scenario. Current Multi-modal Large Language Models (MLLMs), constrained by their training data, lack the capability to function effectively as operation assistants. Instead, MLLM-based agents, which enhance capabilities through tool invocation, are gradually being applied to this scenario. However, the two major navigation challenges in mobile device operation tasks \u2014 task progress navigation and focus content navigation \u2014 are difficult to effectively solve under the single-agent architecture of existing work. This is due to the overly long token sequences and the interleaved text-image data format, which limit performance. To address these navigation challenges effectively, we propose Mobile-Agent-v2, a multi-agent architecture for mobile device operation assistance. The architecture comprises three agents: planning agent, decision agent, and reflection agent. The planning agent condenses lengthy, interleaved image-text operation histories and screen summaries into a pure-text task progress, which is then passed on to the decision agent. This reduction in context length makes it easier for the decision agent to navigate the task progress. 
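A schematic sketch of the three-agent loop described in the Mobile-Agent-v2 abstract above. Every function below (call_llm, get_screen, execute) is a hypothetical stub, not the Mobile-Agent-v2 API; the sketch only shows how planning, decision, and reflection roles could interleave.

```python
def call_llm(prompt): return "..."         # stand-in for an MLLM call
def get_screen(): return "<screenshot>"     # stand-in for device screen capture
def execute(decision): return "ok"          # stand-in for operation execution

def run_task(task, max_steps=5):
    progress, memory = "", ""               # pure-text task progress + focus-content memory
    for _ in range(max_steps):
        screen = get_screen()
        # Planning agent: condense long interleaved history into pure-text progress.
        progress = call_llm(f"Summarize progress for '{task}': {progress}")
        # Decision agent: act from progress + focus memory + current screen.
        decision = call_llm(f"Task: {task}\nProgress: {progress}\n"
                            f"Memory: {memory}\nScreen: {screen}\nNext action?")
        outcome = execute(decision)
        memory = call_llm(f"Update focus content given: {decision} -> {outcome}")
        # Reflection agent: observe the outcome and flag mistakes for correction.
        if "error" in call_llm(f"Did '{decision}' succeed given {outcome}?"):
            continue
    return progress

run_task("set an alarm for 7am")
```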
To retain focus content, we design a memory unit that is updated with task progress by the decision agent. Additionally, to correct erroneous operations, the reflection agent observes the outcomes of each operation and handles any mistakes accordingly. Experimental results indicate that Mobile-Agent-v2 achieves over a 30% improvement in task completion compared to the single-agent architecture of Mobile-Agent. The code is open-sourced at https://github.com/X-PLUG/MobileAgent.", "pdf": "https://openreview.net/pdf/1884d55b0eac95c14035e897dcbb1c8186bcd65e.pdf"} {"title": "MGF: Mixed Gaussian Flow for Diverse Trajectory Prediction", "url": "https://openreview.net/forum?id=muYhNDlxWc", "detail_url": "https://openreview.net/forum?id=muYhNDlxWc", "authors": "Jiahe Chen,Jinkun Cao,Dahua Lin,Kris M. Kitani,Jiangmiao Pang", "tags": "NIPS 2024,Poster", "abstract": "When predicting future trajectories, normalizing flows with a standard Gaussian prior suffer from weak diversity. \nThe ineffectiveness stems from the conflict between the asymmetric, multi-modal distribution of likely outcomes and the symmetric, single-modal prior distribution and supervision losses.\nInstead, we propose constructing a mixed Gaussian prior for a normalizing flow model for trajectory prediction.\nThe prior is constructed by analyzing the trajectory patterns in the training samples without requiring extra annotations, while showing better expressiveness and being multi-modal and asymmetric.\nBesides diversity, it also provides better controllability for probabilistic trajectory generation.\nWe name our method Mixed Gaussian Flow (MGF). It achieves state-of-the-art performance in the evaluation of both trajectory alignment and diversity on the popular UCY/ETH and SDD datasets. Code is available at https://github.com/mulplue/MGF.", "pdf": "https://openreview.net/pdf/a24b2249847a2068a01c6fa992db6a0aad0d0e19.pdf"} {"title": "Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation", "url": "https://openreview.net/forum?id=RrTjcbcHEH", "detail_url": "https://openreview.net/forum?id=RrTjcbcHEH", "authors": "Istv\u00e1n S\u00e1r\u00e1ndi,Gerard Pons-Moll", "tags": "NIPS 2024,Poster", "abstract": "With the explosive growth of available training data, single-image 3D human modeling is on the verge of a transition to a data-centric paradigm.\nA key to successfully exploiting data scale is to design flexible models that can be supervised from various heterogeneous data sources produced by different researchers or vendors.\nTo this end, we propose a simple yet powerful paradigm for seamlessly unifying different human pose and shape-related tasks and datasets.\nOur formulation is centered on the ability - both at training and test time - to query any arbitrary point of the human volume, and obtain its estimated location in 3D.\nWe achieve this by learning a continuous neural field of body point localizer functions, each of which is a differently parameterized 3D heatmap-based convolutional point localizer (detector).\nFor generating parametric output, we propose an efficient post-processing step for fitting SMPL-family body models to nonparametric joint and vertex predictions.\nWith this approach, we can naturally exploit differently annotated data sources including mesh, 2D/3D skeleton and dense pose, without having to convert between them, and thereby train large-scale 3D human mesh and skeleton estimation models that outperform the state-of-the-art on several public benchmarks including 3DPW, EMDB, EHF, SSP-3D and AGORA 
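The prior construction in MGF above replaces the flow's standard Gaussian with a mixture whose components reflect trajectory patterns; below is a minimal sampling sketch, where the component means are placeholders for pattern centroids that would be mined from training data.

```python
import numpy as np

def sample_mixed_gaussian_prior(means, sigma, n, rng):
    """Draw latents from an (equal-weight) Gaussian mixture instead of N(0, I)."""
    ks = rng.integers(len(means), size=n)          # pick a mixture mode per sample
    return means[ks] + sigma * rng.normal(size=(n, means.shape[1]))

rng = np.random.default_rng(0)
# Placeholder centroids, e.g. "go straight", "turn left", "turn right" patterns.
means = np.array([[0.0, 1.0], [-1.0, 0.5], [1.0, 0.5]])
z = sample_mixed_gaussian_prior(means, sigma=0.2, n=8, rng=rng)
print(z)   # these latents would be pushed through the flow to yield trajectories
```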
by a considerable margin.\nWe release our code and models to foster downstream research.", "pdf": "https://openreview.net/pdf/bb9be9482b7a972c9ff1a24f3d75ee22d6195fdd.pdf"} {"title": "Efficient Prompt Optimization Through the Lens of Best Arm Identification", "url": "https://openreview.net/forum?id=FLNnlfBGMo", "detail_url": "https://openreview.net/forum?id=FLNnlfBGMo", "authors": "Chengshuai Shi,Kun Yang,Zihan Chen,Jundong Li,Jing Yang,Cong Shen", "tags": "NIPS 2024,Poster", "abstract": "The remarkable instruction-following capability of large language models (LLMs) has sparked a growing interest in automatically finding good prompts, i.e., prompt optimization. Most existing works follow the scheme of selecting from a pre-generated pool of candidate prompts. However, these designs mainly focus on the generation strategy, while limited attention has been paid to the selection method. Especially, the cost incurred during the selection (e.g., accessing LLM and evaluating the responses) is rarely explicitly considered. To overcome this limitation, this work provides a principled framework, TRIPLE, to efficiently perform prompt selection under an explicit budget constraint. TRIPLE is built on a novel connection established between prompt optimization and fixed-budget best arm identification (BAI-FB) in multi-armed bandits (MAB); thus, it is capable of leveraging the rich toolbox from BAI-FB systematically and also incorporating unique characteristics of prompt optimization. Extensive experiments on multiple well-adopted tasks using various LLMs demonstrate the remarkable performance improvement of TRIPLE over baselines while satisfying the limited budget constraints. As an extension, variants of TRIPLE are proposed to efficiently select examples for few-shot prompts, also achieving superior empirical performance.", "pdf": "https://openreview.net/pdf/d55bf9078917118b9c52834d084c1245727ed3e9.pdf"} {"title": "Fast Best-of-N Decoding via Speculative Rejection", "url": "https://openreview.net/forum?id=348hfcprUs", "detail_url": "https://openreview.net/forum?id=348hfcprUs", "authors": "Hanshi Sun,Momin Haider,Ruiqi Zhang,Huitao Yang,Jiahao Qiu,Ming Yin,Mengdi Wang,Peter Bartlett,Andrea Zanette", "tags": "NIPS 2024,Poster", "abstract": "The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses are in accordance with human preferences. Prevalent alignment techniques, such as DPO, PPO and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as the state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally not viable. In this work, we introduce Speculative Rejection, a computationally-viable inference-time alignment algorithm. 
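TRIPLE above maps budgeted prompt selection to fixed-budget best-arm identification; sequential halving is the textbook BAI-FB routine one could plug in. The sketch below is illustrative, not necessarily the exact variant used by TRIPLE, and the evaluation oracle is a toy stand-in for a scored LLM call.

```python
import math, random

def sequential_halving(prompts, evaluate, budget):
    """Split the budget over log2(K) rounds, keeping the better half each round.
    `evaluate(p)` scores one LLM call with prompt p (stochastic, in [0, 1])."""
    alive = list(prompts)
    rounds = max(1, math.ceil(math.log2(len(alive))))
    for _ in range(rounds):
        if len(alive) == 1:
            break
        per_arm = max(1, budget // (rounds * len(alive)))
        means = {p: sum(evaluate(p) for _ in range(per_arm)) / per_arm for p in alive}
        alive = sorted(alive, key=means.get, reverse=True)[: max(1, len(alive) // 2)]
    return alive[0]

# Toy oracle: each prompt has a hidden accuracy; evaluations are noisy Bernoullis.
truth = {"p1": 0.55, "p2": 0.70, "p3": 0.62, "p4": 0.40}
best = sequential_halving(truth, lambda p: float(random.random() < truth[p]), budget=400)
print(best)  # usually "p2"
```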
It generates high-scoring responses according to a given reward model, like Best-of-N does, while being 16 to 32 times more computationally efficient.", "pdf": "https://openreview.net/pdf/1185ba27284299162dd748d2582af8def317545b.pdf"} {"title": "Full-Atom Peptide Design with Geometric Latent Diffusion", "url": "https://openreview.net/forum?id=IAQNJUJe8q", "detail_url": "https://openreview.net/forum?id=IAQNJUJe8q", "authors": "Xiangzhe Kong,Yinjun Jia,Wenbing Huang,Yang Liu", "tags": "NIPS 2024,Poster", "abstract": "Peptide design plays a pivotal role in therapeutics, opening brand-new possibilities for leveraging target binding sites that were previously undruggable. Most existing methods are either inefficient or only concerned with the target-agnostic design of 1D sequences. In this paper, we propose a generative model for full-atom Peptide design with Geometric LAtent Diffusion (PepGLAD) given the binding site. We first establish a benchmark consisting of both 1D sequences and 3D structures from the Protein Data Bank (PDB) and literature for systematic evaluation. We then identify two major challenges of leveraging current diffusion-based models for peptide design: the full-atom geometry and the variable binding geometry. To tackle the first challenge, PepGLAD derives a variational autoencoder that first encodes full-atom residues of variable size into fixed-dimensional latent representations, and then decodes back to the residue space after conducting the diffusion process in the latent space. For the second issue, PepGLAD explores a receptor-specific affine transformation to convert the 3D coordinates into a shared standard space, enabling better generalization ability across different binding shapes. Experimental results show that our method not only improves diversity and binding affinity significantly in the task of sequence-structure co-design, but also excels at recovering reference structures for binding conformation generation.", "pdf": "https://openreview.net/pdf/69729ae7bb5ba90164d10c5cefa3f252d78a5c65.pdf"} {"title": "3D Gaussian Rendering Can Be Sparser: Efficient Rendering via Learned Fragment Pruning", "url": "https://openreview.net/forum?id=IVqzbuLfoL", "detail_url": "https://openreview.net/forum?id=IVqzbuLfoL", "authors": "Zhifan Ye,Chenxi Wan,Chaojian Li,Jihoon Hong,Sixu Li,Leshu Li,Yongan Zhang,Yingyan Celine Lin", "tags": "NIPS 2024,Poster", "abstract": "3D Gaussian splatting has recently emerged as a promising technique for novel view synthesis from sparse image sets, yet comes at the cost of requiring millions of 3D Gaussian primitives to reconstruct each 3D scene. This largely limits its applicability to resource-constrained devices and applications.\nDespite advances in Gaussian pruning techniques that aim to remove individual 3D Gaussian primitives, the significant reduction in primitives often fails to translate into commensurate increases in rendering speed, impeding efficiency and practical deployment. We identify that this discrepancy arises due to the overlooked impact of fragment count per Gaussian (i.e., the number of pixels each Gaussian is projected onto). To bridge this gap and meet the growing demands for efficient on-device 3D Gaussian rendering, we propose fragment pruning, an orthogonal enhancement to existing pruning methods that can significantly accelerate rendering by selectively pruning fragments within each Gaussian. 
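A schematic sketch of Speculative Rejection as described in the abstract above: start many candidate generations, score partial outputs with the reward model, and stop the unpromising ones early. `generate_tokens` and `reward` are hypothetical stubs, and the schedule parameters are illustrative only.

```python
import random

def generate_tokens(prefix, n): return prefix + "x" * n       # stand-in decoder
def reward(text): return random.random() + len(text) * 0.01   # stand-in reward model

def speculative_rejection(prompt, n_candidates=16, chunk=32, rounds=3, keep=0.5):
    candidates = [prompt] * n_candidates
    for _ in range(rounds):
        candidates = [generate_tokens(c, chunk) for c in candidates]
        scored = sorted(candidates, key=reward, reverse=True)
        candidates = scored[: max(1, int(len(scored) * keep))]  # reject low scorers early
    # Best of the survivors -- cheaper than decoding all N candidates to completion.
    return max(candidates, key=reward)

print(len(speculative_rejection("Q: ...\nA:")))
```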
Our pruning framework dynamically optimizes the pruning threshold for each Gaussian, markedly improving rendering speed and quality. Extensive experiments in both static and dynamic scenes validate the effectiveness of our approach. For instance, by integrating our fragment pruning technique with state-of-the-art Gaussian pruning methods, we achieve up to a 1.71$\\times$ speedup on an edge GPU device, the Jetson Orin NX, and enhance rendering quality by an average of 0.16 PSNR on the Tanks\\&Temples dataset. Our code is available at https://github.com/GATECH-EIC/Fragment-Pruning.", "pdf": "https://openreview.net/pdf/0f4648bcb47f776c689bfd3c5d96e9f9131c9021.pdf"} {"title": "Dimension-free Private Mean Estimation for Anisotropic Distributions", "url": "https://openreview.net/forum?id=kRwQCAIA7z", "detail_url": "https://openreview.net/forum?id=kRwQCAIA7z", "authors": "Yuval Dagan,Michael Jordan,Xuelin Yang,Lydia Zakynthinou,Nikita Zhivotovskiy", "tags": "NIPS 2024,Poster", "abstract": "We present differentially private algorithms for high-dimensional mean estimation. Previous private estimators on distributions over $\\mathbb{R}^d$ suffer from a curse of dimensionality, as they require $\\Omega(d^{1/2})$ samples to achieve non-trivial error, even in cases where $O(1)$ samples suffice without privacy. This rate is unavoidable when the distribution is isotropic, namely, when the covariance is a multiple of the identity matrix. Yet, real-world data is often highly anisotropic, with signals concentrated on a small number of principal components. We develop estimators that are appropriate for such signals---our estimators are $(\\varepsilon,\\delta)$-differentially private and have sample complexity that is dimension-independent for anisotropic subgaussian distributions. Given $n$ samples from a distribution with known covariance-proxy $\\Sigma$ and unknown mean $\\mu$, we present an estimator $\\hat{\\mu}$ that achieves error, $\\|\\hat{\\mu}-\\mu\\|_2\\leq \\alpha$, as long as $n\\gtrsim \\text{tr}(\\Sigma)/\\alpha^2+ \\text{tr}(\\Sigma^{1/2})/(\\alpha\\varepsilon)$. We show that this is the optimal sample complexity for this task up to logarithmic factors. Moreover, for the case of unknown covariance, we present an algorithm whose sample complexity has improved dependence on the dimension, from $d^{1/2}$ to $d^{1/4}$.", "pdf": "https://openreview.net/pdf/90919b8b60143f0e171dd5310efcb65e20de7354.pdf"} {"title": "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents", "url": "https://openreview.net/forum?id=Nf4MHF1pi5", "detail_url": "https://openreview.net/forum?id=Nf4MHF1pi5", "authors": "Wenkai Yang,Xiaohan Bi,Yankai Lin,Sishuo Chen,Jie Zhou,Xu Sun", "tags": "NIPS 2024,Poster", "abstract": "Driven by the rapid development of Large Language Models (LLMs), LLM-based agents have been developed to handle various real-world applications, including finance, healthcare, and shopping, etc. It is crucial to ensure the reliability and security of LLM-based agents during applications. However, the safety issues of LLM-based agents are currently under-explored. In this work, we take the first step to investigate one of the typical safety threats, backdoor attack, to LLM-based agents. We first formulate a general framework of agent backdoor attacks, then we present a thorough analysis of different forms of agent backdoor attacks. 
Specifically, compared with traditional backdoor attacks on LLMs that are only able to manipulate the user inputs and model outputs, agent backdoor attacks exhibit more diverse and covert forms: (1) From the perspective of the final attacking outcomes, the agent backdoor attacker can not only choose to manipulate the final output distribution, but can also introduce malicious behavior only in an intermediate reasoning step, while keeping the final output correct. (2) Furthermore, the former category can be divided into two subcategories based on trigger locations, in which the backdoor trigger can either be hidden in the user query or appear in an intermediate observation returned by the external environment. We implement the above variations of agent backdoor attacks on two typical agent tasks, including web shopping and tool utilization. Extensive experiments show that LLM-based agents suffer severely from backdoor attacks and that such backdoor vulnerability cannot be easily mitigated by current textual backdoor defense algorithms. This indicates an urgent need for further research on the development of targeted defenses against backdoor attacks on LLM-based agents. Warning: This paper may contain biased content.", "pdf": "https://openreview.net/pdf/14dda2e2e067d5c1fc5179293fd0d4072276f210.pdf"} {"title": "Beyond Accuracy: Tracking more like Human via Visual Search", "url": "https://openreview.net/forum?id=LezAEImfoc", "detail_url": "https://openreview.net/forum?id=LezAEImfoc", "authors": "Dailing Zhang,Shiyu Hu,Xiaokun Feng,Xuchen Li,Meiqi Wu,Jing Zhang,Kaiqi Huang", "tags": "NIPS 2024,Poster", "abstract": "Human visual search ability enables efficient and accurate tracking of an arbitrary moving target, which is a significant research interest in cognitive neuroscience. The recently proposed Central-Peripheral Dichotomy (CPD) theory sheds light on how humans effectively process visual information and track moving targets in complex environments. However, existing visual object tracking algorithms still fall short of matching human performance in maintaining tracking over time, particularly in complex scenarios requiring robust visual search skills. These scenarios often involve Spatio-Temporal Discontinuities (i.e., STDChallenge), prevalent in long-term tracking and global instance tracking. To address this issue, we conduct research from a human-like modeling perspective: (1) Inspired by the CPD, we propose a new tracker named CPDTrack to achieve human-like visual search ability. The central vision of CPDTrack leverages the spatio-temporal continuity of videos to introduce priors and enhance localization precision, while the peripheral vision improves global awareness and detects object movements. (2) To further evaluate and analyze STDChallenge, we create the STDChallenge Benchmark. In addition, by incorporating human subjects, we establish a human baseline, creating a high-quality environment specifically designed to assess trackers\u2019 visual search abilities in videos across STDChallenge. (3) Our extensive experiments demonstrate that the proposed CPDTrack not only achieves state-of-the-art (SOTA) performance in this challenge but also narrows the behavioral differences with humans. Additionally, CPDTrack exhibits strong generalizability across various challenging benchmarks. In summary, our research underscores the importance of human-like modeling and offers strategic insights for advancing intelligent visual target tracking. 
Code and models are available at https://github.com/ZhangDailing8/CPDTrack.", "pdf": "https://openreview.net/pdf/11960c0cf6a34cdc6a956f476a0fb526022a4514.pdf"} {"title": "Neuc-MDS: Non-Euclidean Multidimensional Scaling Through Bilinear Forms", "url": "https://openreview.net/forum?id=8W5ADJOKcv", "detail_url": "https://openreview.net/forum?id=8W5ADJOKcv", "authors": "Chengyuan Deng,Jie Gao,Kevin Lu,Feng Luo,Hongbin Sun,Cheng Xin", "tags": "NIPS 2024,Poster", "abstract": "We introduce \\textbf{N}on-\\textbf{Euc}lidean-\\textbf{MDS} (Neuc-MDS), which extends Multidimensional Scaling (MDS) to generate outputs that can be non-Euclidean and non-metric. The main idea is to generalize the inner product to other symmetric bilinear forms to utilize the negative eigenvalues of dissimilarity Gram matrices. Neuc-MDS efficiently optimizes the choice of (both positive and negative) eigenvalues of the dissimilarity Gram matrix to reduce STRESS, the sum of squared pairwise errors. We provide an in-depth error analysis and proofs of the optimality in minimizing lower bounds of STRESS. We demonstrate Neuc-MDS's ability to address limitations of classical MDS raised by prior research, and test it on various synthetic and real-world datasets in comparison with both linear and non-linear dimension reduction methods.", "pdf": "https://openreview.net/pdf/39cf858fefc937ab29191f4ab0dc60436e3a517a.pdf"} {"title": "Is Cross-validation the Gold Standard to Estimate Out-of-sample Model Performance?", "url": "https://openreview.net/forum?id=4lGPSbGe11", "detail_url": "https://openreview.net/forum?id=4lGPSbGe11", "authors": "Garud Iyengar,Henry Lam,Tianyu Wang", "tags": "NIPS 2024,Poster", "abstract": "Cross-Validation (CV) is the default choice for estimating the out-of-sample performance of machine learning models. Despite its wide usage, its statistical benefits have remained only half-understood, especially in challenging nonparametric regimes. In this paper we fill in this gap and show that, in terms of estimating out-of-sample performance, for a wide spectrum of models, CV does not statistically outperform the simple ``plug-in'' approach where one reuses training data for testing evaluation. Specifically, in terms of both the asymptotic bias and coverage accuracy of the associated interval for out-of-sample evaluation, $K$-fold CV provably cannot outperform plug-in regardless of the rate at which the parametric or nonparametric models converge. Leave-one-out CV can have a smaller bias as compared to plug-in; however, this bias improvement is negligible compared to the variability of the evaluation, and in some important cases leave-one-out again does not outperform plug-in once this variability is taken into account. We obtain our theoretical comparisons via a novel higher-order Taylor analysis that dissects the limit theorems of testing evaluations, which applies to model classes that are not amenable to previously known sufficient conditions. 
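The core move in Neuc-MDS above is to keep large-magnitude eigenvalues of the double-centered dissimilarity Gram matrix regardless of sign, embedding points under a bilinear form rather than an inner product. A compact sketch follows; magnitude-based selection is a simplification of the paper's optimized eigenvalue choice.

```python
import numpy as np

def neuc_mds(D, dim):
    """D: (n, n) squared-dissimilarity matrix. Returns coordinates + eigen signature."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ D @ J                     # Gram matrix via double centering
    w, V = np.linalg.eigh(G)
    idx = np.argsort(-np.abs(w))[:dim]       # keep largest |eigenvalues|, either sign
    X = V[:, idx] * np.sqrt(np.abs(w[idx]))  # coordinates under a bilinear form
    return X, np.sign(w[idx])                # signature marks the "negative" axes

rng = np.random.default_rng(0)
P = rng.normal(size=(20, 5))
D = ((P[:, None] - P[None, :]) ** 2).sum(-1)
X, sig = neuc_mds(D, dim=3)
print(X.shape, sig)   # with Euclidean input the signature is all +1
```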
Our numerical results demonstrate that plug-in indeed performs no worse than CV in estimating model performance across a wide range of examples.", "pdf": "https://openreview.net/pdf/189b1a44f32cdb8dc7985f8314688df2d9804e5f.pdf"} {"title": "Target-Guided Adversarial Point Cloud Transformer Towards Recognition Against Real-world Corruptions", "url": "https://openreview.net/forum?id=FcUyz33OED", "detail_url": "https://openreview.net/forum?id=FcUyz33OED", "authors": "Jie Wang,Tingfa Xu,Lihe Ding,Jianan Li", "tags": "NIPS 2024,Poster", "abstract": "Achieving robust 3D perception in the face of corrupted data presents a challenging hurdle within 3D vision research. Contemporary transformer-based point cloud recognition models, albeit advanced, tend to overfit to specific patterns, consequently undermining their robustness against corruption. In this work, we introduce the Target-Guided Adversarial Point Cloud Transformer, termed APCT, a novel architecture designed to augment global structure capture through an adversarial feature erasing mechanism predicated on patterns discerned at each step during training. Specifically, APCT integrates an Adversarial Significance Identifier and a Target-guided Promptor. The Adversarial Significance Identifier is tasked with discerning token significance by integrating global contextual analysis, utilizing a structural salience index algorithm alongside an auxiliary supervisory mechanism. The Target-guided Promptor is responsible for accentuating the propensity for token discard within the self-attention mechanism, utilizing the value derived above, consequently directing the model's attention towards alternative segments in subsequent stages. By iteratively applying this strategy in multiple steps during training, the network progressively identifies and integrates an expanded array of object-associated patterns. Extensive experiments demonstrate that our method achieves state-of-the-art results on multiple corruption benchmarks.", "pdf": "https://openreview.net/pdf/39d342b992430643e7c7bb388857230d156519e2.pdf"} {"title": "Faster Accelerated First-order Methods for Convex Optimization with Strongly Convex Function Constraints", "url": "https://openreview.net/forum?id=pG380vLYRU", "detail_url": "https://openreview.net/forum?id=pG380vLYRU", "authors": "Zhenwei Lin,Qi Deng", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we introduce faster accelerated primal-dual algorithms for minimizing a convex function subject to strongly convex function constraints. \nPrior to our work, the best complexity bound was $\\mathcal{O}(1/{\\varepsilon})$, regardless of the strong convexity of the constraint function.\nIt is unclear whether the strong convexity assumption can enable even better convergence results. \nTo address this issue, we have developed novel techniques to progressively estimate the strong convexity of the Lagrangian function.\nOur approach, for the first time, effectively leverages the constraint's strong convexity, obtaining an improved complexity of $\\mathcal{O}(1/\\sqrt{\\varepsilon})$. This rate matches the complexity lower bound for strongly-convex-concave saddle point optimization and is therefore order-optimal.\nWe show the superior performance of our methods in sparsity-inducing constrained optimization, notably Google's personalized PageRank problem. 
Furthermore, we show that a restarted version of the proposed methods can effectively identify the optimal solution's sparsity pattern within a finite number of steps, a result that appears to have independent significance.", "pdf": "https://openreview.net/pdf/39d2a679b231a9c79f5e0031e24c97e052f8d1b3.pdf"} {"title": "Exactly Minimax-Optimal Locally Differentially Private Sampling", "url": "https://openreview.net/forum?id=Dr7UarlhVE", "detail_url": "https://openreview.net/forum?id=Dr7UarlhVE", "authors": "Hyun-Young Park,Shahab Asoodeh,Si-Hyeon Lee", "tags": "NIPS 2024,Poster", "abstract": "The sampling problem under local differential privacy has recently been studied with potential applications to generative models, but a fundamental analysis of its privacy-utility trade-off (PUT) remains incomplete. In this work, we define the fundamental PUT of private sampling in the minimax sense, using the $f$-divergence between original and sampling distributions as the utility measure. We characterize the exact PUT for both finite and continuous data spaces under some mild conditions on the data distributions, and propose sampling mechanisms that are universally optimal for all $f$-divergences. Our numerical experiments demonstrate the superiority of our mechanisms over baselines, in terms of theoretical utilities for finite data space and of empirical utilities for continuous data space.", "pdf": "https://openreview.net/pdf/4db55857d23f0b9e879bd411410654e42341a38f.pdf"} {"title": "Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation", "url": "https://openreview.net/forum?id=x4Kk4FxLs3", "detail_url": "https://openreview.net/forum?id=x4Kk4FxLs3", "authors": "Lingxiao Zhao,Xueying Ding,Leman Akoglu", "tags": "NIPS 2024,Poster", "abstract": "Graph generation has been dominated by autoregressive models due to their simplicity and effectiveness, despite their sensitivity to ordering. Yet diffusion models have garnered increasing attention, as they offer comparable performance while being permutation-invariant. Current graph diffusion models generate graphs in a one-shot fashion, but they require extra features and thousands of denoising steps to achieve optimal performance. We introduce PARD, a Permutation-invariant AutoRegressive Diffusion model that integrates diffusion models with autoregressive methods. PARD harnesses the effectiveness and efficiency of the autoregressive model while maintaining permutation invariance without ordering sensitivity. Specifically, we show that contrary to sets, elements in a graph are not entirely unordered and there is a unique partial order for nodes and edges. With this partial order, PARD generates a graph in a block-by-block, autoregressive fashion, where each block\u2019s probability is conditionally modeled by a shared diffusion model with an equivariant network. To ensure efficiency while being expressive, we further propose a higher-order graph transformer, which integrates the transformer with PPGN (Maron et al., 2019). Like GPT, we extend the higher-order graph transformer to support parallel training of all blocks. 
Without any extra features, PARD achieves state-of-the-art performance on molecular and non-molecular datasets, and scales to large datasets like MOSES containing 1.9M molecules.", "pdf": "https://openreview.net/pdf/c1a62f1c53db519f90d14aed3ca68d4ed80a9146.pdf"} {"title": "Robust Reinforcement Learning with General Utility", "url": "https://openreview.net/forum?id=8Uyfr5TcNR", "detail_url": "https://openreview.net/forum?id=8Uyfr5TcNR", "authors": "Ziyi Chen,Yan Wen,Zhengmian Hu,Heng Huang", "tags": "NIPS 2024,Poster", "abstract": "The Reinforcement Learning (RL) problem with general utility is a powerful decision-making framework that covers standard RL with cumulative cost, exploration problems, and demonstration learning. Existing works on RL with general utility do not consider robustness under environmental perturbations, which is important for adapting RL systems to real-world environments that differ from the training environment. To train a robust policy, we propose a robust RL framework with general utility, which subsumes many existing RL frameworks including RL, robust RL, RL with general utility, constrained RL, robust constrained RL, pure exploration, robust entropy regularized RL, etc. We then focus on popular convex utility functions, under which our proposed learning framework becomes a challenging nonconvex-nonconcave minimax optimization problem; we design a two-phase stochastic policy-gradient-type algorithm and obtain its sample complexity result for gradient convergence. Furthermore, for convex utility on a widely used polyhedral ambiguity set, we design an algorithm and obtain its convergence rate to a globally optimal solution.", "pdf": "https://openreview.net/pdf/bd1e309302d23d5c27fb998972d826f00ff8c3cc.pdf"} {"title": "Online Estimation via Offline Estimation: An Information-Theoretic Framework", "url": "https://openreview.net/forum?id=sks7x4I8Bh", "detail_url": "https://openreview.net/forum?id=sks7x4I8Bh", "authors": "Dylan J Foster,Yanjun Han,Jian Qian,Alexander Rakhlin", "tags": "NIPS 2024,Poster", "abstract": "The classical theory of statistical estimation aims to estimate a parameter of interest under data generated from a fixed design (''offline estimation''), while the contemporary theory of online learning provides algorithms for estimation under adaptively chosen covariates (''online estimation''). Motivated by connections between estimation and interactive decision making, we ask: is it possible to convert offline estimation algorithms into online estimation algorithms in a black-box fashion? We investigate this question from an information-theoretic perspective by introducing a new framework, Oracle-Efficient Online Estimation (OEOE), where the learner can only interact with the data stream indirectly through a sequence of offline estimators produced by a black-box algorithm operating on the stream. Our main results settle the statistical and computational complexity of online estimation in this framework.\n\n $\\bullet$ Statistical complexity. We show that information-theoretically, there exist algorithms that achieve near-optimal online estimation error via black-box offline estimation oracles, and give a nearly-tight characterization for minimax rates in the OEOE framework.\n\n $\\bullet$ Computational complexity. 
We show that the guarantees above cannot be achieved in a computationally efficient fashion in general, but give a refined characterization for the special case of conditional density estimation: computationally efficient online estimation via black-box offline estimation is possible whenever it is possible via unrestricted algorithms.\n\nFinally, we apply our results to give offline oracle-efficient algorithms for interactive decision making.", "pdf": "https://openreview.net/pdf/b8cfbc277ea416f34c48378cb8a72149176fc155.pdf"} {"title": "Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models", "url": "https://openreview.net/forum?id=G0v0TxX01N", "detail_url": "https://openreview.net/forum?id=G0v0TxX01N", "authors": "Jiacheng Ye,Shansan Gong,Liheng Chen,Lin Zheng,Jiahui Gao,Han Shi,Chuan Wu,Xin Jiang,Zhenguo Li,Wei Bi,Lingpeng Kong", "tags": "NIPS 2024,Poster", "abstract": "Recently, diffusion models have garnered significant interest in the field of text processing due to their many potential advantages compared to conventional autoregressive models.\nIn this work, we propose Diffusion-of-Thought (DoT), a novel approach that integrates diffusion models with Chain-of-Thought, a well-established technique for improving the reasoning ability of autoregressive language models. In contrast to autoregressive language models that make decisions in a left-to-right, token-by-token manner, DoT allows reasoning steps to diffuse over time through a diffusion language model and offers greater flexibility in trading-off computation for reasoning performance. Our experimental results demonstrate the effectiveness of DoT in multi-digit multiplication, boolean logic, and grade school math problems. In addition to that, DoT showcases promising self-correction abilities and benefits from existing reasoning-enhancing techniques like self-consistency decoding. Our findings contribute to the understanding and development of reasoning with diffusion language models.", "pdf": "https://openreview.net/pdf/c87cdf6b6e90f2c3f736be50639670dba4245f12.pdf"} {"title": "No-Regret Bandit Exploration based on Soft Tree Ensemble Model", "url": "https://openreview.net/forum?id=cKKXBhyijL", "detail_url": "https://openreview.net/forum?id=cKKXBhyijL", "authors": "Shogo Iwazaki,Shinya Suzumura", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel stochastic bandit algorithm that employs reward estimates using a tree ensemble model. Specifically, our focus is on a soft tree model, a variant of the conventional decision tree that has undergone both practical and theoretical scrutiny in recent years. By deriving several non-trivial properties of soft trees, we extend the existing analytical techniques used for neural bandit algorithms to our soft tree-based algorithm. We demonstrate that our algorithm achieves a smaller cumulative regret compared to the existing ReLU-based neural bandit algorithms. 
We also show that this advantage comes with a trade-off: the hypothesis space of the soft tree ensemble model is more constrained than that of a ReLU-based neural network.", "pdf": "https://openreview.net/pdf/5f22cbcc4e1f4f15297ff48ae857d328731de108.pdf"} {"title": "Transfer Learning for Diffusion Models", "url": "https://openreview.net/forum?id=6emETARnWi", "detail_url": "https://openreview.net/forum?id=6emETARnWi", "authors": "Yidong Ouyang,Liyan Xie,Hongyuan Zha,Guang Cheng", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models, a specific type of generative model, have achieved unprecedented performance in recent years and consistently produce high-quality synthetic samples. A critical prerequisite for their notable success lies in the presence of a substantial number of training samples, which can be impractical in real-world applications due to high collection costs or associated risks. Consequently, various finetuning and regularization approaches have been proposed to transfer knowledge from existing pre-trained models to specific target domains with limited data. This paper introduces the Transfer Guided Diffusion Process (TGDP), a novel approach distinct from conventional finetuning and regularization methods. \nWe prove that the optimal diffusion model for the target domain integrates pre-trained diffusion models on the source domain with additional guidance from a domain classifier. \nWe further extend TGDP to a conditional version for modeling the joint distribution of data and its corresponding labels, together with two additional regularization terms to enhance the model performance. We validate the effectiveness of TGDP on both simulated and real-world datasets.", "pdf": "https://openreview.net/pdf/b69df4de4e1f7e40fdc5f172023f56acde8ecf7a.pdf"} {"title": "Clustering in Causal Attention Masking", "url": "https://openreview.net/forum?id=OiVxYf9trg", "detail_url": "https://openreview.net/forum?id=OiVxYf9trg", "authors": "Nikita Karagodin,Yury Polyanskiy,Philippe Rigollet", "tags": "NIPS 2024,Poster", "abstract": "This work presents a modification of the self-attention dynamics proposed in Geshkovski et al to better reflect the practically relevant, causally masked attention used in transformer architectures for generative AI. This modification translates into an interacting particle system that cannot be interpreted as a mean-field gradient flow. Despite this loss of structure, we significantly strengthen the results of Geshkovski et al in this context: While previous rigorous results focused on cases where all three matrices (key, query, and value) were scaled identities, we prove asymptotic convergence to a single cluster for arbitrary key-query matrices and value matrix equal to the identity.\nAdditionally, we establish a connection to the classical R\\'enyi parking problem from combinatorial geometry to make initial theoretical steps towards demonstrating the existence of meta-stable states.", "pdf": "https://openreview.net/pdf/1361fab0c43791b9c9dcfb0a70e718f4ecb7d356.pdf"} {"title": "Active Set Ordering", "url": "https://openreview.net/forum?id=GkJbXpd3wM", "detail_url": "https://openreview.net/forum?id=GkJbXpd3wM", "authors": "Quoc Phong Nguyen,Sunil Gupta,Svetha Venkatesh,Bryan Kian Hsiang Low,Patrick Jaillet", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we formalize the active set ordering problem, which involves actively discovering a set of inputs based on their orderings determined by expensive evaluations of a blackbox function. 
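A soft tree, the reward-estimate model underlying the bandit algorithm in "No-Regret Bandit Exploration based on Soft Tree Ensemble Model" above, routes each input to every leaf with a product of sigmoid gate probabilities rather than hard splits. Below is a minimal depth-d forward pass with random parameters, illustrative only.

```python
import numpy as np

def soft_tree_predict(x, W, b, leaf_vals):
    """Depth-d soft decision tree: W, b hold one linear split per internal node
    (perfect binary tree, breadth-first order); leaf_vals has 2**d entries."""
    depth = int(np.log2(len(leaf_vals)))
    probs = np.ones(len(leaf_vals))
    for leaf in range(len(leaf_vals)):
        node = 0
        for level in range(depth):
            go_right = (leaf >> (depth - 1 - level)) & 1
            p_right = 1.0 / (1.0 + np.exp(-(W[node] @ x + b[node])))  # soft routing
            probs[leaf] *= p_right if go_right else (1.0 - p_right)
            node = 2 * node + 1 + go_right      # descend to the chosen child
    return probs @ leaf_vals                    # expectation over soft leaf assignment

rng = np.random.default_rng(0)
d, depth = 4, 3
W, b = rng.normal(size=(2 ** depth - 1, d)), rng.normal(size=2 ** depth - 1)
print(soft_tree_predict(rng.normal(size=d), W, b, rng.normal(size=2 ** depth)))
```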
We then propose the mean prediction (MP) algorithm and theoretically analyze it in terms of the regret of predicted pairwise orderings between inputs. Notably, as a special case of this framework, we can cast Bayesian optimization as an active set ordering problem by recognizing that maximizers can be identified solely by comparison rather than by precisely estimating the function evaluations. As a result, we are able to construct the popular Gaussian process upper confidence bound (GP-UCB) algorithm through the lens of ordering with several nuanced insights. We empirically validate the performance of our proposed solution using various synthetic functions and real-world datasets.", "pdf": "https://openreview.net/pdf/a84bb38a2dbbe31a1fdccd16481727e5c72a82a0.pdf"} {"title": "HGDL: Heterogeneous Graph Label Distribution Learning", "url": "https://openreview.net/forum?id=OwguhIAh8R", "detail_url": "https://openreview.net/forum?id=OwguhIAh8R", "authors": "Yufei Jin,Heng Lian,Yi He,Xingquan Zhu", "tags": "NIPS 2024,Poster", "abstract": "Label Distribution Learning (LDL) has been extensively studied in IID data applications such as computer vision, thanks to its more generic setting over single-label and multi-label classification. This paper advances LDL into graph domains and aims to tackle a novel and fundamental heterogeneous graph label distribution learning (HGDL) problem. We argue that the graph heterogeneity reflected in node types, node attributes, and neighborhood structures can impose significant challenges for generalizing LDL onto graphs. To address the challenges, we propose a new learning framework with two key components: 1) proactive graph topology homogenization, and 2) a topology and content consistency-aware graph transformer. Specifically, the former learns optimal information aggregation between meta-paths, so that the node heterogeneity can be proactively addressed prior to the succeeding embedding learning; the latter leverages an attention mechanism to learn consistency between meta-path and node attributes, allowing network topology and nodal attributes to be equally emphasized during the label distribution learning. By using KL-divergence and additional constraints, HGDL delivers an end-to-end solution for learning and predicting label distributions for nodes. Both theoretical and empirical studies substantiate the effectiveness of our HGDL approach. Our code and datasets are available at https://github.com/Listener-Watcher/HGDL.", "pdf": "https://openreview.net/pdf/c98fe7ec6e30f7f8475be8f25e8e979518ad86be.pdf"} {"title": "Compressing Large Language Models using Low Rank and Low Precision Decomposition", "url": "https://openreview.net/forum?id=lkx3OpcqSZ", "detail_url": "https://openreview.net/forum?id=lkx3OpcqSZ", "authors": "Rajarshi Saha,Naomi Sagan,Varun Srivastava,Andrea Goldsmith,Mert Pilanci", "tags": "NIPS 2024,Poster", "abstract": "The prohibitive sizes of Large Language Models (LLMs) today make it difficult to deploy them on memory-constrained edge devices. This work introduces $\\rm CALDERA$ -- a new post-training LLM compression algorithm that harnesses the inherent low-rank structure of a weight matrix $\\mathbf{W}$ by approximating it via a low-rank, low-precision decomposition as $\\mathbf{W} \\approx \\mathbf{Q} + \\mathbf{L}\\mathbf{R}$. Here, $\\mathbf{L}$ and $\\mathbf{R}$ are low-rank factors, and the entries of $\\mathbf{Q}$, $\\mathbf{L}$ and $\\mathbf{R}$ are quantized. 
The model is compressed by substituting each layer with its $\\mathbf{Q} + \\mathbf{L}\\mathbf{R}$ decomposition, and the zero-shot performance of the compressed model is evaluated. Additionally, $\\mathbf{L}$ and $\\mathbf{R}$ are readily amenable to low-rank adaptation, consequently enhancing the zero-shot performance. $\\rm CALDERA$ obtains this decomposition by formulating it as an optimization problem $\\min_{\\mathbf{Q},\\mathbf{L},\\mathbf{R}}\\lVert(\\mathbf{Q} + \\mathbf{L}\\mathbf{R} - \\mathbf{W})\\mathbf{X}^\\top\\rVert_{\\rm F}^2$, where $\\mathbf{X}$ is the calibration data, and $\\mathbf{Q}, \\mathbf{L}, \\mathbf{R}$ are constrained to be representable using low-precision formats. Theoretical upper bounds on the approximation error of $\\rm CALDERA$ are established using a rank-constrained regression framework, and the tradeoff between compression ratio and model performance is studied by analyzing the impact of target rank and quantization bit budget. Results illustrate that LlaMa-$2$ $7$B/$13$B/$70$B and LlaMa-$3$ $8$B models compressed using $\\rm CALDERA$ outperform existing post-training LLM compression techniques in the regime of less than $2.5$ bits per parameter.", "pdf": "https://openreview.net/pdf/2b6005c971c3343b98f66b536c29add85a496414.pdf"} {"title": "Efficient Contextual LLM Cascades through Budget-Constrained Policy Learning", "url": "https://openreview.net/forum?id=aDQlAz09dS", "detail_url": "https://openreview.net/forum?id=aDQlAz09dS", "authors": "Xuechen Zhang,Zijian Huang,Ege Onur Taga,Carlee Joe-Wong,Samet Oymak,Jiasi Chen", "tags": "NIPS 2024,Poster", "abstract": "Recent successes in natural language processing have led to the proliferation of large language models (LLMs) by multiple providers. Each LLM offering has different inference accuracy, monetary cost, and latency, and their accuracy further depends on the exact wording of the question (i.e., the specific prompt). At the same time, users often have a limit on monetary budget and latency to answer all their questions, and they do not know which LLMs to choose for each question to meet their accuracy and long term budget requirements. To navigate this rich design space, we propose TREACLE (Thrifty Reasoning via Context-Aware LLM and Prompt Selection), a reinforcement learning policy that jointly selects the model and prompting scheme while respecting the user's monetary cost and latency constraints. TREACLE uses the problem context, including question text embeddings (reflecting the type or difficulty of a query) and the response history (reflecting the consistency of previous responses) to make smart decisions. Our evaluations on standard reasoning datasets (GSM8K, CSQA, and LLC) with various LLMs and prompts show that TREACLE enables cost savings of up to 85% compared to baselines, while maintaining high accuracy. Importantly, it provides the user with the ability to gracefully trade off accuracy for cost.", "pdf": "https://openreview.net/pdf/6edc8a474ffa7f439968f38dd2ced40f203ae8db.pdf"} {"title": "TinyLUT: Tiny Look-Up Table for Efficient Image Restoration at the Edge", "url": "https://openreview.net/forum?id=tN0xnYPLt6", "detail_url": "https://openreview.net/forum?id=tN0xnYPLt6", "authors": "Huanan LI,Juntao Guan,Lai Rui,Sijun Ma,Lin Gu,Zhangming Zhu", "tags": "NIPS 2024,Poster", "abstract": "Look-up table (LUT)-based methods have recently shown enormous potential in image restoration tasks and are capable of significantly accelerating inference. 
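A stripped-down sketch of the $W \approx Q + LR$ decomposition behind CALDERA above, alternating a uniform quantizer for $Q$ with a truncated SVD for the low-rank part. It drops the calibration term $\lVert(Q+LR-W)X^\top\rVert_F$ and the quantization of $L$ and $R$, so it is illustrative only, not the paper's algorithm.

```python
import numpy as np

def quantize(M, bits=2):
    """Uniform symmetric quantizer (a stand-in for the paper's quantizers)."""
    scale = np.abs(M).max() / (2 ** (bits - 1))
    q = np.clip(np.round(M / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def low_rank_low_precision(W, rank=8, bits=2, iters=5):
    """Alternate Q = quant(W - LR) and (L, R) = truncated SVD of (W - Q)."""
    L = np.zeros((W.shape[0], rank)); R = np.zeros((rank, W.shape[1]))
    for _ in range(iters):
        Q = quantize(W - L @ R, bits)
        U, s, Vt = np.linalg.svd(W - Q, full_matrices=False)
        L, R = U[:, :rank] * s[:rank], Vt[:rank]
        # CALDERA additionally quantizes L and R and weights the error by
        # calibration activations X -- both omitted here for brevity.
    return Q, L, R

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)) @ np.diag(1 / np.arange(1, 65))  # decaying spectrum
Q, L, R = low_rank_low_precision(W)
print(np.linalg.norm(W - (Q + L @ R)) / np.linalg.norm(W))     # relative error
```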
However, the size of the LUT grows exponentially with the convolution kernel size, creating a storage bottleneck for its broader application on edge devices. Here, we address this storage explosion challenge to improve the capacity of LUTs to map complex CNN models. We introduce an innovative separable mapping strategy to achieve over $7\times$ storage reduction, transforming the storage from exponential dependence on kernel size to a linear relationship. Moreover, we design a dynamic discretization mechanism to decompose the activation and compress the quantization scale, which further shrinks the LUT storage by $4.48\times$. As a result, the storage requirement of our proposed TinyLUT is around 4.1\% of MuLUT-SDY-X2 and amenable to on-chip cache, yielding competitive accuracy with over $5\times$ lower inference latency on a Raspberry Pi 4B than FSRCNN. Our proposed TinyLUT enables superior inference speed on edge devices with new state-of-the-art accuracy on both image super-resolution and denoising, showcasing the potential of applying this method to various image restoration tasks at the edge. The codes are available at: https://github.com/Jonas-KD/TinyLUT.", "pdf": "https://openreview.net/pdf/1df5ba9d709ef47fd00e399db6114e07034d4339.pdf"} {"title": "CoLoR-Filter: Conditional Loss Reduction Filtering for Targeted Language Model Pre-training", "url": "https://openreview.net/forum?id=GUccmOMBv6", "detail_url": "https://openreview.net/forum?id=GUccmOMBv6", "authors": "David Brandfonbrener,Hanlin Zhang,Andreas Kirsch,Jonathan Richard Schwarz,Sham M. Kakade", "tags": "NIPS 2024,Poster", "abstract": "Selecting high-quality data for pre-training is crucial in shaping the downstream task performance of language models. A major challenge lies in identifying this optimal subset, a problem generally considered intractable, thus necessitating scalable and effective heuristics. In this work, we propose a data selection method, CoLoR-Filter (Conditional Loss Reduction Filtering), which leverages an empirical Bayes-inspired approach to derive a simple and computationally efficient selection criterion based on the relative loss values of two auxiliary models.\n\nIn addition to the modeling rationale, we evaluate CoLoR-Filter empirically on two language modeling tasks: (1) selecting data from C4 for domain adaptation to evaluation on Books and (2) selecting data from C4 for a suite of downstream multiple-choice question answering tasks. We demonstrate favorable scaling both as we subselect more aggressively and using small auxiliary models to select data for large target models. As one headline result, CoLoR-Filter data selected using a pair of 150m parameter auxiliary models can train a 1.2b parameter target model to match a 1.2b parameter model trained on 25b randomly selected tokens with 25x less data for Books and 11x less data for the downstream tasks.
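As we read the abstract, the CoLoR-Filter criterion reduces to a per-example margin between the losses of two small auxiliary models. A toy sketch of that selection rule; the names, sign convention, and `keep_fraction` are our assumptions, not the paper's interface:

```python
# Toy CoLoR-Filter-style selection: keep the examples where the
# target-conditioned auxiliary model improves most over the plain one.
import heapq

def color_filter_select(examples, prior_loss, conditional_loss, keep_fraction=0.25):
    """prior_loss / conditional_loss: callables returning a loss per example."""
    scored = [(conditional_loss(x) - prior_loss(x), i) for i, x in enumerate(examples)]
    k = max(1, int(keep_fraction * len(examples)))
    return [examples[i] for _, i in heapq.nsmallest(k, scored)]  # most negative margin
```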
\n\nCode: https://github.com/davidbrandfonbrener/color-filter-olmo\n\nFiltered data: https://huggingface.co/datasets/davidbrandfonbrener/color-filtered-c4", "pdf": "https://openreview.net/pdf/e639e5cb9a9b6a85d1607f14ab0742d340d48165.pdf"} {"title": "Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure", "url": "https://openreview.net/forum?id=5cIRdGM1uG", "detail_url": "https://openreview.net/forum?id=5cIRdGM1uG", "authors": "Hanseul Cho,Jaeyoung Cha,Pranjal Awasthi,Srinadh Bhojanapalli,Anupam Gupta,Chulhee Yun", "tags": "NIPS 2024,Poster", "abstract": "Even for simple arithmetic tasks like integer addition, it is challenging for Transformers to generalize to longer sequences than those encountered during training. To tackle this problem, we propose *position coupling*, a simple yet effective method that directly embeds the structure of the tasks into the positional encoding of a (decoder-only) Transformer. Taking a departure from the vanilla absolute position mechanism assigning unique position IDs to each of the tokens, we assign the same position IDs to two or more \"relevant\" tokens; for integer addition tasks, we regard digits of the same significance as in the same position. On the empirical side, we show that with the proposed position coupling, our models trained on 1 to 30-digit additions can generalize up to *200-digit* additions (6.67x of the trained length). On the theoretical side, we prove that a 1-layer Transformer with coupled positions can solve the addition task involving exponentially many digits, whereas any 1-layer Transformer without positional information cannot entirely solve it. We also demonstrate that position coupling can be applied to other algorithmic tasks such as Nx2 multiplication and a two-dimensional task. Our codebase is available at [github.com/HanseulJo/position-coupling](https://github.com/HanseulJo/position-coupling).", "pdf": "https://openreview.net/pdf/103df0af7e400b66814f3dceaf95ed859b2d944f.pdf"} {"title": "Invisible Image Watermarks Are Provably Removable Using Generative AI", "url": "https://openreview.net/forum?id=7hy5fy2OC6", "detail_url": "https://openreview.net/forum?id=7hy5fy2OC6", "authors": "Xuandong Zhao,Kexun Zhang,Zihao Su,Saastha Vasan,Ilya Grishchenko,Christopher Kruegel,Giovanni Vigna,Yu-Xiang Wang,Lei Li", "tags": "NIPS 2024,Poster", "abstract": "Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners. They also prevent people from misusing images, especially those generated by AI models.\nWe propose a family of regeneration attacks to remove these invisible watermarks. \nThe proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image. \nThis approach is flexible and can be instantiated with many existing image-denoising algorithms and pre-trained generative models such as diffusion models. 
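The regeneration attack just described is mechanically two steps: corrupt, then reconstruct. A minimal sketch in which a Gaussian blur stands in for the image-denoising algorithm or pretrained diffusion model the paper actually plugs in:

```python
# Minimal regeneration-attack sketch: noise washes out a pixel-level watermark,
# then a strong denoiser (here: a toy Gaussian blur) rebuilds a clean image.
import numpy as np
from scipy.ndimage import gaussian_filter

def regenerate(image, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, noise_std, image.shape)      # destroy watermark
    return np.clip(gaussian_filter(noisy, sigma=1.0), 0.0, 1.0)  # reconstruct
```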
Through formal proofs and extensive empirical evaluations, we demonstrate that pixel-level invisible watermarks are vulnerable to this regeneration attack.\nOur results reveal that, across four different pixel-level watermarking schemes, the proposed method consistently achieves superior performance compared to existing attack techniques, with lower detection rates and higher image quality.\nHowever, watermarks that keep the image semantically similar can be an alternative defense against our attacks.\nOur finding underscores the need for a shift in research/industry emphasis from invisible watermarks to semantic-preserving watermarks. Code is available at https://github.com/XuandongZhao/WatermarkAttacker", "pdf": "https://openreview.net/pdf/ffea2d2c76fd07118cd2a1c52075d932d44f0ddf.pdf"} {"title": "Stochastic Optimal Control for Diffusion Bridges in Function Spaces", "url": "https://openreview.net/forum?id=WyQW4G57Zd", "detail_url": "https://openreview.net/forum?id=WyQW4G57Zd", "authors": "Byoungwoo Park,Jungwon Choi,Sungbin Lim,Juho Lee", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in diffusion models and diffusion bridges primarily focus on finite-dimensional spaces, yet many real-world problems necessitate operations in infinite-dimensional function spaces for more natural and interpretable formulations. \nIn this paper, we present a theory of stochastic optimal control (SOC) tailored to infinite-dimensional spaces, aiming to extend diffusion-based algorithms to function spaces. \nSpecifically, we demonstrate how Doob\u2019s $h$-transform, the fundamental tool for constructing diffusion bridges, can be derived from the SOC perspective and expanded to infinite dimensions. \nThis expansion presents a challenge, as infinite-dimensional spaces typically lack closed-form densities. \nLeveraging our theory, we establish that solving the optimal control problem with a specific objective function choice is equivalent to learning diffusion-based generative models. \nWe propose two applications: 1) learning bridges between two infinite-dimensional distributions and 2) generative models for sampling from an infinite-dimensional distribution. \nOur approach proves effective for diverse problems involving continuous function space representations, such as resolution-free images, time-series data, and probability density functions.", "pdf": "https://openreview.net/pdf/6dc38b9c17ec695098e95def34f4ce97a1a745ed.pdf"} {"title": "Efficient Discrepancy Testing for Learning with Distribution Shift", "url": "https://openreview.net/forum?id=ojIhvhQBAQ", "detail_url": "https://openreview.net/forum?id=ojIhvhQBAQ", "authors": "Gautam Chandrasekaran,Adam Klivans,Vasilis Kontonis,Konstantinos Stavropoulos,Arsen Vasilyan", "tags": "NIPS 2024,Poster", "abstract": "A fundamental notion of distance between train and test distributions from the field of domain adaptation is discrepancy distance. While in general hard to compute, here we provide the first set of provably efficient algorithms for testing *localized* discrepancy distance, where discrepancy is computed with respect to a fixed output classifier. These results imply a broad set of new, efficient learning algorithms in the recently introduced model of Testable Learning with Distribution Shift (TDS learning) due to Klivans et al. 
(2023).\n\nOur approach generalizes and improves all prior work on TDS learning: (1) we obtain *universal* learners that succeed simultaneously for large classes of test distributions, (2) achieve near-optimal error rates, and (3) give exponential improvements for constant depth circuits. Our methods further extend to semi-parametric settings and imply the first positive results for low-dimensional convex sets. Additionally, we separate learning and testing phases and obtain algorithms that run in fully polynomial time at test time.", "pdf": "https://openreview.net/pdf/8c9c8d6cd774059c113f60609819dd77cc2b2769.pdf"} {"title": "A Unifying Normative Framework of Decision Confidence", "url": "https://openreview.net/forum?id=BRvGfN3Xfm", "detail_url": "https://openreview.net/forum?id=BRvGfN3Xfm", "authors": "Amelia Johnson,Michael A Buice,Koosha Khalvati", "tags": "NIPS 2024,Poster", "abstract": "Self-assessment of one\u2019s choices, i.e., confidence, is the topic of many decision neuroscience studies. Computational models of confidence, however, are limited to specific scenarios such as between choices with the same value. Here we present a normative framework for modeling decision confidence that is generalizable to various tasks and experimental setups. We further derive the implications of our model from both theoretical and experimental points of view. Specifically, we show that our model maps to the planning-as-inference framework, where the objective function is maximizing the gained reward and information entropy of the policy. Moreover, we validate our model on two different psychophysics experiments and show its superiority over other approaches in explaining subjects' confidence reports.", "pdf": "https://openreview.net/pdf/bad57d2f6699f41e9faffa00585fdc10015d1e23.pdf"} {"title": "Decision Mamba: A Multi-Grained State Space Model with Self-Evolution Regularization for Offline RL", "url": "https://openreview.net/forum?id=dc4xbVfdzy", "detail_url": "https://openreview.net/forum?id=dc4xbVfdzy", "authors": "Qi Lv,Xiang Deng,Gongwei Chen,Michael Y Wang,Liqiang Nie", "tags": "NIPS 2024,Poster", "abstract": "While conditional sequence modeling with the transformer architecture has demonstrated its effectiveness in dealing with offline reinforcement learning (RL) tasks, it struggles to handle out-of-distribution states and actions.\nExisting work attempts to address this issue by data augmentation with the learned policy or by adding extra constraints with a value-based RL algorithm. However, these studies still fail to overcome the following challenges: (1) insufficiently utilizing the historical temporal information among inter-steps, (2) overlooking the local intra-step relationships among states, actions and return-to-gos (RTGs), (3) overfitting suboptimal trajectories with noisy labels. To address these challenges, we propose $\textbf{D}$ecision $\textbf{M}$amba ($\textbf{DM}$), a novel multi-grained state space model (SSM) with a self-evolving policy learning strategy.\nDM explicitly models the historical hidden state to extract the temporal information by using the mamba architecture. To capture the relationship among state-action-RTG triplets, a fine-grained SSM module is designed and integrated into the original coarse-grained SSM in mamba, resulting in a novel mamba architecture tailored for offline RL. Finally, to mitigate the overfitting issue on noisy trajectories, a self-evolving policy is proposed by using progressive regularization.
The policy evolves by using its own past knowledge to refine the suboptimal actions, thus enhancing its robustness on noisy demonstrations. Extensive experiments on various tasks show that DM outperforms other baselines substantially.", "pdf": "https://openreview.net/pdf/bf1c3377f6b6a5448c29ed1730ada8e2d248ce23.pdf"} {"title": "Model-Based Transfer Learning for Contextual Reinforcement Learning", "url": "https://openreview.net/forum?id=KLv1VLuMo8", "detail_url": "https://openreview.net/forum?id=KLv1VLuMo8", "authors": "Jung-Hoon Cho,Vindula Jayawardana,Sirui Li,Cathy Wu", "tags": "NIPS 2024,Poster", "abstract": "Deep reinforcement learning (RL) is a powerful approach to complex decision-making. However, one issue that limits its practical application is its brittleness, sometimes failing to train in the presence of small changes in the environment. Motivated by the success of zero-shot transfer\u2014where pre-trained models perform well on related tasks\u2014we consider the problem of selecting a good set of training tasks to maximize generalization performance across a range of tasks. Given the high cost of training, it is critical to select training tasks strategically, but it is not well understood how to do so. We hence introduce Model-Based Transfer Learning (MBTL), which layers on top of existing RL methods to effectively solve contextual RL problems. MBTL models the generalization performance in two parts: 1) the performance set point, modeled using Gaussian processes, and 2) performance loss (generalization gap), modeled as a linear function of contextual similarity. MBTL combines these two pieces of information within a Bayesian optimization (BO) framework to strategically select training tasks. We show theoretically that the method exhibits sublinear regret in the number of training tasks and discuss conditions to further tighten regret bounds. We experimentally validate our methods using urban traffic and standard continuous control benchmarks. The experimental results suggest that MBTL can achieve up to 50x improved sample efficiency compared with canonical independent training and multi-task training. Further experiments demonstrate the efficacy of BO and the insensitivity to the underlying RL algorithm and hyperparameters. This work lays the foundations for investigating explicit modeling of generalization, thereby enabling principled yet effective methods for contextual RL. Code is available at https://github.com/jhoon-cho/MBTL/.", "pdf": "https://openreview.net/pdf/d2cc3180959ef0dcfdb2471da9ce763751f63ba2.pdf"} {"title": "DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity", "url": "https://openreview.net/forum?id=IdQuUYMA1t", "detail_url": "https://openreview.net/forum?id=IdQuUYMA1t", "authors": "Baekrok Shin,Junsoo Oh,Hanseul Cho,Chulhee Yun", "tags": "NIPS 2024,Poster", "abstract": "Warm-starting neural network training by initializing networks with previously learned weights is appealing, as practical neural networks are often deployed under a continuous influx of new data. However, it often leads to *loss of plasticity*, where the network loses its ability to learn new information, resulting in worse generalization than training from scratch. This occurs even under stationary data distributions, and its underlying mechanism is poorly understood. We develop a framework emulating real-world neural network training and identify noise memorization as the primary cause of plasticity loss when warm-starting on stationary data.
Motivated by this, we propose **Direction-Aware SHrinking (DASH)**, a method aiming to mitigate plasticity loss by selectively forgetting memorized noise while preserving learned features. We validate our approach on vision tasks, demonstrating improvements in test accuracy and training efficiency.", "pdf": "https://openreview.net/pdf/1f7e991a14fd00e18381be7ccda7d2d45118e189.pdf"} {"title": "Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization", "url": "https://openreview.net/forum?id=7qJFkuZdYo", "detail_url": "https://openreview.net/forum?id=7qJFkuZdYo", "authors": "Yuanpu Cao,Tianrong Zhang,Bochuan Cao,Ziyi Yin,Lu Lin,Fenglong Ma,Jinghui Chen", "tags": "NIPS 2024,Poster", "abstract": "Researchers have been studying approaches to steer the behavior of Large Language Models (LLMs) and build personalized LLMs tailored for various applications. While fine-tuning seems to be a direct solution, it requires substantial computational resources and may significantly affect the utility of the original LLM. \nRecent endeavors have introduced more lightweight strategies, focusing on extracting ``steering vectors'' to guide the model's output toward desired behaviors by adjusting activations within specific layers of the LLM's transformer architecture. However, such steering vectors are directly extracted from the activations of human preference data and thus often lead to suboptimal results and occasional failures, especially in alignment-related scenarios.\nIn this work, we propose an innovative approach that could produce more effective steering vectors through bi-directional preference optimization. \nOur method is designed to allow steering vectors to directly influence the generation probability of contrastive human preference data pairs, thereby offering a more precise representation of the target behavior. By carefully adjusting the direction and magnitude of the steering vector, we enabled personalized control over the desired behavior across a spectrum of intensities.\nExtensive experimentation across various open-ended generation tasks, particularly focusing on steering AI personas, has validated the efficacy of our approach. \nMoreover, we comprehensively investigate critical alignment-concerning scenarios, such as managing truthfulness, mitigating hallucination, and addressing jailbreaking attacks alongside their respective defenses. Remarkably, our method can still demonstrate outstanding steering effectiveness across these scenarios. Furthermore, we showcase the transferability of our steering vectors across different models/LoRAs and highlight the synergistic benefits of applying multiple vectors simultaneously. These findings significantly broaden the practicality and versatility of our proposed method.", "pdf": "https://openreview.net/pdf/f3732528b64a68528a6adbc74189e17f4c6fa168.pdf"} {"title": "A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning", "url": "https://openreview.net/forum?id=VQyb9LKmUH", "detail_url": "https://openreview.net/forum?id=VQyb9LKmUH", "authors": "Yuanning Cui,Zequn Sun,Wei Hu", "tags": "NIPS 2024,Poster", "abstract": "Extensive knowledge graphs (KGs) have been constructed to facilitate knowledge-driven tasks across various scenarios. However, existing work usually develops separate reasoning models for different KGs, lacking the ability to generalize and transfer knowledge across diverse KGs and reasoning settings. 
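As a point of reference for the steering-vector work above: applying a steering vector is a one-line intervention on a layer's activations. A generic PyTorch hook sketch, where the vector `v` would come from the paper's bi-directional preference optimization (the hook pattern here is a common one, not the authors' code):

```python
# Generic steering-vector injection via a forward hook; v and alpha control
# the behavior direction and intensity described in the abstract.
import torch

def add_steering_hook(layer, v, alpha=1.0):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * v.to(hidden.dtype)   # shift every position
        return (hidden,) + tuple(output[1:]) if isinstance(output, tuple) else hidden
    return layer.register_forward_hook(hook)  # keep the handle to remove it later
```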
In this paper, we propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability. Specifically, we introduce a prompt graph centered on a query-related example fact as context to understand the query relation. To encode prompt graphs with the generalization ability to unseen entities and relations in queries, we first propose a unified tokenizer that maps entities and relations in prompt graphs to predefined tokens. Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively. We conduct evaluations on 43 different KGs in both transductive and inductive settings. Results indicate that the proposed KG-ICL outperforms baselines on most datasets, showcasing its outstanding generalization and universal reasoning capabilities. The source code is accessible on GitHub: https://github.com/nju-websoft/KG-ICL.", "pdf": "https://openreview.net/pdf/15c8fc9c778f0a77cd25ebff402ea4613ac94fef.pdf"} {"title": "FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making", "url": "https://openreview.net/forum?id=dG1HwKMYbC", "detail_url": "https://openreview.net/forum?id=dG1HwKMYbC", "authors": "Yangyang Yu,Zhiyuan Yao,Haohang Li,Zhiyang Deng,Yuechen Jiang,Yupeng Cao,Zhi Chen,Jordan W. Suchow,Zhenyu Cui,Rong Liu,Zhaozhuo Xu,Denghui Zhang,Koduvayur Subbalakshmi,GUOJUN XIONG,Yueru He,Jimin Huang,Dong Li,Qianqian Xie", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have demonstrated notable potential in conducting complex tasks and are increasingly utilized in various financial applications. However, high-quality sequential financial investment decision-making remains challenging. These tasks require multiple interactions with a volatile environment for every decision, demanding sufficient intelligence to maximize returns and manage risks. Although LLMs have been used to develop agent systems that surpass human teams and yield impressive investment returns, opportunities to enhance multi-source information synthesis and optimize decision-making outcomes through timely experience refinement remain unexplored. Here, we introduce FinCon, an LLM-based multi-agent framework tailored for diverse financial tasks. Inspired by effective real-world investment firm organizational structures, FinCon utilizes a manager-analyst communication hierarchy. This structure allows for synchronized cross-functional agent collaboration towards unified goals through natural language interactions and equips each agent with greater memory capacity than humans. Additionally, a risk-control component in FinCon enhances decision quality by episodically initiating a self-critiquing mechanism to update systematic investment beliefs. The conceptualized beliefs serve as verbal reinforcement for the future agent\u2019s behavior and can be selectively propagated to the appropriate node that requires knowledge updates. This feature significantly improves performance while reducing unnecessary peer-to-peer communication costs.
Moreover, FinCon demonstrates strong generalization capabilities in various financial tasks, including stock trading and portfolio management.", "pdf": "https://openreview.net/pdf/db337351a482134151dca292db9301d981d96463.pdf"} {"title": "Global Rewards in Restless Multi-Armed Bandits", "url": "https://openreview.net/forum?id=3apt5AJ5QN", "detail_url": "https://openreview.net/forum?id=3apt5AJ5QN", "authors": "Naveen Janaki Raman,Zheyuan Ryan Shi,Fei Fang", "tags": "NIPS 2024,Poster", "abstract": "Restless multi-armed bandits (RMAB) extend multi-armed bandits so arm pulls impact future arm states. Despite the success of RMABs, a key limiting assumption is the separability of rewards into a sum across arms. We address this deficiency by proposing restless-multi-armed bandit with global rewards (RMAB-G), a generalization of RMABs to global non-separable rewards. To solve RMAB-G, we develop the Linear-Whittle and Shapley-Whittle indices, which extend Whittle indices from RMABs to RMAB-Gs. We prove approximation bounds which demonstrate how Linear and Shapley-Whittle indices fail for non-linear rewards. To overcome this limitation, we propose two sets of adaptive policies: the first computes indices iteratively and the second combines indices with Monte-Carlo Tree Search (MCTS). Empirically, we demonstrate that adaptive policies outperform both pre-computed index policies and baselines in synthetic and real-world food rescue datasets.", "pdf": "https://openreview.net/pdf/c94e70e7b28d901a74a3fb2df5e9d23afe6565dd.pdf"} {"title": "Large Language Model Unlearning via Embedding-Corrupted Prompts", "url": "https://openreview.net/forum?id=e5icsXBD8Q", "detail_url": "https://openreview.net/forum?id=e5icsXBD8Q", "authors": "Chris Yuhao Liu,Yaxuan Wang,Jeffrey Flanigan,Yang Liu", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have advanced to encompass extensive knowledge across diverse domains. Yet controlling what a large language model should not know is important for ensuring alignment and thus safe use. However, accurately and efficiently unlearning knowledge from an LLM remains challenging due to the potential collateral damage caused by the fuzzy boundary between retention and forgetting, and the large computational requirements for optimization across state-of-the-art models with hundreds of billions of parameters. In this work, we present \\textbf{Embedding-COrrupted (ECO) Prompts}, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency. Instead of relying on the LLM itself to unlearn, we enforce an unlearned state during inference by employing a prompt classifier to identify and safeguard prompts to forget. We learn corruptions added to prompt embeddings via zeroth order optimization toward the unlearning objective offline and corrupt prompts flagged by the classifier during inference. We find that these embedding-corrupted prompts not only lead to desirable outputs that satisfy the unlearning objective but also closely approximate the output from a model that has never been trained on the data intended for forgetting. Through extensive experiments on unlearning, we demonstrate the superiority of our method in achieving promising unlearning at \\textit{nearly zero side effects} in general domains and domains closely related to the unlearned ones. 
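At inference time the ECO recipe above reduces to a guard around the model: classify the prompt, and add the learned corruption to its embedding if it falls in the forget set. A schematic sketch; `should_forget`, `corruption`, and `model` stand in for components the paper learns offline:

```python
# Schematic ECO-style inference guard (control flow only, not the authors' code).
def eco_generate(prompt_embedding, should_forget, corruption, model):
    if should_forget(prompt_embedding):                   # prompt classifier
        prompt_embedding = prompt_embedding + corruption  # corruption learned offline
    return model(prompt_embedding)  # behaves as if never trained on the forget data
```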
Additionally, we highlight the scalability of our method to 100 LLMs, ranging from 0.5B to 236B parameters, incurring no additional cost as the number of parameters increases. We have made our code publicly available at \url{https://github.com/chrisliu298/llm-unlearn-eco}.", "pdf": "https://openreview.net/pdf/6ce60d7844fe2a3c3a9937127c591337d3945e16.pdf"} {"title": "Euclidean distance compression via deep random features", "url": "https://openreview.net/forum?id=Fanbig8DR9", "detail_url": "https://openreview.net/forum?id=Fanbig8DR9", "authors": "Brett Leroux,Luis Rademacher", "tags": "NIPS 2024,Poster", "abstract": "Motivated by the problem of compressing point sets into as few bits as possible while maintaining information about approximate distances between points, we construct random nonlinear maps $\varphi_\ell$ that compress point sets in the following way. For a point set $S$, the map $\varphi_\ell:\mathbb{R}^d \to N^{-1/2}\{-1,1\}^N$ has the property that storing $\varphi_\ell(S)$ (a sketch of $S$) allows one to report squared distances between points up to some multiplicative $(1\pm \epsilon)$ error with high probability. The maps $\varphi_\ell$ are the $\ell$-fold composition of a certain type of random feature mapping. \n\nCompared to existing techniques, our maps offer several advantages. The standard method for compressing point sets by random mappings relies on the Johnson-Lindenstrauss lemma and involves compressing point sets with a random linear map. The main advantage of our maps $\varphi_\ell$ over random linear maps is that ours map point sets directly into the discrete cube $N^{-1/2}\{-1,1\}^N$ and so there is no additional step needed to convert the sketch to bits. For some range of parameters, our maps $\varphi_\ell$ produce sketches using fewer bits of storage space. We validate the method with experiments, including an application to nearest neighbor search.", "pdf": "https://openreview.net/pdf/0b61638bdc473a11461a84a7f80fe3487d5a6e30.pdf"} {"title": "Towards Scalable and Stable Parallelization of Nonlinear RNNs", "url": "https://openreview.net/forum?id=hBCxxVQDBw", "detail_url": "https://openreview.net/forum?id=hBCxxVQDBw", "authors": "Xavier Gonzalez,Andrew Warrington,Jimmy T.H. Smith,Scott Linderman", "tags": "NIPS 2024,Poster", "abstract": "Conventional nonlinear RNNs are not naturally parallelizable across the sequence length, unlike transformers and linear RNNs. Lim et al. therefore tackle parallelized evaluation of nonlinear RNNs, posing it as a fixed point problem solved with Newton's method. By deriving and applying a parallelized form of Newton's method, they achieve large speedups over sequential evaluation. However, their approach inherits cubic computational complexity and numerical instability. We tackle these weaknesses. To reduce the computational complexity, we apply quasi-Newton approximations and show they converge comparably, use less memory, and are faster, compared to full-Newton. To stabilize Newton's method, we leverage a connection between Newton's method damped with trust regions and Kalman smoothing. This connection allows us to stabilize the iteration, per the trust region, and use efficient parallelized Kalman algorithms to retain performance.
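The fixed-point view above is easy to state concretely: guess all hidden states at once and repeatedly apply $h_t \leftarrow f(h_{t-1}, x_t)$ for every $t$ in parallel. A toy numpy version using plain Picard iteration, which is exact after at most $T$ sweeps, in place of the paper's Newton and quasi-Newton solvers:

```python
# Toy parallel-in-time RNN evaluation as a fixed-point iteration.
import numpy as np

def step(h_prev, x, Wh, Wx):
    return np.tanh(h_prev @ Wh + x @ Wx)

def parallel_rnn(xs, h0, Wh, Wx, sweeps):
    H = np.zeros((xs.shape[0], h0.shape[0]))       # guess for all hidden states
    for _ in range(sweeps):                        # each sweep fixes one more step
        H = step(np.vstack([h0, H[:-1]]), xs, Wh, Wx)
    return H

rng = np.random.default_rng(0)
T, d = 32, 8
Wh, Wx = 0.5 * rng.standard_normal((d, d)), rng.standard_normal((d, d))
xs, h0 = rng.standard_normal((T, d)), np.zeros(d)
H = parallel_rnn(xs, h0, Wh, Wx, sweeps=T)         # exact after T sweeps
h, ref = h0, []
for x in xs:                                       # sequential check
    h = step(h, x, Wh, Wx); ref.append(h)
print(np.allclose(H, np.array(ref)))               # True
```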
We compare these methods empirically and highlight use cases where each algorithm excels.", "pdf": "https://openreview.net/pdf/054bd98d9f6a779fbde59e5b8df033ed98c92dfd.pdf"} {"title": "The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better", "url": "https://openreview.net/forum?id=fNoleQa9RX", "detail_url": "https://openreview.net/forum?id=fNoleQa9RX", "authors": "Scott Geng,Cheng-Yu Hsieh,Vivek Ramanujan,Matthew Wallingford,Chun-Liang Li,Pang Wei Koh,Ranjay Krishna", "tags": "NIPS 2024,Poster", "abstract": "Generative text-to-image models enable us to synthesize unlimited amounts of images in a controllable manner, spurring many recent efforts to train vision models with synthetic data. However, every synthetic image ultimately originates from the upstream data used to train the generator. Does the intermediate generator provide additional information over directly training on relevant parts of the upstream data? \nGrounding this question in the setting of image classification, we compare finetuning on task-relevant, targeted synthetic data generated by Stable Diffusion---a generative model trained on the LAION-2B dataset---against finetuning on targeted real images retrieved directly from LAION-2B. We show that while synthetic data can benefit some downstream tasks, it is universally matched or outperformed by real data from the simple retrieval baseline. Our analysis suggests that this underperformance is partially due to generator artifacts and inaccurate task-relevant visual details in the synthetic images. Overall, we argue that targeted retrieval is a critical baseline to consider when training with synthetic data---a baseline that current methods do not yet surpass. We release code, data, and models at [https://github.com/scottgeng00/unmet-promise/](https://github.com/scottgeng00/unmet-promise).", "pdf": "https://openreview.net/pdf/c2284c545059ec27c6dcfc2ba711727798963d58.pdf"} {"title": "A Structure-Aware Framework for Learning Device Placements on Computation Graphs", "url": "https://openreview.net/forum?id=Kzno1r3Xef", "detail_url": "https://openreview.net/forum?id=Kzno1r3Xef", "authors": "Shukai Duan,Heng Ping,Nikos Kanakaris,Xiongye Xiao,Panagiotis Kyriakis,Nesreen K. Ahmed,Peiyu Zhang,Guixiang Ma,Mihai Capot\u0103,Shahin Nazarian,Theodore L. Willke,Paul Bogdan", "tags": "NIPS 2024,Poster", "abstract": "Computation graphs are Directed Acyclic Graphs (DAGs) where the nodes correspond to mathematical operations and are used widely as abstractions in optimizations of neural networks. The device placement problem aims to identify optimal allocations of those nodes to a set of (potentially heterogeneous) devices. Existing approaches rely on two types of architectures known as grouper-placer and encoder-placer, respectively. In this work, we bridge the gap between encoder-placer and grouper-placer techniques and propose a novel framework for the task of device placement, relying on smaller computation graphs extracted from the OpenVINO toolkit. The framework consists of five steps, including graph coarsening, node representation learning and policy optimization. It facilitates end-to-end training and takes into account the DAG nature of the computation graphs. We also propose a model variant, inspired by graph parsing networks and complex network analysis, enabling graph representation learning and joint, personalized graph partitioning, using an unspecified number of groups.
To train the entire framework, we use reinforcement learning, with the execution time of the placement as the reward. We demonstrate the flexibility and effectiveness of our approach through multiple experiments with three benchmark models, namely Inception-V3, ResNet, and BERT. The robustness of the proposed framework is also highlighted through an ablation study. The suggested placements improve the inference speed for the benchmark models by up to $58.2\%$ over CPU execution and by up to $60.24\%$ compared to other commonly used baselines.", "pdf": "https://openreview.net/pdf/344d24d9d6c4f7273dbdceae38b89aca712973eb.pdf"} {"title": "Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models", "url": "https://openreview.net/forum?id=GNSMl1P5VR", "detail_url": "https://openreview.net/forum?id=GNSMl1P5VR", "authors": "Yushi Hu,Weijia Shi,Xingyu Fu,Dan Roth,Mari Ostendorf,Luke Zettlemoyer,Noah A. Smith,Ranjay Krishna", "tags": "NIPS 2024,Poster", "abstract": "Humans draw to facilitate reasoning: we draw auxiliary lines when solving geometry problems; we mark and circle when reasoning on maps; we use sketches to amplify our ideas and relieve our limited-capacity working memory. However, such actions are missing in current multimodal language models (LMs). Current chain-of-thought and tool-use paradigms only use text as intermediate reasoning steps. In this work, we introduce Sketchpad, a framework that gives multimodal LMs a visual sketchpad and tools to draw on the sketchpad. The LM conducts planning and reasoning according to the visual artifacts it has drawn. Different from prior work, which uses text-to-image models to enable LMs to draw, Sketchpad enables LMs to draw with lines, boxes, marks, etc., which is closer to human sketching and better facilitates reasoning. Sketchpad can also use specialist vision models during the sketching process (e.g., draw bounding boxes with object detection models, draw masks with segmentation models), to further enhance visual perception and reasoning. We experiment on a wide range of math tasks (including geometry, functions, graph, chess) and complex visual reasoning tasks. Sketchpad substantially improves performance on all tasks over strong base models with no sketching, yielding an average gain of 12.7% on math tasks, and 8.6% on vision tasks. GPT-4o with Sketchpad sets a new state of the art on all tasks, including V*Bench (80.3%), BLINK spatial reasoning (83.9%), and visual correspondence (80.8%). We will release all code and data.", "pdf": "https://openreview.net/pdf/ef421114c4f5982516766e4ef464a1fe54b1b572.pdf"} {"title": "Confident Natural Policy Gradient for Local Planning in $q_\pi$-realizable Constrained MDPs", "url": "https://openreview.net/forum?id=TNEmAgwoXR", "detail_url": "https://openreview.net/forum?id=TNEmAgwoXR", "authors": "Tian Tian,Lin Yang,Csaba Szepesvari", "tags": "NIPS 2024,Poster", "abstract": "The constrained Markov decision process (CMDP) framework emerges as an important reinforcement learning approach for imposing safety or other critical objectives while maximizing cumulative reward. However, the current understanding of how to learn efficiently in a CMDP environment with a potentially infinite number of states remains under investigation, particularly when function approximation is applied to the value functions.
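Returning to the device-placement framework above: its training loop is a REINFORCE update in which the reward is the negative measured execution time. A skeletal sketch, with `policy` (the graph network) and `measure_runtime` as placeholders for the paper's components:

```python
# Skeletal REINFORCE step for device placement: faster placements get
# higher reward. Placeholders stand in for the paper's GNN policy and harness.
import torch

def placement_step(policy, optimizer, graph, measure_runtime):
    logits = policy(graph)                            # (num_nodes, num_devices)
    dist = torch.distributions.Categorical(logits=logits)
    placement = dist.sample()                         # one device id per node
    reward = -measure_runtime(graph, placement)       # negative execution time
    loss = -dist.log_prob(placement).sum() * reward   # REINFORCE objective
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return reward
```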
In this paper, we address the learning problem given linear function approximation with $q_{\pi}$-realizability, where the value functions of all policies are linearly representable with a known feature map, a setting known to be more general and challenging than other linear settings. Utilizing a local-access model, we propose a novel primal-dual algorithm that, after $\tilde{O}(\text{poly}(d) \epsilon^{-3})$ iterations, outputs with high probability a policy that strictly satisfies the constraints while nearly optimizing the value with respect to a reward function. Here, $d$ is the feature dimension and $\epsilon > 0$ is a given error. The algorithm relies on a carefully crafted off-policy evaluation procedure to evaluate the policy using historical data, which informs policy updates through policy gradients and conserves samples. To our knowledge, this is the first result achieving polynomial sample complexity for CMDP in the $q_{\pi}$-realizable setting.", "pdf": "https://openreview.net/pdf/9c2af3f8d9c16b5729d7d8ff36f13a70315c5e0c.pdf"} {"title": "GaussianCut: Interactive segmentation via graph cut for 3D Gaussian Splatting", "url": "https://openreview.net/forum?id=Ns0LQokxa5", "detail_url": "https://openreview.net/forum?id=Ns0LQokxa5", "authors": "Umangi Jain,Ashkan Mirzaei,Igor Gilitschenski", "tags": "NIPS 2024,Poster", "abstract": "We introduce GaussianCut, a new method for interactive multiview segmentation of scenes represented as 3D Gaussians. Our approach allows for selecting the objects to be segmented by interacting with a single view. It accepts intuitive user input, such as point clicks, coarse scribbles, or text. Using 3D Gaussian Splatting (3DGS) as the underlying scene representation simplifies the extraction of objects of interest which are considered to be a subset of the scene's Gaussians. Our key idea is to represent the scene as a graph and use the graph-cut algorithm to minimize an energy function to effectively partition the Gaussians into foreground and background. To achieve this, we construct a graph based on scene Gaussians and devise a segmentation-aligned energy function on the graph to combine user inputs with scene properties. To obtain an initial coarse segmentation, we leverage 2D image/video segmentation models and further refine these coarse estimates using our graph construction. Our empirical evaluations show the adaptability of GaussianCut across a diverse set of scenes. GaussianCut achieves competitive performance with state-of-the-art approaches for 3D segmentation without requiring any additional segmentation-aware training.", "pdf": "https://openreview.net/pdf/bed9099005e28fab95397cb818fadc9488157252.pdf"} {"title": "A Single-Step, Sharpness-Aware Minimization is All You Need to Achieve Efficient and Accurate Sparse Training", "url": "https://openreview.net/forum?id=MJgMMqMDu4", "detail_url": "https://openreview.net/forum?id=MJgMMqMDu4", "authors": "Jie Ji,Gen Li,Jingjing Fu,Fatemeh Afghah,Linke Guo,Xiaoyong Yuan,Xiaolong Ma", "tags": "NIPS 2024,Poster", "abstract": "Sparse training stands as a landmark approach in addressing the considerable training resource demands imposed by the continuously expanding size of Deep Neural Networks (DNNs). However, the training of a sparse DNN encounters great challenges in achieving optimal generalization ability despite the efforts from the state-of-the-art sparse training methodologies.
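The energy minimization in GaussianCut above is the classical s-t graph-cut construction: terminal edges encode each Gaussian's foreground/background affinity and neighbor edges encode smoothness. A toy networkx sketch with invented weights (the paper derives its weights from user input and scene properties):

```python
# Toy s-t min-cut over scene Gaussians (graph-cut segmentation skeleton).
import networkx as nx

def segment(unary_fg, unary_bg, edges):
    G = nx.DiGraph()
    for i, (wf, wb) in enumerate(zip(unary_fg, unary_bg)):
        G.add_edge("s", i, capacity=wf)       # penalty for labeling i background
        G.add_edge(i, "t", capacity=wb)       # penalty for labeling i foreground
    for i, j, w in edges:                     # smoothness between neighbors
        G.add_edge(i, j, capacity=w); G.add_edge(j, i, capacity=w)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    return source_side - {"s"}                # Gaussians labeled foreground

print(segment([9, 8, 1], [1, 2, 9], [(0, 1, 5), (1, 2, 5)]))  # -> {0, 1}
```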
To unravel the reason behind the difficulty of sparse training, we connect network sparsity with the structure of the neural loss landscape, and identify that the cause of this difficulty lies in a chaotic loss surface. In light of this revelation, we propose $S^{2} - SAM$, characterized by a **S**ingle-step **S**harpness-**A**ware **M**inimization that is tailored for **S**parse training. For the first time, $S^{2} - SAM$ innovates the traditional SAM-style optimization by approximating the sharpness perturbation through prior gradient information, incurring *zero extra cost*. Therefore, $S^{2} - SAM$ not only exhibits the capacity to improve generalization but also aligns with the efficiency goal of sparse training. Additionally, we study the generalization result of $S^{2} - SAM$ and provide theoretical proof for convergence. Through extensive experiments, $S^{2} - SAM$ demonstrates its universally applicable plug-and-play functionality, enhancing accuracy across various sparse training methods. Code available at https://github.com/jjsrf/SSAM-NEURIPS2024.", "pdf": "https://openreview.net/pdf/b232744747c0917b88facb8e8dfa64df8b2665b6.pdf"} {"title": "SEA: State-Exchange Attention for High-Fidelity Physics Based Transformers", "url": "https://openreview.net/forum?id=3gvGZhkkVt", "detail_url": "https://openreview.net/forum?id=3gvGZhkkVt", "authors": "Parsa Esmati,Amirhossein Dadashzadeh,Vahid Goodarzi Ardakani,Nicolas Larrosa,Nicol\u00f2 Grilli", "tags": "NIPS 2024,Poster", "abstract": "Current approaches using sequential networks have shown promise in estimating field variables for dynamical systems, but they are often limited by high rollout errors. The unresolved issue of rollout error accumulation results in unreliable estimations as the network predicts further into the future, with each step's error compounding and leading to an increase in inaccuracy. Here, we introduce the State-Exchange Attention (SEA) module, a novel transformer-based module enabling information exchange between encoded fields through multi-head cross-attention. The cross-field multidirectional information exchange design enables all state variables in the system to exchange information with one another, capturing physical relationships and symmetries between fields. Additionally, we introduce an efficient ViT-like mesh autoencoder to generate spatially coherent mesh embeddings for a large number of meshing cells. The SEA integrated transformer demonstrates the state-of-the-art rollout error compared to other competitive baselines. Specifically, we outperform PbGMR-GMUS Transformer-RealNVP and GMR-GMUS Transformer, with a reduction in error of 88% and 91%, respectively. Furthermore, we demonstrate that the SEA module alone can reduce errors by 97% for state variables that are highly dependent on other states of the system. The repository for this work is available at: https://github.com/ParsaEsmati/SEA", "pdf": "https://openreview.net/pdf/899aa844132a7590d0fd4ff1ca9b65a86a8cbb57.pdf"} {"title": "S$^{2}$FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity", "url": "https://openreview.net/forum?id=lEUle8S4xQ", "detail_url": "https://openreview.net/forum?id=lEUle8S4xQ", "authors": "Xinyu Yang,Jixuan Leng,Geyang Guo,Jiawei Zhao,Ryumei Nakada,Linjun Zhang,Huaxiu Yao,Beidi Chen", "tags": "NIPS 2024,Poster", "abstract": "Current PEFT methods for LLMs can achieve high quality, efficient training, or scalable serving, but not all three simultaneously.
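One way to picture the "zero extra cost" claim of $S^{2}$-SAM above: vanilla SAM spends an extra forward-backward pass to compute the sharpness perturbation, whereas reusing the previous step's gradient makes that perturbation free. A schematic PyTorch step under that reading; this is our simplification (the sparse mask is omitted), not the authors' code:

```python
# Schematic single-step SAM: the perturbation comes from the *previous*
# step's gradient, so each step costs one forward-backward pass.
import torch

def single_step_sam(model, loss_fn, inputs, targets, optimizer, prev_grads, rho=0.05):
    params = [p for p in model.parameters() if p.requires_grad]
    if prev_grads is not None:
        scale = rho / (torch.sqrt(sum((g ** 2).sum() for g in prev_grads)).item() + 1e-12)
        for p, g in zip(params, prev_grads):
            p.data.add_(g, alpha=scale)        # move to the (approx.) sharp point
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                            # gradient at the perturbed point
    new_grads = [p.grad.detach().clone() for p in params]
    if prev_grads is not None:
        for p, g in zip(params, prev_grads):
            p.data.sub_(g, alpha=scale)        # undo the perturbation
    optimizer.step()                           # SAM-style descent step
    return loss.item(), new_grads
```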
\nTo address this limitation, we investigate sparse fine-tuning and observe a remarkable improvement in generalization ability. \nUtilizing this key insight, we propose a family of Structured Sparse Fine-Tuning (S${^2}$FT) methods for LLMs, which concurrently achieve state-of-the-art fine-tuning performance, training efficiency, and inference scalability. S${^2}$FT accomplishes this by "selecting sparsely and computing densely". Based on the coupled structures in LLMs, S${^2}$FT selects a few attention heads and channels in the MHA and FFN modules for each Transformer block, respectively. Next, it co-permutes the weight matrices on both sides of all coupled structures to connect the selected subsets in each layer into a dense submatrix. Finally, S${^2}$FT performs in-place gradient updates on all selected submatrices.\nThrough theoretical analyses and empirical results, our method prevents forgetting while simplifying optimization, delivers SOTA performance on both commonsense and arithmetic reasoning with 4.6% and 1.3% average improvements compared to LoRA, and surpasses full FT by 11.5% when generalizing to various domains after instruction tuning. \nUsing our partial back-propagation algorithm, S${^2}$FT saves training memory up to 3$\times$ and improves latency by 1.5-2.7$\times$ compared to full FT, while achieving an average 10\% improvement over LoRA on both metrics. We further demonstrate that the weight updates in S${^2}$FT can be decoupled into adapters, enabling effective fusion, fast switch, and efficient parallelism when serving multiple fine-tuned models.", "pdf": "https://openreview.net/pdf/4471079851bf7b34b21b69c2d29905ec2566b2ba.pdf"} {"title": "Your contrastive learning problem is secretly a distribution alignment problem", "url": "https://openreview.net/forum?id=iNUKoLU8xb", "detail_url": "https://openreview.net/forum?id=iNUKoLU8xb", "authors": "Zihao Chen,Chi-Heng Lin,Ran Liu,Jingyun Xiao,Eva L Dyer", "tags": "NIPS 2024,Poster", "abstract": "Despite the success of contrastive learning (CL) in vision and language, its theoretical foundations and mechanisms for building representations remain poorly understood. In this work, we build connections between noise contrastive estimation losses widely used in CL and distribution alignment with entropic optimal transport (OT). This connection allows us to develop a family of different losses and multistep iterative variants for existing CL methods. Intuitively, by using more information from the distribution of latents, our approach allows a more distribution-aware manipulation of the relationships within augmented sample sets.\nWe provide theoretical insights and experimental evidence demonstrating the benefits of our approach for generalized contrastive alignment.
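The contrastive-learning/OT connection can be seen in a few lines: treat batch similarities as negative costs and Sinkhorn-normalize them into an entropic transport plan, instead of the one-sided softmax of InfoNCE. An illustrative numpy sketch of that alignment step (temperature and iteration count invented):

```python
# Entropic-OT view of contrastive alignment: Sinkhorn-normalize the batch
# similarity matrix into a doubly-stochastic transport plan.
import numpy as np

def sinkhorn_plan(sim, eps=0.1, iters=100):
    K = np.exp(sim / eps)                     # Gibbs kernel from similarities
    u, v = np.ones(K.shape[0]), np.ones(K.shape[1])
    for _ in range(iters):                    # alternate marginal corrections
        u = 1.0 / (K @ v)
        v = 1.0 / (K.T @ u)
    return u[:, None] * K * v[None, :]        # rows and columns each sum to 1

z1 = np.random.randn(8, 16)
z2 = z1 + 0.1 * np.random.randn(8, 16)        # augmented views
sim = (z1 / np.linalg.norm(z1, axis=1, keepdims=True)) @ \
      (z2 / np.linalg.norm(z2, axis=1, keepdims=True)).T
P = sinkhorn_plan(sim)
print(np.argmax(P, axis=1))                   # mass concentrates on aligned pairs
```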
Through this framework, it is possible to leverage tools in OT to build unbalanced losses to handle noisy views and customize the representation space by changing the constraints on alignment.\nBy reframing contrastive learning as an alignment problem and leveraging existing optimization tools for OT, our work provides new insights and connections between different self-supervised learning models in addition to new tools that can be more easily adapted to incorporate domain knowledge into learning.", "pdf": "https://openreview.net/pdf/c7c6656445f089a46f221ffe1c65abf802905470.pdf"} {"title": "Data Free Backdoor Attacks", "url": "https://openreview.net/forum?id=pX71TM2MLh", "detail_url": "https://openreview.net/forum?id=pX71TM2MLh", "authors": "Bochuan Cao,Jinyuan Jia,Chuxuan Hu,Wenbo Guo,Zhen Xiang,Jinghui Chen,Bo Li,Dawn Song", "tags": "NIPS 2024,Poster", "abstract": "Backdoor attacks aim to inject a backdoor into a classifier such that it predicts any input with an attacker-chosen backdoor trigger as an attacker-chosen target class. \nExisting backdoor attacks require either retraining the classifier with some clean data or modifying the model's architecture.\nAs a result, they are 1) not applicable when clean data is unavailable, 2) less efficient when the model is large, and 3) less stealthy due to architecture changes. \nIn this work, we propose DFBA, a novel retraining-free and data-free backdoor attack without changing the model architecture. \nTechnically, our proposed method modifies a few parameters of a classifier to inject a backdoor. \nThrough theoretical analysis, we verify that our injected backdoor is provably undetectable and unremovable by various state-of-the-art defenses under mild assumptions. \nOur evaluation on multiple datasets further demonstrates that our injected backdoor: 1) incurs negligible classification loss, 2) achieves 100\\% attack success rates, and 3) bypasses six existing state-of-the-art defenses. \nMoreover, our comparison with a state-of-the-art non-data-free backdoor attack shows our attack is more stealthy and effective against various defenses while achieving less classification accuracy loss.\nWe will release our code upon paper acceptance.", "pdf": "https://openreview.net/pdf/626a446188f0b3dac28d22823764a7655735a226.pdf"} {"title": "Equivariant Blurring Diffusion for Hierarchical Molecular Conformer Generation", "url": "https://openreview.net/forum?id=Aj0Zf28l6o", "detail_url": "https://openreview.net/forum?id=Aj0Zf28l6o", "authors": "Jiwoong Park,Yang Shen", "tags": "NIPS 2024,Poster", "abstract": "How can diffusion models process 3D geometries in a coarse-to-fine manner, akin to our multiscale view of the world?\nIn this paper, we address the question by focusing on a fundamental biochemical problem of generating 3D molecular conformers conditioned on molecular graphs in a multiscale manner. 
\nOur approach consists of two hierarchical stages: i) generation of coarse-grained fragment-level 3D structure from the molecular graph, and ii) generation of fine atomic details from the coarse-grained approximated structure while allowing the latter to be adjusted simultaneously.\nFor the challenging second stage, which demands preserving coarse-grained information while ensuring SE(3) equivariance, we introduce a novel generative model termed Equivariant Blurring Diffusion (EBD), which defines a forward process that moves towards the fragment-level coarse-grained structure by blurring the fine atomic details of conformers, and a reverse process that performs the opposite operation using equivariant networks.\nWe demonstrate the effectiveness of EBD by geometric and chemical comparison to state-of-the-art denoising diffusion models on a benchmark of drug-like molecules.\nAblation studies draw insights on the design of EBD by thoroughly analyzing its architecture, which includes the design of the loss function and the data corruption process.\nCodes are released at https://github.com/Shen-Lab/EBD.", "pdf": "https://openreview.net/pdf/18ad6fd7ccc4d430e351d06384b59c3ba44fe1a4.pdf"} {"title": "UNIT: Unifying Image and Text Recognition in One Vision Encoder", "url": "https://openreview.net/forum?id=YIxKeHQZpi", "detail_url": "https://openreview.net/forum?id=YIxKeHQZpi", "authors": "Yi Zhu,Zhou Yanpeng,Chunwei Wang,Yang Cao,Jianhua Han,Lu Hou,Hang Xu", "tags": "NIPS 2024,Poster", "abstract": "Currently, vision encoder models like Vision Transformers (ViTs) typically excel at image recognition tasks but cannot simultaneously support text recognition like human visual recognition. To address this limitation, we propose UNIT, a novel training framework aimed at UNifying Image and Text recognition within a single model. Starting with a vision encoder pre-trained with image recognition tasks, UNIT introduces a lightweight language decoder for predicting text outputs and a lightweight vision decoder to prevent catastrophic forgetting of the original image encoding capabilities. The training process comprises two stages: intra-scale pretraining and inter-scale finetuning. During intra-scale pretraining, UNIT learns unified representations from multi-scale inputs, where images and documents are at their commonly used resolution, to enable fundamental recognition capability. In the inter-scale finetuning stage, the model introduces scale-exchanged data, featuring images and documents at resolutions different from the most commonly used ones, to enhance its scale robustness. Notably, UNIT retains the original vision encoder architecture, making it cost-free in terms of inference and deployment. 
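For the EBD forward process above, a toy rendering: interpolate atom coordinates toward their fragment centroids (the coarse-grained structure) while injecting noise; the reverse model learns to undo this with equivariant networks. The schedule and noise level here are invented:

```python
# Toy "blurring" forward step: fine atomic detail -> fragment-level structure.
import numpy as np

def blur_forward(coords, frag_ids, t, noise_std=0.01, seed=0):
    """t=0 keeps fine atoms; t=1 collapses each fragment to its centroid."""
    rng = np.random.default_rng(seed)
    cents = np.zeros_like(coords)
    for f in np.unique(frag_ids):                 # per-fragment centroids
        cents[frag_ids == f] = coords[frag_ids == f].mean(axis=0)
    blurred = (1 - t) * coords + t * cents        # drift toward coarse structure
    return blurred + noise_std * rng.standard_normal(coords.shape)
```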
Experiments across multiple benchmarks confirm that our method significantly outperforms existing methods on document-related tasks (e.g., OCR and DocQA) while maintaining the performances on natural images, demonstrating its ability to substantially enhance text recognition without compromising its core image recognition capabilities.", "pdf": "https://openreview.net/pdf/1a1619bf7aa98f58a0486c2329bc820c108fbdea.pdf"} {"title": "Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification", "url": "https://openreview.net/forum?id=UXuBzWoZGK", "detail_url": "https://openreview.net/forum?id=UXuBzWoZGK", "authors": "Thomas Kwa,Drake Thomas,Adri\u00e0 Garriga-Alonso", "tags": "NIPS 2024,Poster", "abstract": "When applying reinforcement learning from human feedback (RLHF), the reward is learned from data and, therefore, always has some error. It is common to mitigate this by regularizing the policy with KL divergence from a base model, with the hope that balancing reward with regularization will achieve desirable outcomes despite this reward misspecification. We show that when the reward function has light-tailed error, optimal policies under less restrictive KL penalties achieve arbitrarily high utility. However, if error is heavy-tailed, some policies obtain arbitrarily high reward despite achieving no more utility than the base model--a phenomenon we call catastrophic Goodhart. We adapt a discrete optimization method to measure the tails of reward models, finding that they are consistent with light-tailed error. However, the pervasiveness of heavy-tailed distributions in many real-world applications indicates that future sources of RL reward could have heavy-tailed error, increasing the likelihood of reward hacking even with KL regularization.", "pdf": "https://openreview.net/pdf/be3a14bb23f805713d5c57d8b5458c2712757e8c.pdf"} {"title": "Scalable Bayesian Optimization via Focalized Sparse Gaussian Processes", "url": "https://openreview.net/forum?id=OF0YsxoRai", "detail_url": "https://openreview.net/forum?id=OF0YsxoRai", "authors": "Yunyue Wei,Vincent Zhuang,Saraswati Soedarmadji,Yanan Sui", "tags": "NIPS 2024,Poster", "abstract": "Bayesian optimization is an effective technique for black-box optimization, but its applicability is typically limited to low-dimensional and small-budget problems due to the cubic complexity of computing the Gaussian process (GP) surrogate. While various approximate GP models have been employed to scale Bayesian optimization to larger sample sizes, most suffer from overly-smooth estimation and focus primarily on problems that allow for large online samples. In this work, we argue that Bayesian optimization algorithms with sparse GPs can more efficiently allocate their representational power to relevant regions of the search space. To achieve this, we propose focalized GP, which leverages a novel variational loss function to achieve stronger local prediction, as well as FocalBO, which hierarchically optimizes the focalized GP acquisition function over progressively smaller search spaces. 
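The catastrophic-Goodhart effect described above admits a short simulation: optimize a proxy reward equal to true utility plus reward-model error, and compare light- versus heavy-tailed error. With Gaussian error, the selected policies gain real utility; with Pareto error, the top proxy scores are almost pure error (distributions and sizes invented for illustration):

```python
# Toy catastrophic Goodhart: selecting on proxy = utility + error.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
utility = rng.normal(0, 1, n)
for name, err in [("light-tailed (Gaussian)", rng.normal(0, 1, n)),
                  ("heavy-tailed (Pareto)  ", rng.pareto(1.5, n))]:
    top = np.argsort(utility + err)[-100:]        # "optimize" the proxy reward
    print(name, "mean true utility of top-100: %+.2f" % utility[top].mean())
```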
Experimental results demonstrate that FocalBO can efficiently leverage large amounts of offline and online data to achieve state-of-the-art performance on robot morphology design and to control a 585-dimensional musculoskeletal system.", "pdf": "https://openreview.net/pdf/3b2fc5597a3c16b9823e4dcaa2fc4a20f006d647.pdf"} {"title": "Transfer Learning for Latent Variable Network Models", "url": "https://openreview.net/forum?id=PK8xOCBQRO", "detail_url": "https://openreview.net/forum?id=PK8xOCBQRO", "authors": "Akhil Jalan,Arya Mazumdar,Soumendu Sundar Mukherjee,Purnamrita Sarkar", "tags": "NIPS 2024,Poster", "abstract": "We study transfer learning for estimation in latent variable network models. In our setting, the conditional edge probability matrices given the latent variables are represented by $P$ for the source and $Q$ for the target. We wish to estimate $Q$ given two kinds of data: (1) edge data from a subgraph induced by an $o(1)$ fraction of the nodes of $Q$, and (2) edge data from all of $P$. If the source $P$ has no relation to the target $Q$, the estimation error must be $\\Omega(1)$. However, we show that if the latent variables are shared, then vanishing error is possible. We give an efficient algorithm that utilizes the ordering of a suitably defined graph distance. Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks. Next, for the specific case of Stochastic Block Models we prove a minimax lower bound and show that a simple algorithm achieves this rate. Finally, we empirically demonstrate our algorithm's use on real-world and simulated graph transfer problems.", "pdf": "https://openreview.net/pdf/4830f6ed817c8346d73ee25f85017b31d6e4996f.pdf"} {"title": "An Offline Adaptation Framework for Constrained Multi-Objective Reinforcement Learning", "url": "https://openreview.net/forum?id=QB6CvDqa6b", "detail_url": "https://openreview.net/forum?id=QB6CvDqa6b", "authors": "Qian Lin,Zongkai Liu,Danying Mo,Chao Yu", "tags": "NIPS 2024,Poster", "abstract": "In recent years, significant progress has been made in multi-objective reinforcement learning (RL) research, which aims to balance multiple objectives by incorporating preferences for each objective. In most existing studies, specific preferences must be provided during deployment to indicate the desired policies explicitly. However, designing these preferences depends heavily on human prior knowledge, which is typically obtained through extensive observation of high-performing demonstrations with expected behaviors. In this work, we propose a simple yet effective offline adaptation framework for multi-objective RL problems without assuming handcrafted target preferences, but only given several demonstrations to implicitly indicate the preferences of expected policies. Additionally, we demonstrate that our framework can naturally be extended to meet constraints on safety-critical objectives by utilizing safe demonstrations, even when the safety thresholds are unknown. 
Empirical results on offline multi-objective and safe tasks demonstrate the capability of our framework to infer policies that align with real preferences while meeting the constraints implied by the provided demonstrations.", "pdf": "https://openreview.net/pdf/7cb3958531c21777dd36e0b964c92ade366fc766.pdf"} {"title": "Intrinsic Robustness of Prophet Inequality to Strategic Reward Signaling", "url": "https://openreview.net/forum?id=Mmcy1p15Hc", "detail_url": "https://openreview.net/forum?id=Mmcy1p15Hc", "authors": "Wei Tang,Haifeng Xu,Ruimin Zhang,Derek Zhu", "tags": "NIPS 2024,Poster", "abstract": "The prophet inequality concerns a basic optimal stopping problem and states that simple threshold stopping policies --- i.e., accepting the first reward larger than a certain threshold --- can achieve a tight $\frac{1}{2}$-approximation to the optimal prophet value. Motivated by its economic applications, this paper studies the robustness of this approximation to natural strategic manipulations in which each random reward is associated with a self-interested player who may selectively reveal his realized reward to the searcher in order to maximize his probability of being selected. \n\nWe say a threshold policy is $\alpha$(-strategically)-robust if it (a) achieves the $\alpha$-approximation to the prophet value for strategic players; and (b) meanwhile remains a $\frac{1}{2}$-approximation in the standard non-strategic setting.\nStarting with a characterization of each player's optimal information revealing strategy, we demonstrate the intrinsic robustness of prophet inequalities to strategic reward signaling through the following results:\n(1) for arbitrary reward distributions, there is a threshold policy that is $\frac{1-\frac{1}{e}}{2}$-robust, and this ratio is tight;\n(2) for i.i.d. reward distributions, there is a threshold policy that is $\frac{1}{2}$-robust, which is tight for the setting; \nand (3) for log-concave (but non-identical) reward distributions, the $\frac{1}{2}$-robustness can also be achieved under certain regularity assumptions.", "pdf": "https://openreview.net/pdf/5dda2558f65809779f2ce50bbc3fac437cf946ba.pdf"} {"title": "Sharpness-Aware Minimization Activates the Interactive Teaching's Understanding and Optimization", "url": "https://openreview.net/forum?id=Prw98p1nV0", "detail_url": "https://openreview.net/forum?id=Prw98p1nV0", "authors": "Mingwei Xu,Xiaofeng Cao,Ivor Tsang", "tags": "NIPS 2024,Poster", "abstract": "Teaching is a potentially effective approach for understanding interactions among multiple intelligences. Previous explorations have convincingly shown that teaching presents additional opportunities for observation and demonstration within the learning model, such as data distillation and selection. However, the underlying optimization principles and convergence of interactive teaching lack theoretical analysis, and in this regard co-teaching serves as a notable prototype. In this paper, we discuss its role as a reduction of the larger loss landscape derived from Sharpness-Aware Minimization (SAM). Then, we classify it as an iterative parameter estimation process using Expectation-Maximization. The convergence of this typical interactive teaching is achieved by continuously optimizing a variational lower bound on the log marginal likelihood. This lower bound represents the expected value of the log posterior distribution of the latent variables under a scaled, factorized variational distribution.
To further enhance interactive teaching's performance, we incorporate SAM's strong generalization information into interactive teaching, referred to as Sharpness Reduction Interactive Teaching (SRIT). This integration can be viewed as a novel sequential optimization process. Finally, we validate the performance of our approach through multiple experiments.", "pdf": "https://openreview.net/pdf/98ade1fd990a7c5d2b7c8f5c6e8677030fa52c6a.pdf"} {"title": "Policy-shaped prediction: avoiding distractions in model-based reinforcement learning", "url": "https://openreview.net/forum?id=hgdh4foghu", "detail_url": "https://openreview.net/forum?id=hgdh4foghu", "authors": "Miles Richard Hutson,Isaac Kauvar,Nick Haber", "tags": "NIPS 2024,Poster", "abstract": "Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL arises in scenarios in which detailed aspects of the world are highly predictable, but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods ---including DreamerV3 and DreamerPro--- with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge we develop a method for focusing the capacity of the world model through a synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.", "pdf": "https://openreview.net/pdf/85986bb7f2dd538aa4e129bdca1ba0ffe4b2e6f3.pdf"} {"title": "Recognize Any Regions", "url": "https://openreview.net/forum?id=qKfiWNHp6k", "detail_url": "https://openreview.net/forum?id=qKfiWNHp6k", "authors": "Haosen Yang,Chuofan Ma,Bin Wen,Yi Jiang,Zehuan Yuan,Xiatian Zhu", "tags": "NIPS 2024,Poster", "abstract": "Understanding the semantics of individual regions or patches of unconstrained images, such as open-world object detection, remains a critical yet challenging task in computer vision. Building on the success of powerful image-level vision-language (ViL) foundation models like CLIP, recent efforts have sought to harness their capabilities by either training a contrastive model from scratch with an extensive collection of region-label pairs or aligning the outputs of a detection model with image-level representations of region proposals. Despite notable progress, these approaches are plagued by computationally intensive training requirements, susceptibility to data noise, \nand deficiency in contextual information. To address these limitations, we explore the synergistic potential of off-the-shelf foundation models, leveraging their respective strengths in localization and semantics. We introduce a novel, generic, and efficient architecture, named RegionSpot, designed to integrate position-aware localization knowledge from a localization foundation model (e.g., SAM) with semantic information from a ViL model (e.g., CLIP).
To fully exploit pretrained knowledge while minimizing training overhead, we keep both foundation models frozen, focusing optimization efforts solely on a lightweight attention-based knowledge integration module.\nExtensive experiments in open-world object recognition show that our RegionSpot achieves significant performance gains over prior alternatives, along with substantial computational savings (e.g., training our model with 3 million data points in a single day using 8 V100 GPUs). \nRegionSpot outperforms GLIP-L by 2.9 in mAP on the LVIS val set, with an even larger margin of 13.1 AP for more challenging and rare categories, and a 2.5 AP increase on ODinW. Furthermore, it exceeds GroundingDINO-L by 11.0 AP for rare categories on the LVIS minival set.", "pdf": "https://openreview.net/pdf/e158974032bb53bfca7244c8a0e2c67406654e37.pdf"} {"title": "Oracle-Efficient Reinforcement Learning for Max Value Ensembles", "url": "https://openreview.net/forum?id=KLL70pTQ17", "detail_url": "https://openreview.net/forum?id=KLL70pTQ17", "authors": "Marcel Hussing,Michael Kearns,Aaron Roth,Sikata Bela Sengupta,Jessica Sorrell", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning (RL) in large or infinite state spaces is notoriously challenging, both theoretically (where worst-case sample and computational complexities must scale with state space cardinality) and experimentally (where function approximation and policy gradient techniques often scale poorly and suffer from instability and high variance). One line of research attempting to address these difficulties\nmakes the natural assumption that we are given a collection of base or *constituent* policies (possibly heuristic) upon which we would like to improve in a scalable manner. In this work we aim to compete with the *max-following policy*, which at each state follows the action of whichever constituent policy has the highest value. The max-following policy is always at least as good as the best constituent policy, and may be considerably better. Our main result is an efficient algorithm that learns to compete with the max-following policy, given only access to the constituent policies (but not their value functions). In contrast to prior work in similar settings, our theoretical results require only the minimal assumption of an ERM oracle for value function approximation for the constituent policies (and not the global optimal policy or the max-following policy itself) on samplable distributions. We illustrate our algorithm's experimental effectiveness and behavior on several robotic simulation testbeds.", "pdf": "https://openreview.net/pdf/7b8153019b588df4f07fea0a2e38cf08689af1e8.pdf"} {"title": "CRT-Fusion: Camera, Radar, Temporal Fusion Using Motion Information for 3D Object Detection", "url": "https://openreview.net/forum?id=EdXW71LvKE", "detail_url": "https://openreview.net/forum?id=EdXW71LvKE", "authors": "Jisong Kim,Minjae Seong,Jun Won Choi", "tags": "NIPS 2024,Poster", "abstract": "Accurate and robust 3D object detection is a critical component in autonomous vehicles and robotics. While recent radar-camera fusion methods have made significant progress by fusing information in the bird's-eye view (BEV) representation, they often struggle to effectively capture the motion of dynamic objects, leading to limited performance in real-world scenarios. In this paper, we introduce CRT-Fusion, a novel framework that integrates temporal information into radar-camera fusion to address this challenge.
Our approach comprises three key modules: Multi-View Fusion (MVF), Motion Feature Estimator (MFE), and Motion Guided Temporal Fusion (MGTF). The MVF module fuses radar and image features within both the camera view and bird's-eye view, thereby generating a more precise unified BEV representation. The MFE module conducts two simultaneous tasks: estimation of pixel-wise velocity information and BEV segmentation. Based on the velocity and the occupancy score map obtained from the MFE module, the MGTF module aligns and fuses feature maps across multiple timestamps in a recurrent manner. By considering the motion of dynamic objects, CRT-Fusion can produce robust BEV feature maps, thereby improving detection accuracy and robustness. Extensive evaluations on the challenging nuScenes dataset demonstrate that CRT-Fusion achieves state-of-the-art performance for radar-camera-based 3D object detection. Our approach outperforms the previous best method in terms of NDS by +1.7%, while also surpassing the leading approach in mAP by +1.4%. These significant improvements in both metrics showcase the effectiveness of our proposed fusion strategy in enhancing the reliability and accuracy of 3D object detection.", "pdf": "https://openreview.net/pdf/4324eb5080025d064864306ccd9a5422f55d18ca.pdf"} {"title": "On the Surprising Effectiveness of Attention Transfer for Vision Transformers", "url": "https://openreview.net/forum?id=5DwqmoCE1N", "detail_url": "https://openreview.net/forum?id=5DwqmoCE1N", "authors": "Alexander Cong Li,Yuandong Tian,Beidi Chen,Deepak Pathak,Xinlei Chen", "tags": "NIPS 2024,Poster", "abstract": "Conventional wisdom suggests that pre-training Vision Transformers (ViT) improves downstream performance by learning useful representations. Is this actually true? We investigate this question and find that the features and representations learned during pre-training are not essential. Surprisingly, using only the attention patterns from pre-training (i.e., guiding how information flows between tokens) is sufficient for models to learn high quality features from scratch and achieve comparable downstream performance. We show this by introducing a simple method called attention transfer, where only the attention patterns from a pre-trained teacher ViT are transferred to a student, either by copying or distilling the attention maps. Since attention transfer lets the student learn its own features, ensembling it with a fine-tuned teacher also further improves accuracy on ImageNet. We systematically study various aspects of our findings on the sufficiency of attention maps, including distribution shift settings where they underperform fine-tuning. We hope our exploration provides a better understanding of what pre-training accomplishes and leads to a useful alternative to the standard practice of fine-tuning.", "pdf": "https://openreview.net/pdf/3cb6ab79a8ac05e29b69c4600053fd98fe84f7f2.pdf"} {"title": "A Canonicalization Perspective on Invariant and Equivariant Learning", "url": "https://openreview.net/forum?id=jjcY92FX4R", "detail_url": "https://openreview.net/forum?id=jjcY92FX4R", "authors": "George Ma,Yifei Wang,Derek Lim,Stefanie Jegelka,Yisen Wang", "tags": "NIPS 2024,Poster", "abstract": "In many applications, we desire neural networks to exhibit invariance or equivariance to certain groups due to symmetries inherent in the data. 
Recently, frame-averaging methods have emerged as a unified framework for attaining symmetries efficiently by averaging over input-dependent subsets of the group, i.e., frames. What we currently lack is a principled understanding of the design of frames. In this work, we introduce a canonicalization perspective that provides an essential and complete view of the design of frames. Canonicalization is a classic approach for attaining invariance by mapping inputs to their canonical forms. We show that there exists an inherent connection between frames and canonical forms. Leveraging this connection, we can efficiently compare the complexity of frames as well as determine the optimality of certain frames. Guided by this principle, we design novel frames for eigenvectors that are strictly superior to existing methods --- some are even optimal --- both theoretically and empirically. The reduction to the canonicalization perspective further uncovers equivalences between previous methods. These observations suggest that canonicalization provides a fundamental understanding of existing frame-averaging methods and unifies existing equivariant and invariant learning methods. Code is available at https://github.com/PKU-ML/canonicalization.", "pdf": "https://openreview.net/pdf/6fdab7559bff9da67419fc255751d38598a58ae7.pdf"} {"title": "SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions", "url": "https://openreview.net/forum?id=i816TeqgVh", "detail_url": "https://openreview.net/forum?id=i816TeqgVh", "authors": "Zizhao Wang,Jiaheng Hu,Caleb Chuck,Stephen Chen,Roberto Martín-Martín,Amy Zhang,Scott Niekum,Peter Stone", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised skill discovery carries the promise that an intelligent agent can learn reusable skills through autonomous, reward-free interactions with environments. Existing unsupervised skill discovery methods learn skills by encouraging distinguishable behaviors that cover diverse states. However, in complex environments with many state factors (e.g., household environments with many objects), learning skills that cover all possible states is impossible, and naively encouraging state diversity often leads to simple skills that are not ideal for solving downstream tasks. This work introduces Skill Discovery from Local Dependencies (SkiLD), which leverages state factorization as a natural inductive bias to guide the skill learning process. The key intuition guiding SkiLD is that skills that induce \textbf{diverse interactions} between state factors are often more valuable for solving downstream tasks. To this end, SkiLD develops a novel skill learning objective that explicitly encourages the mastering of skills that effectively induce different interactions within an environment.
We evaluate SkiLD in several domains with challenging, long-horizon sparse reward tasks including a realistic simulated household robot domain, where SkiLD successfully learns skills with clear semantic meaning and shows superior performance compared to existing unsupervised reinforcement learning methods that only maximize state coverage.", "pdf": "https://openreview.net/pdf/02c32e154257dc57f63f03ad314a8242d5f3dbdb.pdf"} {"title": "SpaFL: Communication-Efficient Federated Learning With Sparse Models And Low Computational Overhead", "url": "https://openreview.net/forum?id=dAXuir2ets", "detail_url": "https://openreview.net/forum?id=dAXuir2ets", "authors": "Minsu Kim,Walid Saad,Merouane Abdelkader DEBBAH,Choong Seon Hong", "tags": "NIPS 2024,Poster", "abstract": "The large communication and computation overhead of federated learning (FL) is one of the main challenges facing its practical deployment over resource-constrained clients and systems. In this work, SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead. In SpaFL, a trainable threshold is defined for each filter/neuron to prune all of its connected \nparameters, thereby leading to structured sparsity. To optimize the pruning process itself, only thresholds are communicated between a server and clients instead of parameters, thereby learning how to prune. Further, global thresholds are used to update model parameters by extracting aggregated parameter importance. The generalization bound of SpaFL is also derived, thereby providing key insights into the relation between sparsity and performance. Experimental results show that SpaFL improves accuracy while requiring far fewer communication and computing resources compared to sparse baselines. The code is available at https://github.com/news-vt/SpaFL_NeruIPS_2024", "pdf": "https://openreview.net/pdf/74429405f3d0b3fdbe3bf31f9b09e1ea523d149d.pdf"} {"title": "UniAR: A Unified model for predicting human Attention and Responses on visual content", "url": "https://openreview.net/forum?id=FjssnGuHih", "detail_url": "https://openreview.net/forum?id=FjssnGuHih", "authors": "Peizhao Li,Junfeng He,Gang Li,Rachit Bhargava,Shaolei Shen,NACHIAPPAN VALLIAPPAN,Youwei Liang,Hongxiang Gu,Venky Ramachandran,Golnaz farhadi,Yang Li,Kai J Kohlhoff,Vidhya Navalpakkam", "tags": "NIPS 2024,Poster", "abstract": "Progress in human behavior modeling involves understanding both implicit, early-stage perceptual behavior, such as human attention, and explicit, later-stage behavior, such as subjective preferences or likes. Yet most prior research has focused on modeling implicit and explicit human behavior in isolation, and is often limited to a specific type of visual content. We propose UniAR -- a unified model of human attention and preference behavior across diverse visual content. UniAR leverages a multimodal transformer to predict subjective feedback, such as satisfaction or aesthetic quality, along with the underlying human attention or interaction heatmaps and viewing order. We train UniAR on diverse public datasets spanning natural images, webpages, and graphic designs, and achieve SOTA performance on multiple benchmarks across various image domains and behavior modeling tasks.
Potential applications include providing instant feedback on the effectiveness of UIs/visual content, and enabling designers and content-creation models to optimize their creation for human-centric improvements.", "pdf": "https://openreview.net/pdf/3bcbe97b1b4b7ebce3be6038936511932cdd80e9.pdf"} {"title": "Hypothesis Testing the Circuit Hypothesis in LLMs", "url": "https://openreview.net/forum?id=5ai2YFAXV7", "detail_url": "https://openreview.net/forum?id=5ai2YFAXV7", "authors": "Claudia Shi,Nicolas Beltran-Velez,Achille Nazaret,Carolina Zheng,Adrià Garriga-Alonso,Andrew Jesson,Maggie Makar,David Blei", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) demonstrate surprising capabilities, but we do not understand how they are implemented. \nOne hypothesis suggests that these capabilities are primarily executed by small subnetworks within the LLM, known as circuits. But how can we evaluate this hypothesis?\nIn this paper, we formalize a set of criteria that a circuit is hypothesized to meet and develop a suite of hypothesis tests to evaluate how well circuits satisfy them. \nThe criteria focus on the extent to which the LLM's behavior is preserved, the degree of localization of this behavior, and whether the circuit is minimal.\nWe apply these tests to six circuits described in the research literature. \nWe find that synthetic circuits -- circuits that are hard-coded in the model -- align with the idealized properties. \nCircuits discovered in Transformer models satisfy the criteria to varying degrees.\nTo facilitate future empirical studies of circuits, we created the \textit{circuitry} package, a wrapper around the \textit{TransformerLens} library, which abstracts away lower-level manipulations of hooks and activations. The software is available at \url{https://github.com/blei-lab/circuitry}.", "pdf": "https://openreview.net/pdf/d42b43708ca0c06c98f6b5d7a422bd9082f54bdf.pdf"} {"title": "Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable", "url": "https://openreview.net/forum?id=i4gqCM1r3z", "detail_url": "https://openreview.net/forum?id=i4gqCM1r3z", "authors": "Martin Andres Bertran,Shuai Tang,Michael Kearns,Jamie Heather Morgenstern,Aaron Roth,Steven Wu", "tags": "NIPS 2024,Poster", "abstract": "Machine unlearning is motivated by principles of data autonomy. The premise is that a person can request to have their data's influence removed from deployed models, and those models should be updated as if they were retrained without the person's data. We show that these updates expose individuals to high-accuracy reconstruction attacks which allow the attacker to recover their data in its entirety, even when the original models are so simple that privacy risk might not otherwise have been a concern. We show how to mount a near-perfect attack on the deleted data point from linear regression models. We then generalize our attack to other loss functions and architectures, and empirically demonstrate the effectiveness of our attacks across a wide range of datasets (capturing both tabular and image data).
Our work highlights that privacy risk is significant even for extremely simple model classes when individuals can request deletion of their data from the model.", "pdf": "https://openreview.net/pdf/cb154ee097ce8ebc2a66d1f24af8d3eccfdb05a9.pdf"} {"title": "Explaining Datasets in Words: Statistical Models with Natural Language Parameters", "url": "https://openreview.net/forum?id=u5BkOgWWZW", "detail_url": "https://openreview.net/forum?id=u5BkOgWWZW", "authors": "Ruiqi Zhong,Heng Wang,Dan Klein,Jacob Steinhardt", "tags": "NIPS 2024,Poster", "abstract": "To make sense of massive data, we often first fit simplified models and then interpret the parameters; for example, we cluster the text embeddings and then interpret the mean parameters of each cluster.\nHowever, these parameters are often high-dimensional and hard to interpret.\nTo make model parameters directly interpretable, we introduce a family of statistical models---including clustering, time series, and classification models---parameterized by *natural language predicates*. \nFor example, a cluster of text about COVID could be parameterized by the predicate ``*discusses COVID*''.\nTo learn these statistical models effectively, we develop a model-agnostic algorithm that optimizes continuous relaxations of predicate parameters with gradient descent and discretizes them by prompting language models (LMs).\nFinally, we apply our framework to a wide range of problems: taxonomizing user chat dialogues, characterizing how they evolve across time, finding categories where one language model is better than the other, clustering math problems based on subareas, and explaining visual features in memorable images.\nOur framework is highly versatile, applicable to both textual and visual domains, can be easily steered to focus on specific properties (e.g. subareas), and explains sophisticated concepts that classical methods (e.g. n-gram analysis) struggle to produce.", "pdf": "https://openreview.net/pdf/6bf57dd60959ba5605f358a7a7b884d7a32c7de5.pdf"} {"title": "Faster Algorithms for User-Level Private Stochastic Convex Optimization", "url": "https://openreview.net/forum?id=hNlk9cIGo9", "detail_url": "https://openreview.net/forum?id=hNlk9cIGo9", "authors": "Andrew Lowy,Daogao Liu,Hilal Asi", "tags": "NIPS 2024,Poster", "abstract": "We study private stochastic convex optimization (SCO) under user-level differential privacy (DP) constraints. In this setting, there are $n$ users (e.g., cell phones), each possessing $m$ data items (e.g., text messages), and we need to protect the privacy of each user's entire collection of data items. Existing algorithms for user-level DP SCO are impractical in many large-scale machine learning scenarios because: (i) they make restrictive assumptions on the smoothness parameter of the loss function and require the number of users to grow polynomially with the dimension of the parameter space; or (ii) they are prohibitively slow, requiring at least $(mn)^{3/2}$ gradient computations for smooth losses and $(mn)^3$ computations for non-smooth losses. To address these limitations, we provide novel user-level DP algorithms with state-of-the-art excess risk and runtime guarantees, without stringent assumptions. First, we develop a linear-time algorithm with state-of-the-art excess risk (for a non-trivial linear-time algorithm) under a mild smoothness assumption. Our second algorithm applies to arbitrary smooth losses and achieves optimal excess risk in $\\approx (mn)^{9/8}$ gradient computations. 
Third, for non-smooth loss functions, we obtain optimal excess risk in $n^{11/8} m^{5/4}$ gradient computations. Moreover, our algorithms do not require the number of users to grow polynomially with the dimension.", "pdf": "https://openreview.net/pdf/f36c6f68a0ef60ef7d864db1e2e045e4ca2b6e24.pdf"} {"title": "Task Confusion and Catastrophic Forgetting in Class-Incremental Learning: A Mathematical Framework for Discriminative and Generative Modelings", "url": "https://openreview.net/forum?id=Tj5wJslj0R", "detail_url": "https://openreview.net/forum?id=Tj5wJslj0R", "authors": "Milad Khademi Nori,IL MIN KIM", "tags": "NIPS 2024,Poster", "abstract": "In class-incremental learning (class-IL), models must classify all previously seen classes at test time without task-IDs, leading to task confusion. Despite being a key challenge, task confusion lacks a theoretical understanding. We present a novel mathematical framework for class-IL and prove the Infeasibility Theorem, showing optimal class-IL is impossible with discriminative modeling due to task confusion. However, we establish the Feasibility Theorem, demonstrating that generative modeling can achieve optimal class-IL by overcoming task confusion. We then assess popular class-IL strategies, including regularization, bias-correction, replay, and generative classifier, using our framework. Our analysis suggests that adopting generative modeling, either for generative replay or direct classification (generative classifier), is essential for optimal class-IL.", "pdf": "https://openreview.net/pdf/57bd4f3d82cf6a55dece2ea557e36fce58d61778.pdf"} {"title": "Boundary Decomposition for Nadir Objective Vector Estimation", "url": "https://openreview.net/forum?id=f829mkQMUg", "detail_url": "https://openreview.net/forum?id=f829mkQMUg", "authors": "Ruihao Zheng,Zhenkun Wang", "tags": "NIPS 2024,Poster", "abstract": "The nadir objective vector plays a key role in solving multi-objective optimization problems (MOPs), where it is often used to normalize the objective space and guide the search. The current methods for estimating the nadir objective vector perform effectively only on specific MOPs. This paper reveals the limitations of these methods: exact methods can only work on discrete MOPs, while heuristic methods cannot handle MOPs with complicated feasible objective regions. To fill this gap, we propose a general and rigorous method, namely boundary decomposition for nadir objective vector estimation (BDNE). BDNE scalarizes the MOP into a set of boundary subproblems. By utilizing bilevel optimization, boundary subproblems are optimized and adjusted alternately, thereby refining their optimal solutions to align with the nadir objective vector. We prove that the bilevel optimization identifies the nadir objective vector under mild conditions. We compare BDNE with existing methods on various black-box MOPs.
The results conform to the theoretical analysis and show the significant potential of BDNE for real-world applications.", "pdf": "https://openreview.net/pdf/ac319a1318380908cd485270961ab07f92d9d9f3.pdf"} {"title": "OSLO: One-Shot Label-Only Membership Inference Attacks", "url": "https://openreview.net/forum?id=ZJBBeyEAyX", "detail_url": "https://openreview.net/forum?id=ZJBBeyEAyX", "authors": "Yuefeng Peng,Jaechul Roh,Subhransu Maji,Amir Houmansadr", "tags": "NIPS 2024,Poster", "abstract": "We introduce One-Shot Label-Only (OSLO) membership inference attacks (MIAs), which accurately infer a given sample's membership in a target model's training set with high precision using just a single query, where the target model only returns the predicted hard label. \n This is in contrast to state-of-the-art label-only attacks, which require $\sim6000$ queries yet achieve lower attack precision than OSLO.\n OSLO leverages transfer-based black-box adversarial attacks. The core idea is that a member sample exhibits more resistance to adversarial perturbations than a non-member. We compare OSLO against state-of-the-art label-only attacks and demonstrate that, despite requiring only one query, our method significantly outperforms previous attacks in terms of precision and true positive rate (TPR) under the same false positive rates (FPR). For example, compared to previous label-only MIAs, OSLO achieves a TPR that is at least 7$\times$ higher under a 1\% FPR and at least 22$\times$ higher under a 0.1\% FPR on CIFAR100 for a ResNet18 model. We evaluated multiple defense mechanisms against OSLO.", "pdf": "https://openreview.net/pdf/9fd3b4434a639fc69ae4bba2b397013f29cbf2df.pdf"} {"title": "End-To-End Causal Effect Estimation from Unstructured Natural Language Data", "url": "https://openreview.net/forum?id=gzQARCgIsI", "detail_url": "https://openreview.net/forum?id=gzQARCgIsI", "authors": "Nikita Dhawan,Leonardo Cotta,Karen Ullrich,Rahul Krishnan,Chris J. Maddison", "tags": "NIPS 2024,Poster", "abstract": "Knowing the effect of an intervention is critical for human decision-making, but current approaches for causal effect estimation rely on manual data collection and structuring, regardless of the causal assumptions. This increases both the cost and time-to-completion for studies. We show how large, diverse observational text data can be mined with large language models (LLMs) to produce inexpensive causal effect estimates under appropriate causal assumptions. We introduce _NATURAL_, a novel family of causal effect estimators built with LLMs that operate over datasets of unstructured text. Our estimators use LLM conditional distributions (over variables of interest, given the text data) to assist in the computation of classical estimators of causal effect. We overcome a number of technical challenges to realize this idea, such as automating data curation and using LLMs to impute missing information. We prepare six (two synthetic and four real) observational datasets, paired with corresponding ground truth in the form of randomized trials, which we use to systematically evaluate each step of our pipeline. NATURAL estimators demonstrate remarkable performance, yielding causal effect estimates that fall within 3 percentage points of their ground truth counterparts, including on real-world Phase 3/4 clinical trials.
Our results suggest that unstructured text data is a rich source of causal effect information, and NATURAL is a first step towards an automated pipeline to tap this resource.", "pdf": "https://openreview.net/pdf/3b51533646d3d910f744e6b3f9388df0f917b423.pdf"} {"title": "Semidefinite Relaxations of the Gromov-Wasserstein Distance", "url": "https://openreview.net/forum?id=rM3FFH1mqk", "detail_url": "https://openreview.net/forum?id=rM3FFH1mqk", "authors": "Junyu Chen,Binh Nguyen,Shang Hui Koh,Yong Sheng Soh", "tags": "NIPS 2024,Poster", "abstract": "The Gromov-Wasserstein (GW) distance is an extension of the optimal transport problem that allows one to match objects between incomparable spaces. At its core, the GW distance is specified as the solution of a non-convex quadratic program and is not known to be tractable to solve. In particular, existing solvers for the GW distance are only able to find locally optimal solutions. In this work, we propose a semi-definite programming (SDP) relaxation of the GW distance. The relaxation can be viewed as the Lagrangian dual of the GW distance augmented with constraints that relate to the linear and quadratic terms of transportation plans. In particular, our relaxation provides a tractable (polynomial-time) algorithm to compute globally optimal transportation plans (in some instances) together with an accompanying proof of global optimality. Our numerical experiments suggest that the proposed relaxation is strong in that it frequently computes the globally optimal solution. Our Python implementation is available at https://github.com/tbng/gwsdp.", "pdf": "https://openreview.net/pdf/27fc72a84eb27e72c7b20792e2bcf5a42dfcc79a.pdf"} {"title": "TableRAG: Million-Token Table Understanding with Language Models", "url": "https://openreview.net/forum?id=41lovPOCo5", "detail_url": "https://openreview.net/forum?id=41lovPOCo5", "authors": "Si-An Chen,Lesly Miculicich,Julian Martin Eisenschlos,Zifeng Wang,Zilong Wang,Yanfei Chen,Yasuhisa Fujii,Hsuan-Tien Lin,Chen-Yu Lee,Tomas Pfister", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in language models (LMs) have notably enhanced their ability to reason with tabular data, primarily through program-aided mechanisms that manipulate and analyze tables.\nHowever, these methods often require the entire table as input, leading to scalability challenges due to positional bias or context-length constraints.\nIn response to these challenges, we introduce TableRAG, a Retrieval-Augmented Generation (RAG) framework specifically designed for LM-based table understanding.\nTableRAG leverages query expansion combined with schema and cell retrieval to pinpoint crucial information before providing it to the LMs.\nThis enables more efficient data encoding and precise retrieval, significantly reducing prompt lengths and mitigating information loss.\nWe have developed two new million-token benchmarks from the Arcade and BIRD-SQL datasets to thoroughly evaluate TableRAG's effectiveness at scale.\nOur results demonstrate that TableRAG's retrieval design achieves the highest retrieval quality, leading to new state-of-the-art performance on large-scale table understanding.", "pdf": "https://openreview.net/pdf/500503c127a0798de73d7b290c7c0f3280df87b7.pdf"} {"title": "DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning", "url": "https://openreview.net/forum?id=OPrPegYIZo", "detail_url": "https://openreview.net/forum?id=OPrPegYIZo", "authors": "Anthony Liang,Guy Tennenholtz,ChihWei Hsu,Yinlam
Chow,Erdem Biyik,Craig Boutilier", "tags": "NIPS 2024,Poster", "abstract": "We introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach to approximate inference in environments where the latent state evolves at varying rates. We model episode sessions---parts of the episode where the latent state is fixed---and propose three key modifications to existing meta-RL methods: (i) consistency of latent information within sessions, (ii) session masking, and (iii) prior latent conditioning. We demonstrate the importance of these modifications in various domains, ranging from discrete Gridworld environments to continuous-control and simulated robot assistive tasks, illustrating the efficacy of DynaMITE-RL over state-of-the-art baselines in both online and offline RL settings.", "pdf": "https://openreview.net/pdf/816ee5b0296158f0938d78ab4b84abe989d5fbfc.pdf"} {"title": "Efficient Temporal Action Segmentation via Boundary-aware Query Voting", "url": "https://openreview.net/forum?id=jij4vOVU7i", "detail_url": "https://openreview.net/forum?id=jij4vOVU7i", "authors": "Peiyao Wang,Yuewei Lin,Erik Blasch,Jie Wei,Haibin Ling", "tags": "NIPS 2024,Poster", "abstract": "Although the performance of Temporal Action Segmentation (TAS) has improved in recent years, achieving promising results often comes with a high computational cost due to dense inputs, complex model structures, and resource-intensive post-processing requirements. To improve efficiency while maintaining high performance, we present a novel perspective centered on per-segment classification. By harnessing the capabilities of Transformers, we tokenize each video segment as an instance token, endowed with intrinsic instance segmentation. To realize efficient action segmentation, we introduce BaFormer, a boundary-aware Transformer network. It employs instance queries for instance segmentation and a global query for class-agnostic boundary prediction, yielding continuous segment proposals. During inference, BaFormer employs a simple yet effective voting strategy to classify boundary-wise segments based on instance segmentation. Remarkably, as a single-stage approach, BaFormer significantly reduces the computational costs, utilizing only 6% of the running time compared to the state-of-the-art method DiffAct, while producing better or comparable accuracy on several popular benchmarks. The code for this project is publicly available at https://github.com/peiyao-w/BaFormer.", "pdf": "https://openreview.net/pdf/e567b4b19df3f5ef6569ca19f112a8d20782f12f.pdf"} {"title": "Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand", "url": "https://openreview.net/forum?id=vymkuBMLlh", "detail_url": "https://openreview.net/forum?id=vymkuBMLlh", "authors": "Md Musfiqur Rahman,Matt Jordan,Murat Kocaoglu", "tags": "NIPS 2024,Poster", "abstract": "Causal inference from observational data plays a critical role in many applications in trustworthy machine learning.\nWhile sound and complete algorithms exist to compute causal effects, many of them assume access to conditional likelihoods,\n which are difficult to estimate for high-dimensional (particularly image) data. Researchers have alleviated this issue by simulating causal relations with neural models. However, when we have high-dimensional variables in the causal graph along with some unobserved confounders, no existing work can effectively sample from the un/conditional interventional distributions.
In this work, we show how to sample from any identifiable interventional distribution given an arbitrary causal graph through a sequence of push-forward computations of conditional generative models, such as diffusion models. Our proposed algorithm follows the recursive steps of the existing likelihood-based identification algorithms to train a set of feed-forward models, and connects them in a specific way to sample from the desired distribution. We conduct experiments on a Colored MNIST dataset in which both the treatment ($X$) and the target ($Y$) variables are images, and sample from $P(y|do(x))$. Our algorithm also enables us to conduct a causal analysis to evaluate spurious correlations among input features of generative models pre-trained on the CelebA dataset. Finally, we generate high-dimensional interventional samples from the MIMIC-CXR dataset involving text and image variables.", "pdf": "https://openreview.net/pdf/d7a91169682e7030f7d0115904a50cf697b82461.pdf"} {"title": "A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation", "url": "https://openreview.net/forum?id=cs1HISJkLU", "detail_url": "https://openreview.net/forum?id=cs1HISJkLU", "authors": "Gwanghyun Kim,Alonso Martinez,Yu-Chuan Su,Brendan Jou,Jose Lezama,Agrim Gupta,Lijun Yu,Lu Jiang,Aren Jansen,Jacob C Walker,Krishna Somandepalli", "tags": "NIPS 2024,Poster", "abstract": "Training diffusion models for audiovisual sequences allows for a range of generation tasks by learning conditional distributions of various input-output combinations of the two modalities. Nevertheless, this strategy often requires training a separate model for each task, which is expensive. Here, we propose a novel training approach to effectively learn arbitrary conditional distributions in the audiovisual space. Our key contribution lies in how we parameterize the diffusion timestep in the forward diffusion process. Instead of the standard fixed diffusion timestep, we propose applying variable diffusion timesteps across the temporal dimension and across modalities of the inputs. This formulation offers flexibility to introduce variable noise levels for various portions of the input, hence the term mixture of noise levels. We propose a transformer-based audiovisual latent diffusion model and show that it can be trained in a task-agnostic fashion using our approach to enable a variety of audiovisual generation tasks at inference time. Experiments demonstrate the versatility of our method in tackling cross-modal and multimodal interpolation tasks in the audiovisual space. Notably, our proposed approach surpasses baselines in generating temporally and perceptually consistent samples conditioned on the input. Project page: neurips13025.github.io", "pdf": "https://openreview.net/pdf/5e4f46f6f36d9828240aa3f46a012311ce5787f9.pdf"} {"title": "Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations", "url": "https://openreview.net/forum?id=XHWkHFWi3k", "detail_url": "https://openreview.net/forum?id=XHWkHFWi3k", "authors": "Nikil Roashan Selvam,Amil Merchant,Stefano Ermon", "tags": "NIPS 2024,Poster", "abstract": "In diffusion models, samples are generated through an iterative refinement process, requiring hundreds of sequential model evaluations. Several recent methods have introduced approximations (fewer discretization steps or distillation) to gain speed at the cost of sample quality.
In contrast, we introduce Self-Refining Diffusion Samplers (SRDS) that retain sample quality and can improve latency at the cost of additional parallel compute. We take inspiration from the Parareal algorithm, a popular numerical method for parallel-in-time integration of differential equations. In SRDS, a quick but rough estimate of a sample is first created and then iteratively refined in parallel through Parareal iterations. SRDS is not only guaranteed to accurately solve the ODE and converge to the serial solution but also benefits from parallelization across the diffusion trajectory, enabling batched inference and pipelining. As we demonstrate for pre-trained diffusion models, the early convergence of this refinement procedure drastically reduces the number of steps required to produce a sample, speeding up generation for instance by up to 1.7x on a 25-step StableDiffusion-v2 benchmark and up to 4.3x on longer trajectories.", "pdf": "https://openreview.net/pdf/f3027e409105679c2ff8905ad7351a1f5ec1463a.pdf"} {"title": "Is Value Learning Really the Main Bottleneck in Offline RL?", "url": "https://openreview.net/forum?id=nyp59a31Ju", "detail_url": "https://openreview.net/forum?id=nyp59a31Ju", "authors": "Seohong Park,Kevin Frans,Sergey Levine,Aviral Kumar", "tags": "NIPS 2024,Poster", "abstract": "While imitation learning requires access to high-quality data, offline reinforcement learning (RL) should, in principle, perform similarly or better with substantially lower data quality by using a value function. However, current results indicate that offline RL often performs worse than imitation learning, and it is often unclear what holds back the performance of offline RL. Motivated by this observation, we aim to understand the bottlenecks in current offline RL algorithms. While poor performance of offline RL is typically attributed to an imperfect value function, we ask: *is the main bottleneck of offline RL indeed in learning the value function, or something else?* To answer this question, we perform a systematic empirical study of (1) value learning, (2) policy extraction, and (3) policy generalization in offline RL problems, analyzing how these components affect performance. We make two surprising observations. First, we find that the choice of a policy extraction algorithm significantly affects the performance and scalability of offline RL, often more so than the value learning objective. For instance, we show that common value-weighted behavioral cloning objectives (e.g., AWR) do not fully leverage the learned value function, and switching to behavior-constrained policy gradient objectives (e.g., DDPG+BC) often leads to substantial improvements in performance and scalability. Second, we find that a big barrier to improving offline RL performance is often imperfect policy generalization on test-time states out of the support of the training data, rather than policy learning on in-distribution states. We then show that the use of suboptimal but high-coverage data or test-time policy training techniques can address this generalization issue in practice. Specifically, we propose two simple test-time policy improvement methods and show that these methods lead to better performance.", "pdf": "https://openreview.net/pdf/d91da7edfb55832deba6ddea4345f1b3e29cb1b5.pdf"} {"title": "Chain of Thoughtlessness? 
An Analysis of CoT in Planning", "url": "https://openreview.net/forum?id=kPBEAZU5Nm", "detail_url": "https://openreview.net/forum?id=kPBEAZU5Nm", "authors": "Kaya Stechly,Karthik Valmeekam,Subbarao Kambhampati", "tags": "NIPS 2024,Poster", "abstract": "Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated with chain of thought prompting--a method of demonstrating solution procedures--with the intuition that it is possible to in-context teach an LLM an algorithm for solving the problem.\nThis paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain, and examines the performance of two state-of-the-art LLMs across two axes: generality of examples given in the prompt, and complexity of problems queried with each prompt. While our problems are very simple, we find meaningful performance improvements from chain of thought prompts only when those prompts are exceedingly specific to their problem class, and those improvements quickly deteriorate as the size $n$ of the query-specified stack grows past the size of the stacks shown in the examples.\nWe also create scalable variants of three domains commonly studied in previous CoT papers and demonstrate the existence of similar failure modes.\nOur results hint that, contrary to previous claims in the literature, CoT's performance improvements do not stem from the model learning general algorithmic procedures via demonstrations but depend on carefully engineering highly problem-specific prompts. This spotlights drawbacks of chain of thought, especially the sharp tradeoff between possible performance gains and the amount of human labor necessary to generate examples with correct reasoning traces.", "pdf": "https://openreview.net/pdf/571aa30896a391c557d296759a7f6b04f53b9ed2.pdf"} {"title": "Understanding the Gains from Repeated Self-Distillation", "url": "https://openreview.net/forum?id=gMqaKJCOCB", "detail_url": "https://openreview.net/forum?id=gMqaKJCOCB", "authors": "Divyansh Pareek,Simon Shaolei Du,Sewoong Oh", "tags": "NIPS 2024,Poster", "abstract": "Self-Distillation is a special type of knowledge distillation where the student model has the same architecture as the teacher model. Despite using the same architecture and the same training data, self-distillation has been empirically observed to improve performance, especially when applied repeatedly. For such a process, there is a fundamental question of interest: How much gain is possible by applying multiple steps of self-distillation? To investigate this relative gain, we propose using the simple but canonical task of linear regression. Our analysis shows that the excess risk achieved by multi-step self-distillation can significantly improve upon a single step of self-distillation, reducing the excess risk by a factor of $d$, where $d$ is the input dimension.
Empirical results on regression tasks from the UCI repository show a reduction in the learnt model's risk (MSE) by up to $47$%.", "pdf": "https://openreview.net/pdf/d7ff32ece89462affe997f6189eed65267fa5bc3.pdf"} {"title": "Recursive Introspection: Teaching Language Model Agents How to Self-Improve", "url": "https://openreview.net/forum?id=DRC9pZwBwR", "detail_url": "https://openreview.net/forum?id=DRC9pZwBwR", "authors": "Yuxiao Qu,Tianjun Zhang,Naman Garg,Aviral Kumar", "tags": "NIPS 2024,Poster", "abstract": "A central piece in enabling intelligent agentic behavior in foundation models is to make them capable of introspecting upon their behavior, reasoning, and correcting their mistakes as more computation or interaction is available. Even the strongest proprietary large language models (LLMs) do not quite exhibit the ability to continually improve their responses sequentially. In this paper, we develop $\textbf{RISE:}$ $\textbf{R}$ecursive $\textbf{I}$ntro$\textbf{S}$p$\textbf{E}$ction, an approach for fine-tuning LLMs to introduce this capability, despite prior work hypothesizing that this capability may not be possible to attain. Our approach prescribes an iterative fine-tuning procedure, which attempts to teach the model how to alter its response after having executed previously unsuccessful attempts to solve a hard test-time problem, optionally with additional environment feedback. RISE poses fine-tuning for a single-turn prompt as solving a multi-turn Markov decision process (MDP), where the initial state is the prompt. Inspired by principles in online imitation and offline reinforcement learning, we propose strategies for multi-turn data collection and training so as to imbue an LLM with the capability to recursively detect and correct its previous mistakes in subsequent iterations. Our experiments show that RISE enables Llama2, Llama3, and Mistral models to improve themselves with more turns on reasoning tasks, outperforming several single-turn strategies given an equal amount of inference-time computation. We also find that RISE scales well, often attaining larger benefits with more capable models, without disrupting one-turn abilities as a result of expressing more complex distributions.", "pdf": "https://openreview.net/pdf/f50ad94a939b176eb3bdf712e863034fc3076193.pdf"} {"title": "Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification", "url": "https://openreview.net/forum?id=wqs2RMq4CW", "detail_url": "https://openreview.net/forum?id=wqs2RMq4CW", "authors": "Haolin Liu,Artin Tajdini,Andrew Wagenmaker,Chen-Yu Wei", "tags": "NIPS 2024,Poster", "abstract": "In linear bandits, how can a learner effectively learn when facing corrupted rewards? While significant work has explored this question, a holistic understanding across different adversarial models and corruption measures is lacking, as is a full characterization of the minimax regret bounds. In this work, we compare two types of corruptions commonly considered: strong corruption, where the corruption level depends on the learner's chosen action, and weak corruption, where the corruption level does not depend on the learner's chosen action. We provide a unified framework to analyze these corruptions. For stochastic linear bandits, we fully characterize the gap between the minimax regret under strong and weak corruptions. We also initiate the study of corrupted adversarial linear bandits, obtaining upper and lower bounds with matching dependencies on the corruption level.
Next, we reveal a connection between corruption-robust learning and learning with gap-dependent misspecification---a setting first studied by Liu et al. (2023a), where the misspecification level of an action or policy is proportional to its suboptimality. We present a general reduction that enables any corruption-robust algorithm to handle gap-dependent misspecification. This allows us to recover the results of Liu et al. (2023a) in a black-box manner and significantly generalize them to settings like linear MDPs, yielding the first results for gap-dependent misspecification in reinforcement learning. However, this general reduction does not attain the optimal rate for gap-dependent misspecification. Motivated by this, we develop a specialized algorithm that achieves optimal bounds for gap-dependent misspecification in linear bandits, thus answering an open question posed by Liu et al. (2023a).", "pdf": "https://openreview.net/pdf/d72fd0fc0d62a924ff97b58e851197a74e7f045b.pdf"} {"title": "Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line", "url": "https://openreview.net/forum?id=giXUx4VH9t", "detail_url": "https://openreview.net/forum?id=giXUx4VH9t", "authors": "Eungyeup Kim,Mingjie Sun,Christina Baek,Aditi Raghunathan,J Zico Kolter", "tags": "NIPS 2024,Poster", "abstract": "Recently, Miller et al. (2021) and Baek et al. (2022) empirically demonstrated strong linear correlations between in-distribution (ID) and out-of-distribution (OOD) accuracy and agreement. These trends, coined accuracy-on-the-line (ACL) and agreement-on-the-line (AGL), enable OOD model selection and performance estimation without labeled data. However, these phenomena also break for certain shifts, such as CIFAR10-C Gaussian Noise, posing a critical bottleneck. In this paper, we make a key finding that recent test-time adaptation (TTA) methods not only improve OOD performance but also drastically strengthen the ACL and AGL trends in models, even in shifts where models showed very weak correlations before. To analyze this, we revisit the theoretical conditions from Miller et al. (2021) that outline the types of distribution shifts needed for perfect ACL in linear models. Surprisingly, these conditions are satisfied after applying TTA to deep models in the penultimate feature embedding space. In particular, TTA collapses complex distribution shifts into ones that can be expressed by a single \"scaling\" variable in the feature space. Our results show that by combining TTA with AGL-based estimation methods, we can estimate the OOD performance of models with high precision for a broader set of distribution shifts. This gives us a simple system for selecting the best hyperparameters and adaptation strategy without any OOD labeled data. Code is available at https://github.com/EungyeupKim/TTALine.", "pdf": "https://openreview.net/pdf/d3bf71ef39b171b5e14b67edd0789e1445dd32c0.pdf"} {"title": "LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate", "url": "https://openreview.net/forum?id=o7DOGbZeyP", "detail_url": "https://openreview.net/forum?id=o7DOGbZeyP", "authors": "Anthony Fuller,Daniel Kyrollos,Yousef Yassin,James R Green", "tags": "NIPS 2024,Poster", "abstract": "High-resolution images offer more information about scenes that can improve model accuracy.
However, the dominant model architecture in computer vision, the vision transformer (ViT), cannot effectively leverage larger images without finetuning --- ViTs extrapolate poorly to more patches at test time, although transformers offer sequence-length flexibility. We attribute this shortcoming to the current patch position encoding methods, which create a distribution shift when extrapolating.\n\nWe propose a drop-in replacement for the position encoding of plain ViTs that restricts attention heads to fixed fields of view, pointed in different directions, using 2D attention masks. Our novel method, called LookHere, provides translation-equivariance, ensures attention head diversity, and limits the distribution shift that attention heads face when extrapolating. We demonstrate that LookHere improves performance on classification (avg. 1.6%), against adversarial attack (avg. 5.4%), and decreases calibration error (avg. 1.5%) --- on ImageNet without extrapolation. With extrapolation, LookHere outperforms the current SoTA position encoding method, 2D-RoPE, by 21.7% on ImageNet when trained at $224^2$ px and tested at $1024^2$ px. Additionally, we release ImageNet-HR, a high-resolution test set for improving the evaluation of high-resolution image classifiers.", "pdf": "https://openreview.net/pdf/c456ddc1b0dbd841b80ff6464ae6d63c046b2815.pdf"} {"title": "CONTRAST: Continual Multi-source Adaptation to Dynamic Distributions", "url": "https://openreview.net/forum?id=mpDbWjLzfT", "detail_url": "https://openreview.net/forum?id=mpDbWjLzfT", "authors": "Sk Miraj Ahmed,Fahim Faisal Niloy,Xiangyu Chang,Dripta S. Raychaudhuri,Samet Oymak,Amit Roy-Chowdhury", "tags": "NIPS 2024,Poster", "abstract": "Adapting to dynamic data distributions is a practical yet challenging task. One effective strategy is to use a model ensemble, which leverages the diverse expertise of different models to transfer knowledge to evolving data distributions. However, this approach faces difficulties when the dynamic test distribution is available only in small batches and without access to the original source data. To address the challenge of adapting to dynamic distributions in such practical settings, we propose continual multi-source adaptation to dynamic distributions (CONTRAST), a novel method that optimally combines multiple source models to adapt to the dynamic test data. CONTRAST has two distinguishing features. First, it efficiently computes the optimal combination weights to combine the source models to adapt to the test data distribution continuously as a function of time. Second, it identifies which of the source model parameters to update so that only the model which is most correlated to the target data is adapted, leaving the less correlated ones untouched; this mitigates the issue of ``forgetting\" the source model parameters by focusing only on the source model that exhibits the strongest correlation with the test batch distribution. Through theoretical analysis we show that the proposed method is able to optimally combine the source models and prioritize updates to the model least prone to forgetting.
Experimental analysis on diverse datasets demonstrates that the combination of multiple source models does at least as well as the best source (with hindsight knowledge), and performance does not degrade as the test data distribution changes over time (robust to forgetting).", "pdf": "https://openreview.net/pdf/e782de99a42bd7cbb46ed030b5bdbf45b070ae1f.pdf"} {"title": "A Simple yet Universal Framework for Depth Completion", "url": "https://openreview.net/forum?id=Y4tHp5Jilp", "detail_url": "https://openreview.net/forum?id=Y4tHp5Jilp", "authors": "Jin-Hwi Park,Hae-Gon Jeon", "tags": "NIPS 2024,Poster", "abstract": "Consistent depth estimation across diverse scenes and sensors is a crucial challenge in computer vision, especially when deploying machine learning models in the real world. Traditional methods depend heavily on extensive pixel-wise labeled data, which is costly and labor-intensive to acquire, and frequently struggle with scale issues across various depth sensors. In response, we define the Universal Depth Completion (UniDC) problem. We also present a baseline architecture, a simple yet effective approach tailored to estimate scene depth across a wide range of sensors and environments using minimal labeled data. \nOur approach addresses two primary challenges: generalizable knowledge of unseen scene configurations and strong adaptation to arbitrary depth sensors with various specifications. To enhance versatility in the wild, we utilize a foundation model for monocular depth estimation that provides a comprehensive understanding of 3D structures in scenes. Additionally, for fast adaptation to off-the-shelf sensors, we generate a pixel-wise affinity map based on the knowledge from the foundation model. We then adjust depth information from arbitrary sensors to the monocular depth along with the constructed affinity. Furthermore, to boost both adaptability and generality, we embed the learned features into hyperbolic space, which builds implicit hierarchical structures of 3D data from fewer examples. Extensive experiments demonstrate the proposed method's superior generalization capabilities for the UniDC problem over state-of-the-art depth completion methods. Source code is publicly available at https://github.com/JinhwiPark/UniDC.", "pdf": "https://openreview.net/pdf/8105f4aa92bb3f5ae688b7551ee814d2f3bbaa3a.pdf"} {"title": "Unconditional stability of a recurrent neural circuit implementing divisive normalization", "url": "https://openreview.net/forum?id=5lLb7aXRN9", "detail_url": "https://openreview.net/forum?id=5lLb7aXRN9", "authors": "Shivang Rawat,David Heeger,Stefano Martiniani", "tags": "NIPS 2024,Poster", "abstract": "Stability in recurrent neural models poses a significant challenge, particularly in developing biologically plausible neurodynamical models that can be seamlessly trained. Traditional cortical circuit models are notoriously difficult to train due to expansive nonlinearities in the dynamical system, leading to an optimization problem with nonlinear stability constraints that are difficult to impose. Conversely, recurrent neural networks (RNNs) excel in tasks involving sequential data but lack biological plausibility and interpretability.
In this work, we address these challenges by linking dynamic divisive normalization (DN) to the stability of \"oscillatory recurrent gated neural integrator circuits\" (ORGaNICs), a biologically plausible recurrent cortical circuit model that dynamically achieves DN and that has been shown to simulate a wide range of neurophysiological phenomena. By using the indirect method of Lyapunov, we prove the remarkable property of unconditional local stability for an arbitrary-dimensional ORGaNICs circuit when the recurrent weight matrix is the identity. We thus connect ORGaNICs to a system of coupled damped harmonic oscillators, which enables us to derive the circuit's energy function, providing a normative principle of what the circuit, and individual neurons, aim to accomplish. Further, for a generic recurrent weight matrix, we prove the stability of the 2D model and demonstrate empirically that stability holds in higher dimensions. Finally, we show that ORGaNICs can be trained by backpropagation through time without gradient clipping/scaling, thanks to its intrinsic stability property and adaptive time constants, which address the problems of exploding, vanishing, and oscillating gradients. By evaluating the model's performance on RNN benchmarks, we find that ORGaNICs outperform alternative neurodynamical models on static image classification tasks and perform comparably to LSTMs on sequential tasks.", "pdf": "https://openreview.net/pdf/80cde123e8722b065a22110db6eebbdfbc4a798b.pdf"} {"title": "OW-VISCapTor: Abstractors for Open-World Video Instance Segmentation and Captioning", "url": "https://openreview.net/forum?id=cIVj8xLVZh", "detail_url": "https://openreview.net/forum?id=cIVj8xLVZh", "authors": "Anwesa Choudhuri,Girish Chowdhary,Alex Schwing", "tags": "NIPS 2024,Poster", "abstract": "We propose the new task of open-world video instance segmentation and captioning. It requires detecting, segmenting, tracking, and describing never before seen objects with rich captions. This challenging task can be addressed by developing \"abstractors\" which connect a vision model and a language foundation model. Concretely, we connect a multi-scale visual feature extractor and a large language model (LLM) by developing an object abstractor and an object-to-text abstractor. The object abstractor, consisting of a prompt encoder and transformer blocks, introduces spatially-diverse open-world object queries to discover never before seen objects in videos. An inter-query contrastive loss further encourages the diversity of object queries. The object-to-text abstractor is augmented with masked cross-attention and acts as a bridge between the object queries and a frozen LLM to generate rich and descriptive object-centric captions for each detected object.
Our generalized approach surpasses the baseline that jointly addresses the tasks of open-world video instance segmentation and dense video object captioning by 13% on never before seen objects, and by 10% on object-centric captions.", "pdf": "https://openreview.net/pdf/047abf75fc1793c3d73e1480cfd9b6ee77d70fc1.pdf"} {"title": "Overcoming the Sim-to-Real Gap: Leveraging Simulation to Learn to Explore for Real-World RL", "url": "https://openreview.net/forum?id=JjQl8hXJAS", "detail_url": "https://openreview.net/forum?id=JjQl8hXJAS", "authors": "Andrew Wagenmaker,Kevin Huang,Liyiming Ke,Kevin Jamieson,Abhishek Gupta", "tags": "NIPS 2024,Poster", "abstract": "In order to mitigate the sample complexity of real-world reinforcement learning, common practice is to first train a policy in a simulator where samples are cheap, and then deploy this policy in the real world, with the hope that it generalizes effectively. Such \\emph{direct sim2real} transfer is not guaranteed to succeed, however, and in cases where it fails, it is unclear how to best utilize the simulator. In this work, we show that in many regimes, while direct sim2real transfer may fail, we can utilize the simulator to learn a set of \\emph{exploratory} policies which enable efficient exploration in the real world. In particular, in the setting of low-rank MDPs, we show that coupling these exploratory policies with simple, practical approaches---least-squares regression oracles and naive randomized exploration---yields a polynomial sample complexity in the real world, an exponential improvement over direct sim2real transfer, or learning without access to a simulator. To the best of our knowledge, this is the first evidence that simulation transfer yields a provable gain in reinforcement learning in settings where direct sim2real transfer fails. We validate our theoretical results on several realistic robotic simulators and a real-world robotic sim2real task, demonstrating that transferring exploratory policies can yield substantial gains in practice as well.", "pdf": "https://openreview.net/pdf/e749c63c44906a2b940f81830331ae3d5f02f741.pdf"} {"title": "Qualitative Mechanism Independence", "url": "https://openreview.net/forum?id=RE5LSV8QYH", "detail_url": "https://openreview.net/forum?id=RE5LSV8QYH", "authors": "Oliver Ethan Richardson,Spencer J Peters,Joseph Halpern", "tags": "NIPS 2024,Poster", "abstract": "We define what it means for a joint probability distribution to be compatible with a set of independent causal mechanisms, at a qualitative level\u2014or, more precisely, with a directed hypergraph $\\mathcal A$, which is the qualitative structure of a probabilistic dependency graph (PDG). When $\\mathcal A$ represents a qualitative Bayesian network, QIM-compatibility with $\\mathcal A$ reduces to satisfying the appropriate conditional independencies. But giving semantics to hypergraphs using QIM-compatibility lets us do much more. For one thing, we can capture functional dependencies. For another, we can capture important aspects of causality using compatibility: we can use compatibility to understand cyclic causal graphs, and to demonstrate structural compatibility, we must essentially produce a causal model. Finally, compatibility has deep connections to information theory.
Applying compatibility to cyclic structures helps to clarify a longstanding conceptual issue in information theory.", "pdf": "https://openreview.net/pdf/0f664f7585fb4d89c8626b0f25948abcb1290e85.pdf"} {"title": "Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference", "url": "https://openreview.net/forum?id=PoCs4jq7cV", "detail_url": "https://openreview.net/forum?id=PoCs4jq7cV", "authors": "Benjamin Eysenbach,Vivek Myers,Russ Salakhutdinov,Sergey Levine", "tags": "NIPS 2024,Poster", "abstract": "Given time series data, how can we answer questions like ``what will happen in the future?'' and ``how did we get here?'' These sorts of probabilistic inference questions are challenging when observations are high-dimensional. In this paper, we show how these questions can have compact, closed form solutions in terms of learned representations. The key idea is to apply a variant of contrastive learning to time series data. Prior work already shows that the representations learned by contrastive learning encode a probability ratio. By extending prior work to show that the marginal distribution over representations is Gaussian, we can then prove that the joint distribution of representations is also Gaussian. Taken together, these results show that representations learned via temporal contrastive learning follow a Gauss-Markov chain, a graphical model where inference (e.g., prediction, planning) over representations corresponds to inverting a low-dimensional matrix. In one special case, inferring intermediate representations will be equivalent to interpolating between the learned representations. We validate our theory using numerical simulations on tasks with up to 46 dimensions.", "pdf": "https://openreview.net/pdf/5c5066a4b5b8b3f0f1c5836336c061c373e5a190.pdf"} {"title": "Pricing and Competition for Generative AI", "url": "https://openreview.net/forum?id=8LbJfEjIrT", "detail_url": "https://openreview.net/forum?id=8LbJfEjIrT", "authors": "Rafid Mahmood", "tags": "NIPS 2024,Poster", "abstract": "Compared to classical machine learning (ML) models, generative models offer a new usage paradigm where (i) a single model can be used for many different tasks out-of-the-box; (ii) users interact with this model over a series of natural language prompts; and (iii) the model is ideally evaluated on binary user satisfaction with respect to model outputs. Given these characteristics, we explore the problem of how developers of new generative AI software can release and price their technology. We first develop a comparison of two different models for a specific task with respect to user cost-effectiveness. We then model the pricing problem of generative AI software as a game between two different companies who sequentially release their models before users choose their preferred model for each task. Here, the price optimization problem becomes piecewise continuous where the companies must choose a subset of the tasks on which to be cost-effective and forgo revenue for the remaining tasks. In particular, we reveal the value of market information by showing that a company that deploys later after knowing its competitor\u2019s price can always secure cost-effectiveness on at least one task, whereas the company that is first to market must price its model in a way that incentivizes higher prices from the latecomer in order to gain revenue.
Most importantly, we find that if the different tasks are sufficiently similar, the first-to-market model may become cost-ineffective on all tasks regardless of how this technology is priced.", "pdf": "https://openreview.net/pdf/2a1a36420317e5da9e570ef1b33d018bdaeded34.pdf"} {"title": "Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents", "url": "https://openreview.net/forum?id=Vq2kzpig8v", "detail_url": "https://openreview.net/forum?id=Vq2kzpig8v", "authors": "John Luoyu Zhou,Weizhe Hong,Jonathan Kao", "tags": "NIPS 2024,Poster", "abstract": "Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents. Instead, na\u00efve reinforcement learning algorithms typically converge to Pareto-dominated outcomes in even the simplest of social dilemmas. An emerging literature on opponent shaping has demonstrated the ability to reach prosocial outcomes by influencing the learning of other agents. However, such methods differentiate through the learning step of other agents or optimize for meta-game dynamics, which rely on privileged access to opponents' learning algorithms or exponential sample complexity, respectively. To provide a learning rule-agnostic and sample-efficient alternative, we introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns. This approach seeks to modify other agents' $Q$-values by increasing their return following beneficial actions (with respect to the Reciprocator) and decreasing it after detrimental actions, guiding them towards mutually beneficial actions without directly differentiating through a model of their policy. We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning. Our code is available at https://github.com/johnlyzhou/reciprocator/.", "pdf": "https://openreview.net/pdf/45d35fd1191c62185b2756971c6888092bc3f045.pdf"} {"title": "Fair Secretaries with Unfair Predictions", "url": "https://openreview.net/forum?id=dxxj4S06YL", "detail_url": "https://openreview.net/forum?id=dxxj4S06YL", "authors": "Eric Balkanski,Will Ma,Andreas Maggiori", "tags": "NIPS 2024,Poster", "abstract": "Algorithms with predictions is a recent framework for decision-making under uncertainty that leverages the power of machine-learned predictions without making any assumption about their quality. The goal in this framework is for algorithms to achieve an improved performance when the predictions are accurate while maintaining acceptable guarantees when the predictions are erroneous. A serious concern with algorithms that use predictions is that these predictions can be biased and, as a result, cause the algorithm to make decisions that are deemed unfair. We show that this concern manifests itself in the classical secretary problem in the learning-augmented setting---the state-of-the-art algorithm can have zero probability of accepting the best candidate, which we deem unfair, despite promising to accept a candidate whose expected value is at least $\\max\\{\\Omega (1) , 1 - O(\\varepsilon)\\}$ times the optimal value, where $\\varepsilon$ is the prediction error.\nWe show how to preserve this promise while also guaranteeing to accept the best candidate with probability $\\Omega(1)$. 
Our algorithm and analysis are based on a new ``pegging'' idea that diverges from existing works and simplifies/unifies some of their results. Finally, we extend our results to the $k$-secretary problem and complement our theoretical analysis with experiments.", "pdf": "https://openreview.net/pdf/80c675fc8ed958e7a8de38f6c3bcd921119ed877.pdf"} {"title": "Fully Unconstrained Online Learning", "url": "https://openreview.net/forum?id=BtCrHwiBHP", "detail_url": "https://openreview.net/forum?id=BtCrHwiBHP", "authors": "Ashok Cutkosky,Zakaria Mhammedi", "tags": "NIPS 2024,Poster", "abstract": "We provide a technique for OLO that obtains regret $G\\|w_\\star\\|\\sqrt{T\\log(\\|w_\\star\\|G\\sqrt{T})} + \\|w_\\star\\|^2 + G^2$ on $G$-Lipschitz losses for any comparison point $w_\\star$ without knowing either $G$ or $\\|w_\\star\\|$. Importantly, this matches the optimal bound $G\\|w_\\star\\|\\sqrt{T}$ available with such knowledge (up to logarithmic factors), unless either $\\|w_\\star\\|$ or $G$ is so large that even $G\\|w_\\star\\|\\sqrt{T}$ is roughly linear in $T$. Thus, at a high level, it matches the optimal bound in all cases in which one can achieve sublinear regret.", "pdf": "https://openreview.net/pdf/e8ad1c22b97a575228aaa4ae662d40d1582160af.pdf"} {"title": "Advection Augmented Convolutional Neural Networks", "url": "https://openreview.net/forum?id=jgpWXnXdME", "detail_url": "https://openreview.net/forum?id=jgpWXnXdME", "authors": "Niloufar Zakariaei,Siddharth Rout,Eldad Haber,Moshe Eliasof", "tags": "NIPS 2024,Poster", "abstract": "Many problems in physical sciences are characterized by the prediction of space-time sequences. Such problems range from weather prediction to the analysis of disease propagation and video prediction. Modern techniques for the solution of these problems typically combine a Convolutional Neural Network (CNN) architecture with a time prediction mechanism. However, oftentimes, such approaches underperform in the long-range propagation of information and lack explainability. In this work, we introduce a physically inspired architecture for the solution of such problems. Namely, we propose to augment CNNs with advection by designing a novel semi-Lagrangian push operator. We show that the proposed operator allows for the non-local transformation of information compared with standard convolutional kernels. We then complement it with Reaction and Diffusion neural components to form a network that mimics the Reaction-Advection-Diffusion equation, in high dimensions. We demonstrate the effectiveness of our network on a number of spatio-temporal datasets. Our code is available at https://github.com/Siddharth-Rout/deepADRnet.", "pdf": "https://openreview.net/pdf/88edfa504e2e58cce52fc7bcaaaba0c3c6613dbc.pdf"} {"title": "Nearly Minimax Optimal Submodular Maximization with Bandit Feedback", "url": "https://openreview.net/forum?id=Vn0FWRImra", "detail_url": "https://openreview.net/forum?id=Vn0FWRImra", "authors": "Artin Tajdini,Lalit K Jain,Kevin Jamieson", "tags": "NIPS 2024,Poster", "abstract": "We consider maximizing an unknown monotonic, submodular set function $f: 2^{[n]} \\rightarrow [0,1]$ with cardinality constraint under stochastic bandit feedback. \n At each time $t=1,\\dots,T$ the learner chooses a set $S_t \\subset [n]$ with $|S_t| \\leq k$ and receives reward $f(S_t) + \\eta_t$ where $\\eta_t$ is mean-zero sub-Gaussian noise.
\n The objective is to minimize the learner's regret with respect to an approximation of the maximum $f(S_*)$ with $|S_*| = k$, obtained through robust greedy maximization of $f$. \n To date, the best regret bound in the literature scales as $k n^{1/3} T^{2/3}$. \n And by trivially treating every set as a unique arm one deduces that $\\sqrt{ {n \\choose k} T }$ is also achievable using standard multi-armed bandit algorithms. \n In this work, we establish the first minimax lower bound for this setting that scales like $\\tilde{\\Omega}(\\min_{L \\le k}(L^{1/3}n^{1/3}T^{2/3} + \\sqrt{{n \\choose k - L}T}))$. For a slightly restricted algorithm class, we prove a stronger regret lower bound of $\\tilde{\\Omega}(\\min_{L \\le k}(Ln^{1/3}T^{2/3} + \\sqrt{{n \\choose k - L}T}))$.\n Moreover, we propose an algorithm Sub-UCB that achieves regret $\\tilde{\\mathcal{O}}(\\min_{L \\le k}(Ln^{1/3}T^{2/3} + \\sqrt{{n \\choose k - L}T}))$ capable of matching the lower bound on regret for the restricted class up to logarithmic factors.", "pdf": "https://openreview.net/pdf/e074cce8caa93f256b9c2fbbb4c0f4b210013932.pdf"} {"title": "Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers", "url": "https://openreview.net/forum?id=WJ04ZX8txM", "detail_url": "https://openreview.net/forum?id=WJ04ZX8txM", "authors": "Yibo Jiang,Goutham Rajendran,Pradeep Kumar Ravikumar,Bryon Aragam", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have the capacity to store and recall facts. Through experimentation with open-source models, we observe that this ability to retrieve facts can be easily manipulated by changing contexts, even without altering their factual meanings. These findings highlight that LLMs might behave like an associative memory model where certain tokens in the contexts serve as clues to retrieving facts. We mathematically explore this property by studying how transformers, the building blocks of LLMs, can complete such memory tasks. We study a simple latent concept association problem with a one-layer transformer and we show theoretically and empirically that the transformer gathers information using self-attention and uses the value matrix for associative memory.", "pdf": "https://openreview.net/pdf/b09c3137e86216785baed4f8a020d498082266b8.pdf"} {"title": "RTify: Aligning Deep Neural Networks with Human Behavioral Decisions", "url": "https://openreview.net/forum?id=nTJeOXlWyV", "detail_url": "https://openreview.net/forum?id=nTJeOXlWyV", "authors": "Yu-Ang Cheng,Ivan F Rodriguez Rodriguez,Sixuan Chen,Kohitij Kar,Takeo Watanabe,Thomas Serre", "tags": "NIPS 2024,Poster", "abstract": "Current neural network models of primate vision focus on replicating overall levels of behavioral accuracy, often neglecting perceptual decisions' rich, dynamic nature. Here, we introduce a novel computational framework to model the dynamics of human behavioral choices by learning to align the temporal dynamics of a recurrent neural network (RNN) to human reaction times (RTs). We describe an approximation that allows us to constrain the number of time steps an RNN takes to solve a task with human RTs. The approach is extensively evaluated against various psychophysics experiments. We also show that the approximation can be used to optimize an ``ideal-observer'' RNN model to achieve an optimal tradeoff between speed and accuracy without human data. The resulting model is found to account well for human RT data. 
Finally, we use the approximation to train a deep learning implementation of the popular Wong-Wang decision-making model. The model is integrated with a convolutional neural network (CNN) model of visual processing and evaluated using both artificial and natural image stimuli. Overall, we present a novel framework that helps align current vision models with human behavior, bringing us closer to an integrated model of human vision.", "pdf": "https://openreview.net/pdf/6db9515e0b18b080a2e152c67101dadf6a756c37.pdf"} {"title": "Fast Sampling via Discrete Non-Markov Diffusion Models with Predetermined Transition Time", "url": "https://openreview.net/forum?id=KkYZmepjHn", "detail_url": "https://openreview.net/forum?id=KkYZmepjHn", "authors": "Zixiang Chen,Huizhuo Yuan,Yongqian Li,Yiwen Kou,Junkai Zhang,Quanquan Gu", "tags": "NIPS 2024,Poster", "abstract": "Discrete diffusion models have emerged as powerful tools for high-quality data generation. Despite their success in discrete spaces, such as text generation tasks, the acceleration of discrete diffusion models remains under-explored. In this paper, we propose discrete non-Markov diffusion models (DNDM), which naturally induce the predetermined transition time set. This enables a training-free sampling algorithm that significantly reduces the number of function evaluations (i.e., calls to the neural network), making the sampling process much faster. Furthermore, we study the transition from finite to infinite step sampling, offering new insights into bridging the gap between discrete and continuous-time processes for discrete diffusion models. Extensive experiments on natural language generation and machine translation tasks demonstrate the superior performance of our method in terms of both generation speed and sample quality compared to existing methods for discrete diffusion models. Codes are available at \\url{https://github.com/uclaml/DNDM}.", "pdf": "https://openreview.net/pdf/cd03a087df1388277628e04341f404d254d386b1.pdf"} {"title": "Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning", "url": "https://openreview.net/forum?id=6KDZHgrDhG", "detail_url": "https://openreview.net/forum?id=6KDZHgrDhG", "authors": "Beyazit Yalcinkaya,Niklas Lauffer,Marcell Vazquez-Chanlatte,Sanjit A. Seshia", "tags": "NIPS 2024,Poster", "abstract": "Goal-conditioned reinforcement learning is a powerful way to control an AI agent's behavior at runtime. That said, popular goal representations, e.g., target states or natural language, are either limited to Markovian tasks or rely on ambiguous task semantics. We propose representing temporal goals using compositions of deterministic finite automata (cDFAs) and use cDFAs to guide RL agents. cDFAs balance the need for formal temporal semantics with ease of interpretation: if one can understand a flow chart, one can understand a cDFA. On the other hand, cDFAs form a countably infinite concept class with Boolean semantics, and subtle changes to the automaton can result in very different tasks, making them difficult to condition agent behavior on. To address this, we observe that all paths through a DFA correspond to a series of reach-avoid tasks and propose pre-training graph neural network embeddings on \"reach-avoid derived\" DFAs. 
Through empirical evaluation, we demonstrate that the proposed pre-training method enables zero-shot generalization to various cDFA task classes and accelerated policy specialization without the myopic suboptimality of hierarchical methods.", "pdf": "https://openreview.net/pdf/c75c71b8dd1c4d79af19b4c0988ac2b32d3a451c.pdf"} {"title": "The Importance of Being Scalable: Improving the Speed and Accuracy of Neural Network Interatomic Potentials Across Chemical Domains", "url": "https://openreview.net/forum?id=Y4mBaZu4vy", "detail_url": "https://openreview.net/forum?id=Y4mBaZu4vy", "authors": "Eric Qu,Aditi S. Krishnapriyan", "tags": "NIPS 2024,Poster", "abstract": "Scaling has been a critical factor in improving model performance and generalization across various fields of machine learning.\nIt involves how a model\u2019s performance changes with increases in model size or input data, as well as how efficiently computational resources are utilized to support this growth. \nDespite successes in scaling other types of machine learning models, the study of scaling in Neural Network Interatomic Potentials (NNIPs) remains limited. NNIPs act as surrogate models for ab initio quantum mechanical calculations, predicting the energy and forces between atoms in molecules and materials based on atomic configurations. The dominant paradigm in this field is to incorporate numerous physical domain constraints into the model, such as symmetry constraints like rotational equivariance. We contend that these increasingly complex domain constraints inhibit the scaling ability of NNIPs, and such strategies are likely to cause model performance to plateau in the long run. In this work, we take an alternative approach and start by systematically studying NNIP scaling properties and strategies. Our findings indicate that scaling the model through attention mechanisms is both efficient and improves model expressivity. These insights motivate us to develop an NNIP architecture designed for scalability: the Efficiently Scaled Attention Interatomic Potential (EScAIP). \nEScAIP leverages a novel multi-head self-attention formulation within graph neural networks, applying attention at the neighbor-level representations.\nImplemented with highly-optimized attention GPU kernels, EScAIP achieves substantial gains in efficiency---at least 10x speed up in inference time, 5x less in memory usage---compared to existing NNIP models. EScAIP also achieves state-of-the-art performance on a wide range of datasets including catalysts (OC20 and OC22), molecules (SPICE), and materials (MPTrj).\nWe emphasize that our approach should be thought of as a philosophy rather than a specific model, representing a proof-of-concept towards developing general-purpose NNIPs that achieve better expressivity through scaling, and continue to scale efficiently with increased computational resources and training data.", "pdf": "https://openreview.net/pdf/24a8b4f2e3844760ba7ee75be691b2edc6db6c15.pdf"} {"title": "Length Optimization in Conformal Prediction", "url": "https://openreview.net/forum?id=E4ILjwzdEA", "detail_url": "https://openreview.net/forum?id=E4ILjwzdEA", "authors": "Shayan Kiyani,George J. Pappas,Hamed Hassani", "tags": "NIPS 2024,Poster", "abstract": "Conditional validity and length efficiency are two crucial aspects of conformal prediction (CP). Conditional validity ensures accurate uncertainty quantification for data subpopulations, while proper length efficiency ensures that the prediction sets remain informative. 
Despite significant efforts to address each of these issues individually, a principled framework that reconciles these two objectives has been missing in the CP literature. In this paper, we develop Conformal Prediction with Length-Optimization (CPL), a novel and practical framework that constructs prediction sets with (near-) optimal length while ensuring conditional validity under various classes of covariate shifts, including the key cases of marginal and group-conditional coverage. In the infinite sample regime, we provide strong duality results which indicate that CPL achieves conditional validity and length optimality. In the finite sample regime, we show that CPL constructs conditionally valid prediction sets. Our extensive empirical evaluations demonstrate the superior prediction set size performance of CPL compared to state-of-the-art methods across diverse real-world and synthetic datasets in classification, regression, and large language model-based multiple choice question answering. An implementation of our algorithm can be accessed at the following link: https://github.com/shayankiyani98/CP.", "pdf": "https://openreview.net/pdf/5427bae8d8296fbab1a0dd10970cd5980f9baa0b.pdf"} {"title": "Truthfulness of Calibration Measures", "url": "https://openreview.net/forum?id=cDa8hfTyGc", "detail_url": "https://openreview.net/forum?id=cDa8hfTyGc", "authors": "Nika Haghtalab,Mingda Qiao,Kunhe Yang,Eric Zhao", "tags": "NIPS 2024,Poster", "abstract": "We study calibration measures in a sequential prediction setup. In addition to rewarding accurate predictions (completeness) and penalizing incorrect ones (soundness), an important desideratum of calibration measures is *truthfulness*, a minimal condition for the forecaster not to be incentivized to exploit the system. Formally, a calibration measure is truthful if the forecaster (approximately) minimizes the expected penalty by predicting the conditional expectation of the next outcome, given the prior distribution of outcomes. We present a taxonomy of existing calibration measures. Perhaps surprisingly, all of them are far from being truthful. We introduce a new calibration measure termed the *Subsampled Smooth Calibration Error (SSCE)*, which is complete and sound, and under which truthful prediction is optimal up to a constant multiplicative factor. In contrast, under existing calibration measures, there are simple distributions on which a polylogarithmic (or even zero) penalty is achievable, while truthful prediction leads to a polynomial penalty.", "pdf": "https://openreview.net/pdf/7de0f6622f6908feabc59687a846eca21091098c.pdf"} {"title": "Simplified and Generalized Masked Diffusion for Discrete Data", "url": "https://openreview.net/forum?id=xcqSOfHt4g", "detail_url": "https://openreview.net/forum?id=xcqSOfHt4g", "authors": "Jiaxin Shi,Kehang Han,Zhe Wang,Arnaud Doucet,Michalis Titsias", "tags": "NIPS 2024,Poster", "abstract": "Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models.
We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.75 (CIFAR-10) and 3.40 (ImageNet 64x64) bits per dimension, better than autoregressive models of similar sizes.", "pdf": "https://openreview.net/pdf/d1e7b0d9ce4ef700190320cf3f0cf3558857545b.pdf"} {"title": "Learning Neural Contracting Dynamics: Extended Linearization and Global Guarantees", "url": "https://openreview.net/forum?id=YYnP3Xpv3y", "detail_url": "https://openreview.net/forum?id=YYnP3Xpv3y", "authors": "Sean Jaffe,Alexander Davydov,Deniz Lapsekili,Ambuj Singh,Francesco Bullo", "tags": "NIPS 2024,Poster", "abstract": "Global stability and robustness guarantees in learned dynamical systems are essential to ensure well-behavedness of the systems in the face of uncertainty. We present Extended Linearized Contracting Dynamics (ELCD), the first neural network-based dynamical system with global contractivity guarantees in arbitrary metrics. The key feature of ELCD is a parametrization of the extended linearization of the nonlinear vector field. In its most basic form, ELCD is guaranteed to be (i) globally exponentially stable, (ii) equilibrium contracting, and (iii) globally contracting with respect to some metric. To allow for contraction with respect to more general metrics in the data space, we train diffeomorphisms between the data space and a latent space and enforce contractivity in the latent space, which ensures global contractivity in the data space. We demonstrate the performance of ELCD on the high dimensional LASA, multi-link pendulum, and Rosenbrock datasets.", "pdf": "https://openreview.net/pdf/8cd04bb230dcace06222c3a29aa5e180682aca90.pdf"} {"title": "A New Neural Kernel Regime: The Inductive Bias of Multi-Task Learning", "url": "https://openreview.net/forum?id=APBq3KAmFa", "detail_url": "https://openreview.net/forum?id=APBq3KAmFa", "authors": "Julia B Nakhleh,Joseph Shenouda,Robert D Nowak", "tags": "NIPS 2024,Poster", "abstract": "This paper studies the properties of solutions to multi-task shallow ReLU neural network learning problems, wherein the network is trained to fit a dataset with minimal sum of squared weights. Remarkably, the solutions learned for each individual task resemble those obtained by solving a kernel method, revealing a novel connection between neural networks and kernel methods. It is known that single-task neural network training problems are equivalent to a minimum norm interpolation problem in a non-Hilbertian Banach space, and that the solutions of such problems are generally non-unique. In contrast, we prove that the solutions to univariate-input, multi-task neural network interpolation problems are almost always unique, and coincide with the solution to a minimum-norm interpolation problem in a first-order Sobolev (reproducing kernel) Hilbert space.
We also demonstrate a similar phenomenon in the multivariate-input case; specifically, we show that neural network training problems with a large number of diverse tasks are approximately equivalent to an $\\ell^2$ (Hilbert space) minimization problem over a fixed kernel determined by the optimal neurons.", "pdf": "https://openreview.net/pdf/4bef968123d583752201a92dcee34b71b3ae85db.pdf"} {"title": "Conformal Prediction for Class-wise Coverage via Augmented Label Rank Calibration", "url": "https://openreview.net/forum?id=T7dS1Ghwwu", "detail_url": "https://openreview.net/forum?id=T7dS1Ghwwu", "authors": "Yuanjie Shi,SUBHANKAR GHOSH,Taha Belkhouja,Jana Doppa,Yan Yan", "tags": "NIPS 2024,Poster", "abstract": "Conformal prediction (CP) is an emerging uncertainty quantification framework that allows us to construct a prediction set to cover the true label with a pre-specified marginal or conditional probability.\nAlthough the valid coverage guarantee has been extensively studied for classification problems, CP often produces large prediction sets which may not be practically useful.\nThis issue is exacerbated for the setting of class-conditional coverage on imbalanced classification tasks with many and/or imbalanced classes.\nThis paper proposes the Rank Calibrated Class-conditional CP (RC3P) algorithm to reduce the prediction set sizes to achieve class-conditional coverage, where the valid coverage holds for each class.\nIn contrast to the standard class-conditional CP (CCP) method that uniformly thresholds the class-wise conformity score for each class, the augmented label rank calibration step allows RC3P to selectively iterate this class-wise thresholding subroutine only for a subset of classes whose class-wise top-$k$ error is small.\nWe prove that, agnostic to the classifier and data distribution, RC3P achieves class-wise coverage. We also show that RC3P reduces the size of prediction sets compared to the CCP method. \nComprehensive experiments on multiple real-world datasets demonstrate that RC3P achieves class-wise coverage and a $26.25\\\\%$ reduction in prediction set sizes on average.", "pdf": "https://openreview.net/pdf/7ad03c77e17e962d610bd1999051ca39e4372d0d.pdf"} {"title": "GraphTrail: Translating GNN Predictions into Human-Interpretable Logical Rules", "url": "https://openreview.net/forum?id=fzlMza6dRZ", "detail_url": "https://openreview.net/forum?id=fzlMza6dRZ", "authors": "Burouj Armgaan,Manthan Dalmia,Sourav Medya,Sayan Ranu", "tags": "NIPS 2024,Poster", "abstract": "Instance-level explanation of graph neural networks (GNNs) is a well-studied area. These explainers, however, only explain an instance (e.g., a graph) and fail to uncover the combinatorial reasoning learned by a GNN from the training data towards making its predictions. In this work, we introduce GraphTrail, the first end-to-end, global, post-hoc GNN explainer that translates the functioning of a black-box GNN model to a boolean formula over the (sub)graph level concepts without relying on local explainers. GraphTrail is unique in automatically mining the discriminative subgraph-level concepts using Shapley values. Subsequently, the GNN predictions are mapped to a human-interpretable boolean formula over these concepts through symbolic regression. Extensive experiments across diverse datasets and GNN architectures demonstrate significant improvement over existing global explainers in mapping GNN predictions to faithful logical formulae.
The robust and accurate performance of GraphTrail makes it invaluable for improving GNNs and facilitates adoption in domains with strict transparency requirements.", "pdf": "https://openreview.net/pdf/476e9d9dcd1fe989b9ed65ef25c804d563cf1340.pdf"} {"title": "Layer-Adaptive State Pruning for Deep State Space Models", "url": "https://openreview.net/forum?id=T9GbbWbNQG", "detail_url": "https://openreview.net/forum?id=T9GbbWbNQG", "authors": "Minseon Gwak,Seongrok Moon,Joohwan Ko,PooGyeon Park", "tags": "NIPS 2024,Poster", "abstract": "Due to the lack of state dimension optimization methods, deep state space models (SSMs) have sacrificed model capacity, training search space, or stability to alleviate computational costs caused by high state dimensions. In this work, we provide a structured pruning method for SSMs, Layer-Adaptive STate pruning (LAST), which reduces the state dimension of each layer to minimize model-level energy loss by extending modal truncation for a single system. LAST scores are evaluated using $\\mathcal{H}_{\\infty}$ norms of subsystems for each state and layer-wise energy normalization. The scores serve as global pruning criteria, enabling cross-layer comparison of states and layer-adaptive pruning. Across various sequence benchmarks, LAST optimizes previous SSMs, revealing the redundancy and compressibility of their state spaces. Notably, we demonstrate that, on average, pruning 33\\% of states still maintains performance with 0.52\\% accuracy loss in multi-input multi-output SSMs without retraining. Code is available at https://github.com/msgwak/LAST.", "pdf": "https://openreview.net/pdf/7550b83912a48326f2dd1fa4a2cd053b02502697.pdf"} {"title": "Structured Unrestricted-Rank Matrices for Parameter Efficient Finetuning", "url": "https://openreview.net/forum?id=MXOzgjlWDF", "detail_url": "https://openreview.net/forum?id=MXOzgjlWDF", "authors": "Arijit Sehanobish,Kumar Avinava Dubey,Krzysztof Marcin Choromanski,Somnath Basu Roy Chowdhury,Deepali Jain,Vikas Sindhwani,Snigdha Chaturvedi", "tags": "NIPS 2024,Poster", "abstract": "Recent efforts to scale Transformer models have demonstrated rapid progress across a wide range of tasks (Wei et al., 2022). However, fine-tuning these models for downstream tasks is quite expensive due to their large parameter counts. Parameter-efficient fine-tuning (PEFT) approaches have emerged as a viable alternative, allowing us to fine-tune models by updating only a small number of parameters.\n In this work, we propose a general framework for parameter efficient fine-tuning (PEFT), based on *structured unrestricted-rank matrices* (SURM) which can serve as a drop-in replacement for popular approaches such as Adapters and LoRA. Unlike other methods like LoRA, SURMs give us more flexibility in finding the right balance between compactness and expressiveness. This is achieved by using *low displacement rank matrices* (LDRMs), which have not been used in this context before. SURMs remain competitive with baselines, often providing significant quality improvements while using a smaller parameter budget.
SURMs achieve **5**-**7**% accuracy gains on various image classification tasks when replacing low-rank matrices in LoRA, and up to a **12x** reduction in the number of parameters in adapters (with virtually no loss in quality) on the GLUE benchmark.", "pdf": "https://openreview.net/pdf/13d021ceca6e4f8c1d66f8ac57178562c5e644d9.pdf"} {"title": "No Free Delivery Service: Epistemic limits of passive data collection in complex social systems", "url": "https://openreview.net/forum?id=XZ0fpoAKEB", "detail_url": "https://openreview.net/forum?id=XZ0fpoAKEB", "authors": "Maximilian Nickel", "tags": "NIPS 2024,Poster", "abstract": "Rapid model validation via the train-test paradigm has been a key driver for the breathtaking progress in machine learning and AI. However, modern AI systems often depend on a combination of tasks and data collection practices that violate all assumptions ensuring test validity. Yet, without rigorous model validation we cannot ensure the intended outcomes of deployed AI systems, including positive social impact, nor continue to advance AI research in a scientifically sound way. In this paper, I will show that for widely considered inference settings in complex social systems the train-test paradigm not only lacks a justification but is indeed invalid for any risk estimator, including counterfactual and causal estimators, with high probability. These formal impossibility results highlight a fundamental epistemic issue, i.e., that for key tasks in modern AI we cannot know whether models are valid under current data collection practices. Importantly, this includes variants of both recommender systems and reasoning via large language models, and neither na\u00efve scaling nor limited benchmarks are suited to address this issue. I illustrate these results via the widely used MovieLens benchmark and conclude by discussing the implications of these results for AI in social systems, including possible remedies such as participatory data curation and open science.", "pdf": "https://openreview.net/pdf/83b67151ce87c2db32db8764ec703c381c935ed5.pdf"} {"title": "A Simple Framework for Generalization in Visual RL under Dynamic Scene Perturbations", "url": "https://openreview.net/forum?id=0AumdfLzpK", "detail_url": "https://openreview.net/forum?id=0AumdfLzpK", "authors": "Wonil Song,Hyesong Choi,Kwanghoon Sohn,Dongbo Min", "tags": "NIPS 2024,Poster", "abstract": "In the rapidly evolving domain of vision-based deep reinforcement learning (RL), a pivotal challenge is to achieve generalization capability to dynamic environmental changes reflected in visual observations.\nOur work delves into the intricacies of this problem, identifying two key issues that appear in previous approaches for visual RL generalization: (i) imbalanced saliency and (ii) observational overfitting.\nImbalanced saliency is a phenomenon where an RL agent disproportionately identifies salient features across consecutive frames in a frame stack.
\nObservational overfitting occurs when the agent focuses on certain background regions rather than task-relevant objects.\nTo address these challenges, we present a simple yet effective framework for generalization in visual RL (SimGRL) under dynamic scene perturbations.\nFirst, to mitigate the imbalanced saliency problem, we introduce an architectural modification to the image encoder to stack frames at the feature level rather than the image level.\nSimultaneously, to alleviate the observational overfitting problem, we propose a novel technique called shifted random overlay augmentation, which is specifically designed to learn robust representations capable of effectively handling dynamic visual scenes.\nExtensive experiments demonstrate the superior generalization capability of SimGRL, achieving state-of-the-art performance in benchmarks including the DeepMind Control Suite.", "pdf": "https://openreview.net/pdf/7435c53f5e4329852f86b28bd17a4fb523861f53.pdf"} {"title": "Instance-Optimal Private Density Estimation in the Wasserstein Distance", "url": "https://openreview.net/forum?id=Apq6corvfZ", "detail_url": "https://openreview.net/forum?id=Apq6corvfZ", "authors": "Vitaly Feldman,Audra McMillan,Satchit Sivakumar,Kunal Talwar", "tags": "NIPS 2024,Poster", "abstract": "Estimating the density of a distribution from samples is a fundamental problem in statistics. In many practical settings, the Wasserstein distance is an appropriate error metric for density estimation. For example, when estimating population densities in a geographic region, a small Wasserstein distance means that the estimate is able to capture roughly where the population mass is. In this work we study differentially private density estimation in the Wasserstein distance. We design and analyze instance-optimal algorithms for this problem that can adapt to easy instances.\n\nFor distributions $P$ over $\\mathbb{R}$, we consider a strong notion of instance-optimality: an algorithm that uniformly achieves the instance-optimal estimation rate is competitive with an algorithm that is told that the distribution is either $P$ or $Q_P$ for some distribution $Q_P$ whose probability density function (pdf) is within a factor of 2 of the pdf of $P$. For distributions over $\\mathbb{R}^2$, we use a slightly different notion of instance optimality. We say that an algorithm is instance-optimal if it is competitive with an algorithm that is given a constant multiplicative approximation of the density of the distribution. We characterize the instance-optimal estimation rates in both these settings and show that they are uniformly achievable (up to polylogarithmic factors). Our approach for $\\mathbb{R}^2$ extends to arbitrary metric spaces as it goes via hierarchically separated trees. As a special case our results lead to instance-optimal learning in TV distance for discrete distributions.", "pdf": "https://openreview.net/pdf/f621923d0cce7ced5ddd26bdc3ef54e68c882ef0.pdf"} {"title": "DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph", "url": "https://openreview.net/forum?id=5IFeCNA7zR", "detail_url": "https://openreview.net/forum?id=5IFeCNA7zR", "authors": "Zhehao Zhang,Jiaao Chen,Diyi Yang", "tags": "NIPS 2024,Poster", "abstract": "The current paradigm of evaluating Large Language Models (LLMs) through static benchmarks comes with significant limitations, such as vulnerability to data contamination and a lack of adaptability to the evolving capabilities of LLMs. 
Therefore, evaluation methods that can adapt and generate evaluation data with controlled complexity are urgently needed. In this work, we introduce Dynamic Evaluation of LLMs via Adaptive Reasoning Graph Evolvement (DARG) to dynamically extend current benchmarks with controlled complexity and diversity. Specifically, we first extract the reasoning graphs of data points in current benchmarks and then perturb the reasoning graphs to generate novel testing data. Such newly generated test samples can have different levels of complexity while maintaining linguistic diversity similar to the original benchmarks. We further use a code-augmented LLM to ensure the label correctness of newly generated data. We apply our DARG framework to diverse reasoning tasks in four domains with 15 state-of-the-art LLMs. Experimental results show that almost all LLMs experience a performance decrease with increased complexity and certain LLMs exhibit significant drops. Additionally, we find that LLMs exhibit more biases when being evaluated via the data generated by DARG with higher complexity levels. These observations provide useful insights into how to dynamically and adaptively evaluate LLMs.", "pdf": "https://openreview.net/pdf/ff0b87684bafe06ad128c8af823ddc1c950244a2.pdf"} {"title": "Fast Tree-Field Integrators: From Low Displacement Rank to Topological Transformers", "url": "https://openreview.net/forum?id=Eok6HbcSRI", "detail_url": "https://openreview.net/forum?id=Eok6HbcSRI", "authors": "Krzysztof Marcin Choromanski,Arijit Sehanobish,Somnath Basu Roy Chowdhury,Han Lin,Kumar Avinava Dubey,Tamas Sarlos,Snigdha Chaturvedi", "tags": "NIPS 2024,Poster", "abstract": "We present a new class of fast polylog-linear algorithms based on the theory of structured matrices (in particular *low displacement rank*) for integrating tensor fields defined on weighted trees. Several applications of the resulting *fast tree-field integrators* (FTFIs) are presented, including: (a) approximation of graph metrics with tree metrics, (b) graph classification, (c) modeling on meshes, and finally (d) *Topological Transformers* (TTs) (Choromanski et al., 2022) for images. For Topological Transformers, we propose new relative position encoding (RPE) masking mechanisms with as few as **three** extra learnable parameters per Transformer layer, leading to **1.0-1.5\\%+** accuracy gains. Importantly, most of FTFIs are **exact** methods, thus numerically equivalent to their brute-force counterparts. When applied to graphs with thousands of nodes, those exact algorithms provide **5.7-13x** speedups. We also provide an extensive theoretical analysis of our methods.", "pdf": "https://openreview.net/pdf/7a7e2f0f3ce5ec564c29c694ce3126506812314f.pdf"} {"title": "Label Noise: Ignorance Is Bliss", "url": "https://openreview.net/forum?id=fTKcqr4xuX", "detail_url": "https://openreview.net/forum?id=fTKcqr4xuX", "authors": "Yilun Zhu,Jianxin Zhang,Aditya Gangrade,Clayton Scott", "tags": "NIPS 2024,Poster", "abstract": "We establish a new theoretical framework for learning under multi-class, instance-dependent label noise. \n This framework casts learning with label\n noise as a form of domain adaptation, in particular, domain adaptation\n under posterior drift. \n We introduce the concept of \\emph{relative signal strength} (RSS), a pointwise measure that quantifies the transferability from noisy to clean posterior. \n Using RSS, we establish nearly matching upper and lower bounds on the excess risk. 
\n Our theoretical findings support the simple \\emph{Noise Ignorant Empirical Risk Minimization (NI-ERM)} principle, which minimizes empirical risk while ignoring label noise. Finally, we translate this theoretical insight into practice: by using NI-ERM to fit a linear classifier on top of a self-supervised feature extractor, we achieve state-of-the-art performance on the CIFAR-N data challenge.", "pdf": "https://openreview.net/pdf/568741df07c501c0ff4cc330490b22523e8957b3.pdf"} {"title": "CryoSPIN: Improving Ab-Initio Cryo-EM Reconstruction with Semi-Amortized Pose Inference", "url": "https://openreview.net/forum?id=1MCseWaFZb", "detail_url": "https://openreview.net/forum?id=1MCseWaFZb", "authors": "Shayan Shekarforoush,David B. Lindell,Marcus A Brubaker,David J. Fleet", "tags": "NIPS 2024,Poster", "abstract": "Cryo-EM is an increasingly popular method for determining the atomic resolution 3D structure of macromolecular complexes (e.g., proteins) from noisy 2D images captured by an electron microscope. The computational task is to reconstruct the 3D density of the particle, along with the 3D pose of the particle in each 2D image, for which the posterior pose distribution is highly multi-modal. Recent developments in cryo-EM have focused on deep learning, for which amortized inference has been used to predict pose. Here, we address key problems with this approach, and propose a new semi-amortized method, cryoSPIN, in which reconstruction begins with amortized inference and then switches to a form of auto-decoding to refine poses locally using stochastic gradient descent. Through evaluation on synthetic datasets, we demonstrate that cryoSPIN is able to handle multi-modal pose distributions during the amortized inference stage, while the later, more flexible stage of direct pose optimization yields faster and more accurate convergence of poses compared to baselines. On experimental data, we show that cryoSPIN outperforms the state-of-the-art cryoAI in speed and reconstruction quality.", "pdf": "https://openreview.net/pdf/6c4cb8c1896b91037fa7dbe74f5a53bc084258fe.pdf"} {"title": "Selective Explanations", "url": "https://openreview.net/forum?id=gHCFduRo7o", "detail_url": "https://openreview.net/forum?id=gHCFduRo7o", "authors": "Lucas Monteiro Paes,Dennis Wei,Flavio Calmon", "tags": "NIPS 2024,Poster", "abstract": "Feature attribution methods explain black-box machine learning (ML) models by assigning importance scores to input features. \nThese methods can be computationally expensive for large ML models. To address this challenge, there have been increasing efforts to develop amortized explainers, where an ML model is trained to efficiently approximate computationally expensive feature attribution scores. Despite their efficiency, amortized explainers can produce misleading explanations. In this paper, we propose selective explanations to (i) detect when amortized explainers generate inaccurate explanations and (ii) improve the approximation of the explanation using a technique we call explanations with initial guess. Selective explanations allow practitioners to specify the fraction of samples that receive explanations with initial guess, offering a principled way to bridge the gap between amortized explainers (one inference) and more computationally costly approximations (multiple inferences).
Our experiments on various models and datasets demonstrate that feature attributions via selective explanations strike a favorable balance between explanation quality and computational efficiency.", "pdf": "https://openreview.net/pdf/5b7e3b99fcef803366002e9632ee40a47cbfa4c9.pdf"} {"title": "Enhancing Diversity in Bayesian Deep Learning via Hyperspherical Energy Minimization of CKA", "url": "https://openreview.net/forum?id=s2hA6Bz3LE", "detail_url": "https://openreview.net/forum?id=s2hA6Bz3LE", "authors": "David Smerkous,Qinxun Bai,Li Fuxin", "tags": "NIPS 2024,Poster", "abstract": "Particle-based Bayesian deep learning often requires a similarity metric to compare two networks. However, naive similarity metrics lack permutation invariance and are inappropriate for comparing networks. Centered Kernel Alignment (CKA) on feature kernels has been proposed to compare deep networks but has not been used as an optimization objective in Bayesian deep learning. In this paper, we explore the use of CKA in Bayesian deep learning to generate diverse ensembles and hypernetworks that output a network posterior. Noting that CKA projects kernels onto a unit hypersphere and that directly optimizing the CKA objective leads to diminishing gradients when two networks are very similar, we propose adopting the approach of hyperspherical energy (HE) on top of CKA kernels to address this drawback and improve training stability. Additionally, by leveraging CKA-based feature kernels, we derive feature repulsive terms applied to synthetically generated outlier examples. Experiments on both diverse ensembles and hypernetworks show that our approach significantly outperforms baselines in terms of uncertainty quantification in both synthetic and realistic outlier detection tasks.", "pdf": "https://openreview.net/pdf/e8fd6b257ea14297e3fcc15e027f5b978526a38b.pdf"} {"title": "Learning to Edit Visual Programs with Self-Supervision", "url": "https://openreview.net/forum?id=uzIWqRzjEP", "detail_url": "https://openreview.net/forum?id=uzIWqRzjEP", "authors": "R. Kenny Jones,Renhao Zhang,Aditya Ganeshan,Daniel Ritchie", "tags": "NIPS 2024,Poster", "abstract": "We design a system that learns how to edit visual programs. Our edit network consumes a complete input program and a visual target. From this input, we task our network with predicting a local edit operation that could be applied to the input program to improve its similarity to the target. In order to apply this scheme for domains that lack program annotations, we develop a self-supervised learning approach that integrates this edit network into a bootstrapped finetuning loop along with a network that predicts entire programs in one-shot. Our joint finetuning scheme, when coupled with an inference procedure that initializes a population from the one-shot model and evolves members of this population with the edit network, helps to infer more accurate visual programs.
Over multiple domains, we experimentally compare our method against the alternative of using only the one-shot model, and find that even under equal search-time budgets, our editing-based paradigm provides significant advantages.", "pdf": "https://openreview.net/pdf/5574437cb41abf73076c2977076bffc90f011092.pdf"} {"title": "ETO: Efficient Transformer-based Local Feature Matching by Organizing Multiple Homography Hypotheses", "url": "https://openreview.net/forum?id=3xHCaDdYcc", "detail_url": "https://openreview.net/forum?id=3xHCaDdYcc", "authors": "Junjie Ni,Guofeng Zhang,Guanglin Li,Yijin Li,Xinyang Liu,Zhaoyang Huang,Hujun Bao", "tags": "NIPS 2024,Poster", "abstract": "We tackle the efficiency problem of learning local feature matching. Recent advancements have given rise to purely CNN-based and transformer-based approaches, each augmented with deep learning techniques. While CNN-based methods often excel in matching speed, transformer-based methods tend to provide more accurate matches. We propose an efficient transformer-based network architecture for local feature matching. This technique is built on constructing multiple homography hypotheses to approximate the continuous correspondence in the real world and uni-directional cross-attention to accelerate the refinement. On the YFCC100M dataset, our matching accuracy is competitive with LoFTR, a state-of-the-art transformer-based architecture, while our inference speed is boosted to 4 times that of LoFTR, even outperforming the CNN-based methods. Comprehensive evaluations on other open datasets such as Megadepth, ScanNet, and HPatches demonstrate our method's efficacy, highlighting its potential to significantly enhance a wide array of downstream applications.", "pdf": "https://openreview.net/pdf/3762b865d47c647261ea21651b925a68d24663a7.pdf"} {"title": "Causal Inference in the Closed-Loop: Marginal Structural Models for Sequential Excursion Effects", "url": "https://openreview.net/forum?id=BgZcuEsYU8", "detail_url": "https://openreview.net/forum?id=BgZcuEsYU8", "authors": "Alexander W. Levis,Gabriel Loewinger,Francisco Pereira", "tags": "NIPS 2024,Poster", "abstract": "Optogenetics is widely used to study the effects of neural circuit manipulation on behavior. However, the paucity of causal inference methodological work on this topic has resulted in analysis conventions that discard information, and constrain the scientific questions that can be posed. To fill this gap, we introduce a nonparametric causal inference framework for analyzing "closed-loop" designs, which use dynamic policies that assign treatment based on covariates. In this setting, standard methods can introduce bias and occlude causal effects. Building on the sequentially randomized experiments literature in causal inference, our approach extends history-restricted marginal structural models for dynamic regimes. In practice, our framework can identify a wide range of causal effects of optogenetics on trial-by-trial behavior, such as fast/slow-acting, dose-response, additive/antagonistic, and floor/ceiling effects. Importantly, it does so without requiring negative controls, and can estimate how causal effect magnitudes evolve across time points. From another view, our work extends "excursion effect" methods---popular in the mobile health literature---to enable estimation of causal contrasts for treatment sequences greater than length one, in the presence of positivity violations. We derive rigorous statistical guarantees, enabling hypothesis testing of these causal effects.
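As a concrete (and much simplified) illustration of estimating one such effect, the sketch below computes an inverse-probability-weighted mean outcome under a fixed treatment "excursion", assuming known trial-level randomization probabilities; the array names are hypothetical and this is not the paper's estimator:

```python
# Minimal IPW sketch of an excursion contrast. Hypothetical arrays:
# a[i, t] is the binary treatment at trial t, y[i, t] the outcome, and
# p[i, t] the known probability P(a[i, t] = 1 | history).
import numpy as np

def excursion_mean(a, y, p, target, t0):
    """Mean outcome at trial t0 + len(target) - 1, had treatment followed
    `target` over the excursion window, via inverse probability weighting.
    Requires at least one trajectory matching `target` in the window."""
    L = len(target)
    window = a[:, t0:t0 + L]
    prob = np.where(window == 1, p[:, t0:t0 + L], 1 - p[:, t0:t0 + L]).prod(axis=1)
    match = (window == np.asarray(target)).all(axis=1)
    w = match / prob
    return np.sum(w * y[:, t0 + L - 1]) / np.sum(w)  # Hajek-style ratio

# effect of "on, on" vs "off, off" over a length-2 excursion starting at t0:
# effect = excursion_mean(a, y, p, [1, 1], t0) - excursion_mean(a, y, p, [0, 0], t0)
```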
We demonstrate our approach on data from a recent study of dopaminergic activity on learning, and show how our method reveals relevant effects obscured in standard analyses.", "pdf": "https://openreview.net/pdf/d29c280f804bad36b2451f2e49f236e6099ba176.pdf"} {"title": "Understanding Model Selection for Learning in Strategic Environments", "url": "https://openreview.net/forum?id=R6FOuWv5MD", "detail_url": "https://openreview.net/forum?id=R6FOuWv5MD", "authors": "Tinashe Handina,Eric Mazumdar", "tags": "NIPS 2024,Poster", "abstract": "The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model class one optimizes over\u2014and the more data one has access to\u2014the more one can improve performance. As models get deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay of models and strategic interactions affects the relationship between performance at equilibrium and the expressivity of model classes. We find that strategic interactions can break the conventional view\u2014meaning that performance does not necessarily monotonically improve as model classes get larger or more expressive (even with infinite data). We show the implications of this result in several contexts including strategic regression, strategic classification, and multi-agent reinforcement learning. In particular, we show that each of these settings admits a Braess' paradox-like phenomenon in which optimizing over less expressive model classes allows one to achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games wherein an agent seeks to choose amongst different model classes to use as their action set in a game.", "pdf": "https://openreview.net/pdf/a992fb68ba2734fe4f7a089cae07379cb8f7ef58.pdf"} {"title": "MSA Generation with Seqs2Seqs Pretraining: Advancing Protein Structure Predictions", "url": "https://openreview.net/forum?id=D0DLlMOufv", "detail_url": "https://openreview.net/forum?id=D0DLlMOufv", "authors": "Le Zhang,Jiayang Chen,Tao Shen,Yu Li,Siqi Sun", "tags": "NIPS 2024,Poster", "abstract": "Deep learning models like AlphaFold2 have revolutionized protein structure prediction, achieving unprecedented accuracy. However, the dependence on robust multiple sequence alignments (MSAs) continues to pose a challenge, especially for proteins that lack a wealth of homologous sequences. To overcome this limitation, we introduce MSA-Generator, a self-supervised generative protein language model. Trained on a sequence-to-sequence task using an automatically constructed dataset, MSA-Generator employs protein-specific attention mechanisms to harness large-scale protein databases, generating virtual MSAs that enrich existing ones and boost prediction accuracy. Our experiments on CASP14 and CASP15 benchmarks reveal significant improvements in LDDT scores, particularly for complex and challenging sequences, enhancing the performance of both AlphaFold2 and RoseTTAFold. 
The code is released at \\url{https://github.com/lezhang7/MSAGen}.", "pdf": "https://openreview.net/pdf/fd516f23b421f9d03d5b978b03eded9900f0a462.pdf"} {"title": "Identifiable Shared Component Analysis of Unpaired Multimodal Mixtures", "url": "https://openreview.net/forum?id=ivCX2cjwcT", "detail_url": "https://openreview.net/forum?id=ivCX2cjwcT", "authors": "Subash Timilsina,Sagar Shrestha,Xiao Fu", "tags": "NIPS 2024,Poster", "abstract": "A core task in multi-modal learning is to integrate information from multiple feature spaces (e.g., text and audio), offering modality-invariant essential representations of data. Recent research showed that classical tools such as canonical correlation analysis (CCA) provably identify the shared components up to minor ambiguities, when samples in each modality are generated from a linear mixture of shared and private components. Such identifiability results were obtained under the condition that the cross-modality samples are aligned/paired according to their shared information. This work takes a step further, investigating shared component identifiability from multi-modal linear mixtures where cross-modality samples are unaligned. A distribution divergence minimization-based loss is proposed, under which a suite of sufficient conditions ensuring identifiability of the shared components is derived. Our conditions are based on cross-modality distribution discrepancy characterization and density-preserving transform removal, which are much milder than existing studies relying on independent component analysis. More relaxed conditions are also provided via adding reasonable structural constraints, motivated by available side information in various applications. The identifiability claims are thoroughly validated using synthetic and real-world data.", "pdf": "https://openreview.net/pdf/782dd5983e36710970c218a7fd9b39791abee723.pdf"} {"title": "Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces", "url": "https://openreview.net/forum?id=wAqdvcK1Fv", "detail_url": "https://openreview.net/forum?id=wAqdvcK1Fv", "authors": "Tobias Schr\u00f6der,Zijing Ou,Yingzhen Li,Andrew B. Duncan", "tags": "NIPS 2024,Poster", "abstract": "Energy-based models (EBMs) offer a flexible framework for probabilistic modelling across various data domains. However, training EBMs on data in discrete or mixed state spaces poses significant challenges due to the lack of robust and fast sampling methods. In this work, we propose to train discrete EBMs with Energy Discrepancy, a loss function which only requires the evaluation of the energy function at data points and their perturbed counterparts, thus eliminating the need for Markov chain Monte Carlo. We introduce perturbations of the data distribution by simulating a diffusion process on the discrete state space endowed with a graph structure. This allows us to inform the choice of perturbation from the structure of the modelled discrete variable, while the continuous time parameter enables fine-grained control of the perturbation. Empirically, we demonstrate the efficacy of the proposed approaches in a wide range of applications, including the estimation of discrete densities with non-binary vocabulary and binary image modelling.
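A simplified contrastive training step in the spirit of this MCMC-free recipe might look as follows; the perturbation here is a plain coordinate flip on binary data rather than the paper's graph-structured diffusion, and the loss is an illustrative stand-in, not the exact Energy Discrepancy estimator:

```python
# Sketch: compare the energy of each data point against energies of
# randomly perturbed copies, needing only evaluations of the energy
# network -- no sampling from the model. Hedged: a simplified stand-in.
import torch

def perturb(x, num_flips=1):
    """Toy discrete perturbation: flip a few coordinates of 0/1 float data."""
    y = x.clone()
    idx = torch.randint(x.shape[1], (x.shape[0], num_flips))
    vals = y.gather(1, idx)
    y.scatter_(1, idx, 1 - vals)
    return y

def contrastive_energy_loss(energy_net, x, m=8):
    e_data = energy_net(x)                                        # shape (B,)
    e_pert = torch.stack([energy_net(perturb(x)) for _ in range(m)], dim=1)
    # push data energy down relative to a soft-min over perturbed contrasts
    return (e_data + torch.logsumexp(-e_pert, dim=1)).mean()
```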
We also introduce the first application of EBMs to tabular data sets with applications in synthetic data generation and calibrated classification.", "pdf": "https://openreview.net/pdf/eac92ea0ece71224dde4b9f69a62521adc463b5c.pdf"} {"title": "On the Optimality of Dilated Entropy and Lower Bounds for Online Learning in Extensive-Form Games", "url": "https://openreview.net/forum?id=6PMfJT2O7G", "detail_url": "https://openreview.net/forum?id=6PMfJT2O7G", "authors": "Zhiyuan Fan,Christian Kroer,Gabriele Farina", "tags": "NIPS 2024,Poster", "abstract": "First-order methods (FOMs) are arguably the most scalable algorithms for equilibrium computation in large extensive-form games. To operationalize these methods, a distance-generating function, acting as a regularizer for the strategy space, must be chosen. \nThe ratio between the strong convexity modulus and the diameter of the regularizer is a key parameter in the analysis of FOMs.\nA natural question is then: what is the optimal distance-generating function for extensive-form decision spaces? In this paper, we make a number of contributions, ultimately establishing that the weight-one dilated entropy (DilEnt) distance-generating function is optimal up to logarithmic factors. \nThe DilEnt regularizer is notable due to its iterate-equivalence with Kernelized OMWU (KOMWU)---the algorithm with state-of-the-art dependence on the game tree size in extensive-form games---when used in conjunction with the online mirror descent (OMD) algorithm. However, the standard analysis for OMD is unable to establish such a result; the only current analysis is by appealing to the iterate equivalence to KOMWU. \nWe close this gap by introducing a pair of primal-dual treeplex norms, which we contend form the natural analytic viewpoint for studying the strong convexity of DilEnt. \nUsing these norm pairs, we recover the diameter-to-strong-convexity ratio that predicts the same performance as KOMWU. Along with a new regret lower bound for online learning in sequence-form strategy spaces, we show that this ratio is nearly optimal.\nFinally, we showcase our analytic techniques by refining the analysis of Clairvoyant OMD when paired with DilEnt, establishing an $\\mathcal{O}(n \\log |\\mathcal{V}| \\log T/T)$ approximation rate to coarse correlated equilibrium in $n$-player games, where $|\\mathcal{V}|$ is the number of reduced normal-form strategies of the players, establishing the new state of the art.", "pdf": "https://openreview.net/pdf/03aa07c5a35abc096ab9e5fb05fb90c95dead009.pdf"} {"title": "Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear $q^\\pi$-Realizability and Concentrability", "url": "https://openreview.net/forum?id=TusuJSbRxm", "detail_url": "https://openreview.net/forum?id=TusuJSbRxm", "authors": "Volodymyr Tkachuk,Gell\u00e9rt Weisz,Csaba Szepesvari", "tags": "NIPS 2024,Poster", "abstract": "We consider offline reinforcement learning (RL) in $H$-horizon Markov decision processes (MDPs) under the linear $q^\\pi$-realizability assumption, where the action-value function of every policy is linear with respect to a given $d$-dimensional feature function. The hope in this setting is that learning a good policy will be possible without requiring a sample size that scales with the number of states in the MDP. Foster et al. 
[2021] have shown this to be impossible even under $\text{\textit{concentrability}}$, a data coverage assumption where a coefficient $C_\text{conc}$ bounds the extent to which the state-action distribution of any policy can veer off the data distribution. However, the data in this previous work was in the form of a sequence of individual transitions. This leaves open the question of whether the negative result mentioned could be overcome if the data was composed of sequences of full trajectories. In this work we answer this question positively by proving that with trajectory data, a dataset of size $\text{poly}(d,H,C_\text{conc})/\epsilon^2$ is sufficient for deriving an $\epsilon$-optimal policy, regardless of the size of the state space. The main tool that makes this result possible is due to Weisz et al. [2023], who demonstrate that linear MDPs can be used to approximate linearly $q^\pi$-realizable MDPs. The connection to trajectory data is that the linear MDP approximation relies on "skipping" over certain states. The associated estimation problems are thus easy when working with trajectory data, while they remain nontrivial when working with individual transitions. The question of computational efficiency under our assumptions remains open.", "pdf": "https://openreview.net/pdf/d673788da6ff7e2ffc302ff01028aeef0f99497a.pdf"} {"title": "Predicting the Performance of Foundation Models via Agreement-on-the-Line", "url": "https://openreview.net/forum?id=aJx9onwsR4", "detail_url": "https://openreview.net/forum?id=aJx9onwsR4", "authors": "Rahul Saxena,Taeyoun Kim,Aman Mehra,Christina Baek,J Zico Kolter,Aditi Raghunathan", "tags": "NIPS 2024,Poster", "abstract": "Estimating the out-of-distribution performance in regimes where labels are scarce is critical to safely deploy foundation models. Recently, it was shown that ensembles of neural networks observe the phenomenon "agreement-on-the-line", which can be leveraged to reliably predict OOD performance without labels. However, in contrast to classical neural networks that are trained on in-distribution data from scratch for numerous epochs, foundation models undergo minimal finetuning from heavily pretrained weights, which may reduce the ensemble diversity needed to observe agreement-on-the-line. In our work, we demonstrate that when lightly finetuning multiple runs from a $\textit{single}$ foundation model, the choice of randomness during training (linear head initialization, data ordering, and data subsetting) can lead to drastically different levels of agreement-on-the-line in the resulting ensemble. Surprisingly, only random head initialization is able to reliably induce agreement-on-the-line in finetuned foundation models across vision and language benchmarks. Second, we demonstrate that ensembles of $\textit{multiple}$ foundation models pretrained on different datasets but finetuned on the same task can also show agreement-on-the-line.
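A minimal sketch of how an agreement-on-the-line estimate can be computed, following the general recipe in this literature (function names are illustrative, and agreements are assumed to lie strictly inside (0, 1) so the probit transform is finite):

```python
# Fit the ID -> OOD line on pairwise model *agreements* (computable without
# OOD labels), then read each model's OOD accuracy off that line from its
# ID accuracy, under the assumption that accuracies lie on the same line.
import numpy as np
from itertools import combinations
from scipy.stats import norm

def agreement(preds_a, preds_b):
    return float(np.mean(preds_a == preds_b))

def predict_ood_accuracy(preds_id, preds_ood, acc_id):
    """preds_id / preds_ood: lists of per-model predicted labels; acc_id: ID accuracies."""
    pairs = list(combinations(range(len(preds_id)), 2))
    g_id = norm.ppf([agreement(preds_id[i], preds_id[j]) for i, j in pairs])
    g_ood = norm.ppf([agreement(preds_ood[i], preds_ood[j]) for i, j in pairs])
    slope, intercept = np.polyfit(g_id, g_ood, 1)      # the agreement line
    return norm.cdf(slope * norm.ppf(np.asarray(acc_id)) + intercept)
```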
In total, by careful construction of a diverse ensemble, we can utilize agreement-on-the-line-based methods to predict the OOD performance of foundation models with high precision.", "pdf": "https://openreview.net/pdf/2d38f44c6bd0dca35802701cdeb31cf37e4da882.pdf"} {"title": "Towards Principled Graph Transformers", "url": "https://openreview.net/forum?id=LJCQH6U0pl", "detail_url": "https://openreview.net/forum?id=LJCQH6U0pl", "authors": "Luis M\u00fcller,Daniel Kusuma,Blai Bonet,Christopher Morris", "tags": "NIPS 2024,Poster", "abstract": "The expressive power of graph learning architectures based on the $k$-dimensional Weisfeiler-Leman ($k$-WL) hierarchy is well understood. However, such architectures often fail to deliver solid predictive performance on real-world tasks, limiting their practical impact. In contrast, global attention-based models such as graph transformers demonstrate strong performance in practice, but comparing their expressive power with the $k$-WL hierarchy remains challenging, particularly since these architectures rely on positional or structural encodings for their expressivity and predictive performance. To address this, we show that the recently proposed Edge Transformer, a global attention model operating on node pairs instead of nodes, has 3-WL expressive power when provided with the right tokenization. Empirically, we demonstrate that the Edge Transformer surpasses other theoretically aligned architectures regarding predictive performance while not relying on positional or structural encodings.", "pdf": "https://openreview.net/pdf/ae53c3024c25e68c6b5b7ee1d9cb9975b3297adc.pdf"} {"title": "Stepping on the Edge: Curvature Aware Learning Rate Tuners", "url": "https://openreview.net/forum?id=SEflLHIhhJ", "detail_url": "https://openreview.net/forum?id=SEflLHIhhJ", "authors": "Vincent Roulet,Atish Agarwala,Jean-Bastien Grill,Grzegorz Michal Swirszcz,Mathieu Blondel,Fabian Pedregosa", "tags": "NIPS 2024,Poster", "abstract": "Curvature information -- particularly, the largest eigenvalue of the loss\nHessian, known as the sharpness -- often forms the basis for learning rate\ntuners. However, recent work has shown that the curvature information undergoes\ncomplex dynamics during training, going from a phase of increasing sharpness to\neventual stabilization. We analyze the closed-loop feedback effect between\nlearning rate tuning and curvature. We find that classical learning rate tuners\nmay yield greater one-step loss reduction, yet they ultimately underperform in\nthe long term when compared to constant learning rates in the full batch regime.\nThese models break the stabilization of the sharpness, which we explain using a\nsimplified model of the joint dynamics of the learning rate and the curvature.\nTo further investigate these effects, we introduce a new learning rate tuning\nmethod, Curvature Dynamics Aware Tuning (CDAT), which prioritizes long term\ncurvature stabilization over instantaneous progress on the objective. In the\nfull batch regime, CDAT shows behavior akin to prefixed warm-up schedules on deep\nlearning objectives, outperforming tuned constant learning rates. In the mini\nbatch regime, we observe that stochasticity introduces confounding effects that\nexplain the previous success of some learning rate tuners at appropriate batch\nsizes. 
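For reference, a classical sharpness-based tuner of the kind analyzed here can be sketched as follows: estimate the sharpness with a few Hessian-vector-product power-iteration steps, then step just below the 2/sharpness stability threshold (an illustration of the classical baseline, not CDAT):

```python
# Estimate the top Hessian eigenvalue ("sharpness") by power iteration on
# Hessian-vector products, then choose a learning rate below 2/sharpness.
import torch

def sharpness(loss, params, iters=20):
    v = [torch.randn_like(p) for p in params]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    lam = torch.tensor(0.0)
    for _ in range(iters):
        norm_v = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm_v for u in v]
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        lam = sum((h * u).sum() for h, u in zip(hv, v))   # Rayleigh quotient
        v = [h.detach() for h in hv]
    return lam.item()

# lr = 0.9 * 2.0 / sharpness(loss, list(model.parameters()))  # stay just below threshold
```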
Our findings highlight the critical role of understanding the joint\ndynamics of the learning rate and curvature, beyond greedy minimization, to\ndiagnose failures and design effective adaptive learning rate tuners.", "pdf": "https://openreview.net/pdf/33fc0272b7d7a67249291d330ce075067a1e789c.pdf"} {"title": "SceneCraft: Layout-Guided 3D Scene Generation", "url": "https://openreview.net/forum?id=CTvxvAcSJN", "detail_url": "https://openreview.net/forum?id=CTvxvAcSJN", "authors": "Xiuyu Yang,Yunze Man,Jun-Kun Chen,Yu-Xiong Wang", "tags": "NIPS 2024,Poster", "abstract": "The creation of complex 3D scenes tailored to user specifications has been a tedious and challenging task with traditional 3D modeling tools. Although some pioneering methods have achieved automatic text-to-3D generation, they are generally limited to small-scale scenes with restricted control over the shape and texture. We introduce SceneCraft, a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences provided by users. Central to our method is a rendering-based technique, which converts 3D semantic layouts into multi-view 2D proxy maps. Furthermore, we design a semantic and depth conditioned diffusion model to generate multi-view images, which are used to learn a neural radiance field (NeRF) as the final scene representation. Without the constraints of panorama image generation, we surpass previous methods in supporting complicated indoor space generation beyond a single room, even as complicated as a whole multi-bedroom apartment with irregular shapes and layouts. Through experimental analysis, we demonstrate that our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.", "pdf": "https://openreview.net/pdf/4fbf5f697f7e35affc341d10063221f725630935.pdf"} {"title": "Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension", "url": "https://openreview.net/forum?id=mHVmsy9len", "detail_url": "https://openreview.net/forum?id=mHVmsy9len", "authors": "Kedar Karhadkar,Michael Murray,Guido Montufar", "tags": "NIPS 2024,Poster", "abstract": "Bounds on the smallest eigenvalue of the neural tangent kernel (NTK) are a key ingredient in the analysis of neural network optimization and memorization. However, existing results require distributional assumptions on the data and are limited to a high-dimensional setting, where the input dimension $d_0$ scales at least logarithmically in the number of samples $n$. In this work we remove both of these requirements and instead provide bounds in terms of a measure of distance between data points: notably these bounds hold with high probability even when $d_0$ is held constant versus $n$. We prove our results through a novel application of the hemisphere transform.", "pdf": "https://openreview.net/pdf/fc64dfe0d79cb125c2577c3c2488762284e984b7.pdf"} {"title": "Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit", "url": "https://openreview.net/forum?id=qK4iS49KDm", "detail_url": "https://openreview.net/forum?id=qK4iS49KDm", "authors": "Jason D. 
Lee,Kazusato Oko,Taiji Suzuki,Denny Wu", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of gradient descent learning of a single-index target function $f_*(\\boldsymbol{x}) = \\textstyle\\sigma_*\\left(\\langle\\boldsymbol{x},\\boldsymbol{\\theta}\\rangle\\right)$ under isotropic Gaussian data in $\\mathbb{R}^d$, \nwhere the unknown link function $\\sigma_*:\\mathbb{R}\\to\\mathbb{R}$ has information exponent $p$ (defined as the lowest degree in the Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with $n\\gtrsim d^{\\Theta(p)}$ samples, and such complexity is predicted to be necessary by the correlational statistical query lower bound. \nSurprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm (on the squared loss) learns $f_*$ with a complexity that is not governed by the information exponent. Specifically, for arbitrary polynomial single-index models, we establish a sample and runtime complexity of $n \\simeq T = \\Theta(d\\cdot\\mathrm{polylog} d)$, where $\\Theta(\\cdot)$ hides a constant only depending on the degree of $\\sigma_*$; this dimension dependence matches the information theoretic limit up to polylogarithmic factors. More generally, we show that $n\\gtrsim d^{(p_*-1)\\vee 1}$ samples are sufficient to achieve low generalization error, where $p_* \\le p$ is the \\textit{generative exponent} of the link function. Core to our analysis is the reuse of minibatch in the gradient computation, which gives rise to higher-order information beyond correlational queries.", "pdf": "https://openreview.net/pdf/5c351e805429bc780ae5fab35b4eaecf013991eb.pdf"} {"title": "Rethinking Score Distillation as a Bridge Between Image Distributions", "url": "https://openreview.net/forum?id=I8PkICj9kM", "detail_url": "https://openreview.net/forum?id=I8PkICj9kM", "authors": "David McAllister,Songwei Ge,Jia-Bin Huang,David W. Jacobs,Alexei A Efros,Aleksander Holynski,Angjoo Kanazawa", "tags": "NIPS 2024,Poster", "abstract": "Score distillation sampling (SDS) has proven to be an important tool, enabling the use of large-scale diffusion priors for tasks operating in data-poor domains. Unfortunately, SDS has a number of characteristic artifacts that limit its utility in general-purpose applications. In this paper, we make progress toward understanding the behavior of SDS and its variants by viewing them as solving an optimal-cost transport path from some current source distribution to a target distribution. Under this new interpretation, we argue that these methods' characteristic artifacts are caused by (1) linear approximation of the optimal path and (2) poor estimates of the source distribution.\nWe show that by calibrating the text conditioning of the source distribution, we can produce high-quality generation and translation results with little extra overhead. Our method can be easily applied across many domains, matching or beating the performance of specialized methods. We demonstrate its utility in text-to-2D, text-to-3D, translating paintings to real images, optical illusion generation, and 3D sketch-to-real. 
We compare our method to existing approaches for score distillation sampling and show that it can produce high-frequency details with realistic colors.", "pdf": "https://openreview.net/pdf/6e24468d3ec6ea657f13f09dda826cacbce832af.pdf"} {"title": "Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models", "url": "https://openreview.net/forum?id=FJlrSZBMCD", "detail_url": "https://openreview.net/forum?id=FJlrSZBMCD", "authors": "Aviv Bick,Kevin Li,Eric P. Xing,J Zico Kolter,Albert Gu", "tags": "NIPS 2024,Poster", "abstract": "Transformer architectures have become a dominant paradigm for domains like language modeling but suffer in many inference settings due to their quadratic-time self-attention. Recently proposed subquadratic architectures, such as Mamba, have shown promise, but have been pretrained with substantially less computational resources than the strongest Transformer models. In this work, we present a method that is able to distill a pretrained Transformer architecture into alternative architectures such as state space models (SSMs). The key idea of our approach is that we can view both Transformers and SSMs as applying different forms of mixing matrices over the token sequences. We can thus progressively distill the Transformer architecture by matching different degrees of granularity in the SSM: first matching the mixing matrices themselves, then the hidden units at each block, and finally the end-to-end predictions. Our method, called MOHAWK, is able to distill a Mamba-2 variant based on the Phi-1.5 architecture (Phi-Mamba) using only 3B tokens. Despite using less than 1% of the training data typically used to train models from scratch, Phi-Mamba boasts substantially stronger performance compared to all past open-source non-Transformer models. MOHAWK allows models like SSMs to leverage computational resources invested in training Transformer-based architectures, highlighting a new avenue for building such models.", "pdf": "https://openreview.net/pdf/ddedaa9f0d6404305d1b4b3223cca34caab6ab83.pdf"} {"title": "The Star Geometry of Critic-Based Regularizer Learning", "url": "https://openreview.net/forum?id=2GQeCbhxVy", "detail_url": "https://openreview.net/forum?id=2GQeCbhxVy", "authors": "Oscar Leong,Eliza O'Reilly,Yong Sheng Soh", "tags": "NIPS 2024,Poster", "abstract": "Variational regularization is a classical technique to solve statistical inference tasks and inverse problems, with modern data-driven approaches parameterizing regularizers via deep neural networks showcasing impressive empirical performance. Recent works along these lines learn task-dependent regularizers. This is done by integrating information about the measurements and ground-truth data in an unsupervised, critic-based loss function, where the regularizer attributes low values to likely data and high values to unlikely data. However, there is little theory about the structure of regularizers learned via this process and how it relates to the two data distributions. To make progress on this challenge, we initiate a study of optimizing critic-based loss functions to learn regularizers over a particular family of regularizers: gauges (or Minkowski functionals) of star-shaped bodies. This family contains regularizers that are commonly employed in practice and shares properties with regularizers parameterized by deep neural networks. We specifically investigate critic-based losses derived from variational representations of statistical distances between probability measures.
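To fix intuition for the objects involved: the gauge of a star-shaped body $K$ is $\|x\|_K = \inf\{t > 0 : x/t \in K\}$, and when $K$ is described by a radial function it has a simple closed form, sketched below:

```python
# Gauge (Minkowski functional) of a star body given by its radial function
# r(u): the distance from the origin to the boundary in direction u.
# Then gauge(x) = |x| / r(x / |x|).
import numpy as np

def gauge(x, radial_fn):
    nrm = np.linalg.norm(x)
    if nrm == 0.0:
        return 0.0
    return nrm / radial_fn(x / nrm)

# Sanity check: the unit l1 ball is a star body whose radial function in
# direction u is 1 / ||u||_1, so its gauge recovers the l1 norm.
r_l1 = lambda u: 1.0 / np.abs(u).sum()
x = np.array([0.3, -0.4])
assert np.isclose(gauge(x, r_l1), np.abs(x).sum())
```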
By leveraging tools from star geometry and dual Brunn-Minkowski theory, we illustrate how these losses can be interpreted as dual mixed volumes that depend on the data distribution. This allows us to derive exact expressions for the optimal regularizer in certain cases. Finally, we identify which neural network architectures give rise to such star body gauges and when such regularizers have favorable properties for optimization. More broadly, this work highlights how the tools of star geometry can aid in understanding the geometry of unsupervised regularizer learning.", "pdf": "https://openreview.net/pdf/f281b0787e1d0047d6c91046e6bd5f68553224e9.pdf"} {"title": "Perceiving Longer Sequences With Bi-Directional Cross-Attention Transformers", "url": "https://openreview.net/forum?id=5sm8YDnWvC", "detail_url": "https://openreview.net/forum?id=5sm8YDnWvC", "authors": "Markus Hiller,Krista A. Ehinger,Tom Drummond", "tags": "NIPS 2024,Poster", "abstract": "We present a novel bi-directional Transformer architecture (BiXT) which scales linearly with input size in terms of computational cost and memory consumption, but does not suffer the drop in performance or limitation to only one input modality seen with other efficient Transformer-based approaches. BiXT is inspired by the Perceiver architectures but replaces iterative attention with an efficient bi-directional cross-attention module in which input tokens and latent variables attend to each other simultaneously, leveraging a naturally emerging attention-symmetry between the two. This approach unlocks a key bottleneck experienced by Perceiver-like architectures and enables the processing and interpretation of both semantics ('what') and location ('where') to develop alongside each other over multiple layers -- allowing its direct application to dense and instance-based tasks alike. By combining efficiency with the generality and performance of a full Transformer architecture, BiXT can process longer sequences like point clouds, text or images at higher feature resolutions and achieves competitive performance across a range of tasks like point cloud part segmentation, semantic image segmentation, image classification, hierarchical sequence modeling and document retrieval. Our experiments demonstrate that BiXT models outperform larger competitors by leveraging longer sequences more efficiently on vision tasks like classification and segmentation, and perform on par with full Transformer variants on sequence modeling and document retrieval -- but require 28\% fewer FLOPs and are up to $8.4\times$ faster.", "pdf": "https://openreview.net/pdf/cffc8b63690897adbc9270e148ab2155fbc70a24.pdf"} {"title": "SceneDiffuser: Efficient and Controllable Driving Simulation Initialization and Rollout", "url": "https://openreview.net/forum?id=a4qT29Levh", "detail_url": "https://openreview.net/forum?id=a4qT29Levh", "authors": "Chiyu Max Jiang,Yijing Bai,Andre Cornman,Christopher Davis,Xiukun Huang,Hong Jeon,Sakshum Kulshrestha,John Wheatley Lambert,Shuangyu Li,Xuanyu Zhou,Carlos Fuertes,Chang Yuan,Mingxing Tan,Yin Zhou,Dragomir Anguelov", "tags": "NIPS 2024,Poster", "abstract": "Simulation with realistic and interactive agents represents a key task for autonomous vehicle (AV) software development in order to test AV performance in prescribed, often long-tail scenarios. In this work, we propose SceneDiffuser, a scene-level diffusion prior for traffic simulation.
We present a singular framework that unifies two key stages of simulation: scene initialization and scene rollout. Scene initialization refers to generating the initial layout for the traffic in a scene, and scene rollout refers to closed-loop simulation for the behaviors of the agents. While diffusion has been demonstrated to be effective in learning realistic, multimodal agent distributions, two open challenges remain: controllability, and closed-loop inference efficiency and realism. To address the controllability challenges, we propose generalized hard constraints, a generalized inference-time constraint mechanism that is simple yet effective. To improve closed-loop inference quality and efficiency, we propose amortized diffusion, a novel diffusion denoising paradigm that amortizes the physical cost of denoising over future simulation rollout steps, reducing the cost per physical rollout step to a single denoising function evaluation, while dramatically reducing closed-loop errors. We demonstrate the effectiveness of our approach on the Waymo Open Dataset, where we are able to generate distributionally realistic scenes, while obtaining competitive performance in the Sim Agents Challenge, surpassing the state-of-the-art in many realism attributes.", "pdf": "https://openreview.net/pdf/ac6b24ffb0e47181c8916963928d13383ddf22cf.pdf"} {"title": "No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices", "url": "https://openreview.net/forum?id=rIOl7KbSkv", "detail_url": "https://openreview.net/forum?id=rIOl7KbSkv", "authors": "Qi Pang,Shengyuan Hu,Wenting Zheng,Virginia Smith", "tags": "NIPS 2024,Poster", "abstract": "Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications. Watermarking, a technique that aims to embed information in the output of a model to verify its source, is useful for mitigating the misuse of such AI-generated content. However, we show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack---leading to fundamental trade-offs in robustness, utility, and usability. \nTo navigate these trade-offs, we rigorously study a set of simple yet effective attacks on common watermarking systems, and propose guidelines and defenses for LLM watermarking in practice.", "pdf": "https://openreview.net/pdf/bb004f77c167bc7493180ec14d476519fd86acc7.pdf"} {"title": "Scaling Sign Language Translation", "url": "https://openreview.net/forum?id=M80WgiO2Lb", "detail_url": "https://openreview.net/forum?id=M80WgiO2Lb", "authors": "Biao Zhang,Garrett Tanzer,Orhan Firat", "tags": "NIPS 2024,Poster", "abstract": "Sign language translation (SLT) addresses the problem of translating information from a sign language in video to a spoken language in text. Existing studies, while showing progress, are often limited to narrow domains and/or few sign languages and struggle with open-domain tasks. In this paper, we push forward the frontier of SLT by scaling pretraining data, model size, and number of translation directions. We perform large-scale SLT pretraining on different data including 1) noisy multilingual YouTube SLT data,\n2) parallel text corpora, and 3) SLT data augmented by translating video captions to other languages with off-the-shelf machine translation models.
We unify different pretraining tasks with task-specific prompts under the encoder-decoder architecture, and initialize the SLT model with pretrained (m/By)T5 models across model sizes. SLT pretraining results on How2Sign and FLEURS-ASL\\#0 (ASL to 42 spoken languages) demonstrate the significance of data/model scaling and cross-lingual cross-modal transfer, as well as the feasibility of zero-shot SLT. We finetune the pretrained SLT models on 5 downstream open-domain SLT benchmarks covering 5 sign languages. Experiments show substantial quality improvements over the vanilla baselines, surpassing the previous state-of-the-art (SOTA) by wide margins.", "pdf": "https://openreview.net/pdf/20674098c57fba69ddfb43ec06d0123229a6df0a.pdf"} {"title": "Provable Editing of Deep Neural Networks using Parametric Linear Relaxation", "url": "https://openreview.net/forum?id=IGhpUd496D", "detail_url": "https://openreview.net/forum?id=IGhpUd496D", "authors": "Zhe Tao,Aditya Thakur", "tags": "NIPS 2024,Poster", "abstract": "Ensuring that a DNN satisfies a desired property is critical when deploying DNNs in safety-critical applications. There are efficient methods that can verify whether a DNN satisfies a property, as seen in the annual DNN verification competition (VNN-COMP). However, the problem of provably editing a DNN to satisfy a property remains challenging. We present PREPARED, the first efficient technique for provable editing of DNNs. Given a DNN $\\mathcal{N}$ with parameters $\\theta$, input polytope $P$, and output polytope $Q$, PREPARED finds new parameters $\\theta'$ such that $\\forall \\mathrm{x} \\in P.\\ \\mathcal{N}(\\mathrm{x}; \\theta') \\in Q$ while minimizing the changes $\\lVert{\\theta' - \\theta}\\rVert$. Given a DNN and a property it violates from the VNN-COMP benchmarks, PREPARED is able to provably edit the DNN to satisfy this property within 45 seconds. PREPARED is efficient because it relaxes the NP-hard provable editing problem to solving a linear program. The key contribution is the novel notion of Parametric Linear Relaxation, which enables PREPARED to construct tight output bounds of the DNN that are parameterized by the new parameters $\\theta'$.
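To see why linearity helps, consider the special case of editing only a final linear layer with the earlier layers frozen: the constraint is then exactly linear in the new parameters, and the minimal-change edit is a plain linear program. The sketch below illustrates this toy case (it is not PREPARED itself, which handles full networks via Parametric Linear Relaxation):

```python
# Toy LP view of provable editing: edit a final layer y = W f + c so that
# A y <= b holds for a finite set of trunk features F (e.g., features of the
# input polytope's vertices), minimizing the largest single-weight change.
import numpy as np
from scipy.optimize import linprog

def edit_last_layer(W, c, F, A, b):
    m, n = W.shape
    rows, rhs = [], []
    for f in F:                           # one block of linear constraints per feature
        cur = A @ (W @ f + c)
        for i in range(A.shape[0]):
            rows.append(np.append(np.kron(A[i], f), 0.0))  # (A_i x f) . vec(dW) <= b_i - cur_i
            rhs.append(b[i] - cur[i])
    k = m * n
    eye, ones = np.eye(k), np.ones((k, 1))
    rows += list(np.hstack([eye, -ones]))      #  dW_ij - t <= 0
    rows += list(np.hstack([-eye, -ones]))     # -dW_ij - t <= 0
    rhs += [0.0] * (2 * k)
    cost = np.zeros(k + 1)
    cost[-1] = 1.0                             # minimize t = max |dW_ij|
    res = linprog(cost, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * k + [(0, None)])
    return W + res.x[:k].reshape(m, n)         # check res.success in real use
```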
We demonstrate that PREPARED is more efficient and effective than prior DNN editing approaches: i) using the VNN-COMP benchmarks, ii) by editing CIFAR10 and TinyImageNet image-recognition DNNs and BERT sentiment-classification DNNs for local robustness, and iii) by training a DNN to model a geodynamics process and satisfy physics constraints.", "pdf": "https://openreview.net/pdf/945784991c16d056f7424e504c586e0fe66b29cd.pdf"} {"title": "Differentially Private Set Representations", "url": "https://openreview.net/forum?id=GQNvvQquO0", "detail_url": "https://openreview.net/forum?id=GQNvvQquO0", "authors": "Sarvar Patel,Giuseppe Persiano,Joon Young Seo,Kevin Yeo", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of differentially private (DP) mechanisms for representing sets of size $k$ from a large universe.\nOur first construction creates $(\\epsilon,\\delta)$-DP representations with error probability of $1/(e^\\epsilon + 1)$ using space at most $1.05 k \\epsilon \\cdot \\log(e)$ bits, where the time to construct a representation is $O(k \\log(1/\\delta))$ while decoding time is $O(\\log(1/\\delta))$.\nWe also present a second algorithm for pure $\\epsilon$-DP representations with the same error using space at most $k \\epsilon \\cdot \\log(e)$ bits, but requiring large decoding times.\nOur algorithms match the lower bounds on privacy-utility trade-offs (including constants but ignoring $\\delta$ factors) and we also present a new space lower bound matching our constructions up to small constant factors.\nTo obtain our results, we design a new approach embedding sets into random linear systems, deviating from most prior approaches that inject noise into non-private solutions.", "pdf": "https://openreview.net/pdf/8586242b37ea885978251c3f7e0ca1537d1b7e6c.pdf"} {"title": "Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation", "url": "https://openreview.net/forum?id=oPFjhl6DpR", "detail_url": "https://openreview.net/forum?id=oPFjhl6DpR", "authors": "Shangding Gu,Laixi Shi,Yuhao Ding,Alois Knoll,Costas Spanos,Adam Wierman,Ming Jin", "tags": "NIPS 2024,Poster", "abstract": "Safe reinforcement learning (RL) is crucial for deploying RL agents in real-world applications, as it aims to maximize long-term rewards while satisfying safety constraints. However, safe RL often suffers from sample inefficiency, requiring extensive interactions with the environment to learn a safe policy. We propose Efficient Safe Policy Optimization (ESPO), a novel approach that enhances the efficiency of safe RL through sample manipulation. ESPO employs an optimization framework with three modes: maximizing rewards, minimizing costs, and balancing the trade-off between the two. By dynamically adjusting the sampling process based on the observed conflict between reward and safety gradients, ESPO theoretically guarantees convergence, optimization stability, and improved sample complexity bounds. Experiments on the Safety-MuJoCo and Omnisafe benchmarks demonstrate that ESPO significantly outperforms existing primal-based and primal-dual-based baselines in terms of reward maximization and constraint satisfaction.
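Schematically, the three-mode switching just described might look like the following (a hypothetical sketch, not the authors' algorithm; thresholds and the extra-sampling rule are illustrative):

```python
# Pick the update direction by constraint status and the observed
# reward/cost gradient conflict, taking extra samples only when the
# two gradients disagree.
import numpy as np

def espo_style_step(g_reward, g_cost, cost, limit, extra_sampler=None):
    denom = np.linalg.norm(g_reward) * np.linalg.norm(g_cost) + 1e-8
    conflict = np.dot(g_reward, g_cost) / denom
    if conflict < 0.0 and extra_sampler is not None:
        g_reward, g_cost = extra_sampler()   # manipulate samples: reduce gradient noise
    if cost <= limit:                        # mode 1: safe -> maximize reward
        return g_reward
    if cost > 1.2 * limit:                   # mode 2: badly unsafe -> minimize cost
        return -g_cost
    return g_reward - g_cost                 # mode 3: balance the trade-off
```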
Moreover, ESPO achieves substantial gains in sample efficiency, requiring 25--29\\% fewer samples than baselines, and reduces training time by 21--38\\%.", "pdf": "https://openreview.net/pdf/ba79b8360e1e32df1bc174e2a4c138266533424a.pdf"} {"title": "A Non-parametric Direct Learning Approach to Heterogeneous Treatment Effect Estimation under Unmeasured Confounding", "url": "https://openreview.net/forum?id=bwlUQsQumh", "detail_url": "https://openreview.net/forum?id=bwlUQsQumh", "authors": "Xinhai Zhang,Xingye Qiao", "tags": "NIPS 2024,Poster", "abstract": "In many social, behavioral, and biomedical sciences, treatment effect estimation is a crucial step in understanding the impact of an intervention, policy, or treatment. In recent years, an increasing emphasis has been placed on heterogeneity in treatment effects, leading to the development of various methods for estimating Conditional Average Treatment Effects (CATE). These approaches hinge on a crucial identifying condition of no unmeasured confounding, an assumption that is not always guaranteed in observational studies or randomized control trials with non-compliance. In this paper, we propose a general framework for estimating CATE with a possible unmeasured confounder using Instrumental Variables. We also construct estimators that exhibit greater efficiency and robustness against various scenarios of model misspecification. The efficacy of the proposed framework is demonstrated through simulation studies and a real data example.", "pdf": "https://openreview.net/pdf/c105411027bf75f81c9b025a7ef7a956478b3eba.pdf"} {"title": "Infinite Limits of Multi-head Transformer Dynamics", "url": "https://openreview.net/forum?id=p0BBKhD5aI", "detail_url": "https://openreview.net/forum?id=p0BBKhD5aI", "authors": "Blake Bordelon,Hamza Tahir Chaudhry,Cengiz Pehlevan", "tags": "NIPS 2024,Poster", "abstract": "In this work we analyze various scaling limits of the training dynamics of transformer models in the feature learning regime. We identify the set of parameterizations which admit well-defined infinite width and depth limits that allow the attention layers to update throughout training, a relevant notion of feature learning in these models. We then use tools from dynamical mean field theory (DMFT) to analyze various infinite limits (infinite heads, infinite key/query dimension, and infinite depth) which have different statistical descriptions depending on which infinite limit is taken and how attention layers are scaled. We provide numerical evidence of convergence to the limits and show they maintain the correct scale of updates for both SGD and Adam.", "pdf": "https://openreview.net/pdf/a585f90934bd1a7f438b6cf6acb2ada2329f9c29.pdf"} {"title": "AdaFlow: Imitation Learning with Variance-Adaptive Flow-Based Policies", "url": "https://openreview.net/forum?id=ugXKInqDCC", "detail_url": "https://openreview.net/forum?id=ugXKInqDCC", "authors": "Xixi Hu,qiang liu,Xingchao Liu,Bo Liu", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based imitation learning improves Behavioral Cloning (BC) on multi-modal decision-making, but comes at the cost of significantly slower inference due to the recursion in the diffusion process. This urges us to design efficient policy generators while keeping the ability to generate diverse actions. To address this challenge, we propose AdaFlow, an imitation learning framework based on flow-based generative modeling.
AdaFlow represents the policy with state-conditioned ordinary differential equations (ODEs), which are known as probability flows. We reveal an intriguing connection between the conditional variance of their training loss and the discretization error of the ODEs.\nWith this insight, we propose a variance-adaptive ODE solver that can adjust its step size in the inference stage, making\nAdaFlow an adaptive decision-maker, offering rapid inference without sacrificing diversity. Interestingly, it automatically reduces to a one-step generator when the action distribution is uni-modal. Our comprehensive empirical evaluation shows that AdaFlow achieves high performance with fast inference speed.", "pdf": "https://openreview.net/pdf/fa1f545e371f428274cf16d6695ca80a78e5311d.pdf"} {"title": "Generative Fractional Diffusion Models", "url": "https://openreview.net/forum?id=B9qg3wo75g", "detail_url": "https://openreview.net/forum?id=B9qg3wo75g", "authors": "Gabriel Nobis,Maximilian Springenberg,Marco Aversa,Michael Detzel,Rembert Daems,Roderick Murray-Smith,Shinichi Nakajima,Sebastian Lapuschkin,Stefano Ermon,Tolga Birdal,Manfred Opper,Christoph Knochenhauer,Luis Oala,Wojciech Samek", "tags": "NIPS 2024,Poster", "abstract": "We introduce the first continuous-time score-based generative model that leverages fractional diffusion processes for its underlying dynamics. Although diffusion models have excelled at capturing data distributions, they still suffer from various limitations such as slow convergence, mode-collapse on imbalanced data, and lack of diversity. These issues are partially linked to the use of light-tailed Brownian motion (BM) with independent increments. In this paper, we replace BM with an approximation of its non-Markovian counterpart, fractional Brownian motion (fBM), characterized by correlated increments and Hurst index $H \\in (0,1)$, where $H=0.5$ recovers the classical BM. To ensure tractable inference and learning, we employ a recently popularized Markov approximation of fBM (MA-fBM) and derive its reverse-time model, resulting in *generative fractional diffusion models* (GFDM). We characterize the forward dynamics using a continuous reparameterization trick and propose *augmented score matching* to efficiently learn the score function, which is partly known in closed form, at minimal added cost. The ability to drive our diffusion model via MA-fBM offers flexibility and control. $H \\leq 0.5$ enters the regime of *rough paths* whereas $H>0.5$ regularizes diffusion paths and invokes long-term memory. The Markov approximation allows added control by varying the number of Markov processes linearly combined to approximate fBM. Our evaluations on real image datasets demonstrate that GFDM achieves greater pixel-wise diversity and enhanced image quality, as indicated by a lower FID, offering a promising alternative to traditional diffusion models", "pdf": "https://openreview.net/pdf/01334c7c55c6a7e46ca396d90dd37632c4a411a4.pdf"} {"title": "Diffusion Spectral Representation for Reinforcement Learning", "url": "https://openreview.net/forum?id=C3tEX45hJX", "detail_url": "https://openreview.net/forum?id=C3tEX45hJX", "authors": "Dmitry Shribak,Chen-Xiao Gao,Yitong Li,Chenjun Xiao,Bo Dai", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based models have achieved notable empirical successes in reinforcement learning (RL) due to their expressiveness in modeling complex distributions. 
Although existing methods are promising, the key challenge in extending them to broader real-world applications lies in the computational cost at inference time, i.e., sampling from a diffusion model is considerably slow as it often requires tens to hundreds of iterations to generate even one sample. To circumvent this issue, we propose to leverage the flexibility of diffusion models for RL from a representation learning perspective. In particular, by exploiting the connection between diffusion models and energy-based models, we develop Diffusion Spectral Representation (Diff-SR), a coherent algorithm framework that enables extracting sufficient representations for value functions in Markov decision processes (MDP) and partially observable Markov decision processes (POMDP). We further demonstrate how Diff-SR facilitates efficient policy optimization and practical algorithms while explicitly bypassing the difficulty and inference cost of sampling from the diffusion model. Finally, we provide comprehensive empirical studies to verify the benefits of Diff-SR in delivering robust and advantageous performance across various benchmarks with both fully and partially observable settings.", "pdf": "https://openreview.net/pdf/09a16f6e24fbc0417cf0ba278d69fa287ed242e2.pdf"} {"title": "Multi-LLM Debate: Framework, Principals, and Interventions", "url": "https://openreview.net/forum?id=sy7eSEXdPC", "detail_url": "https://openreview.net/forum?id=sy7eSEXdPC", "authors": "Andrew Estornell,Yang Liu", "tags": "NIPS 2024,Poster", "abstract": "The flexible and generalized nature of large language models has allowed for their application in a wide array of language-based domains.\nMuch like their human contemporaries, these models are capable of engaging in discussions and debates as a means of improving answer quality.\nWe first take a theoretical approach to analyzing debate and provide a framework through which debate can be mathematically examined.\nBuilding on this framework, we provide several theoretical results for multi-agent debate.\nIn particular, we demonstrate that similar model capabilities, or similar model responses, can result in static debate dynamics where the debate procedure simply converges to the majority opinion. \nWhen this majority opinion is the result of a common misconception (ingrained in the models through shared training data) debate is likely to converge to answers associated with that common misconception.\nUsing insights from our theoretical results we then propose three interventions which improve the efficacy of debate. \nFor each intervention, we provide theoretical results demonstrating how debate is improved.\nWe also demonstrate that these interventions result in better performance on four common benchmark tasks.", "pdf": "https://openreview.net/pdf/ae3a0032b5023848f8c865ef47d515acf58cb84f.pdf"} {"title": "ProEdit: Simple Progression is All You Need for High-Quality 3D Scene Editing", "url": "https://openreview.net/forum?id=iC869BBmc5", "detail_url": "https://openreview.net/forum?id=iC869BBmc5", "authors": "Jun-Kun Chen,Yu-Xiong Wang", "tags": "NIPS 2024,Poster", "abstract": "This paper proposes ProEdit - a simple yet effective framework for high-quality 3D scene editing guided by diffusion distillation in a novel progressive manner.
Inspired by the crucial observation that multi-view inconsistency in scene editing is rooted in the diffusion model\u2019s large feasible output space (FOS), our framework controls the size of FOS and reduces inconsistency by decomposing the overall editing task into several subtasks, which are then executed progressively on the scene. Within this framework, we design a difficulty-aware subtask decomposition scheduler and an adaptive 3D Gaussian splatting (3DGS) training strategy, ensuring high efficiency in performing each subtask. Extensive evaluation shows that our ProEdit achieves state-of-the-art results in various scenes and challenging editing tasks, all through a simple framework without any expensive or sophisticated add-ons like distillation losses, components, or training procedures. Notably, ProEdit also provides a new way to preview, control, and select the aggressivity of editing operation during the editing process.", "pdf": "https://openreview.net/pdf/ad3e5dfa3ac274eb629365c49592c881edf6c5f1.pdf"} {"title": "Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models", "url": "https://openreview.net/forum?id=zIr2QjU4hl", "detail_url": "https://openreview.net/forum?id=zIr2QjU4hl", "authors": "Masatoshi Uehara,Yulai Zhao,Ehsan Hajiramezanali,Gabriele Scalia,G\u00f6kcen Eraslan,Avantika Lal,Sergey Levine,Tommaso Biancalani", "tags": "NIPS 2024,Poster", "abstract": "AI-driven design problems, such as DNA/protein sequence design, are commonly tackled from two angles: generative modeling, which efficiently captures the feasible design space (e.g., natural images or biological sequences), and model-based optimization, which utilizes reward models for extrapolation. To combine the strengths of both approaches, we adopt a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL. Although prior work has explored similar avenues, they primarily focus on scenarios where accurate reward models are accessible. In contrast, we concentrate on an offline setting where a reward model is unknown, and we must learn from static offline datasets, a common scenario in scientific domains. In offline scenarios, existing approaches tend to suffer from overoptimization, as they may be misled by the reward model in out-of-distribution regions. To address this, we introduce a conservative fine-tuning approach, BRAID, by optimizing a conservative reward model, which includes additional penalization outside of offline data distributions. Through empirical and theoretical analysis, we demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models while avoiding the generation of invalid designs through pre-trained diffusion models.", "pdf": "https://openreview.net/pdf/208379a521961503552a6647a7533a7037e81262.pdf"} {"title": "Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis", "url": "https://openreview.net/forum?id=wgpmDyJgsg", "detail_url": "https://openreview.net/forum?id=wgpmDyJgsg", "authors": "Qitao Zhao,Shubham Tulsiani", "tags": "NIPS 2024,Poster", "abstract": "Inferring the 3D structure underlying a set of multi-view images typically requires solving two co-dependent tasks -- accurate 3D reconstruction requires precise camera poses, and predicting camera poses relies on (implicitly or explicitly) modeling the underlying 3D. 
The classical framework of analysis by synthesis casts this inference as a joint optimization seeking to explain the observed pixels, and recent instantiations learn expressive 3D representations (e.g., Neural Fields) with gradient-descent-based pose refinement of initial pose estimates. However, given a sparse set of observed views, the observations may not provide sufficient direct evidence to obtain complete and accurate 3D. Moreover, large errors in pose estimation may not be easily corrected and can further degrade the inferred 3D. To allow robust 3D reconstruction and pose estimation in this challenging setup, we propose SparseAGS, a method that adapts this analysis-by-synthesis approach by: a) including novel-view-synthesis-based generative priors in conjunction with photometric objectives to improve the quality of the inferred 3D, and b) explicitly reasoning about outliers and using a discrete search with a continuous optimization-based strategy to correct them. We validate our framework across real-world and synthetic datasets in combination with several off-the-shelf pose estimation systems as initialization. We find that it significantly improves the base systems' pose accuracy while yielding high-quality 3D reconstructions that outperform the results from current multi-view reconstruction baselines.", "pdf": "https://openreview.net/pdf/9deb0cefa84b633dd45b98a2c28dfa4cb9a5847d.pdf"} {"title": "Bayesian Strategic Classification", "url": "https://openreview.net/forum?id=SadbRPoG2k", "detail_url": "https://openreview.net/forum?id=SadbRPoG2k", "authors": "Lee Cohen,Saeed Sharifi -Malvajerdi,Kevin Stangl,Ali Vakilian,Juba Ziani", "tags": "NIPS 2024,Poster", "abstract": "In strategic classification, agents modify their features, at a cost, to obtain a positive classification outcome from the learner\u2019s classifier, \ntypically assuming agents have full knowledge of the deployed classifier. In contrast, we consider a Bayesian setting where agents have a common distributional prior on the classifier being used and agents manipulate their features to maximize their expected utility according to this prior.\nThe learner can reveal truthful, yet not necessarily complete, information about the classifier to the agents, aiming to release just enough information to shape the agents' behavior and thus maximize accuracy. We show that partial information release can counter-intuitively benefit the learner\u2019s accuracy, allowing qualified agents to pass the classifier while preventing unqualified agents from doing so. Despite the intractability of computing the best response of an agent in the general case, we provide oracle-efficient algorithms for scenarios where the learner\u2019s hypothesis class consists of low-dimensional linear classifiers or when the agents\u2019 cost function satisfies a sub-modularity condition. 
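The agent's problem in this Bayesian setting reduces to expected-utility maximization over feature modifications; below is a toy sketch with a discrete prior over linear classifiers (all names and numbers are illustrative, not from the paper):

```python
# An agent holding a prior over possible linear classifiers picks the
# feature change maximizing expected utility = P(accept under prior) - cost.
import numpy as np

def best_response(x, prior, candidates, cost_fn):
    """prior: list of (weight, w, b) with weights summing to 1;
    candidates: feature vectors the agent could move to (including x itself)."""
    def expected_utility(z):
        p_accept = sum(wt for wt, w, b in prior if w @ z + b >= 0.0)
        return p_accept - cost_fn(x, z)
    return max(candidates, key=expected_utility)

# Example: two equally likely acceptance thresholds; quadratic manipulation cost.
# Moving to 1.0 passes one classifier cheaply; moving to 2.0 passes both but
# costs too much, so the best response stops halfway.
prior = [(0.5, np.array([1.0]), -1.0), (0.5, np.array([1.0]), -2.0)]
cands = [np.array([v]) for v in (0.0, 1.0, 2.0)]
z_star = best_response(np.array([0.0]), prior, cands,
                       lambda x, z: 0.3 * np.sum((z - x) ** 2))  # -> array([1.0])
```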
\nAdditionally, we address the learner\u2019s optimization problem, offering both positive and negative results on determining the optimal information release to maximize expected accuracy, particularly in settings where an agent\u2019s qualification can be represented by a real-valued number.", "pdf": "https://openreview.net/pdf/1c9f2f87da91ab770db190333ed39b5e5c423b9f.pdf"} {"title": "InstructG2I: Synthesizing Images from Multimodal Attributed Graphs", "url": "https://openreview.net/forum?id=zWnW4zqkuM", "detail_url": "https://openreview.net/forum?id=zWnW4zqkuM", "authors": "Bowen Jin,Ziqi Pang,Bingjun Guo,Yu-Xiong Wang,Jiaxuan You,Jiawei Han", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we approach an overlooked yet critical task, Graph2Image: generating images from multimodal attributed graphs (MMAGs). This task poses significant challenges due to the explosion in graph size, dependencies among graph entities, and the need for controllability in graph conditions. To address these challenges, we propose a graph context-conditioned diffusion model called InstructG2I. InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling by combining personalized PageRank and re-ranking based on vision-language features. Then, a graph QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process of diffusion. Finally, we propose graph classifier-free guidance, enabling controllable generation by varying the strength of graph guidance and multiple connected edges to a node. Extensive experiments conducted on three datasets from different domains demonstrate the effectiveness and controllability of our approach. The code is available at https://github.com/PeterGriffinJin/InstructG2I.", "pdf": "https://openreview.net/pdf/14232e04d8524d77648d3c5ea135527ad4aef01a.pdf"} {"title": "E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation", "url": "https://openreview.net/forum?id=Xp8qhdmeb4", "detail_url": "https://openreview.net/forum?id=Xp8qhdmeb4", "authors": "Boqian Wu,Qiao Xiao,Shiwei Liu,Lu Yin,Mykola Pechenizkiy,Decebal Constantin Mocanu,Maurice van Keulen,Elena Mocanu", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks have evolved as the leading approach in 3D medical image segmentation due to their outstanding performance. However, the ever-increasing model size and computational cost of deep neural networks have become the primary barriers to deploying them on real-world, resource-limited hardware. To achieve both segmentation accuracy and efficiency, we propose a 3D medical image segmentation model called Efficient to Efficient Network (E2ENet), which incorporates two parametrically and computationally efficient designs. i. Dynamic sparse feature fusion (DSFF) mechanism: it adaptively learns to fuse informative multi-scale features while reducing redundancy. ii. Restricted depth-shift in 3D convolution: it leverages the 3D spatial information while keeping the model size and computational complexity on par with 2D-based methods. We conduct extensive experiments on AMOS, Brain Tumor Segmentation and BTCV Challenge, demonstrating that E2ENet consistently achieves a superior trade-off between accuracy and efficiency compared to prior art across various resource constraints. 
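To make the two-stage neighbor sampling in the InstructG2I abstract above concrete, here is a minimal sketch: personalized PageRank pre-selects structurally relevant neighbors, and a feature-similarity score re-ranks them. The dense power iteration and the generic `feat_sim` vector are simplifying assumptions; the actual sampler works on large multimodal attributed graphs with vision-language features.

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=50):
    """Power iteration for personalized PageRank from a single seed node.
    adj is a dense adjacency matrix (illustrative; real MMAGs are sparse)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    P = adj / out_deg                       # row-stochastic transition matrix
    r = np.zeros(n); r[seed] = 1.0          # restart distribution
    pi = r.copy()
    for _ in range(iters):
        pi = alpha * r + (1 - alpha) * P.T @ pi
    return pi

def sample_neighbors(adj, seed, feat_sim, k=3):
    """PPR scoring followed by feature-based re-ranking (schematic)."""
    ppr = personalized_pagerank(adj, seed)
    ppr[seed] = 0.0
    shortlist = np.argsort(-ppr)[:3 * k]    # structural pre-selection
    return sorted(shortlist, key=lambda v: -feat_sim[v])[:k]  # re-rank

rng = np.random.default_rng(2)
A = (rng.random((12, 12)) < 0.3).astype(float); np.fill_diagonal(A, 0)
print(sample_neighbors(A, seed=0, feat_sim=rng.random(12)))
```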
In particular, with a single model and single scale, E2ENet achieves comparable accuracy on the large-scale challenge AMOS-CT, while saving over 69% parameter count and 27% FLOPs in the inference phase, compared with the previous\nbest-performing method. Our code has been made available at: https://github.com/boqian333/E2ENet-Medical.", "pdf": "https://openreview.net/pdf/8a0a4586b53364bfb4c24a094f9633385ac9ae31.pdf"} {"title": "Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning", "url": "https://openreview.net/forum?id=yvUHnBkCzd", "detail_url": "https://openreview.net/forum?id=yvUHnBkCzd", "authors": "Pouya M. Ghari,Yanning Shen", "tags": "NIPS 2024,Poster", "abstract": "Federated learning is renowned for its efficacy in distributed model training, ensuring that users, called clients, retain data privacy by not disclosing their data to the central server that orchestrates collaborations. Most previous work on federated learning assumes that clients possess static batches of training data. However, clients may also need to make real-time predictions on streaming data in non-stationary environments. In such dynamic environments, employing pre-trained models may be inefficient, as they struggle to adapt to the constantly evolving data streams. To address this challenge, clients can fine-tune models online, leveraging their observed data to enhance performance. Despite the potential benefits of client participation in federated online model fine-tuning, existing analyses have not conclusively demonstrated its superiority over local model fine-tuning. To bridge this gap, the present paper develops a novel personalized federated learning algorithm, wherein each client constructs a personalized model by combining a locally fine-tuned model with multiple federated models learned by the server over time. Theoretical analysis and experiments on real datasets corroborate the effectiveness of this approach for real-time predictions and federated model fine-tuning.", "pdf": "https://openreview.net/pdf/1f67e6c96793fc968860e4ecdc67eeb800a1dc2f.pdf"} {"title": "A Combinatorial Algorithm for the Semi-Discrete Optimal Transport Problem", "url": "https://openreview.net/forum?id=Xq0Jwbczkn", "detail_url": "https://openreview.net/forum?id=Xq0Jwbczkn", "authors": "Pankaj Agarwal,Sharath Raghvendra,Pouyan Shirzadian,Keegan Yao", "tags": "NIPS 2024,Poster", "abstract": "Optimal Transport (OT, also known as the Wasserstein distance) is a popular metric for comparing probability distributions and has been successfully used in many machine-learning applications.\nIn the semi-discrete $2$-Wasserstein problem, we wish to compute the cheapest way to transport all the mass from a continuous distribution $\\mu$ to a discrete distribution $\\nu$ in $\\mathbb{R}^d$ for $d\\ge 1$, where the cost of transporting unit mass between points $a$ and $b$ is $d(a,b)=||a-b||^2$. When both distributions are discrete, a simple combinatorial framework has been used to find the exact solution (see e.g. [Orlin, STOC 1988]). \nIn this paper, we propose a combinatorial framework for the semi-discrete OT, which can be viewed as an extension of the combinatorial framework for the discrete OT but requires several new ideas. 
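The personalized federated learning paper above builds each client's predictor by combining a locally fine-tuned model with federated models received from the server over time. A toy sketch of such a mixture follows, using multiplicative-weight (Hedge-style) updates on streaming data; the specific weighting rule is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

class MixturePredictor:
    """Weighted mixture of a local model and federated models (schematic)."""

    def __init__(self, models, eta=0.5):
        self.models = models                  # callables x -> prediction
        self.w = np.ones(len(models)) / len(models)
        self.eta = eta

    def predict(self, x):
        preds = np.array([m(x) for m in self.models])
        return self.w @ preds, preds

    def update(self, preds, y):
        losses = (preds - y) ** 2             # per-model squared loss
        self.w *= np.exp(-self.eta * losses)  # Hedge-style reweighting
        self.w /= self.w.sum()

local = lambda x: 0.9 * x                     # locally fine-tuned model
fed_old = lambda x: 0.5 * x                   # federated model, round t-1
fed_new = lambda x: 0.8 * x                   # federated model, round t
mix = MixturePredictor([local, fed_old, fed_new])
for x in np.linspace(0, 1, 20):               # streaming observations
    yhat, preds = mix.predict(x)
    mix.update(preds, y=0.85 * x)              # non-stationary target
print(mix.w)                                   # weight shifts toward better models
```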
We present a new algorithm that given $\\mu$ and $\\nu$ in $\\mathbb{R}^2$ and a parameter $\\varepsilon>0$, computes an $\\varepsilon$-additive approximate semi-discrete transport plan in $O(n^{4}\\log n\\log \\frac{1}{\\varepsilon})$ time (in the worst case), where $n$ is the support-size of the discrete distribution $\\nu$ and we assume that the mass of $\\mu$ inside a triangle can be computed in $O(1)$ time. Our algorithm is significantly faster than the known algorithms, and unlike many numerical algorithms, it does not make any assumptions on the smoothness of $\\mu$.\nAs an application of our algorithm, we describe a data structure to store a large discrete distribution $\\mu$ (with support size $N$) using $O(N)$ space so that, given a query discrete distribution $\\nu$ (with support size $k$), an $\\varepsilon$-additive approximate transport plan can be computed in $O(k^{3}\\sqrt{N}\\log \\frac{1}{\\varepsilon})$ time in $2$ dimensions. Our algorithm and data structure extend to higher dimensions as well as to $p$-Wasserstein problem for any $p \\ge 1$.", "pdf": "https://openreview.net/pdf/9d3d30f46c475b1352cf3332893e08e4e46342c6.pdf"} {"title": "Extending Video Masked Autoencoders to 128 frames", "url": "https://openreview.net/forum?id=bFrNPlWchg", "detail_url": "https://openreview.net/forum?id=bFrNPlWchg", "authors": "Nitesh Bharadwaj Gundavarapu,Luke Friedman,Raghav Goyal,Chaitra Hegde,Eirikur Agustsson,Sagar M. Waghmare,Mikhail Sirotenko,Ming-Hsuan Yang,Tobias Weyand,Boqing Gong,Leonid Sigal", "tags": "NIPS 2024,Poster", "abstract": "Video understanding has witnessed significant progress with recent video foundation models demonstrating strong performance owing to self-supervised pre-training objectives; Masked Autoencoders (MAE) being the design of choice. Nevertheless, the majority of prior works that leverage MAE pre-training have focused on relatively short video representations (16 / 32 frames in length) largely due to hardware memory and compute limitations that scale poorly with video length due to the dense memory-intensive self-attention decoding. One natural strategy to address these challenges is to subsample tokens to reconstruct during decoding (or decoder masking). In this work, we propose an effective strategy for prioritizing tokens which allows training on longer video sequences (128 frames) and gets better performance than, more typical, random and uniform masking strategies. The core of our approach is an adaptive decoder masking strategy that prioritizes the most important tokens and uses quantized tokens as reconstruction objectives. Our adaptive strategy leverages a powerful MAGVIT-based tokenizer that jointly learns the tokens and their priority. We validate our design choices through exhaustive ablations and observe improved performance of the resulting long-video (128 frames) encoders over short-video (32 frames) counterparts. 
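For orientation on the combinatorial framework referenced in the semi-discrete OT abstract above: when both distributions are discrete, the transport problem is a plain linear program, which a generic solver handles at small scale. The sketch below is that discrete counterpart, not the paper's semi-discrete algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(mu, nu, X, Y):
    """Exact discrete OT with cost ||a-b||^2 via a generic LP solver."""
    n, m = len(mu), len(nu)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
    # Equality constraints: row sums of the plan equal mu, column sums equal nu.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun

rng = np.random.default_rng(3)
X, Y = rng.random((4, 2)), rng.random((3, 2))
mu, nu = np.full(4, 0.25), np.full(3, 1 / 3)
plan, cost = discrete_ot(mu, nu, X, Y)
print(np.round(plan, 3), cost)
```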
With our long-video masked autoencoder (LVMAE) strategy, we surpass state-of-the-art on Diving48 by 3.9 points and EPIC-Kitchens-100 verb classification by 2.5 points while relying on a simple core architecture and video-only pre-training (unlike some of the prior works that require millions of labeled video-text pairs or specialized encoders).", "pdf": "https://openreview.net/pdf/111644cfb458a030908543084cd59c0cc4b9c127.pdf"} {"title": "D\u00e9j\u00e0 Vu Memorization in Vision\u2013Language Models", "url": "https://openreview.net/forum?id=SFCZdXDyNs", "detail_url": "https://openreview.net/forum?id=SFCZdXDyNs", "authors": "Bargav Jayaraman,Chuan Guo,Kamalika Chaudhuri", "tags": "NIPS 2024,Poster", "abstract": "Vision-Language Models (VLMs) have emerged as the state-of-the-art representation learning solution, with myriads of downstream applications such as image classification, retrieval and generation. A natural question is whether these models memorize their training data, which also has implications for generalization. We propose a new method for measuring memorization in VLMs, which we call d\u00e9j\u00e0 vu memorization. For VLMs trained on image-caption pairs, we show that the model indeed retains information about individual objects in the training images beyond what can be inferred from correlations or the image caption. We evaluate d\u00e9j\u00e0 vu memorization at both sample and population level, and show that it is significant for OpenCLIP trained on as many as 50M image-caption pairs. Finally, we show that text randomization considerably mitigates memorization risk while only moderately impacting the model\u2019s downstream task performance. The code is available here: https://github.com/facebookresearch/VLMDejaVu.", "pdf": "https://openreview.net/pdf/a355ac38d9aa6df494ad197c4810d6481a1ee4a0.pdf"} {"title": "Propensity Score Alignment of Unpaired Multimodal Data", "url": "https://openreview.net/forum?id=hT4y7D2o2T", "detail_url": "https://openreview.net/forum?id=hT4y7D2o2T", "authors": "Johnny Xi,Jana Osea,Zuheng Xu,Jason Hartford", "tags": "NIPS 2024,Poster", "abstract": "Multimodal representation learning techniques typically require paired samples to learn shared representations, but collecting paired samples can be challenging in fields like biology, where measurement devices often destroy the samples. This paper presents an approach to address the challenge of aligning unpaired samples across disparate modalities in multimodal representation learning. We draw an analogy between potential outcomes in causal inference and potential views in multimodal observations, allowing us to leverage Rubin's framework to estimate a common space for matching samples. Our approach assumes samples that are experimentally perturbed by treatments, and uses this to estimate a propensity score from each modality. We show that the propensity score encapsulates all shared information between a latent state and treatment, and can be used to define a distance between samples. 
We experiment with two alignment techniques that leverage this distance---shared nearest neighbours (SNN) and optimal transport (OT) matching---and find that OT matching results in significant improvements over state-of-the-art alignment approaches on synthetic multi-modal tasks, on real-world data from the NeurIPS Multimodal Single-Cell Integration Challenge, and on a single-cell microscopy-to-expression prediction task.", "pdf": "https://openreview.net/pdf/e33e66092d51fb20f53ddbc85a231d7c32b7525d.pdf"} {"title": "KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization", "url": "https://openreview.net/forum?id=0LXotew9Du", "detail_url": "https://openreview.net/forum?id=0LXotew9Du", "authors": "Coleman Richard Charles Hooper,Sehoon Kim,Hiva Mohammadzadeh,Michael W. Mahoney,Sophia Shao,Kurt Keutzer,Amir Gholami", "tags": "NIPS 2024,Poster", "abstract": "LLMs are seeing growing use for applications which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in sub-4-bit precision. Our work, KVQuant, facilitates low precision KV cache quantization by incorporating several novel methods: (i) Per-Channel Key Quantization, where we adjust the dimension along which we quantize the Key activations to better match the distribution; (ii) Pre-RoPE Key Quantization, where we quantize Key activations before the rotary positional embedding to mitigate its impact on quantization; (iii) Non-Uniform KV Cache Quantization, where we derive per-layer sensitivity-weighted non-uniform datatypes that better represent the distributions; and (iv) Per-Vector Dense-and-Sparse Quantization, where we isolate outliers separately for each vector to minimize skews in quantization ranges. By applying our method to the LLaMA, Llama-2, Llama-3, and Mistral models, we achieve < 0.1 perplexity degradation with 3-bit quantization on both Wikitext-2 and C4, outperforming existing approaches. Our method enables serving LLaMA-7B with a context length of up to 1 million on a single A100-80GB GPU and up to 10 million on an 8-GPU system. We develop custom CUDA kernels for KVQuant, showing that we can achieve up to ~1.7x speedups, compared to baseline fp16 matrix-vector multiplications, for the LLaMA-7B model.", "pdf": "https://openreview.net/pdf/14defcf80798b0426d9bd05b25ab492c11727c8a.pdf"} {"title": "Efficient multi-prompt evaluation of LLMs", "url": "https://openreview.net/forum?id=jzkpwcj200", "detail_url": "https://openreview.net/forum?id=jzkpwcj200", "authors": "Felipe Maia Polo,Ronald Xu,Lucas Weber,M\u00edrian Silva,Onkar Bhardwaj,Leshem Choshen,Allysson Flavio Melo de Oliveira,Yuekai Sun,Mikhail Yurochkin", "tags": "NIPS 2024,Poster", "abstract": "Most popular benchmarks for comparing LLMs rely on a limited set of prompt templates, which may not fully capture the LLMs\u2019 abilities and can affect the reproducibility of results on leaderboards. Many recent works empirically verify prompt sensitivity and advocate for changes in LLM evaluation. In this paper, we consider the problem of estimating the performance distribution across many prompt variants instead of finding a single prompt to evaluate with. 
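A small numerical illustration of the first KVQuant ingredient above, Per-Channel Key Quantization: when one Key channel carries outliers, computing quantization ranges per channel rather than per token sharply reduces error. The uniform quantizer here is a deliberate simplification of the paper's non-uniform, sensitivity-weighted datatypes.

```python
import numpy as np

def quantize(x, bits, axis):
    """Uniform quantization with min/max ranges computed along `axis`."""
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((x - lo) / scale) * scale + lo   # dequantized values

rng = np.random.default_rng(5)
keys = rng.normal(size=(128, 64))   # (tokens, channels)
keys[:, 7] *= 25.0                  # one outlier channel, as observed in Keys

# Per-token ranges are blown up by the outlier channel; per-channel ranges
# isolate it, which is the motivation for quantizing Keys per channel.
err_token = np.abs(keys - quantize(keys, 3, axis=1)).mean()
err_channel = np.abs(keys - quantize(keys, 3, axis=0)).mean()
print(f"per-token MAE {err_token:.4f} vs per-channel MAE {err_channel:.4f}")
```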
We introduce PromptEval, a method for estimating performance across a large set of prompts borrowing strength across prompts and examples to produce accurate estimates under practical evaluation budgets. The resulting distribution can be used to obtain performance quantiles to construct various robust performance metrics (e.g., top 95% quantile or median). We prove that PromptEval consistently estimates the performance distribution and demonstrate its efficacy empirically on three prominent LLM benchmarks: MMLU, BIG-bench Hard, and LMentry; for example, PromptEval can accurately estimate performance quantiles across 100 prompt templates on MMLU with a budget equivalent to two single-prompt evaluations. Moreover, we show how PromptEval can be useful in LLM-as-a-judge and best prompt identification applications.", "pdf": "https://openreview.net/pdf/25775e1605f94e86a0854b54c4025198032e1e76.pdf"} {"title": "Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression", "url": "https://openreview.net/forum?id=ntF7D8tAlQ", "detail_url": "https://openreview.net/forum?id=ntF7D8tAlQ", "authors": "Kai Tan,Pierre C Bellec", "tags": "NIPS 2024,Poster", "abstract": "This paper studies the generalization performance of iterates obtained by Gradient Descent (GD), Stochastic Gradient Descent (SGD) and their proximal variants in high-dimensional robust regression problems. The number of features is comparable to the sample size and errors may be heavy-tailed. We introduce estimators that precisely track the generalization error of the iterates along the trajectory of the iterative algorithm. These estimators are provably consistent under suitable conditions. The results are illustrated through several examples, including Huber regression, pseudo-Huber regression, and their penalized variants with non-smooth regularizer. We provide explicit generalization error estimates for iterates generated from GD and SGD, or from proximal SGD in the presence of a non-smooth regularizer. The proposed risk estimates serve as effective proxies for the actual generalization error, allowing us to determine the optimal stopping iteration that minimizes the generalization error. Extensive simulations confirm the effectiveness of the proposed generalization error estimates.", "pdf": "https://openreview.net/pdf/303193742edff79eea3620b2e2526badbcb840dd.pdf"} {"title": "First-Explore, then Exploit: Meta-Learning to Solve Hard Exploration-Exploitation Trade-Offs", "url": "https://openreview.net/forum?id=AhjTu2aiiW", "detail_url": "https://openreview.net/forum?id=AhjTu2aiiW", "authors": "Ben Norman,Jeff Clune", "tags": "NIPS 2024,Poster", "abstract": "Standard reinforcement learning (RL) agents never intelligently explore like a human (i.e. taking into account complex domain priors and adapting quickly based on previous exploration). Across episodes, RL agents struggle to perform even simple exploration strategies, for example systematic search that avoids exploring the same location multiple times. This poor exploration limits performance on challenging domains. 
Meta-RL is a potential solution, as unlike standard RL, meta-RL can *learn* to explore, and potentially learn highly complex strategies far beyond those of standard RL, strategies such as experimenting in early episodes to learn new skills, or conducting experiments to learn about the current environment.\nTraditional meta-RL focuses on the problem of learning to optimally balance exploration and exploitation to maximize the *cumulative reward* of the episode sequence (e.g., aiming to maximize the total wins in a tournament -- while also improving as a player).\nWe identify a new challenge with state-of-the-art cumulative-reward meta-RL methods.\nWhen optimal behavior requires exploration that sacrifices immediate reward to enable higher subsequent reward, existing state-of-the-art cumulative-reward meta-RL methods become stuck on the local optimum of failing to explore.\nOur method, First-Explore, overcomes this limitation by learning two policies: one to solely explore, and one to solely exploit. When exploring requires forgoing early-episode reward, First-Explore significantly outperforms existing cumulative meta-RL methods. By identifying and solving the previously unrecognized problem of forgoing reward in early episodes, First-Explore represents a significant step towards developing meta-RL algorithms capable of human-like exploration on a broader range of domains.", "pdf": "https://openreview.net/pdf/d836aa944c9edfd65776c5dce9bdfa31dc753230.pdf"} {"title": "Iterative Reasoning Preference Optimization", "url": "https://openreview.net/forum?id=4XIKfvNYvx", "detail_url": "https://openreview.net/forum?id=4XIKfvNYvx", "authors": "Richard Yuanzhe Pang,Weizhe Yuan,He He,Kyunghyun Cho,Sainbayar Sukhbaatar,Jason E Weston", "tags": "NIPS 2024,Poster", "abstract": "Iterative preference optimization methods have recently been shown to perform well for general instruction tuning tasks, but typically make little improvement on reasoning tasks. In this work we develop an iterative approach that optimizes the preference between competing generated Chain-of-Thought (CoT) candidates by optimizing for winning vs. losing reasoning steps. We train using a modified DPO loss with an additional negative log-likelihood term, which we find to be crucial. We show reasoning improves across repeated iterations of this scheme. While only relying on examples in the training set, our approach results in increasing accuracy on GSM8K, MATH, and ARC-Challenge for Llama-2-70B-Chat, outperforming other Llama-2-based models not relying on additionally sourced datasets. For example, we see a large improvement from 55.6% to 81.6% on GSM8K and an accuracy of 88.7% with majority voting out of 32 samples.", "pdf": "https://openreview.net/pdf/7e59c840774359c6db720256d9f471fcec640aa4.pdf"} {"title": "Robot Policy Learning with Temporal Optimal Transport Reward", "url": "https://openreview.net/forum?id=LEed5Is4oi", "detail_url": "https://openreview.net/forum?id=LEed5Is4oi", "authors": "Yuwei Fu,Haichao Zhang,Di Wu,Wei Xu,Benoit Boulet", "tags": "NIPS 2024,Poster", "abstract": "Reward specification is one of the most tricky problems in Reinforcement Learning, which usually requires tedious hand engineering in practice. One promising approach to tackle this challenge is to adopt existing expert video demonstrations for policy learning. Some recent work investigates how to learn robot policies from only a single/few expert video demonstrations. 
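The Iterative Reasoning Preference Optimization abstract above trains on winning versus losing CoT candidates with a modified DPO loss plus a negative log-likelihood term. The sketch below writes out one plausible form of that per-pair loss; the weighting `alpha` and the absence of length normalization are assumptions, not the paper's exact recipe.

```python
import numpy as np

def log_sigmoid(z):
    """Numerically stable log(sigmoid(z))."""
    return -np.logaddexp(0.0, -z)

def dpo_nll_loss(logp_w, logp_l, ref_w, ref_l, beta=0.1, alpha=1.0):
    """DPO preference loss on a (winning, losing) pair plus NLL on the winner.

    Inputs are sequence log-probabilities under the policy (logp_*) and the
    frozen reference model (ref_*)."""
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    preference = -log_sigmoid(margin)   # push winner above loser, relative to ref
    nll = -logp_w                       # keep the winner likely in absolute terms
    return preference + alpha * nll

# Toy check: the loss decreases as the policy separates winner from loser.
print(dpo_nll_loss(logp_w=-10.0, logp_l=-10.0, ref_w=-10.0, ref_l=-10.0))
print(dpo_nll_loss(logp_w=-8.0, logp_l=-14.0, ref_w=-10.0, ref_l=-10.0))
```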
For example, reward labeling via Optimal Transport (OT) has been shown to be an effective strategy to generate a proxy reward by measuring the alignment between the robot trajectory and the expert demonstrations. However, previous work mostly overlooks that the OT reward is invariant to temporal order information, which could bring extra noise to the reward signal. To address this issue, in this paper, we introduce the Temporal Optimal Transport (TemporalOT) reward to incorporate temporal order information for learning a more accurate OT-based proxy reward. Extensive experiments on the Meta-world benchmark tasks validate the efficacy of the proposed method. Our code is available at: https://github.com/fuyw/TemporalOT.", "pdf": "https://openreview.net/pdf/546d5a3bfcb9e2fdc8b68c1bf6c486d493da366e.pdf"} {"title": "Reinforcement Learning Guided Semi-Supervised Learning", "url": "https://openreview.net/forum?id=PSMBefUZa2", "detail_url": "https://openreview.net/forum?id=PSMBefUZa2", "authors": "Marzi Heidari,Hanping Zhang,Yuhong Guo", "tags": "NIPS 2024,Poster", "abstract": "In recent years, semi-supervised learning (SSL) has gained significant attention due to its ability to leverage both labeled and unlabeled data to improve model performance, especially when labeled data is scarce. However, most current SSL methods rely on heuristics or predefined rules for generating pseudo-labels and leveraging unlabeled data. They are limited to exploiting loss functions and regularization methods within the standard norm. In this paper, we propose a novel Reinforcement Learning (RL) Guided SSL method, RLGSSL, that formulates SSL as a one-armed bandit problem and deploys an innovative RL loss based on weighted reward to adaptively guide the learning process of the prediction model. RLGSSL incorporates a carefully designed reward function that balances the use of labeled and unlabeled data to enhance generalization performance. A semi-supervised teacher-student framework is further deployed to increase the learning stability. We demonstrate the effectiveness of RLGSSL through extensive experiments on several benchmark datasets and show that our approach achieves consistent superior performance compared to state-of-the-art SSL methods.", "pdf": "https://openreview.net/pdf/2344add05a63e8e811361c96b898b85f89821417.pdf"} {"title": "Non-parametric classification via expand-and-sparsify representation", "url": "https://openreview.net/forum?id=0d50Il6enG", "detail_url": "https://openreview.net/forum?id=0d50Il6enG", "authors": "Kaushik Sinha", "tags": "NIPS 2024,Poster", "abstract": "In *expand-and-sparsify* (EaS) representation, a data point in $\\mathcal{S}^{d-1}$ is first randomly mapped to higher dimension $\\mathbb{R}^m$, where $m>d$, followed by a sparsification operation where the informative $k \\ll m$ of the $m$ coordinates are set to one and the rest are set to zero. We propose two algorithms for non-parametric classification using such EaS representation. For our first algorithm, we use *winners-take-all* operation for the sparsification step and show that the proposed classifier admits the form of a locally weighted average classifier and establish its consistency via Stone's Theorem. Further, assuming that the conditional probability function $P(y=1|x)=\\eta(x)$ is H\\\"{o}lder continuous and for optimal choice of $m$, we show that the convergence rate of this classifier is minimax-optimal. 
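To see why the plain OT proxy reward discussed above is blind to temporal order, and one simple way to reinstate it, consider the sketch below: a time-reversed copy of the expert demonstration earns essentially the same OT reward until a temporal-distance penalty is added to the cost matrix. The penalty form is an illustrative stand-in for the TemporalOT construction, not its actual scheme.

```python
import numpy as np

def sinkhorn(C, eps=0.05, iters=200):
    """Entropic OT plan between uniform marginals (standard Sinkhorn scaling)."""
    n, m = C.shape
    K = np.exp(-C / eps)
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(iters):
        u = (1 / n) / (K @ v)
        v = (1 / m) / (K.T @ u)
    return u[:, None] * K * v[None, :]

def ot_proxy_reward(traj, demo, temporal_weight=0.0):
    """Negative transport cost between robot and expert trajectories."""
    C = np.linalg.norm(traj[:, None, :] - demo[None, :, :], axis=-1)
    n, m = C.shape
    i = np.arange(n)[:, None] / n
    j = np.arange(m)[None, :] / m
    C = C + temporal_weight * np.abs(i - j)  # penalize out-of-order matches
    C = C / C.max()                          # rescale for Sinkhorn stability
    P = sinkhorn(C)
    return -(P * C).sum()

rng = np.random.default_rng(7)
demo = np.cumsum(rng.normal(size=(30, 2)), axis=0)
traj = demo[::-1]                            # same states, reversed in time
print(ot_proxy_reward(traj, demo, 0.0))      # order-blind: near-zero cost
print(ot_proxy_reward(traj, demo, 5.0))      # order-aware: reversal is penalized
```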
For our second algorithm, we use *empirical $k$-thresholding* operation for the sparsification step, and under the assumption that data lie on a low dimensional manifold of dimension $d_0\\ll d$, we show that the convergence rate of this classifier depends only on $d_0$ and is again minimax-optimal. Empirical evaluations performed on real-world datasets corroborate our theoretical results.", "pdf": "https://openreview.net/pdf/cf5fc42d6420b6f75735d9629078474bd70b836e.pdf"} {"title": "LoFiT: Localized Fine-tuning on LLM Representations", "url": "https://openreview.net/forum?id=dfiXFbECSZ", "detail_url": "https://openreview.net/forum?id=dfiXFbECSZ", "authors": "Fangcong Yin,Xi Ye,Greg Durrett", "tags": "NIPS 2024,Poster", "abstract": "Recent work in interpretability shows that large language models (LLMs) can be adapted for new tasks in a learning-free way: it is possible to intervene on LLM representations to elicit desired behaviors for alignment. For instance, adding certain bias vectors to the outputs of certain attention heads is reported to boost the truthfulness of models. In this work, we show that localized fine-tuning serves as an effective alternative to such representation intervention methods. We introduce a framework called Localized Fine-Tuning on LLM Representations (LoFiT), which identifies a subset of attention heads that are most important for learning a specific task, then trains offset vectors to add to the model's hidden representations at those selected heads. LoFiT localizes to a sparse set of heads (3%-10%) and learns the offset vectors from limited training data, comparable to the settings used for representation intervention. For truthfulness and reasoning tasks, we find that LoFiT's intervention vectors are more effective for LLM adaptation than vectors from representation intervention methods such as Inference-time Intervention. We also find that the localization step is important: selecting a task-specific set of attention heads can lead to higher performance than intervening on heads selected for a different task. Finally, across 7 tasks we study, LoFiT achieves comparable performance to other parameter-efficient fine-tuning methods such as LoRA, despite modifying 20x-200x fewer parameters than these methods.", "pdf": "https://openreview.net/pdf/82c808befa2777dd14ef26f962250a30ba8ec10f.pdf"} {"title": "Physics-Informed Variational State-Space Gaussian Processes", "url": "https://openreview.net/forum?id=tCf7S75xFa", "detail_url": "https://openreview.net/forum?id=tCf7S75xFa", "authors": "Oliver Hamelijnck,Arno Solin,Theodoros Damoulas", "tags": "NIPS 2024,Poster", "abstract": "Differential equations are important mechanistic models that are integral to many scientific and engineering applications. With the abundance of available data there has been a growing interest in data-driven physics-informed models. Gaussian processes (GPs) are particularly suited to this task as they can model complex, non-linear phenomena whilst incorporating prior knowledge and quantifying uncertainty. Current approaches have found some success but are limited as they either achieve poor computational scalings or focus only on the temporal setting. This work addresses these issues by introducing a variational spatio-temporal state-space GP that handles linear and non-linear physical constraints while achieving efficient linear-in-time computation costs. 
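The expand-and-sparsify representation described above is short enough to state in code: randomly expand to m dimensions, then set the top-k winners to one. The sketch below also shows the property the locally weighted classifier relies on, namely that nearby inputs share many winners while unrelated inputs share few.

```python
import numpy as np

def expand_and_sparsify(x, W, k):
    """EaS with winners-take-all: expand R^d -> R^m, set top-k coords to 1."""
    h = W @ x
    z = np.zeros_like(h)
    z[np.argsort(h)[-k:]] = 1.0
    return z

rng = np.random.default_rng(8)
d, m, k = 16, 512, 16
W = rng.normal(size=(m, d))                     # fixed random expansion
x1 = rng.normal(size=d); x1 /= np.linalg.norm(x1)
x2 = x1 + 0.05 * rng.normal(size=d); x2 /= np.linalg.norm(x2)  # near x1
x3 = rng.normal(size=d); x3 /= np.linalg.norm(x3)              # unrelated

z1, z2, z3 = (expand_and_sparsify(x, W, k) for x in (x1, x2, x3))
print("overlap(x1,x2):", int(z1 @ z2), " overlap(x1,x3):", int(z1 @ z3))
```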
We demonstrate our methods in a range of synthetic and real-world settings and outperform the current state-of-the-art in both predictive and computational performance.", "pdf": "https://openreview.net/pdf/6a7d93ac3343cf1b9a2d5c8f88d19eceea0d58f8.pdf"} {"title": "Learning to Embed Distributions via Maximum Kernel Entropy", "url": "https://openreview.net/forum?id=A0cok1GK9c", "detail_url": "https://openreview.net/forum?id=A0cok1GK9c", "authors": "Oleksii Kachaiev,Stefano Recanatesi", "tags": "NIPS 2024,Poster", "abstract": "Empirical data can often be considered as samples from a set of probability distributions. Kernel methods have emerged as a natural approach for learning to classify these distributions. Although numerous kernels between distributions have been proposed, applying kernel methods to distribution regression tasks remains challenging, primarily because selecting a suitable kernel is not straightforward. Surprisingly, the question of learning a data-dependent distribution kernel has received little attention. In this paper, we propose a novel objective for the unsupervised learning of data-dependent distribution kernel, based on the principle of entropy maximization in the space of probability measure embeddings. We examine the theoretical properties of the latent embedding space induced by our objective, demonstrating that its geometric structure is well-suited for solving downstream discriminative tasks. Finally, we demonstrate the performance of the learned kernel across different modalities.", "pdf": "https://openreview.net/pdf/b809679dcc0eec32c0cfbce8bcd7515295f66753.pdf"} {"title": "When is Multicalibration Post-Processing Necessary?", "url": "https://openreview.net/forum?id=OONojmx3wH", "detail_url": "https://openreview.net/forum?id=OONojmx3wH", "authors": "Dutch Hansen,Siddartha Devic,Preetum Nakkiran,Vatsal Sharan", "tags": "NIPS 2024,Poster", "abstract": "Calibration is a well-studied property of predictors which guarantees meaningful uncertainty estimates. Multicalibration is a related notion --- originating in algorithmic fairness --- which requires predictors to be simultaneously calibrated over a potentially complex and overlapping collection of protected subpopulations (such as groups defined by ethnicity, race, or income). We conduct the first comprehensive study evaluating the usefulness of multicalibration post-processing across a broad set of tabular, image, and language datasets for models spanning from simple decision trees to 90 million parameter fine-tuned LLMs. Our findings can be summarized as follows: (1) models which are calibrated out of the box tend to be relatively multicalibrated without any additional post-processing; (2) multicalibration can help inherently uncalibrated models and also large vision and language models; and (3) traditional calibration measures may sometimes provide multicalibration implicitly. 
More generally, we also distill many independent observations which may be useful for practical and effective applications of multicalibration post-processing in real-world contexts.", "pdf": "https://openreview.net/pdf/0cdf6bb6b426ee8df5430bc9531e78cbd80ebeb7.pdf"} {"title": "Expected Probabilistic Hierarchies", "url": "https://openreview.net/forum?id=fMdrBucZnj", "detail_url": "https://openreview.net/forum?id=fMdrBucZnj", "authors": "Marcel Kollovieh,Bertrand Charpentier,Daniel Z\u00fcgner,Stephan G\u00fcnnemann", "tags": "NIPS 2024,Poster", "abstract": "Hierarchical clustering has usually been addressed by discrete optimization using heuristics or continuous optimization of relaxed scores for hierarchies. In this work, we propose to optimize expected scores under a probabilistic model over hierarchies. (1) We show theoretically that the global optimal values of the expected Dasgupta cost and Tree-Sampling divergence (TSD), two unsupervised metrics for hierarchical clustering, are equal to the optimal values of their discrete counterparts contrary to some relaxed scores. (2) We propose Expected Probabilistic Hierarchies (EPH), a probabilistic model to learn hierarchies in data by optimizing expected scores. EPH uses differentiable hierarchy sampling enabling end-to-end gradient descent based optimization, and an unbiased subgraph sampling approach to scale to large datasets. (3) We evaluate EPH on synthetic and real-world datasets including vector and graph datasets. EPH outperforms all other approaches quantitatively and provides meaningful hierarchies in qualitative evaluations.", "pdf": "https://openreview.net/pdf/b84db14f49a1687fab66baf0417f23e71dc598d3.pdf"} {"title": "Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation", "url": "https://openreview.net/forum?id=SyMhGilvCv", "detail_url": "https://openreview.net/forum?id=SyMhGilvCv", "authors": "Abhinav Jain,Swarat Chaudhuri,Thomas Reps,Chris Jermaine", "tags": "NIPS 2024,Poster", "abstract": "Parameter-Efficient Fine-Tuning (PEFT) has become the standard for customising Foundation Models (FMs) to user-specific downstream tasks. However, typical PEFT methods require storing multiple task-specific adapters, creating scalability issues as these adapters must be housed and run at the FM server. Traditional prompt tuning offers a potential solution by customising them through task-specific input prefixes, but it under-performs compared to other PEFT methods like LoRA. To address this gap, we propose Low-Rank Prompt Adaptation (LoPA), a prompt-tuning-based approach that performs on par with state-of-the-art PEFT methods and full fine-tuning while being more parameter-efficient and not requiring a server-based adapter. LoPA generates soft prompts by balancing between sharing task-specific information across instances and customization for each instance. It uses a low-rank decomposition of the soft-prompt component encoded for each instance to achieve parameter efficiency. 
We provide a comprehensive evaluation on multiple natural language understanding and code generation and understanding tasks across a wide range of foundation models with varying sizes.", "pdf": "https://openreview.net/pdf/f79a9adc44e79b654a39f910767c76091b4ab8ad.pdf"} {"title": "Differentially Private Graph Diffusion with Applications in Personalized PageRanks", "url": "https://openreview.net/forum?id=aon7bwYBiq", "detail_url": "https://openreview.net/forum?id=aon7bwYBiq", "authors": "Rongzhe Wei,Eli Chien,Pan Li", "tags": "NIPS 2024,Poster", "abstract": "Graph diffusion, which iteratively propagates real-valued substances among the graph, is used in numerous graph/network-involved applications. However, releasing diffusion vectors may reveal sensitive linking information in the data such as transaction information in financial network data. Moreover, protecting the privacy of graph data is challenging due to its interconnected nature.\n This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees by using noisy diffusion iterates.\n The algorithm injects Laplace noise per diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), which, to the best of our knowledge, is the first effort that analyzes PABI with Laplace noise and provides relevant applications.\n We also introduce a novel $\\infty$-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI more applicable in practice. \n We evaluate this framework by applying it to Personalized PageRank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions.", "pdf": "https://openreview.net/pdf/8167cef85f4c4bf69b1b7b13a07c309e549f6be2.pdf"} {"title": "Hybrid Reinforcement Learning Breaks Sample Size Barriers In Linear MDPs", "url": "https://openreview.net/forum?id=bPuYxFBHyI", "detail_url": "https://openreview.net/forum?id=bPuYxFBHyI", "authors": "Kevin Tan,Wei Fan,Yuting Wei", "tags": "NIPS 2024,Poster", "abstract": "Hybrid Reinforcement Learning (RL), where an agent learns from both an offline dataset and online explorations in an unknown environment, has garnered significant recent interest. A crucial question posed by Xie et al. (2022) is whether hybrid RL can improve upon the existing lower bounds established in purely offline and purely online RL without relying on the single-policy concentrability assumption. \nWhile Li et al. (2023) provided an affirmative answer to this question in the tabular PAC RL case, the question remains unsettled for both the regret-minimizing RL case and the non-tabular case. In this work, building upon recent advancements in offline RL and reward-agnostic exploration, we develop computationally efficient algorithms for both PAC and regret-minimizing RL with linear function approximation, without requiring concentrability on the entire state-action space. We demonstrate that these algorithms achieve sharper error or regret bounds that are no worse than, and can improve on, the optimal sample complexity in offline RL (the first algorithm, for PAC RL) and online RL (the second algorithm, for regret-minimizing RL) in linear Markov decision processes (MDPs), regardless of the quality of the behavior policy. 
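A schematic version of the private diffusion loop described in the abstract above: propagate mass over the graph, gate low-degree nodes with a degree threshold (they induce the highest sensitivity), and inject Laplace noise at every iteration. The real algorithm calibrates the threshold and noise scale to a formal edge-level DP budget via PABI, which this sketch does not attempt.

```python
import numpy as np

def private_diffusion(adj, x0, iters=10, noise_scale=0.1, tau=5, seed=10):
    """Noisy graph diffusion with degree-based thresholding (schematic)."""
    deg = adj.sum(axis=1)
    P = adj / np.maximum(deg[:, None], 1)   # row-normalized propagation
    keep = deg >= tau                       # degree-based thresholding
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        x = P.T @ (x * keep)                # diffuse, zeroing low-degree nodes
        x = x + rng.laplace(scale=noise_scale, size=x.shape)  # per-iteration noise
    return x

rng = np.random.default_rng(11)
A = (rng.random((50, 50)) < 0.2).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
x0 = np.zeros(50); x0[0] = 1.0              # personalized seed vector
print(private_diffusion(A, x0)[:5])
```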
To our knowledge, this work establishes the tightest theoretical guarantees currently available for hybrid RL in linear MDPs.", "pdf": "https://openreview.net/pdf/b7cd776027da50edf1fb90be41e9a35d302b347b.pdf"} {"title": "Theoretical Foundations of Deep Selective State-Space Models", "url": "https://openreview.net/forum?id=3SzrqwupUx", "detail_url": "https://openreview.net/forum?id=3SzrqwupUx", "authors": "Nicola Muca Cirone,Antonio Orvieto,Benjamin Walker,Cristopher Salvi,Terry Lyons", "tags": "NIPS 2024,Poster", "abstract": "Structured state-space models (SSMs) are gaining popularity as effective foundational architectures for sequential data, demonstrating outstanding performance across a diverse set of domains alongside desirable scalability properties. Recent developments show that if the linear recurrence powering SSMs allows for a selectivity mechanism leveraging multiplicative interactions between inputs and hidden states (e.g. Mamba, GLA, Hawk/Griffin, HGRN2), then the resulting architecture can surpass attention-powered foundation models trained on text in both accuracy and efficiency, at scales of billion parameters. In this paper, we give theoretical grounding to the selectivity mechanism, often linked to in-context learning, using tools from Rough Path Theory. We provide a framework for the theoretical analysis of generalized selective SSMs, fully characterizing their expressive power and identifying the gating mechanism as the crucial architectural choice. Our analysis provides a closed-form description of the expressive powers of modern SSMs, such as Mamba, quantifying theoretically the drastic improvement in performance from the previous generation of models, such as S4. Our theory not only motivates the success of modern selective state-space models, but also provides a solid framework to understand the expressive power of future SSM variants. In particular, it suggests cross-channel interactions could play a vital role in future improvements.", "pdf": "https://openreview.net/pdf/4e86fe9ae93de98a547f68ad2934a6a01ebc450e.pdf"} {"title": "Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm", "url": "https://openreview.net/forum?id=dxwIaCVkWU", "detail_url": "https://openreview.net/forum?id=dxwIaCVkWU", "authors": "Eli Zachary Sennesh,Hao Wu,Tommaso Salvatori", "tags": "NIPS 2024,Poster", "abstract": "Unexpected stimuli induce \"error\" or \"surprise\" signals in the brain. The theory of predictive coding promises to explain these observations in terms of Bayesian inference by suggesting that the cortex implements variational inference in a probabilistic graphical model. However, when applied to machine learning tasks, this family of algorithms has yet to perform on par with other variational approaches in high-dimensional, structured inference problems. To address this, we introduce a novel predictive coding algorithm for structured generative models, that we call divide-and-conquer predictive coding (DCPC); it differs from other formulations of predictive coding, as it respects the correlation structure of the generative model and provably performs maximum-likelihood updates of model parameters, all without sacrificing biological plausibility. Empirically, DCPC achieves better numerical performance than competing algorithms and provides accurate inference in a number of problems not previously addressed with predictive coding. 
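As a concrete instance of the gated, input-dependent recurrences that the selective state-space theory above analyzes, here is a minimal selective linear recurrence in which both the decay and the input gain depend on the current input. The parameterization is illustrative, not that of any specific architecture such as Mamba.

```python
import numpy as np

def selective_ssm(x, Wa, Wb, Wc):
    """Selective SSM: input-dependent gates give multiplicative
    interactions between inputs and hidden states (schematic)."""
    T, d = x.shape
    h = np.zeros(d)
    ys = []
    for t in range(T):
        a = 1.0 / (1.0 + np.exp(-(x[t] @ Wa)))  # input-dependent forget gate
        b = np.tanh(x[t] @ Wb)                  # input-dependent input gate
        h = a * h + b * x[t]                    # selective linear recurrence
        ys.append((x[t] @ Wc) * h)              # simple gated readout
    return np.stack(ys)

rng = np.random.default_rng(12)
d = 8
x = rng.normal(size=(20, d))
Wa, Wb, Wc = (rng.normal(size=(d, d)) / d ** 0.5 for _ in range(3))
print(selective_ssm(x, Wa, Wb, Wc).shape)
```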
We provide an open implementation of DCPC in Pyro on Github.", "pdf": "https://openreview.net/pdf/c5dcbc4afe1bf94b2c8c24246234641c0fb36cdd.pdf"} {"title": "Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces", "url": "https://openreview.net/forum?id=VUgXAWOCQz", "detail_url": "https://openreview.net/forum?id=VUgXAWOCQz", "authors": "Angeliki Kamoutsi,Peter Schmitt-F\u00f6rster,Tobias Sutter,Volkan Cevher,John Lygeros", "tags": "NIPS 2024,Poster", "abstract": "This work studies discrete-time discounted Markov decision processes with continuous state and action spaces and addresses the inverse problem of inferring a cost function from observed optimal behavior. We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem by using occupation measures, linear duality, and complementary slackness conditions. To avoid trivial solutions and ill-posedness, we introduce a natural linear normalization constraint. This results in an infinite-dimensional linear feasibility problem, prompting a thorough analysis of its properties. Next, we use linear function approximators and adopt a randomized approach, namely the scenario approach and related probabilistic feasibility guarantees, to derive $\\varepsilon$-optimal solutions for the inverse problem. We further discuss the sample complexity for a desired approximation accuracy. Finally, we deal with the more realistic case where we only have access to a finite set of expert demonstrations and a generative model and provide bounds on the error made when working with samples.", "pdf": "https://openreview.net/pdf/6b739551f0cafd5d9a306eda4ee36802fb033a87.pdf"} {"title": "Stratified Prediction-Powered Inference for Effective Hybrid Evaluation of Language Models", "url": "https://openreview.net/forum?id=8CBcdDQFDQ", "detail_url": "https://openreview.net/forum?id=8CBcdDQFDQ", "authors": "Adam Fisch,Joshua Maynez,R. Alex Hofer,Bhuwan Dhingra,Amir Globerson,William W. Cohen", "tags": "NIPS 2024,Poster", "abstract": "Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data. PPI achieves this by combining small amounts of human-labeled data with larger amounts of data labeled by a reasonably accurate---but potentially biased---automatic system, in a way that results in tighter confidence intervals for certain parameters of interest (e.g., the mean performance of a language model). In this paper, we propose a method called Stratified Prediction-Powered Inference (StratPPI), in which we show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies. Without making any assumptions on the underlying automatic labeling system or data distribution, we derive an algorithm for computing provably valid confidence intervals for parameters of any dimensionality that is based on stratified sampling. In particular, we show both theoretically and empirically that, with appropriate choices of stratification and sample allocation, our approach can provide substantially tighter confidence intervals than unstratified approaches. 
Specifically, StratPPI is expected to improve in cases where the performance of the autorater varies across different conditional distributions of the target data.", "pdf": "https://openreview.net/pdf/4e7db3a23f6df7ed68c466099c4a79ff0c20e3b3.pdf"} {"title": "OASIS: Conditional Distribution Shaping for Offline Safe Reinforcement Learning", "url": "https://openreview.net/forum?id=3uDEmsf3Jf", "detail_url": "https://openreview.net/forum?id=3uDEmsf3Jf", "authors": "Yihang Yao,Zhepeng Cen,Wenhao Ding,Haohong Lin,Shiqi Liu,Tingnan Zhang,Wenhao Yu,Ding Zhao", "tags": "NIPS 2024,Poster", "abstract": "Offline safe reinforcement learning (RL) aims to train a policy that satisfies constraints using a pre-collected dataset. Most current methods struggle with the mismatch between imperfect demonstrations and the desired safe and rewarding performance. In this paper, we mitigate this issue from a data-centric perspective and introduce OASIS (cOnditionAl diStributIon Shaping), a new paradigm in offline safe RL designed to overcome these critical limitations. OASIS utilizes a conditional diffusion model to synthesize offline datasets, thus shaping the data distribution toward a beneficial target domain. Our approach enforces compliance with safety constraints through effective data utilization and regularization techniques that benefit offline safe RL training. Comprehensive evaluations on public benchmarks and varying datasets showcase OASIS\u2019s superiority in helping offline safe RL agents achieve high-reward behavior while satisfying the safety constraints, outperforming established baselines. Furthermore, OASIS exhibits high data efficiency and robustness, making it suitable for real-world applications, particularly in tasks where safety is imperative and high-quality demonstrations are scarce. More details are available at the website https://sites.google.com/view/saferl-oasis/home.", "pdf": "https://openreview.net/pdf/b117426626e9be7de2fbb787c26872b3a9a39334.pdf"} {"title": "Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval", "url": "https://openreview.net/forum?id=Px1hQM72iX", "detail_url": "https://openreview.net/forum?id=Px1hQM72iX", "authors": "Haolun Wu,Ofer Meshi,Masrour Zoghi,Fernando Diaz,Xue Liu,Craig Boutilier,MARYAM KARIMZADEHGAN", "tags": "NIPS 2024,Poster", "abstract": "Accurate modeling of the diverse and dynamic interests of users remains a significant challenge in the design of personalized recommender systems. Existing user modeling methods, like single-point and multi-point representations, have limitations w.r.t. accuracy, diversity, and adaptability. To overcome these deficiencies, we introduce density-based user representations (DURs), a novel method that leverages Gaussian process regression (GPR) for effective multi-interest recommendation and retrieval. Our approach, GPR4DUR, exploits DURs to capture user interest variability without manual tuning, incorporates uncertainty-awareness, and scales well to large numbers of users. 
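The shape of the stratified estimator in the StratPPI abstract above, on a toy mean-estimation problem: run classic PPI within each stratum, then recombine with known stratum proportions. The stratification and sample-allocation choices, which are the paper's actual contribution, are fixed by hand here.

```python
import numpy as np

def ppi_mean(y_lab, f_lab, f_unlab):
    """Classic PPI mean estimate: auto-labels plus a human-label bias correction."""
    return f_unlab.mean() + (y_lab - f_lab).mean()

def strat_ppi_mean(y_lab, f_lab, f_unlab, s_lab, s_unlab, weights):
    """Stratified PPI: per-stratum PPI estimates recombined by proportions."""
    return sum(w * ppi_mean(y_lab[s_lab == s], f_lab[s_lab == s],
                            f_unlab[s_unlab == s])
               for s, w in weights.items())

rng = np.random.default_rng(13)
# The autorater f is accurate on stratum 0 but biased upward on stratum 1.
s_all = rng.integers(0, 2, size=20000)
y_all = (rng.random(20000) < (0.8 - 0.3 * s_all)).astype(float)
f_all = y_all + 0.15 * s_all                      # biased automatic labels
idx = rng.choice(20000, size=300, replace=False)  # small human-labeled subset
est = strat_ppi_mean(y_all[idx], f_all[idx], f_all, s_all[idx], s_all,
                     weights={0: 0.5, 1: 0.5})
print("stratified PPI estimate:", est, " truth ~", y_all.mean())
```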
Experiments using real-world offline datasets confirm the adaptability and efficiency of GPR4DUR, while online experiments with simulated users demonstrate its ability to address the exploration-exploitation trade-off by effectively utilizing model uncertainty.", "pdf": "https://openreview.net/pdf/6fcc2fb80f8768e72d83fd0a25391e91b6872df1.pdf"} {"title": "WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models", "url": "https://openreview.net/forum?id=n5R6TvBVcX", "detail_url": "https://openreview.net/forum?id=n5R6TvBVcX", "authors": "Liwei Jiang,Kavel Rao,Seungju Han,Allyson Ettinger,Faeze Brahman,Sachin Kumar,Niloofar Mireshghallah,Ximing Lu,Maarten Sap,Yejin Choi,Nouha Dziri", "tags": "NIPS 2024,Poster", "abstract": "We introduce WildTeaming, an automatic red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics, and then composes selections of multiple mined tactics for systematic exploration of novel and even more challenging jailbreaks.\nCompared to prior work that performed red-teaming via recruited human workers, gradient-based optimization, or iterative revision with large language models (LLMs), our work investigates jailbreaks from chatbot users in-the-wild who were not specifically instructed to break the system. WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in more diverse and successful adversarial attacks compared to state-of-the-art jailbreaking methods. \n\nWhile there exist many datasets for jailbreak evaluation, very few open-source datasets exist for jailbreak training, as safety training data has been closed among all frontier models even when their weights are open. Therefore, with WildTeaming we create WildJailbreak, a large-scale open-source synthetic safety dataset with 262K vanilla (direct request) and adversarial (complex jailbreak) prompt-response pairs. In order to mitigate exaggerated safety behaviors, WildJailbreak provides two contrastive types of queries: 1) harmful queries (both vanilla and adversarial) and 2) benign queries that resemble harmful queries in form but contain no harmful intent. As WildJailbreak considerably upgrades the quality and scale of existing safety resources, it uniquely enables us to examine the scaling effects of data and the interplay of data properties and model capabilities during safety training. Through extensive model training and evaluations, we identify the training properties that enable an ideal balance of safety behaviors: appropriate safeguarding without over-refusal, effective handling of both vanilla and adversarial queries, and minimal, if any, decrease in general capabilities. All the components of WildJailbreak contribute to achieving balanced safety behaviors of models.", "pdf": "https://openreview.net/pdf/5c0e189c5b92a109f691a752108334b171f24840.pdf"} {"title": "Structured flexibility in recurrent neural networks via neuromodulation", "url": "https://openreview.net/forum?id=HbIBqn3grD", "detail_url": "https://openreview.net/forum?id=HbIBqn3grD", "authors": "Julia C Costacurta,Shaunak Bhandarkar,David M. Zoltowski,Scott Linderman", "tags": "NIPS 2024,Poster", "abstract": "A core aim in theoretical and systems neuroscience is to develop models which help us better understand biological intelligence. \nSuch models range broadly in both complexity and biological plausibility. 
\nOne widely-adopted example is task-optimized recurrent neural networks (RNNs), which have been used to generate hypotheses about how the brain\u2019s neural dynamics may organize to accomplish tasks. \nHowever, task-optimized RNNs typically have a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. \nIn this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs.\nIn our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. \nIn empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks.\nAdditionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. \nWe end by analyzing the low-rank dynamics of trained NM-RNNs, to show how task computations are distributed.", "pdf": "https://openreview.net/pdf/7621b2faa9f3d4ded6dd91fe4f5fc5a67af2525a.pdf"} {"title": "SEL-BALD: Deep Bayesian Active Learning for Selective Labeling with Instance Rejection", "url": "https://openreview.net/forum?id=tDMTwto6jv", "detail_url": "https://openreview.net/forum?id=tDMTwto6jv", "authors": "Ruijiang Gao,Mingzhang Yin,Maytal Saar-Tsechansky", "tags": "NIPS 2024,Poster", "abstract": "Machine learning systems are widely used in many high-stakes contexts in which experimental designs for assigning treatments are infeasible. When evaluating a decision instance is costly, such as investigating a fraud case, or evaluating a biopsy decision, a sample-efficient strategy is needed. However, while existing active learning methods assume humans will always label the instances selected by the machine learning model, in many critical applications, humans may decline to label instances selected by the machine learning model due to reasons such as regulatory constraints, domain knowledge, or algorithmic aversion, which makes these methods sample-inefficient. \nIn this paper, we propose the Active Learning with Instance Rejection (ALIR) problem, which is a new active learning problem that considers human discretion behavior in high-stakes decision-making problems. We propose new active learning algorithms under deep Bayesian active learning for selective labeling (SEL-BALD) to address the ALIR problem. Our algorithms consider how to acquire information for both the machine learning model and the human discretion model. We conduct experiments on both synthetic and real-world datasets to demonstrate the effectiveness of our proposed algorithms.", "pdf": "https://openreview.net/pdf/6d282e436d10a31e2f510dc93fd45a23aff5e571.pdf"} {"title": "Interpolating Item and User Fairness in Multi-Sided Recommendations", "url": "https://openreview.net/forum?id=tAOg1HdvGy", "detail_url": "https://openreview.net/forum?id=tAOg1HdvGy", "authors": "Qinyi Chen,Jason Cheuk Nam Liang,Negin Golrezaei,Djallel Bouneffouf", "tags": "NIPS 2024,Poster", "abstract": "Today's online platforms heavily lean on algorithmic recommendations for bolstering user engagement and driving revenue. 
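A schematic step of the neuromodulated low-rank RNN described above: a small neuromodulatory subnetwork emits a low-dimensional gain signal s that rescales the rank-r recurrent weights U diag(s) V^T of the output-generating RNN. All shapes, time constants, and nonlinearities are illustrative assumptions.

```python
import numpy as np

def nm_rnn_step(h, z, x, U, V, Wz, Wzx, dt=0.1):
    """One step: subnetwork state z -> gain s -> scaled low-rank recurrence."""
    z = (1 - dt) * z + dt * np.tanh(Wz @ z + Wzx @ x)  # neuromodulatory subnet
    s = 1.0 / (1.0 + np.exp(-z))                       # positive gain signal
    h = (1 - dt) * h + dt * np.tanh(U @ (s * (V.T @ h)) + x)  # gated recurrence
    return h, z

rng = np.random.default_rng(14)
n, r = 32, 3                                  # neurons, rank = neuromodulator dim
U = rng.normal(size=(n, r)) / n ** 0.5
V = rng.normal(size=(n, r)) / n ** 0.5
Wz, Wzx = rng.normal(size=(r, r)), rng.normal(size=(r, n)) / n ** 0.5
h, z = np.zeros(n), np.zeros(r)
for _ in range(100):
    h, z = nm_rnn_step(h, z, 0.1 * rng.normal(size=n), U, V, Wz, Wzx)
print(h[:5])
```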
However, these recommendations can impact multiple stakeholders simultaneously---the platform, items (sellers), and users (customers)---each with their unique objectives, making it difficult to find the right middle ground that accommodates all stakeholders. To address this, we introduce a novel fair recommendation framework, Problem (FAIR), that flexibly balances multi-stakeholder interests via a constrained optimization formulation. We next explore Problem (FAIR) in a dynamic online setting where data uncertainty further adds complexity, and propose a low-regret algorithm FORM that concurrently performs real-time learning and fair recommendations, two tasks that are often at odds. Via both theoretical analysis and a numerical case study on real-world data, we demonstrate the efficacy of our framework and method in maintaining platform revenue while ensuring desired levels of fairness for both items and users.", "pdf": "https://openreview.net/pdf/779d8fafd139b66f38faec4a1301dd5616d6e34f.pdf"} {"title": "Sparse High Rank Adapters", "url": "https://openreview.net/forum?id=6hY60tkiEK", "detail_url": "https://openreview.net/forum?id=6hY60tkiEK", "authors": "Kartikeya Bhardwaj,Nilesh Prasad Pandey,Sweta Priyadarshi,Viswanath Ganapathy,Shreya Kadambi,Rafael Esteves,Shubhankar Borse,Paul Whatmough,Risheek Garrepalli,Mart Van Baalen,Harris Teague,Markus Nagel", "tags": "NIPS 2024,Poster", "abstract": "Low Rank Adaptation (LoRA) has gained massive attention in the recent generative AI research. One of the main advantages of LoRA is its ability to be fused with pretrained models, adding no overhead during inference. However, from a mobile deployment standpoint, we can either avoid inference overhead in the fused mode but lose the ability to switch adapters rapidly, or suffer significant (up to 30% higher) inference latency while enabling rapid switching in the unfused mode. LoRA also exhibits concept-loss when multiple adapters are used concurrently. In this paper, we propose Sparse High Rank Adapters (SHiRA), a new paradigm which incurs no inference overhead, enables rapid switching, and significantly reduces concept-loss. Specifically, SHiRA can be trained by directly tuning only 1-2% of the base model weights while leaving others unchanged. This results in a highly sparse adapter which can be switched directly in the fused mode. We further provide theoretical and empirical insights on how high sparsity in SHiRA can aid multi-adapter fusion by reducing concept loss. Our extensive experiments on LVMs and LLMs demonstrate that finetuning only a small fraction of the parameters in the base model significantly outperforms LoRA while enabling both rapid switching and multi-adapter fusion. Finally, we provide a latency- and memory-efficient SHiRA implementation based on Parameter-Efficient Finetuning (PEFT) Library which trains at nearly the same speed as LoRA while consuming up to 16% lower peak GPU memory, thus making SHiRA easy to adopt for practical use cases. 
To demonstrate rapid switching benefits during inference, we show that loading SHiRA on a base model can be 5x-16x faster than LoRA fusion on a CPU.", "pdf": "https://openreview.net/pdf/8fbda02958d2d96c786fbd9463f21e8f2dabc6c3.pdf"} {"title": "Compact Proofs of Model Performance via Mechanistic Interpretability", "url": "https://openreview.net/forum?id=2zWbzx50mH", "detail_url": "https://openreview.net/forum?id=2zWbzx50mH", "authors": "Jason Gross,Rajashree Agrawal,Thomas Kwa,Euan Ong,Chun Hei Yip,Alex Gibson,Soufiane Noubir,Lawrence Chan", "tags": "NIPS 2024,Poster", "abstract": "We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance.\nWe prototype this approach by formally proving accuracy lower bounds for a small transformer trained on Max-of-$K$, validating proof transferability across 151 random seeds and four values of $K$.\nWe create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models.\nUsing quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding.\nMoreover, we find that more faithful mechanistic understanding leads to tighter performance bounds.\nWe confirm these connections by qualitatively examining a subset of our proofs.\nFinally, we identify compounding structureless errors as a key challenge for using mechanistic interpretability to generate compact proofs on model performance.", "pdf": "https://openreview.net/pdf/2b080dafbe4fe995df64a4516389ff273902a32c.pdf"} {"title": "DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models", "url": "https://openreview.net/forum?id=YxaY6tHgg0", "detail_url": "https://openreview.net/forum?id=YxaY6tHgg0", "authors": "Shangqian Gao,Chi-Heng Lin,Ting Hua,Zheng Tang,Yilin Shen,Hongxia Jin,Yen-Chang Hsu", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have achieved remarkable success in various natural language processing tasks, including language modeling, understanding, and generation. However, the increased memory and computational costs associated with these models pose significant challenges for deployment on resource-limited devices. Structural pruning has emerged as a promising solution to reduce the costs of LLMs without requiring post-processing steps. Prior structural pruning methods either follow the dependence of structures at the cost of limiting flexibility, or introduce non-trivial additional parameters by incorporating different projection matrices. In this work, we propose a novel approach that relaxes the constraint imposed by regular structural pruning methods and eliminates the structural dependence along the embedding dimension. Our dimension-independent structural pruning method offers several benefits. Firstly, our method enables different blocks to utilize different subsets of the feature maps. Secondly, by removing structural dependence, we facilitate each block to possess varying widths along its input and output dimensions, thereby significantly enhancing the flexibility of structural pruning. We evaluate our method on various LLMs, including OPT, LLaMA, LLaMA-2, Phi-1.5, and Phi-2. 
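The SHiRA entry above turns on only 1-2% of the base weights, so fusing or unfusing an adapter is just a sparse in-place update. A toy sketch of that switching pattern, assuming a flat index tensor and matching delta values (illustrative only, not the paper's PEFT-based implementation):

```python
import torch

def fuse_sparse_adapter(weight: torch.Tensor, idx: torch.Tensor, delta: torch.Tensor):
    # Add the sparse high-rank delta in place; only ~1-2% of entries change.
    weight.view(-1)[idx] += delta

def unfuse_sparse_adapter(weight: torch.Tensor, idx: torch.Tensor, delta: torch.Tensor):
    # Rapid switching: subtracting the same delta restores the base weights.
    weight.view(-1)[idx] -= delta
```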
Experimental results demonstrate that our approach outperforms other state-of-the-art methods, showing for the first time that structural pruning can achieve an accuracy similar to semi-structural pruning.", "pdf": "https://openreview.net/pdf/53109daab0c04edaf62237ef9cddf9b4256644bd.pdf"} {"title": "Learning Transferable Features for Implicit Neural Representations", "url": "https://openreview.net/forum?id=ABYdKpDb8p", "detail_url": "https://openreview.net/forum?id=ABYdKpDb8p", "authors": "Kushal Vyas,Ahmed Imtiaz Humayun,Aniket Dashpute,Richard Baraniuk,Ashok Veeraraghavan,Guha Balakrishnan", "tags": "NIPS 2024,Poster", "abstract": "Implicit neural representations (INRs) have demonstrated success in a variety of applications, including inverse problems and neural rendering. An INR is typically trained to capture one signal of interest, resulting in learned neural features that are highly attuned to that signal. Although such features are often assumed to be less generalizable, we explore their transferability for fitting similar signals. We introduce a new INR training framework, STRAINER, that learns transferable features for fitting INRs to new signals from a given distribution, faster and with better reconstruction quality. Owing to the sequential layer-wise affine operations in an INR, we propose to learn transferable representations by sharing initial encoder layers across multiple INRs with independent decoder layers. At test time, the learned encoder representations are transferred as initialization for an otherwise randomly initialized INR. We find STRAINER to yield extremely powerful initialization for fitting images from the same domain and allow for a \u2248 +10dB gain in signal quality early on compared to an untrained INR itself. STRAINER also provides a simple way to encode data-driven priors in INRs. We evaluate STRAINER on multiple in-domain and out-of-domain signal fitting tasks and inverse problems and further provide detailed analysis and discussion on the transferability of STRAINER\u2019s features.", "pdf": "https://openreview.net/pdf/7acd584d735cad977d879073a05945b253109e05.pdf"} {"title": "Randomized Strategic Facility Location with Predictions", "url": "https://openreview.net/forum?id=YvOeN0kUzT", "detail_url": "https://openreview.net/forum?id=YvOeN0kUzT", "authors": "Eric Balkanski,Vasilis Gkatzelis,Golnoosh Shahkarami", "tags": "NIPS 2024,Poster", "abstract": "In the strategic facility location problem, a set of agents report their locations in a metric space and the goal is to use these reports to open a new facility, minimizing an aggregate distance measure from the agents to the facility. However, agents are strategic and may misreport their locations to influence the facility\u2019s placement in their favor. The aim is to design truthful mechanisms, ensuring agents cannot gain by misreporting. This problem was recently revisited through the learning-augmented framework, aiming to move beyond worst-case analysis and design truthful mechanisms that are augmented with (machine-learned) predictions. The focus of this work was on mechanisms that are deterministic and augmented with a prediction regarding the optimal facility location. In this paper, we provide a deeper understanding of this problem by exploring the power of randomization as well as the impact of different types of predictions on the performance of truthful learning-augmented mechanisms.
We study both the single-dimensional and the Euclidean case and provide upper and lower bounds regarding the achievable approximation of the optimal egalitarian social cost.", "pdf": "https://openreview.net/pdf/10d8ade064748613e7375bb2d18cfa8a8826262c.pdf"} {"title": "How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks", "url": "https://openreview.net/forum?id=XYw051ZmUn", "detail_url": "https://openreview.net/forum?id=XYw051ZmUn", "authors": "Mo Zhou,Rong Ge", "tags": "NIPS 2024,Poster", "abstract": "The ability of learning useful features is one of the major advantages of neural networks. Although recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning, many works also demonstrate the potential for neural networks to go beyond the NTK regime and perform feature learning. Recently, a line of work highlighted the feature learning capabilities of the early stages of gradient-based training. In this paper we consider another mechanism for feature learning via gradient descent through a local convergence analysis. We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions. We further strengthen this local convergence analysis by incorporating early-stage feature learning analysis. Our results demonstrate that feature learning not only happens at the initial gradient steps, but can also occur towards the end of training.", "pdf": "https://openreview.net/pdf/0f1e81a2939ab48ebc34170b7937ba6f4308236b.pdf"} {"title": "Measuring Dejavu Memorization Efficiently", "url": "https://openreview.net/forum?id=v8RRFNbJ43", "detail_url": "https://openreview.net/forum?id=v8RRFNbJ43", "authors": "Narine Kokhlikyan,Bargav Jayaraman,Florian Bordes,Chuan Guo,Kamalika Chaudhuri", "tags": "NIPS 2024,Poster", "abstract": "Recent research has shown that representation learning models may accidentally memorize their training data. For example, the d\u00e9j\u00e0 vu method shows that for certain representation learning models and training images, it is sometimes possible to correctly predict the foreground label given only the representation of the background \u2013 better than through dataset-level correlations. However, their measurement method requires training two models \u2013 one to estimate dataset-level correlations and the other to estimate memorization. This multiple model setup becomes infeasible for large open-source models. In this work, we propose alternative simple methods to estimate dataset-level correlations, and show that these can be used to approximate an off-the-shelf model\u2019s memorization ability without any retraining. This enables, for the first time, the measurement of memorization in pre-trained open-source image representation and vision-language models. Our results show that different ways of measuring memorization yield very similar aggregate results. We also find that open-source models typically have lower aggregate memorization than similar models trained on a subset of the data.
The code is available both for vision (https://github.com/facebookresearch/DejaVuOSS) and vision language (https://github.com/facebookresearch/VLMDejaVu) models.", "pdf": "https://openreview.net/pdf/6f697238a026167fa803f7aeaffa5b79df2b1057.pdf"} {"title": "A Topology-aware Graph Coarsening Framework for Continual Graph Learning", "url": "https://openreview.net/forum?id=VpINEEVLX0", "detail_url": "https://openreview.net/forum?id=VpINEEVLX0", "authors": "Xiaoxue Han,Zhuo Feng,Yue Ning", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) experience \"catastrophic forgetting\" in continual learning setups, where they tend to lose previously acquired knowledge and perform poorly on old tasks. Rehearsal-based methods, which consolidate old knowledge with a replay memory buffer, are a de facto solution due to their straightforward workflow. However, these methods often fail to adequately capture topological information, leading to incorrect input-label mappings in replay samples. To address this, we propose TACO, a topology-aware graph coarsening and continual learning framework that stores information from previous tasks as a reduced graph. Throughout each learning period, this reduced graph expands by integrating with a new graph and aligning shared nodes, followed by a \"zoom-out\" reduction process to maintain a stable size. We have developed a graph coarsening algorithm based on node representation proximities to efficiently reduce a graph while preserving essential topological information. We empirically demonstrate that the learning process on the reduced graph can closely approximate that on the original graph. We compare TACO with a wide range of state-of-the-art baselines, proving its superiority and the necessity of preserving high-quality topological information for effective replaying.", "pdf": "https://openreview.net/pdf/406408d7839e9d5c643715d8429ea93609e08c84.pdf"} {"title": "Score-based 3D molecule generation with neural fields", "url": "https://openreview.net/forum?id=9lGJrkqJUw", "detail_url": "https://openreview.net/forum?id=9lGJrkqJUw", "authors": "Matthieu Kirchmeyer,Pedro O. Pinheiro,Saeed Saremi", "tags": "NIPS 2024,Poster", "abstract": "We introduce a new functional representation for 3D molecules based on their continuous atomic density fields. Using this representation, we propose a new model based on neural empirical Bayes for unconditional 3D molecule generation in the continuous space using neural fields. Our model, FuncMol, encodes molecular fields into latent codes using a conditional neural field, samples noisy codes from a Gaussian-smoothed distribution with Langevin MCMC, denoises these samples in a single step and finally decodes them into molecular fields. FuncMol performs all-atom generation of 3D molecules without assumptions on the molecular structure and scales well with the size of molecules, unlike most existing approaches. Our method achieves competitive results on drug-like molecules and easily scales to macro-cyclic peptides, with at least one order of magnitude faster sampling. 
The code is available at https://github.com/prescient-design/funcmol.", "pdf": "https://openreview.net/pdf/a064e6a79267ef29c1fc0fc85ce268979434c99a.pdf"} {"title": "Hybrid Generative AI for De Novo Design of Co-Crystals with Enhanced Tabletability", "url": "https://openreview.net/forum?id=G4vFNmraxj", "detail_url": "https://openreview.net/forum?id=G4vFNmraxj", "authors": "Nina Gubina,Andrei Dmitrenko,Gleb Vitalevich Solovev,Lyubov Yamshchikova,Oleg Petrov,Ivan Lebedev,Nikita Serov,Grigorii Kirgizov,Nikolay Nikitin,Vladimir Vinogradov", "tags": "NIPS 2024,Poster", "abstract": "Co-crystallization is an accessible way to control physicochemical characteristics of organic crystals, which finds many biomedical applications. In this work, we present Generative Method for Co-crystal Design (GEMCODE), a novel pipeline for automated co-crystal screening based on the hybridization of deep generative models and evolutionary optimization for broader exploration of the target chemical space. GEMCODE enables fast *de novo* co-crystal design with target tabletability profiles, which is crucial for the development of pharmaceuticals. With a series of experimental studies highlighting validation and discovery cases, we show that GEMCODE is effective even under realistic computational constraints. Furthermore, we explore the potential of language models in generating co-crystals. Finally, we present numerous previously unknown co-crystals predicted by GEMCODE and discuss its potential in accelerating drug development.", "pdf": "https://openreview.net/pdf/a22aecaac8f6647154414ad4d6d6530c86631f90.pdf"} {"title": "Efficient and Private Marginal Reconstruction with Local Non-Negativity", "url": "https://openreview.net/forum?id=lKnl4CLhhS", "detail_url": "https://openreview.net/forum?id=lKnl4CLhhS", "authors": "Brett Mullins,Miguel Fuentes,Yingtai Xiao,Daniel Kifer,Cameron N Musco,Daniel Sheldon", "tags": "NIPS 2024,Poster", "abstract": "Differential privacy is the dominant standard for formal and quantifiable privacy and has been used in major deployments that impact millions of people. Many differentially private algorithms for query release and synthetic data contain steps that reconstruct answers to queries from answers to other queries that have been measured privately. Reconstruction is an important subproblem for such mechanisms to economize the privacy budget, minimize error on reconstructed answers, and allow for scalability to high-dimensional datasets. In this paper, we introduce a principled and efficient postprocessing method ReM (Residuals-to-Marginals) for reconstructing answers to marginal queries. Our method builds on recent work on efficient mechanisms for marginal query release, based on making measurements using a residual query basis that admits efficient pseudoinversion, which is an important primitive used in reconstruction. An extension GReM-LNN (Gaussian Residuals-to-Marginals with Local Non-negativity) reconstructs marginals under Gaussian noise satisfying consistency and non-negativity, which often reduces error on reconstructed answers. 
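The ReM entry above reconstructs marginals by pseudoinverting a residual query basis; GReM-LNN adds non-negativity under Gaussian noise. A toy numpy sketch of the generic reconstruct-then-project pattern, with clipping standing in for the paper's local non-negativity step (an assumption, not the authors' estimator):

```python
import numpy as np

def reconstruct_marginal(Q: np.ndarray, noisy_answers: np.ndarray) -> np.ndarray:
    # Least-squares reconstruction of a marginal x from noisy answers y ~= Q x.
    x_hat = np.linalg.pinv(Q) @ noisy_answers
    # Crude stand-in for local non-negativity: clip negative cells to zero.
    return np.clip(x_hat, 0.0, None)
```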
We demonstrate the utility of ReM and GReM-LNN by applying them to improve existing private query answering mechanisms.", "pdf": "https://openreview.net/pdf/74ef2a254d1aef2663edcdb2e0ac71b90a95897e.pdf"} {"title": "Achieving Constant Regret in Linear Markov Decision Processes", "url": "https://openreview.net/forum?id=02r24A8doi", "detail_url": "https://openreview.net/forum?id=02r24A8doi", "authors": "Weitong Zhang,Zhiyuan Fan,Jiafan He,Quanquan Gu", "tags": "NIPS 2024,Poster", "abstract": "We study the constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinite episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs) where both the transition kernel and the reward function can be approximated by some linear function up to misspecification level $\\zeta$. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant w.r.t. the number of episodes. Specifically, we demonstrate that for a linear MDP characterized by a minimal suboptimality gap $\\Delta$, Cert-LSVI-UCB has a cumulative regret of $\\tilde{\\mathcal{O}}(d^3H^5/\\Delta)$ with high probability, provided that the misspecification level $\\zeta$ is below $\\tilde{\\mathcal{O}}(\\Delta / (\\sqrt{d}H^2))$. Here $d$ is the dimension of the feature space and $H$ is the horizon. Remarkably, this regret bound is independent of the number of episodes $K$. To the best of our knowledge, Cert-LSVI-UCB is the first algorithm to achieve a constant, instance-dependent, high-probability regret bound in RL with linear function approximation without relying on prior distribution assumptions.", "pdf": "https://openreview.net/pdf/c4416b40b8b47e9d8fa8155df573d6a2c68b8f6e.pdf"} {"title": "Gaussian Process Bandits for Top-k Recommendations", "url": "https://openreview.net/forum?id=50nEnmVLRb", "detail_url": "https://openreview.net/forum?id=50nEnmVLRb", "authors": "Mohit Yadav,Cameron N Musco,Daniel Sheldon", "tags": "NIPS 2024,Poster", "abstract": "Algorithms that utilize bandit feedback to optimize top-k recommendations are vital for online marketplaces, search engines, and content platforms. However, the combinatorial nature of this problem poses a significant challenge, as the possible number of ordered top-k recommendations from $n$ items grows exponentially with $k$. As a result, previous work often relies on restrictive assumptions about the reward or bandit feedback models, such as assuming that the feedback discloses rewards for each recommended item rather than a single scalar feedback for the entire set of top-k recommendations. We introduce a novel contextual bandit algorithm for top-k recommendations, leveraging a Gaussian process with a Kendall kernel to model the reward function.\nOur algorithm requires only scalar feedback from \nthe top-k recommendations and does not impose restrictive assumptions on the reward structure. \nTheoretical analysis confirms that the proposed algorithm achieves sub-linear regret in relation to the number of rounds and arms. 
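The Gaussian-process bandit entry above models rewards over rankings with a Kendall kernel. The classic Kendall kernel between two full rankings counts concordant minus discordant item pairs, normalized by the number of pairs; how the paper adapts it to ordered top-k lists is not reproduced here:

```python
from itertools import combinations

def kendall_kernel(rank_a, rank_b):
    # rank_x[i] is the position of item i in ranking x; the kernel is the
    # normalized difference between concordant and discordant item pairs.
    n = len(rank_a)
    score = 0
    for i, j in combinations(range(n), 2):
        concordant = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) > 0
        score += 1 if concordant else -1
    return score / (n * (n - 1) / 2)
```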
Additionally, empirical results using a bandit simulator demonstrate that the proposed algorithm outperforms other baselines across various scenarios.", "pdf": "https://openreview.net/pdf/94eed9db04418322ec15845e44a60d94cafb69a4.pdf"} {"title": "Mixture of Link Predictors on Graphs", "url": "https://openreview.net/forum?id=X3oeoyJlMw", "detail_url": "https://openreview.net/forum?id=X3oeoyJlMw", "authors": "Li Ma,Haoyu Han,Juanhui Li,Harry Shomer,Hui Liu,Xiaofeng Gao,Jiliang Tang", "tags": "NIPS 2024,Poster", "abstract": "Link prediction, which aims to forecast unseen connections in graphs, is a fundamental task in graph machine learning. Heuristic methods, leveraging a range of different pairwise measures such as common neighbors and shortest paths, often rival the performance of vanilla Graph Neural Networks (GNNs). Therefore, recent advancements in GNNs for link prediction (GNN4LP) have primarily focused on integrating one or a few types of pairwise information. \nIn this work, we reveal that different node pairs within the same dataset necessitate varied pairwise information for accurate prediction, and that models applying the same pairwise information uniformly may achieve suboptimal performance.\nAs a result, we propose a simple mixture-of-experts model, Link-MoE, for link prediction. Link-MoE utilizes various GNNs as experts and strategically selects the appropriate expert for each node pair based on various types of pairwise information. Experimental results across diverse real-world datasets demonstrate substantial performance improvement from Link-MoE. Notably, Link-MoE achieves a relative improvement of 18.71% on the MRR metric for the Pubmed dataset and 9.59% on the Hits@100 metric for the ogbl-ppa dataset, compared to the best baselines. The code is available at https://github.com/ml-ml/Link-MoE/.", "pdf": "https://openreview.net/pdf/d56be3eb2aef98f8ecc17ea9a679ec017299efbc.pdf"} {"title": "SmallToLarge (S2L): Scalable Data Selection for Fine-tuning Large Language Models by Summarizing Training Trajectories of Small Models", "url": "https://openreview.net/forum?id=K9IGlMQpif", "detail_url": "https://openreview.net/forum?id=K9IGlMQpif", "authors": "Yu Yang,Siddhartha Mishra,Jeffrey N Chiang,Baharan Mirzasoleiman", "tags": "NIPS 2024,Poster", "abstract": "Despite the effectiveness of data selection for pretraining and instruction fine-tuning\nlarge language models (LLMs), improving data efficiency in supervised fine-tuning\n(SFT) for specialized domains poses significant challenges due to the complexity\nof fine-tuning data. To bridge this gap, we introduce an effective and scalable\ndata selection method for SFT, SmallToLarge (S2L), which trains a small\nmodel, clusters loss trajectories of the examples, and samples from these clusters to\nguide data selection for larger models. We prove that during fine-tuning, samples\nwithin the same loss trajectory cluster exhibit similar gradients. Then, we show\nthat S2L subsets have a bounded gradient error w.r.t. the full data, hence guarantee\nconvergence to the neighborhood of the optimal solution. We demonstrate through\nextensive experiments that S2L significantly improves data efficiency in SFT for\nmathematical problem-solving, reducing the training data requirement to just $11$%\nof the original MathInstruct dataset to match full dataset performance while\noutperforming state-of-the-art data selection algorithms by an average of $4.7$%\nacross $6$ in- and out-domain evaluation datasets.
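The S2L entry above boils down to clustering small-model loss trajectories and sampling across clusters. A sketch of that selection loop, assuming trajectories are already collected; KMeans and even per-cluster sampling are illustrative choices, not necessarily the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans

def s2l_like_select(trajectories, n_clusters, budget, seed=0):
    # trajectories: (n_examples, n_checkpoints) losses from a small model.
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(trajectories)
    rng = np.random.default_rng(seed)
    picked = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        take = min(budget // n_clusters, len(members))
        picked.extend(rng.choice(members, size=take, replace=False))
    return np.asarray(picked)  # indices of selected fine-tuning examples
```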
Remarkably, selecting only 50K\nexamples for SFT, S2L achieves a $32.7$% accuracy on the challenging MATH\nbenchmark, improving Phi-2 by $16.6$%. In clinical text summarization on the\nMIMIC-III dataset, S2L again outperforms training on the full dataset using\nonly $50$% of the data. Notably, S2L can perform scalable data selection using a\nreference model $100\times$ smaller than the target model, proportionally reducing the\ncomputational cost.", "pdf": "https://openreview.net/pdf/0a2da6d64cd7f9e62b8aa4f9c56311ab881fcbfa.pdf"} {"title": "DeltaDEQ: Exploiting Heterogeneous Convergence for Accelerating Deep Equilibrium Iterations", "url": "https://openreview.net/forum?id=7qBkADV4zD", "detail_url": "https://openreview.net/forum?id=7qBkADV4zD", "authors": "Zuowen Wang,Longbiao Cheng,Pehuen Moure,Niklas Hahn,Shih-Chii Liu", "tags": "NIPS 2024,Poster", "abstract": "Implicit neural networks including deep equilibrium models have achieved superior task performance with better parameter efficiency in various applications. However, this often comes at the expense of higher computation costs during inference. In this work, we identify a phenomenon named $\textbf{heterogeneous convergence}$ that exists in deep equilibrium models and other iterative methods. We observe much faster convergence of state activations in certain dimensions, indicating that the dimensionality of the underlying dynamics of the forward pass is much lower than the defined dimension of the states. We thereby propose to exploit heterogeneous convergence by storing past linear operation results (e.g., fully connected and convolutional layers) and only propagating the state activation when its change exceeds a threshold. Thus, for the already converged dimensions, the computations can be skipped. We verified our findings and reached 84\% FLOPs reduction on the implicit neural representation task, 73\% on the Sintel and 76\% on the KITTI datasets for the optical flow estimation task, while keeping comparable task accuracy with the models that perform the full update.", "pdf": "https://openreview.net/pdf/3557279fbfb0846e372d74d08c4eae97d63126db.pdf"} {"title": "Retrieval & Fine-Tuning for In-Context Tabular Models", "url": "https://openreview.net/forum?id=337dHOexCM", "detail_url": "https://openreview.net/forum?id=337dHOexCM", "authors": "Valentin Thomas,Junwei Ma,Rasa Hosseinzadeh,Keyvan Golestan,Guangwei Yu,Maksims Volkovs,Anthony L. Caterini", "tags": "NIPS 2024,Poster", "abstract": "Tabular data is a pervasive modality spanning a wide range of domains, and this inherent diversity poses a considerable challenge for deep learning. Recent advancements using transformer-based in-context learning have shown promise on smaller and less complex tabular datasets, but have struggled to scale to larger and more complex ones. To address this limitation, we propose a combination of retrieval and fine-tuning: we can adapt the transformer to a local subset of the data by collecting nearest neighbours, and then perform task-specific fine-tuning with this retrieved set of neighbours in context. Using TabPFN as the base model -- currently the best tabular in-context learner -- and applying our retrieval and fine-tuning scheme on top results in what we call a locally-calibrated PFN, or LoCalPFN. We conduct extensive evaluation on 95 datasets curated by TabZilla from OpenML, upon which we establish a new state-of-the-art with LoCalPFN -- even with respect to tuned tree-based models.
Notably, we show a significant boost in performance compared to the base in-context model, demonstrating the efficacy of our approach and advancing the frontier of deep learning in tabular data.", "pdf": "https://openreview.net/pdf/3da8933f3aa37b7d79634b4c7b1c46ece4ec364a.pdf"} {"title": "Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index", "url": "https://openreview.net/forum?id=Ouc1F0Sfb7", "detail_url": "https://openreview.net/forum?id=Ouc1F0Sfb7", "authors": "Qian Xie,Raul Astudillo,Peter I. Frazier,Ziv Scully,Alexander Terenin", "tags": "NIPS 2024,Poster", "abstract": "Bayesian optimization is a technique for efficiently optimizing unknown functions in a black-box manner. To handle practical settings where gathering data requires use of finite resources, it is desirable to explicitly incorporate function evaluation costs into Bayesian optimization policies. To understand how to do so, we develop a previously-unexplored connection between cost-aware Bayesian optimization and the Pandora's Box problem, a decision problem from economics. The Pandora's Box problem admits a Bayesian-optimal solution based on an expression called the Gittins index, which can be reinterpreted as an acquisition function. We study the use of this acquisition function for cost-aware Bayesian optimization, and demonstrate empirically that it performs well, particularly in medium-high dimensions. We further show that this performance carries over to classical Bayesian optimization without explicit evaluation costs. Our work constitutes a first step towards integrating techniques from Gittins index theory into Bayesian optimization.", "pdf": "https://openreview.net/pdf/6fbb510ffd1f4ed16480310bffe739f433d2595b.pdf"} {"title": "Online Budgeted Matching with General Bids", "url": "https://openreview.net/forum?id=Vtxy8wFpTj", "detail_url": "https://openreview.net/forum?id=Vtxy8wFpTj", "authors": "Jianyi Yang,Pengfei Li,Adam Wierman,Shaolei Ren", "tags": "NIPS 2024,Poster", "abstract": "Online Budgeted Matching (OBM) is a classic problem with important applications in online advertising, online service matching, revenue management, and beyond. Traditional online algorithms typically assume a small bid setting, where the maximum bid-to-budget ratio ($\\kappa$) is infinitesimally small. While recent algorithms have tried to address scenarios with non-small or general bids, they often rely on the Fractional Last Matching (FLM) assumption, which allows for accepting partial bids when the remaining budget is insufficient. This assumption, however, does not hold for many applications with indivisible bids. In this paper, we remove the FLM assumption and tackle the open problem of OBM with general bids. We first establish an upper bound of $1-\\kappa$ on the competitive ratio for any deterministic online algorithm. We then propose a novel meta algorithm, called MetaAd, which reduces to different algorithms with first known provable competitive ratios parameterized by the maximum bid-to-budget ratio $\\kappa\\in [0,1]$. As a by-product, we extend MetaAd to the FLM setting and get provable competitive algorithms. 
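For the online budgeted matching entry above: with indivisible bids, the basic feasibility rule is that a bid is accepted only if the advertiser's full remaining budget covers it. A toy greedy baseline making that rule concrete (a reference point only; the paper's MetaAd algorithm is not reproduced here):

```python
def greedy_obm(arrivals, budgets):
    # arrivals: iterable of (advertiser, bid) pairs in online order.
    # Without the FLM assumption a bid is all-or-nothing: accept it only
    # if the advertiser's remaining budget covers the entire bid.
    remaining = dict(budgets)
    matched_value = 0.0
    for advertiser, bid in arrivals:
        if bid <= remaining.get(advertiser, 0.0):
            remaining[advertiser] -= bid
            matched_value += bid
    return matched_value
```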
Finally, we apply our competitive analysis to the design of learning-augmented algorithms.", "pdf": "https://openreview.net/pdf/ebe77c6c808116602dfd7ba418a75985e1fd02c9.pdf"} {"title": "Risk-Averse Fine-tuning of Large Language Models", "url": "https://openreview.net/forum?id=1BZKqZphsW", "detail_url": "https://openreview.net/forum?id=1BZKqZphsW", "authors": "Sapana Chaudhary,Ujwal Dinesha,Dileep Kalathil,Srinivas Shakkottai", "tags": "NIPS 2024,Poster", "abstract": "We consider the challenge of mitigating the generation of negative or toxic content by Large Language Models (LLMs) in response to certain prompts. We propose integrating risk-averse principles into LLM fine-tuning to minimize the occurrence of harmful outputs, particularly rare but significant events. By optimizing the risk measure of Conditional Value at Risk (CVaR), our methodology trains LLMs to exhibit superior performance in avoiding toxic outputs while maintaining effectiveness in generative tasks. Empirical evaluations on sentiment modification and toxicity mitigation tasks demonstrate the efficacy of risk-averse reinforcement learning with human feedback (RLHF) in promoting a safer and more constructive online discourse environment.", "pdf": "https://openreview.net/pdf/8af24d9f39f3b0c337121b49bcb9650bd874a225.pdf"} {"title": "RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs", "url": "https://openreview.net/forum?id=S1fc92uemC", "detail_url": "https://openreview.net/forum?id=S1fc92uemC", "authors": "Yue Yu,Wei Ping,Zihan Liu,Boxin Wang,Jiaxuan You,Chao Zhang,Mohammad Shoeybi,Bryan Catanzaro", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-augmented generation (RAG). In this work, we propose a novel method called RankRAG, which instruction-tunes a single LLM for both context ranking and answer generation in RAG. In particular, the instruction-tuned LLMs work surprisingly well by adding a small fraction of ranking data into the training blend, and outperform existing expert ranking models, including the same LLM exclusively fine-tuned on a large amount of ranking data. For generation, we compare our model with many strong baselines, including ChatQA-1.5, an open-sourced model with the state-of-the-art performance on RAG benchmarks. Specifically, our Llama3-RankRAG-8B and Llama3-RankRAG-70B significantly outperform Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B, respectively, on nine general knowledge-intensive benchmarks for RAG. In addition, it performs comparably to GPT-4 on five RAG benchmarks in the biomedical domain without instruction fine-tuning on biomedical data, demonstrating its superb capability for generalization to new domains.", "pdf": "https://openreview.net/pdf/e799910ea1c9e2dfb86d87d93e60724fc05e0aab.pdf"} {"title": "ARC: A Generalist Graph Anomaly Detector with In-Context Learning", "url": "https://openreview.net/forum?id=IdIVfzjPK4", "detail_url": "https://openreview.net/forum?id=IdIVfzjPK4", "authors": "Yixin Liu,Shiyuan Li,Yu Zheng,Qingfeng Chen,Chengqi Zhang,Shirui Pan", "tags": "NIPS 2024,Poster", "abstract": "Graph anomaly detection (GAD), which aims to identify abnormal nodes that differ from the majority within a graph, has garnered significant attention. However, current GAD methods necessitate training specific to each dataset, resulting in high training costs, substantial data requirements, and limited generalizability when applied to new datasets and domains.
To address these limitations, this paper proposes ARC, a generalist GAD approach that enables a ``one-for-all'' GAD model to detect anomalies across various graph datasets on-the-fly. Equipped with in-context learning, ARC can directly extract dataset-specific patterns from the target dataset using few-shot normal samples at the inference stage, without the need for retraining or fine-tuning on the target dataset. ARC comprises three components that are well-crafted for capturing universal graph anomaly patterns: 1) smoothness-based feature **A**lignment module that unifies the features of different datasets into a common and anomaly-sensitive space; 2) ego-neighbor **R**esidual graph encoder that learns abnormality-related node embeddings; and 3) cross-attentive in-**C**ontext anomaly scoring module that predicts node abnormality by leveraging few-shot normal samples. Extensive experiments on multiple benchmark datasets from various domains demonstrate the superior anomaly detection performance, efficiency, and generalizability of ARC.", "pdf": "https://openreview.net/pdf/5901f52b70dd880c8fef76934a6deb110ec30d9a.pdf"} {"title": "Active learning of neural population dynamics using two-photon holographic optogenetics", "url": "https://openreview.net/forum?id=nLQeE8QGGe", "detail_url": "https://openreview.net/forum?id=nLQeE8QGGe", "authors": "Andrew Wagenmaker,Lu Mi,Marton Rozsa,Matthew Storm Bull,Karel Svoboda,Kayvon Daie,Matthew D. Golub,Kevin Jamieson", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in techniques for monitoring and perturbing neural populations have greatly enhanced our ability to study circuits in the brain. In particular, two-photon holographic optogenetics now enables precise photostimulation of experimenter-specified groups of individual neurons, while simultaneous two-photon calcium imaging enables the measurement of ongoing and induced activity across the neural population. Despite the enormous space of potential photostimulation patterns and the time-consuming nature of photostimulation experiments, very little algorithmic work has been done to determine the most effective photostimulation patterns for identifying the neural population dynamics. Here, we develop methods to efficiently select which neurons to stimulate such that the resulting neural responses will best inform a dynamical model of the neural population activity. Using neural population responses to photostimulation in mouse motor cortex, we demonstrate the efficacy of a low-rank linear dynamical systems model, and develop an active learning procedure which takes advantage of low-rank structure to determine informative photostimulation patterns. We demonstrate our approach on both real and synthetic data, obtaining in some cases as much as a two-fold reduction in the amount of data required to reach a given predictive power. Our active stimulation design method is based on a novel active learning procedure for low-rank regression, which may be of independent interest.", "pdf": "https://openreview.net/pdf/f65e32e082781fef08ab450af2a506bb55487173.pdf"} {"title": "HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning", "url": "https://openreview.net/forum?id=6LVxO1C819", "detail_url": "https://openreview.net/forum?id=6LVxO1C819", "authors": "Momin Ahmad Khan,Yasra Chandio,Fatima M. Anwar", "tags": "NIPS 2024,Poster", "abstract": "Data heterogeneity among Federated Learning (FL) users poses a significant challenge, resulting in reduced global model performance. 
The community has designed various techniques to tackle this issue, among which Knowledge Distillation (KD)-based techniques are common.\n While these techniques effectively improve performance under high heterogeneity, they inadvertently cause higher accuracy degradation under model poisoning attacks (known as \\emph{attack amplification}). This paper presents a case study to reveal this critical vulnerability in KD-based FL systems. We show why KD causes this issue through empirical evidence and use it as motivation to design a hybrid distillation technique. We introduce a novel algorithm, Hybrid Knowledge Distillation for Robust and Accurate FL (HYDRA-FL), which reduces the impact of attacks in attack scenarios by offloading some of the KD loss to a shallow layer via an auxiliary classifier. We model HYDRA-FL as a generic framework and adapt it to two KD-based FL algorithms, FedNTD and MOON. Using these two as case studies, we demonstrate that our technique outperforms baselines in attack settings while maintaining comparable performance in benign settings.", "pdf": "https://openreview.net/pdf/cbd50dc60faa8cd6883e9b52f9d043e35ba178dc.pdf"} {"title": "Clustering with Non-adaptive Subset Queries", "url": "https://openreview.net/forum?id=lgtsXxk4dF", "detail_url": "https://openreview.net/forum?id=lgtsXxk4dF", "authors": "Hadley Black,Euiwoong Lee,Arya Mazumdar,Barna Saha", "tags": "NIPS 2024,Poster", "abstract": "Recovering the underlying clustering of a set $U$ of $n$ points by asking pair-wise same-cluster queries has garnered significant interest in the last decade. Given a query $S \\subset U$, $|S|=2$, the oracle returns \"yes\" if the points are in the same cluster and \"no\" otherwise. We study a natural generalization of this problem to subset queries for $|S|>2$, where the oracle returns the number of clusters intersecting $S$. Our aim is to determine the minimum number of queries needed for exactly recovering an arbitrary $k$-clustering. We focus on non-adaptive schemes, where all the queries are asked in one round, thus allowing for the querying process to be parallelized, which is a highly desirable property. \n\nFor adaptive algorithms with pair-wise queries, the complexity is known to be $\\Theta(nk)$, where $k$ is the number of clusters. \nIn contrast, non-adaptive pair-wise query algorithms are extremely limited: even for $k=3$, such algorithms require $\\Omega(n^2)$ queries, which matches the trivial $O(n^2)$ upper bound attained by querying every pair of points. Allowing for subset queries of unbounded size, $O(n)$ queries is possible with an adaptive scheme. However, the realm of non-adaptive algorithms remains completely unknown. Is it possible to attain algorithms that are non-adaptive while still making a near-linear number of queries?\n\nIn this paper, we give the first non-adaptive algorithms for clustering with subset queries. We provide, (i) a non-adaptive algorithm making $O(n \\log^2 n \\log k)$ queries which improves to $O(n \\log k)$ when the cluster sizes are within any constant factor of each other, (ii) for constant $k$, a non-adaptive algorithm making $O(n \\log{\\log{n}})$ queries. In addition to non-adaptivity, we take into account other practical considerations, such as enforcing a bound on query size. For constant $k$, we give an algorithm making $\\smash{\\widetilde{O}(n^2/s^2)}$ queries on subsets of size at most $s \\leq \\sqrt{n}$, which is optimal among all non-adaptive algorithms within a $\\log n$-factor. 
For arbitrary $k$, the dependence varies as $\tilde{O}(n^2/s)$.", "pdf": "https://openreview.net/pdf/61ca56abf9e2fcb4a96d5c3908c1d3617a81cb55.pdf"} {"title": "FIDE: Frequency-Inflated Conditional Diffusion Model for Extreme-Aware Time Series Generation", "url": "https://openreview.net/forum?id=5HQhYiGnYb", "detail_url": "https://openreview.net/forum?id=5HQhYiGnYb", "authors": "Asadullah Hill Galib,Pang-Ning Tan,Lifeng Luo", "tags": "NIPS 2024,Poster", "abstract": "Time series generation is a crucial aspect of data analysis, playing a pivotal role in learning the temporal patterns and their underlying dynamics across diverse fields. Conventional time series generation methods often struggle to capture extreme values adequately, diminishing their value in critical applications such as scenario planning and management for healthcare, finance, climate change adaptation, and beyond. In this paper, we introduce a conditional diffusion model called FIDE to address the challenge of preserving the distribution of extreme values in generative modeling for time series. FIDE employs a novel high-frequency inflation strategy in the frequency domain, preventing premature fade-out of extreme values. It also extends the traditional diffusion-based model, enabling the generation of samples conditioned on the block maxima, thereby enhancing the model's capacity to capture extreme events. Additionally, the FIDE framework incorporates the Generalized Extreme Value (GEV) distribution within its generative modeling framework, ensuring fidelity to both block maxima and overall data distribution. Experimental results on real-world and synthetic data showcase the efficacy of FIDE over baseline methods, highlighting its potential in advancing Generative AI for time series analysis, specifically in accurately modeling extreme events.", "pdf": "https://openreview.net/pdf/285d2747b600e6d3316e485f911ea40684b85114.pdf"} {"title": "Robust Mixture Learning when Outliers Overwhelm Small Groups", "url": "https://openreview.net/forum?id=TrXV4dMDcG", "detail_url": "https://openreview.net/forum?id=TrXV4dMDcG", "authors": "Daniil Dmitriev,Rares-Darius Buhai,Stefan Tiegel,Alexander Wolters,Gleb Novikov,Amartya Sanyal,David Steurer,Fanny Yang", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of estimating the means of well-separated mixtures when an adversary may add arbitrary outliers. While strong guarantees are available when the outlier fraction is significantly smaller than the minimum mixing weight, much less is known when outliers may crowd out low-weight clusters \u2013 a setting we refer to as list-decodable mixture learning (LD-ML). In this case, adversarial outliers can simulate additional spurious mixture components. Hence, if all means of the mixture must be recovered up to a small error in the output list, the list size needs to be larger than the number of (true) components. We propose an algorithm that obtains order-optimal error guarantees for each mixture mean with a minimal list-size overhead, significantly improving upon list-decodable mean estimation, the only existing method that is applicable for LD-ML.
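The subset-query clustering entry above generalizes pairwise same-cluster queries to arbitrary subsets. A two-line oracle simulator makes the query model concrete (a test harness, not one of the paper's algorithms):

```python
def subset_query(labels, S):
    # Returns the number of distinct clusters that subset S intersects;
    # for |S| = 2 this reduces to a same-cluster query (1 = yes, 2 = no).
    return len({labels[i] for i in S})
```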
Although improvements are observed even when the mixture is non-separated, our algorithm achieves particularly strong guarantees when the mixture is separated: it can leverage the mixture structure to partially cluster the samples before carefully iterating a base learner for list-decodable mean estimation at different scales.", "pdf": "https://openreview.net/pdf/d1548289b846549ea783a01479447919fa5de63c.pdf"} {"title": "Revisiting Score Propagation in Graph Out-of-Distribution Detection", "url": "https://openreview.net/forum?id=jb5qN3212b", "detail_url": "https://openreview.net/forum?id=jb5qN3212b", "authors": "Longfei Ma,Yiyou Sun,Kaize Ding,Zemin Liu,Fei Wu", "tags": "NIPS 2024,Poster", "abstract": "The field of graph learning has been substantially advanced by the development of deep learning models, in particular graph neural networks. However, one salient yet largely under-explored challenge is detecting Out-of-Distribution (OOD) nodes on graphs. Prevailing OOD detection techniques, developed in other domains like computer vision, do not cater to the interconnected nature of graphs. This work aims to fill this gap by exploring the potential of a simple yet effective method -- OOD score propagation, which propagates OOD scores among neighboring nodes along the graph structure. This post hoc solution can be easily integrated with existing OOD scoring functions, showcasing its excellent flexibility and effectiveness in most scenarios. However, the conditions under which score propagation proves beneficial remain not fully elucidated. Our study meticulously derives these conditions and, inspired by this discovery, introduces an innovative edge augmentation strategy with theoretical guarantees. Empirical evaluations affirm the superiority of our proposed method, outperforming strong OOD detection baselines in various scenarios and settings.", "pdf": "https://openreview.net/pdf/b0f4cc1c8ccb1100775d7e2f880543d59e43318d.pdf"} {"title": "FewViewGS: Gaussian Splatting with Few View Matching and Multi-stage Training", "url": "https://openreview.net/forum?id=liHe9iumIi", "detail_url": "https://openreview.net/forum?id=liHe9iumIi", "authors": "Ruihong Yin,Vladimir Yugay,Yue Li,Sezer Karaoglu,Theo Gevers", "tags": "NIPS 2024,Poster", "abstract": "The field of novel view synthesis from images has seen rapid advancements with the introduction of Neural Radiance Fields (NeRF) and more recently with 3D Gaussian Splatting. Gaussian Splatting became widely adopted due to its efficiency and ability to render novel views accurately. While Gaussian Splatting performs well when a sufficient number of training images is available, its unstructured explicit representation tends to overfit in scenarios with sparse input images, resulting in poor rendering performance. To address this, we present a 3D Gaussian-based novel view synthesis method using sparse input images that can accurately render the scene from the viewpoints not covered by the training images. We propose a multi-stage training scheme with matching-based consistency constraints imposed on the novel views without relying on pre-trained depth estimation or diffusion models. This is achieved by using the matches of the available training images to supervise the generation of the novel views sampled between the training frames with color, geometry, and semantic losses. In addition, we introduce a locality preserving regularization for 3D Gaussians which removes rendering artifacts by preserving the local color structure of the scene.
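The score-propagation entry above has an especially direct implementation: mix each node's OOD score with the average score of its neighbors along the graph. A minimal numpy sketch (the mixing weight and number of steps are illustrative):

```python
import numpy as np

def propagate_ood_scores(A, scores, alpha=0.5, steps=2):
    # A: (n, n) adjacency matrix; scores: (n,) per-node OOD scores.
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.clip(deg, 1, None)  # row-normalized adjacency
    for _ in range(steps):
        scores = alpha * scores + (1 - alpha) * P @ scores
    return scores
```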
Evaluation on synthetic and real-world datasets demonstrates competitive or superior performance of our method in few-shot novel view synthesis compared to existing state-of-the-art methods.", "pdf": "https://openreview.net/pdf/ed5cc740cd652472f5409c5773b77bab5a4ff3c2.pdf"} {"title": "Adversarial Representation Engineering: A General Model Editing Framework for Large Language Models", "url": "https://openreview.net/forum?id=dQ9ji8e9qQ", "detail_url": "https://openreview.net/forum?id=dQ9ji8e9qQ", "authors": "Yihao Zhang,Zeming Wei,Jun Sun,Meng Sun", "tags": "NIPS 2024,Poster", "abstract": "As Large Language Models (LLMs) have achieved remarkable success, understanding and rectifying their complex internal mechanisms has become an urgent issue. Recent research has attempted to interpret their behaviors through the lens of inner representation. However, developing practical and efficient methods for applying these representations for general and flexible model editing remains challenging. In this work, we explore how to leverage insights from representation engineering to guide the editing of LLMs by deploying a representation sensor as an editing oracle. We first identify the importance of a robust and reliable sensor during editing, then propose an \textbf{A}dversarial \textbf{R}epresentation \textbf{E}ngineering (\textbf{ARE}) framework to provide a unified and interpretable approach for conceptual model editing without compromising baseline performance. Experiments on multiple tasks demonstrate the effectiveness of ARE in various model editing scenarios. Our code and data are available at \url{https://github.com/Zhang-Yihao/Adversarial-Representation-Engineering}.", "pdf": "https://openreview.net/pdf/d67c48f284a2271ad2cdb6755e1439c333e17f11.pdf"} {"title": "DistrictNet: Decision-aware learning for geographical districting", "url": "https://openreview.net/forum?id=njwYBFau8E", "detail_url": "https://openreview.net/forum?id=njwYBFau8E", "authors": "Cheikh Ahmed,Alexandre Forel,Axel Parmentier,Thibaut Vidal", "tags": "NIPS 2024,Poster", "abstract": "Districting is a complex combinatorial problem that consists of partitioning a geographical area into small districts. In logistics, it is a major strategic decision determining operating costs for several years. Solving districting problems using traditional methods is intractable even for small geographical areas and existing heuristics often provide sub-optimal results. We present a structured learning approach to find high-quality solutions to real-world districting problems in a few minutes. It is based on integrating a combinatorial optimization layer, the capacitated minimum spanning tree problem, into a graph neural network architecture. To train this pipeline in a decision-aware fashion, we show how to construct target solutions embedded in a suitable space and learn from them.
Experiments show that our approach outperforms existing methods as it can significantly reduce costs on real-world cities.", "pdf": "https://openreview.net/pdf/f64d0fcbdecbca899d8871dd921130e3f6b558b9.pdf"} {"title": "Optimal Algorithms for Learning Partitions with Faulty Oracles", "url": "https://openreview.net/forum?id=ygDl8q02gA", "detail_url": "https://openreview.net/forum?id=ygDl8q02gA", "authors": "Adela Frances DePavia,Olga Medrano Mart\u00edn del Campo,Erasmo Tani", "tags": "NIPS 2024,Poster", "abstract": "We consider a clustering problem where a learner seeks to partition a finite set by querying a faulty oracle. This models applications where learners crowdsource information from non-expert human workers or conduct noisy experiments to determine group structure. The learner aims to exactly recover a partition by submitting queries of the form ``are $u$ and $v$ in the same group?'' for any pair of elements $u$ and $v$ in the set. Moreover, because the learner only has access to faulty sources of information, they require an error-tolerant algorithm for this task: i.e., they must fully recover the correct partition, even if up to $\ell$ answers are incorrect, for some error-tolerance parameter $\ell$. We study the question: for any given error-tolerance $\ell$, what is the minimum number of queries needed to learn a finite set partition of $n$ elements into $k$ groups? We design algorithms for this task and prove that they achieve optimal query complexity. To analyze our algorithms, we first highlight a connection between this task and correlation clustering. We then use this connection to build a R\u00e9nyi-Ulam style analytical framework for this problem, which yields matching lower bounds. Our analysis also reveals an inherent asymmetry between the query complexity necessary to be robust against false negative errors as opposed to false positive errors.", "pdf": "https://openreview.net/pdf/2abac03a655a803c28aaeea7d0cefd63e537be44.pdf"} {"title": "DMPlug: A Plug-in Method for Solving Inverse Problems with Diffusion Models", "url": "https://openreview.net/forum?id=81IFFsfQUj", "detail_url": "https://openreview.net/forum?id=81IFFsfQUj", "authors": "Hengkang Wang,Xu Zhang,Taihui Li,Yuxiang Wan,Tiancong Chen,Ju Sun", "tags": "NIPS 2024,Poster", "abstract": "Pretrained diffusion models (DMs) have recently become popular for solving inverse problems (IPs). Existing methods mostly interleave steps of the reverse diffusion process with steps that bring the iterates closer to satisfying the measurement constraint. However, such interleaving methods struggle to produce final results that look like natural objects of interest (i.e., manifold feasibility) and fit the measurement (i.e., measurement feasibility), especially for nonlinear IPs. Moreover, their capabilities to deal with noisy IPs with unknown types and levels of measurement noise are unknown. In this paper, we advocate viewing the reverse process in DMs as a function and propose a novel plug-in method for solving IPs using pretrained DMs, dubbed DMPlug. DMPlug addresses the issues of manifold feasibility and measurement feasibility in a principled manner, and also shows great potential for being robust to unknown types and levels of noise.
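The DMPlug entry above treats the whole reverse diffusion process as a single differentiable function f and optimizes its input so the decoded sample fits the measurement. A sketch of that plug-in loop, assuming a differentiable sampler and forward operator (names and the optimizer choice are assumptions):

```python
import torch

def dmplug_like_solve(reverse_diffusion, forward_op, y, z0, lr=1e-2, iters=200):
    # Optimize the initial noise z so that f(z) = reverse_diffusion(z)
    # stays on the learned image manifold while fitting the measurement y.
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x = reverse_diffusion(z)                 # manifold feasibility
        loss = ((forward_op(x) - y) ** 2).sum()  # measurement feasibility
        loss.backward()
        opt.step()
    return reverse_diffusion(z).detach()
```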
Through extensive experiments across various IP tasks, including two linear and three nonlinear IPs, we demonstrate that DMPlug consistently outperforms state-of-the-art methods, often by large margins especially for nonlinear IPs.", "pdf": "https://openreview.net/pdf/417ea67c80aa8472615bf862fe2af28a3fba492f.pdf"} {"title": "Invariant Tokenization of Crystalline Materials for Language Model Enabled Generation", "url": "https://openreview.net/forum?id=18FGRNd0wZ", "detail_url": "https://openreview.net/forum?id=18FGRNd0wZ", "authors": "Keqiang Yan,Xiner Li,Hongyi Ling,Kenna Ashen,Carl Edwards,Raymundo Arroyave,Marinka Zitnik,Heng Ji,Xiaofeng Qian,Xiaoning Qian,Shuiwang Ji", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of crystal materials generation using language models (LMs). A key step is to convert 3D crystal structures into 1D sequences to be processed by LMs. Prior studies used the crystallographic information framework (CIF) file stream, which fails to ensure SE(3) and periodic invariance and may not lead to unique sequence representations for a given crystal structure. Here, we propose a novel method, known as Mat2Seq, to tackle this challenge. Mat2Seq converts 3D crystal structures into 1D sequences and ensures that different mathematical descriptions of the same crystal are represented in a single unique sequence, thereby provably achieving SE(3) and periodic invariance. Experimental results show that, with language models, Mat2Seq achieves promising performance in crystal structure generation as compared with prior methods.", "pdf": "https://openreview.net/pdf/5ba6a7cb433f251e0cb54187f36a69682d004aa0.pdf"} {"title": "Achieving Domain-Independent Certified Robustness via Knowledge Continuity", "url": "https://openreview.net/forum?id=v07KRLYxDX", "detail_url": "https://openreview.net/forum?id=v07KRLYxDX", "authors": "Alan Sun,Chiyu Ma,Kenneth Ge,Soroush Vosoughi", "tags": "NIPS 2024,Poster", "abstract": "We present *knowledge continuity*, a novel definition inspired by Lipschitz continuity which aims to certify the robustness of neural networks across input domains (such as continuous and discrete domains in vision and language, respectively). Most existing approaches that seek to certify robustness, especially Lipschitz continuity, lie within the continuous domain with norm and distribution-dependent guarantees. In contrast, our proposed definition yields certification guarantees that depend only on the loss function and the intermediate learned metric spaces of the neural network. These bounds are independent of domain modality, norms, and distribution. We further demonstrate that the expressiveness of a model class is not at odds with its knowledge continuity. This implies that achieving robustness by maximizing knowledge continuity should not theoretically hinder inferential performance. 
Finally, to complement our theoretical results, we present several applications of knowledge continuity such as regularization, a certification algorithm, and show that knowledge continuity can be used to localize vulnerable components of a neural network.", "pdf": "https://openreview.net/pdf/5637147709875e41d94cb35d3fb0ef47bf7ebba5.pdf"} {"title": "BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling", "url": "https://openreview.net/forum?id=haSKMlrbX5", "detail_url": "https://openreview.net/forum?id=haSKMlrbX5", "authors": "Lin Gui,Cristina Garbacea,Victor Veitch", "tags": "NIPS 2024,Poster", "abstract": "This paper concerns the problem of aligning samples from large language models to human preferences using *best-of-$n$* sampling, where we draw $n$ samples, rank them, and return the best one. We consider two fundamental problems. First: what is the relationship between best-of-$n$ and other (RLHF-type) approaches to aligning LLMs? In particular, when should one be preferred to the other? We show that the best-of-$n$ sampling distribution is essentially equivalent to the policy learned by RLHF if we apply a particular monotone transformation to the reward function. Moreover, we show that this transformation yields the best possible trade-off between win-rate against the base model vs KL distance from the base model. Then, best-of-$n$ is a Pareto-optimal win-rate vs KL solution.\nThe second problem we consider is how to fine-tune a model to mimic the best-of-$n$ sampling distribution, to avoid drawing $n$ samples for each inference. We derive *BonBon Alignment* as a method for achieving this. Experiments show that BonBon alignment yields a model that achieves high win rates while minimally affecting off-target aspects of the generations.", "pdf": "https://openreview.net/pdf/77322ec05b42e4b97cc959fb943934f0a32388b0.pdf"} {"title": "Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference", "url": "https://openreview.net/forum?id=tDvFa5OJyS", "detail_url": "https://openreview.net/forum?id=tDvFa5OJyS", "authors": "Jonathan Wenger,Kaiwen Wu,Philipp Hennig,Jacob R. 
Gardner,Geoff Pleiss,John Patrick Cunningham", "tags": "NIPS 2024,Poster", "abstract": "Model selection in Gaussian processes scales prohibitively with the size of the training dataset, both in time and memory.\nWhile many approximations exist, all incur inevitable approximation error.\nRecent work accounts for this error in the form of computational uncertainty, which enables---at the cost of quadratic complexity---an explicit tradeoff between computation and precision.\nHere we extend this development to model selection, which requires significant enhancements to the existing approach, including linear-time scaling in the size of the dataset.\nWe propose a novel training loss for hyperparameter optimization and demonstrate empirically that the resulting method can outperform SGPR, CGGP and SVGP, state-of-the-art methods for GP model selection, on medium to large-scale datasets.\nOur experiments show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU.\nAs a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty---a fundamental prerequisite for optimal decision-making.", "pdf": "https://openreview.net/pdf/9fd16df1a39147c5ed0812c4231d133453a8f9ab.pdf"} {"title": "When is an Embedding Model More Promising than Another?", "url": "https://openreview.net/forum?id=VqFz7iTGcl", "detail_url": "https://openreview.net/forum?id=VqFz7iTGcl", "authors": "Maxime DARRIN,Philippe Formont,Ismail Ben Ayed,Jackie CK Cheung,Pablo Piantanida", "tags": "NIPS 2024,Poster", "abstract": "Embedders play a central role in machine learning, projecting any object into numerical representations that can, in turn, be leveraged to perform various downstream tasks. The evaluation of embedding models typically depends on domain-specific empirical approaches utilizing downstream tasks, primarily because of the lack of a standardized framework for comparison. However, acquiring adequately large and representative datasets for conducting these assessments is not always viable and can prove to be prohibitively expensive and time-consuming. In this paper, we present a unified approach to evaluate embedders. First, we establish theoretical foundations for comparing embedding models, drawing upon the concepts of sufficiency and informativeness. We then leverage these concepts to devise a tractable comparison criterion (information sufficiency), leading to a task-agnostic and self-supervised ranking procedure. We demonstrate experimentally that our approach aligns closely with the capability of embedding models to facilitate various downstream tasks in both natural language processing and molecular biology. This effectively offers practitioners a valuable tool for prioritizing model trials.", "pdf": "https://openreview.net/pdf/cf6fa76e8219524c5bf4afc2931bd636ada5b79e.pdf"} {"title": "On the Expressive Power of Tree-Structured Probabilistic Circuits", "url": "https://openreview.net/forum?id=suYAAOI5bd", "detail_url": "https://openreview.net/forum?id=suYAAOI5bd", "authors": "Lang Yin,Han Zhao", "tags": "NIPS 2024,Poster", "abstract": "Probabilistic circuits (PCs) have emerged as a powerful framework compactly representing probability distributions for efficient and exact probabilistic inference. 
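As an aside on the BoNBoN abstract above: best-of-$n$ sampling itself fits in a few lines. A minimal sketch, with `generate` and `reward` as hypothetical stand-ins for the base LLM sampler and a (possibly learned) reward function:

```python
def best_of_n(prompt, generate, reward, n=8):
    """Draw n samples from the base policy, rank them by reward, return the best.

    `generate` and `reward` are hypothetical stand-ins; any base sampler and
    scoring function with these signatures would do.
    """
    samples = [generate(prompt) for _ in range(n)]
    return max(samples, key=lambda s: reward(prompt, s))
```

A standard estimate puts the KL divergence of the induced distribution from the base policy at no more than log n - (n-1)/n, which is the win-rate vs KL trade-off curve the abstract refers to.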
It has been shown that PCs with general directed acyclic graph (DAG) structure can be understood as a mixture of exponentially (in its height) many components, each of which is a product distribution over univariate marginals. However, existing structure learning algorithms for PCs often generate tree-structured circuits, or use tree-structured circuits as intermediate steps to compress them into DAG-structured circuits. This leads to an intriguing question of whether there exists an exponential gap between DAGs and trees for the PC structure.\n\nIn this paper, we provide a negative answer to this conjecture by proving that, for $n$ variables, there is a quasi-polynomial upper bound $n^{O(\\log n)}$ on the size of an equivalent tree computing the same probability distribution. On the other hand, we also show that, given a depth restriction on the tree, there is a super-polynomial separation between tree and DAG-structured PCs. Our work takes an important step towards understanding the expressive power of tree-structured PCs, and our techniques may be of independent interest in the study of structure learning algorithms for PCs.", "pdf": "https://openreview.net/pdf/b43ed6380196199080503de5e5cbac4b6464df1a.pdf"} {"title": "SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data", "url": "https://openreview.net/forum?id=t9gNEhreht", "detail_url": "https://openreview.net/forum?id=t9gNEhreht", "authors": "Jialu Li,Jaemin Cho,Yi-Lin Sung,Jaehong Yoon,Mohit Bansal", "tags": "NIPS 2024,Poster", "abstract": "Recent text-to-image (T2I) generation models have demonstrated impressive capabilities in creating images from text descriptions. However, these T2I generation models often fail to generate images that precisely match the details of the text inputs, with errors such as incorrect spatial relationships or missing objects. In this paper, we introduce SELMA: Skill-Specific Expert Learning and Merging with Auto-Generated Data, a novel paradigm to improve the faithfulness of T2I models by fine-tuning models on automatically generated, multi-skill image-text datasets, with skill-specific expert learning and merging. First, SELMA leverages an LLM\u2019s in-context learning capability to generate multiple datasets of text prompts that can teach different skills, and then generates the images with a T2I model based on the prompts. Next, SELMA adapts the T2I model to the new skills by learning multiple single-skill LoRA (low-rank adaptation) experts followed by expert merging. Our independent expert fine-tuning specializes multiple models for different skills, and expert merging helps build a joint multi-skill T2I model that can generate faithful images given diverse text prompts, while mitigating the knowledge conflict from different datasets. We empirically demonstrate that SELMA significantly improves the semantic alignment and text faithfulness of state-of-the-art T2I diffusion models on multiple benchmarks (+2.1% on TIFA and +6.9% on DSG), human preference metrics (PickScore, ImageReward, and HPS), as well as human evaluation. Moreover, fine-tuning with image-text pairs auto-collected via SELMA shows comparable performance to fine-tuning with ground truth data. Lastly, we show that fine-tuning with images from a weaker T2I model can help improve the generation quality of a stronger T2I model, suggesting promising weak-to-strong generalization in T2I models.
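The expert-merging step in the SELMA abstract above can be sketched concisely. A minimal version, assuming each expert is a LoRA pair (A, B) on the same base weight; averaging the low-rank deltas is one plausible merge rule, and the paper's exact scheme may differ:

```python
import torch

def merge_lora_experts(base_weight, experts, alpha=1.0):
    """Merge skill-specific LoRA experts into one weight matrix (sketch).

    Each expert is an (A, B) pair with update delta = B @ A, where
    A: (r, d_in) and B: (d_out, r). Averaging the updates before adding
    them to the base weight is one simple merge choice.
    """
    delta = torch.zeros_like(base_weight)
    for A, B in experts:
        delta += B @ A
    return base_weight + alpha * delta / len(experts)
```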
We provide code in the supplementary materials.", "pdf": "https://openreview.net/pdf/9c5145da5a9316d95276533fdb05fffd3d83b851.pdf"} {"title": "Thought of Search: Planning with Language Models Through The Lens of Efficiency", "url": "https://openreview.net/forum?id=lNCsyA5uS1", "detail_url": "https://openreview.net/forum?id=lNCsyA5uS1", "authors": "Michael Katz,Harsha Kokel,Kavitha Srinivas,Shirin Sohrabi", "tags": "NIPS 2024,Poster", "abstract": "Among the most important properties of algorithms investigated in computer science are soundness, completeness, and complexity. These properties, however, are rarely analyzed for the vast collection of recently proposed methods for planning with large language models. In this work, we alleviate this gap. We analyze these properties of using LLMs for planning and highlight that recent trends abandon both soundness and completeness for the sake of inefficiency. We propose a significantly more efficient approach that can, at the same time, maintain both soundness and completeness. We exemplify this on four representative search problems, comparing to the LLM-based solutions from the literature that attempt to solve these problems. We show that, by using LLMs to produce the code for the search components, we can solve the entire datasets with 100% accuracy with only a few calls to the LLM. In contrast, the compared approaches require hundreds of thousands of calls and achieve significantly lower accuracy. We argue for a responsible use of compute resources, urging the research community to investigate sound and complete LLM-based approaches that uphold efficiency.", "pdf": "https://openreview.net/pdf/c3b9f6dac697975151973b9512513649e4a3cf31.pdf"} {"title": "Universal Rates of Empirical Risk Minimization", "url": "https://openreview.net/forum?id=6cWDg9t3z5", "detail_url": "https://openreview.net/forum?id=6cWDg9t3z5", "authors": "Steve Hanneke,Mingyue Xu", "tags": "NIPS 2024,Poster", "abstract": "The well-known $\\textit{empirical risk minimization}$ (ERM) principle is the basis of many widely used machine learning algorithms, and plays an essential role in the classical PAC theory. A common description of a learning algorithm's performance is its so-called \u201clearning curve\u201d, that is, the decay of the expected error as a function of the input sample size. As the PAC model fails to explain the behavior of learning curves, recent research has explored an alternative universal learning model and has ultimately revealed a distinction between optimal universal and uniform learning rates (Bousquet et al., 2021). However, a basic understanding of such differences with a particular focus on the ERM principle has yet to be developed. \n \n In this paper, we consider the problem of universal learning by ERM in the realizable case and study the possible universal rates. Our main result is a fundamental $\\textit{tetrachotomy}$: there are only four possible universal learning rates by ERM; namely, the learning curves of any concept class learnable by ERM decay at either $e^{-n}$, $1/n$, or $\\log{(n)}/n$ rates, or arbitrarily slowly. Moreover, we provide a complete characterization of which concept classes fall into each of these categories, via new complexity structures.
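To make the Thought of Search recipe above concrete: the LLM writes only the search components (a successor function and a goal test), and a classical sound-and-complete search consumes them. A minimal sketch, with the two components as hypothetical LLM-generated callables over hashable states:

```python
from collections import deque

def bfs(initial_state, successors, is_goal):
    """Sound and complete breadth-first search over the induced graph.

    `successors` and `is_goal` would be the LLM-generated components; the
    search itself stays classical, so soundness and completeness hold as
    long as those two components are correct.
    """
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path + [state]
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [state]))
    return None  # space exhausted: no solution exists
```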
We also develop new combinatorial dimensions which supply sharp asymptotically-valid constant factors for these rates, whenever possible.", "pdf": "https://openreview.net/pdf/e3ad3581485fc4c7af6a155c8c66508125865813.pdf"} {"title": "Interventional Causal Discovery in a Mixture of DAGs", "url": "https://openreview.net/forum?id=mFrlCI8sov", "detail_url": "https://openreview.net/forum?id=mFrlCI8sov", "authors": "Burak Var\u0131c\u0131,Dmitriy A Katz,Dennis Wei,Prasanna Sattigeri,Ali Tajer", "tags": "NIPS 2024,Poster", "abstract": "Causal interactions among a group of variables are often modeled by a single causal graph. In some domains, however, these interactions are best described by multiple co-existing causal graphs, e.g., in dynamical systems or genomics. This paper addresses the hitherto unknown role of interventions in learning causal interactions among variables governed by a mixture of causal systems, each modeled by one directed acyclic graph (DAG). Causal discovery from mixtures is fundamentally more challenging than single-DAG causal discovery. Two major difficulties stem from (i) an inherent uncertainty about the skeletons of the component DAGs that constitute the mixture and (ii) possibly cyclic relationships across these component DAGs. This paper addresses these challenges and aims to identify edges that exist in at least one component DAG of the mixture, referred to as the *true* edges. First, it establishes matching necessary and sufficient conditions on the size of interventions required to identify the true edges. Next, guided by the necessity results, an adaptive algorithm is designed that learns all true edges using ${\\cal O}(n^2)$ interventions, where $n$ is the number of nodes. Remarkably, the size of the interventions is optimal if the underlying mixture model does not contain cycles across its components. More generally, the gap between the intervention size used by the algorithm and the optimal size is quantified. It is shown to be bounded by the *cyclic complexity number* of the mixture model, defined as the size of the minimal intervention that can break the cycles in the mixture, which is upper bounded by the number of cycles among the ancestors of a node.", "pdf": "https://openreview.net/pdf/1b479d170e7f3c5cbb35a67cb00b9fc4c4843848.pdf"} {"title": "Spatio-Spectral Graph Neural Networks", "url": "https://openreview.net/forum?id=Cb3kcwYBgw", "detail_url": "https://openreview.net/forum?id=Cb3kcwYBgw", "authors": "Simon Geisler,Arthur Kosmala,Daniel Herbst,Stephan G\u00fcnnemann", "tags": "NIPS 2024,Poster", "abstract": "Spatial Message Passing Graph Neural Networks (MPGNNs) are widely used for learning on graph-structured data. However, key limitations of *\u2113*-step MPGNNs are that their \"receptive field\" is typically limited to the *\u2113*-hop neighborhood of a node and that information exchange between distant nodes is limited by over-squashing. Motivated by these limitations, we propose *Spatio-Spectral Graph Neural Networks (S\u00b2GNNs)* \u2013 a new modeling paradigm for Graph Neural Networks (GNNs) that synergistically combines spatially and spectrally parametrized graph filters. Parameterizing filters partially in the frequency domain enables global yet efficient information propagation. We show that S\u00b2GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs. Further, rethinking graph convolutions at a fundamental level unlocks new design spaces. 
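A minimal sketch of the spatio-spectral idea in the S²GNN abstract above, assuming a dense symmetric adjacency matrix and a fixed low-pass spectral filter for illustration (the paper learns its filters, uses partial eigendecompositions, and also handles directed graphs):

```python
import numpy as np

def s2_layer(X, A, k=16):
    """One simplified spatio-spectral layer (sketch, not the paper's design).

    Spatial part: one-hop mean aggregation. Spectral part: a filter applied
    to the k lowest-frequency eigenvectors of the graph Laplacian, which
    propagates information globally in O(nk) per feature channel.
    X: (n, d) node features, A: (n, n) symmetric adjacency.
    """
    deg = A.sum(1)
    L = np.diag(deg) - A                          # combinatorial Laplacian
    eigvals, U = np.linalg.eigh(L)
    U_k = U[:, :k]                                # k lowest-frequency modes
    g = 1.0 / (1.0 + eigvals[:k])                 # fixed low-pass filter (illustrative)
    spectral = U_k @ (g[:, None] * (U_k.T @ X))   # global propagation
    spatial = (A @ X) / np.maximum(deg, 1)[:, None]  # one-hop mean aggregation
    return np.tanh(spatial + spectral)
```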
For example, S\u00b2GNNs allow for free positional encodings that make them strictly more expressive than the 1-Weisfeiler-Leman (WL) test. Moreover, to obtain general-purpose S\u00b2GNNs, we propose spectrally parametrized filters for directed graphs. S\u00b2GNNs outperform spatial MPGNNs, graph transformers, and graph rewirings, e.g., on the peptide long-range benchmark tasks, and are competitive with state-of-the-art sequence modeling. On a 40 GB GPU, S\u00b2GNNs scale to millions of nodes.", "pdf": "https://openreview.net/pdf/3a46231cf042909d550605fa8da48926f0cf53e3.pdf"} {"title": "TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models", "url": "https://openreview.net/forum?id=FmNoFIImZG", "detail_url": "https://openreview.net/forum?id=FmNoFIImZG", "authors": "Andrei Margeloiu,Xiangjian Jiang,Nikola Simidjievski,Mateja Jamnik", "tags": "NIPS 2024,Poster", "abstract": "Data collection is often difficult in critical fields such as medicine, physics, and chemistry, yielding typically only small tabular datasets. However, classification methods tend to struggle with these small datasets, leading to poor predictive performance. Increasing the training set with additional synthetic data, similar to data augmentation in images, is commonly believed to improve downstream tabular classification performance. However, current tabular generative methods that learn either the joint distribution $ p(\\mathbf{x}, y) $ or the class-conditional distribution $ p(\\mathbf{x} \\mid y) $ often overfit on small datasets, resulting in poor-quality synthetic data, usually worsening classification performance compared to using real data alone. To solve these challenges, we introduce TabEBM, a novel class-conditional generative method using Energy-Based Models (EBMs). Unlike existing tabular methods that use a shared model to approximate all class-conditional densities, our key innovation is to create distinct EBM generative models for each class, each modelling its class-specific data distribution individually. This approach creates robust energy landscapes, even in ambiguous class distributions. Our experiments show that TabEBM generates synthetic data with higher quality and better statistical fidelity than existing methods. When used for data augmentation, our synthetic data consistently leads to improved classification performance across diverse datasets of various sizes, especially small ones. Code is available at https://github.com/andreimargeloiu/TabEBM.", "pdf": "https://openreview.net/pdf/25a6e595799705e000238621240dbbb44f3cf7d7.pdf"} {"title": "Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios", "url": "https://openreview.net/forum?id=uoJQ9qadjY", "detail_url": "https://openreview.net/forum?id=uoJQ9qadjY", "authors": "Shantanu Jaiswal,Debaditya Roy,Basura Fernando,Cheston Tan", "tags": "NIPS 2024,Poster", "abstract": "Complex visual reasoning and question answering (VQA) is a challenging task that requires compositional multi-step processing and higher-level reasoning capabilities beyond the immediate recognition and localization of objects and events. Here, we introduce a fully neural Iterative and Parallel Reasoning Mechanism (IPRM) that combines two distinct forms of computation -- iterative and parallel -- to better address complex VQA scenarios. 
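The structural point of the TabEBM abstract above, one class-conditional generator per class with nothing shared, can be sketched with kernel density estimates standing in for the class-specific EBMs (a deliberate simplification for illustration, not the paper's model):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def augment_per_class(X, y, n_per_class=100, bandwidth=0.5):
    """Fit one generator per class and sample synthetic rows from each.

    KDE is only a stand-in for TabEBM's class-specific EBMs; the point is
    the structure: a distinct density model per class, never a shared one.
    """
    X_aug, y_aug = [], []
    for c in np.unique(y):
        kde = KernelDensity(bandwidth=bandwidth).fit(X[y == c])
        X_aug.append(kde.sample(n_per_class))
        y_aug.append(np.full(n_per_class, c))
    return np.vstack(X_aug), np.concatenate(y_aug)
```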
Specifically, IPRM's \"iterative\" computation facilitates compositional step-by-step reasoning for scenarios wherein individual operations need to be computed, stored, and recalled dynamically (e.g. when computing the query \u201cdetermine the color of pen to the left of the child in red t-shirt sitting at the white table\u201d). Meanwhile, its \"parallel'' computation allows for the simultaneous exploration of different reasoning paths and benefits more robust and efficient execution of operations that are mutually independent (e.g. when counting individual colors for the query: \"determine the maximum occurring color amongst all t-shirts'\"). We design IPRM as a lightweight and fully-differentiable neural module that can be conveniently applied to both transformer and non-transformer vision-language backbones. It notably outperforms prior task-specific methods and transformer-based attention modules across various image and video VQA benchmarks testing distinct complex reasoning capabilities such as compositional spatiotemporal reasoning (AGQA), situational reasoning (STAR), multi-hop reasoning generalization (CLEVR-Humans) and causal event linking (CLEVRER-Humans). Further, IPRM's internal computations can be visualized across reasoning steps, aiding interpretability and diagnosis of its errors.", "pdf": "https://openreview.net/pdf/c993bb7e15acae1c52ff8d50243206913deb4cc9.pdf"} {"title": "Zero-Shot Transfer of Neural ODEs", "url": "https://openreview.net/forum?id=OgnYoIxtIN", "detail_url": "https://openreview.net/forum?id=OgnYoIxtIN", "authors": "Tyler Ingebrand,Adam Thorpe,ufuk topcu", "tags": "NIPS 2024,Poster", "abstract": "Autonomous systems often encounter environments and scenarios beyond the scope of their training data, which underscores a critical challenge: the need to generalize and adapt to unseen scenarios in real time. This challenge necessitates new mathematical and algorithmic tools that enable adaptation and zero-shot transfer. To this end, we leverage the theory of function encoders, which enables zero-shot transfer by combining the flexibility of neural networks with the mathematical principles of Hilbert spaces. Using this theory, we first present a method for learning a space of dynamics spanned by a set of neural ODE basis functions. After training, the proposed approach can rapidly identify dynamics in the learned space using an efficient inner product calculation. Critically, this calculation requires no gradient calculations or retraining during the online phase. This method enables zero-shot transfer for autonomous systems at runtime and opens the door for a new class of adaptable control algorithms. We demonstrate state-of-the-art system modeling accuracy for two MuJoCo robot environments and show that the learned models can be used for more efficient MPC control of a quadrotor.", "pdf": "https://openreview.net/pdf/6a650a5c71241b227459d9edda79c36d9a8fac28.pdf"} {"title": "Graph neural networks and non-commuting operators", "url": "https://openreview.net/forum?id=6aJrEC28hR", "detail_url": "https://openreview.net/forum?id=6aJrEC28hR", "authors": "Mauricio Velasco,Kaiying O'Hare,Bernardo Rychtenberg,Soledad Villar", "tags": "NIPS 2024,Poster", "abstract": "Graph neural networks (GNNs) provide state-of-the-art results in a wide variety of tasks which typically involve predicting features at the vertices of a graph. They are built from layers of graph convolutions which serve as a powerful inductive bias for describing the flow of information among the vertices. 
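A minimal sketch of the identification step from the zero-shot neural ODE abstract above: online adaptation reduces to a least-squares projection onto learned basis functions, with no gradient steps or retraining. Basis functions are passed in as plain callables here; in the paper they are trained neural ODEs:

```python
import numpy as np

def identify_dynamics(basis_fns, states, derivs):
    """Zero-shot identification in the span of learned basis functions.

    Given observed (state, dx/dt) pairs, solve least squares for coefficients
    c such that sum_i c_i * f_i(x) approximates dx/dt. Each f(x) returns a
    vector; `states` and `derivs` are lists of equally shaped arrays.
    """
    # Column i stacks basis function i evaluated on all observed states.
    Phi = np.stack(
        [np.concatenate([f(x) for x in states]) for f in basis_fns], axis=1
    )
    target = np.concatenate(derivs)
    coeffs, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    predict = lambda x: sum(c * f(x) for c, f in zip(coeffs, basis_fns))
    return coeffs, predict
```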
Often, more than one data modality is available. This work considers a setting in which several graphs have the same vertex set and a common vertex-level learning task. This generalizes standard GNN models to GNNs with several graph operators that do not commute. We call this model a graph-tuple neural network (GtNN). \n\nIn this work, we develop the mathematical theory to address the stability and transferability of GtNNs using properties of non-commuting non-expansive operators. We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem that guarantees that all graph-tuple neural networks are transferable on convergent graph-tuple sequences. In particular, there is no non-transferable energy under the convergence we consider here. Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs (GtNNs) and provide a strict improvement on what is currently known even in the GNN case.\n\nWe illustrate our theoretical results with simple experiments on synthetic and real-world data. To this end, we derive a training procedure that provably enforces the stability of the resulting model.", "pdf": "https://openreview.net/pdf/c7c518f298e5899f7e9285c55ca3425a0e7104ce.pdf"} {"title": "Learning Goal-Conditioned Representations for Language Reward Models", "url": "https://openreview.net/forum?id=Swh8LxuycA", "detail_url": "https://openreview.net/forum?id=Swh8LxuycA", "authors": "Vaskar Nath,Dylan Z Slack,Jeff Da,Yuntao Ma,Hugh Zhang,Spencer Whitehead,Sean M. Hendryx", "tags": "NIPS 2024,Poster", "abstract": "Techniques that learn improved representations via offline data or self-supervised objectives have shown impressive results in traditional reinforcement learning.\nNevertheless, it is unclear how improved representation learning can benefit reinforcement learning from human feedback on language models.\nIn this work, we propose training reward models (RMs) in a contrastive, $\\textit{goal-conditioned}$ fashion by increasing the representation similarity of future states along sampled preferred trajectories and decreasing the similarity along randomly sampled dispreferred trajectories.\nThis objective significantly improves reward model performance by up to 0.09 AUROC across challenging benchmarks, such as MATH and GSM8k. These findings extend to general alignment as well -- on the Helpful-Harmless dataset, we observe a 2.3\% increase in accuracy.\nBeyond improving reward model performance, we show this way of training RM representations enables improved steerability because it allows us to evaluate the likelihood of an action achieving a particular goal-state (e.g.
whether a solution is correct or helpful).\nLeveraging this insight, we find that we can filter up to 55\% of generated tokens during majority voting by discarding trajectories likely to end up in an \"incorrect\" state, which leads to significant cost savings.\nWe additionally find that these representations can perform fine-grained control by conditioning on desired future goal-states.\nFor example, we show that steering a Llama 3 model towards helpful generations with our approach improves helpfulness by $9.6$\% over a supervised-fine-tuning trained baseline.\nSimilarly, steering the model towards complex generations improves complexity by $21.6$\% over the baseline.\nOverall, we find that training RMs in this contrastive, goal-conditioned fashion significantly improves performance and enables model steerability.", "pdf": "https://openreview.net/pdf/83089f8621985bae77c25ae6a4e253c93dc4c5b0.pdf"} {"title": "Rethinking Optimal Transport in Offline Reinforcement Learning", "url": "https://openreview.net/forum?id=hKloKv7pR2", "detail_url": "https://openreview.net/forum?id=hKloKv7pR2", "authors": "Arip Asadulaev,Rostislav Korst,Alexander Korotin,Vage Egiazarian,Andrey Filchenkov,Evgeny Burnaev", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel algorithm for offline reinforcement learning using optimal transport. Typically, in offline reinforcement learning, the data is provided by various experts and some of them can be sub-optimal. To extract an efficient policy, it is necessary to \\emph{stitch} the best behaviors from the dataset. To address this problem, we rethink offline reinforcement learning as an optimal transportation problem. Based on this, we present an algorithm that aims to find a policy that maps states to a \\emph{partial} distribution of the best expert actions for each given state. We evaluate the performance of our algorithm on continuous control problems from the D4RL suite and demonstrate improvements over existing methods.", "pdf": "https://openreview.net/pdf/6850154a5d5c4752d3397faaf8db1c686cc0f0c7.pdf"} {"title": "Learning Identifiable Factorized Causal Representations of Cellular Responses", "url": "https://openreview.net/forum?id=AhlaBDHMQh", "detail_url": "https://openreview.net/forum?id=AhlaBDHMQh", "authors": "Haiyi Mao,Romain Lopez,Kai Liu,Jan-Christian Huetter,David Richmond,Panayiotis V. Benos,Lin Qiu", "tags": "NIPS 2024,Poster", "abstract": "The study of cells and their responses to genetic or chemical perturbations promises to accelerate the discovery of therapeutic targets. However, designing adequate and insightful models for such data is difficult because the response of a cell to perturbations essentially depends on contextual covariates (e.g., genetic background or cell type). There is therefore a need for models that can identify interactions between drugs and contextual covariates. This is crucial for discovering therapeutic targets, as such interactions may reveal drugs that affect certain cell types but not others.\nWe tackle this problem with a novel Factorized Causal Representation (FCR) learning method, an identifiable deep generative model that reveals causal structure in single-cell perturbation data from several cell lines. FCR learns multiple cellular representations that are disentangled, comprising covariate-specific (Z_x), treatment-specific (Z_t) and interaction-specific (Z_tx) representations.
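One way to read the contrastive objective in the goal-conditioned reward-model abstract above is as an InfoNCE-style loss over trajectory states. The following is an assumption-laden stand-in, not the authors' exact loss; shapes and the temperature are illustrative:

```python
import torch
import torch.nn.functional as F

def goal_contrastive_loss(h_t, h_future_pref, h_dispref, tau=0.1):
    """Contrastive, goal-conditioned representation loss (sketch).

    h_t:           (b, d) hidden states at some step of preferred trajectories
    h_future_pref: (b, d) hidden states at a later step of the same trajectories
    h_dispref:     (b, d) hidden states from dispreferred trajectories
    Similarity to the preferred future state is pulled up; similarity to
    dispreferred states is pushed down, via an InfoNCE-style objective.
    """
    h_t = F.normalize(h_t, dim=-1)
    pos = (h_t * F.normalize(h_future_pref, dim=-1)).sum(-1) / tau  # (b,)
    neg = h_t @ F.normalize(h_dispref, dim=-1).T / tau              # (b, b)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)              # (b, 1+b)
    labels = torch.zeros(len(h_t), dtype=torch.long, device=h_t.device)
    return F.cross_entropy(logits, labels)
```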
Based on recent advances in non-linear ICA theory, we prove the component-wise identifiability of Z_tx and block-wise identifiability of Z_t and Z_x. Then, we present our implementation of FCR, and empirically demonstrate that FCR outperforms state-of-the-art baselines in various tasks across four single-cell datasets.", "pdf": "https://openreview.net/pdf/4cd13872900e5194675e0f652bb052c6decdaffc.pdf"} {"title": "Improved Sample Complexity Bounds for Diffusion Model Training", "url": "https://openreview.net/forum?id=OxcqkYOy8q", "detail_url": "https://openreview.net/forum?id=OxcqkYOy8q", "authors": "Shivam Gupta,Aditya Parulekar,Eric Price,Zhiyang Xun", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have become the most popular approach to deep generative modeling of images, largely due to their empirical performance and reliability. From a theoretical standpoint, a number of recent works [CCL+23, CCSW22, BBDD24] have studied the iteration complexity of sampling, assuming access to an accurate diffusion model. In this work, we focus on understanding the *sample complexity* of training such a model; how many samples are needed to learn an accurate diffusion model using a sufficiently expressive neural network? Prior work [BMR20] showed bounds polynomial in the dimension, desired Total Variation error, and Wasserstein error. We show an *exponential improvement* in the dependence on Wasserstein error and depth, along with improved dependencies on other relevant parameters.", "pdf": "https://openreview.net/pdf/d9933ee6c6aa89329e716f365afa61a8a4b32bf0.pdf"} {"title": "DiffusionPDE: Generative PDE-Solving under Partial Observation", "url": "https://openreview.net/forum?id=z0I2SbjN0R", "detail_url": "https://openreview.net/forum?id=z0I2SbjN0R", "authors": "Jiahe Huang,Guandao Yang,Zichen Wang,Jeong Joon Park", "tags": "NIPS 2024,Poster", "abstract": "We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models. In particular, we focus on the scenarios where we do not have the full knowledge of the scene necessary to apply classical solvers. Most existing forward or inverse PDE approaches perform poorly when the observations on the data or the underlying coefficients are incomplete, which is commonly the case for real-world measurements. In this work, we propose DiffusionPDE that can simultaneously fill in the missing information and solve a PDE by modeling the joint distribution of the solution and coefficient spaces. We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation, significantly outperforming the state-of-the-art methods for both forward and inverse directions.", "pdf": "https://openreview.net/pdf/d12cbf722d1e7501e11593285562cb5fb783d08a.pdf"} {"title": "Understanding Transformer Reasoning Capabilities via Graph Algorithms", "url": "https://openreview.net/forum?id=AfzbDw6DSp", "detail_url": "https://openreview.net/forum?id=AfzbDw6DSp", "authors": "Clayton Sanford,Bahare Fatemi,Ethan Hall,Anton Tsitsulin,Mehran Kazemi,Jonathan Halcrow,Bryan Perozzi,Vahab Mirrokni", "tags": "NIPS 2024,Poster", "abstract": "Which transformer scaling regimes are able to perfectly solve different classes of algorithmic problems? While tremendous empirical advances have been attained by transformer-based neural networks, a theoretical understanding of their algorithmic reasoning capabilities in realistic parameter regimes is lacking.
We investigate this question in terms of the network\u2019s depth, width, and number of extra tokens for algorithm execution. Our novel representational hierarchy separates 9 algorithmic reasoning problems into classes solvable by transformers in different realistic parameter scaling regimes. We prove that logarithmic depth is necessary and sufficient for tasks like graph connectivity, while single-layer transformers with small embedding dimensions can solve contextual retrieval tasks. We also support our theoretical analysis with ample empirical evidence using the GraphQA benchmark. These results show that transformers excel at many graph reasoning tasks, even outperforming specialized graph neural networks.", "pdf": "https://openreview.net/pdf/d7fefe7e015fce36140124dc24c77636ac89cd5c.pdf"} {"title": "Mutual Information Estimation via $f$-Divergence and Data Derangements", "url": "https://openreview.net/forum?id=PThi9hf9UT", "detail_url": "https://openreview.net/forum?id=PThi9hf9UT", "authors": "Nunzio Alexandro Letizia,Nicola Novello,Andrea M Tonello", "tags": "NIPS 2024,Poster", "abstract": "Estimating mutual information accurately is pivotal across diverse applications, from machine learning to communications and biology, enabling us to gain insights into the inner mechanisms of complex systems. Yet, dealing with high-dimensional data presents a formidable challenge, due to its size and the presence of intricate relationships. Recently proposed neural methods employing variational lower bounds on the mutual information have gained prominence. However, these approaches suffer from either high bias or high variance, as the sample size and the structure of the loss function directly influence the training process. In this paper, we propose a novel class of discriminative mutual information estimators based on the variational representation of the $f$-divergence. We investigate the impact of the permutation function used to obtain the marginal training samples and present a novel architectural solution based on derangements. The proposed estimator is flexible since it exhibits an excellent bias/variance trade-off. The comparison with state-of-the-art neural estimators, through extensive experimentation within established reference scenarios, shows that our approach offers higher accuracy and lower complexity.", "pdf": "https://openreview.net/pdf/3501069f61bdb73e7304fe1435267c1eb2e285d2.pdf"} {"title": "Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe", "url": "https://openreview.net/forum?id=kVL5rvkqGG", "detail_url": "https://openreview.net/forum?id=kVL5rvkqGG", "authors": "Albert Q. Jiang,Alicja Ziarko,Bartosz Piotrowski,Wenda Li,Mateja Jamnik,Piotr Mi\u0142o\u015b", "tags": "NIPS 2024,Poster", "abstract": "Text embeddings are essential for tasks such as document retrieval, clustering, and semantic similarity assessment. In this paper, we study how to contrastively train text embedding models in a compute-optimal fashion, given a suite of pretrained decoder-only language models. Our innovation is an algorithm that produces optimal configurations of model sizes, data quantities, and fine-tuning methods for text-embedding models at different computational budget levels. The resulting recipe, which we obtain through extensive experiments, can be used by practitioners to make informed design choices for their embedding models. 
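The derangement trick from the mutual-information abstract above is easy to state: when building "marginal" pairs (x_i, y_pi(i)), require the permutation pi to have no fixed points, so no joint pair leaks into the marginal batch. A minimal rejection-sampling sketch:

```python
import numpy as np

def random_derangement(n, rng=None):
    """Sample a permutation with no fixed points (a derangement).

    Used to build marginal pairs (x_i, y_{pi(i)}) that are guaranteed not to
    reproduce any joint pair, unlike a plain random shuffle. Rejection
    sampling accepts with probability roughly 1/e per draw.
    """
    rng = rng or np.random.default_rng()
    while True:
        pi = rng.permutation(n)
        if not np.any(pi == np.arange(n)):
            return pi

# Usage: given joint samples X, Y of shape (n, dx), (n, dy),
# the marginal batch pairs each x_i with a y from a different sample:
# Y_marginal = Y[random_derangement(len(Y))]
```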
Specifically, our findings suggest that full fine-tuning and Low-Rank Adaptation fine-tuning produce optimal models at lower and higher computational budgets, respectively.", "pdf": "https://openreview.net/pdf/eefb08b5efe5e72f4d802bf9c78722becbd5a0f9.pdf"} {"title": "Scaling Laws for Reward Model Overoptimization in Direct Alignment Algorithms", "url": "https://openreview.net/forum?id=pf4OuJyn4Q", "detail_url": "https://openreview.net/forum?id=pf4OuJyn4Q", "authors": "Rafael Rafailov,Yaswanth Chittepu,Ryan Park,Harshit Sikchi,Joey Hejna,W. Bradley Knox,Chelsea Finn,Scott Niekum", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement Learning from Human Feedback (RLHF) has been crucial to the recent success of Large Language Models (LLMs); however, it is often a complex and brittle process. In the classical RLHF framework, a reward model is first trained to represent human preferences, which is in turn used by an online reinforcement learning (RL) algorithm to optimize the LLM. A prominent issue with such methods is reward over-optimization or reward hacking, where the performance as measured by the learned proxy reward model increases, but the true model quality plateaus or even deteriorates. Direct Alignment Algorithms (DAAs), such as Direct Preference Optimization (DPO), have emerged as alternatives to the classical RLHF pipeline. However, despite not training a separate proxy reward model or using RL, they still commonly deteriorate from over-optimization. While the so-called reward hacking phenomenon is not well-defined for DAAs, we still uncover similar trends: at higher KL-budgets, DAAs exhibit degradation patterns similar to those of their classic RLHF counterparts. In particular, we find that DAA methods deteriorate not only across a wide range of KL-budgets, but also often before even a single epoch of the dataset is completed. Through extensive empirical experimentation, this work formulates the reward over-optimization or hacking problem for DAAs and explores its consequences across objectives, training regimes, and model scales.", "pdf": "https://openreview.net/pdf/e28365fb2dd2a9b94f8225bee790989091ef456f.pdf"} {"title": "Mixture of Demonstrations for In-Context Learning", "url": "https://openreview.net/forum?id=uqxSLoCw3K", "detail_url": "https://openreview.net/forum?id=uqxSLoCw3K", "authors": "Song Wang,Zihan Chen,Chengshuai Shi,Cong Shen,Jundong Li", "tags": "NIPS 2024,Poster", "abstract": "In-Context Learning (ICL) empowers Large Language Models (LLMs) to tackle various tasks by providing input-output examples as additional inputs, referred to as demonstrations. Nevertheless, the performance of ICL could be easily impacted by the quality of selected demonstrations. Existing efforts generally learn a retriever model to score each demonstration for selecting suitable demonstrations; however, the effect is suboptimal due to the large search space and the noise from unhelpful demonstrations. In this study, we introduce MoD, which partitions the demonstration pool into groups, each governed by an expert, to reduce the search space. We further design an expert-wise training strategy to alleviate the impact of unhelpful demonstrations when optimizing the retriever model. During inference, experts collaboratively retrieve demonstrations for the input query to enhance the ICL performance.
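For reference alongside the scaling-laws abstract above, the standard DPO objective that DAAs build on fits in a few lines; beta controls the implicit KL budget being swept in that paper's experiments:

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization loss (standard published form).

    logp_w / logp_l: summed token log-probs of the chosen (w) and rejected (l)
    responses under the trainable policy; ref_logp_* are the same quantities
    under the frozen reference model. Larger beta keeps the policy closer
    to the reference.
    """
    ratio_w = logp_w - ref_logp_w
    ratio_l = logp_l - ref_logp_l
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()
```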
We validate MoD via experiments across a range of NLP datasets and tasks, demonstrating its state-of-the-art performance and shedding new light on the future design of retrieval methods for ICL.", "pdf": "https://openreview.net/pdf/fc5db0bdacac2fe51883bcf90ff63fbac5bdbf0d.pdf"} {"title": "VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks", "url": "https://openreview.net/forum?id=kuCY0mW4Q3", "detail_url": "https://openreview.net/forum?id=kuCY0mW4Q3", "authors": "Yang Li,Shaobo Han,Shihao Ji", "tags": "NIPS 2024,Poster", "abstract": "As the adoption of large language models increases and the need for per-user or per-task model customization grows, the parameter-efficient fine-tuning (PEFT) methods, such as low-rank adaptation (LoRA) and its variants, incur substantial storage and transmission costs. To further reduce stored parameters, we introduce a \"divide-and-share\" paradigm that breaks the barriers of low-rank decomposition across matrix dimensions, modules, and layers by sharing parameters globally via a vector bank. As an instantiation of the paradigm to LoRA, our proposed VB-LoRA composes all the low-rank matrices of LoRA from a shared vector bank with a differentiable top-$k$ admixture module. VB-LoRA achieves extreme parameter efficiency while maintaining comparable or better performance compared to state-of-the-art PEFT methods. Extensive experiments demonstrate the effectiveness of VB-LoRA on natural language understanding, natural language generation, instruction tuning, and mathematical reasoning tasks. When fine-tuning the Llama2-13B model, VB-LoRA only uses 0.4% of LoRA's stored parameters, yet achieves superior results. Our source code is available at https://github.com/leo-yangli/VB-LoRA. This method has been merged into the Hugging Face PEFT package.", "pdf": "https://openreview.net/pdf/d0ded460792ec7f1fc26f1bf560cf85baa3118db.pdf"} {"title": "The Impact of Geometric Complexity on Neural Collapse in Transfer Learning", "url": "https://openreview.net/forum?id=PLbFid00aU", "detail_url": "https://openreview.net/forum?id=PLbFid00aU", "authors": "Michael Munn,Benoit Dherin,Javier Gonzalvo", "tags": "NIPS 2024,Poster", "abstract": "Many of the recent advances in computer vision and language models can be attributed to the success of transfer learning via the pre-training of large foundation models. However, a theoretical framework that explains this empirical success remains incomplete and an active area of research. Flatness of the loss surface and neural collapse have recently emerged as useful pre-training metrics which shed light on the implicit biases underlying pre-training. In this paper, we explore the geometric complexity of a model's learned representations as a fundamental mechanism that relates these two concepts. We show through experiments and theory that mechanisms which affect the geometric complexity of the pre-trained network also influence the neural collapse.
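A minimal sketch of the top-$k$ admixture step described in the VB-LoRA abstract above: every sub-vector of every low-rank factor is a sparse mixture of one global vector bank. Shapes and names here are illustrative, not the package's API:

```python
import torch
import torch.nn.functional as F

def compose_from_bank(bank, logits, k=2):
    """Form one sub-vector as a top-k admixture of a shared vector bank.

    bank:   (num_vectors, b) global vector bank shared across all modules
    logits: (num_vectors,) learnable selection logits for this sub-vector
    Only the top-k bank vectors receive nonzero weight, so the stored
    parameters per sub-vector reduce to k indices and k mixture weights.
    """
    top_vals, top_idx = torch.topk(logits, k)
    weights = F.softmax(top_vals, dim=-1)   # (k,)
    return weights @ bank[top_idx]          # (b,)
```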
Furthermore, we show how this effect of the geometric complexity generalizes to the neural collapse of new classes as well, thus encouraging better performance on downstream tasks, particularly in the few-shot setting.", "pdf": "https://openreview.net/pdf/4a8f5a51cf4f2f62fae15a4d92e58f25651664b5.pdf"} {"title": "Fine-Tuning is Fine, if Calibrated", "url": "https://openreview.net/forum?id=XRJXKBeeTD", "detail_url": "https://openreview.net/forum?id=XRJXKBeeTD", "authors": "Zheda Mai,Arpita Chowdhury,Ping Zhang,Cheng-Hao Tu,Hong-You Chen,Vardaan Pahuja,Tanya Berger-Wolf,Song Gao,Charles Stewart,Yu Su,Wei-Lun Chao", "tags": "NIPS 2024,Poster", "abstract": "Fine-tuning is arguably the most straightforward way to tailor a pre-trained model (e.g., a foundation model) to downstream applications, but it also comes with the risk of losing valuable knowledge the model had learned in pre-training. For example, fine-tuning a pre-trained classifier capable of recognizing a large number of classes to master a subset of classes at hand is shown to drastically degrade the model's accuracy in the other classes it had previously learned. As such, it is hard to further use the fine-tuned model when it encounters classes beyond the fine-tuning data. In this paper, we systematically dissect the issue, aiming to answer the fundamental question, \"What has been damaged in the fine-tuned model?\" To our surprise, we find that the fine-tuned model neither forgets the relationship among the other classes nor degrades the features to recognize these classes. Instead, the fine-tuned model often produces more discriminative features for these other classes, even if they were missing during fine-tuning! What really hurts the accuracy is the discrepant logit scales between the fine-tuning classes and the other classes, implying that a simple post-processing calibration would bring back the pre-trained model's capability and at the same time unveil the feature improvement over all classes. We conduct an extensive empirical study to demonstrate the robustness of our findings and provide preliminary explanations underlying them, suggesting new directions for future theoretical analysis.", "pdf": "https://openreview.net/pdf/03216393b8d5442283cc9fa69b62f9c7a7120339.pdf"} {"title": "Adversarially Robust Decision Transformer", "url": "https://openreview.net/forum?id=WEf2LT8NtY", "detail_url": "https://openreview.net/forum?id=WEf2LT8NtY", "authors": "Xiaohang Tang,Afonso Marques,Parameswaran Kamalaruban,Ilija Bogunovic", "tags": "NIPS 2024,Poster", "abstract": "Decision Transformer (DT), as one of the representative Reinforcement Learning via Supervised Learning (RvS) methods, has achieved strong performance in offline learning tasks by leveraging the powerful Transformer architecture for sequential decision-making. However, in adversarial environments, these methods can be non-robust, since the return is dependent on the strategies of both the decision-maker and adversary. Training a probabilistic model conditioned on observed return to predict action can fail to generalize, as the trajectories that achieve a return in the dataset might have done so due to a suboptimal behavior adversary. To address this, we propose a worst-case-aware RvS algorithm, the Adversarially Robust Decision Transformer (ARDT), which learns and conditions the policy on in-sample minimax returns-to-go. 
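The fix suggested by the fine-tuning abstract above is post hoc and tiny: offset the logits of classes absent from fine-tuning to undo the discrepant scales. A sketch with a hypothetical scalar offset `gamma` that would be tuned on held-out data:

```python
import torch

def calibrated_predict(logits, finetune_classes, gamma):
    """Post-hoc calibration across fine-tuning vs. absent classes (sketch).

    logits: (b, num_classes) from the fine-tuned model over ALL classes.
    gamma:  scalar added to the logits of classes absent from fine-tuning,
            compensating for their systematically smaller logit scale.
    """
    mask = torch.ones(logits.shape[1], dtype=torch.bool)
    mask[finetune_classes] = False          # True = class absent from fine-tuning
    adjusted = logits.clone()
    adjusted[:, mask] += gamma
    return adjusted.argmax(dim=-1)
```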
\nARDT aligns the target return with the worst-case return learned through minimax expectile regression, thereby enhancing robustness against powerful test-time adversaries. In experiments conducted on sequential games with full data coverage, ARDT can generate a maximin (Nash Equilibrium) strategy, the solution with the largest adversarial robustness. In large-scale sequential games and continuous adversarial RL environments with partial data coverage, ARDT demonstrates significantly superior robustness to powerful test-time adversaries and attains higher worst-case returns compared to contemporary DT methods.", "pdf": "https://openreview.net/pdf/1b9376570681f65dffb0184cd4e6ab76cdc18367.pdf"} {"title": "User-Creator Feature Polarization in Recommender Systems with Dual Influence", "url": "https://openreview.net/forum?id=yWq89o19wf", "detail_url": "https://openreview.net/forum?id=yWq89o19wf", "authors": "Tao Lin,Kun Jin,Andrew Estornell,Xiaoying Zhang,Yiling Chen,Yang Liu", "tags": "NIPS 2024,Poster", "abstract": "Recommender systems serve the dual purpose of presenting relevant content to users and helping content creators reach their target audience. The dual nature of these systems naturally influences both users and creators: users' preferences are affected by the items they are recommended, while creators may be incentivized to alter their content to attract more users. We define a model, called user-creator feature dynamics, to capture the dual influence of recommender systems. We prove that a recommender system with dual influence is guaranteed to polarize, causing diversity loss in the system. We then investigate, both theoretically and empirically, approaches for mitigating polarization and promoting diversity in recommender systems. Unexpectedly, we find that common diversity-promoting approaches do not work in the presence of dual influence, while relevancy-optimizing methods like top-$k$ truncation can prevent polarization and improve diversity of the system.", "pdf": "https://openreview.net/pdf/85ebe96bf9fedbc2fcdde28d66e0c8df3b4c3061.pdf"} {"title": "LoQT: Low-Rank Adapters for Quantized Pretraining", "url": "https://openreview.net/forum?id=Pnv8C0bU9t", "detail_url": "https://openreview.net/forum?id=Pnv8C0bU9t", "authors": "Sebastian Bugge Loeschcke,Mads Toftrup,Michael Kastoryano,Serge Belongie,V\u00e9steinn Sn\u00e6bjarnarson", "tags": "NIPS 2024,Poster", "abstract": "Despite advances using low-rank adapters and quantization, pretraining of large models on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose Low-Rank Adapters for Quantized Training (LoQT), a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models. We demonstrate this for language modeling and downstream task adaptation, finding that LoQT enables efficient training of models up to 7B parameters on a 24GB GPU. 
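A sketch of the periodic merge at the core of the LoQT abstract above; `quantize` and `dequantize` are placeholders for whichever quantization scheme is used, and the adapter reset is simplified (the paper re-initializes the factors via gradient-based tensor factorization rather than zeroing them):

```python
import torch

def loqt_merge(W_q, A, B, dequantize, quantize):
    """Periodic LoQT-style merge of a low-rank update into quantized weights.

    W_q holds the quantized full-rank matrix; (A, B) are the trainable
    low-rank factors with update B @ A. After merging, the adapters are
    reset so training continues on a fresh low-rank subspace.
    """
    W = dequantize(W_q) + B @ A    # fold the adapter into full rank
    W_q_new = quantize(W)          # re-quantize the merged weights
    A_new = torch.zeros_like(A)    # simplified reset; the paper uses
    B_new = torch.zeros_like(B)    # gradient-based re-initialization
    return W_q_new, A_new, B_new
```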
We also demonstrate the feasibility of training a 13B model using per-layer gradient updates on the same hardware.", "pdf": "https://openreview.net/pdf/d532678cf4637d7d315ebd723285c6b6b58529b1.pdf"} {"title": "Unifying Generation and Prediction on Graphs with Latent Graph Diffusion", "url": "https://openreview.net/forum?id=lvibangnAs", "detail_url": "https://openreview.net/forum?id=lvibangnAs", "authors": "Cai Zhou,Xiyuan Wang,Muhan Zhang", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we propose the first framework that enables solving graph learning tasks of all levels (node, edge and graph) and all types (generation, regression and classification) using one formulation. We first formulate prediction tasks including regression and classification into a generic (conditional) generation framework, which enables diffusion models to perform deterministic tasks with provable guarantees. We then propose Latent Graph Diffusion (LGD), a generative model that can generate node, edge, and graph-level features of all categories simultaneously. We achieve this goal by embedding the graph structures and features into a latent space leveraging a powerful encoder and decoder, then training a diffusion model in the latent space. LGD is also capable of conditional generation through a specifically designed cross-attention mechanism. Leveraging LGD and the ``all tasks as generation'' formulation, our framework is capable of solving graph tasks of various levels and types. We verify the effectiveness of our framework with extensive experiments, where our models achieve state-of-the-art or highly competitive results across a wide range of generation and regression tasks.", "pdf": "https://openreview.net/pdf/a4a43894359b12777ca40a61fe02fe738116f751.pdf"} {"title": "On the Role of Attention Masks and LayerNorm in Transformers", "url": "https://openreview.net/forum?id=lIH6oCdppg", "detail_url": "https://openreview.net/forum?id=lIH6oCdppg", "authors": "Xinyi Wu,Amir Ajorlou,Yifei Wang,Stefanie Jegelka,Ali Jadbabaie", "tags": "NIPS 2024,Poster", "abstract": "Self-attention is the key mechanism of transformers, which are the essential building blocks of modern foundation models. Recent studies have shown that pure self-attention suffers from an increasing degree of rank collapse as depth increases, limiting model expressivity and further utilization of model depth. The existing literature on rank collapse, however, has mostly overlooked other critical components in transformers that may alleviate the rank collapse issue. In this paper, we provide a general analysis of rank collapse under self-attention, taking into account the effects of attention masks and layer normalization (LayerNorm). In particular, we find that although pure masked attention still suffers from exponential collapse to a rank one subspace, sparse or local masked attention can provably slow down the collapse rate. In the case of self-attention with LayerNorm, we first show that for certain classes of value matrices, collapse to a rank one subspace still happens exponentially. However, through construction of nontrivial counterexamples, we then establish that with proper choice of value matrices, a general class of sequences may not converge to a rank one subspace, and the self-attention dynamics with LayerNorm can simultaneously possess a rich set of equilibria with any possible rank between one and full. 
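The rank-collapse phenomenon analyzed in the abstract above can be reproduced qualitatively in a few lines: iterate pure single-head self-attention (no LayerNorm, no mask, identity projections, an assumption-laden simplification of the paper's setting) and track the distance of the token matrix to its best rank-one approximation:

```python
import numpy as np

def rank_one_residual(X):
    """Relative distance of X to its best rank-one approximation."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sqrt(np.sum(s[1:] ** 2)) / np.sqrt(np.sum(s ** 2))

def softmax(Z):
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 32))          # 16 tokens, width 32
for layer in range(30):
    A = softmax(X @ X.T / np.sqrt(X.shape[1]))  # attention, Wq = Wk = Wv = I
    X = A @ X                                   # pure attention update
    if layer % 10 == 9:
        # residual decays toward 0: tokens collapse to a common direction
        print(layer + 1, rank_one_residual(X))
```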
Our result refutes the previous hypothesis that LayerNorm plays no role in the rank collapse of self-attention and suggests that self-attention with LayerNorm constitutes a much more expressive, versatile nonlinear dynamical system than what was originally thought.", "pdf": "https://openreview.net/pdf/e3f16b5718fda36c132a629e2933050ff3bab9e7.pdf"} {"title": "Grammar-Aligned Decoding", "url": "https://openreview.net/forum?id=5G7ve8E1Lu", "detail_url": "https://openreview.net/forum?id=5G7ve8E1Lu", "authors": "Kanghee Park,Jiayu Wang,Taylor Berg-Kirkpatrick,Nadia Polikarpova,Loris D'Antoni", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) struggle with reliably generating highly structured outputs, such as program code, mathematical formulas, or well-formed markup. Constrained decoding approaches mitigate this problem by greedily restricting what tokens an LLM can output at each step to guarantee that the output matches a given constraint. Specifically, in grammar-constrained decoding (GCD), the LLM's output must follow a given grammar. In this paper we demonstrate that GCD techniques (and in general constrained decoding techniques) can distort the LLM's distribution, leading to outputs that are grammatical but appear with likelihoods that are not proportional to the ones given by the LLM, and so ultimately are low-quality. We call the problem of aligning sampling with a grammar constraint, grammar-aligned decoding (GAD), and propose adaptive sampling with approximate expected futures (ASAp), a decoding algorithm that guarantees the output to be grammatical while provably producing outputs that match the conditional probability of the LLM's distribution conditioned on the given grammar constraint. Our algorithm uses prior sample outputs to soundly overapproximate the future grammaticality of different output prefixes. Our evaluation on code generation and structured NLP tasks shows how ASAp often produces outputs with higher likelihood (according to the LLM's distribution) than existing GCD techniques, while still enforcing the desired grammatical constraints.", "pdf": "https://openreview.net/pdf/504817cfb495898362dd26d0e8f8d704bfc7e323.pdf"} {"title": "Symmetry-Informed Governing Equation Discovery", "url": "https://openreview.net/forum?id=aeGSA8UoXF", "detail_url": "https://openreview.net/forum?id=aeGSA8UoXF", "authors": "Jianke Yang,Wang Rao,Nima Dehmamy,Robin Walters,Rose Yu", "tags": "NIPS 2024,Poster", "abstract": "Despite the advancements in learning governing differential equations from observations of dynamical systems, data-driven methods are often unaware of fundamental physical laws, such as frame invariance. As a result, these algorithms may search an unnecessarily large space and discover less accurate or overly complex equations. In this paper, we propose to leverage symmetry in automated equation discovery to compress the equation search space and improve the accuracy and simplicity of the learned equations. Specifically, we derive equivariance constraints from the time-independent symmetries of ODEs. Depending on the types of symmetries, we develop a pipeline for incorporating symmetry constraints into various equation discovery algorithms, including sparse regression and genetic programming. 
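The distortion motivating grammar-aligned decoding above can be seen on a toy two-token language model: greedy per-step masking with renormalization (GCD) assigns sequence probabilities different from the model's distribution conditioned on grammaticality, which is GAD's target. A hand-computable example:

```python
# Toy LM over two-token strings from {a, b}; the grammar forbids "aa".
p_first = {"a": 0.9, "b": 0.1}
p_second = {"a": {"a": 0.5, "b": 0.5}, "b": {"a": 0.5, "b": 0.5}}

# GAD's target: the LM's joint distribution conditioned on grammaticality.
joint = {f + s: p_first[f] * p_second[f][s]
         for f in "ab" for s in "ab" if f + s != "aa"}
Z = sum(joint.values())                      # 0.55
target = {w: p / Z for w, p in joint.items()}  # P(ab | grammatical) ~ 0.818

# Greedy GCD: both first tokens can still be completed, so "a" keeps prob
# 0.9; after "a" only "b" survives renormalization, giving P(ab) = 0.9.
gcd = {"ab": 0.9, "ba": 0.05, "bb": 0.05}
print(target["ab"], gcd["ab"])               # ~0.818 vs 0.9: distorted upward
```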
In experiments across diverse dynamical systems, our approach demonstrates better robustness against noise and recovers governing equations with significantly higher probability than baselines without symmetry.", "pdf": "https://openreview.net/pdf/250a0df869100c633849c08a12d81753505c0f07.pdf"} {"title": "Robust Conformal Prediction Using Privileged Information", "url": "https://openreview.net/forum?id=kkmPe0rzY1", "detail_url": "https://openreview.net/forum?id=kkmPe0rzY1", "authors": "Shai Feldman,Yaniv Romano", "tags": "NIPS 2024,Poster", "abstract": "We develop a method to generate prediction sets with a guaranteed coverage rate that is robust to corruptions in the training data, such as missing or noisy variables. \nOur approach builds on conformal prediction, a powerful framework to construct prediction sets that are valid under the i.i.d. assumption. Importantly, naively applying conformal prediction does not provide reliable predictions in this setting, due to the distribution shift induced by the corruptions. \nTo account for the distribution shift, we assume access to privileged information (PI). The PI is formulated as additional features that explain the distribution shift; however, they are only available during training and absent at test time.\nWe approach this problem by introducing a novel generalization of weighted conformal prediction and support our method with theoretical coverage guarantees. \nEmpirical experiments on both real and synthetic datasets indicate that our approach achieves a valid coverage rate and constructs more informative predictions compared to existing methods, which are not supported by theoretical guarantees.", "pdf": "https://openreview.net/pdf/17e2a30dc421d29baba4305ca20cf807ff7774e7.pdf"} {"title": "Empowering Active Learning for 3D Molecular Graphs with Geometric Graph Isomorphism", "url": "https://openreview.net/forum?id=He2GCHeRML", "detail_url": "https://openreview.net/forum?id=He2GCHeRML", "authors": "Ronast Subedi,Lu Wei,Wenhan Gao,Shayok Chakraborty,Yi Liu", "tags": "NIPS 2024,Poster", "abstract": "Molecular learning is pivotal in many real-world applications, such as drug discovery. Supervised learning requires heavy human annotation, which is particularly challenging for molecular data, e.g., the commonly used density functional theory (DFT) is highly computationally expensive. Active learning (AL) automatically queries labels for the most informative samples, thereby remarkably alleviating the annotation hurdle. In this paper, we present a principled AL paradigm for molecular learning, where we treat molecules as 3D molecular graphs. Specifically, we propose a new diversity sampling method to eliminate mutual redundancy built on distributions of 3D geometries. We first propose a set of new 3D graph isometries for 3D graph isomorphism analysis. Our method is provably at least as expressive as the Geometric Weisfeiler-Lehman (GWL) test. The moments of the distributions of the associated geometries are then extracted for efficient diversity computing. To ensure our AL paradigm selects samples with maximal uncertainties, we carefully design a Bayesian geometric graph neural network to compute uncertainties specifically for 3D molecular graphs. We pose active sampling as a quadratic programming (QP) problem using the proposed components.
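For context on the robust conformal abstract above, the i.i.d. split-conformal baseline it extends is a one-quantile computation; the paper's contribution is reweighting this procedure under the corruption-induced shift using privileged information. A minimal sketch of the baseline only:

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Standard split conformal interval (the i.i.d. baseline, not the
    paper's privileged-information-weighted extension).

    Nonconformity score: absolute residual on a held-out calibration set.
    Under exchangeability, the interval covers the true label with
    probability at least 1 - alpha.
    """
    scores = np.abs(cal_labels - cal_preds)
    n = len(scores)
    # finite-sample-corrected quantile level
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return test_pred - q, test_pred + q
```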
Experimental results demonstrate the effectiveness of our AL paradigm, as well as the proposed diversity and uncertainty methods.", "pdf": "https://openreview.net/pdf/55839f775cb921be502c4ab77345db777189a822.pdf"} {"title": "A Flexible, Equivariant Framework for Subgraph GNNs via Graph Products and Graph Coarsening", "url": "https://openreview.net/forum?id=9cFyqhjEHC", "detail_url": "https://openreview.net/forum?id=9cFyqhjEHC", "authors": "Guy Bar-Shalom,Yam Eitan,Fabrizio Frasca,Haggai Maron", "tags": "NIPS 2024,Poster", "abstract": "Subgraph GNNs enhance message-passing GNNs expressivity by representing graphs as sets of subgraphs, demonstrating impressive performance across various tasks. However, their scalability is hindered by the need to process large numbers of subgraphs. While previous approaches attempted to generate smaller subsets of subgraphs through random or learnable sampling, these methods often yielded suboptimal selections or were limited to small subset sizes, ultimately compromising their effectiveness. This paper introduces a new Subgraph GNN framework to address these issues. \nOur approach diverges from most previous methods by associating subgraphs with node clusters rather than with individual nodes. We show that the resulting collection of subgraphs can be viewed as the product of coarsened and original graphs, unveiling a new connectivity structure on which we perform generalized message passing.\n\nCrucially, controlling the coarsening function enables meaningful selection of any number of subgraphs. In addition, we reveal novel permutation symmetries in the resulting node feature tensor, characterize associated linear equivariant layers, and integrate them into our Subgraph GNN. We also introduce novel node marking strategies and provide a theoretical analysis of their expressive power and other key aspects of our approach. Extensive experiments on multiple graph learning benchmarks demonstrate that our method is significantly more flexible than previous approaches, as it can seamlessly handle any number of subgraphs, while consistently outperforming baseline approaches. \nOur code is available at https://github.com/BarSGuy/Efficient-Subgraph-GNNs.", "pdf": "https://openreview.net/pdf/f366866b89a021aabbbdbc564209bc7ac0008082.pdf"} {"title": "Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy", "url": "https://openreview.net/forum?id=YaPhvbGqwO", "detail_url": "https://openreview.net/forum?id=YaPhvbGqwO", "authors": "Cameron Allen,Aaron T. Kirtland,Ruo Yu Tao,Sam Lobel,Daniel Scott,Nicholas Petrocelli,Omer Gottesman,Ronald Parr,Michael Littman,George Konidaris", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning algorithms typically rely on the assumption that the environment dynamics and value function can be expressed in terms of a Markovian state representation. However, when state information is only partially observable, how can an agent learn such a state representation, and how can it detect when it has found one? We introduce a metric that can accomplish both objectives, without requiring access to---or knowledge of---an underlying, unobservable state space. Our metric, the \u03bb-discrepancy, is the difference between two distinct temporal difference (TD) value estimates, each computed using TD(\u03bb) with a different value of \u03bb. 
Since TD(\u03bb=0) makes an implicit Markov assumption and TD(\u03bb=1) does not, a discrepancy between these estimates is a potential indicator of a non-Markovian state representation. Indeed, we prove that the \u03bb-discrepancy is exactly zero for all Markov decision processes and almost always non-zero for a broad class of partially observable environments. We also demonstrate empirically that, once detected, minimizing the \u03bb-discrepancy can help with learning a memory function to mitigate the corresponding partial observability. We then train a reinforcement learning agent that simultaneously constructs two recurrent value networks with different \u03bb parameters and minimizes the difference between them as an auxiliary loss. The approach scales to challenging partially observable domains, where the resulting agent frequently performs significantly better (and never performs worse) than a baseline recurrent agent with only a single value network.", "pdf": "https://openreview.net/pdf/677c00d620d79ecf3e441f1574c27cd2a075022d.pdf"} {"title": "UniBias: Unveiling and Mitigating LLM Bias through Internal Attention and FFN Manipulation", "url": "https://openreview.net/forum?id=luQiVmnviX", "detail_url": "https://openreview.net/forum?id=luQiVmnviX", "authors": "Hanzhang Zhou,Zijian Feng,Zixiao Zhu,Junlang Qian,Kezhi Mao", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have demonstrated impressive capabilities in various tasks using the in-context learning (ICL) paradigm. However, their effectiveness is often compromised by inherent bias, leading to prompt brittleness\u2014sensitivity to design settings such as example selection, order, and prompt formatting. Previous studies have addressed LLM bias through external adjustment of model outputs, but the internal mechanisms that lead to such bias remain unexplored. Our work delves into these mechanisms, particularly investigating how feedforward neural networks (FFNs) and attention heads result in the bias of LLMs. By interpreting the contribution of individual FFN vectors and attention heads, we identify the biased LLM components that skew LLMs' predictions toward specific labels. To mitigate these biases, we introduce UniBias, an inference-only method that effectively identifies and eliminates biased FFN vectors and attention heads. Extensive experiments across 12 NLP datasets demonstrate that UniBias significantly enhances ICL performance and alleviates prompt brittleness of LLMs.", "pdf": "https://openreview.net/pdf/358d90ff7d8d269005fc19a00baabfed20752d30.pdf"} {"title": "UDC: A Unified Neural Divide-and-Conquer Framework for Large-Scale Combinatorial Optimization Problems", "url": "https://openreview.net/forum?id=dCgbyvmlwL", "detail_url": "https://openreview.net/forum?id=dCgbyvmlwL", "authors": "Zhi Zheng,Changliang Zhou,Tong Xialiang,Mingxuan Yuan,Zhenkun Wang", "tags": "NIPS 2024,Poster", "abstract": "Single-stage neural combinatorial optimization solvers have achieved near-optimal results on various small-scale combinatorial optimization (CO) problems without requiring expert knowledge. However, these solvers exhibit significant performance degradation when applied to large-scale CO problems. Recently, two-stage neural methods motivated by divide-and-conquer strategies have shown efficiency in addressing large-scale CO problems.
Nevertheless, the performance of these methods relies heavily on problem-specific heuristics in either the dividing or the conquering procedure, which limits their applicability to general CO problems. Moreover, these methods employ separate training schemes and ignore the interdependencies between the dividing and conquering strategies, often leading to sub-optimal solutions. To tackle these drawbacks, this article develops a unified neural divide-and-conquer framework (i.e., UDC) for solving general large-scale CO problems. UDC offers a Divide-Conquer-Reunion (DCR) training method to eliminate the negative impact of a sub-optimal dividing policy. Employing a high-efficiency Graph Neural Network (GNN) for global instance dividing and a fixed-length sub-path solver for conquering divided sub-problems, the proposed UDC framework demonstrates extensive applicability, achieving superior performance in 10 representative large-scale CO problems. The code is available at https://github.com/CIAM-Group/NCO_code/tree/main/single_objective/UDC-Large-scale-CO-master", "pdf": "https://openreview.net/pdf/62b7143496cc33a5dbe9b13c98094f20b2043fc5.pdf"} {"title": "FedNE: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction", "url": "https://openreview.net/forum?id=zBMKodNgKX", "detail_url": "https://openreview.net/forum?id=zBMKodNgKX", "authors": "Ziwei Li,Xiaoqi Wang,Hong-You Chen,Han Wei Shen,Wei-Lun Chao", "tags": "NIPS 2024,Poster", "abstract": "Federated learning (FL) has rapidly evolved as a promising paradigm that enables collaborative model training across distributed participants without exchanging their local data. Despite its broad applications in fields such as computer vision, graph learning, and natural language processing, the development of a data projection model that can be effectively used to visualize data in the context of FL is crucial yet remains heavily under-explored. Neighbor embedding (NE) is an essential technique for visualizing complex high-dimensional data, but collaboratively learning a joint NE model is difficult. The key challenge lies in the objective function, as effective visualization algorithms like NE require computing loss functions among pairs of data. \nIn this paper, we introduce \textsc{FedNE}, a novel approach that integrates the \textsc{FedAvg} framework with the contrastive NE technique, without any requirement of shareable data. To address the lack of inter-client repulsion which is crucial for the alignment in the global embedding space, we develop a surrogate loss function that each client learns and shares with the others. Additionally, we propose a data-mixing strategy to augment the local data, aiming to relax the problems of invisible neighbors and false neighbors constructed by the local $k$NN graphs. We conduct comprehensive experiments on both synthetic and real-world datasets.
The results demonstrate that our \\textsc{FedNE} can effectively preserve the neighborhood data structures and enhance the alignment in the global embedding space compared to several baseline methods.", "pdf": "https://openreview.net/pdf/713ead3a7d3c84218a49bae4d46cdf7a3a34d042.pdf"} {"title": "DisCEdit: Model Editing by Identifying Discriminative Components", "url": "https://openreview.net/forum?id=tuiqq1G8I5", "detail_url": "https://openreview.net/forum?id=tuiqq1G8I5", "authors": "Chaitanya Murti,Chiranjib Bhattacharyya", "tags": "NIPS 2024,Poster", "abstract": "Model editing is a growing area of research that is particularly valuable in contexts where modifying key model components, like neurons or filters, can significantly impact the model\u2019s performance. The key challenge lies in identifying important components useful to the model\u2019s predictions. We apply model editing to address two active areas of research, Structured Pruning, and Selective Class Forgetting. In this work, we adopt a distributional approach to the problem of identifying important components, leveraging the recently proposed discriminative filters hypothesis, which states that well-trained (convolutional) models possess discriminative filters that are essential to prediction. To do so, we define discriminative ability in terms of the Bayes error rate associated with the feature distributions, which is equivalent to computing the Total Variation (TV) distance between the distributions. However, computing the TV distance is intractable, motivating us to derive novel witness function-based lower bounds on the TV distance that require no assumptions on the underlying distributions; using this bound generalizes prior work such as Murti et al. [39] that relied on unrealistic Gaussianity assumptions on the feature distributions. With these bounds, we are able to discover critical subnetworks responsible for classwise predictions, and derive DISCEDIT-SP and DISCEDIT-U , algorithms for structured pruning requiring no access to the training data and loss function, and selective forgetting respectively. We apply DISCEDIT-U to selective class forgetting on models trained on CIFAR10 and CIFAR100, and we show that on average, we can reduce accuracy on a single class by over 80% with a minimal reduction in test accuracy on the remaining classes. Similarly, on Structured pruning problems, we obtain 40.8% sparsity on ResNet50 on Imagenet, with only a 2.6% drop in accuracy with minimal fine-tuning.", "pdf": "https://openreview.net/pdf/dbd295af5e5b4cdccb26116ce210661a174d96e3.pdf"} {"title": "Equivariant spatio-hemispherical networks for diffusion MRI deconvolution", "url": "https://openreview.net/forum?id=MxWpCherzD", "detail_url": "https://openreview.net/forum?id=MxWpCherzD", "authors": "Axel Elaldi,Guido Gerig,Neel Dey", "tags": "NIPS 2024,Poster", "abstract": "Each voxel in a diffusion MRI (dMRI) image contains a spherical signal corresponding to the direction and strength of water diffusion in the brain. This paper advances the analysis of such spatio-spherical data by developing convolutional network layers that are equivariant to the $\\mathbf{E(3) \\times SO(3)}$ group and account for the physical symmetries of dMRI including rotations, translations, and reflections of space alongside voxel-wise rotations. Further, neuronal fibers are typically antipodally symmetric, a fact we leverage to construct highly efficient spatio-*hemispherical* graph convolutions to accelerate the analysis of high-dimensional dMRI data. 
In the context of sparse spherical fiber deconvolution to recover white matter microstructure, our proposed equivariant network layers yield substantial performance and efficiency gains, leading to better and more practical resolution of crossing neuronal fibers and fiber tractography. These gains are experimentally consistent across both simulation and in vivo human datasets.", "pdf": "https://openreview.net/pdf/07454cead061994ce354011bd5b90e9556232add.pdf"} {"title": "SkipPredict: When to Invest in Predictions for Scheduling", "url": "https://openreview.net/forum?id=kVuw8vzsqZ", "detail_url": "https://openreview.net/forum?id=kVuw8vzsqZ", "authors": "Rana Shahout,Michael Mitzenmacher", "tags": "NIPS 2024,Poster", "abstract": "Expanding on recent work on scheduling with predicted job sizes, we consider the effect of the cost of predictions in queueing systems, removing the assumption in prior research that predictions are external to the system\u2019s resources and/or cost-free. Additionally, we introduce a novel approach to utilizing predictions, SkipPredict, designed to address their inherent cost. Rather than uniformly applying predictions to all jobs, we propose a tailored approach that categorizes jobs to improve the effectiveness of prediction on performance. To achieve this, we employ one-bit \u201ccheap predictions\u201d to classify jobs as either short or long. SkipPredict prioritizes predicted short jobs over long jobs, and for the long jobs, SkipPredict applies a second round of more detailed \u201cexpensive predictions\u201d to approximate Shortest Remaining Processing Time for these jobs. Importantly, our analyses take into account the cost of prediction. We derive closed-form formulas that calculate the mean response time of jobs with size predictions, accounting for the prediction cost. We examine the effect of this cost for two distinct models in real-world and synthetic datasets. In the external cost model, predictions are generated by an external method without impacting job service times but incur a cost. In the server time cost model, predictions themselves require server processing time and are scheduled on the same server as the jobs.", "pdf": "https://openreview.net/pdf/5162380d9c32a0fd2737b61b998c536a3eec2ceb.pdf"} {"title": "Deep Equilibrium Algorithmic Reasoning", "url": "https://openreview.net/forum?id=SuLxkxCENa", "detail_url": "https://openreview.net/forum?id=SuLxkxCENa", "authors": "Dobrik Georgiev Georgiev,JJ Wilson,Davide Buffelli,Pietro Lio", "tags": "NIPS 2024,Poster", "abstract": "Neural Algorithmic Reasoning (NAR) research has demonstrated that graph neural networks (GNNs) could learn to execute classical algorithms. However, most previous approaches have used a recurrent architecture, where each iteration of the GNN matches an iteration of the algorithm. In this paper we study neurally solving algorithms from a different perspective: since the algorithm\u2019s solution is often an equilibrium, it is possible to find the solution directly by solving an equilibrium equation. Our approach requires no information on the ground-truth number of steps of the algorithm, during both training and test time. Furthermore, the proposed method improves the performance of GNNs on executing algorithms and is a step towards speeding up existing NAR models. Our empirical evidence, leveraging algorithms from the CLRS-30 benchmark, validates that one can train a network to solve algorithmic problems by directly finding the equilibrium.
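A toy stand-in for this equilibrium view (our own example, not the paper's model): Bellman-Ford relaxation is an update operator whose fixed point is the shortest-path solution, so iterating it to convergence finds the answer without knowing the ground-truth number of algorithm steps.

```python
# Sketch: finding an algorithm's output as the equilibrium of an update operator.
# Iterating a Bellman-Ford relaxation to a fixed point recovers shortest paths
# without a step count (a toy analogue of a deep-equilibrium reasoner).
import numpy as np

def relax(d, W):
    """One synchronous relaxation step: d_v <- min(d_v, min_u d_u + W[u, v])."""
    return np.minimum(d, (d[:, None] + W).min(axis=0))

n = 6
W = np.full((n, n), np.inf)
for u, v, w in [(0, 1, 2.0), (1, 2, 2.0), (0, 2, 5.0),
                (2, 3, 1.0), (3, 4, 3.0), (1, 4, 9.0), (4, 5, 1.0)]:
    W[u, v] = w

d = np.full(n, np.inf)
d[0] = 0.0
for _ in range(100):            # iterate until d* = relax(d*, W)
    d_new = relax(d, W)
    if np.allclose(d_new, d):   # np.allclose treats matching infinities as equal
        break
    d = d_new
print("equilibrium shortest-path distances from node 0:", d)
```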
We discuss the practical implementation of such models and propose regularisations to improve the performance of these equilibrium reasoners.", "pdf": "https://openreview.net/pdf/8b85ffde2de0fe0a152c8cfa917ab32b850cee6b.pdf"} {"title": "Soft-Label Integration for Robust Toxicity Classification", "url": "https://openreview.net/forum?id=iYkhThIXG1", "detail_url": "https://openreview.net/forum?id=iYkhThIXG1", "authors": "Zelei Cheng,Xian Wu,Jiahao Yu,Shuo Han,Xin-Qiang Cai,Xinyu Xing", "tags": "NIPS 2024,Poster", "abstract": "Toxicity classification in textual content remains a significant problem. Data with labels from a single annotator fall short of capturing the diversity of human perspectives. Therefore, there is a growing need to incorporate crowdsourced annotations for training an effective toxicity classifier. Additionally, the standard approach to training a classifier using empirical risk minimization (ERM) may fail to address the potential shifts between the training set and testing set due to exploiting spurious correlations. This work introduces a novel bi-level optimization framework that integrates crowdsourced annotations with the soft-labeling technique and optimizes the soft-label weights by Group Distributionally Robust Optimization (GroupDRO) to enhance the robustness against out-of-distribution (OOD) risk. We theoretically prove the convergence of our bi-level optimization algorithm. Experimental results demonstrate that our approach outperforms existing baseline methods in terms of both average and worst-group accuracy, confirming its effectiveness in leveraging crowdsourced annotations to achieve more effective and robust toxicity classification.", "pdf": "https://openreview.net/pdf/57903154d4f4a6b9319a3259d24f77771bc8c00d.pdf"} {"title": "Constrained Diffusion with Trust Sampling", "url": "https://openreview.net/forum?id=dJUb9XRoZI", "detail_url": "https://openreview.net/forum?id=dJUb9XRoZI", "authors": "William Huang,Yifeng Jiang,Tom Van Wouwe,Karen Liu", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have demonstrated significant promise in various generative tasks; however, they often struggle to satisfy challenging constraints. Our approach addresses this limitation by rethinking training-free loss-guided diffusion from an optimization perspective. We formulate a series of constrained optimizations throughout the inference process of a diffusion model. In each optimization, we allow the sample to take multiple steps along the gradient of the proxy constraint function until we can no longer trust the proxy, according to the variance at each diffusion level. Additionally, we estimate the state manifold of the diffusion model to allow for early termination when the sample starts to wander away from the state manifold at each diffusion step. Trust sampling effectively balances following the unconditional diffusion model against adhering to the loss guidance, enabling more flexible and accurate constrained generation. We demonstrate the efficacy of our method through extensive experiments on complex tasks, and in drastically different domains of images and 3D motion generation, showing significant improvements over existing methods in terms of generation quality.
Our implementation is available at https://github.com/will-s-h/trust-sampling.", "pdf": "https://openreview.net/pdf/e1d858094c12ee2286a240c579e9c76aa92c8a7b.pdf"} {"title": "Probabilistic Graph Rewiring via Virtual Nodes", "url": "https://openreview.net/forum?id=LpvSHL9lcK", "detail_url": "https://openreview.net/forum?id=LpvSHL9lcK", "authors": "Chendi Qian,Andrei Manolache,Christopher Morris,Mathias Niepert", "tags": "NIPS 2024,Poster", "abstract": "Message-passing graph neural networks (MPNNs) have emerged as a powerful paradigm for graph-based machine learning. Despite their effectiveness, MPNNs face challenges such as under-reaching and over-squashing, where limited receptive fields and structural bottlenecks hinder information flow in the graph. While graph transformers hold promise in addressing these issues, their scalability is limited due to quadratic complexity regarding the number of nodes, rendering them impractical for larger graphs. Here, we propose implicitly rewired message-passing neural networks (IPR-MPNNs), a novel approach that integrates implicit probabilistic graph rewiring into MPNNs. By introducing a small number of virtual nodes, i.e., adding additional nodes to a given graph and connecting them to existing nodes, in a differentiable, end-to-end manner, IPR-MPNNs enable long-distance message propagation, circumventing quadratic complexity. Theoretically, we demonstrate that IPR-MPNNs surpass the expressiveness of traditional MPNNs. Empirically, we validate our approach by showcasing its ability to mitigate under-reaching and over-squashing effects, achieving state-of-the-art performance across multiple graph datasets. Notably, IPR-MPNNs outperform graph transformers while maintaining significantly faster computational efficiency.", "pdf": "https://openreview.net/pdf/e3c70652293f00541a7b1d204c28e31a80eabc54.pdf"} {"title": "Tight Rates for Bandit Control Beyond Quadratics", "url": "https://openreview.net/forum?id=mlm3nUwOeQ", "detail_url": "https://openreview.net/forum?id=mlm3nUwOeQ", "authors": "Y. Jennifer Sun,Zhou Lu", "tags": "NIPS 2024,Poster", "abstract": "Unlike classical control theory, such as Linear Quadratic Control (LQC), real-world control problems are highly complex. These problems often involve adversarial perturbations, bandit feedback models, and non-quadratic, adversarially chosen cost functions. A fundamental yet unresolved question is whether optimal regret can be achieved for these general control problems. The standard approach to addressing this problem involves a reduction to bandit convex optimization with memory. In the bandit setting, constructing a gradient estimator with low variance is challenging due to the memory structure and non-quadratic loss functions.\n\nIn this paper, we provide an affirmative answer to this question. Our main contribution is an algorithm that achieves an $\\tilde{O}(\\sqrt{T})$ optimal regret for bandit non-stochastic control with strongly-convex and smooth cost functions in the presence of adversarial perturbations, improving the previously known $\\tilde{O}(T^{2/3})$ regret bound from \\citep{cassel2020bandit}. Our algorithm overcomes the memory issue by reducing the problem to Bandit Convex Optimization (BCO) without memory and addresses general strongly-convex costs using recent advancements in BCO from \\citep{suggala2024second}. 
Along the way, we develop an improved algorithm for BCO with memory, which may be of independent interest.", "pdf": "https://openreview.net/pdf/21571625278a93f35acb6aaca70e4b0bb0577e37.pdf"} {"title": "Be Confident in What You Know: Bayesian Parameter Efficient Fine-Tuning of Vision Foundation Models", "url": "https://openreview.net/forum?id=loQCk0qruU", "detail_url": "https://openreview.net/forum?id=loQCk0qruU", "authors": "Deep Shankar Pandey,Spandan Pyakurel,Qi Yu", "tags": "NIPS 2024,Poster", "abstract": "Large transformer-based foundation models have been commonly used as pre-trained models that can be adapted to different challenging datasets and settings with state-of-the-art generalization performance. Parameter efficient fine-tuning ($\texttt{PEFT}$) provides promising generalization performance in adaptation while incurring minimal computational overhead. However, adaptation of these foundation models through $\texttt{PEFT}$ leads to accurate but severely underconfident models, especially in few-shot learning settings. Moreover, the adapted models lack accurate fine-grained uncertainty quantification capabilities limiting their broader applicability in critical domains. To fill this critical gap, we develop a novel lightweight {Bayesian Parameter Efficient Fine-Tuning} (referred to as $\texttt{Bayesian-PEFT}$) framework for large transformer-based foundation models. The framework integrates state-of-the-art $\texttt{PEFT}$ techniques with two Bayesian components to address the under-confidence issue while ensuring reliable prediction under challenging few-shot settings. The first component performs base rate adjustment to strengthen the prior belief corresponding to the knowledge gained through pre-training, making the model more confident in its predictions; the second component builds an evidential ensemble that leverages belief regularization to ensure diversity among different ensemble components.\nOur thorough theoretical analysis justifies that the Bayesian components can ensure reliable and accurate few-shot adaptations with well-calibrated uncertainty quantification. Extensive experiments across diverse datasets, few-shot learning scenarios, and multiple $\texttt{PEFT}$ techniques demonstrate the outstanding prediction and calibration performance by $\texttt{Bayesian-PEFT}$.", "pdf": "https://openreview.net/pdf/0f211c89a99253cf0f4d97fe37005d821b00cfa5.pdf"} {"title": "Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search", "url": "https://openreview.net/forum?id=9SpWvX9ykp", "detail_url": "https://openreview.net/forum?id=9SpWvX9ykp", "authors": "Nicola Dainese,Matteo Merler,Minttu Alakuijala,Pekka Marttinen", "tags": "NIPS 2024,Poster", "abstract": "In this work we consider Code World Models, world models generated by a Large Language Model (LLM) in the form of Python code for model-based Reinforcement Learning (RL). Calling code instead of LLMs for planning has the potential to be more precise, reliable, interpretable, and extremely efficient.\nHowever, writing appropriate Code World Models requires the ability to understand complex instructions, to generate exact code with non-trivial logic and to self-debug a long program with feedback from unit tests and environment trajectories. To address these challenges, we propose Generate, Improve and Fix with Monte Carlo Tree Search (GIF-MCTS), a new code generation strategy for LLMs.
To test our approach in an offline RL setting, we introduce the Code World Models Benchmark (CWMB), a suite of program synthesis and planning tasks comprised of 18 diverse RL environments paired with corresponding textual descriptions and curated trajectories. GIF-MCTS surpasses all baselines on the CWMB and two other benchmarks, and we show that the Code World Models synthesized with it can be successfully used for planning, resulting in model-based RL agents with greatly improved sample efficiency and inference speed.", "pdf": "https://openreview.net/pdf/86db2e1fa0623f682614587d7c2e40d308e39203.pdf"} {"title": "Parametric model reduction of mean-field and stochastic systems via higher-order action matching", "url": "https://openreview.net/forum?id=qyaz3XP0FN", "detail_url": "https://openreview.net/forum?id=qyaz3XP0FN", "authors": "Jules Berman,Tobias Blickhan,Benjamin Peherstorfer", "tags": "NIPS 2024,Poster", "abstract": "The aim of this work is to learn models of population dynamics of physical systems that feature stochastic and mean-field effects and that depend on physics parameters. The learned models can act as surrogates of classical numerical models to efficiently predict the system behavior over the physics parameters. Building on the Benamou-Brenier formula from optimal transport and action matching, we use a variational problem to infer parameter- and time-dependent gradient fields that represent approximations of the population dynamics. The inferred gradient fields can then be used to rapidly generate sample trajectories that mimic the dynamics of the physical system on a population level over varying physics parameters. We show that combining Monte Carlo sampling with higher-order quadrature rules is critical for accurately estimating the training objective from sample data and for stabilizing the training process. We demonstrate on Vlasov-Poisson instabilities as well as on high-dimensional particle and chaotic systems that our approach accurately predicts population dynamics over a wide range of parameters and outperforms state-of-the-art diffusion-based and flow-based modeling that simply condition on time and physics parameters.", "pdf": "https://openreview.net/pdf/4f95db67d2e94e6fdc17fe1ba3eacae2b3028669.pdf"} {"title": "Scalable Optimization in the Modular Norm", "url": "https://openreview.net/forum?id=SFxAjB7UXx", "detail_url": "https://openreview.net/forum?id=SFxAjB7UXx", "authors": "Tim Large,Yang Liu,Minyoung Huh,Hyojin Bahng,Phillip Isola,Jeremy Bernstein", "tags": "NIPS 2024,Poster", "abstract": "To improve performance in contemporary deep learning, one is interested in scaling up the neural network in terms of both the number and the size of the layers. When ramping up the width of a single layer, graceful scaling of training has been linked to the need to normalize the weights and their updates in the \"natural norm\" particular to that layer. In this paper, we significantly generalize this idea by defining the modular norm, which is the natural norm on the full weight space of any neural network architecture. The modular norm is defined recursively in tandem with the network architecture itself. We show that the modular norm has several promising applications. On the practical side, the modular norm can be used to normalize the updates of any base optimizer so that the learning rate becomes transferable across width and depth. This means that the user does not need to compute optimizer-specific scale factors in order to scale training. 
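A minimal sketch of the normalization idea, under our own simplifications (this is not the Modula package's API): scale each layer's raw gradient so the applied update has a fixed, width-aware operator norm, making the effective step size independent of width.

```python
# Sketch: normalizing a layer's weight update in a width-aware operator norm,
# in the spirit of modular-norm normalization (our simplification, not the
# Modula package's actual API). Spectral norm estimated by power iteration.
import numpy as np

def spectral_norm(M, iters=50):
    v = np.random.default_rng(0).normal(size=M.shape[1])
    for _ in range(iters):
        u = M @ v
        u /= np.linalg.norm(u) + 1e-12
        v = M.T @ u
        v /= np.linalg.norm(v) + 1e-12
    return float(u @ M @ v)

def normalized_update(grad, fan_in, fan_out, lr=0.1):
    """Scale a raw gradient so the applied update has a fixed operator-norm size.

    The target norm sqrt(fan_out / fan_in) is an assumption of this sketch,
    chosen to keep per-entry update magnitudes stable as width grows."""
    target = np.sqrt(fan_out / fan_in)
    return lr * target * grad / (spectral_norm(grad) + 1e-12)

for width in [64, 256, 1024]:
    g = np.random.default_rng(width).normal(size=(width, width))
    upd = normalized_update(g, width, width)
    print(f"width={width:4d}  ||update||_2 = {spectral_norm(upd):.3f}")  # ~lr, width-independent
```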
On the theoretical side, we show that for any neural network built from \"well-behaved\" atomic modules, the gradient of the network is Lipschitz-continuous in the modular norm, with the Lipschitz constant admitting a simple recursive formula. This characterization opens the door to porting standard ideas in optimization theory over to deep learning. We have created a Python package called Modula that automatically normalizes weight updates in the modular norm of the architecture. Both the Modula package and code for our experiments are provided in the supplementary material.", "pdf": "https://openreview.net/pdf/f3aa0267adde40f2438c5f134aac596b9e198960.pdf"} {"title": "Identifying Latent State-Transition Processes for Individualized Reinforcement Learning", "url": "https://openreview.net/forum?id=kREpCQtHdN", "detail_url": "https://openreview.net/forum?id=kREpCQtHdN", "authors": "Yuewen Sun,Biwei Huang,Yu Yao,Donghuo Zeng,Xinshuai Dong,Songyao Jin,Boyang Sun,Roberto Legaspi,Kazushi Ikeda,Peter Spirtes,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "In recent years, the application of reinforcement learning (RL) involving interactions with individuals has seen significant growth. These interactions, influenced by individual-specific factors ranging from personal preferences to physiological differences, can causally affect state transitions, such as the health conditions in healthcare or learning progress in education. Consequently, different individuals may exhibit different state-transition processes. Understanding these individualized state-transition processes is crucial for optimizing individualized policies. In practice, however, identifying these state-transition processes is challenging, especially since individual-specific factors often remain latent. In this paper, we establish the identifiability of these latent factors and present a practical method that effectively learns these processes from observed state-action trajectories. 
Our experiments on various datasets show that our method can effectively identify the latent state-transition processes and help learn individualized RL policies.", "pdf": "https://openreview.net/pdf/85ac46c122945045e96dafd261221d1a23e4ec95.pdf"} {"title": "Back to the Continuous Attractor", "url": "https://openreview.net/forum?id=fvG6ZHrH0B", "detail_url": "https://openreview.net/forum?id=fvG6ZHrH0B", "authors": "\u00c1bel S\u00e1godi,Guillermo Mart\u00edn-S\u00e1nchez,Piotr A Sokol,Il Memming Park", "tags": "NIPS 2024,Poster", "abstract": "Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals.\nUnfortunately, continuous attractors suffer from severe structural instability in general---they are destroyed by most infinitesimal changes of the dynamical law that defines them.\nThis fragility limits their utility especially in biological systems as their recurrent dynamics are subject to constant perturbations.\nWe observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms.\nAlthough their asymptotic behaviors to maintain memory are categorically distinct, their finite-time behaviors are similar.\nWe build on the persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors.\nFast-slow decomposition analysis uncovers the existence of a persistent slow manifold that survives the seemingly destructive bifurcation, relating the flow within the manifold to the size of the perturbation. Moreover, this allows the bounding of the memory error of these approximations of continuous attractors.\nFinally, we train recurrent neural networks on analog memory tasks to support the appearance of these systems as solutions and their generalization capabilities.\nTherefore, we conclude that continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.", "pdf": "https://openreview.net/pdf/c5d74247c35d26fced9942711f051a6415b08bbc.pdf"} {"title": "ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization", "url": "https://openreview.net/forum?id=JNl6h3U3oW", "detail_url": "https://openreview.net/forum?id=JNl6h3U3oW", "authors": "Haoran You,Yipin Guo,Yichao Fu,Wei Zhou,Huihong Shi,Xiaofan Zhang,Souvik Kundu,Amir Yazdanbakhsh,Yingyan Celine Lin", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have shown impressive performance on language tasks but face challenges when deployed on resource-constrained devices due to their extensive parameters and reliance on dense multiplications, resulting in high memory demands and latency bottlenecks. Shift-and-add reparameterization offers a promising solution by replacing costly multiplications with hardware-friendly primitives in both the attention and multi-layer perceptron (MLP) layers of an LLM. However, current reparameterization techniques require training from scratch or full parameter fine-tuning to restore accuracy, which is resource-intensive for LLMs. To address this, we propose accelerating pretrained LLMs through post-training shift-and-add reparameterization, creating efficient multiplication-free models, dubbed ShiftAddLLM. Specifically, we quantize each weight matrix into binary matrices paired with group-wise scaling factors. 
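A sketch of that representation (the greedy residual fitting here is our own illustration; the paper's multi-objective optimizer and shift/add kernels are not reproduced): W is approximated as a sum of {-1,+1} matrices, each with per-group scaling factors.

```python
# Sketch: quantize a weight matrix into K binary {-1,+1} matrices with
# group-wise scaling factors, W ~= sum_k s_k * B_k (greedy residual fitting).
import numpy as np

def binary_quantize(W, n_bits=3, group_size=16):
    rows, cols = W.shape
    assert cols % group_size == 0
    R = W.reshape(rows, cols // group_size, group_size).copy()  # residual, grouped
    W_hat = np.zeros_like(R)
    for _ in range(n_bits):
        B = np.sign(R)
        B[B == 0] = 1.0
        s = np.abs(R).mean(axis=-1, keepdims=True)  # optimal per-group scale for sign codes
        W_hat += s * B
        R -= s * B
    return W_hat.reshape(rows, cols)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))
for bits in [1, 2, 3, 4]:
    err = np.linalg.norm(W - binary_quantize(W, bits)) / np.linalg.norm(W)
    print(f"{bits}-bit binary expansion: relative error {err:.3f}")
```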
The associated multiplications are reparameterized into (1) shifts between activations and scaling factors and (2) queries and adds according to the binary matrices. To reduce accuracy loss, we present a multi-objective optimization method to minimize both weight and output activation reparameterization errors. Additionally, based on varying sensitivity across layers to reparameterization, we develop an automated bit allocation strategy to further reduce memory usage and latency. Experiments on five LLM families and eight tasks consistently validate the effectiveness of ShiftAddLLM, achieving average perplexity reductions of 5.6 and 22.7 points at comparable or lower latency compared to the most competitive quantized LLMs at 3- and 2-bit precision, respectively, and more than 80% memory and energy reductions over the original LLMs. Codes and models are available at https://github.com/GATECH-EIC/ShiftAddLLM.", "pdf": "https://openreview.net/pdf/dbaa21ce19cb724f3f9cb7dcaa53e0ea77e53334.pdf"} {"title": "Noether's Razor: Learning Conserved Quantities", "url": "https://openreview.net/forum?id=dpvqBkEp1f", "detail_url": "https://openreview.net/forum?id=dpvqBkEp1f", "authors": "Tycho F. A. van der Ouderaa,Mark van der Wilk,Pim De Haan", "tags": "NIPS 2024,Poster", "abstract": "Symmetries have proven useful in machine learning models, improving generalisation and overall performance. At the same time, recent advancements in learning dynamical systems rely on modelling the underlying Hamiltonian to guarantee the conservation of energy.\nThese approaches can be connected via a seminal result in mathematical physics: Noether's theorem, which states that symmetries in a dynamical system correspond to conserved quantities.\nThis work uses Noether's theorem to parameterise symmetries as learnable conserved quantities. We then allow conserved quantities and associated symmetries to be learned directly from train data through approximate Bayesian model selection, jointly with the regular training procedure. As the training objective, we derive a variational lower bound to the marginal likelihood. The objective automatically embodies an Occam's Razor effect that avoids collapse of conservation laws to the trivial constant, without the need to manually add and tune additional regularisers. We demonstrate a proof-of-principle on n-harmonic oscillators and n-body systems. We find that our method correctly identifies the conserved quantities and the U(n) and SE(n) symmetry groups, improving overall performance and predictive accuracy on test data.", "pdf": "https://openreview.net/pdf/e7c16ba035c5c4994f07a3aa0f69eb4e452dfbde.pdf"} {"title": "Credal Learning Theory", "url": "https://openreview.net/forum?id=AH5KwUSsln", "detail_url": "https://openreview.net/forum?id=AH5KwUSsln", "authors": "Michele Caprio,Maryam Sultana,Eleni Elia,Fabio Cuzzolin", "tags": "NIPS 2024,Poster", "abstract": "Statistical learning theory is the foundation of machine learning, providing theoretical bounds for the risk of models learned from a (single) training set, assumed to issue from an unknown probability distribution. In actual deployment, however, the data distribution may (and often does) vary, causing domain adaptation/generalization issues. In this paper we lay the foundations for a `credal' theory of learning, using convex sets of probabilities (credal sets) to model the variability in the data-generating distribution. Such credal sets, we argue, may be inferred from a finite sample of training sets.
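A toy illustration of the credal idea under our own assumptions: take the convex hull of empirical distributions estimated from several training sets; since expected loss is linear in the distribution, its lower and upper values over the credal set are attained at the hull's vertices.

```python
# Sketch: a credal set as the convex hull of empirical distributions from
# several training sets, with lower/upper expected loss taken over the
# extreme points (a linear functional attains its extrema at vertices).
import numpy as np

rng = np.random.default_rng(0)
support = np.arange(4)                   # outcomes 0..3
loss = np.array([0.0, 1.0, 1.0, 2.0])    # loss of a fixed hypothesis per outcome

# Several training sets drawn from slightly different distributions.
true_ps = [np.array([0.40, 0.30, 0.20, 0.10]),
           np.array([0.25, 0.25, 0.25, 0.25]),
           np.array([0.10, 0.20, 0.30, 0.40])]
empirical = []
for p in true_ps:
    sample = rng.choice(support, size=500, p=p)
    empirical.append(np.bincount(sample, minlength=4) / 500.0)

risks = [float(p @ loss) for p in empirical]
print(f"lower expected loss: {min(risks):.3f}")
print(f"upper expected loss: {max(risks):.3f}")
```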
Bounds are derived for the case of finite hypothesis spaces (both with and without assuming realizability), as well as infinite model spaces, which directly generalize classical results.", "pdf": "https://openreview.net/pdf/19306051f44e70ecb2b1164b1ce92657b16c483b.pdf"} {"title": "Distribution Learning with Valid Outputs Beyond the Worst-Case", "url": "https://openreview.net/forum?id=L7i5FjgKjc", "detail_url": "https://openreview.net/forum?id=L7i5FjgKjc", "authors": "Nicholas Rittler,Kamalika Chaudhuri", "tags": "NIPS 2024,Poster", "abstract": "Generative models at times produce \"invalid\" outputs, such as images with generation artifacts and unnatural sounds. Validity-constrained distribution learning attempts to address this problem by requiring that the learned distribution have a provably small fraction of its mass in invalid parts of space -- something which standard loss minimization does not always ensure. To this end, a learner in this model can guide the learning via \"validity queries\", which allow it to ascertain the validity of individual examples. Prior work on this problem takes a worst-case stance, showing that proper learning requires an exponential number of validity queries, and demonstrating an improper algorithm which -- while generating guarantees in a wide range of settings -- makes a relatively large polynomial number of validity queries. In this work, we take a first step towards characterizing regimes where guaranteeing validity is easier than in the worst-case. We show that when the data distribution lies in the model class and the log-loss is minimized, the number of samples required to ensure validity has a weak dependence on the validity requirement. Additionally, we show that when the validity region belongs to a VC-class, a limited number of validity queries are often sufficient.", "pdf": "https://openreview.net/pdf/c2cb0299f9ccbbc0514a603d3440c500c530a129.pdf"} {"title": "Online Control in Population Dynamics", "url": "https://openreview.net/forum?id=ZBBrBujopT", "detail_url": "https://openreview.net/forum?id=ZBBrBujopT", "authors": "Noah Golowich,Elad Hazan,Zhou Lu,Dhruv Rohatgi,Y. Jennifer Sun", "tags": "NIPS 2024,Poster", "abstract": "The study of population dynamics originated with early sociological works but has since extended into many fields, including biology, epidemiology, evolutionary game theory, and economics. Most studies on population dynamics focus on the problem of prediction rather than control. Existing mathematical models for population control are often restricted to specific, noise-free dynamics, while real-world population changes can be complex and adversarial. \n\nTo address this gap, we propose a new framework based on the paradigm of online control. We first characterize a set of linear dynamical systems that can naturally model evolving populations. We then give an efficient gradient-based controller for these systems, with near-optimal regret bounds with respect to a broad class of linear policies.
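A minimal sketch of a gradient-based controller in this spirit (our own scalar toy, not the paper's algorithm or regret analysis): online gradient descent tunes a linear state-feedback gain for a noisy, open-loop-unstable scalar system.

```python
# Sketch: online gradient descent on a linear state-feedback policy u_t = -k x_t
# for a scalar noisy linear system x_{t+1} = a x_t + b u_t + w_t, steering the
# population deviation x toward zero (quadratic per-step cost, our assumptions).
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.1, 1.0          # open-loop unstable: the population drifts without control
k, lr = 0.0, 0.01
x, costs = 5.0, []
for t in range(2000):
    u = -k * x
    costs.append(x**2 + 0.1 * u**2)
    # gradient of the per-step cost w.r.t. k through u (treating x as given)
    dcost_dk = 0.1 * 2 * u * (-x)
    # one-step lookahead term: x' = a x + b u, with d x'/dk = -b x
    x_next = a * x + b * u + rng.normal(scale=0.1)
    dcost_dk += 2 * x_next * (-b * x)
    k -= lr * dcost_dk
    x = x_next
print(f"learned gain k ~ {k:.2f}; avg cost first 100 steps {np.mean(costs[:100]):.2f} "
      f"vs last 100 steps {np.mean(costs[-100:]):.3f}")
```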
Our empirical evaluations demonstrate the effectiveness of the proposed algorithm for population control even in non-linear models such as SIR and replicator dynamics.", "pdf": "https://openreview.net/pdf/dcccb3b68a8f2187bf2814b594c28b266dd7bc08.pdf"} {"title": "Evidential Mixture Machines: Deciphering Multi-Label Correlations for Active Learning Sensitivity", "url": "https://openreview.net/forum?id=n5lLSskwtu", "detail_url": "https://openreview.net/forum?id=n5lLSskwtu", "authors": "Dayou Yu,Minghao Li,Weishi Shi,Qi Yu", "tags": "NIPS 2024,Poster", "abstract": "Multi-label active learning is a crucial yet challenging area in contemporary machine learning, often complicated by a large and sparse label space. This challenge is further exacerbated in active learning scenarios where labeling resources are constrained. Drawing inspiration from existing mixture of Bernoulli models, which efficiently compress the label space into a more manageable weight coefficient space by learning correlated Bernoulli components, we propose a novel model called Evidential Mixture Machines (EMM). Our model leverages mixture components derived from unsupervised learning in the label space and improves prediction accuracy by predicting weight coefficients following the evidential learning paradigm. These coefficients are aggregated as proxy pseudo counts to enhance component offset predictions. The evidential learning approach provides an uncertainty-aware connection between input features and the predicted coefficients and components. Additionally, our method combines evidential uncertainty with predicted label embedding covariances for active sample selection, creating a richer, multi-source uncertainty metric beyond traditional uncertainty scores. Experiments on synthetic datasets show the effectiveness of evidential uncertainty prediction and EMM's capability to capture label correlations through predicted components. Further testing on real-world datasets demonstrates improved performance compared to existing multi-label active learning methods.", "pdf": "https://openreview.net/pdf/0ebfc3e5e8ec8b72e72310ea6967dd4bc4e22452.pdf"} {"title": "On the Scalability of GNNs for Molecular Graphs", "url": "https://openreview.net/forum?id=klqhrq7fvB", "detail_url": "https://openreview.net/forum?id=klqhrq7fvB", "authors": "Maciej Sypetkowski,Frederik Wenkel,Farimah Poursafaei,Nia Dickson,Karush Suri,Philip Fradkin,Dominique Beaini", "tags": "NIPS 2024,Poster", "abstract": "Scaling deep learning models has been at the heart of recent revolutions in language modelling and image generation. Practitioners have observed a strong relationship between model size, dataset size, and performance. However, structure-based architectures such as Graph Neural Networks (GNNs) are yet to show the benefits of scale mainly due to lower efficiency of sparse operations, large data requirements, and lack of clarity about the effectiveness of various architectures. We address this drawback of GNNs by studying their scaling behavior. Specifically, we analyze message-passing networks, graph Transformers, and hybrid architectures on the largest public collection of 2D molecular graphs for supervised pretraining. For the first time, we observe that GNNs benefit tremendously from the increasing scale of depth, width, number of molecules and associated labels. A major factor is the diversity of the pretraining data that comprises thousands of labels per molecule derived from bio-assays, quantum simulations, transcriptomics and phenomic imaging. 
We further demonstrate strong finetuning scaling behavior on 38 highly competitive downstream tasks, outclassing previous large models. This gives rise to MolGPS, a new graph foundation model that allows one to navigate the chemical space, outperforming the previous state of the art on 26 out of the 38 downstream tasks. We hope that our work paves the way for an era where foundational GNNs drive pharmaceutical drug discovery.", "pdf": "https://openreview.net/pdf/10d89ae98e5eb08d3571feb3470901a034427b55.pdf"} {"title": "DiffPO: A causal diffusion model for learning distributions of potential outcomes", "url": "https://openreview.net/forum?id=merJ77Jipt", "detail_url": "https://openreview.net/forum?id=merJ77Jipt", "authors": "Yuchen Ma,Valentyn Melnychuk,Jonas Schweisthal,Stefan Feuerriegel", "tags": "NIPS 2024,Poster", "abstract": "Predicting potential outcomes of interventions from observational data is crucial for decision-making in medicine, but the task is challenging due to the fundamental problem of causal inference. Existing methods are largely limited to point estimates of potential outcomes with no uncertainty quantification; thus, the full information about the distributions of potential outcomes is typically ignored. In this paper, we propose a novel causal diffusion model called DiffPO, which is carefully designed for reliable inferences in medicine by learning the distribution of potential outcomes. In our DiffPO, we leverage a tailored conditional denoising diffusion model to learn complex distributions, where we address the selection bias through a novel orthogonal diffusion loss. Another strength of our DiffPO method is that it is highly flexible (e.g., it can also be used to estimate different causal quantities such as CATE). Across a wide range of experiments, we show that our method achieves state-of-the-art performance.", "pdf": "https://openreview.net/pdf/3ab42b4019b91a4da0db8fd15de4fc431fae84d8.pdf"} {"title": "Online Learning with Sublinear Best-Action Queries", "url": "https://openreview.net/forum?id=9uKeqtIoGZ", "detail_url": "https://openreview.net/forum?id=9uKeqtIoGZ", "authors": "Matteo Russo,Andrea Celli,Riccardo Colini Baldeschi,Federico Fusco,Daniel Haimovich,Dima Karamshuk,Stefano Leonardi,Niek Tax", "tags": "NIPS 2024,Poster", "abstract": "In online learning, a decision maker repeatedly selects one of a set of actions, with the goal of minimizing the overall loss incurred. Following the recent line of research on algorithms endowed with additional predictive features, we revisit this problem by allowing the decision maker to acquire additional information on the actions to be selected. In particular, we study the power of \emph{best-action queries}, which reveal beforehand the identity of the best action at a given time step. In practice, predictive features may be expensive, so we allow the decision maker to issue at most $k$ such queries.\n\nWe establish tight bounds on the performance any algorithm can achieve when given access to $k$ best-action queries for different types of feedback models. In particular, we prove that in the full feedback model, $k$ queries are enough to achieve an optimal regret of $\Theta(\min\{\sqrt T, \frac{T}{k}\})$.
This finding highlights the significant multiplicative advantage in the regret rate achievable with even a modest (sublinear) number $k \\in \\Omega(\\sqrt{T})$ of queries.\n \nAdditionally, we study the challenging setting in which the only available feedback is obtained during the time steps corresponding to the $k$ best-action queries. There, we provide a tight regret rate of $\\Theta(\\min\\{\\frac{T}{\\sqrt k},\\frac{T^2}{k^2}\\})$, which improves over the standard $\\Theta(\\frac{T}{\\sqrt k})$ regret rate for label efficient prediction for $k \\in \\Omega(T^{2/3})$.", "pdf": "https://openreview.net/pdf/92da4f5f1c9c98618f22314dda0744a9205f3325.pdf"} {"title": "Parallelizing Linear Transformers with the Delta Rule over Sequence Length", "url": "https://openreview.net/forum?id=y8Rm4VNRPH", "detail_url": "https://openreview.net/forum?id=y8Rm4VNRPH", "authors": "Songlin Yang,Bailin Wang,Yu Zhang,Yikang Shen,Yoon Kim", "tags": "NIPS 2024,Poster", "abstract": "Transformers with linear attention (i.e., linear transformers) and state-space models have recently been suggested as a viable linear-time alternative to transformers with softmax attention. However, these models still underperform transformers especially on tasks that require in-context retrieval. While more expressive variants of linear transformers which replace the additive update in linear transformers with the delta rule (DeltaNet) have been found to be more effective at associative recall, existing algorithms for training such models do not parallelize over sequence length and are thus inefficient to train on modern hardware. This work describes a hardware-efficient algorithm for training linear transformers with the delta rule, which exploits a memory-efficient representation for computing products of Householder matrices. This algorithm allows us to scale up DeltaNet to standard language modeling settings. We train a 1.3B model for 100B tokens and find that it outperforms recent linear-time baselines such as Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks. We also experiment with two hybrid models which combine DeltaNet layers with (1) sliding-window attention layers every other layer or (2) two global attention layers, and find that these hybrids outperform strong transformer baselines.", "pdf": "https://openreview.net/pdf/66ae05815e599e785fe690f90966433fc3f19b1b.pdf"} {"title": "Preference Alignment with Flow Matching", "url": "https://openreview.net/forum?id=EKN8AGS1wG", "detail_url": "https://openreview.net/forum?id=EKN8AGS1wG", "authors": "Minu Kim,Yongsik Lee,Sehyeok Kang,Jihwan Oh,Song Chong,Se-Young Yun", "tags": "NIPS 2024,Poster", "abstract": "We present Preference Flow Matching (PFM), a new framework for preference alignment that streamlines the integration of preferences into an arbitrary class of pre-trained models. Existing alignment methods require fine-tuning pre-trained models, which presents challenges such as scalability, inefficiency, and the need for model modifications, especially with black-box APIs like GPT-4. In contrast, PFM utilizes flow matching techniques to directly learn from preference data, thereby reducing the dependency on extensive fine-tuning of pre-trained models. By leveraging flow-based models, PFM transforms less preferred data into preferred outcomes, and effectively aligns model outputs with human preferences without relying on explicit or implicit reward function estimation, thus avoiding common issues like overfitting in reward models. 
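A minimal instance of this flow-matching recipe on 2-D toy data (our own setup: a tiny numpy MLP and Euler integration): regress a velocity field onto the straight-line velocity between paired less-preferred and preferred samples, then integrate the learned ODE to transport new negative samples.

```python
# Sketch: preference-style flow matching on 2-D toy data. We regress v(x_t, t)
# onto the straight-line velocity (y_pos - y_neg) along interpolants
# x_t = (1 - t) y_neg + t y_pos, then integrate the learned ODE from t=0 to 1.
import numpy as np

rng = np.random.default_rng(0)
n = 2048
y_neg = rng.normal(loc=[-2.0, 0.0], scale=0.3, size=(n, 2))   # less preferred
y_pos = rng.normal(loc=[+2.0, 1.0], scale=0.3, size=(n, 2))   # preferred

# Tiny MLP v(x, t), trained by plain SGD on the flow-matching regression loss.
W1 = rng.normal(scale=0.5, size=(3, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 2)); b2 = np.zeros(2)

def forward(x, t):
    inp = np.concatenate([x, t[:, None]], axis=1)
    h = np.tanh(inp @ W1 + b1)
    return h, inp, h @ W2 + b2

lr = 0.05
for step in range(3000):
    idx = rng.integers(0, n, size=128)
    t = rng.random(128)
    xt = (1 - t)[:, None] * y_neg[idx] + t[:, None] * y_pos[idx]
    target = y_pos[idx] - y_neg[idx]             # straight-line velocity
    h, inp, pred = forward(xt, t)
    err = pred - target                          # MSE gradient w.r.t. pred, up to a constant
    gW2 = h.T @ err / 128; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)               # backprop through tanh
    gW1 = inp.T @ dh / 128; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Integrate dx/dt = v(x, t) to "improve" fresh less-preferred samples.
x = rng.normal(loc=[-2.0, 0.0], scale=0.3, size=(256, 2))
for k in range(20):
    t = np.full(256, k / 20.0)
    x = x + (1.0 / 20.0) * forward(x, t)[2]      # Euler step
print("mapped mean:", x.mean(0), "(preferred mean ~ [2, 1])")
```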
We provide theoretical insights that support our method\u2019s alignment with standard preference alignment objectives. Experimental results indicate the practical effectiveness of our method, offering a new direction in aligning a pre-trained model to preference. Our code is available at https://github.com/jadehaus/preference-flow-matching.", "pdf": "https://openreview.net/pdf/6a7e75898419ae05e0ab639a8fd5fe7e29f2b0e5.pdf"} {"title": "Dynamic Service Fee Pricing under Strategic Behavior: Actions as Instruments and Phase Transition", "url": "https://openreview.net/forum?id=Tnl2K6Iz9j", "detail_url": "https://openreview.net/forum?id=Tnl2K6Iz9j", "authors": "Rui Ai,David Simchi-Levi,Feng Zhu", "tags": "NIPS 2024,Poster", "abstract": "We study a dynamic pricing problem for third-party platform service fees under strategic, far-sighted customers. In each time period, the platform sets a service fee based on historical data, observes the resulting transaction quantities, and collects revenue. The platform also monitors equilibrium prices influenced by both demand and supply. The objective is to maximize total revenue over a time horizon $T$. Our problem incorporates three practical challenges: (a) initially, the platform lacks knowledge of the demand side beforehand, necessitating a balance between exploring (learning the demand curve) and exploiting (maximizing revenue) simultaneously; (b) since only equilibrium prices and quantities are observable, traditional Ordinary Least Squares (OLS) estimators would be biased and inconsistent; (c) buyers are rational and strategic, seeking to maximize their consumer surplus and potentially misrepresenting their preferences. To address these challenges, we propose novel algorithmic solutions. Our approach involves: (i) a carefully designed active randomness injection to balance exploration and exploitation effectively; (ii) using non-i.i.d. actions as instrumental variables (IV) to consistently estimate demand; (iii) a low-switching cost design that promotes nearly truthful buyer behavior. We show an expected regret bound of $\\tilde{\\mathcal{O}} (\\sqrt{T}\\wedge\\sigma_S^{-2})$ and demonstrate its optimality, up to logarithmic factors, with respect to both the time horizon $T$ and the randomness in supply $\\sigma_S$. Despite its simplicity, our model offers valuable insights into the use of actions as estimation instruments, the benefits of low-switching pricing policies in mitigating strategic buyer behavior, and the role of supply randomness in facilitating exploration which leads to a phase transition of policy performance.", "pdf": "https://openreview.net/pdf/46f7434d34df46fc8b6a666002f862599e33e608.pdf"} {"title": "Boosted Conformal Prediction Intervals", "url": "https://openreview.net/forum?id=Tw032H2onS", "detail_url": "https://openreview.net/forum?id=Tw032H2onS", "authors": "Ran Xie,Rina Foygel Barber,Emmanuel Candes", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces a boosted conformal procedure designed to tailor conformalized prediction intervals toward specific desired properties, such as enhanced conditional coverage or reduced interval length. We employ machine learning techniques, notably gradient boosting, to systematically improve upon a predefined conformity score function. This process is guided by carefully constructed loss functions that measure the deviation of prediction intervals from the targeted properties. 
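For reference, a sketch of the conventional split-conformal baseline such a procedure starts from (the boosting refinement itself is omitted; the regression model and data are our own stand-ins):

```python
# Sketch: split-conformal prediction intervals with the absolute-residual
# conformity score; the half-width is the ceil((n+1)(1-alpha))/n empirical
# quantile of calibration scores. (The gradient-boosting step is omitted.)
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=n)

def feats(Xq):
    """A fixed feature map standing in for a 'trained model'."""
    return np.column_stack([np.ones(len(Xq)), Xq[:, 0], Xq[:, 0]**2, np.sin(Xq[:, 0])])

tr, cal, te = np.split(rng.permutation(n), [1000, 1500])
beta, *_ = np.linalg.lstsq(feats(X[tr]), y[tr], rcond=None)
predict = lambda Xq: feats(Xq) @ beta

alpha = 0.1
scores = np.abs(y[cal] - predict(X[cal]))                  # conformity scores
k = int(np.ceil((len(cal) + 1) * (1 - alpha)))
qhat = np.sort(scores)[k - 1]                              # conformal quantile

covered = np.abs(y[te] - predict(X[te])) <= qhat
print(f"target coverage {1-alpha:.0%}, empirical coverage {covered.mean():.1%}, "
      f"interval width {2*qhat:.2f}")
```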
The procedure operates post-training, relying solely on model predictions and without modifying the trained model (e.g., the deep network). Systematic experiments demonstrate that starting from conventional conformal methods, our boosted procedure achieves substantial improvements in reducing interval length and decreasing deviation from target conditional coverage.", "pdf": "https://openreview.net/pdf/b473ef6a51b3c486561543de0461b928c3f4583e.pdf"} {"title": "The tree autoencoder model, with application to hierarchical data visualization", "url": "https://openreview.net/forum?id=Yy0KUmneV6", "detail_url": "https://openreview.net/forum?id=Yy0KUmneV6", "authors": "Miguel \u00c1. Carreira-Perpi\u00f1\u00e1n,Kuat Gazizov", "tags": "NIPS 2024,Poster", "abstract": "We propose a new model for dimensionality reduction, the PCA tree, which works like a regular autoencoder, having explicit projection and reconstruction mappings. The projection is effected by a sparse oblique tree, having hard, hyperplane splits using few features and linear leaves. The reconstruction mapping is a set of local linear mappings. Thus, rather than producing a global map as in t-SNE and other methods, which often leads to distortions, it produces a hierarchical set of local PCAs. The use of a sparse oblique tree and PCA makes the overall model interpretable and very fast to project or reconstruct new points. Joint optimization of all the parameters in the tree is a nonconvex nondifferentiable problem. We propose an algorithm that is guaranteed to decrease the error monotonically and which scales to large datasets without any approximation. In experiments, we show PCA trees are able to identify a wealth of low-dimensional and cluster structure in image and document datasets.", "pdf": "https://openreview.net/pdf/e2fb6778e9b106355a1a94237bf6ac47ee019883.pdf"} {"title": "Exploration by Learning Diverse Skills through Successor State Representations", "url": "https://openreview.net/forum?id=oyiBLfNJvY", "detail_url": "https://openreview.net/forum?id=oyiBLfNJvY", "authors": "Paul-Antoine LE TOLGUENEC,Yann BESSE,Florent Teichteil-K\u00f6nigsbuch,Dennis George Wilson,Emmanuel Rachelson", "tags": "NIPS 2024,Poster", "abstract": "The ability to perform different skills can encourage agents to explore. In this work, we aim to construct a set of diverse skills that uniformly cover the state space. We propose a formalization of this search for diverse skills, building on a previous definition based on the mutual information between states and skills. We consider the distribution of states reached by a policy conditioned on each skill and leverage the successor state representation to maximize the difference between these skill distributions. We call this approach LEADS: Learning Diverse Skills through Successor State Representations. We demonstrate our approach on a set of maze navigation and robotic control tasks which show that our method is capable of constructing a diverse set of skills which exhaustively cover the state space without relying on reward or exploration bonuses. 
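A toy illustration of comparing skills through successor state representations (our own construction): compute M = (I - γP)^{-1} for two skill-conditioned policies on a ring of states and measure how far apart their discounted visitation profiles are.

```python
# Sketch: successor representations M = (I - gamma * P)^{-1} for two
# skill-conditioned policies on a ring of states, and a simple distance
# between their discounted state-visitation profiles.
import numpy as np

n, gamma = 8, 0.9

def ring_policy(p_clockwise):
    """Transition matrix for a policy moving clockwise with probability p."""
    P = np.zeros((n, n))
    for s in range(n):
        P[s, (s + 1) % n] += p_clockwise
        P[s, (s - 1) % n] += 1 - p_clockwise
    return P

def successor(P):
    return np.linalg.inv(np.eye(n) - gamma * P)   # M[s, s'] = E[sum_t gamma^t 1{s_t = s'}]

M1 = successor(ring_policy(0.95))   # skill 1: mostly clockwise
M2 = successor(ring_policy(0.05))   # skill 2: mostly counter-clockwise

start = 0
d1 = M1[start] / M1[start].sum()    # normalized discounted visitation from state 0
d2 = M2[start] / M2[start].sum()
print("skill 1 visitation:", np.round(d1, 3))
print("skill 2 visitation:", np.round(d2, 3))
print("L1 separation between skills:", np.abs(d1 - d2).sum().round(3))
```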
Our findings demonstrate that this new formalization promotes more robust and efficient exploration by combining mutual information maximization and exploration bonuses.", "pdf": "https://openreview.net/pdf/dd7daaa95d0d7d28d2b3debd6bc2adb0031ae0f9.pdf"} {"title": "SpecExec: Massively Parallel Speculative Decoding For Interactive LLM Inference on Consumer Devices", "url": "https://openreview.net/forum?id=JAhNsZ9dvG", "detail_url": "https://openreview.net/forum?id=JAhNsZ9dvG", "authors": "Ruslan Svirschevski,Avner May,Zhuoming Chen,Beidi Chen,Zhihao Jia,Max Ryabinin", "tags": "NIPS 2024,Poster", "abstract": "As large language models gain widespread adoption, running them efficiently becomes a crucial task. Recent works on LLM inference use speculative decoding to achieve extreme speedups. However, most of these works implicitly design their algorithms for high-end datacenter hardware. In this work, we ask the opposite question: how fast can we run LLMs on consumer machines? Consumer GPUs can no longer fit the largest available models and must offload them to RAM or SSD. With parameter offloading, hundreds or thousands of tokens can be processed in batches within the same time as just one token, making it a natural fit for speculative decoding. We propose SpecExec (Speculative Execution), a simple parallel decoding method that can generate up to 20 tokens per target model iteration for popular LLM families. SpecExec takes the most probable continuations from the draft model to build a \"cache\" tree for the target model, which then gets validated in a single pass. Using SpecExec, we demonstrate inference of 50B+ parameter LLMs on consumer GPUs with RAM offloading at 4--6 tokens per second with 4-bit quantization or 2--3 tokens per second with 16-bit weights. Our code is available at https://github.com/yandex-research/specexec .", "pdf": "https://openreview.net/pdf/92424fd5845a7ee52f4c92a728675750ad954baf.pdf"} {"title": "Interpretable Generalized Additive Models for Datasets with Missing Values", "url": "https://openreview.net/forum?id=soUXmwL5aK", "detail_url": "https://openreview.net/forum?id=soUXmwL5aK", "authors": "Hayden McTavish,Jon Donnelly,Margo Seltzer,Cynthia Rudin", "tags": "NIPS 2024,Poster", "abstract": "Many important datasets contain samples that are missing one or more feature values. Maintaining the interpretability of machine learning models in the presence of such missing data is challenging. Singly or multiply imputing missing values complicates the model\u2019s mapping from features to labels. On the other hand, reasoning on indicator variables that represent missingness introduces a potentially large number of additional terms, sacrificing sparsity. We solve these problems with M-GAM, a sparse, generalized, additive modeling approach that incorporates missingness indicators and their interaction terms while maintaining sparsity through $\\ell_0$ regularization. 
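A minimal sketch of the feature construction (with L1 regularization standing in for M-GAM's l0 penalty, and a synthetic dataset of our own): augment the design matrix with missingness indicators and feature-indicator interactions, then fit a sparse linear model.

```python
# Sketch: missingness indicators plus feature-x-indicator interactions, fit
# with a sparse linear model. L1 stands in for the paper's l0 penalty here.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 1000, 3
X_full = rng.normal(size=(n, d))
# Outcome depends on x0 and, crucially, on whether x1 is missing.
miss = rng.random((n, d)) < np.array([0.0, 0.3, 0.2])
y = 2.0 * X_full[:, 0] + 1.5 * miss[:, 1] + rng.normal(scale=0.1, size=n)

X = X_full.copy()
X[miss] = 0.0                          # zero-impute the observed design matrix
M = miss.astype(float)                 # missingness indicators
inter = np.column_stack([X[:, j] * M[:, k] for j in range(d) for k in range(d) if j != k])
design = np.column_stack([X, M, inter])

model = Lasso(alpha=0.05).fit(design, y)
names = ([f"x{j}" for j in range(d)] + [f"m{j}" for j in range(d)]
         + [f"x{j}*m{k}" for j in range(d) for k in range(d) if j != k])
for name, c in zip(names, model.coef_):
    if abs(c) > 1e-3:
        print(f"{name:7s} coef = {c:+.2f}")   # sparse fit: few active terms
```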
We show that M-GAM provides similar or superior accuracy to prior methods while significantly improving sparsity relative to either imputation or na\u00efve inclusion of indicator variables.", "pdf": "https://openreview.net/pdf/d4ffc16a35baa261ecf09bc0c68828f805c6ec99.pdf"} {"title": "The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks", "url": "https://openreview.net/forum?id=wsHMb4J2o9", "detail_url": "https://openreview.net/forum?id=wsHMb4J2o9", "authors": "L\u00e9na\u00efc Chizat,Praneeth Netrapalli", "tags": "NIPS 2024,Poster", "abstract": "Deep learning succeeds by doing hierarchical feature learning, yet tuning hyper-parameters (HPs) such as initialization scales and learning rates gives only indirect control over this behavior. In this paper, we introduce a key notion to predict and control feature learning: the angle $\theta_\ell$ between the feature updates and the backward pass (at layer index $\ell$). We show that the magnitude of feature updates after one GD step, at any training time, can be expressed via a simple and general *feature speed formula* in terms of this angle $\theta_\ell$, the loss decay, and the magnitude of the backward pass. This angle $\theta_\ell$ is controlled by the conditioning of the layer-to-layer Jacobians; at random initialization, it is determined by the spectrum of a certain kernel, which coincides with the Neural Tangent Kernel when $\ell=\text{depth}$. Given $\theta_\ell$, the feature speed formula provides us with rules to adjust HPs (scales and learning rates) so as to satisfy certain dynamical properties, such as feature learning and loss decay. We investigate the implications of our approach for ReLU MLPs and ResNets in the large width-then-depth limit. Relying on prior work, we show that in ReLU MLPs with iid initialization, the angle degenerates with depth as $\cos(\theta_\ell)=\Theta(1/\sqrt{\ell})$. In contrast, ResNets with branch scale $O(1/\sqrt{\text{depth}})$ maintain a non-degenerate angle $\cos(\theta_\ell)=\Theta(1)$. We use these insights to recover key properties of known HP scalings (such as $\mu$P), and also introduce a new HP scaling for large depth ReLU MLPs with favorable theoretical properties.", "pdf": "https://openreview.net/pdf/b9078a3029a8ba5f5d9f97f0236d84340733175b.pdf"} {"title": "Inference of Neural Dynamics Using Switching Recurrent Neural Networks", "url": "https://openreview.net/forum?id=zb8jLAh2VN", "detail_url": "https://openreview.net/forum?id=zb8jLAh2VN", "authors": "Yongxu Zhang,Shreya Saxena", "tags": "NIPS 2024,Poster", "abstract": "Neural population activity often exhibits distinct dynamical features across time, which may correspond to distinct internal processes or behavior. Linear methods and variations thereof, such as the Hidden Markov Model (HMM) and the Switching Linear Dynamical System (SLDS), are often employed to identify discrete states with evolving neural dynamics. However, these techniques may not be able to capture the underlying nonlinear dynamics associated with neural propagation. Recurrent Neural Networks (RNNs) are commonly used to model neural dynamics thanks to their nonlinear characteristics. In our work, we develop Switching Recurrent Neural Networks (SRNN), RNNs with weights that switch across time, to reconstruct switching dynamics of neural time-series data.
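(A minimal sketch of one plausible reading of "RNN weights that switch across time": a discrete state $z_t$ indexes the active weight set. The vanilla tanh cell and all shapes below are illustrative assumptions, not the paper's architecture.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, hidden, obs = 2, 8, 3

# one recurrent/input weight set per discrete state
W = rng.normal(size=(n_states, hidden, hidden)) * 0.1
U = rng.normal(size=(n_states, hidden, obs)) * 0.1

def srnn_rollout(x_seq, z_seq):
    """Run the recurrence, switching weight sets according to z_t."""
    h = np.zeros(hidden)
    hs = []
    for x, z in zip(x_seq, z_seq):
        h = np.tanh(W[z] @ h + U[z] @ x)   # discrete state z selects the dynamics
        hs.append(h)
    return np.array(hs)

x_seq = rng.normal(size=(10, obs))
z_seq = [0] * 5 + [1] * 5                  # a switch halfway through
print(srnn_rollout(x_seq, z_seq).shape)    # (10, 8)
```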
We apply these models to simulated data as well as cortical neural activity from mice and monkeys, which allows us to automatically detect discrete states corresponding to varying neural dynamics. In a monkey reaching dataset with electrophysiology recordings, a mouse self-initiated lever pull dataset with widefield calcium recordings, and a mouse self-initiated decision-making dataset with widefield calcium recordings, SRNNs are able to automatically identify discrete states with distinct nonlinear neural dynamics. The inferred switches are aligned with the behavior, and the reconstructions show that the recovered neural dynamics are distinct across different stages of the behavior. We show that the neural dynamics exhibit behaviorally relevant switches across time, and that SRNNs successfully capture these switches and the corresponding dynamical features.", "pdf": "https://openreview.net/pdf/a72b19b658dbec5e7192f749e9871e5279caf5ab.pdf"} {"title": "Multi-Armed Bandits with Network Interference", "url": "https://openreview.net/forum?id=ZxZOvVOiiL", "detail_url": "https://openreview.net/forum?id=ZxZOvVOiiL", "authors": "Abhineet Agarwal,Anish Agarwal,Lorenzo Masoero,Justin Whitehouse", "tags": "NIPS 2024,Poster", "abstract": "Online experimentation with interference is a common challenge in modern applications such as e-commerce and adaptive clinical trials in medicine. For example, in online marketplaces, the revenue of a good depends on discounts applied to competing goods. Statistical inference with interference is widely studied in the offline setting, but far less is known about how to adaptively assign treatments to minimize regret. We address this gap by studying a multi-armed bandit (MAB) problem where a learner (e-commerce platform) sequentially assigns one of $\mathcal{A}$ possible actions (discounts) to $N$ units (goods) over $T$ rounds to minimize regret (maximize revenue). Unlike traditional MAB problems, the reward of each unit depends on the treatments assigned to other units, i.e., there is *interference* across the underlying network of units. With $\mathcal{A}$ actions and $N$ units, minimizing regret is combinatorially difficult since the action space grows as $\mathcal{A}^N$. To overcome this issue, we study a *sparse network interference* model, where the reward of a unit is only affected by the treatments assigned to $s$ neighboring units. We use tools from discrete Fourier analysis to develop a sparse linear representation of the unit-specific reward $r_n: [\mathcal{A}]^N \rightarrow \mathbb{R}$, and propose simple, linear regression-based algorithms to minimize regret. Importantly, our algorithms achieve provably low regret both when the learner observes the interference neighborhood for all units and when it is unknown. This significantly generalizes other works on this topic, which impose strict conditions on the strength of interference on a *known* network, and also compare regret to a markedly weaker optimal action.
\nEmpirically, we corroborate our theoretical findings via numerical simulations.", "pdf": "https://openreview.net/pdf/7a7932fd0368f67a09848e14871c164691067a1e.pdf"} {"title": "CIFD: Controlled Information Flow to Enhance Knowledge Distillation", "url": "https://openreview.net/forum?id=xutrKezbPF", "detail_url": "https://openreview.net/forum?id=xutrKezbPF", "authors": "Yashas Malur Saidutta,Rakshith Sharma Srinivasa,Jaejin Cho,Ching-Hua Lee,Chouchang Yang,Yilin Shen,Hongxia Jin", "tags": "NIPS 2024,Poster", "abstract": "Knowledge Distillation is the mechanism by which the insights gained from a larger teacher model are transferred to a smaller student model. However, the transfer suffers when the teacher model is significantly larger than the student. To overcome this, prior works have proposed training intermediately sized models, Teacher Assistants (TAs), to help the transfer process. However, training TAs is expensive, as training these models is a knowledge transfer task in itself. Further, these TAs are larger than the student model, and training them, especially in large-data settings, can be computationally intensive. In this paper, we propose a novel framework called Controlled Information Flow for Knowledge Distillation (CIFD) consisting of two components. First, we propose a significantly smaller alternative to TAs, the Rate-Distortion Module (RDM), which uses the teacher's penultimate-layer embedding and an information rate-constrained bottleneck layer to replace the Teacher Assistant model. RDMs are smaller and easier to train than TAs, especially in large data regimes, since they operate on the teacher embeddings and do not need to relearn low-level input feature extractors. Also, by varying the information rate across the bottleneck, RDMs can replace TAs of different sizes. Second, we propose the use of an Information Bottleneck Module in the student model, which is crucial for regularization in the presence of a large number of RDMs. We show comprehensive state-of-the-art results of the proposed method on large datasets like ImageNet. Further, we show significant improvement in distilling CLIP-like models on a large 12M image-text dataset. It outperforms CLIP-specialized distillation methods across five zero-shot classification datasets and two zero-shot image-text retrieval datasets.", "pdf": "https://openreview.net/pdf/bf757aa0647765798d48b492071c603b7edc57aa.pdf"} {"title": "Communication-Efficient Federated Group Distributionally Robust Optimization", "url": "https://openreview.net/forum?id=xNZEjFe0mh", "detail_url": "https://openreview.net/forum?id=xNZEjFe0mh", "authors": "Zhishuai Guo,Tianbao Yang", "tags": "NIPS 2024,Poster", "abstract": "Federated learning faces challenges due to the heterogeneity in data volumes and distributions at different clients, which can compromise model generalization ability to various distributions. \nExisting approaches to address this issue based on group distributionally robust optimization (GDRO) often lead to high communication and sample complexity.\nTo this end, this work introduces algorithms tailored for communication-efficient Federated Group Distributionally Robust Optimization (FGDRO). Our contributions are threefold: Firstly, we introduce the FGDRO-CVaR algorithm, which optimizes the average top-K losses while reducing communication complexity to $O(1/\epsilon^4)$, where $\epsilon$ denotes the desired precision level.
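(The average-top-K, CVaR-style objective named here is simple to state in isolation; below is a hedged sketch of the loss aggregation only, not of the communication-efficient federated algorithm itself.)

```python
import numpy as np

def average_top_k(losses, k):
    """CVaR-style aggregation: mean of the k largest per-client losses,
    the quantity FGDRO-CVaR is described as optimizing."""
    losses = np.asarray(losses)
    return np.mean(np.sort(losses)[-k:])

client_losses = [0.2, 1.5, 0.7, 3.0, 0.9]
print(average_top_k(client_losses, k=2))  # (3.0 + 1.5) / 2 = 2.25
```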
Secondly, our FGDRO-KL algorithm is crafted to optimize KL regularized FGDRO, cutting communication complexity to $O(1/\\epsilon^3)$. Lastly, we propose FGDRO-KL-Adam to utilize Adam-type local updates in FGDRO-KL, which not only maintains a communication cost of $O(1/\\epsilon^3)$ but also shows potential to surpass SGD-type local steps in practical applications.\nThe effectiveness of our algorithms has been demonstrated on a variety of real-world tasks, including natural language processing and computer vision.", "pdf": "https://openreview.net/pdf/a65e23614800d52b04091555fc2509133c2dc354.pdf"} {"title": "Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models", "url": "https://openreview.net/forum?id=JVKABhr6mP", "detail_url": "https://openreview.net/forum?id=JVKABhr6mP", "authors": "Byung-Kwan Lee,Chae Won Kim,Beomchan Park,Yong Man Ro", "tags": "NIPS 2024,Poster", "abstract": "The rapid development of large language and vision models (LLVMs) has been driven by advances in visual instruction tuning. Recently, open-source LLVMs have curated high-quality visual instruction tuning datasets and utilized additional vision encoders or multiple computer vision models in order to narrow the performance gap with powerful closed-source LLVMs. These advancements are attributed to multifaceted information required for diverse capabilities, including fundamental image understanding, real-world knowledge about common-sense and non-object concepts (e.g., charts, diagrams, symbols, signs, and math problems), and step-by-step procedures for solving complex questions. Drawing from the multifaceted information, we present a new efficient LLVM, Mamba-based traversal of rationales (Meteor), which leverages multifaceted rationale to enhance understanding and answering capabilities. To embed lengthy rationales containing abundant information, we employ the Mamba architecture, capable of processing sequential data with linear time complexity. We introduce a new concept of traversal of rationale that facilitates efficient embedding of rationale. Subsequently, the backbone multimodal language model (MLM) is trained to generate answers with the aid of rationale. Through these steps, Meteor achieves significant improvements in vision language performances across multiple evaluation benchmarks requiring diverse capabilities, without scaling up the model size or employing additional vision encoders and computer vision models.", "pdf": "https://openreview.net/pdf/d68d63de15506ec221657ddca9ceb58cdd2987ea.pdf"} {"title": "Preference Learning Algorithms Do Not Learn Preference Rankings", "url": "https://openreview.net/forum?id=YkJ5BuEXdD", "detail_url": "https://openreview.net/forum?id=YkJ5BuEXdD", "authors": "Angelica Chen,Sadhika Malladi,Lily H Zhang,Xinyi Chen,Qiuyi Zhang,Rajesh Ranganath,Kyunghyun Cho", "tags": "NIPS 2024,Poster", "abstract": "Preference learning algorithms (e.g., RLHF and DPO) are frequently used to steer LLMs to produce generations that are more preferred by humans, but our understanding of their inner workings is still limited. In this work, we study the conventional wisdom that preference learning trains models to assign higher likelihoods to more preferred outputs than less preferred outputs, measured via *ranking accuracy*.\nSurprisingly, we find that most state-of-the-art preference-tuned models achieve a ranking accuracy of less than 60% on common preference datasets. 
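(A small sketch of the ranking-accuracy metric as described: the fraction of preference pairs where the model assigns higher log-likelihood to the preferred completion. The scorer interface below is hypothetical.)

```python
def ranking_accuracy(pairs, logprob):
    """pairs: iterable of (prompt, preferred, dispreferred) triples.
    logprob(prompt, completion) -> model log-likelihood (assumed given).
    Returns the fraction of pairs ranked correctly."""
    correct = sum(
        logprob(p, chosen) > logprob(p, rejected)
        for p, chosen, rejected in pairs
    )
    return correct / len(pairs)

# toy stand-in scorer
toy_scores = {("q", "a+"): -1.0, ("q", "a-"): -2.5}
print(ranking_accuracy([("q", "a+", "a-")],
                       lambda p, c: toy_scores[(p, c)]))  # 1.0
```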
We furthermore derive the *idealized ranking accuracy* that a preference-tuned LLM would achieve if it optimized the DPO or RLHF objective perfectly. We demonstrate that existing models exhibit a significant *alignment gap* -- *i.e.*, a gap between the observed and idealized ranking accuracies. \nWe attribute this discrepancy to the DPO objective, which is empirically and theoretically ill-suited to correct even mild ranking errors in the reference model, and derive a simple and efficient formula for quantifying the difficulty of learning a given preference datapoint.\nFinally, we demonstrate that ranking accuracy strongly correlates with the empirically popular win rate metric when the model is close to the reference model used in the objective, shedding further light on the differences between on-policy (e.g., RLHF) and off-policy (e.g., DPO) preference learning algorithms.", "pdf": "https://openreview.net/pdf/95c0afb58459cccd1a6085c7fc9b42a20a055bec.pdf"} {"title": "Symmetric Linear Bandits with Hidden Symmetry", "url": "https://openreview.net/forum?id=aLzA7MSc6Y", "detail_url": "https://openreview.net/forum?id=aLzA7MSc6Y", "authors": "Nam Phuong Tran,The-Anh Ta,Debmalya Mandal,Long Tran-Thanh", "tags": "NIPS 2024,Poster", "abstract": "High-dimensional linear bandits with low-dimensional structure have received considerable attention in recent studies due to their practical significance. The most common structure in the literature is sparsity. However, it may not be available in practice. Symmetry, where the reward is invariant under certain groups of transformations on the set of arms, is another important inductive bias in the high-dimensional case that covers many standard structures, including sparsity. In this work, we study high-dimensional symmetric linear bandits where the symmetry is hidden from the learner, and the correct symmetry needs to be learned in an online setting. We examine the structure of a collection of hidden symmetries and provide a method based on model selection within the collection of low-dimensional subspaces. Our algorithm achieves a regret bound of $O(d_0^{2/3} T^{2/3} \log(d))$, where $d$ is the ambient dimension which is potentially very large, and $d_0$ is the dimension of the true low-dimensional subspace such that $d_0 \ll d$. With an extra assumption on well-separated models, we can further improve the regret to $O(d_0 \sqrt{T\log(d)})$.", "pdf": "https://openreview.net/pdf/1336a7d4d4672120c9554c0969fd58460f1fd738.pdf"} {"title": "SAMPa: Sharpness-aware Minimization Parallelized", "url": "https://openreview.net/forum?id=IGn0ktYDwV", "detail_url": "https://openreview.net/forum?id=IGn0ktYDwV", "authors": "Wanyun Xie,Thomas Pethick,Volkan Cevher", "tags": "NIPS 2024,Poster", "abstract": "Sharpness-aware minimization (SAM) has been shown to improve the generalization of neural networks. However, each SAM update requires _sequentially_ computing two gradients, effectively doubling the per-iteration cost compared to base optimizers like SGD. We propose a simple modification of SAM, termed SAMPa, which allows us to fully parallelize the two gradient computations. SAMPa achieves a twofold speedup of SAM under the assumption that communication costs between devices are negligible. Empirical results show that SAMPa ranks among the most efficient variants of SAM in terms of computational time. Additionally, our method consistently outperforms SAM across both vision and language tasks.
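(For context on what is being parallelized: a minimal numpy sketch of the standard SAM step, whose two sequential gradient calls are the dependency SAMPa removes. This is vanilla SAM on a toy quadratic, not SAMPa itself.)

```python
import numpy as np

def sam_step(w, grad, lr=0.1, rho=0.05):
    """One vanilla SAM update: the two grad() calls below are the
    sequential dependency that SAMPa's parallel variant eliminates."""
    g = grad(w)                                   # gradient 1
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascend to the 'sharp' point
    g_sharp = grad(w + eps)                       # gradient 2 (depends on gradient 1)
    return w - lr * g_sharp

grad = lambda w: 2 * w                            # toy loss ||w||^2
w = np.array([1.0, -2.0])
for _ in range(5):
    w = sam_step(w, grad)
print(w)
```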
Notably, SAMPa theoretically maintains convergence guarantees even for _fixed_ perturbation sizes, which is established through a novel Lyapunov function. We in fact arrive at SAMPa by treating this convergence guarantee as a hard requirement---an approach we believe is promising for developing SAM-based methods in general. Our code is available at https://github.com/LIONS-EPFL/SAMPa.", "pdf": "https://openreview.net/pdf/528cbf9a7eae9b617350469e36dedd94081c01bc.pdf"} {"title": "Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization", "url": "https://openreview.net/forum?id=u2gzfXRLaN", "detail_url": "https://openreview.net/forum?id=u2gzfXRLaN", "authors": "Omar Montasser,Han Shao,Emmanuel Abbe", "tags": "NIPS 2024,Poster", "abstract": "Learning with identical train and test distributions has been extensively investigated both practically and theoretically. Much remains to be understood, however, in statistical learning under distribution shifts. This paper focuses on a distribution shift setting where train and test distributions can be related by classes of (data) transformation maps. We initiate a theoretical study for this framework, investigating learning scenarios where the target class of transformations is either known or unknown. We establish learning rules and algorithmic reductions to Empirical Risk Minimization (ERM), accompanied with learning guarantees. We obtain upper bounds on the sample complexity in terms of the VC dimension of the class composing predictors with transformations, which we show in many cases is not much larger than the VC dimension of the class of predictors. We highlight that the learning rules we derive offer a game-theoretic viewpoint on distribution shift: a learner searching for predictors and an adversary searching for transformation maps to respectively minimize and maximize the worst-case loss.", "pdf": "https://openreview.net/pdf/6df5ab210a13b0c96e72de1101a3d2ac1755a6fd.pdf"} {"title": "A Theory of Optimistically Universal Online Learnability for General Concept Classes", "url": "https://openreview.net/forum?id=EAbNopo3os", "detail_url": "https://openreview.net/forum?id=EAbNopo3os", "authors": "Steve Hanneke,Hongao Wang", "tags": "NIPS 2024,Poster", "abstract": "We provide a full characterization of the concept classes that are optimistically universally online learnable with {0, 1} labels. The notion of optimistically universal online learning was defined in [Hanneke, 2021] in order to understand learnability under minimal assumptions. In this paper, following the philosophy behind that work, we investigate two questions, namely, for every concept class: (1) What are the minimal assumptions on the data process admitting online learnability? (2) Is there a learning algorithm which succeeds under every data process satisfying the minimal assumptions? Such an algorithm is said to be optimistically universal for the given concept class. We resolve both of these questions for all concept classes, and moreover, as part of our solution we design general learning algorithms for each case. 
Finally, we extend these algorithms and results to the agnostic case, showing an equivalence between the minimal assumptions on the data process for learnability in the agnostic and realizable cases, for every concept class, as well as the equivalence of optimistically universal learnability.", "pdf": "https://openreview.net/pdf/27e398868d5b35ab4ec0f66658b41b4f93dfcfec.pdf"} {"title": "The Bayesian sampling in a canonical recurrent circuit with a diversity of inhibitory interneurons", "url": "https://openreview.net/forum?id=VNmi0FHn6Z", "detail_url": "https://openreview.net/forum?id=VNmi0FHn6Z", "authors": "Eryn Sale,Wenhao Zhang", "tags": "NIPS 2024,Poster", "abstract": "Accumulating evidence suggests stochastic cortical circuits can perform sampling-based Bayesian inference to compute the latent stimulus posterior. Canonical cortical circuits consist of excitatory (E) neurons and several types of inhibitory (I) interneurons. Nevertheless, almost no sampling neural circuit models consider the diversity of interneurons, and thus how interneurons contribute to sampling remains poorly understood. To provide theoretical insight, we build a nonlinear canonical circuit model consisting of recurrently connected E neurons and two types of I neurons, Parvalbumin (PV) and Somatostatin (SOM) neurons. The E neurons are modeled as a canonical ring (attractor) model, receiving global inhibition from PV neurons and locally tuning-dependent inhibition from SOM neurons.\nWe theoretically analyze the nonlinear circuit dynamics and analytically identify the Bayesian sampling algorithm performed by the circuit dynamics. We find that a reduced circuit with only E and PV neurons performs Langevin sampling, and that the inclusion of SOM neurons with tuning-dependent inhibition speeds up the sampling by upgrading Langevin into Hamiltonian sampling. Moreover, the Hamiltonian framework requires SOM neurons to receive no direct feedforward connections, consistent with neuroanatomy. Our work provides overarching connections between nonlinear circuits with various types of interneurons and sampling algorithms, deepening our understanding of circuit implementation of Bayesian inference.", "pdf": "https://openreview.net/pdf/b1876afc8494a337b561fc0b83ea09026756814e.pdf"} {"title": "Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation", "url": "https://openreview.net/forum?id=1iHmhMHNyA", "detail_url": "https://openreview.net/forum?id=1iHmhMHNyA", "authors": "Jiawei Wang,Renhe Jiang,Chuang Yang,Zengqing Wu,Makoto Onizuka,Ryosuke Shibasaki,Noboru Koshizuka,Chuan Xiao", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces a novel approach using Large Language Models (LLMs) integrated into an agent framework for flexible and effective personal mobility generation. LLMs overcome the limitations of previous models by effectively processing semantic data and offering versatility in modeling various tasks. Our approach addresses three research questions: aligning LLMs with real-world urban mobility data, developing reliable activity generation strategies, and exploring LLM applications in urban mobility. The key technical contribution is a novel LLM agent framework that accounts for individual activity patterns and motivations, including a self-consistency approach to align LLMs with real-world activity data and a retrieval-augmented strategy for interpretable activity generation.
We evaluate our LLM agent framework and compare it with state-of-the-art personal mobility generation approaches, demonstrating the effectiveness of our approach and its potential applications in urban mobility. Overall, this study marks the pioneering work of designing an LLM agent framework for activity generation based on real-world human activity data, offering a promising tool for urban mobility analysis.", "pdf": "https://openreview.net/pdf/68fd30200398640d70b139a21fd58f711886738c.pdf"} {"title": "Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data", "url": "https://openreview.net/forum?id=uO53206oLJ", "detail_url": "https://openreview.net/forum?id=uO53206oLJ", "authors": "Jiaojiao Zhang,Jiang Hu,Anthony Man-Cho So,Mikael Johansson", "tags": "NIPS 2024,Poster", "abstract": "Many machine learning tasks, such as principal component analysis and low-rank matrix completion, give rise to manifold optimization problems. Although there is a large body of work studying the design and analysis of algorithms for manifold optimization in the centralized setting, there are currently very few works addressing the federated setting. In this paper, we consider nonconvex federated learning\nover a compact smooth submanifold in the setting of heterogeneous client data. We propose an algorithm that leverages stochastic Riemannian gradients and a manifold projection operator to improve computational efficiency, uses local updates to improve communication efficiency, and avoids client drift. Theoretically, we show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis that jointly exploits the manifold structure and properties of the loss functions. Numerical experiments demonstrate that our algorithm has significantly smaller computational and communication overhead than existing methods.", "pdf": "https://openreview.net/pdf/0a6ebdd721a6f62f7e75a245c3c48a52794ba892.pdf"} {"title": "Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis", "url": "https://openreview.net/forum?id=AFnSMlye5K", "detail_url": "https://openreview.net/forum?id=AFnSMlye5K", "authors": "Jiayu Su,David A. Knowles,Raul Rabadan", "tags": "NIPS 2024,Poster", "abstract": "The success of machine learning models relies heavily on effectively representing high-dimensional data. However, ensuring data representations capture human-understandable concepts remains difficult, often requiring the incorporation of prior knowledge and decomposition of data into multiple subspaces. Traditional linear methods fall short in modeling more than one space, while more expressive deep learning approaches lack interpretability. Here, we introduce Supervised Independent Subspace Principal Component Analysis ($\\texttt{sisPCA}$), a PCA extension designed for multi-subspace learning. Leveraging the Hilbert-Schmidt Independence Criterion (HSIC), $\\texttt{sisPCA}$ incorporates supervision and simultaneously ensures subspace disentanglement. We demonstrate $\\texttt{sisPCA}$'s connections with autoencoders and regularized linear regression and showcase its ability to identify and separate hidden data structures through extensive applications, including breast cancer diagnosis from image features, learning aging-associated DNA methylation changes, and single-cell analysis of malaria infection. 
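(The HSIC dependence measure referenced above has a standard empirical estimator, $\mathrm{tr}(KHLH)/(n-1)^2$; a short sketch with linear kernels follows -- the kernel choice here is our assumption.)

```python
import numpy as np

def hsic(X, Y):
    """Empirical HSIC with linear kernels: tr(K H L H) / (n-1)^2.
    sisPCA is described as using HSIC terms to supervise subspaces
    and keep them disentangled."""
    n = X.shape[0]
    K, L = X @ X.T, Y @ Y.T
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
print(hsic(X, X[:, :2]))                     # dependent: typically large
print(hsic(X, rng.normal(size=(100, 2))))    # independent: near zero
```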
Our results reveal distinct functional pathways associated with malaria colonization, underscoring the essentiality of explainable representation in high-dimensional data analysis.", "pdf": "https://openreview.net/pdf/720441a6bfcaf097936eec73adbaf8ecdc6d8b1b.pdf"} {"title": "Optimal Multiclass U-Calibration Error and Beyond", "url": "https://openreview.net/forum?id=7aFRgCC8Q7", "detail_url": "https://openreview.net/forum?id=7aFRgCC8Q7", "authors": "Haipeng Luo,Spandan Senapati,Vatsal Sharan", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of online multiclass U-calibration, where a forecaster aims to make sequential distributional predictions over $K$ classes with low U-calibration error, that is, low regret with respect to all bounded proper losses simultaneously.\n Kleinberg et al. (2023) developed an algorithm with U-calibration error $\\mathcal{O}(K\\sqrt{T})$ after $T$ rounds and raised the open question of what the optimal bound is.\n We resolve this question by showing that the optimal U-calibration error is $\\Theta(\\sqrt{KT})$ --- we start with a simple observation that the Follow-the-Perturbed-Leader algorithm of Daskalakis and Syrgkanis (2016) achieves this upper bound, followed by a matching lower bound constructed with a specific proper loss (which, as a side result, also proves the optimality of the algorithm of Daskalakis and Syrgkanis (2016) in the context of online learning against an adversary with finite choices).\n We also strengthen our results under natural assumptions on the loss functions, including $\\Theta(\\log T)$ U-calibration error for Lipschitz proper losses, $\\mathcal{O}(\\log T)$ U-calibration error for a certain class of decomposable proper losses, U-calibration error bounds for proper losses with a low covering number, and others.", "pdf": "https://openreview.net/pdf/82cd1a97753a9cf42f5ae24fcbc2898155113218.pdf"} {"title": "REBEL: Reinforcement Learning via Regressing Relative Rewards", "url": "https://openreview.net/forum?id=yxjWAJzUyV", "detail_url": "https://openreview.net/forum?id=yxjWAJzUyV", "authors": "Zhaolin Gao,Jonathan Daniel Chang,Wenhao Zhan,Owen Oertell,Gokul Swamy,Kiant\u00e9 Brantley,Thorsten Joachims,J. Andrew Bagnell,Jason D. Lee,Wen Sun", "tags": "NIPS 2024,Poster", "abstract": "While originally developed for continuous control problems, Proximal Policy Optimization (PPO) has emerged as the work-horse of a variety of reinforcement learning (RL) applications, including the fine-tuning of generative models. Unfortunately, PPO requires multiple heuristics to enable stable convergence (e.g. value networks, clipping), and is notorious for its sensitivity to the precise implementation of these components. In response, we take a step back and ask what a *minimalist* RL algorithm for the era of generative models would look like. We propose REBEL, an algorithm that cleanly reduces the problem of policy optimization to regressing the *relative reward* between two completions to a prompt in terms of the policy, enabling strikingly lightweight implementation. In theory, we prove that fundamental RL algorithms like Natural Policy Gradient can be seen as variants of REBEL, which allows us to match the strongest known theoretical guarantees in terms of convergence and sample complexity in the RL literature. REBEL can also cleanly incorporate offline data and be extended to handle the intransitive preferences we frequently see in practice. 
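(A hedged sketch of the core reduction as the abstract states it: least-squares regression of the relative reward of two completions onto the difference of policy log-ratios. The $1/\eta$ scaling and reference-policy parameterization below are our reading; treat them as illustrative.)

```python
import torch

def rebel_loss(logp_new, logp_ref, rewards, eta=1.0):
    """logp_* : (batch, 2) log-probs of two completions (y, y') under the
    current and reference policies; rewards: (batch, 2) observed rewards.
    Least-squares regression of relative reward onto relative log-ratio."""
    log_ratio = logp_new - logp_ref                     # (batch, 2)
    pred = (log_ratio[:, 0] - log_ratio[:, 1]) / eta    # model's implied reward gap
    target = rewards[:, 0] - rewards[:, 1]              # observed reward gap
    return ((pred - target) ** 2).mean()

logp_new = torch.randn(4, 2, requires_grad=True)
loss = rebel_loss(logp_new, torch.randn(4, 2), torch.randn(4, 2))
loss.backward()
print(loss.item())
```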
Empirically, we find that REBEL provides a unified approach to language modeling and image generation with stronger or similar performance as PPO and DPO, all while being simpler to implement and more computationally efficient than PPO. When fine-tuning Llama-3-8B-Instruct, REBEL achieves strong performance in AlpacaEval 2.0, MT-Bench, and Open LLM Leaderboard. Implementation of REBEL can be found at , and models trained by REBEL can be found at .", "pdf": "https://openreview.net/pdf/593bf12fbcf5e521841f522017b13aceee20e6e5.pdf"} {"title": "Amortized Active Causal Induction with Deep Reinforcement Learning", "url": "https://openreview.net/forum?id=7AXY27kdNH", "detail_url": "https://openreview.net/forum?id=7AXY27kdNH", "authors": "Yashas Annadani,Panagiotis Tigas,Stefan Bauer,Adam Foster", "tags": "NIPS 2024,Poster", "abstract": "We present Causal Amortized Active Structure Learning (CAASL), an active intervention design policy that can select interventions that are adaptive, real-time and that does not require access to the likelihood. This policy, an amortized network based on the transformer, is trained with reinforcement learning on a simulator of the design environment, and a reward function that measures how close the true causal graph is to a causal graph posterior inferred from the gathered data. On synthetic data and a single-cell gene expression simulator, we demonstrate empirically that the data acquired through our policy results in a better estimate of the underlying causal graph than alternative strategies. Our design policy successfully achieves amortized intervention design on the distribution of the training environment while also generalizing well to distribution shifts in test-time design environments. Further, our policy also demonstrates excellent zero-shot generalization to design environments with dimensionality higher than that during training, and to intervention types that it has not been trained on.", "pdf": "https://openreview.net/pdf/bbc0abf6bfbdb21646362fb7cd41326593efb1b9.pdf"} {"title": "User-item fairness tradeoffs in recommendations", "url": "https://openreview.net/forum?id=ZOZjMs3JTs", "detail_url": "https://openreview.net/forum?id=ZOZjMs3JTs", "authors": "Sophie Greenwood,Sudalakshmee Chiniah,Nikhil Garg", "tags": "NIPS 2024,Poster", "abstract": "In the basic recommendation paradigm, the most (predicted) relevant item is recommended to each user. This may result in some items receiving lower exposure than they \"should\"; to counter this, several algorithmic approaches have been developed to ensure *item fairness*. These approaches necessarily degrade recommendations for some users to improve outcomes for items, leading to *user fairness* concerns. In turn, a recent line of work has focused on developing algorithms for multi-sided fairness, to jointly optimize user fairness, item fairness, and overall recommendation quality. This induces the question: *what is the tradeoff between these objectives, and what are the characteristics of (multi-objective) optimal solutions?* Theoretically, we develop a model of recommendations with user and item fairness objectives and characterize the solutions of fairness-constrained optimization. We identify two phenomena: (a) when user preferences are diverse, there is \"free\" item and user fairness; and (b) users whose preferences are misestimated can be *especially* disadvantaged by item fairness constraints. 
Empirically, we prototype a recommendation system for preprints on arXiv and implement our framework, measuring the phenomena in practice and showing how these phenomena inform the *design* of markets with recommendation systems-intermediated matching.", "pdf": "https://openreview.net/pdf/a6194046ea76f3fed36b606423dbac2c3a30f65d.pdf"} {"title": "Language Grounded Multi-agent Reinforcement Learning with Human-interpretable Communication", "url": "https://openreview.net/forum?id=DUHX779C5q", "detail_url": "https://openreview.net/forum?id=DUHX779C5q", "authors": "Huao Li,Hossein Nourkhiz Mahjoub,Behdad Chalaki,Vaishnav Tadiparthi,Kwonjoon Lee,Ehsan Moradi Pari,Charles Michael Lewis,Katia P. Sycara", "tags": "NIPS 2024,Poster", "abstract": "Multi-Agent Reinforcement Learning (MARL) methods have shown promise in enabling agents to learn a shared communication protocol from scratch and accomplish challenging team tasks. However, the learned language is usually not interpretable to humans or other agents not co-trained together, limiting its applicability in ad-hoc teamwork scenarios. In this work, we propose a novel computational pipeline that aligns the communication space between MARL agents with an embedding space of human natural language by grounding agent communications on synthetic data generated by embodied Large Language Models (LLMs) in interactive teamwork scenarios. Our results demonstrate that introducing language grounding not only maintains task performance but also accelerates the emergence of communication. Furthermore, the learned communication protocols exhibit zero-shot generalization capabilities in ad-hoc teamwork scenarios with unseen teammates and novel task states. This work presents a significant step toward enabling effective communication and collaboration between artificial agents and humans in real-world teamwork settings.", "pdf": "https://openreview.net/pdf/6664b584d62b376a0ad3f0a353ad58f8ef896b2e.pdf"} {"title": "Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning", "url": "https://openreview.net/forum?id=HXdAfK488A", "detail_url": "https://openreview.net/forum?id=HXdAfK488A", "authors": "Wasu Top Piriyakulkij,Cassidy Langenfeld,Tuan Anh Le,Kevin Ellis", "tags": "NIPS 2024,Poster", "abstract": "We give a model of how to infer natural language rules by doing experiments. The\nmodel integrates Large Language Models (LLMs) with Monte Carlo algorithms for\nprobabilistic inference, interleaving online belief updates with experiment design\nunder information-theoretic criteria. We conduct a human-model comparison on a\nZendo-style task, finding that a critical ingredient for modeling the human data is to\nassume that humans also consider fuzzy, probabilistic rules, in addition to assuming\nthat humans perform approximately-Bayesian belief updates. 
We also compare\nwith recent algorithms for using LLMs to generate and revise hypotheses, finding\nthat our online inference method yields higher accuracy at recovering the true\nunderlying rule, and provides better support for designing optimal experiments.", "pdf": "https://openreview.net/pdf/939890c333016bb05e716f12f54bebba3fc3b5dc.pdf"} {"title": "Tree of Attacks: Jailbreaking Black-Box LLMs Automatically", "url": "https://openreview.net/forum?id=SoM3vngOH5", "detail_url": "https://openreview.net/forum?id=SoM3vngOH5", "authors": "Anay Mehrotra,Manolis Zampetakis,Paul Kassianik,Blaine Nelson,Hyrum S Anderson,Yaron Singer,Amin Karbasi", "tags": "NIPS 2024,Poster", "abstract": "While Large Language Models (LLMs) display versatile functionality, they continue to generate harmful, biased, and toxic content, as demonstrated by the prevalence of human-designed *jailbreaks*. In this work, we present *Tree of Attacks with Pruning* (TAP), an automated method for generating jailbreaks that only requires black-box access to the target LLM. TAP utilizes an attacker LLM to iteratively refine candidate (attack) prompts until one of the refined prompts jailbreaks the target. In addition, before sending prompts to the target, TAP assesses them and prunes the ones unlikely to result in jailbreaks, reducing the number of queries sent to the target LLM. In empirical evaluations, we observe that TAP generates prompts that jailbreak state-of-the-art LLMs (including GPT4-Turbo and GPT4o) for more than 80% of the prompts. This significantly improves upon the previous state-of-the-art black-box methods for generating jailbreaks while using a smaller number of queries than them. Furthermore, TAP is also capable of jailbreaking LLMs protected by state-of-the-art *guardrails*, e.g., LlamaGuard.", "pdf": "https://openreview.net/pdf/4795c11baf761e1c1bfdee844318f70047907116.pdf"} {"title": "Sequential Probability Assignment with Contexts: Minimax Regret, Contextual Shtarkov Sums, and Contextual Normalized Maximum Likelihood", "url": "https://openreview.net/forum?id=uRnTYPkF3V", "detail_url": "https://openreview.net/forum?id=uRnTYPkF3V", "authors": "Ziyi Liu,Idan Attias,Daniel M. Roy", "tags": "NIPS 2024,Poster", "abstract": "We study the fundamental problem of sequential probability assignment, also known as online learning with logarithmic loss, with respect to an arbitrary, possibly nonparametric hypothesis class. Our goal is to obtain a complexity measure for the hypothesis class that characterizes the minimax regret and to determine a general, minimax optimal algorithm. Notably, the sequential $\\ell_{\\infty}$ entropy, extensively studied in the literature (Rakhlin and Sridharan, 2015, Bilodeau et al., 2020, Wu et al., 2023), was shown to not characterize minimax regret in general. Inspired by the seminal work of Shtarkov (1987)\n and Rakhlin, Sridharan, and Tewari (2010), we introduce a novel complexity measure, the \\emph{contextual Shtarkov sum}, corresponding to the Shtarkov sum after projection onto a multiary context tree, and show that the worst case log contextual Shtarkov sum equals the minimax regret. Using the contextual Shtarkov sum, we derive the minimax optimal strategy, dubbed \\emph{contextual Normalized Maximum Likelihood} (cNML). Our results hold for sequential experts, beyond binary labels, which are settings rarely considered in prior work. 
\n To illustrate the utility of this characterization, we provide a short proof of a new regret upper bound in terms of sequential $\\ell_{\\infty}$ entropy, unifying and sharpening state-of-the-art bounds by Bilodeau et al. (2020) and Wu et al. (2023).", "pdf": "https://openreview.net/pdf/45cc567d0c06061737d622879a849ccfc81d9c05.pdf"} {"title": "Multi-language Diversity Benefits Autoformalization", "url": "https://openreview.net/forum?id=2jjfRm2R6D", "detail_url": "https://openreview.net/forum?id=2jjfRm2R6D", "authors": "Albert Q. Jiang,Wenda Li,Mateja Jamnik", "tags": "NIPS 2024,Poster", "abstract": "Autoformalization is the task of translating natural language materials into machine-verifiable formalisations. Progress in autoformalization research is hindered by the lack of a sizeable dataset consisting of informal-formal pairs expressing the same essence. Existing methods tend to circumvent this challenge by manually curating small corpora or using few-shot learning with large language models. But these methods suffer from data scarcity and formal language acquisition difficulty. In this work, we create mma, a large, flexible, multi-language, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones. Experiments show that language models fine-tuned on mma can produce up to $29-31$\\% of statements acceptable with minimal corrections on the miniF2F and ProofNet benchmarks, up from $0$\\% with the base model. We demonstrate that fine-tuning on multi-language formal data results in more capable autoformalization models even on single-language tasks.", "pdf": "https://openreview.net/pdf/dbbeb43c17cecc25edccd1f44c16264838e429b8.pdf"} {"title": "PSL: Rethinking and Improving Softmax Loss from Pairwise Perspective for Recommendation", "url": "https://openreview.net/forum?id=PhjnK9KWOx", "detail_url": "https://openreview.net/forum?id=PhjnK9KWOx", "authors": "Weiqin Yang,Jiawei Chen,Xin Xin,Sheng Zhou,Binbin Hu,Yan Feng,Chun Chen,Can Wang", "tags": "NIPS 2024,Poster", "abstract": "Softmax Loss (SL) is widely applied in recommender systems (RS) and has demonstrated effectiveness. This work analyzes SL from a pairwise perspective, revealing two significant limitations: 1) the relationship between SL and conventional ranking metrics like DCG is not sufficiently tight; 2) SL is highly sensitive to false negative instances. Our analysis indicates that these limitations are primarily due to the use of the exponential function. To address these issues, this work extends SL to a new family of loss functions, termed Pairwise Softmax Loss (PSL), which replaces the exponential function in SL with other appropriate activation functions. While the revision is minimal, we highlight three merits of PSL: 1) it serves as a tighter surrogate for DCG with suitable activation functions; 2) it better balances data contributions; and 3) it acts as a specific BPR loss enhanced by Distributionally Robust Optimization (DRO). We further validate the effectiveness and robustness of PSL through empirical experiments. 
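(For intuition on the pairwise view: sampled softmax loss can be rewritten as $\log(1 + \sum_j e^{s_j - s_+})$, and PSL swaps the exponential for another activation. The specific alternative below is a placeholder, not necessarily one of the paper's choices.)

```python
import numpy as np

def pairwise_softmax_loss(pos_score, neg_scores, act=np.exp):
    """Softmax loss in pairwise form: log(1 + sum_j act(s_j - s_pos)).
    With act=np.exp this is exactly sampled softmax loss; a PSL-style
    variant replaces exp with another activation."""
    diffs = np.asarray(neg_scores) - pos_score
    return np.log1p(np.sum(act(diffs)))

s_pos, s_negs = 2.0, [0.5, 1.0, -0.3]
print(pairwise_softmax_loss(s_pos, s_negs))  # standard SL
print(pairwise_softmax_loss(s_pos, s_negs,
                            act=lambda d: np.maximum(d + 1, 0)))  # placeholder ReLU-style variant
```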
The code is available at https://github.com/Tiny-Snow/IR-Benchmark.", "pdf": "https://openreview.net/pdf/1c94dc4e0b5e11e6bf6e29f5acb683a120b33718.pdf"} {"title": "Rethinking the Capacity of Graph Neural Networks for Branching Strategy", "url": "https://openreview.net/forum?id=FEmag0szWo", "detail_url": "https://openreview.net/forum?id=FEmag0szWo", "authors": "Ziang Chen,Jialin Liu,Xiaohan Chen,Xinshang Wang,Wotao Yin", "tags": "NIPS 2024,Poster", "abstract": "Graph neural networks (GNNs) have been widely used to predict properties and heuristics of mixed-integer linear programs (MILPs) and hence accelerate MILP solvers. This paper investigates the capacity of GNNs to represent strong branching (SB), the most effective yet computationally expensive heuristic employed in the branch-and-bound algorithm. In the literature, the message-passing GNN (MP-GNN), as the simplest GNN structure, is frequently used as a fast approximation of SB, and we find that not all MILPs' SB can be represented with MP-GNN. We precisely define a class of \"MP-tractable\" MILPs for which MP-GNNs can accurately approximate SB scores. Particularly, we establish a universal approximation theorem: for any data distribution over the MP-tractable class, there always exists an MP-GNN that can approximate the SB score with arbitrarily high accuracy and arbitrarily high probability, which lays a theoretical foundation for the existing works on imitating SB with MP-GNN. For MILPs without MP-tractability, unfortunately, a similar result is impossible, which can be illustrated by two MILP instances with different SB scores that cannot be distinguished by any MP-GNN, regardless of the number of parameters. Recognizing this, we explore another GNN structure called the second-order folklore GNN (2-FGNN) that overcomes this limitation, and the aforementioned universal approximation theorem can be extended to the entire MILP space using 2-FGNN, regardless of MP-tractability. A small-scale numerical experiment is conducted to directly validate our theoretical findings.", "pdf": "https://openreview.net/pdf/6db61b59e054fcc7d0bbffd639194594cb0a8300.pdf"} {"title": "On Weak Regret Analysis for Dueling Bandits", "url": "https://openreview.net/forum?id=dY4YGqvfgW", "detail_url": "https://openreview.net/forum?id=dY4YGqvfgW", "authors": "El Mehdi Saad,Alexandra Carpentier,Tom\u00e1\u0161 Koc\u00e1k,Nicolas Verzelen", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of $K$-armed dueling bandits in the stochastic setting, under the sole assumption of the existence of a Condorcet winner. We study the objective of weak regret minimization, where the learner does not incur any loss if one of the selected arms is a Condorcet winner\u2014unlike strong regret minimization, where the learner has to select the Condorcet winner twice to incur no loss. This study is particularly motivated by practical scenarios such as content recommendation and online advertising, where frequently only one optimal choice out of the two presented options is necessary to achieve user satisfaction or engagement. This necessitates the development of strategies with more exploration. While existing literature introduces strategies for weak regret with constant bounds (that do not depend on the time horizon), the optimality of these strategies remains an unresolved question.
This problem turns out to be particularly challenging, as the optimal regret should heavily depend on the full structure of the dueling problem at hand, and in particular on whether the Condorcet winner has a large minimal optimality gap with the other arms. Our contribution is threefold: First, when said optimality gap is not negligible compared to other properties of the gap matrix, we characterize the optimal budget as a function of $K$ and the optimality gap. Second, we propose a new strategy called \wrtinf that achieves this optimal regret and improves over the state-of-the-art both in $K$ and the optimality gap. When the optimality gap is negligible, we propose another algorithm that outperforms our first algorithm, highlighting the subtlety of this dueling bandit problem. Finally, we provide numerical simulations to assess our theoretical findings.", "pdf": "https://openreview.net/pdf/1a70063d7bfc76b3d9706d42ae91962b3ec04c3d.pdf"} {"title": "ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling", "url": "https://openreview.net/forum?id=SLuZpdMDFg", "detail_url": "https://openreview.net/forum?id=SLuZpdMDFg", "authors": "Francesca Babiloni,Alexandros Lattas,Jiankang Deng,Stefanos Zafeiriou", "tags": "NIPS 2024,Poster", "abstract": "We propose ID-to-3D, a method to generate identity- and text-guided 3D human heads with disentangled expressions, starting from even a single casually captured \u2018in-the-wild\u2019 image of a subject. The foundation of our approach is anchored in compositionality, alongside the use of task-specific 2D diffusion models as priors for optimization. First, we extend a foundational model with a lightweight expression-aware and ID-aware architecture, and create 2D priors for geometric and texture generation, via fine-tuning only 0.2% of its available training parameters. Then, we jointly leverage a neural parametric representation for the expression of each subject and a multi-stage generation of highly detailed geometry and albedo texture. This combination of strong face identity embeddings and our neural representation enables accurate reconstruction of not only facial features but also accessories and hair, and can be meshed to provide render-ready assets for gaming and telepresence. Our results achieve an unprecedented level of ID-consistent and high-quality texture and geometry generation, generalizing to a \u2018world\u2019 of unseen 3D identities, without relying on large 3D captured datasets of human assets.", "pdf": "https://openreview.net/pdf/92a04336ecde309751dd0e4c481515d1d79f2087.pdf"} {"title": "Credit Attribution and Stable Compression", "url": "https://openreview.net/forum?id=cRLFvSOrzt", "detail_url": "https://openreview.net/forum?id=cRLFvSOrzt", "authors": "Roi Livni,Shay Moran,Kobbi Nissim,Chirag Pabbaraju", "tags": "NIPS 2024,Poster", "abstract": "Credit attribution is crucial across various fields. In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators.\n\nWe study credit attribution by machine learning algorithms. We propose new definitions--relaxations of Differential Privacy--that weaken the stability guarantees for a designated subset of $k$ datapoints. These $k$ datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation.
Meanwhile, the remaining datapoints are guaranteed to have no significant influence on the algorithm's output.\n\nOur framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance),\nand stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm).\nWe examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research.", "pdf": "https://openreview.net/pdf/24cf86723cf1c5a9bbcad1ff71808adf33f01103.pdf"} {"title": "Robustly overfitting latents for flexible neural image compression", "url": "https://openreview.net/forum?id=NQB9myZksw", "detail_url": "https://openreview.net/forum?id=NQB9myZksw", "authors": "Yura Perugachi-Diaz,Arwin Gansekoele,Sandjai Bhulai", "tags": "NIPS 2024,Poster", "abstract": "Neural image compression has made a great deal of progress. State-of-the-art models are based on variational autoencoders and are outperforming classical models. Neural compression models learn to encode an image into a quantized latent representation that can be efficiently sent to the decoder, which decodes the quantized latent into a reconstructed image. While these models have proven successful in practice, they lead to sub-optimal results due to imperfect optimization and limitations in the encoder and decoder capacity. Recent work shows how to use stochastic Gumbel annealing (SGA) to refine the latents of pre-trained neural image compression models. \nWe extend this idea by introducing SGA+, which contains three different methods that build upon SGA.\nWe show how our method improves the overall compression performance in terms of the R-D trade-off, compared to its predecessors. Additionally, we show how refinement of the latents with our best-performing method improves the compression performance on both the Tecnick and CLIC dataset. Our method is deployed for a pre-trained hyperprior and for a more flexible model.\nFurther, we give a detailed analysis of our proposed methods and show that they are less sensitive to hyperparameter choices. 
Finally, we show how each method can be extended to three- instead of two-class rounding.", "pdf": "https://openreview.net/pdf/f2fc6ad684fcfaba2c4700d7c66b0cdd859bab53.pdf"} {"title": "Is Programming by Example solved by LLMs?", "url": "https://openreview.net/forum?id=xqc8yyhScL", "detail_url": "https://openreview.net/forum?id=xqc8yyhScL", "authors": "Wen-Ding Li,Kevin Ellis", "tags": "NIPS 2024,Poster", "abstract": "Programming-by-Examples (PBE) aims to generate an algorithm from input-output examples.\nSuch systems are practically and theoretically important:\nfrom an end-user perspective, they are deployed to millions of people, and from an AI perspective, PBE corresponds to a very general form of few-shot inductive inference.\nGiven the success of Large Language Models (LLMs) in code-generation tasks, we investigate here the extent to which LLMs can be said to have \"solved\" PBE.\nWe experiment on classic domains such as lists and strings, and an uncommon graphics programming domain not well represented in typical pretraining data.\nWe find that pretrained models are not effective at PBE, but that they can be fine-tuned for much higher performance, provided the test problems are in-distribution.\nWe analyze empirically what causes these models to succeed and fail, and take steps toward understanding how to achieve better out-of-distribution generalization.\nCollectively these results suggest that LLMs make strong progress toward solving the typical suite of PBE tasks, potentially increasing the flexibility and applicability of PBE systems, while also identifying ways in which LLMs still fall short.", "pdf": "https://openreview.net/pdf/33f765535b07939dd68a31a9eb396750d9b18474.pdf"} {"title": "SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning", "url": "https://openreview.net/forum?id=Gqou8PRgWq", "detail_url": "https://openreview.net/forum?id=Gqou8PRgWq", "authors": "Yexiao He,Ziyao Wang,Zheyu Shen,Guoheng Sun,Yucong Dai,Yongkai Wu,Hongyi Wang,Ang Li", "tags": "NIPS 2024,Poster", "abstract": "The pre-trained Large Language Models (LLMs) can be adapted for many downstream tasks and tailored to align with human preferences through fine-tuning. Recent studies have discovered that LLMs can achieve desirable performance with only a small amount of high-quality data, suggesting that a large portion of the data in these extensive datasets is redundant or even harmful. Identifying high-quality data from vast datasets to curate small yet effective datasets has emerged as a critical challenge. In this paper, we introduce SHED, an automated dataset refinement framework based on Shapley value for instruction fine-tuning. SHED eliminates the need for human intervention or the use of commercial LLMs. Moreover, the datasets curated through SHED exhibit transferability, indicating they can be reused across different LLMs with consistently high performance. We conduct extensive experiments to evaluate the datasets curated by SHED. 
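(For background on the Shapley machinery SHED builds on, a generic Monte Carlo permutation estimator of per-datapoint Shapley values -- the standard data-valuation recipe; SHED's actual framework is more elaborate and avoids this naive per-subset retraining cost.)

```python
import random

def shapley_values(points, utility, n_perms=200, seed=0):
    """Monte Carlo Shapley estimate: average marginal contribution of
    each datapoint over random orderings. utility(subset) -> score,
    e.g., validation accuracy of a model trained on that subset."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in points}
    for _ in range(n_perms):
        order = points[:]
        rng.shuffle(order)
        subset, prev = [], utility([])
        for p in order:
            subset.append(p)
            cur = utility(subset)
            phi[p] += (cur - prev) / n_perms
            prev = cur
    return phi

# toy utility: diminishing returns in subset size
print(shapley_values([1, 2, 3], lambda s: len(s) ** 0.5))
```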
The results demonstrate SHED's superiority over state-of-the-art methods across various tasks and LLMs; notably, datasets comprising only 10% of the original data selected by SHED achieve performance comparable to or surpassing that of the full datasets.", "pdf": "https://openreview.net/pdf/846d9d6a053222ba873273dd298c3f351c6ee6c8.pdf"} {"title": "WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment", "url": "https://openreview.net/forum?id=QGJSXMhVaL", "detail_url": "https://openreview.net/forum?id=QGJSXMhVaL", "authors": "Hao Tang,Darren Yan Key,Kevin Ellis", "tags": "NIPS 2024,Poster", "abstract": "We give a model-based agent that builds a Python program representing its knowledge of the world based on its interactions with the environment. The world model tries to explain its interactions, while also being optimistic about what reward it can achieve. We define this optimism as a logical constraint between a program and a planner. We study our agent on gridworlds, and on task planning, finding our approach is more sample-efficient compared to deep RL, more compute-efficient compared to ReAct-style agents, and that it can transfer its knowledge across environments by editing its code.", "pdf": "https://openreview.net/pdf/29aa008f69c07f80f1be31b7e9909f05dcad7e2a.pdf"} {"title": "Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise", "url": "https://openreview.net/forum?id=RxXdokK2qz", "detail_url": "https://openreview.net/forum?id=RxXdokK2qz", "authors": "Sebastian Allmeier,Nicolas Gast", "tags": "NIPS 2024,Poster", "abstract": "We study stochastic approximation algorithms with Markovian noise and constant step-size $\\alpha$. We develop a method based on infinitesimal generator comparisons to study the bias of the algorithm, which is the expected difference between $\\theta_n$ ---the value at iteration $n$--- and $\\theta^*$ ---the unique equilibrium of the corresponding ODE. We show that, under some smoothness conditions, this bias is of order $O(\\alpha)$. Furthermore, we show that the time-averaged bias is equal to $\\alpha V + O(\\alpha^2)$, where $V$ is a constant characterized by a Lyapunov equation, showing that $E[\\bar{\\theta}_n] \\approx \\theta^*+V\\alpha + O(\\alpha^2)$, where $\\bar{\\theta}_n$ is the Polyak-Ruppert average. We also show that $\\bar{\\theta}_n$ converges with high probability around $\\theta^*+\\alpha V$. We illustrate how to combine this with Richardson-Romberg extrapolation to derive an iterative scheme with a bias of order $O(\\alpha^2)$.", "pdf": "https://openreview.net/pdf/877e9201c2340e06615f875918849bb65bbc4b8b.pdf"} {"title": "Data Acquisition via Experimental Design for Data Markets", "url": "https://openreview.net/forum?id=VXJVNdmXO4", "detail_url": "https://openreview.net/forum?id=VXJVNdmXO4", "authors": "Charles Lu,Baihe Huang,Sai Praneeth Karimireddy,Praneeth Vepakomma,Michael Jordan,Ramesh Raskar", "tags": "NIPS 2024,Poster", "abstract": "The acquisition of training data is crucial for machine learning applications. Data markets can increase the supply of data, particularly in data-scarce domains such as healthcare, by incentivizing potential data providers to join the market. A major challenge for a data buyer in such a market is choosing the most valuable data points from a data seller. 
Unlike prior work in data valuation, which assumes centralized data access, we propose a federated approach to the data acquisition problem that is inspired by linear experimental design. Our proposed data acquisition method achieves lower prediction error without requiring labeled validation data and can be optimized in a fast and federated procedure. The key insight of our work is that a method that directly estimates the benefit of acquiring data for test set prediction is particularly compatible with a decentralized market setting.", "pdf": "https://openreview.net/pdf/00597f8eafa4eee9a4f41c3b6d536bd5efd48185.pdf"} {"title": "Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input", "url": "https://openreview.net/forum?id=unA5hxIn6v", "detail_url": "https://openreview.net/forum?id=unA5hxIn6v", "authors": "Ziang Chen,Rong Ge", "tags": "NIPS 2024,Poster", "abstract": "In this work, we study the mean-field flow for learning subspace-sparse polynomials using stochastic gradient descent and two-layer neural networks, where the input distribution is standard Gaussian and the output only depends on the projection of the input onto a low-dimensional subspace. We establish a necessary condition for SGD-learnability, involving both the characteristics of the target function and the expressiveness of the activation function. In addition, we prove that the condition is almost sufficient, in the sense that a condition slightly stronger than the necessary condition can guarantee the exponential decay of the loss functional to zero.", "pdf": "https://openreview.net/pdf/4691488d8bc10ca94a443c4c7cb8592a00ec1873.pdf"} {"title": "VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections", "url": "https://openreview.net/forum?id=bFoQXD7Uls", "detail_url": "https://openreview.net/forum?id=bFoQXD7Uls", "authors": "Roy Miles,Pradyumna Reddy,Ismail Elezi,Jiankang Deng", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have recently emerged as powerful tools for tackling many language-processing tasks. Despite their success, training and fine-tuning these models is still far too computationally and memory intensive. In this paper, we identify and characterise the important components needed for effective model convergence using gradient descent. In doing so, we find that the intermediate activations used to implement backpropagation can be excessively compressed without incurring any degradation in performance. This result leads us to a cheap and memory-efficient algorithm for both fine-tuning and pre-training LLMs. The proposed algorithm simply divides the tokens up into smaller sub-tokens before projecting them onto a fixed 1-dimensional subspace during the forward pass. These features are then coarsely reconstructed during the backward pass to implement the update rules. We confirm the effectiveness of our algorithm as being complementary to many state-of-the-art PEFT methods on the VTAB-1k fine-tuning benchmark. 
Furthermore, we outperform QLoRA for fine-tuning LLaMA and show competitive performance against other memory-efficient pre-training methods on the large-scale C4 dataset.", "pdf": "https://openreview.net/pdf/bf66c7a2b80fcfa6cc72e4a85be544eea2936311.pdf"} {"title": "OccFusion: Rendering Occluded Humans with Generative Diffusion Priors", "url": "https://openreview.net/forum?id=CZwphz5vgz", "detail_url": "https://openreview.net/forum?id=CZwphz5vgz", "authors": "Adam Sun,Tiange Xiang,Scott Delp,Li Fei-Fei,Ehsan Adeli", "tags": "NIPS 2024,Poster", "abstract": "Existing human rendering methods require every part of the human to be fully visible throughout the input video. However, this assumption does not hold in real-life settings where obstructions are common, resulting in only partial visibility of the human. Considering this, we present OccFusion, an approach that utilizes 3D Gaussian splatting supervised by pretrained 2D diffusion models for efficient and high-fidelity human rendering. We propose a pipeline consisting of three stages. In the Initialization stage, complete human masks are generated from partial visibility masks. In the Optimization stage, 3D human Gaussians are optimized with additional supervision from Score-Distillation Sampling (SDS) to create a complete geometry of the human. Finally, in the Refinement stage, in-context inpainting is designed to further improve rendering quality on the less observed human body parts. We evaluate OccFusion on ZJU-MoCap and challenging OcMotion sequences and find that it achieves state-of-the-art performance in the rendering of occluded humans.", "pdf": "https://openreview.net/pdf/d036f481a4da3c0b70ef68e4fff24e1a4a0723e6.pdf"} {"title": "Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport", "url": "https://openreview.net/forum?id=V8HVsyTSu6", "detail_url": "https://openreview.net/forum?id=V8HVsyTSu6", "authors": "Zifan Wang,Yi Shen,Michael M. Zavlanos,Karl Henrik Johansson", "tags": "NIPS 2024,Poster", "abstract": "Distributionally Robust Optimization (DRO) accounts for uncertainty in data distributions by optimizing the model performance against the worst possible distribution within an ambiguity set. In this paper, we propose a DRO framework that relies on a new distance inspired by Unbalanced Optimal Transport (UOT). The proposed UOT distance employs a soft penalization term instead of hard constraints, enabling the construction of an ambiguity set that is more resilient to outliers. Under smoothness conditions, we establish strong duality of the proposed DRO problem. Moreover, we introduce a computationally efficient Lagrangian penalty formulation for which we show that strong duality also holds. Finally, we provide empirical results that demonstrate that our method offers improved robustness to outliers and is computationally less demanding for regression and classification tasks.", "pdf": "https://openreview.net/pdf/95d9ba322753817c2ea86e6a7285c4d8d05d0211.pdf"} {"title": "Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series", "url": "https://openreview.net/forum?id=2NfBBpbN9x", "detail_url": "https://openreview.net/forum?id=2NfBBpbN9x", "authors": "Ilan Naiman,Nimrod Berman,Itai Pemper,Idan Arbiv,Gal Fadlon,Omri Azencot", "tags": "NIPS 2024,Poster", "abstract": "Lately, there has been a surge in interest surrounding generative modeling of time series data. 
Most existing approaches are designed either to process short sequences or to handle long-range sequences. This dichotomy can be attributed to gradient issues with recurrent networks, computational costs associated with transformers, and limited expressiveness of state space models. Towards a unified generative model for varying-length time series, we propose in this work to transform sequences into images. By employing invertible transforms such as the delay embedding and the short-time Fourier transform, we unlock three main advantages: i) We can exploit advanced diffusion vision models; ii) We can process both short- and long-range inputs within the same framework; and iii) We can harness recent and established tools proposed in the time-series-to-image literature. We validate the effectiveness of our method through a comprehensive evaluation across multiple tasks, including unconditional generation, interpolation, and extrapolation. We show that our approach consistently achieves state-of-the-art results against strong baselines. In the unconditional generation tasks, we show remarkable mean improvements of $58.17$% over previous diffusion models in the short discriminative score and $132.61$% in the (ultra-)long classification scores. Code is at https://github.com/azencot-group/ImagenTime.", "pdf": "https://openreview.net/pdf/56f133e39f5d561c8ca74a3ce2bd48c9b3e1804a.pdf"} {"title": "Understanding Information Storage and Transfer in Multi-Modal Large Language Models", "url": "https://openreview.net/forum?id=s63dtq0mwA", "detail_url": "https://openreview.net/forum?id=s63dtq0mwA", "authors": "Samyadeep Basu,Martin Grayson,Cecily Morrison,Besmira Nushi,Soheil Feizi,Daniela Massiceti", "tags": "NIPS 2024,Poster", "abstract": "Understanding the mechanisms of information storage and transfer in Transformer-based models is important for driving model understanding progress. Recent work has studied these mechanisms for Large Language Models (LLMs), revealing insights on how information is stored in a model's parameters and how information flows to and from these parameters in response to specific prompts. However, these studies have not yet been extended to Multi-modal Large Language Models (MLLMs). Given their expanding capabilities and real-world use, we start by studying one aspect of these models -- how MLLMs process information in a factual visual question answering task. We use a constraint-based formulation which views a visual question as having a set of visual or textual constraints that the model's generated answer must satisfy to be correct (e.g. What movie directed by \\emph{the director in this photo} has won a \\emph{Golden Globe}?). Under this setting, we contribute i) a method that extends causal information tracing from pure language to the multi-modal setting, and ii) \\emph{VQA-Constraints}, a test-bed of 9.7K visual questions annotated with constraints. We use these tools to study two open-source MLLMs, LLaVa and multi-modal Phi-2. Our key findings show that these MLLMs rely on MLP and self-attention blocks in much earlier layers for information storage, compared to LLMs whose mid-layer MLPs are more important. We also show that a consistent small subset of visual tokens output by the vision encoder is responsible for transferring information from the image to these causal blocks. We validate these mechanisms by introducing MultEdit, a model-editing algorithm that can correct errors and insert new long-tailed information into MLLMs by targeting these causal blocks. 
We will publicly release our dataset and code.", "pdf": "https://openreview.net/pdf/6c393eac463a81dce11be157a7bde9017cf23675.pdf"} {"title": "Consistency of Neural Causal Partial Identification", "url": "https://openreview.net/forum?id=GEbnPxD9EF", "detail_url": "https://openreview.net/forum?id=GEbnPxD9EF", "authors": "Jiyuan Tan,Jose Blanchet,Vasilis Syrgkanis", "tags": "NIPS 2024,Poster", "abstract": "Recent progress in Neural Causal Models (NCMs) showcased how identification and partial identification of causal effects can be automatically carried out via training of neural generative models that respect the constraints encoded in a given causal graph [Xia et al. 2022, Balazadeh et al. 2022]. However, formal consistency of these methods has only been proven for the case of discrete variables or for linear causal models. In this work, we prove the consistency of partial identification via NCMs in a general setting with both continuous and categorical variables. Further, our results highlight the impact of the design of the underlying neural network architecture in terms of depth and connectivity as well as the importance of applying Lipschitz regularization in the training phase. In particular, we provide a counterexample showing that without Lipschitz regularization this method may not be asymptotically consistent. Our results are enabled by new results on the approximability of Structural Causal Models (SCMs) via neural generative models, together with an analysis of the sample complexity of the resulting architectures and how that translates into an error in the constrained optimization problem that defines the partial identification bounds.", "pdf": "https://openreview.net/pdf/73a53f152a3722a06c03b478d5053b6f1120aca6.pdf"} {"title": "Doubly Hierarchical Geometric Representations for Strand-based Human Hairstyle Generation", "url": "https://openreview.net/forum?id=h34jVnPo1c", "detail_url": "https://openreview.net/forum?id=h34jVnPo1c", "authors": "Yunlu Chen,Francisco Vicente Carrasco,Christian H\u00e4ne,Giljoo Nam,Jean-Charles Bazin,Fernando De la Torre", "tags": "NIPS 2024,Poster", "abstract": "We introduce a doubly hierarchical generative representation for strand-based hair geometry that progresses from coarse, low-pass filtered guide hair to densely populated hair strands rich in high-frequency details. We employ the Discrete Cosine Transform (DCT) to separate low-frequency structural curves from high-frequency curliness and noise, avoiding the Gibbs oscillation issues associated with the standard Fourier transform in open curves. Unlike the guide hair sampled from scalp UV map grids in existing methods, which may fail to capture details of the hairstyle, our method samples optimal sparse guide strands by utilizing $k$-medoids clustering centres from low-pass filtered dense strands, which more accurately retain the hairstyle's inherent characteristics. The proposed variational autoencoder-based generation network, with an architecture inspired by geometric deep learning and implicit neural representations, facilitates flexible, off-the-grid guide strand modelling and enables the completion of dense strands in any quantity and density. 
Empirical evaluations confirm the capacity of the model to generate convincing guide hair and dense strands, complete with nuanced high-frequency details.", "pdf": "https://openreview.net/pdf/92962235e34f69473aa33130e42278c289850ca2.pdf"} {"title": "Code Repair with LLMs gives an Exploration-Exploitation Tradeoff", "url": "https://openreview.net/forum?id=o863gX6DxA", "detail_url": "https://openreview.net/forum?id=o863gX6DxA", "authors": "Hao Tang,Keya Hu,Jin Peng Zhou,Si Cheng Zhong,Wei-Long Zheng,Xujie Si,Kevin Ellis", "tags": "NIPS 2024,Poster", "abstract": "Iteratively improving and repairing source code with large language models (LLMs), known as refinement, has emerged as a popular way of generating programs that would be too complex to construct in one shot. Given a bank of test cases, together with a candidate program, an LLM can improve that program by being prompted with failed test cases. But it remains an open question how to best iteratively refine code, with prior work employing simple greedy or breadth-first strategies. We show here that refinement exposes an explore-exploit tradeoff: exploit by refining the program that passes the most test cases, or explore by refining a lesser considered program. We frame this as an arm-acquiring bandit problem, which we solve with Thompson Sampling. The resulting LLM-based program synthesis algorithm is broadly applicable: Across loop invariant synthesis, visual reasoning puzzles, and competition programming problems, we find that our new method can solve more problems using fewer language model calls.", "pdf": "https://openreview.net/pdf/f4dac508dca07ac2e37c96306bb1435333481698.pdf"} {"title": "LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization", "url": "https://openreview.net/forum?id=SYjxhKcXoN", "detail_url": "https://openreview.net/forum?id=SYjxhKcXoN", "authors": "Liang Chen,Yong Zhang,Yibing Song,Zhiqiang Shen,Lingqiao Liu", "tags": "NIPS 2024,Poster", "abstract": "Domain generalization (DG) methods aim to maintain good performance in an unseen target domain by using training data from multiple source domains. While success is observed on certain occasions, enhancing the baseline across most scenarios remains challenging. This work introduces a simple yet effective framework, dubbed learning from multiple experts (LFME), that aims to make the target model an expert in all source domains to improve DG. Specifically, besides learning the target model used in inference, LFME will also train multiple experts specialized in different domains, whose output probabilities provide professional guidance by simply regularizing the logits of the target model. Delving deep into the framework, we reveal that the introduced logit regularization term implicitly enables the target model to harness more information and to mine hard samples from the experts during training. Extensive experiments on benchmarks from different DG tasks demonstrate that LFME is consistently beneficial to the baseline and can achieve comparable performance to existing state-of-the-art methods. 
Code is available at https://github.com/liangchen527/LFME.", "pdf": "https://openreview.net/pdf/abdbf557f3ef17f902813f735892688ad6e31888.pdf"} {"title": "Dueling over Dessert, Mastering the Art of Repeated Cake Cutting", "url": "https://openreview.net/forum?id=mfTvNzhsht", "detail_url": "https://openreview.net/forum?id=mfTvNzhsht", "authors": "Simina Branzei,MohammadTaghi Hajiaghayi,Reed Phillips,Suho Shin,Kun Wang", "tags": "NIPS 2024,Poster", "abstract": "We consider the setting of repeated fair division between two players, denoted Alice and Bob, with private valuations over a cake. In each round, a new cake arrives, which is identical to the ones in previous rounds. Alice cuts the cake at a point of her choice, while Bob chooses the left piece or the right piece, leaving the remainder for Alice. \nWe consider two versions: sequential, where Bob observes Alice's cut point before choosing left/right, and simultaneous, where he only observes her cut point after making his choice. The simultaneous version was first considered by Aumann and Maschler.\n \nWe observe that if Bob is almost myopic and chooses his favorite piece too often, then he can be systematically exploited by Alice through a strategy akin to a binary search. This strategy allows Alice to approximate Bob's preferences with increasing precision, thereby securing a disproportionate share of the resource over time.\n\nWe analyze the limits of how much a player can exploit the other one and show that fair utility profiles are in fact achievable. Specifically, the players can enforce the equitable utility profile of $(1/2, 1/2)$ in the limit on every trajectory of play, by keeping the other player's utility to approximately $1/2$ on average while guaranteeing they themselves get at least approximately $1/2$ on average. We show this theorem using a connection with Blackwell approachability.\n\nFinally, we analyze a natural dynamic known as fictitious play, where players best respond to the empirical distribution of the other player. We show that\nfictitious play converges to the equitable utility profile of $(1/2, 1/2)$ at a rate of $O(1/\\sqrt{T})$.", "pdf": "https://openreview.net/pdf/7f68a3d996ea18de7f300e14ce2c1a0e9687e069.pdf"} {"title": "Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad", "url": "https://openreview.net/forum?id=EdG59dnOzN", "detail_url": "https://openreview.net/forum?id=EdG59dnOzN", "authors": "Sayantan Choudhury,Nazarii Tupitsa,Nicolas Loizou,Samuel Horv\u00e1th,Martin Tak\u00e1\u010d,Eduard Gorbunov", "tags": "NIPS 2024,Poster", "abstract": "Adaptive methods are extremely popular in machine learning as they make learning rate tuning less expensive. This paper introduces a novel optimization algorithm named KATE, which presents a scale-invariant adaptation of the well-known AdaGrad algorithm. We prove the scale-invariance of KATE for the case of Generalized Linear Models. Moreover, for general smooth non-convex problems, we establish a convergence rate of $O((\\log T)/\\sqrt{T})$ for KATE, matching the best-known ones for AdaGrad and Adam. We also compare KATE to other state-of-the-art adaptive algorithms Adam and AdaGrad in numerical experiments with different problems, including complex machine learning tasks like image classification and text classification on real data. 
The results indicate that KATE consistently outperforms AdaGrad and matches/surpasses the performance of Adam in all considered scenarios.", "pdf": "https://openreview.net/pdf/ade6c999a88709f0764b270b2d8c9c31026a472c.pdf"} {"title": "Error Correction Output Codes for Robust Neural Networks against Weight-errors: A Neural Tangent Kernel Point of View", "url": "https://openreview.net/forum?id=7LIm53Jiic", "detail_url": "https://openreview.net/forum?id=7LIm53Jiic", "authors": "Anlan Yu,Shusen Jing,Ning Lyu,Wujie Wen,Zhiyuan Yan", "tags": "NIPS 2024,Poster", "abstract": "Error-correcting output code (ECOC) is a classic method that encodes binary classifiers to tackle the multi-class classification problem in decision trees and neural networks.\nAmong ECOCs, the one-hot code has become the default choice in modern deep neural networks (DNNs) due to its simplicity in decision making. However, it suffers from a significant limitation in its ability to achieve high robust accuracy, particularly in the presence of weight errors. While recent studies have experimentally demonstrated that non-one-hot ECOCs with multi-bit error-correction ability could be a better solution, there is a notable absence of theoretical foundations that can elucidate the relationship between codeword design, weight-error magnitude, and network characteristics, so as to provide robustness guarantees. This work is positioned to bridge this gap through the lens of neural tangent kernel (NTK). We have two important theoretical findings: 1) In clean models (without weight errors), utilizing the one-hot code versus a non-one-hot ECOC is akin to altering the decoding metric from $l_2$ distance to Mahalanobis distance. 2) In non-clean models (with weight errors), if the normalized distance exceeds a threshold, then non-clean DNNs can reach the clean model's accuracy as the code length approaches infinity. This threshold is determined by DNN architecture (e.g. layer number, activation), weight error magnitude, and the distance between the output and the nearest codeword. Based on these findings, we further demonstrate how to practically use them to identify optimal ECOCs for simple tasks (short-code ECOCs) and complex tasks (long-code ECOCs), by balancing the code orthogonality (as per finding 1) and code distance (as per finding 2). Extensive experimental results across four datasets and four DNN models validate the superior performance of codes constructed under the guidance of our findings, compared to existing ECOCs. To the best of our knowledge, this is the first work that provides theoretical explanations for the effectiveness of ECOCs and offers associated design guidance for optimal ECOCs specifically tailored to DNNs.", "pdf": "https://openreview.net/pdf/26535f3cf11b8648425cae51b636c32d376bdc7d.pdf"} {"title": "Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters", "url": "https://openreview.net/forum?id=MPidsCd9e7", "detail_url": "https://openreview.net/forum?id=MPidsCd9e7", "authors": "David Woodruff,Samson Zhou", "tags": "NIPS 2024,Poster", "abstract": "In the adversarial streaming model, the input is a sequence of adaptive updates that defines an underlying dataset and the goal is to approximate, collect, or compute some statistic while using space sublinear in the size of the dataset. 
In 2022, Ben-Eliezer, Eden, and Onak introduced a dense-sparse trade-off technique that elegantly combined sparse recovery with known techniques using differential privacy and sketch switching to achieve adversarially robust algorithms for $L_p$ estimation and other algorithms on turnstile streams. However, there has been no progress since, either in terms of achievability or impossibility. In this work, we first give improved algorithms for adversarially robust $L_p$-heavy hitters, utilizing deterministic turnstile heavy-hitter algorithms with better tradeoffs. We then utilize our heavy-hitter algorithm to reduce the problem to estimating the frequency moment of the tail vector. We give a new algorithm for this problem in the classical streaming setting, which achieves additive error and uses space independent of the size of the tail. We then leverage these ingredients to give an improved algorithm for adversarially robust $L_p$ estimation on turnstile streams. We believe that our results serve as an important conceptual message, demonstrating that there is no inherent barrier at the previous state-of-the-art.", "pdf": "https://openreview.net/pdf/11be5222e3e3a0d9609c2377367f623090dfa1aa.pdf"} {"title": "SpeedLoader: An I/O efficient scheme for heterogeneous and distributed LLM operation", "url": "https://openreview.net/forum?id=Y2I0Fy4sm7", "detail_url": "https://openreview.net/forum?id=Y2I0Fy4sm7", "authors": "Yiqi Zhang,Yang You", "tags": "NIPS 2024,Poster", "abstract": "With the surging growth of model parameters, foundation models pose unprecedented challenges to traditional computational infrastructures. These large models inherently require substantial accelerator memory to accommodate massive tensors during pre-training, fine-tuning, and even inference stages, making it even more challenging to deploy a model with restricted computational resources. Given this challenge, distributing and offloading the model states are two major solutions. Partitioning the required states across participating workers, and storing them in lower-speed media, such as host DRAM and block devices, largely alleviate the accelerator memory pressure. However, the prohibitive costs of tensor communication render it a theoretically plausible yet practically inefficient solution. Previous efforts to improve efficiency include maximizing rematerialization and employing chunk-based tensor management to reduce host-device communication. Despite these efforts, the reported training throughput only achieves 36.54% of model FLOPs utilization (MFUs), still not comparable to full on-device training. In this work, we redesign the data flow of heterogeneous hardware and sharded model training to minimize the excessive communication overhead. Our proposed scheme significantly enhances training and inference throughput of large language models under restrictive computational resources. We confirmed a large leap in effective compute time by looking into the kernel-level runtime behavior of our trials, where the MFUs can achieve up to 51%. 
Compared to the state-of-the-art approach, our framework robustly achieves remarkable speedups from 3x to 30x in multiple distributed heterogeneous training setups and inference speedups of 1.5x to 2.35x without compromising arithmetic precision.", "pdf": "https://openreview.net/pdf/c4af89397c0445f2b9eb6ba052951391bfd5f0dd.pdf"} {"title": "On Socially Fair Low-Rank Approximation and Column Subset Selection", "url": "https://openreview.net/forum?id=EO1Qev952p", "detail_url": "https://openreview.net/forum?id=EO1Qev952p", "authors": "Zhao Song,Ali Vakilian,David Woodruff,Samson Zhou", "tags": "NIPS 2024,Poster", "abstract": "Low-rank approximation and column subset selection are two fundamental and related problems that are applied across a wealth of machine learning applications. In this paper, we study the question of socially fair low-rank approximation and socially fair column subset selection, where the goal is to minimize the loss over all sub-populations of the data. We show that, surprisingly, even a constant-factor approximation to fair low-rank approximation requires exponential time under certain standard complexity hypotheses. On the positive side, we give an algorithm for fair low-rank approximation that, for a constant number of groups and constant-factor accuracy, runs in time $2^{\\text{poly}(k)}$ rather than the naive $n^{\\text{poly}(k)}$, which is a substantial improvement when the dataset has a large number $n$ of observations. We then show that there exist bicriteria approximation algorithms for fair low-rank approximation and fair column subset selection that run in polynomial time.", "pdf": "https://openreview.net/pdf/0452df32bc811c52083c5be24eb48d78529e8efa.pdf"} {"title": "EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas", "url": "https://openreview.net/forum?id=8aAaYEwNR4", "detail_url": "https://openreview.net/forum?id=8aAaYEwNR4", "authors": "Mikhail Mozikov,Nikita Severin,Valeria Bodishtianu,Maria Glushanina,Ivan Nasonov,Daniil Orekhov,Vladislav Pekhotin,Ivan Makovetskiy,Mikhail Baklashkin,Vasily Lavrentyev,Akim Tsvigun,Denis Turdakov,Tatiana Shavrina,Andrey Savchenko,Ilya Makarov", "tags": "NIPS 2024,Poster", "abstract": "One of the urgent tasks of artificial intelligence is to assess the safety and alignment of large language models (LLMs) with human behavior. Conventional verification using only pure natural language processing benchmarks can be insufficient. Since emotions often influence human decisions, this paper examines LLM alignment in complex strategic and ethical environments, providing an in-depth analysis of the drawbacks of our psychology and the emotional impact on decision-making in humans and LLMs. We introduce the novel EAI framework for integrating emotion modeling into LLMs to examine the emotional impact on ethics and LLM-based decision-making in various strategic games, including bargaining and repeated games. Our experimental study with various LLMs demonstrated that emotions can significantly alter the ethical decision-making landscape of LLMs, highlighting the need for robust mechanisms to ensure consistent ethical standards. Our game-theoretic analysis revealed that LLMs are susceptible to emotional biases influenced by model size, alignment strategies, and primary pretraining language. Notably, these biases often diverge from typical human emotional responses, occasionally leading to unexpected drops in cooperation rates, even under positive emotional influence. 
Such behavior complicates the alignment of multiagent systems, emphasizing the need for benchmarks that can rigorously evaluate the degree of emotional alignment. Our framework provides a foundational basis for developing such benchmarks.", "pdf": "https://openreview.net/pdf/159a1d5750f81749e9bbaea0ef0e4acfcda51ff2.pdf"} {"title": "FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations", "url": "https://openreview.net/forum?id=TcCorXxNJQ", "detail_url": "https://openreview.net/forum?id=TcCorXxNJQ", "authors": "Ziyao Wang,Zheyu Shen,Yexiao He,Guoheng Sun,Hongyi Wang,Lingjuan Lyu,Ang Li", "tags": "NIPS 2024,Poster", "abstract": "The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner by utilizing clients' local data through in-situ computation, eliminating the need for data movement. However, fine-tuning LLMs, given their massive scale of parameters, poses challenges for clients with constrained and heterogeneous resources in FL. Previous methods employed low-rank adaptation (LoRA) for efficient federated fine-tuning but utilized traditional FL aggregation strategies on LoRA adapters. This approach led to mathematically inaccurate aggregation noise, reducing fine-tuning effectiveness and failing to address heterogeneous LoRAs. In this work, we first highlight the mathematical incorrectness of LoRA aggregation in existing federated fine-tuning methods. We introduce a new approach called FLoRA that enables federated fine-tuning on heterogeneous LoRA adapters across clients through a novel stacking-based aggregation method. Our approach is noise-free and seamlessly supports heterogeneous LoRAs. Extensive experiments demonstrate FLoRA's superior performance in both homogeneous and heterogeneous settings, surpassing state-of-the-art methods. We envision this work as a milestone for efficient, privacy-preserving, and accurate federated fine-tuning of LLMs.", "pdf": "https://openreview.net/pdf/599023d33c952139d8e9a17ee1e817707d24749f.pdf"} {"title": "ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets", "url": "https://openreview.net/forum?id=4czwwExZKQ", "detail_url": "https://openreview.net/forum?id=4czwwExZKQ", "authors": "Yiqi Jiang,Hakki Orhun Akengin,Ji Zhou,Mehmet Anil Aslihak,Yang Li,Radoslaw Chrapkiewicz,Oscar Hernandez,Sadegh Ebrahimi,Omar Jaidar,Yanping Zhang,Hakan Inan,Christopher Miranda,Fatih Dinc,Marta Blanco-Pozo,Mark Schnitzer", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in calcium imaging enable simultaneous recordings of up to a million neurons in behaving animals, producing datasets of unprecedented scales. Although individual neurons and their activity traces can be extracted from these videos with automated algorithms, the results often require human curation to remove false positives, a laborious process called \\emph{cell sorting}. To address this challenge, we introduce ActSort, an active-learning algorithm for sorting large-scale datasets that integrates features engineered by domain experts together with data formats with minimal memory requirements. By strategically bringing outlier cell candidates near the decision boundary up for annotation, ActSort reduces human labor to about 1\u20133\\% of cell candidates and improves curation accuracy by mitigating annotator bias. 
To facilitate the algorithm's widespread adoption among experimental neuroscientists, we created user-friendly software and conducted a first-of-its-kind benchmarking study involving about 160,000 annotations. Our tests validated ActSort's performance across different experimental conditions and datasets from multiple animals. Overall, ActSort addresses a crucial bottleneck in processing large-scale calcium videos of neural activity and thereby facilitates systems neuroscience experiments at previously inaccessible scales. (\\url{https://github.com/schnitzer-lab/ActSort-public})", "pdf": "https://openreview.net/pdf/923176eb3df00fb8a2c900e226885cf4119a4d9b.pdf"} {"title": "Federated Behavioural Planes: Explaining the Evolution of Client Behaviour in Federated Learning", "url": "https://openreview.net/forum?id=5FHzrRGOKR", "detail_url": "https://openreview.net/forum?id=5FHzrRGOKR", "authors": "Dario Fenoglio,Gabriele Dominici,Pietro Barbiero,Alberto Tonda,Martin Gjoreski,Marc Langheinrich", "tags": "NIPS 2024,Poster", "abstract": "Federated Learning (FL), a privacy-aware approach in distributed deep learning environments, enables many clients to collaboratively train a model without sharing sensitive data, thereby reducing privacy risks. However, enabling human trust and control over FL systems requires understanding the evolving behaviour of clients, whether beneficial or detrimental for the training, which still represents a key challenge in the current literature. To address this challenge, we introduce Federated Behavioural Planes (FBPs), a novel method to analyse, visualise, and explain the dynamics of FL systems, showing how clients behave under two different lenses: predictive performance (error behavioural space) and decision-making processes (counterfactual behavioural space). Our experiments demonstrate that FBPs provide informative trajectories describing the evolving states of clients and their contributions to the global model, thereby enabling the identification of clusters of clients with similar behaviours. Leveraging the patterns identified by FBPs, we propose a robust aggregation technique named Federated Behavioural Shields to detect malicious or noisy client models, thereby enhancing security and surpassing the efficacy of existing state-of-the-art FL defense mechanisms. Our code is publicly available on GitHub.", "pdf": "https://openreview.net/pdf/6307ae9e1c51366ba3fd178784e1f443f00861d2.pdf"} {"title": "Diffusion PID: Interpreting Diffusion via Partial Information Decomposition", "url": "https://openreview.net/forum?id=aBpxukZS37", "detail_url": "https://openreview.net/forum?id=aBpxukZS37", "authors": "Shaurya Rajat Dewan,Rushikesh Zawar,Prakanshul Saxena,Yingshan Chang,Andrew Luo,Yonatan Bisk", "tags": "NIPS 2024,Poster", "abstract": "Text-to-image diffusion models have made significant progress in generating naturalistic images from textual inputs, and demonstrate the capacity to learn and represent complex visual-semantic relationships. While these diffusion models have achieved remarkable success, the underlying mechanisms driving their performance are not yet fully accounted for, with many unanswered questions surrounding what they learn, how they represent visual-semantic relationships, and why they sometimes fail to generalize. 
Our work presents Diffusion Partial Information Decomposition (DiffusionPID), a novel technique that applies information-theoretic principles to decompose the input text prompt into its elementary components, enabling a detailed examination of how individual tokens and their interactions shape the generated image. We introduce a formal approach to analyze the uniqueness, redundancy, and synergy terms by applying PID to the denoising model at both the image and pixel level. This approach enables us to characterize how individual tokens and their interactions affect the model output. We first present a fine-grained analysis of the characteristics utilized by the model to uniquely localize specific concepts; we then apply our approach to bias analysis and show that it can recover gender and ethnicity biases. Finally, we use our method to visually characterize word ambiguity and similarity from the model\u2019s perspective and illustrate the efficacy of our method for prompt intervention. Our results show that PID is a potent tool for evaluating and diagnosing text-to-image diffusion models. Link to project page: https://rbz-99.github.io/Diffusion-PID/.", "pdf": "https://openreview.net/pdf/aab6660c0f98a00ae5bdd4e840d194414e2cccbd.pdf"} {"title": "From Causal to Concept-Based Representation Learning", "url": "https://openreview.net/forum?id=r5nev2SHtJ", "detail_url": "https://openreview.net/forum?id=r5nev2SHtJ", "authors": "Goutham Rajendran,Simon Buchholz,Bryon Aragam,Bernhard Sch\u00f6lkopf,Pradeep Kumar Ravikumar", "tags": "NIPS 2024,Poster", "abstract": "To build intelligent machine learning systems, modern representation learning attempts to recover latent generative factors from data, such as in causal representation learning. A key question in this growing field is to provide rigorous conditions under which latent factors can be identified and thus, potentially learned. Motivated by extensive empirical literature on linear representations and concept learning, we propose to relax causal notions with a geometric notion of concepts. We formally define a notion of concepts and show rigorously that they can be provably recovered from diverse data. Instead of imposing assumptions on the \"true\" generative latent space, we assume that concepts can be represented linearly in this latent space. The tradeoff is that instead of identifying the \"true\" generative factors, we identify a subset of desired human-interpretable concepts that are relevant for a given application. Experiments on synthetic data, multimodal CLIP models and large language models supplement our results and show the utility of our approach. In this way, we provide a foundation for moving from causal representations to interpretable, concept-based representations by bringing together ideas from these two neighboring disciplines.", "pdf": "https://openreview.net/pdf/76709bf0688353a9c1a6ed8402de1c1d61b474a2.pdf"} {"title": "Autobidder's Dilemma: Why More Sophisticated Autobidders Lead to Worse Auction Efficiency", "url": "https://openreview.net/forum?id=hQJksiskaa", "detail_url": "https://openreview.net/forum?id=hQJksiskaa", "authors": "Yuan Deng,Jieming Mao,Vahab Mirrokni,Hanrui Zhang,Song Zuo", "tags": "NIPS 2024,Poster", "abstract": "The recent increasing adoption of autobidding has inspired growing interest in analyzing the performance of classic mechanisms with value-maximizing autobidders, both theoretically and empirically. 
It is known that optimal welfare can be obtained in first-price auctions if autobidders are restricted to uniform bid-scaling and the price of anarchy is $2$ when non-uniform bid-scaling strategies are allowed. \n\nIn this paper, we provide a fine-grained price of anarchy analysis for non-uniform bid-scaling strategies in first-price auctions, demonstrating the reason why more powerful (individual) non-uniform bid-scaling strategies may lead to worse (aggregated) performance in social welfare. Our theoretical results match recent empirical findings that a higher level of non-uniform bid-scaling leads to lower welfare performance in first-price auctions.", "pdf": "https://openreview.net/pdf/0341bb7a92a9e12cdb453245443a05e694ce1325.pdf"} {"title": "Convergence of $\\text{log}(1/\\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis", "url": "https://openreview.net/forum?id=hoVXLC8vQU", "detail_url": "https://openreview.net/forum?id=hoVXLC8vQU", "authors": "Ioannis Anagnostides,Tuomas Sandholm", "tags": "NIPS 2024,Poster", "abstract": "Gradient-based algorithms have shown great promise in solving large (two-player) zero-sum games. However, their success has been mostly confined to the low-precision regime since the number of iterations grows polynomially in $1/\\epsilon$, where $\\epsilon > 0$ is the duality gap. While it has been well-documented that linear convergence---an iteration complexity scaling as $\\text{log}(1/\\epsilon)$---can be attained even with gradient-based algorithms, that comes at the cost of introducing a dependency on certain condition number-like quantities which can be exponentially large in the description of the game. To address this shortcoming, we examine the iteration complexity of several gradient-based algorithms in the celebrated framework of smoothed analysis, and we show that they have polynomial smoothed complexity, in that their number of iterations grows as a polynomial in the dimensions of the game, $\\text{log}(1/\\epsilon)$, and $1/\\sigma$, where $\\sigma$ measures the magnitude of the smoothing perturbation. Our result applies to optimistic gradient and extra-gradient descent/ascent, as well as a certain iterative variant of Nesterov's smoothing technique. From a technical standpoint, the proof proceeds by characterizing and performing a smoothed analysis of a certain error bound, the key ingredient driving linear convergence in zero-sum games. En route, our characterization also makes a natural connection between the convergence rate of such algorithms and perturbation-stability properties of the equilibrium, which is of interest beyond the model of smoothed complexity.", "pdf": "https://openreview.net/pdf/685db8e56f53301efbc51c9cf35dd8414e2df9dd.pdf"} {"title": "On the Complexity of Learning Sparse Functions with Statistical and Gradient Queries", "url": "https://openreview.net/forum?id=Q0KwoyZlSo", "detail_url": "https://openreview.net/forum?id=Q0KwoyZlSo", "authors": "Nirmit Joshi,Theodor Misiakiewicz,Nathan Srebro", "tags": "NIPS 2024,Poster", "abstract": "The goal of this paper is to investigate the complexity of gradient algorithms when learning sparse functions (juntas). We introduce a type of Statistical Queries ($\\mathsf{SQ}$), which we call Differentiable Learning Queries ($\\mathsf{DLQ}$), to model gradient queries on a specified loss with respect to an arbitrary model. 
We provide a tight characterization of the query complexity of $\mathsf{DLQ}$ for learning the support of a sparse function over generic product distributions. This complexity crucially depends on the loss function. For the squared loss, $\mathsf{DLQ}$ matches the complexity of Correlation Statistical Queries $(\mathsf{CSQ})$\u2014potentially much worse than $\mathsf{SQ}$. But for other simple loss functions, including the $\ell_1$ loss, $\mathsf{DLQ}$ always achieves the same complexity as $\mathsf{SQ}$. We also provide evidence that $\mathsf{DLQ}$ can indeed capture learning with (stochastic) gradient descent by showing it correctly describes the complexity of learning with a two-layer neural network in the mean field regime and linear scaling.", "pdf": "https://openreview.net/pdf/af545f82453eb4cb87f0a39a2bc570d606f175a4.pdf"} {"title": "Policy Aggregation", "url": "https://openreview.net/forum?id=ybiUVIxJth", "detail_url": "https://openreview.net/forum?id=ybiUVIxJth", "authors": "Parand A. Alamdari,Soroush Ebadian,Ariel D. Procaccia", "tags": "NIPS 2024,Poster", "abstract": "We consider the challenge of AI value alignment with multiple individuals that have different reward functions and optimal policies in an underlying Markov decision process. We formalize this problem as one of *policy aggregation*, where the goal is to identify a desirable collective policy. We argue that an approach informed by social choice theory is especially suitable. Our key insight is that social choice methods can be reinterpreted by identifying ordinal preferences with volumes of subsets of the *state-action occupancy polytope*. Building on this insight, we demonstrate that a variety of methods \u2014 including approval voting, Borda count, the proportional veto core, and quantile fairness \u2014 can be practically applied to policy aggregation.", "pdf": "https://openreview.net/pdf/fbe1222a75cee72847eef7783a09fb1b0b56c748.pdf"} {"title": "Vision Transformer Neural Architecture Search for Out-of-Distribution Generalization: Benchmark and Insights", "url": "https://openreview.net/forum?id=2AIwiIkE0s", "detail_url": "https://openreview.net/forum?id=2AIwiIkE0s", "authors": "Sy-Tuyen Ho,Tuan Van Vo,Somayeh Ebrahimkhani,Ngai-man Cheung", "tags": "NIPS 2024,Poster", "abstract": "While Vision Transformers (ViTs) have achieved success across various machine learning tasks, deploying them in real-world scenarios faces a critical challenge: generalizing under Out-of-Distribution (OoD) shifts. A crucial research gap remains in understanding how to design ViT architectures \u2013 both manually and automatically \u2013 to excel in OoD generalization. **To address this gap,** we introduce OoD-ViT-NAS, the first systematic benchmark for ViT Neural Architecture Search (NAS) focused on OoD generalization. This comprehensive benchmark includes 3,000 ViT architectures of varying model computational budgets evaluated on common large-scale OoD datasets. With this benchmark at hand, we analyze the factors that contribute to the OoD generalization of ViT architectures. Our analysis uncovers several key insights. Firstly, we show that ViT architecture designs have a considerable impact on OoD generalization. Secondly, we observe that In-Distribution (ID) accuracy might not be a very good indicator of OoD accuracy. This underscores the risk that ViT architectures optimized for ID accuracy might not perform well under OoD shifts. Thirdly, we conduct the first study to explore NAS for ViT\u2019s OoD robustness. 
Specifically, we study 9 training-free NAS methods for their OoD generalization performance on our benchmark. We observe that existing training-free NAS methods are largely ineffective in predicting OoD accuracy despite their effectiveness at predicting ID accuracy. Moreover, simple proxies like #Param or #Flop surprisingly outperform more complex training-free NAS methods in predicting ViTs' OoD accuracy. Finally, we study how ViT architectural attributes impact OoD generalization. We discover that increasing the embedding dimensions of a ViT architecture can generally improve OoD generalization. We show that ViT architectures in our benchmark exhibit a wide range of OoD accuracy, with gaps of up to 11.85% under some OoD shifts, underscoring the importance of studying ViT architecture design for OoD generalization. We firmly believe that our OoD-ViT-NAS benchmark and our analysis can catalyze and streamline important research on understanding how ViT architecture designs influence OoD generalization. **Our OoD-ViT-NAS benchmark and code are available at [https://hosytuyen.github.io/projects/OoD-ViT-NAS](https://hosytuyen.github.io/projects/OoD-ViT-NAS)**", "pdf": "https://openreview.net/pdf/e09f0f841c1a1e896d379df46847220d8144443d.pdf"} {"title": "Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters", "url": "https://openreview.net/forum?id=y8P633E5HQ", "detail_url": "https://openreview.net/forum?id=y8P633E5HQ", "authors": "Ya-Wei Eileen Lin,Ronen Talmon,Ron Levie", "tags": "NIPS 2024,Poster", "abstract": "Equivariant machine learning is an approach for designing deep learning models that respect the symmetries of the problem, with the aim of reducing model complexity and improving generalization. \nIn this paper, we focus on an extension of shift equivariance, which is the basis of convolution networks on images, to general graphs. Unlike images, graphs do not have a natural notion of domain translation. \nTherefore, we consider the graph functional shifts as the symmetry group: the unitary operators that commute with the graph shift operator. \nNotably, such symmetries operate in the signal space rather than directly in the spatial space.\nWe remark that each linear filter layer of a standard spectral graph neural network (GNN) commutes with graph functional shifts, but the activation function breaks this symmetry. Instead, we propose nonlinear spectral filters (NLSFs) that are fully equivariant to graph functional shifts and show that they have universal approximation properties. \nThe proposed NLSFs are based on a new form of spectral domain that is transferable between graphs. \nWe demonstrate the superior performance of NLSFs over existing spectral GNNs in node and graph classification benchmarks.", "pdf": "https://openreview.net/pdf/7cf055b02c2ad0760653ed4c078ae8d3ffd7a0eb.pdf"} {"title": "Efficient $\\Phi$-Regret Minimization with Low-Degree Swap Deviations in Extensive-Form Games", "url": "https://openreview.net/forum?id=c4ElkpA0kh", "detail_url": "https://openreview.net/forum?id=c4ElkpA0kh", "authors": "Brian Hu Zhang,Ioannis Anagnostides,Gabriele Farina,Tuomas Sandholm", "tags": "NIPS 2024,Poster", "abstract": "Recent breakthrough results by Dagan, Daskalakis, Fishelson and Golowich [2023] and Peng and Rubinstein [2023] established an efficient algorithm attaining at most $\epsilon$ swap regret over extensive-form strategy spaces of dimension $N$ in $N^{\tilde O(1/\epsilon)}$ rounds. 
On the other extreme, Farina and Pipis [2023] developed an efficient algorithm for minimizing the weaker notion of linear-swap regret in $\mathsf{poly}(N)/\epsilon^2$ rounds. In this paper, we develop efficient parameterized algorithms for regimes between these two extremes. We introduce the set of $k$-mediator deviations, which generalize the untimed communication deviations recently introduced by Zhang, Farina and Sandholm [2024] to the case of having multiple mediators, and we develop algorithms for minimizing the regret with respect to this set of deviations in $N^{O(k)}/\epsilon^2$ rounds. Moreover, by relating $k$-mediator deviations to low-degree polynomials, we show that regret minimization against degree-$k$ polynomial swap deviations is achievable in $N^{O(kd)^3}/\epsilon^2$ rounds, where $d$ is the depth of the game, assuming a constant branching factor. For a fixed degree $k$, this is polynomial for Bayesian games and quasipolynomial more broadly when $d = \mathsf{polylog} N$---the usual balancedness assumption on the game tree. The first key ingredient in our approach is a relaxation of the usual notion of a fixed point required in the framework of Gordon, Greenwald and Marks [2008]. Namely, for a given deviation $\phi$, we show that it suffices to compute what we refer to as a fixed point in expectation; that is, a distribution $\pi$ such that $\mathbb{E}_{x \sim \pi} [\phi(x) - x] \approx 0$. Unlike the problem of computing an actual (approximate) fixed point $x \approx \phi(x)$, which we show is \PPAD-hard, there is a simple and efficient algorithm for finding a solution that satisfies our relaxed notion. As a byproduct, we provide, to our knowledge, the fastest algorithm for computing $\epsilon$-correlated equilibria in normal-form games in the medium-precision regime, obviating the need to solve a linear system in every round. Our second main contribution is a characterization of the set of low-degree deviations, made possible through a connection to low-depth decision trees from Boolean analysis.", "pdf": "https://openreview.net/pdf/cd5268d9f81aa04152bac698f286b1341e972771.pdf"} {"title": "Contextual Linear Optimization with Bandit Feedback", "url": "https://openreview.net/forum?id=lOdBHkqzRH", "detail_url": "https://openreview.net/forum?id=lOdBHkqzRH", "authors": "Yichun Hu,Nathan Kallus,Xiaojie Mao,Yanchen Wu", "tags": "NIPS 2024,Poster", "abstract": "Contextual linear optimization (CLO) uses predictive contextual features to reduce uncertainty in random cost coefficients and thereby improve average-cost performance. An example is the stochastic shortest path problem with random edge costs (e.g., traffic) and contextual features (e.g., lagged traffic, weather). Existing work on CLO assumes the data has fully observed cost coefficient vectors, but in many applications, we can only see the realized cost of a historical decision, that is, just one projection of the random cost coefficient vector, to which we refer as bandit feedback. We study a class of offline learning algorithms for CLO with bandit feedback, which we term induced empirical risk minimization (IERM), where we fit a predictive model to directly optimize the downstream performance of the policy it induces. We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate, and we develop computationally tractable surrogate losses. 
A byproduct of our theory, of independent interest, is a fast-rate regret bound for IERM with full feedback and a misspecified policy class. We compare the performance of different modeling choices numerically using a stochastic shortest path example and provide practical insights from the empirical results.", "pdf": "https://openreview.net/pdf/ad86d25f249f89cccc7115f67824dcf6d0c13831.pdf"} {"title": "Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning", "url": "https://openreview.net/forum?id=x2zY4hZcmg", "detail_url": "https://openreview.net/forum?id=x2zY4hZcmg", "authors": "Arko Banerjee,Kia Rahmani,Joydeep Biswas,Isil Dillig", "tags": "NIPS 2024,Poster", "abstract": "Among approaches for provably safe reinforcement learning, Model Predictive Shielding (MPS) has proven effective at complex tasks in continuous, high-dimensional state spaces, by leveraging a *backup policy* to ensure safety when the learned policy attempts to take risky actions. However, while MPS can ensure safety both during and after training, it often hinders task progress due to the conservative and task-oblivious nature of backup policies.\nThis paper introduces *Dynamic Model Predictive Shielding* (DMPS), which optimizes reinforcement learning objectives while maintaining provable safety. DMPS employs a local planner to dynamically select safe recovery actions that maximize both short-term progress as well as long-term rewards. Crucially, the planner and the neural policy play a synergistic role in DMPS. When planning recovery actions for ensuring safety, the planner utilizes the neural policy to estimate long-term rewards, allowing it to *observe* beyond its short-term planning horizon. \nConversely, the neural policy under training learns from the recovery plans proposed by the planner, converging to policies that are both *high-performing* and *safe* in practice.\nThis approach guarantees safety during and after training, with bounded recovery regret that decreases exponentially with planning horizon depth. Experimental results demonstrate that DMPS converges to policies that rarely require shield interventions after training and achieve higher rewards compared to several state-of-the-art baselines.", "pdf": "https://openreview.net/pdf/c706ae06d8c7d1a093c6b0c855ff445166fb6629.pdf"} {"title": "Contrastive dimension reduction: when and how?", "url": "https://openreview.net/forum?id=IgU8gMKy4D", "detail_url": "https://openreview.net/forum?id=IgU8gMKy4D", "authors": "Sam Hawke,Yueen Ma,Didong Li", "tags": "NIPS 2024,Poster", "abstract": "Dimension reduction (DR) is an important and widely studied technique in exploratory data analysis. However, traditional DR methods are not applicable to datasets with a contrastive structure, where data are split into a foreground group of interest (case or treatment group) and a background group (control group). This type of data, common in biomedical studies, necessitates contrastive dimension reduction (CDR) methods to effectively capture information unique to or enriched in the foreground group relative to the background group. Despite the development of various CDR methods, two critical questions remain underexplored: when should these methods be applied, and how can the information unique to the foreground group be quantified? 
In this work, we address these gaps by proposing a hypothesis test to determine the existence of contrastive information, and introducing a contrastive dimension estimator (CDE) to quantify the unique components in the foreground group. We provide theoretical support for our methods and validate their effectiveness through extensive simulated, semi-simulated, and real experiments involving images, gene expressions, protein expressions, and medical sensors, demonstrating their ability to identify the unique information in the foreground group.", "pdf": "https://openreview.net/pdf/f99b532c1ef357831e90bd094e2ef4184329b90c.pdf"} {"title": "Belief-State Query Policies for User-Aligned Planning under Partial Observability", "url": "https://openreview.net/forum?id=i2oacRDF5L", "detail_url": "https://openreview.net/forum?id=i2oacRDF5L", "authors": "Daniel Richard Bramblett,Siddharth Srivastava", "tags": "NIPS 2024,Poster", "abstract": "Planning in real-world settings often entails addressing partial observability while aligning with users' requirements. We present a novel framework for expressing users' constraints and preferences about agent behavior in a partially observable setting using parameterized belief-state query (BSQ) constraints in the setting of goal-oriented partially observable Markov decision processes (gPOMDPs). We present the first formal analysis of such constraints and prove that while the expected cost of a BSQ constraint is not a convex function w.r.t. its parameters, it is piecewise constant and yields an implicit discrete parameter search space that is finite for finite horizons. This theoretical result leads to novel algorithms that optimize gPOMDP agent behavior with guaranteed user alignment. Theoretical analysis proves that our algorithms converge to the optimal user-aligned behavior in the limit. Empirical results show that BSQ constraints provide a computationally feasible approach for user-aligned planning in partially observable settings.", "pdf": "https://openreview.net/pdf/dd420e889e4e098f674d68c8d97d9928ea143245.pdf"} {"title": "Randomized Sparse Matrix Compression for Large-Scale Constrained Optimization in Cancer Radiotherapy", "url": "https://openreview.net/forum?id=ItzD2Cnu9y", "detail_url": "https://openreview.net/forum?id=ItzD2Cnu9y", "authors": "Shima Adeli,Mojtaba Tefagh,Gourav Jhanwar,Masoud Zarepisheh", "tags": "NIPS 2024,Poster", "abstract": "Radiation therapy, treating over half of all cancer patients, involves using specialized machines to direct high-energy beams at tumors, aiming to damage cancer cells while minimizing harm to nearby healthy tissues. Customizing the shape and intensity of radiation beams for each patient leads to large-scale constrained optimization problems that must be solved within a tight clinical time-frame. At the core of these challenges is a large matrix that is commonly sparsified for computational efficiency by neglecting small elements. Such a crude approximation can degrade the quality of treatment, potentially causing unnecessary radiation exposure to healthy tissues\u2014this may lead to significant radiation-induced side effects\u2014or delivering inadequate radiation to the tumor, which is crucial for effective tumor treatment. In this work, we demonstrate, for the first time, that randomized sketch tools can effectively sparsify this matrix without sacrificing treatment quality.
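For intuition on what randomized sparsification of such a matrix looks like, here is one classical element-wise scheme: keep each entry with probability proportional to its magnitude and reweight kept entries so the sparsified matrix is unbiased in expectation. This is an illustrative textbook estimator, not necessarily the paper's method.

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparsify(A, budget, rng=None):
    """Keep entry A[i,j] with prob p_ij ~ budget * |A[i,j]| / sum|A| (capped at 1);
    kept entries are divided by p_ij, so the result equals A in expectation
    while having roughly `budget` nonzeros."""
    rng = rng or np.random.default_rng(0)
    p = np.minimum(1.0, budget * np.abs(A) / np.abs(A).sum())
    keep = rng.random(A.shape) < p
    S = np.where(keep, A / np.maximum(p, 1e-12), 0.0)
    return csr_matrix(S)

A = np.random.default_rng(1).standard_normal((200, 100))
S = sparsify(A, budget=5000)
print(S.nnz, np.linalg.norm(S.toarray() - A) / np.linalg.norm(A))
```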
We also develop a novel randomized sketch method with desirable theoretical guarantees that outperforms existing techniques in practical applications. Beyond this methodological contribution, this work emphasizes the potential of harnessing scientific computing tools, crucial in today's big data analysis, to tackle computationally intensive challenges in healthcare. The application of these tools could have a profound impact on the lives of numerous cancer patients. Code and sample data are available at https://github.com/PortPy-Project/CompressRTP", "pdf": "https://openreview.net/pdf/a5960a209cd4710630f17d2f94d764797cca3879.pdf"} {"title": "Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning", "url": "https://openreview.net/forum?id=2vywag2lVC", "detail_url": "https://openreview.net/forum?id=2vywag2lVC", "authors": "Alessandro Montenegro,Marco Mussi,Matteo Papini,Alberto Maria Metelli", "tags": "NIPS 2024,Poster", "abstract": "*Constrained Reinforcement Learning* (CRL) tackles sequential decision-making problems where agents are required to achieve goals by maximizing the expected return while meeting domain-specific constraints, which are often formulated on expected costs. In this setting, *policy-based* methods are widely used since they come with several advantages when dealing with continuous-control problems. These methods search in the policy space with an *action-based* or *parameter-based* exploration strategy, depending on whether they directly learn the parameters of a stochastic policy or those of a stochastic hyperpolicy. In this paper, we propose a general framework for addressing CRL problems via *gradient-based primal-dual* algorithms, relying on an alternate ascent/descent scheme with dual-variable regularization. We introduce an exploration-agnostic algorithm, called C-PG, which exhibits global last-iterate convergence guarantees under (weak) gradient domination assumptions, improving and generalizing existing results. Then, we design C-PGAE and C-PGPE, the action-based and the parameter-based versions of C-PG, respectively, and we illustrate how they naturally extend to constraints defined in terms of *risk measures* over the costs, as is often required in safety-critical scenarios. Finally, we numerically validate our algorithms on constrained control problems, and compare them with state-of-the-art baselines, demonstrating their effectiveness.", "pdf": "https://openreview.net/pdf/93f400f33e9b9fb8ec944c9bc5e0fec693bf0bd5.pdf"} {"title": "On the Minimax Regret for Contextual Linear Bandits and Multi-Armed Bandits with Expert Advice", "url": "https://openreview.net/forum?id=AkiPax5SXu", "detail_url": "https://openreview.net/forum?id=AkiPax5SXu", "authors": "Shinji Ito", "tags": "NIPS 2024,Poster", "abstract": "This paper examines two extensions of multi-armed bandit problems: multi-armed bandits with expert advice and contextual linear bandits. For the former problem, multi-armed bandits with expert advice, the previously known best upper and lower bounds have been $O(\\sqrt{KT \\log \\frac{N}{K} })$ and $\\Omega( \\sqrt{KT \\frac{ \\log N }{\\log K }} )$, respectively. Here, $K$, $N$, and $T$ represent the numbers of arms, experts, and rounds, respectively. This paper closes the gap between these bounds by presenting a matching lower bound of $\\Omega( \\sqrt{KT \\log \\frac{N}{K}} )$.
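For background on the upper-bound side of bandits with expert advice, the classical EXP4 algorithm attains regret of order $\sqrt{KT \log N}$. A compact sketch of the standard algorithm follows (background material, not this paper's contribution; `advice_seq` and `reward_fn` are assumed inputs):

```python
import numpy as np

def exp4(advice_seq, reward_fn, K, N, eta, rng=None):
    """EXP4: maintain log-weights over N experts; each round mix the experts'
    advice distributions over K arms, sample an arm, and update every expert
    with an importance-weighted estimate of its reward."""
    rng = rng or np.random.default_rng(0)
    w = np.zeros(N)                        # log-weights over experts
    total = 0.0
    for advice in advice_seq:              # advice: (N, K) row-stochastic array
        q = np.exp(w - w.max()); q /= q.sum()
        p = q @ advice                     # induced distribution over arms
        arm = rng.choice(K, p=p)
        r = reward_fn(arm)                 # bandit feedback in [0, 1]
        rhat = np.zeros(K); rhat[arm] = r / p[arm]   # importance weighting
        w += eta * (advice @ rhat)         # each expert's estimated reward
        total += r
    return total
```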
\n This lower bound is shown for the problem setting in which the player chooses an expert before observing the advice in each round.\n For the latter problem, contextual linear bandits, we provide an algorithm that achieves $O ( \\sqrt{d T \\log ( K \\min\\{ 1, \\frac{S}{d} \\} )} )$ together with a matching lower bound, where $d$ and $S$ represent the dimensionality of feature vectors and the size of the context space, respectively.", "pdf": "https://openreview.net/pdf/2109cb64ca1cfcad414d86b7deb817ebabab9caf.pdf"} {"title": "Nonparametric Evaluation of Noisy ICA Solutions", "url": "https://openreview.net/forum?id=GVgRbz8MvG", "detail_url": "https://openreview.net/forum?id=GVgRbz8MvG", "authors": "Syamantak Kumar,Derek Bean,Peter Bickel,Purnamrita Sarkar", "tags": "NIPS 2024,Poster", "abstract": "Independent Component Analysis (ICA) was introduced in the 1980s as a model for Blind Source Separation (BSS), which refers to the process of recovering the sources underlying a mixture of signals, with little knowledge about the source signals or the mixing process. While there are many sophisticated algorithms for estimation, different methods have different shortcomings. In this paper, we develop a nonparametric score to adaptively pick the right algorithm for ICA with arbitrary Gaussian noise. The novelty of this score stems from the fact that it just assumes a finite second moment of the data and uses the characteristic function to evaluate the quality of the estimated mixing matrix without any knowledge of the parameters of the noise distribution. In addition, we propose some new contrast functions and algorithms that enjoy the same fast computability as existing algorithms like FASTICA and JADE but work in domains where the former may fail. While these also may have weaknesses, our proposed diagnostic, as shown by our simulations, can remedy them. Finally, we propose a theoretical framework to analyze the local and global convergence properties of our algorithms.", "pdf": "https://openreview.net/pdf/0cf5205d7ba6e60ef56906c75c376e14d531b1b4.pdf"} {"title": "Analysis of Corrected Graph Convolutions", "url": "https://openreview.net/forum?id=MSsQDWUWpd", "detail_url": "https://openreview.net/forum?id=MSsQDWUWpd", "authors": "Robert Wang,Aseem Baranwal,Kimon Fountoulakis", "tags": "NIPS 2024,Poster", "abstract": "Machine learning for node classification on graphs is a prominent area driven by applications such as recommendation systems. State-of-the-art models often use multiple graph convolutions on the data, as empirical evidence suggests they can enhance performance. However, it has been shown, empirically and theoretically, that too many graph convolutions can degrade performance significantly, a phenomenon known as oversmoothing. In this paper, we provide a rigorous theoretical analysis, based on the two-class contextual stochastic block model (CSBM), of the performance of vanilla graph convolution from which we remove the principal eigenvector to avoid oversmoothing. We perform a spectral analysis for $k$ rounds of corrected graph convolutions, and we provide results for partial and exact classification. For partial classification, we show that each round of convolution can reduce the misclassification error exponentially up to a saturation level, after which performance does not worsen. We also extend this analysis to the multi-class setting with features distributed according to a Gaussian mixture model.
For exact classification, we show that the separability threshold can be improved exponentially up to $O({\\log{n}}/{\\log\\log{n}})$ corrected convolutions.", "pdf": "https://openreview.net/pdf/15af0daea33b88342a38fa9ca6d66b1105cbbe1a.pdf"} {"title": "Oja's Algorithm for Streaming Sparse PCA", "url": "https://openreview.net/forum?id=clQdPtooRD", "detail_url": "https://openreview.net/forum?id=clQdPtooRD", "authors": "Syamantak Kumar,Purnamrita Sarkar", "tags": "NIPS 2024,Poster", "abstract": "Oja's algorithm for Streaming Principal Component Analysis (PCA) for $n$ data-points in a $d$ dimensional space achieves the same sin-squared error $O(r_{\\mathsf{eff}}/n)$ as the offline algorithm in $O(d)$ space and $O(nd)$ time and a single pass through the datapoints. Here $r_{\\mathsf{eff}}$ is the effective rank (ratio of the trace and the principal eigenvalue of the population covariance matrix $\\Sigma$). Under this computational budget, we consider the problem of sparse PCA, where the principal eigenvector of $\\Sigma$ is $s$-sparse, and $r_{\\mathsf{eff}}$ can be large. In this setting, to our knowledge, *there are no known single-pass algorithms* that achieve the minimax error bound in $O(d)$ space and $O(nd)$ time without either requiring strong initialization conditions or assuming further structure (e.g., spiked) of the covariance matrix.\nWe show that a simple single-pass procedure that thresholds the output of Oja's algorithm (the Oja vector) can achieve the minimax error bound under some regularity conditions in $O(d)$ space and $O(nd)$ time. \nWe present a nontrivial and novel analysis of the entries of the unnormalized Oja vector, which involves the projection of a product of independent random matrices on a random initial vector. This is completely different from previous analyses of Oja's algorithm and matrix products, which have been done when $r_{\\mathsf{eff}}$ is bounded.", "pdf": "https://openreview.net/pdf/18dc5c8619f6030ed556376f12d426c8359c5f2b.pdf"} {"title": "DEFT: Efficient Fine-tuning of Diffusion Models by Learning the Generalised $h$-transform", "url": "https://openreview.net/forum?id=AKBTFQhCjm", "detail_url": "https://openreview.net/forum?id=AKBTFQhCjm", "authors": "Alexander Denker,Francisco Vargas,Shreyas Padhy,Kieran Didi,Simon V Mathis,Riccardo Barbano,Vincent Dutordoir,Emile Mathieu,Urszula Julia Komorowska,Pietro Lio", "tags": "NIPS 2024,Poster", "abstract": "Generative modelling paradigms based on denoising diffusion processes have emerged as leading candidates for conditional sampling in inverse problems. \nIn many real-world applications, we often have access to large, expensively trained unconditional diffusion models, which we aim to exploit for improving conditional sampling.\nMost recent approaches are motivated heuristically and lack a unifying framework, obscuring connections between them. Further, they often suffer from issues such as being very sensitive to hyperparameters, being expensive to train or needing access to weights hidden behind a closed API. In this work, we unify conditional training and sampling using the mathematically well-understood Doob's h-transform. This new perspective allows us to unify many existing methods under a common umbrella. Under this framework, we propose DEFT (Doob's h-transform Efficient FineTuning), a new approach for conditional generation that simply fine-tunes a very small network to quickly learn the conditional $h$-transform, while keeping the larger unconditional network unchanged.
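The thresholded single-pass procedure described in the Oja abstract above is simple to sketch: run the streaming Oja update, normalize, then zero out small coordinates of the final Oja vector. The threshold `tau` below is an ad hoc illustrative choice; the paper derives a principled one under regularity conditions.

```python
import numpy as np

def thresholded_oja(stream, d, eta=0.01, tau=None):
    """Single pass in O(d) space: Oja update w <- normalize(w + eta * x (x.w)),
    then hard-threshold small entries of the Oja vector to recover an
    s-sparse principal direction."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(d); w /= np.linalg.norm(w)
    for x in stream:                       # one pass over the data points
        w += eta * x * (x @ w)
        w /= np.linalg.norm(w)
    if tau is None:
        tau = 2.0 / np.sqrt(d)             # illustrative threshold only
    w[np.abs(w) < tau] = 0.0
    n = np.linalg.norm(w)
    return w / n if n > 0 else w
```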
DEFT is much faster than existing baselines while achieving state-of-the-art performance across a variety of linear and non-linear benchmarks. On image reconstruction tasks, we achieve speedups of up to 1.6$\\times$, while having the best perceptual quality on natural images and reconstruction performance on medical images. Further, we also provide initial experiments on protein motif scaffolding and outperform reconstruction guidance methods.", "pdf": "https://openreview.net/pdf/3eeaef02a49b07769b199b3acba47b55845a4da8.pdf"} {"title": "AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation", "url": "https://openreview.net/forum?id=r0eSCJ6qsL", "detail_url": "https://openreview.net/forum?id=r0eSCJ6qsL", "authors": "Anil Kag,Huseyin Coskun,Jierun Chen,Junli Cao,Willi Menapace,Aliaksandr Siarohin,Sergey Tulyakov,Jian Ren", "tags": "NIPS 2024,Poster", "abstract": "Neural network architecture design requires making many crucial decisions. The common desideratum is that similar decisions, with small modifications, can be reused in a variety of tasks and applications. To satisfy that, architectures must provide promising latency and performance trade-offs, support a variety of tasks, scale efficiently with respect to the amounts of data and compute, leverage available data from other tasks, and efficiently support various hardware. To this end, we introduce AsCAN---a hybrid architecture, combining both convolutional and transformer blocks. We revisit the key design principles of hybrid architectures and propose a simple and effective \\emph{asymmetric} architecture, where the distribution of convolutional and transformer blocks is \\emph{asymmetric}, containing more convolutional blocks in the earlier stages, followed by more transformer blocks in later stages. AsCAN supports a variety of tasks: recognition, segmentation, class-conditional image generation, and features a superior trade-off between performance and latency. We then scale the same architecture to solve a large-scale text-to-image task and show state-of-the-art performance compared to the most recent public and commercial models. Notably, without performing any inference-time optimization, our model shows faster execution even when compared to works that do such optimization, highlighting the advantages and the value of our approach.", "pdf": "https://openreview.net/pdf/4136dd4ddffaf95b4c4b7e86a402302343ad550b.pdf"} {"title": "Image Reconstruction Via Autoencoding Sequential Deep Image Prior", "url": "https://openreview.net/forum?id=K1EG2ABzNE", "detail_url": "https://openreview.net/forum?id=K1EG2ABzNE", "authors": "Ismail Alkhouri,Shijun Liang,Evan Bell,Qing Qu,Rongrong Wang,Saiprasad Ravishankar", "tags": "NIPS 2024,Poster", "abstract": "Recently, Deep Image Prior (DIP) has emerged as an effective unsupervised one-shot learner, delivering competitive results across various image recovery problems. This method only requires the noisy measurements and a forward operator, relying solely on deep networks initialized with random noise to learn and restore the structure of the data. However, DIP is notorious for its vulnerability to overfitting due to the overparameterization of the network. Building upon insights into the impact of the DIP input and drawing inspiration from the gradual denoising process in cutting-edge diffusion models, we introduce Autoencoding Sequential DIP (aSeqDIP) for image reconstruction.
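The asymmetric layout described in the AsCAN abstract, convolution-heavy early stages followed by attention-heavy late ones, can be expressed as a schematic stage schedule. The blocks below are deliberately simplified toy modules, not the AsCAN blocks.

```python
import torch.nn as nn

def conv_block(c):
    return nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.GELU())

class TransformerBlock(nn.Module):
    def __init__(self, c, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(c, heads, batch_first=True)
        self.norm = nn.LayerNorm(c)
    def forward(self, x):                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)   # (B, HW, C) token sequence
        n = self.norm(t)
        t = t + self.attn(n, n, n)[0]      # pre-norm residual attention
        return t.transpose(1, 2).reshape(b, c, h, w)

def make_stages(c, schedule=((3, 0), (2, 1), (1, 2), (0, 3))):
    """Asymmetric schedule: (n_conv, n_attn) per stage, shifting from
    convolution-dominated to attention-dominated blocks."""
    stages = []
    for n_conv, n_attn in schedule:
        stages += [conv_block(c) for _ in range(n_conv)]
        stages += [TransformerBlock(c) for _ in range(n_attn)]
    return nn.Sequential(*stages)
```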
This method progressively denoises and reconstructs the image through a sequential optimization of network weights. This is achieved using an input-adaptive DIP objective, combined with an autoencoding regularization term. Compared to diffusion models, our method does not require training data and outperforms other DIP-based methods in mitigating noise overfitting while maintaining a similar number of parameter updates as Vanilla DIP. Through extensive experiments, we validate the effectiveness of our method in various image reconstruction tasks, such as MRI and CT reconstruction, as well as in image restoration tasks like image denoising, inpainting, and non-linear deblurring.", "pdf": "https://openreview.net/pdf/e5f87dcd503aa6582c44c74bd9bd7c7457391a94.pdf"} {"title": "A Novel Unified Architecture for Low-Shot Counting by Detection and Segmentation", "url": "https://openreview.net/forum?id=mtOPyMkSRk", "detail_url": "https://openreview.net/forum?id=mtOPyMkSRk", "authors": "Jer Pelhan,Alan Lukezic,Vitjan Zavrtanik,Matej Kristan", "tags": "NIPS 2024,Poster", "abstract": "Low-shot object counters estimate the number of objects in an image using few or no annotated exemplars. Objects are localized by matching them to prototypes, which are constructed by unsupervised image-wide object appearance aggregation.\nDue to potentially diverse object appearances, the existing approaches often lead to overgeneralization and false positive detections.\nFurthermore, the best-performing methods train object localization with a surrogate loss that predicts a unit Gaussian at each object center. This loss is sensitive to annotation error and hyperparameters, and does not directly optimize the detection task, leading to suboptimal counts.\nWe introduce GeCo, a novel low-shot counter that achieves accurate object detection, segmentation, and count estimation in a unified architecture.\nGeCo robustly generalizes the prototypes across object appearances through a novel dense object query formulation. \nIn addition, a novel counting loss is proposed that directly optimizes the detection task and avoids the issues of the standard surrogate loss. \nGeCo surpasses the leading few-shot detection-based counters by $\\sim$25\\% in the total count MAE, achieves superior detection accuracy and sets a solid new state-of-the-art result across all low-shot counting setups. \nThe code will be available on GitHub.", "pdf": "https://openreview.net/pdf/d0b2cb2b23e87543d69c0f54f25700b7c5a26296.pdf"} {"title": "Implicit Bias of Mirror Flow on Separable Data", "url": "https://openreview.net/forum?id=wiMaws0FWB", "detail_url": "https://openreview.net/forum?id=wiMaws0FWB", "authors": "Scott Pesme,Radu-Alexandru Dragomir,Nicolas Flammarion", "tags": "NIPS 2024,Poster", "abstract": "We examine the continuous-time counterpart of mirror descent, namely mirror flow, on classification problems which are linearly separable. Such problems are minimised \u2018at infinity\u2019 and have many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponentially tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $\\phi_\\infty$-maximum margin classifier. The function $\\phi_\\infty$ is the horizon function of the mirror potential and characterises its shape \u2018at infinity\u2019. When the potential is separable, a simple formula allows one to compute this function.
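One plausible reading of the aSeqDIP procedure described above, offered as a hedged sketch rather than the authors' exact objective: at each outer step, fit the network so its output is consistent with the measurements while an autoencoding term keeps the output close to the input, then feed the output back in as the next input. `net` and `A` are assumed user-supplied callables.

```python
import torch

def aseqdip_sketch(net, y, A, x0, steps=8, inner=50, lam=1.0, lr=1e-3):
    """Schematic sequential DIP: data-fidelity term ||A(net(x_t)) - y||^2 plus
    autoencoding term ||net(x_t) - x_t||^2, optimized over network weights;
    the output becomes the next input (input-adaptive objective)."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    x = x0
    for _ in range(steps):
        for _ in range(inner):
            out = net(x)
            loss = ((A(out) - y) ** 2).mean() + lam * ((out - x) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        x = net(x).detach()        # feed the denoised output back in
    return x
```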
We analyse several examples of potentials and provide numerical experiments highlighting our results.", "pdf": "https://openreview.net/pdf/eca2a88950a4c4612135cbb0fbad857b5b6af6af.pdf"} {"title": "Mixture of In-Context Experts Enhance LLMs' Long Context Awareness", "url": "https://openreview.net/forum?id=RcPHbofiCN", "detail_url": "https://openreview.net/forum?id=RcPHbofiCN", "authors": "Hongzhan Lin,Ang Lv,Yuhan Chen,Chen Zhu,Yang Song,Hengshu Zhu,Rui Yan", "tags": "NIPS 2024,Poster", "abstract": "Many studies have revealed that large language models (LLMs) exhibit uneven awareness of different contextual positions. Their limited context awareness can lead to overlooking critical information and subsequent task failures. While several approaches have been proposed to enhance LLMs' context awareness, achieving both effectiveness and efficiency remains challenging. In this paper, for LLMs utilizing RoPE as position embeddings, we introduce a novel method called \"Mixture of In-Context Experts\" (MoICE) to address this challenge. MoICE comprises two key components: a router integrated into each attention head within LLMs and a lightweight router-only training optimization strategy: (1) MoICE views each RoPE angle as an 'in-context' expert, demonstrated to be capable of directing the attention of a head to specific contextual positions. Consequently, each attention head flexibly processes tokens using multiple RoPE angles dynamically selected by the router to attend to the needed positions. This approach mitigates the risk of overlooking essential contextual information. (2) The router-only training strategy entails freezing LLM parameters and exclusively updating routers for only a few steps. When applied to open-source LLMs including Llama and Mistral, MoICE surpasses prior methods across multiple tasks on long context understanding and generation, all while maintaining commendable inference efficiency.", "pdf": "https://openreview.net/pdf/ab20c45ea14ca85988f5168c8ba66175978e42d5.pdf"} {"title": "Speculative Decoding with CTC-based Draft Model for LLM Inference Acceleration", "url": "https://openreview.net/forum?id=pGeAcYhnN5", "detail_url": "https://openreview.net/forum?id=pGeAcYhnN5", "authors": "Zhuofan Wen,Shangtong Gui,Yang Feng", "tags": "NIPS 2024,Poster", "abstract": "Inference acceleration of large language models (LLMs) is needed in many application scenarios, and speculative decoding has shown its advantage in addressing it. Speculative decoding usually introduces a draft model to assist the base LLM, where the draft model produces drafts and the base LLM verifies the draft for acceptance or rejection. In this framework, the final inference speed is decided by the decoding speed of the draft model and the acceptance rate of the draft provided by the draft model. Currently, the widely used draft models generate draft tokens for the next several positions in a non-autoregressive way without considering the correlations between draft tokens. Therefore, it has a high decoding speed but an unsatisfactory acceptance rate. In this paper, we focus on how to improve the performance of the draft model and aim to accelerate inference via a high acceptance rate. To this end, we propose a CTC-based draft model which strengthens the correlations between draft tokens during the draft phase, thereby generating higher-quality draft candidate sequences.
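The discrete-time analogue of the mirror flow analysed above is mirror descent, $w_{t+1} = \nabla\psi^*(\nabla\psi(w_t) - \eta\nabla L(w_t))$. For the separable potential $\psi(w) = \sum_i |w_i|^p / p$ both mirror maps are coordinate-wise, which gives a short illustrative sketch on exponential-loss separable data (our illustration, not the paper's code):

```python
import numpy as np

def mirror_descent(X, y, p=3.0, eta=0.1, T=20000):
    """Mirror descent with psi(w) = sum |w_i|^p / p on the exponential loss
    L(w) = mean(exp(-y * Xw)); on separable data the direction w_t / ||w_t||
    drifts toward a potential-dependent max-margin classifier."""
    n, d = X.shape
    theta = np.zeros(d)                    # dual (mirror) variable
    for _ in range(T):
        # grad psi^*: coordinate-wise inverse of grad psi(w) = sign(w)|w|^{p-1}
        w = np.sign(theta) * np.abs(theta) ** (1.0 / (p - 1.0))
        margins = y * (X @ w)
        grad = -(X * (y * np.exp(-margins))[:, None]).mean(axis=0)
        theta -= eta * grad                # mirror step in the dual variable
    w = np.sign(theta) * np.abs(theta) ** (1.0 / (p - 1.0))
    return w / (np.linalg.norm(w) + 1e-12)
```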
Experimental results show that compared to strong baselines, the proposed method can achieve a higher acceptance rate and hence a faster inference speed.", "pdf": "https://openreview.net/pdf/24afb093510474acb2e9714e67a7d28648b9dd2e.pdf"} {"title": "S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search", "url": "https://openreview.net/forum?id=wJAF8TGVUG", "detail_url": "https://openreview.net/forum?id=wJAF8TGVUG", "authors": "Gengmo Zhou,Zhen Wang,Feng Yu,Guolin Ke,Zhewei Wei,Zhifeng Gao", "tags": "NIPS 2024,Poster", "abstract": "Virtual Screening is an essential technique in the early phases of drug discovery, aimed at identifying promising drug candidates from vast molecular libraries. \nRecently, ligand-based virtual screening has garnered significant attention due to its efficacy in conducting extensive database screenings without relying on specific protein-binding site information.\nObtaining binding affinity data for complexes is highly expensive, resulting in a limited amount of available data that covers a relatively small chemical space. Moreover, these datasets contain a significant amount of inconsistent noise. It is challenging to identify an inductive bias that consistently maintains the integrity of molecular activity during data augmentation. To tackle these challenges, we propose S-MolSearch, the first framework, to our knowledge, that leverages molecular 3D information and affinity information in semi-supervised contrastive learning for ligand-based virtual screening. \nDrawing on the principles of inverse optimal transport, S-MolSearch efficiently processes both labeled and unlabeled data, training molecular structural encoders while generating soft labels for the unlabeled data.\nThis design allows S-MolSearch to adaptively utilize unlabeled data within the learning process.\nEmpirically, S-MolSearch demonstrates superior performance on widely-used benchmarks LIT-PCBA and DUD-E. It surpasses both structure-based and ligand-based virtual screening methods in AUROC, BEDROC, and EF.", "pdf": "https://openreview.net/pdf/46d73ed2879dccfa7cc4be660af5ec5d12d274b1.pdf"} {"title": "On the Parameter Identifiability of Partially Observed Linear Causal Models", "url": "https://openreview.net/forum?id=EQZlEfjrkV", "detail_url": "https://openreview.net/forum?id=EQZlEfjrkV", "authors": "Xinshuai Dong,Ignavier Ng,Biwei Huang,Yuewen Sun,Songyao Jin,Roberto Legaspi,Peter Spirtes,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Linear causal models are important tools for modeling causal dependencies and yet, in practice, only a subset of the variables can be observed. In this paper, we examine the parameter identifiability of these models by investigating whether the edge coefficients can be recovered given the causal structure and partially observed data. Our setting is more general than that of prior research\u2014we allow all variables, including both observed and latent ones, to be flexibly related, and we consider the coefficients of all edges, whereas most existing works focus only on the edges between observed variables. Theoretically, we identify three types of indeterminacy for the parameters in partially observed linear causal models.
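The verification side of the speculative decoding framework that such draft models plug into is the standard token-level accept/reject rule; a sketch of that generic scheme follows (the widely used speculative sampling rule, not the CTC draft model itself):

```python
import numpy as np

def verify_draft(draft_tokens, p_draft, p_base, rng):
    """Standard speculative sampling acceptance: accept draft token t with
    probability min(1, p_base(t) / p_draft(t)); at the first rejection,
    resample from the residual distribution max(p_base - p_draft, 0)."""
    accepted = []
    for t, q, p in zip(draft_tokens, p_draft, p_base):  # per-position dists
        if rng.random() < min(1.0, p[t] / max(q[t], 1e-12)):
            accepted.append(t)
        else:
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(rng.choice(len(p), p=residual))
            break                          # stop at the first rejection
    return accepted
```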
We then provide graphical conditions that are sufficient for all parameters to be identifiable and show that some of them are provably necessary. Methodologically, we propose a novel likelihood-based parameter estimation method that addresses the variance indeterminacy of latent variables in a specific way and can asymptotically recover the underlying parameters up to trivial indeterminacy. Empirical studies on both synthetic and real-world datasets validate our identifiability theory and the effectiveness of the proposed method in the finite-sample regime.", "pdf": "https://openreview.net/pdf/1a16ee67f1b9cdea455119606e794edb89c815ac.pdf"} {"title": "Low Precision Local Training is Enough for Federated Learning", "url": "https://openreview.net/forum?id=vvpewjtnvm", "detail_url": "https://openreview.net/forum?id=vvpewjtnvm", "authors": "Zhiwei Li,Yiqiu LI,Binbin Lin,Zhongming Jin,WEIZHONG ZHANG", "tags": "NIPS 2024,Poster", "abstract": "Federated Learning (FL) is a prevalent machine learning paradigm designed to address challenges posed by heterogeneous client data while preserving data privacy.\n Unlike distributed training, it typically orchestrates resource-constrained edge devices to communicate via a low-bandwidth communication network with a central server. This urges the development of more computation- and communication-efficient training algorithms. In this paper, we propose an efficient FL paradigm, where the local models in the clients are trained with low-precision operations and communicated with the server in low precision format, while only the model aggregation in the server is performed with high-precision computation. We surprisingly find that high precision models can be recovered from the low precision local models with proper aggregation in the server. \n In this way, both the client-side workload and the communication cost can be significantly reduced. We theoretically show that our proposed paradigm can converge to the optimal solution as the training goes on, which demonstrates that low precision local training is enough for FL. Our paradigm can be integrated with existing FL algorithms flexibly. Experiments across extensive benchmarks are conducted to showcase the effectiveness of our proposed method. Notably, the models trained by our method with precision as low as 8 bits are comparable to those from the full precision training. As a by-product, we show that low precision local training can relieve the over-fitting issue in local training, which under heterogeneous client data can cause the client models to drift further away from each other and lead to failure in model aggregation. Code is released at https://github.com/digbangbang/LPT-FL.", "pdf": "https://openreview.net/pdf/aeb8909737e7debabd78bc8d5e4a4a9228bf658a.pdf"} {"title": "Matryoshka Query Transformer for Large Vision-Language Models", "url": "https://openreview.net/forum?id=B1vGiSgELw", "detail_url": "https://openreview.net/forum?id=B1vGiSgELw", "authors": "Wenbo Hu,Zi-Yi Dou,Liunian Harold Li,Amita Kamath,Nanyun Peng,Kai-Wei Chang", "tags": "NIPS 2024,Poster", "abstract": "Large Vision-Language Models (LVLMs) typically encode an image into a fixed number of visual tokens (e.g., 576) and process these tokens with a language model. Despite their strong performance, LVLMs face challenges in adapting to varying computational constraints. This raises the question: can we achieve flexibility in the number of visual tokens to suit different tasks and computational resources?
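The low-precision-local / high-precision-server paradigm can be sketched with simple uniform quantization; this is an illustrative toy (the paper's quantizer and convergence analysis are more careful):

```python
import numpy as np

def quantize(w, bits=8):
    """Uniform symmetric quantization of a weight vector to `bits` bits;
    clients send (int codes, scale) instead of full-precision floats."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(w / scale).astype(np.int8), scale

def aggregate(client_payloads):
    """Server: dequantize each client's low-precision model and average in
    full float64 precision, recovering a high-precision global model."""
    models = [q.astype(np.float64) * s for q, s in client_payloads]
    return np.mean(models, axis=0)

clients = [np.random.default_rng(i).standard_normal(1000) * 0.1 for i in range(10)]
payloads = [quantize(w) for w in clients]
print(np.abs(aggregate(payloads) - np.mean(clients, axis=0)).max())  # small
```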
We answer this with an emphatic yes. Inspired by Matryoshka Representation Learning, we introduce the Matryoshka Query Transformer (MQT), capable of encoding an image into $m$ visual tokens during inference, where $m$ can be any number up to a predefined maximum. This is achieved by employing a query transformer with $M$ latent query tokens to compress the visual embeddings. During each training step, we randomly select $m \\leq M$ latent query tokens and train the model using only these first $m$ tokens, discarding the rest.\nCombining MQT with LLaVA, we train a single model once, and flexibly and drastically reduce the number of inference-time visual tokens while maintaining similar or better performance compared to training independent models for each number of tokens. \nOur model, MQT-LLaVA, matches LLaVA-1.5 performance across 11 benchmarks using a maximum of 256 tokens instead of LLaVA\u2019s fixed 576. Reducing to 16 tokens (8x less TFLOPs) only sacrifices performance by 2.4 points on MMBench. On certain tasks such as ScienceQA and MMMU, we can even go down to only 2 visual tokens with performance drops of just 3\\% and 6\\%, respectively.\nOur exploration of the trade-off between the accuracy and computational cost brought about by the number of visual tokens facilitates future research to achieve the best of both worlds.", "pdf": "https://openreview.net/pdf/e88ad45800bd730a98f6139871a78e63dc6551f2.pdf"} {"title": "The Implicit Bias of Adam on Separable Data", "url": "https://openreview.net/forum?id=xRQxan3WkM", "detail_url": "https://openreview.net/forum?id=xRQxan3WkM", "authors": "Chenyang Zhang,Difan Zou,Yuan Cao", "tags": "NIPS 2024,Poster", "abstract": "Adam has become one of the most favored optimizers in deep learning problems. Despite its success in practice, numerous mysteries persist regarding its theoretical understanding. In this paper, we study the implicit bias of Adam in linear logistic regression. Specifically, we show that when the training data are linearly separable, the iterates of Adam converge towards a linear classifier that achieves the maximum $\\ell_\\infty$-margin in direction. Notably, for a general class of diminishing learning rates, this convergence occurs within polynomial time. Our results shed light on the difference between Adam and (stochastic) gradient descent from a theoretical perspective.", "pdf": "https://openreview.net/pdf/d4a15e237e7fcad39521c47bdcdaeb9981dca258.pdf"} {"title": "Faster Differentially Private Top-$k$ Selection: A Joint Exponential Mechanism with Pruning", "url": "https://openreview.net/forum?id=QyxE3W9Yni", "detail_url": "https://openreview.net/forum?id=QyxE3W9Yni", "authors": "Hao WU,Hanwen Zhang", "tags": "NIPS 2024,Poster", "abstract": "We study the differentially private top-$k$ selection problem, aiming to identify a sequence of $k$ items with approximately the highest scores from $d$ items. Recent work by Gillenwater et al. (2022) employs a direct sampling approach from the vast collection of $O(d^k)$ possible length-$k$ sequences, showing superior empirical accuracy compared to previous pure or approximate differentially private methods. Their algorithm has a time and space complexity of $\\tilde{O}(dk)$.
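The Matryoshka-style training trick described above is a prefix truncation of latent queries: each step keeps only the first $m \leq M$ query tokens. A schematic sketch, with `resampler` standing in for a generic cross-attention query transformer (an assumed callable, not the MQT module itself):

```python
import random
import torch
import torch.nn as nn

class MatryoshkaQueries(nn.Module):
    """Hold M learnable latent query tokens; at train time keep only a random
    prefix of length m, so any prefix length works at inference."""
    def __init__(self, M, dim, resampler):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(M, dim) * 0.02)
        self.resampler = resampler         # e.g., cross-attention over image features
    def forward(self, image_embeds, m=None):
        M = self.queries.shape[0]
        m = m if m is not None else random.randint(1, M)
        q = self.queries[:m].unsqueeze(0).expand(image_embeds.size(0), -1, -1)
        return self.resampler(q, image_embeds)   # (B, m, dim) visual tokens
```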
\n\nIn this paper, we present an improved algorithm that achieves time and space complexity of $\\tilde{O}(d + k^2)$.\nExperimental results show that our algorithm runs orders of magnitude faster than their approach, while achieving similar empirical accuracy.", "pdf": "https://openreview.net/pdf/7663c0f5849a38cf3d7c9766d419319d1814edbf.pdf"} {"title": "The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains", "url": "https://openreview.net/forum?id=qaRT6QTIqJ", "detail_url": "https://openreview.net/forum?id=qaRT6QTIqJ", "authors": "Ezra Edelman,Nikolaos Tsilivis,Benjamin L. Edelman,eran malach,Surbhi Goel", "tags": "NIPS 2024,Poster", "abstract": "Large language models have the ability to generate text that mimics patterns in their inputs. We introduce a simple Markov Chain sequence modeling task in order to study how this in-context learning capability emerges. In our setting, each example is sampled from a Markov chain drawn from a prior distribution over Markov chains. Transformers trained on this task form \\emph{statistical induction heads} which compute accurate next-token probabilities given the bigram statistics of the context. During the course of training, models pass through multiple phases: after an initial stage in which predictions are uniform, they learn to sub-optimally predict using in-context single-token statistics (unigrams); then, there is a rapid phase transition to the correct in-context bigram solution. We conduct an empirical and theoretical investigation of this multi-phase process, showing how successful learning results from the interaction between the transformer's layers, and uncovering evidence that the presence of the simpler unigram solution may delay formation of the final bigram solution. We examine how learning is affected by varying the prior distribution over Markov chains, and consider the generalization of our in-context learning of Markov chains (ICL-MC) task to $n$-grams for $n > 2$.", "pdf": "https://openreview.net/pdf/ff35ee38dadb41ad5a94966f71f681a7d61aa066.pdf"} {"title": "On the Computational Complexity of Private High-dimensional Model Selection", "url": "https://openreview.net/forum?id=PzG7xVlYqm", "detail_url": "https://openreview.net/forum?id=PzG7xVlYqm", "authors": "Saptarshi Roy,Zehua Wang,Ambuj Tewari", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of model selection in a high-dimensional sparse linear regression model under privacy constraints. We propose a differentially private (DP) best subset selection method with strong statistical utility properties by adopting the well-known exponential mechanism for selecting the best model. To achieve computational expediency, we propose an efficient Metropolis-Hastings algorithm and under certain regularity conditions, we establish that it enjoys polynomial mixing time to its stationary distribution. As a result, we also establish both approximate differential privacy and statistical utility for the estimates of the mixed Metropolis-Hastings chain. 
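The in-context bigram solution that the trained transformers in the induction-heads study converge to can be written out directly as a reference computation (our illustration of the target solution, not the paper's code; the smoothing constant is an assumption):

```python
from collections import Counter

def bigram_predictor(context, vocab_size, alpha=1.0):
    """Predict P(next token | last token) from bigram counts within the
    context itself, with add-alpha smoothing; this is the 'statistical
    induction head' computation reached after the phase transition."""
    counts = Counter(zip(context[:-1], context[1:]))
    last = context[-1]
    row = [counts[(last, v)] + alpha for v in range(vocab_size)]
    total = sum(row)
    return [c / total for c in row]

print(bigram_predictor([0, 1, 0, 1, 0], vocab_size=2))  # P(1|0) is high
```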
Finally, we perform some illustrative experiments on simulated data showing that our algorithm can quickly identify active features under reasonable privacy budget constraints.", "pdf": "https://openreview.net/pdf/4757219c74174ea2feefc3d596b96d96a0bd7788.pdf"} {"title": "Why Do We Need Weight Decay in Modern Deep Learning?", "url": "https://openreview.net/forum?id=YrAxxscKM2", "detail_url": "https://openreview.net/forum?id=YrAxxscKM2", "authors": "Francesco D'Angelo,Maksym Andriushchenko,Aditya Varre,Nicolas Flammarion", "tags": "NIPS 2024,Poster", "abstract": "Weight decay is a broadly used technique for training state-of-the-art deep networks from image classification to large language models. Despite its widespread usage and being extensively studied in the classical literature, its role remains poorly understood for deep learning. In this work, we highlight that the role of weight decay in modern deep learning is different from its regularization effect studied in classical learning theory. For deep networks on vision tasks trained with multipass SGD, we show how weight decay modifies the optimization dynamics enhancing the ever-present implicit regularization of SGD via the *loss stabilization mechanism*. In contrast, for large language models trained with nearly one-epoch training, we describe how weight decay balances the *bias-variance tradeoff* in stochastic optimization leading to lower training loss and improved training stability. \nOverall, we present a unifying perspective from ResNets on vision tasks to LLMs: weight decay is never useful as an explicit regularizer but instead changes the training dynamics in a desirable way.", "pdf": "https://openreview.net/pdf/9f1d1e2a1087bd1b61536fa0505d40c5fc8f338b.pdf"} {"title": "Open-Vocabulary Object Detection via Language Hierarchy", "url": "https://openreview.net/forum?id=TNQ0hxh3O1", "detail_url": "https://openreview.net/forum?id=TNQ0hxh3O1", "authors": "Jiaxing Huang,Jingyi Zhang,Kai Jiang,Shijian Lu", "tags": "NIPS 2024,Poster", "abstract": "Recent studies on generalizable object detection have attracted increasing attention with additional weak supervision from large-scale datasets with image-level labels.\nHowever, weakly-supervised detection learning often suffers from image-to-box label mismatch, i.e., image-level\nlabels do not convey precise object information.\nWe design Language Hierarchical Self-training (LHST) that introduces language hierarchy into weakly-supervised detector training for learning more generalizable detectors.\nLHST expands the image-level labels with language hierarchy and enables co-regularization between the expanded labels and self-training. 
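To fix notation for the weight-decay mechanism discussed above, here is a minimal sketch of the decoupled form (as popularized by AdamW); with plain SGD it coincides with L2 regularization, while with adaptive methods the two differ. Illustrative only, not tied to the paper's experiments.

```python
def sgd_weight_decay_step(params, grads, lr=0.1, wd=1e-4):
    """Decoupled weight decay: shrink each weight multiplicatively,
    independently of the loss gradient, then take the gradient step."""
    return [(1.0 - lr * wd) * p - lr * g for p, g in zip(params, grads)]
```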
Specifically, the expanded labels regularize self-training by providing richer supervision and mitigating the image-to-box label mismatch, while self-training allows assessing and selecting the expanded labels according to the predicted reliability.\nIn addition, we design language hierarchical prompt generation that introduces language hierarchy into prompt generation, which helps bridge the vocabulary gaps between training and testing.\nExtensive experiments show that the proposed techniques achieve superior generalization performance consistently across 14 widely studied object detection datasets.", "pdf": "https://openreview.net/pdf/a59c67929a8ed43afd77cbc5fdccf8cafc35eab2.pdf"} {"title": "Grounded Answers for Multi-agent Decision-making Problem through Generative World Model", "url": "https://openreview.net/forum?id=QWsLks8LCO", "detail_url": "https://openreview.net/forum?id=QWsLks8LCO", "authors": "Zeyang Liu,Xinrui Yang,Shiguang Sun,Long Qian,Lipeng Wan,Xingyu Chen,Xuguang Lan", "tags": "NIPS 2024,Poster", "abstract": "Recent progress in generative models has stimulated significant innovations in many fields, such as image generation and chatbots. Despite their success, these models often produce sketchy and misleading solutions for complex multi-agent decision-making problems because they lack the trial-and-error experience and reasoning that humans have. To address this limitation, we explore a paradigm that integrates a language-guided simulator into the multi-agent reinforcement learning pipeline to enhance the generated answer. The simulator is a world model that separately learns dynamics and reward, where the dynamics model comprises an image tokenizer as well as a causal transformer to generate interaction transitions autoregressively, and the reward model is a bidirectional transformer learned by maximizing the likelihood of trajectories in the expert demonstrations under language guidance. Given an image of the current state and the task description, we use the world model to train the joint policy and produce the image sequence as the answer by running the converged policy on the dynamics model. The empirical results demonstrate that this framework can improve the answers for multi-agent decision-making problems by showing superior performance on the training and unseen tasks of the StarCraft Multi-Agent Challenge benchmark. In particular, it can generate consistent interaction sequences and explainable reward functions at interaction states, opening the path for training generative models of the future.", "pdf": "https://openreview.net/pdf/f6adc813ef450f3b17fdc3c4422ae637c6df737f.pdf"} {"title": "Quantifying the Gain in Weak-to-Strong Generalization", "url": "https://openreview.net/forum?id=MyVyH5Jo1l", "detail_url": "https://openreview.net/forum?id=MyVyH5Jo1l", "authors": "Moses Charikar,Chirag Pabbaraju,Kirankumar Shiragur", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in large language models have shown capabilities that are extraordinary and near-superhuman. These models operate with such complexity that reliably evaluating and aligning them proves challenging for humans. This leads to the natural question: can guidance from weak models (like humans) adequately direct the capabilities of strong models? In a recent and somewhat surprising work, Burns et al.
(2023) empirically demonstrated that when strong models (like GPT-4) are finetuned using labels generated by weak supervisors (like GPT-2), the strong models outperform their weaker counterparts---a phenomenon they term *weak-to-strong generalization*.\n\nIn this work, we present a theoretical framework for understanding weak-to-strong generalization. Specifically, we show that the improvement in performance achieved by strong models over their weaker counterparts is quantified by the *misfit error* incurred by the strong model on labels generated by the weaker model. Our theory reveals several curious algorithmic insights. For instance, we can predict the amount by which the strong model will improve over the weak model, and also choose among different weak models to train the strong model, based on its misfit error. We validate our theoretical findings through various empirical assessments.", "pdf": "https://openreview.net/pdf/36318fbff6dff7b5a38aa7f1c82e90a342ffee0a.pdf"} {"title": "Improved Guarantees for Fully Dynamic $k$-Center Clustering with Outliers in General Metric Spaces", "url": "https://openreview.net/forum?id=OtYCp1yfbX", "detail_url": "https://openreview.net/forum?id=OtYCp1yfbX", "authors": "Leyla Biabani,Annika Hennes,Denise La Gordt Dillie,Morteza Monemizadeh,Melanie Schmidt", "tags": "NIPS 2024,Poster", "abstract": "The metric $k$-center clustering problem with $z$ outliers, also known as $(k,z)$-center clustering, \ninvolves clustering a given point set $P$ in a metric space $(M,d)$ using at most $k$ balls, \nminimizing the maximum ball radius while excluding up to $z$ points from the clustering. \nThis problem holds fundamental significance in various domains such as machine learning, \ndata mining, and database systems.\n\nThis paper addresses the fully dynamic version of the problem, where the point set undergoes continuous updates (insertions and deletions) over time. The objective is to maintain an approximate $(k,z)$-center clustering with efficient update times. \nWe propose a novel fully dynamic algorithm that maintains a $(4+\\epsilon)$-approximate \nsolution to the $(k,z)$-center clustering problem that covers \nall but at most $(1+\\epsilon)z$ points at any time in the sequence with probability $1-k/e^{\\Omega(\\log k)}$. \nThe algorithm achieves an expected amortized update time of $\\mathcal{O}(\\epsilon^{-2} k^6\\log(k) \\log(\\Delta))$, and is applicable to general metric spaces. \nOur dynamic algorithm presents a significant improvement over the recent dynamic $(14+\\epsilon)$-approximation algorithm by Chan, Lattanzi, Sozio, and Wang for this problem.", "pdf": "https://openreview.net/pdf/9382ddb52721ac5517229bf90c586a2ce4237117.pdf"} {"title": "Efficiency of the First-Price Auction in the Autobidding World", "url": "https://openreview.net/forum?id=4DHoSjET4R", "detail_url": "https://openreview.net/forum?id=4DHoSjET4R", "authors": "Yuan Deng,Jieming Mao,Vahab Mirrokni,Hanrui Zhang,Song Zuo", "tags": "NIPS 2024,Poster", "abstract": "We study the price of anarchy of first-price auctions in the autobidding world, where bidders can be either utility maximizers (i.e., traditional bidders) or value maximizers (i.e., autobidders). We show that with autobidders only, the price of anarchy of first-price auctions is $1/2$, and with both kinds of bidders, the price of anarchy degrades to about $0.457$ (the precise number is given by an optimization). 
These results complement the recent result by [Jin and Lu, 2022] showing that the price of anarchy of first-price auctions with traditional bidders is $1 - 1/e^2$. We further investigate a setting where the seller can utilize machine-learned advice to improve the efficiency of the auctions. There, we show that as the accuracy of the advice increases, the price of anarchy improves smoothly from about $0.457$ to $1$.", "pdf": "https://openreview.net/pdf/8b3128fe974bb6bb0a282719afb8291c2df9dd43.pdf"} {"title": "Tighter Convergence Bounds for Shuffled SGD via Primal-Dual Perspective", "url": "https://openreview.net/forum?id=qcPlGtzwW9", "detail_url": "https://openreview.net/forum?id=qcPlGtzwW9", "authors": "Xufeng Cai,Cheuk Yin Lin,Jelena Diakonikolas", "tags": "NIPS 2024,Poster", "abstract": "Stochastic gradient descent (SGD) is perhaps the most prevalent optimization method in modern machine learning. Contrary to the empirical practice of sampling from the datasets \\emph{without replacement} and with (possible) reshuffling at each epoch, the theoretical counterpart of SGD usually relies on the assumption of \\emph{sampling with replacement}. It is only very recently that SGD using sampling without replacement -- shuffled SGD -- has been analyzed with matching upper and lower bounds. However, we observe that those bounds are too pessimistic to explain the often superior empirical performance of data permutations (sampling without replacement) over vanilla counterparts (sampling with replacement) on machine learning problems. Via a fine-grained analysis through the lens of primal-dual cyclic coordinate methods and the introduction of novel smoothness parameters, we present several results for shuffled SGD on smooth and non-smooth convex losses, where our novel analysis framework provides tighter convergence bounds over all popular shuffling schemes (IG, SO, and RR). Notably, our new bounds predict faster convergence than existing bounds in the literature -- by up to a factor of $O(\\sqrt{n})$, mirroring benefits from tighter convergence bounds using component smoothness parameters in randomized coordinate methods. Lastly, we numerically demonstrate on common machine learning datasets that our bounds are indeed much tighter, thus offering a bridge between theory and practice.", "pdf": "https://openreview.net/pdf/01bfa90837d76a8a6b8a6d5ec563940e2c01557b.pdf"} {"title": "Non-asymptotic Analysis of Biased Adaptive Stochastic Approximation", "url": "https://openreview.net/forum?id=TzxSrNJE0T", "detail_url": "https://openreview.net/forum?id=TzxSrNJE0T", "authors": "Sobihan Surendran,Adeline Fermanian,Antoine Godichon-Baggioni,Sylvain Le Corff", "tags": "NIPS 2024,Poster", "abstract": "Stochastic Gradient Descent (SGD) with adaptive steps is widely used to train deep neural networks and generative models. Most theoretical results assume that it is possible to obtain unbiased gradient estimators, which is not the case in several recent deep learning and reinforcement learning applications that use Monte Carlo methods.\nThis paper provides a comprehensive non-asymptotic analysis of SGD with biased gradients and adaptive steps for non-convex smooth functions. Our study incorporates time-dependent bias and emphasizes the importance of controlling the bias of the gradient estimator.
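The three shuffling schemes analysed in the shuffled-SGD abstract above (IG, SO, RR) differ only in when the permutation is drawn; a compact sketch makes the distinction explicit (`grad_i(i, x)` is an assumed oracle for the gradient of the i-th component loss):

```python
import numpy as np

def shuffled_sgd(grad_i, x0, n, epochs, lr, scheme="RR", rng=None):
    """Shuffled SGD over n component losses: IG uses a fixed order, SO draws
    one random permutation and reuses it every epoch, RR reshuffles at every
    epoch; each epoch visits every component exactly once."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    perm = np.arange(n) if scheme == "IG" else rng.permutation(n)
    for _ in range(epochs):
        if scheme == "RR":
            perm = rng.permutation(n)      # random reshuffling each epoch
        for i in perm:
            x -= lr * grad_i(i, x)         # sampling without replacement
        # IG and SO keep the same order across epochs
    return x
```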
\nIn particular, we establish that Adagrad, RMSProp, and AMSGrad, an exponential moving average variant of Adam, with biased gradients, converge to critical points for smooth non-convex functions at a rate similar to existing results in the literature for the unbiased case. Finally, we provide experimental results using Variational Autoencoders (VAEs) and applications to several learning frameworks that illustrate our convergence results and show how the effect of bias can be reduced by appropriate hyperparameter tuning.", "pdf": "https://openreview.net/pdf/939a6ac9098894abfa237e7036f3aea28b927916.pdf"} {"title": "IRCAN: Mitigating Knowledge Conflicts in LLM Generation via Identifying and Reweighting Context-Aware Neurons", "url": "https://openreview.net/forum?id=ZfXRAqbBKX", "detail_url": "https://openreview.net/forum?id=ZfXRAqbBKX", "authors": "Dan Shi,Renren Jin,Tianhao Shen,Weilong Dong,Xinwei Wu,Deyi Xiong", "tags": "NIPS 2024,Poster", "abstract": "It is widely acknowledged that large language models (LLMs) encode a vast reservoir of knowledge after being trained on mass data. Recent studies disclose knowledge conflicts in LLM generation, wherein outdated or incorrect parametric knowledge (i.e., encoded knowledge) contradicts new knowledge provided in the context. To mitigate such knowledge conflicts, we propose a novel framework, IRCAN (Identifying and Reweighting Context-Aware Neurons) to capitalize on neurons that are crucial in processing contextual cues. Specifically, IRCAN first identifies neurons that significantly contribute to context processing, utilizing a context-aware attribution score derived from integrated gradients. Subsequently, the identified context-aware neurons are strengthened via reweighting. In doing so, we steer LLMs to generate context-sensitive outputs with respect to the new knowledge provided in the context. Extensive experiments conducted across a variety of models and tasks demonstrate that IRCAN not only achieves remarkable improvements in handling knowledge conflicts but also offers a scalable, plug-and-play solution that can be integrated seamlessly with existing models. Our codes are released at https://github.com/danshi777/IRCAN.", "pdf": "https://openreview.net/pdf/3eb846e698db8a1a2083d1a0c1f4fd5ba040207a.pdf"} {"title": "Automated Multi-Task Learning for Joint Disease Prediction on Electronic Health Records", "url": "https://openreview.net/forum?id=lbSI1j8m6p", "detail_url": "https://openreview.net/forum?id=lbSI1j8m6p", "authors": "Suhan Cui,Prasenjit Mitra", "tags": "NIPS 2024,Poster", "abstract": "In the realm of big data and digital healthcare, Electronic Health Records (EHR) have become a rich source of information with the potential to improve patient care and medical research. In recent years, machine learning models have proliferated for analyzing EHR data to predict patients' future health conditions. Among them, some studies advocate for multi-task learning (MTL) to jointly predict multiple target diseases for improving the prediction performance over single task learning. Nevertheless, current MTL frameworks for EHR data have significant limitations due to their heavy reliance on human experts to identify task groups for joint training and design model architectures. To reduce human intervention and improve the framework design, we propose an automated approach named AutoDP, which can search for the optimal configuration of task grouping and architectures simultaneously.
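For reference, the AMSGrad update analysed above, Adam with a non-decreasing second-moment estimate, takes only a few lines; the gradient oracle below may be biased (e.g., a Monte Carlo estimator), which is the regime the paper covers. A minimal sketch:

```python
import numpy as np

def amsgrad(grad_oracle, x0, T, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """AMSGrad: like Adam, but divides by the running *maximum* of the
    second-moment estimate, making effective step sizes non-increasing."""
    x = x0.copy()
    m = np.zeros_like(x); v = np.zeros_like(x); vhat = np.zeros_like(x)
    for _ in range(T):
        g = grad_oracle(x)                 # possibly biased estimate
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        vhat = np.maximum(vhat, v)         # the AMSGrad correction
        x -= lr * m / (np.sqrt(vhat) + eps)
    return x
```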
To tackle the vast joint search space encompassing task combinations and architectures, we employ surrogate model-based optimization, enabling us to efficiently discover the optimal solution. Experimental results on real-world EHR data demonstrate the efficacy of the proposed AutoDP framework. It achieves significant performance improvements over both hand-crafted and automated state-of-the-art methods, while maintaining a feasible search cost.", "pdf": "https://openreview.net/pdf/3e53a3f862bb0e5f4bc8cba56ba94762f362d6c2.pdf"} {"title": "The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning", "url": "https://openreview.net/forum?id=3dn1hINA6o", "detail_url": "https://openreview.net/forum?id=3dn1hINA6o", "authors": "Anya Sims,Cong Lu,Jakob Nicolaus Foerster,Yee Whye Teh", "tags": "NIPS 2024,Poster", "abstract": "Offline reinforcement learning (RL) aims to train agents from pre-collected datasets. However, this comes with the added challenge of estimating the value of behaviors not covered in the dataset. Model-based methods offer a potential solution by training an approximate dynamics model, which then allows collection of additional synthetic data via rollouts in this model. The prevailing theory treats this approach as online RL in an approximate dynamics model, and any remaining performance gap is therefore understood as being due to dynamics model errors. In this paper, we analyze this assumption and investigate how popular algorithms perform as the learned dynamics model is improved. In contrast to both intuition and theory, if the learned dynamics model is replaced by the true error-free dynamics, existing model-based methods completely fail. This reveals a key oversight: The theoretical foundations assume sampling of full horizon rollouts in the learned dynamics model; however, in practice, the number of model-rollout steps is aggressively reduced to prevent accumulating errors. We show that this truncation of rollouts results in a set of edge-of-reach states at which we are effectively \"bootstrapping from the void.\" This triggers pathological value overestimation and complete performance collapse. We term this the edge-of-reach problem. Based on this new insight, we fill important gaps in existing theory, and reveal how prior model-based methods are primarily addressing the edge-of-reach problem, rather than model-inaccuracy as claimed. Finally, we propose Reach-Aware Value Learning (RAVL), a simple and robust method that directly addresses the edge-of-reach problem and hence - unlike existing methods - does not fail as the dynamics model is improved. Since world models will inevitably improve, we believe this is a key step towards future-proofing offline RL.", "pdf": "https://openreview.net/pdf/6797eaef1557c755c3627309219c7f61c718c588.pdf"} {"title": "Elo Uncovered: Robustness and Best Practices in Language Model Evaluation", "url": "https://openreview.net/forum?id=Pc9LLjTL5f", "detail_url": "https://openreview.net/forum?id=Pc9LLjTL5f", "authors": "Meriem Boubdir,Edward Kim,Beyza Ermis,Sara Hooker,Marzieh Fadaee", "tags": "NIPS 2024,Poster", "abstract": "In Natural Language Processing (NLP), the Elo rating system, originally designed for ranking players in dynamic games such as chess, is increasingly being used to evaluate Large Language Models (LLMs) through \"A vs B\" paired comparisons.\nHowever, while popular, the system's suitability for assessing entities with constant skill levels, such as LLMs, remains relatively unexplored.
\nWe study two fundamental axioms that evaluation methods should adhere to: reliability and transitivity. \nWe conduct an extensive evaluation of Elo behavior across simulated and real-world scenarios, demonstrating that individual Elo computations can exhibit significant volatility.\nWe show that neither axiom is always satisfied, raising questions about the reliability of current comparative evaluations of LLMs.\nIf the current use of Elo scores is intended to substitute for the costly head-to-head comparison of LLMs, it is crucial to ensure the ranking is as robust as possible.\nGuided by the axioms, our findings offer concrete guidelines for enhancing the reliability of LLM evaluation methods, suggesting a need for reassessment of existing comparative approaches.", "pdf": "https://openreview.net/pdf/d1d200c5bd2e905e67daa04f04a785880470d728.pdf"} {"title": "Great Minds Think Alike: The Universal Convergence Trend of Input Salience", "url": "https://openreview.net/forum?id=7PORYhql4V", "detail_url": "https://openreview.net/forum?id=7PORYhql4V", "authors": "Yipei Wang,Jeffrey Mark Siskind,Xiaoqian Wang", "tags": "NIPS 2024,Poster", "abstract": "Uncertainty is introduced in optimized DNNs through stochastic algorithms, forming specific distributions. Training models can be seen as random sampling from this distribution of optimized models. In this work, we study the distribution of optimized DNNs as a family of functions by leveraging a pointwise approach. We focus on the input saliency maps, as the input gradient field is decisive to the models' mathematical essence. Our investigation of saliency maps reveals a counter-intuitive trend: two stochastically optimized models tend to resemble each other more as either of their capacities increases. Therefore, we hypothesize several properties of these distributions, suggesting that (1) Within the same model architecture (e.g., CNNs, ResNets), different family variants (e.g., varying capacities) tend to align in terms of their population mean directions of the input salience. And (2) the distributions of optimized models follow a convergence trend to their shared population mean as the capacity increases. Furthermore, we also propose semi-parametric distributions based on the Saw distribution to model the convergence trend, satisfying all the counter-intuitive observations. Our experiments shed light on the significant implications of our hypotheses in various application domains, including black-box attacks, deep ensembles, etc. These findings not only enhance our understanding of DNN behaviors but also offer valuable insights for their practical application in diverse areas of deep learning.", "pdf": "https://openreview.net/pdf/be4e0dc8ec089ff7420badf102ed082f4369f79d.pdf"} {"title": "Guided Trajectory Generation with Diffusion Models for Offline Model-based Optimization", "url": "https://openreview.net/forum?id=ioKQzb8SMr", "detail_url": "https://openreview.net/forum?id=ioKQzb8SMr", "authors": "Taeyoung Yun,Sujin Yun,Jaewoo Lee,Jinkyoo Park", "tags": "NIPS 2024,Poster", "abstract": "Optimizing complex and high-dimensional black-box functions is ubiquitous in science and engineering fields. Unfortunately, the online evaluation of these functions is restricted due to time and safety constraints in most cases. In offline model-based optimization (MBO), we aim to find a design that maximizes the target function using only a pre-existing offline dataset.
While prior methods consider forward or inverse approaches to address the problem, these approaches are limited by conservatism and the difficulty of learning highly multi-modal mappings. Recently, there has been an emerging paradigm of learning to improve solutions with synthetic trajectories constructed from the offline dataset. In this paper, we introduce a novel conditional generative modeling approach to produce trajectories toward high-scoring regions. First, we construct synthetic trajectories toward high-scoring regions using the dataset while injecting locality bias for consistent improvement directions. Then, we train a conditional diffusion model to generate trajectories conditioned on their scores. Lastly, we sample multiple trajectories from the trained model with guidance to explore high-scoring regions beyond the dataset and select high-fidelity designs among generated trajectories with the proxy function. Extensive experimental results demonstrate that our method outperforms competitive baselines on Design-Bench and its practical variants. The code is publicly available at \url{https://github.com/dbsxodud-11/GTG}.", "pdf": "https://openreview.net/pdf/1faf7eb1f45eb114876535eca5dd482cf97b705b.pdf"} {"title": "Separation and Bias of Deep Equilibrium Models on Expressivity and Learning Dynamics", "url": "https://openreview.net/forum?id=UJ9k3j93MD", "detail_url": "https://openreview.net/forum?id=UJ9k3j93MD", "authors": "Zhoutong Wu,Yimu Zhang,Cong Fang,Zhouchen Lin", "tags": "NIPS 2024,Poster", "abstract": "The deep equilibrium model (DEQ) generalizes the conventional feedforward neural network by fixing the same weights for each layer block and extending the number of layers to infinity. This novel model directly finds the fixed points of such a forward process as features for prediction. Despite empirical evidence showcasing its efficacy \ncompared to feedforward neural networks, a theoretical understanding of its separation and bias is still limited. In this paper, we take a step\nby proposing some separations and studying the bias of DEQ in its expressive power and learning dynamics. The results include: (1) A general separation is proposed, showing the existence of a width-$m$ DEQ that fully connected neural networks (FNNs) with depth $O(m^{\alpha})$ for $\alpha \in (0,1)$ cannot\napproximate unless their width is sub-exponential in $m$; (2) DEQ with polynomially bounded size and magnitude can efficiently approximate certain steep functions (which have very large derivatives) in $L^{\infty}$ norm, whereas FNNs with bounded depth and exponentially bounded width cannot unless their weight magnitudes are exponentially large; (3) The implicit regularization caused by gradient flow from a diagonal linear DEQ is characterized, with specific examples showing the benefits brought by such regularization.
\nOverall, a high-level conjecture from our analysis and empirical validations is that DEQ has potential advantages in learning certain high-frequency components.", "pdf": "https://openreview.net/pdf/caf9cf87b2cc14579a8495c4559f0d48dd1cc92a.pdf"} {"title": "Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training", "url": "https://openreview.net/forum?id=QUYLbzwtTV", "detail_url": "https://openreview.net/forum?id=QUYLbzwtTV", "authors": "Anchit Jain,Rozhin Nobahari,Aristide Baratin,Stefano Sarao Mannelli", "tags": "NIPS 2024,Poster", "abstract": "Machine learning systems often acquire biases by leveraging undesired features in the data, impacting accuracy variably across different sub-populations of the data. However, our current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup that models different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setup, which we prove to be exact in high dimension.\nNotably, our analysis identifies different properties of the sub-populations that drive bias at different timescales and hence shows a shifting preference of our classifier during training. By applying our general solution to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias. We empirically validate our results in more complex scenarios by training deeper networks on synthetic and real data, i.e. using CIFAR10, MNIST, and CelebA datasets.", "pdf": "https://openreview.net/pdf/2ed87c31602334ea03f38d06d97018af95930b88.pdf"} {"title": "GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning", "url": "https://openreview.net/forum?id=kZpNDbZrzy", "detail_url": "https://openreview.net/forum?id=kZpNDbZrzy", "authors": "Jaewoo Lee,Sujin Yun,Taeyoung Yun,Jinkyoo Park", "tags": "NIPS 2024,Poster", "abstract": "Offline Reinforcement Learning (Offline RL) presents challenges of learning effective decision-making policies from static datasets without any online interactions. Data augmentation techniques, such as noise injection and data synthesis, aim to improve Q-function approximation by smoothing the learned state-action region. However, these methods often fall short of directly improving the quality of offline datasets, leading to suboptimal results. In response, we introduce GTA, Generative Trajectory Augmentation, a novel generative data augmentation approach designed to enrich offline data by augmenting trajectories to be both high-rewarding and dynamically plausible. GTA applies a diffusion model within the data augmentation framework. GTA partially noises original trajectories and then denoises them with classifier-free guidance via conditioning on an amplified return value. Our results show that GTA, as a general data augmentation strategy, enhances the performance of widely used offline RL algorithms across various tasks with unique challenges. Furthermore, we conduct a quality analysis of data augmented by GTA and demonstrate that GTA improves the quality of the data.
Our code is available at https://github.com/Jaewoopudding/GTA", "pdf": "https://openreview.net/pdf/0654ab64b73938184f454a5419d4545766a9f5c3.pdf"} {"title": "UV-free Texture Generation with Denoising and Geodesic Heat Diffusion", "url": "https://openreview.net/forum?id=Cb1Md0RvqF", "detail_url": "https://openreview.net/forum?id=Cb1Md0RvqF", "authors": "Simone Foti,Stefanos Zafeiriou,Tolga Birdal", "tags": "NIPS 2024,Poster", "abstract": "Seams, distortions, wasted UV space, vertex-duplication, and varying resolution over the surface are the most prominent issues of the standard UV-based texturing of meshes. These issues are particularly acute when automatic UV-unwrapping techniques are used. For this reason, instead of generating textures in automatically generated UV-planes like most state-of-the-art methods, we propose to represent textures as coloured point-clouds whose colours are generated by a denoising diffusion probabilistic model constrained to operate on the surface of 3D objects. Our sampling- and resolution-agnostic generative model heavily relies on heat diffusion over the surface of the meshes for spatial communication between points. To enable processing of arbitrarily sampled point-cloud textures and ensure long-distance texture consistency, we introduce a fast re-sampling of the mesh spectral properties used during the heat diffusion and introduce a novel heat-diffusion-based self-attention mechanism. Our code and pre-trained models are available at github.com/simofoti/UV3-TeD.", "pdf": "https://openreview.net/pdf/bb7637750872fe9b0fa78407b7a2ce1722183dd5.pdf"} {"title": "Embedding Dimension of Contrastive Learning and $k$-Nearest Neighbors", "url": "https://openreview.net/forum?id=H0qu4moFly", "detail_url": "https://openreview.net/forum?id=H0qu4moFly", "authors": "Dmitrii Avdiukhin,Vaggos Chatziafratis,Orr Fischer,Grigory Yaroslavtsev", "tags": "NIPS 2024,Poster", "abstract": "We study the embedding dimension of distance comparison data in two settings: contrastive learning and $k$-nearest neighbors ($k$-NN). In both cases, the goal is to find the smallest dimension $d$ of an $\ell_p$-space in which a given dataset can be represented. We show that the arboricity of the associated graphs plays a key role in designing embeddings. Using this approach, for the most frequently used $\ell_2$-distance, we get matching upper and lower bounds in both settings.\n \nIn contrastive learning, we are given $m$ labeled samples of the form $(x_i, y_i^+, z_i^-)$ representing the fact that the positive example $y_i$ is closer to the anchor $x_i$ than the negative example $z_i$. We show that for representing such a dataset in:\n\n- $\ell_2$: $d = \Theta(\sqrt{m})$ is necessary and sufficient.\n- $\ell_p$ for $p \ge 1$: $d = O(m)$ is sufficient and $d = \tilde \Omega(\sqrt{m})$ is necessary.\n- $\ell_\infty$: $d = O(m^{2/3})$ is sufficient and $d = \tilde \Omega(\sqrt{m})$ is necessary.\n\nWe also give results for the more general scenario when $t$ negatives are allowed.\n\nIn $k$-NN, for each of the $n$ data points we are given an ordered set of the closest $k$ points.
We show that for preserving the ordering of the $k$-NN for every point in:\n- $\ell_2$: $d = \Theta(k)$ is necessary and sufficient.\n- $\ell_p$ for $p \ge 1$: $d = \tilde O(k^2)$ is sufficient and $d=\tilde \Omega(k)$ is necessary.\n- $\ell_\infty$ : $d = \tilde \Omega(k)$ is necessary.\n\nFurthermore, if the goal is to not just preserve the ordering of the $k$-NN but also keep them as the nearest neighbors then $d = \tilde O (\mathrm{poly}(k))$ suffices in $\ell_p$ for $p \ge 1$.", "pdf": "https://openreview.net/pdf/93374ea603599bcc8c1d0352a9e742a2870e07fe.pdf"} {"title": "Large Language Model Unlearning", "url": "https://openreview.net/forum?id=8Dy42ThoNe", "detail_url": "https://openreview.net/forum?id=8Dy42ThoNe", "authors": "Yuanshun Yao,Xiaojun Xu,Yang Liu", "tags": "NIPS 2024,Poster", "abstract": "We study how to perform unlearning, i.e. forgetting undesirable (mis)behaviors, on large language models (LLMs). We show that at least three scenarios of aligning LLMs with human preferences can benefit from unlearning: (1) removing harmful responses, (2) erasing copyright-protected content as requested, and (3) reducing hallucinations. Unlearning, as an alignment technique, has three advantages. (1) It only requires negative (e.g. harmful) examples, which are much easier and cheaper to collect (e.g. via red teaming or user reporting) than positive (e.g. helpful and often human-written) examples required in the standard alignment process. (2) It is computationally efficient. (3) It is especially effective when we know which training samples cause the misbehavior. To the best of our knowledge, our work is among the first to explore LLM unlearning. We are also among the first to formulate the settings, goals, and evaluations in LLM unlearning. Despite only having negative samples, our ablation study shows that unlearning can still achieve better alignment performance than RLHF with just 2% of its computational time.", "pdf": "https://openreview.net/pdf/6c795998d557fd1b99f98cf8af18dbc8d9cdb772.pdf"} {"title": "Neural Model Checking", "url": "https://openreview.net/forum?id=dJ9KzkQ0oH", "detail_url": "https://openreview.net/forum?id=dJ9KzkQ0oH", "authors": "Mirco Giacobbe,Daniel Kroening,Abhinandan Pal,Michael Tautschnig", "tags": "NIPS 2024,Poster", "abstract": "We introduce a machine learning approach to model checking temporal logic, with application to formal hardware verification. Model checking answers the question of whether every execution of a given system satisfies a desired temporal logic specification. Unlike testing, model checking provides formal guarantees. Its application is an expected standard in silicon design, and the EDA industry has invested decades into the development of performant symbolic model checking algorithms. Our new approach combines machine learning and symbolic reasoning by using neural networks as formal proof certificates for linear temporal logic. We train our neural certificates from randomly generated executions of the system and we then symbolically check their validity using satisfiability solving which, upon the affirmative answer, establishes that the system provably satisfies the specification. We leverage the expressive power of neural networks to represent proof certificates as well as the fact that checking a certificate is much simpler than finding one. As a result, our machine learning procedure for model checking is entirely unsupervised, formally sound, and practically effective.
We experimentally demonstrate that our method outperforms the state-of-the-art academic and commercial model checkers on a set of standard hardware designs written in SystemVerilog.", "pdf": "https://openreview.net/pdf/3e8a37fa3da5ebe097ec554ae5606f00e946dcda.pdf"} {"title": "Super Consistency of Neural Network Landscapes and Learning Rate Transfer", "url": "https://openreview.net/forum?id=rgwhJ7INtZ", "detail_url": "https://openreview.net/forum?id=rgwhJ7INtZ", "authors": "Lorenzo Noci,Alexandru Meterez,Thomas Hofmann,Antonio Orvieto", "tags": "NIPS 2024,Poster", "abstract": "Recently, there has been growing evidence that if the width and depth of a neural network are scaled toward the so-called rich feature learning limit ($\mu$P and its depth extension), then some hyperparameters --- such as the learning rate --- exhibit transfer from small to very large models. From an optimization perspective, this phenomenon is puzzling, as it implies that the loss landscape is consistently similar across very different model sizes. In this work, we study the landscape through the lens of the Hessian, with a focus on its largest eigenvalue (i.e. the sharpness), and find that certain spectral properties under $\mu$P are largely independent of the width and depth of the network along the training trajectory. We name this property *super consistency* of the landscape. On the other hand, we show that in the Neural Tangent Kernel (NTK) and other scaling regimes, the sharpness exhibits very different dynamics at different scales. But what causes these differences in the sharpness dynamics? Through a connection between the Hessian's and the NTK's spectrum, we argue that the cause lies in the presence (for $\mu$P) or progressive absence (for the NTK scaling) of feature learning.\nWe corroborate our claims with a substantial suite of experiments, covering a wide range of datasets and architectures: from ResNets and Vision Transformers trained on benchmark vision datasets to Transformer-based language models trained on WikiText.", "pdf": "https://openreview.net/pdf/e9bcf33fbdd4a0b1303d654db1590e0078fe9d0d.pdf"} {"title": "Efficient Centroid-Linkage Clustering", "url": "https://openreview.net/forum?id=5VE1iLeYOz", "detail_url": "https://openreview.net/forum?id=5VE1iLeYOz", "authors": "Mohammadhossein Bateni,Laxman Dhulipala,Willem Fletcher,Kishen N Gowda,D Ellis Hershkowitz,Rajesh Jayaram,Jakub Lacki", "tags": "NIPS 2024,Poster", "abstract": "We give an algorithm for Centroid-Linkage Hierarchical Agglomerative Clustering (HAC), which computes a $c$-approximate clustering in roughly $n^{1+O(1/c^2)}$ time. We obtain our result by combining a new centroid-linkage HAC algorithm with a novel fully dynamic data structure for nearest neighbor search which works under adaptive updates.\n\nWe also evaluate our algorithm empirically. By leveraging a state-of-the-art nearest-neighbor search library, we obtain a fast and accurate centroid-linkage HAC algorithm.
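To ground the centroid-linkage result above, the sketch below is a naive exact baseline that recomputes the closest pair of centroids at every merge; the approximate, nearest-neighbor-based algorithm described above is designed to avoid precisely this brute-force search. All names are illustrative, not from the paper.

```python
# A naive exact centroid-linkage HAC baseline: repeatedly merge the two
# clusters whose centroids are closest. Illustrative only; the paper's
# contribution is replacing this brute-force closest-pair search with
# dynamic approximate nearest-neighbor structures.
import numpy as np

def centroid_linkage_hac(points, num_clusters):
    """points: (n, d) array; returns a list of clusters (index lists)."""
    clusters = [[i] for i in range(len(points))]
    centroids = [points[i].astype(float) for i in range(len(points))]
    while len(clusters) > num_clusters:
        best, best_d = None, np.inf
        for i in range(len(clusters)):          # brute-force closest pair
            for j in range(i + 1, len(clusters)):
                d = np.linalg.norm(centroids[i] - centroids[j])
                if d < best_d:
                    best, best_d = (i, j), d
        i, j = best
        merged = clusters[i] + clusters[j]
        for k in sorted((i, j), reverse=True):  # drop the two old clusters
            del clusters[k]; del centroids[k]
        clusters.append(merged)
        centroids.append(points[merged].mean(axis=0))
    return clusters
```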
Compared to an existing state-of-the-art exact baseline, our implementation maintains the clustering quality while delivering up to a $36\\times$ speedup due to performing fewer distance comparisons.", "pdf": "https://openreview.net/pdf/ad66cda580bf94cf725dd031cb3bf70447a487f2.pdf"} {"title": "Improving Subgroup Robustness via Data Selection", "url": "https://openreview.net/forum?id=vJLTcCBZVT", "detail_url": "https://openreview.net/forum?id=vJLTcCBZVT", "authors": "Saachi Jain,Kimia Hamidieh,Kristian Georgiev,Andrew Ilyas,Marzyeh Ghassemi,Aleksander Madry", "tags": "NIPS 2024,Poster", "abstract": "Machine learning models can often fail on subgroups that are underrepresented\nduring training. While dataset balancing can improve performance on\nunderperforming groups, it requires access to training group annotations and can\nend up removing large portions of the dataset. In this paper, we introduce\nData Debiasing with Datamodels (D3M), a debiasing approach\nwhich isolates and removes specific training examples that drive the model's\nfailures on minority groups. Our approach enables us to efficiently train\ndebiased classifiers while removing only a small number of examples, and does\nnot require training group annotations or additional hyperparameter tuning.", "pdf": "https://openreview.net/pdf/a3f46e22e6e41370e2c814be79b1e92e6e971d7c.pdf"} {"title": "Neuronal Competition Groups with Supervised STDP for Spike-Based Classification", "url": "https://openreview.net/forum?id=GeE5qF6ICg", "detail_url": "https://openreview.net/forum?id=GeE5qF6ICg", "authors": "Gaspard Goupy,Pierre Tirilly,Ioan Marius Bilasco", "tags": "NIPS 2024,Poster", "abstract": "Spike Timing-Dependent Plasticity (STDP) is a promising substitute to backpropagation for local training of Spiking Neural Networks (SNNs) on neuromorphic hardware. STDP allows SNNs to address classification tasks by combining unsupervised STDP for feature extraction and supervised STDP for classification. Unsupervised STDP is usually employed with Winner-Takes-All (WTA) competition to learn distinct patterns. However, WTA for supervised STDP classification faces unbalanced competition challenges. In this paper, we propose a method to effectively implement WTA competition in a spiking classification layer employing first-spike coding and supervised STDP training. We introduce the Neuronal Competition Group (NCG), an architecture that improves classification capabilities by promoting the learning of various patterns per class. An NCG is a group of neurons mapped to a specific class, implementing intra-class WTA and a novel competition regulation mechanism based on two-compartment thresholds. We incorporate our proposed architecture into spiking classification layers trained with state-of-the-art supervised STDP rules. On top of two different unsupervised feature extractors, we obtain significant accuracy improvements on image recognition datasets such as CIFAR-10 and CIFAR-100. 
We show that our competition regulation mechanism is crucial for ensuring balanced competition and improved class separation.", "pdf": "https://openreview.net/pdf/c1100f357e431fd93a9d66908e0f9f752494371a.pdf"} {"title": "Learning Structured Representations with Hyperbolic Embeddings", "url": "https://openreview.net/forum?id=wBtmN8SZ2B", "detail_url": "https://openreview.net/forum?id=wBtmN8SZ2B", "authors": "Aditya Sinha,Siqi Zeng,Makoto Yamada,Han Zhao", "tags": "NIPS 2024,Poster", "abstract": "Most real-world datasets consist of a natural hierarchy between classes or an inherent label structure that is either already available or can be constructed cheaply. However, most existing representation learning methods ignore this hierarchy, treating labels as permutation invariant. Recent work [Zeng et al., 2022] proposes using this structured information explicitly, but the use of Euclidean distance may distort the underlying semantic context [Chen et al., 2013]. In this work, motivated by the advantage of hyperbolic spaces in modeling hierarchical relationships, we propose a novel approach HypStructure: a Hyperbolic Structured regularization approach to accurately embed the label hierarchy into the learned representations. HypStructure is a simple-yet-effective regularizer that consists of a hyperbolic tree-based representation loss along with a centering loss, and can be combined with any standard task loss to learn hierarchy-informed features. Extensive experiments on several large-scale vision benchmarks demonstrate the efficacy of HypStructure in reducing distortion and boosting generalization performance especially under low dimensional scenarios. For a better understanding of structured representation, we perform eigenvalue analysis that links the representation geometry to improved Out-of-Distribution (OOD) detection performance seen empirically.", "pdf": "https://openreview.net/pdf/f77a884e3428476dc16b83edd687de38b1e7a596.pdf"} {"title": "Exact Gradients for Stochastic Spiking Neural Networks Driven by Rough Signals", "url": "https://openreview.net/forum?id=mCWZj7pa0M", "detail_url": "https://openreview.net/forum?id=mCWZj7pa0M", "authors": "Christian Holberg,Cristopher Salvi", "tags": "NIPS 2024,Poster", "abstract": "We introduce a mathematically rigorous framework based on rough path theory to model stochastic spiking neural networks (SSNNs) as stochastic differential equations with event discontinuities (Event SDEs) and driven by c\u00e0dl\u00e0g rough paths. Our formalism is general enough to allow for potential jumps to be present both in the solution trajectories as well as in the driving noise. We then identify a set of sufficient conditions ensuring the existence of pathwise gradients of solution trajectories and event times with respect to the network's parameters and show how these gradients satisfy a recursive relation. Furthermore, we introduce a general-purpose loss function defined by means of a new class of signature kernels indexed on c\u00e0dl\u00e0g rough paths and use it to train SSNNs as generative models. We provide an end-to-end autodifferentiable solver for Event SDEs and make its implementation available as part of the $\\texttt{diffrax}$ library. 
Our framework is, to our knowledge, the first to enable gradient-based training of SSNNs with noise affecting both the spike timing and the network's dynamics.", "pdf": "https://openreview.net/pdf/12bfd1313c2ad9fae822910f9e4239ddc9fb8525.pdf"} {"title": "Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions", "url": "https://openreview.net/forum?id=NhtBXSNXKA", "detail_url": "https://openreview.net/forum?id=NhtBXSNXKA", "authors": "Quanqi Hu,Qi Qi,Zhaosong Lu,Tianbao Yang", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study a class of non-smooth non-convex problems in the form of $\min_{x}[\max_{y\in\mathcal Y}\phi(x, y) - \max_{z\in\mathcal Z}\psi(x, z)]$, where both $\Phi(x) = \max_{y\in\mathcal Y}\phi(x, y)$ and $\Psi(x)=\max_{z\in\mathcal Z}\psi(x, z)$ are weakly convex functions, and $\phi(x, y), \psi(x, z)$ are strongly concave functions in terms of $y$ and $z$, respectively. It covers two families of problems that have been studied but are missing single-loop stochastic algorithms, i.e., difference of weakly convex functions and weakly convex strongly-concave min-max problems. We propose a stochastic Moreau envelope approximate gradient method dubbed SMAG, the first single-loop algorithm for solving these problems, and provide a state-of-the-art non-asymptotic convergence rate. The key idea of the design is to compute an approximate gradient of the Moreau envelopes of $\Phi, \Psi$ using only one step of stochastic gradient update of the primal and dual variables. Empirically, we conduct experiments on positive-unlabeled (PU) learning and partial area under ROC curve (pAUC) optimization with an adversarial fairness regularizer to validate the effectiveness of our proposed algorithms.", "pdf": "https://openreview.net/pdf/18f96f713f423e4229742f8769d07324f96287f8.pdf"} {"title": "On the Stability and Generalization of Meta-Learning", "url": "https://openreview.net/forum?id=J8rOw29df2", "detail_url": "https://openreview.net/forum?id=J8rOw29df2", "authors": "Yunjuan Wang,Raman Arora", "tags": "NIPS 2024,Poster", "abstract": "We focus on developing a theoretical understanding of meta-learning. Given multiple tasks drawn i.i.d. from some (unknown) task distribution, the goal is to find a good pre-trained model that can be adapted to a new, previously unseen, task with little computational and statistical overhead. We introduce a novel notion of stability for meta-learning algorithms, namely *uniform meta-stability*. We instantiate two uniformly meta-stable learning algorithms based on regularized empirical risk minimization and gradient descent and give explicit generalization bounds for convex learning problems with smooth losses and for weakly convex learning problems with non-smooth losses. Finally, we extend our results to stochastic and adversarially robust variants of our meta-learning algorithm.", "pdf": "https://openreview.net/pdf/09d9e02802ba23711cda18c378c6c7a8c94f69f5.pdf"} {"title": "DN-4DGS: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering", "url": "https://openreview.net/forum?id=QQSyNX5s83", "detail_url": "https://openreview.net/forum?id=QQSyNX5s83", "authors": "Jiahao Lu,Jiacheng Deng,Ruijie Zhu,Yanzhe Liang,Wenfei Yang,Xu Zhou,Tianzhu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Dynamic scene rendering is an intriguing yet challenging problem.
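For context on SMAG above: the Moreau envelope whose approximate gradient it computes is a standard smoothing construction. A sketch of the textbook definition follows, in notation assumed here rather than taken from the paper, with $\rho > 0$ small enough relative to the weak-convexity modulus:

```latex
% Textbook Moreau envelope of a weakly convex F with parameter rho > 0,
% and its gradient via the proximal point (standard facts, not paper-specific).
\[
  F_{\rho}(x) = \min_{u}\Big\{ F(u) + \tfrac{1}{2\rho}\,\|u - x\|^{2} \Big\},
  \qquad
  \nabla F_{\rho}(x) = \tfrac{1}{\rho}\big(x - \operatorname{prox}_{\rho F}(x)\big).
\]
```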
Although current methods based on NeRF have achieved satisfactory performance, they still cannot reach real-time speeds. Recently, 3D Gaussian Splatting (3DGS) has garnered researchers' attention due to its outstanding rendering quality and real-time speed. Therefore, a new paradigm has been proposed: defining canonical 3D Gaussians and deforming them to individual frames via deformable fields. However, the coordinates of the canonical 3D Gaussians are filled with noise, which can transfer into the deformable fields, and no current method adequately considers the aggregation of 4D information. Therefore, we propose Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering (DN-4DGS). Specifically, a Noise Suppression Strategy is introduced to change the distribution of the coordinates of the canonical 3D Gaussians and suppress noise. Additionally, a Decoupled Temporal-Spatial Aggregation Module is designed to aggregate information from adjacent points and frames. Extensive experiments on various real-world datasets demonstrate that our method achieves state-of-the-art rendering quality at real-time speed. Code is available at https://github.com/peoplelu/DN-4DGS.", "pdf": "https://openreview.net/pdf/361a11c98015d35e6343b383a6f7bbd27fab481a.pdf"} {"title": "Dynamic Conditional Optimal Transport through Simulation-Free Flows", "url": "https://openreview.net/forum?id=tk0uaRynhH", "detail_url": "https://openreview.net/forum?id=tk0uaRynhH", "authors": "Gavin Kerrigan,Giosue Migliorini,Padhraic Smyth", "tags": "NIPS 2024,Poster", "abstract": "We study the geometry of conditional optimal transport (COT) and prove a dynamic formulation which generalizes the Benamou-Brenier Theorem. Equipped with these tools, we propose a simulation-free flow-based method for conditional generative modeling. Our method couples an arbitrary source distribution to a specified target distribution through a triangular COT plan, and a conditional generative model is obtained by approximating the geodesic path of measures induced by this COT plan. Our theory and methods are applicable in infinite-dimensional settings, making them well suited for a wide class of Bayesian inverse problems. Empirically, we demonstrate that our method is competitive on several challenging conditional generation tasks, including an infinite-dimensional inverse problem.", "pdf": "https://openreview.net/pdf/d4e9108f3aa259861f75b34a8398e06ef828b822.pdf"} {"title": "Persistence Homology Distillation for Semi-supervised Continual Learning", "url": "https://openreview.net/forum?id=qInb7EUmxz", "detail_url": "https://openreview.net/forum?id=qInb7EUmxz", "authors": "YanFan,Yu Wang,Pengfei Zhu,Dongyue Chen,Qinghua Hu", "tags": "NIPS 2024,Poster", "abstract": "Semi-supervised continual learning (SSCL) has attracted significant attention for addressing catastrophic forgetting in semi-supervised data. Knowledge distillation, which leverages data representation and pair-wise similarity, has shown significant potential in preserving information in SSCL. However, traditional distillation strategies often fail in unlabeled data with inaccurate or noisy information, limiting their efficiency in feature spaces undergoing substantial changes during continual learning. To address these limitations, we propose Persistence Homology Distillation (PsHD) to preserve intrinsic structural information that is insensitive to noise in semi-supervised continual learning.
First, we capture structural features using persistence homology, tracking homological evolution across different scales in vision data; this multi-scale characteristic establishes stability under noise interference. Next, we propose a persistence homology distillation loss in SSCL and design an acceleration algorithm to reduce the computational cost of persistence homology in our module. Furthermore, we demonstrate the superior stability of PsHD compared to sample representation and pair-wise similarity distillation methods theoretically and experimentally. Finally, experimental results on three widely used datasets validate that PsHD outperforms the state of the art by 3.9% on average, and achieves a 1.5% improvement while reducing the memory buffer size by 60%, highlighting the potential of utilizing unlabeled data in SSCL. Our code is available: https://github.com/fanyan0411/PsHD.", "pdf": "https://openreview.net/pdf/5f6f27efbe92894d0e9f59692d4fee8538becfa1.pdf"} {"title": "VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation", "url": "https://openreview.net/forum?id=YbhHz0X2j5", "detail_url": "https://openreview.net/forum?id=YbhHz0X2j5", "authors": "Youpeng Wen,Junfan Lin,Yi Zhu,Jianhua Han,Hang Xu,Shen Zhao,Xiaodan Liang", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements utilizing large-scale video data for learning video generation models demonstrate significant potential in understanding complex physical dynamics. It suggests the feasibility of leveraging diverse robot trajectory data to develop a unified, dynamics-aware model to enhance robot manipulation. However, given the relatively small amount of available robot data, directly fitting data without considering the relationship between visual observations and actions could lead to suboptimal data utilization. To this end, we propose \textbf{VidMan} (\textbf{Vid}eo Diffusion for Robot \textbf{Man}ipulation), a novel framework that employs a two-stage training mechanism inspired by dual-process theory from neuroscience to enhance stability and improve data utilization efficiency. Specifically, in the first stage, VidMan is pre-trained on the Open X-Embodiment dataset (OXE) for predicting future visual trajectories in a video denoising diffusion manner, enabling the model to develop a long-horizon awareness of the environment's dynamics. In the second stage, a flexible yet effective layer-wise self-attention adapter is introduced to transform VidMan into an efficient inverse dynamics model that predicts action modulated by the implicit dynamics knowledge via parameter sharing. Our VidMan framework outperforms the state-of-the-art baseline model GR-1 on the CALVIN benchmark, achieving an 11.7\% relative improvement, and demonstrates over 9\% precision gains on the OXE small-scale dataset. These results provide compelling evidence that world models can significantly enhance the precision of robot action prediction.
Code and models will be made public.", "pdf": "https://openreview.net/pdf/e13dd37a46369b87a3c1fbc7cd460d8ed7a7039e.pdf"} {"title": "A Fast Convoluted Story: Scaling Probabilistic Inference for Integer Arithmetics", "url": "https://openreview.net/forum?id=OOiRS6fiM7", "detail_url": "https://openreview.net/forum?id=OOiRS6fiM7", "authors": "Lennert De Smet,Pedro Zuidberg Dos Martires", "tags": "NIPS 2024,Poster", "abstract": "As illustrated by the success of integer linear programming, linear integer arithmetics is a powerful tool for modelling combinatorial problems. Furthermore, the probabilistic extension of linear programming has been used to formulate problems in neurosymbolic AI. However, two key problems persist that prevent the adoption of neurosymbolic techniques beyond toy problems. First, probabilistic inference is inherently hard, #P-hard to be precise. Second, the discrete nature of integers renders the construction of meaningful gradients challenging, which is problematic for learning. In order to mitigate these issues, we formulate linear arithmetics over integer-valued random variables as tensor manipulations that can be implemented in a straightforward fashion using modern deep learning libraries. At the core of our formulation lies the observation that the addition of two integer-valued random variables can be performed by adapting the fast Fourier transform to probabilities in the log-domain. By relying on tensor operations we obtain a differentiable data structure, which unlocks, virtually for free, gradient-based learning. In our experimental validation we show that tensorising probabilistic integer linear arithmetics and leveraging the fast Fourier transform allows us to push the state of the art by several orders of magnitude in terms of inference and learning times.", "pdf": "https://openreview.net/pdf/db84620c031d715ef1d6a5c2cf2e96187e730928.pdf"} {"title": "ScaleKD: Strong Vision Transformers Could Be Excellent Teachers", "url": "https://openreview.net/forum?id=0WCFI2Qx85", "detail_url": "https://openreview.net/forum?id=0WCFI2Qx85", "authors": "Jiawei Fan,Chao Li,Xiaolong Liu,Anbang Yao", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we question if well pre-trained vision transformer (ViT) models could be used as teachers that exhibit scalable properties to advance cross-architecture knowledge distillation research, in the context of adopting mainstream large-scale visual recognition datasets for evaluation. To make this possible, our analysis underlines the importance of seeking effective strategies to align (1) feature computing paradigm differences, (2) model scale differences, and (3) knowledge density differences. By combining three closely coupled components namely *cross attention projector*, *dual-view feature mimicking* and *teacher parameter perception* tailored to address the alignment problems stated above, we present a simple and effective knowledge distillation method, called *ScaleKD*. Our method can train student backbones that span across a variety of convolutional neural network (CNN), multi-layer perceptron (MLP), and ViT architectures on image classification datasets, achieving state-of-the-art knowledge distillation performance.
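The core observation of the probabilistic integer arithmetic paper above, that adding two integer-valued random variables amounts to convolving their probability mass functions, which an FFT computes cheaply, can be checked in a few lines. The sketch below works in the linear domain for readability; the paper's formulation adapts the FFT to log-domain probabilities and makes the construction differentiable.

```python
# Distribution of Z = X + Y for integer-valued random variables via
# FFT-based convolution of their pmfs. Linear-domain sketch for clarity;
# the paper adapts the FFT to log-domain probabilities for stability.
import numpy as np

px = np.array([0.2, 0.5, 0.3])   # P(X = 0), P(X = 1), P(X = 2)
py = np.array([0.6, 0.4])        # P(Y = 0), P(Y = 1)

n = len(px) + len(py) - 1        # support size of X + Y
fz = np.fft.rfft(px, n) * np.fft.rfft(py, n)   # pointwise product in Fourier domain
pz = np.fft.irfft(fz, n)         # inverse transform = convolution of the pmfs
print(pz)                        # [0.12, 0.38, 0.38, 0.12], sums to 1
```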
For instance, taking a well pre-trained Swin-L as the teacher model, our method gets 75.15\%|82.03\%|84.16\%|78.63\%|81.96\%|83.93\%|83.80\%|85.53\% top-1 accuracies for MobileNet-V1|ResNet-50|ConvNeXt-T|Mixer-S/16|Mixer-B/16|ViT-S/16|Swin-T|ViT-B/16 models trained on the ImageNet-1K dataset from scratch, showing 3.05\%|3.39\%|2.02\%|4.61\%|5.52\%|4.03\%|2.62\%|3.73\% absolute gains over the individually trained counterparts. Intriguingly, when scaling up the size of teacher models or their pre-training datasets, our method showcases the desired scalable properties, bringing increasingly larger gains to student models. We also empirically show that the student backbones trained by our method transfer well on downstream MS-COCO and ADE20K datasets. More importantly, our method could be used as a more efficient alternative to the time-intensive pre-training paradigm for any target student model on large-scale datasets if a strong pre-trained ViT is available, reducing the number of viewed training samples by up to 195$\times$. The code is available at *https://github.com/deep-optimization/ScaleKD*.", "pdf": "https://openreview.net/pdf/430fc2b527713941d316c8a20f97ba5e1fa87274.pdf"} {"title": "CALVIN: Improved Contextual Video Captioning via Instruction Tuning", "url": "https://openreview.net/forum?id=7Kz7icCZ6H", "detail_url": "https://openreview.net/forum?id=7Kz7icCZ6H", "authors": "Gowthami Somepalli,Arkabandhu Chowdhury,Jonas Geiping,Ronen Basri,Tom Goldstein,David W. Jacobs", "tags": "NIPS 2024,Poster", "abstract": "The recent emergence of powerful Vision-Language models (VLMs) has significantly improved image captioning. Some of these models are extended to caption videos as well. However, their capabilities to understand complex scenes are limited, and the descriptions they provide for scenes tend to be overly verbose and focused on the superficial appearance of objects. Scene descriptions, especially in movies, require a deeper contextual understanding, unlike general-purpose video captioning. To address this challenge, we propose a model, CALVIN, a specialized video LLM that leverages previous movie context to generate fully \"contextual\" scene descriptions. To achieve this, we train our model on a suite of tasks that integrate both image-based question-answering and video captioning within a unified framework, before applying instruction tuning to refine the model's ability to provide scene captions. Lastly, we observe that our model responds well to prompt engineering and few-shot in-context learning techniques, enabling the user to adapt it to any new movie with very little additional annotation.", "pdf": "https://openreview.net/pdf/e7d334ccdb86a2d6447dfd8764564be548452d6d.pdf"} {"title": "Smoke and Mirrors in Causal Downstream Tasks", "url": "https://openreview.net/forum?id=Iq2IAWozNr", "detail_url": "https://openreview.net/forum?id=Iq2IAWozNr", "authors": "Riccardo Cadei,Lukas Lindorfer,Sylvia Cremer,Cordelia Schmid,Francesco Locatello", "tags": "NIPS 2024,Poster", "abstract": "Machine Learning and AI have the potential to transform data-driven scientific discovery, enabling accurate predictions for several scientific phenomena. As many scientific questions are inherently causal, this paper looks at the causal inference task of treatment effect estimation, where the outcome of interest is recorded in high-dimensional observations in a Randomized Controlled Trial (RCT).
Although this is the simplest possible causal setting and a perfect fit for deep learning, we theoretically find that many common choices in the literature may lead to biased estimates. To test the practical impact of these considerations, we recorded ISTAnt, the first real-world benchmark for causal inference downstream tasks on high-dimensional observations, as an RCT studying how garden ants (Lasius neglectus) respond to microparticles applied onto their colony members by hygienic grooming. Comparing 6 480 models fine-tuned from state-of-the-art visual backbones, we find that the sampling and modeling choices significantly affect the accuracy of the causal estimate, and that classification accuracy is not a proxy thereof. We further validated the analysis, repeating it on a synthetically generated visual dataset in which we control the causal model. Our results suggest that future benchmarks should carefully consider real downstream scientific questions, especially causal ones. Further, we highlight guidelines for representation learning methods to help answer causal questions in the sciences.", "pdf": "https://openreview.net/pdf/0da02ead31eaa563152d1c602f5227264b7e0596.pdf"} {"title": "Improving Decision Sparsity", "url": "https://openreview.net/forum?id=GhqdnLZMAz", "detail_url": "https://openreview.net/forum?id=GhqdnLZMAz", "authors": "Yiyang Sun,Tong Wang,Cynthia Rudin", "tags": "NIPS 2024,Poster", "abstract": "Sparsity is a central aspect of interpretability in machine learning. Typically, sparsity is measured in terms of the size of a model globally, such as the number of variables it uses. However, this notion of sparsity is not particularly relevant for decision making; someone subjected to a decision does not care about variables that do not contribute to the decision. In this work, we dramatically expand a notion of *decision sparsity* called the *Sparse Explanation Value* (SEV) so that its explanations are more meaningful. SEV considers movement along a hypercube towards a reference point. By allowing flexibility in that reference and by considering how distances along the hypercube translate to distances in feature space, we can derive sparser and more meaningful explanations for various types of function classes. We present cluster-based SEV and its variant tree-based SEV, introduce a method that improves credibility of explanations, and propose algorithms that optimize decision sparsity in machine learning models.", "pdf": "https://openreview.net/pdf/76077d44b75dbae8d4598eae7f455c86e1519e71.pdf"} {"title": "Poseidon: Efficient Foundation Models for PDEs", "url": "https://openreview.net/forum?id=JC1VKK3UXk", "detail_url": "https://openreview.net/forum?id=JC1VKK3UXk", "authors": "Maximilian Herde,Bogdan Raonic,Tobias Rohner,Roger K\u00e4ppeli,Roberto Molinaro,Emmanuel de Bezenac,Siddhartha Mishra", "tags": "NIPS 2024,Poster", "abstract": "We introduce Poseidon, a foundation model for learning the solution operators of PDEs. It is based on a multiscale operator transformer, with time-conditioned layer norms that enable continuous-in-time evaluations. A novel training strategy leveraging the semi-group property of time-dependent PDEs to allow for significant scaling-up of the training data is also proposed. Poseidon is pretrained on a diverse, large scale dataset for the governing equations of fluid dynamics. It is then evaluated on a suite of 15 challenging downstream tasks that include a wide variety of PDE types and operators.
We show that Poseidon exhibits excellent performance across the board by outperforming baselines significantly, both in terms of sample efficiency and accuracy. Poseidon also generalizes very well to new physics that is not seen during pretraining. Moreover, Poseidon scales with respect to model and data size, both for pretraining and for downstream tasks. Taken together, our results showcase the surprising ability of Poseidon to learn effective representations from a very small set of PDEs during pretraining in order to generalize well to unseen and unrelated PDEs downstream, demonstrating its potential as an effective, general purpose PDE foundation model. Finally, the Poseidon model as well as underlying pretraining and downstream datasets are open sourced, with code being available at https://github.com/camlab-ethz/poseidon and pretrained models and datasets at https://huggingface.co/camlab-ethz.", "pdf": "https://openreview.net/pdf/f323ab53b88fa008fde799a684021aaedb2a03c1.pdf"} {"title": "PointAD: Comprehending 3D Anomalies from Points and Pixels for Zero-shot 3D Anomaly Detection", "url": "https://openreview.net/forum?id=02CIZ8qeDc", "detail_url": "https://openreview.net/forum?id=02CIZ8qeDc", "authors": "Qihang Zhou,Jiangtao Yan,Shibo He,Wenchao Meng,Jiming Chen", "tags": "NIPS 2024,Poster", "abstract": "Zero-shot (ZS) 3D anomaly detection is a crucial yet unexplored field that addresses scenarios where target 3D training samples are unavailable due to practical concerns like privacy protection. This paper introduces PointAD, a novel approach that transfers the strong generalization capabilities of CLIP for recognizing 3D anomalies on unseen objects. PointAD provides a unified framework to comprehend 3D anomalies from both points and pixels. In this framework, PointAD renders 3D anomalies into multiple 2D renderings and projects them back into 3D space. To capture the generic anomaly semantics into PointAD, we propose hybrid representation learning that optimizes the learnable text prompts from 3D and 2D through auxiliary point clouds. The collaborative optimization of point and pixel representations jointly enables our model to grasp underlying 3D anomaly patterns, contributing to detecting and segmenting anomalies of diverse unseen 3D objects. Through the alignment of 3D and 2D space, our model can directly integrate RGB information, further enhancing the understanding of 3D anomalies in a plug-and-play manner. Extensive experiments show the superiority of PointAD in ZS 3D anomaly detection across diverse unseen objects.", "pdf": "https://openreview.net/pdf/514c993daadcf9006e79d05e65dcf706d696a9ce.pdf"} {"title": "Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations", "url": "https://openreview.net/forum?id=yURca4wi2L", "detail_url": "https://openreview.net/forum?id=yURca4wi2L", "authors": "Haoming Cai,Jingxi Chen,Brandon Y. Feng,Weiyun Jiang,Mingyang Xie,Kevin Zhang,Cornelia Fermuller,Yiannis Aloimonos,Ashok Veeraraghavan,Christopher Metzler", "tags": "NIPS 2024,Poster", "abstract": "Atmospheric turbulence, caused by random fluctuations in the atmosphere's refractive index, introduces complex spatio-temporal distortions in imagery captured at long range. Video Atmospheric Turbulence Mitigation (ATM) aims to restore videos affected by these distortions. However, existing video ATM methods, both supervised and self-supervised, struggle to maintain temporally consistent mitigation across frames, leading to visually incoherent results.
This limitation arises from the stochastic nature of atmospheric turbulence, which varies across space and time. Inspired by the observation that atmospheric turbulence induces high-frequency temporal variations, we propose ConVRT, a novel framework for consistent video restoration through turbulence. ConVRT introduces a neural video representation that explicitly decouples spatial and temporal information into a spatial content field and a temporal deformation field, enabling targeted regularization of the network's temporal representation capability. By leveraging the low-pass filtering properties of the regularized temporal representations, ConVRT effectively mitigates turbulence-induced temporal frequency variations and promotes temporal consistency. Furthermore, our training framework seamlessly integrates supervised pre-training on synthetic turbulence data with self-supervised learning on real-world videos, significantly improving the temporally consistent mitigation of ATM methods on diverse real-world data. More information can be found on our project page: https://convrt-2024.github.io/", "pdf": "https://openreview.net/pdf/b24f5f37299924fd9dcef2c90341e7676d541ddb.pdf"} {"title": "Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise", "url": "https://openreview.net/forum?id=Rv5dUg4JcZ", "detail_url": "https://openreview.net/forum?id=Rv5dUg4JcZ", "authors": "Shuyao Li,Sushrut Karmalkar,Ilias Diakonikolas,Jelena Diakonikolas", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial distribution shifts, where the labels can be arbitrary, and the goal is to find a \"best-fit\" function.\nMore precisely, given training samples from a reference distribution $p_0$, \nthe goal is to approximate the vector $\\mathbf{w}^*$\nwhich minimizes the squared loss with respect to the worst-case distribution \nthat is close in $\\chi^2$-divergence to $p_{0}$.\nWe design a computationally efficient algorithm that recovers a vector $ \\hat{\\mathbf{w}}$\nsatisfying \n$\\mathbb{E}\\_{p^*} (\\sigma(\\hat{\\mathbf{w}} \\cdot \\mathbf{x}) - y)^2 \\leq C \\hspace{0.2em} \\mathbb{E}\\_{p^*} (\\sigma(\\mathbf{w}^* \\cdot \\mathbf{x}) - y)^2 + \\epsilon$, where $C>1$ is a dimension-independent constant and $(\\mathbf{w}^*, p^*)$ is the witness attaining the min-max risk\n$\\min_{\\mathbf{w}:\\|\\mathbf{w}\\| \\leq W} \\max\\_{p} \\mathbb{E}\\_{(\\mathbf{x}, y) \\sim p} (\\sigma(\\mathbf{w} \\cdot \\mathbf{x}) - y)^2 - \\nu \\chi^2(p, p_0)$.\nOur algorithm follows the primal-dual framework and is \ndesigned by directly bounding the risk with respect to the original, nonconvex $L_2^2$ loss.\nFrom an optimization standpoint, our work opens new avenues for the design of primal-dual algorithms under structured nonconvexity.", "pdf": "https://openreview.net/pdf/e888ffaef012804a6b2bb9eec02341805066cb10.pdf"} {"title": "Preference-based Pure Exploration", "url": "https://openreview.net/forum?id=GvQU54uA7u", "detail_url": "https://openreview.net/forum?id=GvQU54uA7u", "authors": "Apurv Shukla,Debabrota Basu", "tags": "NIPS 2024,Poster", "abstract": "We study the preference-based pure exploration problem for bandits with vector-valued rewards and a set of preferences imposed over them. Specifically, we aim to identify the most preferred policy over a set of arms according to the preferences induced on the reward vectors by an ordering cone $C$. 
First, to quantify the impact of preferences, we derive a novel lower bound on the sample complexity for identifying the most preferred arm with confidence level $1-\delta$. Our lower bound shows how the geometry of the preferences and reward vectors changes the hardness of this problem. We further explicate this geometry for Gaussian distributions of rewards, and provide a convex reformulation of the lower bound solvable with linear programming. Then, we leverage this convex reformulation of the lower bound to design the Track and Stop with Preferences (TSwP) algorithm that identifies the most preferred policy. Finally, we derive a new concentration result for vector-valued rewards, and show that TSwP achieves a matching sample complexity upper bound.", "pdf": "https://openreview.net/pdf/7bc2aeade84a21e27139131829e7c621a073769d.pdf"} {"title": "Evaluating alignment between humans and neural network representations in image-based learning tasks", "url": "https://openreview.net/forum?id=8i6px5W1Rf", "detail_url": "https://openreview.net/forum?id=8i6px5W1Rf", "authors": "Can Demircan,Tankred Saanum,Leonardo Pettini,Marcel Binz,Blazej M Baczkowski,Christian F. Doeller,Mona M. Garvert,Eric Schulz", "tags": "NIPS 2024,Poster", "abstract": "Humans represent scenes and objects in rich feature spaces, carrying information that allows us to generalise about category memberships and abstract functions with few examples. What determines whether a neural network model generalises like a human? We tested how well the representations of $86$ pretrained neural network models mapped to human learning trajectories across two tasks where humans had to learn continuous relationships and categories of natural images. In these tasks, both human participants and neural networks successfully identified the relevant stimulus features within a few trials, demonstrating effective generalisation. We found that while training dataset size was a core determinant of alignment with human choices, contrastive training with multi-modal data (text and imagery) was a common feature of currently publicly available models that predicted human generalisation. Intrinsic dimensionality of representations had different effects on alignment for different model types. Lastly, we tested three sets of human-aligned representations and found no consistent improvements in predictive accuracy compared to the baselines. In conclusion, pretrained neural networks can serve to extract representations for cognitive models, as they appear to capture some fundamental aspects of cognition that are transferable across tasks. Both our paradigms and modelling approach offer a novel way to quantify alignment between neural networks and humans and extend cognitive science into more naturalistic domains.", "pdf": "https://openreview.net/pdf/0d8cc44c845397e95d973b16bb95312e979e605d.pdf"} {"title": "Ensemble sampling for linear bandits: small ensembles suffice", "url": "https://openreview.net/forum?id=SO7fnIFq0o", "detail_url": "https://openreview.net/forum?id=SO7fnIFq0o", "authors": "David Janz,Alexander Litvak,Csaba Szepesvari", "tags": "NIPS 2024,Poster", "abstract": "We provide the first useful and rigorous analysis of ensemble sampling for the stochastic linear bandit setting.
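Before the formal statement that follows, here is a minimal illustrative sketch of ensemble sampling for a $d$-dimensional linear bandit: maintain $m$ independently perturbed regularized least-squares estimates, sample one uniformly each round, and act greedily under it. The perturbation scale, regularizer, and update rule are common textbook choices assumed for illustration, not taken from the paper.

```python
# Illustrative ensemble sampling for a d-dimensional linear bandit: m
# independently perturbed least-squares models; each round, act greedily
# under a uniformly sampled member. Hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m, lam, sigma = 5, 10, 1.0, 0.5
A = [lam * np.eye(d) for _ in range(m)]                   # Gram matrices
b = [rng.normal(scale=sigma, size=d) for _ in range(m)]   # perturbed priors

def choose(actions):
    k = rng.integers(m)                    # sample one ensemble member
    theta = np.linalg.solve(A[k], b[k])    # its least-squares estimate
    return max(actions, key=lambda a: a @ theta)

def update(x, reward):
    for k in range(m):
        A[k] += np.outer(x, x)
        # each member sees the reward plus its own fresh perturbation
        b[k] += (reward + rng.normal(scale=sigma)) * x
```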
In particular, we show that, under standard assumptions, for a $d$-dimensional stochastic linear bandit with an interaction horizon $T$, ensemble sampling with an ensemble of size of order $\\smash{d \\log T}$ incurs regret at most of the order $\\smash{(d \\log T)^{5/2} \\sqrt{T}}$. Ours is the first result in any structured setting not to require the size of the ensemble to scale linearly with $T$---which defeats the purpose of ensemble sampling---while obtaining near $\\smash{\\sqrt{T}}$ order regret. Our result is also the first to allow for infinite action sets.", "pdf": "https://openreview.net/pdf/8d05488e8d802ad8a7cf226f281d9ad6d837f818.pdf"} {"title": "Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning", "url": "https://openreview.net/forum?id=MuPlJ9fT4b", "detail_url": "https://openreview.net/forum?id=MuPlJ9fT4b", "authors": "Wuyang Chen,Jialin Song,Pu Ren,Shashank Subramanian,Dmitriy Morozov,Michael W. Mahoney", "tags": "NIPS 2024,Poster", "abstract": "Recent years have witnessed the promise of coupling machine learning methods and physical domain-specific insights for solving scientific problems based on partial differential equations (PDEs). However, being data-intensive, these methods still require a large amount of PDE data. This reintroduces the need for expensive numerical PDE solutions, partially undermining the original goal of avoiding these expensive simulations. In this work, seeking data efficiency, we design unsupervised pretraining for PDE operator learning. To reduce the need for training data with heavy simulation costs, we mine unlabeled PDE data without simulated solutions,\nand we pretrain neural operators with physics-inspired reconstruction-based proxy tasks. To improve out-of-distribution performance, we further assist neural operators in flexibly leveraging a similarity-based method that learns in-context examples, without incurring extra training costs or designs. Extensive empirical evaluations on a diverse set of PDEs demonstrate that our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models. We provide our code at https://github.com/delta-lab-ai/data_efficient_nopt.", "pdf": "https://openreview.net/pdf/ffec703e260de54b6ed6b4a9d445481e4be67091.pdf"} {"title": "Ask, Attend, Attack: An Effective Decision-Based Black-Box Targeted Attack for Image-to-Text Models", "url": "https://openreview.net/forum?id=9uMJeCUeKk", "detail_url": "https://openreview.net/forum?id=9uMJeCUeKk", "authors": "Qingyuan Zeng,Zhenzhong Wang,Yiu-ming Cheung,Min Jiang", "tags": "NIPS 2024,Poster", "abstract": "While image-to-text models have demonstrated significant advancements in various vision-language tasks, they remain susceptible to adversarial attacks. Existing white-box attacks on image-to-text models require access to the architecture, gradients, and parameters of the target model, resulting in low practicality. Although the recently proposed gray-box attacks have improved practicality, they suffer from semantic loss during the training process, which limits their targeted attack performance. To advance adversarial attacks of image-to-text models, this paper focuses on a challenging scenario: decision-based black-box targeted attacks where the attackers only have access to the final output text and aim to perform targeted attacks. Specifically, we formulate the decision-based black-box targeted attack as a large-scale optimization problem. 
To efficiently solve the optimization problem, a three-stage process \\textit{Ask, Attend, Attack}, called \\textit{AAA}, is proposed to coordinate with the solver. \\textit{Ask} guides attackers to create target texts that satisfy the specific semantics. \\textit{Attend} identifies the crucial regions of the image for attacking, thus reducing the search space for the subsequent \\textit{Attack}. \\textit{Attack} uses an evolutionary algorithm to attack the crucial regions, where the attacks are semantically related to the target texts of \\textit{Ask}, thus achieving targeted attacks without semantic loss. Experimental results on transformer-based and CNN+RNN-based image-to-text models confirmed the effectiveness of our proposed \\textit{AAA}.", "pdf": "https://openreview.net/pdf/071e0f86eed8649ef02c858d99b8e45c9f883b81.pdf"} {"title": "Context-Aware Testing: A New Paradigm for Model Testing with Large Language Models", "url": "https://openreview.net/forum?id=d75qCZb7TX", "detail_url": "https://openreview.net/forum?id=d75qCZb7TX", "authors": "Paulius Rauba,Nabeel Seedat,Max Ruiz Luyten,Mihaela van der Schaar", "tags": "NIPS 2024,Poster", "abstract": "The predominant *de facto* paradigm of testing ML models relies on either using only held-out data to compute aggregate evaluation metrics or assessing the performance on different subgroups. However, such *data-only testing* methods operate under the restrictive assumption that the available empirical data is the sole input for testing ML models, disregarding valuable contextual information that could guide model testing. In this paper, we challenge the go-to approach of *data-only testing* and introduce *Context-Aware Testing* (CAT) which uses context as an inductive bias to guide the search for meaningful model failures. We instantiate the first CAT system, *SMART Testing*, which employs large language models to hypothesize relevant and likely failures, which are evaluated on data using a *self-falsification mechanism*. Through empirical evaluations in diverse settings, we show that SMART automatically identifies more relevant and impactful failures than alternatives, demonstrating the potential of CAT as a testing paradigm.", "pdf": "https://openreview.net/pdf/a99618edec2b2c0df8a966cbd6c01793a3bd2e48.pdf"} {"title": "RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation", "url": "https://openreview.net/forum?id=js74ZCddxG", "detail_url": "https://openreview.net/forum?id=js74ZCddxG", "authors": "Peihua Mai,Ran Yan,Yan Pang", "tags": "NIPS 2024,Poster", "abstract": "Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data. Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks. To address the privacy concern, secure aggregation (SecAgg) is often used to obtain the aggregation of gradients on the server without inspecting individual user updates. Unfortunately, existing defense strategies against poisoning attacks rely on the analysis of local updates in plaintext, making them incompatible with SecAgg. To reconcile the conflicts, we propose a robust federated learning framework against poisoning attacks (RFLPA) based on the SecAgg protocol. Our framework computes the cosine similarity between local updates and server updates to conduct robust aggregation.
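The cosine-similarity filtering step lends itself to a short plaintext sketch (the actual protocol runs under secure aggregation, which this toy deliberately omits): weight each local update by its clipped cosine similarity to the server update before averaging, so that updates pointing away from the server direction are suppressed.

```python
import numpy as np

def robust_aggregate(local_updates, server_update, tau=0.0):
    """Weight each client update by its clipped cosine similarity to a trusted
    server update, then take a weighted average -- a plaintext sketch of the
    similarity-based filtering idea."""
    s = server_update / (np.linalg.norm(server_update) + 1e-12)
    sims = np.array([u @ s / (np.linalg.norm(u) + 1e-12) for u in local_updates])
    w = np.clip(sims, tau, None)          # suppress updates pointing away from the server
    if w.sum() == 0:
        return server_update
    return np.average(local_updates, axis=0, weights=w)

rng = np.random.default_rng(0)
honest = [rng.normal(0.5, 0.1, 10) for _ in range(8)]
poisoned = [-5 * np.ones(10) for _ in range(2)]       # sign-flipping attackers
agg = robust_aggregate(honest + poisoned, server_update=np.full(10, 0.5))
print(agg.round(2))                                    # close to the honest mean
```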
Furthermore, we leverage verifiable packed Shamir secret sharing to achieve a reduced communication cost of $O(M+N)$ per user, and design a novel dot product aggregation algorithm to resolve the issue of increased information leakage. Our experimental results show that RFLPA significantly reduces communication and computation overhead by over $75\\%$ compared to the state-of-the-art secret sharing method, BREA, while maintaining competitive accuracy.", "pdf": "https://openreview.net/pdf/afd6cfbf0ec32edd02c01595b7160353ed91fa1d.pdf"} {"title": "FINALLY: fast and universal speech enhancement with studio-like quality", "url": "https://openreview.net/forum?id=18RdkSv9h9", "detail_url": "https://openreview.net/forum?id=18RdkSv9h9", "authors": "Nicholas Babaev,Kirill Tamogashev,Azat Saginbaev,Ivan Shchekotov,Hanbin Bae,Hosang Sung,WonJun Lee,Hoon-Young Cho,Pavel Andreev", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we address the challenge of speech enhancement in real-world recordings, which often contain various forms of distortion, such as background noise, reverberation, and microphone artifacts.\nWe revisit the use of Generative Adversarial Networks (GANs) for speech enhancement and theoretically show that GANs are naturally inclined to seek the point of maximum density within the conditional clean speech distribution, which, as we argue, is essential for the speech enhancement task.\nWe study various feature extractors for perceptual loss to facilitate the stability of adversarial training, developing a methodology for probing the structure of the feature space.\nThis leads us to integrate WavLM-based perceptual loss into the MS-STFT adversarial training pipeline, creating an effective and stable training procedure for the speech enhancement model.\nThe resulting speech enhancement model, which we refer to as FINALLY, builds upon the HiFi++ architecture, augmented with a WavLM encoder and a novel training pipeline.\nEmpirical results on various datasets confirm our model's ability to produce clear, high-quality speech at 48 kHz, achieving state-of-the-art performance in the field of speech enhancement. Demo page: https://samsunglabs.github.io/FINALLY-page/", "pdf": "https://openreview.net/pdf/2e3b3e577243cca83d9bdcdcc537a17d24af94b5.pdf"} {"title": "Semantic Density: Uncertainty Quantification for Large Language Models through Confidence Measurement in Semantic Space", "url": "https://openreview.net/forum?id=LOH6qzI7T6", "detail_url": "https://openreview.net/forum?id=LOH6qzI7T6", "authors": "Xin Qiu,Risto Miikkulainen", "tags": "NIPS 2024,Poster", "abstract": "With the widespread application of Large Language Models (LLMs) to various domains, concerns regarding the trustworthiness of LLMs in safety-critical scenarios have been raised, due to their unpredictable tendency to hallucinate and generate misinformation. Existing LLMs do not have an inherent functionality to provide the users with an uncertainty/confidence metric for each response they generate, making it difficult to evaluate trustworthiness. Although several studies aim to develop uncertainty quantification methods for LLMs, they have fundamental limitations, such as being restricted to classification tasks, requiring additional training and data, considering only lexical instead of semantic information, and being prompt-wise but not response-wise. A new framework is proposed in this paper to address these issues.
Semantic density extracts uncertainty/confidence information for each response from a probability distribution perspective in semantic space. It has no restriction on task types and is \"off-the-shelf\" for new models and tasks. Experiments on seven state-of-the-art LLMs, including the latest Llama 3 and Mixtral-8x22B models, on four free-form question-answering benchmarks demonstrate the superior performance and robustness of semantic density compared to prior approaches.", "pdf": "https://openreview.net/pdf/7fd39b6f714e11de71c0823ddd585c960e84b6c3.pdf"} {"title": "Wasserstein convergence of Čech persistence diagrams for samplings of submanifolds", "url": "https://openreview.net/forum?id=ZehccYKkNH", "detail_url": "https://openreview.net/forum?id=ZehccYKkNH", "authors": "Charles Arnal,David Cohen-Steiner,Vincent Divol", "tags": "NIPS 2024,Poster", "abstract": "Čech persistence diagrams (PDs) are topological descriptors routinely used to capture the geometry of complex datasets. They are commonly compared using the Wasserstein distances $\mathrm{OT}_p$; however, the extent to which PDs are stable with respect to these metrics remains poorly understood. \nWe partially close this gap by focusing on the case where datasets are sampled on an $m$-dimensional submanifold of $\mathbb{R}^d$. Under this manifold hypothesis, we show that convergence with respect to the $\mathrm{OT}_p$ metric happens exactly when $p>m$. We also provide improvements upon the bottleneck stability theorem in this case and prove new laws of large numbers for the total $\alpha$-persistence of PDs. Finally, we show how these theoretical findings shed new light on the behavior of the feature maps on the space of PDs that are used in ML-oriented applications of Topological Data Analysis.", "pdf": "https://openreview.net/pdf/94991a8d3e49f98650f89b5671865e9439393ec5.pdf"} {"title": "Combining Observational Data and Language for Species Range Estimation", "url": "https://openreview.net/forum?id=IOKLUxB05h", "detail_url": "https://openreview.net/forum?id=IOKLUxB05h", "authors": "Max Hamilton,Christian Lange,Elijah Cole,Alexander Shepard,Samuel Heinrich,Oisin Mac Aodha,Grant Van Horn,Subhransu Maji", "tags": "NIPS 2024,Poster", "abstract": "Species range maps (SRMs) are essential tools for research and policy-making in ecology, conservation, and environmental management. However, traditional SRMs rely on the availability of environmental covariates and high-quality observational data, both of which can be challenging to obtain due to geographic inaccessibility and resource constraints. We propose a novel approach combining millions of citizen science species observations with textual descriptions from Wikipedia, covering habitat preferences and range descriptions for tens of thousands of species. Our framework maps location, species, and text descriptions into a common space, facilitating the learning of rich spatial covariates at a global scale and enabling zero-shot range estimation from textual descriptions. Evaluated on held-out species, our zero-shot SRMs significantly outperform baselines and match the performance of SRMs obtained using tens of observations. Our approach also acts as a strong prior when combined with observational data, resulting in more accurate range estimation with less data.
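To make the shared-embedding idea in the species-range record concrete, here is a toy numpy sketch of zero-shot range scoring: a location encoder and a text encoder map into a common space, and the presence score is a sigmoid of their inner product. The encoders below are untrained, hash-based stand-ins; in the actual system these would be learned jointly from observations and Wikipedia text.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16                                     # shared embedding dimension
W_loc = rng.normal(size=(D, 4))            # pretend these weights were learned jointly
W_txt = rng.normal(size=(1000, D))

def embed_location(lon, lat):
    """Toy positional features mapped into the shared space."""
    feats = np.array([np.sin(lon), np.cos(lon), np.sin(lat), np.cos(lat)])
    return np.tanh(W_loc @ feats)

def embed_text(text):
    """Hash-based stand-in for a trained text encoder."""
    h = np.zeros(D)
    for tok in text.lower().split():
        h += W_txt[hash(tok) % 1000]
    return h / (np.linalg.norm(h) + 1e-12)

def range_score(lon, lat, habitat_text):
    """Zero-shot presence score for a location, from a text description alone."""
    z = embed_location(lon, lat)
    return 1 / (1 + np.exp(-(z @ embed_text(habitat_text))))

print(range_score(0.3, 0.8, "tropical lowland rainforest near rivers"))
```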
We present extensive quantitative and qualitative analyses of the learned representations in the context of range estimation and other spatial tasks, demonstrating the effectiveness of our approach.", "pdf": "https://openreview.net/pdf/9a289a4d9fa5b45836aeb3be28d3ab67d9037bf5.pdf"} {"title": "Almost Surely Asymptotically Constant Graph Neural Networks", "url": "https://openreview.net/forum?id=Dn68qdfTry", "detail_url": "https://openreview.net/forum?id=Dn68qdfTry", "authors": "Sam Adam-Day,Michael Benedikt,Ismail Ilkan Ceylan,Ben Finkelshtein", "tags": "NIPS 2024,Poster", "abstract": "We present a new angle on the expressive power of graph neural networks (GNNs) by studying how the predictions of real-valued GNN classifiers, such as those classifying graphs probabilistically, evolve as we apply them on larger graphs drawn from some random graph model. We show that the output converges to a constant function, which upper-bounds what these classifiers can uniformly express. This strong convergence phenomenon applies to a very wide class of GNNs, including state of the art models, with aggregates including mean and the attention-based mechanism of graph transformers. Our results apply to a broad class of random graph models, including sparse and dense variants of the Erd\u0151s-R\u00e9nyi model, the stochastic block model, and the Barab\u00e1si-Albert model. We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs.", "pdf": "https://openreview.net/pdf/f4319bbfd37fe4a0c38d84577193fc73bc082959.pdf"} {"title": "Identifying Spatio-Temporal Drivers of Extreme Events", "url": "https://openreview.net/forum?id=DdKdr4kqxh", "detail_url": "https://openreview.net/forum?id=DdKdr4kqxh", "authors": "Mohamad Hakam Shams Eddin,Juergen Gall", "tags": "NIPS 2024,Poster", "abstract": "The spatio-temporal relations of impacts of extreme events and their drivers in climate data are not fully understood and there is a need for machine learning approaches to identify such spatio-temporal relations from data. The task, however, is very challenging since there are time delays between extremes and their drivers, and the spatial response of such drivers is inhomogeneous. In this work, we propose a first approach and benchmarks to tackle this challenge. Our approach is trained end-to-end to jointly predict spatio-temporal extremes and spatio-temporal drivers in the physical input variables. By requiring the network to predict extremes from spatio-temporal binary masks of identified drivers, the network successfully identifies drivers that are correlated with extremes. We evaluate our approach on three newly created synthetic benchmarks, where two of them are based on remote sensing or reanalysis climate data, and on two real-world reanalysis datasets. The source code and datasets are publicly available at the project page https://hakamshams.github.io/IDE.", "pdf": "https://openreview.net/pdf/de07db4e16c50434ebcb77a7bc5885fc4bb45cdc.pdf"} {"title": "Generative Modelling of Structurally Constrained Graphs", "url": "https://openreview.net/forum?id=A3hxp0EeNW", "detail_url": "https://openreview.net/forum?id=A3hxp0EeNW", "authors": "Manuel Madeira,Clement Vignac,Dorina Thanou,Pascal Frossard", "tags": "NIPS 2024,Poster", "abstract": "Graph diffusion models have emerged as state-of-the-art techniques in graph generation; yet, integrating domain knowledge into these models remains challenging.
\nDomain knowledge is particularly important in real-world scenarios, where invalid generated graphs hinder deployment in practical applications.\nUnconstrained and conditioned graph diffusion models fail to guarantee such domain-specific structural properties. \nWe present ConStruct, a novel framework that enables graph diffusion models to incorporate hard constraints on specific properties, such as planarity or acyclicity.\nOur approach ensures that the sampled graphs remain within the domain of graphs that satisfy the specified property throughout the entire trajectory in both the forward and reverse processes. This is achieved by introducing an edge-absorbing noise model and a new projector operator.\nConStruct demonstrates versatility across several structural and edge-deletion invariant constraints and achieves state-of-the-art performance for both synthetic benchmarks and attributed real-world datasets. \nFor example, by incorporating planarity constraints in digital pathology graph datasets, the proposed method outperforms existing baselines, improving data validity by up to 71.1 percentage points.", "pdf": "https://openreview.net/pdf/9c3bce2103ac76e47aca2a8ca0dd6d75ee549d16.pdf"} {"title": "General bounds on the quality of Bayesian coresets", "url": "https://openreview.net/forum?id=SAZeQV2PtT", "detail_url": "https://openreview.net/forum?id=SAZeQV2PtT", "authors": "Trevor Campbell", "tags": "NIPS 2024,Poster", "abstract": "Bayesian coresets speed up posterior inference in the large-scale data regime by approximating the full-data log-likelihood function with a surrogate log-likelihood based on a small, weighted subset of the data. But while Bayesian coresets and methods for their construction are applicable in a wide range of models, existing theoretical analyses of the posterior inferential error incurred by coreset approximations only apply in restrictive settings---i.e., exponential family models, or models with strong log-concavity and smoothness assumptions. This work presents general upper and lower bounds on the Kullback-Leibler (KL) divergence of coreset approximations that reflect the full range of applicability of Bayesian coresets. The lower bounds require only mild model assumptions typical of Bayesian asymptotic analyses, while the upper bounds require the log-likelihood functions to satisfy a generalized subexponentiality criterion that is weaker than conditions used in earlier work. The lower bounds are applied to obtain fundamental limitations on the quality of coreset approximations, and to provide a theoretical explanation for the previously-observed poor empirical performance of importance sampling-based construction methods. The upper bounds are used to analyze the performance of recent subsample-optimize methods. The flexibility of the theory is demonstrated in validation experiments involving multimodal, unidentifiable, heavy-tailed Bayesian posterior distributions.", "pdf": "https://openreview.net/pdf/e04da7dbcda0889395e65a856553dfca0ab684a7.pdf"} {"title": "Wormhole Loss for Partial Shape Matching", "url": "https://openreview.net/forum?id=gPhBvrPdEs", "detail_url": "https://openreview.net/forum?id=gPhBvrPdEs", "authors": "Amit Bracha,Thomas Dag\u00e8s,Ron Kimmel", "tags": "NIPS 2024,Poster", "abstract": "When matching parts of a surface to its whole, a fundamental question arises: Which points should be included in the matching process?
The issue is intensified when using isometry to measure similarity, as it requires validating whether distances measured between pairs of surface points should influence the matching process. The approach we propose treats surfaces as manifolds equipped with geodesic distances, and addresses the partial shape matching challenge by introducing a novel criterion to meticulously search for consistent distances between pairs of points. The new criterion explores the relation between intrinsic geodesic distances between the points, geodesic distances between the points and surface boundaries, and extrinsic distances between boundary points measured in the embedding space. It is shown to be less restrictive compared to previous measures and achieves state-of-the-art results when used as a loss function in training networks for partial shape matching.", "pdf": "https://openreview.net/pdf/813a094afff57b020fdacb4bcd622b0174f1e063.pdf"} {"title": "QGFN: Controllable Greediness with Action Values", "url": "https://openreview.net/forum?id=kQ9LgM2JQT", "detail_url": "https://openreview.net/forum?id=kQ9LgM2JQT", "authors": "Elaine Lau,Stephen Zhewen Lu,Ling Pan,Doina Precup,Emmanuel Bengio", "tags": "NIPS 2024,Poster", "abstract": "Generative Flow Networks (GFlowNets; GFNs) are a family of energy-based generative methods for combinatorial objects, capable of generating diverse and high-utility samples. However, consistently biasing GFNs towards producing high-utility samples is non-trivial. In this work, we leverage connections between GFNs and reinforcement learning (RL) and propose to combine the GFN policy with an action-value estimate, $Q$, to create greedier sampling policies which can be controlled by a mixing parameter. We show that several variants of the proposed method, QGFN, are able to improve on the number of high-reward samples generated in a variety of tasks without sacrificing diversity.", "pdf": "https://openreview.net/pdf/12247b57958af9d2378bc7d2a6b85d40a7150ccb.pdf"} {"title": "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation", "url": "https://openreview.net/forum?id=rnUEUbRxVu", "detail_url": "https://openreview.net/forum?id=rnUEUbRxVu", "authors": "Chuanyang Zheng,Yihang Gao,Han Shi,Minbin Huang,Jingyao Li,Jing Xiong,Xiaozhe Ren,Michael Ng,Xin Jiang,Zhenguo Li,Yu Li", "tags": "NIPS 2024,Poster", "abstract": "Positional encoding plays a crucial role in transformers, significantly impacting model performance and length generalization. Prior research has introduced absolute positional encoding (APE) and relative positional encoding (RPE) to distinguish token positions in given sequences. However, both APE and RPE remain fixed after model training regardless of input data, limiting their adaptability and flexibility. Hence, we expect that the desired positional encoding should be data-adaptive and can be dynamically adjusted with the given attention. In this paper, we propose a Data-Adaptive Positional Encoding (DAPE) method, which dynamically and semantically adjusts based on input context and learned fixed priors. Experimental validation on real-world datasets (Arxiv, Books3, and CHE) demonstrates that DAPE enhances model performance in terms of trained length and length generalization, where the improvements are statistically significant. The model visualization suggests that our model can keep both local and anti-local information.
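One plausible reading of the data-adaptive idea, sketched below as a guess rather than the paper's exact parameterization: keep a static relative-position prior, but let a small elementwise MLP adjust the positional bias using the attention logits themselves, so the final bias depends on the input context.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dk, h = 6, 8, 16                        # sequence length, head dim, MLP hidden width

Q, K = rng.normal(size=(n, dk)), rng.normal(size=(n, dk))
rel = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
static_bias = -0.1 * rel                   # a fixed relative-position prior

# a tiny per-entry MLP that adapts the bias using the attention logits themselves
W1, W2 = rng.normal(size=(2, h)) * 0.1, rng.normal(size=(h, 1)) * 0.1
logits = Q @ K.T / np.sqrt(dk)
x = np.stack([logits, np.broadcast_to(static_bias, logits.shape)], axis=-1)  # (n, n, 2)
adaptive_bias = (np.maximum(x @ W1, 0) @ W2)[..., 0]                         # (n, n)

scores = logits + static_bias + adaptive_bias
attn = np.exp(scores - scores.max(-1, keepdims=True))
attn /= attn.sum(-1, keepdims=True)
print(attn.shape)                          # (6, 6) row-stochastic attention matrix
```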
Finally, we successfully train the model on sequence length 128 and achieve better performance at evaluation sequence length 8192, compared with other static positional encoding methods, revealing the benefit of the adaptive positional encoding method.", "pdf": "https://openreview.net/pdf/49a23aa447043bc41b6e583d0b3a6becd467fcd5.pdf"} {"title": "DiffusionBlend: Learning 3D Image Prior through Position-aware Diffusion Score Blending for 3D Computed Tomography Reconstruction", "url": "https://openreview.net/forum?id=h3Kv6sdTWO", "detail_url": "https://openreview.net/forum?id=h3Kv6sdTWO", "authors": "Bowen Song,Jason Hu,Zhaoxu Luo,Jeffrey A Fessler,Liyue Shen", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models face significant challenges when employed for large-scale medical image reconstruction in real practice such as 3D Computed Tomography (CT).\nDue to the demanding memory, time, and data requirements, it is difficult to train a diffusion model directly on the entire volume of high-dimensional data to obtain an efficient 3D diffusion prior. \nExisting works utilizing diffusion priors on a single 2D image slice with hand-crafted cross-slice regularization would sacrifice the z-axis consistency, which results in severe artifacts along the z-axis. \nIn this work, we propose a novel framework that enables learning the 3D image prior through position-aware 3D-patch diffusion score blending for reconstructing large-scale 3D medical images. To the best of our knowledge, we are the first to utilize a 3D-patch diffusion prior for 3D medical image reconstruction. \nExtensive experiments on sparse view and limited angle CT reconstruction\nshow that our DiffusionBlend method significantly outperforms previous methods\nand achieves state-of-the-art performance on real-world CT reconstruction problems with high-dimensional 3D images (i.e., $256 \times 256 \times 500$). Our algorithm also comes with computational efficiency better than or comparable to that of previous state-of-the-art methods. Code is available at https://github.com/efzero/DiffusionBlend.", "pdf": "https://openreview.net/pdf/bc751a481751ba1786a28349904e90bd3148b494.pdf"} {"title": "Fast Proxy Experiment Design for Causal Effect Identification", "url": "https://openreview.net/forum?id=Ci7II4CPwm", "detail_url": "https://openreview.net/forum?id=Ci7II4CPwm", "authors": "Sepehr Elahi,Sina Akbari,Jalal Etesami,Negar Kiyavash,Patrick Thiran", "tags": "NIPS 2024,Poster", "abstract": "Identifying causal effects is a key problem of interest across many disciplines. The two long-standing approaches to estimate causal effects are observational and experimental (randomized) studies. Observational studies can suffer from unmeasured confounding, which may render the causal effects unidentifiable. On the other hand, direct experiments on the target variable may be too costly or even infeasible to conduct. A middle ground between these two approaches is to estimate the causal effect of interest through proxy experiments, which are conducted on variables with a lower cost to intervene on compared to the main target. In an earlier work, we studied this setting and demonstrated that the problem of designing the optimal (minimum-cost) experiment for causal effect identification is NP-complete and provided a naive algorithm that may require solving exponentially many NP-hard problems as a sub-routine in the worst case.
In this work, we provide a few reformulations of the problem that allow for designing significantly more efficient algorithms to solve it, as witnessed by our extensive simulations. Additionally, we study the closely related problem of designing experiments that enable us to identify a given effect through valid adjustment sets.", "pdf": "https://openreview.net/pdf/9ae33c68d613b38b28bdb1f803922e19f7c33cc9.pdf"} {"title": "Controlled maximal variability along with reliable performance in recurrent neural networks", "url": "https://openreview.net/forum?id=yXW2dCTQdi", "detail_url": "https://openreview.net/forum?id=yXW2dCTQdi", "authors": "Chiara Mastrogiuseppe,Rub\u00e9n Moreno-Bote", "tags": "NIPS 2024,Poster", "abstract": "Natural behaviors, even stereotyped ones, exhibit variability. Despite its role in exploring and learning, the function and neural basis of this variability are still not well understood. Given the coupling between neural activity and behavior, we ask what type of neural variability does not compromise behavioral performance. While previous studies typically curtail variability to allow for high task performance in neural networks, our approach takes the reversed perspective. We investigate how to generate maximal neural variability while at the same time having high network performance. \nTo do so, we extend to neural activity the maximum occupancy principle (MOP) developed for behavior, and refer to this new neural principle as NeuroMOP. NeuroMOP posits that the goal of the nervous system is to maximize future action-state entropy, a reward-free, intrinsic motivation that entails creating all possible activity patterns while avoiding terminal or dangerous ones.\nWe show that this goal can be achieved through a neural network controller that injects currents (actions) into a recurrent neural network of fixed random weights to maximize future cumulative action-state entropy. \nHigh activity variability can be induced while adhering to an energy constraint or while avoiding terminal states defined by specific neurons' activities, also in a context-dependent manner. The network solves these tasks by flexibly switching between stochastic and deterministic modes as needed and projecting noise onto a null space. Based on future maximum entropy production, NeuroMOP contributes to a novel theory of neural variability that reconciles stochastic and deterministic behaviors within a single framework.", "pdf": "https://openreview.net/pdf/bd4ce492257f560caead52dd6d5cf618cf2665fb.pdf"} {"title": "GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent", "url": "https://openreview.net/forum?id=m1PVjNHvtP", "detail_url": "https://openreview.net/forum?id=m1PVjNHvtP", "authors": "Hongtai Zeng,Chao Yang,Yanzhen Zhou,Cheng Yang,Qinglai Guo", "tags": "NIPS 2024,Poster", "abstract": "Ensuring that the outputs of neural networks satisfy specific constraints is crucial for applying neural networks to real-life decision-making problems. In this paper, we consider making a batch of neural network outputs satisfy bounded and general linear constraints. We first reformulate the neural network output projection problem as an entropy-regularized linear programming problem. We show that such a problem can be equivalently transformed into an unconstrained convex optimization problem with Lipschitz continuous gradient according to the duality theorem.
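A simplified sketch of the flavor of this duality argument (not GLinSAT's exact unconstrained reformulation): for an entropy-regularized projection with box and linear inequality constraints, stationarity of the Lagrangian gives a closed-form sigmoid primal map, and the smooth concave dual can be ascended directly. Accelerated first-order steps would replace the plain ones below; all constants are illustrative.

```python
import numpy as np

def entropy_reg_project(y, A, b, tau=0.1, eta=0.05, iters=2000):
    """Approximately solve  max_x  y^T x - tau * sum_i [x_i log x_i + (1-x_i) log(1-x_i)]
    subject to  A x <= b,  x in [0,1]^n.  Stationarity of the Lagrangian gives the
    closed form  x(lam) = sigmoid((y - A^T lam) / tau),  so we run projected gradient
    ascent on the (smooth) dual over lam >= 0."""
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        x = 1 / (1 + np.exp(-(y - A.T @ lam) / tau))
        lam = np.maximum(0.0, lam + eta * (A @ x - b))   # dual step on violated constraints
    return x

y = np.array([2.0, 1.0, 0.5])
A = np.ones((1, 3))                       # single budget constraint: x1 + x2 + x3 <= 1.5
b = np.array([1.5])
x = entropy_reg_project(y, A, b)
print(x.round(3), "sum =", x.sum().round(3))   # feasible, close to the budget
```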
Then, based on an accelerated gradient descent algorithm with numerical performance enhancement, we present our architecture, GLinSAT, to solve the problem. To the best of our knowledge, this is the first general linear satisfiability layer in which all the operations are differentiable and matrix-factorization-free. Despite the fact that we can explicitly perform backpropagation based on the automatic differentiation mechanism, we also provide an alternative approach in GLinSAT to calculate the derivatives based on implicit differentiation of the optimality condition. Experimental results on constrained traveling salesman problems, partial graph matching with outliers, predictive portfolio allocation and power system unit commitment demonstrate the advantages of GLinSAT over existing satisfiability layers. Our implementation is available at https://github.com/HunterTracer/GLinSAT.", "pdf": "https://openreview.net/pdf/aeec6ec4aa9143f07f7c57ad8c1b6f5eaa7991c6.pdf"} {"title": "RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models", "url": "https://openreview.net/forum?id=UFRZHFYW8e", "detail_url": "https://openreview.net/forum?id=UFRZHFYW8e", "authors": "Maya Varma,Jean-Benoit Delbrouck,Zhihong Chen,Akshay S Chaudhari,Curtis Langlotz", "tags": "NIPS 2024,Poster", "abstract": "Fine-tuned vision-language models (VLMs) often capture spurious correlations between image features and textual attributes, resulting in degraded zero-shot performance at test time. Existing approaches for addressing spurious correlations (i) primarily operate at the global image-level rather than intervening directly on fine-grained image features and (ii) are predominantly designed for unimodal settings. In this work, we present RaVL, which takes a fine-grained perspective on VLM robustness by discovering and mitigating spurious correlations using local image features rather than operating at the global image level. Given a fine-tuned VLM, RaVL first discovers spurious correlations by leveraging a region-level clustering approach to identify precise image features contributing to zero-shot classification errors. Then, RaVL mitigates the identified spurious correlation with a novel region-aware loss function that enables the VLM to focus on relevant regions and ignore spurious relationships during fine-tuning. We evaluate RaVL on 654 VLMs with various model architectures, data domains, and learned spurious correlations. Our results show that RaVL accurately discovers (191% improvement over the closest baseline) and mitigates (8.2% improvement on worst-group image classification accuracy) spurious correlations. Qualitative evaluations on general-domain and medical-domain VLMs confirm our findings.", "pdf": "https://openreview.net/pdf/2bc283fb44fb4a82505447d9e845cab6da61d5f8.pdf"} {"title": "Customized Multiple Clustering via Multi-Modal Subspace Proxy Learning", "url": "https://openreview.net/forum?id=xbuaSTqAEz", "detail_url": "https://openreview.net/forum?id=xbuaSTqAEz", "authors": "Jiawei Yao,Qi Qian,Juhua Hu", "tags": "NIPS 2024,Poster", "abstract": "Multiple clustering aims to discover various latent structures of data from different aspects. Deep multiple clustering methods have achieved remarkable performance by exploiting complex patterns and relationships in data. However, existing works struggle to flexibly adapt to diverse user-specific needs in data grouping, which may require manual understanding of each clustering.
To address these limitations, we introduce Multi-Sub, a novel end-to-end multiple clustering approach that incorporates a multi-modal subspace proxy learning framework. Utilizing the synergistic capabilities of CLIP and GPT-4, Multi-Sub aligns textual prompts expressing user preferences with their corresponding visual representations. This is achieved by automatically generating proxy words from large language models that act as subspace bases, thus allowing for the customized representation of data in terms specific to the user\u2019s interests. Our method consistently outperforms existing baselines across a broad set of datasets in visual multiple clustering tasks. Our code is available at https://github.com/Alexander-Yao/Multi-Sub.", "pdf": "https://openreview.net/pdf/e80cabcafca9dea1dfdf22930f209be7f322a75a.pdf"} {"title": "LeDex: Training LLMs to Better Self-Debug and Explain Code", "url": "https://openreview.net/forum?id=d1XrZ4EINV", "detail_url": "https://openreview.net/forum?id=d1XrZ4EINV", "authors": "Nan Jiang,Xiaopeng Li,Shiqi Wang,Qiang Zhou,Soneya Binta Hossain,Baishakhi Ray,Varun Kumar,Xiaofei Ma,Anoop Deoras", "tags": "NIPS 2024,Poster", "abstract": "In the domain of code generation, self-debugging is crucial. It allows LLMs to refine their generated code based on execution feedback. This is particularly important because generating correct solutions in one attempt proves challenging for complex tasks. Prior works on self-debugging mostly focus on prompting methods by providing LLMs with few-shot examples, which work poorly on small open-sourced LLMs. In this work, we propose LeDex, a training framework that significantly improves the self-debugging capability of LLMs. Intuitively, we observe that a chain of explanations on the wrong code followed by code refinement helps LLMs better analyze the wrong code and do refinement. We thus propose an automated pipeline to collect a high-quality dataset for code explanation and refinement by generating a number of explanations and refinement trajectories from the LLM itself or a larger teacher model and filtering via execution verification. We perform supervised fine-tuning (SFT) and further reinforcement learning (RL) on both success and failure trajectories with a novel reward design considering code explanation and refinement quality. SFT improves the pass@1 by up to 15.92\% and pass@10 by 9.30\% over four benchmarks. RL training brings an additional improvement of up to 3.54\% on pass@1 and 2.55\% on pass@10. The trained LLMs show iterative refinement ability and can keep refining code continuously. Lastly, our human evaluation shows that the LLMs trained with our framework generate more useful code explanations and help developers better understand bugs in source code.", "pdf": "https://openreview.net/pdf/b7b7ca5a4f76e5bb04b52ff5b8f00f3bc2c73c67.pdf"} {"title": "ZOPP: A Framework of Zero-shot Offboard Panoptic Perception for Autonomous Driving", "url": "https://openreview.net/forum?id=4jXaca2NYa", "detail_url": "https://openreview.net/forum?id=4jXaca2NYa", "authors": "Tao MA,Hongbin Zhou,Qiusheng Huang,Xuemeng Yang,Jianfei Guo,Bo Zhang,Min Dou,Yu Qiao,Botian Shi,Hongsheng Li", "tags": "NIPS 2024,Poster", "abstract": "Offboard perception aims to automatically generate high-quality 3D labels for autonomous driving (AD) scenes. Existing offboard methods focus on 3D object detection with a closed-set taxonomy and fail to match human-level recognition capability on the rapidly evolving perception tasks.
Due to heavy reliance on human labels and the prevalence of data imbalance and sparsity, a unified framework for offboard auto-labeling various elements in AD scenes that meets the distinct needs of perception tasks has not been fully explored. In this paper, we propose a novel multi-modal Zero-shot Offboard Panoptic Perception (ZOPP) framework for autonomous driving scenes. ZOPP integrates the powerful zero-shot recognition capabilities of vision foundation models and 3D representations derived from point clouds. To the best of our knowledge, ZOPP represents a pioneering effort in the domain of multi-modal panoptic perception and auto labeling for autonomous driving scenes. We conduct comprehensive empirical studies and evaluations on the Waymo Open Dataset to validate the proposed ZOPP on various perception tasks. To further explore the usability and extensibility of our proposed ZOPP, we also conduct experiments in downstream applications. The results further demonstrate the great potential of our ZOPP for real-world scenarios. Code will be released at \\url{https://github.com/PJLab-ADG/ZOPP}.", "pdf": "https://openreview.net/pdf/26a89cc7016dbbbd9c1d03d686bceeb3ba7deb73.pdf"} {"title": "Fairness and Efficiency in Online Class Matching", "url": "https://openreview.net/forum?id=kMAXN7HF6d", "detail_url": "https://openreview.net/forum?id=kMAXN7HF6d", "authors": "MohammadTaghi Hajiaghayi,Shayan Chashm Jahan,Mohammad Sharifi,Suho Shin,Max Springer", "tags": "NIPS 2024,Poster", "abstract": "The online bipartite matching problem, extensively studied in the literature, deals with the allocation of online arriving vertices (items) to a predetermined set of offline vertices (agents). However, little attention has been given to the concept of class fairness, where agents are categorized into different classes, and the matching algorithm must ensure equitable distribution across these classes.\n\nWe here focus on randomized algorithms for the fair matching of indivisible items, subject to various definitions of fairness. Our main contribution is the first (randomized) non-wasteful algorithm that simultaneously achieves a $1/2$ approximation to class envy-freeness (CEF) while ensuring an equivalent approximation to the class proportionality (CPROP) and utilitarian social welfare (USW) objectives. We supplement this result by demonstrating that no non-wasteful algorithm can achieve an $\alpha$-CEF guarantee for $\alpha > 0.761$. In a similar vein, we provide a novel input instance for deterministic divisible matching that demonstrates a nearly tight CEF approximation.\n\nLastly, we define the ``price of fairness,'' which represents the trade-off between optimal and fair matching. We demonstrate that increasing the level of fairness in the approximation of the solution leads to a decrease in the objective of maximizing USW, following an inverse proportionality relationship.", "pdf": "https://openreview.net/pdf/47d12f3d34deef31f170d34a952125b9823ca7e2.pdf"} {"title": "AlphaTablets: A Generic Plane Representation for 3D Planar Reconstruction from Monocular Videos", "url": "https://openreview.net/forum?id=7rrJQ9iWoX", "detail_url": "https://openreview.net/forum?id=7rrJQ9iWoX", "authors": "Yuze He,Wang Zhao,Shaohui Liu,Yubin Hu,Yushi Bai,Yu-Hui Wen,Yong-jin Liu", "tags": "NIPS 2024,Poster", "abstract": "We introduce AlphaTablets, a novel and generic representation of 3D planes that features continuous 3D surface and precise boundary delineation.
By representing 3D planes as rectangles with alpha channels, AlphaTablets combine the advantages of current 2D and 3D plane representations, enabling accurate, consistent and flexible modeling of 3D planes. We derive differentiable rasterization on top of AlphaTablets to efficiently render 3D planes into images, and propose a novel bottom-up pipeline for 3D planar reconstruction from monocular videos. Starting with 2D superpixels and geometric cues from pre-trained models, we initialize 3D planes as AlphaTablets and optimize them via differentiable rendering. An effective merging scheme is introduced to facilitate the growth and refinement of AlphaTablets. Through iterative optimization and merging, we reconstruct complete and accurate 3D planes with solid surfaces and clear boundaries. Extensive experiments on the ScanNet dataset demonstrate state-of-the-art performance in 3D planar reconstruction, underscoring the great potential of AlphaTablets as a generic 3D plane representation for various applications.", "pdf": "https://openreview.net/pdf/44824c636faea80a214c56a1b8074278e4d9af9e.pdf"} {"title": "Proving Theorems Recursively", "url": "https://openreview.net/forum?id=yAa5l92TtQ", "detail_url": "https://openreview.net/forum?id=yAa5l92TtQ", "authors": "Haiming Wang,Huajian Xin,Zhengying Liu,Wenda Li,Yinya Huang,Jianqiao Lu,Zhicheng YANG,Jing Tang,Jian Yin,Zhenguo Li,Xiaodan Liang", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in automated theorem proving leverage language models to explore expanded search spaces by step-by-step proof generation. However, such approaches are usually based on short-sighted heuristics (e.g., log probability or value function scores) that potentially lead to suboptimal or even distracting subgoals, preventing us from finding longer proofs. To address this challenge, we propose POETRY (PrOvE Theorems RecursivelY), which proves theorems in a recursive, level-by-level manner in the Isabelle theorem prover. Unlike previous step-by-step methods, POETRY searches for a verifiable sketch of the proof at each level and focuses on solving the current level's theorem or conjecture. Detailed proofs of intermediate conjectures within the sketch are temporarily replaced by a placeholder tactic called sorry, deferring their proofs to subsequent levels. This approach allows the theorem to be tackled incrementally by outlining the overall theorem at the first level and then solving the intermediate conjectures at deeper levels. Experiments are conducted on the miniF2F and PISA datasets and significant performance gains are observed in our POETRY approach over state-of-the-art methods. POETRY on miniF2F achieves an average proving success rate improvement of 5.1%. Moreover, we observe a substantial increase in the maximum proof length found by POETRY, from 10 to 26.", "pdf": "https://openreview.net/pdf/c496858b5797ffde1be425dcc94d5a7221a5dfb9.pdf"} {"title": "Hierarchical and Density-based Causal Clustering", "url": "https://openreview.net/forum?id=5S5NVpd6PV", "detail_url": "https://openreview.net/forum?id=5S5NVpd6PV", "authors": "Kwangho Kim,Jisu Kim,Larry Wasserman,Edward Kennedy", "tags": "NIPS 2024,Poster", "abstract": "Understanding treatment effect heterogeneity is vital for scientific and policy research. However, identifying and evaluating heterogeneous treatment effects pose significant challenges due to the typically unknown subgroup structure.
Recently, a novel approach, causal k-means clustering, has emerged to assess the heterogeneity of treatment effects by applying the k-means algorithm to unknown counterfactual regression functions. In this paper, we expand upon this framework by integrating hierarchical and density-based clustering algorithms. We propose plug-in estimators which are simple and readily implementable using off-the-shelf algorithms. Unlike k-means clustering, which requires the margin condition, our proposed estimators do not rely on strong structural assumptions on the outcome process. We go on to study their rate of convergence, and show that, under minimal regularity conditions, the additional cost of causal clustering is essentially the estimation error of the outcome regression functions. Our findings significantly extend the capabilities of the causal clustering framework, thereby contributing to the progression of methodologies for identifying homogeneous subgroups in treatment response, consequently facilitating more nuanced and targeted interventions. The proposed methods also open up new avenues for clustering with generic pseudo-outcomes. We explore finite sample properties via simulation, and illustrate the proposed methods in voting and employment projection datasets.", "pdf": "https://openreview.net/pdf/9beb64359ad8971cbcb7489f5a4f2b36dc194767.pdf"} {"title": "PhyRecon: Physically Plausible Neural Scene Reconstruction", "url": "https://openreview.net/forum?id=QrE9QPq4ya", "detail_url": "https://openreview.net/forum?id=QrE9QPq4ya", "authors": "Junfeng Ni,Yixin Chen,Bohan Jing,Nan Jiang,Bin Wang,Bo Dai,Puhao Li,Yixin Zhu,Song-Chun Zhu,Siyuan Huang", "tags": "NIPS 2024,Poster", "abstract": "We address the issue of physical implausibility in multi-view neural reconstruction. While implicit representations have gained popularity in multi-view 3D reconstruction, previous work struggles to yield physically plausible results, limiting their utility in domains requiring rigorous physical accuracy. This lack of plausibility stems from the absence of physics modeling in existing methods and their inability to recover intricate geometrical structures. In this paper, we introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations. PHYRECON features a novel differentiable particle-based physical simulator built on neural implicit representations. Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points via our proposed Surface Points Marching Cubes (SP-MC), enabling differentiable learning with both rendering and physical losses. Additionally, PHYRECON models both rendering and physical uncertainty to identify and compensate for inconsistent and inaccurate monocular geometric priors. The physical uncertainty further facilitates physics-guided pixel sampling to enhance the learning of slender structures. By integrating these techniques, our model supports differentiable joint modeling of appearance, geometry, and physics. Extensive experiments demonstrate that PHYRECON significantly improves the reconstruction quality.
Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets, paving the way for future physics-based applications.", "pdf": "https://openreview.net/pdf/548cfe3c8194c5b55a33720177d0cfe03e37690f.pdf"} {"title": "Aligning Embeddings and Geometric Random Graphs: Informational Results and Computational Approaches for the Procrustes-Wasserstein Problem", "url": "https://openreview.net/forum?id=4NGlu45uyt", "detail_url": "https://openreview.net/forum?id=4NGlu45uyt", "authors": "Mathieu Even,Luca Ganassali,Jakob Maier,Laurent Massouli\u00e9", "tags": "NIPS 2024,Poster", "abstract": "The Procrustes-Wasserstein problem consists in matching two high-dimensional point clouds in an unsupervised setting, and has many applications in natural language processing and computer vision. \nWe consider a planted model with two datasets $X,Y$ that consist of $n$ datapoints in $\mathbb{R}^d$, where $Y$ is a noisy version of $X$, up to an orthogonal transformation and a relabeling of the data points. \nThis setting is related to the graph alignment problem in geometric models.\nIn this work, we focus on the Euclidean transport cost between the point clouds as a measure of performance for the alignment. We first establish information-theoretic results, in the high ($d \gg \log n$) and low ($d \ll \log n$) dimensional regimes. \nWe then study computational aspects and propose the \u2018Ping-Pong algorithm\u2019, alternately estimating the orthogonal transformation and the relabeling, initialized via a Frank-Wolfe convex relaxation. We give sufficient conditions for the method to retrieve the planted signal after a single step. We provide experimental results to compare the proposed approach with the state-of-the-art method of Grave et al. (2019).", "pdf": "https://openreview.net/pdf/35512c515117ce3536b46b23ef6ee913523f436e.pdf"} {"title": "FUSE: Fast Unified Simulation and Estimation for PDEs", "url": "https://openreview.net/forum?id=dbnEf790Kv", "detail_url": "https://openreview.net/forum?id=dbnEf790Kv", "authors": "Levi E. Lingsch,Dana Grund,Siddhartha Mishra,Georgios Kissas", "tags": "NIPS 2024,Poster", "abstract": "The joint prediction of continuous fields and statistical estimation of the underlying discrete parameters is a common problem for many physical systems, governed by PDEs. Hitherto, it has been separately addressed by employing operator learning surrogates for field prediction while using simulation-based inference (and its variants) for statistical parameter determination. Here, we argue that solving both problems within the same framework can lead to consistent gains in accuracy and robustness. To this end, we propose a novel and flexible formulation of the operator learning problem that jointly predicts continuous quantities and infers distributions of discrete parameters, thereby amortizing the cost of both the inverse and the surrogate models to a joint pre-training step. We present the capabilities of the proposed methodology for predicting continuous and discrete biomarkers in full-body haemodynamics simulations under different levels of missing information. We also consider a test case for atmospheric large-eddy simulation of a two-dimensional dry cold bubble, where we infer both continuous time-series and information about the system's conditions.
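As a concrete aside on the Procrustes-Wasserstein record above, a minimal scipy sketch of the 'Ping-Pong' alternation: estimate the orthogonal map given a matching (orthogonal Procrustes), then the matching given the map (a linear assignment). The seed-based warm start below is an illustrative stand-in for the Frank-Wolfe convex-relaxation initialization used in the paper.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d, n_seed = 200, 10, 20
X = rng.normal(size=(n, d))
Q_true = np.linalg.qr(rng.normal(size=(d, d)))[0]       # planted orthogonal map
pi = rng.permutation(n)                                 # planted relabeling
Y = X[pi] @ Q_true + 0.05 * rng.normal(size=(n, d))     # Y[j] ~ X[pi[j]] Q

# warm start from a few known correspondences (a toy substitute for the
# convex-relaxation initialization)
Q, _ = orthogonal_procrustes(X[pi[:n_seed]], Y[:n_seed])

for _ in range(5):                                      # "ping-pong" alternation
    rows, cols = linear_sum_assignment(-(X @ Q) @ Y.T)  # best matching given the map
    perm = np.empty(n, dtype=int)
    perm[cols] = rows                                   # X[perm[j]] is matched to Y[j]
    Q, _ = orthogonal_procrustes(X[perm], Y)            # best map given the matching

print("recovered fraction:", (perm == pi).mean())
```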
We present comparisons against different baselines to showcase significantly increased accuracy in both the inverse and the surrogate tasks.", "pdf": "https://openreview.net/pdf/1941b9c0e7faf85b94e0de62968964acbd121088.pdf"} {"title": "Unified Guidance for Geometry-Conditioned Molecular Generation", "url": "https://openreview.net/forum?id=HeoRsnaD44", "detail_url": "https://openreview.net/forum?id=HeoRsnaD44", "authors": "Sirine Ayadi,Leon Hetzel,Johanna Sommer,Fabian J Theis,Stephan G\u00fcnnemann", "tags": "NIPS 2024,Poster", "abstract": "Effectively designing molecular geometries is essential to advancing pharmaceutical innovations, a domain, which has experienced great attention through the success of generative models and, in particular, diffusion models. However, current molecular diffusion models are tailored towards a specific downstream task and lack adaptability. We introduce UniGuide, a framework for controlled geometric guidance of unconditional diffusion models that allows flexible conditioning during inference without the requirement of extra training or networks. We show how applications such as structure-based, fragment-based, and ligand-based drug design are formulated in the UniGuide\u00a0framework and demonstrate on-par or superior performance compared to specialised models. Offering a more versatile approach, UniGuide\u00a0has the potential to streamline the development of molecular generative models, allowing them to be readily used in diverse application scenarios.", "pdf": "https://openreview.net/pdf/704d9702fde4a6ea2dd90c4f71ddcbfcac8ab2e3.pdf"} {"title": "3D Equivariant Pose Regression via Direct Wigner-D Harmonics Prediction", "url": "https://openreview.net/forum?id=nw8cXoNvep", "detail_url": "https://openreview.net/forum?id=nw8cXoNvep", "authors": "Jongmin Lee,Minsu Cho", "tags": "NIPS 2024,Poster", "abstract": "Determining the 3D orientations of an object in an image, known as single-image pose estimation, is a crucial task in 3D vision applications. Existing methods typically learn 3D rotations parametrized in the spatial domain using Euler angles or quaternions, but these representations often introduce discontinuities and singularities. SO(3)-equivariant networks enable the structured capture of pose patterns with data-efficient learning, but the parametrizations in spatial domain are incompatible with their architecture, particularly spherical CNNs, which operate in the frequency domain to enhance computational efficiency. To overcome these issues, we propose a frequency-domain approach that directly predicts Wigner-D coefficients for 3D rotation regression, aligning with the operations of spherical CNNs. Our SO(3)-equivariant pose harmonics predictor overcomes the limitations of spatial parameterizations, ensuring consistent pose estimation under arbitrary rotations. Trained with a frequency-domain regression loss, our method achieves state-of-the-art results on benchmarks such as ModelNet10-SO(3) and PASCAL3D+, with significant improvements in accuracy, robustness, and data efficiency.", "pdf": "https://openreview.net/pdf/2c775e5dff2bb94590c91f17229f634168ee6ad9.pdf"} {"title": "A Separation in Heavy-Tailed Sampling: Gaussian vs. 
Stable Oracles for Proximal Samplers", "url": "https://openreview.net/forum?id=zuwLGhgxtQ", "detail_url": "https://openreview.net/forum?id=zuwLGhgxtQ", "authors": "Ye He,Alireza Mousavi-Hosseini,Krishna Balasubramanian,Murat A Erdogdu", "tags": "NIPS 2024,Poster", "abstract": "We study the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees, i.e., samplers that require only $\mathcal{O}(\log(1/\varepsilon))$ versus $\Omega(\text{poly}(1/\varepsilon))$ iterations to output a sample which is $\varepsilon$-close to the target in $\chi^2$-divergence. Our results are presented for proximal samplers that are based on Gaussian versus stable oracles. We show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. We also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved.", "pdf": "https://openreview.net/pdf/bd86dfe1f5fac662f55df1bccfbb1134cf9043ed.pdf"} {"title": "Pretraining with Random Noise for Fast and Robust Learning without Weight Transport", "url": "https://openreview.net/forum?id=DNGfCVBOnU", "detail_url": "https://openreview.net/forum?id=DNGfCVBOnU", "authors": "Jeonghwan Cheon,Sang Wan Lee,Se-Bum Paik", "tags": "NIPS 2024,Poster", "abstract": "The brain prepares for learning even before interacting with the environment, by refining and optimizing its structures through spontaneous neural activity that resembles random noise. However, the mechanism of such a process has yet to be understood, and it is unclear whether this process can benefit machine learning algorithms. Here, we study this issue using a neural network with a feedback alignment algorithm, demonstrating that pretraining neural networks with random noise increases the learning efficiency as well as generalization abilities without weight transport. First, we found that random noise training modifies forward weights to match backward synaptic feedback, which is necessary for teaching errors by feedback alignment. As a result, a network with pre-aligned weights learns notably faster and reaches higher accuracy than a network without random noise training, even comparable to the backpropagation algorithm. We also found that the effective dimensionality of weights decreases in a network pretrained with random noise. This pre-regularization allows the network to learn simple solutions of a low rank, reducing the generalization loss during subsequent training. This also enables the network to generalize robustly to a novel, out-of-distribution dataset. Lastly, we confirmed that random noise pretraining reduces the amount of meta-loss, enhancing the network's ability to adapt to various tasks.
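A toy numpy sketch of this setup: a two-layer network trained with feedback alignment (the error travels back through a fixed random matrix `B`, never through `W2.T`), "pretrained" purely on random inputs and random targets. Sizes, rates, and the alignment probe are illustrative assumptions; the paper is the authority on when this noise-driven pretraining aligns forward weights with the feedback weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 20, 30, 5, 0.01
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))      # fixed random feedback: no weight transport

def fa_step(x, y):
    """One feedback-alignment update: the output error is propagated back
    through the fixed random matrix B instead of W2.T."""
    global W1, W2
    h = np.tanh(W1 @ x)
    e = W2 @ h - y                           # output error
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer((B @ e) * (1 - h**2), x)

def alignment():
    """Cosine similarity between the forward weights W2 and the feedback B.T."""
    return (W2 * B.T).sum() / (np.linalg.norm(W2) * np.linalg.norm(B))

print("alignment before noise pretraining:", round(alignment(), 3))
for _ in range(5000):                        # "pretraining" on pure noise
    fa_step(rng.normal(size=n_in), rng.normal(size=n_out))
print("alignment after noise pretraining:", round(alignment(), 3))
```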
Overall, our results suggest that random noise training with feedback alignment offers a straightforward yet effective method of pretraining that facilitates quick and reliable learning without weight transport.", "pdf": "https://openreview.net/pdf/0888b42b058919a6572e5d014f727a8e6f016b64.pdf"} {"title": "MiSO: Optimizing brain stimulation to create neural activity states", "url": "https://openreview.net/forum?id=Gb0mXhn5h3", "detail_url": "https://openreview.net/forum?id=Gb0mXhn5h3", "authors": "Yuki Minai,Joana Soldado-Magraner,Matthew A. Smith,Byron M. Yu", "tags": "NIPS 2024,Poster", "abstract": "Brain stimulation has the potential to create desired neural population activity states. However, it is challenging to search the large space of stimulation parameters, for example, selecting which subset of electrodes to be used for stimulation. In this scenario, creating a model that maps the configuration of stimulation parameters to the brain\u2019s response can be beneficial. Training such an expansive model usually requires more stimulation-response samples than can be collected in a given experimental session. Furthermore, changes in the properties of the recorded activity over time can make it challenging to merge stimulation-response samples across sessions. To address these challenges, we propose MiSO (MicroStimulation Optimization), a closed-loop stimulation framework to drive neural population activity toward specified states by optimizing over a large stimulation parameter space. MiSO consists of three key components: 1) a neural activity alignment method to merge stimulation-response samples across sessions, 2) a statistical model trained on the merged samples to predict the brain's response to untested stimulation parameter configurations, and 3) an online optimization algorithm to adaptively update the stimulation parameter configuration based on the model's predictions. In this study, we implemented MiSO with a factor analysis (FA) based alignment method, a convolutional neural network (CNN), and an epsilon greedy optimization algorithm. We tested MiSO in closed-loop experiments using electrical microstimulation in the prefrontal cortex of a non-human primate. Guided by the CNN predictions, MiSO successfully searched amongst thousands of stimulation parameter configurations to drive the neural population activity toward specified states. More broadly, MiSO increases the clinical viability of neuromodulation technologies by enabling the use of many-fold larger stimulation parameter spaces.", "pdf": "https://openreview.net/pdf/9c703c3b27baa75a7bddacaabee244c200eb0e5b.pdf"} {"title": "Bayes-optimal learning of an extensive-width neural network from quadratically many samples", "url": "https://openreview.net/forum?id=R8znYRjxj3", "detail_url": "https://openreview.net/forum?id=R8znYRjxj3", "authors": "Antoine Maillard,Emanuele Troiani,Simon Martin,Florent Krzakala,Lenka Zdeborova", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of learning a target function corresponding to a single\nhidden layer neural network, with a quadratic activation function after the first layer,\nand random weights. We consider the asymptotic limit where the input dimension\nand the network width are proportionally large. 
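A minimal generator for the setup studied in the record above, under illustrative normalizations (the centering and scaling below are assumptions, not the paper's exact conventions): random first-layer weights, a quadratic activation after the first layer, Gaussian inputs, and quadratically many samples relative to the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100                                  # input dimension
m = d // 2                               # extensive width: m proportional to d
n = d ** 2                               # quadratically many samples
W = rng.normal(size=(m, d)) / np.sqrt(d) # random (fixed) first-layer weights

X = rng.normal(size=(n, d))              # Gaussian inputs
y = ((X @ W.T) ** 2 - 1.0).mean(axis=1)  # quadratic activation, centered and averaged
print(X.shape, y.shape)                  # (10000, 100) inputs, (10000,) targets
```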
Recent work [Cui et al., 2023]\nestablished that linear regression provides Bayes-optimal test error to learn such\na function when the number of available samples is only linear in the dimension.\nThat work stressed the open challenge of theoretically analyzing the optimal test\nerror in the more interesting regime where the number of samples is quadratic in\nthe dimension. In this paper, we solve this challenge for quadratic activations and\nderive a closed-form expression for the Bayes-optimal test error. We also provide an\nalgorithm, which we call GAMP-RIE, that combines approximate message passing\nwith rotationally invariant matrix denoising, and that asymptotically achieves the\noptimal performance. Technically, our result is enabled by establishing a link\nwith recent works on optimal denoising of extensive-rank matrices and on the\nellipsoid fitting problem. We further show empirically that, in the absence of\nnoise, randomly-initialized gradient descent seems to sample the space of weights,\nleading to zero training loss, and averaging over initialization leads to a test error\nequal to the Bayes-optimal one.", "pdf": "https://openreview.net/pdf/2f696d48108b0a09232520cb820238d0804e5ab1.pdf"} {"title": "Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model", "url": "https://openreview.net/forum?id=o9Lkiv1qpc", "detail_url": "https://openreview.net/forum?id=o9Lkiv1qpc", "authors": "Min Zhao,Hongzhou Zhu,Chendong Xiang,Kaiwen Zheng,Chongxuan Li,Jun Zhu", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have made substantial progress in image-to-video generation. However, in this paper, we find that these models tend to generate videos with less motion than expected. We attribute this to the issue called conditional image leakage, where the image-to-video diffusion models (I2V-DMs) tend to over-rely on the conditional image at large time steps. We further address this challenge from both inference and training aspects. First, we propose to start the generation process from an earlier time step to avoid the unreliable large time steps of I2V-DMs, as well as an initial noise distribution with optimal analytic expressions (Analytic-Init) by minimizing the KL divergence between it and the actual marginal distribution to bridge the training-inference gap. Second, we design a time-dependent noise distribution (TimeNoise) for the conditional image during training, applying higher noise levels at larger time steps to disrupt it and reduce the model's dependency on it. We validate these general strategies on various I2V-DMs on our collected open-domain image benchmark and the UCF101 dataset. Extensive results show that our methods outperform baselines by producing higher motion scores with lower errors while maintaining image alignment and temporal consistency, thereby yielding superior overall performance and enabling more accurate motion control. 
The project page: \\url{https://cond-image-leak.github.io/}.", "pdf": "https://openreview.net/pdf/ce3c145fb629337eae7208aec2f4d75453caa367.pdf"} {"title": "What Makes Partial-Label Learning Algorithms Effective?", "url": "https://openreview.net/forum?id=JpqEzPTuv6", "detail_url": "https://openreview.net/forum?id=JpqEzPTuv6", "authors": "Jiaqi Lv,Yangfan Liu,Shiyu Xia,Ning Xu,Miao Xu,Gang Niu,Min-Ling Zhang,Masashi Sugiyama,Xin Geng", "tags": "NIPS 2024,Poster", "abstract": "A partial label (PL) specifies a set of candidate labels for an instance and partial-label learning (PLL) trains multi-class classifiers with PLs.\nRecently, many methods that incorporate techniques from other domains have shown strong potential.\nThe expectation that stronger techniques would enhance performance has resulted in prominent PLL methods becoming not only highly complicated but also quite different from one another, making it challenging to choose the best direction for future algorithm design.\nWhile it is exciting to see higher performance, this leaves open a fundamental question: what makes a PLL method effective?\nWe present a comprehensive empirical analysis of this question and summarize the success of PLL so far into some minimal algorithm design principles.\nOur findings reveal that high accuracy on benchmark-simulated datasets with PLs can misleadingly amplify the perceived effectiveness of some general techniques, which may improve representation learning but have limited impact on addressing the inherent challenges of PLs. \nWe further identify the common behavior among successful PLL methods as a progressive transition from uniform to one-hot pseudo-labels, highlighting the critical role of mini-batch PL purification in achieving top performance.\nBased on our findings, we introduce a minimal working algorithm that is surprisingly simple yet effective, and propose an improved strategy to implement the design principles, suggesting a promising direction for improvements in PLL.", "pdf": "https://openreview.net/pdf/74c1e7226115548be8fe3a325788891fb6daa478.pdf"} {"title": "Optical Diffusion Models for Image Generation", "url": "https://openreview.net/forum?id=RY3rDQV0tQ", "detail_url": "https://openreview.net/forum?id=RY3rDQV0tQ", "authors": "Ilker Oguz,Niyazi Ulas Dinc,Mustafa Yildirim,Junjie Ke,Innfarn Yoo,QIFEI WANG,Feng Yang,Christophe Moser,Demetri Psaltis", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models generate new samples by progressively removing noise from an initially provided random distribution. This inference procedure generally utilizes a trained neural network numerous times to obtain the final output, creating significant latency and energy consumption on digital electronic hardware such as GPUs. In this study, we demonstrate that the propagation of a light beam through a transparent medium can be programmed to implement a denoising diffusion model on image samples. This framework projects noisy image patterns through passive diffractive optical layers, which collectively only transmit the predicted noise term in the image. The transparent optical layers are trained with an online training approach that backpropagates the error to an analytical model of the system; they are passive and remain fixed across the different denoising steps. 
Hence, this method enables high-speed image generation with minimal power consumption, benefiting from the bandwidth and energy efficiency of optical information processing.", "pdf": "https://openreview.net/pdf/fb29a42098ca727efc6a6f79396cd834a9f99b1c.pdf"} {"title": "FIARSE: Model-Heterogeneous Federated Learning via Importance-Aware Submodel Extraction", "url": "https://openreview.net/forum?id=bMbteQRhDI", "detail_url": "https://openreview.net/forum?id=bMbteQRhDI", "authors": "Feijie Wu,Xingchen Wang,Yaqing Wang,Tianci Liu,Lu Su,Jing Gao", "tags": "NIPS 2024,Poster", "abstract": "In federated learning (FL), accommodating clients' varied computational capacities poses a challenge, often limiting the participation of those with constrained resources in global model training. To address this issue, the concept of model heterogeneity through submodel extraction has emerged, offering a tailored solution that aligns the model's complexity with each client's computational capacity. In this work, we propose Federated Importance-Aware Submodel Extraction (FIARSE), a novel approach that dynamically adjusts submodels based on the importance of model parameters, thereby overcoming the limitations of previous static and dynamic submodel extraction methods. Compared to existing works, the proposed method offers a theoretical foundation for the submodel extraction and eliminates the need for additional information beyond the model parameters themselves to determine parameter importance, significantly reducing the overhead on clients. Extensive experiments are conducted on various datasets to showcase the superior performance of the proposed FIARSE.", "pdf": "https://openreview.net/pdf/c82b34cd9e4bcfbb61a51d8fca9a937b90c12be1.pdf"} {"title": "The Surprising Effectiveness of SP Voting with Partial Preferences", "url": "https://openreview.net/forum?id=CL9k2PaUQb", "detail_url": "https://openreview.net/forum?id=CL9k2PaUQb", "authors": "Hadi Hosseini,Debmalya Mandal,Amrit Puhan", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of recovering the ground truth ordering (ranking, top-$k$, or others) over a large number of alternatives. \nThe wisdom of the crowd is a heuristic approach based on Condorcet's Jury theorem to address this problem through collective opinions.\nThis approach fails to recover the ground truth when the majority of the crowd is misinformed. The \emph{surprisingly popular} (SP) algorithm~\citep{prelec2017solution} is an alternative approach that is able to recover the ground truth even when experts are in the minority. The SP algorithm requires the voters to predict other voters' reports in the form of a full probability distribution over all rankings of alternatives. However, when the number of alternatives, $m$, is large, eliciting the prediction report or even the vote over $m$ alternatives might be too costly. \nIn this paper, we design a scalable alternative to the SP algorithm which only requires eliciting partial preferences from the voters, and propose new variants of the SP algorithm. In particular, we propose two versions---\emph{Aggregated-SP} and \emph{Partial-SP}---that ask voters to report vote and prediction on a subset of size $k$ ($\ll m$) in terms of top alternative, partial rank, or an approval set. 
Through a large-scale crowdsourcing experiment on MTurk, we show that both of our approaches outperform conventional preference aggregation algorithms for the recovery of ground truth rankings, when measured in terms of Kendall-Tau distance and Spearman's $\rho$. We further analyze the collected data and demonstrate that voters' behavior in the experiment, including the minority of the experts, and the SP phenomenon, can be correctly simulated by a concentric mixture of Mallows models. Finally, we provide theoretical bounds on the sample complexity of SP algorithms with partial rankings to demonstrate the theoretical guarantees of the proposed methods.", "pdf": "https://openreview.net/pdf/90b4b8c73f1e8a80aadffc87dabbcfc099811129.pdf"} {"title": "CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition", "url": "https://openreview.net/forum?id=ykQnxko1cJ", "detail_url": "https://openreview.net/forum?id=ykQnxko1cJ", "authors": "Zhonglin Sun,Siyang Song,Ioannis Patras,Georgios Tzimiropoulos", "tags": "NIPS 2024,Poster", "abstract": "Privacy is a main concern in developing face recognition techniques. Although synthetic face images can partially mitigate potential legal risks while maintaining effective face recognition (FR) performance, FR models trained by face images synthesized by existing generative approaches frequently suffer from performance degradation problems due to the insufficient discriminative quality of these synthesized samples. In this paper, we systematically investigate what contributes to solid face recognition model training, and reveal that face images with a certain degree of similarity to their identity centers show great effectiveness in the performance of trained FR models. Inspired by this, we propose a novel diffusion-based approach (namely **Ce**nter-based Se**mi**-hard Synthetic Face\nGeneration (**CemiFace**)) which produces facial samples with various levels of similarity to the subject center, thus allowing the generation of face datasets containing effective discriminative samples for training face recognition. Experimental results show that with a modest degree of similarity, training on the generated dataset can produce competitive performance compared to previous generation methods. The code will be available at: https://github.com/szlbiubiubiu/CemiFace", "pdf": "https://openreview.net/pdf/30f78a219d61e45796c28fce873caf8b9bd87ab7.pdf"} {"title": "How to Boost Any Loss Function", "url": "https://openreview.net/forum?id=MLgFu6dQYc", "detail_url": "https://openreview.net/forum?id=MLgFu6dQYc", "authors": "Richard Nock,Yishay Mansour", "tags": "NIPS 2024,Poster", "abstract": "Boosting is a highly successful ML-born optimization setting in which one is required to computationally efficiently learn arbitrarily good models based on access to a weak learner oracle, providing classifiers performing at least slightly differently from random guessing. A key difference with gradient-based optimization is that boosting's original model does not require access to first order information about a loss, yet the decades-long history of boosting has quickly evolved it into a first order optimization setting -- sometimes even wrongfully *defining* it as such. 
Owing to recent progress extending gradient-based optimization to use only a loss' zeroth ($0^{th}$) order information to learn, this raises the question: which loss functions can be efficiently optimized with boosting, and what information is really needed for boosting to meet the *original* boosting blueprint's requirements?\n\nWe provide a constructive formal answer essentially showing that *any* loss function can be optimized with boosting and thus boosting can achieve a feat not yet known to be possible in the classical $0^{th}$ order setting, since loss functions are not required to be convex, differentiable, or Lipschitz -- and in fact not required to be continuous either. Some tools we use are rooted in quantum calculus, the mathematical field -- not to be confused with quantum computation -- that studies calculus without passing to the limit, and thus without using first order information.", "pdf": "https://openreview.net/pdf/f6580f29c952afa409605e985f0cb9dada9b343e.pdf"} {"title": "Diffusion-based Curriculum Reinforcement Learning", "url": "https://openreview.net/forum?id=yRhrVaDOWE", "detail_url": "https://openreview.net/forum?id=yRhrVaDOWE", "authors": "Erdi Sayar,Giovanni Iacca,Ozgur S. Oguz,Alois Knoll", "tags": "NIPS 2024,Poster", "abstract": "Curriculum Reinforcement Learning (CRL) is an approach to facilitate the learning process of agents by structuring tasks in a sequence of increasing complexity. Despite its potential, many existing CRL methods struggle to efficiently guide agents toward desired outcomes, particularly in the absence of domain knowledge. This paper introduces DiCuRL (Diffusion Curriculum Reinforcement Learning), a novel method that leverages conditional diffusion models to generate curriculum goals. To estimate how close an agent is to achieving its goal, our method uniquely incorporates a $Q$-function and a trainable reward function based on Adversarial Intrinsic Motivation within the diffusion model. Furthermore, it promotes exploration through the inherent noising and denoising mechanism present in the diffusion models and is environment-agnostic. This combination allows for the generation of challenging yet achievable goals, enabling agents to learn effectively without relying on domain knowledge. We demonstrate the effectiveness of DiCuRL in three different maze environments and two robotic manipulation tasks simulated in MuJoCo, where it outperforms or matches nine state-of-the-art CRL algorithms from the literature.", "pdf": "https://openreview.net/pdf/c6d5c73ad71d17c7a0d816c227738b96c959bc7e.pdf"} {"title": "SF-V: Single Forward Video Generation Model", "url": "https://openreview.net/forum?id=PVgAeMm3MW", "detail_url": "https://openreview.net/forum?id=PVgAeMm3MW", "authors": "Zhixing Zhang,Yanyu Li,Yushu Wu,yanwu xu,Anil Kag,Ivan Skorokhodov,Willi Menapace,Aliaksandr Siarohin,Junli Cao,Dimitris N. Metaxas,Sergey Tulyakov,Jian Ren", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based video generation models have demonstrated remarkable success in obtaining high-fidelity videos through the iterative denoising process. However, these models require multiple denoising steps during sampling, resulting in high computational costs. In this work, we propose a novel approach to obtain single-step video generation models by leveraging adversarial training to fine-tune pre-trained video diffusion models. 
We show that, through adversarial training, the multi-step video diffusion model, i.e., Stable Video Diffusion (SVD), can be trained to perform a single forward pass to synthesize high-quality videos, capturing both temporal and spatial dependencies in the video data. Extensive experiments demonstrate that our method achieves competitive generation quality of synthesized videos with significantly reduced computational overhead for the denoising process (i.e., around $23\times$ speedup compared with SVD and $6\times$ speedup compared with existing works, with even better generation quality), paving the way for real-time video synthesis and editing.", "pdf": "https://openreview.net/pdf/f2827983b13f042c1bdad973abf24e97a3e84c7f.pdf"} {"title": "GenRL: Multimodal-foundation world models for generalization in embodied agents", "url": "https://openreview.net/forum?id=za9Jx8yqUA", "detail_url": "https://openreview.net/forum?id=za9Jx8yqUA", "authors": "Pietro Mazzaglia,Tim Verbelen,Bart Dhoedt,Aaron Courville,Sai Rajeswar", "tags": "NIPS 2024,Poster", "abstract": "Learning generalist embodied agents, able to solve multitudes of tasks in different domains, is a long-standing problem. Reinforcement learning (RL) is hard to scale up as it requires a complex reward design for each task. In contrast, language can specify tasks in a more natural way. Current foundation vision-language models (VLMs) generally require fine-tuning or other adaptations to be adopted in embodied contexts, due to the significant domain gap. However, the lack of multimodal data in such domains represents an obstacle to developing foundation models for embodied applications. In this work, we overcome these problems by presenting multimodal-foundation world models, able to connect and align the representation of foundation VLMs with the latent space of generative world models for RL, without any language annotations. The resulting agent learning framework, GenRL, allows one to specify tasks through vision and/or language prompts, ground them in the embodied domain\u2019s dynamics, and learn the corresponding behaviors in imagination.\nAs assessed through large-scale multi-task benchmarking in locomotion and manipulation domains, GenRL enables multi-task generalization from language and visual prompts. Furthermore, by introducing a data-free policy learning strategy, our approach lays the groundwork for foundational policy learning using generative world models. \nWebsite, code and data: https://mazpie.github.io/genrl/", "pdf": "https://openreview.net/pdf/70954c88d0935070ce03152cf1eb67a2f2ac5a2e.pdf"} {"title": "Geometry Awakening: Cross-Geometry Learning Exhibits Superiority over Individual Structures", "url": "https://openreview.net/forum?id=347aDObXEa", "detail_url": "https://openreview.net/forum?id=347aDObXEa", "authors": "Yadong Sun,Xiaofeng Cao,Yu Wang,Wei Ye,Jingcai Guo,Qing Guo", "tags": "NIPS 2024,Poster", "abstract": "Recent research has underscored the efficacy of Graph Neural Networks (GNNs) in modeling diverse geometric structures within graph data. However, real-world graphs typically exhibit geometrically heterogeneous characteristics, rendering the confinement to a single geometric paradigm insufficient for capturing their intricate structural complexities. To address this limitation, we examine the performance of GNNs across various geometries through the lens of knowledge distillation (KD) and introduce a novel cross-geometric framework. 
This framework encodes graphs by integrating both Euclidean and hyperbolic geometries in a space-mixing fashion. Our approach employs multiple teacher models, each generating hint embeddings that encapsulate distinct geometric properties. We then implement a structure-wise knowledge transfer module that optimally leverages these embeddings within their respective geometric contexts, thereby enhancing the training efficacy of the student model. Additionally, our framework incorporates a geometric optimization network designed to bridge the distributional disparities among these embeddings. Experimental results demonstrate that our model-agnostic framework more effectively captures topological graph knowledge, resulting in superior performance of the student models when compared to traditional KD methodologies.", "pdf": "https://openreview.net/pdf/60b7074a61607731d449e9ad7ae0dde16cde6ff7.pdf"} {"title": "Structural Inference of Dynamical Systems with Conjoined State Space Models", "url": "https://openreview.net/forum?id=xQWJBeK5rh", "detail_url": "https://openreview.net/forum?id=xQWJBeK5rh", "authors": "Aoran Wang,Jun Pang", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces SICSM, a novel structural inference framework that integrates Selective State Space Models (selective SSMs) with Generative Flow Networks (GFNs) to handle the challenges posed by dynamical systems with irregularly sampled trajectories and partial observations. \nBy utilizing the robust temporal modeling capabilities of selective SSMs, our approach learns input-dependent transition functions that adapt to non-uniform time intervals, thereby enhancing the accuracy of structural inference. \nBy aggregating dynamics across diverse temporal dependencies and channeling them into the GFN, the SICSM adeptly approximates the posterior distribution of the system's structure. \nThis process not only enables precise inference of complex interactions within partially observed systems but also ensures the seamless integration of prior knowledge, enhancing the model\u2019s accuracy and robustness.\nExtensive evaluations on sixteen diverse datasets demonstrate that SICSM outperforms existing methods, particularly in scenarios characterized by irregular sampling and incomplete observations, which highlight its potential as a reliable tool for scientific discovery and system diagnostics in disciplines that demand precise modeling of complex interactions.", "pdf": "https://openreview.net/pdf/1b2f2766bee3909361bdfd7042fefaf2beb4f061.pdf"} {"title": "Faster Repeated Evasion Attacks in Tree Ensembles", "url": "https://openreview.net/forum?id=Ugr0yPzY71", "detail_url": "https://openreview.net/forum?id=Ugr0yPzY71", "authors": "Lorenzo Cascioli,Laurens Devos,Ondrej Kuzelka,Jesse Davis", "tags": "NIPS 2024,Poster", "abstract": "Tree ensembles are one of the most widely used model classes. However, these models are susceptible to adversarial examples, i.e., slightly perturbed examples that elicit a misprediction. There has been significant research on designing approaches to construct such examples for tree ensembles. But this is a computationally challenging problem that often must be solved a large number of times (e.g., for all examples in a training set). This is compounded by the fact that current approaches attempt to find such examples from scratch. In contrast, we exploit the fact that multiple similar problems are being solved. 
Specifically, our approach exploits the insight that adversarial examples for tree ensembles tend to perturb a consistent but relatively small set of features. We show that we can quickly identify this set of features and use this knowledge to speed up constructing adversarial examples.", "pdf": "https://openreview.net/pdf/cb636e341e2670f0e5474fcaac852c0990443b1a.pdf"} {"title": "WeiPer: OOD Detection using Weight Perturbations of Class Projections", "url": "https://openreview.net/forum?id=8HeUvbImKT", "detail_url": "https://openreview.net/forum?id=8HeUvbImKT", "authors": "Maximilian Granz,Manuel Heurich,Tim Landgraf", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in out-of-distribution (OOD) detection on image data show that pre-trained neural network classifiers can separate in-distribution (ID) from OOD data well, leveraging the class-discriminative ability of the model itself. Methods have been proposed that either use logit information directly or that process the model's penultimate layer activations. With \"WeiPer\", we introduce perturbations of the class projections in the final fully connected layer, which creates a richer representation of the input. We show that this simple trick can improve the OOD detection performance of a variety of methods and additionally propose a distance-based method that leverages the properties of the augmented WeiPer space. We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework, especially pronounced in difficult settings in which OOD samples are positioned close to the training set distribution. We support our findings with theoretical motivations and empirical observations, and run extensive ablations to provide insights into why WeiPer works. Our code is available at: https://github.com/mgranz/weiper.", "pdf": "https://openreview.net/pdf/75df1f882e41eb03a7be990d6ab5a5bb9d09f5b8.pdf"} {"title": "Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond", "url": "https://openreview.net/forum?id=NhucGZtikE", "detail_url": "https://openreview.net/forum?id=NhucGZtikE", "authors": "Alan Jeffares,Alicia Curth,Mihaela van der Schaar", "tags": "NIPS 2024,Poster", "abstract": "Deep learning sometimes appears to work in unexpected ways. In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network consisting of a sequence of first-order approximations telescoping out into a single empirically operational tool for practical analysis. Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena in the literature -- including double descent, grokking, linear mode connectivity, and the challenges of applying deep learning on tabular data -- highlighting that this model allows us to construct and extract metrics that help predict and understand the a priori unexpected performance of neural networks. 
We also demonstrate that this model presents a pedagogical formalism allowing us to isolate components of the training process even in complex contemporary settings, providing a lens to reason about the effects of design choices such as architecture & optimization strategy, and revealing surprising parallels between neural network learning and gradient boosting.", "pdf": "https://openreview.net/pdf/dbfc7ecef4c47ce42471f0b1fa745601a58d20d2.pdf"} {"title": "Stochastic Concept Bottleneck Models", "url": "https://openreview.net/forum?id=iSjqTQ5S1f", "detail_url": "https://openreview.net/forum?id=iSjqTQ5S1f", "authors": "Moritz Vandenhirtz,Sonia Laguna,Ri\u010dards Marcinkevi\u010ds,Julia E Vogt", "tags": "NIPS 2024,Poster", "abstract": "Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose *Stochastic Concept Bottleneck Models* (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts, thereby improving intervention effectiveness. Unlike previous approaches that model the concept relations via an autoregressive structure, we introduce an explicit, distributional parameterization that allows SCBMs to retain the CBMs' efficient training and inference procedure. \nAdditionally, we leverage the parameterization to derive an effective intervention strategy based on the confidence region. We show empirically on synthetic tabular and natural image datasets that our approach improves intervention effectiveness significantly. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations.", "pdf": "https://openreview.net/pdf/63df37a9a781b39408dfabd90823f0a1ff40b15d.pdf"} {"title": "A Foundation Model for Zero-shot Logical Query Reasoning", "url": "https://openreview.net/forum?id=JRSyMBBJi6", "detail_url": "https://openreview.net/forum?id=JRSyMBBJi6", "authors": "Mikhail Galkin,Jincheng Zhou,Bruno Ribeiro,Jian Tang,Zhaocheng Zhu", "tags": "NIPS 2024,Poster", "abstract": "Complex logical query answering (CLQA) in knowledge graphs (KGs) goes beyond simple KG completion and aims at answering compositional queries comprised of multiple projections and logical operations. Existing CLQA methods that learn parameters bound to certain entity or relation vocabularies can only be applied to the graph they are trained on, which requires substantial training time before being deployed on a new graph. Here we present UltraQuery, the first foundation model for inductive reasoning that can zero-shot answer logical queries on any KG. The core idea of UltraQuery is to derive both projections and logical operations as vocabulary-independent functions which generalize to new entities and relations in any KG.\nWith the projection operation initialized from a pre-trained inductive KG completion model, UltraQuery can solve CLQA on any KG after finetuning on a single dataset. 
Experimenting on 23 datasets, UltraQuery in the zero-shot inference mode shows competitive or better query answering performance than the best available baselines and sets a new state of the art on 15 of them.", "pdf": "https://openreview.net/pdf/ab8a9d83d6b2dbc48a452b9a40255213d891f552.pdf"} {"title": "All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation", "url": "https://openreview.net/forum?id=7vsx6PxAOH", "detail_url": "https://openreview.net/forum?id=7vsx6PxAOH", "authors": "Xu Zhang,Peiyao Guo,Ming Lu,Zhan Ma", "tags": "NIPS 2024,Poster", "abstract": "Image coding for multi-task applications, catering to both human perception and machine vision, has been extensively investigated. Existing methods often rely on multiple task-specific encoder-decoder pairs, leading to high overhead of parameter and bitrate usage, or face challenges in multi-objective optimization under a unified representation, failing to achieve both performance and efficiency. To this end, we propose Multi-Path Aggregation (MPA) integrated into existing coding models for joint human-machine vision, unifying the feature representation with an all-in-one architecture. MPA employs a predictor to allocate latent features among task-specific paths based on feature importance varied across tasks, maximizing the utility of shared features while preserving task-specific features for subsequent refinement. Leveraging feature correlations, we develop a two-stage optimization strategy to alleviate multi-task performance degradation. Upon the reuse of shared features, as little as 1.89\% of the parameters are further augmented and fine-tuned for a specific task, which completely avoids extensive optimization of the entire model. Experimental results show that MPA achieves performance comparable to state-of-the-art methods in both task-specific and multi-objective optimization across human viewing and machine analysis tasks. Moreover, our all-in-one design supports seamless transitions between human- and machine-oriented reconstruction, enabling task-controllable interpretation without altering the unified model. Code is available at https://github.com/NJUVISION/MPA.", "pdf": "https://openreview.net/pdf/4a06bf3a54c2d26c25ec358c67b8e93a50468228.pdf"} {"title": "An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization", "url": "https://openreview.net/forum?id=aFOdln7jBV", "detail_url": "https://openreview.net/forum?id=aFOdln7jBV", "authors": "Jincheng Cao,Ruichen Jiang,Erfan Yazdandoost Hamedani,Aryan Mokhtari", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we focus on simple bilevel optimization problems, where we minimize a convex smooth objective function over the optimal solution set of another convex smooth constrained optimization problem. We present a novel bilevel optimization method that locally approximates the solution set of the lower-level problem using a cutting plane approach and employs an accelerated gradient-based update to reduce the upper-level objective function over the approximated solution set. We measure the performance of our method in terms of suboptimality and infeasibility errors and provide non-asymptotic convergence guarantees for both error criteria. Specifically, when the feasible set is compact, we show that our method requires at most $\mathcal{O}(\max\\{1/\sqrt{\epsilon_{f}}, 1/\epsilon_g\\})$ iterations to find a solution that is $\epsilon_f$-suboptimal and $\epsilon_g$-infeasible. 
Moreover, under the additional assumption that the lower-level objective satisfies the $r$-th H\u00f6lderian error bound, we show that our method achieves an iteration complexity of $\mathcal{O}(\max\\{\epsilon_{f}^{-\frac{2r-1}{2r}},\epsilon_{g}^{-\frac{2r-1}{2r}}\\})$, which matches the optimal complexity of single-level convex constrained optimization when $r=1$.", "pdf": "https://openreview.net/pdf/c01a85af79169e01822c2eb2dcb8b34924c5e985.pdf"} {"title": "Unity by Diversity: Improved Representation Learning for Multimodal VAEs", "url": "https://openreview.net/forum?id=Z4R2rkPgBy", "detail_url": "https://openreview.net/forum?id=Z4R2rkPgBy", "authors": "Thomas M. Sutter,Yang Meng,Andrea Agostini,Daphn\u00e9 Chopard,Norbert Fortin,Julia E Vogt,Babak Shahbaba,Stephan Mandt", "tags": "NIPS 2024,Poster", "abstract": "Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation.\nCurrent architectures either share the encoder output, decoder input, or both across modalities to learn a shared representation. \nSuch architectures impose hard constraints on the model. \nIn this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior, softly guiding each modality's latent representation towards a shared aggregate posterior.\nThis approach results in a superior latent representation and allows each encoding to preserve information better from its uncompressed original features. In extensive experiments on multiple benchmark datasets and two challenging real-world datasets, we show improved learned latent representations and imputation of missing data modalities compared to existing methods.", "pdf": "https://openreview.net/pdf/387b178b70ee780b9939ef010177d6cfa5e41379.pdf"} {"title": "Bandits with Ranking Feedback", "url": "https://openreview.net/forum?id=aCaspFfAhG", "detail_url": "https://openreview.net/forum?id=aCaspFfAhG", "authors": "Davide Maran,Francesco Bacchiocchi,Francesco Emanuele Stradi,Matteo Castiglioni,Nicola Gatti,Marcello Restelli", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we introduce a novel variation of multi-armed bandits called bandits with ranking feedback. Unlike traditional bandits, this variation provides feedback to the learner that allows them to rank the arms based on previous pulls, without numerically quantifying the difference in performance. This type of feedback is well-suited for scenarios where the arms' values cannot be precisely measured using metrics such as monetary scores, probabilities, or occurrences. Common examples include human preferences in matchmaking problems. Furthermore, its investigation answers the theoretical question of whether numerical rewards are crucial in bandit settings. In particular, we study the problem of designing no-regret algorithms with ranking feedback in both the stochastic and adversarial settings. We show that, with stochastic rewards, differently from what happens with non-ranking feedback, no algorithm can suffer a logarithmic regret in the time horizon $T$ in the instance-dependent case. Furthermore, we provide two algorithms. The first, namely DREE, guarantees a superlogarithmic regret in $T$ in the instance-dependent case, thus matching our lower bound, while the second, namely R-LPE, guarantees a regret of $\mathcal{\widetilde O}(\sqrt{T})$ in the instance-independent case. 
Remarkably, we show that no algorithm can have an optimal regret bound in both instance-dependent and instance-independent cases. Finally, we prove that no algorithm can achieve a sublinear regret when the rewards are adversarial.", "pdf": "https://openreview.net/pdf/6d745c4cd4f694de71e80ccf2e4d1ba8bcb29efb.pdf"} {"title": "Towards Editing Time Series", "url": "https://openreview.net/forum?id=qu5NTwZtxA", "detail_url": "https://openreview.net/forum?id=qu5NTwZtxA", "authors": "Baoyu Jing,Shuqi Gu,Tianyu Chen,Zhiyu Yang,Dongsheng Li,Jingrui He,Kan Ren", "tags": "NIPS 2024,Poster", "abstract": "Synthesizing time series data is pivotal in modern society, aiding effective decision making and ensuring privacy preservation in various scenarios. Time series are associated with various attributes, including trends, seasonality, and external information such as location. Recent research has predominantly focused on random unconditional synthesis or conditional synthesis. Nonetheless, these paradigms generate time series from scratch and are incapable of manipulating existing time series samples. This paper introduces a novel task, called Time Series Editing (TSE), to synthesize time series by manipulating existing time series. The objective is to modify the given time series according to the specified attributes while preserving other properties unchanged. This task is not trivial due to the inadequacy of data coverage and the intricate relationships between time series and their attributes. To address these issues, we introduce a novel diffusion model, called TEdit. The proposed TEdit is trained using a novel bootstrap learning algorithm that effectively enhances the coverage of the original data. It is also equipped with an innovative multi-resolution modeling and generation paradigm to capture the complex relationships between time series and their attributes. Experimental results demonstrate the efficacy of TEdit for editing specified attributes upon the existing time series data. The project page is at https://seqml.github.io/tse.", "pdf": "https://openreview.net/pdf/aa6814d8038dae41d99378ac4404411bfc04b7c0.pdf"} {"title": "Thinking Forward: Memory-Efficient Federated Finetuning of Language Models", "url": "https://openreview.net/forum?id=dGQtja9X2C", "detail_url": "https://openreview.net/forum?id=dGQtja9X2C", "authors": "Kunjal Panchal,Nisarg Parikh,Sunav Choudhary,Lijun Zhang,Yuriy Brun,Hui Guan", "tags": "NIPS 2024,Poster", "abstract": "Finetuning large language models (LLMs) in federated learning (FL) settings has become increasingly important as it allows resource-constrained devices to finetune a model using private data. However, finetuning LLMs using backpropagation requires excessive memory (especially from intermediate activations) for resource-constrained devices. While Forward-mode Auto-Differentiation (AD) can significantly reduce memory footprint from activations, we observe that directly applying it to LLM finetuning results in slow convergence and poor accuracy. In this paper, we introduce Spry, an FL algorithm that splits trainable weights of an LLM among participating clients, such that each client computes gradients using forward-mode AD that are closer estimates of the true gradients. Spry achieves a low memory footprint, high accuracy, and fast convergence. We formally prove that the global gradients in Spry are unbiased estimators of true global gradients for homogeneous data distributions across clients, while heterogeneity increases the bias of the estimates. 
We also derive Spry's convergence rate, showing that the gradients decrease inversely proportionally to the number of FL rounds, indicating convergence up to the limits of heterogeneity. Empirically, Spry reduces the memory footprint during training by 1.4-7.1$\times$ in contrast to backpropagation, while reaching comparable accuracy, across a wide range of language tasks, models, and FL settings. \nSpry reduces the convergence time by 1.2-20.3$\times$ and achieves 5.2-13.5\% higher accuracy against state-of-the-art zero-order methods. When finetuning Llama2-7B with LoRA, compared to the peak memory consumption of 33.9GB of backpropagation, Spry only consumes 6.2GB of peak memory. For OPT13B, the reduction is from 76.5GB to 10.8GB. Spry makes feasible previously impossible FL deployments on commodity mobile and edge devices. Our source code is available for replication at https://github.com/Astuary/Spry.", "pdf": "https://openreview.net/pdf/4d5869beb1b5ae4cc55db0909ee3f4f4db51290b.pdf"} {"title": "UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections", "url": "https://openreview.net/forum?id=Ty25oVKTqj", "detail_url": "https://openreview.net/forum?id=Ty25oVKTqj", "authors": "Fangjinhua Wang,Marie-Julie Rakotosaona,Michael Niemeyer,Richard Szeliski,Marc Pollefeys,Federico Tombari", "tags": "NIPS 2024,Poster", "abstract": "Neural 3D scene representations have shown great potential for 3D reconstruction from 2D images. However, reconstructing real-world captures of complex scenes still remains a challenge. Existing generic 3D reconstruction methods often struggle to represent fine geometric details and do not adequately model reflective surfaces of large-scale scenes. Techniques that explicitly focus on reflective surfaces can model complex and detailed reflections by exploiting better reflection parameterizations. However, we observe that these methods are often not robust in real scenarios where non-reflective as well as reflective components are present. In this work, we propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections. We investigate both camera view as well as reflected view-based color parameterization techniques and find that explicitly blending these representations in 3D space enables reconstruction of surfaces that are more geometrically accurate, especially for reflective surfaces. We further combine this representation with a multi-resolution grid backbone that is trained in a coarse-to-fine manner, enabling faster reconstructions than prior methods. Extensive experiments on object-level datasets DTU, Shiny Blender as well as unbounded datasets Mip-NeRF 360 and Ref-NeRF real demonstrate that our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces, leading to the best overall performance. 
Project page: https://fangjinhuawang.github.io/UniSDF.", "pdf": "https://openreview.net/pdf/48c9b632bf49cfab4489cf5a1edb8093e92efc87.pdf"} {"title": "Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms", "url": "https://openreview.net/forum?id=6U5fCHIWOC", "detail_url": "https://openreview.net/forum?id=6U5fCHIWOC", "authors": "Rayna Andreeva,Benjamin Dupuis,Rik Sarkar,Tolga Birdal,Umut Simsekli", "tags": "NIPS 2024,Poster", "abstract": "We present a novel set of rigorous and computationally efficient topology-based complexity notions that exhibit a strong correlation with the generalization gap in modern deep neural networks (DNNs). DNNs show remarkable generalization properties, yet the source of these capabilities remains elusive, defying the established statistical learning theory. Recent studies have revealed that properties of training trajectories can be indicative of generalization. Building on this insight, state-of-the-art methods have leveraged the topology of these trajectories, particularly their fractal dimension, to quantify generalization. Most existing works compute this quantity by assuming continuous- or infinite-time training dynamics, complicating the development of practical estimators capable of accurately predicting generalization without access to test data. In this paper, we respect the discrete-time nature of training trajectories and investigate the underlying topological quantities that can be amenable to topological data analysis tools. This leads to a new family of reliable topological complexity measures that provably bound the generalization error, eliminating the need for restrictive geometric assumptions. These measures are computationally friendly, enabling us to propose simple yet effective algorithms for computing generalization indices. Moreover, our flexible framework can be extended to different domains, tasks, and architectures. Our experimental results demonstrate that our new complexity measures exhibit a strong correlation with generalization error in industry-standard architectures such as transformers and deep graph networks. Our approach consistently outperforms existing topological bounds across a wide range of datasets, models, and optimizers, highlighting the practical relevance and effectiveness of our complexity measures.", "pdf": "https://openreview.net/pdf/79567c549957519631e2997bf27bdd788f5e906c.pdf"} {"title": "BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference", "url": "https://openreview.net/forum?id=n0arS0DDot", "detail_url": "https://openreview.net/forum?id=n0arS0DDot", "authors": "Changwoo Lee,Soo Min Kwon,Qing Qu,Hun-Seok Kim", "tags": "NIPS 2024,Poster", "abstract": "Large-scale foundation models have demonstrated exceptional performance in language and vision tasks. However, the numerous dense matrix-vector operations involved in these large networks pose significant computational challenges during inference. To address these challenges, we introduce the Block-Level Adaptive STructured (BLAST) matrix, designed to learn and leverage efficient structures prevalent in the weight matrices of linear layers within deep learning models. Compared to existing structured matrices, the BLAST matrix offers substantial flexibility, as it can represent various types of structures that are either learned from data or computed from pre-existing weight matrices. 
We demonstrate the efficiency of using the BLAST matrix for compressing both language and vision tasks, showing that (i) for medium-sized models such as ViT and GPT-2, training with BLAST weights boosts performance while reducing complexity by 70\\% and 40\\%, respectively; and (ii) for large foundation models such as Llama-7B and DiT-XL, the BLAST matrix achieves a 2x compression while exhibiting the lowest performance degradation among all tested structured matrices. Our code is available at https://github.com/changwoolee/BLAST.", "pdf": "https://openreview.net/pdf/245959aba0605ea588b564289cc90eb61c7e3dd9.pdf"} {"title": "Boosting the Transferability of Adversarial Attack on Vision Transformer with Adaptive Token Tuning", "url": "https://openreview.net/forum?id=sNz7tptCH6", "detail_url": "https://openreview.net/forum?id=sNz7tptCH6", "authors": "Di Ming,Peng Ren,Yunlong Wang,Xin Feng", "tags": "NIPS 2024,Poster", "abstract": "Vision transformers (ViTs) perform exceptionally well in various computer vision tasks but remain vulnerable to adversarial attacks. Recent studies have shown that the transferability of adversarial examples exists for CNNs, and the same holds true for ViTs. However, existing ViT attacks aggressively regularize the largest token gradients to exact zero within each layer of the surrogate model, overlooking the interactions between layers, which limits their transferability in attacking black-box models. Therefore, in this paper, we focus on boosting the transferability of adversarial attacks on ViTs through adaptive token tuning (ATT). Specifically, we propose three optimization strategies: an adaptive gradient re-scaling strategy to reduce the overall variance of token gradients, a self-paced patch out strategy to enhance the diversity of input tokens, and a hybrid token gradient truncation strategy to weaken the effectiveness of attention mechanism. We demonstrate that scaling correction of gradient changes using gradient variance across different layers can produce highly transferable adversarial examples. In addition, introducing attentional truncation can mitigate the overfitting over complex interactions between tokens in deep ViT layers to further improve the transferability. On the other hand, using feature importance as a guidance to discard a subset of perturbation patches in each iteration, along with combining self-paced learning and progressively more sampled attacks, significantly enhances the transferability over attacks that use all perturbation patches. Extensive experiments conducted on ViTs, undefended CNNs, and defended CNNs validate the superiority of our proposed ATT attack method. On average, our approach improves the attack performance by 10.1% compared to state-of-the-art transfer-based attacks. Notably, we achieve the best attack performance with an average of 58.3% on three defended CNNs. 
Code is available at https://github.com/MisterRpeng/ATT.", "pdf": "https://openreview.net/pdf/0429c2dc8456d1a78c7c8157e892aae9cf46fa69.pdf"} {"title": "Implicit Regularization Paths of Weighted Neural Representations", "url": "https://openreview.net/forum?id=oXCmwwkQTZ", "detail_url": "https://openreview.net/forum?id=oXCmwwkQTZ", "authors": "Jin-Hong Du,Pratik Patil", "tags": "NIPS 2024,Poster", "abstract": "We study the implicit regularization effects induced by (observation) weighting of pretrained features.\nFor weight and feature matrices of bounded operator norms that are infinitesimally free with respect to (normalized) trace functionals, we derive equivalence paths connecting different weighting matrices and ridge regularization levels.\nSpecifically, we show that ridge estimators trained on weighted features along the same path are asymptotically equivalent when evaluated against test vectors of bounded norms.\nThese paths can be interpreted as matching the effective degrees of freedom of ridge estimators fitted with weighted features.\nFor the special case of subsampling without replacement, our results apply to independently sampled random features and kernel features and confirm recent conjectures (Conjectures 7 and 8) of the authors on the existence of such paths in Patil and Du (2023).\nWe also present an additive risk decomposition for ensembles of weighted estimators and show that the risks are equivalent along the paths when the ensemble size goes to infinity.\nAs a practical consequence of the path equivalences, we develop an efficient cross-validation method for tuning and apply it to subsampled pretrained representations across several models (e.g., ResNet-50) and datasets (e.g., CIFAR-100).", "pdf": "https://openreview.net/pdf/afbfa56e2585dc77250b4794887ebc050756e118.pdf"} {"title": "GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models", "url": "https://openreview.net/forum?id=vunJCq9PwU", "detail_url": "https://openreview.net/forum?id=vunJCq9PwU", "authors": "ZAITANG LI,Pin-Yu Chen,Tsung-Yi Ho", "tags": "NIPS 2024,Poster", "abstract": "Current studies on adversarial robustness mainly focus on aggregating \\textit{local} robustness results from a set of data samples to evaluate and rank different models. However, the local statistics may not well represent the true \\textit{global} robustness of the underlying unknown data distribution. To address this challenge, this paper makes the first attempt to present a new framework, called \\textit{GREAT Score}, for global robustness evaluation of adversarial perturbation using generative models. Formally, GREAT Score carries the physical meaning of a global statistic capturing a mean certified attack-proof perturbation level over all samples drawn from a generative model. For finite-sample evaluation, we also derive a probabilistic guarantee on the sample complexity and the difference between the sample mean and the true mean. GREAT Score has several advantages: (1) Robustness evaluations using GREAT Score are efficient and scalable to large models, by sparing the need of running adversarial attacks. In particular, we show high correlation and significantly reduced computation cost of GREAT Score when compared to the attack-based model ranking on RobustBench \\cite{croce2021robustbench}. (2) The use of generative models facilitates the approximation of the unknown data distribution. 
In our ablation study with different generative adversarial networks (GANs), we observe consistency between global robustness evaluation and the quality of GANs. (3) GREAT Score can be used for remote auditing of privacy-sensitive black-box models, as demonstrated by our robustness evaluation on several online facial recognition services.", "pdf": "https://openreview.net/pdf/9b455fd219afd050f8e2c5821f56718aa6c2426f.pdf"} {"title": "One-to-Multiple: A Progressive Style Transfer Unsupervised Domain-Adaptive Framework for Kidney Tumor Segmentation", "url": "https://openreview.net/forum?id=cMwSoXLCVi", "detail_url": "https://openreview.net/forum?id=cMwSoXLCVi", "authors": "Kai Hu,Jinhao Li,Yuan Zhang,Xiongjun Ye,Xieping Gao", "tags": "NIPS 2024,Poster", "abstract": "In multi-sequence Magnetic Resonance Imaging (MRI), the accurate segmentation of the kidney and tumor based on traditional supervised methods typically necessitates detailed annotation for each sequence, which is both time-consuming and labor-intensive. Unsupervised Domain Adaptation (UDA) methods can effectively mitigate inter-domain differences by aligning cross-modal features, thereby reducing the annotation burden. However, most existing UDA methods are limited to one-to-one domain adaptation, which tends to be inefficient and resource-intensive when faced with multi-target domain transfer tasks. To address this challenge, we propose a novel and efficient One-to-Multiple Progressive Style Transfer Unsupervised Domain-Adaptive (PSTUDA) framework for kidney and tumor segmentation in multi-sequence MRI. Specifically, we develop a multi-level style dictionary to explicitly store the style information of each target domain at various stages, which alleviates the burden of a single generator in a multi-target transfer task and enables effective decoupling of content and style. Concurrently, we employ multiple cascading style fusion modules that utilize point-wise instance normalization to progressively recombine content and style features, which enhances cross-modal alignment and structural consistency. Experiments conducted on the private MSKT and public KiTS19 datasets demonstrate the superiority of the proposed PSTUDA over comparative methods in multi-sequence kidney and tumor segmentation. The average Dice Similarity Coefficients are increased by at least 1.8% and 3.9%, respectively. 
Impressively, our PSTUDA not only significantly reduces the floating-point computation by approximately 72% but also reduces the number of model parameters by about 50%, bringing higher efficiency and feasibility to practical clinical applications.", "pdf": "https://openreview.net/pdf/dcb216c9e53776357a82d11791084f07c15c6f1f.pdf"} {"title": "HyperPrism: An Adaptive Non-linear Aggregation Framework for Distributed Machine Learning over Non-IID Data and Time-varying Communication Links", "url": "https://openreview.net/forum?id=3ie8NWA1El", "detail_url": "https://openreview.net/forum?id=3ie8NWA1El", "authors": "Haizhou Du,Yijian Chen,Ryan Yang,Yuchen Li,Linghe Kong", "tags": "NIPS 2024,Poster", "abstract": "While Distributed Machine Learning (DML) has been widely used to achieve decent performance, it is still challenging to take full advantage of data and devices distributed at multiple vantage points to adapt and learn; in particular, it is non-trivial to address two challenges under the linear aggregation framework: (1) heterogeneous learning data at different devices (i.e., non-IID data), resulting in model divergence, and (2) time-varying communication links that limit the ability of devices to reconcile model divergence. In this paper, we contribute HyperPrism, a non-linear aggregation framework that leverages distributed mirror descent, with averaging done in the mirror descent dual space, and adapts the degree of the Weighted Power Mean (WPM) used in each round. Moreover, HyperPrism can adaptively choose a different mapping for different layers of the local model with a dedicated hypernetwork per device, achieving automatic optimization of DML in high-divergence settings. We perform rigorous analysis and experimental evaluations to demonstrate the effectiveness of adaptive, mirror-mapping DML. In particular, we extend the generalizability of existing related works and position them as special cases within HyperPrism. Our experimental results show that HyperPrism can improve the convergence speed by up to 98.63% and scale well to more devices compared with the state-of-the-art, all with little additional computation overhead compared to traditional linear aggregation.", "pdf": "https://openreview.net/pdf/f864073e4fa2c28f9168aaa4ad28f8148fb9d809.pdf"} {"title": "Accelerating Matroid Optimization through Fast Imprecise Oracles", "url": "https://openreview.net/forum?id=0qb8KoPsej", "detail_url": "https://openreview.net/forum?id=0qb8KoPsej", "authors": "Franziska Eberle,Felix Hommelsheim,Alexander Lindermayr,Zhenwei Liu,Nicole Megow,Jens Schl\u00f6ter", "tags": "NIPS 2024,Poster", "abstract": "Querying complex models for precise information (e.g. traffic models, database systems, large ML models) often entails intense computations and results in long response times. Thus, weaker models which give imprecise results quickly can be advantageous, provided inaccuracies can be resolved using few queries to a stronger model. In the fundamental problem of computing a maximum-weight basis of a matroid, a well-known generalization of many combinatorial optimization problems, algorithms have access to a clean oracle to query matroid information. We additionally equip algorithms with a fast but dirty oracle. We design and analyze practical algorithms which use only a few clean queries w.r.t. the quality of the dirty oracle, while maintaining robustness against arbitrarily poor dirty oracles, approaching the performance of classic algorithms for the given problem. 
Notably, we prove that our algorithms are, in many respects, best-possible. Further, we outline extensions to other matroid oracle types, non-free dirty oracles and other matroid problems.", "pdf": "https://openreview.net/pdf/bfe61671c74fb06c0182c32252b26c64125ab10a.pdf"} {"title": "Latent Neural Operator for Solving Forward and Inverse PDE Problems", "url": "https://openreview.net/forum?id=VLw8ZyKfcm", "detail_url": "https://openreview.net/forum?id=VLw8ZyKfcm", "authors": "Tian Wang,Chuang Wang", "tags": "NIPS 2024,Poster", "abstract": "Neural operators, which learn the map from input sequences of observed samples to predicted values, effectively solve PDE problems from data without knowing the explicit equations. Most existing works build the model in the original geometric space, leading to high computational costs when the number of sample points is large. We present the Latent Neural Operator (LNO), which solves PDEs in the latent space. In particular, we first propose Physics-Cross-Attention (PhCA) to transform representations from the geometric space to the latent space, then learn the operator in the latent space, and finally recover the real-world geometric space via the inverse PhCA map. Our model retains the flexibility to decode values at any position, not limited to locations defined in the training set, and can therefore naturally perform interpolation and extrapolation tasks that are particularly useful for inverse problems. Moreover, the proposed LNO improves both prediction accuracy and computational efficiency. Experiments show that LNO reduces the GPU memory by 50%, speeds up training by 1.8 times, and reaches state-of-the-art accuracy on four out of six benchmarks for forward problems and a benchmark for the inverse problem. Code is available at https://github.com/L-I-M-I-T/LatentNeuralOperator.", "pdf": "https://openreview.net/pdf/faeb180b64c9c8a35b920ea406d0f13d6d8d4d88.pdf"} {"title": "Adam with model exponential moving average is effective for nonconvex optimization", "url": "https://openreview.net/forum?id=v416YLOQuU", "detail_url": "https://openreview.net/forum?id=v416YLOQuU", "authors": "Kwangjun Ahn,Ashok Cutkosky", "tags": "NIPS 2024,Poster", "abstract": "In this work, we offer a theoretical analysis of two modern optimization techniques for training large and complex models: (i) adaptive optimization algorithms, such as Adam, and (ii) the model exponential moving average (EMA). Specifically, we demonstrate that a clipped version of Adam with model EMA achieves the optimal convergence rates in various nonconvex optimization settings, both smooth and nonsmooth. Moreover, when the scale varies significantly across different coordinates, we demonstrate that the coordinate-wise adaptivity of Adam is provably advantageous. Notably, unlike previous analyses of Adam, our analysis crucially relies on its core elements---momentum and discounting factors---as well as model EMA, motivating their wide application in practice.", "pdf": "https://openreview.net/pdf/5ce039de7fa679b571456addaec6a4ae9de508a6.pdf"} {"title": "Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE", "url": "https://openreview.net/forum?id=oyl2Fnzune", "detail_url": "https://openreview.net/forum?id=oyl2Fnzune", "authors": "Xun Zhu,Ying Hu,Fanbin Mo,Miao Li,Ji Wu", "tags": "NIPS 2024,Poster", "abstract": "Multi-modal large language models (MLLMs) have shown impressive capabilities as a general-purpose interface for various visual and linguistic tasks.
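As background for the Adam-plus-EMA entry above, a minimal training loop that maintains a model EMA alongside (clipped) Adam might look as follows; the decay value and clipping threshold are illustrative, not taken from the paper:

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ema = {k: v.detach().clone() for k, v in model.state_dict().items()}
decay = 0.999  # illustrative EMA discounting factor

for step in range(1000):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # "clipped" Adam
    opt.step()
    with torch.no_grad():  # EMA of the weights, used at evaluation time
        for k, v in model.state_dict().items():
            ema[k].mul_(decay).add_(v, alpha=1 - decay)

# To evaluate with the averaged weights: model.load_state_dict(ema)
```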
However, building a unified MLLM for multi-task learning in the medical field remains a thorny challenge. To mitigate the tug-of-war problem of multi-modal multi-task optimization in MLLMs, recent advances primarily focus on improving the LLM components, while neglecting the connector that bridges the gap between modalities. In this paper, we introduce Uni-Med, a novel medical generalist foundation model that consists of a universal visual feature extraction module, a connector mixture-of-experts (CMoE) module, and an LLM. Benefiting from the proposed CMoE that leverages a well-designed router with a mixture of projection experts at the connector, Uni-Med achieves an efficient solution to the tug-of-war problem and can perform six different medical tasks including question answering, visual question answering, report generation, referring expression comprehension, referring expression generation and image classification. To the best of our knowledge, Uni-Med is the first effort to tackle multi-task interference at the connector in MLLMs. Extensive ablation experiments validate the effectiveness of introducing CMoE under any configuration, with average performance gains of up to 8%. We further provide interpretation analysis of the tug-of-war problem from the perspective of gradient optimization and parameter statistics. Compared to previous state-of-the-art medical MLLMs, Uni-Med achieves competitive or superior evaluation metrics on diverse tasks. Code and resources are available at https://github.com/MSIIP/Uni-Med.", "pdf": "https://openreview.net/pdf/10ce18d0d85e41e7de3cde824ce12541b90ae181.pdf"} {"title": "Mixture of Adversarial LoRAs: Boosting Robust Generalization in Meta-Tuning", "url": "https://openreview.net/forum?id=HxGdbAmYYr", "detail_url": "https://openreview.net/forum?id=HxGdbAmYYr", "authors": "Xu Yang,Chen Liu,Ying Wei", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces AMT, an \\textbf{A}dversarial \\textbf{M}eta-\\textbf{T}uning methodology, to boost the robust generalization of pre-trained models in out-of-domain (OOD) few-shot learning. To address the challenge of transferring knowledge from source domains to unseen target domains, we construct the robust LoRAPool by meta-tuning LoRAs with dual perturbations applied not only to the inputs but also to the singular values and vectors of the weight matrices at various robustness levels. On top of that, we introduce a simple yet effective test-time merging mechanism to dynamically merge discriminative LoRAs for test-time task customization. Extensive evaluations demonstrate that AMT yields significant improvements, up to 12.92\\% in clean generalization and up to 49.72\\% in adversarial generalization, over previous state-of-the-art methods across a diverse range of OOD few-shot image classification tasks on three benchmarks, confirming the effectiveness of our approach to boost the robust generalization of pre-trained models.
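One of AMT's two perturbation sites above, the singular values of a weight matrix, can be sketched as below. A real implementation would pick the perturbation direction adversarially from gradients rather than at random, so treat this as a shape-level illustration under that assumption:

```python
import torch

def perturb_singular_values(weight, eps=0.05):
    """Return a copy of `weight` whose singular values are perturbed;
    `eps` controls the robustness level (illustrative value)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    noise = eps * S.mean() * torch.randn_like(S)  # random stand-in direction
    return U @ torch.diag(S + noise) @ Vh
```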
Our code is available at \\href{https://github.com/xyang583/AMT}{https://github.com/xyang583/AMT}.", "pdf": "https://openreview.net/pdf/8cb9e7cf5cd51dc6e96dd5f002397c2a16552518.pdf"} {"title": "Tight Bounds for Learning RUMs from Small Slates", "url": "https://openreview.net/forum?id=0nSY8NiILP", "detail_url": "https://openreview.net/forum?id=0nSY8NiILP", "authors": "Flavio Chierichetti,Mirko Giacchini,Ravi Kumar,Alessandro Panconesi,Andrew Tomkins", "tags": "NIPS 2024,Poster", "abstract": "A Random Utility Model (RUM) is a classical model of user behavior defined by a distribution over $\\mathbb{R}^n$. A user, presented with a subset of $\\\\{1,\\ldots,n\\\\}$, will select the item of the subset with the highest utility, according to a utility vector drawn from the specified distribution. In practical settings, the subset is often of small size, as in the ``ten blue links'' of web search. \n\n\nIn this paper, we consider a learning setting with complete information on user choices from subsets of size at most $k$. We show that $k=\\Theta(\\sqrt{n})$ is both necessary and sufficient to predict the distribution of all user choices with an arbitrarily small, constant error.\n\n\nBased on the upper bound, we obtain new algorithms for approximate RUM learning and variations thereof. Furthermore, we employ our lower bound for approximate RUM learning to derive lower bounds to fractional extensions of the well-studied $k$-deck and trace reconstruction problems.", "pdf": "https://openreview.net/pdf/9ef75002cecee8b96f6b79d08954a5b5d66f3961.pdf"} {"title": "Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach", "url": "https://openreview.net/forum?id=VikufBLOW1", "detail_url": "https://openreview.net/forum?id=VikufBLOW1", "authors": "Mathilde Caron,Alireza Fathi,Cordelia Schmid,Ahmet Iscen", "tags": "NIPS 2024,Poster", "abstract": "Web-scale visual entity recognition, the task of associating images with their corresponding entities within vast knowledge bases like Wikipedia, presents significant challenges due to the lack of clean, large-scale training data. In this paper, we propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation. Instead of relying on the multimodal LLM to directly annotate data, which we found to be suboptimal, we prompt it to reason about potential candidate entity labels by accessing additional contextually relevant information (such as Wikipedia), resulting in more accurate annotations. We further use the multimodal LLM to enrich the dataset by generating question-answer pairs and a grounded fine-grained textual description (referred to as \"rationale\") that explains the connection between images and their assigned entities. Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks (e.g. 
+6.9% improvement in OVEN entity task), underscoring the importance of high-quality training data in this domain.", "pdf": "https://openreview.net/pdf/8e7f6eeeb50d5f68be80ebf0c8608289a5ba6a64.pdf"} {"title": "You Don\u2019t Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning", "url": "https://openreview.net/forum?id=7RwKMRMNrc", "detail_url": "https://openreview.net/forum?id=7RwKMRMNrc", "authors": "Th\u00e9o Moutakanni,Maxime Oquab,Marc Szafraniec,Maria Vakalopoulou,Piotr Bojanowski", "tags": "NIPS 2024,Poster", "abstract": "Self-Supervised Learning (SSL) with Joint-Embedding Architectures (JEA) has led to outstanding performance. All instantiations of this paradigm were trained using strong and well-established hand-crafted data augmentations, leading to the general belief that they are required for the proper training and performance of such models. On the other hand, generative reconstruction-based models such as BEIT and MAE or Joint-Embedding Predictive Architectures such as I-JEPA have shown strong performance without using data augmentations except masking. In this work, we challenge the importance of invariance and data-augmentation in JEAs at scale. By running a case-study on a recent SSL foundation model -- DINOv2 -- we show that strong image representations can be obtained with JEAs using only cropping without resizing, provided the training data is large enough, reaching state-of-the-art results with the least amount of augmentation in the literature. Through this study, we also discuss the impact of compute constraints on the outcomes of experimental deep learning research, showing that they can lead to very different conclusions.", "pdf": "https://openreview.net/pdf/d1fe3e57d0d1245c831c4c6be92fdf14b159e94a.pdf"} {"title": "Differentially Private Equivalence Testing for Continuous Distributions and Applications", "url": "https://openreview.net/forum?id=qDuqp1nZZ6", "detail_url": "https://openreview.net/forum?id=qDuqp1nZZ6", "authors": "Or Sheffet,Daniel Omer", "tags": "NIPS 2024,Poster", "abstract": "We present the first algorithm for testing equivalence between two continuous distributions using differential privacy (DP). Our algorithm is a private version of the algorithm of Diakonikolas et al. \nThe algorithm of Diakonikolas et al. uses the data itself to repeatedly discretize the real line so that --- when the two distributions are far apart in ${\\cal A}_k$-norm --- one of the discretized distributions exhibits large $L_2$-norm difference; and upon repeated sampling such a large gap would be detected. Designing its private analogue poses two difficulties. First, our DP algorithm cannot resample new datapoints, as a change to a single datapoint may lead to a very large change in the discretization of the real line. In contrast, the (sorted) index of the discretization point changes only by $1$ between neighboring instances, and so we use a novel algorithm that sets the discretization points using random Bernoulli noise, resulting in only a few buckets being affected under the right coupling. Second, our algorithm, which does not resample data, requires that we also revisit the utility analysis of the original algorithm and prove its correctness w.r.t. the original sorted data; a problem we tackle by sampling a subset of Poisson-drawn size from each discretized bin.
Lastly, since any distribution can be reduced to a continuous distribution, our algorithm carries over to multiple other families of distributions and thus has numerous applications.", "pdf": "https://openreview.net/pdf/f16088ff7a96775260e2b6bf5b1061b7e41ec0ad.pdf"} {"title": "MMSite: A Multi-modal Framework for the Identification of Active Sites in Proteins", "url": "https://openreview.net/forum?id=XHdwlbNSVb", "detail_url": "https://openreview.net/forum?id=XHdwlbNSVb", "authors": "Song Ouyang,Huiyu Cai,Yong Luo,Kehua Su,Lefei Zhang,Bo Du", "tags": "NIPS 2024,Poster", "abstract": "The accurate identification of active sites in proteins is essential for the advancement of life sciences and pharmaceutical development, as these sites are of critical importance for enzyme activity and drug design. Recent advancements in protein language models (PLMs), trained on extensive datasets of amino acid sequences, have significantly improved our understanding of proteins. However, compared to the abundant protein sequence data, functional annotations, especially precise per-residue annotations, are scarce, which limits the performance of PLMs. On the other hand, textual descriptions of proteins, which could be annotated by human experts or a pretrained protein sequence-to-text model, provide meaningful context that could assist in the functional annotations, such as the localization of active sites. This motivates us to construct a $\\textbf{ProT}$ein-$\\textbf{A}$ttribute text $\\textbf{D}$ataset ($\\textbf{ProTAD}$), comprising over 570,000 pairs of protein sequences and multi-attribute textual descriptions. Based on this dataset, we propose $\\textbf{MMSite}$, a multi-modal framework that improves the performance of PLMs to identify active sites by leveraging biomedical language models (BLMs). In particular, we incorporate manual prompting and design a MACross module to deal with the multi-attribute characteristics of textual descriptions. MMSite is a two-stage (\"First Align, Then Fuse\") framework: it first aligns the textual modality with the sequential modality through soft-label alignment, and then identifies active sites via multi-modal fusion. Experimental results demonstrate that MMSite achieves state-of-the-art performance compared to existing protein representation learning methods. The dataset and code implementation are available at https://github.com/Gift-OYS/MMSite.", "pdf": "https://openreview.net/pdf/bed87ee8878ddba7d48aae1e57c504b09040e052.pdf"} {"title": "Identifying Selections for Unsupervised Subtask Discovery", "url": "https://openreview.net/forum?id=hH4bPkOhhh", "detail_url": "https://openreview.net/forum?id=hH4bPkOhhh", "authors": "Yiwen Qiu,Yujia Zheng,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "When solving long-horizon tasks, it is intriguing to decompose the high-level task into subtasks. Decomposing experiences into reusable subtasks can improve data efficiency, accelerate policy generalization, and in general provide promising solutions to multi-task reinforcement learning and imitation learning problems. However, the concept of subtasks is not sufficiently understood and modeled yet, and existing works often overlook the true structure of the data generation process: subtasks are the results of a *selection* mechanism on actions, rather than possible underlying confounders or intermediates. Specifically, we provide a theory to identify, and experiments to verify the existence of selection variables in such data.
These selections serve as subgoals that indicate subtasks and guide policy. In light of this idea, we develop a sequential non-negative matrix factorization (seq-NMF) method to learn these subgoals and extract meaningful behavior patterns as subtasks. Our empirical results on a challenging Kitchen environment demonstrate that the learned subtasks effectively enhance the generalization to new tasks in multi-task imitation learning scenarios. The codes are provided at this [*link*](https://anonymous.4open.science/r/Identifying\\_Selections\\_for\\_Unsupervised\\_Subtask\\_Discovery/README.md).", "pdf": "https://openreview.net/pdf/b0224e269ed5be6ca37f512216e134ce72f15466.pdf"} {"title": "The Dormant Neuron Phenomenon in Multi-Agent Reinforcement Learning Value Factorization", "url": "https://openreview.net/forum?id=4NGrHrhJPx", "detail_url": "https://openreview.net/forum?id=4NGrHrhJPx", "authors": "Haoyuan Qin,Chennan Ma,Mian Deng,Zhengzhu Liu,Songzhu Mei,Xinwang Liu,Cheng Wang,Siqi Shen", "tags": "NIPS 2024,Poster", "abstract": "In this work, we study the dormant neuron phenomenon in multi-agent reinforcement learning value factorization, where the mixing network suffers from reduced network expressivity caused by an increasing number of inactive neurons. We demonstrate the presence of the dormant neuron phenomenon across multiple environments and algorithms, and show that this phenomenon negatively affects the learning process. We show that dormant neurons correlate with the existence of over-active neurons, which have large activation scores. To address the dormant neuron issue, we propose ReBorn, a simple but effective method that transfers the weights from over-active neurons to dormant neurons. We theoretically show that this method can ensure the learned action preferences are not forgotten after the weight-transferring procedure, which increases learning effectiveness. Our extensive experiments reveal that ReBorn achieves promising results across various environments and improves the performance of multiple popular value factorization approaches. The source code of ReBorn is available in \\url{https://github.com/xmu-rl-3dv/ReBorn}.", "pdf": "https://openreview.net/pdf/adcdf7fb5e791061801164bd5612eca66289c9de.pdf"} {"title": "Using Time-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs", "url": "https://openreview.net/forum?id=6n709MszkP", "detail_url": "https://openreview.net/forum?id=6n709MszkP", "authors": "Franziska Heeg,Ingo Scholtes", "tags": "NIPS 2024,Poster", "abstract": "Node centralities play a pivotal role in network science, social network analysis, and recommender systems.\nIn temporal data, static path-based centralities like closeness or betweenness can give misleading results about the true importance of nodes in a temporal graph. To address this issue, temporal generalizations of betweenness and closeness have been defined that are based on the shortest time-respecting paths between pairs of nodes. However, a major issue of those generalizations is that the calculation of such paths is computationally expensive.\nAddressing this issue, we study the application of De Bruijn Graph Neural Networks (DBGNN), a time-aware graph neural network architecture, to predict temporal path-based centralities in time series data.
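Stepping back to the ReBorn entry above: its weight transfer can be sketched roughly as pairing dormant output neurons with over-active ones and splitting the latter's incoming weights. The threshold and activity statistic below are illustrative, and a faithful version would also rescale downstream weights to preserve the layer's function:

```python
import torch

@torch.no_grad()
def reborn_transfer(weight, activity, dormant_thr=0.01):
    """weight: [out_features, in_features] of a mixing-network layer;
    activity: mean |activation| per output neuron on recent batches."""
    score = activity / (activity.mean() + 1e-8)
    dormant = (score <= dormant_thr).nonzero().flatten().tolist()
    overactive = score.argsort(descending=True)[: len(dormant)].tolist()
    for d, a in zip(dormant, overactive):
        # Split the over-active neuron's incoming weights with the dormant
        # one (downstream compensation omitted for brevity).
        weight[d] = weight[a] / 2
        weight[a] = weight[a] / 2
    return weight
```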
We experimentally evaluate our approach on 13 temporal graphs from biological and social systems and show that it considerably improves the prediction of betweenness and closeness centrality compared to (i) a static Graph Convolutional Neural Network, (ii) an efficient sampling-based approximation technique for temporal betweenness, and (iii) two state-of-the-art time-aware graph learning techniques for dynamic graphs.", "pdf": "https://openreview.net/pdf/3e7e79cbafb4f514fe7d61815677a94c980fb899.pdf"} {"title": "Multi-Scale Representation Learning for Protein Fitness Prediction", "url": "https://openreview.net/forum?id=kWMVzIdCEn", "detail_url": "https://openreview.net/forum?id=kWMVzIdCEn", "authors": "Zuobai Zhang,Pascal Notin,Yining Huang,Aurelie Lozano,Vijil Chenthamarakshan,Debora Susan Marks,Payel Das,Jian Tang", "tags": "NIPS 2024,Poster", "abstract": "Designing novel functional proteins crucially depends on accurately modeling their fitness landscape. Given the limited availability of functional annotations from wet-lab experiments, previous methods have primarily relied on self-supervised models trained on vast, unlabeled protein sequence or structure datasets. While initial protein representation learning studies solely focused on either sequence or structural features, recent hybrid architectures have sought to merge these modalities to harness their respective strengths. However, these sequence-structure models have so far achieved only incremental improvements when compared to the leading sequence-only approaches, highlighting unresolved challenges in effectively leveraging these modalities together. Moreover, the function of certain proteins is highly dependent on the granular aspects of their surface topology, which have been overlooked by prior models.\nTo address these limitations, we introduce the Sequence-Structure-Surface Fitness (**S3F**) model \u2014 a novel multimodal representation learning framework that integrates protein features across several scales. Our approach combines sequence representations from a protein language model with Geometric Vector Perceptron networks encoding protein backbone and detailed surface topology.
The proposed method achieves state-of-the-art fitness prediction on the ProteinGym benchmark encompassing 217 substitution deep mutational scanning assays, and provides insights into the determinants of protein function.\nOur code is at https://github.com/DeepGraphLearning/S3F.", "pdf": "https://openreview.net/pdf/42a9efb3a54368d5d17852708b09e753ae1f9ef0.pdf"} {"title": "Discrete Modeling via Boundary Conditional Diffusion Processes", "url": "https://openreview.net/forum?id=7AWMTPMZES", "detail_url": "https://openreview.net/forum?id=7AWMTPMZES", "authors": "Yuxuan Gu,Xiaocheng Feng,Lei Huang,Yingsheng Wu,Zekun Zhou,Weihong Zhong,kun Zhu,Bing Qin", "tags": "NIPS 2024,Poster", "abstract": "We present a novel framework for efficiently and effectively extending the powerful continuous diffusion processes to discrete modeling.\nPrevious approaches have suffered from the discrepancy between discrete data and continuous modeling.\nOur study reveals that the absence of guidance from discrete boundaries in learning probability contours is one of the main reasons.\nTo address this issue, we propose a two-step forward process that first estimates the boundary as a prior distribution and then rescales the forward trajectory to construct a boundary conditional diffusion model.\nThe reverse process is proportionally adjusted to guarantee that the learned contours yield more precise discrete data.\nExperimental results indicate that our approach achieves strong performance in both language modeling and discrete image generation tasks.\nIn language modeling, our approach surpasses previous state-of-the-art continuous diffusion language models in three translation tasks and a summarization task, while also demonstrating competitive performance compared to auto-regressive transformers. Moreover, our method achieves comparable results to continuous diffusion models when using discrete ordinal pixels and establishes a new state-of-the-art for categorical image generation on the CIFAR-10 dataset.", "pdf": "https://openreview.net/pdf/7f76468ef9524e51fc230dd01d451a25d531af01.pdf"} {"title": "A robust inlier identification algorithm for point cloud registration via $\\mathbf{\\ell_0}$-minimization", "url": "https://openreview.net/forum?id=BJrBaLoDRJ", "detail_url": "https://openreview.net/forum?id=BJrBaLoDRJ", "authors": "Yinuo Jiang,Tang Xiuchuan,Cheng Cheng,Ye Yuan", "tags": "NIPS 2024,Poster", "abstract": "Correspondences in point cloud registration are prone to outliers, significantly reducing registration accuracy and highlighting the need for precise inlier identification. In this paper, we propose a robust inlier identification algorithm for point cloud registration by reformulating the conventional registration problem as an alignment error $\\ell_0$-minimization problem. The $\\ell_0$-minimization problem is formulated for each local set, where those local sets are built on a compatibility graph of input correspondences. To resolve the $\\ell_0$-minimization, we develop a novel two-stage decoupling strategy, which first decouples the alignment error into a rotation fitting error and a translation fitting error. Second, null-space matrices are employed to decouple inlier identification from the estimation of rotation and translation respectively, thereby applying Bayesian theory to $\\ell_0$-minimization problems and solving for fitting errors. Correspondences with the smallest errors are identified as inliers to generate a transformation hypothesis for each local set.
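For the inlier-identification entry above, a minimal stand-in for hypothesis fitting and error-based inlier selection is the classical Kabsch alignment plus a residual threshold; this is standard background machinery, not the paper's $\ell_0$ solver:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping P onto Q (both [n, 3])."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def select_inliers(P, Q, keep=0.5):
    """Score correspondences by alignment error under a fitted hypothesis
    and keep the smallest errors as inliers (illustrative fraction)."""
    R, t = kabsch(P, Q)
    err = np.linalg.norm((R @ P.T).T + t - Q, axis=1)
    return np.where(err <= np.quantile(err, keep))[0]
```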
The best hypothesis is selected to perform registration. Experiments demonstrate that the proposed inlier identification algorithm is robust under high outlier ratios and noise. Extensive results on the KITTI, 3DMatch, and 3DLoMatch datasets demonstrate that our method achieves state-of-the-art performance compared to both traditional and learning-based methods in various indoor and outdoor scenes.", "pdf": "https://openreview.net/pdf/3d29514905c099e0f0cbcb3e35303bce76baa50f.pdf"} {"title": "LiteVAE: Lightweight and Efficient Variational Autoencoders for Latent Diffusion Models", "url": "https://openreview.net/forum?id=mTAbl8kUzq", "detail_url": "https://openreview.net/forum?id=mTAbl8kUzq", "authors": "Seyedmorteza Sadat,Jakob Buhmann,Derek Bradley,Otmar Hilliges,Romann M. Weber", "tags": "NIPS 2024,Poster", "abstract": "Advances in latent diffusion models (LDMs) have revolutionized high-resolution image generation, but the design space of the autoencoder that is central to these systems remains underexplored. In this paper, we introduce LiteVAE, a new autoencoder design for LDMs, which leverages the 2D discrete wavelet transform to enhance scalability and computational efficiency over standard variational autoencoders (VAEs) with no sacrifice in output quality. We investigate the training methodologies and the decoder architecture of LiteVAE and propose several enhancements that improve the training dynamics and reconstruction quality. Our base LiteVAE model matches the quality of the established VAEs in current LDMs with a six-fold reduction in encoder parameters, leading to faster training and lower GPU memory requirements, while our larger model outperforms VAEs of comparable complexity across all evaluated metrics (rFID, LPIPS, PSNR, and SSIM).", "pdf": "https://openreview.net/pdf/b1fce93bd97fc7aaa3b8834237bc72fd9abd3095.pdf"} {"title": "Symbolic Regression with a Learned Concept Library", "url": "https://openreview.net/forum?id=B7S4jJGlvl", "detail_url": "https://openreview.net/forum?id=B7S4jJGlvl", "authors": "Arya Grayeli,Atharva Sehgal,Omar Costilla Reyes,Miles Cranmer,Swarat Chaudhuri", "tags": "NIPS 2024,Poster", "abstract": "We present a novel method for symbolic regression (SR), the task of searching for compact programmatic hypotheses that best explain a dataset. The problem is commonly solved using genetic algorithms; we show that we can enhance such methods by inducing a library of abstract textual concepts. Our algorithm, called LaSR, \nuses zero-shot queries to a large language model (LLM) to discover and evolve concepts occurring in known high-performing hypotheses. We discover new hypotheses using a mix of standard evolutionary steps and LLM-guided steps (obtained through zero-shot LLM queries) conditioned on discovered concepts. Once discovered, hypotheses are used in a new round of concept abstraction and evolution. We validate LaSR on the Feynman equations, a popular SR benchmark, \nas well as a set of synthetic tasks. On these benchmarks, LaSR substantially outperforms a variety of state-of-the-art SR approaches based on deep learning and evolutionary algorithms.
Moreover, we show that LaSR can be used to discover a new and powerful scaling law for LLMs.", "pdf": "https://openreview.net/pdf/407edd744beb8d1728086dba69df316654b6febe.pdf"} {"title": "Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding", "url": "https://openreview.net/forum?id=wvQHQgnpGN", "detail_url": "https://openreview.net/forum?id=wvQHQgnpGN", "authors": "Chenhao Zhou,Zebang Shen,Chao Zhang,Hanbin Zhao,Hui Qian", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we propose a provably efficient natural policy gradient algorithm called Spectral Dynamic Embedding Policy Optimization (\\SDEPO) for two-player zero-sum stochastic Markov games with continuous state space and finite action space.\n In the policy evaluation procedure of our algorithm, a novel kernel embedding method is employed to construct a finite-dimensional linear approximation to the state-action value function.\n We explicitly analyze the approximation error in policy evaluation, and show that \\SDEPO\\ achieves an $\\tilde{O}(\\frac{1}{(1-\\gamma)^3\\epsilon})$ last-iterate convergence to the $\\epsilon$-optimal Nash equilibrium, which is independent of the cardinality of the state space.\n The complexity result matches the best-known results for global convergence of policy gradient algorithms for the single-agent setting.\n Moreover, we also propose a practical variant of \\SDEPO\\ to deal with continuous action space and empirical results demonstrate the practical superiority of the proposed method.", "pdf": "https://openreview.net/pdf/12158a5941ae353d09451219c72139c623a117bb.pdf"} {"title": "SuperDeepFool: a new fast and accurate minimal adversarial attack", "url": "https://openreview.net/forum?id=pqD7ckR8AF", "detail_url": "https://openreview.net/forum?id=pqD7ckR8AF", "authors": "Alireza Abdolahpourrostam,Mahed Abroshan,Seyed-Mohsen Moosavi-Dezfooli", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks have been known to be vulnerable to adversarial examples, which are inputs that are modified slightly to fool the network into making incorrect predictions. This has led to a significant amount of research on evaluating the robustness of these networks against such perturbations. One particularly important robustness metric is the robustness to minimal $\\ell_{2}$ adversarial perturbations. However, existing methods for evaluating this robustness metric are either computationally expensive or not very accurate. In this paper, we introduce a new family of adversarial attacks that strike a balance between effectiveness and computational efficiency. Our proposed attacks are generalizations of the well-known DeepFool (DF) attack, while they remain simple to understand and implement. We demonstrate that our attacks outperform existing methods in terms of both effectiveness and computational efficiency.
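Since the entry above generalizes DeepFool, a minimal $\ell_2$ DeepFool loop (the baseline attack in simplified form, without batching or input clamping) is sketched below; it repeatedly steps to the nearest linearized decision boundary:

```python
import torch

def deepfool_l2(model, x, num_classes, max_iter=50, overshoot=0.02):
    """Minimal l2 DeepFool for a single input x of shape [1, ...]."""
    x_adv = x.clone().detach().requires_grad_(True)
    orig = model(x_adv).argmax(dim=1).item()
    for _ in range(max_iter):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != orig:
            break  # label flipped: a (near-)minimal perturbation was found
        grads = [torch.autograd.grad(logits[0, k], x_adv, retain_graph=True)[0]
                 for k in range(num_classes)]
        best = None
        for k in range(num_classes):
            if k == orig:
                continue
            w = grads[k] - grads[orig]              # linearized boundary normal
            f = (logits[0, k] - logits[0, orig]).item()
            dist = abs(f) / (w.norm().item() + 1e-8)
            if best is None or dist < best[0]:
                best = (dist, w, f)
        _, w, f = best
        step = (abs(f) + 1e-4) / (w.norm() ** 2 + 1e-8) * w
        x_adv = (x_adv + (1 + overshoot) * step).detach().requires_grad_(True)
    return x_adv.detach()
```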
Our proposed attacks are also suitable for evaluating the robustness of large models and can be used to perform adversarial training (AT) to achieve state-of-the-art robustness to minimal $\\ell_{2}$ adversarial perturbations.", "pdf": "https://openreview.net/pdf/5c55feebf4e4e8379a8f6d39df9ec3dbd17d77b4.pdf"} {"title": "Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models", "url": "https://openreview.net/forum?id=PSVkinBs4u", "detail_url": "https://openreview.net/forum?id=PSVkinBs4u", "authors": "Zun Wang,Chang Liu,Nianlong Zou,He Zhang,Xinran Wei,Lin Huang,Lijun Wu,Bin Shao", "tags": "NIPS 2024,Poster", "abstract": "In this study, we introduce a unified neural network architecture, the Deep Equilibrium Density Functional Theory Hamiltonian (DEQH) model, which incorporates Deep Equilibrium Models (DEQs) for predicting Density Functional Theory (DFT) Hamiltonians. The DEQH model inherently captures the self-consistent nature of the Hamiltonian, a critical aspect often overlooked by traditional machine learning approaches for Hamiltonian prediction. By employing DEQ within our model architecture, we circumvent the need for DFT calculations during the training phase to introduce the Hamiltonian's self-consistency, thus addressing computational bottlenecks associated with large or complex systems. We propose a versatile framework that combines DEQ with off-the-shelf machine learning models for predicting Hamiltonians. When benchmarked on the MD17 and QH9 datasets, DEQHNet, an instantiation of the DEQH framework, has demonstrated a significant improvement in prediction accuracy. Beyond a predictor, the DEQH model is a Hamiltonian solver, in the sense that it uses the fixed-point solving capability of the deep equilibrium model to iteratively solve for the Hamiltonian. Ablation studies of DEQHNet further elucidate the network's effectiveness, offering insights into the potential of DEQ-integrated networks for Hamiltonian learning. We open-source our implementation at https://github.com/Zun-Wang/DEQHNet.", "pdf": "https://openreview.net/pdf/ba5d5d1575982a6fa2d9fff9789aa6d0a7aff912.pdf"} {"title": "Visual Data Diagnosis and Debiasing with Concept Graphs", "url": "https://openreview.net/forum?id=XNGsx3WCU9", "detail_url": "https://openreview.net/forum?id=XNGsx3WCU9", "authors": "Rwiddhi Chakraborty,Yinong Oliver Wang,Jialu Gao,Runkai Zheng,Cheng Zhang,Fernando De la Torre", "tags": "NIPS 2024,Poster", "abstract": "The widespread success of deep learning models today is owed to the curation of extensive datasets significant in size and complexity. However, such models frequently pick up inherent biases in the data during the training process, leading to unreliable predictions. Diagnosing and debiasing datasets is thus a necessity to ensure reliable model performance. In this paper, we present ConBias, a novel framework for diagnosing and mitigating Concept co-occurrence Biases in visual datasets. ConBias represents visual datasets as knowledge graphs of concepts, enabling meticulous analysis of spurious concept co-occurrences to uncover concept imbalances across the whole dataset. Moreover, we show that by employing a novel clique-based concept balancing strategy, we can mitigate these imbalances, leading to enhanced performance on downstream tasks.
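The diagnosis step of ConBias above amounts to counting how often concepts co-occur across images; a minimal sketch of that counting (the clique-based balancing itself is omitted):

```python
from collections import Counter
from itertools import combinations

def concept_cooccurrence(annotations):
    """Count concept pair co-occurrences from per-image concept lists;
    strongly imbalanced pairs are candidates for debiasing."""
    pair_counts = Counter()
    for concepts in annotations:
        for a, b in combinations(sorted(set(concepts)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# e.g. concept_cooccurrence([["cat", "sofa"], ["cat", "grass"], ["dog", "grass"]])
```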
Extensive experiments show that data augmentation based on the balanced concept distribution produced by ConBias improves generalization performance across multiple datasets compared to state-of-the-art methods.", "pdf": "https://openreview.net/pdf/4420cc628daccc962c995aacf09239960d4451bd.pdf"} {"title": "Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks", "url": "https://openreview.net/forum?id=nxL7eazKBI", "detail_url": "https://openreview.net/forum?id=nxL7eazKBI", "authors": "Jiacong Hu,Jing Gao,Jingwen Ye,Yang Gao,Xingen Wang,Zunlei Feng,Mingli Song", "tags": "NIPS 2024,Poster", "abstract": "With the rapid development of deep learning, the increasing complexity and scale of parameters make training a new model increasingly resource-intensive. In this paper, we start from the classic convolutional neural network (CNN) and explore a paradigm that does not require training to obtain new models. Similar to the birth of CNN inspired by receptive fields in the biological visual system, we draw inspiration from the information subsystem pathways in the biological visual system and propose Model Disassembling and Assembling (MDA). During model disassembling, we introduce the concept of relative contribution and propose a component locating technique to extract task-aware components from trained CNN classifiers. For model assembling, we present the alignment padding strategy and parameter scaling strategy to construct a new model tailored for a specific task, utilizing the disassembled task-aware components.\nThe entire process is akin to playing with LEGO bricks, enabling arbitrary assembly of new models, and providing a novel perspective for model creation and reuse. Extensive experiments showcase that task-aware components disassembled from CNN classifiers or new models assembled using these components closely match or even surpass the performance of the baseline,\ndemonstrating promising results for model reuse. Furthermore, MDA exhibits diverse potential applications, with comprehensive experiments exploring model decision route analysis, model compression, knowledge distillation, and more.", "pdf": "https://openreview.net/pdf/945f5c76c0a70e491cc75696fdda4b0ec30242b6.pdf"} {"title": "Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate", "url": "https://openreview.net/forum?id=B1FOes6cyq", "detail_url": "https://openreview.net/forum?id=B1FOes6cyq", "authors": "Can Jin,Tong Che,Hongwu Peng,Yiyuan Li,Dimitris N. Metaxas,Marco Pavone", "tags": "NIPS 2024,Poster", "abstract": "Generalization remains a central challenge in machine learning. In this work, we propose *Learning from Teaching* (**LoT**), a novel regularization technique for deep neural networks to enhance generalization. Inspired by the human ability to capture concise and abstract patterns, we hypothesize that generalizable correlations are expected to be easier to imitate. LoT operationalizes this concept to improve the generalization of the main model with auxiliary student learners. The student learners are trained by the main model and, in turn, provide feedback to help the main model capture more generalizable and imitable correlations. Our experimental results across several domains, including Computer Vision, Natural Language Processing, and methodologies like Reinforcement Learning, demonstrate that the introduction of LoT brings significant benefits compared to training models on the original dataset.
The results suggest the effectiveness and efficiency of LoT in identifying generalizable information at the right scales while discarding spurious data correlations, thus making LoT a valuable addition to current machine learning practice. Code is available at https://github.com/jincan333/LoT.", "pdf": "https://openreview.net/pdf/a3c9c48846be59a0f86f0e898a78797f68a38d43.pdf"} {"title": "Parameter Competition Balancing for Model Merging", "url": "https://openreview.net/forum?id=l5SbrtvSRS", "detail_url": "https://openreview.net/forum?id=l5SbrtvSRS", "authors": "Guodong DU,Junlin Lee,Jing Li,Runhua Jiang,Yifei Guo,Shuyang Yu,Hanting Liu,Sim Kuan Goh,Ho-Kin Tang,Daojing He,Min Zhang", "tags": "NIPS 2024,Poster", "abstract": "While fine-tuning pretrained models has become common practice, these models often underperform outside their specific domains. Recently developed model merging techniques enable the direct integration of multiple models, each fine-tuned for distinct tasks, into a single model. This strategy promotes multitasking capabilities without requiring retraining on the original datasets. However, existing methods fall short in addressing potential conflicts and complex correlations between tasks, especially in parameter-level adjustments, posing a challenge in effectively balancing parameter competition across various tasks. This paper introduces **PCB-Merging** (Parameter Competition Balancing), an innovative, *lightweight*, and *training-free* technique that adjusts the coefficients of each parameter for effective model merging. PCB-Merging employs intra-balancing to gauge parameter significance within individual tasks and inter-balancing to assess parameter similarities across different tasks. Parameters with low importance scores are dropped, and the remaining ones are rescaled to form the final merged model. We assessed our approach in diverse merging scenarios, including cross-task, cross-domain, and cross-training configurations, as well as out-of-domain generalization. The experimental results reveal that our approach achieves substantial performance enhancements across multiple modalities, domains, model sizes, number of tasks, fine-tuning forms, and large language models, outperforming existing model merging methods.", "pdf": "https://openreview.net/pdf/3ae464a0e9569a90e72beea9c144f50bef1f0f03.pdf"} {"title": "Collaborative Cognitive Diagnosis with Disentangled Representation Learning for Learner Modeling", "url": "https://openreview.net/forum?id=JxlQ2pbyzS", "detail_url": "https://openreview.net/forum?id=JxlQ2pbyzS", "authors": "Weibo Gao,Qi Liu,Linan Yue,Fangzhou Yao,Hao Wang,Yin Gu,Zheng Zhang", "tags": "NIPS 2024,Poster", "abstract": "Learners sharing similar implicit cognitive states often display comparable observable problem-solving performances. Leveraging collaborative connections among such similar learners proves valuable in comprehending human learning. Motivated by the success of collaborative modeling in various domains, such as recommender systems, we aim to investigate how collaborative signals among learners contribute to the diagnosis of human cognitive states (i.e., knowledge proficiency) in the context of intelligent education.\nThe primary challenges lie in identifying implicit collaborative connections and disentangling the entangled cognitive factors of learners for improved explainability and controllability in learner Cognitive Diagnosis (CD).
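A rough sketch of the drop-and-rescale merging step described in the PCB-Merging entry above, using plain task-vector magnitudes as a stand-in for the paper's intra/inter-balancing importance scores:

```python
import torch

def drop_and_rescale_merge(base, finetuned_models, keep=0.2):
    """Merge fine-tuned checkpoints into `base` by keeping only the
    highest-magnitude task-vector entries and rescaling the rest.
    base / finetuned_models: state dicts with identical keys."""
    merged = {}
    for k in base:
        deltas = torch.stack([m[k] - base[k] for m in finetuned_models])
        score = deltas.abs()                        # stand-in importance
        thr = torch.quantile(score.flatten(), 1 - keep)
        deltas = deltas * (score >= thr) / keep     # drop low scores, rescale
        merged[k] = base[k] + deltas.mean(dim=0)
    return merged
```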
However, there has been no work on CD capable of simultaneously modeling collaborative and disentangled cognitive states. To address this gap, we present Coral, a $\\underline{Co}$llabo$\\underline{ra}$tive cognitive diagnosis model with disentang$\\underline{l}$ed representation learning. Specifically, Coral first introduces a disentangled state encoder to achieve the initial disentanglement of learners' states.\nSubsequently, a meticulously designed collaborative representation learning procedure captures collaborative signals. It dynamically constructs a collaborative graph of learners by iteratively searching for optimal neighbors in a context-aware manner. Using the constructed graph, collaborative information is extracted through node representation learning. Finally, a decoding process aligns the initial cognitive states and collaborative states, achieving co-disentanglement with practice performance reconstructions.\nExtensive experiments demonstrate the superior performance of Coral, showcasing significant improvements over state-of-the-art methods across several real-world datasets.\nOur code is available at https://github.com/bigdata-ustc/Coral.", "pdf": "https://openreview.net/pdf/724e5236d4f5d086bb268c4e4248016823d5cd8f.pdf"} {"title": "Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models", "url": "https://openreview.net/forum?id=oLoqHRbXYE", "detail_url": "https://openreview.net/forum?id=oLoqHRbXYE", "authors": "Yuchen Hu,Chen Chen,Chao-Han Huck Yang,Chengwei Qin,Pin-Yu Chen,EngSiong Chng,Chao Zhang", "tags": "NIPS 2024,Poster", "abstract": "We propose an unsupervised adaptation framework, Self-TAught Recognizer (STAR), which leverages unlabeled data to enhance the robustness of automatic speech recognition (ASR) systems in diverse target domains, such as noise and accents. STAR is developed for prevalent speech foundation models based on Transformer-related architecture with auto-regressive decoding (e.g., Whisper, Canary). Specifically, we propose a novel indicator that empirically integrates step-wise information during decoding to assess the token-level quality of pseudo labels without ground truth, thereby guiding model updates for effective unsupervised adaptation. Experimental results show that STAR achieves an average of 13.5% relative reduction in word error rate across 14 target domains, and it sometimes even approaches the upper-bound performance of supervised adaptation. Surprisingly, we also observe that STAR prevents the adapted model from the common catastrophic forgetting problem without recalling source-domain data. Furthermore, STAR exhibits high data efficiency, requiring less than one hour of unlabeled data, and seamless generality to alternative large speech models and speech translation tasks. Our code will be open-sourced to the research community.", "pdf": "https://openreview.net/pdf/079a83ef1d5b8c3d225dd3ed926e2ec0289db3be.pdf"} {"title": "Typicalness-Aware Learning for Failure Detection", "url": "https://openreview.net/forum?id=SDWeIGPAh9", "detail_url": "https://openreview.net/forum?id=SDWeIGPAh9", "authors": "Yijun Liu,Jiequan Cui,Zhuotao Tian,Senqiao Yang,Qingdong He,wangxiaoling,Jingyong Su", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks (DNNs) often suffer from the overconfidence issue, where incorrect predictions are made with high confidence scores, hindering their application in critical systems.
In this paper, we propose a novel approach called Typicalness-Aware Learning (TAL) to address this issue and improve failure detection performance. \nWe observe that, with the cross-entropy loss, model predictions are optimized to align with the corresponding labels via increasing logit magnitude or refining logit direction. However, regarding atypical samples, the image content and their labels may exhibit disparities. This discrepancy can lead to overfitting on atypical samples, ultimately resulting in the overconfidence issue that we aim to address.\nTo this end, we devise a metric that quantifies the typicalness of each sample, enabling the dynamic adjustment of the logit magnitude during the training process. By allowing relatively atypical samples to be adequately fitted while preserving reliable logit direction, the problem of overconfidence can be mitigated. TAL has been extensively evaluated on benchmark datasets, and the results demonstrate its superiority over existing failure detection methods. Specifically, TAL achieves a more than 5\\% improvement on CIFAR100 in terms of the Area Under the Risk-Coverage Curve (AURC) compared to the state-of-the-art. Code is available at https://github.com/liuyijungoon/TAL.", "pdf": "https://openreview.net/pdf/bd6acddd61e24cf1beca749dc1ec0244e1721ee7.pdf"} {"title": "Do's and Don'ts: Learning Desirable Skills with Instruction Videos", "url": "https://openreview.net/forum?id=7X5zu6GIuW", "detail_url": "https://openreview.net/forum?id=7X5zu6GIuW", "authors": "Hyunseung Kim,Byungkun Lee,Hojoon Lee,Dongyoon Hwang,Donghu Kim,Jaegul Choo", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised skill discovery is a learning paradigm that aims to acquire diverse behaviors without explicit rewards. However, it faces challenges in learning complex behaviors and often leads to learning unsafe or undesirable behaviors. For instance, in various continuous control tasks, current unsupervised skill discovery methods succeed in learning basic locomotion skills like standing but struggle with learning more complex movements such as walking and running. Moreover, they may acquire unsafe behaviors like tripping and rolling or navigate to undesirable locations such as pitfalls or hazardous areas. In response, we present **DoDont** (Do\u2019s and Dont\u2019s), an instruction-based skill discovery algorithm composed of two stages. First, in the instruction learning stage, DoDont leverages action-free instruction videos to train an instruction network to distinguish desirable transitions from undesirable ones. Then, in the skill learning stage, the instruction network adjusts the reward function of the skill discovery algorithm to weight the desired behaviors. \nSpecifically, we integrate the instruction network into a distance-maximizing skill discovery algorithm, where the instruction network serves as the distance function. Empirically, with fewer than 8 instruction videos, DoDont effectively learns desirable behaviors and avoids undesirable ones across complex continuous control tasks.
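The skill-learning stage of DoDont above reduces to multiplying the distance-maximizing intrinsic reward by the instruction network's desirability score; a one-function sketch with all names illustrative:

```python
def dodont_reward(instruction_net, s, s_next, distance_fn):
    """Intrinsic reward for the skill-learning stage: the instruction
    network scores the transition (s, s_next) in [0, 1] (trained on the
    Do's/Don'ts videos) and reweights the distance-based reward."""
    desirability = instruction_net(s, s_next)
    return desirability * distance_fn(s, s_next)
```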
Code and videos are available at https://mynsng.github.io/dodont/", "pdf": "https://openreview.net/pdf/e431128608ec0568531d79024da632fe6fcd342d.pdf"} {"title": "Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks", "url": "https://openreview.net/forum?id=IUKff7nYmW", "detail_url": "https://openreview.net/forum?id=IUKff7nYmW", "authors": "Dmitry Kovalev,Ekaterina Borodich,Alexander Gasnikov,Dmitrii Feoktistov", "tags": "NIPS 2024,Poster", "abstract": "We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network. This problem is relatively well-studied in the scenario when the objective functions are smooth, or the links of the network are fixed in time, or both. In particular, lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established, along with matching optimal algorithms. However, the remaining and most challenging setting of non-smooth decentralized optimization over time-varying networks is largely underexplored, as neither lower bounds nor optimal algorithms are known in the literature. We resolve this fundamental gap with the following contributions: (i) we establish the first lower bounds on the communication and subgradient computation complexities of solving non-smooth convex decentralized optimization problems over time-varying networks; (ii) we develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.", "pdf": "https://openreview.net/pdf/eadd1f74195b27ec41cd11718f0be510ed7e9623.pdf"} {"title": "LaSCal: Label-Shift Calibration without target labels", "url": "https://openreview.net/forum?id=TALJtWX7w4", "detail_url": "https://openreview.net/forum?id=TALJtWX7w4", "authors": "Teodora Popordanoska,Gorjan Radevski,Tinne Tuytelaars,Matthew B. Blaschko", "tags": "NIPS 2024,Poster", "abstract": "When machine learning systems face dataset shift, model calibration plays a pivotal role in ensuring their reliability.\nCalibration error (CE) provides insights into the alignment between the predicted confidence scores and the classifier accuracy.\nWhile prior works have delved into the implications of dataset shift on calibration, existing CE estimators either (i) assume access to labeled data from the target domain, often unavailable in practice, or (ii) are derived under a covariate shift assumption.\nIn this work we propose a novel, label-free, consistent CE estimator under label shift. Label shift is characterized by changes in the marginal label distribution p(Y), with a constant conditional p(X|Y) distribution between the source and target. We introduce a novel calibration method, called LaSCal, which uses the estimator in conjunction with a post-hoc calibration strategy, to perform unsupervised calibration on the target distribution. 
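For readers unfamiliar with the label-shift setting assumed by LaSCal above, the classic way to estimate the shifted label marginal from unlabeled target data is confusion-matrix inversion (BBSE); the sketch below shows that standard estimator as background, not LaSCal's CE estimator:

```python
import numpy as np

def label_shift_weights(preds_src, y_src, preds_tgt, num_classes):
    """Estimate w[j] ~ p_tgt(y=j) / p_src(y=j) from hard predictions,
    assuming the (soft) confusion matrix is invertible."""
    C = np.zeros((num_classes, num_classes))
    for p, y in zip(preds_src, y_src):
        C[p, y] += 1
    C /= len(y_src)                    # C[i, j] = P_src(pred = i, y = j)
    mu = np.bincount(preds_tgt, minlength=num_classes) / len(preds_tgt)
    w = np.linalg.solve(C, mu)         # invert against the target marginal
    return np.clip(w, 0.0, None)
```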
Our thorough empirical analysis demonstrates the effectiveness and reliability of the proposed approach across different modalities, model architectures and label shift intensities.", "pdf": "https://openreview.net/pdf/ae3d8df478e4f01f9a6bf23168a93efc83a3f41f.pdf"} {"title": "VideoTetris: Towards Compositional Text-to-Video Generation", "url": "https://openreview.net/forum?id=RPM7STrnVz", "detail_url": "https://openreview.net/forum?id=RPM7STrnVz", "authors": "Ye Tian,Ling Yang,Haotian Yang,Yuan Gao,Yufan Deng,Xintao Wang,Zhaochen Yu,Xin Tao,Pengfei Wan,Di ZHANG,Bin CUI", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have demonstrated great success in text-to-video (T2V) generation. However, existing methods may face challenges when handling complex (long) video generation scenarios that involve multiple objects or dynamic changes in object numbers. To address these limitations, we propose VideoTetris, a novel framework that enables compositional T2V generation. Specifically, we propose spatio-temporal compositional diffusion to precisely follow complex textual semantics by manipulating and composing the attention maps of denoising networks spatially and temporally. Moreover, we propose a new dynamic-aware data processing pipeline and a consistency regularization method to enhance the consistency of auto-regressive video generation. Extensive experiments demonstrate that our VideoTetris achieves impressive qualitative and quantitative results in compositional T2V generation. Code is available at: https://github.com/YangLing0818/VideoTetris", "pdf": "https://openreview.net/pdf/bb68997ead13efc218660269a1fa4d189f0588a2.pdf"} {"title": "Similarity-Navigated Conformal Prediction for Graph Neural Networks", "url": "https://openreview.net/forum?id=iBZSOh027z", "detail_url": "https://openreview.net/forum?id=iBZSOh027z", "authors": "Jianqing Song,Jianguo Huang,Wenyu Jiang,Baoming Zhang,Shuangjie Li,Chongjun Wang", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks have achieved remarkable accuracy in semi-supervised node classification tasks. However, these results lack reliable uncertainty estimates. Conformal prediction methods provide a theoretical guarantee for node classification tasks, ensuring that the conformal prediction set contains the ground-truth label with a desired probability (e.g., 95\\%). In this paper, we empirically show that for each node, aggregating the non-conformity scores of nodes with the same label can improve the efficiency of conformal prediction sets while maintaining valid marginal coverage. This observation motivates us to propose a novel algorithm named $\\textit{Similarity-Navigated Adaptive Prediction Sets}$ (SNAPS), which aggregates the non-conformity scores based on feature similarity and structural neighborhood. The key idea behind SNAPS is that nodes with high feature similarity or direct connections tend to have the same label. By incorporating adaptive similar nodes information, SNAPS can generate compact prediction sets and increase the singleton hit ratio (correct prediction sets of size one). Moreover, we theoretically provide a finite-sample coverage guarantee of SNAPS. 
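The aggregation idea in the SNAPS entry above can be sketched in a few lines: mix each node's non-conformity scores with those of its feature-similar or adjacent nodes before forming prediction sets (the mixing weight here is illustrative):

```python
import numpy as np

def aggregate_scores(scores, neighbors, lam=0.5):
    """scores: [n_nodes, n_classes] non-conformity scores;
    neighbors: list of index arrays of similar/adjacent nodes."""
    agg = scores.copy()
    for i, nbr in enumerate(neighbors):
        if len(nbr):
            agg[i] = (1 - lam) * scores[i] + lam * scores[nbr].mean(axis=0)
    return agg

# Prediction sets then follow standard split conformal prediction: include
# class c for node i iff agg[i, c] <= the (1 - alpha) calibration quantile.
```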
Extensive experiments demonstrate the superiority of SNAPS, improving the efficiency of prediction sets and singleton hit ratio while maintaining valid coverage.", "pdf": "https://openreview.net/pdf/62619864f0a9209625a3af9c92693c01013c0721.pdf"} {"title": "Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms", "url": "https://openreview.net/forum?id=U9e1d2xOc8", "detail_url": "https://openreview.net/forum?id=U9e1d2xOc8", "authors": "Dimitri Meunier,Zikai Shen,Mattes Mollenhauer,Arthur Gretton,Zhu Li", "tags": "NIPS 2024,Poster", "abstract": "We study theoretical properties of a broad class of regularized algorithms with vector-valued output. These spectral algorithms include kernel ridge regression, kernel principal component regression and various implementations of gradient descent. Our contributions are twofold. First, we rigorously confirm the so-called saturation effect for ridge regression with vector-valued output by deriving a novel lower bound on learning rates; this bound is shown to be suboptimal when the smoothness of the regression function exceeds a certain level.\nSecond, we present an upper bound on the finite sample risk for general vector-valued spectral algorithms, applicable to both well-specified and misspecified scenarios (where the true regression function lies outside of the hypothesis space), and show that this bound is minimax optimal in various regimes. All of our results explicitly allow the case of infinite-dimensional output variables, proving consistency of recent practical applications.", "pdf": "https://openreview.net/pdf/36250b9ba08493bfa5b0047a059de9faff07da43.pdf"} {"title": "Targeted Sequential Indirect Experiment Design", "url": "https://openreview.net/forum?id=U3Rgdb4li9", "detail_url": "https://openreview.net/forum?id=U3Rgdb4li9", "authors": "Elisabeth Ailer,Niclas Dern,Jason Hartford,Niki Kilbertus", "tags": "NIPS 2024,Poster", "abstract": "Scientific hypotheses typically concern specific aspects of complex, imperfectly understood or entirely unknown mechanisms, such as the effect of gene expression levels on phenotypes or how microbial communities influence environmental health. Such queries are inherently causal (rather than purely associational), but in many settings, experiments cannot be conducted directly on the target variables of interest and are instead indirect: they perturb the target variable, but do not remove potential confounding factors. If, additionally, the resulting experimental measurements are high-dimensional and the studied mechanisms nonlinear, the query of interest is generally not identified. We develop an adaptive strategy to design indirect experiments that optimally inform a targeted query about the ground truth mechanism in terms of sequentially narrowing the gap between an upper and lower bound on the query.
While the general formulation consists of a bi-level optimization procedure, we derive an efficiently estimable analytical kernel-based estimator of the bounds for the causal effect, a query of key interest, and demonstrate the efficacy of our approach in confounded, multivariate, nonlinear synthetic settings.", "pdf": "https://openreview.net/pdf/d6dbe3094fc9ef1881705acaf521213cc8dcb314.pdf"} {"title": "Conformalized Multiple Testing after Data-dependent Selection", "url": "https://openreview.net/forum?id=8wvH0RZPsG", "detail_url": "https://openreview.net/forum?id=8wvH0RZPsG", "authors": "Xiaoning Wang,Yuyang Huo,Liuhua Peng,Changliang Zou", "tags": "NIPS 2024,Poster", "abstract": "The task of distinguishing individuals of interest from a vast pool of candidates using predictive models has garnered significant attention in recent years. This task can be framed as a *conformalized multiple testing* procedure, which aims at quantifying prediction uncertainty by controlling the false discovery rate (FDR) via conformal inference. In this paper, we tackle the challenge of conformalized multiple testing after data-dependent selection procedures. To guarantee the construction of valid test statistics that accurately capture the distorted distribution resulting from the selection process, we leverage a holdout labeled set to closely emulate the selective distribution. Our approach involves adaptively picking labeled data to create a calibration set based on the stability of the selection rule. This strategy ensures that the calibration data and the selected test unit are exchangeable, allowing us to develop valid conformal p-values. Implemented with the well-known Benjamini-Hochberg (BH) procedure, our method effectively controls the FDR over the selected subset. To handle the randomness of the selected subset and the dependence among the constructed p-values, we establish a unified theoretical framework. This framework extends the application of conformalized multiple testing to complex selective settings. Furthermore, we conduct numerical studies to showcase the effectiveness and validity of our procedures across various scenarios.", "pdf": "https://openreview.net/pdf/883c1d55995af9e52b170311c3ad40253ca12bb5.pdf"} {"title": "Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?", "url": "https://openreview.net/forum?id=M0ncNVuGYN", "detail_url": "https://openreview.net/forum?id=M0ncNVuGYN", "authors": "Jiacheng Cen,Anyi Li,Ning Lin,Yuxiang Ren,Zihe Wang,Wenbing Huang", "tags": "NIPS 2024,Poster", "abstract": "Equivariant Graph Neural Networks (GNNs) that incorporate E(3) symmetry have achieved significant success in various scientific applications. As one of the most successful models, EGNN leverages a simple scalarization technique to perform equivariant message passing over only Cartesian vectors (i.e., 1st-degree steerable vectors), enjoying greater efficiency and efficacy compared to equivariant GNNs using higher-degree steerable vectors. This success suggests that higher-degree representations might be unnecessary. In this paper, we disprove this hypothesis by exploring the expressivity of equivariant GNNs on symmetric structures, including $k$-fold rotations and regular polyhedra. We theoretically demonstrate that equivariant GNNs will always degenerate to a zero function if the degree of the output representations is fixed to 1 or other specific values. 
Based on this theoretical insight, we propose HEGNN, a high-degree version of EGNN to increase the expressivity by incorporating high-degree steerable vectors while maintaining EGNN's efficiency through the scalarization trick. Our extensive experiments demonstrate that HEGNN not only aligns with our theoretical analyses on toy datasets consisting of symmetric structures, but also shows substantial improvements on more complicated datasets such as $N$-body and MD17. Our theoretical findings and empirical results potentially open up new possibilities for the research of equivariant GNNs.", "pdf": "https://openreview.net/pdf/89d755ddf91820598b3eab42d134c18bbdd41c0a.pdf"} {"title": "Testably Learning Polynomial Threshold Functions", "url": "https://openreview.net/forum?id=5g0Z6PdogJ", "detail_url": "https://openreview.net/forum?id=5g0Z6PdogJ", "authors": "Lucas Slot,Stefan Tiegel,Manuel Wiedmer", "tags": "NIPS 2024,Poster", "abstract": "Rubinfeld \& Vasilyan recently introduced the framework of *testable learning* as an extension of the classical agnostic model. It relaxes distributional assumptions which are difficult to verify by conditions that can be checked efficiently by a *tester*. The tester has to accept whenever the data truly satisfies the original assumptions, and the learner has to succeed whenever the tester accepts. We focus on the setting where the tester has to accept standard Gaussian data. There, it is known that basic concept classes such as halfspaces can be learned testably with the same time complexity as in the (distribution-specific) agnostic model. In this work, we ask whether there is a price to pay for testably learning more complex concept classes. In particular, we consider polynomial threshold functions (PTFs), which naturally generalize halfspaces. We show that PTFs of arbitrary constant degree can be testably learned up to excess error $\varepsilon > 0$ in time $n^{\mathrm{poly}(1/\varepsilon)}$. This qualitatively matches the best known guarantees in the agnostic model. Our results build on a connection between testable learning and *fooling*. In particular, we show that distributions that approximately match at least $\mathrm{poly}(1/\varepsilon)$ moments of the standard Gaussian fool constant-degree PTFs (up to error $\varepsilon$). As a secondary result, we prove that a direct approach to show testable learning (without fooling), which was successfully used for halfspaces, cannot work for PTFs.", "pdf": "https://openreview.net/pdf/f23b7f4a2c465e661ea2621911ed0418e23b7d12.pdf"} {"title": "Seeing Beyond the Crop: Using Language Priors for Out-of-Bounding Box Keypoint Prediction", "url": "https://openreview.net/forum?id=LGus3wXPxc", "detail_url": "https://openreview.net/forum?id=LGus3wXPxc", "authors": "Bavesh Balaji,Jerrin Bright,Yuhao Chen,Sirisha Rambhatla,John S. Zelek,David Anthony Clausi", "tags": "NIPS 2024,Poster", "abstract": "Accurate estimation of human pose and the pose of interacting objects, like a hockey stick, is crucial for action recognition and performance analysis, particularly in sports. Existing methods capture the object along with the human in the bounding boxes, assuming all keypoints are visible within the bounding box. This necessitates larger bounding boxes to capture the object, introducing unnecessary visual features and hindering performance in real-world cluttered environments. We propose a simple image and text-based multimodal solution, TokenCLIPose, that addresses this limitation. 
Our approach focuses solely on human keypoints within the bounding box, treating objects as unseen. TokenCLIPose leverages the rich semantic representations endowed by language for inducing keypoint-specific context, even for occluded keypoints. We evaluate the performance of TokenCLIPose on a real-world Ice-Hockey dataset, and demonstrate its generalizability through zero-shot transfer to a smaller Lacrosse dataset. Additionally, we showcase its flexibility on CrowdPose, a popular occlusion benchmark with keypoints within the bounding box. Our method significantly improves over state-of-the-art approaches on all three datasets, with gains of 4.36\%, 2.35\%, and 3.8\%, respectively.", "pdf": "https://openreview.net/pdf/178218e0f6fe83edd73cb8626e3c3074ce8c36f5.pdf"} {"title": "Enhancing Domain Adaptation through Prompt Gradient Alignment", "url": "https://openreview.net/forum?id=14hLJr6kZ3", "detail_url": "https://openreview.net/forum?id=14hLJr6kZ3", "authors": "Hoang Phan,Tung Lam Tran,Quyen Tran,Trung Le", "tags": "NIPS 2024,Poster", "abstract": "Prior Unsupervised Domain Adaptation (UDA) methods often aim to train a domain-invariant feature extractor, which may hinder the model from learning sufficiently discriminative features. To tackle this, a line of works based on prompt learning leverages the power of large-scale pre-trained vision-language models to learn both domain-invariant and specific features through a set of domain-agnostic and domain-specific learnable prompts. Those studies typically enforce invariant constraints on representation, output, or prompt space to learn such prompts. In contrast, we cast UDA as a multiple-objective optimization problem in which each objective is represented by a domain loss. Under this new framework, we propose aligning per-objective gradients to foster consensus between them. Additionally, to prevent potential overfitting when fine-tuning this deep learning architecture, we penalize the norm of these gradients. To achieve these goals, we devise a practical gradient update procedure that can work under both single-source and multi-source UDA. Empirically, our method consistently surpasses other vision language model adaptation methods by a large margin on a wide range of benchmarks. The implementation is available at https://github.com/VietHoang1512/PGA.", "pdf": "https://openreview.net/pdf/d6295c5cdf8565dcd726a48c3c7088bf4c64cf0c.pdf"} {"title": "The Secretary Problem with Predicted Additive Gap", "url": "https://openreview.net/forum?id=Lbuxdzg1pd", "detail_url": "https://openreview.net/forum?id=Lbuxdzg1pd", "authors": "Alexander Braun,Sherry Sarkar", "tags": "NIPS 2024,Poster", "abstract": "The secretary problem is one of the fundamental problems in online decision making; a tight competitive ratio for this problem of $1/e \approx 0.368$ has been known since the 1960s. Much more recently, the study of algorithms with predictions was introduced: The algorithm is equipped with a (possibly erroneous) additional piece of information upfront which can be used to improve the algorithm's performance. Complementing previous work on secretary problems with prior knowledge, we tackle the following question: \n\n_What is the weakest piece of information that allows us to break the $1/e$ barrier?_\n\nTo this end, we introduce the secretary problem with predicted additive gap. As in the classical problem, weights are fixed by an adversary and elements appear in random order. 
In contrast to previous variants of predictions, our algorithm only has access to a much weaker piece of information: an _additive gap_ $c$. This gap is the difference between the highest and $k$-th highest weight in the sequence.\nUnlike previous pieces of advice, knowing an exact additive gap does not make the problem trivial. \n\nOur contribution is twofold. First, we show that for any index $k$ and any gap $c$, we can obtain a competitive ratio of $0.4$ when knowing the exact gap (even if we do not know $k$), hence beating the prevalent bound for the classical problem by a constant. Second, a slightly modified version of our algorithm allows us to prove standard robustness-consistency properties as well as improved guarantees when knowing a range for the error of the prediction.", "pdf": "https://openreview.net/pdf/f312b2118c90dd9a3d061284731497c8649cdf28.pdf"} {"title": "Designs for Enabling Collaboration in Human-Machine Teaming via Interactive and Explainable Systems", "url": "https://openreview.net/forum?id=XrK4JK2jBr", "detail_url": "https://openreview.net/forum?id=XrK4JK2jBr", "authors": "Rohan R Paleja,Michael Joseph Munje,Kimberlee Chestnut Chang,Reed Jensen,Matthew Gombolay", "tags": "NIPS 2024,Poster", "abstract": "Collaborative robots and machine learning-based virtual agents are increasingly entering the human workspace with the aim of increasing productivity and enhancing safety. Despite this, we show in a ubiquitous experimental domain, Overcooked-AI, that state-of-the-art techniques for human-machine teaming (HMT), which rely on imitation or reinforcement learning, are brittle and result in a machine agent that aims to decouple the machine and human\u2019s actions to act independently rather than in a synergistic fashion. To remedy this deficiency, we develop HMT approaches that enable iterative, mixed-initiative team development allowing end-users to interactively reprogram interpretable AI teammates. Our 50-subject study provides several findings that we summarize into guidelines. While all approaches underperform a simple collaborative heuristic (a critical, negative result for learning-based methods), we find that white-box approaches supported by interactive modification can lead to significant team development, outperforming white-box approaches alone, and that black-box approaches are easier to train and result in better HMT performance, highlighting a tradeoff between explainability and interactivity versus ease-of-training. Together, these findings present three important future research directions: 1) Improving the ability to generate collaborative agents with white-box models, 2) Better learning methods to facilitate collaboration rather than individualized coordination, and 3) Mixed-initiative interfaces that enable users, who may vary in ability, to improve collaboration.", "pdf": "https://openreview.net/pdf/dc456c7cc9a61128a809d224d0a8b36455e1efcc.pdf"} {"title": "Neural collapse vs. low-rank bias: Is deep neural collapse really optimal?", "url": "https://openreview.net/forum?id=0jld45XGgJ", "detail_url": "https://openreview.net/forum?id=0jld45XGgJ", "authors": "Peter S\u00faken\u00edk,Christoph H. Lampert,Marco Mondelli", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks (DNNs) exhibit a surprising structure in their final layer known as neural collapse (NC), and a growing body of work is currently investigating the propagation of neural collapse to earlier layers of DNNs -- a phenomenon called deep neural collapse (DNC). 
However, existing theoretical results are restricted to either linear models, the last two layers, or binary classification. In contrast, we focus on non-linear models of arbitrary depth in multi-class classification and reveal a surprising qualitative shift. As soon as we go beyond two layers or two classes, DNC stops being optimal for the deep unconstrained features model (DUFM) -- the standard theoretical framework for the analysis of collapse. The main culprit is the low-rank bias of multi-layer regularization schemes. This bias leads to optimal solutions of even lower rank than the neural collapse. We support our theoretical findings with experiments on both DUFM and real data, which show the emergence of the low-rank structure in the solution found by gradient descent.", "pdf": "https://openreview.net/pdf/e80edb4cbb7624d281ab49e313509215014e4677.pdf"} {"title": "Light Unbalanced Optimal Transport", "url": "https://openreview.net/forum?id=co8KZws1YK", "detail_url": "https://openreview.net/forum?id=co8KZws1YK", "authors": "Milena Gazdieva,Arip Asadulaev,Evgeny Burnaev,Alexander Korotin", "tags": "NIPS 2024,Poster", "abstract": "While the continuous Entropic Optimal Transport (EOT) field has been actively developing in recent years, it became evident that the classic EOT problem is prone to different issues like the sensitivity to outliers and imbalance of classes in the source and target measures. This fact inspired the development of solvers that deal with the *unbalanced* EOT (UEOT) problem $-$ the generalization of EOT allowing for mitigating the mentioned issues by relaxing the marginal constraints. Surprisingly, it turns out that the existing solvers are either based on heuristic principles or heavyweight, with complex optimization objectives involving several neural networks. We address this challenge and propose a novel theoretically-justified, lightweight, unbalanced EOT solver. Our advancement consists of developing a novel view on the optimization of the UEOT problem yielding a tractable, non-minimax optimization objective. We show that, combined with a light parametrization recently proposed in the field, our objective leads to a fast, simple, and effective solver which allows solving the continuous UEOT problem in minutes on CPU. We prove that our solver provides a universal approximation of UEOT solutions and obtain its generalization bounds. We give illustrative examples of the solver's performance.", "pdf": "https://openreview.net/pdf/c94ee87ee40fae19575230ebec990d596c06a3c6.pdf"} {"title": "Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models", "url": "https://openreview.net/forum?id=IM4LtYRWdE", "detail_url": "https://openreview.net/forum?id=IM4LtYRWdE", "authors": "Daniela F De Albuquerque,John Pearson", "tags": "NIPS 2024,Poster", "abstract": "Beyond estimating parameters of interest from data, one of the key goals of statistical inference is to properly quantify uncertainty in these estimates. In Bayesian inference, this uncertainty is provided by the posterior distribution, the computation of which typically involves an intractable high-dimensional integral. Among available approximation methods, sampling-based approaches come with strong theoretical guarantees but scale poorly to large problems, while variational approaches scale well but offer few theoretical guarantees. 
In particular, variational methods are known to produce overconfident estimates of posterior uncertainty and are typically non-identifiable, with many latent variable configurations generating equivalent predictions. Here, we address these challenges by showing how diffusion-based models (DBMs), which have recently produced state-of-the-art performance in generative modeling tasks, can be repurposed for performing calibrated, identifiable Bayesian inference. By exploiting a previously established connection between the stochastic and probability flow ordinary differential equations (pfODEs) underlying DBMs, we derive a class of models, \emph{inflationary flows,} that uniquely and deterministically map high-dimensional data to a lower-dimensional Gaussian distribution via ODE integration. This map is both invertible and neighborhood-preserving, with controllable numerical error, ensuring that uncertainties in the data are correctly propagated to the latent space. We demonstrate how such maps can be learned via standard DBM training using a novel noise schedule and are effective at both preserving and reducing intrinsic data dimensionality. The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space, that afford principled Bayesian inference.", "pdf": "https://openreview.net/pdf/30bb352ff840b3bdf8812e87b2329dda31939b1a.pdf"} {"title": "Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching", "url": "https://openreview.net/forum?id=prXfM5X2Db", "detail_url": "https://openreview.net/forum?id=prXfM5X2Db", "authors": "Yongqi Wang,Wenxiang Guo,Rongjie Huang,Jiawei Huang,Zehan Wang,Fuming You,Ruiqi Li,Zhou Zhao", "tags": "NIPS 2024,Poster", "abstract": "Video-to-audio (V2A) generation aims to synthesize content-matching audio from silent video, and it remains challenging to build V2A models with high generation quality, efficiency, and visual-audio temporal synchrony. \nWe propose Frieren, a V2A model based on rectified flow matching. Frieren regresses the conditional transport vector field from noise to spectrogram latent with straight paths and conducts sampling by solving ODE, outperforming autoregressive and score-based models in terms of audio quality. By employing a non-autoregressive vector field estimator based on a feed-forward transformer and channel-level cross-modal feature fusion with strong temporal alignment, our model generates audio that is highly synchronized with the input video. Furthermore, through reflow and one-step distillation with guided vector field, our model can generate decent audio in a few sampling steps, or even only one. Experiments indicate that Frieren achieves state-of-the-art performance in both generation quality and temporal alignment on VGGSound, with alignment accuracy reaching 97.22\% and a 6.2\% improvement in inception score over the strong diffusion-based baseline. 
Audio samples and code are available at http://frieren-v2a.github.io.", "pdf": "https://openreview.net/pdf/c239fa113e1666d0b63e267c415f2d45354382c0.pdf"} {"title": "Differential Privacy in Scalable General Kernel Learning via $K$-means Nystr{\\\"o}m Random Features", "url": "https://openreview.net/forum?id=MdmzAezNHq", "detail_url": "https://openreview.net/forum?id=MdmzAezNHq", "authors": "Bonwoo Lee,Jeongyoun Ahn,Cheolwoo Park", "tags": "NIPS 2024,Poster", "abstract": "As the volume of data invested in statistical learning increases and concerns regarding privacy grow, the privacy leakage issue has drawn significant attention. Differential privacy has emerged as a widely accepted concept capable of mitigating privacy concerns, and numerous differentially private (DP) versions of machine learning algorithms have been developed. However, existing works on DP kernel learning algorithms have exhibited practical limitations, including scalability, restricted choice of kernels, or dependence on test data availability. We propose DP scalable kernel empirical risk minimization (ERM) algorithms and a DP kernel mean embedding (KME) release algorithm suitable for general kernels. Our approaches address the shortcomings of previous algorithms by employing Nystr\u00f6m methods, classical techniques in non-private scalable kernel learning. These methods provide data-dependent low-rank approximations of the kernel matrix for general kernels in a DP manner. We present excess empirical risk bounds and computational complexities for the scalable kernel DP ERM and KME algorithms, contrasting them with established methodologies. Furthermore, we develop a private data-generating algorithm capable of learning diverse kernel models. We conduct experiments to demonstrate the performance of our algorithms, comparing them with existing methods to highlight their superiority.", "pdf": "https://openreview.net/pdf/233ffa5b709fdbca53c562ce8d26eb6672289e66.pdf"} {"title": "DiPEx: Dispersing Prompt Expansion for Class-Agnostic Object Detection", "url": "https://openreview.net/forum?id=NDs9Ejz4Pe", "detail_url": "https://openreview.net/forum?id=NDs9Ejz4Pe", "authors": "Jia Syuen Lim,Zhuoxiao Chen,Zhi Chen,Mahsa Baktashmotlagh,Xin Yu,Zi Huang,Yadan Luo", "tags": "NIPS 2024,Poster", "abstract": "Class-agnostic object detection (OD) can be a cornerstone or a bottleneck for many downstream vision tasks. Despite considerable advancements in bottom-up and multi-object discovery methods that leverage basic visual cues to identify salient objects, consistently achieving a high recall rate remains difficult due to the diversity of object types and their contextual complexity. In this work, we investigate using vision-language models (VLMs) to enhance object detection via a self-supervised prompt learning strategy. Our initial findings indicate that manually crafted text queries often result in undetected objects, primarily because detection confidence diminishes when the query words exhibit semantic overlap. To address this, we propose a Dispersing Prompt Expansion (DiPEx) approach. DiPEx progressively learns to expand a set of distinct, non-overlapping hyperspherical prompts to enhance recall rates, thereby improving performance in downstream tasks such as out-of-distribution OD. Specifically, DiPEx initiates the process by self-training generic parent prompts and selecting the one with the highest semantic uncertainty for further expansion. 
The resulting child prompts are expected to inherit the semantics of their parent prompts while capturing more fine-grained semantics. We apply dispersion losses to ensure high inter-class discrepancy among child prompts while preserving semantic consistency between parent-child prompt pairs. To prevent excessive growth of the prompt sets, we utilize the maximum angular coverage (MAC) of the semantic space as a criterion for early termination. We demonstrate the effectiveness of DiPEx through extensive class-agnostic OD and OOD-OD experiments on MS-COCO and LVIS, surpassing other prompting methods by up to 20.1% in AR and achieving a 21.3% AP improvement over SAM.", "pdf": "https://openreview.net/pdf/af3d8187b35b135bc5f58bdbabeea0da01477f2a.pdf"} {"title": "Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift", "url": "https://openreview.net/forum?id=6ejpSVIiIl", "detail_url": "https://openreview.net/forum?id=6ejpSVIiIl", "authors": "Junbao Chen,Jingfeng Xue,Yong Wang,Zhenyan Liu,Lu Huang", "tags": "NIPS 2024,Poster", "abstract": "Data heterogeneity is one of the key challenges in federated learning, and many efforts have been devoted to tackling this problem. However, distributed concept drift with data heterogeneity, where clients may additionally experience different concept drifts, is a largely unexplored area. In this work, we focus on real drift, where the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ changes. We first study how distributed concept drift affects the model training and find that the local classifier plays a critical role in drift adaptation. Moreover, to address data heterogeneity, we study the feature alignment under distributed concept drift, and find two factors that are crucial for feature alignment: the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ and the degree of data heterogeneity. Motivated by the above findings, we propose FedCCFA, a federated learning framework with classifier clustering and feature alignment. To enhance collaboration under distributed concept drift, FedCCFA clusters local classifiers at the class level and generates clustered feature anchors according to the clustering results. Assisted by these anchors, FedCCFA adaptively aligns clients' feature spaces based on the entropy of label distribution $P(\mathcal{Y})$, alleviating the inconsistency in feature space. Our results demonstrate that FedCCFA significantly outperforms existing methods under various concept drift settings. Code is available at https://github.com/Chen-Junbao/FedCCFA.", "pdf": "https://openreview.net/pdf/aa49cebc5b83887b9dc43b9be7377b1d6164abf7.pdf"} {"title": "Rethinking the Membrane Dynamics and Optimization Objectives of Spiking Neural Networks", "url": "https://openreview.net/forum?id=6AepMNrz7a", "detail_url": "https://openreview.net/forum?id=6AepMNrz7a", "authors": "Hangchi Shen,Qian Zheng,Huamin Wang,Gang Pan", "tags": "NIPS 2024,Poster", "abstract": "Although spiking neural networks (SNNs) have demonstrated notable energy efficiency across various fields, the limited firing patterns of spiking neurons within fixed time steps restrict the expression of information, which impedes further improvement of SNN performance. In addition, current implementations of SNNs typically consider the firing rate or average membrane potential of the last layer as the output, lacking exploration of other possibilities. 
In this paper, we identify that the limited spike patterns of spiking neurons stem from the initial membrane potential (IMP), which is set to 0. By adjusting the IMP, the spiking neurons can generate additional firing patterns and pattern mappings. Furthermore, we find that in static tasks, the accuracy of SNNs at each time step increases as the membrane potential evolves from zero. This observation inspires us to propose a learnable IMP, which accelerates the evolution of the membrane potential and enables higher performance within a limited number of time steps. Additionally, we introduce the last time step (LTS) approach to accelerate convergence in static tasks, and we propose a label smooth temporal efficient training (TET) loss to mitigate the conflicts between the optimization objective and the regularization term in the vanilla TET. Our methods improve the accuracy by 4.05\% on ImageNet compared to the baseline and achieve state-of-the-art performance of 87.80\% on CIFAR10-DVS and 87.86\% on N-Caltech101.", "pdf": "https://openreview.net/pdf/19cf1b817a43654263e7d117b98146dd643ba9a1.pdf"} {"title": "Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps", "url": "https://openreview.net/forum?id=b1XPHC7MQB", "detail_url": "https://openreview.net/forum?id=b1XPHC7MQB", "authors": "Nikita Starodubcev,Mikhail Khoroshikh,Artem Babenko,Dmitry Baranchuk", "tags": "NIPS 2024,Poster", "abstract": "Diffusion distillation represents a highly promising direction for achieving faithful text-to-image generation in a few sampling steps. However, despite recent successes, existing distilled models still do not provide the full spectrum of diffusion abilities, such as real image inversion, which enables many precise image manipulation methods. This work aims to enrich distilled text-to-image diffusion models with the ability to effectively encode real images into their latent space. To this end, we introduce invertible Consistency Distillation (iCD), a generalized consistency distillation framework that facilitates both high-quality image synthesis and accurate image encoding in only 3-4 inference steps. Though the inversion problem for text-to-image diffusion models gets exacerbated by high classifier-free guidance scales, we notice that dynamic guidance significantly reduces reconstruction errors without noticeable degradation in generation performance. As a result, we demonstrate that iCD equipped with dynamic guidance may serve as a highly effective tool for zero-shot text-guided image editing, competing with more expensive state-of-the-art alternatives.", "pdf": "https://openreview.net/pdf/dbb548a54050f82ad788c1ff54b1ab069059edbd.pdf"} {"title": "Real-time Stereo-based 3D Object Detection for Streaming Perception", "url": "https://openreview.net/forum?id=IpHB5RC3za", "detail_url": "https://openreview.net/forum?id=IpHB5RC3za", "authors": "Changcai Li,Zonghua Gu,Gang Chen,Libo Huang,Wei Zhang,Huihui Zhou", "tags": "NIPS 2024,Poster", "abstract": "The ability to promptly respond to environmental changes is crucial for the perception system of autonomous driving. Recently, a new task called streaming perception was proposed. It jointly evaluates latency and accuracy with a single metric for online video perception. In this work, we introduce StreamDSGN, the first real-time stereo-based 3D object detection framework designed for streaming perception. 
StreamDSGN is an end-to-end framework that directly predicts the 3D properties of objects in the next moment by leveraging historical information, thereby alleviating the accuracy degradation of streaming perception. Further, StreamDSGN applies three strategies to enhance the perception accuracy: (1) A feature-flow-based fusion method, which generates a pseudo-next feature at the current moment to address the misalignment issue between feature and ground truth. (2) An extra regression loss for explicit supervision of object motion consistency in consecutive frames. (3) A large kernel backbone with a large receptive field for effectively capturing long-range spatial contextual features caused by changes in object positions. Experiments on the KITTI Tracking dataset show that, compared with the strong baseline, StreamDSGN significantly improves the streaming average precision by up to 4.33%. Our code is available at https://github.com/weiyangdaren/streamDSGN-pytorch.", "pdf": "https://openreview.net/pdf/ea68e790047223682e2fe4c7a1b7212d236e1470.pdf"} {"title": "EDT: An Efficient Diffusion Transformer Framework Inspired by Human-like Sketching", "url": "https://openreview.net/forum?id=MihOCXte41", "detail_url": "https://openreview.net/forum?id=MihOCXte41", "authors": "Xinwang Chen,Ning Liu,Yichen Zhu,Feifei Feng,Jian Tang", "tags": "NIPS 2024,Poster", "abstract": "Transformer-based Diffusion Probabilistic Models (DPMs) have shown more potential than CNN-based DPMs, yet their extensive computational requirements hinder widespread practical applications. To reduce the computation budget of transformer-based DPMs, this work proposes the Efficient Diffusion Transformer (EDT) framework. This framework includes a lightweight diffusion model architecture and a training-free Attention Modulation Matrix, whose alternating arrangement in EDT is inspired by human-like sketching. Additionally, we propose a token relation-enhanced masking training strategy tailored explicitly for EDT to augment its token relation learning capability. Our extensive experiments demonstrate the efficacy of EDT. The EDT framework reduces training and inference costs and surpasses existing transformer-based diffusion models in image synthesis performance, thereby achieving a significant overall enhancement. With lower FID, EDT-S, EDT-B, and EDT-XL attained speed-ups of 3.93x, 2.84x, and 1.92x, respectively, in the training phase, and 2.29x, 2.29x, and 2.22x, respectively, in inference, compared to the corresponding sizes of MDTv2. Our code is available at https://github.com/xinwangChen/EDT.", "pdf": "https://openreview.net/pdf/4c6f0243537d7425d6fea1ed627302446e395d8d.pdf"} {"title": "TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices", "url": "https://openreview.net/forum?id=XIcBCBe6C3", "detail_url": "https://openreview.net/forum?id=XIcBCBe6C3", "authors": "Hong Jia,Young D. Kwon,Alessio Orsino,Ting Dang,Domenico Talia,Cecilia Mascolo", "tags": "NIPS 2024,Poster", "abstract": "The increased adoption of Internet of Things (IoT) devices has led to the generation of large data streams with applications in healthcare, sustainability, and robotics. In some cases, deep neural networks have been deployed directly on these resource-constrained units to limit communication overhead, increase efficiency and privacy, and enable real-time applications. 
However, a common challenge in this setting is the continuous adaptation of models necessary to accommodate changing environments, i.e., data distribution shifts. Test-time adaptation (TTA) has emerged as one potential solution, but its validity has yet to be explored in resource-constrained hardware settings, such as those involving microcontroller units (MCUs). TTA on constrained devices generally suffers from i) memory overhead due to the full backpropagation of a large pre-trained network, ii) lack of support for normalization layers on MCUs, and iii) either memory exhaustion with large batch sizes required for updating or poor performance with small batch sizes. In this paper, we propose TinyTTA to enable, for the first time, efficient TTA on constrained devices with limited memory. To address the limited memory constraints, we introduce a novel self-ensemble and batch-agnostic early-exit strategy for TTA, which enables continuous adaptation with small batch sizes for reduced memory usage, handles distribution shifts, and improves latency efficiency. Moreover, we develop the TinyTTA Engine, a first-of-its-kind MCU library that enables on-device TTA. We validate TinyTTA on a Raspberry Pi Zero 2W and an STM32H747 MCU. Experimental results demonstrate that TinyTTA improves TTA accuracy by up to 57.6\%, reduces memory usage by up to six times, and achieves faster and more energy-efficient TTA. Notably, TinyTTA is the only framework able to run TTA on MCU STM32H747 with a 512 KB memory constraint while maintaining high performance.", "pdf": "https://openreview.net/pdf/6848cd2f77fd5aa3442a4621c016dd14a2a1eb95.pdf"} {"title": "VISA: Variational Inference with Sequential Sample-Average Approximations", "url": "https://openreview.net/forum?id=lbLC5OV9GY", "detail_url": "https://openreview.net/forum?id=lbLC5OV9GY", "authors": "Heiko Zimmermann,Christian A. Naesseth,Jan-Willem van de Meent", "tags": "NIPS 2024,Poster", "abstract": "We present variational inference with sequential sample-average approximations (VISA), a method for approximate inference in computationally intensive models, such as those based on numerical simulations. VISA extends importance-weighted forward-KL variational inference by employing a sequence of sample-average approximations, which are considered valid inside a trust region. This makes it possible to reuse model evaluations across multiple gradient steps, thereby reducing computational cost. We perform experiments on high-dimensional Gaussians, Lotka-Volterra dynamics, and a Pickover attractor, which demonstrate that VISA can achieve comparable approximation accuracy to standard importance-weighted forward-KL variational inference with computational savings of a factor two or more for conservatively chosen learning rates.", "pdf": "https://openreview.net/pdf/e71b3bcd7e5e54a9dcfe8b02dd70dbe0cfe4e6d2.pdf"} {"title": "The Ladder in Chaos: Improving Policy Learning by Harnessing the Parameter Evolving Path in A Low-dimensional Space", "url": "https://openreview.net/forum?id=3vHfwL2stG", "detail_url": "https://openreview.net/forum?id=3vHfwL2stG", "authors": "Hongyao Tang,Min Zhang,Chen Chen,Jianye HAO", "tags": "NIPS 2024,Poster", "abstract": "Knowing the learning dynamics of a policy is significant for unveiling the mysteries of Reinforcement Learning (RL). This is especially crucial yet challenging for Deep RL, from which remedies to notorious issues like sample inefficiency and learning instability could be obtained. 
In this paper, we study how the policy networks of typical DRL agents evolve during the learning process by empirically investigating several kinds of temporal change for each policy parameter. In popular MuJoCo and DeepMind Control Suite (DMC) environments, we find common phenomena for TD3 and RAD agents: (1) the activity of policy network parameters is highly asymmetric and policy networks advance monotonically along a very limited number of major parameter directions; (2) severe detours occur in parameter update and harmonic-like changes are observed for all minor parameter directions. By performing a novel temporal SVD along the policy learning path, the major and minor parameter directions are identified as the columns of the right unitary matrix associated with dominant and insignificant singular values, respectively. Driven by the discoveries above, we propose a simple and effective method, called Policy Path Trimming and Boosting (PPTB), as a general plug-in improvement to DRL algorithms. The key idea of PPTB is to trim the policy learning path by canceling the policy updates in minor parameter directions, and boost the learning path by encouraging the advance in major directions. In experiments, we demonstrate that our method improves the learning performance of TD3, RAD, and DoubleDQN in terms of scores and efficiency on MuJoCo, DMC, and MinAtar tasks, respectively.", "pdf": "https://openreview.net/pdf/3f6d39247b15c96e34130f3be627490881c10bd3.pdf"} {"title": "Autoregressive Policy Optimization for Constrained Allocation Tasks", "url": "https://openreview.net/forum?id=hRKsahifqj", "detail_url": "https://openreview.net/forum?id=hRKsahifqj", "authors": "David Winkel,Niklas Alexander Strau\u00df,Maximilian Bernhard,Zongyue Li,Thomas Seidl,Matthias Schubert", "tags": "NIPS 2024,Poster", "abstract": "Allocation tasks represent a class of problems where a limited amount of resources must be allocated to a set of entities at each time step. Prominent examples of this task include portfolio optimization or distributing computational workloads across servers.\nAllocation tasks are typically bound by linear constraints describing practical requirements that have to be strictly fulfilled at all times. In portfolio optimization, for example, investors may be obligated to allocate less than 30\% of the funds into a certain industrial sector in any investment period. \nSuch constraints restrict the action space of allowed allocations in intricate ways, which makes learning a policy that avoids constraint violations difficult.\nIn this paper, we propose a new method for constrained allocation tasks based on an autoregressive process to sequentially sample allocations for each entity. In addition, we introduce a novel de-biasing mechanism to counter the initial bias caused by sequential sampling. We demonstrate the superior performance of our approach compared to a variety of Constrained Reinforcement Learning (CRL) methods on three distinct constrained allocation tasks: portfolio optimization, computational workload distribution, and a synthetic allocation benchmark. 
Our code is available at: https://github.com/niklasdbs/paspo", "pdf": "https://openreview.net/pdf/b6ea57a7f451dcfa6dbec839b2d5bfec73b40588.pdf"} {"title": "Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training", "url": "https://openreview.net/forum?id=Q7s8mFWqsx", "detail_url": "https://openreview.net/forum?id=Q7s8mFWqsx", "authors": "Haoran He,Chenjia Bai,Ling Pan,Weinan Zhang,Bin Zhao,Xuelong Li", "tags": "NIPS 2024,Poster", "abstract": "Learning a generalist embodied agent capable of completing multiple tasks poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets. In contrast, a vast amount of human videos exist, capturing intricate tasks and interactions with the physical world. Promising prospects arise for utilizing actionless human videos for pre-training and transferring the knowledge to facilitate robot policy learning through limited robot demonstrations. However, it remains a challenge due to the domain gap between humans and robots. Moreover, it is difficult to extract useful information representing the dynamic world from human videos, because of their noisy and multimodal data structure. In this paper, we introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos. We start by compressing both human and robot videos into unified video tokens. In the pre-training stage, we employ a discrete diffusion model with a mask-and-replace diffusion strategy to predict future video tokens in the latent space. In the fine-tuning stage, we harness the imagined future videos to guide low-level action learning with a limited set of robot data. 
Experiments demonstrate that our method generates high-fidelity future videos for planning, and that the fine-tuned policies outperform previous state-of-the-art approaches.", "pdf": "https://openreview.net/pdf/27d1f274745ac6699344ffdee0a03b2011997c97.pdf"} {"title": "Improving the Training of Rectified Flows", "url": "https://openreview.net/forum?id=mSHs6C7Nfa", "detail_url": "https://openreview.net/forum?id=mSHs6C7Nfa", "authors": "Sangyun Lee,Zinan Lin,Giulia Fanti", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have shown great promise for image and video generation, but sampling from state-of-the-art models requires expensive numerical integration of a generative ODE.\n One approach for tackling this problem is rectified flows, which iteratively learn smooth ODE paths that are less susceptible to truncation error.\n However, rectified flows still require a relatively large number of function evaluations (NFEs).\n In this work, we propose improved techniques for training rectified flows, allowing them to compete with knowledge distillation methods even in the low NFE setting.\n Our main insight is that under realistic settings, a single iteration of the Reflow algorithm for training rectified flows is sufficient to learn nearly straight trajectories; hence, the current practice of using multiple Reflow iterations is unnecessary.\n We thus propose techniques to improve one-round training of rectified flows, including a U-shaped timestep distribution and LPIPS-Huber premetric.\n With these techniques, we improve the FID of the previous 2-rectified flow by up to 75\% in the 1 NFE setting on CIFAR-10.\n On ImageNet 64$\times$64, our improved rectified flow outperforms the state-of-the-art distillation methods\n such as consistency distillation and progressive distillation in both one-step and two-step settings and rivals the performance of improved consistency training (iCT) in FID.\n Code is available at https://github.com/sangyun884/rfpp.", "pdf": "https://openreview.net/pdf/876809b80692e3d3bb48e5861b766c0b86adece6.pdf"} {"title": "Neural Concept Binder", "url": "https://openreview.net/forum?id=ypPzyflbYs", "detail_url": "https://openreview.net/forum?id=ypPzyflbYs", "authors": "Wolfgang Stammer,Antonia W\u00fcst,David Steinmann,Kristian Kersting", "tags": "NIPS 2024,Poster", "abstract": "The challenge in object-based visual reasoning lies in generating concept representations that are both descriptive and distinct. Achieving this in an unsupervised manner requires human users to understand the model's learned concepts and, if necessary, revise incorrect ones. To address this challenge, we introduce the Neural Concept Binder (NCB), a novel framework for deriving both discrete and continuous concept representations, which we refer to as "concept-slot encodings". NCB employs two types of binding: "soft binding", which leverages the recent SysBinder mechanism to obtain object-factor encodings, and subsequent "hard binding", achieved through hierarchical clustering and retrieval-based inference. This enables obtaining expressive, discrete representations from unlabeled images. Moreover, the structured nature of NCB's concept representations allows for intuitive inspection and the straightforward integration of external knowledge, such as human input or insights from other AI models like GPT-4. 
Additionally, we demonstrate that incorporating the hard binding mechanism preserves model performance while enabling seamless integration into both neural and symbolic modules for complex reasoning tasks. We validate the effectiveness of NCB through evaluations on our newly introduced CLEVR-Sudoku dataset.", "pdf": "https://openreview.net/pdf/83a61e046b4272eb1e838707fd28087549cbe396.pdf"} {"title": "Stochastic Newton Proximal Extragradient Method", "url": "https://openreview.net/forum?id=V4tzn87DtN", "detail_url": "https://openreview.net/forum?id=V4tzn87DtN", "authors": "Ruichen Jiang,Michal Derezinski,Aryan Mokhtari", "tags": "NIPS 2024,Poster", "abstract": "Stochastic second-order methods are known to achieve fast local convergence in strongly convex optimization by relying on noisy Hessian estimates to precondition the gradient. Yet, most of these methods achieve superlinear convergence only when the stochastic Hessian noise diminishes, requiring an increase in the per-iteration cost as time progresses. Recent work in \cite{na2022hessian} addressed this issue via a Hessian averaging scheme that achieves a superlinear convergence rate without increasing the per-iteration cost. However, the considered method exhibits a slow global convergence rate, requiring up to $\tilde{\mathcal{O}}(\kappa^2)$ iterations to reach the superlinear rate of $\tilde{\mathcal{O}}((1/t)^{t/2})$, where $\kappa$ is the problem's condition number. In this paper, we propose a novel stochastic Newton proximal extragradient method that significantly improves these bounds, achieving a faster global linear rate and reaching the same fast superlinear rate in $\tilde{\mathcal{O}}(\kappa)$ iterations. We achieve this by developing a novel extension of the Hybrid Proximal Extragradient (HPE) framework, which simultaneously achieves fast global and local convergence rates for strongly convex functions with access to a noisy Hessian oracle.", "pdf": "https://openreview.net/pdf/75abf7fb001295750674081df41373a60911f39b.pdf"} {"title": "Pin-Tuning: Parameter-Efficient In-Context Tuning for Few-Shot Molecular Property Prediction", "url": "https://openreview.net/forum?id=859DtlwnAD", "detail_url": "https://openreview.net/forum?id=859DtlwnAD", "authors": "Liang Wang,Qiang Liu,Shaozhen Liu,Xin Sun,Shu Wu,Liang Wang", "tags": "NIPS 2024,Poster", "abstract": "Molecular property prediction (MPP) is integral to drug discovery and material science, but often faces the challenge of data scarcity in real-world scenarios. To address this, few-shot molecular property prediction (FSMPP) has been developed. Unlike other few-shot tasks, FSMPP typically employs a pre-trained molecular encoder and a context-aware classifier, benefiting from molecular pre-training and molecular context information. Despite these advancements, existing methods struggle with the ineffective fine-tuning of pre-trained encoders. We attribute this issue to the imbalance between the abundance of tunable parameters and the scarcity of labeled molecules, and the lack of contextual perceptiveness in the encoders. To overcome this hurdle, we propose a parameter-efficient in-context tuning method, named Pin-Tuning. Specifically, we propose a lightweight adapter for pre-trained message passing layers (MP-Adapter) and Bayesian weight consolidation for pre-trained atom/bond embedding layers (Emb-BWC), to achieve parameter-efficient tuning while preventing over-fitting and catastrophic forgetting. 
Additionally, we enhance the MP-Adapters with contextual perceptiveness. This innovation allows for in-context tuning of the pre-trained encoder, thereby improving its adaptability for specific FSMPP tasks. When evaluated on public datasets, our method demonstrates superior tuning with fewer trainable parameters, improving few-shot predictive performance.", "pdf": "https://openreview.net/pdf/9003564fe9f87c9c737e63c73a37aefdd285d30c.pdf"} {"title": "Feint Behaviors and Strategies: Formalization, Implementation and Evaluation", "url": "https://openreview.net/forum?id=ACIDDnTbSJ", "detail_url": "https://openreview.net/forum?id=ACIDDnTbSJ", "authors": "Junyu Liu,Xiangjun Peng", "tags": "NIPS 2024,Poster", "abstract": "Feint behaviors are nuanced deceptive behaviors that enable players to obtain temporal and spatial advantages over opponents in competitive games. Such behaviors are crucial tactics in most competitive multi-player games (e.g., boxing, fencing, basketball, motor racing, etc.). However, existing literature does not provide a comprehensive (and/or concrete) formalization for Feint behaviors, and their implications for game strategies. In this work, we introduce the first comprehensive formalization of Feint behaviors at both action-level and strategy-level, and provide concrete implementation and quantitative evaluation of them in multi-player games. The key idea of our work is to (1) allow automatic generation of Feint behaviors via Palindrome-directed templates, and combine them into meaningful behavior sequences via a Dual-Behavior Model; (2) concretize the implications of our formalization of Feint on game strategies, in terms of temporal, spatial, and their collective impacts respectively; and (3) provide a unified implementation scheme of Feint behaviors in existing MARL frameworks. The experimental results show that our design of Feint behaviors can (1) greatly improve the game reward gains; (2) significantly improve the diversity of Multi-Player Games; and (3) only incur negligible overheads in terms of time consumption.", "pdf": "https://openreview.net/pdf/8a0624ac9b6fb65426117d0a6de87a949b709c74.pdf"} {"title": "Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent", "url": "https://openreview.net/forum?id=uhki1rE2NZ", "detail_url": "https://openreview.net/forum?id=uhki1rE2NZ", "authors": "Liu Ziyin,Mingze Wang,Hongchao Li,Lei Wu", "tags": "NIPS 2024,Poster", "abstract": "Symmetries are prevalent in deep learning and can significantly influence the learning dynamics of neural networks. In this paper, we examine how exponential symmetries -- a broad subclass of continuous symmetries present in the model architecture or loss function -- interplay with stochastic gradient descent (SGD). We first prove that gradient noise creates a systematic motion (a ``Noether flow'') of the parameters $\theta$ along the degenerate direction to a unique initialization-independent fixed point $\theta^*$. These points are referred to as the noise equilibria because, at these points, noise contributions from different directions are balanced and aligned. 
Then, we show that the balance and alignment of gradient noise can serve as a novel alternative mechanism for explaining important phenomena such as progressive sharpening/flattening and representation formation within neural networks, and have practical implications for understanding techniques like representation normalization and warmup.", "pdf": "https://openreview.net/pdf/12403100cfe914c4de8db8333a848158ce24bacf.pdf"} {"title": "Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization", "url": "https://openreview.net/forum?id=xDrKZOZEOc", "detail_url": "https://openreview.net/forum?id=xDrKZOZEOc", "authors": "Yang Li,Jinpei Guo,Runzhong Wang,Hongyuan Zha,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have recently advanced Combinatorial Optimization (CO) as a powerful backbone for neural solvers. However, their iterative sampling process, which requires denoising across multiple noise levels, incurs substantial overhead. We propose to learn direct mappings from different noise levels to the optimal solution for a given instance, facilitating high-quality generation with minimal shots. This is achieved through an optimization consistency training protocol, which, for a given instance, minimizes the difference among samples originating from varying generative trajectories and time steps relative to the optimal solution. The proposed model enables fast single-step solution generation while retaining the option of multi-step sampling to trade for sampling quality, which offers a more effective and efficient alternative backbone for neural solvers. In addition, within the training-to-testing (T2T) framework, to bridge the gap between training on historical instances and solving new instances, we introduce a novel consistency-based gradient search scheme during the test stage, enabling more effective exploration of the solution space learned during training. It is achieved by updating the latent solution probabilities under objective gradient guidance during the alternation of noise injection and denoising steps. We refer to this model as Fast T2T. Extensive experiments on two popular tasks, the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrate the superiority of Fast T2T regarding both solution quality and efficiency, even outperforming LKH given limited time budgets. Notably, Fast T2T with merely one-step generation and one-step gradient search can mostly outperform the SOTA diffusion-based counterparts that require hundreds of steps, while achieving tens of times speedup.", "pdf": "https://openreview.net/pdf/fe5e7fe1fabed428781b4a7575b830ae83a8c609.pdf"} {"title": "Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs", "url": "https://openreview.net/forum?id=ncqauwSyl5", "detail_url": "https://openreview.net/forum?id=ncqauwSyl5", "authors": "Yusong Wang,Chaoran Cheng,Shaoning Li,Yuxuan Ren,Bin Shao,Ge Liu,Pheng-Ann Heng,Nanning Zheng", "tags": "NIPS 2024,Poster", "abstract": "Geometric graph neural networks (GNNs) have emerged as powerful tools for modeling molecular geometry. However, they encounter limitations in effectively capturing long-range interactions in large molecular systems. To address this challenge, we introduce **Neural P$^3$M**, a versatile enhancer of geometric GNNs to expand the scope of their capabilities by incorporating mesh points alongside atoms and reimagining traditional mathematical operations in a trainable manner. 
Neural P$^3$M exhibits flexibility across a wide range of molecular systems and demonstrates remarkable accuracy in predicting energies and forces, outperforming existing methods on benchmarks such as the MD22 dataset. \nIt also achieves an average improvement of 22% on the OE62 dataset while integrating with various architectures. Codes are available at https://github.com/OnlyLoveKFC/Neural_P3M.", "pdf": "https://openreview.net/pdf/f41e4a83de59e078f549c988ebd0fda9f5e00c2b.pdf"} {"title": "Near-Optimal Dynamic Regret for Adversarial Linear Mixture MDPs", "url": "https://openreview.net/forum?id=LPyPRS2XcF", "detail_url": "https://openreview.net/forum?id=LPyPRS2XcF", "authors": "Long-Fei Li,Peng Zhao,Zhi-Hua Zhou", "tags": "NIPS 2024,Poster", "abstract": "We study episodic linear mixture MDPs with the unknown transition and adversarial rewards under full-information feedback, employing *dynamic regret* as the performance measure. We start with in-depth analyses of the strengths and limitations of the two most popular methods: occupancy-measure-based and policy-based methods. We observe that while the occupancy-measure-based method is effective in addressing non-stationary environments, it encounters difficulties with the unknown transition. In contrast, the policy-based method can deal with the unknown transition effectively but faces challenges in handling non-stationary environments. Building on this, we propose a novel algorithm that combines the benefits of both methods. Specifically, it employs (i) an *occupancy-measure-based global optimization* with a two-layer structure to handle non-stationary environments; and (ii) a *policy-based variance-aware value-targeted regression* to tackle the unknown transition. We bridge these two parts by a novel conversion. Our algorithm enjoys an $\widetilde{\mathcal{O}}(d \sqrt{H^3 K} + \sqrt{HK(H + \bar{P}_K)})$ dynamic regret, where $d$ is the feature mapping dimension, $H$ is the episode length, $K$ is the number of episodes, and $\bar{P}_K$ is the non-stationarity measure. We show it is minimax optimal up to logarithmic factors by establishing a matching lower bound. To the best of our knowledge, this is the **first** work that achieves **near-optimal** dynamic regret for adversarial linear mixture MDPs with the unknown transition without prior knowledge of the non-stationarity measure.", "pdf": "https://openreview.net/pdf/ecc896ee5ba956d91cde26b99ea76eee654d65ab.pdf"} {"title": "Unsupervised Anomaly Detection in The Presence of Missing Values", "url": "https://openreview.net/forum?id=AoEeBqP8AD", "detail_url": "https://openreview.net/forum?id=AoEeBqP8AD", "authors": "Feng Xiao,Jicong Fan", "tags": "NIPS 2024,Poster", "abstract": "Anomaly detection methods typically require fully observed data for model training and inference and cannot handle incomplete data, while the missing data problem is pervasive in science and engineering, leading to challenges in many important applications such as abnormal user detection in recommendation systems and novel or anomalous cell detection in bioinformatics, where the missing rates can be higher than 30\% or even 80\%. In this work, first, we construct and evaluate a straightforward strategy, ''impute-then-detect'', by combining state-of-the-art imputation methods with unsupervised anomaly detection methods, where the training data are composed of normal samples only. 
We observe that such two-stage methods frequently suffer from an imputation bias toward normal data; namely, the imputation methods are inclined to make incomplete samples look ''normal\". The fundamental reason is that the imputation models are learned only on normal data and therefore cannot generalize well to abnormal data in the inference stage. To address this challenge, we propose an end-to-end method that integrates data imputation with anomaly detection into a unified optimization problem. The proposed model learns to generate well-designed pseudo-abnormal samples to mitigate the imputation bias and ensure the discrimination ability of both the imputation and detection processes. Furthermore, we provide theoretical guarantees for the effectiveness of the proposed method, proving that the proposed method can correctly detect anomalies with high probability. Experimental results on datasets with manually constructed missing values and inherent missing values demonstrate that our proposed method effectively mitigates the imputation bias and surpasses the baseline methods significantly. The source code of our method is available at https://github.com/jicongfan/ImAD-Anomaly-Detection-With-Missing-Data.", "pdf": "https://openreview.net/pdf/62beb575e1acad7208b5cc8b3049d91db40300c6.pdf"} {"title": "ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation", "url": "https://openreview.net/forum?id=xxY8d4rnSb", "detail_url": "https://openreview.net/forum?id=xxY8d4rnSb", "authors": "C\u00e9dric Rommel,Victor Letzelter,Nermin Samet,Renaud Marlet,Matthieu Cord,Patrick Perez,Eduardo Valle", "tags": "NIPS 2024,Poster", "abstract": "We propose ManiPose, a manifold-constrained multi-hypothesis model for human-pose 2D-to-3D lifting. We provide theoretical and empirical evidence that, due to the depth ambiguity inherent to monocular 3D human pose estimation, traditional regression models suffer from pose-topology consistency issues, which standard evaluation metrics (MPJPE, P-MPJPE and PCK) fail to assess. ManiPose addresses depth ambiguity by proposing multiple candidate 3D poses for each 2D input, each with its estimated plausibility. Unlike previous multi-hypothesis approaches, ManiPose forgoes generative models, greatly facilitating its training and usage. By constraining the outputs to lie on the human pose manifold, ManiPose guarantees the consistency of all hypothetical poses, in contrast to previous works. We showcase the performance of ManiPose on real-world datasets, where it outperforms state-of-the-art models in pose consistency by a large margin while being very competitive on the MPJPE metric.", "pdf": "https://openreview.net/pdf/aa9e7681c86797dad7f2bb93ba5e0c36ea2de62a.pdf"} {"title": "Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization", "url": "https://openreview.net/forum?id=wQpNG9JnPK", "detail_url": "https://openreview.net/forum?id=wQpNG9JnPK", "authors": "Zhikang Chen,Min Zhang,Sen Cui,Haoxuan Li,Gang Niu,Mingming Gong,Changshui Zhang,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "The spurious correlation between the background features of an image and its label arises because samples labeled with the same class in the training set often co-occur with a specific background; this causes the encoder to extract non-semantic features for classification, resulting in poor out-of-distribution generalization performance.
Although many studies have attempted to address this challenge, semantic and spurious features remain difficult to decouple accurately from the original image, and existing approaches fail to achieve high performance with deep learning models. This paper proposes a novel perspective inspired by neural collapse to solve the spurious correlation problem through the alternate execution of environment partitioning and learning semantic masks. Specifically, we propose to assign an environment to each sample by learning a local model for each environment and using maximum likelihood probability. At the same time, we require that the learned semantic mask neurally collapses to the same simplex equiangular tight frame (ETF) in each environment after being applied to the original input. We conduct extensive experiments on four datasets, and the results demonstrate that our method significantly improves out-of-distribution performance.", "pdf": "https://openreview.net/pdf/0e5a651b0723810c001303ef89ef83bf1da33e4c.pdf"} {"title": "Understanding Multi-Granularity for Open-Vocabulary Part Segmentation", "url": "https://openreview.net/forum?id=hE6ZxU0N3c", "detail_url": "https://openreview.net/forum?id=hE6ZxU0N3c", "authors": "Jiho Choi,Seonho Lee,Seungho Lee,Minhyun Lee,Hyunjung Shim", "tags": "NIPS 2024,Poster", "abstract": "Open-vocabulary part segmentation (OVPS) is an emerging research area focused on segmenting fine-grained entities using diverse and previously unseen vocabularies.\nOur study highlights the inherent complexities of part segmentation due to intricate boundaries and diverse granularity, reflecting the knowledge-based nature of part identification.\nTo address these challenges, we propose PartCLIPSeg, a novel framework utilizing generalized parts and object-level contexts to mitigate the lack of generalization in fine-grained parts.\nPartCLIPSeg integrates competitive part relationships and attention control, alleviating ambiguous boundaries and underrepresented parts.\nExperimental results demonstrate that PartCLIPSeg outperforms existing state-of-the-art OVPS methods, offering refined segmentation and an advanced understanding of part relationships within images.\nThrough extensive experiments, our model demonstrated a significant improvement over the state-of-the-art models on the Pascal-Part-116, ADE20K-Part-234, and PartImageNet datasets.", "pdf": "https://openreview.net/pdf/24a6fed180f3bbe6ae181943130cf3c182466fae.pdf"} {"title": "Model Decides How to Tokenize: Adaptive DNA Sequence Tokenization with MxDNA", "url": "https://openreview.net/forum?id=AQ1umQL7dZ", "detail_url": "https://openreview.net/forum?id=AQ1umQL7dZ", "authors": "Lifeng Qiao,Peng Ye,Yuchen Ren,Weiqiang Bai,chaoqi liang,Xinzhu Ma,Nanqing Dong,Wanli Ouyang", "tags": "NIPS 2024,Poster", "abstract": "Foundation models have made significant strides in understanding the genomic language of DNA sequences. However, previous models typically adopt the tokenization methods designed for natural language, which are unsuitable for DNA sequences due to their unique characteristics. In addition, the optimal approach to tokenize DNA remains largely under-explored, and may not be intuitively understood by humans even if discovered. To address these challenges, we introduce MxDNA, a novel framework where the model autonomously learns an effective DNA tokenization strategy through gradient descent.
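The simplex equiangular tight frame (ETF) targeted by the neural-collapse constraint above has a standard closed form: $M = \sqrt{C/(C-1)}\, U (I_C - \frac{1}{C}\mathbf{1}\mathbf{1}^\top)$ for any $U$ with orthonormal columns. The sketch below constructs one and verifies its defining property (unit-norm class vectors with identical pairwise inner products of $-1/(C-1)$); this is the textbook construction, not code from the paper.

```python
# Construct a C-class simplex ETF and check that every pair of class vectors
# meets at the same maximal angle.
import numpy as np

C, d = 5, 16  # number of classes and ambient dimension (d >= C - 1 assumed)
rng = np.random.default_rng(0)

# Orthonormal columns U in R^{d x C} via reduced QR of a random matrix.
U, _ = np.linalg.qr(rng.normal(size=(d, C)))

# M = sqrt(C/(C-1)) * U @ (I - (1/C) * 11^T)
M = np.sqrt(C / (C - 1)) * U @ (np.eye(C) - np.ones((C, C)) / C)

G = M.T @ M  # Gram matrix: 1 on the diagonal, -1/(C-1) off the diagonal
print(np.round(G, 3))
assert np.allclose(np.diag(G), 1.0)
assert np.allclose(G[~np.eye(C, dtype=bool)], -1.0 / (C - 1))
```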
MxDNA employs a sparse Mixture of Convolution Experts coupled with a deformable convolution to model the tokenization process, with the discontinuous, overlapping, and ambiguous nature of meaningful genomic segments explicitly considered. On Nucleotide Transformer Benchmarks and Genomic Benchmarks, MxDNA demonstrates superior performance to existing methods with less pretraining data and time, highlighting its effectiveness. Finally, we show that MxDNA learns a unique tokenization strategy distinct from those of previous methods and captures genomic functionalities at a token level during self-supervised pretraining. Our MxDNA aims to provide a new perspective on DNA tokenization, potentially offering broad applications in various domains and yielding profound insights. Code is available at https://github.com/qiaoqiaoLF/MxDNA.", "pdf": "https://openreview.net/pdf/3510b99799dbd0d18bebabb1ae8a85ae569f21ba.pdf"} {"title": "Causal discovery with endogenous context variables", "url": "https://openreview.net/forum?id=cU8d7LeOyx", "detail_url": "https://openreview.net/forum?id=cU8d7LeOyx", "authors": "Wiebke G\u00fcnther,Oana-Iuliana Popescu,Martin Rabel,Urmi Ninad,Andreas Gerhardus,Jakob Runge", "tags": "NIPS 2024,Poster", "abstract": "Systems with variations of the underlying generating mechanism between different contexts, i.e., different environments or internal states in which the system operates, are common in the real world, such as soil moisture regimes in Earth science. Besides understanding the shared properties of the system, in practice, the question of context-specific properties, i.e., the change in causal relationships between contexts, arises. For real-world data, contexts are often driven by system variables, e.g., precipitation highly influences soil moisture. Nevertheless, this setup remains understudied. To account for such endogenous contexts in causal discovery, our work proposes a constraint-based method that can efficiently discover context-specific causal graphs using an adaptive testing approach. Our approach tests conditional independence on the pooled datasets to infer the dependence between system variables, including the context, to avoid introducing selection bias. To yield context-specific insights, conditional independence is tested on context-specific data. We work out the theoretical framework for this adaptive testing approach and give a detailed discussion of the connection to structural causal models, including sufficiency assumptions, which allow us to prove the soundness of our algorithm and to interpret the results causally. A simulation study to evaluate numerical properties shows that our approach behaves as expected, but also leads to a further understanding of current limitations and viable extensions.", "pdf": "https://openreview.net/pdf/f83ed96927369731b03183b8838b92d7a061163b.pdf"} {"title": "Boosting the Potential of Large Language Models with an Intelligent Information Assistant", "url": "https://openreview.net/forum?id=oZy4a11SUg", "detail_url": "https://openreview.net/forum?id=oZy4a11SUg", "authors": "Yujia Zhou,Zheng Liu,Zhicheng Dou", "tags": "NIPS 2024,Poster", "abstract": "The emergence of Large Language Models (LLMs) has significantly advanced natural language processing, but these models often generate factually incorrect information, known as \"hallucination.\" Initial retrieval-augmented generation (RAG) methods like the \"Retrieve-Read\" framework were inadequate for complex reasoning tasks.
Subsequent prompt-based RAG strategies and Supervised Fine-Tuning (SFT) methods improved performance but required frequent retraining and risked altering foundational LLM capabilities. To cope with these challenges, we propose Assistant-based Retrieval-Augmented Generation (AssistRAG), integrating an intelligent information assistant within LLMs. This assistant manages memory and knowledge through tool usage, action execution, memory building, and plan specification. Using a two-phase training approach\u2014Curriculum Assistant Learning and Reinforced Preference Optimization\u2014AssistRAG enhances information retrieval and decision-making. Experiments show AssistRAG significantly outperforms benchmarks, especially benefiting less advanced LLMs, by providing superior reasoning capabilities and accurate responses.", "pdf": "https://openreview.net/pdf/a54fd782fb5a60604650797b3d2e049edfe0dc60.pdf"} {"title": "Understanding Generalizability of Diffusion Models Requires Rethinking the Hidden Gaussian Structure", "url": "https://openreview.net/forum?id=Sk2duBGvrK", "detail_url": "https://openreview.net/forum?id=Sk2duBGvrK", "authors": "Xiang Li,Yixiang Dai,Qing Qu", "tags": "NIPS 2024,Poster", "abstract": "In this work, we study the generalizability of diffusion models by looking into the hidden properties of the learned score functions, which are essentially a series of deep denoisers trained on various noise levels. We observe that as diffusion models transition from memorization to generalization, their corresponding nonlinear diffusion denoisers exhibit increasing linearity. This discovery leads us to investigate the linear counterparts of the nonlinear diffusion models, which are a series of linear models trained to match the function mappings of the nonlinear diffusion denoisers. Surprisingly, these linear denoisers are approximately the optimal denoisers for a multivariate Gaussian distribution characterized by the empirical mean and covariance of the training dataset. This finding implies that diffusion models have the inductive bias towards capturing and utilizing the Gaussian structure (covariance information) of the training dataset for data generation. We empirically demonstrate that this inductive bias is a unique property of diffusion models in the generalization regime, which becomes increasingly evident when the model's capacity is relatively small compared to the training dataset size. In the case that the model is highly overparameterized, this inductive bias emerges during the initial training phases before the model fully memorizes its training data. Our study provides crucial insights into understanding the notable strong generalization phenomenon recently observed in real-world diffusion models.", "pdf": "https://openreview.net/pdf/5f1744c4df89c02b9cc2dd2b8275f8fa2f2a82c2.pdf"} {"title": "DEX: Data Channel Extension for Efficient CNN Inference on Tiny AI Accelerators", "url": "https://openreview.net/forum?id=ftqjwZQz10", "detail_url": "https://openreview.net/forum?id=ftqjwZQz10", "authors": "Taesik Gong,Fahim Kawsar,Chulhong Min", "tags": "NIPS 2024,Poster", "abstract": "Tiny machine learning (TinyML) aims to run ML models on small devices and is increasingly favored for its enhanced privacy, reduced latency, and low cost. Recently, the advent of tiny AI accelerators has revolutionized the TinyML field by significantly enhancing hardware processing power. 
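The "hidden Gaussian structure" claim above has a simple closed form worth making explicit: for data $x \sim \mathcal{N}(\mu, \Sigma)$ observed as $y = x + \sigma\varepsilon$, the optimal (posterior-mean) denoiser is $\hat{x}(y) = \mu + \Sigma(\Sigma + \sigma^2 I)^{-1}(y - \mu)$, with $\mu, \Sigma$ taken as the empirical mean and covariance of the training set. A hedged numpy sketch on synthetic data (the data and dimensions are illustrative assumptions):

```python
# Closed-form optimal denoiser under a Gaussian model of the training data:
# E[x | y] = mu + Sigma (Sigma + sigma^2 I)^{-1} (y - mu).
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 2000

# "Training set": we only use its empirical mean and covariance, mirroring
# the Gaussian-structure argument.
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d)) * 0.5
mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)

def gaussian_denoise(y, sigma):
    """Posterior mean of x given y = x + sigma * noise, x ~ N(mu, Sigma)."""
    W = Sigma @ np.linalg.inv(Sigma + sigma**2 * np.eye(d))
    return mu + (y - mu) @ W.T

sigma = 1.0
x = X[:100]
y = x + sigma * rng.normal(size=x.shape)
x_hat = gaussian_denoise(y, sigma)
print("noisy MSE:    %.3f" % np.mean((y - x) ** 2))
print("denoised MSE: %.3f" % np.mean((x_hat - x) ** 2))
```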
These accelerators, equipped with multiple parallel processors and dedicated per-processor memory instances, offer substantial performance improvements over traditional microcontroller units (MCUs). However, their limited data memory often necessitates downsampling input images, resulting in accuracy degradation. To address this challenge, we propose Data channel EXtension (DEX), a novel approach for efficient CNN execution on tiny AI accelerators. DEX incorporates additional spatial information from original images into input images through patch-wise even sampling and channel-wise stacking, effectively extending data across input channels. By leveraging underutilized processors and data memory for channel extension, DEX facilitates parallel execution without increasing inference latency. Our evaluation with four models and four datasets on tiny AI accelerators demonstrates that this simple idea improves accuracy on average by 3.5%p while keeping the inference latency the same on the AI accelerator. The source code is available at https://github.com/Nokia-Bell-Labs/data-channel-extension.", "pdf": "https://openreview.net/pdf/a58b00abc3cb8a4a9433b4e43080c9a07ad6897c.pdf"} {"title": "Alignment for Honesty", "url": "https://openreview.net/forum?id=67K3Xlvw8L", "detail_url": "https://openreview.net/forum?id=67K3Xlvw8L", "authors": "Yuqing Yang,Ethan Chern,Xipeng Qiu,Graham Neubig,Pengfei Liu", "tags": "NIPS 2024,Poster", "abstract": "Recent research has made significant strides in aligning large language models (LLMs) with helpfulness and harmlessness. In this paper, we argue for the importance of alignment for \\emph{honesty}, ensuring that LLMs proactively refuse to answer questions when they lack knowledge, while still not being overly conservative. However, a pivotal aspect of alignment for honesty involves discerning an LLM's knowledge boundaries, which demands comprehensive solutions in terms of metric development, benchmark creation, and training methodologies. We address these challenges by first establishing a precise problem definition and defining ``honesty'' inspired by the Analects of Confucius. This serves as a cornerstone for developing metrics that effectively measure an LLM's honesty by quantifying its progress post-alignment. Furthermore, we introduce a flexible training framework which is further instantiated by several efficient fine-tuning techniques that emphasize honesty without sacrificing performance on other tasks. Our extensive experiments reveal that these aligned models show a marked increase in honesty, as indicated by our proposed metrics. We open-source all relevant resources to facilitate future research at \\url{https://github.com/GAIR-NLP/alignment-for-honesty}.", "pdf": "https://openreview.net/pdf/c6548d84031bdcffe89f7404a9b3bd66b96f6928.pdf"} {"title": "Accelerating Nash Equilibrium Convergence in Monte Carlo Settings Through Counterfactual Value Based Fictitious Play", "url": "https://openreview.net/forum?id=eFD9N5zdFC", "detail_url": "https://openreview.net/forum?id=eFD9N5zdFC", "authors": "Qi Ju,Falin Hei,Ting Feng,Dengbing Yi,Zhemei Fang,YunFeng Luo", "tags": "NIPS 2024,Poster", "abstract": "Counterfactual Regret Minimization (CFR) and its variants are widely recognized as effective algorithms for solving extensive-form imperfect information games. Recently, many improvements have been focused on enhancing the convergence speed of the CFR algorithm. 
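The DEX idea described above (patch-wise even sampling plus channel-wise stacking) is, at its core, an information-preserving rearrangement of pixels into channels. A minimal numpy sketch of that rearrangement follows; DEX's exact sampling pattern and its mapping onto per-processor memory are details this sketch does not attempt to reproduce.

```python
# DEX-style data channel extension: instead of discarding pixels by naive
# downsampling, evenly sample offset sub-grids and stack them along channels.
import numpy as np

def channel_extend(img: np.ndarray, s: int) -> np.ndarray:
    """(H, W, C) -> (H//s, W//s, C*s*s); every input pixel is kept."""
    H, W, C = img.shape
    assert H % s == 0 and W % s == 0
    subgrids = [img[i::s, j::s, :] for i in range(s) for j in range(s)]
    return np.concatenate(subgrids, axis=-1)

img = np.arange(8 * 8 * 3).reshape(8, 8, 3).astype(np.float32)
out = channel_extend(img, s=2)
print(img.shape, "->", out.shape)  # (8, 8, 3) -> (4, 4, 12)
assert out.size == img.size       # no information is thrown away
```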
However, most of these variants are not applicable under Monte Carlo (MC) conditions, making them unsuitable for training in large-scale games. We introduce a new MC-based algorithm for solving extensive-form imperfect information games, called MCCFVFP (Monte Carlo Counterfactual Value-Based Fictitious Play). MCCFVFP combines CFR\u2019s counterfactual value calculations with fictitious play\u2019s best response strategy, leveraging the strengths of fictitious play to gain significant advantages in games with a high proportion of dominated strategies. Experimental results show that MCCFVFP achieved convergence speeds approximately 20\\%$\\sim$50\\% faster than the most advanced MCCFR variants in games like poker and other test games.", "pdf": "https://openreview.net/pdf/4a5cff4d4023b097b160408568c93151acd76762.pdf"} {"title": "Dual-Personalizing Adapter for Federated Foundation Models", "url": "https://openreview.net/forum?id=nkwPiBSw1f", "detail_url": "https://openreview.net/forum?id=nkwPiBSw1f", "authors": "yiyuan yang,Guodong Long,Tao Shen,Jing Jiang,Michael Blumenstein", "tags": "NIPS 2024,Poster", "abstract": "Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks by fine-tuning diverse instruction data. Notably, federated foundation models (FedFM) emerge as a privacy preservation method to fine-tune models collaboratively under federated learning (FL) settings by leveraging many distributed datasets with non-IID data. To alleviate communication and computation overhead, parameter-efficient methods are introduced for efficiency, and some research adapted personalization methods to FedFM for better user preferences alignment. However, a critical gap in existing research is the neglect of test-time distribution shifts in real-world applications, and conventional methods for test-time distribution shifts in personalized FL are less effective for FedFM due to their failure to adapt to complex distribution shift scenarios and the requirement to train all parameters. To bridge this gap, we refine the setting in FedFM, termed test-time personalization, which aims to learn personalized federated foundation models on clients while effectively handling test-time distribution shifts simultaneously. To address challenges in this setting, we explore a simple yet effective solution, a Federated Dual-Personalizing Adapter (FedDPA) architecture. By co-working with a foundation model, a global adapter and a local adapter jointly tackle the test-time distribution shifts and client-specific personalization. Additionally, we introduce an instance-wise dynamic weighting mechanism that dynamically integrates the global and local adapters for each test instance during inference, facilitating effective test-time personalization. The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.", "pdf": "https://openreview.net/pdf/a6b36839b4ee3d9c5837662362dac36c10c4e38c.pdf"} {"title": "RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models", "url": "https://openreview.net/forum?id=xgP5ynlZWf", "detail_url": "https://openreview.net/forum?id=xgP5ynlZWf", "authors": "Haoyu Chen,Wenbo Li,Jinjin Gu,Jingjing Ren,Sixiang Chen,Tian Ye,Renjing Pei,Kaiwen Zhou,Fenglong Song,Lei Zhu", "tags": "NIPS 2024,Poster", "abstract": "Natural images captured by mobile devices often suffer from multiple types of degradation, such as noise, blur, and low light. 
Traditional image restoration methods require manual selection of specific tasks, algorithms, and execution sequences, which is time-consuming and may yield suboptimal results. All-in-one models, though capable of handling multiple tasks, typically support only a limited range and often produce overly smooth, low-fidelity outcomes due to their broad data distribution fitting. To address these challenges, we first define a new pipeline for restoring images with multiple degradations, and then introduce RestoreAgent, an intelligent image restoration system leveraging multimodal large language models. RestoreAgent autonomously assesses the type and extent of degradation in input images and performs restoration through (1) determining the appropriate restoration tasks, (2) optimizing the task sequence, (3) selecting the most suitable models, and (4) executing the restoration. Experimental results demonstrate the superior performance of RestoreAgent in handling complex degradation, surpassing human experts. Furthermore, the system\u2019s modular design facilitates the fast integration of new tasks and models.", "pdf": "https://openreview.net/pdf/a7abb52bb68d236e32bce92953c8abf4bfa5f495.pdf"} {"title": "Listenable Maps for Zero-Shot Audio Classifiers", "url": "https://openreview.net/forum?id=lV1wGHKd5x", "detail_url": "https://openreview.net/forum?id=lV1wGHKd5x", "authors": "Francesco Paissan,Luca Della Libera,Mirco Ravanelli,Cem Subakan", "tags": "NIPS 2024,Poster", "abstract": "Interpreting the decisions of deep learning models, including audio classifiers, is crucial for ensuring the transparency and trustworthiness of this technology. In this paper, we introduce LMAC-ZS (Listenable Maps for Zero-Shot Audio Classifiers), which, to the best of our knowledge, is the first decoder-based post-hoc explanation method for explaining the decisions of zero-shot audio classifiers. The proposed method utilizes a novel loss function that aims to closely reproduce the original similarity patterns between text-and-audio pairs in the generated explanations. We provide an extensive evaluation using the Contrastive Language-Audio Pretraining (CLAP) model to showcase that our interpreter remains faithful to the decisions in a zero-shot classification context. Moreover, we qualitatively show that our method produces meaningful explanations that correlate well with different text prompts.", "pdf": "https://openreview.net/pdf/59598241497828f0093d082906906570bdbf6a0a.pdf"} {"title": "Confidence Calibration of Classifiers with Many Classes", "url": "https://openreview.net/forum?id=ebBnKVxMcZ", "detail_url": "https://openreview.net/forum?id=ebBnKVxMcZ", "authors": "Adrien Le Coz,St\u00e9phane Herbin,Faouzi Adjed", "tags": "NIPS 2024,Poster", "abstract": "For classification models based on neural networks, the maximum predicted class probability is often used as a confidence score. This score rarely predicts well the probability of making a correct prediction and requires a post-processing calibration step. However, many confidence calibration methods fail for problems with many classes. To address this issue, we transform the problem of calibrating a multiclass classifier into calibrating a single surrogate binary classifier. This approach allows for more efficient use of standard calibration methods. 
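One way to read the reduction just described: score each sample by its maximum predicted probability, label it by whether the prediction was correct, and calibrate that binary problem. The sketch below implements this reading with Platt-style logistic calibration; the paper's exact surrogate construction may differ, and the synthetic probabilities stand in for a real model's outputs on a held-out calibration split.

```python
# Calibrating a many-class classifier through a surrogate binary problem:
# score = max predicted probability, label = "prediction was correct".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, K = 5000, 100

# Synthetic overconfident softmax outputs for a K-class problem.
logits = 3.0 * rng.normal(size=(n, K))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y_true = np.array([rng.choice(K, p=p) if rng.random() < 0.5 else rng.integers(K)
                   for p in probs])

conf = probs.max(axis=1)                  # surrogate binary score
correct = probs.argmax(axis=1) == y_true  # surrogate binary label

# Platt-style calibration of the surrogate problem.
calib = LogisticRegression().fit(conf.reshape(-1, 1), correct.astype(int))
calibrated = calib.predict_proba(conf.reshape(-1, 1))[:, 1]

print("mean raw confidence:        %.3f" % conf.mean())
print("mean calibrated confidence: %.3f" % calibrated.mean())
print("empirical accuracy:         %.3f" % correct.mean())
```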
We evaluate our approach on numerous neural networks used for image or text classification and show that it significantly enhances existing calibration methods.", "pdf": "https://openreview.net/pdf/2fdec42fe147a3473c7412499f0b735571f4d0a3.pdf"} {"title": "Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions", "url": "https://openreview.net/forum?id=7CUUtpDeqN", "detail_url": "https://openreview.net/forum?id=7CUUtpDeqN", "authors": "Chaitanya Goswami,Amanda Merkley", "tags": "NIPS 2024,Poster", "abstract": "Bivariate partial information decomposition (PID) has emerged as a promising tool for analyzing interactions in complex systems, particularly in neuroscience. PID achieves this by decomposing the information that two sources (e.g., different brain regions) have about a target (e.g., a stimulus) into unique, redundant, and synergistic terms. However, the computation of PID remains a challenging problem, often involving optimization over distributions. While several works have been proposed to compute PID terms numerically, there is a surprising dearth of work on computing PID terms analytically. The only known analytical PID result is for jointly Gaussian distributions. In this work, we present two theoretical advances that enable analytical calculation of the PID terms for numerous well-known distributions, including distributions relevant to neuroscience, such as Poisson, Cauchy, and binomial. Our first result generalizes the analytical Gaussian PID result to the much larger class of stable distributions. We also discover a theoretical link between PID and the emerging fields of data thinning and data fission. Our second result utilizes this link to derive analytical PID terms for two more classes of distributions: convolution-closed distributions and a sub-class of the exponential family. Furthermore, we provide an analytical upper bound for approximately calculating PID for convolution-closed distributions, whose tightness we demonstrate in simulation.", "pdf": "https://openreview.net/pdf/b816f7e6b67cf16e8aca58ae089a7fd2b4d13e38.pdf"} {"title": "Exocentric-to-Egocentric Video Generation", "url": "https://openreview.net/forum?id=UHDCbIrCFL", "detail_url": "https://openreview.net/forum?id=UHDCbIrCFL", "authors": "Jia-Wei Liu,Weijia Mao,Zhongcong Xu,Jussi Keppo,Mike Zheng Shou", "tags": "NIPS 2024,Poster", "abstract": "We introduce Exo2Ego-V, a novel exocentric-to-egocentric diffusion-based video generation method for daily-life skilled human activities where sparse 4-view exocentric viewpoints are configured 360\u00b0 around the scene. This task is particularly challenging due to the significant variations between exocentric and egocentric viewpoints and the high complexity of dynamic motions and real-world daily-life environments. To address these challenges, we first propose a new diffusion-based multi-view exocentric encoder to extract the dense multi-scale features from multi-view exocentric videos as the appearance conditions for egocentric video generation. Then, we design an exocentric-to-egocentric view translation prior to provide spatially aligned egocentric features as a concatenation guidance for the input of the egocentric video diffusion model. Finally, we introduce temporal attention layers into our egocentric video diffusion pipeline to improve the temporal consistency across egocentric frames.
Extensive experiments demonstrate that Exo2Ego-V significantly outperforms SOTA approaches on 5 categories from the Ego-Exo4D dataset by an average of 35% in terms of LPIPS. Our code and model will be made available on https://github.com/showlab/Exo2Ego-V.", "pdf": "https://openreview.net/pdf/9be37c4df614dc3a32b4bf3050dbf9c0ad7429c2.pdf"} {"title": "Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning", "url": "https://openreview.net/forum?id=xaqPAkJnAS", "detail_url": "https://openreview.net/forum?id=xaqPAkJnAS", "authors": "Zhixiang Shen,Shuo Wang,zhao kang", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised Multiplex Graph Learning (UMGL) aims to learn node representations on various edge types without manual labeling. However, existing research overlooks a key factor: the reliability of the graph structure. Real-world data often exhibit a complex nature and contain abundant task-irrelevant noise, severely compromising UMGL's performance. Moreover, existing methods primarily rely on contrastive learning to maximize mutual information across different graphs, limiting them to multiplex graph redundant scenarios and failing to capture view-unique task-relevant information. In this paper, we focus on a more realistic and challenging task: to unsupervisedly learn a fused graph from multiple graphs that preserves sufficient task-relevant information while removing task-irrelevant noise. Specifically, our proposed Information-aware Unsupervised Multiplex Graph Fusion framework (InfoMGF) uses graph structure refinement to eliminate irrelevant noise and simultaneously maximizes view-shared and view-unique task-relevant information, thereby tackling the frontier of non-redundant multiplex graphs. Theoretical analyses further guarantee the effectiveness of InfoMGF. Comprehensive experiments against various baselines on different downstream tasks demonstrate its superior performance and robustness. Surprisingly, our unsupervised method even beats the sophisticated supervised approaches. The source code and datasets are available at https://github.com/zxlearningdeep/InfoMGF.", "pdf": "https://openreview.net/pdf/e5b9f6af4bcc1edd63aea9284ca2c3aba26fc5b0.pdf"} {"title": "P$^2$C$^2$Net: PDE-Preserved Coarse Correction Network for efficient prediction of spatiotemporal dynamics", "url": "https://openreview.net/forum?id=motImXq3B1", "detail_url": "https://openreview.net/forum?id=motImXq3B1", "authors": "Qi Wang,Pu Ren,Hao Zhou,Xin-Yang Liu,Zhiwen Deng,Yi Zhang,Ruizhi Chengze,Hongsheng Liu,Zidong Wang,Jian-Xun Wang,Ji-Rong Wen,Hao Sun,Yang Liu", "tags": "NIPS 2024,Poster", "abstract": "When solving partial differential equations (PDEs), classical numerical methods often require fine mesh grids and small time stepping to meet stability, consistency, and convergence conditions, leading to high computational cost. Recently, machine learning has been increasingly utilized to solve PDE problems, but such approaches often encounter challenges related to interpretability, generalizability, and strong dependency on rich labeled data. Hence, we introduce a new PDE-Preserved Coarse Correction Network (P$^2$C$^2$Net) to efficiently solve spatiotemporal PDE problems on coarse mesh grids in small data regimes.
The model consists of two synergistic modules: (1) a trainable PDE block that learns to update the coarse solution (i.e., the system state), based on a high-order numerical scheme with boundary condition encoding, and (2) a neural network block that consistently corrects the solution on the fly. In particular, we propose a learnable symmetric Conv filter, with weights shared over the entire model, to accurately estimate the spatial derivatives of the PDE based on the neural-corrected system state. The resulting physics-encoded model is capable of handling limited training data (e.g., 3--5 trajectories) and accelerates the prediction of PDE solutions on coarse spatiotemporal grids while maintaining a high accuracy. P$^2$C$^2$Net achieves consistent state-of-the-art performance with over 50\% gain (e.g., in terms of relative prediction error) across four datasets covering complex reaction-diffusion processes and turbulent flows.", "pdf": "https://openreview.net/pdf/3eff2a5e3ed72cce48c48a7dd401562658d4cb4b.pdf"} {"title": "A Comprehensive Analysis on the Learning Curve in Kernel Ridge Regression", "url": "https://openreview.net/forum?id=IMlDpZmLnL", "detail_url": "https://openreview.net/forum?id=IMlDpZmLnL", "authors": "Tin Sum Cheng,Aurelien Lucchi,Anastasis Kratsios,David Belius", "tags": "NIPS 2024,Poster", "abstract": "This paper conducts a comprehensive study of the learning curves of kernel ridge regression (KRR) under minimal assumptions.\nOur contributions are three-fold: 1) we analyze the role of key properties of the kernel, such as its spectral eigen-decay, the characteristics of the eigenfunctions, and the smoothness of the kernel; 2) we demonstrate the validity of the Gaussian Equivalent Property (GEP), which states that the generalization performance of KRR remains the same when the whitened features are replaced by standard Gaussian vectors, thereby shedding light on the success of previous analyses under the Gaussian Design Assumption; 3) we derive novel bounds that improve over existing bounds across a broad range of settings such as (in)dependent feature vectors and various combinations of eigen-decay rates in the over/underparameterized regimes.", "pdf": "https://openreview.net/pdf/517d3242b4f15717eb724441321c1e8d2e0a9a7b.pdf"} {"title": "Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking", "url": "https://openreview.net/forum?id=yVzWlFhpRW", "detail_url": "https://openreview.net/forum?id=yVzWlFhpRW", "authors": "Roland Stolz,Hanna Krasowski,Jakob Thumm,Michael Eichelbeck,Philipp Gassert,Matthias Althoff", "tags": "NIPS 2024,Poster", "abstract": "Continuous action spaces in reinforcement learning (RL) are commonly defined as multidimensional intervals. While intervals usually reflect the action boundaries for tasks well, they can be challenging for learning because the typically large global action space leads to frequent exploration of irrelevant actions. Yet, little task knowledge can be sufficient to identify significantly smaller state-specific sets of relevant actions. Focusing learning on these relevant actions can significantly improve training efficiency and effectiveness. In this paper, we propose to focus learning on the set of relevant actions and introduce three continuous action masking methods for exactly mapping the action space to the state-dependent set of relevant actions.
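The learnable symmetric convolution filter in P$^2$C$^2$Net estimates spatial derivatives on the coarse grid. A fixed finite-difference analogue shows what such a filter computes: the symmetric 5-point stencil below recovers the Laplacian of a smooth field to second order. In the model the stencil weights are learned and shared; here they are hard-coded purely for illustration.

```python
# A symmetric convolution stencil estimating a spatial derivative: the
# 5-point Laplacian applied to u(x, y) = sin(x) sin(y), whose true
# Laplacian is -2 u.
import numpy as np
from scipy.ndimage import convolve

h = 2 * np.pi / 64
x = np.arange(64) * h
u = np.sin(x)[:, None] * np.sin(x)[None, :]

stencil = np.array([[0.0,  1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0,  1.0, 0.0]]) / h**2

lap = convolve(u, stencil, mode="wrap")  # periodic boundary conditions
print("max error vs analytic -2u: %.2e" % np.abs(lap + 2.0 * u).max())
```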
Thus, our methods ensure that only relevant actions are executed, enhancing the predictability of the RL agent and enabling its use in safety-critical applications. We further derive the implications of the proposed methods on the policy gradient. Using proximal policy optimization (PPO), we evaluate our methods on four control tasks, where the relevant action set is computed based on the system dynamics and a relevant state set. Our experiments show that the three action masking methods achieve higher final rewards and converge faster than the baseline without action masking.", "pdf": "https://openreview.net/pdf/8a1eecafd6f9a1b8919878b8b034860552119122.pdf"} {"title": "Only Strict Saddles in the Energy Landscape of Predictive Coding Networks?", "url": "https://openreview.net/forum?id=eTu6kvrkSq", "detail_url": "https://openreview.net/forum?id=eTu6kvrkSq", "authors": "Francesco Innocenti,El Mehdi Achour,Ryan Singh,Christopher Buckley", "tags": "NIPS 2024,Poster", "abstract": "Predictive coding (PC) is an energy-based learning algorithm that performs iterative inference over network activities before updating weights. Recent work suggests that PC can converge in fewer learning steps than backpropagation thanks to its inference procedure. However, these advantages are not always observed, and the impact of PC inference on learning is not theoretically well understood. Here, we study the effective landscape on which PC learns: the PC energy function at the inference equilibrium of the network activities. For deep linear networks, we first show that the equilibrated energy is simply a rescaled mean squared error loss with a weight-dependent rescaling. We then prove that many highly degenerate (non-strict) saddles of the loss including the origin become much easier to escape (strict) in the equilibrated energy. Our theory is validated by experiments on both linear and non-linear networks. Based on these and other results, we conjecture that all the saddles of the equilibrated energy are strict. Overall, this work suggests that PC inference makes the loss landscape more benign and robust to vanishing gradients, while also highlighting the fundamental challenge of scaling PC to deeper models.", "pdf": "https://openreview.net/pdf/bd4827e6206f07dbb002d1d6f6963f3ff722e7c7.pdf"} {"title": "Adversarial Schr\u00f6dinger Bridge Matching", "url": "https://openreview.net/forum?id=L3Knnigicu", "detail_url": "https://openreview.net/forum?id=L3Knnigicu", "authors": "Nikita Gushchin,Daniil Selikhanovych,Sergei Kholkin,Evgeny Burnaev,Alexander Korotin", "tags": "NIPS 2024,Poster", "abstract": "The Schr\u00f6dinger Bridge (SB) problem offers a powerful framework for combining optimal transport and diffusion models. A promising recent approach to solve the SB problem is the Iterative Markovian Fitting (IMF) procedure, which alternates between Markovian and reciprocal projections of continuous-time stochastic processes. However, the model built by the IMF procedure has a long inference time due to using many steps of numerical solvers for stochastic differential equations. To address this limitation, we propose a novel Discrete-time IMF (D-IMF) procedure in which learning of stochastic processes is replaced by learning just a few transition probabilities in discrete time. Its great advantage is that in practice it can be naturally implemented using the Denoising Diffusion GAN (DD-GAN), an already well-established adversarial generative modeling technique.
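For the simplest case of the continuous action masking just described, where the state-dependent relevant set is a box, the mapping can be a plain affine rescaling of the policy's raw output. A sketch under that box assumption (the paper also treats more general relevant sets and derives the policy-gradient implications, which this snippet ignores):

```python
# Continuous action masking for a box-shaped relevant set: map the policy's
# raw action in [-1, 1]^d exactly onto the state-dependent box [low, high].
import numpy as np

def mask_action(raw_action, low, high):
    """Affine map from [-1, 1]^d onto [low, high]; only relevant actions execute."""
    raw_action = np.clip(raw_action, -1.0, 1.0)
    return low + (raw_action + 1.0) * 0.5 * (high - low)

# Hypothetical bounds: in this state only accelerations in [0.2, 0.5] are
# relevant (e.g., computed from the system dynamics).
low, high = np.array([0.2]), np.array([0.5])
for raw in (-1.0, 0.0, 1.0):
    print(raw, "->", mask_action(np.array([raw]), low, high))
```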
We show that our D-IMF procedure can provide the same quality of unpaired domain translation as the IMF, using only a few generation steps instead of hundreds.", "pdf": "https://openreview.net/pdf/0646965ea9dea938f0b85308b5bd90c83d799515.pdf"} {"title": "GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning", "url": "https://openreview.net/forum?id=SaodQ13jga", "detail_url": "https://openreview.net/forum?id=SaodQ13jga", "authors": "Yanbin Wei,Shuai Fu,Weisen Jiang,Zejian Zhang,Zhixiong Zeng,Qi Wu,James Kwok,Yu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) are increasingly used for various tasks with graph structures. Though LLMs can process graph information in a textual format, they overlook the rich vision modality, which is an intuitive way for humans to comprehend structural information and conduct general graph reasoning. The potential benefits and capabilities of representing graph structures as visual images (i.e., $\textit{visual graph}$) are still unexplored. To fill the gap, we innovatively propose an end-to-end framework, called $\textbf{G}$raph to v$\textbf{I}$sual and $\textbf{T}$extual Integr$\textbf{A}$tion (GITA), which first incorporates visual graphs into general graph reasoning. Besides, we establish the $\textbf{G}$raph-based $\textbf{V}$ision-$\textbf{L}$anguage $\textbf{Q}$uestion $\textbf{A}$nswering (GVLQA) dataset from existing graph data, which is the first vision-language dataset for general graph reasoning purposes. Extensive experiments on the GVLQA dataset and five real-world datasets show that GITA outperforms mainstream LLMs in terms of general graph reasoning capabilities. Moreover, we highlight the effectiveness of the layout augmentation on visual graphs and pretraining on the GVLQA dataset.", "pdf": "https://openreview.net/pdf/7fa41bc3ad751f40f27c2265bb5c2d0a2dcd60b7.pdf"} {"title": "Model Sensitivity Aware Continual Learning", "url": "https://openreview.net/forum?id=B5vQ7IQW7d", "detail_url": "https://openreview.net/forum?id=B5vQ7IQW7d", "authors": "Zhenyi Wang,Heng Huang", "tags": "NIPS 2024,Poster", "abstract": "Continual learning (CL) aims to adapt to non-stationary data distributions while retaining previously acquired knowledge. However, CL models typically face a trade-off between preserving old task knowledge and excelling in new task performance. Existing approaches often sacrifice one for the other. To overcome this limitation, orthogonal to existing approaches, we propose a novel perspective that views the CL model's ability to preserve old knowledge and perform well in new tasks as a matter of model sensitivity to parameter updates. \textit{Excessive} parameter sensitivity can lead to two drawbacks: (1) significant forgetting of previous knowledge; and (2) overfitting to new tasks. To reduce parameter sensitivity, we optimize the model's performance based on the parameter distribution, which achieves the worst-case CL performance within a distribution neighborhood. This innovative learning paradigm offers dual benefits: (1) reduced forgetting of old knowledge by mitigating drastic changes in model predictions under small parameter updates; and (2) enhanced new task performance by preventing overfitting to new tasks.
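The "visual graph" input that GITA reasons over can be produced by any graph-drawing stack; the sketch below renders a small graph to an image with networkx and matplotlib as stand-in tooling (not necessarily GITA's own renderer), with the layout seed exposed since the abstract highlights layout augmentation.

```python
# Render a graph's structure to an image so a vision-language model can
# consume it alongside a textual description.
import networkx as nx
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

G = nx.cycle_graph(6)
G.add_edge(0, 3)  # a chord, so structural questions are non-trivial

pos = nx.spring_layout(G, seed=0)  # the layout is itself an augmentation knob
nx.draw(G, pos, with_labels=True, node_color="lightblue", edge_color="gray")
plt.savefig("visual_graph.png", dpi=150)
plt.close()

print("edges:", list(G.edges))  # the paired textual view of the same graph
```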
Consequently, our method achieves a superior ability to retain old knowledge while attaining excellent new-task performance simultaneously.\nImportantly, our approach is compatible with existing CL methodologies, allowing seamless integration while delivering significant improvements in effectiveness, efficiency, and versatility with both theoretical and empirical support.", "pdf": "https://openreview.net/pdf/15dfef6e6816715617be7c10498f9f3d1daeff59.pdf"} {"title": "FNP: Fourier Neural Processes for Arbitrary-Resolution Data Assimilation", "url": "https://openreview.net/forum?id=4rrNcsVPDm", "detail_url": "https://openreview.net/forum?id=4rrNcsVPDm", "authors": "Kun Chen,Peng Ye,Hao Chen,kang chen,Tao Han,Wanli Ouyang,Tao Chen,LEI BAI", "tags": "NIPS 2024,Poster", "abstract": "Data assimilation is a vital component in modern global medium-range weather forecasting systems to obtain the best estimation of the atmospheric state by combining the short-term forecast and observations. Recently, AI-based data assimilation approaches have attracted increasing attention for their significant advantages over traditional techniques in terms of computational consumption. However, existing AI-based data assimilation methods can only handle observations with a specific resolution, lacking the compatibility and generalization ability to assimilate observations with other resolutions. Considering that complex real-world observations often have different resolutions, we propose the Fourier Neural Processes (FNP) for arbitrary-resolution data assimilation in this paper. Leveraging the efficiency of the designed modules and flexible structure of neural processes, FNP achieves state-of-the-art results in assimilating observations with varying resolutions, and also exhibits increasing advantages over the counterparts as the resolution and the amount of observations increase. Moreover, our FNP trained on a fixed resolution can directly handle the assimilation of observations with out-of-distribution resolutions and the observational information reconstruction task without additional fine-tuning, demonstrating its excellent generalization ability across data resolutions as well as across tasks. Code is available at https://github.com/OpenEarthLab/FNP.", "pdf": "https://openreview.net/pdf/642c98dce45e109b626715decbee9a827be6d309.pdf"} {"title": "Prediction-Powered Ranking of Large Language Models", "url": "https://openreview.net/forum?id=7V62sQ5Jra", "detail_url": "https://openreview.net/forum?id=7V62sQ5Jra", "authors": "Ivi Chatzi,Eleni Straitouri,Suhas Thejaswi,Manuel Gomez Rodriguez", "tags": "NIPS 2024,Poster", "abstract": "Large language models are often ranked according to their level of alignment with human preferences---a model is better than other models if its outputs are more frequently preferred by humans. One of the popular ways to elicit human preferences utilizes pairwise comparisons between the outputs provided by different models to the same inputs. However, since gathering pairwise comparisons by humans is costly and time-consuming, it has become a common practice to gather pairwise comparisons by a strong large language model---a model strongly aligned with human preferences. Surprisingly, practitioners cannot currently measure the uncertainty that any mismatch between human and model preferences may introduce in the constructed rankings. In this work, we develop a statistical framework to bridge this gap.
Given a (small) set of pairwise comparisons by humans and a large set of pairwise comparisons by a model, our framework provides a rank-set---a set of possible ranking positions---for each of the models under comparison. Moreover, it guarantees that, with a probability greater than or equal to a user-specified value, the rank-sets cover the true ranking consistent with the distribution of human pairwise preferences asymptotically. Using pairwise comparisons made by humans in the LMSYS Chatbot Arena platform and pairwise comparisons made by three strong large language models, we empirically demonstrate the effectiveness of our framework and show that the rank-sets constructed using only pairwise comparisons by the strong large language models are often inconsistent with (the distribution of) human pairwise preferences.", "pdf": "https://openreview.net/pdf/f8582c38c660b35bb79a235d90496710aa512bc8.pdf"} {"title": "Scalable Constrained Policy Optimization for Safe Multi-agent Reinforcement Learning", "url": "https://openreview.net/forum?id=pJlFURyTG5", "detail_url": "https://openreview.net/forum?id=pJlFURyTG5", "authors": "Lijun Zhang,Lin Li,Wei Wei,Huizhong Song,Yaodong Yang,Jiye Liang", "tags": "NIPS 2024,Poster", "abstract": "A challenging problem in seeking to bring multi-agent reinforcement learning (MARL) techniques into real-world applications, such as autonomous driving and drone swarms, is how to control multiple agents safely and cooperatively to accomplish tasks. Most existing safe MARL methods learn the centralized value function by introducing a global state to guide safety cooperation. However, the global coupling arising from agents\u2019 safety constraints and the exponential growth of the state-action space size limit their applicability in instant communication or computing resource-constrained systems and larger multi-agent systems. In this paper, we develop a novel scalable and theoretically justified multi-agent constrained policy optimization method. This method utilizes the rigorous bounds of the trust region method and the bounds of the truncated advantage function to provide a new local policy optimization objective for each agent. Also, we prove that the safety constraints and the joint policy improvement can be met when each agent adopts a sequential update scheme to optimize a $\kappa$-hop policy. Then, we propose a practical algorithm called Scalable MAPPO-Lagrangian (Scal-MAPPO-L). The proposed method\u2019s effectiveness is verified on a collection of benchmark tasks, and the results support our theory that decentralized training with local interactions can still improve reward performance and satisfy safe constraints.", "pdf": "https://openreview.net/pdf/928f1f6ddcbbdb2f197603e2a4dedf1d3eade0fa.pdf"} {"title": "SE(3)-bi-equivariant Transformers for Point Cloud Assembly", "url": "https://openreview.net/forum?id=EehS4erXWB", "detail_url": "https://openreview.net/forum?id=EehS4erXWB", "authors": "Ziming Wang,Rebecka J\u00f6rnsten", "tags": "NIPS 2024,Poster", "abstract": "Given a pair of point clouds, the goal of assembly is to recover a rigid transformation that aligns one point cloud to the other. This task is challenging because the point clouds may be non-overlapped, and they may have arbitrary initial positions.
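A simplified version of the rank-set construction sketched above: estimate each model's average win rate with a confidence interval, then let a model's rank-set run from (1 + number of models whose interval lies strictly above) to (M - number strictly below). This plain-CI variant is an illustrative assumption; the paper's prediction-powered intervals additionally combine the small human set with the large model-annotated set.

```python
# Rank-sets from confidence intervals on pairwise win rates (simplified).
import numpy as np

rng = np.random.default_rng(0)
models, n = ["A", "B", "C", "D"], 200

true_strength = np.array([0.70, 0.65, 0.50, 0.30])
wins = rng.binomial(1, true_strength[:, None], size=(4, n))  # duel outcomes

mean = wins.mean(axis=1)
half = 1.96 * wins.std(axis=1, ddof=1) / np.sqrt(n)  # ~95% CI half-width
lo, hi = mean - half, mean + half

for i, m in enumerate(models):
    above = sum(lo[j] > hi[i] for j in range(4) if j != i)  # surely better models
    below = sum(hi[j] < lo[i] for j in range(4) if j != i)  # surely worse models
    print(f"{m}: win rate {mean[i]:.2f}, rank-set {{{1 + above}, ..., {4 - below}}}")
# Overlapping intervals (here A and B) yield rank-sets wider than a single rank.
```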
To address these difficulties, we propose a method, called $SE(3)$-bi-equivariant transformer (BITR), based on the $SE(3)$-bi-equivariance prior of the task: it guarantees that when the inputs are rigidly perturbed, the output will transform accordingly. Due to its equivariance property, BITR can not only handle non-overlapped point clouds, but also guarantee robustness against initial positions. Specifically, BITR first extracts features of the inputs using a novel $SE(3) \times SE(3)$-transformer, and then projects the learned feature to group $SE(3)$ as the output. Moreover, we theoretically show that swap and scale equivariances can be incorporated into BITR, and thus it further guarantees stable performance under scaling and swapping the inputs. We experimentally show the effectiveness of BITR in practical tasks.", "pdf": "https://openreview.net/pdf/ef3dcf15826026311f6afa33fe4c47f945767f2d.pdf"} {"title": "$\boldsymbol{\mu}\mathbf{P^2}$: Effective Sharpness Aware Minimization Requires Layerwise Perturbation Scaling", "url": "https://openreview.net/forum?id=pR5g1bBqoV", "detail_url": "https://openreview.net/forum?id=pR5g1bBqoV", "authors": "Moritz Haas,Jin Xu,Volkan Cevher,Leena Chennuru Vankadara", "tags": "NIPS 2024,Poster", "abstract": "Sharpness Aware Minimization (SAM) enhances performance across various neural architectures and datasets. As models are continually scaled up to improve performance, a rigorous understanding of SAM\u2019s scaling behaviour is paramount. To this end, we study the infinite-width limit of neural networks trained with SAM, using the Tensor Programs framework. Our findings reveal that the dynamics of standard SAM effectively reduce to applying SAM solely in the last layer in wide neural networks, even with optimal hyperparameters. In contrast, we identify a stable parameterization with layerwise perturbation scaling, which we call *Maximal Update and Perturbation Parameterization* ($\mu$P$^2$), that ensures all layers are both feature learning and effectively perturbed in the limit. Through experiments with MLPs, ResNets and Vision Transformers, we empirically demonstrate that $\mu$P$^2$ is the first parameterization to achieve hyperparameter transfer of the joint optimum of learning rate and perturbation radius across model scales. Moreover, we provide an intuitive condition to derive $\mu$P$^2$ for other perturbation rules like Adaptive SAM and SAM-ON, also ensuring balanced perturbation effects across all layers.", "pdf": "https://openreview.net/pdf/4ab6f536cd02f98c5bbbcf339eaffb8b64552fed.pdf"} {"title": "ST$_k$: A Scalable Module for Solving Top-k Problems", "url": "https://openreview.net/forum?id=OdJKB9jSa5", "detail_url": "https://openreview.net/forum?id=OdJKB9jSa5", "authors": "Hanchen Xia,Weidong Liu,Xiaojun Mao", "tags": "NIPS 2024,Poster", "abstract": "The cost of ranking becomes significant in the new stage of deep learning. We propose ST$_k$, a fully differentiable module with a single trainable parameter, designed to solve the Top-k problem without requiring additional time or GPU memory. Due to its fully differentiable nature, ST$_k$ can be embedded end-to-end into neural networks and optimize the Top-k problems within a unified computational graph. We apply ST$_k$ to the Average Top-k Loss (AT$_k$), which inherently faces a Top-k problem. The proposed ST$_k$ Loss outperforms AT$_k$ Loss and achieves the best average performance on multiple benchmarks, with the lowest standard deviation.
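A classical identity explains how a single trainable scalar can make a top-$k$ objective differentiable: the average of the top-$k$ values of $l_1,\dots,l_n$ equals $\min_{\lambda}\{\lambda + \frac{1}{k}\sum_i [l_i - \lambda]_+\}$, so optimizing $\lambda$ jointly with the network yields a smooth surrogate. Whether ST$_k$ uses exactly this construction is an assumption; the identity itself is standard in the average top-$k$ (AT$_k$) literature, and the sketch below only verifies it numerically.

```python
# Check: (1/k) * sum(top-k of l) == min over lambda of
#        lambda + (1/k) * sum(relu(l - lambda)).
import numpy as np

rng = np.random.default_rng(0)
l, k = rng.exponential(size=50), 10

direct = np.sort(l)[-k:].mean()

lambdas = np.linspace(0.0, l.max(), 100_001)
surrogate = lambdas + np.maximum(l[None, :] - lambdas[:, None], 0.0).sum(axis=1) / k

print("direct top-k mean:   %.6f" % direct)
print("variational minimum: %.6f" % surrogate.min())
assert abs(direct - surrogate.min()) < 1e-3  # minimized at lambda = k-th largest l
```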
With the assistance of ST$_k$ Loss, we surpass the state-of-the-art (SOTA) on both CIFAR-100-LT and Places-LT leaderboards.", "pdf": "https://openreview.net/pdf/42feb8c4aa713aec5f94baa451fe499fc2f3ce7f.pdf"} {"title": "Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models", "url": "https://openreview.net/forum?id=uXuObobJHO", "detail_url": "https://openreview.net/forum?id=uXuObobJHO", "authors": "Jinlin Lai,Justin Domke,Daniel Sheldon", "tags": "NIPS 2024,Poster", "abstract": "Bayesian reasoning in linear mixed-effects models (LMMs) is challenging and often requires advanced sampling techniques like Markov chain Monte Carlo (MCMC).\nA common approach is to write the model in a probabilistic programming language and then sample via Hamiltonian Monte Carlo (HMC).\nHowever, there are many ways a user can transform a model that make inference more or less efficient.\nIn particular, marginalizing some variables can greatly improve inference but is difficult for users to do manually.\nWe develop an algorithm to easily marginalize random effects in LMMs.\nA naive approach introduces cubic time operations within an inference algorithm like HMC, but we reduce the running time to linear using fast linear algebra techniques.\nWe show that marginalization is always beneficial when applicable and highlight improvements in various models, especially ones from cognitive sciences.", "pdf": "https://openreview.net/pdf/378559ad4751205920b97814178bfb59814e5aaa.pdf"} {"title": "Robust Sleep Staging over Incomplete Multimodal Physiological Signals via Contrastive Imagination", "url": "https://openreview.net/forum?id=bc1qt1sZsW", "detail_url": "https://openreview.net/forum?id=bc1qt1sZsW", "authors": "Qi Shen,Junchang Xin,Bing Tian Dai,Shudi Zhang,Zhiqiong Wang", "tags": "NIPS 2024,Poster", "abstract": "Multimodal physiological signals, such as EEG, EOG and EMG, provide rich and reliable physiological information for automated sleep staging (ASS). However, in the real world, the completeness of various modalities is difficult to guarantee, which seriously affects the performance of ASS based on multimodal learning. Furthermore, the exploration of temporal context information within PTSs is also a serious challenge. To this end, we propose a robust multimodal sleep staging framework named contrastive imagination modality sleep network (CIMSleepNet). Specifically, CIMSleepNet handles arbitrary missing-modality patterns through the combination of a modal awareness imagination module (MAIM) and semantic & modal calibration contrastive learning (SMCCL). Among them, MAIM can capture the interaction among modalities by learning the shared representation distribution of all modalities. Meanwhile, SMCCL introduces prior information of semantics and modalities to check semantic consistency while maintaining the uniqueness of each modality. Utilizing the calibration of SMCCL, the data distribution recovered by MAIM is aligned with the real data distribution. We further design a multi-level cross-branch temporal attention mechanism, which can facilitate the mining of interactive temporal context representations at both the intra-epoch and inter-epoch levels. Extensive experiments on five multimodal sleep datasets demonstrate that CIMSleepNet remarkably outperforms other competitive methods under various missing modality patterns.
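The marginalization at the heart of the LMM paper above is the analytic step that removes the random effects from the sampler's state: for $y = X\beta + Zu + \varepsilon$ with $u \sim \mathcal{N}(0, \tau^2 I)$ and $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, integrating out $u$ gives $y \sim \mathcal{N}(X\beta,\ \tau^2 ZZ^\top + \sigma^2 I)$, so HMC only needs to sample $(\beta, \tau, \sigma)$. The sketch below evaluates this marginal likelihood with a dense solve, which is exactly the naive cubic-time cost the paper's fast linear algebra reduces to linear.

```python
# Analytic marginalization of random effects in a linear mixed-effects model.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, p, groups = 120, 2, 6

X = rng.normal(size=(n, p))
g = rng.integers(groups, size=n)
Z = np.eye(groups)[g]  # one random intercept per group

beta_true, tau, sigma = np.array([1.0, -2.0]), 0.8, 0.5
u = tau * rng.normal(size=groups)
y = X @ beta_true + Z @ u + sigma * rng.normal(size=n)

def marginal_loglik(beta, tau, sigma):
    """log N(y; X beta, tau^2 Z Z^T + sigma^2 I) -- naive O(n^3) version."""
    cov = tau**2 * Z @ Z.T + sigma**2 * np.eye(n)
    return multivariate_normal.logpdf(y, mean=X @ beta, cov=cov)

print("log-lik at true params:  %.2f" % marginal_loglik(beta_true, tau, sigma))
print("log-lik at wrong params: %.2f" % marginal_loglik(beta_true, 0.1, 2.0))
```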
The source code is available at: https://github.com/SQAIYY/CIMSleepNet.", "pdf": "https://openreview.net/pdf/db1a69a26a88f2e6066b0c19f7a29a5d2408a40b.pdf"} {"title": "MeLLoC: Lossless Compression with High-order Mechanism Learning", "url": "https://openreview.net/forum?id=NWctqX77b3", "detail_url": "https://openreview.net/forum?id=NWctqX77b3", "authors": "Xinyue Luo,Jin Cheng,Yu Chen", "tags": "NIPS 2024,Poster", "abstract": "Lossless compression of large-scale scientific floating-point data is critical yet challenging due to the presence of high-order information and noise that arises from model truncation and discretization errors. Existing entropy coding techniques fail to effectively leverage the mechanisms underlying the data generation process. This paper introduces MeLLoC(Mechanism Learning for Lossless Compression), a novel approach that combines high-order mechanism learning with classical encoding to enhance lossless compression for scientific data. The key idea is to treat the data as discrete samples from an underlying physical field described by differential equations and solve an inverse problem to identify the governing equation coefficients exhibiting more compressible numeric representations. Periodic extension techniques are employed to accelerate the decompression. Through extensive experiments on various scientific datasets, MeLLoC consistently outperforms state-of-the-art lossless compressors while offering compelling trade-offs between compression ratios and computational costs. This work opens up new avenues for exploiting domain knowledge and high-order information to improve data compression in scientific computing.", "pdf": "https://openreview.net/pdf/229985e7f9c260b16ec90acda6aa75f531622ffd.pdf"} {"title": "Multi-Reward Best Policy Identification", "url": "https://openreview.net/forum?id=x69O84Df2G", "detail_url": "https://openreview.net/forum?id=x69O84Df2G", "authors": "Alessio Russo,Filippo Vannella", "tags": "NIPS 2024,Poster", "abstract": "Rewards are a critical aspect of formulating Reinforcement Learning (RL) problems; often, one may be interested in testing multiple reward functions, or the problem may naturally involve multiple rewards. \nIn this study, we investigate the _Multi-Reward Best Policy Identification_ (MR-BPI) problem, where the goal is to determine the best policy for all rewards in a given set $\\mathcal{R}$ with minimal sample complexity and a prescribed confidence level. We derive a fundamental instance-specific lower bound on the sample complexity required by any Probably Correct (PC) algorithm in this setting. This bound guides the design of an optimal exploration policy attaining minimal sample complexity. However, this lower bound involves solving a hard non-convex optimization problem. We address this challenge by devising a convex approximation, enabling the design of sample-efficient algorithms. We propose MR-NaS, a PC algorithm with competitive performance on hard-exploration tabular environments. 
Extending this approach to Deep RL (DRL), we also introduce DBMR-BPI, an efficient algorithm for model-free exploration in multi-reward settings.", "pdf": "https://openreview.net/pdf/8bf6a8b9f3cb04bbf5c384c5090536740b9e6bef.pdf"} {"title": "Online Iterative Reinforcement Learning from Human Feedback with General Preference Model", "url": "https://openreview.net/forum?id=TwdX1W3M6S", "detail_url": "https://openreview.net/forum?id=TwdX1W3M6S", "authors": "Chenlu Ye,Wei Xiong,Yuheng Zhang,Hanze Dong,Nan Jiang,Tong Zhang", "tags": "NIPS 2024,Poster", "abstract": "We investigate Reinforcement Learning from Human Feedback (RLHF) in the context of a general preference oracle. In particular, we do not assume the existence of a reward function and an oracle preference signal drawn from the Bradley-Terry model as most of the prior works do. We consider a standard mathematical formulation, the reverse-KL regularized minimax game between two LLMs for RLHF under general preference oracle. The learning objective of this formulation is to find a policy so that it is consistently preferred by the KL-regularized preference oracle over any competing LLMs. We show that this framework is strictly more general than the reward-based one, and propose sample-efficient algorithms for both the offline learning from a pre-collected preference dataset and online learning where we can query the preference oracle along the way of training. Empirical studies verify the effectiveness of the proposed framework.", "pdf": "https://openreview.net/pdf/96b172ee4cb0c8a30b0ebfd65cb6f98db290a9fb.pdf"} {"title": "The Minimax Rate of HSIC Estimation for Translation-Invariant Kernels", "url": "https://openreview.net/forum?id=KyNO0n1bJ9", "detail_url": "https://openreview.net/forum?id=KyNO0n1bJ9", "authors": "Florian Kalinke,Zolt\u00e1n Szab\u00f3", "tags": "NIPS 2024,Poster", "abstract": "Kernel techniques are among the most influential approaches in data science and statistics. Under mild conditions, the reproducing kernel Hilbert space associated to a kernel is capable of encoding the independence of $M\\ge2$ random variables. Probably the most widespread independence measure relying on kernels is the so-called Hilbert-Schmidt independence criterion (HSIC; also referred to as distance covariance in the statistics literature). Despite various existing HSIC estimators designed since its introduction close to two decades ago, the fundamental question of the rate at which HSIC can be estimated is still open. In this work, we prove that the minimax optimal rate of HSIC estimation on $\\mathbb{R}^d$ for Borel measures containing the Gaussians with continuous bounded translation-invariant characteristic kernels is $\\mathcal{O}\\left(n^{-1/2}\\right)$. Specifically, our result implies the optimality in the minimax sense of many of the most-frequently used estimators (including the U-statistic, the V-statistic, and the Nystr\u00f6m-based one) on $\\mathbb{R}^d$.", "pdf": "https://openreview.net/pdf/8574bfd36490e1c960a24f60e4b65c075dbe89ca.pdf"} {"title": "A two-scale Complexity Measure for Deep Learning Models", "url": "https://openreview.net/forum?id=TY9VoSZZIA", "detail_url": "https://openreview.net/forum?id=TY9VoSZZIA", "authors": "Massimiliano Datres,Gian Paolo Leonardi,Alessio Figalli,David Sutter", "tags": "NIPS 2024,Poster", "abstract": "We introduce a novel capacity measure 2sED for statistical models based on the effective dimension. The new quantity provably bounds the generalization error under mild assumptions on the model. 
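The abstract above does not spell out how 2sED is computed; as background, the following is a minimal sketch of the generic regularized effective dimension that capacity measures of this kind build on. The toy spectrum and the scale parameter `gamma` are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def effective_dimension(eigenvalues: np.ndarray, gamma: float) -> float:
    """Generic regularized effective dimension: sum_i lam_i / (lam_i + gamma).

    Counts how many eigen-directions of a curvature matrix (e.g. an
    empirical Fisher or Hessian) are large relative to the scale gamma.
    This is background for -- not an implementation of -- the paper's 2sED.
    """
    lam = np.asarray(eigenvalues, dtype=float)
    return float(np.sum(lam / (lam + gamma)))

# Toy example: a fast-decaying curvature spectrum yields a small effective
# dimension even when the raw parameter count (here 10,000) is large.
rng = np.random.default_rng(0)
spectrum = rng.exponential(scale=0.01, size=10_000)
for gamma in (1e-1, 1e-2, 1e-3):
    print(f"gamma={gamma:g}  eff_dim={effective_dimension(spectrum, gamma):.1f}")
```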
Furthermore, simulations on standard data sets and popular model architectures show that 2sED correlates well with the training error. For Markovian models, we show how to efficiently approximate 2sED from below through a layerwise iterative approach, which allows us to tackle deep learning models with a large number of parameters. Simulation results suggest that the approximation is good for different prominent models and data sets.", "pdf": "https://openreview.net/pdf/a0a1a6a520ddb2e2c7b4cb52f57df03fcb25c791.pdf"} {"title": "VideoLLM-MoD: Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation", "url": "https://openreview.net/forum?id=NKPXHzYusG", "detail_url": "https://openreview.net/forum?id=NKPXHzYusG", "authors": "Shiwei Wu,Joya Chen,Kevin Qinghong Lin,Qimeng Wang,Yan Gao,Qianli Xu,Tong Xu,Yao Hu,Enhong Chen,Mike Zheng Shou", "tags": "NIPS 2024,Poster", "abstract": "A well-known dilemma in large vision-language models (e.g., GPT-4, LLaVA) is that while increasing the number of vision tokens generally enhances visual understanding, it also significantly raises memory and computational costs, especially in long-term, dense video frame streaming scenarios. Although learnable approaches like Q-Former and Perceiver Resampler have been developed to reduce the vision token burden, they overlook the context causally modeled by LLMs (i.e., key-value cache), potentially leading to missed visual cues when addressing user queries. In this paper, we introduce a novel approach to reduce vision compute by leveraging redundant vision tokens ``skipping layers'' rather than decreasing the number of vision tokens. Our method, VideoLLM-MoD, is inspired by mixture-of-depths LLMs and addresses the challenge of numerous vision tokens in long-term or streaming video. Specifically, for certain transformer layers, we learn to skip the computation for a high proportion (e.g., 80\%) of vision tokens, passing them directly to the next layer. This approach significantly enhances model efficiency, achieving approximately 42% time and 30% memory savings for the entire training. Moreover, our method reduces the computation in the context and avoids decreasing the number of vision tokens, thus preserving or even improving performance compared to the vanilla model. We conduct extensive experiments to demonstrate the effectiveness of VideoLLM-MoD, showing its state-of-the-art results on multiple benchmarks, including narration, forecasting, and summarization tasks in COIN, Ego4D, and Ego-Exo4D datasets. The code and checkpoints will be made available at github.com/showlab/VideoLLM-online.", "pdf": "https://openreview.net/pdf/d9fc2fc49d05704d2aabfb775b81984498ee933d.pdf"} {"title": "Information-theoretic Generalization Analysis for Expected Calibration Error", "url": "https://openreview.net/forum?id=yltJAlwtW9", "detail_url": "https://openreview.net/forum?id=yltJAlwtW9", "authors": "Futoshi Futami,Masahiro Fujisawa", "tags": "NIPS 2024,Poster", "abstract": "While the expected calibration error (ECE), which employs binning, is widely adopted to evaluate the calibration performance of machine learning models, theoretical understanding of its estimation bias is limited. In this paper, we present the first comprehensive analysis of the estimation bias in the two common binning strategies, uniform mass and uniform width binning.\nOur analysis establishes upper bounds on the bias, achieving an improved convergence rate. 
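To make the two binning strategies concrete, here is a minimal NumPy sketch of the binned ECE estimator under common conventions (the paper's exact estimator and tie handling may differ):

```python
import numpy as np

def ece(confidences, correct, n_bins=15, strategy="width"):
    """Binned expected calibration error: sum_b (n_b / n) * |acc_b - conf_b|.

    strategy="width": n_bins bins of equal width on [0, 1].
    strategy="mass":  bins holding roughly equal numbers of samples, with
                      edges at empirical quantiles of the confidences.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    if strategy == "width":
        edges = np.linspace(0.0, 1.0, n_bins + 1)
    else:
        edges = np.quantile(confidences, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.digitize(confidences, edges[1:-1])  # bin index in 0..n_bins-1
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return err

# Example: overconfident predictions yield a large ECE under both strategies.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=5000)
hits = rng.uniform(size=5000) < conf - 0.2  # accuracy lags confidence by 0.2
print(ece(conf, hits, strategy="width"), ece(conf, hits, strategy="mass"))
```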
Moreover, our bounds reveal, for the first time, the optimal number of bins to minimize the estimation bias. We further extend our bias analysis to generalization error analysis based on the information-theoretic approach, deriving upper bounds that enable the numerical evaluation of how small the ECE is for unknown data. Experiments using deep learning models show that our bounds are nonvacuous thanks to this information-theoretic generalization analysis approach.", "pdf": "https://openreview.net/pdf/c4a25ce5ac23050bc6d164b9b2269d2890c8bde3.pdf"} {"title": "Learning-Augmented Algorithms with Explicit Predictors", "url": "https://openreview.net/forum?id=0XKvW4ijxp", "detail_url": "https://openreview.net/forum?id=0XKvW4ijxp", "authors": "Marek Elias,Haim Kaplan,Yishay Mansour,Shay Moran", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data. These approaches have demonstrated an enhancement in performance when the predictions are accurate, while also ensuring robustness by providing worst-case guarantees when predictions fail. In this paper we focus on online problems; prior research in this context was focused on a paradigm where the algorithms are oblivious to the predictors' design, treating them as a black box. In contrast, in this work,\nwe unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge. In particular, we allow the predictor to learn as it receives larger parts of the input, with the ultimate goal of designing online learning algorithms specifically tailored for the algorithmic task at hand. Adopting this perspective, we focus on a number of fundamental problems, including caching and scheduling, which have been well-studied in the black-box setting. For each of the problems, we introduce new algorithms that take advantage of explicit and carefully designed learning rules. These pairings of online algorithms with corresponding learning rules yield improvements in the overall performance in comparison with previous work.", "pdf": "https://openreview.net/pdf/8bad122dc84e3ac87de505e5747405d57f3e2b77.pdf"} {"title": "Towards Neuron Attributions in Multi-Modal Large Language Models", "url": "https://openreview.net/forum?id=jMJVFP4BH6", "detail_url": "https://openreview.net/forum?id=jMJVFP4BH6", "authors": "Junfeng Fang,Zac Bi,Ruipeng Wang,Houcheng Jiang,Yuan Gao,Kun Wang,An Zhang,Jie Shi,Xiang Wang,Tat-Seng Chua", "tags": "NIPS 2024,Poster", "abstract": "As Large Language Models (LLMs) demonstrate impressive capabilities, demystifying their internal mechanisms becomes increasingly vital. Neuron attribution, which attributes LLM outputs to specific neurons to reveal the semantic properties they learn, has emerged as a key interpretability approach. However, while neuron attribution has made significant progress in deciphering text-only LLMs, its application to Multimodal LLMs (MLLMs) remains less explored. To address this gap, we propose a novel Neuron Attribution method tailored for MLLMs, termed NAM. Specifically, NAM not only reveals the modality-specific semantic knowledge learned by neurons within MLLMs, but also highlights several intriguing properties of neurons, such as cross-modal invariance and semantic sensitivity. These properties collectively elucidate the inner working mechanisms of MLLMs, providing a deeper understanding of how MLLMs process and generate multi-modal content. 
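The abstract does not specify NAM's scoring rule; for orientation, here is a minimal sketch of the common gradient-times-activation recipe that neuron attribution work typically builds on. The toy model, hook placement, and scoring are illustrative assumptions, not NAM itself.

```python
import torch
import torch.nn as nn

def neuron_attribution(model: nn.Module, layer: nn.Module,
                       inputs: torch.Tensor, target: int) -> torch.Tensor:
    """Score each neuron in `layer` by activation * gradient of the target
    logit w.r.t. that activation (a generic attribution recipe)."""
    acts = {}
    def hook(_, __, output):
        output.retain_grad()          # keep the non-leaf gradient around
        acts["a"] = output
    handle = layer.register_forward_hook(hook)
    logits = model(inputs)
    logits[0, target].backward()      # attribute one output logit
    handle.remove()
    a = acts["a"]
    return (a * a.grad).detach()      # one score per hidden neuron

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
scores = neuron_attribution(model, model[1], torch.randn(1, 8), target=2)
print(scores.shape)  # (1, 16): attribution for each hidden neuron
```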
Through theoretical analysis and empirical validation, we demonstrate the efficacy of NAM and the valuable insights it offers. Furthermore, leveraging NAM, we introduce a multi-modal knowledge editing paradigm, underscoring the practical significance of our approach for downstream applications of MLLMs.", "pdf": "https://openreview.net/pdf/f589eea6b57a75f5c6b3ffd100081d157d9a08ce.pdf"} {"title": "LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes", "url": "https://openreview.net/forum?id=CcmHlE6N6u", "detail_url": "https://openreview.net/forum?id=CcmHlE6N6u", "authors": "Zefan Qu,Ke Xu,Gerhard Petrus Hancke,Rynson W. H. Lau", "tags": "NIPS 2024,Poster", "abstract": "Neural Radiance Fields (NeRFs) have shown remarkable performances in producing novel-view images from high-quality scene images. However, hand-held low-light photography challenges NeRFs as the captured images may simultaneously suffer from low visibility, noise, and camera shakes.\nWhile existing NeRF methods may handle either low light or motion, directly combining them or incorporating additional image-based enhancement methods does not work as these degradation factors are highly coupled.\nWe observe that noise in low-light images is always sharp regardless of camera shakes, which implies an implicit order of these degradation factors within the image formation process.\nThis inspires us to explore such an order to decouple and remove these degradation factors while training the NeRF.\nTo this end, we propose in this paper a novel model, named LuSh-NeRF, which can reconstruct a clean and sharp NeRF from a group of hand-held low-light images.\nThe key idea of LuSh-NeRF is to sequentially model noise and blur in the images via multi-view feature consistency and frequency information of NeRF, respectively.\nSpecifically, LuSh-NeRF includes a novel Scene-Noise Decomposition (SND) module for decoupling the noise from the scene representation and a novel Camera Trajectory Prediction (CTP) module for the estimation of camera motions based on low-frequency scene information.\nTo facilitate training and evaluations, we construct a new dataset containing both synthetic and real images.\nExperiments show that LuSh-NeRF outperforms existing approaches. Our code and dataset can be found here: https://github.com/quzefan/LuSh-NeRF.", "pdf": "https://openreview.net/pdf/06fe10fb3958bb2d5f2550de529a77924b664df2.pdf"} {"title": "Suitable is the Best: Task-Oriented Knowledge Fusion in Vulnerability Detection", "url": "https://openreview.net/forum?id=OP2D9sIdo4", "detail_url": "https://openreview.net/forum?id=OP2D9sIdo4", "authors": "Jingjing Wang,Minhuan Huang,Yuanping Nie,Xiang Li,Qianjin Du,Wei Kong,Huan Deng,Xiaohui Kuang", "tags": "NIPS 2024,Poster", "abstract": "Deep learning technologies have demonstrated remarkable performance in vulnerability detection. Existing works primarily adopt a uniform and consistent feature learning pattern across the entire target set. While designed for general-purpose detection tasks, they lack sensitivity towards target code comprising multiple functional modules or diverse vulnerability subtypes. In this paper, we present a knowledge fusion-based vulnerability detection method (KF-GVD) that integrates specific vulnerability knowledge into the Graph Neural Network feature learning process. KF-GVD achieves accurate vulnerability detection across different functional modules of the Linux kernel and vulnerability subtypes without compromising general task performance. 
Extensive experiments demonstrate that KF-GVD outperforms SOTAs on function-level and statement-level vulnerability detection across various target tasks, with an average increase of 40.9% in precision and 26.1% in recall. Notably, KF-GVD discovered 9 undisclosed vulnerabilities when employed on C/C++ open-source projects without ground truth.", "pdf": "https://openreview.net/pdf/3a3fbbaba31282a606d01820e6e4707fd0bc1220.pdf"} {"title": "Truth is Universal: Robust Detection of Lies in LLMs", "url": "https://openreview.net/forum?id=1Fc2Xa2cDK", "detail_url": "https://openreview.net/forum?id=1Fc2Xa2cDK", "authors": "Lennart B\u00fcrger,Fred A. Hamprecht,Boaz Nadler", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have revolutionised natural language processing, exhibiting impressive human-like capabilities. In particular, LLMs are capable of \"lying\", knowingly outputting false statements. Hence, it is of interest and importance to develop methods to detect when LLMs lie. Indeed, several authors trained classifiers to detect LLM lies based on their internal model activations. However, other researchers showed that these classifiers may fail to generalise, for example to negated statements. \nIn this work, we aim to develop a robust method to detect when an LLM is lying. To this end, we make the following key contributions: (i) We demonstrate the existence of a two-dimensional subspace, along which the activation vectors of true and false statements can be separated. Notably, this finding is universal and holds for various LLMs, including Gemma-7B, LLaMA2-13B, Mistral-7B and LLaMA3-8B. Our analysis explains the generalisation failures observed in previous studies and sets the stage for more robust lie detection;\n(ii) Building upon (i), we construct an accurate LLM lie detector. Empirically, our proposed classifier achieves state-of-the-art performance, attaining 94\% accuracy in both distinguishing true from false factual statements and detecting lies generated in real-world scenarios.", "pdf": "https://openreview.net/pdf/7f55120c9315237f90e8440eabbb65eac216de47.pdf"} {"title": "Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering", "url": "https://openreview.net/forum?id=Li9YTHoItP", "detail_url": "https://openreview.net/forum?id=Li9YTHoItP", "authors": "Zhihua Wen,Zhiliang Tian,Zexin Jian,Zhen Huang,Pei Ke,Yifu Gao,Minlie Huang,Dongsheng Li", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) are widely used for knowledge-seeking purposes yet suffer from hallucinations. The knowledge boundary of an LLM limits its factual understanding, beyond which it may begin to hallucinate. Investigating the perception of LLMs' knowledge boundary is crucial for detecting hallucinations and LLMs' reliable generation. Current studies perceive LLMs' knowledge boundary on questions with concrete answers (close-ended questions) while paying limited attention to semi-open-ended questions that correspond to many potential answers. Some researchers achieve this by judging whether the question is answerable or not. However, this paradigm is not so suitable for semi-open-ended questions, which are usually ``partially answerable questions'' containing both answerable answers and ambiguous (unanswerable) answers. Ambiguous answers are essential for knowledge-seeking, but they may go beyond the knowledge boundary of LLMs. 
In this paper, we perceive the LLMs' knowledge boundary with semi-open-ended questions by discovering more ambiguous answers. First, we apply an LLM-based approach to construct semi-open-ended questions and obtain answers from a target LLM. Unfortunately, the output probabilities of mainstream black-box LLMs are inaccessible, preventing us from directly sampling more low-probability ambiguous answers. Therefore, we apply an open-sourced auxiliary model to explore ambiguous answers for the target LLM. We calculate the nearest semantic representation for existing answers to estimate their probabilities, with which we reduce the generation probability of high-probability existing answers to achieve more effective generation. Finally, we compare the results from the RAG-based evaluation and LLM self-evaluation to categorize four types of ambiguous answers that are beyond the knowledge boundary of the target LLM. Following our method, we construct a dataset to perceive the knowledge boundary for GPT-4. We find that GPT-4 performs poorly on semi-open-ended questions and is often unaware of its knowledge boundary. Moreover, our auxiliary model, LLaMA-2-13B, is effective in discovering many ambiguous answers, including correct answers neglected by GPT-4 and delusive wrong answers GPT-4 struggles to identify.", "pdf": "https://openreview.net/pdf/e3307a79faec8b9056c92116d5cc35c920e12ad9.pdf"} {"title": "Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents", "url": "https://openreview.net/forum?id=ZC0PSk6Mc6", "detail_url": "https://openreview.net/forum?id=ZC0PSk6Mc6", "authors": "Quentin Delfosse,Sebastian Sztwiertnia,Mark Rothermel,Wolfgang Stammer,Kristian Kersting", "tags": "NIPS 2024,Poster", "abstract": "Goal misalignment, reward sparsity and difficult credit assignment are only a few of the many issues that make it difficult for deep reinforcement learning (RL) agents to learn optimal policies. \nUnfortunately, the black-box nature of deep neural networks impedes the inclusion of domain experts for inspecting the model and revising suboptimal policies.\n\nTo this end, we introduce Successive Concept Bottleneck Agents (SCoBots), which integrate consecutive concept bottleneck (CB) layers. \nIn contrast to current CB models, SCoBots do not just represent concepts as properties of individual objects, but also as relations between objects which is crucial for many RL tasks. \n\nOur experimental results provide evidence of SCoBots' competitive performances, but also of their potential for domain experts to understand and regularize their behavior. Among other things, SCoBots enabled us to identify a previously unknown misalignment problem in the iconic video game, Pong, and resolve it. Overall, SCoBots thus result in more human-aligned RL agents.", "pdf": "https://openreview.net/pdf/3d4c8c5715b383f54cebb261efc8929c843ec6a4.pdf"} {"title": "Certified Adversarial Robustness via Randomized $\alpha$-Smoothing for Regression Models", "url": "https://openreview.net/forum?id=jLUbLxa4XV", "detail_url": "https://openreview.net/forum?id=jLUbLxa4XV", "authors": "Aref Miri Rekavandi,Farhad Farokhi,Olga Ohrimenko,Benjamin I. P. Rubinstein", "tags": "NIPS 2024,Poster", "abstract": "Certified adversarial robustness of large-scale deep networks has progressed substantially after the introduction of randomized smoothing. Deep net classifiers are now provably robust in their predictions against a large class of threat models, including $\ell_1$, $\ell_2$, and $\ell_\infty$ norm-bounded attacks. 
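For context, a minimal sketch of the classic Gaussian randomized-smoothing prediction rule for classifiers (majority vote over noisy copies of the input); the certified-radius computation is omitted, and `sigma`/`n_samples` are illustrative choices rather than the paper's settings.

```python
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000,
                     num_classes=10):
    """Majority-vote prediction of the Gaussian-smoothed classifier
    g(x) = argmax_c P(f(x + delta) = c), with delta ~ N(0, sigma^2 I).

    This is the classification case; for regression (continuous, unbounded
    outputs) a vote is unavailable, which is the gap the abstract above
    fills with an alpha-trimmed smoothing function.
    """
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            pred = base_classifier(noisy).argmax(dim=-1)
            counts[pred] += 1
    return counts.argmax().item()
```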
Certified robustness analysis by randomized smoothing has not been performed for deep regression networks where the output variable is continuous and unbounded. In this paper, we extend the existing results for randomized smoothing to regression models using powerful tools from robust statistics, in particular, the $\alpha$-trimming filter as the smoothing function. Adjusting the hyperparameter $\alpha$ achieves a smooth trade-off between desired certified robustness and utility. For the first time, we propose a benchmark for certified robust regression in visual positioning systems using the Cambridge Landmarks dataset where robustness analysis is essential for autonomous navigation of AI agents and self-driving cars. Code is publicly available at \url{https://github.com/arekavandi/Certified_adv_RRegression/}.", "pdf": "https://openreview.net/pdf/2c1928be34a76219c66d9466b72ff8cde2c15a4e.pdf"} {"title": "Generative Semi-supervised Graph Anomaly Detection", "url": "https://openreview.net/forum?id=zqLAMwVLkt", "detail_url": "https://openreview.net/forum?id=zqLAMwVLkt", "authors": "Hezhe Qiao,Qingsong Wen,Xiaoli Li,Ee-Peng Lim,Guansong Pang", "tags": "NIPS 2024,Poster", "abstract": "This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, in contrast to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of normal nodes, helps enhance the detection performance of existing unsupervised GAD methods when they are adapted to the semi-supervised setting. However, their utilization of these normal nodes is limited. In this paper, we propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes. The key idea is to generate pseudo anomaly nodes, referred to as 'outlier nodes', for providing effective negative node samples in training a discriminative one-class classifier. The main challenge here lies in the lack of ground truth information about real anomaly nodes. To address this challenge, GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness -- to generate reliable outlier nodes that assimilate anomaly nodes in both graph structure and feature representations. Comprehensive experiments on six real-world GAD datasets are performed to establish a benchmark for semi-supervised GAD and show that GGAD substantially outperforms state-of-the-art unsupervised and semi-supervised GAD methods with varying numbers of training normal nodes.", "pdf": "https://openreview.net/pdf/3c33b4f4c3c23708a8d12f3c6cbda3a20a9ca71e.pdf"} {"title": "Long-Tailed Out-of-Distribution Detection via Normalized Outlier Distribution Adaptation", "url": "https://openreview.net/forum?id=cesWi7mMLY", "detail_url": "https://openreview.net/forum?id=cesWi7mMLY", "authors": "Wenjun Miao,Guansong Pang,Jin Zheng,Xiao Bai", "tags": "NIPS 2024,Poster", "abstract": "One key challenge in Out-of-Distribution (OOD) detection is the absence of ground-truth OOD samples during training. 
One principled approach to address this issue is to use samples from external datasets as outliers ($\textit{i.e.}$, pseudo OOD samples) to train OOD detectors.\n However, we find empirically that the outlier samples often present a distribution shift compared to the true OOD samples, especially in Long-Tailed Recognition (LTR) scenarios, where ID classes are heavily imbalanced, $\textit{i.e.}$, the true OOD samples exhibit a very different probability distribution over the head and tail ID classes than the outliers do.\n In this work, we propose a novel approach, namely $\textit{normalized outlier distribution adaptation}$ (AdaptOD), to tackle this distribution shift problem.\n One of its key components is $\textit{dynamic outlier distribution adaptation}$ that effectively adapts a vanilla outlier distribution based on the outlier samples to the true OOD distribution by utilizing the OOD knowledge in the predicted OOD samples during inference.\n Further, to obtain a more reliable set of predicted OOD samples on long-tailed ID data, a novel $\textit{dual-normalized energy loss}$ is introduced in AdaptOD, which leverages class- and sample-wise normalized energy to enforce a more balanced prediction energy on imbalanced ID samples. This helps avoid bias toward the head samples and learn a substantially better vanilla outlier distribution than existing energy losses during training. It also eliminates the need for manually tuning the sensitive margin hyperparameters in energy losses.\n Empirical results on three popular benchmarks for OOD detection in LTR show the superior performance of AdaptOD over state-of-the-art methods.\nCode is available at https://github.com/mala-lab/AdaptOD.", "pdf": "https://openreview.net/pdf/791cd025c6a11032f4e047a3507c5ea6f9e54d45.pdf"} {"title": "Geometric-Averaged Preference Optimization for Soft Preference Labels", "url": "https://openreview.net/forum?id=3HpCVZV9it", "detail_url": "https://openreview.net/forum?id=3HpCVZV9it", "authors": "Hiroki Furuta,Kuang-Huei Lee,Shixiang Shane Gu,Yutaka Matsuo,Aleksandra Faust,Heiga Zen,Izzeddin Gur", "tags": "NIPS 2024,Poster", "abstract": "Many algorithms for aligning LLMs with human preferences assume that human preferences are binary and deterministic.\nHowever, human preferences can vary across individuals, and therefore should be represented distributionally.\nIn this work, we introduce the distributional soft preference labels and improve Direct Preference Optimization (DPO) with a weighted geometric average of the LLM output likelihood in the loss function.\nThis approach adjusts the scale of learning loss based on the soft labels such that the loss would approach zero when the responses are closer to equally preferred.\nThis simple modification can be easily applied to any DPO-based methods and mitigate over-optimization and objective mismatch, which prior works suffer from.\nOur experiments simulate the soft preference labels with AI feedback from LLMs and demonstrate that geometric averaging consistently improves performance on standard benchmarks for alignment research. 
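The exact objective is not given in the abstract, but one plausible reading is that replacing the two responses' likelihoods with a weighted geometric average scales the implicit DPO reward margin by (2p - 1) for a soft label p, so the loss gradient vanishes as p approaches 1/2. A sketch under that reading only, not the paper's verified objective; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def geom_soft_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                       soft_label, beta=0.1):
    """DPO-style loss with a soft preference label in [0, 1].

    Weighting the two responses' log-likelihoods by p and (1 - p)
    (a weighted geometric average of likelihoods, taken in log space)
    scales the implicit reward margin by (2p - 1), so there is no
    update pressure for equally preferred responses (p = 0.5).
    """
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * (2 * soft_label - 1) * margin)

# Toy check: at p = 0.5 the gradient with respect to the margin is zero.
margin = torch.tensor(2.0, requires_grad=True)
loss = -F.logsigmoid(0.1 * (2 * 0.5 - 1) * margin)
loss.backward()
print(margin.grad)  # tensor(0.)
```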
\nIn particular, we observe that soft labels yield more preferable responses than binary labels, with significant improvements in settings where modestly confident labels are in the majority.", "pdf": "https://openreview.net/pdf/90ad317d40f3e6e4eb2a620c5a6df29ce19fb2a3.pdf"} {"title": "Predicting Future Actions of Reinforcement Learning Agents", "url": "https://openreview.net/forum?id=QgaGs7peYe", "detail_url": "https://openreview.net/forum?id=QgaGs7peYe", "authors": "Stephen Chung,Scott Niekum,David Krueger", "tags": "NIPS 2024,Poster", "abstract": "As reinforcement learning agents become increasingly deployed in real-world scenarios, predicting future agent actions and events during deployment is important for facilitating better human-agent interaction and preventing catastrophic outcomes. This paper experimentally evaluates and compares the effectiveness of future action and event prediction for three types of RL agents: explicitly planning, implicitly planning, and non-planning. We employ two approaches: the inner state approach, which involves predicting based on the inner computations of the agents (e.g., plans or neuron activations), and a simulation-based approach, which involves unrolling the agent in a learned world model. Our results show that the plans of explicitly planning agents are significantly more informative for prediction than the neuron activations of the other types. Furthermore, using internal plans proves more robust to model quality compared to simulation-based approaches when predicting actions, while the results for event prediction are more mixed. These findings highlight the benefits of leveraging inner states and simulations to predict future agent actions and events, thereby improving interaction and safety in real-world deployments.", "pdf": "https://openreview.net/pdf/73d9aec42fd261d8a81b78b0781a5e54f0b00ca3.pdf"} {"title": "Learning to Merge Tokens via Decoupled Embedding for Efficient Vision Transformers", "url": "https://openreview.net/forum?id=pVPyCgXv57", "detail_url": "https://openreview.net/forum?id=pVPyCgXv57", "authors": "Dong Hoon Lee,Seunghoon Hong", "tags": "NIPS 2024,Poster", "abstract": "Recent token reduction methods for Vision Transformers (ViTs) incorporate token merging, which measures the similarities between token embeddings and combines the most similar pairs.\nHowever, their merging policies are directly dependent on intermediate features in ViTs, which prevents exploiting features tailored for merging and requires end-to-end training to improve token merging.\nIn this paper, we propose Decoupled Token Embedding for Merging (DTEM) that enhances token merging through a decoupled embedding learned via a continuously relaxed token merging process.\nOur method introduces a lightweight embedding module decoupled from the ViT forward pass to extract dedicated features for token merging, thereby addressing the restriction from using intermediate features.\nThe continuously relaxed token merging, applied during training, enables us to learn the decoupled embeddings in a differentiable manner.\nThanks to the decoupled structure, our method can be seamlessly integrated into existing ViT backbones and trained either modularly by learning only the decoupled embeddings or end-to-end by fine-tuning. 
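For orientation, a minimal sketch of the similarity-based merging step (in the spirit of bipartite token merging) that such methods build on; per the abstract, DTEM's contribution is learning the merge features with a decoupled module rather than reusing intermediate ViT features. Shapes and the naive collision handling below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def merge_most_similar(tokens: torch.Tensor, keys: torch.Tensor, r: int):
    """Merge the r most similar token pairs by averaging.

    tokens: (n, d) features to merge; keys: (n, d) features used to decide
    what to merge (in DTEM these would come from the decoupled module).
    """
    a, b = keys[0::2], keys[1::2]                       # bipartite split
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T
    score, match = sim.max(dim=-1)                      # best partner in b
    src = score.topk(r).indices                         # r most similar in a
    ta, tb = tokens[0::2].clone(), tokens[1::2].clone()
    dst = match[src]
    # Naive collision handling: if two sources share a destination,
    # the later write wins (a real implementation would scatter-average).
    tb[dst] = (tb[dst] + ta[src]) / 2
    keep = torch.ones(ta.shape[0], dtype=torch.bool)
    keep[src] = False
    return torch.cat([ta[keep], tb], dim=0)             # n - r tokens remain

x = torch.randn(16, 64)
print(merge_most_similar(x, x, r=4).shape)  # torch.Size([12, 64])
```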
\nWe demonstrate the applicability of DTEM on various tasks, including classification, captioning, and segmentation, with consistent improvement in token merging.\nEspecially in ImageNet-1k classification, DTEM achieves a 37.2\% reduction in FLOPs while maintaining a top-1 accuracy of 79.85\% with DeiT-small.", "pdf": "https://openreview.net/pdf/c098dfd7c7d7b63200ff456647b3ba86799aa874.pdf"} {"title": "Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study", "url": "https://openreview.net/forum?id=0ZZMUjZJYF", "detail_url": "https://openreview.net/forum?id=0ZZMUjZJYF", "authors": "Xuefei Ning,Zifu Wang,Shiyao Li,Zinan Lin,Peiran Yao,Tianyu Fu,Matthew B. Blaschko,Guohao Dai,Huazhong Yang,Yu Wang", "tags": "NIPS 2024,Poster", "abstract": "Teaching to improve student models (e.g., knowledge distillation) is an extensively studied methodology in LLMs. However, in human education, teaching enhances not only the students but also the teachers by fostering more rigorous and clearer reasoning, as well as deeper knowledge building. We ask: Can LLMs also learn by teaching (LbT) for better reasoning? If the answer is yes, we can potentially unlock the possibility of continuously advancing the models without solely relying on human-produced data or stronger models. In this paper, we provide a preliminary exploration of this question. We show that LbT ideas can be incorporated into existing LLM training/prompting pipelines and bring improvements. Specifically, we design three methods, each mimicking one of the three levels of LbT: observing students' feedback, learning from the feedback, and learning iteratively, with the goal of improving answer accuracy without training or improving models' inherent capability with fine-tuning. We reveal some findings: (1) Teaching materials that make it easier for students to learn (via in-context learning) have clearer and more accurate logic; (2) Weak-to-strong generalization: LbT might help improve strong models by teaching weak models; (3) Diversity in students might help: teaching multiple students could be better than teaching a single student or the teacher alone. We hope that our exploration can inspire future research on LbT and, more broadly, the adoption of advanced education techniques to improve LLMs. The code and website are at https://github.com/imagination-research/lbt and https://sites.google.com/view/llm-learning-by-teaching.", "pdf": "https://openreview.net/pdf/fdd9cdd13e0bda4ebda79ff3bfdf29d606b3092b.pdf"} {"title": "MoGU: A Framework for Enhancing Safety of LLMs While Preserving Their Usability", "url": "https://openreview.net/forum?id=SrFbgIjb53", "detail_url": "https://openreview.net/forum?id=SrFbgIjb53", "authors": "Yanrui Du,Sendong Zhao,Danyang Zhao,Ming Ma,Yuhan Chen,Liangyu Huo,Qing Yang,Dongliang Xu,Bing Qin", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) are increasingly deployed in various applications. As their usage grows, concerns regarding their safety are rising, especially in maintaining harmless responses when faced with malicious instructions. Many defense strategies have been developed to enhance the safety of LLMs. However, our research finds that existing defense strategies lead LLMs to predominantly adopt a rejection-oriented stance, thereby diminishing the usability of their responses to benign instructions. To solve this problem, we introduce the MoGU framework, designed to enhance LLMs' safety while preserving their usability. 
Our MoGU framework transforms the base LLM into two variants: the usable LLM and the safe LLM, and further employs dynamic routing to balance their contribution. When encountering malicious instructions, the router will assign a higher weight to the safe LLM to ensure that responses are harmless. Conversely, for benign instructions, the router prioritizes the usable LLM, facilitating usable and helpful responses. On various open-sourced LLMs, we compare multiple defense strategies to verify the superiority of our MoGU framework. Moreover, our analysis provides key insights into the effectiveness of MoGU and verifies that our designed routing mechanism can effectively balance the contribution of each variant by assigning weights. Our work releases safer versions of Llama2, Vicuna, Falcon, Dolphin, and Baichuan2.", "pdf": "https://openreview.net/pdf/068e2987e4d2e9d91ee8b84c19c7765419cc6735.pdf"} {"title": "Graph Classification via Reference Distribution Learning: Theory and Practice", "url": "https://openreview.net/forum?id=1zVinhehks", "detail_url": "https://openreview.net/forum?id=1zVinhehks", "authors": "Zixiao Wang,Jicong Fan", "tags": "NIPS 2024,Poster", "abstract": "Graph classification is a challenging problem owing to the difficulty in quantifying the similarity between graphs or representing graphs as vectors, though there have been a few methods using graph kernels or graph neural networks (GNNs). Graph kernels often suffer from computational costs and manual feature engineering, while GNNs commonly utilize global pooling operations, risking the loss of structural or semantic information. This work introduces Graph Reference Distribution Learning (GRDL), an efficient and accurate graph classification method. GRDL treats each graph's latent node embeddings given by GNN layers as a discrete distribution, enabling direct classification without global pooling, based on maximum mean discrepancy to adaptively learned reference distributions. To fully understand this new model (the existing theories do not apply) and guide its configuration (e.g., network architecture, references' sizes, number, and regularization) for practical use, we derive generalization error bounds for GRDL and verify them numerically. More importantly, our theoretical and numerical results both show that GRDL has a stronger generalization ability than GNNs with global pooling operations. Experiments on moderate-scale and large-scale graph datasets show the superiority of GRDL over the state-of-the-art, emphasizing its remarkable efficiency, being at least 10 times faster than leading competitors in both training and inference stages.", "pdf": "https://openreview.net/pdf/af841f4db59e8f498de57f39fb14c629373ca4a4.pdf"} {"title": "PURE: Prompt Evolution with Graph ODE for Out-of-distribution Fluid Dynamics Modeling", "url": "https://openreview.net/forum?id=z86knmjoUq", "detail_url": "https://openreview.net/forum?id=z86knmjoUq", "authors": "Hao Wu,Changhu Wang,Fan Xu,Jinbao Xue,Chong Chen,Xian-Sheng Hua,Xiao Luo", "tags": "NIPS 2024,Poster", "abstract": "This work studies the problem of out-of-distribution fluid dynamics modeling. Previous works usually design effective neural operators to learn from mesh-based data structures. However, in real-world applications, they would suffer from distribution shifts from the variance of system parameters and \ntemporal evolution of the dynamical system. 
In this paper, we propose a novel approach named Prompt Evolution with Graph ODE (PURE) for out-of-distribution fluid dynamics modeling. The core of our PURE is to learn time-evolving prompts using a graph ODE to adapt spatio-temporal forecasting models to different scenarios. In particular, our PURE first learns from historical observations and system parameters in the frequency domain to explore multi-view context information, which could effectively initialize prompt embeddings. More importantly, we incorporate the interpolation of observation sequences into a graph ODE, which can capture the temporal evolution of prompt embeddings for model adaptation. These time-evolving prompt embeddings are then incorporated into basic forecasting models to overcome temporal distribution shifts. We also minimize the mutual information between prompt embeddings and observation embeddings to enhance the robustness of our model to different distributions. Extensive experiments on various benchmark datasets validate the superiority of the proposed PURE in comparison to various baselines.", "pdf": "https://openreview.net/pdf/c8b66405bc52ba1031d1591eec96d618612ff575.pdf"} {"title": "On the Complexity of Identification in Linear Structural Causal Models", "url": "https://openreview.net/forum?id=bNDwOoxj6W", "detail_url": "https://openreview.net/forum?id=bNDwOoxj6W", "authors": "Julian D\u00f6rfler,Benito van der Zander,Markus Bl\u00e4ser,Maciej Liskiewicz", "tags": "NIPS 2024,Poster", "abstract": "Learning the unknown causal parameters of a linear structural causal \nmodel is a fundamental task in causal analysis. The task, known as the \nproblem of identification, asks to estimate the parameters of the model from a\ncombination of assumptions on the graphical structure of the model and \nobservational data, represented as a non-causal covariance matrix.\nIn this paper, we give a new sound and complete algorithm for generic \nidentification which runs in polynomial space. By a standard simulation \nresult, namely $\mathsf{PSPACE} \subseteq \mathsf{EXP}$,\nthis algorithm has exponential running time which vastly improves \nthe state-of-the-art double exponential time method using a Gr\u00f6bner basis \napproach. The paper also presents evidence that parameter identification \nis computationally hard in general. In particular, we prove that the task\nasking whether, for a given feasible correlation matrix, there \nare exactly one or two or more parameter sets explaining the observed \nmatrix, is hard for $\forall \mathbb{R}$, the co-class of the existential theory \nof the reals. In particular, this problem is $\mathsf{coNP}$-hard.\nTo the best of our knowledge, this is the first hardness result for some notion \nof identifiability.", "pdf": "https://openreview.net/pdf/f1d834eef7ed51da9b519b6b52e118176558050d.pdf"} {"title": "AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data", "url": "https://openreview.net/forum?id=SAQXbnvv4t", "detail_url": "https://openreview.net/forum?id=SAQXbnvv4t", "authors": "Zifan Song,Yudong Wang,Wenwei Zhang,Kuikun Liu,Chengqi Lyu,Demin Song,Qipeng Guo,Hang Yan,Dahua Lin,Kai Chen,Cairong Zhao", "tags": "NIPS 2024,Poster", "abstract": "Open-source Large Language Models (LLMs) and their specialized variants, particularly Code LLMs, have recently delivered impressive performance. 
However, previous Code LLMs are typically fine-tuned on single-source data with limited quality and diversity, which may insufficiently elicit the potential of pre-trained Code LLMs. In this paper, we present AlchemistCoder, a series of Code LLMs with enhanced code generation and generalization capabilities fine-tuned on multi-source data. To achieve this, we are the first to unveil the inherent conflicts among the various styles and qualities in multi-source code corpora and introduce data-specific prompts with hindsight relabeling, termed AlchemistPrompts, to harmonize different data sources and instruction-response pairs. Additionally, we propose incorporating the data construction process into the fine-tuning data as code comprehension tasks, including instruction evolution, data filtering, and code review. Extensive experiments demonstrate that AlchemistCoder holds a clear lead among all models of the same size (6.7B/7B) and rivals or even surpasses larger models (15B/33B/70B), showcasing the efficacy of our method in refining instruction-following capabilities and advancing the boundaries of code intelligence. Source code and models are available at https://github.com/InternLM/AlchemistCoder.", "pdf": "https://openreview.net/pdf/be19144548bdda81c1264862c0c7092748ba442d.pdf"} {"title": "The Implicit Bias of Gradient Descent toward Collaboration between Layers: A Dynamic Analysis of Multilayer Perceptions", "url": "https://openreview.net/forum?id=jV6z08u7y0", "detail_url": "https://openreview.net/forum?id=jV6z08u7y0", "authors": "Zheng Wang,Geyong Min,Wenjie Ruan", "tags": "NIPS 2024,Poster", "abstract": "The implicit bias of gradient descent has long been considered the primary mechanism explaining the superior generalization of over-parameterized neural networks without overfitting, even when the training error is zero. However, the implicit bias toward adversarial robustness has rarely been considered in the research community, although it is crucial for the trustworthiness of machine learning models. To fill this gap, in this paper, we explore whether consecutive layers collaborate to strengthen adversarial robustness during gradient descent. By quantifying this collaboration between layers using our proposed concept, co-correlation, we demonstrate a monotonically increasing trend in co-correlation, which implies a decreasing trend in adversarial robustness during gradient descent. Additionally, we observe different behaviours between narrow and wide neural networks during gradient descent. We conducted extensive experiments that verified our proposed theorems.", "pdf": "https://openreview.net/pdf/04e0cbef1013f707e3974993e70c6d9c6b83079d.pdf"} {"title": "Optimal Flow Matching: Learning Straight Trajectories in Just One Step", "url": "https://openreview.net/forum?id=kqmucDKVcU", "detail_url": "https://openreview.net/forum?id=kqmucDKVcU", "authors": "Nikita Maksimovich Kornilov,Petr Mokrov,Alexander Gasnikov,Alexander Korotin", "tags": "NIPS 2024,Poster", "abstract": "Over the past several years, there has been a boom in the development of Flow Matching (FM) methods for generative modeling. One intriguing property pursued by the community is the ability to learn flows with straight trajectories which realize the Optimal Transport (OT) displacements. Straightness is crucial for the fast integration (inference) of the learned flow's paths. 
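For background, a minimal sketch of a basic FM training step with straight (linear) conditional interpolation paths, the setting this line of work builds on; the tiny MLP and shapes are illustrative stand-ins. Note that with independent source/data pairs the conditional paths are straight but the learned marginal flow generally is not, which is exactly the gap the straightening methods discussed next try to close.

```python
import torch
import torch.nn as nn

velocity = nn.Sequential(nn.Linear(2 + 1, 64), nn.SiLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(velocity.parameters(), lr=1e-3)

def fm_step(x0: torch.Tensor, x1: torch.Tensor) -> float:
    """x0 ~ source (e.g. Gaussian), x1 ~ data; regress the velocity field
    onto the constant target x1 - x0 along x_t = (1 - t) x0 + t x1."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0                                # straight-path velocity
    pred = velocity(torch.cat([xt, t], dim=-1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(fm_step(torch.randn(256, 2), torch.randn(256, 2) + 3.0))
```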
Unfortunately, most existing flow straightening methods are based on non-trivial iterative FM procedures which accumulate the error during training or exploit heuristics based on minibatch OT. To address these issues, we develop and theoretically justify the novel Optimal Flow Matching approach which allows recovering the straight OT displacement for the quadratic transport in just one FM step. The main idea of our approach is the employment of vector fields for FM that are parameterized by convex functions. The code of our OFM implementation and the conducted experiments is available at https://github.com/Jhomanik/Optimal-Flow-Matching", "pdf": "https://openreview.net/pdf/05aa3a9c9c8809ebd4d4d2eedbec48e8773f322c.pdf"} {"title": "VeXKD: The Versatile Integration of Cross-Modal Fusion and Knowledge Distillation for 3D Perception", "url": "https://openreview.net/forum?id=S5coB5kqSD", "detail_url": "https://openreview.net/forum?id=S5coB5kqSD", "authors": "JI Yuzhe,Yijie CHEN,Liuqing Yang,Rui Ding,Meng Yang,Xinhu Zheng", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in 3D perception have led to a proliferation of network architectures, particularly those involving multi-modal fusion algorithms. While these fusion algorithms improve accuracy, their complexity often impedes real-time performance. This paper introduces VeXKD, an effective and Versatile framework that integrates Cross-Modal Fusion with Knowledge Distillation. VeXKD applies knowledge distillation exclusively to the Bird's Eye View (BEV) feature maps, enabling the transfer of cross-modal insights to single-modal students without additional inference time overhead. It avoids volatile components that can vary across various 3D perception tasks and student modalities, thus improving versatility. The framework adopts a modality-general cross-modal fusion module to bridge the modality gap between the multi-modal teachers and single-modal students. Furthermore, leveraging byproducts generated during fusion, our BEV query guided mask generation network identifies crucial spatial locations across different BEV feature maps in a data-driven manner, significantly enhancing the effectiveness of knowledge distillation. Extensive experiments on the nuScenes dataset demonstrate notable improvements, with up to 6.9\%/4.2\% increase in mAP and NDS for 3D detection tasks and up to 4.3\% rise in mIoU for BEV map segmentation tasks, narrowing the performance gap with multi-modal models.", "pdf": "https://openreview.net/pdf/4c7bc7179efead6672cb163b205a045d24eaf240.pdf"} {"title": "ADOPT: Modified Adam Can Converge with Any $\beta_2$ with the Optimal Rate", "url": "https://openreview.net/forum?id=rzvVm0LsyK", "detail_url": "https://openreview.net/forum?id=rzvVm0LsyK", "authors": "Shohei Taniguchi,Keno Harada,Gouki Minegishi,Yuta Oshima,Seong Cheol Jeong,Go Nagahara,Tomoshi Iiyama,Masahiro Suzuki,Yusuke Iwasawa,Yutaka Matsuo", "tags": "NIPS 2024,Poster", "abstract": "Adam is one of the most popular optimization algorithms in deep learning. However, it is known that Adam does not converge in theory unless a hyperparameter, i.e., $\beta_2$, is chosen in a problem-dependent manner. There have been many attempts to fix the non-convergence (e.g., AMSGrad), but they require an impractical assumption that the gradient noise is uniformly bounded. 
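As a reference point for the modification described next, here is a minimal sketch of the vanilla Adam update (bias correction omitted for brevity); the abstract's fix amounts to normalizing by the previous second-moment estimate and swapping the order of the momentum update and the normalization.

```python
import torch

def adam_step(param, grad, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One vanilla Adam update (bias correction omitted).

    Note: the second moment v incorporates the *current* gradient, and the
    normalization happens *after* the momentum update -- the two design
    choices the following abstract modifies.
    """
    m = b1 * m + (1 - b1) * grad                 # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2            # second moment uses g_t
    param = param - lr * m / (v.sqrt() + eps)    # normalize after momentum
    return param, m, v
```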
In this paper, we propose a new adaptive gradient method named ADOPT, which achieves the optimal convergence rate of $\mathcal{O} ( 1 / \sqrt{T} )$ with any choice of $\beta_2$ without depending on the bounded noise assumption. ADOPT addresses the non-convergence issue of Adam by removing the current gradient from the second moment estimate and changing the order of the momentum update and the normalization by the second moment estimate. We also conduct intensive numerical experiments, and verify that our ADOPT achieves superior results compared to Adam and its variants across a wide range of tasks, including image classification, generative modeling, natural language processing, and deep reinforcement learning. The implementation is available at https://github.com/iShohei220/adopt.", "pdf": "https://openreview.net/pdf/fb3531672b7ca07ba39640b8b32143a1ab71b357.pdf"} {"title": "LAM3D: Large Image-Point Clouds Alignment Model for 3D Reconstruction from Single Image", "url": "https://openreview.net/forum?id=7s53dAJlwz", "detail_url": "https://openreview.net/forum?id=7s53dAJlwz", "authors": "Ruikai Cui,Xibin Song,Weixuan Sun,Senbo Wang,Weizhe Liu,Shenzhou Chen,Taizhang Shang,YANG LI,Nick Barnes,Hongdong Li,Pan Ji", "tags": "NIPS 2024,Poster", "abstract": "Large Reconstruction Models have made significant strides in the realm of automated 3D content generation from single or multiple input images. Despite their success, these models often produce 3D meshes with geometric inaccuracies, stemming from the inherent challenges of deducing 3D shapes solely from image data. In this work, we introduce a novel framework, the Large Image and Point Cloud Alignment Model (LAM3D), which utilizes 3D point cloud data to enhance the fidelity of generated 3D meshes. Our methodology begins with the development of a point-cloud-based network that effectively generates precise and meaningful latent tri-planes, laying the groundwork for accurate 3D mesh reconstruction. Building upon this, our Image-Point-Cloud Feature Alignment technique processes a single input image, aligning to the latent tri-planes to imbue image features with robust 3D information. This process not only enriches the image features but also facilitates the production of high-fidelity 3D meshes without the need for multi-view input, significantly reducing geometric distortions. Our approach achieves state-of-the-art high-fidelity 3D mesh reconstruction from a single image in just 6 seconds, and experiments on various datasets demonstrate its effectiveness.", "pdf": "https://openreview.net/pdf/b05baa970b55ae7b1c1734a5feae0223dfd80dfd.pdf"} {"title": "LESS: Label-Efficient and Single-Stage Referring 3D Segmentation", "url": "https://openreview.net/forum?id=hRqaot0NZF", "detail_url": "https://openreview.net/forum?id=hRqaot0NZF", "authors": "Xuexun Liu,Xiaoxu Xu,Jinlong Li,Qiudan Zhang,Xu Wang,Nicu Sebe,Lin Ma", "tags": "NIPS 2024,Poster", "abstract": "Referring 3D Segmentation is a visual-language task that segments, from a 3D point cloud, all points of the object specified by a query sentence. Previous works perform a two-stage paradigm, first conducting language-agnostic instance segmentation and then matching with the given text query. However, the semantic concepts from the text query and the visual cues interact only separately during training, and both instance and semantic labels for each object are required, which is time-consuming and labor-intensive. 
To mitigate these issues, we propose a novel Referring 3D Segmentation pipeline, Label-Efficient and Single-Stage, dubbed LESS, which is supervised only by efficient binary masks. Specifically, we design a Point-Word Cross-Modal Alignment module for aligning the fine-grained features of points and textual embedding. A Query Mask Predictor module and a Query-Sentence Alignment module are introduced for coarse-grained alignment between masks and query. Furthermore, we propose an area regularization loss, which coarsely reduces irrelevant background predictions on a large scale. In addition, a point-to-point contrastive loss is proposed to concentrate on distinguishing points with subtly similar features. Through extensive experiments, we achieve state-of-the-art performance on the ScanRefer dataset, surpassing previous methods by about 3.7% mIoU using only binary labels. Code is available at https://github.com/mellody11/LESS.", "pdf": "https://openreview.net/pdf/0e0e742eb42c3f5666816bff212cdaefaefbd263.pdf"} {"title": "MOTE-NAS: Multi-Objective Training-based Estimate for Efficient Neural Architecture Search", "url": "https://openreview.net/forum?id=jKLyKeZfzv", "detail_url": "https://openreview.net/forum?id=jKLyKeZfzv", "authors": "Yuming Zhang,Jun Wei Hsieh,Xin Li,Ming-Ching Chang,Chun-Chieh Lee,Kuo-Chin Fan", "tags": "NIPS 2024,Poster", "abstract": "Neural Architecture Search (NAS) methods seek effective optimization toward performance metrics regarding model accuracy and generalization while facing challenges regarding search costs and GPU resources. Recent Neural Tangent Kernel (NTK) NAS methods achieve remarkable search efficiency based on a training-free model estimate; however, they overlook the non-convex nature of the DNNs in the search process. In this paper, we develop Multi-Objective Training-based Estimate (MOTE) for efficient NAS, retaining search effectiveness and achieving the new state-of-the-art in the accuracy and cost trade-off. To improve on NTK, and inspired by the Training Speed Estimation (TSE) method, MOTE is designed to model the actual performance of DNNs from a macro to a micro perspective by drawing on the loss landscape and convergence speed simultaneously. Using two reduction strategies, MOTE is generated from a reduced architecture and a reduced dataset. Inspired by evolutionary search, our iterative ranking-based, coarse-to-fine architecture search is highly effective. Experiments on NASBench-201 show MOTE-NAS achieves 94.32% accuracy on CIFAR-10, 72.81% on CIFAR-100, and 46.38% on ImageNet-16-120, outperforming NTK-based NAS approaches. An evaluation-free (EF) version of MOTE-NAS delivers high efficiency, producing in only 5 minutes a model more accurate than KNAS.", "pdf": "https://openreview.net/pdf/6358b7a3822d30b08eea78c1906a33747aea9f02.pdf"} {"title": "Transformers are Minimax Optimal Nonparametric In-Context Learners", "url": "https://openreview.net/forum?id=hF6vatntqc", "detail_url": "https://openreview.net/forum?id=hF6vatntqc", "authors": "Juno Kim,Tai Nakamaki,Taiji Suzuki", "tags": "NIPS 2024,Poster", "abstract": "In-context learning (ICL) of large language models has proven to be a surprisingly effective method of learning a new task from only a few demonstrative examples. In this paper, we shed light on the efficacy of ICL from the viewpoint of statistical learning theory. 
We develop approximation and generalization error analyses for a transformer model composed of a deep neural network and one linear attention layer, pretrained on nonparametric regression tasks sampled from general function spaces including the Besov space and piecewise $\gamma$-smooth class. In particular, we show that sufficiently trained transformers can achieve -- and even improve upon -- the minimax optimal estimation risk in context by encoding the most relevant basis representations during pretraining. Our analysis extends to high-dimensional or sequential data and distinguishes the \emph{pretraining} and \emph{in-context} generalization gaps, establishing upper and lower bounds w.r.t. both the number of tasks and in-context examples. These findings shed light on the effectiveness of few-shot prompting and the roles of task diversity and representation learning for ICL.", "pdf": "https://openreview.net/pdf/fe45fefece1fae5d6259af61eb37b417f6652f1a.pdf"} {"title": "Memory-Efficient LLM Training with Online Subspace Descent", "url": "https://openreview.net/forum?id=P8rTCT6g45", "detail_url": "https://openreview.net/forum?id=P8rTCT6g45", "authors": "Kaizhao Liang,Bo Liu,Lizhang Chen,qiang liu", "tags": "NIPS 2024,Poster", "abstract": "Recently, a wide range of memory-efficient LLM training algorithms have gained substantial popularity. These methods leverage the low-rank structure of gradients to project optimizer states into a subspace using a projection matrix found by singular value decomposition (SVD). However, convergence of these algorithms is highly dependent on the update rules of their projection matrix. In this work, we provide the \emph{first} convergence guarantee for arbitrary update rules of the projection matrix. This guarantee is generally applicable to optimizers that can be analyzed with Hamiltonian Descent, including most common ones, such as LION, Adam. Inspired by our theoretical understanding, we propose Online Subspace Descent, a new family of subspace descent optimizers without SVD. Instead of updating the projection matrix with eigenvectors, Online Subspace Descent updates the projection matrix with online PCA. Online Subspace Descent is flexible and introduces only minimum overhead to training. We demonstrate that, for the task of pretraining LLaMA models ranging from 60M to 1B parameters on the C4 dataset, Online Subspace Descent achieves lower perplexity than state-of-the-art low-rank training methods across different settings and narrows the gap with full-rank baselines.", "pdf": "https://openreview.net/pdf/5f0555ca4bd1590bcb2944dea4f12fc550f52b4d.pdf"} {"title": "AttnDreamBooth: Towards Text-Aligned Personalized Text-to-Image Generation", "url": "https://openreview.net/forum?id=4bINoegDcm", "detail_url": "https://openreview.net/forum?id=4bINoegDcm", "authors": "Lianyu Pang,Jian Yin,Baoquan Zhao,Feize Wu,Fu Lee Wang,Qing Li,Xudong Mao", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in text-to-image models have enabled high-quality personalized image synthesis based on user-provided concepts with flexible textual control. In this work, we analyze the limitations of two primary techniques in text-to-image personalization: Textual Inversion and DreamBooth. When integrating the learned concept into new prompts, Textual Inversion tends to overfit the concept, while DreamBooth often overlooks it. We attribute these issues to the incorrect learning of the embedding alignment for the concept. 
To address this, we introduce AttnDreamBooth, a novel approach that separately learns the embedding alignment, the attention map, and the subject identity across different training stages. We also introduce a cross-attention map regularization term to enhance the learning of the attention map. Our method demonstrates significant improvements in identity preservation and text alignment compared to the baseline methods.", "pdf": "https://openreview.net/pdf/f29d4644e8143afbcc679951778ae1cc111b1ffb.pdf"} {"title": "Learning-Augmented Algorithms for the Bahncard Problem", "url": "https://openreview.net/forum?id=3cb6pF3Tvf", "detail_url": "https://openreview.net/forum?id=3cb6pF3Tvf", "authors": "Hailiang Zhao,Xueyan Tang,Peng Chen,Shuiguang Deng", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study learning-augmented algorithms for the Bahncard problem. The Bahncard problem is a generalization of the ski-rental problem, where a traveler needs to irrevocably and repeatedly decide between a cheap short-term solution and an expensive long-term one with an unknown future. Even though the problem is canonical, only a primal-dual-based learning-augmented algorithm was explicitly designed for it. We develop a new learning-augmented algorithm, named PFSUM, that incorporates both history and short-term future to improve online decision making. We derive the competitive ratio of PFSUM as a function of the prediction error and conduct extensive experiments to show that PFSUM outperforms the primal-dual-based algorithm.", "pdf": "https://openreview.net/pdf/90243c519be883d402f99ac89fa629990d353cdb.pdf"} {"title": "Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets", "url": "https://openreview.net/forum?id=PyTkA6HkzX", "detail_url": "https://openreview.net/forum?id=PyTkA6HkzX", "authors": "Eleni Straitouri,Suhas Thejaswi,Manuel Gomez Rodriguez", "tags": "NIPS 2024,Poster", "abstract": "Decision support systems based on prediction sets help humans solve multiclass classification tasks by narrowing down the set of potential label values to a subset of them, namely a prediction set, and asking them to always predict label values from the prediction sets. While these types of systems have been proven to be effective at improving the average accuracy of the predictions made by humans, by restricting human agency, they may cause harm---a human who has succeeded at predicting the ground-truth label of an instance on their own may have failed had they used these systems. In this paper, our goal is to control how frequently a decision support system based on prediction sets may cause harm, by design. To this end, we start by characterizing the above notion of harm using the theoretical framework of structural causal models. Then, we show that, under a natural, albeit unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own. Further, we also show that, under a weaker monotonicity assumption, which can be verified experimentally, we can bound how frequently a system may cause harm, again using only predictions made by humans on their own. Building upon these assumptions, we introduce a computational framework to design decision support systems based on prediction sets that are guaranteed to cause harm less frequently than a user-specified value using conformal risk control.
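For readers unfamiliar with the conformal risk control step mentioned at the end of the harm-control entry above, the generic calibration rule is short enough to sketch. This is the standard conformal risk control recipe, not the paper's specific harm estimator; the loss-matrix layout is my assumption.

```python
import numpy as np

def conformal_risk_control(losses, alpha):
    # losses: (n, m) array; entry (i, j) is the [0, 1]-bounded loss of
    # calibration point i at candidate threshold lambda_j, assumed
    # non-increasing in j (e.g., larger prediction sets cause less harm).
    n, m = losses.shape
    risk = losses.mean(axis=0)
    # smallest lambda whose inflated empirical risk stays below alpha
    ok = (n / (n + 1)) * risk + 1.0 / (n + 1) <= alpha
    return int(np.argmax(ok)) if ok.any() else m - 1
```

The returned index selects a configuration whose expected risk (here, harm frequency) is controlled below the user-specified level alpha.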
We validate our framework using real human predictions from two different human subject studies and show that, in decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.", "pdf": "https://openreview.net/pdf/9c045a4fca46dadeb1efbba8238c9daa6ceec487.pdf"} {"title": "Gene-Gene Relationship Modeling Based on Genetic Evidence for Single-Cell RNA-Seq Data Imputation", "url": "https://openreview.net/forum?id=gW0znG5JCG", "detail_url": "https://openreview.net/forum?id=gW0znG5JCG", "authors": "Daeho Um,Ji Won Yoon,Seong Jin Ahn,Yunha Yeo", "tags": "NIPS 2024,Poster", "abstract": "Single-cell RNA sequencing (scRNA-seq) technologies enable the exploration of cellular heterogeneity and facilitate the construction of cell atlases. However, scRNA-seq data often contain a large portion of missing values (false zeros) or noisy values, hindering downstream analyses. To recover these false zeros, propagation-based imputation methods have been proposed using $k$-NN graphs. However, they model only associating relationships among genes within a cell, while, according to well-known genetic evidence, there are both associating and dissociating relationships among genes. To apply this genetic evidence to gene-gene relationship modeling, this paper proposes a novel imputation method that newly employs dissociating relationships in addition to associating relationships. Our method constructs a $k$-NN graph to additionally model dissociating relationships via the negation of a given cell-gene matrix. Moreover, our method standardizes the value distribution (mean and variance) of each gene so that all genes share a standard distribution. Through extensive experiments, we demonstrate that the proposed method achieves exceptional performance gains over state-of-the-art methods in both cell clustering and gene expression recovery across six scRNA-seq datasets, validating the significance of using complete gene-gene relationships in accordance with genetic evidence. The source code is available at https://github.com/daehoum1/scCR.", "pdf": "https://openreview.net/pdf/aeb0e8d7bbaae84572ff0627ba66a948f7357efa.pdf"} {"title": "Multi-modal Transfer Learning between Biological Foundation Models", "url": "https://openreview.net/forum?id=xImeJtdUiw", "detail_url": "https://openreview.net/forum?id=xImeJtdUiw", "authors": "Juan Jose Garau-Luis,Patrick Philippe Bordes,Liam Gonzalez,Ma\u0161a Roller,Bernardo P de Almeida,Christopher F. Blum,Lorenz Hexemer,Stefan Laurent,Maren Lang,Thomas PIERROT,Guillaume Richard", "tags": "NIPS 2024,Poster", "abstract": "Biological sequences encode fundamental instructions for the building blocks of life, in the form of DNA, RNA, and proteins. Modeling these sequences is key to understanding disease mechanisms and is an active research area in computational biology. Recently, Large Language Models have shown great promise in solving certain biological tasks, but current approaches are limited to a single sequence modality (DNA, RNA, or protein). Key problems in genomics intrinsically involve multiple modalities, but it remains unclear how to adapt general-purpose sequence models to those cases. In this work we propose a multi-modal model that connects DNA, RNA, and proteins by leveraging information from different pre-trained modality-specific encoders. We demonstrate its capabilities by applying it to the largely unsolved problem of predicting how multiple RNA transcript isoforms originate from the same gene (i.e.
same DNA sequence) and map to different transcript expression levels across various human tissues. We show that our model, dubbed IsoFormer, is able to accurately predict differential transcript expression, outperforming existing methods and leveraging the use of multiple modalities. Our framework also achieves efficient knowledge transfer from the encoders' pre-training as well as between modalities. We open-source our model, paving the way for new multi-modal gene expression approaches.", "pdf": "https://openreview.net/pdf/ef1635b8ab00b68d5872359001ea59e93a3a9846.pdf"} {"title": "Recurrent neural networks: vanishing and exploding gradients are not the end of the story", "url": "https://openreview.net/forum?id=46Jr4sgTWa", "detail_url": "https://openreview.net/forum?id=46Jr4sgTWa", "authors": "Nicolas Zucchet,Antonio Orvieto", "tags": "NIPS 2024,Poster", "abstract": "Recurrent neural networks (RNNs) notoriously struggle to learn long-term memories, primarily due to vanishing and exploding gradients. The recent success of state-space models (SSMs), a subclass of RNNs, in overcoming such difficulties challenges our theoretical understanding. In this paper, we delve into the optimization challenges of RNNs and discover that, as the memory of a network increases, changes in its parameters result in increasingly large output variations, making gradient-based learning highly sensitive, even without exploding gradients. Our analysis further reveals the importance of the element-wise recurrence design pattern combined with careful parametrizations in mitigating this effect. This feature is present in SSMs, as well as in other architectures, such as LSTMs. Overall, our insights provide a new explanation for some of the difficulties in gradient-based learning of RNNs and why some architectures perform better than others.", "pdf": "https://openreview.net/pdf/1329b928d1d55f478cb4bb38a5d95414853cfc34.pdf"} {"title": "Data-Driven Discovery of Dynamical Systems in Pharmacology using Large Language Models", "url": "https://openreview.net/forum?id=KIrZmlTA92", "detail_url": "https://openreview.net/forum?id=KIrZmlTA92", "authors": "Samuel Holt,Zhaozhi Qian,Tennison Liu,Jim Weatherall,Mihaela van der Schaar", "tags": "NIPS 2024,Poster", "abstract": "The discovery of dynamical systems is crucial across a range of fields, including pharmacology, epidemiology, and physical sciences. *Accurate* and *interpretable* modeling of these systems is essential for understanding complex temporal processes, optimizing interventions, and minimizing adverse effects. In pharmacology, for example, precise modeling of drug dynamics is vital to maximize therapeutic efficacy while minimizing patient harm, as in chemotherapy. However, current models, often developed by human experts, are limited by high cost, lack of scalability, and restriction to existing human knowledge. In this paper, we present the **Data-Driven Discovery (D3)** framework, a novel approach leveraging Large Language Models (LLMs) to iteratively discover and refine interpretable models of dynamical systems, demonstrated here with pharmacological applications. Unlike traditional methods, D3 enables the LLM to propose, acquire, and integrate new features, and to validate and compare dynamical systems models, uncovering new insights into pharmacokinetics.
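Returning to the scRNA-seq imputation entry above, the negation trick can be illustrated in a few lines. This is my own minimal reading rather than the released scCR code: standardize every gene, then find each gene's associating neighbors among the genes themselves and its dissociating neighbors among the negated genes (i.e., anti-correlated partners).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def standardize_genes(X):
    # X: (cells x genes); give each gene zero mean and unit variance
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def gene_neighbors(X, k=10):
    G = standardize_genes(X).T                        # (genes x cells)
    assoc = NearestNeighbors(n_neighbors=k + 1).fit(G)
    _, assoc_idx = assoc.kneighbors(G)                # first hit is the gene itself
    dissoc = NearestNeighbors(n_neighbors=k).fit(-G)
    _, dissoc_idx = dissoc.kneighbors(G)              # g_i close to -g_j: dissociating
    return assoc_idx[:, 1:], dissoc_idx
```

Note that the negation only matters for cross-comparisons: distances within a globally negated matrix are unchanged, so dissociating partners must be searched between the original and negated copies, as above.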
Experiments on a pharmacokinetic Warfarin dataset reveal that D3 identifies a new plausible model that is well-fitting, highlighting its potential for precision dosing in clinical applications.", "pdf": "https://openreview.net/pdf/079e0d0a60c71fd4673804db121f5c68ae99fd55.pdf"} {"title": "Weak-eval-Strong: Evaluating and Eliciting Lateral Thinking of LLMs with Situation Puzzles", "url": "https://openreview.net/forum?id=h024LpF3bZ", "detail_url": "https://openreview.net/forum?id=h024LpF3bZ", "authors": "Qi Chen,Bowen Zhang,Gang Wang,Qi Wu", "tags": "NIPS 2024,Poster", "abstract": "While advancements in NLP have significantly improved the performance of Large Language Models (LLMs) on tasks requiring vertical thinking, their lateral thinking capabilities remain under-explored and challenging to measure due to the complexity of assessing creative thought processes and the scarcity of relevant data. To address these challenges, we introduce SPLAT, a benchmark leveraging Situation Puzzles to evaluate and elicit LAteral Thinking of LLMs. This benchmark, containing 975 graded situation puzzles across three difficulty levels, employs a new multi-turn player-judge framework instead of the traditional model-based evaluation, which often necessitates a stronger evaluation model. This framework simulates an interactive game where the model (player) asks the evaluation model (judge) questions about an incomplete story to infer the full scenario. The judge answers based on a detailed reference scenario or evaluates if the player's predictions align with the reference one. This approach lessens dependence on more robust evaluation models, enabling the assessment of state-of-the-art LLMs. The experiments demonstrate that a robust evaluation model, such as WizardLM-2, closely matches human judgements in both intermediate question-answering and final scenario accuracy, achieving over 80% agreement--similar to the agreement levels among humans. Furthermore, applying data and reasoning processes from our benchmark to other lateral thinking-related benchmarks, e.g., RiddleSense and BrainTeaser, leads to performance enhancements. This suggests that our benchmark effectively evaluates and elicits the lateral thinking abilities of LLMs.", "pdf": "https://openreview.net/pdf/6151b635a876e25380984eea8180a126b504b546.pdf"} {"title": "Improved Sample Complexity for Multiclass PAC Learning", "url": "https://openreview.net/forum?id=l2yvtrz3On", "detail_url": "https://openreview.net/forum?id=l2yvtrz3On", "authors": "Steve Hanneke,Shay Moran,Qian Zhang", "tags": "NIPS 2024,Poster", "abstract": "We aim to understand the optimal PAC sample complexity in multiclass learning. While finiteness of the Daniely-Shalev-Shwartz (DS) dimension has been shown to characterize the PAC learnability of a concept class [Brukhim, Carmon, Dinur, Moran, and Yehudayoff, 2022], there exist polylog factor gaps in the leading term of the sample complexity. In this paper, we reduce the gap in terms of the dependence on the error parameter to a single log factor and also propose two possible routes towards completely resolving the optimal sample complexity, each based on a key open question we formulate: one concerning list learning with bounded list size, the other concerning a new type of shifting for multiclass concept classes. 
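A tiny numeric experiment makes the claim in the entry on vanishing and exploding gradients above tangible: for a linear recurrence, the derivative of the final state with respect to the recurrent parameter blows up as memory lengthens, even though the forward dynamics never explode for |a| < 1. The demo below is mine, not the paper's.

```python
import numpy as np

def final_state(a, x):
    # linear recurrence h_t = a * h_{t-1} + x_t
    h = 0.0
    for xt in x:
        h = a * h + xt
    return h

x = np.ones(200)
for a in [0.5, 0.9, 0.99]:
    da = 1e-6
    sens = (final_state(a + da, x) - final_state(a, x)) / da
    print(f"a={a}: d h_T / d a ~= {sens:.1f}")
# the sensitivity grows rapidly as a -> 1, i.e., as the network's memory grows
```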
We prove that a positive answer to either of the two questions would completely resolve the optimal sample complexity up to log factors of the DS dimension.", "pdf": "https://openreview.net/pdf/fb5f5aaf84f1e16cc00ed0d7bb029ee322fa4259.pdf"} {"title": "FASTopic: Pretrained Transformer is a Fast, Adaptive, Stable, and Transferable Topic Model", "url": "https://openreview.net/forum?id=7t6aq0Fa9D", "detail_url": "https://openreview.net/forum?id=7t6aq0Fa9D", "authors": "Xiaobao Wu,Thong Thanh Nguyen,Delvin Ce Zhang,William Yang Wang,Anh Tuan Luu", "tags": "NIPS 2024,Poster", "abstract": "Topic models have been evolving rapidly over the years, from conventional to recent neural models. However, existing topic models generally struggle with either effectiveness, efficiency, or stability, severely impeding their practical applications. In this paper, we propose FASTopic, a fast, adaptive, stable, and transferable topic model. FASTopic follows a new paradigm: Dual Semantic-relation Reconstruction (DSR). Unlike previous conventional, VAE-based, or clustering-based methods, DSR directly models the semantic relations among document embeddings from a pretrained Transformer and learnable topic and word embeddings. By reconstructing through these semantic relations, DSR discovers latent topics. This brings about a neat and efficient topic modeling framework. We further propose a novel Embedding Transport Plan (ETP) method. Rather than relying on earlier straightforward approaches, ETP explicitly regularizes the semantic relations as optimal transport plans. This addresses the relation bias issue and thus leads to effective topic modeling. Extensive experiments on benchmark datasets demonstrate that our FASTopic shows superior effectiveness, efficiency, adaptivity, stability, and transferability, compared to state-of-the-art baselines across various scenarios.", "pdf": "https://openreview.net/pdf/18cafd4d118c374041d80bb220ec15533d30ff6b.pdf"} {"title": "Penalty-based Methods for Simple Bilevel Optimization under H\u00f6lderian Error Bounds", "url": "https://openreview.net/forum?id=oQ1Zj9iH88", "detail_url": "https://openreview.net/forum?id=oQ1Zj9iH88", "authors": "Pengyu Chen,Xu Shi,Rujun Jiang,Jiulin Wang", "tags": "NIPS 2024,Poster", "abstract": "This paper investigates simple bilevel optimization problems where we minimize a convex upper-level objective over the optimal solution set of a convex lower-level objective. Existing methods for such problems either only guarantee asymptotic convergence, have slow sublinear rates, or require strong assumptions. To address these challenges, we propose a penalization framework that delineates the relationship between approximate solutions of the original problem and its reformulated counterparts. This framework accommodates varying assumptions regarding smoothness and convexity, enabling the application of specific methods with different complexity results. \nSpecifically, when both upper- and lower-level objectives are composite convex functions, under an $\\alpha$-H\u00f6lderian error bound condition and certain mild assumptions, our algorithm attains an $(\\epsilon,\\epsilon^{\\beta})$-optimal solution of the original problem for any $\\beta> 0$ within $\\mathcal{O}\\left(\\sqrt{{1}/{\\epsilon^{\\max\\\\{\\alpha,\\beta\\\\}}}}\\right)$ iterations. The result can be improved further if the smooth part of the upper-level objective is strongly convex. We also establish complexity results when the upper- and lower-level objectives are general nonsmooth functions. 
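A toy sketch of the penalization idea just described: replace the constraint "x minimizes the lower-level objective g" by a penalty and solve the single-level problem min_x f(x) + lam * g(x) by gradient descent. This is illustrative only; the paper's framework relates approximate solutions of the two problems and derives far sharper rates under Holderian error bounds.

```python
import numpy as np

def penalized_bilevel(grad_f, grad_g, x0, lam=10.0, lr=1e-3, iters=20000):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        # one subgradient step on the penalized single-level objective
        x -= lr * (grad_f(x) + lam * grad_g(x))
    return x

# Example: upper level f(x) = ||x - c||^2 over the minimizers of the
# lower level g(x) = ||A x - b||^2 (solution set: the line x1 + x2 = 1).
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([2.0, 0.0])
x = penalized_bilevel(lambda x: 2 * (x - c),
                      lambda x: 2 * A.T @ (A @ x - b),
                      x0=[0.0, 0.0])
# x approaches (1.5, -0.5), the projection of c onto the lower-level set,
# with accuracy controlled by the penalty weight lam.
```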
Numerical experiments demonstrate the effectiveness of our algorithms.", "pdf": "https://openreview.net/pdf/f85807a28c75a5e4ad5cbd440f7369205020e1dc.pdf"} {"title": "A Huber Loss Minimization Approach to Mean Estimation under User-level Differential Privacy", "url": "https://openreview.net/forum?id=TutGINeJzZ", "detail_url": "https://openreview.net/forum?id=TutGINeJzZ", "authors": "Puning Zhao,Lifeng Lai,Li Shen,Qingming Li,Jiafei Wu,Zhe Liu", "tags": "NIPS 2024,Poster", "abstract": "Privacy protection of users' entire contribution of samples is important in distributed systems. The most effective approach is the two-stage scheme, which finds a small interval first and then gets a refined estimate by clipping samples into the interval. However, the clipping operation induces bias, which is serious if the sample distribution is heavy-tailed. In addition, users with large local sample sizes can make the sensitivity much larger; thus, the method is not suitable for imbalanced users. Motivated by these challenges, we propose a Huber loss minimization approach to mean estimation under user-level differential privacy. The connecting points of the Huber loss can be adaptively adjusted to deal with imbalanced users. Moreover, it avoids the clipping operation, thus significantly reducing the bias compared with the two-stage approach. We provide a theoretical analysis of our approach, which gives the noise strength needed for privacy protection, as well as a bound on the mean squared error. The result shows that the new method is much less sensitive to the imbalance of user-wise sample sizes and the tail of sample distributions. Finally, we perform numerical experiments to validate our theoretical analysis.", "pdf": "https://openreview.net/pdf/63c487d7f7f08d0ff8106c3cc23cbe398b2529ed.pdf"} {"title": "VFIMamba: Video Frame Interpolation with State Space Models", "url": "https://openreview.net/forum?id=4s5UsBUsUS", "detail_url": "https://openreview.net/forum?id=4s5UsBUsUS", "authors": "Guozhen Zhang,Chunxu Liu,Yutao Cui,Xiaotong Zhao,Kai Ma,Limin Wang", "tags": "NIPS 2024,Poster", "abstract": "Inter-frame modeling is pivotal in generating intermediate frames for video frame interpolation (VFI). Current approaches predominantly rely on convolution or attention-based models, which often either lack sufficient receptive fields or entail significant computational overheads. Recently, Selective State Space Models (S6) have emerged, tailored specifically for long sequence modeling, offering both linear complexity and data-dependent modeling capabilities. In this paper, we propose VFIMamba, a novel frame interpolation method for efficient and dynamic inter-frame modeling by harnessing the S6 model. Our approach introduces the Mixed-SSM Block (MSB), which initially rearranges tokens from adjacent frames in an interleaved fashion and subsequently applies multi-directional S6 modeling. This design facilitates the efficient transmission of information across frames while upholding linear complexity. Furthermore, we introduce a novel curriculum learning strategy that progressively cultivates proficiency in modeling inter-frame dynamics across varying motion magnitudes, fully unleashing the potential of the S6 model. Experimental findings showcase that our method attains state-of-the-art performance across diverse benchmarks, particularly excelling in high-resolution scenarios. 
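Stepping back to the Huber-loss mean estimation entry above: the non-private core of the method fits in a few lines, and a crude output-perturbation step shows where privacy noise would enter. Everything here is an illustrative simplification; the clipping-free M-estimator is the paper's idea, but the sensitivity constant and Laplace calibration below are my rough assumptions, not the paper's user-level analysis.

```python
import numpy as np

def huber_grad(r, delta):
    # derivative of the Huber loss: linear near zero, capped at +/- delta
    return np.clip(r, -delta, delta)

def huber_mean(x, delta=1.0, iters=200, lr=0.5):
    mu = np.median(x)  # robust starting point
    for _ in range(iters):
        mu += lr * huber_grad(x - mu, delta).mean()
    return mu

def private_huber_mean(x, delta=1.0, eps=1.0):
    mu = huber_mean(x, delta)
    sensitivity = 2 * delta / len(x)  # rough per-sample bound (assumption)
    return mu + np.random.laplace(scale=sensitivity / eps)
```

Because the gradient is capped at delta rather than the samples being clipped, heavy tails shrink the influence of outliers without introducing the clipping bias the abstract criticizes.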
In particular, on the X-TEST dataset, VFIMamba demonstrates a noteworthy improvement of 0.80 dB for 4K frames and 0.96 dB for 2K frames.", "pdf": "https://openreview.net/pdf/de833adbf41a6cd01e783ddcc38df5b08822bc0f.pdf"} {"title": "Data Distribution Valuation", "url": "https://openreview.net/forum?id=1067784F6e", "detail_url": "https://openreview.net/forum?id=1067784F6e", "authors": "Xinyi Xu,Shuaiqi Wang,Chuan-Sheng Foo,Bryan Kian Hsiang Low,Giulia Fanti", "tags": "NIPS 2024,Poster", "abstract": "Data valuation is a class of techniques for quantitatively assessing the value of data for applications like pricing in data marketplaces. Existing data valuation methods define a value for a discrete dataset. However, in many use cases, users are interested not only in the value of the dataset, but also in that of the distribution from which the dataset was sampled. For example, consider a buyer trying to evaluate whether to purchase data from different vendors. The buyer may observe (and compare) only a small preview sample from each vendor, to decide which vendor's data distribution is most useful and purchase accordingly. The core question is: how should we compare the values of data distributions from their samples? Under a Huber characterization of the data heterogeneity across vendors, we propose a maximum mean discrepancy (MMD)-based valuation method which enables theoretically principled and actionable policies for comparing data distributions from samples. We empirically demonstrate that our method is sample-efficient and effective in identifying valuable data distributions against several existing baselines, on multiple real-world datasets (e.g., network intrusion detection, credit card fraud detection) and downstream applications (classification, regression).", "pdf": "https://openreview.net/pdf/510ad3a0f8f09a33611c380714a4f86058917095.pdf"} {"title": "Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation", "url": "https://openreview.net/forum?id=fT1RkAgrC3", "detail_url": "https://openreview.net/forum?id=fT1RkAgrC3", "authors": "Yu-Liang Zhan,Zhong-Yi Lu,Hao Sun,Ze-Feng Gao", "tags": "NIPS 2024,Poster", "abstract": "Increased training parameters have enabled large pre-trained models to excel in various downstream tasks. Nevertheless, the extensive computational requirements associated with these models hinder their widespread adoption within the community. We focus on Knowledge Distillation (KD), where a compact student model is trained to mimic a larger teacher model, facilitating the transfer of knowledge from large models. In contrast to much of the previous work, we scale up the parameters of the student model during training, to benefit from over-parameterization without increasing the inference latency. In particular, we propose a tensor decomposition strategy that effectively over-parameterizes the relatively small student model through an efficient and nearly lossless decomposition of its parameter matrices into higher-dimensional tensors. To ensure efficiency, we further introduce a tensor constraint loss to align the high-dimensional tensors between the student and teacher models. Comprehensive experiments validate the significant performance enhancement by our approach in various KD tasks, covering computer vision and natural language processing areas. 
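The comparison policy in the data-distribution valuation entry above can be sketched directly. The kernel choice, bandwidth, and the "higher value = smaller MMD to the buyer's reference sample" convention are my assumptions for illustration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # biased V-statistic estimate of squared MMD between samples X and Y
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

def rank_vendors(previews, reference, gamma=1.0):
    # previews: dict of vendor name -> (n_i, d) preview sample
    scores = {name: -mmd2(S, reference, gamma) for name, S in previews.items()}
    return sorted(scores, key=scores.get, reverse=True)
```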
Our code is available at https://github.com/intell-sci-comput/OPDF.", "pdf": "https://openreview.net/pdf/140bf4656eefa226a0fdb08c83ff16dd7cab1681.pdf"} {"title": "Addressing Spatial-Temporal Heterogeneity: General Mixed Time Series Analysis via Latent Continuity Recovery and Alignment", "url": "https://openreview.net/forum?id=EMV8nIDZJn", "detail_url": "https://openreview.net/forum?id=EMV8nIDZJn", "authors": "Jiawei Chen,Chunhui Zhao", "tags": "NIPS 2024,Poster", "abstract": "Mixed time series (MiTS) comprising both continuous variables (CVs) and discrete variables (DVs) are frequently encountered yet under-explored in time series analysis. Essentially, CVs and DVs exhibit different temporal patterns and distribution types. Overlooking these heterogeneities would lead to insufficient and imbalanced representation learning, bringing biased results. This paper addresses the problem with two insights: 1) DVs may originate from intrinsic latent continuous variables (LCVs), which lose fine-grained information due to extrinsic discretization; 2) LCVs and CVs share similar temporal patterns and interact spatially. Considering these similarities and interactions, we propose a general MiTS analysis framework MiTSformer, which recovers LCVs behind DVs for sufficient and balanced spatial-temporal modeling by designing two essential inductive biases: 1) hierarchically aggregating multi-scale temporal context information to enrich the information granularity of DVs; 2) adaptively learning the aggregation processes via the adversarial guidance from CVs. Subsequently, MiTSformer captures complete spatial-temporal dependencies within and across LCVs and CVs via cascaded self- and cross-attention blocks. Empirically, MiTSformer achieves consistent SOTA on five mixed time series analysis tasks, including classification, extrinsic regression, anomaly detection, imputation, and long-term forecasting. The code is available at https://github.com/chunhuiz/MiTSformer.", "pdf": "https://openreview.net/pdf/3bd4a604702e08ddbe73ff144bf1a2db17443ec1.pdf"} {"title": "End-to-End Ontology Learning with Large Language Models", "url": "https://openreview.net/forum?id=UqvEHAnCJC", "detail_url": "https://openreview.net/forum?id=UqvEHAnCJC", "authors": "Andy Lo,Albert Q. Jiang,Wenda Li,Mateja Jamnik", "tags": "NIPS 2024,Poster", "abstract": "Ontologies are useful for automatic machine processing of domain knowledge as they represent it in a structured format. Yet, constructing ontologies requires substantial manual effort. To automate part of this process, large language models (LLMs) have been applied to solve various subtasks of ontology learning. However, this partial ontology learning does not capture the interactions between subtasks. We address this gap by introducing OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch. Rather than focusing on subtasks, like individual relations between entities, we model entire subcomponents of the target ontology by finetuning an LLM with a custom regulariser that reduces overfitting on high-frequency concepts. We introduce a novel suite of metrics for evaluating the quality of the generated ontology by measuring its semantic and structural similarity to the ground truth. In contrast to standard metrics, our metrics use deep learning techniques to define more robust distance measures between graphs. 
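The "nearly lossless decomposition into higher-dimensional tensors" in the over-parameterized distillation entry above can be illustrated with a two-core, matrix-product-style split; the reshaping scheme and the name `mpo_factor` are mine, not the paper's exact construction:

```python
import torch

def mpo_factor(W, m1, n1, bond=None):
    # Reshape W (m x n) into a 4-way tensor, group indices as
    # (m1*n1) x (m2*n2), and SVD-split into two trainable cores.
    # Keeping every singular value (bond=None) makes this lossless.
    m, n = W.shape
    m2, n2 = m // m1, n // n1
    T = W.reshape(m1, m2, n1, n2).permute(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, S, Vh = torch.linalg.svd(T, full_matrices=False)
    r = bond or S.numel()
    A = torch.nn.Parameter(U[:, :r] * S[:r].sqrt())       # core 1: (m1*n1, r)
    B = torch.nn.Parameter(S[:r, None].sqrt() * Vh[:r])   # core 2: (r, m2*n2)
    return A, B

def reconstruct(A, B, m1, m2, n1, n2):
    # collapse the cores back into an (m x n) weight for the forward pass
    T = (A @ B).reshape(m1, n1, m2, n2).permute(0, 2, 1, 3)
    return T.reshape(m1 * m2, n1 * n2)
```

Training the cores while calling `reconstruct` in the forward pass gives the student extra trainable parameters during training, yet the product collapses back into a single matrix at inference time, so latency is unchanged.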
Both our quantitative and qualitative results on Wikipedia show that OLLM outperforms subtask composition methods, producing more semantically accurate ontologies while maintaining structural integrity. We further demonstrate that our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples. Our source code and datasets are available at https://github.com/andylolu2/ollm.", "pdf": "https://openreview.net/pdf/5d96880880337d808f498844efcffb20de960c0d.pdf"} {"title": "SyncVIS: Synchronized Video Instance Segmentation", "url": "https://openreview.net/forum?id=tTpVHsqTKf", "detail_url": "https://openreview.net/forum?id=tTpVHsqTKf", "authors": "rongkun Zheng,Lu Qi,Xi Chen,Yi Wang,Kun Wang,Yu Qiao,Hengshuang Zhao", "tags": "NIPS 2024,Poster", "abstract": "Recent DETR-based methods have advanced the development of Video Instance Segmentation (VIS) through transformers' efficiency and capability in modeling spatial and temporal information. Despite this remarkable progress, existing works follow asynchronous designs, which model video sequences either via video-level queries only or by adopting query-sensitive cascade structures, resulting in difficulties when handling complex and challenging video scenarios. In this work, we analyze the cause of this phenomenon and the limitations of the current solutions, and propose to conduct synchronized modeling via a new framework named SyncVIS. Specifically, SyncVIS explicitly introduces video-level query embeddings and designs two key modules to synchronize video-level query with frame-level query embeddings: a synchronized video-frame modeling paradigm and a synchronized embedding optimization strategy. The former promotes mutual learning between frame- and video-level embeddings, while the latter divides large video sequences into small clips for easier optimization. Extensive experimental evaluations are conducted on the challenging YouTube-VIS 2019 & 2021 & 2022, and OVIS benchmarks, and SyncVIS achieves state-of-the-art results, which demonstrates the effectiveness and generality of the proposed approach. The code is available at https://github.com/rkzheng99/SyncVIS.", "pdf": "https://openreview.net/pdf/8dabb4fe1f28a068b801955fb703d8b9826f8a1d.pdf"} {"title": "Learning Diffusion Priors from Observations by Expectation Maximization", "url": "https://openreview.net/forum?id=7v88Fh6iSM", "detail_url": "https://openreview.net/forum?id=7v88Fh6iSM", "authors": "Fran\u00e7ois Rozet,G\u00e9r\u00f4me Andry,Francois Lanusse,Gilles Louppe", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models recently proved to be remarkable priors for Bayesian inverse problems. However, training these models typically requires access to large amounts of clean data, which could prove difficult in some settings. In this work, we present a novel method based on the expectation-maximization algorithm for training diffusion models from incomplete and noisy observations only. Unlike previous works, our method leads to proper diffusion models, which is crucial for downstream tasks. As part of our method, we propose and motivate an improved posterior sampling scheme for unconditional diffusion models. 
We present empirical evidence supporting the effectiveness of our method.", "pdf": "https://openreview.net/pdf/7b4718e9c0bfb39f7e8ea6d56a3075519ad5050b.pdf"} {"title": "MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures", "url": "https://openreview.net/forum?id=6A29LUZhfv", "detail_url": "https://openreview.net/forum?id=6A29LUZhfv", "authors": "Jinjie Ni,Fuzhao Xue,Xiang Yue,Yuntian Deng,Mahir Shah,Kabir Jain,Graham Neubig,Yang You", "tags": "NIPS 2024,Poster", "abstract": "Evaluating large language models (LLMs) is challenging. Traditional ground-truth-based benchmarks fail to capture the comprehensiveness and nuance of real-world queries, while LLM-as-judge benchmarks suffer from grading biases and limited query quantity. Both of them may also become contaminated over time. User-facing evaluation, such as Chatbot Arena, provides reliable signals but is costly and slow. In this work, we propose MixEval, a new paradigm for establishing efficient, gold-standard LLM evaluation by strategically mixing off-the-shelf benchmarks. It bridges (1) comprehensive and well-distributed real-world user queries and (2) efficient and fairly-graded ground-truth-based benchmarks, by matching queries mined from the web with similar queries from existing benchmarks. Based on MixEval, we further build MixEval-Hard, which offers more room for model improvement. Our benchmarks\u2019 advantages lie in (1) a 0.96 model ranking correlation with Chatbot Arena arising from the highly impartial query distribution and grading mechanism, (2) fast, cheap, and reproducible execution (6% of the time and cost of MMLU), and (3) dynamic evaluation enabled by the rapid and stable data update pipeline. We provide extensive meta-evaluation and analysis for our and existing LLM benchmarks to deepen the community\u2019s understanding of LLM evaluation and guide future research directions.", "pdf": "https://openreview.net/pdf/6ac6e01113dda9a1da43dd2cf8eea064151f65c9.pdf"} {"title": "Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration", "url": "https://openreview.net/forum?id=xvYI7TCiU6", "detail_url": "https://openreview.net/forum?id=xvYI7TCiU6", "authors": "Haowen Dou,Lujuan Dang,Zhirong Luan,Badong Chen", "tags": "NIPS 2024,Poster", "abstract": "Despite the success of Multi-Agent Reinforcement Learning (MARL) algorithms in cooperative tasks, previous works, unfortunately, face challenges in heterogeneous scenarios since they simply disable parameter sharing for agent specialization. A sequential updating scheme was thus proposed, which naturally diversifies agents by encouraging them to learn from preceding ones. However, the exploration strategy in the sequential scheme has not been investigated. Benefiting from one-by-one updating, agents have access to the information from preceding agents. Thus, in this work, we propose to exploit the preceding information to enhance exploration and heterogeneity sequentially. We present Multi-Agent Divergence Policy Optimization (MADPO), equipped with a mutual policy divergence maximization framework. We quantify the policy discrepancies between episodes to enhance exploration and between agents to heterogenize agents, termed intra-agent and inter-agent policy divergence. To address the issue that traditional divergence measurements lack stability and directionality, we propose to employ the conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives. 
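The benchmark-mixing step in the MixEval entry above boils down to nearest-neighbor matching in an embedding space. A toy sketch follows; the hashed bag-of-words `embed` is a deliberately crude stand-in for a real sentence encoder, and the threshold is invented:

```python
import numpy as np

def embed(text, dim=256):
    # placeholder embedding: hashed bag-of-words (swap in a real encoder)
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def mix_benchmark(web_queries, bench_items, threshold=0.8):
    # bench_items: list of (query, graded_answer) pairs from existing suites
    bench_vecs = np.stack([embed(q) for q, _ in bench_items])
    mixed = []
    for wq in web_queries:
        sims = bench_vecs @ embed(wq)
        j = int(np.argmax(sims))
        if sims[j] >= threshold:           # keep only benchmark items that
            mixed.append(bench_items[j])   # mirror real user queries
    return mixed
```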
Extensive experiments show that the proposed method outperforms state-of-the-art sequential updating approaches in two challenging multi-agent tasks with various heterogeneous scenarios.", "pdf": "https://openreview.net/pdf/8c826a2aefcd536db91f619be426a50d9521f482.pdf"} {"title": "ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models", "url": "https://openreview.net/forum?id=NrwASKGm7A", "detail_url": "https://openreview.net/forum?id=NrwASKGm7A", "authors": "Yuzhe Gu,Ziwei Ji,Wenwei Zhang,Chengqi Lyu,Dahua Lin,Kai Chen", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) exhibit hallucinations in long-form question-answering tasks across various domains and wide applications. Current hallucination detection and mitigation datasets are limited in domain and size, and they struggle to scale due to prohibitive labor costs and the insufficient reliability of existing hallucination annotators. To facilitate the scalable oversight of LLM hallucinations, this paper introduces an iterative self-training framework that simultaneously and progressively scales up the annotation dataset and improves the accuracy of the annotator. Based on the Expectation Maximization algorithm, in each iteration, the framework first applies an automatic hallucination annotation pipeline for a scaled dataset and then trains a more accurate annotator on the dataset. This new annotator is adopted in the annotation pipeline for the next iteration. Extensive experimental results demonstrate that the final hallucination annotator, with only 7B parameters, surpasses GPT-4 and obtains new state-of-the-art hallucination detection results on HaluEval and HalluQA by zero-shot inference. Such an annotator can not only evaluate the hallucination levels of various LLMs on the large-scale dataset but also help to mitigate hallucination in LLM generations, with the Natural Language Inference metric increasing from 25% to 37% on HaluEval.", "pdf": "https://openreview.net/pdf/27a31c0c65a8215ad754bb49afa17fce093a6472.pdf"} {"title": "Discovering Preference Optimization Algorithms with and for Large Language Models", "url": "https://openreview.net/forum?id=erjQDJ0z9L", "detail_url": "https://openreview.net/forum?id=erjQDJ0z9L", "authors": "Chris Lu,Samuel Holt,Claudio Fanconi,Alex James Chan,Jakob Nicolaus Foerster,Mihaela van der Schaar,Robert Tjarko Lange", "tags": "NIPS 2024,Poster", "abstract": "Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.\nTypically, preference optimization is approached as an offline supervised learning task using manually crafted convex loss functions. While these methods are based on theoretical insights, they are inherently constrained by human creativity, so the large search space of possible loss functions remains under-explored. We address this by performing LLM-driven *objective discovery* to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention. Specifically, we iteratively prompt an LLM to propose and implement new preference optimization loss functions based on previously evaluated performance metrics. This process leads to the discovery of previously unknown and performant preference optimization algorithms. The best performing of these we call *Discovered Preference Optimization* (DiscoPOP), a novel algorithm that adaptively blends logistic and exponential losses. 
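The phrase "adaptively blends logistic and exponential losses" in the DiscoPOP entry above admits a compact sketch. The blend below is illustrative of the described shape, not the paper's discovered formula; `beta` and `tau` are invented constants:

```python
import torch
import torch.nn.functional as F

def blended_preference_loss(logratio_chosen, logratio_rejected,
                            beta=0.05, tau=0.05):
    # d > 0 when the policy prefers the chosen response over the rejected one
    d = beta * (logratio_chosen - logratio_rejected)
    logistic = -F.logsigmoid(d)       # DPO-style logistic loss
    exponential = torch.exp(-d)       # exponential preference loss
    mix = torch.sigmoid(d / tau)      # data-adaptive blending weight
    return (mix * logistic + (1 - mix) * exponential).mean()
```

Here each `logratio_*` denotes log pi_theta(y|x) - log pi_ref(y|x) for the chosen and rejected responses, as in standard offline preference optimization.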
Experiments demonstrate the state-of-the-art performance of DiscoPOP and its successful transfer to held-out tasks.", "pdf": "https://openreview.net/pdf/79352602ab6c4a297fec47ab6365863d952f6a82.pdf"} {"title": "Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms", "url": "https://openreview.net/forum?id=ybLXvqJyQA", "detail_url": "https://openreview.net/forum?id=ybLXvqJyQA", "authors": "Marc Wanner,Laura Lewis,Chiranjib Bhattacharyya,Devdatt Dubhashi,Alexandru Gheorghiu", "tags": "NIPS 2024,Poster", "abstract": "A fundamental problem in quantum many-body physics is that of finding ground states of local\nHamiltonians. A number of recent works gave provably efficient machine learning (ML) algorithms\nfor learning ground states. Specifically, [Huang et al., Science 2022] introduced an approach for learning\nproperties of the ground state of an $n$-qubit gapped local Hamiltonian $H$ from only $n^{\\mathcal{O}(1)}$ data\npoints sampled from Hamiltonians in the same phase of matter. This was subsequently improved\nby [Lewis et al., Nature Communications 2024] to $\\mathcal{O}(\\log n)$ samples when the geometry of the $n$-qubit system is known.\nIn this work, we introduce two approaches that achieve a constant sample complexity, independent\nof system size $n$, for learning ground state properties. Our first algorithm consists of a simple\nmodification of the ML model used by Lewis et al. and applies to a property of interest known beforehand. Our second algorithm, which applies even if a description of\nthe property is not known, is a deep neural network model. While empirical results showing the\nperformance of neural networks have been reported, to our knowledge, this is the first rigorous\nsample complexity bound on a neural network model for predicting ground state properties. We also perform numerical experiments that confirm the improved scaling of our approach compared to earlier results.", "pdf": "https://openreview.net/pdf/83a1900fe49a28fcac07aec59f2f276432aaaff2.pdf"} {"title": "Is Score Matching Suitable for Estimating Point Processes?", "url": "https://openreview.net/forum?id=HQgHCVZiHw", "detail_url": "https://openreview.net/forum?id=HQgHCVZiHw", "authors": "Haoqun Cao,Zizhuo Meng,Tianjun Ke,Feng Zhou", "tags": "NIPS 2024,Poster", "abstract": "Score matching estimators for point processes have gained widespread attention in recent years because they do not require the calculation of intensity integrals, thereby effectively addressing the computational challenges in maximum likelihood estimation (MLE). Some existing works have proposed score matching estimators for point processes. However, this work demonstrates that the incompleteness of the estimators proposed in those works renders them applicable only to specific problems, and they fail for more general point processes. To address this issue, this work introduces the weighted score matching estimator for point processes. Theoretically, we prove the consistency of the estimator we propose. Experimental results indicate that our estimator accurately estimates model parameters on synthetic data and yields results consistent with MLE on real data. In contrast, existing score matching estimators fail to perform effectively. 
Codes are publicly available at \\url{https://github.com/KenCao2007/WSM_TPP}.", "pdf": "https://openreview.net/pdf/5d086f5651da668c868278266d3a7cc9a716a273.pdf"} {"title": "On the Ability of Developers' Training Data Preservation of Learnware", "url": "https://openreview.net/forum?id=wsqDJHPUHN", "detail_url": "https://openreview.net/forum?id=wsqDJHPUHN", "authors": "Hao-Yi Lei,Zhi-Hao Tan,Zhi-Hua Zhou", "tags": "NIPS 2024,Poster", "abstract": "The learnware paradigm aims to enable users to leverage numerous existing well-trained models instead of building machine learning models from scratch. In this paradigm, developers worldwide can submit their well-trained models spontaneously into a learnware dock system, and the system helps developers generate a specification for each model to form a learnware. As the key component, a specification should characterize the capabilities of the model, enabling it to be adequately identified and reused, while preserving the developer's original data. Recently, the RKME (Reduced Kernel Mean Embedding) specification was proposed and most commonly utilized. This paper provides a theoretical analysis of the RKME specification regarding its ability to preserve the developer's training data. By modeling it as a geometric problem on manifolds and utilizing tools from geometric analysis, we prove that the RKME specification is able to disclose none of the developer's original data and possesses robust defense against common inference attacks, while preserving sufficient information for effective learnware identification.", "pdf": "https://openreview.net/pdf/194052c31fa1e87e185856fa6e15152cd51c45b1.pdf"} {"title": "Revisiting the Integration of Convolution and Attention for Vision Backbone", "url": "https://openreview.net/forum?id=ttUXtV2YrA", "detail_url": "https://openreview.net/forum?id=ttUXtV2YrA", "authors": "Lei Zhu,Xinjiang Wang,Wayne Zhang,Rynson W. H. Lau", "tags": "NIPS 2024,Poster", "abstract": "Convolutions (Convs) and multi-head self-attentions (MHSAs) are typically considered alternatives to each other for building vision backbones. Although some works try to integrate both, they apply the two operators simultaneously at the finest pixel granularity. With Convs responsible for per-pixel feature extraction already, the question is whether we still need to include the heavy MHSAs at such a fine-grained level. In fact, this is the root cause of the scalability issue w.r.t. the input resolution for vision transformers. To address this important problem, we propose in this work to use MHSAs and Convs in parallel \\textbf{at different granularity levels} instead. Specifically, in each layer, we use two different ways to represent an image: a fine-grained regular grid and a coarse-grained set of semantic slots. We apply different operations to these two representations: Convs to the grid for local features, and MHSAs to the slots for global features. A pair of fully differentiable soft clustering and dispatching modules is introduced to bridge the grid and set representations, thus \nenabling local-global fusion. Through extensive experiments on various vision tasks, we empirically verify the potential of the proposed integration scheme, named \\textit{GLMix}: by offloading the burden of fine-grained features to light-weight Convs, it is sufficient to use MHSAs in a few (e.g., 64) semantic slots to match the performance of recent state-of-the-art backbones, while being more efficient. 
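The soft clustering and dispatching pair in the GLMix entry above is easy to sketch; the shapes, dot-product similarity, and normalization below are my assumptions rather than the paper's exact modules:

```python
import torch

def cluster(grid, slots):
    # grid: (N, C) pixel features; slots: (K, C) learnable slot queries
    attn = torch.softmax(grid @ slots.T / grid.shape[-1] ** 0.5, dim=-1)
    # soft-assign pixels to slots, then average features per slot
    slot_feats = attn.T @ grid / (attn.sum(0, keepdim=True).T + 1e-6)
    return slot_feats, attn  # (K, C) slot features and (N, K) assignments

def dispatch(slot_feats, attn):
    # scatter globally-mixed slot features back onto the pixel grid
    return attn @ slot_feats  # (N, C)
```

Between `cluster` and `dispatch`, the (K, C) slots would go through MHSAs while the (N, C) grid goes through Convs, realizing the coarse/fine split the abstract describes.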
Our visualization results also demonstrate that the soft clustering module produces a meaningful semantic grouping effect with only IN1k classification supervision, which may induce better interpretability and inspire new weakly-supervised semantic segmentation approaches. Code will be available at \\url{https://github.com/rayleizhu/GLMix}.", "pdf": "https://openreview.net/pdf/1bec4c01160f57a5560a6532b6328cafc3f4324e.pdf"} {"title": "Continual Counting with Gradual Privacy Expiration", "url": "https://openreview.net/forum?id=V6qdb1AgsM", "detail_url": "https://openreview.net/forum?id=V6qdb1AgsM", "authors": "Joel Daniel Andersson,Monika Henzinger,Rasmus Pagh,Teresa Anna Steiner,Jalaj Upadhyay", "tags": "NIPS 2024,Poster", "abstract": "Differential privacy with gradual expiration models the setting where data items arrive in a stream and at a given time $t$ the privacy loss guaranteed for a data item seen at time $(t-d)$ is $\\epsilon g(d)$, where $g$ is a monotonically non-decreasing function. We study the fundamental *continual (binary) counting* problem where each data item consists of a bit and the algorithm needs to output at each time step the sum of all the bits streamed so far. For a stream of length $T$ and privacy *without* expiration, continual counting is possible with maximum (over all time steps) additive error $O(\\log^2(T)/\\epsilon)$, and the best known lower bound is $\\Omega(\\log(T)/\\epsilon)$; closing this gap is a challenging open problem. \n\nWe show that the situation is very different for privacy with gradual expiration by giving upper and lower bounds for a large set of expiration functions $g$. Specifically, our algorithm achieves an additive error of $O(\\log(T)/\\epsilon)$ for a large set of privacy expiration functions. We also give a lower bound that shows that if $C$ is the additive error of any $\\epsilon$-DP algorithm for this problem, then the product of $C$ and the privacy expiration function after $2C$ steps must be $\\Omega(\\log(T)/\\epsilon)$. Our algorithm matches this lower bound as its additive error is $O(\\log(T)/\\epsilon)$, even when $g(2C) = O(1)$.\n\nOur empirical evaluation shows that our algorithm achieves a slowly growing privacy loss, with significantly smaller empirical privacy loss for large values of $d$ than a natural baseline algorithm.", "pdf": "https://openreview.net/pdf/f8adf5112bba52e8ed999054153835f97fd6290e.pdf"} {"title": "Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models", "url": "https://openreview.net/forum?id=YNx7ai4zTs", "detail_url": "https://openreview.net/forum?id=YNx7ai4zTs", "authors": "Jiaqi Li,Qianshan Wei,Chuanyi Zhang,Guilin Qi,Miaozeng Du,Yongrui Chen,Sheng Bi,Fan Liu", "tags": "NIPS 2024,Poster", "abstract": "Machine unlearning (MU) empowers individuals with the `right to be forgotten' by removing their private or sensitive information encoded in machine learning models. However, it remains uncertain whether MU can be effectively applied to Multimodal Large Language Models (MLLMs), particularly in scenarios of forgetting the leaked visual data of concepts. To overcome the challenge, we propose an efficient method, Single Image Unlearning (SIU), to unlearn the visual recognition of a concept by fine-tuning a single associated image for a few steps. SIU consists of two key aspects: (i) Constructing Multifaceted fine-tuning data. We introduce four targets, based on which we construct fine-tuning data for the concepts to be forgotten; (ii) Joint training loss. 
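For context on the continual counting entry above, the classical binary-tree mechanism, the baseline attaining the $O(\log^2(T)/\epsilon)$ additive error quoted there, can be written as a short streaming sketch (this is the standard mechanism, not the paper's expiration-aware algorithm):

```python
import numpy as np

class TreeCounter:
    def __init__(self, T, eps):
        self.L = int(np.ceil(np.log2(max(T, 2)))) + 1
        self.b = eps / self.L            # privacy budget split across levels
        self.alpha = np.zeros(self.L)    # exact dyadic partial sums
        self.noisy = np.zeros(self.L)    # their noisy releases
        self.t = 0

    def step(self, bit):
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1   # lowest set bit of t
        self.alpha[i] = self.alpha[:i].sum() + bit
        self.noisy[i] = self.alpha[i] + np.random.laplace(scale=1.0 / self.b)
        self.alpha[:i] = 0.0
        self.noisy[:i] = 0.0
        live = [j for j in range(self.L) if (self.t >> j) & 1]
        return self.noisy[live].sum()    # private prefix sum at time t
```

Each bit touches one node per level, so Laplace noise of scale L/eps per node gives eps-DP overall, and every prefix sum adds at most L noisy nodes, hence the log-squared error.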
To synchronously forget the visual recognition of concepts and preserve the utility of MLLMs, we fine-tune MLLMs through a novel Dual Masked KL-divergence Loss combined with Cross Entropy loss. Alongside our method, we establish MMUBench, a new benchmark for MU in MLLMs and introduce a collection of metrics for its evaluation. Experimental results on MMUBench show that SIU completely surpasses the performance of existing methods. Furthermore, we surprisingly find that SIU can avoid invasive membership inference attacks and jailbreak attacks. To the best of our knowledge, we are the first to explore MU in MLLMs. We will release the code and benchmark in the near future.", "pdf": "https://openreview.net/pdf/36bb66fb6be9ac982ed530b5063c36c7ec6c8d04.pdf"} {"title": "Non-geodesically-convex optimization in the Wasserstein space", "url": "https://openreview.net/forum?id=LGG1IQhbOr", "detail_url": "https://openreview.net/forum?id=LGG1IQhbOr", "authors": "Hoang Phuc Hau Luu,Hanlin Yu,Bernardo Williams,Petrus Mikkola,Marcelo Hartmann,Kai Puolam\u00e4ki,Arto Klami", "tags": "NIPS 2024,Poster", "abstract": "We study a class of optimization problems in the Wasserstein space (the space of probability measures) where the objective function is nonconvex along generalized geodesics. Specifically, the objective exhibits some difference-of-convex structure along these geodesics. The setting also encompasses sampling problems where the logarithm of the target distribution is difference-of-convex. We derive multiple convergence insights for a novel semi Forward-Backward Euler scheme under several nonconvex (and possibly nonsmooth) regimes. Notably, the semi Forward-Backward Euler is just a slight modification of the Forward-Backward Euler whose convergence is---to our knowledge---still unknown in our very general non-geodesically-convex setting.", "pdf": "https://openreview.net/pdf/459c148b7dd46adbce5fcbb5f07d51ef992d6be3.pdf"} {"title": "A Theoretical Understanding of Self-Correction through In-context Alignment", "url": "https://openreview.net/forum?id=OtvNLTWYww", "detail_url": "https://openreview.net/forum?id=OtvNLTWYww", "authors": "Yifei Wang,Yuyang Wu,Zeming Wei,Stefanie Jegelka,Yisen Wang", "tags": "NIPS 2024,Poster", "abstract": "Going beyond mimicking limited human experiences, recent studies show initial evidence that, like humans, large language models (LLMs) are capable of improving their abilities purely by self-correction, i.e., correcting previous responses through self-examination, as seen in models like OpenAI o1. Nevertheless, little is known about how such capabilities arise. In this work, based on a simplified setup akin to an alignment task, we theoretically analyze self-correction from an in-context learning perspective, showing that when LLMs give relatively accurate self-examinations as rewards, they are capable of refining responses in an in-context way. Notably, going beyond previous theories on over-simplified linear transformers, our theoretical construction underpins the roles of several key designs of realistic transformers for self-correction: softmax attention, multi-head attention, and the MLP block. We validate these findings extensively on synthetic datasets. Inspired by these findings, we propose a simple self-correction strategy, Checking as Context (CaC), which finds novel applications in alleviating social bias and defending against LLM jailbreaks. 
We believe that these findings will inspire further research on understanding, exploiting, and enhancing self-correction for building better foundation models. Code is at https://github.com/yifeiwang77/Self-Correction.", "pdf": "https://openreview.net/pdf/61c3046eb708df4bdeec88d9611a7f8586a706cb.pdf"} {"title": "One-Step Diffusion Distillation through Score Implicit Matching", "url": "https://openreview.net/forum?id=ogk236hsJM", "detail_url": "https://openreview.net/forum?id=ogk236hsJM", "authors": "Weijian Luo,Zemin Huang,Zhengyang Geng,J Zico Kolter,Guo-Jun Qi", "tags": "NIPS 2024,Poster", "abstract": "Despite their strong performances on many generative tasks, diffusion models require a large number of sampling steps in order to generate realistic samples. This has motivated the community to develop effective methods to distill pre-trained diffusion models into more efficient models, but these methods still typically require few-step inference or perform substantially worse than the underlying model. In this paper, we present Score Implicit Matching (SIM), a new approach to distilling pre-trained diffusion models into single-step generator models, while maintaining almost the same sample generation ability as the original model and being data-free, with no need for training samples during distillation. The method rests upon the fact that, although the traditional score-based loss is intractable to minimize for generator models, under certain conditions we \\emph{can} efficiently compute the \\emph{gradients} for a wide class of score-based divergences between a diffusion model and a generator. SIM shows strong empirical performances for one-step generators: on the CIFAR10 dataset, it achieves an FID of 2.06 for unconditional generation and 1.96 for class-conditional generation. Moreover, by applying SIM to a leading transformer-based diffusion model, we distill a single-step generator for text-to-image (T2I) generation that attains an aesthetic score of 6.42 with no performance decline over the original multi-step counterpart, clearly outperforming the other one-step generators, including SDXL-TURBO (5.33), SDXL-LIGHTNING (5.34), and HYPER-SDXL (5.85). We will release this industry-ready one-step transformer-based T2I generator along with this paper.", "pdf": "https://openreview.net/pdf/6737446f78cac3f1ee056d202bc0989af0d0522d.pdf"} {"title": "Denoising Diffusion Path: Attribution Noise Reduction with An Auxiliary Diffusion Model", "url": "https://openreview.net/forum?id=bSv0MBDBF2", "detail_url": "https://openreview.net/forum?id=bSv0MBDBF2", "authors": "Yiming Lei,Zilong Li,Junping Zhang,Hongming Shan", "tags": "NIPS 2024,Poster", "abstract": "The explainability of deep neural networks (DNNs) is critical for trust and reliability in AI systems. Path-based attribution methods, such as integrated gradients (IG), aim to explain predictions by accumulating gradients along a path from a baseline to the target image. However, noise accumulated during this process can significantly distort the explanation. While existing methods primarily concentrate on finding alternative paths to circumvent noise, they overlook a critical issue: intermediate-step images frequently diverge from the distribution of training data, further intensifying the impact of noise. This work presents a novel Denoising Diffusion Path (DDPath) to tackle this challenge by harnessing the power of diffusion models for denoising. 
By exploiting the inherent ability of diffusion models to progressively remove noise from an image, DDPath constructs a piece-wise linear path. Each segment of this path ensures that samples drawn from a Gaussian distribution are centered around the target image. This approach facilitates a gradual reduction of noise along the path. We further demonstrate that DDPath adheres to essential axiomatic properties for attribution methods and can be seamlessly integrated with existing methods such as IG. Extensive experimental results demonstrate that DDPath can significantly reduce noise in the attributions\u2014resulting in clearer explanations\u2014and achieves better quantitative results than traditional path-based methods.", "pdf": "https://openreview.net/pdf/03d8ae20aa80078dbb40288ba9c7ffc811f16b47.pdf"} {"title": "Exploring Fixed Point in Image Editing: Theoretical Support and Convergence Optimization", "url": "https://openreview.net/forum?id=2wMJ4wq4az", "detail_url": "https://openreview.net/forum?id=2wMJ4wq4az", "authors": "chen hang,Zhe Ma,Haoming Chen,Xuwei Fang,Weisheng Xie,Faming Fang,Guixu Zhang,Hongbin Wang", "tags": "NIPS 2024,Poster", "abstract": "In image editing, Denoising Diffusion Implicit Models (DDIM) inversion has become a widely adopted method and is extensively used in various image editing approaches. The core concept of DDIM inversion stems from the deterministic sampling technique of DDIM, which allows the DDIM process to be viewed as an Ordinary Differential Equation (ODE) process that is reversible. This enables the prediction of corresponding noise from a reference image, ensuring that the restored image from this noise remains consistent with the reference image. Image editing exploits this property by modifying the cross-attention between text and images to edit specific objects while preserving the remaining regions. However, in the DDIM inversion, using the $t-1$ time step to approximate the noise prediction at time step $t$ introduces errors between the restored image and the reference image. Recent approaches have modeled each step of the DDIM inversion process as finding a fixed-point problem of an implicit function. This approach significantly mitigates the error in the restored image but lacks theoretical support regarding the existence of such fixed points. Therefore, this paper focuses on the study of fixed points in DDIM inversion and provides theoretical support. Based on the obtained theoretical insights, we further optimize the loss function for the convergence of fixed points in the original DDIM inversion, improving the visual quality of the edited image. Finally, we extend the fixed-point based image editing to the application of unsupervised image dehazing, introducing a novel text-based approach for unsupervised dehazing.", "pdf": "https://openreview.net/pdf/023cb1ca8a2612ffee2942eb9ac78e685c3c4d7a.pdf"} {"title": "Object segmentation from common fate: Motion energy processing enables human-like zero-shot generalization to random dot stimuli", "url": "https://openreview.net/forum?id=Po7iQKKT5b", "detail_url": "https://openreview.net/forum?id=Po7iQKKT5b", "authors": "Matthias Tangemann,Matthias Kuemmerer,Matthias Bethge", "tags": "NIPS 2024,Poster", "abstract": "Humans excel at detecting and segmenting moving objects according to the {\\it Gestalt} principle of \u201ccommon fate\u201d. Remarkably, previous works have shown that human perception generalizes this principle in a zero-shot fashion to unseen textures or random dots. 
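The fixed-point view in the DDIM-inversion entry above can be made concrete: each inversion step solves an implicit equation z_t = f(z_t), because the noise prediction should be evaluated at z_t itself rather than at z_{t-1}. A minimal fixed-point iteration, assuming a standard pretrained noise model `eps_model` and cumulative schedule `alpha` (both placeholders), looks like:

```python
import torch

def invert_step(z_prev, t, eps_model, alpha, iters=5):
    # Solve z_t = sqrt(a_t/a_prev) * z_prev
    #           + (sqrt(1-a_t) - sqrt(a_t*(1-a_prev)/a_prev)) * eps(z_t, t)
    # by fixed-point iteration instead of the usual t-1 approximation.
    a_t, a_prev = alpha[t], alpha[t - 1]
    z_t = z_prev.clone()                 # initialize at the previous latent
    for _ in range(iters):
        eps = eps_model(z_t, t)          # re-evaluate the noise at the iterate
        z_t = (a_t / a_prev).sqrt() * z_prev + (
            (1 - a_t).sqrt() - (a_t * (1 - a_prev) / a_prev).sqrt()) * eps
    return z_t
```

The entry above supplies the missing theory, when such a fixed point exists, and optimizes a loss so that this iteration actually converges; the plain loop here is the naive version that its refinement improves on.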
In this work, we seek to better understand the computational basis for this capability by evaluating a broad range of optical flow models and a neuroscience-inspired motion energy model for zero-shot figure-ground segmentation of random dot stimuli. Specifically, we use the extensively validated motion energy model proposed by Simoncelli and Heeger in 1998, which is fitted to neural recordings in cortex area MT. We find that a cross-section of 40 deep optical flow models trained on different datasets struggles to estimate motion patterns in random dot videos, resulting in poor figure-ground segmentation performance. Conversely, the neuroscience-inspired model significantly outperforms all optical flow models on this task. For a direct comparison to human perception, we conduct a psychophysical study using a shape identification task as a proxy to measure human segmentation performance. All state-of-the-art optical flow models fall short of human performance; only the motion energy model matches human capability. This neuroscience-inspired model successfully addresses the lack of human-like zero-shot generalization to random dot stimuli in current computer vision models, and thus establishes a compelling link between the Gestalt psychology of human object perception and cortical motion processing in the brain.\n\nCode, models and datasets are available at https://github.com/mtangemann/motion_energy_segmentation", "pdf": "https://openreview.net/pdf/4d11eb687f80d02d743969b7d6ad5521a75c8c97.pdf"} {"title": "Zeroth-Order Sampling Methods for Non-Log-Concave Distributions: Alleviating Metastability by Denoising Diffusion", "url": "https://openreview.net/forum?id=X3Aljulsw5", "detail_url": "https://openreview.net/forum?id=X3Aljulsw5", "authors": "Ye He,Kevin Rojas,Molei Tao", "tags": "NIPS 2024,Poster", "abstract": "This paper considers the problem of sampling from a non-log-concave distribution, based on queries of its unnormalized density. It first describes a framework, Denoising Diffusion Monte Carlo (DDMC), based on the simulation of a denoising diffusion process with its score function approximated by a generic Monte Carlo estimator. DDMC is an oracle-based meta-algorithm, where its oracle is the assumed access to samples that generate a Monte Carlo score estimator. Then we provide an implementation of this oracle, based on rejection sampling, and this turns DDMC into a true algorithm, termed Zeroth-Order Diffusion Monte Carlo (ZOD-MC). We provide convergence analyses by first constructing a general framework, i.e., a performance guarantee for DDMC, without assuming the target distribution to be log-concave or satisfying any isoperimetric inequality. Then we prove that ZOD-MC admits an inverse polynomial dependence on the desired sampling accuracy, albeit still suffering from the curse of dimensionality. Consequently, for low dimensional distributions, ZOD-MC is a very efficient sampler, with performance exceeding that of the latest samplers, including the also-denoising-diffusion-based RDMC and RSDMC.
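The score oracle that DDMC assumes can be built from unnormalized density queries alone: for the noised density $p_t = p * \mathcal{N}(0, \sigma_t^2 I)$, the score is $(\mathbb{E}[x_0 \mid x] - x)/\sigma_t^2$, and the posterior mean admits a self-normalized importance-sampling estimate with proposals from $\mathcal{N}(x, \sigma_t^2 I)$. Below is a minimal sketch of that estimator; it is my own simplification, and ZOD-MC's actual oracle is based on rejection sampling instead.

```python
import numpy as np

def zo_score(x, neg_log_density, sigma, n_samples=4096, rng=None):
    # Zeroth-order score estimate for the noised density p_t = p * N(0, sigma^2 I):
    # score(x) = (E[x0 | x] - x) / sigma^2, where the posterior mean is estimated
    # by self-normalized importance sampling using only unnormalized density queries.
    rng = np.random.default_rng() if rng is None else rng
    x0 = x + sigma * rng.standard_normal((n_samples, x.shape[-1]))
    logw = -neg_log_density(x0)          # proposal N(x, sigma^2 I) cancels in the weights
    w = np.exp(logw - logw.max())
    posterior_mean = (w[:, None] * x0).sum(0) / w.sum()
    return (posterior_mean - x) / sigma**2

# Example: a bimodal unnormalized potential U(x) = min(|x - m1|^2, |x - m2|^2) / 2.
m1, m2 = np.array([-3.0, 0.0]), np.array([3.0, 0.0])
U = lambda z: np.minimum(((z - m1) ** 2).sum(-1), ((z - m2) ** 2).sum(-1)) / 2
print(zo_score(np.array([0.5, 0.0]), U, sigma=1.0))
```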
Lastly, we experimentally demonstrate the insensitivity of ZOD-MC to increasingly high barriers between modes or to discontinuities in non-convex potentials.", "pdf": "https://openreview.net/pdf/ffb1dbd5c419841e52a25c55c49853aaa3853ae2.pdf"} {"title": "Graph Neural Networks Need Cluster-Normalize-Activate Modules", "url": "https://openreview.net/forum?id=faj2EBhdHC", "detail_url": "https://openreview.net/forum?id=faj2EBhdHC", "authors": "Arseny Skryagin,Felix Divo,Mohammad Amin Ali,Devendra Singh Dhami,Kristian Kersting", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) are non-Euclidean deep learning models for graph-structured data. Despite their successful and diverse applications, oversmoothing prohibits deep architectures due to node features converging to a single fixed point. This severely limits their potential to solve complex tasks. To counteract this tendency, we propose a plug-and-play module consisting of three steps: Cluster\u2192Normalize\u2192Activate (CNA). By applying CNA modules, GNNs search for and form super nodes in each layer, which are normalized and activated individually. We demonstrate in node classification and property prediction tasks that CNA significantly improves the accuracy over the state-of-the-art. Particularly, CNA reaches 94.18% and 95.75% accuracy on Cora and CiteSeer, respectively. It also benefits GNNs in regression tasks, reducing the mean squared error compared to all baselines. At the same time, GNNs with CNA require substantially fewer learnable parameters than competing architectures.", "pdf": "https://openreview.net/pdf/055e0100522ec18ba8cc265d0ac6611427eefa07.pdf"} {"title": "State Space Models on Temporal Graphs: A First-Principles Study", "url": "https://openreview.net/forum?id=UaJErAOssN", "detail_url": "https://openreview.net/forum?id=UaJErAOssN", "authors": "Jintang Li,Ruofan Wu,Xinzhou Jin,Boqun Ma,Liang Chen,Zibin Zheng", "tags": "NIPS 2024,Poster", "abstract": "Over the past few years, research on deep graph learning has shifted from static graphs to temporal graphs in response to real-world complex systems that exhibit dynamic behaviors. In practice, temporal graphs are formalized as an ordered sequence of static graph snapshots observed at discrete time points. Sequence models such as RNNs or Transformers have long been the predominant backbone networks for modeling such temporal graphs. Yet, despite the promising results, RNNs struggle with long-range dependencies, while transformers are burdened by quadratic computational complexity. Recently, state space models (SSMs), which are framed as discretized representations of an underlying continuous-time linear dynamical system, have garnered substantial attention and achieved breakthrough advancements in independent sequence modeling. In this work, we undertake a principled investigation that extends SSM theory to temporal graphs by integrating structural information into the online approximation objective via the adoption of a Laplacian regularization term. The emergent continuous-time system introduces novel algorithmic challenges, thereby necessitating our development of GraphSSM, a graph state space model for modeling the dynamics of temporal graphs.
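For readers new to SSMs, the discretized linear system underneath such models is a plain linear recurrence; the sketch below shows the generic scan only, without the graph structure and Laplacian regularization that GraphSSM adds.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    # Discretized linear state space model: x_k = A x_{k-1} + B u_k, y_k = C x_k.
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B @ u_k
        ys.append(C @ x)
    return np.stack(ys)

rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)                      # stable state transition
B = rng.standard_normal((4, 2))          # input projection
C = rng.standard_normal((1, 4))          # readout
y = ssm_scan(A, B, C, rng.standard_normal((16, 2)))
print(y.shape)  # (16, 1): one output per time step
```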
Extensive experimental results demonstrate the effectiveness of our GraphSSM framework across various temporal graph benchmarks.", "pdf": "https://openreview.net/pdf/d852dc04adec76511c1403354f79e5cdd589d7a2.pdf"} {"title": "ReMAP: Neural Model Reprogramming with Network Inversion and Retrieval-Augmented Mapping for Adaptive Motion Forecasting", "url": "https://openreview.net/forum?id=dVqZ0a7LdP", "detail_url": "https://openreview.net/forum?id=dVqZ0a7LdP", "authors": "Sharmita Dey,Sarath Ravindran Nair", "tags": "NIPS 2024,Poster", "abstract": "Mobility impairment caused by limb loss, aging, stroke, and other movement deficiencies is a significant challenge faced by millions of individuals worldwide. Advanced assistive technologies, such as prostheses and orthoses, have the potential to greatly improve the quality of life for such individuals. A critical component in the design of these technologies is the accurate forecasting of reference joint motion for impaired limbs, which is hindered by the scarcity of joint locomotion data available for these patients. To address this, we propose ReMAP, a novel model repurposing strategy that leverages deep learning's reprogramming property, incorporating network inversion principles and retrieval-augmented mapping. Our approach adapts models originally designed for able-bodied individuals to forecast joint motion in limb-impaired patients without altering model parameters. We demonstrate the efficacy of ReMAP through extensive empirical studies on data from patients with below-knee amputations, showcasing significant improvements over traditional transfer learning and fine-tuning methods. These findings have significant implications for advancing assistive technology and mobility for patients with amputations, stroke, or aging.", "pdf": "https://openreview.net/pdf/c6ef6151a85bf914025a4754464120930730c582.pdf"} {"title": "Banded Square Root Matrix Factorization for Differentially Private Model Training", "url": "https://openreview.net/forum?id=KSyTvgoSrX", "detail_url": "https://openreview.net/forum?id=KSyTvgoSrX", "authors": "Kalinin Nikita,Christoph H. Lampert", "tags": "NIPS 2024,Poster", "abstract": "Current state-of-the-art methods for differentially private model training are based on matrix factorization techniques. However, these methods suffer from high computational overhead because they require numerically solving a demanding optimization problem to determine an approximately optimal factorization prior to the actual model training. In this work, we present a new matrix factorization approach, BSR, which overcomes this computational bottleneck. By exploiting properties of the standard matrix square root, BSR makes it possible to handle even large-scale problems efficiently. For the key scenario of stochastic gradient descent with momentum and weight decay, we even derive analytical expressions for BSR that render the computational overhead negligible. We prove bounds on the approximation quality that hold both in the centralized and in the federated learning setting.
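A concrete special case helps here. For the prefix-sum workload matrix $A$ (the lower-triangular all-ones matrix), the matrix square root is lower-triangular Toeplitz with the power-series coefficients of $(1-z)^{-1/2}$, and a banded variant simply truncates to the first few diagonals. The sketch below illustrates this standard fact and the banding step; it is my illustration, not the paper's full BSR procedure.

```python
import numpy as np

def sqrt_prefix_sum_matrix(n, band=None):
    # Lower-triangular Toeplitz square root of the prefix-sum matrix A (all-ones
    # lower triangle). Its first column holds the power-series coefficients of
    # (1 - z)^{-1/2}: c_0 = 1, c_k = c_{k-1} * (2k - 1) / (2k).
    c = np.ones(n)
    for k in range(1, n):
        c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
    R = np.zeros((n, n))
    for i in range(n):
        R[i, : i + 1] = c[: i + 1][::-1]
    if band is not None:
        # Banding keeps only `band` diagonals; R @ R then only approximates A,
        # trading exactness for a cheaper, more local factor.
        for i in range(n):
            R[i, : max(0, i - band + 1)] = 0.0
    return R

n = 8
A = np.tril(np.ones((n, n)))
R = sqrt_prefix_sum_matrix(n)
print(np.allclose(R @ R, A))   # True: the unbanded R is an exact square root
```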
Our numerical experiments demonstrate that models trained using BSR perform on par with the best existing methods, while completely avoiding their computational overhead.", "pdf": "https://openreview.net/pdf/7d1f2391d3a70355367dab43fedc74785be86bdc.pdf"} {"title": "4Diffusion: Multi-view Video Diffusion Model for 4D Generation", "url": "https://openreview.net/forum?id=SFk7AMpyhx", "detail_url": "https://openreview.net/forum?id=SFk7AMpyhx", "authors": "Haiyu Zhang,Xinyuan Chen,Yaohui Wang,Xihui Liu,Yunhong Wang,Yu Qiao", "tags": "NIPS 2024,Poster", "abstract": "Current 4D generation methods have achieved noteworthy efficacy with the aid of advanced diffusion generative models. However, these methods lack multi-view spatial-temporal modeling and encounter challenges in integrating diverse prior knowledge from multiple diffusion models, resulting in inconsistent temporal appearance and flickers. In this paper, we propose a novel 4D generation pipeline, namely $\\textbf{4Diffusion}$, aimed at generating spatial-temporally consistent 4D content from a monocular video. We first design a unified diffusion model tailored for multi-view video generation by incorporating a learnable motion module into a frozen 3D-aware diffusion model to capture multi-view spatial-temporal correlations. After training on a curated dataset, our diffusion model acquires reasonable temporal consistency and inherently preserves the generalizability and spatial consistency of the 3D-aware diffusion model. Subsequently, we propose a 4D-aware Score Distillation Sampling loss, which is based on our multi-view video diffusion model, to optimize a 4D representation parameterized by a dynamic NeRF. This aims to eliminate discrepancies arising from multiple diffusion models, allowing us to generate spatial-temporally consistent 4D content. Moreover, we devise an anchor loss to enhance the appearance details and facilitate the learning of dynamic NeRF. Extensive qualitative and quantitative experiments demonstrate that our method achieves superior performance compared to previous methods.", "pdf": "https://openreview.net/pdf/c82cad9ac993b7e7299eb3787716e0166d8f972c.pdf"} {"title": "Hierarchical Uncertainty Exploration via Feedforward Posterior Trees", "url": "https://openreview.net/forum?id=UddVRqTrjt", "detail_url": "https://openreview.net/forum?id=UddVRqTrjt", "authors": "Elias Nehme,Rotem Mulayoff,Tomer Michaeli", "tags": "NIPS 2024,Poster", "abstract": "When solving ill-posed inverse problems, one often desires to explore the space of potential solutions rather than be presented with a single plausible reconstruction. Valuable insights into these feasible solutions and their associated probabilities are embedded in the posterior distribution. However, when confronted with data of high dimensionality (such as images), visualizing this distribution becomes a formidable challenge, necessitating the application of effective summarization techniques before user examination. In this work, we introduce a new approach for visualizing posteriors across multiple levels of granularity using *tree*-valued predictions. Our method predicts a tree-valued hierarchical summarization of the posterior distribution for any input measurement, in a single forward pass of a neural network. We showcase the efficacy of our approach across diverse datasets and image restoration challenges, highlighting its prowess in uncertainty quantification and visualization.
Our findings reveal that our method performs comparably to a baseline that hierarchically clusters samples from a diffusion-based posterior sampler, yet achieves this with orders of magnitude greater speed. Code and examples are available at our [webpage](https://eliasnehme.github.io/PosteriorTrees/).", "pdf": "https://openreview.net/pdf/6d1561688c12bec93feb26dc5c49bbb2f3e24334.pdf"} {"title": "PaDeLLM-NER: Parallel Decoding in Large Language Models for Named Entity Recognition", "url": "https://openreview.net/forum?id=vjw4TIf8Bo", "detail_url": "https://openreview.net/forum?id=vjw4TIf8Bo", "authors": "Jinghui Lu,Yanjie Wang,Ziwei Yang,Xuejing Liu,Brian Mac Namee,Can Huang", "tags": "NIPS 2024,Poster", "abstract": "In this study, we aim to reduce generation latency for Named Entity Recognition (NER) with Large Language Models (LLMs). The main cause of high latency in LLMs is the sequential decoding process, which autoregressively generates all labels and mentions for NER, significantly increasing the sequence length. To this end, we introduce Parallel Decoding in LLMs for NER (PaDeLLM-NER), an approach that integrates seamlessly into existing generative model frameworks without necessitating additional modules or architectural modifications. PaDeLLM-NER allows for the simultaneous decoding of all mentions, thereby reducing generation latency. Experiments reveal that PaDeLLM-NER significantly increases inference speed, decoding 1.76 to 10.22 times faster than the autoregressive approach for both English and Chinese. Simultaneously, it maintains the quality of predictions, as evidenced by performance that is on par with the state-of-the-art across various datasets. All resources are available at https://github.com/GeorgeLuImmortal/PaDeLLM_NER.", "pdf": "https://openreview.net/pdf/d41c719a3d75bbd4f587ed89d649f8de4444d47f.pdf"} {"title": "Out-Of-Distribution Detection with Diversification (Provably)", "url": "https://openreview.net/forum?id=C1hiRbzEH9", "detail_url": "https://openreview.net/forum?id=C1hiRbzEH9", "authors": "Haiyun Yao,Zongbo Han,Huazhu Fu,Xi Peng,Qinghua Hu,Changqing Zhang", "tags": "NIPS 2024,Poster", "abstract": "Out-of-distribution (OOD) detection is crucial for ensuring reliable deployment of machine learning models. Recent advancements focus on utilizing easily accessible auxiliary outliers (e.g., data from the web or other datasets) in training. However, we experimentally reveal that these methods still struggle to generalize their detection capabilities to unknown OOD data, due to the limited diversity of the auxiliary outliers collected. Therefore, we thoroughly examine this problem from the generalization perspective and demonstrate that a more diverse set of auxiliary outliers is essential for enhancing the detection capabilities. However, in practice, it is difficult and costly to collect sufficiently diverse auxiliary outlier data. Therefore, we propose a simple yet practical approach with a theoretical guarantee, termed Diversity-induced Mixup for OOD detection (diverseMix), which enhances the diversity of the auxiliary outlier set for training in an efficient way.
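The mixup primitive behind such diversity-inducing schemes is a one-liner: convexly combine random pairs of auxiliary outliers with Beta-distributed weights. A generic sketch follows (standard mixup; diverseMix's exact weighting scheme may differ).

```python
import numpy as np

def outlier_mixup(aux_outliers, batch_size, alpha=1.0, rng=None):
    # Enlarge the effective diversity of an auxiliary outlier set by convexly
    # combining random pairs: x_mix = lam * x_i + (1 - lam) * x_j, lam ~ Beta(alpha, alpha).
    rng = np.random.default_rng() if rng is None else rng
    i = rng.integers(len(aux_outliers), size=batch_size)
    j = rng.integers(len(aux_outliers), size=batch_size)
    lam = rng.beta(alpha, alpha, size=(batch_size, 1))
    return lam * aux_outliers[i] + (1 - lam) * aux_outliers[j]

aux = np.random.randn(1000, 32)                # stand-in auxiliary outlier features
print(outlier_mixup(aux, batch_size=8).shape)  # (8, 32) mixed outliers
```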
Extensive experiments show that diverseMix achieves superior performance on commonly used and recent challenging large-scale benchmarks, which further confirms the importance of the diversity of auxiliary outliers.", "pdf": "https://openreview.net/pdf/a41a62e5dc33a8d7bb4f0ac390f1bb5df07f1189.pdf"} {"title": "Drones Help Drones: A Collaborative Framework for Multi-Drone Object Trajectory Prediction and Beyond", "url": "https://openreview.net/forum?id=20QgErW5zH", "detail_url": "https://openreview.net/forum?id=20QgErW5zH", "authors": "Zhechao Wang,Peirui Cheng,Minxing Chen,Pengju Tian,Zhirui Wang,Xinming Li,Xue Yang,Xian Sun", "tags": "NIPS 2024,Poster", "abstract": "Collaborative trajectory prediction can comprehensively forecast the future motion of objects through multi-view complementary information. However, it encounters two main challenges in multi-drone collaboration settings. The expansive aerial observations make it difficult to generate precise Bird's Eye View (BEV) representations. Besides, excessive interactions cannot meet real-time prediction requirements within the constrained drone-based communication bandwidth. To address these problems, we propose a novel framework named \"Drones Help Drones\" (DHD). Firstly, we incorporate the ground priors provided by the drone's inclined observation to estimate the distance between objects and drones, leading to more precise BEV generation. Secondly, we design a selective mechanism based on the local feature discrepancy to prioritize the critical information contributing to prediction tasks during inter-drone interactions. Additionally, we create the first dataset for multi-drone collaborative prediction, named \"Air-Co-Pred\", and conduct quantitative and qualitative experiments to validate the effectiveness of our DHD framework. The results demonstrate that compared to state-of-the-art approaches, DHD reduces position deviation in BEV representations by over 20\\% and requires only a quarter of the transmission ratio for interactions while achieving comparable prediction performance. Moreover, DHD also shows promising generalization to the collaborative 3D object detection in CoPerception-UAVs.", "pdf": "https://openreview.net/pdf/8f21eeda79a3843e1c56da440fe12db2b4a36f1e.pdf"} {"title": "A Simple and Optimal Approach for Universal Online Learning with Gradient Variations", "url": "https://openreview.net/forum?id=yO5DVyCHZR", "detail_url": "https://openreview.net/forum?id=yO5DVyCHZR", "authors": "Yu-Hu Yan,Peng Zhao,Zhi-Hua Zhou", "tags": "NIPS 2024,Poster", "abstract": "We investigate the problem of universal online learning with gradient-variation regret. Universal online learning aims to achieve regret guarantees without prior knowledge of the curvature of the online functions. Moreover, we study the problem-dependent gradient-variation regret as it plays a crucial role in bridging stochastic and adversarial optimization as well as game theory. In this work, we design a universal approach with the *optimal* gradient-variation regret simultaneously for strongly convex, exp-concave, and convex functions, thus addressing an open problem highlighted by [Yan et al. [2023]](https://openreview.net/forum?id=AA1xrgAP5z). Our approach is *simple* since it is algorithmically efficient-to-implement with a two-layer online ensemble structure and only $1$ gradient query per round, and theoretically easy-to-analyze with a novel and alternative analysis to the gradient-variation regret.
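As context, gradient-variation regret bounds are classically obtained by optimistic methods that use the previous gradient as a hint for the next one. The sketch below is textbook optimistic mirror descent on the probability simplex with the negative-entropy mirror map, shown only for intuition; it is not the paper's two-layer ensemble.

```python
import numpy as np

def optimistic_md_simplex(grads, eta=0.1):
    # Optimistic mirror descent on the probability simplex (negative-entropy
    # mirror map), using the previous gradient as the optimistic hint m_t:
    #   play   x_t prop. to y_{t-1} * exp(-eta * m_t)
    #   update y_t prop. to y_{t-1} * exp(-eta * g_t)
    d = len(grads[0])
    y = np.ones(d) / d
    m = np.zeros(d)
    plays = []
    for g in grads:
        x = y * np.exp(-eta * m); x /= x.sum()
        plays.append(x)
        y = y * np.exp(-eta * g); y /= y.sum()
        m = g                                  # hint for the next round
    return np.array(plays)

rng = np.random.default_rng(0)
plays = optimistic_md_simplex([rng.standard_normal(5) for _ in range(100)])
print(plays[-1])                               # iterates stay on the simplex
```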
Concretely, previous works on gradient variations require controlling the algorithmic stability, which is challenging and leads to sub-optimal regret and less efficient algorithm design. Our analysis overcomes this issue by using a negative Bregman divergence term arising from linearization and a useful smoothness property.", "pdf": "https://openreview.net/pdf/8b9fe3addc587bb135fafb6f01d9025cca31a54c.pdf"} {"title": "Aligning Individual and Collective Objectives in Multi-Agent Cooperation", "url": "https://openreview.net/forum?id=2YSHEBRRol", "detail_url": "https://openreview.net/forum?id=2YSHEBRRol", "authors": "Yang Li,Wenhao Zhang,Jianhong Wang,Shao Zhang,Yali Du,Ying Wen,Wei Pan", "tags": "NIPS 2024,Poster", "abstract": "Among the research topics in multi-agent learning, mixed-motive cooperation is one of the most prominent challenges, primarily due to the mismatch between individual and collective goals. The cutting-edge research is focused on incorporating domain knowledge into rewards and introducing additional mechanisms to incentivize cooperation. However, these approaches often face shortcomings such as the effort required for manual design and the absence of theoretical grounding. To close this gap, we model the mixed-motive game as a differentiable game for ease of illuminating the learning dynamics towards cooperation. In more detail, we introduce a novel optimization method named \\textbf{\\textit{A}}ltruistic \\textbf{\\textit{G}}radient \\textbf{\\textit{A}}djustment (\\textbf{\\textit{AgA}}) that employs gradient adjustments to progressively align individual and collective objectives. Furthermore, we theoretically prove that AgA effectively attracts gradients to stable fixed points of the collective objective while considering individual interests, and we validate these claims with empirical evidence. We evaluate the effectiveness of our algorithm AgA through benchmark environments for testing mixed-motive collaboration with small-scale agents such as the two-player public good game and the sequential social dilemma games, Cleanup and Harvest, as well as our self-developed large-scale environment in the game StarCraft II.", "pdf": "https://openreview.net/pdf/6f11dcfe0c49634b10f7b0ba2f23b10acced59d3.pdf"} {"title": "Satformer: Accurate and Robust Traffic Data Estimation for Satellite Networks", "url": "https://openreview.net/forum?id=Vw1V9AgPXW", "detail_url": "https://openreview.net/forum?id=Vw1V9AgPXW", "authors": "Liang Qin,Xiyuan Liu,Wenting Wei,Liang Chengbin,Huaxi Gu", "tags": "NIPS 2024,Poster", "abstract": "The operations and maintenance of satellite networks heavily depend on traffic measurements. Due to the large-scale and highly dynamic nature of satellite networks, global measurement encounters significant challenges in terms of complexity and overhead. Estimating global network traffic data from partial traffic measurements is a promising solution. However, the majority of current estimation methods concentrate on low-rank linear decomposition, which is unable to produce accurate estimates, as it cannot capture the intricate nonlinear spatio-temporal relationships found in large-scale, highly dynamic traffic data. This paper proposes Satformer, an accurate and robust method for estimating traffic data in satellite networks. In Satformer, we innovatively incorporate an adaptive sparse spatio-temporal attention mechanism.
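One standard way to realize sparse attention is to keep only each query's top-k scores before the softmax. The sketch below shows that generic variant; it is my illustration, and Satformer's adaptive mechanism is more elaborate, as the abstract goes on to describe.

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k):
    # Scaled dot-product attention in which each query attends only to its k
    # highest-scoring keys (assumes k <= number of keys); all other scores are
    # masked out before the softmax.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    kth = np.sort(scores, axis=-1)[:, -k][:, None]   # k-th largest score per query
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((5, 16)), rng.standard_normal((20, 16)), rng.standard_normal((20, 16))
print(topk_sparse_attention(Q, K, V, k=4).shape)     # (5, 16)
```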
In the mechanism, more attention is paid to specific local regions of the input tensor to improve the model's sensitivity to details and patterns. This method enhances its capability to capture nonlinear spatio-temporal relationships. Experiments on small-, medium-, and large-scale satellite network datasets demonstrate that Satformer notably outperforms mathematical and neural baseline methods. It provides substantial improvements in reducing errors and maintaining robustness, especially for larger networks. The approach shows promise for deployment in actual systems.", "pdf": "https://openreview.net/pdf/15cdcfdd2a47cca1c4f2b88e6b7567424955f481.pdf"} {"title": "CogVLM: Visual Expert for Pretrained Language Models", "url": "https://openreview.net/forum?id=6dYBP3BIwx", "detail_url": "https://openreview.net/forum?id=6dYBP3BIwx", "authors": "Weihan Wang,Qingsong Lv,Wenmeng Yu,Wenyi Hong,Ji Qi,Yan Wang,Junhui Ji,Zhuoyi Yang,Lei Zhao,Song XiXuan,Jiazheng Xu,Keqin Chen,Bin Xu,Juanzi Li,Yuxiao Dong,Ming Ding,Jie Tang", "tags": "NIPS 2024,Poster", "abstract": "We introduce CogVLM, a powerful open-source visual language foundation model. Different from the popular \\emph{shallow alignment} method, which maps image features into the input space of the language model, CogVLM bridges the gap between the frozen pretrained language model and image encoder by a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables a deep fusion of vision language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 17 classic cross-modal benchmarks, including 1) image captioning datasets: NoCaps, Flickr30k, 2) VQA datasets: OKVQA, TextVQA, OCRVQA, ScienceQA, 3) LVLM benchmarks: MM-Vet, MMBench, SEED-Bench, LLaVABench, POPE, MMMU, MathVista, 4) visual grounding datasets: RefCOCO, RefCOCO+, RefCOCOg, Visual7W. Codes and checkpoints are available at Github.", "pdf": "https://openreview.net/pdf/4277264715ebfb964c30e3e67d9923c50862b8a1.pdf"} {"title": "Energy-based Hopfield Boosting for Out-of-Distribution Detection", "url": "https://openreview.net/forum?id=VLQYtVMTYz", "detail_url": "https://openreview.net/forum?id=VLQYtVMTYz", "authors": "Claus Hofmann,Simon Lucas Schmid,Bernhard Lehner,Daniel Klotz,Sepp Hochreiter", "tags": "NIPS 2024,Poster", "abstract": "Out-of-distribution (OOD) detection is critical when deploying machine learning models in the real world. Outlier exposure methods, which incorporate auxiliary outlier data in the training process, can drastically improve OOD detection performance compared to approaches without advanced training strategies. We introduce Hopfield Boosting, a boosting approach, which leverages modern Hopfield energy to sharpen the decision boundary between the in-distribution and OOD data. Hopfield Boosting encourages the model to focus on hard-to-distinguish auxiliary outlier examples that lie close to the decision boundary between in-distribution and auxiliary outlier data.
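The energy in question is the modern Hopfield energy, $E(\xi) = -\beta^{-1} \log \sum_i \exp(\beta x_i^\top \xi) + \frac{1}{2} \lVert \xi \rVert^2$; comparing energies with respect to in-distribution and outlier pattern sets already yields a simple OOD score. Below is a minimal sketch of that scoring idea only; Hopfield Boosting's actual training objective is more involved.

```python
import numpy as np

def hopfield_energy(xi, patterns, beta=1.0):
    # Modern Hopfield energy of query xi w.r.t. stored patterns (rows of `patterns`):
    # E(xi) = -(1/beta) * logsumexp(beta * patterns @ xi) + 0.5 * ||xi||^2.
    s = beta * patterns @ xi
    lse = s.max() + np.log(np.exp(s - s.max()).sum())
    return -lse / beta + 0.5 * xi @ xi

def ood_score(xi, in_patterns, out_patterns, beta=1.0):
    # Negative when xi sits closer (in energy) to the in-distribution patterns.
    return hopfield_energy(xi, in_patterns, beta) - hopfield_energy(xi, out_patterns, beta)

rng = np.random.default_rng(0)
in_x = rng.standard_normal((100, 8)) + 2.0    # stand-in in-distribution features
out_x = rng.standard_normal((100, 8)) - 2.0   # stand-in auxiliary outliers
print(ood_score(in_x[0], in_x, out_x))        # negative: scored as in-distribution
```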
Our method achieves a new state-of-the-art in OOD detection with outlier exposure, improving the FPR95 from 2.28 to 0.92 on CIFAR-10, from 11.76 to 7.94 on CIFAR-100, and from 50.74 to 36.60 on ImageNet-1K.", "pdf": "https://openreview.net/pdf/7d5f8ca6c1597dde14aefb07da3cf79bd17dd091.pdf"} {"title": "CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models", "url": "https://openreview.net/forum?id=rF1YRtZfoJ", "detail_url": "https://openreview.net/forum?id=rF1YRtZfoJ", "authors": "Saurav Jha,Dong Gong,Lina Yao", "tags": "NIPS 2024,Poster", "abstract": "Continual learning (CL) aims to help deep neural networks learn new knowledge while retaining what has been learned. Owing to their powerful generalizability, pre-trained vision-language models such as Contrastive Language-Image Pre-training (CLIP) have lately gained traction as practical CL candidates. However, the domain mismatch between the pre-training and the downstream CL tasks calls for finetuning of the CLIP on the latter. The deterministic nature of the existing finetuning methods makes them overlook the many possible interactions across the modalities and renders them unsafe for high-risk tasks requiring reliable uncertainty estimation. To address these, our work proposes **C**ontinual **L**e**A**rning with **P**robabilistic finetuning (CLAP) - a probabilistic modeling framework over visual-guided text features per task, thus providing more calibrated CL finetuning. Unlike recent data-hungry anti-forgetting CL techniques, CLAP alleviates forgetting by exploiting the rich pre-trained knowledge of CLIP for weight initialization and distribution regularization of task-specific parameters. Working in concert with the diverse range of existing prompting methods, CLAP can surpass the predominant deterministic finetuning approaches for CL with CLIP. We conclude with out-of-the-box applications of the superior uncertainty estimation abilities of CLAP, including novel data detection and exemplar selection within the existing CL setups. Our code is available at https://github.com/srvCodes/clap4clip.", "pdf": "https://openreview.net/pdf/649fc2bc1d6ab7ff1bb07d921e2180c36c2ccf3b.pdf"} {"title": "Neuro-Symbolic Data Generation for Math Reasoning", "url": "https://openreview.net/forum?id=CIcMZGLyZW", "detail_url": "https://openreview.net/forum?id=CIcMZGLyZW", "authors": "Zenan Li,Zhi Zhou,Yuan Yao,Xian Zhang,Yu-Feng Li,Chun Cao,Fan Yang,Xiaoxing Ma", "tags": "NIPS 2024,Poster", "abstract": "A critical question about Large Language Models (LLMs) is whether their apparent deficiency in mathematical reasoning is inherent, or merely a result of insufficient exposure to high-quality mathematical data. To explore this, we developed an automated method for generating high-quality, supervised mathematical datasets. The method carefully mutates existing math problems, ensuring both diversity and validity of the newly generated problems.
This is achieved by a neuro-symbolic data generation framework combining the intuitive informalization strengths of LLMs and the precise symbolic reasoning of math solvers, along with projected Markov chain Monte Carlo sampling in the highly-irregular symbolic space.\nEmpirical experiments demonstrate the high quality of data generated by the proposed method, and that the LLMs, specifically LLaMA-2 and Mistral, when realigned with the generated data, surpass their state-of-the-art counterparts.", "pdf": "https://openreview.net/pdf/f8c52bbfb2b6029419e776355a1714eb3cb86d80.pdf"} {"title": "Large Language Models Play StarCraft II:Benchmarks and A Chain of Summarization Approach", "url": "https://openreview.net/forum?id=kEPpD7yETM", "detail_url": "https://openreview.net/forum?id=kEPpD7yETM", "authors": "Weiyu Ma,Qirui Mi,Yongcheng Zeng,Xue Yan,Runji Lin,Yuqiao Wu,Jun Wang,Haifeng Zhang", "tags": "NIPS 2024,Poster", "abstract": "With the continued advancement of Large Language Model (LLM) agents in reasoning, planning, and decision-making, benchmarks have become crucial in evaluating these skills. However, there is a notable gap in benchmarks for real-time strategic decision-making. StarCraft II (SC2), with its complex and dynamic nature, serves as an ideal setting for such evaluations. To this end, we have developed TextStarCraft II, a specialized environment for assessing LLMs in real-time strategic scenarios within SC2. Addressing the limitations of traditional Chain of Thought (CoT) methods, we introduce the Chain of Summarization (CoS) method, enhancing LLMs' capabilities in rapid and effective decision-making. Our key experiments included:\n1. LLM Evaluation: Tested 10 LLMs in TextStarCraft II, most of them defeating the LV5 built-in AI, showcasing effective strategy skills.\n2. Commercial Model Knowledge: Evaluated four commercial models on SC2 knowledge; GPT-4 ranked highest by Grandmaster-level experts.\n3. Human-AI Matches: Experimental results showed that fine-tuned LLMs performed on par with Gold-level players in real-time matches, demonstrating comparable strategic abilities.\n\nAll code and data from this\nstudy have been made publicly available at https://github.com/histmeisah/Large-Language-Models-play-StarCraftII", "pdf": "https://openreview.net/pdf/f00682bc7e1756fbaf5b9deed1c49567ba4f89a8.pdf"} {"title": "On Sampling Strategies for Spectral Model Sharding", "url": "https://openreview.net/forum?id=PgTHgLUFi3", "detail_url": "https://openreview.net/forum?id=PgTHgLUFi3", "authors": "Denis Korzhenkov,Christos Louizos", "tags": "NIPS 2024,Poster", "abstract": "The problem of heterogeneous clients in federated learning has recently drawn a lot of attention. Spectral model sharding, i.e., partitioning the model parameters into low-rank matrices based on the singular value decomposition, has been one of the proposed solutions for more efficient on-device training in such settings. In this work we present two sampling strategies for such sharding, obtained as solutions to specific optimization problems. The first produces unbiased estimators of the original weights, while the second aims to minimize the squared approximation error. We discuss how both of these estimators can be incorporated in the federated learning loop and practical considerations that arise during local training.
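The unbiased estimator mentioned here can be sketched directly: write $W = \sum_i \sigma_i u_i v_i^\top$, sample rank-one components with probabilities $p_i$, and reweight each sampled term by $1/(k p_i)$ so the shard equals $W$ in expectation. This is a generic importance-sampling illustration, not the paper's derived optimal strategies.

```python
import numpy as np

def spectral_shard(W, k, rng=None):
    # Unbiased low-rank shard of W: sample k rank-one SVD components i.i.d. with
    # probability proportional to their singular values, reweighting by 1/(k p_i)
    # so that E[shard] = W.
    rng = np.random.default_rng() if rng is None else rng
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    p = s / s.sum()
    idx = rng.choice(len(s), size=k, p=p)
    shard = np.zeros_like(W)
    for i in idx:
        shard += (s[i] / (k * p[i])) * np.outer(U[:, i], Vt[i])
    return shard

W = np.random.randn(32, 16)
est = np.mean([spectral_shard(W, k=4) for _ in range(500)], axis=0)
print(np.abs(est - W).max())   # shrinks as the number of Monte Carlo draws grows
```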
Empirically, we demonstrate that both of these methods can lead to improved performance on various commonly used datasets.", "pdf": "https://openreview.net/pdf/28dd94aa51bf0cc9ab50d453c74452f4de1efc08.pdf"} {"title": "UMFC: Unsupervised Multi-Domain Feature Calibration for Vision-Language Models", "url": "https://openreview.net/forum?id=dHIKahbV6G", "detail_url": "https://openreview.net/forum?id=dHIKahbV6G", "authors": "Jiachen Liang,RuiBing Hou,Minyang Hu,Hong Chang,Shiguang Shan,Xilin Chen", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained vision-language models (e.g., CLIP) have shown powerful zero-shot transfer capabilities. But they still struggle with domain shifts and typically require labeled data to adapt to downstream tasks, which could be costly. In this work, we aim to leverage unlabeled data that naturally spans multiple domains to enhance the transferability of vision-language models. Under this unsupervised multi-domain setting, we have identified inherent model bias within CLIP, notably in its visual and text encoders. Specifically, we observe that CLIP\u2019s visual encoder tends to prioritize encoding domain over discriminative category information, while its text encoder exhibits a preference for domain-relevant classes. To mitigate this model bias, we propose a training-free and label-free feature calibration method, Unsupervised Multi-domain Feature Calibration (UMFC). UMFC estimates image-level biases from domain-specific features and text-level biases from the direction of domain transition. These biases are subsequently subtracted from original image and text features separately, to render them domain-invariant. We evaluate our method on multiple settings including transductive learning and test-time adaptation. Extensive experiments show that our method outperforms CLIP and performs on par with the state-of-the-arts that need additional annotations or optimization.\nOur code is available at https://github.com/GIT-LJc/UMFC.", "pdf": "https://openreview.net/pdf/65b7f071bdd9772434b0c0683e3f346701c80d6f.pdf"} {"title": "TARSS-Net: Temporal-Aware Radar Semantic Segmentation Network", "url": "https://openreview.net/forum?id=5AeLrXb9sQ", "detail_url": "https://openreview.net/forum?id=5AeLrXb9sQ", "authors": "Youcheng Zhang,Liwen Zhang,ZijunHu,Pengcheng Pi,Teng Li,Yuanpei Chen,Shi Peng,Zhe Ma", "tags": "NIPS 2024,Poster", "abstract": "Radar signal interpretation plays a crucial role in remote detection and ranging. As the advantages of neural network technology in signal processing have gradually become apparent, learning-based radar signal interpretation has become a research hot-spot and has made great progress. Since radar semantic segmentation (RSS) can provide more fine-grained target information, it has attracted particular attention in this field. However, the temporal information, which is an important clue for analyzing radar data, has not been exploited sufficiently in present RSS frameworks. In this work, we propose a novel temporal information learning paradigm, i.e., data-driven temporal information aggregation with learned target-history relations. Following this idea, a flexible learning module, called the Temporal Relation-Aware Module (TRAM), is carefully designed. TRAM contains two main blocks: i) an encoder for capturing the target-history temporal relations (TH-TRE) and ii) a learnable temporal relation attentive pooling (TRAP) for aggregating temporal information.
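Attentive pooling of the TRAP flavor reduces to scoring each time step with a learned vector, softmaxing the scores, and taking the weighted sum; a minimal sketch follows (generic attentive pooling, not TRAP's exact parameterization).

```python
import numpy as np

def attentive_pooling(H, w):
    # Learnable attentive pooling over time: score each of the T frame features
    # with a learned vector w, softmax the scores, and take the weighted sum.
    scores = H @ w                       # (T,) one score per time step
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a @ H                         # (D,) pooled representation

rng = np.random.default_rng(0)
H = rng.standard_normal((12, 64))        # T=12 temporal features, D=64 channels
w = rng.standard_normal(64)              # scoring vector (learned in practice)
print(attentive_pooling(H, w).shape)     # (64,)
```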
Based on TRAM, an end-to-end Temporal-Aware RSS Network (TARSS-Net) is presented, which achieves outstanding performance on both publicly available and our self-collected real-measured datasets. Code and supplementary materials are available at https://github.com/zlw9161/TARSS-Net.", "pdf": "https://openreview.net/pdf/0502eba299f9676de75cc94f9fa42403c40a195c.pdf"} {"title": "Optimizing over Multiple Distributions under Generalized Quasar-Convexity Condition", "url": "https://openreview.net/forum?id=lOV9kSX3Uo", "detail_url": "https://openreview.net/forum?id=lOV9kSX3Uo", "authors": "Shihong Ding,Long Yang,Luo Luo,Cong Fang", "tags": "NIPS 2024,Poster", "abstract": "We study a typical optimization model where the optimization variable is composed of multiple probability distributions. Though the model appears frequently in practice, such as for policy problems, it lacks specific analysis in the general setting. For this optimization problem, we propose a new structural condition/landscape description named generalized quasar-convexity (GQC) beyond the realms of convexity. In contrast to original quasar-convexity \\citep{hinder2020near}, GQC allows an individual quasar-convex parameter $\\gamma_i$ for each variable block $i$, where a smaller $\\gamma_i$ implies less block-convexity. To minimize the objective function, we consider a generalized oracle termed the internal function, which includes the standard gradient oracle as a special case. We provide optimistic mirror descent (OMD) for multiple distributions and prove that the algorithm can achieve an adaptive $\\tilde{\\mathcal{O}}((\\sum_{i=1}^d1/\\gamma_i)\\epsilon^{-1})$ iteration complexity to find an $\\epsilon$-suboptimal global solution without prior knowledge of the exact values of $\\gamma_i$ when the objective admits a ``polynomial-like'' structure. Notably, it achieves an iteration complexity that does not explicitly depend on the number of distributions and is strictly faster $(\\sum_{i=1}^d 1/\\gamma_i \\text{ v.s. } d\\max_{i\\in[1:d]} 1/\\gamma_i)$ than mirror descent methods. We also extend GQC to the minimax optimization problem, proposing the generalized quasar-convexity-concavity (GQCC) condition and a decentralized variant of OMD with regularization. Finally, we show applications of our algorithmic framework to the discounted Markov Decision Process problem and Markov games, bringing new insights into the landscape analysis of reinforcement learning.", "pdf": "https://openreview.net/pdf/7a8a0213db6bb1ad89cd679a12e0083e3643481d.pdf"} {"title": "LIVE: Learnable In-Context Vector for Visual Question Answering", "url": "https://openreview.net/forum?id=QhRemVrZbG", "detail_url": "https://openreview.net/forum?id=QhRemVrZbG", "authors": "Yingzhe Peng,chenduo hao,Xinting Hu,Jiawei Peng,Xin Geng,Xu Yang", "tags": "NIPS 2024,Poster", "abstract": "As language models continue to scale, Large Language Models (LLMs) have exhibited emerging capabilities in In-Context Learning (ICL), enabling them to solve language tasks by prefixing a few in-context demonstrations (ICDs) as context. Inspired by these advancements, researchers have extended these techniques to develop Large Multimodal Models (LMMs) with ICL capabilities. However, applying ICL usually faces two major challenges: 1) using more ICDs will largely increase the inference time and 2) the performance is sensitive to the selection of ICDs. These challenges are further exacerbated in LMMs due to the integration of multiple data types and the combinational complexity of multimodal ICDs.
Recently, to address these challenges, some NLP studies introduce non-learnable In-Context Vectors (ICVs), which extract useful task information from ICDs into a single vector and then insert it into the LLM to help solve the corresponding task. However, although useful in simple NLP tasks, these non-learnable methods fail to handle complex multimodal tasks like Visual Question Answering (VQA). In this study, we propose \\underline{\\textbf{L}}earnable \\underline{\\textbf{I}}n-Context \\underline{\\textbf{Ve}}ctor (LIVE) to distill essential task information from demonstrations, improving ICL performance in LMMs. Experiments show that LIVE can significantly reduce computational costs while enhancing accuracy in VQA tasks compared to traditional ICL and other non-learnable ICV methods.", "pdf": "https://openreview.net/pdf/e0f546246f6fd6b9687c62503ab4790cb13186fa.pdf"} {"title": "WizardArena: Post-training Large Language Models via Simulated Offline Chatbot Arena", "url": "https://openreview.net/forum?id=VHva3d836i", "detail_url": "https://openreview.net/forum?id=VHva3d836i", "authors": "Haipeng Luo,Qingfeng Sun,Can Xu,Pu Zhao,Qingwei Lin,Jian-Guang Lou,Shifeng Chen,Yansong Tang,Weizhu Chen", "tags": "NIPS 2024,Poster", "abstract": "Recent work demonstrates that post-training large language models with open-domain instruction-following data achieves colossal success. Simultaneously, human Chatbot Arena has emerged as one of the most reasonable benchmarks for model evaluation and developmental guidance. However, the processes of manually curating high-quality training data and utilizing online human evaluation platforms are both expensive and limited. To mitigate the manual and temporal costs associated with post-training, this paper introduces a Simulated Chatbot Arena named WizardArena, which is fully based on and powered by open-source LLMs. For the evaluation scenario, WizardArena can efficiently predict accurate performance rankings among different models based on an offline test set. For the training scenario, we simulate arena battles among various state-of-the-art models on a large scale of instruction data, subsequently leveraging the battle results to constantly enhance the target model through both supervised fine-tuning and reinforcement learning. Experimental results demonstrate that our WizardArena aligns closely with the online human arena rankings, and our models trained on offline extensive battle data exhibit significant performance improvements during SFT, DPO, and PPO stages.", "pdf": "https://openreview.net/pdf/07f64618b1a60b5709b4ab03039b668042fb64d7.pdf"} {"title": "How Do Large Language Models Acquire Factual Knowledge During Pretraining?", "url": "https://openreview.net/forum?id=TYdzj1EvBP", "detail_url": "https://openreview.net/forum?id=TYdzj1EvBP", "authors": "Hoyeon Chang,Jinho Park,Seonghyeon Ye,Sohee Yang,Youngkyung Seo,Du-Seong Chang,Minjoon Seo", "tags": "NIPS 2024,Poster", "abstract": "Despite the recent observation that large language models (LLMs) can store substantial factual knowledge, there is a limited understanding of the mechanisms of how they acquire factual knowledge through pretraining. This work addresses this gap by studying how LLMs acquire factual knowledge during pretraining. The findings reveal several important insights into the dynamics of factual knowledge acquisition during pretraining.
First, counterintuitively, we observe that pretraining on more data shows no significant improvement in the model's capability to acquire and maintain factual knowledge. Next, LLMs undergo forgetting of both the memorization and the generalization of factual knowledge, and LLMs trained with duplicated training data exhibit faster forgetting. Third, training LLMs with larger batch sizes can enhance the models' robustness to forgetting. Overall, our observations suggest that factual knowledge acquisition in LLM pretraining occurs by progressively increasing the probability of factual knowledge presented in the pretraining data at each step. However, this increase is diluted by subsequent forgetting. Based on this interpretation, we demonstrate that we can provide plausible explanations for recently observed behaviors of LLMs, such as the poor performance of LLMs on long-tail knowledge and the benefits of deduplicating the pretraining corpus.", "pdf": "https://openreview.net/pdf/47e43baf9dbae8ce6a81b19d07f6f6a5705ac9ba.pdf"} {"title": "Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments", "url": "https://openreview.net/forum?id=yhd2kHHNtB", "detail_url": "https://openreview.net/forum?id=yhd2kHHNtB", "authors": "Wen-Bo Du,Tian Qin,Tian-Zuo Wang,Zhi-Hua Zhou", "tags": "NIPS 2024,Poster", "abstract": "Machine learning (ML) has achieved remarkable success in prediction tasks. In many real-world scenarios, rather than solely predicting an outcome using an ML model, the crucial concern is how to make decisions to prevent the occurrence of undesired outcomes, known as the *avoiding undesired future (AUF)* problem. To this end, a new framework called *rehearsal learning* has been proposed recently, which works effectively in stationary environments by leveraging the influence relations among variables. In real tasks, however, the environments are usually non-stationary, where the influence relations may be *dynamic*, causing the existing method to fail at AUF. In this paper, we introduce a novel sequential methodology that effectively updates the estimates of dynamic influence relations, which are crucial for rehearsal learning to prevent undesired outcomes in non-stationary environments. Meanwhile, we take the cost of decision actions into account and provide a formulation of the AUF problem with minimal action cost under non-stationarity. We prove that in linear Gaussian cases, the problem can be transformed into the well-studied convex quadratically constrained quadratic program (QCQP). In this way, we establish the first polynomial-time rehearsal-based approach for addressing the AUF problem. Theoretical and experimental results validate the effectiveness and efficiency of our method under certain circumstances.", "pdf": "https://openreview.net/pdf/55d4e7f5ce2356d9c39fa5ab1bfe2753b2323f87.pdf"} {"title": "Learning to Discuss Strategically: A Case Study on One Night Ultimate Werewolf", "url": "https://openreview.net/forum?id=1f82rnwCbl", "detail_url": "https://openreview.net/forum?id=1f82rnwCbl", "authors": "Xuanfa Jin,Ziyan Wang,Yali Du,Meng Fang,Haifeng Zhang,Jun Wang", "tags": "NIPS 2024,Poster", "abstract": "Communication is a fundamental aspect of human society, facilitating the exchange of information and beliefs among people. Despite the advancements in large language models (LLMs), recent agents built with them often neglect control over discussion tactics, which are essential in communication scenarios and games.
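Returning briefly to the rehearsal-learning entry above: in the linear-Gaussian case its minimal-cost decision problem is a convex QCQP, which off-the-shelf solvers handle directly. Here is a toy sketch with cvxpy, where the dynamics matrices are hypothetical stand-ins rather than the paper's estimated influence relations.

```python
import cvxpy as cp
import numpy as np

# Toy AUF-style QCQP: choose the cheapest action a so that the predicted outcome
# mean y = M a + b stays inside an ellipsoidal "desired" region ||y - c||^2 <= r^2.
# M, b, c, r are hypothetical stand-ins for estimated influence relations.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))
b = np.array([4.0, -2.0, 1.0])
c = np.zeros(3)
r = 1.0

a = cp.Variable(4)
y = M @ a + b
prob = cp.Problem(cp.Minimize(cp.sum_squares(a)),        # minimal action cost
                  [cp.sum_squares(y - c) <= r**2])       # convex quadratic constraint
prob.solve()
print(prob.status, np.round(a.value, 3))
```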
As a variant of the famous communication game Werewolf, *One Night Ultimate Werewolf* (ONUW) requires players to develop strategic discussion policies due to the potential role changes that increase the uncertainty and complexity of the game. In this work, we first establish the existence of Perfect Bayesian Equilibria (PBEs) in two scenarios of the ONUW game: one with discussion and one without. The results show that discussion greatly changes players' utilities by affecting their beliefs, emphasizing the significance of discussion tactics. Based on the insights obtained from the analyses, we propose an RL-instructed language agent framework, where a discussion policy trained by reinforcement learning (RL) is employed to determine appropriate discussion tactics to adopt. Our experimental results on several ONUW game settings demonstrate the effectiveness and generalizability of our proposed framework.", "pdf": "https://openreview.net/pdf/842bfb79512726b06335c252644c298cd6cc3e08.pdf"} {"title": "DeiSAM: Segment Anything with Deictic Prompting", "url": "https://openreview.net/forum?id=cmSNX47aEH", "detail_url": "https://openreview.net/forum?id=cmSNX47aEH", "authors": "Hikaru Shindo,Manuel Brack,Gopika Sudhakaran,Devendra Singh Dhami,Patrick Schramowski,Kristian Kersting", "tags": "NIPS 2024,Poster", "abstract": "Large-scale, pre-trained neural networks have demonstrated strong capabilities in various tasks, including zero-shot image segmentation. To identify concrete objects in complex scenes, humans instinctively rely on deictic descriptions in natural language, i.e., referring to something depending on the context such as \"The object that is on the desk and behind the cup.\". However, deep learning approaches cannot reliably interpret such deictic representations due to their lack of reasoning capabilities in complex scenarios. To remedy this issue, we propose DeiSAM \u2014 a combination of large pre-trained neural networks with differentiable logic reasoners \u2014 for deictic promptable segmentation. Given a complex, textual segmentation description, DeiSAM leverages Large Language Models (LLMs) to generate first-order logic rules and performs differentiable forward reasoning on generated scene graphs. Subsequently, DeiSAM segments objects by matching them to the logically inferred image regions. As part of our evaluation, we propose the Deictic Visual Genome (DeiVG) dataset, containing paired visual input and complex, deictic textual prompts. Our empirical results demonstrate that DeiSAM is a substantial improvement over purely data-driven baselines for deictic promptable segmentation.", "pdf": "https://openreview.net/pdf/aed7fedaf025c324eb827116cccdbacee3483ea3.pdf"} {"title": "Renovating Names in Open-Vocabulary Segmentation Benchmarks", "url": "https://openreview.net/forum?id=Uw2eJOI822", "detail_url": "https://openreview.net/forum?id=Uw2eJOI822", "authors": "Haiwen Huang,Songyou Peng,Dan Zhang,Andreas Geiger", "tags": "NIPS 2024,Poster", "abstract": "Names are essential to both human cognition and vision-language models. Open-vocabulary models utilize class names as text prompts to generalize to categories unseen during training. However, the precision of these names is often overlooked in existing datasets. In this paper, we address this underexplored problem by presenting a framework for \"renovating\" names in open-vocabulary segmentation benchmarks (RENOVATE). Our framework features a renaming model that enhances the quality of names for each visual segment.
Through experiments, we demonstrate that our renovated names help train stronger open-vocabulary models with up to 15% relative improvement and significantly enhance training efficiency with improved data quality. We also show that our renovated names improve evaluation by better measuring misclassification and enabling fine-grained model analysis. We provide our code and relabelings for several popular segmentation datasets to the research community on our project page: https://andrehuang.github.io/renovate.", "pdf": "https://openreview.net/pdf/9c96bbd923c9812f677bb1f96aa4a1a4be2d1e0e.pdf"} {"title": "RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling", "url": "https://openreview.net/forum?id=xvVeSZoVJO", "detail_url": "https://openreview.net/forum?id=xvVeSZoVJO", "authors": "Tianhang Wang,Fan Lu,Zehan Zheng,Zhijun Li,Guang Chen,changjun jiang", "tags": "NIPS 2024,Poster", "abstract": "Collaborative perception is dedicated to tackling the constraints of single-agent perception, such as occlusions, based on the multiple agents' multi-view sensor inputs. However, most existing works assume an ideal condition that all agents' multi-view cameras are continuously available. In reality, cameras may be highly noisy, obscured, or may even fail during collaboration. In this work, we introduce a new robust camera-insensitivity problem: how to overcome the issues caused by failed camera perspectives while maintaining high collaborative performance at low calibration cost? To address the above problems, we propose RCDN, a Robust Camera-insensitivity collaborative perception method with a novel Dynamic feature-based 3D Neural modeling mechanism. The key intuition of RCDN is to construct collaborative neural rendering field representations to recover failed perceptual messages sent by multiple agents. To better model the collaborative neural rendering field, RCDN first establishes a geometry-BEV-feature-based time-invariant static field with other agents via fast hash-grid modeling. Based on the static background field, the proposed time-varying dynamic field can model the corresponding motion vectors for foregrounds at appropriate positions. To validate RCDN, we create OPV2V-N, a new large-scale dataset with manual labelling under different camera-failure scenarios. Extensive experiments conducted on OPV2V-N show that RCDN can be ported to other baselines and improve their robustness in extreme camera-insensitivity settings. Our code and datasets will be available soon.", "pdf": "https://openreview.net/pdf/50300dd9e9a38a5720ea27edc28c35276e11d4c3.pdf"} {"title": "Towards Learning Group-Equivariant Features for Domain Adaptive 3D Detection", "url": "https://openreview.net/forum?id=YEtirXhsh1", "detail_url": "https://openreview.net/forum?id=YEtirXhsh1", "authors": "Sangyun Shin,Yuhang He,Madhu Vankadari,Ta-Ying Cheng,Qian Xie,Andrew Markham,Niki Trigoni", "tags": "NIPS 2024,Poster", "abstract": "The performance of 3D object detection in large outdoor point clouds deteriorates significantly in an unseen environment due to the inter-domain gap. To address these challenges, most existing methods for domain adaptation harness self-training schemes and attempt to bridge the gap by focusing on a single factor that causes the inter-domain gap, such as objects' sizes, shapes, and foreground density variation. However, the resulting adaptations suggest that there is still a substantial inter-domain gap left to be minimized.
We argue that this is due to two limitations: 1) Biased pseudo-label collection from self-training. 2) Multiple factors jointly contributing to how the object is perceived in the unseen target domain. In this work, we propose a grouping-exploration strategy framework, Group Explorer Domain Adaptation ($\\textbf{GroupEXP-DA}$), to address those two issues. Specifically, our grouping divides the available label sets into multiple clusters and ensures all of them have equal learning attention with the group-equivariant spatial feature, avoiding dominant types of objects causing imbalance problems. Moreover, grouping learns to divide objects by considering inherent factors in a data-driven manner, rather than considering each factor separately as existing works do. On top of the group-equivariant spatial feature that selectively detects objects similar to the input group, we additionally introduce an explorative group update strategy that reduces false negative detections in the target domain, further reducing the inter-domain gap. During inference, only the learned group features are necessary for making the group-equivariant spatial feature, making our method a simple add-on applicable to most existing detectors. We show how each module contributes to substantially bridging the inter-domain gaps compared to existing works across large urban outdoor datasets such as NuScenes, Waymo, and KITTI.", "pdf": "https://openreview.net/pdf/3899686a167553549beec36a102011b5fb83475c.pdf"} {"title": "AdaPKC: PeakConv with Adaptive Peak Receptive Field for Radar Semantic Segmentation", "url": "https://openreview.net/forum?id=oLcPadFrY3", "detail_url": "https://openreview.net/forum?id=oLcPadFrY3", "authors": "Teng Li,Liwen Zhang,Youcheng Zhang,ZijunHu,Pengcheng Pi,Zongqing Lu,Qingmin Liao,Zhe Ma", "tags": "NIPS 2024,Poster", "abstract": "Deep learning-based radar detection technology is receiving increasing attention in areas such as autonomous driving, UAV surveillance, and marine monitoring. Among recent efforts, PeakConv (PKC) provides a solution that can retain the peak response characteristics of radar signals and exploit the strengths of deep convolution, thereby improving radar semantic segmentation (RSS). However, due to the use of a pre-set fixed peak receptive field sampling rule, PKC still has limitations in dealing with problems such as inconsistent broadening of targets' frequency-domain responses and the non-homogeneous, time-varying characteristics of the noise/clutter distribution. Therefore, this paper proposes the idea of an adaptive peak receptive field and upgrades PKC to AdaPKC based on this idea. Beyond that, a novel fine-tuning technology to further boost the performance of AdaPKC-based RSS networks is presented. Through experimental verification using various real-measured radar data (including a publicly available low-cost millimeter-wave radar dataset for autonomous driving and a self-collected Ku-band surveillance radar dataset), we find that the performance of AdaPKC-based models surpasses that of other SoTA methods in RSS tasks.
The code is available at https://github.com/lihua199710/AdaPKC.", "pdf": "https://openreview.net/pdf/9a29228d2f050bf7c0a7f473c4d93ef1491f7152.pdf"} {"title": "GeoNLF: Geometry guided Pose-Free Neural LiDAR Fields", "url": "https://openreview.net/forum?id=v3y785TN7B", "detail_url": "https://openreview.net/forum?id=v3y785TN7B", "authors": "Weiyi Xue,Zehan Zheng,Fan Lu,Haiyun Wei,Guang Chen,changjun jiang", "tags": "NIPS 2024,Poster", "abstract": "Although recent efforts have extended Neural Radiance Field (NeRF) into LiDAR point cloud synthesis, the majority of existing works exhibit a strong dependence on precomputed poses. However, point cloud registration methods struggle to achieve precise global pose estimation, whereas previous pose-free NeRFs overlook geometric consistency in global reconstruction. In light of this, we explore the geometric insights of point clouds, which provide explicit registration priors for reconstruction. Based on this, we propose Geometry guided Neural LiDAR Fields (GeoNLF), a hybrid framework alternately performing global neural reconstruction and pure geometric pose optimization. Furthermore, NeRFs tend to overfit individual frames and easily get stuck in local minima under sparse-view inputs. To tackle this issue, we develop a selective-reweighting strategy and introduce geometric constraints for robust optimization. Extensive experiments on NuScenes and KITTI-360 datasets demonstrate the superiority of GeoNLF in both novel view synthesis and multi-view registration of low-frequency large-scale point clouds.", "pdf": "https://openreview.net/pdf/b3ec2fcef289fdc0a522265263e68abbbc1b50ff.pdf"} {"title": "CALANet: Cheap All-Layer Aggregation for Human Activity Recognition", "url": "https://openreview.net/forum?id=ouoBW2PXFQ", "detail_url": "https://openreview.net/forum?id=ouoBW2PXFQ", "authors": "Jaegyun Park,Dae-Won Kim,Jaesung Lee", "tags": "NIPS 2024,Poster", "abstract": "With the steady growth of sensing technology and wearable devices, sensor-based human activity recognition has become essential in widespread applications, such as healthcare monitoring and fitness tracking, where accurate and real-time systems are required. \nTo achieve real-time response, recent studies have focused on lightweight neural network models.\nSpecifically, they design the network architectures by keeping the number of layers shallow or by restricting the connections of each layer.\nHowever, these approaches suffer from limited accuracy because the classifier only uses the features at the last layer.\nIn this study, we propose a cheap all-layer aggregation network, CALANet, for accuracy improvement while maintaining the efficiency of existing real-time HAR models.\nSpecifically, CALANet allows the classifier to aggregate the features from all layers, resulting in a performance gain.\nIn addition, this work proves that the theoretical computation cost of CALANet is equivalent to that of conventional networks. \nEvaluated on seven publicly available datasets, CALANet outperformed existing methods, achieving state-of-the-art performance.
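The all-layer aggregation idea is easy to sketch: pool every layer's output and hand the classifier their concatenation instead of the last layer's features alone. The shapes and layers below are schematic stand-ins, not CALANet's exact architecture.

```python
import numpy as np

def all_layer_aggregation(x, layers, W_cls):
    # Instead of classifying from the last layer only, collect every layer's
    # (pooled) features and let the classifier use their concatenation.
    feats = []
    h = x
    for W in layers:
        h = np.maximum(h @ W, 0.0)       # one simple layer: linear + ReLU
        feats.append(h.mean(axis=1))     # pool each layer's output over time
    return np.concatenate(feats, axis=-1) @ W_cls

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 50, 16))                  # batch of sensor windows
layers = [rng.standard_normal((16, 16)) * 0.1 for _ in range(4)]
W_cls = rng.standard_normal((4 * 16, 6)) * 0.1        # 6 activity classes
print(all_layer_aggregation(x, layers, W_cls).shape)  # (8, 6) class scores
```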
\nThe source code of CALANet is publicly available at https://github.com/jgpark92/CALANet.", "pdf": "https://openreview.net/pdf/f165b0565c501ec05544215e221a6ea53e672cfa.pdf"} {"title": "Accuracy is Not All You Need", "url": "https://openreview.net/forum?id=QVG7j29Sta", "detail_url": "https://openreview.net/forum?id=QVG7j29Sta", "authors": "Abhinav Dutta,Sanjeev Krishnan,Nipun Kwatra,Ramachandran Ramjee", "tags": "NIPS 2024,Poster", "abstract": "When Large Language Models (LLMs) are compressed using techniques such as quantization, the predominant way to demonstrate the validity of such techniques is by measuring the model's accuracy on various benchmarks. If the accuracies of the baseline model and the compressed model are close, it is assumed that there was negligible degradation in quality. However, even when the accuracies of the baseline and compressed models are similar, we observe the phenomenon of flips, wherein answers change from correct to incorrect and vice versa in proportion. We conduct a detailed study of metrics across multiple compression techniques, models and datasets, demonstrating that the behavior of compressed models as visible to end-users is often significantly different from the baseline model, even when accuracy is similar. We further evaluate compressed models qualitatively and quantitatively using MT-Bench and show that compressed models exhibiting high flips are worse than baseline models in this free-form generative task. Thus, we argue that accuracy and perplexity are necessary but not sufficient for evaluating compressed models, since these metrics hide large underlying changes that have not been observed by previous work. Hence, compression techniques should also be evaluated using distance metrics. We propose two such distance metrics, KL-Divergence and flips, and show that they are well correlated.", "pdf": "https://openreview.net/pdf/953b8fa61c136a6a2265c459acff26d9b78ea263.pdf"} {"title": "On the Expressivity and Sample Complexity of Node-Individualized Graph Neural Networks", "url": "https://openreview.net/forum?id=8APPypS0yN", "detail_url": "https://openreview.net/forum?id=8APPypS0yN", "authors": "Paolo Pellizzoni,Till Hendrik Schulz,Dexiong Chen,Karsten Borgwardt", "tags": "NIPS 2024,Poster", "abstract": "Graph neural networks (GNNs) employing message passing for graph classification are inherently limited by the expressive power of the Weisfeiler-Leman (WL) test for graph isomorphism. Node individualization schemes, which assign unique identifiers to nodes (e.g., by adding random noise to features), are a common approach for achieving universal expressiveness. However, the ability of GNNs endowed with individualization schemes to generalize beyond the training data is still an open question. To address this question, this paper presents a theoretical analysis of the sample complexity of such GNNs from a statistical learning perspective, employing Vapnik\u2013Chervonenkis (VC) dimension and covering number bounds. We demonstrate that node individualization schemes that are permutation-equivariant result in lower sample complexity, and design novel individualization schemes that exploit these results. As an application of this analysis, we also develop a novel architecture that can perform substructure identification (i.e., subgraph isomorphism) while having a lower VC dimension compared to competing methods.
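The flips metric described in the "Accuracy is Not All You Need" abstract above is easy to state precisely in code. A minimal sketch, assuming answers can be compared by equality; the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def flips(baseline_answers, compressed_answers, gold_answers):
    """Fraction of examples whose correctness changes between two models.

    A flip is an example the baseline answers correctly and the compressed
    model answers incorrectly, or vice versa. Two models can have identical
    accuracy while flipping many answers in both directions."""
    base_ok = np.asarray(baseline_answers) == np.asarray(gold_answers)
    comp_ok = np.asarray(compressed_answers) == np.asarray(gold_answers)
    return float(np.mean(base_ok != comp_ok))

# Toy data: both models score 3/4, yet half of the answers flipped.
gold = ["a", "b", "c", "d"]
baseline = ["a", "b", "c", "x"]    # wrong on the last example
compressed = ["a", "b", "x", "d"]  # wrong on the third example
print(flips(baseline, compressed, gold))  # 0.5
```

Identical accuracy with a large flip fraction is exactly the degradation this metric is meant to surface.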
Finally, our theoretical findings are validated experimentally on both synthetic and real-world datasets.", "pdf": "https://openreview.net/pdf/4e6edfcef05c3faebb5b9471d9f24329ffa84de9.pdf"} {"title": "On the Adversarial Robustness of Benjamini Hochberg", "url": "https://openreview.net/forum?id=5jYFoldunM", "detail_url": "https://openreview.net/forum?id=5jYFoldunM", "authors": "Louis Chen,Roberto Szechtman,Matan Seri", "tags": "NIPS 2024,Poster", "abstract": "The Benjamini-Hochberg (BH) procedure is widely used to control the false detection rate (FDR) in multiple testing. Applications of this control abound in drug discovery, forensics, anomaly detection, and, in particular, machine learning, ranging from nonparametric outlier detection to out-of-distribution detection and one-class classification methods. Considering this control could be relied upon in critical safety/security contexts, we investigate its adversarial robustness. More precisely, we study under what conditions BH does and does not exhibit adversarial robustness, we present a class of simple and easily implementable adversarial test-perturbation algorithms, and we perform computational experiments. With our algorithms, we demonstrate that there are conditions under which BH's control can be significantly broken with relatively few (even just one) test score perturbation(s), and provide non-asymptotic guarantees on the expected adversarial-adjustment to FDR. Our technical analysis involves a combinatorial reframing of the BH procedure as a ``balls into bins'' process, and drawing a connection to generalized ballot problems to facilitate an information-theoretic approach for deriving non-asymptotic lower bounds.", "pdf": "https://openreview.net/pdf/3897f2e0032be9e6f2d54cb170aa49219fbefc03.pdf"} {"title": "Nearly Minimax Optimal Regret for Multinomial Logistic Bandit", "url": "https://openreview.net/forum?id=Q4NWfStqVf", "detail_url": "https://openreview.net/forum?id=Q4NWfStqVf", "authors": "Joongkyu Lee,Min-hwan Oh", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study the contextual multinomial logit (MNL) bandit problem in which a learning agent sequentially selects an assortment based on contextual information, and user feedback follows an MNL choice model.\nThere has been a significant discrepancy between lower and upper regret bounds, particularly regarding the maximum assortment size $K$. Additionally, the variation in reward structures between these bounds complicates the quest for optimality. Under uniform rewards, where all items have the same expected reward, we establish a regret lower bound of $\\Omega(d\\sqrt{\\smash[b]{T/K}})$ and propose a constant-time algorithm, OFU-MNL+, that achieves a matching upper bound of $\\tilde{\\mathcal{O}}(d\\sqrt{\\smash[b]{T/K}})$. \nWe also provide instance-dependent minimax regret bounds under uniform rewards.\nUnder non-uniform rewards, we prove a lower bound of $\\Omega(d\\sqrt{T})$ and an upper bound of $\\tilde{\\mathcal{O}}(d\\sqrt{T})$, also achievable by OFU-MNL+. Our empirical studies support these theoretical findings. 
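Since the adversarial-robustness results above are stated against the Benjamini-Hochberg procedure, a minimal reference implementation of the standard step-up rule may help; this sketch is the textbook procedure only, not the paper's perturbation algorithms:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Standard BH step-up rule: reject the k smallest p-values, where k is
    the largest index with p_(k) <= k * q / m, controlling FDR at level q."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # last position passing the threshold
        rejected[order[: k + 1]] = True
    return rejected

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(p_vals, q=0.05))  # rejects the two smallest here
```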
To the best of our knowledge, this is the first work in the contextual MNL bandit literature to prove minimax optimality --- for either uniform or non-uniform reward setting --- and to propose a computationally efficient algorithm that achieves this optimality up to logarithmic factors.", "pdf": "https://openreview.net/pdf/60739be1b6704bb91ca091e8e2a619097b82ea4d.pdf"} {"title": "DHA: Learning Decoupled-Head Attention from Transformer Checkpoints via Adaptive Heads Fusion", "url": "https://openreview.net/forum?id=g92nu7knRq", "detail_url": "https://openreview.net/forum?id=g92nu7knRq", "authors": "Yilong Chen,Linhao Zhang,Junyuan Shang,Zhenyu Zhang,Tingwen Liu,Shuohuan Wang,Yu Sun", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) with billions of parameters demonstrate impressive performance. However, the widely used Multi-Head Attention (MHA) in LLMs incurs substantial computational and memory costs during inference. While some efforts have optimized attention mechanisms by pruning heads or sharing parameters among heads, these methods often lead to performance degradation or necessitate substantial continued pre-training costs to restore performance. Based on the analysis of attention redundancy, we design a Decoupled-Head Attention (DHA) mechanism. DHA adaptively configures group sharing for key heads and value heads across various layers, achieving a better balance between performance and efficiency. Inspired by the observation of clustering similar heads, we propose to progressively transform the MHA checkpoint into the DHA model through linear fusion of similar head parameters step by step, retaining the parametric knowledge of the MHA checkpoint. We construct DHA models by transforming various scales of MHA checkpoints given target head budgets. Our experiments show that DHA remarkably requires a mere 0.25\\% of the original model's pre-training budget to achieve 97.6\\% of the original performance while saving 75\\% of the KV cache. Compared to Group-Query Attention (GQA), DHA achieves a 12$\\times$ training acceleration, a maximum of 24.85\\% performance improvement under a 0.2B-token budget, and ultimately a 2.3\\% overall performance improvement.", "pdf": "https://openreview.net/pdf/50f7520fb81b98d1e7f835aae6ffd33c8cf2ad48.pdf"} {"title": "Marrying Causal Representation Learning with Dynamical Systems for Science", "url": "https://openreview.net/forum?id=MWHRxKz4mq", "detail_url": "https://openreview.net/forum?id=MWHRxKz4mq", "authors": "Dingling Yao,Caroline Muller,Francesco Locatello", "tags": "NIPS 2024,Poster", "abstract": "Causal representation learning promises to extend causal models to hidden causal variables from raw entangled measurements. However, most progress has focused on proving identifiability results in different settings, and we are not aware of any successful real-world application. At the same time, the field of dynamical systems benefited from deep learning and scaled to countless applications but does not allow parameter identification. In this paper, we draw a clear connection between the two and their key assumptions, allowing us to apply identifiable methods developed in causal representation learning to dynamical systems. At the same time, we can leverage scalable differentiable solvers developed for differential equations to build models that are both identifiable and practical.
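The head-fusion idea in the DHA abstract above (grouping similar attention heads and fusing their parameters linearly) can be caricatured in a few lines. A toy sketch only: greedy cosine-similarity grouping with plain averaging, whereas the paper describes a progressive, adaptively configured fusion per layer:

```python
import numpy as np

def fuse_similar_heads(head_weights, n_groups):
    """Group heads greedily by cosine similarity of their flattened weights,
    then average each group into one shared head (hypothetical simplification)."""
    flat = np.stack([w.reshape(-1) for w in head_weights])
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = flat @ flat.T
    unassigned = list(range(len(head_weights)))
    size = max(1, round(len(head_weights) / n_groups))
    groups = []
    while unassigned and len(groups) < n_groups:
        seed = unassigned.pop(0)
        ranked = sorted(unassigned, key=lambda j: -sim[seed, j])
        members = [seed] + ranked[: size - 1]
        for j in members[1:]:
            unassigned.remove(j)
        groups.append(members)
    if unassigned:                      # leftovers join the last group
        groups[-1].extend(unassigned)
    fused = [np.mean([head_weights[i] for i in g], axis=0) for g in groups]
    return groups, fused

heads = [np.random.default_rng(i).normal(size=(64, 512)) for i in range(8)]
groups, fused = fuse_similar_heads(heads, n_groups=2)
print(groups, fused[0].shape)
```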
Overall, we learn explicitly controllable models that isolate the trajectory-specific parameters for further downstream tasks such as out-of-distribution classification or treatment effect estimation. We experiment with a wind simulator with partially known factors of variation. We also apply the resulting model to real-world climate data and successfully answer downstream causal questions in line with existing literature on climate change.", "pdf": "https://openreview.net/pdf/eda6757eec8ad8e0257788da61b811f522434a8d.pdf"} {"title": "Precipitation Downscaling with Spatiotemporal Video Diffusion", "url": "https://openreview.net/forum?id=hhnkH8ex5d", "detail_url": "https://openreview.net/forum?id=hhnkH8ex5d", "authors": "Prakhar Srivastava,Ruihan Yang,Gavin Kerrigan,Gideon Dresdner,Jeremy J McGibbon,Christopher S. Bretherton,Stephan Mandt", "tags": "NIPS 2024,Poster", "abstract": "In climate science and meteorology, high-resolution local precipitation (rain and snowfall) predictions are limited by the computational costs of simulation-based methods. Statistical downscaling, or super-resolution, is a common workaround where a low-resolution prediction is improved using statistical approaches. Unlike traditional computer vision tasks, weather and climate applications require capturing the accurate conditional distribution of high-resolution given low-resolution patterns to assure reliable ensemble averages and unbiased estimates of extreme events, such as heavy rain. This work extends recent video diffusion models to precipitation super-resolution, employing a deterministic downscaler followed by a temporally-conditioned diffusion model to capture noise characteristics and high-frequency patterns. We test our approach on FV3GFS output, an established large-scale global atmosphere model, and compare it against six state-of-the-art baselines. Our analysis, capturing CRPS, MSE, precipitation distributions, and qualitative aspects using California and the Himalayas as examples, establishes our method as a new standard for data-driven precipitation downscaling.", "pdf": "https://openreview.net/pdf/906d1111413d6660104c8156308a128cf4ecca3a.pdf"} {"title": "Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation", "url": "https://openreview.net/forum?id=3YIyB82rjX", "detail_url": "https://openreview.net/forum?id=3YIyB82rjX", "authors": "Peng Tan,Hai-Tian Liu,Zhi-Hao Tan,Zhi-Hua Zhou", "tags": "NIPS 2024,Poster", "abstract": "The learnware paradigm aims to help users leverage numerous existing high-performing models instead of starting from scratch, where a learnware consists of a well-trained model and the specification describing its capability. Numerous learnwares are accommodated by a learnware dock system. When users solve tasks with the system, models that fully match the task feature space are often rare or even unavailable. However, models with heterogeneous feature spaces can still be helpful. This paper finds that label information, particularly model outputs, is helpful yet previously less exploited in the accommodation of heterogeneous learnwares. We extend the specification to better leverage model pseudo-labels and subsequently enrich the unified embedding space for better specification evolvement. With label information, the learnware identification can also be improved by additionally comparing conditional distributions.
Experiments demonstrate that, even without a model explicitly tailored to user tasks, the system can effectively handle tasks by leveraging models from diverse feature spaces.", "pdf": "https://openreview.net/pdf/20007120dab3aab048fc340ecf3519dba08744da.pdf"} {"title": "Universal Rates for Active Learning", "url": "https://openreview.net/forum?id=T0e4Nw09XX", "detail_url": "https://openreview.net/forum?id=T0e4Nw09XX", "authors": "Steve Hanneke,Amin Karbasi,Shay Moran,Grigoris Velegkas", "tags": "NIPS 2024,Poster", "abstract": "In this work we study the problem of actively learning binary classifiers\n from a given concept class, i.e., learning by utilizing unlabeled data \n and submitting targeted queries about their labels to a domain expert.\n We evaluate the quality of our solutions by considering the learning curves\n they induce, i.e., the rate of decrease\n of the misclassification probability as the number of label queries\n increases. The majority of the literature on active learning has \n focused on obtaining uniform guarantees on the error rate which are\n only able to explain the upper envelope of the learning curves over families\n of different data-generating distributions. We diverge from this line of\n work and we focus on the distribution-dependent framework of universal\n learning whose goal is to obtain guarantees that hold for any fixed distribution,\n but do not apply uniformly over all the distributions. We provide a \n complete characterization of the optimal learning rates that are achievable\n by algorithms that have to specify the number of unlabeled examples they\n use ahead of their execution. Moreover, we identify combinatorial complexity\n measures that give rise to each case of our tetrachotomic characterization.\n This resolves an open question that was posed by Balcan et al. (2010).\n As a byproduct of our main result,\n we develop an active learning algorithm for partial concept classes\n that achieves exponential learning rates in the uniform setting.", "pdf": "https://openreview.net/pdf/42dd652dc414d3e711da823c849ab2a094963e94.pdf"} {"title": "Wasserstein Distributionally Robust Optimization through the Lens of Structural Causal Models and Individual Fairness", "url": "https://openreview.net/forum?id=piOzFx9whU", "detail_url": "https://openreview.net/forum?id=piOzFx9whU", "authors": "Ahmad Reza Ehyaei,Golnoosh Farnadi,Samira Samadi", "tags": "NIPS 2024,Poster", "abstract": "In recent years, Wasserstein Distributionally Robust Optimization (DRO) has garnered substantial interest for its efficacy in data-driven decision-making under distributional uncertainty. However, limited research has explored the application of DRO to address individual fairness concerns, particularly when considering causal structures and discrete sensitive attributes in learning problems.\nTo address this gap, we first formulate the DRO problem from the perspectives of causality and individual fairness. We then present the DRO dual formulation as an efficient tool to convert the main problem into a more tractable and computationally efficient form. Next, we characterize the closed form of the approximate worst-case loss quantity as a regularizer, eliminating the max-step in the Min-Max DRO problem. We further estimate the regularizer in more general cases and explore the relationship between DRO and classical robust optimization. 
Finally, by removing the assumption of a known structural causal model, we provide finite sample error bounds when designing DRO with empirical distributions and estimated causal structures to ensure efficiency and robust learning.", "pdf": "https://openreview.net/pdf/8b6d32560285d89309a13bf12325235ba025cc2c.pdf"} {"title": "IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation", "url": "https://openreview.net/forum?id=zv4UISZzp5", "detail_url": "https://openreview.net/forum?id=zv4UISZzp5", "authors": "Fan Lin,Shuyi Xie,Yong Dai,Wenlin Yao,TianJiao Lang,Yu Zhang", "tags": "NIPS 2024,Poster", "abstract": "As Large Language Models (LLMs) become more capable of handling increasingly complex tasks, the evaluation set must keep pace with these advancements to ensure it remains sufficiently discriminative. Item Discrimination (ID) theory, which is widely used in educational assessment, measures the ability of individual test items to differentiate between high and low performers. Inspired by this theory, we propose an ID-induced prompt synthesis framework for evaluating LLMs so that the evaluation set continually updates and refines according to model abilities. \nOur data synthesis framework prioritizes both breadth and specificity. It can generate prompts that comprehensively evaluate the capabilities of LLMs while revealing meaningful performance differences between models, allowing for effective discrimination of their relative strengths and weaknesses across various tasks and domains.\nTo produce high-quality data, we incorporate a self-correct mechanism into our generalization framework and develop two models to predict prompt discrimination and difficulty score to facilitate our data synthesis framework, contributing valuable tools to evaluation data synthesis research. We apply our generated data to evaluate five SOTA models. Our data achieves an average score of 51.92, accompanied by a variance of 10.06. By contrast, previous works (i.e., SELF-INSTRUCT and WizardLM) obtain an average score exceeding 67, with a variance below 3.2.\nThe results demonstrate that the data generated by our framework is more challenging and discriminative compared to previous works.\nWe will release a dataset of over 3,000 carefully crafted prompts to facilitate evaluation research of LLMs.", "pdf": "https://openreview.net/pdf/74ed0078ffe00fb63ba32cc447f4540054349fbb.pdf"} {"title": "Pipeline Parallelism with Controllable Memory", "url": "https://openreview.net/forum?id=Vvcnqs8091", "detail_url": "https://openreview.net/forum?id=Vvcnqs8091", "authors": "Penghui Qi,Xinyi Wan,Nyamdavaa Amar,Min Lin", "tags": "NIPS 2024,Poster", "abstract": "Pipeline parallelism has been widely explored, but most existing schedules lack a systematic methodology. In this paper, we propose a framework to decompose pipeline schedules as repeating a building block, and show that the lifespan of the building block decides the peak activation memory of the pipeline schedule. Guided by the observations, we find that almost all existing pipeline schedules, to the best of our knowledge, are memory inefficient. To address this, we introduce a family of memory efficient building blocks with controllable activation memory, which can reduce the peak activation memory to 1/2 of 1F1B without sacrificing efficiency, and even to 1/3 with comparable throughput. We can also achieve almost zero pipeline bubbles while maintaining the same activation memory as 1F1B. 
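The "worst-case loss as a regularizer" step in the Wasserstein DRO abstract above has a well-known special case that is easy to write down. A hedged sketch, assuming a 1-Lipschitz absolute loss and a norm-based transport cost, under which the Wasserstein worst case over an eps-ball reduces to the empirical loss plus eps times a dual norm of the weights; the paper's causal and fairness-aware formulation is more general:

```python
import numpy as np

def dro_linear_loss(w, X, y, eps):
    """Empirical absolute loss plus eps times the dual norm of w: the closed
    form the Wasserstein worst case reduces to in this special case, so the
    inner max of the min-max problem becomes an explicit regularizer."""
    empirical = np.mean(np.abs(y - X @ w))
    return empirical + eps * np.linalg.norm(w, ord=2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)
print(dro_linear_loss(w_true, X, y, eps=0.1))  # robust objective at w_true
```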
Our evaluations demonstrate that in pure pipeline parallelism settings, our methods outperform 1F1B by 7\\% to 55\\% in terms of throughput. When employing a grid search over hybrid parallelism hyperparameters in practical scenarios, our methods demonstrate a 16\\% throughput improvement over the 1F1B baseline for large language models. The implementation is open-sourced at https://github.com/sail-sg/zero-bubble-pipeline-parallelism.", "pdf": "https://openreview.net/pdf/d163bac9f8f19539ff43d3798775b82d470dfa97.pdf"} {"title": "Error Analysis of Spherically Constrained Least Squares Reformulation in Solving the Stackelberg Prediction Game", "url": "https://openreview.net/forum?id=7L2tCirpwB", "detail_url": "https://openreview.net/forum?id=7L2tCirpwB", "authors": "Xiyuan Li,Weiwei Liu", "tags": "NIPS 2024,Poster", "abstract": "The Stackelberg prediction game (SPG) is a popular model for characterizing strategic interactions between a learner and an adversarial data provider. Although optimization problems in SPGs are often NP-hard, a notable special case involving the least squares loss (SPG-LS) has gained significant research attention recently (Bishop et al. 2020; Wang et al. 2021; Wang et al. 2022). The latest state-of-the-art method for solving the SPG-LS problem is the spherically constrained least squares reformulation (SCLS) method proposed in the work of Wang et al. (2022). However, the lack of theoretical analysis on the error of the SCLS method limits its large-scale applications. In this paper, we investigate the estimation error between the learner obtained by the SCLS method and the actual learner. Specifically, we reframe the estimation error of the SCLS method as a Primary Optimization ($\\textbf{PO}$) problem and utilize the Convex Gaussian min-max theorem (CGMT) to transform the $\\textbf{PO}$ problem into an Auxiliary Optimization ($\\textbf{AO}$) problem. Subsequently, we provide a theoretical error analysis for the SCLS method based on this simplified $\\textbf{AO}$ problem. This analysis not only strengthens the theoretical framework of the SCLS method but also confirms the reliability of the learner produced by it. We further conduct experiments to validate our theorems, and the results are in excellent agreement with our theoretical predictions.", "pdf": "https://openreview.net/pdf/9a6507ba66b130ed009e8683bab3328658585666.pdf"} {"title": "LaSe-E2V: Towards Language-guided Semantic-aware Event-to-Video Reconstruction", "url": "https://openreview.net/forum?id=3ilqQHBWTf", "detail_url": "https://openreview.net/forum?id=3ilqQHBWTf", "authors": "Kanghao Chen,Hangyu Li,Jiazhou Zhou,Zeyu Wang,Lin Wang", "tags": "NIPS 2024,Poster", "abstract": "Event cameras harness advantages such as low latency, high temporal resolution, and high dynamic range (HDR), compared to standard cameras. Due to the distinct imaging paradigm shift, a dominant line of research focuses on event-to-video (E2V) reconstruction to bridge event-based and standard computer vision. However, this task remains challenging due to its inherently ill-posed nature: event cameras only detect the edge and motion information locally. Consequently, the reconstructed videos are often plagued by artifacts and regional blur, primarily caused by the ambiguous semantics of event data. In this paper, we find language naturally conveys abundant semantic information, rendering it stunningly superior in ensuring semantic consistency for E2V reconstruction.
Accordingly, we propose a novel framework, called LaSe-E2V, that can achieve semantic-aware high-quality E2V reconstruction from a language-guided perspective, buttressed by text-conditional diffusion models. However, due to diffusion models' inherent diversity and randomness, it is hardly possible to directly apply them to achieve spatial and temporal consistency for E2V reconstruction. Thus, we first propose an Event-guided Spatiotemporal Attention (ESA) module to effectively condition the denoising pipeline on the event data. We then introduce an event-aware mask loss to ensure temporal coherence and a noise initialization strategy to enhance spatial consistency. Given the absence of event-text-video paired data, we aggregate existing E2V datasets and generate textual descriptions using tagging models for training and evaluation. Extensive experiments on three datasets covering diverse challenging scenarios (e.g., fast motion, low light) demonstrate the superiority of our method. Demo videos for the results are attached to the project page.", "pdf": "https://openreview.net/pdf/7793a11fd5cecb06c47363429b62f4053e561f68.pdf"} {"title": "Randomized Truthful Auctions with Learning Agents", "url": "https://openreview.net/forum?id=Tt2xJaxDc4", "detail_url": "https://openreview.net/forum?id=Tt2xJaxDc4", "authors": "Gagan Aggarwal,Anupam Gupta,Andres Perlroth,Grigoris Velegkas", "tags": "NIPS 2024,Poster", "abstract": "We study a setting where agents use no-regret learning algorithms to participate in repeated auctions. Recently, Kolumbus and Nisan [2022a] showed, rather surprisingly, that when bidders participate in second-price auctions using no-regret bidding algorithms, no matter how large the number of interactions $T$ is, the runner-up bidder may not converge to bidding truthfully. Our first result shows that this holds for all deterministic truthful auctions. We also show that the ratio of the learning rates of different bidders can qualitatively affect the convergence of the bidders. Next, we consider the problem of revenue maximization in this environment. In the setting with fully rational bidders, the seminal result of Myerson [1981] showed that revenue can be maximized by using a second-price auction with reserves. We show that, in stark contrast, in our setting with learning bidders, randomized auctions can have strictly better revenue guarantees than second-price auctions with reserves, when $T$ is large enough. To do this, we provide a black-box transformation from any truthful auction $A$ to an auction $A'$ such that: i) all mean-based no-regret learners that participate in $A'$ converge to bidding truthfully, ii) the distance between the allocation and payment rules of $A$ and $A'$ is negligible. Finally, we study revenue maximization in the non-asymptotic regime. We define a notion of auctioneer regret that compares the revenue generated to the revenue of a second price auction with truthful bids. When the auctioneer has to use the same auction throughout the interaction, we show an (almost) tight regret bound of $\\tilde{\\Theta}(T^{3/4})$. Then, we consider the case where the auctioneer can use different auctions throughout the interaction, but in a way that is oblivious to the bids.
For this setting, we show an (almost) tight bound of $\\tilde{\\Theta}(\\sqrt{T})$.", "pdf": "https://openreview.net/pdf/7876efe7105e4ba1839383e697641e3c90f42f0a.pdf"} {"title": "Can Simple Averaging Defeat Modern Watermarks?", "url": "https://openreview.net/forum?id=X2G7LA7Av9", "detail_url": "https://openreview.net/forum?id=X2G7LA7Av9", "authors": "Pei Yang,Hai Ci,Yiren Song,Mike Zheng Shou", "tags": "NIPS 2024,Poster", "abstract": "Digital watermarking techniques are crucial for copyright protection and source identification of images, especially in the era of generative AI models. However, many existing watermarking methods, particularly content-agnostic approaches that embed fixed patterns regardless of image content, are vulnerable to steganalysis attacks that can extract and remove the watermark with minimal perceptual distortion. In this work, we categorise watermarking algorithms into content-adaptive and content-agnostic ones, and demonstrate how averaging a collection of watermarked images could reveal the underlying watermark pattern. We then leverage this extracted pattern for effective watermark removal under both greybox and blackbox settings, even when the collection of images contains multiple watermark patterns. For some algorithms like Tree-Ring watermarks, the extracted pattern can also forge convincing watermarks on clean images. Our quantitative and qualitative evaluations across twelve watermarking methods highlight the threat posed by steganalysis to content-agnostic watermarks and the importance of designing watermarking techniques resilient to such analytical attacks. We propose security guidelines calling for using content-adaptive watermarking strategies and performing security evaluation against steganalysis. We also suggest multi-key assignments as potential mitigations against steganalysis vulnerabilities. Github page: \\url{https://github.com/showlab/watermark-steganalysis}.", "pdf": "https://openreview.net/pdf/7a2871bf86de51e19a4a9e60f0a253143e245330.pdf"} {"title": "On the Convergence of Loss and Uncertainty-based Active Learning Algorithms", "url": "https://openreview.net/forum?id=GLUIuli3Sm", "detail_url": "https://openreview.net/forum?id=GLUIuli3Sm", "authors": "Daniel Haimovich,Dima Karamshuk,Fridolin Linder,Niek Tax,Milan Vojnovic", "tags": "NIPS 2024,Poster", "abstract": "We investigate the convergence rates and data sample sizes required for training a machine learning model using a stochastic gradient descent (SGD) algorithm, where data points are sampled based on either their loss value or uncertainty value. These training methods are particularly relevant for active learning and data subset selection problems. For SGD with a constant step size update, we present convergence results for linear classifiers and linearly separable datasets using squared hinge loss and similar training loss functions. Additionally, we extend our analysis to more general classifiers and datasets, considering a wide range of loss-based sampling strategies and smooth convex training loss functions. We propose a novel algorithm called Adaptive-Weight Sampling (AWS) that utilizes SGD with an adaptive step size that achieves stochastic Polyak's step size in expectation. We establish convergence rate results for AWS for smooth convex training loss functions. 
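The averaging attack in the "Can Simple Averaging Defeat Modern Watermarks?" abstract above is straightforward to demonstrate on synthetic data. A minimal sketch, assuming a content-agnostic watermark that adds one fixed pattern to every image; array sizes and noise scales are illustrative:

```python
import numpy as np

def estimate_watermark(marked_imgs, clean_imgs):
    """For a content-agnostic watermark that adds one fixed pattern to every
    image, averaging many watermarked images cancels the (roughly zero-mean)
    content and leaves the pattern; subtracting an average of clean images
    sharpens the estimate (a greybox setting)."""
    return np.mean(marked_imgs, axis=0) - np.mean(clean_imgs, axis=0)

rng = np.random.default_rng(0)
pattern = 0.2 * rng.normal(size=(32, 32))        # hypothetical fixed watermark
marked = rng.normal(size=(2000, 32, 32)) + pattern
est = estimate_watermark(marked, rng.normal(size=(2000, 32, 32)))
removed = marked - est                           # watermark removal
print(np.abs(est - pattern).mean())              # small estimation error
```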
Our numerical experiments demonstrate the efficiency of AWS on various datasets by using either exact or estimated loss values.", "pdf": "https://openreview.net/pdf/2c318df670310f590d6105e7be584e9921c8cb2c.pdf"} {"title": "Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack", "url": "https://openreview.net/forum?id=E2odGznGim", "detail_url": "https://openreview.net/forum?id=E2odGznGim", "authors": "Mingli Zhu,Siyuan Liang,Baoyuan Wu", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks face persistent challenges in defending against backdoor attacks, leading to an ongoing battle between attacks and defenses. While existing backdoor defense strategies have shown promising performance in reducing attack success rates, can we confidently claim that the backdoor threat has truly been eliminated from the model? To address this, we re-investigate the characteristics of the backdoored models after defense (denoted as defense models). Surprisingly, we find that the original backdoors still exist in defense models derived from existing post-training defense strategies, and their existence is measured by a novel metric called the backdoor existence coefficient. It implies that the backdoors just lie dormant rather than being eliminated. To further verify this finding, we empirically show that these dormant backdoors can be easily re-activated during the inference stage, by manipulating the original trigger with a well-designed tiny perturbation using a universal adversarial attack. More practically, we extend our backdoor re-activation to the black-box scenario, where the defense model can only be queried by the adversary during the inference stage, and develop two effective methods, i.e., query-based and transfer-based backdoor re-activation attacks. The effectiveness of the proposed methods is verified on both image classification and multimodal contrastive learning (i.e., CLIP) tasks. In conclusion, this work uncovers a critical vulnerability that has never been explored in existing defense strategies, emphasizing the urgency of designing more robust and advanced backdoor defense mechanisms in the future.", "pdf": "https://openreview.net/pdf/a49224ec8ee25bd0401dacd0178b189a45437d0a.pdf"} {"title": "Model Collapse Demystified: The Case of Regression", "url": "https://openreview.net/forum?id=bioHNTRnQk", "detail_url": "https://openreview.net/forum?id=bioHNTRnQk", "authors": "Elvis Dohmatob,Yunzhen Feng,Julia Kempe", "tags": "NIPS 2024,Poster", "abstract": "The era of proliferation of large language and image generation models begs the question of what happens if models are trained on the synthesized outputs of other models. The phenomenon of \"model collapse\" refers to the situation whereby as a model is trained recursively on data generated from previous generations of itself over time, its performance degrades until the model eventually becomes completely useless, i.e. the model collapses. In this work, we investigate this phenomenon within the context of high-dimensional regression with Gaussian data, considering both low- and high-dimensional asymptotics. We derive analytical formulas that quantitatively describe this phenomenon in both under-parameterized and over-parameterized regimes. We show how test error increases linearly in the number of model iterations in terms of all problem hyperparameters (covariance spectrum, regularization, label noise level, dataset size) and further isolate how model collapse affects both bias and variance terms in our setup.
We show that even in the noise-free case, catastrophic (exponentially fast) model collapse can happen in the over-parameterized regime. In the special case of polynomially decaying spectral and source conditions, we obtain modified scaling laws which exhibit new crossover phenomena from fast to slow rates. We also propose a simple strategy based on adaptive regularization to mitigate model collapse. Our theoretical results are validated with experiments.", "pdf": "https://openreview.net/pdf/f99e5843f04072c12b1f7f1cf50db4bb1cab6911.pdf"} {"title": "FastDrag: Manipulate Anything in One Step", "url": "https://openreview.net/forum?id=1PNwacZYik", "detail_url": "https://openreview.net/forum?id=1PNwacZYik", "authors": "Xuanjia Zhao,Jian Guan,Congyi Fan,Dongli Xu,Youtian Lin,Haiwei Pan,Pengming Feng", "tags": "NIPS 2024,Poster", "abstract": "Drag-based image editing using generative models provides precise control over image contents, enabling users to manipulate anything in an image with a few clicks. However, prevailing methods typically adopt $n$-step iterations for latent semantic optimization to achieve drag-based image editing, which is time-consuming and limits practical applications. In this paper, we introduce a novel one-step drag-based image editing method, i.e., FastDrag, to accelerate the editing process. Central to our approach is a latent warpage function (LWF), which simulates the behavior of a stretched material to adjust the location of individual pixels within the latent space. This innovation achieves one-step latent semantic optimization and hence significantly improves editing speed. Meanwhile, null regions emerging after applying LWF are addressed by our proposed bilateral nearest neighbor interpolation (BNNI) strategy. This strategy interpolates these regions using similar features from neighboring areas, thus enhancing semantic integrity. Additionally, a consistency-preserving strategy is introduced to maintain the consistency between the edited and original images by adopting semantic information from the original image, saved as key and value pairs in the self-attention module during diffusion inversion, to guide the diffusion sampling. Our FastDrag is validated on the DragBench dataset, demonstrating substantial improvements in processing time over existing methods, while achieving enhanced editing performance.", "pdf": "https://openreview.net/pdf/157d0cfc0dd7fb9390ec1d666f00033816e0957c.pdf"} {"title": "Neur2BiLO: Neural Bilevel Optimization", "url": "https://openreview.net/forum?id=esVleaqkRc", "detail_url": "https://openreview.net/forum?id=esVleaqkRc", "authors": "Justin Dumouchelle,Esther Julien,Jannis Kurtz,Elias Boutros Khalil", "tags": "NIPS 2024,Poster", "abstract": "Bilevel optimization deals with nested problems in which a *leader* takes the first decision to minimize their objective function while accounting for a *follower*'s best-response reaction. Constrained bilevel problems with integer variables are particularly notorious for their hardness. While exact solvers have been proposed for mixed-integer *linear* bilevel optimization, they tend to scale poorly with problem size and are hard to generalize to the non-linear case. On the other hand, problem-specific algorithms (exact and heuristic) are limited in scope.
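The recursive-training loop behind the "Model Collapse Demystified" abstract above can be simulated in a few lines. A toy sketch, not the paper's analytical setting: each generation fits ridge regression to labels synthesized by the previous generation's model, and the test error against the ground-truth weights grows with the generation count:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 20, 200, 0.5
w_star = rng.normal(size=d)
X_test = rng.normal(size=(5000, d))

def fit_ridge(X, y, lam=1e-3):
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Generation 0 trains on real labels; every later generation trains on
# labels synthesized by the previous generation's model.
w = w_star
for gen in range(6):
    X = rng.normal(size=(n, d))
    y = X @ w + sigma * rng.normal(size=n)   # labels from the current teacher
    w = fit_ridge(X, y)
    mse = np.mean((X_test @ (w - w_star)) ** 2)
    print(f"generation {gen}: test MSE vs ground truth = {mse:.3f}")
```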
Under a data-driven setting in which similar instances of a bilevel problem are solved routinely, our proposed framework, Neur2BiLO, embeds a neural network approximation of the leader's or follower's value function, trained via supervised regression, into an easy-to-solve mixed-integer program. Neur2BiLO serves as a heuristic that produces high-quality solutions extremely fast for four applications with linear and non-linear objectives and pure and mixed-integer variables.", "pdf": "https://openreview.net/pdf/b7dc6351ad4f83c24529b6be026c1cf57c949ac3.pdf"} {"title": "PAC-Bayes-Chernoff bounds for unbounded losses", "url": "https://openreview.net/forum?id=CyzZeND3LB", "detail_url": "https://openreview.net/forum?id=CyzZeND3LB", "authors": "Ioar Casado,Luis A. Ortega,Aritz P\u00e9rez,Andres R Masegosa", "tags": "NIPS 2024,Poster", "abstract": "We introduce a new PAC-Bayes oracle bound for unbounded losses that extends Cram\u00e9r-Chernoff bounds to the PAC-Bayesian setting. The proof technique relies on controlling the tails of certain random variables involving the Cram\u00e9r transform of the loss. Our approach naturally leverages properties of Cram\u00e9r-Chernoff bounds, such as exact optimization of the free parameter in many PAC-Bayes bounds. We highlight several applications of the main theorem. Firstly, we show that our bound recovers and generalizes previous results. Additionally, our approach allows working with richer assumptions that result in more informative and potentially tighter bounds. In this direction, we provide a general bound under a new *model-dependent* assumption from which we obtain bounds based on parameter norms and log-Sobolev inequalities. Notably, many of these bounds can be minimized to obtain distributions beyond the Gibbs posterior and provide novel theoretical coverage to existing regularization techniques.", "pdf": "https://openreview.net/pdf/271f1b4fd3a612ef6d4677b739fc51c663983e99.pdf"} {"title": "AdanCA: Neural Cellular Automata As Adaptors For More Robust Vision Transformer", "url": "https://openreview.net/forum?id=BQh1SGvROG", "detail_url": "https://openreview.net/forum?id=BQh1SGvROG", "authors": "Yitao Xu,Tong Zhang,Sabine Susstrunk", "tags": "NIPS 2024,Poster", "abstract": "Vision Transformers (ViTs) demonstrate remarkable performance in image classification through visual-token interaction learning, particularly when equipped with local information via region attention or convolutions. Although such architectures improve the feature aggregation from different granularities, they often fail to contribute to the robustness of the networks. Neural Cellular Automata (NCA) enables the modeling of global visual-token representations through local interactions, with its training strategies and architecture design conferring strong generalization ability and robustness against noisy input. In this paper, we propose Adaptor Neural Cellular Automata (AdaNCA) for Vision Transformers that uses NCA as plug-and-play adaptors between ViT layers, thus enhancing ViT's performance and robustness against adversarial samples as well as out-of-distribution inputs. To overcome the large computational overhead of standard NCAs, we propose Dynamic Interaction for more efficient interaction learning. Using our analysis of AdaNCA placement and robustness improvement, we also develop an algorithm for identifying the most effective insertion points for AdaNCA. 
With less than a 3% increase in parameters, AdaNCA contributes to more than 10% absolute improvement in accuracy under adversarial attacks on the ImageNet1K benchmark. Moreover, we demonstrate with extensive evaluations across eight robustness benchmarks and four ViT architectures that AdaNCA, as a plug-and-play module, consistently improves the robustness of ViTs.", "pdf": "https://openreview.net/pdf/554d2c92ce2f9daf2f1865c07e97e461f3e1d088.pdf"} {"title": "LLaNA: Large Language and NeRF Assistant", "url": "https://openreview.net/forum?id=ExeIyx6U0Z", "detail_url": "https://openreview.net/forum?id=ExeIyx6U0Z", "authors": "Andrea Amaduzzi,Pierluigi Zama Ramirez,Giuseppe Lisanti,Samuele Salti,Luigi di Stefano", "tags": "NIPS 2024,Poster", "abstract": "Multimodal Large Language Models (MLLMs) have demonstrated an excellent understanding of images and 3D data. However, both modalities have shortcomings in holistically capturing the appearance and geometry of objects. Meanwhile, Neural Radiance Fields (NeRFs), which encode information within the weights of a simple Multi-Layer Perceptron (MLP), have emerged as an increasingly widespread modality that simultaneously encodes the geometry and photorealistic appearance of objects. This paper investigates the feasibility and effectiveness of ingesting NeRF into MLLM. We create LLaNA, the first general-purpose NeRF-language\nassistant capable of performing new tasks such as NeRF captioning and Q&A. Notably, our method directly processes the weights of the NeRF\u2019s MLP to extract information about the represented objects without the need to render images or materialize 3D data structures. Moreover, we build a dataset of NeRFs with text annotations for various NeRF-language tasks with no human intervention.\nBased on this dataset, we develop a benchmark to evaluate the NeRF understanding capability of our method. Results show that processing NeRF weights performs favourably against extracting 2D or 3D representations from NeRFs.", "pdf": "https://openreview.net/pdf/774a599cd0be03181b834cd41353395dbb642592.pdf"} {"title": "MimicTalk: Mimicking a personalized and expressive 3D talking face in minutes", "url": "https://openreview.net/forum?id=gjEzL0bamb", "detail_url": "https://openreview.net/forum?id=gjEzL0bamb", "authors": "Zhenhui Ye,Tianyun Zhong,Yi Ren,Ziyue Jiang,Jiawei Huang,Rongjie Huang,Jinglin Liu,Jinzheng He,Chen Zhang,Zehan Wang,Xize Cheng,Xiang Yin,Zhou Zhao", "tags": "NIPS 2024,Poster", "abstract": "Talking face generation (TFG) aims to animate a target identity's face to create realistic talking videos. Personalized TFG is a variant that emphasizes the perceptual identity similarity of the synthesized result (from the perspective of appearance and talking style). While previous works typically solve this problem by learning an individual neural radiance field (NeRF) for each identity to implicitly store its static and dynamic information, we find it inefficient and non-generalized due to the per-identity-per-training framework and the limited training data. To this end, we propose MimicTalk, the first attempt that exploits the rich knowledge from a NeRF-based person-agnostic generic model for improving the efficiency and robustness of personalized TFG. 
To be specific, (1) we first come up with a person-agnostic 3D TFG model as the base model and propose to adapt it into a specific identity; (2) we propose a static-dynamic-hybrid adaptation pipeline to help the model learn the personalized static appearance and facial dynamic features; (3) to generate facial motion in the personalized talking style, we propose an in-context stylized audio-to-motion model that mimics the implicit talking style provided in the reference video, avoiding the information loss caused by an explicit style representation. The adaptation process to an unseen identity can be performed in 15 minutes, which is 47 times faster than previous person-dependent methods. Experiments show that our MimicTalk surpasses previous baselines regarding video quality, efficiency, and expressiveness. Video samples are available at https://mimictalk.github.io .", "pdf": "https://openreview.net/pdf/b65a07fe2c7bc81fa8019055e835b36dc2f9a4fa.pdf"} {"title": "Reasons and Solutions for the Decline in Model Performance after Editing", "url": "https://openreview.net/forum?id=xjXYgdFM5M", "detail_url": "https://openreview.net/forum?id=xjXYgdFM5M", "authors": "Xiusheng Huang,Jiaxiang Liu,Yequan Wang,Kang Liu", "tags": "NIPS 2024,Poster", "abstract": "Knowledge editing technology has received widespread attention for low-cost updates of incorrect or outdated knowledge in large-scale language models. However, recent research has found that edited models often exhibit varying degrees of performance degradation. The reasons behind this phenomenon and potential solutions have not yet been provided. In order to investigate the reasons for the performance decline of the edited model and optimize the editing method, this work explores the underlying reasons from both data and model perspectives. Specifically, 1) from a data perspective, to clarify the impact of data on the performance of editing models, this paper first constructs a **M**ulti-**Q**uestion **D**ataset (**MQD**) to evaluate the impact of different types of editing data on model performance. The performance of the editing model is mainly affected by the diversity of editing targets and sequence length, as determined through experiments. 2) From a model perspective, this article explores the factors that affect the performance of editing models. The results indicate a strong correlation between the L1-norm of the editing model layer and the editing accuracy, and clarify that this is an important factor leading to the bottleneck of editing performance. Finally, in order to improve the performance of the editing model, this paper further proposes a **D**ump **for** **S**equence (**D4S**) method, which successfully overcomes the previous editing bottleneck by reducing the L1-norm of the editing layer, allowing users to perform multiple effective edits and minimizing model damage. Our code is available at https://github.com/nlpkeg/D4S.", "pdf": "https://openreview.net/pdf/29125d34caf7e2e65a3da6e297ad342a2583e2d3.pdf"} {"title": "Spherical Frustum Sparse Convolution Network for LiDAR Point Cloud Semantic Segmentation", "url": "https://openreview.net/forum?id=LqdcdqIeVD", "detail_url": "https://openreview.net/forum?id=LqdcdqIeVD", "authors": "Yu Zheng,Guangming Wang,Jiuming Liu,Marc Pollefeys,Hesheng Wang", "tags": "NIPS 2024,Poster", "abstract": "LiDAR point cloud semantic segmentation enables robots to obtain fine-grained semantic information of the surrounding environment.
Recently, many works project the point cloud onto a 2D image and adopt 2D Convolutional Neural Networks (CNNs) or vision transformers for LiDAR point cloud semantic segmentation. However, since more than one point can be projected onto the same 2D position but only one point can be preserved, the previous 2D projection-based segmentation methods suffer from inevitable quantized information loss, which results in incomplete geometric structure, especially for small objects. To avoid quantized information loss, in this paper, we propose a novel spherical frustum structure, which preserves all points projected onto the same 2D position. Additionally, a hash-based representation is proposed for memory-efficient spherical frustum storage. Based on the spherical frustum structure, the Spherical Frustum sparse Convolution (SFC) and Frustum Farthest Point Sampling (F2PS) are proposed to convolve and sample the points stored in spherical frustums respectively. Finally, we present the Spherical Frustum sparse Convolution Network (SFCNet) to adopt 2D CNNs for LiDAR point cloud semantic segmentation without quantized information loss. Extensive experiments on the SemanticKITTI and nuScenes datasets demonstrate that our SFCNet outperforms previous 2D projection-based semantic segmentation methods based on conventional spherical projection and shows better performance on small object segmentation by preserving complete geometric structure. Codes will be available at https://github.com/IRMVLab/SFCNet.", "pdf": "https://openreview.net/pdf/5a871230d47f5a2d7722af373313a33ddcdb2d8e.pdf"} {"title": "Protecting Your LLMs with Information Bottleneck", "url": "https://openreview.net/forum?id=u9ShP64FJV", "detail_url": "https://openreview.net/forum?id=u9ShP64FJV", "authors": "Zichuan Liu,Zefan Wang,Linjie Xu,Jinyu Wang,Lei Song,Tianchun Wang,Chunlin Chen,Wei Cheng,Jiang Bian", "tags": "NIPS 2024,Poster", "abstract": "The advent of large language models (LLMs) has revolutionized the field of natural language processing, yet they might be attacked to produce harmful content.\nDespite efforts to ethically align LLMs, these are often fragile and can be circumvented by jailbreaking attacks through optimized or manual adversarial prompts.\nTo address this, we introduce the Information Bottleneck Protector (IBProtector), a defense mechanism grounded in the information bottleneck principle, and we modify the objective to avoid trivial solutions.\nThe IBProtector selectively compresses and perturbs prompts, facilitated by a lightweight and trainable extractor, preserving only essential information for the target LLMs to respond with the expected answer.\nMoreover, we further consider a situation where the gradient is not visible, so as to be compatible with any LLM.\nOur empirical evaluations show that IBProtector outperforms current defense methods in mitigating jailbreak attempts, without overly affecting response quality or inference speed.
\nIts effectiveness and adaptability across various attack methods and target LLMs underscore the potential of IBProtector as a novel, transferable defense that bolsters the security of LLMs without requiring modifications to the underlying models.", "pdf": "https://openreview.net/pdf/9e0d322217158c45f3a1f006a62564210eda4dc6.pdf"} {"title": "Transformers Represent Belief State Geometry in their Residual Stream", "url": "https://openreview.net/forum?id=YIB7REL8UC", "detail_url": "https://openreview.net/forum?id=YIB7REL8UC", "authors": "Adam Shai,Lucas Teixeira,Alexander Gietelink Oldenziel,Sarah Marzen,Paul M. Riechers", "tags": "NIPS 2024,Poster", "abstract": "What computational structure are we building into large language models when we train them on next-token prediction? Here, we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data-generating process. Leveraging the theory of optimal prediction, we anticipate and then find that belief states are linearly represented in the residual stream of transformers, even in cases where the predicted belief state geometry has highly nontrivial fractal structure. We investigate cases where the belief state geometry is represented in the final residual stream or distributed across the residual streams of multiple layers, providing a framework to explain these observations. Furthermore we demonstrate that the inferred belief states contain information about the entire future, beyond the local next-token prediction that the transformers are explicitly trained on. Our work provides a general framework connecting the structure of training data to the geometric structure of activations inside transformers.", "pdf": "https://openreview.net/pdf/9816d6e49ad1ae31e77d0a7bc05e61ed29190acd.pdf"} {"title": "Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines", "url": "https://openreview.net/forum?id=prgxz9fYbf", "detail_url": "https://openreview.net/forum?id=prgxz9fYbf", "authors": "Edward Milsom,Ben Anson,Laurence Aitchison", "tags": "NIPS 2024,Poster", "abstract": "Recent work developed convolutional deep kernel machines, achieving 92.7% test accuracy on CIFAR-10 using a ResNet-inspired architecture, which is SOTA for kernel methods. However, this still lags behind neural networks, which easily achieve over 94% test accuracy with similar architectures. In this work we introduce several modifications to improve the convolutional deep kernel machine\u2019s generalisation, including stochastic kernel regularisation, which adds noise to the learned Gram matrices during training. The resulting model achieves 94.5% test accuracy on CIFAR-10. This finding has important theoretical and practical implications, as it demonstrates that the ability to perform well on complex tasks like image classification is not unique to neural networks.
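The belief states referenced in the "Transformers Represent Belief State Geometry" abstract above are the output of standard Bayesian filtering over the hidden states of the data-generating process. A minimal sketch with an illustrative 3-state hidden Markov model (all probabilities made up for the example):

```python
import numpy as np

def belief_updates(T, E, observations, b0):
    """Forward (Bayesian filtering) recursion over an HMM's hidden states:
    the sequence of belief states an optimal next-token predictor must track.

    T[i, j] = P(next state j | state i); E[i, o] = P(observation o | state i)."""
    beliefs, b = [b0], b0
    for o in observations:
        b = (b @ T) * E[:, o]   # predict the next state, then condition on o
        b = b / b.sum()         # renormalize to a probability vector
        beliefs.append(b)
    return np.array(beliefs)

# Tiny 3-state process emitting binary tokens (all numbers illustrative).
T = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.3, 0.0, 0.7]])
E = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
print(belief_updates(T, E, observations=[0, 1, 1, 0], b0=np.ones(3) / 3))
```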
Instead, other approaches including deep kernel methods can achieve excellent performance on such tasks, as long as they have the capacity to learn representations from data.", "pdf": "https://openreview.net/pdf/ba510678d2e5eebc0c16d764b5c28f2674005597.pdf"} {"title": "Unveiling The Matthew Effect Across Channels: Assessing Layer Width Sufficiency via Weight Norm Variance", "url": "https://openreview.net/forum?id=Tcft2V63Vd", "detail_url": "https://openreview.net/forum?id=Tcft2V63Vd", "authors": "Yiting Chen,Jiazi Bu,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "The trade-off between cost and performance has been a longstanding and critical issue for deep neural networks. \nOne key factor affecting the computational cost is the width of each layer. \nHowever, in practice, the width of layers in a neural network is mostly empirically determined. In this paper, we show that a pattern regarding the variance of weight norm corresponding to different channels can indicate whether the layer is sufficiently wide and may help us better allocate computational resources across the layers.\nStarting from the simple intuition that channels with larger weights would have larger gradients and that the difference in weight norm grows between channels of similar weight, we empirically validate, with experiments across different data modalities and network architectures, that wide and narrow layers show two different patterns. \nBased on the two different patterns, we identify three stages during training and explain each stage with corresponding evidence. We further propose to adjust the width based on the identified pattern and show that conventional layer width settings for CNNs could be adjusted to reduce the number of parameters while boosting the performance.", "pdf": "https://openreview.net/pdf/779844628d46f3e3e149d7d3735b4fcfa84a9011.pdf"} {"title": "Controlling Continuous Relaxation for Combinatorial Optimization", "url": "https://openreview.net/forum?id=ykACV1IhjD", "detail_url": "https://openreview.net/forum?id=ykACV1IhjD", "authors": "Yuma Ichikawa", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised learning (UL)-based solvers for combinatorial optimization (CO) train a neural network that generates a soft solution by directly optimizing the CO objective using a continuous relaxation strategy. These solvers offer several advantages over traditional methods and other learning-based methods, particularly for large-scale CO problems. However, UL-based solvers face two practical issues: (I) an optimization issue, where UL-based solvers are easily trapped at local optima, and (II) a rounding issue, where UL-based solvers require artificial post-learning rounding from the continuous space back to the original discrete space, undermining the robustness of the results. This study proposes a Continuous Relaxation Annealing (CRA) strategy, an effective rounding-free learning method for UL-based solvers. CRA introduces a penalty term that dynamically shifts from prioritizing continuous solutions, effectively smoothing the non-convexity of the objective function, to enforcing discreteness, eliminating artificial rounding. Experimental results demonstrate that CRA significantly enhances the performance of UL-based solvers, outperforming existing UL-based solvers and greedy algorithms in complex CO problems.
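The stochastic kernel regularisation mentioned in the deep kernel machine abstract above, adding noise to learned Gram matrices during training, might look roughly as follows. A hedged sketch: the paper's exact noise model is not reproduced here, and the PSD projection is one plausible way to keep the perturbed matrix a valid kernel:

```python
import numpy as np

def stochastic_kernel_regularisation(K, noise_scale=0.01, rng=None):
    """Perturb a Gram matrix with symmetric noise, then clip negative
    eigenvalues so the result stays a valid (PSD) kernel matrix."""
    rng = rng if rng is not None else np.random.default_rng()
    Z = rng.normal(scale=noise_scale, size=K.shape)
    K_noisy = K + (Z + Z.T) / 2.0               # keep the matrix symmetric
    vals, vecs = np.linalg.eigh(K_noisy)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

X = np.random.default_rng(0).normal(size=(50, 10))
K = X @ X.T / 10.0                              # a toy Gram matrix
K_reg = stochastic_kernel_regularisation(K)
print(np.linalg.eigvalsh(K_reg).min() >= -1e-10)  # still PSD (True)
```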
Additionally, CRA effectively eliminates artificial rounding and accelerates the learning process.", "pdf": "https://openreview.net/pdf/f074cb3b2669ebeeea6f148a6ea67cd0ef2b637c.pdf"} {"title": "What Rotary Position Embedding Can Tell Us: Identifying Query and Key Weights Corresponding to Basic Syntactic or High-level Semantic Information", "url": "https://openreview.net/forum?id=e5Mv7iWfVW", "detail_url": "https://openreview.net/forum?id=e5Mv7iWfVW", "authors": "Yiting Chen,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "Transformer-based large language models (LLMs) have successfully handled various tasks. As one fundamental module in Transformers, position encoding encodes the positional information of tokens in a sequence. Specifically, rotary position embedding (RoPE), one of the most widely used techniques, encodes the positional information by dividing the query or key value with $d$ elements into $d/2$ pairs and rotating the 2D vector corresponding to each pair of elements. Therefore, the direction of each pair and the position-related rotation jointly determine the attention score. In this paper, we show that the direction of each 2D pair is largely affected by the angle between the corresponding weight vector pair. We theoretically show that non-orthogonal weight vector pairs lead to strong attention on tokens at certain relative positions and are less sensitive to the input, which may correspond to basic syntactic information. Meanwhile, the orthogonal weight vector pairs are more flexible regarding the relative position, which may correspond to high-level semantic information. Empirical evidence supports the hypothesis that shallow layers of LLMs focus more on local syntax and deep layers focus more on high-level semantics. Furthermore, we show that LLM fine-tuning mainly changes the pairs of weight vectors that are nearly orthogonal, i.e., the weights corresponding to high-level semantics, which enables reducing the number of trainable parameters during fine-tuning without sacrificing performance. We propose a method, namely Angle-based Weight Selection (AWS), to reduce the fine-tuning overhead and verify the effectiveness of the proposed method on the widely used Alpaca-fine-tuned Llama-2.", "pdf": "https://openreview.net/pdf/5d5a69ecf75e1413e8a4cbe55a98b16144a08473.pdf"} {"title": "Reparameterized Multi-Resolution Convolutions for Long Sequence Modelling", "url": "https://openreview.net/forum?id=RwgNbIpCpk", "detail_url": "https://openreview.net/forum?id=RwgNbIpCpk", "authors": "Harry Jake Cunningham,Giorgio Giannone,Mingtian Zhang,Marc Peter Deisenroth", "tags": "NIPS 2024,Poster", "abstract": "Global convolutions have shown increasing promise as powerful general-purpose sequence models. However, training long convolutions is challenging, and kernel parameterizations must be able to learn long-range dependencies without overfitting. This work introduces reparameterized multi-resolution convolutions ($\\texttt{MRConv}$), a novel approach to parameterizing global convolutional kernels for long-sequence modeling. By leveraging multi-resolution convolutions, incorporating structural reparameterization and introducing learnable kernel decay, $\\texttt{MRConv}$ learns expressive long-range kernels that perform well across various data modalities. Our experiments demonstrate state-of-the-art performance on the Long Range Arena, Sequential CIFAR, and Speech Commands tasks among convolution models and linear-time transformers.
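The RoPE mechanics recapped in the abstract above (splitting a d-dimensional query or key into d/2 pairs and rotating each 2D pair by a position-dependent angle) can be written out directly. A minimal NumPy sketch of standard RoPE, independent of the paper's weight-angle analysis:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary position embedding: split a d-dimensional query/key into d/2
    pairs and rotate each 2D pair by an angle that grows with the position.

    x: vector of even length d; pos: integer token position."""
    d = x.shape[0]
    freqs = base ** (-np.arange(d // 2) * 2.0 / d)  # one frequency per pair
    theta = pos * freqs
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[0::2], x[1::2]                       # the d/2 2D pairs
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin                 # 2D rotation per pair
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.random.default_rng(0).normal(size=8)
# The score <rope(q, m), rope(k, n)> depends only on the offset m - n.
print(rope(q, pos=3))
```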
Moreover, we report improved performance on ImageNet classification by replacing 2D convolutions with 1D $\\texttt{MRConv}$ layers.", "pdf": "https://openreview.net/pdf/99fdd8c02bfd174bfdb9bf7e36f2b9016b370ad4.pdf"} {"title": "Instruction-Guided Visual Masking", "url": "https://openreview.net/forum?id=cA9gLXFaRo", "detail_url": "https://openreview.net/forum?id=cA9gLXFaRo", "authors": "Jinliang Zheng,Jianxiong Li,Sijie Cheng,Yinan Zheng,Jiaming Li,Jihao Liu,Yu Liu,Jingjing Liu,Xianyuan Zhan", "tags": "NIPS 2024,Poster", "abstract": "Instruction following is crucial for contemporary LLMs. However, when extended to the multimodal setting, it often suffers from misalignment between the specific textual instruction and the targeted local region of an image. To achieve more accurate and nuanced multimodal instruction following, we introduce Instruction-guided Visual Masking (IVM), a new versatile visual grounding model that is compatible with diverse multimodal models, such as LMMs and robot models. By constructing visual masks for instruction-irrelevant regions, IVM-enhanced multimodal models can effectively focus on task-relevant image regions to better align with complex instructions. Specifically, we design a visual masking data generation pipeline and create an IVM-Mix-1M dataset with 1 million image-instruction pairs. We further introduce a new learning technique, Discriminator Weighted Supervised Learning (DWSL), for preferential IVM training that prioritizes high-quality data samples. Experimental results on generic multimodal tasks such as VQA and embodied robotic control demonstrate the versatility of IVM, which, as a plug-and-play tool, significantly boosts the performance of diverse multimodal models, yielding new state-of-the-art results across challenging multimodal benchmarks. Code, model and data are available at https://github.com/2toinf/IVM.", "pdf": "https://openreview.net/pdf/8b3b0bd660a93a59b896bef55f7699a1728e3544.pdf"} {"title": "Panacea: Pareto Alignment via Preference Adaptation for LLMs", "url": "https://openreview.net/forum?id=gL5nT4y8fn", "detail_url": "https://openreview.net/forum?id=gL5nT4y8fn", "authors": "Yifan Zhong,Chengdong Ma,Xiaoyuan Zhang,Ziran Yang,Haojun Chen,Qingfu Zhang,Siyuan Qi,Yaodong Yang", "tags": "NIPS 2024,Poster", "abstract": "Current methods for large language model alignment typically use scalar human preference labels. However, this convention tends to oversimplify the multi-dimensional and heterogeneous nature of human preferences, leading to reduced expressivity and even misalignment. This paper presents Panacea, an innovative approach that reframes alignment as a multi-dimensional preference optimization problem. Panacea trains a single model capable of adapting online and Pareto-optimally to diverse sets of preferences without the need for further tuning. A major challenge here is using a low-dimensional preference vector to guide the model's behavior, despite it being governed by an overwhelmingly large number of parameters. To address this, Panacea is designed to use singular value decomposition (SVD)-based low-rank adaptation, which allows the preference vector to be simply injected online as singular values. Theoretically, we prove that Panacea recovers the entire Pareto front with common loss aggregation methods under mild conditions. Moreover, our experiments demonstrate, for the first time, the feasibility of aligning a single LLM to represent an exponentially vast spectrum of human preferences through various optimization methods. 
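A hedged sketch of how a preference vector can be injected as singular values in an SVD-based low-rank adapter, per the Panacea description above; the shapes, scaling factor, and concatenation layout are illustrative assumptions, not the paper's exact parameterization:

```python
import torch

def panacea_adapted_weight(W0, U, sigma_learned, V, preference, scale=1.0):
    # W0: (d_out, d_in) frozen pretrained weight.
    # U: (d_out, r + m) and V: (d_in, r + m): learned low-rank factors.
    # sigma_learned: (r,) learned singular values; preference: (m,) simplex
    # weights over the m alignment objectives, supplied online at inference.
    sigma = torch.cat([sigma_learned, scale * preference])
    return W0 + U @ torch.diag(sigma) @ V.T

# The same trained factors serve every preference, so moving along the
# Pareto front is just a matter of changing `preference` -- no retuning.
```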
Our work marks a step forward in effectively and efficiently aligning models to diverse and intricate human preferences in a controllable and Pareto-optimal manner.", "pdf": "https://openreview.net/pdf/d75386fd57b4e895ac0601ff11f73dc08f8a8d1d.pdf"} {"title": "FilterNet: Harnessing Frequency Filters for Time Series Forecasting", "url": "https://openreview.net/forum?id=ugL2D9idAD", "detail_url": "https://openreview.net/forum?id=ugL2D9idAD", "authors": "Kun Yi,Jingru Fei,Qi Zhang,Hui He,Shufeng Hao,Defu Lian,Wei Fan", "tags": "NIPS 2024,Poster", "abstract": "Given the ubiquitous presence of time series data across various domains, precise forecasting of time series holds significant importance and finds widespread real-world applications such as energy, weather, healthcare, etc. While numerous forecasters have been proposed using different network architectures, the Transformer-based models have state-of-the-art performance in time series forecasting. However, forecasters based on Transformers still suffer from vulnerability to high-frequency signals, limited computational efficiency, and a bottleneck in full-spectrum utilization; overcoming these issues is essential for accurately predicting time series with thousands of points. In this paper, we explore a novel perspective of signal processing for deep time series forecasting. Inspired by the filtering process, we introduce one simple yet effective network, namely FilterNet, built upon our proposed learnable frequency filters to extract key informative temporal patterns by selectively passing or attenuating certain components of time series signals. Concretely, we propose two kinds of learnable filters in the FilterNet: (i) Plain shaping filter, that adopts a universal frequency kernel for signal filtering and temporal modeling; (ii) Contextual shaping filter, that utilizes filtered frequencies examined in terms of their compatibility with input signals for\ndependency learning. Equipped with the two filters, FilterNet can approximately surrogate the linear and attention mappings widely adopted in time series literature, while enjoying superb abilities in handling high-frequency noises and utilizing the whole frequency spectrum that is beneficial for forecasting. Finally, we conduct extensive experiments on eight time series forecasting benchmarks, and experimental results have demonstrated our superior performance in terms of both effectiveness and efficiency compared with state-of-the-art methods. 
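A minimal sketch of frequency-domain filtering in the spirit of FilterNet's plain shaping filter described above; the learnable complex kernel and its all-ones initialization are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PlainShapingFilter(nn.Module):
    # Filters a batch of series (B, L) by elementwise multiplication with a
    # learnable kernel in the real-FFT domain, then transforms back.
    def __init__(self, seq_len):
        super().__init__()
        n_freq = seq_len // 2 + 1
        self.kernel = nn.Parameter(torch.ones(n_freq, dtype=torch.cfloat))

    def forward(self, x):
        X = torch.fft.rfft(x, dim=-1)          # (B, L//2 + 1), complex
        return torch.fft.irfft(X * self.kernel, n=x.shape[-1], dim=-1)
```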
Our code is available at$^1$.", "pdf": "https://openreview.net/pdf/742093ccdf0fec23ba128edac060f1cd661cc400.pdf"} {"title": "Divide-and-Conquer Posterior Sampling for Denoising Diffusion priors", "url": "https://openreview.net/forum?id=BOrut7M2X7", "detail_url": "https://openreview.net/forum?id=BOrut7M2X7", "authors": "Yazid Janati,Badr MOUFAD,Alain Oliviero Durmus,Eric Moulines,Jimmy Olsson", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in solving Bayesian inverse problems have spotlighted denoising diffusion models (DDMs) as effective priors.\nAlthough these have great potential, DDM priors yield complex posterior distributions that are challenging to sample from.\nExisting approaches to posterior sampling in this context address this problem either by retraining model-specific components, leading to stiff and cumbersome methods, or by introducing approximations with uncontrolled errors that affect the accuracy of the produced samples.\nWe present an innovative framework, divide-and-conquer posterior sampling, which leverages the inherent structure of DDMs to construct a sequence of intermediate posteriors that guide the produced samples to the target posterior.\nOur method significantly reduces the approximation error associated with current techniques without the need for retraining.\nWe demonstrate the versatility and effectiveness of our approach for a wide range of Bayesian inverse problems.\nThe code is available at \\url{https://github.com/Badr-MOUFAD/dcps}", "pdf": "https://openreview.net/pdf/8a10688dd85040cdfeddb7d0c0035580f4b7af41.pdf"} {"title": "COSMIC: Compress Satellite Image Efficiently via Diffusion Compensation", "url": "https://openreview.net/forum?id=itbKmreqUZ", "detail_url": "https://openreview.net/forum?id=itbKmreqUZ", "authors": "Ziyuan Zhang,Han Qiu,Zhang Maosen,Jun Liu,Bin Chen,Tianwei Zhang,Hewu Li", "tags": "NIPS 2024,Poster", "abstract": "With the rapidly increasing number of satellites in space and their enhanced capabilities, the amount of earth observation images collected by satellites is exceeding the transmission limits of satellite-to-ground links. Although existing learned image compression solutions achieve remarkable performance by using a sophisticated encoder to extract fruitful features for compression and a decoder to reconstruct, it is still hard to directly deploy those complex encoders on current satellites' embedded GPUs, with limited computing capability and power supply, to compress images in orbit. In this paper, we propose COSMIC, a simple yet effective learned compression solution to transmit satellite images. We first design a lightweight encoder (i.e. reducing FLOPs by 2.5~5X) on satellite to achieve a high image compression ratio to save satellite-to-ground links. Then, for reconstructions on the ground, to deal with the feature extraction ability degradation due to simplifying encoders, we propose a diffusion-based model to compensate image details when decoding. Our insight is that a satellite's earth observation photos are not just images but indeed multi-modal data with a natural Text-to-Image pairing, since they are collected with rich sensor data (e.g. coordinates, timestep, etc.) that can be used as the condition for diffusion generation. 
Extensive experiments show that COSMIC outperforms state-of-the-art baselines on both perceptual and distortion metrics.", "pdf": "https://openreview.net/pdf/f8325d8e008612d6c564178e52821f16a3d03c8f.pdf"} {"title": "Optimal Classification under Performative Distribution Shift", "url": "https://openreview.net/forum?id=3J5hvO5UaW", "detail_url": "https://openreview.net/forum?id=3J5hvO5UaW", "authors": "Edwige Cyffers,Muni Sreenivas Pydi,Jamal Atif,Olivier Capp\u00e9", "tags": "NIPS 2024,Poster", "abstract": "Performative learning addresses the increasingly pervasive situations in which algorithmic decisions may induce changes in the data distribution as a consequence of their public deployment. We propose a novel view in which these performative effects are modelled as push forward measures. This general framework encompasses existing models and enables novel performative gradient estimation methods, leading to more efficient and scalable learning strategies. For distribution shifts, unlike previous models which require full specification of the data distribution, we only assume knowledge of the shift operator that represents the performative changes. This approach can also be integrated into various change-of-variable-based models, such as VAEs or normalizing flows. Focusing on classification with a linear-in-parameters performative effect, we prove the convexity of the performative risk under a new set of assumptions. Notably, we do not limit the strength of performative effects but rather their direction, requiring only that classification becomes harder when deploying more accurate models. In this case, we also establish a connection with adversarially robust classification by reformulating the performative risk as a min-max variational problem. Finally, we illustrate our approach on synthetic and real datasets.", "pdf": "https://openreview.net/pdf/ccad5b419763a0dc6b2014e2fd9874930f4e3bab.pdf"} {"title": "Cross-Modality Perturbation Synergy Attack for Person Re-identification", "url": "https://openreview.net/forum?id=LONd7ACEjy", "detail_url": "https://openreview.net/forum?id=LONd7ACEjy", "authors": "Yunpeng Gong,Zhun Zhong,Yansong Qu,Zhiming Luo,Rongrong Ji,Min Jiang", "tags": "NIPS 2024,Poster", "abstract": "In recent years, there has been significant research focusing on addressing security concerns in single-modal person re-identification (ReID) systems that are based on RGB images. However, the safety of cross-modality scenarios, which are more commonly encountered in practical applications involving images captured by infrared cameras, has not received adequate attention. The main challenge in cross-modality ReID lies in effectively dealing with visual differences between different modalities. For instance, infrared images are typically grayscale, unlike visible images that contain color information. Existing attack methods have primarily focused on the characteristics of the visible image modality, overlooking the features of other modalities and the variations in data distribution among different modalities. This oversight can potentially undermine the effectiveness of these methods in image retrieval across diverse modalities. This study represents the first exploration into the security of cross-modality ReID models and proposes a universal perturbation attack specifically designed for cross-modality ReID. 
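To make the pushforward view in the performative-classification abstract above concrete, a hedged sketch of the risk it suggests (the notation is assumed, not the paper's): with data $z \sim \mathcal{D}$ and a shift operator $T_\theta$ representing the performative change,

\[ \mathcal{R}(\theta) \;=\; \mathbb{E}_{z \sim T_{\theta\#}\mathcal{D}}\big[\ell(\theta; z)\big] \;=\; \mathbb{E}_{z \sim \mathcal{D}}\big[\ell(\theta; T_\theta(z))\big], \]

so the performative gradient picks up an extra chain-rule term through $T_\theta(z)$. This is why knowing only the shift operator, rather than the full shifted distribution, can suffice for gradient estimation.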
This attack optimizes perturbations by leveraging gradients from diverse modality data, thereby disrupting the discriminator and reinforcing the differences between modalities. We conducted experiments on three widely used cross-modality datasets, namely RegDB, SYSU, and LLCM. The results not only demonstrate the effectiveness of our method but also provide insights for future improvements in the robustness of cross-modality ReID systems.", "pdf": "https://openreview.net/pdf/0ccd47c3b2176be7a582073d95b6ab80edf0fdb1.pdf"} {"title": "Why Warmup the Learning Rate? Underlying Mechanisms and Improvements", "url": "https://openreview.net/forum?id=NVl4SAmz5c", "detail_url": "https://openreview.net/forum?id=NVl4SAmz5c", "authors": "Dayal Singh Kalra,Maissam Barkeshli", "tags": "NIPS 2024,Poster", "abstract": "In modern deep learning, it is common to warm up the learning rate $\\eta$, often by a linear schedule between $\\eta_{\\text{init}} = 0$ and a predetermined target $\\eta_{\\text{trgt}}$. In this paper, we show through systematic experiments with SGD and Adam that the overwhelming benefit of warmup arises from allowing the network to tolerate larger $\\eta_{\\text{trgt}}$ by forcing the network to more well-conditioned areas of the loss landscape. The ability to handle larger target learning rates in turn makes hyperparameter tuning more robust while improving the final performance of the network. We uncover different regimes of operation during the warmup period, depending on whether the network training starts off in a progressive sharpening or sharpness reduction phase, which in turn depends on the initialization and parameterization. Using these insights, we show how $\\eta_{\\text{init}}$ can be properly chosen by utilizing the loss catapult mechanism, which saves on the number of warmup steps, in some cases completely eliminating the need for warmup. We also suggest an initialization for the variance in Adam, which provides benefits similar to warmup.", "pdf": "https://openreview.net/pdf/5517a080e116a110eb98fbcfaa5361dd6034bb25.pdf"} {"title": "An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations", "url": "https://openreview.net/forum?id=780uXnA4wN", "detail_url": "https://openreview.net/forum?id=780uXnA4wN", "authors": "Shengbo Wang,Jose Blanchet,Peter Glynn", "tags": "NIPS 2024,Poster", "abstract": "Overparameterized stochastic differential equation (SDE) models have achieved remarkable success in various complex environments, such as PDE-constrained optimization, stochastic control and reinforcement learning, financial engineering, and neural SDEs. These models often feature system evolution coefficients that are parameterized by a high-dimensional vector $\\theta \\in \\mathbb{R}^n$, aiming to optimize expectations of the SDE, such as a value function, through stochastic gradient ascent. Consequently, designing efficient gradient estimators for which the computational complexity scales well with $n$ is of significant interest. This paper introduces a novel unbiased stochastic gradient estimator\u2014the generator gradient estimator\u2014for which the computation time remains stable in $n$. In addition to establishing the validity of our methodology for general SDEs with jumps, we also perform numerical experiments that test our estimator in linear-quadratic control problems parameterized by high-dimensional neural networks. 
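A minimal sketch of the linear warmup schedule the warmup paper above analyzes, ramping between $\eta_{\text{init}}$ and $\eta_{\text{trgt}}$; holding the rate constant after warmup is an assumption, since a decay schedule usually follows in practice:

```python
def warmup_lr(step, warmup_steps, eta_init=0.0, eta_trgt=1e-3):
    # Linear ramp from eta_init to eta_trgt over warmup_steps, then constant.
    if step >= warmup_steps:
        return eta_trgt
    return eta_init + (step / warmup_steps) * (eta_trgt - eta_init)

# Usage with a hypothetical optimizer loop:
# for step in range(total_steps):
#     for group in optimizer.param_groups:
#         group["lr"] = warmup_lr(step, warmup_steps=500)
```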
The results show a significant improvement in efficiency compared to the widely used pathwise differentiation method: Our estimator achieves near-constant computation times, increasingly outperforms its counterpart as $n$ increases, and does so without compromising estimation variance. These empirical findings highlight the potential of our proposed methodology for optimizing SDEs in contemporary applications.", "pdf": "https://openreview.net/pdf/22999cd3594eee49ea82b8e39018aa638c6b893b.pdf"} {"title": "Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space", "url": "https://openreview.net/forum?id=CLxcLPfARc", "detail_url": "https://openreview.net/forum?id=CLxcLPfARc", "authors": "Leo Schwinn,David Dobre,Sophie Xhonneux,Gauthier Gidel,Stephan G\u00fcnnemann", "tags": "NIPS 2024,Poster", "abstract": "Current research in adversarial robustness of LLMs focuses on \\textit{discrete} input manipulations in the natural language space, which can be directly transferred to \\textit{closed-source} models. However, this approach neglects the steady progression of \\textit{open-source} models. As open-source models advance in capability, ensuring their safety becomes increasingly imperative. Yet, attacks tailored to open-source LLMs that exploit full model access remain largely unexplored. We address this research gap and propose the \\textit{embedding space attack}, which directly attacks the \\textit{continuous} embedding representation of input tokens.\nWe find that embedding space attacks circumvent model alignments and trigger harmful behaviors more efficiently than discrete attacks or model fine-tuning. Additionally, we demonstrate that models compromised by embedding attacks can be used to create discrete jailbreaks in natural language. Lastly, we present a novel threat model in the context of unlearning and show that embedding space attacks can extract supposedly deleted information from unlearned LLMs across multiple datasets and models. Our findings highlight embedding space attacks as an important threat model in open-source LLMs.", "pdf": "https://openreview.net/pdf/1aed0564b2d4e1c4dec60e34c8980ce373bdcf07.pdf"} {"title": "EnOF-SNN: Training Accurate Spiking Neural Networks via Enhancing the Output Feature", "url": "https://openreview.net/forum?id=SpcEwP6EYt", "detail_url": "https://openreview.net/forum?id=SpcEwP6EYt", "authors": "Yufei Guo,Weihang Peng,Xiaode Liu,Yuanpei Chen,Yuhan Zhang,Xin Tong,Zhou Jie,Zhe Ma", "tags": "NIPS 2024,Poster", "abstract": "Spiking neural networks (SNNs) have gained more and more interest as one of the energy-efficient alternatives to conventional artificial neural networks (ANNs). They exchange 0/1 spikes to process information, so most of the multiplications in networks can be replaced by additions. However, binary spike feature maps will limit the expressiveness of the SNN and result in unsatisfactory performance compared with ANNs. \nIt is shown that a rich output feature representation (i.e., the feature vector before the classifier) is beneficial to training an accurate model in ANNs for classification. 
\nWe wonder whether the same holds for SNNs and how to improve the feature representation of the SNN.\nTo this end, we materialize this idea in two specially designed methods for SNNs.\nFirst, inspired by ANN-SNN methods showing that directly copying the weight parameters from a trained ANN, with light modification, to a homogeneous SNN can yield a well-performing SNN, we use the rich information in the weight parameters of the trained ANN counterpart to guide the feature representation learning of the SNN. \nIn particular, we feed the SNN's and the ANN's feature representations of the same input to the ANN's classifier to produce the SNN's and the ANN's outputs respectively, and then align the features with a KL-divergence loss as in knowledge distillation methods; we call this the L_AF loss.\nIt can be seen as a novel and effective knowledge distillation method specially designed for the SNN that draws on both knowledge distillation and ANN-SNN methods. Various ablation studies show that the L_AF loss is more powerful than the vanilla knowledge distillation method.\nSecond, we replace the last Leaky Integrate-and-Fire (LIF) activation layer with a ReLU activation layer to generate the output feature, so that a more powerful SNN with a full-precision feature representation can be achieved with only a little extra computation.\nExperimental results show that our method consistently outperforms the current state-of-the-art algorithms on both popular non-spiking static and neuromorphic datasets. We provide an extremely simple but effective way to train high-accuracy spiking neural networks.", "pdf": "https://openreview.net/pdf/5a4dfaf8dc6861efa8e8356b3bd86743ab98838d.pdf"} {"title": "Lower Bounds of Uniform Stability in Gradient-Based Bilevel Algorithms for Hyperparameter Optimization", "url": "https://openreview.net/forum?id=u3mZzd0Pdx", "detail_url": "https://openreview.net/forum?id=u3mZzd0Pdx", "authors": "Rongzhen Wang,Chenyu Zheng,Guoqiang Wu,Xu Min,Xiaolu Zhang,JUN ZHOU,Chongxuan Li", "tags": "NIPS 2024,Poster", "abstract": "Gradient-based bilevel programming leverages unrolling differentiation (UD) or the implicit function theorem (IFT) to solve hyperparameter optimization (HO) problems, and is proven effective and scalable in practice. \nTo understand their generalization behavior, existing works establish upper bounds on the uniform stability of these algorithms, while their tightness is still unclear. \nTo this end, this paper attempts to establish stability lower bounds for UD-based and IFT-based algorithms. \nA central technical challenge arises from the dependency of each outer-level update on the concurrent stage of inner optimization in bilevel programming. \nTo address this problem, we introduce lower-bounded expansion properties to characterize the instability in update rules, which can serve as general tools for lower-bound analysis. 
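A minimal sketch of the L_AF-style feature alignment described in the EnOF-SNN abstract above: both feature vectors pass through the frozen ANN classifier and the resulting distributions are aligned with KL divergence; the temperature and reduction are assumptions:

```python
import torch.nn.functional as F

def l_af_loss(snn_feature, ann_feature, ann_classifier, tau=1.0):
    # ann_classifier: the trained ANN's (frozen) final linear layer.
    logits_snn = ann_classifier(snn_feature)
    logits_ann = ann_classifier(ann_feature).detach()   # teacher side: no grad
    return F.kl_div(
        F.log_softmax(logits_snn / tau, dim=-1),
        F.softmax(logits_ann / tau, dim=-1),
        reduction="batchmean",
    )
```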
\nThese properties guarantee the hyperparameter divergence at the outer level and the Lipschitz constant of inner output at the inner level in the context of HO.\nGuided by these insights, we construct a quadratic example that yields tight lower bounds for the UD-based algorithm and meaningful bounds for a representative IFT-based algorithm.\nOur tight result indicates that uniform stability has reached its limit in stability analysis for the UD-based algorithm.", "pdf": "https://openreview.net/pdf/9805ac8ebc0d7d218ea746ba7ae5f031fac46932.pdf"} {"title": "Leveraging an ECG Beat Diffusion Model for Morphological Reconstruction from Indirect Signals", "url": "https://openreview.net/forum?id=Eu0nYM4BPo", "detail_url": "https://openreview.net/forum?id=Eu0nYM4BPo", "authors": "Lisa Bedin,Gabriel Cardoso,Josselin Duchateau,Remi Dubois,Eric Moulines", "tags": "NIPS 2024,Poster", "abstract": "Electrocardiogram (ECG) signals provide essential information about the heart's condition and are widely used for diagnosing cardiovascular diseases. The morphology of a single heartbeat over the available leads is a primary biosignal for monitoring cardiac conditions. However, analyzing heartbeat morphology can be challenging due to noise and artifacts, missing leads, and a lack of annotated data.\nGenerative models, such as denoising diffusion generative models (DDMs), have proven successful in generating complex data. We introduce $\\texttt{BeatDiff}$, a light-weight DDM tailored for the morphology of multiple leads heartbeats.\nWe then show that many important ECG downstream tasks can be formulated as conditional generation methods in a Bayesian inverse problem framework using $\\texttt{BeatDiff}$ as priors. We propose $\\texttt{EM-BeatDiff}$, an Expectation-Maximization algorithm, to solve this conditional generation tasks without fine-tuning. We illustrate our results with several tasks, such as removal of ECG noise and artifacts (baseline wander, electrode motion), reconstruction of a 12-lead ECG from a single lead (useful for ECG reconstruction of smartwatch experiments), and unsupervised explainable anomaly detection. Numerical experiments show that the combination of $\\texttt{BeatDiff}$ and $\\texttt{EM-BeatDiff}$ outperforms SOTA methods for the problems considered in this work.", "pdf": "https://openreview.net/pdf/3f7207fe3b1315a40ca57ff01e47acfdbfb18e9e.pdf"} {"title": "Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler", "url": "https://openreview.net/forum?id=C3JCwbMXbU", "detail_url": "https://openreview.net/forum?id=C3JCwbMXbU", "authors": "Kunyu Peng,Di Wen,Kailun Yang,Ao Luo,Yufan Chen,Jia Fu,M. Saquib Sarfraz,Alina Roitberg,Rainer Stiefelhagen", "tags": "NIPS 2024,Poster", "abstract": "In Open-Set Domain Generalization (OSDG), the model is exposed to both new variations of data appearance (domains) and open-set conditions, where both known and novel categories are present at test time. The challenges of this task arise from the dual need to generalize across diverse domains and accurately quantify category novelty, which is critical for applications in dynamic environments. Recently, meta-learning techniques have demonstrated superior results in OSDG, effectively orchestrating the meta-train and -test tasks by employing varied random categories and predefined domain partition strategies. 
These approaches prioritize a well-designed training schedule over traditional methods that focus primarily on data augmentation and the enhancement of discriminative feature learning. \nThe prevailing meta-learning models in OSDG typically utilize a predefined sequential domain scheduler to structure data partitions. However, a crucial aspect that remains inadequately explored is the influence brought by strategies of domain schedulers during training. \nIn this paper, we observe that an adaptive domain scheduler benefits more in OSDG compared with prefixed sequential and random domain schedulers. We propose the Evidential Bi-Level Hardest Domain Scheduler (EBiL-HaDS) to achieve an adaptive domain scheduler. This method strategically sequences domains by assessing their reliabilities in utilizing a follower network, trained with confidence scores learned in an evidential manner, regularized by max rebiasing discrepancy, and optimized in a bilevel manner. We verify our approach on three OSDG benchmarks, i.e., PACS, DigitsDG, and OfficeHome. The results show that our method substantially improves OSDG performance and achieves more discriminative embeddings for both the seen and unseen categories, underscoring the advantage of a judicious domain scheduler for the generalizability to unseen domains and unseen categories. The source code is publicly available at https://github.com/KPeng9510/EBiL-HaDS.", "pdf": "https://openreview.net/pdf/375b1b4b1dc1c7ead98e69341a731730f0fff488.pdf"} {"title": "Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in LLMs", "url": "https://openreview.net/forum?id=CVpuVe1N22", "detail_url": "https://openreview.net/forum?id=CVpuVe1N22", "authors": "Zhiyuan Hu,Chumin Liu,Xidong Feng,Yilun Zhao,See-Kiong Ng,Anh Tuan Luu,Junxian He,Pang Wei Koh,Bryan Hooi", "tags": "NIPS 2024,Poster", "abstract": "In the face of uncertainty, the ability to *seek information* is of fundamental importance. In many practical applications, such as medical diagnosis and troubleshooting, the information needed to solve the task is not initially given, and has to be actively sought by asking follow-up questions (for example, a doctor asking a patient for more details about their symptoms). In this work, we introduce **Uncertainty of Thoughts (UoT)**, an algorithm to augment large language models with the ability to actively seek information by asking effective questions. UoT combines:\n\n1. An *uncertainty-aware simulation approach* which enables the model to simulate possible future scenarios and how likely they are to occur,\n2. *Uncertainty-based rewards* motivated by information gain which incentivizes the model to seek information, and\n3. 
A *reward propagation scheme* to select the optimal question to ask in a way that maximizes the expected reward.\n\nIn experiments on medical diagnosis, troubleshooting and the `20 Questions' game, UoT achieves an average performance improvement of 38.1% in the rate of successful task completion across multiple LLMs compared with direct prompting, and also improves efficiency (i.e., the number of questions needed to complete the task).", "pdf": "https://openreview.net/pdf/9dbb738b24d037d83f92279fae0409ecaf722228.pdf"} {"title": "Reflective Multi-Agent Collaboration based on Large Language Models", "url": "https://openreview.net/forum?id=wWiAR5mqXq", "detail_url": "https://openreview.net/forum?id=wWiAR5mqXq", "authors": "Xiaohe Bo,Zeyu Zhang,Quanyu Dai,Xueyang Feng,Lei Wang,Rui Li,Xu Chen,Ji-Rong Wen", "tags": "NIPS 2024,Poster", "abstract": "Benefiting from the powerful language expression and planning capabilities of Large Language Models (LLMs), LLM-based autonomous agents have achieved promising performance in various downstream tasks. Recently, based on the development of single-agent systems, researchers propose to construct LLM-based multi-agent systems to tackle more complicated tasks. In this paper, we propose a novel framework, named COPPER, to enhance the collaborative capabilities of LLM-based agents with the self-reflection mechanism. To improve the quality of reflections, we propose to fine-tune a shared reflector, which automatically tunes the prompts of actor models using our counterfactual PPO mechanism. On the one hand, we propose counterfactual rewards to assess the contribution of a single agent\u2019s reflection within the system, alleviating the credit assignment problem. On the other hand, we propose to train a shared reflector, which enables the reflector to generate personalized reflections according to agent roles, while reducing the computational resource requirements and improving training stability. We conduct experiments on three datasets to evaluate the performance of our model in multi-hop question answering, mathematics, and chess scenarios. Experimental results show that COPPER possesses stronger reflection capabilities and exhibits excellent generalization performance across different actor models.", "pdf": "https://openreview.net/pdf/3b17b8aba5d866085a47c8258c92406af2fc2e10.pdf"} {"title": "COLD: Causal reasOning in cLosed Daily activities", "url": "https://openreview.net/forum?id=7Mo1NOosNT", "detail_url": "https://openreview.net/forum?id=7Mo1NOosNT", "authors": "Abhinav Joshi,Areeb Ahmad,Ashutosh Modi", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have shown state-of-the-art performance in a variety of tasks, including arithmetic and reasoning; however, to gauge the intellectual capabilities of LLMs, causal reasoning has become a reliable proxy for validating a general understanding of the mechanics and intricacies of the world similar to humans. Previous works in natural language processing (NLP) have either focused on open-ended causal reasoning via causal commonsense reasoning (CCR) or framed a symbolic representation-based question answering for theoretically backed-up analysis via a causal inference engine. The former adds an advantage of real-world grounding but lacks theoretically backed-up analysis/validation, whereas the latter is far from real-world grounding. 
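Stepping back to the UoT abstract above, a minimal sketch of the information-gain quantity that its uncertainty-based rewards are motivated by, for a yes/no question over a discrete hypothesis set; the simulation producing p_yes_given_h is assumed:

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def expected_info_gain(prior, p_yes_given_h):
    # prior: {hypothesis: probability}; p_yes_given_h: {hypothesis: P(answer
    # is 'yes' | hypothesis)}, obtained by simulating future scenarios.
    p_yes = sum(prior[h] * p_yes_given_h[h] for h in prior)
    gain = entropy(prior)
    for ans_prob, lik in ((p_yes, p_yes_given_h),
                          (1 - p_yes, {h: 1 - p for h, p in p_yes_given_h.items()})):
        if ans_prob > 0:
            posterior = {h: prior[h] * lik[h] / ans_prob for h in prior}
            gain -= ans_prob * entropy(posterior)
    return gain

# The candidate question with the largest expected gain is asked next.
```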
In this work, we bridge this gap by proposing the COLD (Causal reasOning in cLosed Daily activities) framework, which is built upon human understanding of daily real-world activities to reason about the causal nature of events. We show that the proposed framework facilitates the creation of an enormous number of causal queries (\u223c 9 million) and comes close to a mini-Turing test, simulating causal reasoning to evaluate the understanding of a daily real-world task. We evaluate multiple LLMs on the created causal queries and find that causal reasoning is challenging even for activities trivial to humans. We further explore the causal reasoning abilities of LLMs using the backdoor criterion to determine the causal strength between events.", "pdf": "https://openreview.net/pdf/015330ba1dcf5481f996a511f4b038f8acfab65f.pdf"} {"title": "Improved Regret of Linear Ensemble Sampling", "url": "https://openreview.net/forum?id=6SSzMq3WTn", "detail_url": "https://openreview.net/forum?id=6SSzMq3WTn", "authors": "Harin Lee,Min-hwan Oh", "tags": "NIPS 2024,Poster", "abstract": "In this work, we close the fundamental gap between theory and practice by providing an improved regret bound for linear ensemble sampling. We prove that with an ensemble size logarithmic in $T$, linear ensemble sampling can achieve a frequentist regret bound of $\\tilde{\\mathcal{O}}(d^{3/2}\\sqrt{T})$, matching state-of-the-art results for randomized linear bandit algorithms, where $d$ and $T$ are the dimension of the parameter and the time horizon respectively. Our approach introduces a general regret analysis framework for linear bandit algorithms. Additionally, we reveal a significant relationship between linear ensemble sampling and Linear Perturbed-History Exploration (LinPHE), showing that LinPHE is a special case of linear ensemble sampling when the ensemble size equals $T$. This insight allows us to derive a new regret bound of $\\tilde{\\mathcal{O}}(d^{3/2}\\sqrt{T})$ for LinPHE, independent of the number of arms. Our contributions advance the theoretical foundation of ensemble sampling, bringing its regret bounds in line with the best known bounds for other randomized exploration algorithms.", "pdf": "https://openreview.net/pdf/91aa318cf3abdcf89e865f2a257fd7ae0d42e98c.pdf"} {"title": "Physics-Constrained Comprehensive Optical Neural Networks", "url": "https://openreview.net/forum?id=QhUXU2ilIG", "detail_url": "https://openreview.net/forum?id=QhUXU2ilIG", "authors": "Yanbing Liu,Jianwei Qin,Yan Liu,Xi Yue,Xun Liu,Guoqing Wang,Tianyu Li,Fangwei Ye,Wei Li", "tags": "NIPS 2024,Poster", "abstract": "With the advantages of low latency, low power consumption, and high parallelism, optical neural networks (ONN) offer a promising solution for time-sensitive and resource-limited artificial intelligence applications. However, the performance of the ONN model is often diminished by the gap between the ideal simulated system and the actual physical system. To bridge the gap, this work conducts extensive experiments to investigate systematic errors in the optical physical system within the context of image classification tasks. Our investigation identifies two quantifiable errors\u2014light source instability and exposure time mismatches\u2014that significantly impact the prediction performance of ONN. 
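A minimal sketch of linear ensemble sampling as characterized in the regret paper above: maintain a small ensemble of perturbed least-squares estimates and act greedily with respect to one sampled member per round; the Gaussian reward perturbation is an illustrative choice:

```python
import numpy as np

def linear_ensemble_sampling(arms, reward_fn, T, m=16, lam=1.0, sigma=0.1,
                             seed=0):
    # arms: (K, d) feature vectors; m: ensemble size (logarithmic in T
    # suffices per the paper's bound).
    rng = np.random.default_rng(seed)
    d = arms.shape[1]
    V = lam * np.eye(d)              # shared regularized Gram matrix
    b = np.zeros((m, d))             # one perturbed target vector per member
    for t in range(T):
        j = rng.integers(m)          # sample an ensemble member
        theta = np.linalg.solve(V, b[j])
        a = arms[int(np.argmax(arms @ theta))]
        r = reward_fn(a)
        V += np.outer(a, a)
        # each member sees the reward plus its own fresh perturbation
        b += a[None, :] * (r + sigma * rng.standard_normal((m, 1)))
    return V, b
```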
To address these systematic errors, a physics-constrained ONN learning framework is constructed, including a well-designed loss function to mitigate the effect of light fluctuations, a CCD adjustment strategy to alleviate the effects of exposure time mismatches, and a \u2018physics-prior-based\u2019 error compensation network to manage other systematic errors, ensuring consistent light intensity across experimental results and simulations. In our experiments, the proposed method achieved a test classification accuracy of 96.5% on the MNIST dataset, a substantial improvement over the 61.6% achieved with the original ONN. For the more challenging QuickDraw16 and Fashion MNIST datasets, experimental accuracy improved from 63.0% to 85.7% and from 56.2% to 77.5%, respectively. Moreover, the comparison results further demonstrate the effectiveness of the proposed physics-constrained ONN learning framework over state-of-the-art ONN approaches. This lays the groundwork for more robust and precise optical computing applications.", "pdf": "https://openreview.net/pdf/d80f142c15db215c3cf2f4843b8992027fc3b0e3.pdf"} {"title": "Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models", "url": "https://openreview.net/forum?id=QHRLFdhkLu", "detail_url": "https://openreview.net/forum?id=QHRLFdhkLu", "authors": "Shi Luohe,Yao Yao,Zuchao Li,Lefei Zhang,hai zhao", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities. In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently two mainstream methods for augmenting LLMs to downstream tasks. ICL typically constructs a few-shot learning scenario, either manually or by setting up a Retrieval-Augmented Generation (RAG) system, helping models quickly grasp domain knowledge or question-answering patterns without changing model parameters. However, this approach involves trade-offs, such as slower inference speed and increased space occupancy. PEFT assists the model in adapting to tasks through minimal parameter modifications, but the training process still demands high hardware requirements, even with a small number of parameters involved. To address these challenges, we propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning, maintaining low inference costs. RTD constructs a reference datastore from the provided training examples and optimizes the LLM's final vocabulary distribution by flexibly selecting suitable references based on the input, resulting in more trustable responses and enabling the model to adapt to downstream tasks at a low cost. Experimental evaluations on various LLMs using different benchmarks demonstrate that RTD establishes a new paradigm for augmenting models to downstream tasks. Furthermore, our method exhibits strong orthogonality with traditional methods, allowing for concurrent usage. 
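In the spirit of the RTD description above, a hedged sketch of reference-datastore decoding; the retrieval, weighting, and interpolation rule follow the familiar kNN-LM recipe and are assumptions, not RTD's exact formulation:

```python
import numpy as np

def reference_adjusted_distribution(p_lm, query, keys, next_tokens,
                                    vocab_size, k=8, lam=0.3, tau=1.0):
    # p_lm: (V,) model's next-token distribution; query: (d,) hidden state;
    # keys: (N, d) stored hidden states from the training examples;
    # next_tokens: (N,) the token that followed each stored state.
    dists = np.linalg.norm(keys - query, axis=1)
    idx = np.argsort(dists)[:k]
    w = np.exp(-dists[idx] / tau)
    w /= w.sum()
    p_ref = np.zeros(vocab_size)
    for weight, tok in zip(w, next_tokens[idx]):
        p_ref[tok] += weight
    return (1.0 - lam) * p_lm + lam * p_ref   # blended, more "trustable" dist.
```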
Our code can be found at https://github.com/ShiLuohe/ReferenceTrustableDecoding.", "pdf": "https://openreview.net/pdf/1b93628bc996a9feeaad3c742ff5fe8e0c5ca616.pdf"} {"title": "Addressing Hidden Confounding with Heterogeneous Observational Datasets for Recommendation", "url": "https://openreview.net/forum?id=6CFHg7exjY", "detail_url": "https://openreview.net/forum?id=6CFHg7exjY", "authors": "Yanghao Xiao,Haoxuan Li,Yongqiang Tang,Wensheng Zhang", "tags": "NIPS 2024,Poster", "abstract": "The collected data in recommender systems generally suffers from selection bias. Considerable works have been proposed to address selection bias induced by observed user and item features, but they fail when hidden features (e.g., user age or salary) that affect both the user selection mechanism and feedback exist, which is called hidden confounding. To tackle this issue, methods based on sensitivity analysis and leveraging a few randomized controlled trial (RCT) data for model calibration are proposed. However, the former relies on strong assumptions of hidden confounding strength, whereas the latter relies on the expensive RCT data, thereby limiting their applicability in real-world scenarios. In this paper, we propose to employ heterogeneous observational data to address hidden confounding, wherein some data is subject to hidden confounding while the remaining is not. We argue that such a setup is more aligned with practical scenarios, especially when some users do not have complete personal information (thus assumed with hidden confounding), while others do have (thus assumed without hidden confounding). To achieve unbiased learning, we propose a novel meta-learning based debiasing method called MetaDebias. This method explicitly models oracle error imputation and hidden confounding bias, and utilizes bi-level optimization for model training. Extensive experiments on three public datasets validate that our method achieves state-of-the-art performance in the presence of hidden confounding, regardless of RCT data availability.", "pdf": "https://openreview.net/pdf/ad0fb4062a8666b44cfae516c8e2fc437eba4223.pdf"} {"title": "To Learn or Not to Learn, That is the Question \u2014 A Feature-Task Dual Learning Model of Perceptual Learning", "url": "https://openreview.net/forum?id=g3MbZOw0qO", "detail_url": "https://openreview.net/forum?id=g3MbZOw0qO", "authors": "Xiao Liu,Muyang Lyu,Cong Yu,Si Wu", "tags": "NIPS 2024,Poster", "abstract": "Perceptual learning refers to the practices through which participants learn to improve their performance in perceiving sensory stimuli. Two seemingly conflicting phenomena of specificity and transfer have been widely observed in perceptual learning. \n\nHere, we propose a dual-learning model to reconcile these two phenomena. The model consists of two learning processes. One is task-based learning, which is fast and enables the brain to adapt to a task rapidly by using existing feature representations. The other is feature-based learning, which is slow and enables the brain to improve feature representations to match the statistical change of the environment. Associated with different training paradigms, the interactions between these two learning processes induce the rich phenomena of perceptual learning. Specifically, in the training paradigm where the same stimulus condition is presented excessively, feature-based learning is triggered, which incurs specificity, while in the paradigm where the stimulus condition varies during the training, task-based learning dominates to induce the transfer effect. 
As the number of training sessions under the same stimulus condition increases, a transition from transfer to specificity occurs. \n\nWe demonstrate that the dual-learning model can account for both the specificity and transfer phenomena observed in classical psychophysical experiments. We hope that this study gives us insight into understanding how the brain balances the accomplishment of a new task and the consumption of learning effort.", "pdf": "https://openreview.net/pdf/ff4fc8546ef6cfcf5051c31ab205ded589759897.pdf"} {"title": "Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model", "url": "https://openreview.net/forum?id=zTu0QEpvtZ", "detail_url": "https://openreview.net/forum?id=zTu0QEpvtZ", "authors": "Mingyang Yi,Aoxue Li,Yi Xin,Zhenguo Li", "tags": "NIPS 2024,Poster", "abstract": "Recently, the strong latent Diffusion Probabilistic Model (DPM) has been applied to high-quality Text-to-Image (T2I) generation (e.g., Stable Diffusion), by injecting the encoded target text prompt into the gradually denoised diffusion image generator. Despite the success of DPM in practice, the mechanism behind it remains to be explored. To fill this gap, we begin by examining the intermediate statuses during the gradual denoising generation process in DPM. The empirical observations indicate that the shape of the image is reconstructed after the first few denoising steps, and then the image is filled with details (e.g., texture). This phenomenon occurs because the low-frequency (shape-relevant) signal of the noisy image is not corrupted until the final stage of the forward noising process in DPM, which corresponds to the initial stage of generation. Inspired by the observations, we proceed to explore the influence of each token in the text prompt during the two stages. After a series of T2I generation experiments conditioned on a set of text prompts, we conclude that in the earlier generation stage, the image is mostly decided by the special token [\\texttt{EOS}] in the text prompt, and the information in the text prompt is already conveyed in this stage. After that, the diffusion model completes the details of the generated images using information from the images themselves. Finally, we propose to apply this observation to accelerate the process of T2I generation by properly removing text guidance, which accelerates the sampling by 25\\%+.", "pdf": "https://openreview.net/pdf/e5a980c943643aa24ee25b4d6f1f338b4cf65964.pdf"} {"title": "DiffuserLite: Towards Real-time Diffusion Planning", "url": "https://openreview.net/forum?id=2TXDHUqyrQ", "detail_url": "https://openreview.net/forum?id=2TXDHUqyrQ", "authors": "Zibin Dong,Jianye HAO,Yifu Yuan,Fei Ni,Yitian Wang,Pengyi Li,YAN ZHENG", "tags": "NIPS 2024,Poster", "abstract": "Diffusion planning has been recognized as an effective decision-making paradigm in various domains. The capability of generating high-quality long-horizon trajectories makes it a promising research direction. However, existing diffusion planning methods suffer from low decision-making frequencies due to the expensive iterative sampling cost. To alleviate this, we introduce DiffuserLite, a super fast and lightweight diffusion planning framework, which employs a planning refinement process (PRP) to generate coarse-to-fine-grained trajectories, significantly reducing the modeling of redundant information and leading to notable increases in decision-making frequency. 
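A minimal sketch of the acceleration suggested by the T2I analysis above: run classifier-free guidance with the text prompt only for the early, shape-forming steps, then drop the text branch. The cutoff fraction, guidance scale, and injected `step_fn` update rule are assumptions:

```python
def sample_with_truncated_guidance(eps_model, step_fn, x, timesteps,
                                   prompt_emb, null_emb, g=7.5, keep_frac=0.3):
    # eps_model(x, t, cond) -> noise prediction; step_fn(x, eps, t) -> next x.
    n = len(timesteps)
    for i, t in enumerate(timesteps):
        eps_uncond = eps_model(x, t, null_emb)
        if i < keep_frac * n:
            # early steps: the shape is being decided, keep text guidance
            eps = eps_uncond + g * (eps_model(x, t, prompt_emb) - eps_uncond)
        else:
            # later steps: details come from the image itself, so the
            # conditional forward pass is skipped entirely
            eps = eps_uncond
        x = step_fn(x, eps, t)
    return x
```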
Our experimental results demonstrate that DiffuserLite achieves a decision-making frequency of $122.2$Hz ($112.7$x faster than predominant frameworks) and reaches state-of-the-art performance on D4RL, Robomimic, and FinRL benchmarks. In addition, DiffuserLite can also serve as a flexible plugin to increase the decision-making frequency of other diffusion planning algorithms, providing a structural design reference for future works. More details and visualizations are available at https://diffuserlite.github.io/.", "pdf": "https://openreview.net/pdf/4b18aa2be461b56e8f97f7fe0ef446f400a953ed.pdf"} {"title": "Dual Critic Reinforcement Learning under Partial Observability", "url": "https://openreview.net/forum?id=GruuYVTGXV", "detail_url": "https://openreview.net/forum?id=GruuYVTGXV", "authors": "Jinqiu Li,Enmin Zhao,Tong Wei,Junliang Xing,Shiming Xiang", "tags": "NIPS 2024,Poster", "abstract": "Partial observability in environments poses significant challenges that impede the formation of effective policies in reinforcement learning. Prior research has shown that borrowing the complete state information can enhance sample efficiency. This strategy, however, frequently encounters unstable learning with high variance in practical applications due to the over-reliance on complete information. This paper introduces DCRL, a Dual Critic Reinforcement Learning framework designed to adaptively harness full-state information during training to reduce variance for optimized online performance. In particular, DCRL incorporates two distinct critics: an oracle critic with access to complete state information and a standard critic functioning within the partially observable context. It innovates a synergistic strategy to meld the strengths of the oracle critic for efficiency improvement and the standard critic for variance reduction, featuring a novel mechanism for seamless transition and weighting between them. We theoretically prove that DCRL mitigates the learning variance while maintaining unbiasedness. Extensive experimental analyses across the Box2D and Box3D environments have verified DCRL's superior performance. The source code is available in the supplementary.", "pdf": "https://openreview.net/pdf/ba8a0334d227ef4fd53bbda11523ac92b0261781.pdf"} {"title": "MambaLRP: Explaining Selective State Space Sequence Models", "url": "https://openreview.net/forum?id=2n1Ysn1EDl", "detail_url": "https://openreview.net/forum?id=2n1Ysn1EDl", "authors": "Farnoush Rezaei Jafari,Gr\u00e9goire Montavon,Klaus Robert Muller,Oliver Eberle", "tags": "NIPS 2024,Poster", "abstract": "Recent sequence modeling approaches using selective state space sequence models, referred to as Mamba models, have seen a surge of interest. These models allow efficient processing of long sequences in linear time and are rapidly being adopted in a wide range of applications such as language modeling, demonstrating promising performance. To foster their reliable use in real-world scenarios, it is crucial to augment their transparency. Our work bridges this critical gap by bringing explainability, particularly Layer-wise Relevance Propagation (LRP), to the Mamba architecture. Guided by the axiom of relevance conservation, we identify specific components in the Mamba architecture, which cause unfaithful explanations. To remedy this issue, we propose MambaLRP, a novel algorithm within the LRP framework, which ensures a more stable and reliable relevance propagation through these components. 
Our proposed method is theoretically sound and excels in achieving state-of-the-art explanation performance across a diverse range of models and datasets. Moreover, MambaLRP facilitates a deeper inspection of Mamba architectures, uncovering various biases and evaluating their significance. It also enables the analysis of previous speculations regarding the long-range capabilities of Mamba models.", "pdf": "https://openreview.net/pdf/bfb5298a8176629465ba2b25c762c917c0e97825.pdf"} {"title": "Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games", "url": "https://openreview.net/forum?id=ry0RXTJwjy", "detail_url": "https://openreview.net/forum?id=ry0RXTJwjy", "authors": "Fanqi Kong,Yizhe Huang,Song-Chun Zhu,Siyuan Qi,Xue Feng", "tags": "NIPS 2024,Poster", "abstract": "Real-world multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation. However, existing approaches often struggle to achieve both objectives. In this paper, based on that empathic responses are modulated by learned social relationships between agents, we propose LASE (**L**earning to balance **A**ltruism and **S**elf-interest based on **E**mpathy), a distributed multi-agent reinforcement learning algorithm that fosters altruistic cooperation through gifting while avoiding exploitation by other agents in mixed-motive games. LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship --- a metric evaluating the friendliness of co-players estimated by counterfactual reasoning. In particular, social relationship measures each co-player by comparing the estimated $Q$-function of current joint action to a counterfactual baseline which marginalizes the co-player's action, with its action distribution inferred by a perspective-taking module. Comprehensive experiments are performed in spatially and temporally extended mixed-motive games, demonstrating LASE's ability to promote group collaboration without compromising fairness and its capacity to adapt policies to various types of interactive co-players.", "pdf": "https://openreview.net/pdf/8b64690213f2681ae3c09e9d3cb33cc9b645d2c5.pdf"} {"title": "MTGS: A Novel Framework for Multi-Person Temporal Gaze Following and Social Gaze Prediction", "url": "https://openreview.net/forum?id=ALU676zGFE", "detail_url": "https://openreview.net/forum?id=ALU676zGFE", "authors": "Anshul Gupta,Samy Tafasca,Arya Farkhondeh,Pierre Vuillecard,Jean-marc Odobez", "tags": "NIPS 2024,Poster", "abstract": "Gaze following and social gaze prediction are fundamental tasks providing insights into human communication behaviors, intent, and social interactions. Most previous approaches addressed these tasks separately, either by designing highly specialized social gaze models that do not generalize to other social gaze tasks or by considering social gaze inference as an ad-hoc post-processing of the gaze following task. Furthermore, the vast majority of gaze following approaches have proposed models that can handle only one person at a time and are static, therefore failing to take advantage of social interactions and temporal dynamics. In this paper, we address these limitations and introduce a novel framework to jointly predict the gaze target and social gaze label for all people in the scene. 
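A minimal sketch of the counterfactual social-relationship score in the LASE abstract above: compare the estimated Q of the current joint action against a baseline that marginalizes the co-player's action under an inferred policy; the discrete-action enumeration is an assumption:

```python
def social_relationship(q_fn, state, own_action, co_action,
                        co_action_space, co_policy):
    # q_fn(state, own_action, co_action) -> scalar estimated Q value;
    # co_policy(a) -> inferred probability that the co-player picks action a
    # (from a perspective-taking module).
    baseline = sum(co_policy(a) * q_fn(state, own_action, a)
                   for a in co_action_space)
    # positive -> the co-player's actual action helped more than average,
    # i.e., a "friendlier" co-player who earns a larger gift share
    return q_fn(state, own_action, co_action) - baseline
```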
It comprises (i) a temporal, transformer-based architecture that, in addition to frame tokens, handles person-specific tokens capturing the gaze information related to each individual; (ii) a new dataset, VSGaze, built from multiple gaze following and social gaze datasets by extending and validating head detections and tracks, and unifying annotation types. We demonstrate that our model can address and benefit from training on all tasks jointly, achieving state-of-the-art results for multi-person gaze following and social gaze prediction. Our annotations and code will be made publicly available.", "pdf": "https://openreview.net/pdf/c492fc1875efdef3a8127ddec02f635024f0e5a4.pdf"} {"title": "Efficient Multi-task LLM Quantization and Serving for Multiple LoRA Adapters", "url": "https://openreview.net/forum?id=HfpV6u0kbX", "detail_url": "https://openreview.net/forum?id=HfpV6u0kbX", "authors": "Yifei Xia,Fangcheng Fu,Wentao Zhang,Jiawei Jiang,Bin CUI", "tags": "NIPS 2024,Poster", "abstract": "With the remarkable achievements of large language models (LLMs), the demand for fine-tuning and deploying LLMs in various downstream tasks has garnered widespread interest. Parameter-efficient fine-tuning techniques represented by LoRA and model quantization techniques represented by GPTQ and AWQ are of paramount significance. However, although these techniques have been widely adopted in single-task scenarios, research is scarce in multi-task scenarios. To be specific, we find that mainstream quantization methods would prevent the base LLM from being shared among tasks, so it is infeasible for current LLM serving systems to integrate LLM quantization with multiple LoRA adapters to achieve memory-efficient multi-task serving. Moreover, existing LLM serving systems lack support for dynamic task addition and overlook the workload differences among tasks, leading to inefficiencies in multi-task scenarios.\n\nThis work proposes LoRA-Inlaid, an efficient multi-task LLM serving system. On the one hand, LoRA-Inlaid designs a flexible and efficient multi-task quantization algorithm (MLGPTQ) that facilitates the sharing of a single quantized model for multiple LoRA adapters, which significantly reduces the memory consumption for model deployment. Meanwhile, it supports adding LoRA adapters for new tasks on the fly, without sacrificing the stability of online services. On the other hand, LoRA-Inlaid develops a novel multi-task scheduling algorithm guided by output length prediction and grouping among different tasks, which effectively shrinks the memory consumption and avoids frequent switching of LoRA adapters. Empirical results verify that LoRA-Inlaid outperforms existing state-of-the-art LLM serving systems by up to 1.58 times in terms of throughput, 1.76 times in terms of average latency, 2 times in terms of job completion time, and 10 times in terms of SLO Attainment, while maintaining the same level of model quality.", "pdf": "https://openreview.net/pdf/441d9e07e128688c19bc771c6128fb721d7ca365.pdf"} {"title": "START: A Generalized State Space Model with Saliency-Driven Token-Aware Transformation", "url": "https://openreview.net/forum?id=mAdGQ1Hh3L", "detail_url": "https://openreview.net/forum?id=mAdGQ1Hh3L", "authors": "Jintao Guo,Lei Qi,Yinghuan Shi,Yang Gao", "tags": "NIPS 2024,Poster", "abstract": "Domain Generalization (DG) aims to enable models to generalize to unseen target domains by learning from multiple source domains. 
Existing DG methods primarily rely on convolutional neural networks (CNNs), which inherently learn texture biases due to their limited receptive fields, making them prone to overfitting source domains. While some works have introduced transformer-based methods (ViTs) for DG to leverage the global receptive field, these methods incur high computational costs due to the quadratic complexity of self-attention. Recently, advanced state space models (SSMs), represented by Mamba, have shown promising results in supervised learning tasks by achieving linear complexity in sequence length during training and fast RNN-like computation during inference. Inspired by this, we investigate the generalization ability of the Mamba model under domain shifts and find that input-dependent matrices within SSMs could accumulate and amplify domain-specific features, thus hindering model generalization. To address this issue, we propose a novel SSM-based architecture with saliency-based token-aware transformation (namely START), which achieves state-of-the-art (SOTA) performances and offers a competitive alternative to CNNs and ViTs. Our START can selectively perturb and suppress domain-specific features in salient tokens within the input-dependent matrices of SSMs, thus effectively reducing the discrepancy between different domains. Extensive experiments on five benchmarks demonstrate that START outperforms existing SOTA DG methods with efficient linear complexity. Our code is available at https://github.com/lingeringlight/START.", "pdf": "https://openreview.net/pdf/1e045e8d4c432c88ee7d528f95cd5406dfc62b43.pdf"} {"title": "FreqMark: Invisible Image Watermarking via Frequency Based Optimization in Latent Space", "url": "https://openreview.net/forum?id=01s5ODIHKd", "detail_url": "https://openreview.net/forum?id=01s5ODIHKd", "authors": "YiYang Guo,Ruizhe Li,Mude Hui,Hanzhong Allan Guo,Chen Zhang,Chuangjian Cai,Le Wan,Shangfei Wang", "tags": "NIPS 2024,Poster", "abstract": "Invisible watermarking is essential for safeguarding digital content, enabling copyright protection and content authentication. \nHowever, existing watermarking methods fall short in robustness against regeneration attacks.\nIn this paper, we propose a novel method called FreqMark that involves unconstrained optimization of the image latent frequency space obtained after VAE encoding. Specifically, FreqMark embeds the watermark by optimizing the latent frequency space of the images and then extracts the watermark through a pre-trained image encoder. 
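A hedged sketch of the FreqMark-style embedding loop described above: optimize a perturbation of the VAE latent's 2-D frequency representation against a frozen extractor while penalizing pixel distortion. The loss weights, optimizer, and bit decoder are assumptions, not the paper's exact procedure:

```python
import torch

def embed_watermark(vae_encode, vae_decode, decode_bits, target_bits,
                    image, steps=200, lr=1e-2, alpha=1.0):
    # vae_encode/vae_decode: frozen VAE; decode_bits: frozen extractor that
    # maps an image to per-bit logits; target_bits: (n_bits,) in {0, 1}.
    z = vae_encode(image).detach()
    Z = torch.fft.fft2(z)
    d_re = torch.zeros_like(Z.real, requires_grad=True)
    d_im = torch.zeros_like(Z.imag, requires_grad=True)
    opt = torch.optim.Adam([d_re, d_im], lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(steps):
        z_marked = torch.fft.ifft2(Z + torch.complex(d_re, d_im)).real
        x_marked = vae_decode(z_marked)
        # trade off message recovery against visible distortion via alpha
        loss = bce(decode_bits(x_marked), target_bits.float()) \
               + alpha * (x_marked - image).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return x_marked.detach()
```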
This optimization allows a flexible trade-off between image quality and watermark robustness and effectively resists regeneration attacks.\nExperimental results demonstrate that FreqMark offers significant advantages in image quality and robustness, permits flexible selection of the encoding bit number, and achieves a bit accuracy exceeding 90\\% when encoding a 48-bit hidden message under various attack scenarios.", "pdf": "https://openreview.net/pdf/9234ff8acbbb7b2334d075f931eeea536868b987.pdf"} {"title": "Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer", "url": "https://openreview.net/forum?id=vCOgjBIZuL", "detail_url": "https://openreview.net/forum?id=vCOgjBIZuL", "authors": "Shuang Wu,Youtian Lin,Yifei Zeng,Feihu Zhang,Jingxi Xu,Philip Torr,Xun Cao,Yao Yao", "tags": "NIPS 2024,Poster", "abstract": "Generating high-quality 3D assets from text and images has long been challenging, primarily due to the absence of scalable 3D representations capable of capturing intricate geometry distributions. In this work, we introduce Direct3D, a native 3D generative model scalable to in-the-wild input images, without requiring a multi-view diffusion model or SDS optimization. Our approach comprises two primary components: a Direct 3D Variational Auto-Encoder (D3D-VAE) and a Direct 3D Diffusion Transformer (D3D-DiT). D3D-VAE efficiently encodes high-resolution 3D shapes into a compact and continuous latent triplane space. Notably, our method directly supervises the decoded geometry using a semi-continuous surface sampling strategy, diverging from previous methods relying on rendered images as supervision signals. D3D-DiT models the distribution of encoded 3D latents and is specifically designed to fuse positional information from the three feature maps of the triplane latent, enabling a native 3D generative model scalable to large-scale 3D datasets. Additionally, we introduce an innovative image-to-3D generation pipeline incorporating semantic and pixel-level image conditions, allowing the model to produce 3D shapes consistent with the provided conditional image input. Extensive experiments demonstrate the superiority of our large-scale pre-trained Direct3D over previous image-to-3D approaches, achieving significantly better generation quality and generalization ability, thus establishing a new state-of-the-art for 3D content creation. Project page: https://www.neural4d.com/research/direct3d.", "pdf": "https://openreview.net/pdf/99efa36a5bad48e445b21a9e053764ed5033cd4b.pdf"} {"title": "The Impact of Initialization on LoRA Finetuning Dynamics", "url": "https://openreview.net/forum?id=sn3UrYRItk", "detail_url": "https://openreview.net/forum?id=sn3UrYRItk", "authors": "Soufiane Hayou,Nikhil Ghosh,Bin Yu", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study the role of initialization in Low Rank Adaptation (LoRA) as originally introduced in Hu et al. (2021). Essentially, to start from the pretrained model, one can either initialize $B$ to zero and $A$ to random, or vice-versa. In both cases, the product $BA$ is equal to zero at initialization, which makes fine-tuning start from the pretrained model. These two initialization schemes are seemingly similar. They should in principle yield the same performance and share the same optimal learning rate. We demonstrate that this is an *incorrect intuition* and that the first scheme (of initializing $B$ to zero and $A$ to random) on average in our experiments yields better performance compared to the other scheme.
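To make the two initialization schemes concrete, here is a minimal sketch (illustrative shapes and scales, not the paper's code; the frozen `W` stands in for a pretrained weight):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a low-rank update B @ A of rank r."""
    def __init__(self, d_in, d_out, r=8, init="B_zero"):
        super().__init__()
        # Placeholder for the pretrained weight; frozen during fine-tuning.
        self.W = nn.Parameter(torch.randn(d_out, d_in) / d_in**0.5,
                              requires_grad=False)
        if init == "B_zero":   # scheme 1: B = 0, A random
            self.A = nn.Parameter(torch.randn(r, d_in) / d_in**0.5)
            self.B = nn.Parameter(torch.zeros(d_out, r))
        else:                  # scheme 2: A = 0, B random
            self.A = nn.Parameter(torch.zeros(r, d_in))
            self.B = nn.Parameter(torch.randn(d_out, r) / r**0.5)
        # Either way B @ A = 0, so training starts exactly at the pretrained W.

    def forward(self, x):
        return x @ (self.W + self.B @ self.A).T
```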
Our theoretical analysis shows that the reason behind this might be that the first initialization allows the use of larger learning rates (without causing output instability) compared to the second initialization, resulting in more efficient learning under the first scheme. We validate our results with extensive experiments on LLMs.", "pdf": "https://openreview.net/pdf/73b61afa707a9caef336040d03963125b0ff17ef.pdf"} {"title": "Accelerating Non-Maximum Suppression: A Graph Theory Perspective", "url": "https://openreview.net/forum?id=0lau89u4oE", "detail_url": "https://openreview.net/forum?id=0lau89u4oE", "authors": "King-Siong Si,Lu Sun,Weizhan Zhang,Tieliang Gong,Jiahao Wang,Jiang Liu,Hao Sun", "tags": "NIPS 2024,Poster", "abstract": "Non-maximum suppression (NMS) is an indispensable post-processing step in object detection. With the continuous optimization of network models, NMS has become the ``last mile'' to enhance the efficiency of object detection. This paper systematically analyzes NMS from a graph theory perspective for the first time, revealing its intrinsic structure. Consequently, we propose two optimization methods, namely QSI-NMS and BOE-NMS. The former is a fast recursive divide-and-conquer algorithm with negligible mAP loss, and its extended version (eQSI-NMS) achieves optimal complexity of $\\mathcal{O}(n\\log n)$. The latter, concentrating on the locality of NMS, achieves an optimization at a constant level without an mAP loss penalty. Moreover, to facilitate rapid evaluation of NMS methods for researchers, we introduce NMS-Bench, the first benchmark designed to comprehensively assess various NMS methods. Taking the YOLOv8-N model on MS COCO 2017 as the benchmark setup, our method QSI-NMS runs at $6.2\\times$ the speed of the original NMS on the benchmark, with a $0.1\\%$ decrease in mAP. The optimal eQSI-NMS, with only a $0.3\\%$ mAP decrease, achieves a $10.7\\times$ speedup. Meanwhile, BOE-NMS delivers a $5.1\\times$ speedup with no compromise in mAP.", "pdf": "https://openreview.net/pdf/f50a62db9e34e537dd89a85a12e62c67265edbf2.pdf"} {"title": "Kronecker-Factored Approximate Curvature for Physics-Informed Neural Networks", "url": "https://openreview.net/forum?id=jrNlWfor7q", "detail_url": "https://openreview.net/forum?id=jrNlWfor7q", "authors": "Felix Dangel,Johannes M\u00fcller,Marius Zeinhofer", "tags": "NIPS 2024,Poster", "abstract": "Physics-Informed Neural Networks (PINNs) are infamous for being hard to train.\nRecently, second-order methods based on natural gradient and Gauss-Newton methods have shown promising performance, improving the accuracy achieved by first-order methods by several orders of magnitude. \nWhile promising, the proposed methods only scale to networks with a few thousand parameters due to the high computational cost to evaluate, store, and invert the curvature matrix.\nWe propose Kronecker-factored approximate curvature (KFAC) for PINN losses that greatly reduces the computational cost and allows scaling to much larger networks.\nOur approach goes beyond the popular KFAC for traditional deep learning problems as it captures contributions from a PDE's differential operator that are crucial for optimization. \nTo establish KFAC for such losses, we use Taylor-mode automatic differentiation to describe the differential operator's computation graph as a forward network with shared weights, which allows us to apply a variant of KFAC for networks with weight-sharing.
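For orientation, a generic PINN loss of the kind these curvature approximations target looks as follows (a 1D Poisson residual, illustrative only, not the paper's KFAC machinery):

```python
import torch

def pinn_poisson_loss(model, x, f):
    """Residual loss for u''(x) = f(x); the second derivative of the network
    is what makes curvature approximations for PINN losses nontrivial."""
    x = x.requires_grad_(True)
    u = model(x)                                                  # u(x), shape (N, 1)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]    # u'(x)
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]  # u''(x)
    return ((d2u - f(x)) ** 2).mean()  # the differential operator enters the loss

# Usage: loss = pinn_poisson_loss(mlp, torch.rand(128, 1), torch.sin)
```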
\nEmpirically, we find that our KFAC-based optimizers are competitive with expensive second-order methods on small problems, scale more favorably to higher-dimensional neural networks and PDEs, and consistently outperform first-order methods.", "pdf": "https://openreview.net/pdf/1323d85bd39ec82900c2bce4a296467999102857.pdf"} {"title": "Zero-Shot Tokenizer Transfer", "url": "https://openreview.net/forum?id=RwBObRsIzC", "detail_url": "https://openreview.net/forum?id=RwBObRsIzC", "authors": "Benjamin Minixhofer,Edoardo Ponti,Ivan Vuli\u0107", "tags": "NIPS 2024,Poster", "abstract": "Language models (LMs) are bound to their tokenizer, which maps raw text to a sequence of vocabulary items (tokens). This restricts their flexibility: for example, LMs trained primarily on English may still perform well in other natural and programming languages, but have vastly decreased efficiency due to their English-centric tokenizer. To mitigate this, we should be able to swap the original LM tokenizer with an arbitrary one, on the fly, without degrading performance. Hence, in this work we define a new problem: Zero-Shot Tokenizer Transfer (ZeTT). The challenge at the core of ZeTT is finding embeddings for the tokens in the vocabulary of the new tokenizer. Since prior heuristics for initializing embeddings often perform at chance level in a ZeTT setting, we propose a new solution: we train a hypernetwork taking a tokenizer as input and predicting the corresponding embeddings. We empirically demonstrate that the hypernetwork generalizes to new tokenizers both with encoder (e.g., XLM-R) and decoder LLMs (e.g., Mistral-7B). Our method comes close to the original models' performance in cross-lingual and coding tasks while markedly reducing the length of the tokenized sequence. We also find that the remaining gap can be quickly closed by continued training on less than 1B tokens. Finally, we show that a ZeTT hypernetwork trained for a base (L)LM can also be applied to fine-tuned variants without extra training. Overall, our results make substantial strides toward detaching LMs from their tokenizer.", "pdf": "https://openreview.net/pdf/1dc733ae700719e8340abb61164b8efb15224123.pdf"} {"title": "Hierarchical Selective Classification", "url": "https://openreview.net/forum?id=wzof7Y66xs", "detail_url": "https://openreview.net/forum?id=wzof7Y66xs", "authors": "Shani Goren,Ido Galil,Ran El-Yaniv", "tags": "NIPS 2024,Poster", "abstract": "Deploying deep neural networks for risk-sensitive tasks necessitates an uncertainty estimation mechanism. This paper introduces *hierarchical selective classification*, extending selective classification to a hierarchical setting. Our approach leverages the inherent structure of class relationships, enabling models to reduce the specificity of their predictions when faced with uncertainty. In this paper, we first formalize hierarchical risk and coverage, and introduce hierarchical risk-coverage curves. Next, we develop algorithms for hierarchical selective classification (which we refer to as \"inference rules\"), and propose an efficient algorithm that guarantees a target accuracy constraint with high probability. 
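To make the notion of an inference rule concrete, one simple rule (a generic sketch, not necessarily the paper's algorithm) climbs the hierarchy until enough probability mass is covered:

```python
import numpy as np

def hierarchical_select(probs, parent, leaves_under, theta=0.9):
    """Start at the most likely leaf and move toward the root until the
    selected node's subtree covers probability >= theta. `parent[n]` is n's
    parent (None at the root); `leaves_under[n]` lists the leaf classes in
    n's subtree."""
    node = int(np.argmax(probs))
    while parent[node] is not None and probs[leaves_under[node]].sum() < theta:
        node = parent[node]   # fall back to a coarser, safer prediction
    return node

# Tiny demo: leaves 0 and 1 under root 2; an uncertain prediction returns the root.
parent = {0: 2, 1: 2, 2: None}
leaves_under = {0: [0], 1: [1], 2: [0, 1]}
print(hierarchical_select(np.array([0.5, 0.5]), parent, leaves_under))  # -> 2
```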
Lastly, we conduct extensive empirical studies on over a thousand ImageNet classifiers, revealing that training regimes such as CLIP, pretraining on ImageNet21k and knowledge distillation boost hierarchical selective performance.", "pdf": "https://openreview.net/pdf/e95eee2d490d9f20e34d6dc2b4c94931a53d3630.pdf"} {"title": "Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective", "url": "https://openreview.net/forum?id=ldvfaYzG35", "detail_url": "https://openreview.net/forum?id=ldvfaYzG35", "authors": "MeiJun Wang,Yu Meng,Zhongwei Qiu,Chao Zheng,Yan Xu,Pengxiaorui,Jian Gao", "tags": "NIPS 2024,Poster", "abstract": "Pedestrian pre-collision pose is one of the key factors to determine the degree of pedestrian-vehicle injury in a collision. Human pose estimation algorithms are an effective method to estimate pedestrian emergency pose from accident video. However, pose estimation models trained on existing daily human pose datasets have poor robustness under specific poses such as pedestrian pre-collision poses, and it is difficult to obtain human pose datasets in wild scenes, where scarce data such as pedestrian pre-collision poses in traffic scenes are especially lacking. In this paper, we collect pedestrian-vehicle collision poses from the dashcam perspective and construct the first Pedestrian-Vehicle Collision Pose dataset (PVCP) in a semi-automatic way, including 40K+ accident frames and 20K+ pedestrian pre-collision pose annotations (2D, 3D, Mesh). Further, we construct a Pedestrian Pre-collision Pose Estimation Network (PPSENet) to estimate the collision pose and shape sequence of pedestrians from pedestrian-vehicle accident videos. The PPSENet first estimates the 2D pose from the image (Image to Pose, ITP) and then lifts the 2D pose to 3D mesh (Pose to Mesh, PTM). Due to the small size of the dataset, we introduce a pre-training model that learns the human pose prior on a large number of pose datasets, and use iterative regression to estimate the pre-collision pose and shape of pedestrians. Further, we classify the pre-collision pose sequence and introduce a pose class loss, which achieves the best accuracy compared with the existing relevant \\textit{state-of-the-art} methods. Code and data are available for research at https://github.com/wmj142326/PVCP.", "pdf": "https://openreview.net/pdf/b9fd46cc0bc56a82bb56b9555fd659ec25d1019f.pdf"} {"title": "Transferring disentangled representations: bridging the gap between synthetic and real images", "url": "https://openreview.net/forum?id=HfztZgwpxI", "detail_url": "https://openreview.net/forum?id=HfztZgwpxI", "authors": "Jacopo Dapueto,Nicoletta Noceti,Francesca Odone", "tags": "NIPS 2024,Poster", "abstract": "Developing meaningful and efficient representations that separate the fundamental structure of the data generation mechanism is crucial in representation learning. However, Disentangled Representation Learning has not fully shown its potential on real images, because of correlated generative factors, their resolution and limited access to ground truth labels. Specifically on the latter, we investigate the possibility of leveraging synthetic data to learn general-purpose disentangled representations applicable to real data, discussing the effect of fine-tuning and what properties of disentanglement are preserved after the transfer. We provide an extensive empirical study to address these issues.
In addition, we propose a new interpretable intervention-based metric to measure the quality of factor encoding in the representation. Our results indicate that transferring a representation from synthetic to real data is possible and effective, and that some level of disentanglement is preserved.", "pdf": "https://openreview.net/pdf/0055fe64df12a35787aac784381943547daadc76.pdf"} {"title": "InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling", "url": "https://openreview.net/forum?id=3XnBVK9sD6", "detail_url": "https://openreview.net/forum?id=3XnBVK9sD6", "authors": "Yuchun Miao,Sen Zhang,Liang Ding,Rong Bao,Lefei Zhang,Dacheng Tao", "tags": "NIPS 2024,Poster", "abstract": "Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models with human values, reward hacking, also termed reward overoptimization, remains a critical challenge. This issue primarily arises from reward misgeneralization, where reward models (RMs) compute reward using spurious features that are irrelevant to human preferences. In this work, we tackle this problem from an information-theoretic perspective and propose a framework for reward modeling, namely InfoRM, by introducing a variational information bottleneck objective to filter out irrelevant information.\nNotably, we further identify a correlation between overoptimization and outliers in the IB latent space of InfoRM, establishing it as a promising tool for detecting reward overoptimization.\nInspired by this finding, we propose the Cluster Separation Index (CSI), which quantifies deviations in the IB latent space, as an indicator of reward overoptimization to facilitate the development of online mitigation strategies. Extensive experiments on a wide range of settings and RM scales (70M, 440M, 1.4B, and 7B) demonstrate the effectiveness of InfoRM. Further analyses reveal that InfoRM's overoptimization detection mechanism is not only effective but also robust across a broad range of datasets, signifying a notable advancement in the field of RLHF. The code will be released upon acceptance.", "pdf": "https://openreview.net/pdf/f3488597e83544552c3f45b9d96c8af2911f5089.pdf"} {"title": "Multi-view Masked Contrastive Representation Learning for Endoscopic Video Analysis", "url": "https://openreview.net/forum?id=1M67AdMBbg", "detail_url": "https://openreview.net/forum?id=1M67AdMBbg", "authors": "Kai Hu,Ye Xiao,Yuan Zhang,Xieping Gao", "tags": "NIPS 2024,Poster", "abstract": "Endoscopic video analysis can effectively assist clinicians in disease diagnosis and treatment, and has played an indispensable role in clinical medicine. Unlike regular videos, endoscopic video analysis presents unique challenges, including complex camera movements, uneven distribution of lesions, and concealment, and it typically relies on contrastive learning in self-supervised pretraining as its mainstream technique. However, representations obtained from contrastive learning enhance the discriminability of the model but often lack fine-grained information, which is suboptimal in pixel-level prediction tasks. In this paper, we develop a Multi-view Masked Contrastive Representation Learning (M$^2$CRL) framework for endoscopic video pre-training. Specifically, we propose a multi-view mask strategy for addressing the challenges of endoscopic videos.
We utilize the frame-aggregated attention guided tube mask to capture global-level spatiotemporal sensitive representation from the global views, while the random tube mask is employed to focus on local variations from the local views. Subsequently, we combine multi-view mask modeling with contrastive learning to obtain endoscopic video representations that possess fine-grained perception and holistic discriminative capabilities simultaneously. The proposed M$^2$CRL is pre-trained on 7 publicly available endoscopic video datasets and fine-tuned on 3 endoscopic video datasets for 3 downstream tasks. Notably, our M$^2$CRL significantly outperforms the current state-of-the-art self-supervised endoscopic pre-training methods, e.g., Endo-FM (3.5% F1 for classification, 7.5% Dice for segmentation, and 2.2% F1 for detection) and other self-supervised methods, e.g., VideoMAE V2 (4.6% F1 for classification, 0.4% Dice for segmentation, and 2.1% F1 for detection).", "pdf": "https://openreview.net/pdf/2809f680c0ca4cf89d0fc94e4896a1196e949af8.pdf"} {"title": "Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders", "url": "https://openreview.net/forum?id=zLBlin2zvW", "detail_url": "https://openreview.net/forum?id=zLBlin2zvW", "authors": "Senthooran Rajamanoharan,Arthur Conmy,Lewis Smith,Tom Lieberum,Vikrant Varma,Janos Kramar,Rohin Shah,Neel Nanda", "tags": "NIPS 2024,Poster", "abstract": "Recent work has found that sparse autoencoders (SAEs) are an effective technique for unsupervised discovery of interpretable features in language models' (LMs) activations, by finding sparse, linear reconstructions of those activations. We introduce the Gated Sparse Autoencoder (Gated SAE), which achieves a Pareto improvement over training with prevailing methods. In SAEs, the L1 penalty used to encourage sparsity introduces many undesirable biases, such as shrinkage -- systematic underestimation of feature activations. The key insight of Gated SAEs is to separate the functionality of (a) determining which directions to use and (b) estimating the magnitudes of those directions: this enables us to apply the L1 penalty only to the former, limiting the scope of undesirable side effects. Through training SAEs on LMs of up to 7B parameters we find that, in typical hyper-parameter ranges, Gated SAEs solve shrinkage, are similarly interpretable, and require half as many firing features to achieve comparable reconstruction fidelity.", "pdf": "https://openreview.net/pdf/43584951381c20709a2cb6cf3ebc6ae1b2d501df.pdf"} {"title": "Gradient-Variation Online Learning under Generalized Smoothness", "url": "https://openreview.net/forum?id=V75gAxpW40", "detail_url": "https://openreview.net/forum?id=V75gAxpW40", "authors": "Yan-Feng Xie,Peng Zhao,Zhi-Hua Zhou", "tags": "NIPS 2024,Poster", "abstract": "Gradient-variation online learning aims to achieve regret guarantees that scale with variations in the gradients of online functions, which is crucial for attaining fast convergence in games and robustness in stochastic optimization, hence receiving increased attention. Existing results often require the smoothness condition by imposing a fixed bound on gradient Lipschitzness, which may be unrealistic in practice. Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms. In this paper, we systematically study gradient-variation online learning under generalized smoothness. 
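For orientation, the gradient-variation quantity that such regret bounds scale with is standardly defined as follows (stated here for context, following the usual convention in this line of work, not quoted from the paper):

```latex
% Standard gradient-variation measure over online functions f_1, ..., f_T:
V_T \;=\; \sum_{t=2}^{T} \sup_{x \in \mathcal{X}}
          \bigl\| \nabla f_t(x) - \nabla f_{t-1}(x) \bigr\|_2^2 ,
\qquad \text{with regret of order } \mathcal{O}\bigl(\sqrt{V_T}\bigr)
\text{ for convex, smooth losses.}
```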
We extend the classic optimistic mirror descent algorithm to derive gradient-variation regret by analyzing stability over the optimization trajectory and exploiting smoothness locally. Then, we explore universal online learning, designing a single algorithm with the optimal gradient-variation regrets for convex and strongly convex functions simultaneously, without requiring prior knowledge of curvature. This algorithm adopts a two-layer structure with a meta-algorithm running over a group of base-learners. To ensure favorable guarantees, we design a new Lipschitz-adaptive meta-algorithm, capable of handling potentially unbounded gradients while ensuring a second-order bound to effectively ensemble the base-learners. Finally, we provide applications to fast-rate convergence in games and stochastic extended adversarial optimization.", "pdf": "https://openreview.net/pdf/67833e8cf56f79e03c8b9bd175963e1cfb9a2d3c.pdf"} {"title": "PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation", "url": "https://openreview.net/forum?id=gnXTDQyxlU", "detail_url": "https://openreview.net/forum?id=gnXTDQyxlU", "authors": "Kaidong Zhang,Pengzhen Ren,Bingqian Lin,Junfan Lin,Shikui Ma,Hang Xu,Xiaodan Liang", "tags": "NIPS 2024,Poster", "abstract": "Language-guided robotic manipulation is a challenging task that requires an embodied agent to follow abstract user instructions to accomplish various complex manipulation tasks. Previous work generally maps instructions and visual perceptions directly to low-level executable actions, neglecting the modeling of critical waypoints (e.g., key states of \u201cclose to/grab/move up\u201d in action trajectories) in manipulation tasks.\nTo address this issue, we propose a PrImitive-driVen waypOinT-aware world model for Robotic manipulation (PIVOT-R) that focuses solely on the prediction of task-relevant waypoints. Specifically, PIVOT-R consists of a Waypoint-aware World Model (WAWM) and a lightweight action prediction module. The former performs primitive action parsing and primitive-driven waypoint prediction, while the latter focuses on decoding low-level actions. Additionally, we design an asynchronous hierarchical executor (AHE) for PIVOT-R, which can use different execution frequencies for different modules of the model, thereby helping the model reduce computational redundancy and improve model execution efficiency. Our PIVOT-R outperforms state-of-the-art (SoTA) open-source models on the SeaWave benchmark, achieving an average relative improvement of 19.45% across four levels of instruction tasks. Moreover, compared to the synchronously executed PIVOT-R, the execution efficiency of PIVOT-R with AHE is increased by 28-fold, with only a 2.9% drop in performance. These results provide compelling evidence that our PIVOT-R can significantly improve both the performance and efficiency of robotic manipulation.", "pdf": "https://openreview.net/pdf/070d36407fda81f907f8e4ea5020c264b9e28d52.pdf"} {"title": "Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization", "url": "https://openreview.net/forum?id=HcqnhqoXS3", "detail_url": "https://openreview.net/forum?id=HcqnhqoXS3", "authors": "Hongling Zheng,Li Shen,Yong Luo,Tongliang Liu,Jialie Shen,Dacheng Tao", "tags": "NIPS 2024,Poster", "abstract": "Multi-task offline reinforcement learning aims to develop a unified policy for diverse tasks without requiring real-time interaction with the environment.
Recent work explores sequence modeling, leveraging the scalability of the transformer architecture as a foundation for multi-task learning. Given the variations in task content and complexity, formulating policies becomes a challenging endeavor, requiring careful parameter sharing and adept management of conflicting gradients to extract rich cross-task knowledge from multiple tasks and transfer it to unseen tasks. In this paper, we propose the Decomposed Prompt Decision Transformer (DPDT) that adopts a two-stage paradigm to efficiently learn prompts for unseen tasks in a parameter-efficient manner. We incorporate parameters from pre-trained language models (PLMs) to initialize DPDT, thereby providing rich prior knowledge encoded in language models. During the decomposed prompt tuning phase, we learn both cross-task and task-specific prompts on training tasks to achieve prompt decomposition. In the test-time adaptation phase, the cross-task prompt, serving as a good initialization, is further optimized on unseen tasks, enhancing the model's performance on these tasks. Empirical evaluation on a series of Meta-RL benchmarks demonstrates the superiority of our approach. The project is available at https://github.com/ruthless-man/DPDT.", "pdf": "https://openreview.net/pdf/84e3b884ce162c9d0924a7ab034f92fe21ca8934.pdf"} {"title": "DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation", "url": "https://openreview.net/forum?id=x4EoTQW7ka", "detail_url": "https://openreview.net/forum?id=x4EoTQW7ka", "authors": "Sunghyeon Woo,Baeseong park,Byeongwook Kim,Minjung Jo,Se Jung Kwon,Dongsuk Jeon,Dongsoo Lee", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have achieved significant success across various domains. However, training these LLMs typically involves substantial memory and computational costs during both forward and backward propagation. While parameter-efficient fine-tuning (PEFT) considerably reduces the training memory associated with parameters, it does not address the significant computational costs and activation memory. In this paper, we propose Dropping Backward Propagation (DropBP), a novel approach designed to reduce computational costs and activation memory while maintaining accuracy. DropBP randomly drops layers during backward propagation, which is essentially equivalent to training shallow submodules generated by undropped layers and residual connections. Additionally, DropBP calculates the sensitivity of each layer to assign an appropriate drop rate, thereby stabilizing the training process. DropBP is not only applicable to full fine-tuning but can also be orthogonally integrated with all types of PEFT by dropping layers during backward propagation. Specifically, DropBP can reduce training time by 44% with comparable accuracy to the baseline, accelerate convergence to the same perplexity by 1.5$\\times$, and enable training with a sequence length 6.2$\\times$ larger on a single NVIDIA-A100 GPU. Furthermore, our DropBP enabled a throughput increase of 79% on an NVIDIA A100 GPU and 117% on an Intel Gaudi2 HPU.
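The core mechanism admits a compact sketch (an approximation of the idea via gradient detaching; the paper additionally assigns per-layer drop rates from a sensitivity measure, which is omitted here):

```python
import torch
import torch.nn as nn

class DropBPBlock(nn.Module):
    """Residual block whose backward pass is skipped with probability p:
    the forward output is unchanged, but no gradient flows through `body`,
    approximating 'dropping the layer during backward propagation'."""
    def __init__(self, body, p=0.5):
        super().__init__()
        self.body, self.p = body, p

    def forward(self, x):
        h = self.body(x)
        if self.training and torch.rand(()) < self.p:
            h = h.detach()   # block contributes to forward, not to backward
        return x + h         # the residual connection keeps the gradient path alive
```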
The code is available at [https://github.com/WooSunghyeon/dropbp](https://github.com/WooSunghyeon/dropbp).", "pdf": "https://openreview.net/pdf/d3eb7ed964e39181b31efa8f58099b952ff6d8e5.pdf"} {"title": "Adaptive Domain Learning for Cross-domain Image Denoising", "url": "https://openreview.net/forum?id=gOtt78AQk4", "detail_url": "https://openreview.net/forum?id=gOtt78AQk4", "authors": "Zian Qian,Chenyang Qi,Ka Lung Law,Hao Fu,Chenyang Lei,Qifeng Chen", "tags": "NIPS 2024,Poster", "abstract": "Different camera sensors have different noise patterns, and thus an image denoising model trained on one sensor often does not generalize well to a different sensor. One plausible solution is to collect a large dataset for each sensor for training or fine-tuning, which is inevitably time-consuming. To address this cross-domain challenge, we present a novel adaptive domain learning (ADL) scheme for cross-domain RAW image denoising by utilizing existing data from different sensors (source domain) plus a small amount of data from the new sensor (target domain). The ADL training scheme automatically removes the data in the source domain that are harmful to fine-tuning a model for the target domain (some data are harmful as adding them during training lowers the performance due to domain gaps). Also, we introduce a modulation module to adopt sensor-specific information (sensor type and ISO) to understand input data for image denoising. We conduct extensive experiments on public datasets with various smartphone and DSLR cameras, which show our proposed model outperforms prior work on cross-domain image denoising, given a small amount of image data from the target domain sensor.", "pdf": "https://openreview.net/pdf/05d07f6247a71ba3b8a8328f5567f48a22972a72.pdf"} {"title": "Spike-based Neuromorphic Model for Sound Source Localization", "url": "https://openreview.net/forum?id=CyCDqnrymT", "detail_url": "https://openreview.net/forum?id=CyCDqnrymT", "authors": "Dehao Zhang,Shuai Wang,Ammar Belatreche,Wenjie Wei,Yichen Xiao,Haorui Zheng,Zijian Zhou,Malu Zhang,Yang Yang", "tags": "NIPS 2024,Poster", "abstract": "Biological systems possess remarkable sound source localization (SSL) capabilities that are critical for survival in complex environments. This ability arises from the collaboration between the auditory periphery, which encodes sound as precisely timed spikes, and the auditory cortex, which performs spike-based computations. Inspired by these biological mechanisms, we propose a novel neuromorphic SSL framework that integrates spike-based neural encoding and computation. The framework employs Resonate-and-Fire (RF) neurons with a phase-locking coding (RF-PLC) method to achieve energy-efficient audio processing. The RF-PLC method leverages the resonance properties of RF neurons to efficiently convert audio signals to time-frequency representation and encode interaural time difference (ITD) cues into discriminative spike patterns. In addition, biological adaptations like frequency band selectivity and short-term memory effectively filter out many environmental noises, enhancing SSL capabilities in real-world settings. Inspired by these adaptations, we propose a spike-driven multi-auditory attention (MAA) module that significantly improves both the accuracy and robustness of the proposed SSL framework. Extensive experimentation demonstrates that our SSL framework achieves state-of-the-art accuracy in SSL tasks. 
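For context, the textbook resonate-and-fire dynamics (Izhikevich, 2001) that RF-PLC builds on can be simulated in a few lines (illustrative parameters and threshold, not the paper's settings):

```python
import numpy as np

def resonate_and_fire(I, omega=2 * np.pi * 40.0, b=-5.0, dt=1e-3, thresh=0.05):
    """RF neuron dz/dt = (b + i*omega) z + I(t); spike when Im(z) crosses a
    threshold. An exact exponential update of the linear part keeps the
    discrete simulation stable."""
    z, spikes = 0j, []
    decay = np.exp(dt * (b + 1j * omega))
    for t, i_t in enumerate(I):
        z = z * decay + dt * i_t
        if z.imag > thresh:
            spikes.append(t * dt)   # spike times phase-lock to the stimulus
            z = 0j                  # reset after firing
    return spikes

# A 40 Hz input drives the resonator near resonance and elicits phase-locked spikes.
ts = np.arange(0, 0.5, 1e-3)
print(resonate_and_fire(np.sin(2 * np.pi * 40 * ts))[:5])
```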
Furthermore, it shows exceptional noise robustness and maintains high accuracy even at very low signal-to-noise ratios. By mimicking biological hearing, this neuromorphic approach contributes to the development of high-performance and explainable artificial intelligence systems capable of superior performance in real-world environments.", "pdf": "https://openreview.net/pdf/2e322cd257ad7d625a305cc52890f6184b459759.pdf"} {"title": "Uncovering the Redundancy in Graph Self-supervised Learning Models", "url": "https://openreview.net/forum?id=7Ntft3U7jj", "detail_url": "https://openreview.net/forum?id=7Ntft3U7jj", "authors": "Zhibiao Wang,Xiao Wang,Haoyue Deng,Nian Liu,Shirui Pan,Chunming Hu", "tags": "NIPS 2024,Poster", "abstract": "Graph self-supervised learning, as a powerful pre-training paradigm for Graph Neural Networks (GNNs) without labels, has received considerable attention. We have witnessed the success of graph self-supervised learning on pre-training the parameters of GNNs, leading many not to question whether the learned GNN parameters are all useful. In this paper, by presenting experimental evidence and analysis, we surprisingly discover that graph self-supervised learning models are highly redundant at both the neuron and layer levels, e.g., even after randomly removing 51.6\\% of parameters, graph self-supervised learning models still retain at least 96.2\\% of their performance. This discovery implies that the parameters of graph self-supervised models can be largely reduced, making it more feasible to simultaneously fine-tune both graph self-supervised learning models and prediction layers. Therefore, we further design a novel graph pre-training and fine-tuning paradigm called SLImming DE-correlation Fine-tuning (SLIDE). The effectiveness of SLIDE is verified through extensive experiments on various benchmarks, and the performance can be even improved with fewer parameters of models in most cases. For example, in comparison with fully fine-tuning GraphMAE on the Amazon-Computers dataset, even after randomly reducing 40\\% of parameters, we can still achieve improvements of 0.24\\% and 0.27\\% in Micro-F1 and Macro-F1 scores respectively.", "pdf": "https://openreview.net/pdf/294fb34b5ca14489fb967f4a78f4cba4bf21b390.pdf"} {"title": "KptLLM: Unveiling the Power of Large Language Model for Keypoint Comprehension", "url": "https://openreview.net/forum?id=gwd3MQufGP", "detail_url": "https://openreview.net/forum?id=gwd3MQufGP", "authors": "Jie Yang,Wang ZENG,Sheng Jin,Lumin Xu,Wentao Liu,Chen Qian,Ruimao Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in Multimodal Large Language Models (MLLMs) have greatly improved their abilities in image understanding. However, these models often struggle with grasping pixel-level semantic details, e.g., the keypoints of an object. To bridge this gap, we introduce the novel challenge of Semantic Keypoint Comprehension, which aims to comprehend keypoints across different task scenarios, including keypoint semantic understanding, visual prompt-based keypoint detection, and textual prompt-based keypoint detection. Moreover, we introduce KptLLM, a unified multimodal model that utilizes an identify-then-detect strategy to effectively address these challenges. KptLLM underscores the initial discernment of semantics in keypoints, followed by the precise determination of their positions through a chain-of-thought process.
With several carefully designed modules, KptLLM adeptly handles various modality inputs, facilitating the interpretation of both semantic contents and keypoint locations. Our extensive experiments demonstrate KptLLM's superiority in various keypoint detection benchmarks and its unique semantic capabilities in interpreting keypoints.", "pdf": "https://openreview.net/pdf/0b56d57b58b44c6b9ced1533f39074a8887d6ba7.pdf"} {"title": "Optimistic Critic Reconstruction and Constrained Fine-Tuning for General Offline-to-Online RL", "url": "https://openreview.net/forum?id=XVfevb9XFx", "detail_url": "https://openreview.net/forum?id=XVfevb9XFx", "authors": "Qin-Wen Luo,Ming-Kun Xie,Ye-Wen Wang,Sheng-Jun Huang", "tags": "NIPS 2024,Poster", "abstract": "Offline-to-online (O2O) reinforcement learning (RL) provides an effective means of leveraging an offline pre-trained policy as initialization to improve performance rapidly with limited online interactions. Recent studies often design fine-tuning strategies for a specific offline RL method and cannot perform general O2O learning from any offline method. To deal with this problem, we reveal that there are evaluation and improvement mismatches between the offline dataset and the online environment, which hinder the direct application of pre-trained policies to online fine-tuning. In this paper, we propose to handle these two mismatches simultaneously, aiming to achieve general O2O learning from any offline method to any online method. Before online fine-tuning, we re-evaluate the pessimistic critic trained on the offline dataset in an optimistic way and then calibrate the misaligned critic with the reliable offline actor to avoid erroneous updates. After obtaining an optimistic and aligned critic, we perform constrained fine-tuning to combat distribution shift during online learning. We show empirically that the proposed method can achieve stable and efficient performance improvement on multiple simulated tasks when compared to the state-of-the-art methods.", "pdf": "https://openreview.net/pdf/becaa67512bd4568cc0886b915b0021488549f78.pdf"} {"title": "Zipfian Whitening", "url": "https://openreview.net/forum?id=pASJxzMJb7", "detail_url": "https://openreview.net/forum?id=pASJxzMJb7", "authors": "Sho Yokoi,Han Bao,Hiroto Kurita,Hidetoshi Shimodaira", "tags": "NIPS 2024,Poster", "abstract": "The word embedding space in neural models is skewed, and correcting this can improve task performance. We point out that most approaches for modeling, correcting, and measuring the symmetry of an embedding space implicitly assume that the word frequencies are *uniform*; in reality, word frequencies follow a highly non-uniform distribution, known as *Zipf's law*. Surprisingly, simply performing PCA whitening weighted by the empirical word frequency that follows Zipf's law significantly improves task performance, surpassing established baselines. From a theoretical perspective, both our approach and existing methods can be clearly categorized: word representations are distributed according to an exponential family with either uniform or Zipfian base measures. By adopting the latter approach, we can naturally emphasize informative low-frequency words in terms of their vector norm, which becomes evident from the information-geometric perspective, and in terms of the loss functions for imbalanced classification.
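The headline recipe, PCA whitening with expectations taken under empirical word frequencies rather than uniform weights, can be sketched directly (a minimal version assuming an embedding matrix E and unigram frequencies p):

```python
import numpy as np

def zipfian_whiten(E, p, eps=1e-8):
    """Frequency-weighted PCA whitening.
    E: (V, d) word embeddings; p: (V,) empirical unigram frequencies, sum 1."""
    mu = p @ E                              # frequency-weighted mean
    X = E - mu                              # center under the Zipfian measure
    cov = (X * p[:, None]).T @ X            # frequency-weighted covariance, (d, d)
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec / np.sqrt(eigval + eps)      # whitening transform
    return X @ W                            # whitened embeddings (identity weighted cov)
```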
Additionally, our theory corroborates that popular natural language processing methods, such as skip-gram negative sampling, WhiteningBERT, and headless language models, work well just because their word embeddings encode the empirical word frequency into the underlying probabilistic model.", "pdf": "https://openreview.net/pdf/6a42a2c4eb32dcf676cf215dd7f9e7cebc27057c.pdf"} {"title": "Real-Time Selection Under General Constraints via Predictive Inference", "url": "https://openreview.net/forum?id=wblxm5zdkE", "detail_url": "https://openreview.net/forum?id=wblxm5zdkE", "authors": "Yuyang Huo,Lin Lu,Haojie Ren,Changliang Zou", "tags": "NIPS 2024,Poster", "abstract": "Real-time decision-making is receiving increasing attention in the big data era. Here, we consider the problem of sample selection in the online setting, where one encounters a possibly infinite sequence of individuals collected over time with covariate information available. The goal is to select samples of interest that are characterized by their unobserved responses until the user-specified stopping time. We derive a new decision rule that enables us to find more preferable samples that meet practical requirements by simultaneously controlling two types of general constraints: individual and interactive constraints, which include the widely utilized False Selection Rate (FSR), cost limitations, and diversity of selected samples. The key elements of our approach involve quantifying the uncertainty of response predictions via predictive inference and addressing individual and interactive constraints in a sequential manner. Theoretical and numerical results demonstrate the effectiveness of the proposed method in controlling both individual and interactive constraints.", "pdf": "https://openreview.net/pdf/0de67725a73ec05f810a39fe0a220b3ca19c7aa0.pdf"} {"title": "EEG2Video: Towards Decoding Dynamic Visual Perception from EEG Signals", "url": "https://openreview.net/forum?id=RfsfRn9OFd", "detail_url": "https://openreview.net/forum?id=RfsfRn9OFd", "authors": "Xuanhao Liu,Yan-Kai Liu,Yansen Wang,Kan Ren,Hanwen Shi,Zilong Wang,Dongsheng Li,Bao-liang Lu,Wei-Long Zheng", "tags": "NIPS 2024,Poster", "abstract": "Our visual experience in daily life is dominated by dynamic change. Decoding such dynamic information from brain activity can enhance the understanding of the brain\u2019s visual processing system. However, previous studies predominantly focus on reconstructing static visual stimuli. In this paper, we explore decoding dynamic visual perception from electroencephalography (EEG), a neuroimaging technique able to record brain activity with high temporal resolution (1000 Hz) for capturing rapid changes in the brain. Our contributions are threefold: Firstly, we develop a large dataset recording signals from 20 subjects while they were watching 1400 dynamic video clips of 40 concepts. This dataset fills the gap caused by the lack of EEG-video pairs. Secondly, we annotate each video clip to investigate the potential for decoding some specific meta information (e.g., color, dynamic, human or not) from EEG. Thirdly, we propose a novel baseline EEG2Video for video reconstruction from EEG signals that better aligns dynamic movements with high temporal resolution brain signals via a Seq2Seq architecture. EEG2Video achieves a 2-way accuracy of 79.8% in semantic classification tasks and 0.256 in structural similarity index (SSIM). Overall, our work takes an important step towards decoding dynamic visual perception from EEG signals.
Our dataset and code will be released soon.", "pdf": "https://openreview.net/pdf/a5ebcd48c768c5e7095143727c967cdbc60cc9f7.pdf"} {"title": "RankUp: Boosting Semi-Supervised Regression with an Auxiliary Ranking Classifier", "url": "https://openreview.net/forum?id=d2lPM1Aczs", "detail_url": "https://openreview.net/forum?id=d2lPM1Aczs", "authors": "Pin-Yen Huang,Szu-Wei Fu,Yu Tsao", "tags": "NIPS 2024,Poster", "abstract": "State-of-the-art (SOTA) semi-supervised learning techniques, such as FixMatch and its variants, have demonstrated impressive performance in classification tasks. However, these methods are not directly applicable to regression tasks. In this paper, we present RankUp, a simple yet effective approach that adapts existing semi-supervised classification techniques to enhance the performance of regression tasks. RankUp achieves this by converting the original regression task into a ranking problem and training it concurrently with the original regression objective. This auxiliary ranking classifier outputs a classification result, thus enabling integration with existing semi-supervised classification methods. Moreover, we introduce regression distribution alignment (RDA), a complementary technique that further enhances RankUp's performance by refining pseudo-labels through distribution alignment. Despite its simplicity, RankUp, with or without RDA, achieves SOTA results across a range of regression benchmarks, including computer vision, audio, and natural language processing tasks. Our code and log data are open-sourced at [https://github.com/pm25/semi-supervised-regression](https://github.com/pm25/semi-supervised-regression).", "pdf": "https://openreview.net/pdf/b1bcf59aa8a1b737994a6dc20b2405baf051d2a1.pdf"} {"title": "G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering", "url": "https://openreview.net/forum?id=MPJ3oXtTZl", "detail_url": "https://openreview.net/forum?id=MPJ3oXtTZl", "authors": "Xiaoxin He,Yijun Tian,Yifei Sun,Nitesh V Chawla,Thomas Laurent,Yann LeCun,Xavier Bresson,Bryan Hooi", "tags": "NIPS 2024,Poster", "abstract": "Given a graph with textual attributes, we enable users to `chat with their graph': that is, to ask questions about the graph using a conversational interface. In response to a user's questions, our method provides textual replies and highlights the relevant parts of the graph. While existing works integrate large language models (LLMs) and graph neural networks (GNNs) in various ways, they mostly focus on either conventional graph tasks (such as node, edge, and graph classification), or on answering simple graph queries on small or synthetic graphs. In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. Toward this goal, we first develop a Graph Question Answering (GraphQA) benchmark with data collected from different tasks. Then, we propose our \\textit{G-Retriever} method, introducing the first retrieval-augmented generation (RAG) approach for general textual graphs, which can be fine-tuned to enhance graph understanding via soft prompting. To resist hallucination and to allow for textual graphs that greatly exceed the LLM's context window size, \\textit{G-Retriever} performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
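A heavily simplified sketch of the retrieval step follows (similarity-derived node prizes with a crude top-k stand-in for the actual Prize-Collecting Steiner Tree optimization, which additionally balances prizes against edge costs):

```python
import numpy as np

def retrieve_subgraph(node_emb, edge_list, q_emb, k=10):
    """Score nodes by cosine similarity to the query embedding ('prizes')
    and keep edges among the top-k nodes. A crude approximation of the
    PCST step, for illustration only."""
    prizes = node_emb @ q_emb / (
        np.linalg.norm(node_emb, axis=1) * np.linalg.norm(q_emb) + 1e-8)
    keep = set(np.argsort(-prizes)[:k])
    sub_edges = [(u, v) for u, v in edge_list if u in keep and v in keep]
    return sorted(keep), sub_edges   # nodes + edges to verbalize for the LLM
```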
Empirical evaluations show that our method outperforms baselines on textual graph tasks from multiple domains, scales well with larger graph sizes, and mitigates hallucination.~\\footnote{Our codes and datasets are available at: \\url{https://github.com/XiaoxinHe/G-Retriever}}", "pdf": "https://openreview.net/pdf/dd5411c5713d592bdad14949720408bf8cf0071f.pdf"} {"title": "Cross-modal Representation Flattening for Multi-modal Domain Generalization", "url": "https://openreview.net/forum?id=UixTytSVOl", "detail_url": "https://openreview.net/forum?id=UixTytSVOl", "authors": "Yunfeng FAN,Wenchao Xu,Haozhao Wang,Song Guo", "tags": "NIPS 2024,Poster", "abstract": "Multi-modal domain generalization (MMDG) requires that models trained on multi-modal source domains can generalize to unseen target distributions with the same modality set. Sharpness-aware minimization (SAM) is an effective technique for traditional uni-modal domain generalization (DG), but it brings limited improvement in MMDG. In this paper, we identify that modality competition and discrepant uni-modal flatness are two main factors that restrict multi-modal generalization. To overcome these challenges, we propose to construct consistent flat loss regions and enhance knowledge exploitation for each modality via cross-modal knowledge transfer. Firstly, we turn to the optimization on representation-space loss landscapes instead of the traditional parameter space, which allows us to build connections between modalities directly. Then, we introduce a novel method to flatten the high-loss region between minima from different modalities by interpolating mixed multi-modal representations. We implement this method by distilling and optimizing generalizable interpolated representations and assigning distinct weights for each modality considering their divergent generalization capabilities. Extensive experiments are performed on two benchmark datasets, EPIC-Kitchens and Human-Animal-Cartoon (HAC), with various modality combinations, demonstrating the effectiveness of our method under multi-source and single-source settings. Our code is open-sourced.", "pdf": "https://openreview.net/pdf/3c4d88560cade1a33eeb195ad6a88cf0aed6be41.pdf"} {"title": "Contextual Multinomial Logit Bandits with General Value Functions", "url": "https://openreview.net/forum?id=2ltOkbo67R", "detail_url": "https://openreview.net/forum?id=2ltOkbo67R", "authors": "Mengxiao Zhang,Haipeng Luo", "tags": "NIPS 2024,Poster", "abstract": "Contextual multinomial logit (MNL) bandits capture many real-world assortment recommendation problems such as online retailing/advertising. However, prior work has only considered (generalized) linear value functions, which greatly limits its applicability. Motivated by this fact, in this work, we consider contextual MNL bandits with a general value function class that contains the ground truth, borrowing ideas from a recent trend of studies on contextual bandits. Specifically, we consider both the stochastic and the adversarial settings, and propose a suite of algorithms, each with a different computation-regret trade-off.
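For readers new to the setting, the standard MNL choice probabilities underlying such bandits are as follows (a textbook formula stated for context, not quoted from the paper):

```latex
% Given assortment S with item values v_i (linear in prior work, a general
% function class in this paper), the user picks item i or nothing:
P(i \mid S) = \frac{e^{v_i}}{1 + \sum_{j \in S} e^{v_j}},
\qquad
P(\text{no purchase} \mid S) = \frac{1}{1 + \sum_{j \in S} e^{v_j}}.
```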
When applied to the linear case, our results are not only the first with no dependence on a certain problem-dependent constant that can be exponentially large, but also enjoy other advantages such as computational efficiency, dimension-free regret bounds, or the ability to handle completely adversarial contexts and rewards.", "pdf": "https://openreview.net/pdf/62b3e93da2fcbedc00755775e2d97d7a753f582b.pdf"} {"title": "Is Multiple Object Tracking a Matter of Specialization?", "url": "https://openreview.net/forum?id=aujnNnIiiM", "detail_url": "https://openreview.net/forum?id=aujnNnIiiM", "authors": "Gianluca Mancusi,Mattia Bernardi,Aniello Panariello,Angelo Porrello,Rita Cucchiara,Simone Calderara", "tags": "NIPS 2024,Poster", "abstract": "End-to-end transformer-based trackers have achieved remarkable performance on most human-related datasets. However, training these trackers in heterogeneous scenarios poses significant challenges, including negative interference - where the model learns conflicting scene-specific parameters - and limited domain generalization, which often necessitates expensive fine-tuning to adapt the models to new domains. In response to these challenges, we introduce Parameter-efficient Scenario-specific Tracking Architecture (PASTA), a novel framework that combines Parameter-Efficient Fine-Tuning (PEFT) and Modular Deep Learning (MDL). Specifically, we define key scenario attributes (e.g., camera viewpoint, lighting condition) and train specialized PEFT modules for each attribute. These expert modules are combined in parameter space, enabling systematic generalization to new domains without increasing inference time. Extensive experiments on MOTSynth, along with zero-shot evaluations on MOT17 and PersonPath22 demonstrate that a neural tracker built from carefully selected modules surpasses its monolithic counterpart. We release models and code.", "pdf": "https://openreview.net/pdf/924c2e1761fa4e3f0fa963e0f3a929825ef2cdb7.pdf"} {"title": "AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models", "url": "https://openreview.net/forum?id=vS5NC7jtCI", "detail_url": "https://openreview.net/forum?id=vS5NC7jtCI", "authors": "Yabin Zhang,Lei Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent research has shown that pre-trained vision-language models are effective at identifying out-of-distribution (OOD) samples by using negative labels as guidance. However, employing consistent negative labels across different OOD datasets often results in semantic misalignments, as these text labels may not accurately reflect the actual space of OOD images. To overcome this issue, we introduce \\textit{adaptive negative proxies}, which are dynamically generated during testing by exploring actual OOD images, to align more closely with the underlying OOD label space and enhance the efficacy of negative proxy guidance. Specifically, our approach utilizes a feature memory bank to selectively cache discriminative features from test images, representing the targeted OOD distribution. This facilitates the creation of proxies that can better align with specific OOD datasets. While task-adaptive proxies average features to reflect the unique characteristics of each dataset, the sample-adaptive proxies weight features based on their similarity to individual test samples, exploring detailed sample-level nuances.
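The proxy construction can be sketched as follows (a minimal illustration; bank admission rules and the final scoring that mixes in static negative labels are omitted, and the temperature is an assumed placeholder):

```python
import torch

class ProxyBank:
    """Cache test-time features; the task-adaptive proxy is their mean and
    the sample-adaptive proxy a similarity-weighted mean."""
    def __init__(self, dim, size=512):
        self.bank = torch.empty(0, dim)
        self.size = size

    def update(self, feat):                     # feat: (d,) L2-normalized feature
        self.bank = torch.cat([self.bank, feat[None]])[-self.size:]

    def task_proxy(self):                       # dataset-level characteristics
        return torch.nn.functional.normalize(self.bank.mean(0), dim=0)

    def sample_proxy(self, feat, tau=0.07):     # per-sample weighting
        w = torch.softmax(self.bank @ feat / tau, dim=0)
        return torch.nn.functional.normalize(w @ self.bank, dim=0)
```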
The final score for identifying OOD samples integrates static negative labels with our proposed adaptive proxies, effectively combining textual and visual knowledge for enhanced performance. Our method is training-free and annotation-free, and it maintains fast testing speed. Extensive experiments across various benchmarks demonstrate the effectiveness of our approach, abbreviated as AdaNeg. Notably, on the large-scale ImageNet benchmark, our AdaNeg significantly outperforms existing methods, with a 2.45\\% increase in AUROC and a 6.48\\% reduction in FPR95. Codes are available at \\url{https://github.com/YBZh/OpenOOD-VLM}.", "pdf": "https://openreview.net/pdf/7d0df0f32fd41d7ace28bcfe957e1aeb0ad53117.pdf"} {"title": "Coarse-to-Fine Concept Bottleneck Models", "url": "https://openreview.net/forum?id=RMdnTnffou", "detail_url": "https://openreview.net/forum?id=RMdnTnffou", "authors": "Konstantinos P. Panousis,Dino Ienco,Diego Marcos", "tags": "NIPS 2024,Poster", "abstract": "Deep learning algorithms have recently gained significant attention due to their impressive performance. However, their high complexity and un-interpretable mode of operation hinders their confident deployment in real-world safety-critical tasks. This work targets ante hoc interpretability, and specifically Concept Bottleneck Models (CBMs). Our goal is to design a framework that admits a highly interpretable decision making process with respect to human understandable concepts, on two levels of granularity. To this end, we propose a novel two-level concept discovery formulation leveraging: (i) recent advances in vision-language models, and (ii) an innovative formulation for coarse-to-fine concept selection via data-driven and sparsity inducing Bayesian arguments. Within this framework, concept information does not solely rely on the similarity between the whole image and general unstructured concepts; instead, we introduce the notion of concept hierarchy to uncover and exploit more granular concept information residing in patch-specific regions of the image scene. As we experimentally show, the proposed construction not only outperforms recent CBM approaches, but also yields a principled framework towards interpretability.", "pdf": "https://openreview.net/pdf/1290775eeb1b185f6b24c382791d515f1f2c3066.pdf"} {"title": "HuRef: HUman-REadable Fingerprint for Large Language Models", "url": "https://openreview.net/forum?id=RlZgnEZsOH", "detail_url": "https://openreview.net/forum?id=RlZgnEZsOH", "authors": "Boyi Zeng,Lizheng Wang,Yuncong Hu,Yi Xu,Chenghu Zhou,Xinbing Wang,Yu Yu,Zhouhan Lin", "tags": "NIPS 2024,Poster", "abstract": "Protecting the copyright of large language models (LLMs) has become crucial due to their resource-intensive training and accompanying carefully designed licenses. However, identifying the original base model of an LLM is challenging due to potential parameter alterations.
In this\nstudy, we introduce HuRef, a human-readable fingerprint for LLMs that uniquely identifies the base model without interfering with training or exposing model parameters to the public.\nWe first observe that the vector direction of LLM parameters remains stable after the model has converged during pretraining, \nwith negligible perturbations through subsequent training steps, including continued pretraining, supervised fine-tuning, and RLHF, \nwhich makes it a sufficient condition\nto identify the base model.\nThe necessity is validated by continuing to train an LLM with an extra term that drives the parameter direction away, which damages the model. However, this direction is vulnerable to simple attacks like dimension permutation or matrix rotation, which significantly change it without affecting performance. To address this, leveraging the Transformer structure, we systematically analyze potential attacks and define three invariant terms that identify an LLM's base model. \nDue to the potential risk of information leakage, we cannot publish invariant terms directly. Instead, we map them to a Gaussian vector using an encoder, then convert it into a natural image using StyleGAN2, and finally publish the image. In our black-box setting, all fingerprinting steps are internally conducted by the LLM owners. To ensure the published fingerprints are honestly generated, we introduce a Zero-Knowledge Proof (ZKP).\nExperimental results across various LLMs demonstrate the effectiveness of our method. The code is available at https://github.com/LUMIA-Group/HuRef.", "pdf": "https://openreview.net/pdf/0fbfe6ce7ca8d2354ae725aa0d710f987e51932b.pdf"} {"title": "ContactField: Implicit Field Representation for Multi-Person Interaction Geometry", "url": "https://openreview.net/forum?id=7su2GfqvmN", "detail_url": "https://openreview.net/forum?id=7su2GfqvmN", "authors": "Hansol Lee,Tackgeun You,Hansoo Park,Woohyeon Shim,Sanghyeon Kim,Hwasup Lim", "tags": "NIPS 2024,Poster", "abstract": "We introduce a novel implicit field representation tailored for multi-person interaction geometry in 3D spaces, capable of simultaneously reconstructing occupancy, instance identification (ID) tags, and contact fields. Volumetric representation of interacting human bodies presents significant challenges, including inaccurately captured geometries, varying degrees of occlusion, and data scarcity. Existing multi-view methods, which either reconstruct each subject in isolation or merge nearby 3D surfaces into a single unified mesh, often fail to capture the intricate geometry between interacting bodies and rely on datasets with many views and small groups of people for training. Our approach utilizes an implicit representation for interaction geometry contextualized by a multi-view local-global feature module. This module adeptly aggregates both local and global information from individual views and interacting groups, enabling precise modeling of close physical interactions through dense point retrieval in small areas, supported by the implicit fields. Furthermore, we develop a synthetic dataset encompassing diverse multi-person interaction scenarios to enhance the robustness of our geometry estimation. The experimental results demonstrate the superiority of our method in accurately reconstructing human geometries and ID tags within three-dimensional spaces, outperforming conventional multi-view techniques.
Notably, our method facilitates unsupervised estimation of contact points without the need for training data with explicit contact supervision.", "pdf": "https://openreview.net/pdf/3503c030ccbf28bb3177944f8cd9e90c4bed6a41.pdf"} {"title": "How does PDE order affect the convergence of PINNs?", "url": "https://openreview.net/forum?id=8K6ul0hgtC", "detail_url": "https://openreview.net/forum?id=8K6ul0hgtC", "authors": "Chang hoon Song,Yesom Park,Myungjoo Kang", "tags": "NIPS 2024,Poster", "abstract": "This paper analyzes the inverse relationship between the order of partial differential equations (PDEs) and the convergence of gradient descent in physics-informed neural networks (PINNs) with powers of the ReLU activation. The integration of the PDE into the loss function endows PINNs with the distinctive requirement of computing derivatives of the model up to the PDE order. Although it has been empirically observed that PINNs encounter difficulties in convergence when dealing with high-order or high-dimensional PDEs, a comprehensive theoretical understanding of this issue remains elusive. This paper offers theoretical support for this pathological behavior by demonstrating that the gradient flow converges with lower probability when the PDE order is higher. In addition, we show that PINNs struggle to address high-dimensional problems because the influence of dimensionality on convergence is exacerbated with increasing PDE order. To address the pathology, we use the insights garnered to consider variable splitting, which decomposes the high-order PDE into a system of lower-order PDEs. We prove that by reducing the differential order, the gradient flow of variable splitting is more likely to converge to the global optimum. Furthermore, we present numerical experiments in support of our theoretical claims.", "pdf": "https://openreview.net/pdf/ac758c2f9b6262ffc8c24b4ec35b1688e4f22399.pdf"} {"title": "Warped Diffusion: Solving Video Inverse Problems with Image Diffusion Models", "url": "https://openreview.net/forum?id=LH94zPv8cu", "detail_url": "https://openreview.net/forum?id=LH94zPv8cu", "authors": "Giannis Daras,Weili Nie,Karsten Kreis,Alex Dimakis,Morteza Mardani,Nikola Borislavov Kovachki,Arash Vahdat", "tags": "NIPS 2024,Poster", "abstract": "Using image models naively for solving inverse video problems often suffers from flickering, texture-sticking, and temporal inconsistency in generated videos. To tackle these problems, in this paper, we view frames as continuous functions in the 2D space, and videos as a sequence of continuous warping transformations between different frames. This perspective allows us to train function space diffusion models only on **images** and utilize them to solve temporally correlated inverse problems. The function space diffusion models need to be equivariant with respect to the underlying spatial transformations. To ensure temporal consistency, we introduce a simple post-hoc test-time guidance towards (self)-equivariant solutions. Our method allows us to deploy state-of-the-art latent diffusion models such as Stable Diffusion XL to solve video inverse problems. We demonstrate the effectiveness of our method for video inpainting and $8\\times$ video super-resolution, outperforming existing techniques based on noise transformations.
We provide generated video results at the following URL: https://giannisdaras.github.io/warped_diffusion.github.io/.", "pdf": "https://openreview.net/pdf/986e15731ceda0413dde4ca727d0e5021a2ed441.pdf"} {"title": "Online Composite Optimization Between Stochastic and Adversarial Environments", "url": "https://openreview.net/forum?id=MbEB5aKmMK", "detail_url": "https://openreview.net/forum?id=MbEB5aKmMK", "authors": "Yibo Wang,Sijia Chen,Wei Jiang,Wenhao Yang,Yuanyu Wan,Lijun Zhang", "tags": "NIPS 2024,Poster", "abstract": "We study online composite optimization under the Stochastically Extended Adversarial (SEA) model. Specifically, each loss function consists of two parts: a fixed non-smooth and convex regularizer, and a time-varying function which can be chosen either stochastically, adversarially, or in a manner that interpolates between the two extremes. In this setting, we show that for smooth and convex time-varying functions, optimistic composite mirror descent (OptCMD) can obtain an $\\mathcal{O}(\\sqrt{\\sigma_{1:T}^2} + \\sqrt{\\Sigma_{1:T}^2})$ regret bound, where $\\sigma_{1:T}^2$ and $\\Sigma_{1:T}^2$ denote the cumulative stochastic variance and the cumulative adversarial variation of time-varying functions, respectively. For smooth and strongly convex time-varying functions, we establish an $\\mathcal{O}((\\sigma_{\\max}^2 + \\Sigma_{\\max}^2)\\log(\\sigma_{1:T}^2 + \\Sigma_{1:T}^2))$ regret bound, where $\\sigma_{\\max}^2$ and $\\Sigma_{\\max}^2$ denote the maximal stochastic variance and the maximal adversarial variation, respectively. For smooth and exp-concave time-varying functions, we achieve an $\\mathcal{O}(d \\log (\\sigma_{1:T}^2 + \\Sigma_{1:T}^2))$ bound where $d$ denotes the dimensionality. Moreover, to deal with the unknown function type in practical problems, we propose a multi-level \\textit{universal} algorithm that is able to achieve the desirable bounds for three types of time-varying functions simultaneously. It should be noted that all our findings match existing bounds for the SEA model without the regularizer, which implies that there is \\textit{no price} in regret bounds for the benefits gained from the regularizer.", "pdf": "https://openreview.net/pdf/df215a58976f42241695e977d0f08dd24ac99778.pdf"} {"title": "Confidence Regulation Neurons in Language Models", "url": "https://openreview.net/forum?id=0og7nmvDbe", "detail_url": "https://openreview.net/forum?id=0og7nmvDbe", "authors": "Alessandro Stolfo,Ben Peng Wu,Wes Gurnee,Yonatan Belinkov,Xingyi Song,Mrinmaya Sachan,Neel Nanda", "tags": "NIPS 2024,Poster", "abstract": "Despite their widespread use, the mechanisms by which large language models (LLMs) represent and regulate uncertainty in next-token predictions remain largely unexplored. This study investigates two critical components believed to influence this uncertainty: the recently discovered entropy neurons and a new set of components that we term token frequency neurons. Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits. Our work shows that entropy neurons operate by writing onto an \\textit{unembedding null space}, allowing them to impact the residual stream norm with minimal direct effect on the logits themselves. We observe the presence of entropy neurons across a range of models, up to 7 billion parameters.
On the other hand, token frequency neurons, which we discover and describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution. Finally, we present a detailed case study of a setting where entropy neurons actively manage confidence: induction, i.e., detecting and continuing repeated subsequences.", "pdf": "https://openreview.net/pdf/b77a045af8d675ba687a62955d97da27a1bb15da.pdf"} {"title": "Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models", "url": "https://openreview.net/forum?id=T1lFrYwtf7", "detail_url": "https://openreview.net/forum?id=T1lFrYwtf7", "authors": "Minki Kang,Sung Ju Hwang,Gibbeum Lee,Jaewoong Cho", "tags": "NIPS 2024,Poster", "abstract": "As Large Language Models (LLMs) are increasingly deployed in specialized domains with continuously evolving knowledge, the need for timely and precise knowledge injection has become essential. Fine-tuning with paraphrased data is a common approach to enhance knowledge injection, yet it faces two significant challenges: high computational costs due to repetitive external model usage and limited sample diversity. To this end, we introduce LaPael, a latent-level paraphrasing method that applies input-dependent noise to early LLM layers. This approach enables diverse and semantically consistent augmentations directly within the model. Furthermore, it eliminates the recurring costs of paraphrase generation for each knowledge update. Our extensive experiments on question-answering benchmarks demonstrate that LaPael improves knowledge injection over standard fine-tuning and existing noise-based approaches. Additionally, combining LaPael with data-level paraphrasing further enhances performance.", "pdf": "https://openreview.net/pdf/314dc5597f192b8861f0ea63086dc6b896b95d87.pdf"} {"title": "Rad-NeRF: Ray-decoupled Training of Neural Radiance Field", "url": "https://openreview.net/forum?id=nBrnfYeKf9", "detail_url": "https://openreview.net/forum?id=nBrnfYeKf9", "authors": "Lidong Guo,Xuefei Ning,Yonggan Fu,Tianchen Zhao,Zhuoliang Kang,Jincheng Yu,Yingyan Celine Lin,Yu Wang", "tags": "NIPS 2024,Poster", "abstract": "Although the neural radiance field (NeRF) exhibits high-fidelity visualization on the rendering task, it still suffers from rendering defects, especially in complex scenes. In this paper, we delve into the reason for the unsatisfactory performance and conjecture that it comes from interference in the training process. Due to occlusions in complex scenes, a 3D point may be invisible to some rays. On such a point, training with those rays that do not contain valid information about the point might interfere with the NeRF training. Based on the above intuition, we decouple the training process of NeRF in the ray dimension softly and propose a Ray-decoupled Training Framework for neural rendering (Rad-NeRF). Specifically, we construct an ensemble of sub-NeRFs and train a soft gate module to assign the gating scores to these sub-NeRFs based on specific rays. The gate module is jointly optimized with the sub-NeRF ensemble to learn the preference of sub-NeRFs for different rays automatically. Furthermore, we introduce depth-based mutual learning to enhance the rendering consistency among multiple sub-NeRFs and mitigate the depth ambiguity.
Experiments on five datasets demonstrate that Rad-NeRF can enhance the rendering performance across a wide range of scene types compared with existing single-NeRF and multi-NeRF methods. With only 0.2% extra parameters, Rad-NeRF improves rendering performance by up to 1.5 dB. Code is available at https://github.com/thu-nics/Rad-NeRF.", "pdf": "https://openreview.net/pdf/156d15e45afe9ab5b87a685535ea6e3d6c612495.pdf"} {"title": "Proportional Fairness in Clustering: A Social Choice Perspective", "url": "https://openreview.net/forum?id=KsLX5pFpOs", "detail_url": "https://openreview.net/forum?id=KsLX5pFpOs", "authors": "Leon Kellerhals,Jannik Peters", "tags": "NIPS 2024,Poster", "abstract": "We study the proportional clustering problem of Chen et al. (ICML'19) and relate it to the area of multiwinner voting in computational social choice. We show that any clustering satisfying a weak proportionality notion of Brill and Peters (EC'23) simultaneously obtains the best known approximations not only to the proportional fairness notion of Chen et al., but also to individual fairness (Jung et al., FORC'20) and the ``core'' (Li et al., ICML'21). In fact, we show that any approximation to proportional fairness is also an approximation to individual fairness and vice versa. Finally, we also study stronger notions of proportional representation, in which deviations happen not only to single but to multiple candidate centers, and show that stronger proportionality notions of Brill and Peters imply approximations to these stronger guarantees.", "pdf": "https://openreview.net/pdf/adc0d08524be32dd949b97fe091c5a9823c7912d.pdf"} {"title": "Training-Free Open-Ended Object Detection and Segmentation via Attention as Prompts", "url": "https://openreview.net/forum?id=XXVfj4P8nr", "detail_url": "https://openreview.net/forum?id=XXVfj4P8nr", "authors": "Zhiwei Lin,Yongtao Wang,Zhi Tang", "tags": "NIPS 2024,Poster", "abstract": "Existing perception models achieve great success by learning from large amounts of labeled data, but they still struggle with open-world scenarios. To alleviate this issue, researchers have introduced open-set perception tasks to detect or segment objects unseen in the training set. However, these models require predefined object categories as inputs during inference, which are not available in real-world scenarios. Recently, researchers have posed a new and more practical problem, i.e., open-ended object detection, which discovers unseen objects without any object categories as inputs. In this paper, we present VL-SAM, a training-free framework that combines the generalized object recognition model (i.e., Vision-Language Model) with the generalized object localization model (i.e., Segment-Anything Model), to address the open-ended object detection and segmentation task. Without additional training, we connect these two generalized models with attention maps as the prompts. Specifically, we design an attention map generation module by employing head aggregation and a regularized attention flow to aggregate and propagate attention maps across all heads and layers in VLM, yielding high-quality attention maps. Then, we iteratively sample positive and negative points from the attention maps with a prompt generation module and send the sampled points to SAM to segment corresponding objects. Experimental results on the long-tail instance segmentation dataset (LVIS) show that our method surpasses the previous open-ended method on the object detection task and can provide additional instance segmentation masks.
In addition, VL-SAM achieves favorable performance on the corner case object detection dataset (CODA), demonstrating the effectiveness of VL-SAM in real-world applications. Moreover, VL-SAM exhibits good model generalization and can incorporate various VLMs and SAMs.", "pdf": "https://openreview.net/pdf/99174af4da8c035793b9234cade5fba284ef778b.pdf"} {"title": "Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations", "url": "https://openreview.net/forum?id=09RKw0vXjR", "detail_url": "https://openreview.net/forum?id=09RKw0vXjR", "authors": "Yasutoshi Ida,Sekitoshi Kanai,Atsutoshi Kumagai,Tomoharu Iwata,Yasuhiro Fujiwara", "tags": "NIPS 2024,Poster", "abstract": "We accelerate the iterative hard thresholding (IHT) method, which finds \\(k\\) important elements from a parameter vector in a linear regression model. The plain IHT repeatedly updates the parameter vector during the optimization, and computing gradients is the main bottleneck. Our method safely prunes unnecessary gradient computations to reduce the processing time. The main idea is to efficiently construct a candidate set, which contains \\(k\\) important elements in the parameter vector, for each iteration. Specifically, before computing the gradients, we prune unnecessary elements in the parameter vector for the candidate set by utilizing upper bounds on absolute values of the parameters. Our method guarantees the same optimization results as the plain IHT because our pruning is safe. Experiments show that our method is up to 73 times faster than the plain IHT without degrading accuracy.", "pdf": "https://openreview.net/pdf/a97ce714f40bb23dc2626c2363a019d73add26da.pdf"} {"title": "Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments", "url": "https://openreview.net/forum?id=2cFUYnNL1m", "detail_url": "https://openreview.net/forum?id=2cFUYnNL1m", "authors": "Mixue Xie,Shuang Li,Binhui Xie,Chi Harold Liu,Jian Liang,Zixun Sun,Ke Feng,Chengwei Zhu", "tags": "NIPS 2024,Poster", "abstract": "Enabling deep models to generalize in non-stationary environments is vital for real-world machine learning, as data distributions are often found to continually change. Recently, evolving domain generalization (EDG) has emerged to tackle domain generalization in a time-varying system, where the domain gradually evolves over time in an underlying continuous structure. Nevertheless, it typically assumes that multiple source domains are simultaneously available. It remains an open problem to address EDG in the domain-incremental setting, where source domains are non-static and arrive sequentially to mimic the evolution of training domains. To this end, we propose Weight Diffusion (W-Diff), a novel framework that utilizes the conditional diffusion model in the parameter space to learn the evolving pattern of classifiers during the domain-incremental training process. Specifically, the diffusion model is conditioned on the classifier weights of a historical domain (regarded as the reference point) and the prototypes of the current domain, to learn the evolution from the reference point to the classifier weights of the current domain (regarded as the anchor point). In addition, a domain-shared feature encoder is learned by enforcing prediction consistency among multiple classifiers, so as to mitigate the overfitting problem and restrict the evolving pattern to be reflected in the classifier as much as possible.
During inference, we adopt an ensemble of a large number of target-domain-customized classifiers, which are cheaply obtained via the conditional diffusion model, for robust prediction. Comprehensive experiments on both synthetic and real-world datasets show the superior generalization performance of W-Diff on unseen future domains.", "pdf": "https://openreview.net/pdf/a91611475a0cb860eb07bc5e0074c428b7cdb839.pdf"} {"title": "Query-Efficient Correlation Clustering with Noisy Oracle", "url": "https://openreview.net/forum?id=WRCFuoiz1h", "detail_url": "https://openreview.net/forum?id=WRCFuoiz1h", "authors": "Yuko Kuroki,Atsushi Miyauchi,Francesco Bonchi,Wei Chen", "tags": "NIPS 2024,Poster", "abstract": "We study a general clustering setting in which we have $n$ elements to be clustered, and we aim to perform as few queries as possible to an oracle that returns a noisy sample of the weighted similarity between two elements. Our setting encompasses many application domains in which the similarity function is costly to compute and inherently noisy. We introduce two novel formulations of online learning problems rooted in the paradigm of Pure Exploration in Combinatorial Multi-Armed Bandits (PE-CMAB): fixed confidence and fixed budget settings. For both settings, we design algorithms that combine a sampling strategy with a classic approximation algorithm for correlation clustering and study their theoretical guarantees. Our results are the first examples of polynomial-time algorithms that work for the case of PE-CMAB in which the underlying offline optimization problem is NP-hard.", "pdf": "https://openreview.net/pdf/0dd592a6736279a160968fb66094f2e422dc026c.pdf"} {"title": "EMVP: Embracing Visual Foundation Model for Visual Place Recognition with Centroid-Free Probing", "url": "https://openreview.net/forum?id=V6w7keoTqn", "detail_url": "https://openreview.net/forum?id=V6w7keoTqn", "authors": "Qibo Qiu,Shun Zhang,Haiming Gao,Honghui Yang,Haochao Ying,Wenxiao Wang,Xiaofei He", "tags": "NIPS 2024,Poster", "abstract": "Visual Place Recognition (VPR) is essential for mobile robots as it enables them to retrieve, from a database, the images closest to their current location. The progress of Visual Foundation Models (VFMs) has significantly advanced VPR by capturing representative descriptors in images. However, existing fine-tuning efforts for VFMs often overlook the crucial role of probing in effectively adapting these descriptors for improved image representation. In this paper, we propose the Centroid-Free Probing (CFP) stage, making novel use of second-order features to exploit descriptors from VFMs more effectively. Moreover, to control the preservation of task-specific information adaptively based on the context of the VPR, we introduce the Dynamic Power Normalization (DPN) module in both the recalibration and CFP stages, forming a novel Parameter-Efficient Fine-Tuning (PEFT) pipeline (EMVP) tailored for the VPR task. Extensive experiments demonstrate the superiority of the proposed CFP over existing probing methods. Moreover, the EMVP pipeline can further enhance fine-tuning performance in terms of accuracy and efficiency. Specifically, it achieves 93.9\\%, 96.5\\%, and 94.6\\% Recall@1 on the MSLS Validation, Pitts250k-test, and SPED datasets, respectively, while saving 64.3\\% of trainable parameters compared with the existing SOTA PEFT method.", "pdf": "https://openreview.net/pdf/ba161d12803881f620c602fb54d33c2a0e5a14a8.pdf"} {"title": "Can an AI Agent Safely Run a Government?
Existence of Probably Approximately Aligned Policies", "url": "https://openreview.net/forum?id=xM5m7J6Lbl", "detail_url": "https://openreview.net/forum?id=xM5m7J6Lbl", "authors": "Frédéric Berdoz,Roger Wattenhofer", "tags": "NIPS 2024,Poster", "abstract": "While autonomous agents often surpass humans in their ability to handle vast and complex data, their potential misalignment (i.e., lack of transparency regarding their true objective) has thus far hindered their use in critical applications such as social decision processes. More importantly, existing alignment methods provide no formal guarantees on the safety of such models. Drawing from utility and social choice theory, we provide a novel quantitative definition of alignment in the context of social decision-making. Building on this definition, we introduce probably approximately aligned (i.e., near-optimal) policies, and we derive a sufficient condition for their existence. Lastly, recognizing the practical difficulty of satisfying this condition, we introduce the relaxed concept of safe (i.e., nondestructive) policies, and we propose a simple yet robust method to safeguard the black-box policy of any autonomous agent, ensuring all its actions are verifiably safe for society.", "pdf": "https://openreview.net/pdf/bf68ea6396e873fe823317adb57a897d04b26805.pdf"} {"title": "An Information Theoretic Perspective on Conformal Prediction", "url": "https://openreview.net/forum?id=gKLgY3m9zj", "detail_url": "https://openreview.net/forum?id=gKLgY3m9zj", "authors": "Alvaro Correia,Fabio Valerio Massoli,Christos Louizos,Arash Behboodi", "tags": "NIPS 2024,Poster", "abstract": "Conformal Prediction (CP) is a distribution-free uncertainty estimation framework that constructs prediction sets guaranteed to contain the true answer with a user-specified probability. Intuitively, the size of the prediction set encodes a general notion of uncertainty, with larger sets associated with higher degrees of uncertainty. In this work, we leverage information theory to connect conformal prediction to other notions of uncertainty. More precisely, we prove three different ways to upper bound the intrinsic uncertainty, as described by the conditional entropy of the target variable given the inputs, by combining CP with information theoretical inequalities. Moreover, we demonstrate two direct and useful applications of such connection between conformal prediction and information theory: (i) more principled and effective conformal training objectives that generalize previous approaches and enable end-to-end training of machine learning models from scratch, and (ii) a natural mechanism to incorporate side information into conformal prediction.
We empirically validate both applications in centralized and federated learning settings, showing that our theoretical results translate into lower inefficiency (average prediction set size) for popular CP methods.", "pdf": "https://openreview.net/pdf/00277a7cce5afc55a23a674f830064833d5ae4dd.pdf"} {"title": "Gaussian Approximation and Multiplier Bootstrap for Polyak-Ruppert Averaged Linear Stochastic Approximation with Applications to TD Learning", "url": "https://openreview.net/forum?id=S0Ci1AsJL5", "detail_url": "https://openreview.net/forum?id=S0Ci1AsJL5", "authors": "Sergey Samsonov,Eric Moulines,Qi-Man Shao,Zhuo-Song Zhang,Alexey Naumov", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we obtain the Berry–Esseen bound for multivariate normal approximation for the Polyak-Ruppert averaged iterates of the linear stochastic approximation (LSA) algorithm with decreasing step size. Moreover, we prove the non-asymptotic validity of the confidence intervals for parameter estimation with LSA based on multiplier bootstrap. This procedure updates the LSA estimate together with a set of randomly perturbed LSA estimates upon the arrival of subsequent observations. We illustrate our findings in the setting of temporal difference learning with linear function approximation.", "pdf": "https://openreview.net/pdf/6c80e1182fe1d982f739c2785e61bde580f3ebbd.pdf"} {"title": "GVKF: Gaussian Voxel Kernel Functions for Highly Efficient Surface Reconstruction in Open Scenes", "url": "https://openreview.net/forum?id=DQD0DNRjxk", "detail_url": "https://openreview.net/forum?id=DQD0DNRjxk", "authors": "Gaochao Song,Chong Cheng,Hao Wang", "tags": "NIPS 2024,Poster", "abstract": "In this paper we present a novel method for efficient and effective 3D surface reconstruction in open scenes. Existing Neural Radiance Fields (NeRF) based works typically require extensive training and rendering time due to the adopted implicit representations. In contrast, 3D Gaussian splatting (3DGS) uses an explicit and discrete representation, hence the reconstructed surface is built from a huge number of Gaussian primitives, which leads to excessive memory consumption and rough surface details in sparse Gaussian areas. To address these issues, we propose Gaussian Voxel Kernel Functions (GVKF), which establish a continuous scene representation based on discrete 3DGS through kernel regression. The GVKF integrates fast 3DGS rasterization and highly effective scene implicit representations, achieving high-fidelity open scene surface reconstruction. Experiments on challenging scene datasets demonstrate the efficiency and effectiveness of our proposed GVKF, featuring high reconstruction quality, real-time rendering speed, and significant savings in storage and training memory consumption.", "pdf": "https://openreview.net/pdf/d5b0d3f6a473cbb1b1de7313378d329ebaf6ceea.pdf"} {"title": "How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective", "url": "https://openreview.net/forum?id=IAAPhOLhcX", "detail_url": "https://openreview.net/forum?id=IAAPhOLhcX", "authors": "Qiaozhe Zhang,Ruijie ZHANG,Jun Sun,Yingzhuang Liu", "tags": "NIPS 2024,Poster", "abstract": "Network pruning is a commonly used measure to alleviate the storage and computational burden of deep neural networks. However, a characterization of the fundamental limit of network pruning is still lacking. To close the gap, in this work we take a first-principles approach, i.e.,
we directly impose the sparsity constraint on the loss function and leverage the framework of *statistical dimension* in convex geometry, which allows us to characterize the sharp phase transition point, i.e., the fundamental limit of the pruning ratio. Through this fundamental limit, we identify two key factors that determine the pruning ratio limit, namely, *weight magnitude* and *network flatness*. Generally speaking, the flatter the loss landscape or the smaller the weight magnitude, the smaller the pruning ratio. Moreover, we provide efficient countermeasures to address the challenges in the computation of the pruning limit, which involves accurate spectrum estimation of a large-scale and non-positive Hessian matrix. Furthermore, through the lens of the pruning ratio threshold, we can provide rigorous interpretations of several heuristics in existing pruning algorithms. Extensive experiments demonstrate that our theoretical pruning ratio threshold coincides very well with the empirical results. All codes are available at: https://anonymous.4open.science/r/Global-One-shot-Pruning-BC7B", "pdf": "https://openreview.net/pdf/c9bd2e43865cf1531a7e1d76f1e894ac0f35f2f3.pdf"} {"title": "Enhancing Multiple Dimensions of Trustworthiness in LLMs via Sparse Activation Control", "url": "https://openreview.net/forum?id=aSkckaNxnO", "detail_url": "https://openreview.net/forum?id=aSkckaNxnO", "authors": "Yuxin Xiao,Chaoqun Wan,Yonggang Zhang,Wenxiao Wang,Binbin Lin,Xiaofei He,Xu Shen,Jieping Ye", "tags": "NIPS 2024,Poster", "abstract": "As the development and application of Large Language Models (LLMs) continue to advance rapidly, enhancing their trustworthiness and aligning them with human preferences has become a critical area of research. Traditional methods rely heavily on extensive data for Reinforcement Learning from Human Feedback (RLHF), but representation engineering offers a new, training-free approach. This technique leverages semantic features to control the representation of LLM's intermediate hidden states, enabling the model to meet specific requirements such as increased honesty or heightened safety awareness. However, a significant challenge arises when attempting to fulfill multiple requirements simultaneously. It proves difficult to encode various semantic contents, like honesty and safety, into a singular semantic feature, restricting its practicality. In this work, we address this challenge through Sparse Activation Control. By delving into the intrinsic mechanisms of LLMs, we manage to identify and pinpoint modules that are closely related to specific tasks within the model, i.e. attention heads. These heads display sparse characteristics that allow for near-independent control over different tasks. Our experiments, conducted on the open-source Llama series models, have yielded encouraging results.
The models were able to align with human preferences on issues of safety, factualness, and bias concurrently.", "pdf": "https://openreview.net/pdf/0d27acf6c9521c10a7ebf4985c40b51abd290dd8.pdf"} {"title": "On provable privacy vulnerabilities of graph representations", "url": "https://openreview.net/forum?id=LSqDcfX3xU", "detail_url": "https://openreview.net/forum?id=LSqDcfX3xU", "authors": "Ruofan Wu,Guanhua Fang,Mingyang Zhang,Qiying Pan,Tengfei LIU,Weiqiang Wang", "tags": "NIPS 2024,Poster", "abstract": "Graph representation learning (GRL) is critical for extracting insights from complex network structures, but it also raises security concerns due to potential privacy vulnerabilities in these representations. This paper investigates the structural vulnerabilities in graph neural models where sensitive topological information can be inferred through edge reconstruction attacks. Our research primarily addresses the theoretical underpinnings of similarity-based edge reconstruction attacks (SERA), furnishing a non-asymptotic analysis of their reconstruction capacities. Moreover, we present empirical corroboration indicating that such attacks can perfectly reconstruct sparse graphs as graph size increases. Conversely, we establish that sparsity is a critical factor for SERA's effectiveness, as demonstrated through analysis and experiments on (dense) stochastic block models. Finally, we explore the resilience of private graph representations produced via the noisy aggregation (NAG) mechanism against SERA. Through theoretical analysis and empirical assessments, we affirm the mitigation of SERA using NAG. In parallel, we also empirically delineate instances in which SERA succeeds and instances in which it fails as an instrument for elucidating the trade-off between privacy and utility.", "pdf": "https://openreview.net/pdf/9c519528c0fdf4fbbe6f707f80b913d9a6ce3505.pdf"} {"title": "LLaMo: Large Language Model-based Molecular Graph Assistant", "url": "https://openreview.net/forum?id=WKTNdU155n", "detail_url": "https://openreview.net/forum?id=WKTNdU155n", "authors": "Jinyoung Park,Minseong Bae,Dohwan Ko,Hyunwoo J. Kim", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have demonstrated remarkable generalization and instruction-following capabilities with instruction tuning. The advancements in LLMs and instruction tuning have led to the development of Large Vision-Language Models (LVLMs). However, the competency of LLMs and instruction tuning has been less explored in the molecular domain. Thus, we propose LLaMo: Large Language Model-based Molecular graph assistant, which is an end-to-end trained large molecular graph-language model. To bridge the discrepancy between the language and graph modalities, we present the multi-level graph projector that transforms graph representations into graph tokens by abstracting the output representations of each GNN layer and motif representations with the cross-attention mechanism. We also introduce machine-generated molecular graph instruction data to instruction-tune the large molecular graph-language model for general-purpose molecule and language understanding. Our extensive experiments demonstrate that LLaMo shows the best performance on diverse tasks, such as molecular description generation, property prediction, and IUPAC name prediction.
The code of LLaMo is available at https://github.com/mlvlab/LLaMo.", "pdf": "https://openreview.net/pdf/8f626327840fdc505e7c33ae594771b700d8dbcc.pdf"} {"title": "This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian Optimization", "url": "https://openreview.net/forum?id=kN7GTUss0l", "detail_url": "https://openreview.net/forum?id=kN7GTUss0l", "authors": "Anthony Bardou,Patrick Thiran,Giovanni Ranieri", "tags": "NIPS 2024,Poster", "abstract": "Bayesian Optimization (BO) has proven to be very successful at optimizing a static, noisy, costly-to-evaluate black-box function $f : \\mathcal{S} \\to \\mathbb{R}$. However, optimizing a black-box which is also a function of time (*i.e.*, a *dynamic* function) $f : \\mathcal{S} \\times \\mathcal{T} \\to \\mathbb{R}$ remains a challenge, since a dynamic Bayesian Optimization (DBO) algorithm has to keep track of the optimum over time. This changes the nature of the optimization problem in at least three aspects: (i) querying an arbitrary point in $\\mathcal{S} \\times \\mathcal{T}$ is impossible, (ii) past observations become less and less relevant for keeping track of the optimum as time goes by and (iii) the DBO algorithm must have a high sampling frequency so it can collect enough relevant observations to keep track of the optimum through time. In this paper, we design a Wasserstein distance-based criterion able to quantify the relevancy of an observation with respect to future predictions. Then, we leverage this criterion to build W-DBO, a DBO algorithm able to remove irrelevant observations from its dataset on the fly, thus maintaining simultaneously a good predictive performance and a high sampling frequency, even in continuous-time optimization tasks with unknown horizon. Numerical experiments establish the superiority of W-DBO, which outperforms state-of-the-art methods by a comfortable margin.", "pdf": "https://openreview.net/pdf/1c922085c278e2badc969eaf12a8be24ef8316a2.pdf"} {"title": "On Causal Discovery in the Presence of Deterministic Relations", "url": "https://openreview.net/forum?id=pfvcsgFrJ6", "detail_url": "https://openreview.net/forum?id=pfvcsgFrJ6", "authors": "Loka Li,Haoyue Dai,Hanin Al Ghothani,Biwei Huang,Jiji Zhang,Shahar Harel,Isaac Bentwich,Guangyi Chen,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Many causal discovery methods typically rely on the assumption of independent noise, yet real-life situations often involve deterministic relationships. In these cases, observed variables are represented as deterministic functions of their parental variables without noise. When determinism is present, constraint-based methods encounter challenges due to the violation of the faithfulness assumption. In this paper, we find, supported by both theoretical analysis and empirical evidence, that score-based methods with exact search can naturally address the issues of deterministic relations under rather mild assumptions. Nonetheless, exact score-based methods can be computationally expensive. To enhance the efficiency and scalability, we develop a novel framework for causal discovery that can detect and handle deterministic relations, called Determinism-aware Greedy Equivalent Search (DGES). DGES comprises three phases: (1) identify minimal deterministic clusters (i.e., a minimal set of variables with deterministic relationships), (2) run modified Greedy Equivalent Search (GES) to obtain an initial graph, and (3) perform exact search exclusively on the deterministic cluster and its neighbors.
The proposed DGES accommodates both linear and nonlinear causal relationships, as well as both continuous and discrete data types. Furthermore, we investigate the identifiability conditions of DGES. We conducted extensive experiments on both simulated and real-world datasets to show the efficacy of our proposed method.", "pdf": "https://openreview.net/pdf/ce7a7f7e4d625dc499209811e78bd27b12c56ca4.pdf"} {"title": "Time-Varying LoRA: Towards Effective Cross-Domain Fine-Tuning of Diffusion Models", "url": "https://openreview.net/forum?id=SgODU2mx9T", "detail_url": "https://openreview.net/forum?id=SgODU2mx9T", "authors": "Zhan Zhuang,Yulong Zhang,Xuehao Wang,Jiangang Lu,Ying Wei,Yu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Large-scale diffusion models are adept at generating high-fidelity images and facilitating image editing and interpolation. However, they have limitations when tasked with generating images in dynamic, evolving domains. In this paper, we introduce Terra, a novel Time-varying low-rank adapter that offers a fine-tuning framework specifically tailored for domain flow generation. The key innovation of Terra lies in its construction of a continuous parameter manifold through a time variable, with its expressive power analyzed theoretically. This framework not only enables interpolation of image content and style but also offers a generation-based approach to address the domain shift problems in unsupervised domain adaptation and domain generalization. Specifically, Terra transforms images from the source domain to the target domain and generates interpolated domains with various styles to bridge the gap between domains and enhance the model generalization, respectively. We conduct extensive experiments on various benchmark datasets and empirically demonstrate the effectiveness of Terra. Our source code is publicly available on https://github.com/zwebzone/terra.", "pdf": "https://openreview.net/pdf/2f7264c2aef194d93b5eb5d612e6809b966d3cfa.pdf"} {"title": "Entity Alignment with Noisy Annotations from Large Language Models", "url": "https://openreview.net/forum?id=qfCQ54ZTX1", "detail_url": "https://openreview.net/forum?id=qfCQ54ZTX1", "authors": "Shengyuan Chen,Qinggang Zhang,Junnan Dong,Wen Hua,Qing Li,Xiao Huang", "tags": "NIPS 2024,Poster", "abstract": "Entity alignment (EA) aims to merge two knowledge graphs (KGs) by identifying equivalent entity pairs. While existing methods heavily rely on human-generated labels, it is prohibitively expensive to incorporate cross-domain experts for annotation in real-world scenarios. The advent of Large Language Models (LLMs) presents new avenues for automating EA with annotations, inspired by their comprehensive capability to process semantic information. However, it is nontrivial to directly apply LLMs for EA since the annotation space in real-world KGs is large. LLMs could also generate noisy labels that may mislead the alignment. To this end, we propose a unified framework, LLM4EA, to effectively leverage LLMs for EA. Specifically, we design a novel active learning policy to significantly reduce the annotation space by prioritizing the most valuable entities based on the entire inter-KG and intra-KG structure. Moreover, we introduce an unsupervised label refiner to continuously enhance label accuracy through in-depth probabilistic reasoning. We iteratively optimize the policy based on the feedback from a base EA model.
Extensive experiments demonstrate the advantages of LLM4EA on four benchmark datasets in terms of effectiveness, robustness, and efficiency.", "pdf": "https://openreview.net/pdf/5452a4d776ca7849e8b6bafef8a42a3bf6b72a37.pdf"} {"title": "Temporal Sentence Grounding with Relevance Feedback in Videos", "url": "https://openreview.net/forum?id=eOonmxzzno", "detail_url": "https://openreview.net/forum?id=eOonmxzzno", "authors": "Jianfeng Dong,Xiaoman Peng,Daizong Liu,Xiaoye Qu,Xun Yang,Cuizhu Bao,Meng Wang", "tags": "NIPS 2024,Poster", "abstract": "As a widely explored multi-modal task, Temporal Sentence Grounding in videos (TSG) endeavors to retrieve a specific video segment matched with a given query text from a video. The traditional paradigm for TSG generally assumes that relevant segments always exist within a given video. However, this assumption is restrictive and unrealistic in real-world applications where the existence of a query-related segment is uncertain, easily resulting in erroneous grounding. Motivated by this research gap and practical applications, this paper introduces a new task, named Temporal Sentence Grounding with Relevance Feedback (TSG-RF) in videos, which accommodates the possibility that a video may or may not include a segment related to the query. This task entails localizing precise video segments that semantically align with the query text when such content is present, while delivering definitive feedback on the non-existence of related segments when absent. Moreover, we propose a novel Relation-aware Temporal Sentence Grounding (RaTSG) network for addressing this challenging task. This network first reformulates the TSG-RF task as a foreground-background detection problem by investigating whether the query-related semantics exist in both frame and video levels. Then, a multi-granularity relevance discriminator is employed to produce precise video-query relevance feedback and a relation-aware segment grounding module is employed to selectively conduct the grounding process, dynamically adapting to the presence or absence of query-related segments in videos. To validate our RaTSG network, we reconstruct two popular TSG datasets, establishing a rigorous benchmark for TSG-RF. Experimental results demonstrate the effectiveness of our proposed RaTSG for the TSG-RF task. Our source code is available at https://github.com/HuiGuanLab/RaTSG.", "pdf": "https://openreview.net/pdf/ac3ee99ecda12792b3fd39366458ff793bf25285.pdf"} {"title": "SAND: Smooth imputation of sparse and noisy functional data with Transformer networks", "url": "https://openreview.net/forum?id=MXRO5kukST", "detail_url": "https://openreview.net/forum?id=MXRO5kukST", "authors": "Ju-Sheng Hong,Junwen Yao,Jonas Mueller,Jane-Ling Wang", "tags": "NIPS 2024,Poster", "abstract": "Although the transformer architecture has come to dominate other models for text and image data, its application to irregularly spaced longitudinal data has been limited. We introduce a variant of the transformer that enables it to more smoothly impute such functional data. We augment the vanilla transformer with a simple module we call SAND (self-attention on derivatives), which naturally encourages smoothness by modeling the sub-derivative of the imputed curve. On the theoretical front, we prove the number of hidden nodes required by a network with SAND to achieve an $\\epsilon$ prediction error bound for functional imputation.
Extensive experiments over various types of functional data demonstrate that transformers with SAND produce better imputations than both their standard counterparts and transformers augmented with alternative approaches to encode the inductive bias of smoothness. SAND also outperforms standard statistical methods for functional imputation like kernel smoothing and PACE.", "pdf": "https://openreview.net/pdf/dec6165853b4311a51c3d8046c7c347f52553441.pdf"} {"title": "Free Lunch in Pathology Foundation Model: Task-specific Model Adaptation with Concept-Guided Feature Enhancement", "url": "https://openreview.net/forum?id=dwYekpbmYG", "detail_url": "https://openreview.net/forum?id=dwYekpbmYG", "authors": "Yanyan Huang,Weiqin Zhao,Yihang Chen,Yu Fu,Lequan Yu", "tags": "NIPS 2024,Poster", "abstract": "Whole slide image (WSI) analysis is gaining prominence within the medical imaging field. Recent advances in pathology foundation models have shown the potential to extract powerful feature representations from WSIs for downstream tasks. However, these foundation models are usually designed for general-purpose pathology image analysis and may not be optimal for specific downstream tasks or cancer types. In this work, we present Concept Anchor-guided Task-specific Feature Enhancement (CATE), an adaptable paradigm that can boost the expressivity and discriminativeness of pathology foundation models for specific downstream tasks. Based on a set of task-specific concepts derived from the pathology vision-language model with expert-designed prompts, we introduce two interconnected modules to dynamically calibrate the generic image features extracted by foundation models for certain tasks or cancer types. Specifically, we design a Concept-guided Information Bottleneck module to enhance task-relevant characteristics by maximizing the mutual information between image features and concept anchors while suppressing superfluous information. Moreover, a Concept-Feature Interference module is proposed to utilize the similarity between calibrated features and concept anchors to further generate discriminative task-specific features. Extensive experiments on public WSI datasets demonstrate that CATE significantly enhances the performance and generalizability of MIL models. Additionally, heatmap and UMAP visualization results also reveal the effectiveness and interpretability of CATE.", "pdf": "https://openreview.net/pdf/db908e6ff75332631fe62db1beffd84b71e3c5d6.pdf"} {"title": "B-ary Tree Push-Pull Method is Provably Efficient for Distributed Learning on Heterogeneous Data", "url": "https://openreview.net/forum?id=3MnXAcTBD3", "detail_url": "https://openreview.net/forum?id=3MnXAcTBD3", "authors": "Runze You,Shi Pu", "tags": "NIPS 2024,Poster", "abstract": "This paper considers the distributed learning problem where a group of agents cooperatively minimizes the summation of their local cost functions based on peer-to-peer communication. Particularly, we propose a highly efficient algorithm, termed ``B-ary Tree Push-Pull'' (BTPP), which employs two B-ary spanning trees for distributing the information related to the parameters and stochastic gradients across the network. This simple method is efficient in communication since each agent interacts with at most $(B+1)$ neighbors per iteration.
More importantly, BTPP achieves linear speedup for smooth nonconvex objective functions with only $\\tilde{O}(n)$ transient iterations, significantly outperforming, to the best of our knowledge, the state-of-the-art results.", "pdf": "https://openreview.net/pdf/5f661c1e1e18b563378e1de999699d5d46f1a58c.pdf"} {"title": "Multi-Winner Reconfiguration", "url": "https://openreview.net/forum?id=kZfxICBXd1", "detail_url": "https://openreview.net/forum?id=kZfxICBXd1", "authors": "Jiehua Chen,Christian Hatschka,Sofia Simola", "tags": "NIPS 2024,Poster", "abstract": "We introduce a multi-winner reconfiguration model to examine how to transition between subsets of alternatives (aka. committees) through a sequence of minor yet impactful modifications, called reconfiguration path. We analyze this model under four approval-based voting rules: Chamberlin-Courant (CC), Proportional Approval Voting (PAV), Approval Voting (AV), and Satisfaction Approval Voting (SAV). The problem exhibits computational intractability for CC and PAV, and polynomial solvability for AV and SAV. We provide a detailed multivariate complexity analysis for CC and PAV, demonstrating that although the problem remains challenging in many scenarios, there are specific cases that allow for efficient parameterized algorithms.", "pdf": "https://openreview.net/pdf/1d43788fe5883a34010d38de4e5a32865b58c3f8.pdf"} {"title": "Conformal Classification with Equalized Coverage for Adaptively Selected Groups", "url": "https://openreview.net/forum?id=3pWHKxK1sC", "detail_url": "https://openreview.net/forum?id=3pWHKxK1sC", "authors": "Yanfei Zhou,Matteo Sesia", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces a conformal inference method to evaluate uncertainty in classification by generating prediction sets with valid coverage conditional on adaptively chosen features. These features are carefully selected to reflect potential model limitations or biases. This can be useful to find a practical compromise between efficiency---by providing informative predictions---and algorithmic fairness---by ensuring equalized coverage for the most sensitive groups. We demonstrate the validity and effectiveness of this method on simulated and real data sets.", "pdf": "https://openreview.net/pdf/52f7344ad5acaf1290282d3a2a5ed551d3683b2f.pdf"} {"title": "Bandits with Abstention under Expert Advice", "url": "https://openreview.net/forum?id=l04i6dPMxK", "detail_url": "https://openreview.net/forum?id=l04i6dPMxK", "authors": "Stephen Pasteris,Alberto Rumi,Maximilian Thiessen,Shota Saito,Atsushi Miyauchi,Fabio Vitale,Mark Herbster", "tags": "NIPS 2024,Poster", "abstract": "We study the classic problem of prediction with expert advice under bandit feedback. Our model assumes that one action, corresponding to the learner's abstention from play, has no reward or loss on every trial. We propose the CBA (Confidence-rated Bandits with Abstentions) algorithm, which exploits this assumption to obtain reward bounds that can significantly improve those of the classical Exp4 algorithm. Our problem can be construed as the aggregation of confidence-rated predictors, with the learner having the option to abstain from play. We are the first to achieve bounds on the expected cumulative reward for general confidence-rated predictors. In the special case of specialists, we achieve a novel reward bound, significantly improving previous bounds of SpecialistExp (treating abstention as another action).
We discuss how CBA can be applied to the problem of adversarial contextual bandits with the option of abstaining from selecting any action. We are able to leverage a wide range of inductive biases, outperforming previous approaches both theoretically and in preliminary experimental analysis. Additionally, we achieve a reduction in runtime from quadratic to almost linear in the number of contexts for the specific case of metric space contexts.", "pdf": "https://openreview.net/pdf/2e3c2588e8326b867dbf1d68a87379ed9a71cffd.pdf"} {"title": "Externally Valid Policy Evaluation from Randomized Trials Using Additional Observational Data", "url": "https://openreview.net/forum?id=2pgc5xDJ1b", "detail_url": "https://openreview.net/forum?id=2pgc5xDJ1b", "authors": "Sofia Ek,Dave Zachariah", "tags": "NIPS 2024,Poster", "abstract": "Randomized trials are widely considered as the gold standard for evaluating the effects of decision policies. Trial data is, however, drawn from a population which may differ from the intended target population, and this raises a problem of external validity (aka. generalizability). In this paper we seek to use trial data to draw valid inferences about the outcome of a policy on the target population. Additional covariate data from the target population is used to model the sampling of individuals in the trial study. We develop a method that yields certifiably valid trial-based policy evaluations under any specified range of model miscalibrations. The method is nonparametric and the validity is assured even with finite samples. The certified policy evaluations are illustrated using both simulated and real data.", "pdf": "https://openreview.net/pdf/44b65dfbb2a2d50f18aa986e1c05cb32dff73b03.pdf"} {"title": "DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning", "url": "https://openreview.net/forum?id=4jRNkAH15k", "detail_url": "https://openreview.net/forum?id=4jRNkAH15k", "authors": "Zijian Zhou,Xiaoqiang Lin,Xinyi Xu,Alok Prakash,Daniela Rus,Bryan Kian Hsiang Low", "tags": "NIPS 2024,Poster", "abstract": "In-context learning (ICL) allows transformer-based language models that are pre-trained on general text to quickly learn a specific task with a few \"task demonstrations\" without updating their parameters, significantly boosting their flexibility and generality. ICL possesses many distinct characteristics from conventional machine learning, thereby requiring new approaches to interpret this learning paradigm. Taking the viewpoint of recent works showing that transformers learn in context by formulating an internal optimizer, we propose an influence function-based attribution technique, DETAIL, that addresses the specific characteristics of ICL. We empirically verify the effectiveness of our approach for demonstration attribution while being computationally efficient. Leveraging the results, we then show how DETAIL can help improve model performance in real-world scenarios through demonstration reordering and curation.
Finally, we experimentally demonstrate the wide applicability of DETAIL by showing that attribution scores obtained on white-box models are transferable to black-box models in improving model performance.", "pdf": "https://openreview.net/pdf/c9dd3c707fc21747c510f63203c9ecb534feca01.pdf"} {"title": "Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models", "url": "https://openreview.net/forum?id=a1wf2N967T", "detail_url": "https://openreview.net/forum?id=a1wf2N967T", "authors": "Baao Xie,Qiuyu Chen,Yunnan Wang,Zequn Zhang,Xin Jin,Wenjun Zeng", "tags": "NIPS 2024,Poster", "abstract": "Disentangled representation learning (DRL) aims to identify and decompose underlying factors behind observations, thus facilitating data perception and generation. However, current DRL approaches often rely on the unrealistic assumption that semantic factors are statistically independent. In reality, these factors may exhibit correlations, which off-the-shelf solutions have yet to properly address. To tackle this challenge, we introduce a bidirectional weighted graph-based framework to learn factorized attributes and their interrelations within complex data. Specifically, we propose a $\\beta$-VAE-based module to extract factors as the initial nodes of the graph, and leverage the multimodal large language model (MLLM) to discover and rank latent correlations, thereby updating the weighted edges. By integrating these complementary modules, our model successfully achieves fine-grained, practical and unsupervised disentanglement. Experiments demonstrate our method's superior performance in disentanglement and reconstruction. Furthermore, the model inherits enhanced interpretability and generalizability from MLLMs.", "pdf": "https://openreview.net/pdf/7f8ecf18c6b2d4d9c41dc8f2ab6c6d58dc99da9e.pdf"} {"title": "Does Reasoning Emerge? Examining the Probabilities of Causation in Large Language Models", "url": "https://openreview.net/forum?id=b1ylCyjAZk", "detail_url": "https://openreview.net/forum?id=b1ylCyjAZk", "authors": "Javier Gonzalez,Aditya V. Nori", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in AI have been significantly driven by the capabilities of large language models (LLMs) to solve complex problems in ways that resemble human thinking. However, there is an ongoing debate about the extent to which LLMs are capable of actual reasoning. Central to this debate are two key probabilistic concepts that are essential for connecting causes to their effects: the probability of necessity (PN) and the probability of sufficiency (PS). This paper introduces a framework that is both theoretical and practical, aimed at assessing how effectively LLMs are able to replicate real-world reasoning mechanisms using these probabilistic measures. By viewing LLMs as abstract machines that process information through a natural language interface, we examine the conditions under which it is possible to compute suitable approximations of PN and PS.
Our research marks an important step towards gaining a deeper understanding of when LLMs are capable of reasoning, as illustrated by a series of math examples.", "pdf": "https://openreview.net/pdf/35b8761b06e13718ff2092c8cd862b889b6bc863.pdf"} {"title": "Learning Infinitesimal Generators of Continuous Symmetries from Data", "url": "https://openreview.net/forum?id=wl44W8xpc7", "detail_url": "https://openreview.net/forum?id=wl44W8xpc7", "authors": "Gyeonghoon Ko,Hyunsu Kim,Juho Lee", "tags": "NIPS 2024,Poster", "abstract": "Exploiting symmetry inherent in data can significantly improve the sample efficiency of a learning procedure and the generalization of learned models. When data clearly reveals underlying symmetry, leveraging this symmetry can naturally inform the design of model architectures or learning strategies. Yet, in numerous real-world scenarios, identifying the specific symmetry within a given data distribution often proves ambiguous. To tackle this, some existing works learn symmetry in a data-driven manner, parameterizing and learning expected symmetry through data. However, these methods often rely on explicit knowledge, such as pre-defined Lie groups, which are typically restricted to linear or affine transformations. In this paper, we propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups, continuously parameterized transformations flowing along the directions of vector fields called infinitesimal generators. Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also extending to symmetries derived from nonlinear generators. To learn these symmetries, we introduce a notion of a validity score that examines whether the transformed data is still valid for the given task. The validity score is designed to be fully differentiable and easily computable, enabling effective searches for transformations that achieve symmetries innate to the data. We apply our method mainly in two domains: image data and partial differential equations, and demonstrate its advantages. Our codes are available at \\url{https://github.com/kogyeonghoon/learning-symmetry-from-scratch.git}.", "pdf": "https://openreview.net/pdf/6a4437eaa3052c0cae09ea647c48c13dd7223585.pdf"} {"title": "Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models", "url": "https://openreview.net/forum?id=oMHpejyGdx", "detail_url": "https://openreview.net/forum?id=oMHpejyGdx", "authors": "Cong Wan,Yuhang He,Xiang Song,Yihong Gong", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have revolutionized customized text-to-image generation, allowing for efficient synthesis of photos from personal data with textual descriptions. However, these advancements bring forth risks including privacy breaches and unauthorized replication of artworks. Previous research primarily centers on using “prompt-specific methods” to generate adversarial examples to protect personal images, yet the effectiveness of existing methods is hindered by constrained adaptability to different prompts. In this paper, we introduce a Prompt-Agnostic Adversarial Perturbation (PAP) method for customized diffusion models.
PAP first models the prompt distribution using a Laplace Approximation, and then produces prompt-agnostic perturbations by maximizing a disturbance expectation based on the modeled distribution.\nThis approach effectively tackles prompt-agnostic attacks, leading to improved defense stability.\nExtensive experiments in face privacy and artistic style protection demonstrate the superior generalization of our method in comparison to existing techniques.", "pdf": "https://openreview.net/pdf/9a93ace094b2a0a79ded25792fcdd8fdacf875e7.pdf"} {"title": "Localize, Understand, Collaborate: Semantic-Aware Dragging via Intention Reasoner", "url": "https://openreview.net/forum?id=kcQKIzQPZj", "detail_url": "https://openreview.net/forum?id=kcQKIzQPZj", "authors": "Xing Cui,Pei Pei Li,Zekun Li,Xuannan Liu,Yueying Zou,Zhaofeng He", "tags": "NIPS 2024,Poster", "abstract": "Flexible and accurate drag-based editing is a challenging task that has recently garnered significant attention. Current methods typically model this problem as automatically learning "how to drag" through point dragging and often produce one deterministic estimation, which presents two key limitations: 1) Overlooking the inherently ill-posed nature of drag-based editing, where multiple results may correspond to a given input, as illustrated in Fig.1; 2) Ignoring the constraint of image quality, which may lead to unexpected distortion.\nTo alleviate this, we propose LucidDrag, which shifts the focus from "how to drag" to a "what-then-how" paradigm. LucidDrag comprises an intention reasoner and a collaborative guidance sampling mechanism. The former infers several optimal editing strategies, identifying what content and which semantic direction should be edited. Based on the former, the latter addresses "how to drag" by collaboratively integrating existing editing guidance with the newly proposed semantic guidance and quality guidance.\nSpecifically, semantic guidance is derived by establishing a semantic editing direction based on reasoned intentions, while quality guidance is achieved through classifier guidance using an image fidelity discriminator.\nBoth qualitative and quantitative comparisons demonstrate the superiority of LucidDrag over previous methods.", "pdf": "https://openreview.net/pdf/b0da9fc404710a3b44c4e77d1232f7812394483e.pdf"} {"title": "Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision", "url": "https://openreview.net/forum?id=qwgfh2fTtN", "detail_url": "https://openreview.net/forum?id=qwgfh2fTtN", "authors": "Zhiqing Sun,Longhui Yu,Yikang Shen,Weiyang Liu,Yiming Yang,Sean Welleck,Chuang Gan", "tags": "NIPS 2024,Poster", "abstract": "Current AI alignment methodologies rely on human-provided demonstrations or judgments, and the learned capabilities of AI systems would be upper-bounded by human capabilities as a result. This raises a challenging research question: How can we keep improving the systems when their capabilities have surpassed the levels of humans? This paper answers this question in the context of tackling hard reasoning tasks (e.g., level 4-5 MATH problems) via learning from human annotations on easier tasks (e.g., level 1-3 MATH problems), which we term easy-to-hard generalization. Our key insight is that an evaluator (reward model) trained on supervision for easier tasks can be effectively used for scoring candidate solutions of harder tasks, hence facilitating easy-to-hard generalization over different levels of tasks. 
Based on this insight, we propose a novel approach to scalable alignment, which first trains the (process-supervised) reward models on easy problems (e.g., level 1-3), and then uses them to evaluate the performance of policy models on hard problems. We show that such easy-to-hard generalization from evaluators can enable easy-to-hard generalizations in generators either through re-ranking or reinforcement learning (RL). Notably, our process-supervised 7B RL model and 34B model (reranking@1024) achieve accuracies of 34.0% and 52.5% on MATH500, respectively, despite only using human supervision on easy problems. Our approach suggests a promising path toward AI systems that advance beyond the frontier of human supervision.", "pdf": "https://openreview.net/pdf/7ac27b372b7a8932bf5f686c56f5f50af5154e87.pdf"} {"title": "Uniform Last-Iterate Guarantee for Bandits and Reinforcement Learning", "url": "https://openreview.net/forum?id=J3w0AXtEhp", "detail_url": "https://openreview.net/forum?id=J3w0AXtEhp", "authors": "Junyan Liu,Yunfan Li,Ruosong Wang,Lin Yang", "tags": "NIPS 2024,Poster", "abstract": "Existing metrics for reinforcement learning (RL), such as regret, PAC bounds, or uniform-PAC (Dann et al., 2017), typically evaluate the cumulative performance, while allowing the play of an arbitrarily bad policy at any finite time t. Such behavior can be highly detrimental in high-stakes applications. This paper introduces a stronger metric, the uniform last-iterate (ULI) guarantee, capturing both cumulative and instantaneous performance of RL algorithms. Specifically, ULI characterizes the instantaneous performance since it ensures that the per-round suboptimality of the played policy is bounded by a function, monotonically decreasing w.r.t. (large) round t, preventing revisits to bad policies when sufficient samples are available. We demonstrate that a near-optimal ULI guarantee directly implies near-optimal cumulative performance across the aforementioned metrics, but not the other way around. \nTo examine the achievability of ULI, we first provide two positive results for bandit problems with finite arms, showing that some elimination-based algorithms and high-probability adversarial algorithms with stronger analysis or additional designs can attain near-optimal ULI guarantees. We also provide a negative result, indicating that optimistic algorithms cannot achieve a near-optimal ULI guarantee. Furthermore, we propose an efficient algorithm for linear bandits with infinitely many arms, which achieves the ULI guarantee, given access to an optimization oracle. Finally, we propose an algorithm that achieves a near-optimal ULI guarantee for the online reinforcement learning setting.", "pdf": "https://openreview.net/pdf/aeb002e9a0edbf2317adbb08ee3839039faad54d.pdf"} {"title": "Dissecting the Failure of Invariant Learning on Graphs", "url": "https://openreview.net/forum?id=7eFS8aZHAM", "detail_url": "https://openreview.net/forum?id=7eFS8aZHAM", "authors": "Qixun Wang,Yifei Wang,Yisen Wang,Xianghua Ying", "tags": "NIPS 2024,Poster", "abstract": "Enhancing node-level Out-Of-Distribution (OOD) generalization on graphs remains a crucial area. In this paper, we develop a Structural Causal Model (SCM) to theoretically dissect the performance of two prominent invariant learning methods--Invariant Risk Minimization (IRM) and Variance-Risk Extrapolation (VREx)--in node-level OOD settings. 
Our analysis reveals a critical limitation: these methods may struggle to identify invariant features due to the complexities introduced by the message-passing mechanism, which can obscure causal features within a range of neighboring samples. To address this, we propose Cross-environment Intra-class Alignment (CIA), which explicitly eliminates spurious features by aligning representations within the same class, bypassing the need for explicit knowledge of underlying causal patterns. To adapt CIA to node-level OOD scenarios where environment labels are hard to obtain, we further propose CIA-LRA (Localized Reweighting Alignment), which leverages the distribution of neighboring labels to selectively align node representations, effectively distinguishing and preserving invariant features while removing spurious ones, all without relying on environment labels. We theoretically prove CIA-LRA's effectiveness by deriving an OOD generalization error bound based on PAC-Bayesian analysis. Experiments on graph OOD benchmarks validate the superiority of CIA and CIA-LRA, marking a significant advancement in node-level OOD generalization.", "pdf": "https://openreview.net/pdf/70976efca44add9f3d5fe0175b030e8119065507.pdf"} {"title": "SemFlow: Binding Semantic Segmentation and Image Synthesis via Rectified Flow", "url": "https://openreview.net/forum?id=E3P1X94Y51", "detail_url": "https://openreview.net/forum?id=E3P1X94Y51", "authors": "Chaoyang Wang,Xiangtai Li,Lu Qi,Henghui Ding,Yunhai Tong,Ming-Hsuan Yang", "tags": "NIPS 2024,Poster", "abstract": "Semantic segmentation and semantic image synthesis are two representative tasks in visual perception and generation. While existing methods consider them as two distinct tasks, we propose a unified framework (SemFlow) and model them as a pair of reverse problems. Specifically, motivated by rectified flow theory, we train an ordinary differential equation (ODE) model to transport between the distributions of real images and semantic masks. As the training objective is symmetric, samples belonging to the two distributions, images and semantic masks, can be effortlessly transferred reversibly. For semantic segmentation, our approach solves the contradiction between the randomness of diffusion outputs and the uniqueness of segmentation results. For image synthesis, we propose a finite perturbation approach to enhance the diversity of generated results without changing the semantic categories. Experiments show that our SemFlow achieves competitive results on semantic segmentation and semantic image synthesis tasks. We hope this simple framework will motivate people to rethink the unification of low-level and high-level vision.", "pdf": "https://openreview.net/pdf/89ddb37b939435070dacdd4741527b1cc5d96229.pdf"} {"title": "LOVA3: Learning to Visual Question Answering, Asking and Assessment", "url": "https://openreview.net/forum?id=vIOKLMl6wu", "detail_url": "https://openreview.net/forum?id=vIOKLMl6wu", "authors": "Hengyuan Zhao,Pan Zhou,Difei Gao,Zechen Bai,Mike Zheng Shou", "tags": "NIPS 2024,Poster", "abstract": "Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can more effectively utilize data, leading to better comprehension and learning outcomes. However, current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. 
In this study, we introduce LOVA3, an innovative framework named ``Learning tO Visual Question Answering, Asking and Assessment,'' designed to equip MLLMs with these additional capabilities. Our approach involves the creation of two supplementary training tasks, GenQA and EvalQA, aimed at fostering the skills of asking and assessing questions in the context of images. To develop the questioning ability, we compile a comprehensive set of multimodal foundational tasks. For assessment, we introduce a new benchmark called EvalQABench, comprising 64,000 training samples (split evenly between positive and negative samples) and 5,000 testing samples. We posit that enhancing MLLMs with the capabilities to answer, ask, and assess questions \nwill improve their multimodal comprehension, ultimately boosting overall performance. To validate this hypothesis, we train MLLMs using the LOVA3 framework and evaluate them on a range of multimodal datasets and benchmarks. Our results demonstrate consistent performance gains, underscoring the critical role of these additional tasks in fostering comprehensive intelligence in MLLMs.", "pdf": "https://openreview.net/pdf/be25638b9cadbd2cb9d3cfeb011f69cddc0f09fa.pdf"} {"title": "FasMe: Fast and Sample-efficient Meta Estimator for Precision Matrix Learning in Small Sample Settings", "url": "https://openreview.net/forum?id=wHFaAH3E8z", "detail_url": "https://openreview.net/forum?id=wHFaAH3E8z", "authors": "Xiao Tan,Yiqin Wang,Yangyang Shen,Dian Shen,Meng Wang,Peibo Duan,Beilun Wang", "tags": "NIPS 2024,Poster", "abstract": "Precision matrix estimation is a ubiquitous task featuring numerous applications such as rare disease diagnosis and neural connectivity exploration. However, this task becomes challenging in small sample settings, where the number of samples is significantly less than the number of dimensions, leading to unreliable estimates. Previous approaches either fail to perform well in small sample settings or suffer from inefficient estimation processes, even when incorporating meta-learning techniques.\nTo this end, we propose a novel approach, FasMe, for Fast and Sample-efficient Meta Precision Matrix Learning, which first extracts meta-knowledge through a multi-task learning paradigm. Then, meta-knowledge constraints are applied using a maximum determinant matrix completion algorithm for the novel task. As a result, we reduce the sample size requirements to $O(\\log p/K)$ per meta-training task and $O(\\log\\vert \\mathcal{G}\\vert)$ for the meta-testing task. Moreover, the proposed model needs only $O(p \\log\\epsilon^{-1})$ time and $O(p)$ memory to converge to an $\\epsilon$-accurate solution. On multiple synthetic and biomedical datasets, FasMe is at least ten times faster than the four baselines while improving prediction accuracy in small sample settings.", "pdf": "https://openreview.net/pdf/7fb400b0af5f12f29a6d981142457903acaa8378.pdf"} {"title": "xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token", "url": "https://openreview.net/forum?id=6pTlXqrO0p", "detail_url": "https://openreview.net/forum?id=6pTlXqrO0p", "authors": "Xin Cheng,Xun Wang,Xingxing Zhang,Tao Ge,Si-Qing Chen,Furu Wei,Huishuai Zhang,Dongyan Zhao", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces xRAG, an innovative context compression method tailored for retrieval-augmented generation. xRAG reinterprets document embeddings in dense retrieval--traditionally used solely for retrieval--as features from the retrieval modality. 
By employing a modality fusion methodology, xRAG seamlessly integrates these embeddings into the language model representation space, effectively eliminating the need for their textual counterparts and achieving an extreme compression rate. \nIn xRAG, the only trainable component is the modality bridge, while both the retriever and the language model remain frozen. This design choice allows for the reuse of offline-constructed document embeddings and preserves the plug-and-play nature of retrieval augmentation. \nExperimental results demonstrate that xRAG achieves an average improvement of over 10% across six knowledge-intensive tasks, adaptable to various language model backbones, ranging from a dense 7B model to an 8x7B Mixture of Experts configuration. xRAG not only significantly outperforms previous context compression methods but also matches the performance of uncompressed models on several datasets, while reducing overall FLOPs by a factor of 3.53. Our work pioneers new directions in retrieval-augmented generation from the perspective of multimodality fusion, and we hope it lays the foundation for future efficient and scalable retrieval-augmented systems.", "pdf": "https://openreview.net/pdf/8b11d8aaee39bfdcd3311156dd6900a4e823454f.pdf"} {"title": "NoiseGPT: Label Noise Detection and Rectification through Probability Curvature", "url": "https://openreview.net/forum?id=VRRvJnxgQe", "detail_url": "https://openreview.net/forum?id=VRRvJnxgQe", "authors": "Haoyu Wang,Zhuo Huang,Zhiwei Lin,Tongliang Liu", "tags": "NIPS 2024,Poster", "abstract": "Machine learning craves high-quality data, which is a major bottleneck in realistic deployment, as it takes abundant resources and massive human labor to collect and label data. Unfortunately, label noise, where images are paired with incorrect labels, exists ubiquitously in all kinds of datasets, significantly degrading the learning performance of deep networks. Learning with Label Noise (LNL) has been a common strategy for mitigating the influence of noisy labels. However, existing LNL methods either require pretraining that uses the memorization effect to separate clean data from noisy ones or rely on dataset assumptions that cannot extend to various scenarios. Thanks to the development of Multimodal Large Language Models (MLLMs), which possess massive knowledge and In-Context Learning (ICL) ability, this paper proposes NoiseGPT to effectively leverage MLLMs as a knowledge expert for conducting label noise detection and rectification. Specifically, we observe a \\textit{probability curvature} effect of MLLMs, where clean and noisy examples reside on curvatures with different smoothness, further enabling the detection of label noise. By designing a token-wise Mix-of-Feature (MoF) technique to produce the curvature, we propose an In-Context Discrepancy (ICD) measure to determine the authenticity of an image-label pair. Subsequently, we repeat such a process to find the best matching pairs to complete our label rectification. Through extensive experiments, we carefully demonstrate the effectiveness of NoiseGPT on detecting and cleansing dataset noise; on ILSVRC12 in particular, the AUROC of NoiseGPT reaches over 0.92. By integrating NoiseGPT with existing methods, classification performance on noisy datasets can be significantly improved, typically by 22.8\\% on 80\\% symmetric CIFAR-10 with M-correction. 
Source code: \\url{https://github.com/drunkerWang/NoiseGPT}", "pdf": "https://openreview.net/pdf/ebc3761bb785d8090fcf67449b9ce9a932759e92.pdf"} {"title": "Distribution Guidance Network for Weakly Supervised Point Cloud Semantic Segmentation", "url": "https://openreview.net/forum?id=Jj2PEAZPWk", "detail_url": "https://openreview.net/forum?id=Jj2PEAZPWk", "authors": "Zhiyi Pan,Wei Gao,Shan Liu,Ge Li", "tags": "NIPS 2024,Poster", "abstract": "Despite alleviating the dependence on dense annotations inherent to fully supervised methods, weakly supervised point cloud semantic segmentation suffers from inadequate supervision signals. In response to this challenge, we introduce a novel perspective that imparts auxiliary constraints by regulating the feature space under weak supervision. Our initial investigation identifies which distributions accurately characterize the feature space, and we subsequently leverage this prior to guide the alignment of the weakly supervised embeddings. Specifically, we analyze the superiority of the mixture of von Mises-Fisher distributions (moVMF) among several common distribution candidates. Accordingly, we develop a Distribution Guidance Network (DGNet), which comprises a weakly supervised learning branch and a distribution alignment branch. Leveraging reliable clustering initialization derived from the weakly supervised learning branch, the distribution alignment branch alternately updates the parameters of the moVMF and the network, ensuring alignment with the moVMF-defined latent space. Extensive experiments validate the rationality and effectiveness of our distribution choice and network design. Consequently, DGNet achieves state-of-the-art performance across multiple datasets and various weakly supervised settings.", "pdf": "https://openreview.net/pdf/5fd39ffb9e57415df47bf221210ee12d7bf8e005.pdf"} {"title": "Set-based Neural Network Encoding Without Weight Tying", "url": "https://openreview.net/forum?id=i3me9bCSCy", "detail_url": "https://openreview.net/forum?id=i3me9bCSCy", "authors": "Bruno Andreis,Bedionita Soro,Philip Torr,Sung Ju Hwang", "tags": "NIPS 2024,Poster", "abstract": "We propose a neural network weight encoding method for network property prediction that utilizes set-to-set and set-to-vector functions\nto efficiently encode neural network parameters. Our approach is capable of encoding neural networks in a model zoo of mixed architectures and different parameter sizes, as opposed to previous approaches that require custom encoding models for different architectures. Furthermore, our \\textbf{S}et-based \\textbf{N}eural network \\textbf{E}ncoder (SNE) takes into consideration the hierarchical computational structure of neural networks. To respect symmetries inherent in network weight space, we utilize Logit Invariance to learn the required minimal invariance properties. Additionally, we introduce a \\textit{pad-chunk-encode} pipeline to efficiently encode neural network layers that is adjustable to computational and memory constraints. We also introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture. In cross-dataset property prediction, we evaluate how well property predictors generalize across model zoos trained on different datasets but of the same architecture. In cross-architecture property prediction, we evaluate how well property predictors transfer to model zoos of different architectures not seen during training. 
We show that SNE outperforms the relevant baselines on standard benchmarks.", "pdf": "https://openreview.net/pdf/33f91158a52165aa47bb815d1eace1c19419ffbb.pdf"} {"title": "Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels", "url": "https://openreview.net/forum?id=aRhxruC2bi", "detail_url": "https://openreview.net/forum?id=aRhxruC2bi", "authors": "Heeseong Shin,Chaehyun Kim,Sunghwan Hong,Seokju Cho,Anurag Arnab,Paul Hongsuck Seo,Seungryong Kim", "tags": "NIPS 2024,Poster", "abstract": "Large-scale vision-language models like CLIP have demonstrated impressive open-vocabulary capabilities for image-level tasks, excelling in recognizing what objects are present. However, they struggle with pixel-level recognition tasks like semantic segmentation, which require understanding where the objects are located. In this work, we propose a novel method, PixelCLIP, to adapt the CLIP image encoder for pixel-level understanding by guiding the model on where, which is achieved using unlabeled images and masks generated from vision foundation models such as SAM and DINO. To address the challenges of leveraging masks without semantic labels, we devise an online clustering algorithm using learnable class names to acquire general semantic concepts. PixelCLIP shows significant performance improvements over CLIP and competitive results compared to caption-supervised methods in open-vocabulary semantic segmentation.", "pdf": "https://openreview.net/pdf/cf8afbb8ba099a20bea9496361da12e1b03f02cf.pdf"} {"title": "OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding", "url": "https://openreview.net/forum?id=WeoNd6PRqS", "detail_url": "https://openreview.net/forum?id=WeoNd6PRqS", "authors": "Tao Zhang,Xiangtai Li,Hao Fei,Haobo Yuan,Shengqiong Wu,Shunping Ji,Chen Change Loy,Shuicheng YAN", "tags": "NIPS 2024,Poster", "abstract": "Current universal segmentation methods demonstrate strong capabilities in pixel-level image and video understanding. However, they lack reasoning abilities and cannot be controlled via text instructions. In contrast, large vision-language multimodal models exhibit powerful vision-based conversation and reasoning capabilities but lack pixel-level understanding and have difficulty accepting visual prompts for flexible user interaction. This paper proposes OMG-LLaVA, a new and elegant framework combining powerful pixel-level vision understanding with reasoning abilities. It can accept various visual and text prompts for flexible user interaction. Specifically, we use a universal segmentation method as the visual encoder, integrating image information, perception priors, and visual prompts into visual tokens provided to the LLM. The LLM is responsible for understanding the user's text instructions and providing text responses and pixel-level segmentation results based on the visual information. We propose perception prior embedding to better integrate perception priors with image features. OMG-LLaVA achieves image-level, object-level, and pixel-level reasoning and understanding in a single model, matching or surpassing the performance of specialized methods on multiple benchmarks. Rather than using an LLM to connect each specialist, our work aims at end-to-end training on one encoder, one decoder, and one LLM. 
The code and model have been released for further research.", "pdf": "https://openreview.net/pdf/09cae533bb8199d38a0a7e9a57d324ff1af3407c.pdf"} {"title": "Fetch and Forge: Efficient Dataset Condensation for Object Detection", "url": "https://openreview.net/forum?id=m8MElyzuwp", "detail_url": "https://openreview.net/forum?id=m8MElyzuwp", "authors": "Ding Qi,Jian Li,Jinlong Peng,Bo Zhao,Shuguang Dou,Jialin Li,Jiangning Zhang,Yabiao Wang,Chengjie Wang,Cairong Zhao", "tags": "NIPS 2024,Poster", "abstract": "Dataset condensation (DC) is an emerging technique capable of creating compact synthetic datasets from large originals while maintaining considerable performance. It is crucial for accelerating network training and reducing data storage requirements. \nHowever, current research on DC mainly focuses on image classification, with less exploration of object detection.\nThis is primarily due to two challenges: (i) the multitasking nature of object detection complicates the condensation process, and (ii) object detection datasets are characterized by large-scale and high-resolution data, which are difficult for existing DC methods to handle.\nAs a remedy, we propose DCOD, the first dataset condensation framework for object detection. It operates in two stages: Fetch and Forge, initially storing key localization and classification information into model parameters, and then reconstructing synthetic images via model inversion. \nTo handle the complexity of multiple objects in an image, we propose Foreground Background Decoupling to centrally update the foregrounds of multiple instances, and Incremental PatchExpand to further enhance the diversity of foregrounds.\nExtensive experiments on various detection datasets demonstrate the superiority of DCOD. Even at an extremely low compression rate of 1\\%, we achieve 46.4\\% and 24.7\\% $\\text{AP}_{50}$ on VOC and COCO, respectively, significantly reducing detector training duration.", "pdf": "https://openreview.net/pdf/c12d4adfe1c74582a5747a9602fb1d1e0f9cf70c.pdf"} {"title": "A Swiss Army Knife for Heterogeneous Federated Learning: Flexible Coupling via Trace Norm", "url": "https://openreview.net/forum?id=3YkeHuT1o6", "detail_url": "https://openreview.net/forum?id=3YkeHuT1o6", "authors": "Tianchi Liao,Lele Fu,Jialong Chen,Zhen WANG,Zibin Zheng,Chuan Chen", "tags": "NIPS 2024,Poster", "abstract": "The heterogeneity issue in federated learning (FL) has attracted increasing attention, and most existing methods attempt to address it. Currently, due to system and objective heterogeneity, enabling clients to hold models of different architectures and tasks of different demands has become an important direction in FL. \nMost existing FL methods are based on the homogeneity assumption, namely, that different clients hold models of the same architecture for the same tasks, which cannot handle complex and multivariate data and tasks. \nTo flexibly address these heterogeneity limitations, we propose FedSAK, a novel federated multi-task learning framework built on the tensor trace norm. Specifically, it treats each client as a task and splits the local model into a feature extractor and a prediction head. \nClients can flexibly choose shared structures based on heterogeneous situations and upload them to the server, which learns correlations among client models by mining low-rank model structures through the tensor trace norm.\nFurthermore, we derive convergence and generalization bounds under non-convex settings. 
Evaluated on 6 real-world datasets against 13 advanced FL models, FedSAK demonstrates superior performance.", "pdf": "https://openreview.net/pdf/b343888d0e7fcaf2b2d047de2b998dd3f813f383.pdf"} {"title": "Leveraging Separated World Model for Exploration in Visually Distracted Environments", "url": "https://openreview.net/forum?id=Osh7u2E1kC", "detail_url": "https://openreview.net/forum?id=Osh7u2E1kC", "authors": "Kaichen Huang,Shenghua Wan,Minghao Shao,Hai-Hang Sun,Le Gan,Shuai Feng,De-Chuan Zhan", "tags": "NIPS 2024,Poster", "abstract": "Model-based unsupervised reinforcement learning (URL) has gained prominence for reducing environment interactions and learning general skills using intrinsic rewards. However, distractors in observations can severely affect intrinsic reward estimation, leading to a biased exploration process, especially in environments with visual inputs like images or videos. To address this challenge, we propose a bi-level optimization framework named Separation-assisted eXplorer (SeeX). In the inner optimization, SeeX trains a separated world model to extract exogenous and endogenous information, minimizing uncertainty to ensure task relevance. In the outer optimization, it learns a policy on imaginary trajectories generated within the endogenous state space to maximize task-relevant uncertainty. Evaluations on multiple locomotion and manipulation tasks demonstrate SeeX's effectiveness.", "pdf": "https://openreview.net/pdf/6972a2683764073195f725a7d18b19d8e88711da.pdf"} {"title": "PowerPM: Foundation Model for Power Systems", "url": "https://openreview.net/forum?id=JInTfcxH3Q", "detail_url": "https://openreview.net/forum?id=JInTfcxH3Q", "authors": "Shihao Tu,Yupeng Zhang,Jing Zhang,Zhendong Fu,Yin Zhang,Yang Yang", "tags": "NIPS 2024,Poster", "abstract": "The proliferation of abundant electricity time series (ETS) data presents numerous opportunities for various applications within power systems, including demand-side management, grid stability, and consumer behavior analysis. Deep learning models have advanced ETS modeling by effectively capturing sequence dependence. However, learning a generic representation of ETS data for various applications is challenging due to the inherently complex hierarchical structure of ETS data. Moreover, ETS data exhibits intricate temporal dependencies and is susceptible to the influence of exogenous variables. Furthermore, different instances exhibit diverse electricity consumption behavior. In this paper, we propose PowerPM, a foundation model for ETS data, providing a large-scale, off-the-shelf model for power systems. PowerPM consists of a temporal encoder and a hierarchical encoder. The temporal encoder captures temporal dependencies within ETS data, taking into account exogenous variables. The hierarchical encoder models correlations between different levels of hierarchy. Furthermore, PowerPM leverages a novel self-supervised pre-training framework consisting of masked ETS modeling and dual-view contrastive learning. This framework enables PowerPM to capture temporal dependencies within ETS windows and be aware of discrepancies across ETS windows, providing two different perspectives for learning generic representations. Our experiments span five real-world scenario datasets, including both private and public data. Through pre-training on massive ETS data, PowerPM achieves SOTA\nperformance on diverse downstream tasks within the private dataset. 
Notably, when transferred to public datasets, PowerPM retains its edge, showcasing its remarkable generalization ability across various tasks and domains. Moreover, ablation studies and few-shot experiments further substantiate the effectiveness of our model.", "pdf": "https://openreview.net/pdf/19208f10d10c5fc7f197f4c253d2b8786803e8c2.pdf"} {"title": "Kaleidoscope: Learnable Masks for Heterogeneous Multi-agent Reinforcement Learning", "url": "https://openreview.net/forum?id=W0wq9njGHi", "detail_url": "https://openreview.net/forum?id=W0wq9njGHi", "authors": "Xinran Li,Ling Pan,Jun Zhang", "tags": "NIPS 2024,Poster", "abstract": "In multi-agent reinforcement learning (MARL), parameter sharing is commonly employed to enhance sample efficiency. However, the popular approach of full parameter sharing often leads to homogeneous policies among agents, potentially limiting the performance benefits that could be derived from policy diversity. To address this critical limitation, we introduce \emph{Kaleidoscope}, a novel adaptive partial parameter sharing scheme that fosters policy heterogeneity while still maintaining high sample efficiency. Specifically, Kaleidoscope maintains one set of common parameters alongside multiple sets of distinct, learnable masks for different agents, dictating the sharing of parameters. It promotes diversity among policy networks by encouraging discrepancy among these masks, without sacrificing the efficiencies of parameter sharing. This design allows Kaleidoscope to dynamically balance high sample efficiency with a broad policy representational capacity, effectively bridging the gap between full parameter sharing and non-parameter sharing across various environments. We further extend Kaleidoscope to critic ensembles in the context of actor-critic algorithms, which could help improve value estimations. Our empirical evaluations across extensive environments, including multi-agent particle environment, multi-agent MuJoCo and StarCraft multi-agent challenge v2, demonstrate the superior performance of Kaleidoscope compared with existing parameter sharing approaches, showcasing its potential for performance enhancement in MARL. The code is publicly available at \\url{https://github.com/LXXXXR/Kaleidoscope}.", "pdf": "https://openreview.net/pdf/4b6c83f052c4b9d8455aaaa21d0e1c6c81ec62de.pdf"} {"title": "Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning", "url": "https://openreview.net/forum?id=2nisrxMMQR", "detail_url": "https://openreview.net/forum?id=2nisrxMMQR", "authors": "Fei Zhou,Peng Wang,Lei Zhang,Zhenghua Chen,Wei Wei,Chen Ding,Guosheng Lin,Yanning Zhang", "tags": "NIPS 2024,Poster", "abstract": "Meta-learning offers a promising avenue for few-shot learning (FSL), enabling models to glean a generalizable feature embedding through episodic training on synthetic FSL tasks in a source domain. Yet, in practical scenarios where the target task diverges from that in the source domain, meta-learning based methods are susceptible to over-fitting. To overcome this, we introduce a novel framework, Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning, which is crafted to comprehensively exploit the cross-domain transferable image prior that each image can be decomposed into complementary low-frequency content details and high-frequency robust structural characteristics. 
Motivated by this insight, we propose to decompose each query image into its high-frequency and low-frequency components, and incorporate them in parallel into the feature embedding network to enhance the final category prediction. More importantly, we introduce a feature reconstruction prior and a prediction consistency prior to separately encourage the consistency of the intermediate features as well as the final category predictions between the original query image and its decomposed frequency components. This allows for collectively guiding the network's meta-learning process with the aim of learning generalizable image feature embeddings, while introducing no extra computational cost in the inference phase. Our framework establishes new state-of-the-art results on multiple cross-domain few-shot learning benchmarks.", "pdf": "https://openreview.net/pdf/f8c6bad088545553ebb1115035902aaff2c7d2a3.pdf"} {"title": "GenRec: Unifying Video Generation and Recognition with Diffusion Models", "url": "https://openreview.net/forum?id=YdfZP7qMzp", "detail_url": "https://openreview.net/forum?id=YdfZP7qMzp", "authors": "Zejia Weng,Xitong Yang,Zhen Xing,Zuxuan Wu,Yu-Gang Jiang", "tags": "NIPS 2024,Poster", "abstract": "Video diffusion models are able to generate high-quality videos by learning strong spatial-temporal priors on large-scale datasets. In this paper, we aim to investigate whether such priors derived from a generative process are suitable for video recognition and, eventually, for joint optimization of generation and recognition. Building upon Stable Video Diffusion, we introduce GenRec, the first unified framework trained with a random-frame conditioning process so as to learn generalized spatial-temporal representations. The resulting framework naturally supports generation and recognition, and more importantly is robust even when visual inputs contain limited information. \nExtensive experiments demonstrate the efficacy of GenRec for both recognition and generation. In particular, GenRec achieves competitive recognition performance, offering 75.8% and 87.2% accuracy on SSV2 and K400, respectively. GenRec also performs the best on class-conditioned image-to-video generation, achieving 46.5 and 49.3 FVD scores on SSV2 and EK-100 datasets. Furthermore, GenRec demonstrates extraordinary robustness in scenarios where only limited frames can be observed. Code will be available at https://github.com/wengzejia1/GenRec.", "pdf": "https://openreview.net/pdf/6e0d541821e88a536a6cb8b823d0813a8fe901bb.pdf"} {"title": "FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation", "url": "https://openreview.net/forum?id=I3IuclVLFZ", "detail_url": "https://openreview.net/forum?id=I3IuclVLFZ", "authors": "Xiang Liu,Liangxi Liu,Feiyang Ye,Yunheng Shen,Xia Li,Linshan Jiang,Jialin Li", "tags": "NIPS 2024,Poster", "abstract": "Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning. Recently, motivated by diminishing privacy concerns, mitigating potential attacks, and reducing communication overhead, one-shot federated learning (i.e., limiting client-server communication to a single round) has gained popularity among researchers. However, the one-shot aggregation performance is sensitively affected by non-identical training data distributions, which exhibit high statistical heterogeneity in some real-world scenarios. 
To address this issue, we propose a novel one-shot aggregation method with layer-wise posterior aggregation, named FedLPA. FedLPA aggregates local models to obtain a more accurate global model without requiring extra auxiliary datasets or exposing any private label information, e.g., label distributions. To effectively capture the statistics maintained in the biased local datasets in the practical non-IID scenario, we efficiently infer the posteriors of each layer in each local model using layer-wise Laplace approximation and aggregate them to train the global parameters. Extensive experimental results demonstrate that FedLPA significantly improves learning performance over state-of-the-art methods across several metrics.", "pdf": "https://openreview.net/pdf/6ff26d7ec1923946c7a6f49653391b5c799448f3.pdf"} {"title": "Hierarchical Object-Aware Dual-Level Contrastive Learning for Domain Generalized Stereo Matching", "url": "https://openreview.net/forum?id=HcqV2bPFKz", "detail_url": "https://openreview.net/forum?id=HcqV2bPFKz", "authors": "Yikun Miao,Meiqing Wu,Siew Kei Lam,Changsheng Li,Thambipillai Srikanthan", "tags": "NIPS 2024,Poster", "abstract": "Stereo matching algorithms that leverage end-to-end convolutional neural networks have recently demonstrated notable advancements in performance. However, a common issue is their susceptibility to domain shifts, hindering their ability to generalize to diverse, unseen realistic domains. We argue that existing stereo matching networks overlook the importance of extracting semantically and structurally meaningful features. To address this gap, we propose an effective hierarchical object-aware dual-level contrastive learning (HODC) framework for domain generalized stereo matching. Our framework guides the model in extracting features that support semantically and structurally driven matching by segmenting objects at different scales and enhances correspondence between intra- and inter-scale regions from the left feature map to the right using a dual-level contrastive loss. HODC can be integrated with existing stereo matching models in the training stage, requiring no modifications to the architecture. Remarkably, using only synthetic datasets for training, HODC achieves state-of-the-art generalization performance with various existing stereo matching network architectures, across multiple realistic datasets.", "pdf": "https://openreview.net/pdf/332ba4fcfd210d10872726058498c03c420c78cf.pdf"} {"title": "QVAE-Mole: The Quantum VAE with Spherical Latent Variable Learning for 3-D Molecule Generation", "url": "https://openreview.net/forum?id=RqvesBxqDo", "detail_url": "https://openreview.net/forum?id=RqvesBxqDo", "authors": "Huaijin Wu,Xinyu Ye,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "Molecule generation, ideally in 3-D form, has enjoyed wide applications in materials, chemistry, life science, etc. We propose the first quantum parametric circuit for 3-D molecule generation for its potential quantum advantage, especially considering the arrival of the Noisy Intermediate-Scale Quantum (NISQ) era. We choose the Variational AutoEncoder (VAE) scheme for its simplicity and one-shot generation ability, which we believe is more quantum-friendly compared with the auto-regressive generative models or diffusion models used in classic approaches. 
Specifically, we present a quantum encoding scheme designed for 3-D molecules with qubit complexity $\\mathcal{O}(C\\log n)$ (where $n$ is the number of atoms) and adopt a von Mises-Fisher (vMF) distributed latent space to match the inherent coherence of the quantum system. We further design a scheme to encode conditions into quantum circuits for property-specified generation. Experimentally, our model can generate plausible 3-D molecules and achieve competitive quantitative performance with significantly reduced circuit parameters compared with classic counterparts. The source code will be released upon publication.", "pdf": "https://openreview.net/pdf/2a5f6da368a881efbbe5c13dcb9943d273c39483.pdf"} {"title": "DMNet: Self-comparison Driven Model for Subject-independent Seizure Detection", "url": "https://openreview.net/forum?id=mlmTxJwVsb", "detail_url": "https://openreview.net/forum?id=mlmTxJwVsb", "authors": "Shihao Tu,Linfeng Cao,Daoze Zhang,Junru Chen,Lvbin Ma,Yin Zhang,Yang Yang", "tags": "NIPS 2024,Poster", "abstract": "Automated seizure detection (ASD) using intracranial electroencephalography (iEEG) is critical for effective epilepsy treatment. However, the significant domain shift of iEEG signals across subjects poses a major challenge, limiting their applicability in real-world clinical scenarios. In this paper, we address this issue by analyzing the primary cause behind the failure of existing iEEG models for subject-independent seizure detection, and identify a critical universal seizure pattern: seizure events consistently exhibit higher average amplitude compared to adjacent normal events. To mitigate the domain shifts and preserve the universal seizure patterns, we propose a novel self-comparison mechanism. This mechanism effectively aligns iEEG signals across subjects and time intervals. Building upon these findings, we propose Difference Matrix-based Neural Network (DMNet), a subject-independent seizure detection model, which leverages self-comparison based on two constructed (contextual, channel-level) references to mitigate shifts of iEEG, and utilizes a simple yet effective difference matrix to encode the universal seizure patterns. Extensive experiments show that DMNet significantly outperforms previous SOTAs while maintaining high efficiency on a real-world clinical dataset collected by us and two public datasets for subject-independent seizure detection. Moreover, the visualization results demonstrate that the generated difference matrix can effectively capture the seizure activity changes during the seizure evolution process. Additionally, we deploy our method in an online diagnosis system to illustrate its effectiveness in real clinical applications.", "pdf": "https://openreview.net/pdf/72bd52ddff251b9f910257ff0b20a7f4bb236452.pdf"} {"title": "Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model", "url": "https://openreview.net/forum?id=1ONdF1JHyJ", "detail_url": "https://openreview.net/forum?id=1ONdF1JHyJ", "authors": "Yifan Duan,Jian Zhao,pengcheng,Junyuan Mao,Hao Wu,Jingyu Xu,shilong wang,Caoyuan Ma,Kai Wang,Kun Wang,Xuelong Li", "tags": "NIPS 2024,Poster", "abstract": "Spatio-temporal (ST) prediction has garnered considerable attention in the earth sciences, for tasks such as meteorological prediction and human mobility perception. However, the scarcity of data coupled with the high expenses involved in sensor deployment results in notable data imbalances. 
Furthermore, models that are excessively customized and devoid of causal connections undermine generalizability and interpretability. To this end, we establish a causal framework for ST predictions, termed CaPaint, which aims to identify causal regions in the data and endow the model with causal reasoning ability in a two-stage process. Going beyond this process, we utilize the back-door adjustment to specifically address the sub-regions identified as non-causal in the upstream phase. Specifically, we employ a novel image inpainting technique. By using a fine-tuned unconditional Diffusion Probabilistic Model (DDPM) as the generative prior, we in-fill the masks defined as environmental parts, offering the possibility of reliable extrapolation for potential data distributions. CaPaint overcomes the high complexity dilemma of optimal ST causal discovery models by reducing the data generation complexity from exponential to quasi-linear levels. Extensive experiments conducted on five real-world ST benchmarks demonstrate that integrating the CaPaint concept allows models to achieve improvements ranging from 4.3% to 77.3%. Moreover, compared to traditional mainstream ST augmenters, CaPaint underscores the potential of diffusion models in ST enhancement, offering a novel paradigm for this field. Our project is available at https://anonymous.4open.science/r/12345-DFCC.", "pdf": "https://openreview.net/pdf/229ece8f52dd7beccd176f74e2bb798f93b39b1a.pdf"} {"title": "Dual-frame Fluid Motion Estimation with Test-time Optimization and Zero-divergence Loss", "url": "https://openreview.net/forum?id=WOBhJs9gqU", "detail_url": "https://openreview.net/forum?id=WOBhJs9gqU", "authors": "Yifei Zhang,Huan-ang Gao,Zhou Jiang,Hao Zhao", "tags": "NIPS 2024,Poster", "abstract": "3D particle tracking velocimetry (PTV) is a key technique for analyzing turbulent flow, one of the most challenging computational problems of our century. At the core of 3D PTV is the dual-frame fluid motion estimation algorithm, which tracks particles across two consecutive frames. Recently, deep learning-based methods have achieved impressive accuracy in dual-frame fluid motion estimation; however, they heavily depend on large volumes of labeled data. In this paper, we introduce a new method that is **completely self-supervised and notably outperforms its fully-supervised counterparts while requiring only 1\\% of the training samples (without labels) used by previous methods.** Our method features a novel zero-divergence loss that is specific to the domain of turbulent flow. Inspired by the success of the splat operation in high-dimensional filtering and random fields, we propose a splat-based implementation for this loss, which is both efficient and effective. The self-supervised nature of our method naturally supports test-time optimization, leading to the development of a tailored Dynamic Velocimetry Enhancer (DVE) module. We demonstrate that strong cross-domain robustness is achieved through test-time optimization on unseen leave-one-out synthetic domains and real physical/biological domains. 
Code, data and models are available at [https://github.com/Forrest-110/FluidMotionNet](https://github.com/Forrest-110/FluidMotionNet).", "pdf": "https://openreview.net/pdf/d4efe1a9d94fd478394e43a144cf0601fb10da39.pdf"} {"title": "Towards the Dynamics of a DNN Learning Symbolic Interactions", "url": "https://openreview.net/forum?id=dIHXwKjXRE", "detail_url": "https://openreview.net/forum?id=dIHXwKjXRE", "authors": "Qihan Ren,Junpeng Zhang,Yang Xu,Yue Xin,Dongrui Liu,Quanshi Zhang", "tags": "NIPS 2024,Poster", "abstract": "This study proves the two-phase dynamics of a deep neural network (DNN) learning interactions. Despite long-standing skepticism about the faithfulness of post-hoc explanations of DNNs, a series of theorems has been proven [27] in recent years to show that for a given input sample, a small set of interactions between input variables can be considered as primitive inference patterns that faithfully represent a DNN's detailed inference logic on that sample. Particularly, Zhang et al. [41] have observed that various DNNs all learn interactions of different complexities in two distinct phases, and this two-phase dynamics explains well how a DNN changes from under-fitting to over-fitting. Therefore, in this study, we mathematically prove the two-phase dynamics of interactions, providing a theoretical mechanism for how the generalization power of a DNN changes during the training process. Experiments show that our theory accurately predicts the real dynamics of interactions on different DNNs trained for various tasks.", "pdf": "https://openreview.net/pdf/2f6dd32ef834138412b05b54e6d734174aad297d.pdf"} {"title": "OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling", "url": "https://openreview.net/forum?id=siPdcro6uD", "detail_url": "https://openreview.net/forum?id=siPdcro6uD", "authors": "Linhui Xiao,Xiaoshan Yang,Fang Peng,Yaowei Wang,Changsheng Xu", "tags": "NIPS 2024,Poster", "abstract": "Constrained by the separate encoding of vision and language, existing grounding and referring segmentation works heavily rely on bulky Transformer-based fusion en-/decoders and a variety of early-stage interaction technologies. Simultaneously, the current mask visual language modeling (MVLM) fails to capture the nuanced referential relationship between image and text in referring tasks. In this paper, we propose **OneRef**, a minimalist referring framework built on the modality-shared one-tower transformer that unifies the visual and linguistic feature spaces. To model the referential relationship, we introduce a novel MVLM paradigm called Mask Referring Modeling (**MRefM**), which encompasses both referring-aware mask image modeling and referring-aware mask language modeling. Both modules reconstruct not only modality-related content but also cross-modal referring content. Within MRefM, we propose a referring-aware dynamic image masking strategy that is aware of the referred region rather than relying on fixed ratios or generic random masking schemes. By leveraging the unified visual language feature space and incorporating MRefM's ability to model the referential relations, our approach enables direct regression of the referring results without resorting to various complex techniques. Our method consistently surpasses existing approaches and achieves SoTA performance on both grounding and segmentation tasks, providing valuable insights for future research. 
Our code and models are available at https://github.com/linhuixiao/OneRef.", "pdf": "https://openreview.net/pdf/1c7025c22a54c45d08ffe811054a011e5b616da7.pdf"} {"title": "UniGAD: Unifying Multi-level Graph Anomaly Detection", "url": "https://openreview.net/forum?id=sRILMnkkQd", "detail_url": "https://openreview.net/forum?id=sRILMnkkQd", "authors": "Yiqing Lin,Jianheng Tang,Chenyi Zi,H. Vicky Zhao,Yuan Yao,Jia Li", "tags": "NIPS 2024,Poster", "abstract": "Graph Anomaly Detection (GAD) aims to identify uncommon, deviated, or suspicious objects within graph-structured data. Existing methods generally focus on a single graph object type (node, edge, graph, etc.) and often overlook the inherent connections among different object types of graph anomalies. For instance, a money laundering transaction might involve an abnormal account and the broader community it interacts with. To address this, we present UniGAD, the first unified framework for detecting anomalies at node, edge, and graph levels jointly. Specifically, we develop the Maximum Rayleigh Quotient Subgraph Sampler (MRQSampler), which unifies multi-level formats by transferring objects at each level into graph-level tasks on subgraphs. We theoretically prove that MRQSampler maximizes the accumulated spectral energy of subgraphs (i.e., the Rayleigh quotient) to preserve the most significant anomaly information. To further unify multi-level training, we introduce a novel GraphStitch Network to integrate information across different levels, adjust the amount of sharing required at each level, and harmonize conflicting training goals. Comprehensive experiments show that UniGAD outperforms both existing GAD methods specialized for a single task and graph prompt-based approaches for multiple tasks, while also providing robust zero-shot task transferability.", "pdf": "https://openreview.net/pdf/3787248272e67ad0479d37e7cdbee9c24a42fa35.pdf"} {"title": "Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling", "url": "https://openreview.net/forum?id=a75F45dBHK", "detail_url": "https://openreview.net/forum?id=a75F45dBHK", "authors": "Mahdi Karami,Ali Ghodsi", "tags": "NIPS 2024,Poster", "abstract": "In the rapidly evolving field of deep learning, the demand for models that are both expressive and computationally efficient has never been more critical. This paper introduces Orchid, a novel architecture designed to address the quadratic complexity of traditional attention mechanisms without compromising the ability to capture long-range dependencies and in-context learning. At the core of this architecture lies a new data-dependent global convolution layer, which contextually adapts its kernel conditioned on the input sequence using a dedicated conditioning neural network. We design two simple conditioning networks that maintain shift equivariance in our data-dependent convolution operation. The dynamic nature of the proposed convolution kernel grants Orchid high expressivity while maintaining quasilinear scalability for long sequences. We evaluate the proposed model across multiple domains, including language modeling and image classification, to highlight its performance and generality. Our experiments demonstrate that this architecture not only outperforms traditional attention-based architectures such as BERT and Vision Transformers with smaller model sizes, but also extends the feasible sequence length beyond the limitations of the dense attention layers. 
This achievement represents a significant step towards more efficient and scalable deep learning models for sequence modeling.", "pdf": "https://openreview.net/pdf/7fc3ceeeac362e8235f460749eee3719e696e42e.pdf"} {"title": "The Many Faces of Optimal Weak-to-Strong Learning", "url": "https://openreview.net/forum?id=z7h7zMgyPJ", "detail_url": "https://openreview.net/forum?id=z7h7zMgyPJ", "authors": "Mikael Møller Høgsgaard,Kasper Green Larsen,Markus Engelund Mathiasen", "tags": "NIPS 2024,Poster", "abstract": "Boosting is an extremely successful idea, allowing one to combine multiple low-accuracy classifiers into a much more accurate voting classifier. In this work, we present a new and surprisingly simple Boosting algorithm that obtains a provably optimal sample complexity. Sample optimal Boosting algorithms have only recently been developed, and our new algorithm has the fastest runtime among all such algorithms and is the simplest to describe: Partition your training data into 5 disjoint pieces of equal size, run AdaBoost on each, and combine the resulting classifiers via a majority vote. In addition to this theoretical contribution, we also perform the first empirical comparison of the proposed sample optimal Boosting algorithms. Our pilot empirical study suggests that our new algorithm might outperform previous algorithms on large data sets.", "pdf": "https://openreview.net/pdf/2671bac5a8424c491302de1f86dcfa5df321520d.pdf"} {"title": "Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens", "url": "https://openreview.net/forum?id=dB6gwSDXKL", "detail_url": "https://openreview.net/forum?id=dB6gwSDXKL", "authors": "Ruifeng Ren,Yong Liu", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained large language models based on Transformers have demonstrated remarkable in-context learning (ICL) abilities. With just a few demonstration examples, the models can implement new tasks without any parameter updates. However, the mechanism of ICL remains an open question. In this paper, we attempt to explore the ICL process in Transformers through a lens of representation learning. First, leveraging kernel methods, we derive a dual model for a single softmax attention layer. The ICL inference process of the attention layer aligns with the training procedure of its dual model, generating token representation predictions that are equivalent to the dual model's test outputs. We delve into the training process of this dual model from a representation learning standpoint and further derive a generalization error bound related to the quantity of demonstration tokens. Subsequently, we extend our theoretical conclusions to more complicated scenarios, including one Transformer layer and multiple attention layers. Furthermore, drawing inspiration from existing representation learning methods, especially contrastive learning, we propose potential modifications for the attention layer. 
Finally, experiments are designed to support our findings.", "pdf": "https://openreview.net/pdf/fa78502207425a128ead7f5c89f107666d487256.pdf"} {"title": "Rethinking The Training And Evaluation of Rich-Context Layout-to-Image Generation", "url": "https://openreview.net/forum?id=83e3DPVrFC", "detail_url": "https://openreview.net/forum?id=83e3DPVrFC", "authors": "Jiaxin Cheng,Zixu Zhao,Tong He,Tianjun Xiao,Yicong Zhou,Zheng Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in generative models have significantly enhanced their capacity for image generation, enabling a wide range of applications such as image editing, completion and video editing. A specialized area within generative modeling is layout-to-image (L2I) generation, where predefined layouts of objects guide the generative process. In this study, we introduce a novel regional cross-attention module tailored to enrich layout-to-image generation. This module notably improves the representation of layout regions, particularly in scenarios where existing methods struggle with highly complex and detailed textual descriptions. Moreover, while current open-vocabulary L2I methods are trained in an open-set setting, their evaluations often occur in closed-set environments. To bridge this gap, we propose two metrics to assess L2I performance in open-vocabulary scenarios. Additionally, we conduct a comprehensive user study to validate the consistency of these metrics with human preferences.", "pdf": "https://openreview.net/pdf/9ba3c3f5924abddd42f76e3f3a823c12c7afb3b8.pdf"} {"title": "Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging", "url": "https://openreview.net/forum?id=81YIt63TTn", "detail_url": "https://openreview.net/forum?id=81YIt63TTn", "authors": "Zhenyi Lu,Chenghao Fan,Wei Wei,Xiaoye Qu,Dangyang Chen,Yu Cheng", "tags": "NIPS 2024,Poster", "abstract": "In the era of large language models, model merging is a promising way to combine multiple task-specific models into a single multitask model without extra training. \nHowever, two challenges remain: (a) interference between different models and (b) heterogeneous data during testing. Traditional model merging methods often show significant performance gaps compared to fine-tuned models due to these issues. \nAdditionally, a one-size-fits-all model lacks flexibility for diverse test data, leading to performance degradation. \nWe show that both shared and exclusive task-specific knowledge are crucial for merging performance, but directly merging exclusive knowledge hinders overall performance. \nIn view of this, we propose Twin-Merging, a method that encompasses two principal stages: \n(1) modularizing knowledge into shared and exclusive components, with compression to reduce redundancy and enhance efficiency; \n(2) dynamically merging shared and task-specific knowledge based on the input. \nThis approach narrows the performance gap between merged and fine-tuned models and improves adaptability to heterogeneous data. 
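The dual-model view of in-context learning described above has a well-known simple instance: for *linear* attention, the attention output on a query equals the test prediction of a linear model trained by one gradient-descent step on the in-context pairs. The sketch below shows only that simpler duality, not the paper's kernel-based construction for softmax attention.

```python
# Linear-attention form of the attention/gradient-descent duality.
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 8
X = rng.normal(size=(n, d))        # in-context inputs x_i
Y = rng.normal(size=(n, 1))        # in-context targets y_i
x_q = rng.normal(size=(d, 1))      # query token
eta = 0.1

# Dual view: one GD step on L(W) = 0.5 * sum_i ||y_i - W x_i||^2 from W0 = 0
# gives W = eta * sum_i y_i x_i^T.
W = eta * (Y.T @ X)
pred_dual = W @ x_q

# Attention view: linear attention with values Y, keys X, query x_q
# computes eta * sum_i y_i (x_i^T x_q) -- the same quantity.
pred_attn = eta * (Y.T @ (X @ x_q))
assert np.allclose(pred_dual, pred_attn)
```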
\nExtensive experiments on $20$ datasets for both language and vision tasks demonstrate the effectiveness of our method, showing an average improvement of $28.34\%$ in absolute normalized score for discriminative tasks and even surpassing the fine-tuned upper bound on the generative tasks.", "pdf": "https://openreview.net/pdf/939dc81485410d4fd576b0098b90645956eac93c.pdf"} {"title": "On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion", "url": "https://openreview.net/forum?id=RMfiqfWAWg", "detail_url": "https://openreview.net/forum?id=RMfiqfWAWg", "authors": "Chenghao Fan,Zhenyi Lu,Wei Wei,Jie Tian,Xiaoye Qu,Dangyang Chen,Yu Cheng", "tags": "NIPS 2024,Poster", "abstract": "Efficient fine-tuning of large language models for task-specific applications is imperative, yet the vast number of parameters in these models makes their training increasingly challenging.\nDespite numerous proposals for effective methods, a substantial memory overhead remains for gradient computations during updates. \textit{Can we fine-tune a series of task-specific small models and transfer their knowledge directly to a much larger model without additional training?} \nIn this paper, we explore weak-to-strong specialization using logit arithmetic, facilitating a direct answer to this question.\nExisting weak-to-strong methods often employ a static knowledge transfer ratio and a single small model for transferring complex knowledge, which leads to suboptimal performance. \nTo surmount these limitations,\nwe propose a dynamic logit fusion approach that works with a series of task-specific small models, each specialized in a different task. \nThis method adaptively allocates weights among these models at each decoding step,\nlearning the weights through Kullback-Leibler divergence constrained optimization problems. \nWe conduct extensive experiments across various benchmarks in both single-task and multi-task settings, achieving leading results.\nBy transferring expertise from the 7B model to the 13B model, our method closes the performance gap by 96.4\% in single-task scenarios and by 86.3\% in multi-task scenarios compared to full fine-tuning of the 13B model. Notably, we even achieve superior performance on unseen tasks. Moreover, we further demonstrate that our method can effortlessly integrate in-context learning for single tasks and task arithmetic for multi-task scenarios.", "pdf": "https://openreview.net/pdf/c6f99b3fb069cc916c3736236545654d7502395d.pdf"} {"title": "Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models", "url": "https://openreview.net/forum?id=cvaSru8LeO", "detail_url": "https://openreview.net/forum?id=cvaSru8LeO", "authors": "Jiayu Wang,Yifei Ming,Zhenmei Shi,Vibhav Vineet,Xin Wang,Yixuan Li,Neel Joshi", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) and vision-language models (VLMs) have demonstrated remarkable performance across a wide range of tasks and domains. Despite this promise, spatial understanding and reasoning\u2014a fundamental component of human cognition\u2014remains under-explored. We propose SpatialEval, a novel benchmark that covers diverse aspects of spatial reasoning such as relationship understanding, navigation, and counting. We conduct a comprehensive evaluation of competitive language and vision-language models. 
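The weak-to-strong logit arithmetic in "On Giant's Shoulders" lends itself to a small illustration. In the sketch below, a large model's logits are steered by the deltas between task-specific small experts and their shared small base model; the per-step weights come from a simple softmax over expert confidence, which is only a stand-in for the paper's KL-constrained weight optimization.

```python
# Hedged sketch of dynamic logit fusion at one decoding step.
import numpy as np

def fuse_logits(large_logits, expert_logits, base_small_logits, temperature=1.0):
    """large_logits, base_small_logits: (V,) arrays; expert_logits: (K, V)."""
    deltas = expert_logits - base_small_logits        # per-expert task knowledge
    # Confidence of each expert at this step (max log-probability).
    conf = np.array([e.max() - np.logaddexp.reduce(e) for e in expert_logits])
    w = np.exp(conf / temperature)
    w /= w.sum()                                      # adaptive fusion weights
    return large_logits + w @ deltas

V, K = 10, 3
rng = np.random.default_rng(0)
fused = fuse_logits(rng.normal(size=V), rng.normal(size=(K, V)), rng.normal(size=V))
next_token = int(np.argmax(fused))
```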
Our findings reveal several counter-intuitive insights that have been overlooked in the literature: (1) Spatial reasoning poses significant challenges where competitive models can fall behind random guessing; (2) Despite additional visual input, VLMs often under-perform compared to their LLM counterparts; (3) When both textual and visual information is available, multi-modal language models become less reliant on visual information if sufficient textual clues are provided. Additionally, we demonstrate that leveraging redundancy between vision and text can significantly enhance model performance. We hope our study will inform the development of multimodal models to improve spatial intelligence and further close the gap with human intelligence. Our code is available at https://github.com/jiayuww/SpatialEval.", "pdf": "https://openreview.net/pdf/57e67390f654ba1dc3e43ed196fc93045e732f16.pdf"} {"title": "Understanding Representation of Deep Equilibrium Models from Neural Collapse Perspective", "url": "https://openreview.net/forum?id=obUXeUMmq1", "detail_url": "https://openreview.net/forum?id=obUXeUMmq1", "authors": "Haixiang Sun,Ye Shi", "tags": "NIPS 2024,Poster", "abstract": "The Deep Equilibrium Model (DEQ), a typical implicit neural network, is notable for its memory efficiency and competitive performance compared to explicit neural networks. However, there has been relatively limited theoretical analysis on the representation of DEQ. In this paper, we utilize Neural Collapse ($\mathcal{NC}$) as a tool to systematically analyze the representation of DEQ under both balanced and imbalanced conditions. $\mathcal{NC}$ is an interesting phenomenon in the neural network training process that characterizes the geometry of class features and classifier weights. While extensively studied in traditional explicit neural networks, the $\mathcal{NC}$ phenomenon has not received substantial attention in the context of implicit neural networks. \nWe theoretically show that $\mathcal{NC}$ exists in DEQ under balanced conditions. Moreover, in imbalanced settings, despite the presence of minority collapse, DEQ demonstrates advantages over explicit neural networks. These advantages include the convergence of extracted features to the vertices of a simplex equiangular tight frame and self-duality properties under mild conditions, highlighting DEQ's superiority in handling imbalanced datasets. Finally, we validate our theoretical analyses through experiments in both balanced and imbalanced scenarios.", "pdf": "https://openreview.net/pdf/1a16568f6e76ba0c102e0389408ff6409d097dcb.pdf"} {"title": "PaCE: Parsimonious Concept Engineering for Large Language Models", "url": "https://openreview.net/forum?id=lOMHt16T8R", "detail_url": "https://openreview.net/forum?id=lOMHt16T8R", "authors": "Jinqi Luo,Tianjiao Ding,Kwan Ho Ryan Chan,Darshan Thaker,Aditya Chattopadhyay,Chris Callison-Burch,Rene Vidal", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) are being used for a wide variety of tasks. While they are capable of generating human-like responses, they can also produce undesirable output including potentially harmful information, racist or sexist language, and hallucinations. Alignment methods are designed to reduce such undesirable output, via techniques such as fine-tuning, prompt engineering, and representation engineering. 
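The DEQ forward pass analyzed in the Neural Collapse paper above is easy to illustrate: the representation is a fixed point of a single layer, found by iteration rather than by stacking depth. A minimal sketch, assuming a tanh layer with a spectrally rescaled weight matrix so the map is a contraction:

```python
# Minimal DEQ forward pass: iterate z <- f(z, x) to the equilibrium z*.
import numpy as np

rng = np.random.default_rng(0)
d = 32
W = rng.normal(size=(d, d))
W *= 0.9 / np.linalg.norm(W, 2)      # spectral norm < 1 => f is a contraction
U = rng.normal(size=(d, d))
x = rng.normal(size=d)

def f(z, x):
    return np.tanh(W @ z + U @ x)    # the single implicit "layer"

z = np.zeros(d)
for _ in range(200):                 # fixed-point iteration
    z_next = f(z, x)
    if np.linalg.norm(z_next - z) < 1e-8:
        break
    z = z_next
# z now approximates the equilibrium representation z* = f(z*, x)
```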
However, existing methods face several challenges: some require costly fine-tuning for every alignment task; some do not adequately remove undesirable concepts and thus fail to achieve alignment; others remove benign concepts, lowering the linguistic capabilities of LLMs. To address these issues, we propose Parsimonious Concept Engineering (PaCE), a novel activation engineering framework for alignment. First, to sufficiently model the concepts, we construct a large-scale concept dictionary in the activation space, in which each atom corresponds to a semantic concept. Given any alignment task, we instruct a concept partitioner to efficiently annotate the concepts as benign or undesirable. Then, at inference time, we decompose the LLM activations along the concept dictionary via sparse coding, to accurately represent the activations as linear combinations of benign and undesirable components. By removing the latter ones from the activations, we reorient the behavior of the LLM towards the alignment goal. We conduct experiments on tasks such as response detoxification, faithfulness enhancement, and sentiment revision, and show that PaCE achieves state-of-the-art alignment performance while maintaining linguistic capabilities.", "pdf": "https://openreview.net/pdf/015901e2a61cebe7c75c3c2eef65ed19fa634704.pdf"} {"title": "MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs", "url": "https://openreview.net/forum?id=GN2qbxZlni", "detail_url": "https://openreview.net/forum?id=GN2qbxZlni", "authors": "Zhongshen Zeng,Yinhong Liu,Yingjia Wan,Jingyao Li,Pengguang Chen,Jianbo Dai,Yuxuan Yao,Rongwu Xu,Zehan Qi,Wanru Zhao,Linling Shen,Jianqiao Lu,Haochen Tan,Yukang Chen,Hao Zhang,Zhan Shi,Bailin Wang,Zhijiang Guo,Jiaya Jia", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have shown increasing capability in problem-solving and decision-making, largely based on the step-by-step chain-of-thought reasoning processes. However, evaluating these reasoning abilities has become increasingly challenging. Existing outcome-based benchmarks are beginning to saturate, becoming less effective in tracking meaningful progress. To address this, we present a process-based benchmark MR-Ben that demands a meta-reasoning skill, where LMs are asked to locate and analyse potential errors in automatically generated reasoning steps. Our meta-reasoning paradigm is especially suited for system-2 slow thinking, mirroring the human cognitive process of carefully examining assumptions, conditions, calculations, and logic to identify mistakes. MR-Ben comprises 5,975 questions curated by human experts across a wide range of subjects, including physics, chemistry, logic, coding, and more. Through our designed metrics for assessing meta-reasoning on this benchmark, we identify interesting limitations and weaknesses of current LLMs (open-source and closed-source models). 
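PaCE's inference-time step above amounts to sparse coding over a concept dictionary followed by removal of the undesirable atoms. A hedged sketch of that decompose-and-remove step, with the dictionary, the activation, and the benign/undesirable labels all synthetic placeholders:

```python
# Sparse-code an activation over a concept dictionary, then drop undesirable atoms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_atoms = 64, 40
D = rng.normal(size=(d, n_atoms))            # concept dictionary (atoms as columns)
D /= np.linalg.norm(D, axis=0)
undesirable = rng.random(n_atoms) < 0.2      # partitioner's annotations (placeholder)
a = rng.normal(size=d)                       # an LLM activation vector

# Sparse code c: minimize (1/2d)||a - D c||^2 + alpha ||c||_1.
c = Lasso(alpha=0.01, fit_intercept=False).fit(D, a).coef_

c_clean = c.copy()
c_clean[undesirable] = 0.0                   # remove undesirable components
residual = a - D @ c                         # keep the un-modeled part of the activation
a_out = D @ c_clean + residual               # reoriented activation
```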
For example, while models like OpenAI's o1 series demonstrate strong performance by effectively scrutinizing the solution space, many other state-of-the-art models fall significantly behind on MR-Ben, exposing potential shortcomings in their training strategies and inference methodologies.", "pdf": "https://openreview.net/pdf/41a8a7be31110ef4ed84f23480f70da7eef87160.pdf"} {"title": "DRIP: Unleashing Diffusion Priors for Joint Foreground and Alpha Prediction in Image Matting", "url": "https://openreview.net/forum?id=jz5ZMeN9He", "detail_url": "https://openreview.net/forum?id=jz5ZMeN9He", "authors": "Xiaodi Li,Zongxin Yang,Ruijie Quan,Yi Yang", "tags": "NIPS 2024,Poster", "abstract": "Recovering the foreground color and opacity/alpha matte from a single image (i.e., image matting) is a challenging and ill-posed problem where data priors play a critical role in achieving precise results. Traditional methods generally predict the alpha matte and then extract the foreground through post-processing, often failing to produce high-fidelity foreground color. This failure stems from the models' difficulty in learning robust color predictions from limited matting datasets. To address this, we explore the potential of leveraging vision priors embedded in pre-trained latent diffusion models (LDM) for estimating foreground RGBA values in challenging scenarios and rare objects. We introduce Drip, a novel approach for image matting that harnesses the rich prior knowledge of LDM models. Our method incorporates a switcher and a cross-domain attention mechanism to extend the original LDM for joint prediction of the foreground color and opacity. This setup facilitates mutual information exchange and ensures high consistency across both modalities. To mitigate the inherent reconstruction errors of the LDM's VAE decoder, we propose a latent transparency decoder to align the RGBA prediction with the input image, thereby reducing discrepancies. Comprehensive experimental results demonstrate that our approach achieves state-of-the-art performance in foreground and alpha predictions and shows remarkable generalizability across various benchmarks.", "pdf": "https://openreview.net/pdf/c9e4d515bafd68a3f246568b632b4629fa4b754b.pdf"} {"title": "SampDetox: Black-box Backdoor Defense via Perturbation-based Sample Detoxification", "url": "https://openreview.net/forum?id=Y6RV6z98Pk", "detail_url": "https://openreview.net/forum?id=Y6RV6z98Pk", "authors": "Yanxin Yang,Chentao Jia,DengKe Yan,Ming Hu,Tianlin Li,Xiaofei Xie,Xian Wei,Mingsong Chen", "tags": "NIPS 2024,Poster", "abstract": "The advancement of Machine Learning has enabled the widespread deployment of Machine Learning as a Service (MLaaS) applications. However, the untrustworthy nature of third-party ML services poses backdoor threats. Existing defenses in MLaaS are limited by their reliance on training samples or white-box model analysis, highlighting the need for a black-box backdoor purification method. In our paper, we attempt to use diffusion models for purification by introducing noise in a forward diffusion process to destroy backdoors and recover clean samples through a reverse generative process. However, since higher noise also destroys the semantics of the original samples, it still results in low restoration performance. 
To investigate the effectiveness of noise in eliminating different types of backdoors, we conducted a preliminary study, which demonstrates that backdoors with low visibility are easily destroyed by lightweight noise, whereas those with high visibility require intense noise to destroy but are also easy to detect. Based on the study, we propose SampDetox, which strategically combines lightweight and intensive noise. SampDetox applies weak noise to eliminate low-visibility backdoors and compares the structural similarity between the recovered and original samples to localize high-visibility backdoors. Intensive noise is then applied to these localized areas, destroying the high-visibility backdoors while preserving global semantic information. As a result, detoxified samples can be used for inference, even by poisoned models. Comprehensive experiments demonstrate the effectiveness of SampDetox in defending against various state-of-the-art backdoor attacks.", "pdf": "https://openreview.net/pdf/26e3335b97570784bbe89aa20771ead9a6809f91.pdf"} {"title": "StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving", "url": "https://openreview.net/forum?id=UkxJd64mki", "detail_url": "https://openreview.net/forum?id=UkxJd64mki", "authors": "Chang Gao,Haiyun Jiang,Deng Cai,Shuming Shi,Wai Lam", "tags": "NIPS 2024,Poster", "abstract": "Most existing prompting methods suffer from the issues of generalizability and consistency, as they often rely on instance-specific solutions that may not be applicable to other instances and lack task-level consistency across the selected few-shot examples. To address these limitations, we propose a comprehensive framework, StrategyLLM, allowing LLMs to perform inductive reasoning, deriving general strategies from specific task instances, and deductive reasoning, applying these general strategies to particular task examples, for constructing generalizable and consistent few-shot prompts. It employs four LLM-based agents: strategy generator, executor, optimizer, and evaluator, working together to generate, evaluate, and select promising strategies for a given task. Experimental results demonstrate that StrategyLLM outperforms the competitive baseline CoT-SC that requires human-annotated solutions on 13 datasets across 4 challenging tasks without human involvement, including math reasoning (34.2\% $\rightarrow$ 38.8\%), commonsense reasoning (70.3\% $\rightarrow$ 72.5\%), algorithmic reasoning (73.7\% $\rightarrow$ 85.0\%), and symbolic reasoning (30.0\% $\rightarrow$ 79.2\%). Further analysis reveals that StrategyLLM is applicable to various LLMs and demonstrates advantages across numerous scenarios.", "pdf": "https://openreview.net/pdf/83f47ebda874c4f4dd7bbadefa77f1ff912951a9.pdf"} {"title": "Novel Object Synthesis via Adaptive Text-Image Harmony", "url": "https://openreview.net/forum?id=ENLsNDfys0", "detail_url": "https://openreview.net/forum?id=ENLsNDfys0", "authors": "Zeren Xiong,Ze-dong Zhang,Zikun Chen,Shuo Chen,Xiang Li,Gan Sun,Jian Yang,Jun Li", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study an object synthesis task that combines an object text with an object image to create a new object image. However, most diffusion models struggle with this task, \textit{i.e.}, often generating an object that predominantly reflects either the text or the image due to an imbalance between their inputs. 
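SampDetox's localization step above can be sketched with a local structural-similarity map: regions where the recovered sample diverges from the input are treated as candidate high-visibility triggers and hit with intensive noise. In the hedged sketch below, `purify` is only a placeholder for the paper's diffusion-based reverse process, which is not reproduced here.

```python
# Hedged sketch of SampDetox-style localization with SSIM.
import numpy as np
from skimage.metrics import structural_similarity

def purify(img, noise_level, rng):
    # Placeholder: adds forward-diffusion-like Gaussian noise and clips;
    # SampDetox would instead run a pre-trained diffusion model's reverse process.
    return np.clip(img + rng.normal(scale=noise_level, size=img.shape), 0, 1)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                    # stand-in grayscale sample

recovered = purify(img, noise_level=0.05, rng=rng)          # lightweight noise
_, ssim_map = structural_similarity(img, recovered, data_range=1.0, full=True)

mask = ssim_map < 0.3                         # low local similarity => suspected trigger
detoxified = recovered.copy()
detoxified[mask] = purify(recovered, noise_level=0.5, rng=rng)[mask]  # intensive noise locally
```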
To address this issue, we propose a simple yet effective method called Adaptive Text-Image Harmony (ATIH) to generate novel and surprising objects.\nFirst, we introduce a scale factor and an injection step to balance text and image features in cross-attention and to preserve image information in self-attention during the text-image inversion diffusion process, respectively. Second, to better integrate object text and image, we design a balanced loss function with a noise parameter, ensuring both optimal editability and fidelity of the object image. Third, to adaptively adjust these parameters, we present a novel similarity score function that not only maximizes the similarities between the generated object image and the input text/image but also balances these similarities to harmonize text and image integration. \nExtensive experiments demonstrate the effectiveness of our approach, showcasing remarkable object creations such as colobus-glass jar. https://xzr52.github.io/ATIH/", "pdf": "https://openreview.net/pdf/5e7a67122a7332a12ddf4bbffe98ae8953b7d390.pdf"} {"title": "Diffusion Twigs with Loop Guidance for Conditional Graph Generation", "url": "https://openreview.net/forum?id=fvOCJAAYLx", "detail_url": "https://openreview.net/forum?id=fvOCJAAYLx", "authors": "Giangiacomo Mercatali,Yogesh Verma,Andre Freitas,Vikas Garg", "tags": "NIPS 2024,Poster", "abstract": "We introduce a novel score-based diffusion framework named Twigs that incorporates multiple co-evolving flows for enriching conditional generation tasks. Specifically, a central or trunk diffusion process is associated with a primary variable (e.g., graph structure), and additional offshoot or stem processes are dedicated to dependent variables (e.g., graph properties or labels). A new strategy, which we call loop guidance, effectively orchestrates the flow of information between the trunk and the stem processes during sampling. This approach allows us to uncover intricate interactions and dependencies, and unlock new generative capabilities. We provide extensive experiments to demonstrate strong performance gains of the proposed method over contemporary baselines in the context of conditional graph generation, underscoring the potential of Twigs in challenging generative tasks such as inverse molecular design and molecular optimization. \nCode is available at https://github.com/Aalto-QuML/Diffusion_twigs.", "pdf": "https://openreview.net/pdf/56876d8150090faf15c9900b5ade616dc63d59d6.pdf"} {"title": "Beyond Efficiency: Molecular Data Pruning for Enhanced Generalization", "url": "https://openreview.net/forum?id=GJ0qIevGjD", "detail_url": "https://openreview.net/forum?id=GJ0qIevGjD", "authors": "Dingshuo Chen,Zhixun Li,Yuyan Ni,Guibin Zhang,Ding Wang,Qiang Liu,Shu Wu,Jeffrey Xu Yu,Liang Wang", "tags": "NIPS 2024,Poster", "abstract": "With the emergence of various molecular tasks and massive datasets, how to perform efficient training has become an urgent yet under-explored issue in the area. Data pruning (DP), as an oft-stated approach to saving training burdens, filters out less influential samples to form a coreset for training. However, the increasing reliance on pretrained models for molecular tasks renders traditional in-domain DP methods incompatible. Therefore, we propose a **Mol**ecular data **P**runing framework for **e**nhanced **G**eneralization (**MolPeg**), which focuses on the source-free data pruning scenario, where data pruning is applied with pretrained models. 
By maintaining two models with different updating paces during training, we introduce a novel scoring function to measure the informativeness of samples based on the loss discrepancy. As a plug-and-play framework, MolPeg realizes the perception of both the source and target domains and consistently outperforms existing DP methods across four downstream tasks. Remarkably, it can surpass the performance obtained from full-dataset training, even when pruning up to 60-70% of the data on the HIV and PCBA datasets. Our work suggests that the discovery of effective data-pruning metrics could provide a viable path to both enhanced efficiency and superior generalization in transfer learning.", "pdf": "https://openreview.net/pdf/9088723634e07e023deef321b116ff4076d69714.pdf"} {"title": "Graph Neural Flows for Unveiling Systemic Interactions Among Irregularly Sampled Time Series", "url": "https://openreview.net/forum?id=tFB5SsabVb", "detail_url": "https://openreview.net/forum?id=tFB5SsabVb", "authors": "Giangiacomo Mercatali,Andre Freitas,Jie Chen", "tags": "NIPS 2024,Poster", "abstract": "Interacting systems are prevalent in nature. It is challenging to accurately predict the dynamics of the system if its constituent components are analyzed independently. We develop a graph-based model that unveils the systemic interactions of time series observed at irregular time points, by using a directed acyclic graph to model the conditional dependencies (a form of causal notation) of the system components and learning this graph in tandem with a continuous-time model that parameterizes the solution curves of ordinary differential equations (ODEs). Our technique, a graph neural flow, leads to substantial enhancements over non-graph-based methods, as well as graph-based methods without the modeling of conditional dependencies. We validate our approach on several tasks, including time series classification and forecasting, to demonstrate its efficacy.", "pdf": "https://openreview.net/pdf/fed6b3dc2dd66de999874c711e0ab177f0c6f469.pdf"} {"title": "MambaLLIE: Implicit Retinex-Aware Low Light Enhancement with Global-then-Local State Space", "url": "https://openreview.net/forum?id=l6xVqzm72i", "detail_url": "https://openreview.net/forum?id=l6xVqzm72i", "authors": "Jiangwei Weng,Zhiqiang Yan,Ying Tai,Jianjun Qian,Jian Yang,Jun Li", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in low light image enhancement have been dominated by the Retinex-based learning framework, leveraging convolutional neural networks (CNNs) and Transformers. However, the vanilla Retinex theory primarily addresses global illumination degradation and neglects local issues such as noise and blur in dark conditions. Moreover, CNNs and Transformers struggle to capture global degradation due to their limited receptive fields. While state space models (SSMs) have shown promise in long-sequence modeling, they face challenges in combining local invariants and global context in visual data. In this paper, we introduce MambaLLIE, an implicit Retinex-aware low light enhancer featuring a global-then-local state space design. We first propose a Local-Enhanced State Space Module (LESSM) that incorporates an augmented local bias within a 2D selective scan mechanism, enhancing the original SSMs by preserving local 2D dependency. Additionally, an Implicit Retinex-aware Selective Kernel module (IRSK) dynamically selects features using spatially-varying operations, adapting to varying inputs through an adaptive kernel selection process. 
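MolPeg's scoring idea above is simple to sketch: two copies of the pretrained model are updated at different paces (below, an online model and a slow EMA of its parameters), and samples are ranked by the discrepancy of their losses under the two models. The decay, keep ratio, and synthetic losses are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of loss-discrepancy data pruning in the spirit of MolPeg.
import numpy as np

def ema_update(slow_params, fast_params, decay=0.99):
    """Slow model trails the fast one via an exponential moving average."""
    return {k: decay * slow_params[k] + (1 - decay) * fast_params[k]
            for k in slow_params}

def informativeness(loss_fast, loss_slow):
    """Per-sample losses under the two models -> pruning score."""
    return np.abs(loss_fast - loss_slow)

rng = np.random.default_rng(0)
loss_fast, loss_slow = rng.random(1000), rng.random(1000)   # stand-in losses
scores = informativeness(loss_fast, loss_slow)
keep_ratio = 0.4                                            # e.g. prune 60% of the data
coreset = np.argsort(scores)[-int(keep_ratio * len(scores)):]
```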
Our Global-then-Local State Space Block (GLSSB) integrates LESSM and IRSK with layer normalization (LN) as its core. This design enables MambaLLIE to achieve comprehensive global long-range modeling and flexible local feature aggregation. Extensive experiments demonstrate that MambaLLIE significantly outperforms state-of-the-art CNN and Transformer-based methods. Our code is available at https://github.com/wengjiangwei/MambaLLIE.", "pdf": "https://openreview.net/pdf/90cb332aa165948530e0ea9a1b5949966821bef1.pdf"} {"title": "Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization", "url": "https://openreview.net/forum?id=MI8Z9gutIn", "detail_url": "https://openreview.net/forum?id=MI8Z9gutIn", "authors": "Qianli Shen,Yezhen Wang,Zhouhao Yang,Xiang Li,Haonan Wang,Yang Zhang,Jonathan Scarlett,Zhanxing Zhu,Kenji Kawaguchi", "tags": "NIPS 2024,Poster", "abstract": "Bi-level optimization (BO) has become a fundamental mathematical framework for addressing hierarchical machine learning problems.\nAs deep learning models continue to grow in size, the demand for scalable bi-level optimization has become increasingly critical.\nTraditional gradient-based bi-level optimization algorithms, due to their inherent characteristics, are ill-suited to meet the demands of large-scale applications.\nIn this paper, we introduce **F**orward **G**radient **U**nrolling with **F**orward **G**radient, abbreviated as **$($FG$)^2$U**, which achieves an unbiased stochastic approximation of the meta gradient for bi-level optimization.\n$($FG$)^2$U circumvents the memory and approximation issues associated with classical bi-level optimization approaches, and delivers significantly more accurate gradient estimates than existing large-scale bi-level optimization approaches.\nAdditionally, $($FG$)^2$U is inherently designed to support parallel computing, enabling it to effectively leverage large-scale distributed computing systems to achieve significant computational efficiency.\nIn practice, $($FG$)^2$U and other methods can be strategically placed at different stages of the training process to achieve a more cost-effective two-phase paradigm.\nFurther, $($FG$)^2$U is easy to implement within popular deep learning frameworks, and can be conveniently adapted to address more challenging zeroth-order bi-level optimization scenarios.\nWe provide a thorough convergence analysis and a comprehensive practical discussion for $($FG$)^2$U, complemented by extensive empirical evaluations, showcasing its superior performance in diverse large-scale bi-level optimization tasks.", "pdf": "https://openreview.net/pdf/b45730a0c328fe4ce623e1fdab142dd9fef54f2c.pdf"} {"title": "Rethinking Memory and Communication Costs for Efficient Data Parallel Training of Large Language Models", "url": "https://openreview.net/forum?id=4Un2TD9bNe", "detail_url": "https://openreview.net/forum?id=4Un2TD9bNe", "authors": "Hanxiao Zhang,Lin JU,Chan Wu,Jinjing Huang,Youshao Xiao,Zhenglei Zhou,Zhiming fan,Zhaoxin Huan,Siyuan Li,Fanzhuang Meng,Lei Liang,Xiaolu Zhang,JUN ZHOU", "tags": "NIPS 2024,Poster", "abstract": "Recently, various strategies for distributed training of large language models (LLMs) have been proposed.\nBy categorizing them into basic strategies and composite strategies, we have discovered that existing basic strategies provide limited options in specific scenarios, leaving considerable room for optimization in training speed.\nIn this paper, we rethink the impact of memory and communication costs on the training speed of LLMs, 
taking into account the impact of intra- and inter-group communication performance disparities, and then propose a new set of basic strategies named the \textbf{Pa}rtial \textbf{R}edundancy \textbf{O}ptimizer (PaRO).\nPaRO Data Parallelism (PaRO-DP) accelerates LLM training through refined model state partitioning and tailored training procedures. At the same time, PaRO Collective Communications (PaRO-CC) speeds up collective communication operations by rearranging the topology. We also propose a guideline for choosing different DP strategies based on simple quantitative calculations, which yields minimal ranking errors.\nOur experiments demonstrate that PaRO improves the training speed of LLMs to up to 266\% of that of ZeRO-3 when used as a basic DP strategy.\nMoreover, employing PaRO-CC independently for model parallel strategies, such as Megatron, can also boost the training speed by 17\%.", "pdf": "https://openreview.net/pdf/e81556f8396c5ffdfa40806831aae33dcd131916.pdf"} {"title": "Nonstationary Sparse Spectral Permanental Process", "url": "https://openreview.net/forum?id=jS34QpqdWs", "detail_url": "https://openreview.net/forum?id=jS34QpqdWs", "authors": "Zicheng Sun,Yixuan Zhang,Zenan Ling,Xuhui Fan,Feng Zhou", "tags": "NIPS 2024,Poster", "abstract": "Existing permanental processes often impose constraints on kernel types or stationarity, limiting the model's expressiveness. To overcome these limitations, we propose a novel approach utilizing the sparse spectral representation of nonstationary kernels. \nThis technique relaxes the constraints on kernel types and stationarity, allowing for more flexible modeling while reducing computational complexity to the linear level. \nAdditionally, we introduce a deep kernel variant by hierarchically stacking multiple spectral feature mappings, further enhancing the model's expressiveness to capture complex patterns in data. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our approach, particularly in scenarios with pronounced data nonstationarity. Additionally, ablation studies are conducted to provide insights into the impact of various hyperparameters on model performance.", "pdf": "https://openreview.net/pdf/95d303ea5a03f8c300d97a4825910ab765228036.pdf"} {"title": "Rethinking Fourier Transform from A Basis Functions Perspective for Long-term Time Series Forecasting", "url": "https://openreview.net/forum?id=BAfKBkr8IP", "detail_url": "https://openreview.net/forum?id=BAfKBkr8IP", "authors": "Runze Yang,Longbing Cao,JIE YANG,li jianxun", "tags": "NIPS 2024,Poster", "abstract": "The interaction between Fourier transform and deep learning opens new avenues for long-term time series forecasting (LTSF). We propose a new perspective to reconsider the Fourier transform from a basis functions perspective. Specifically, the real and imaginary parts of the frequency components can be viewed as the coefficients of cosine and sine basis functions at tiered frequency levels, respectively. We argue existing Fourier-based methods do not involve basis functions and thus fail to interpret frequency coefficients precisely and consider the time-frequency relationship sufficiently, leading to inconsistent starting cycles and inconsistent series length issues. Accordingly, a novel Fourier basis mapping (FBM) method addresses these issues by mixing time and frequency domain features through Fourier basis expansion. 
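The forward-gradient primitive that $($FG$)^2$U builds on is easy to demonstrate on a toy objective: a single forward-mode JVP with a random tangent yields an unbiased gradient estimate, with no reverse-mode unrolling and hence no activation storage. A minimal sketch using PyTorch's `torch.func.jvp`; the objective, step size, and iteration count are illustrative only.

```python
# Forward gradient: unbiased gradient estimate from one forward-mode JVP.
import torch
from torch.func import jvp

def loss(w):
    return ((w - 1.0) ** 2).sum()

w = torch.zeros(5)
for _ in range(100):
    v = torch.randn_like(w)               # random tangent with E[v v^T] = I
    _, dir_deriv = jvp(loss, (w,), (v,))  # scalar directional derivative <grad, v>
    g_hat = dir_deriv * v                 # unbiased: E[g_hat] = grad loss(w)
    w = w - 0.05 * g_hat
print(w)                                  # approaches the minimizer (all ones)
```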
Differing from existing approaches, FBM (i) embeds the discrete Fourier transform with basis functions, and then (ii) enables plug-and-play use in various types of neural networks for better performance. FBM extracts explicit frequency features while preserving temporal characteristics, enabling the mapping network to capture the time-frequency relationships. By incorporating our unique time-frequency features, the FBM variants can enhance any type of network, such as linear, multilayer-perceptron-based, transformer-based, and Fourier-based networks, achieving state-of-the-art LTSF results on diverse real-world datasets with just one or three fully connected layers. The code is available at: https://github.com/runze1223/Fourier-Basis-Mapping.", "pdf": "https://openreview.net/pdf/cb9ab1e8a45fbfdd1bd3acded890575f554719d0.pdf"} {"title": "Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor", "url": "https://openreview.net/forum?id=cbkJBYIkID", "detail_url": "https://openreview.net/forum?id=cbkJBYIkID", "authors": "Shaokui Wei,Hongyuan Zha,Baoyuan Wu", "tags": "NIPS 2024,Poster", "abstract": "Data-poisoning backdoor attacks are serious security threats to machine learning models, where an adversary can manipulate the training dataset to inject backdoors into models. In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned. Unlike most existing methods that primarily detect and remove/unlearn suspicious samples to mitigate malicious backdoor attacks, we propose a novel defense approach called PDB (Proactive Defensive Backdoor). Specifically, PDB leverages the \u201chome field\u201d advantage of defenders by proactively injecting a defensive backdoor into the model during training. Taking advantage of controlling the training process, the defensive backdoor is designed to suppress the malicious backdoor effectively while remaining secret to attackers. In addition, we introduce a reversible mapping to determine the defensive target label. During inference, PDB embeds a defensive trigger in the inputs and reverses the model\u2019s prediction, suppressing malicious backdoor and ensuring the model's utility on the original task. Experimental results across various datasets and models demonstrate that our approach achieves state-of-the-art defense performance against a wide range of backdoor attacks. The code is available at https://github.com/shawkui/Proactive_Defensive_Backdoor.", "pdf": "https://openreview.net/pdf/ce6a360fb52cf8f5bff3ea8215f90e1cb46d3b1d.pdf"} {"title": "Learning to Shape In-distribution Feature Space for Out-of-distribution Detection", "url": "https://openreview.net/forum?id=1Du3mMP5YN", "detail_url": "https://openreview.net/forum?id=1Du3mMP5YN", "authors": "Yonggang Zhang,Jie Lu,Bo Peng,Zhen Fang,Yiu-ming Cheung", "tags": "NIPS 2024,Poster", "abstract": "Out-of-distribution (OOD) detection is critical for deploying machine learning models in the open world. To design scoring functions that discern OOD data from the in-distribution (ID) cases using a pre-trained discriminative model, existing methods tend to make rigorous distributional assumptions either explicitly or implicitly due to the lack of knowledge about the learned feature space in advance. 
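The basis-functions reading of the Fourier transform behind FBM can be verified in a few lines: the real and imaginary rFFT coefficients are exactly the weights of cosine and sine basis functions, so the time-domain signal is an explicit weighted basis expansion. A small self-contained check:

```python
# Real/imaginary rFFT coefficients as cosine/sine basis weights.
import numpy as np

N = 16
t = np.arange(N)
x = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.cos(2 * np.pi * 5 * t / N)

X = np.fft.rfft(x)                            # K = N//2 + 1 complex coefficients
k = np.arange(len(X))[:, None]
cos_basis = np.cos(2 * np.pi * k * t / N)     # (K, N) cosine basis functions
sin_basis = np.sin(2 * np.pi * k * t / N)     # (K, N) sine basis functions

w = np.full(len(X), 2.0)                      # interior frequencies count twice
w[0] = 1.0                                    # DC bin counts once
if N % 2 == 0:
    w[-1] = 1.0                               # Nyquist bin counts once (even N)
recon = (w[:, None] * (X.real[:, None] * cos_basis
                       - X.imag[:, None] * sin_basis)).sum(axis=0) / N
assert np.allclose(recon, x)                  # the expansion is exact
```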
\nThe mismatch between the learned and assumed distributions motivates us to raise a fundamental yet under-explored question: \textit{Is it possible to deterministically model the feature distribution while pre-training a discriminative model?}\nThis paper gives an affirmative answer to this question by presenting a Distributional Representation Learning (\texttt{DRL}) framework for OOD detection. In particular, \texttt{DRL} explicitly enforces the underlying feature space to conform to a pre-defined mixture distribution, together with an online approximation of normalization constants to enable end-to-end training. Furthermore, we formulate \texttt{DRL} into a provably convergent Expectation-Maximization algorithm to avoid trivial solutions and rearrange the sequential sampling to guide the training consistency. Extensive evaluations across mainstream OOD detection benchmarks empirically manifest the superiority of the proposed \texttt{DRL} over its advanced counterparts.", "pdf": "https://openreview.net/pdf/61b9c10dbfe8bfa5b9fd2b018dabfee761212c93.pdf"} {"title": "A Kernel Perspective on Distillation-based Collaborative Learning", "url": "https://openreview.net/forum?id=LdZ0u1FuXb", "detail_url": "https://openreview.net/forum?id=LdZ0u1FuXb", "authors": "Sejun Park,Kihun Hong,Ganguk Hwang", "tags": "NIPS 2024,Poster", "abstract": "Over the past decade, there has been a growing interest in collaborative learning that can enhance AI models of multiple parties.\nHowever, it is still challenging to enhance their performance without sharing the private data and models of individual parties.\nOne recent promising approach is to develop distillation-based algorithms that exploit unlabeled public data, but the results are still unsatisfactory in both theory and practice.\nTo tackle this problem, we rigorously analyze a representative distillation-based algorithm in the view of kernel regression.\nThis work provides the first theoretical results to prove the (nearly) minimax optimality of the nonparametric collaborative learning algorithm that does not directly share local data or models in massively distributed statistically heterogeneous environments.\nInspired by our theoretical results, we also propose a practical distillation-based collaborative learning algorithm based on neural network architecture.\nOur algorithm successfully bridges the gap between our theoretical assumptions and practical settings with neural networks through feature kernel matching.\nWe simulate various regression tasks to verify our theory and demonstrate the practical feasibility of our proposed algorithm.", "pdf": "https://openreview.net/pdf/cb9b908c6342d37d801b44d752e9b7bb52127e3b.pdf"} {"title": "Unleashing the Denoising Capability of Diffusion Prior for Solving Inverse Problems", "url": "https://openreview.net/forum?id=2fiYzs3YkH", "detail_url": "https://openreview.net/forum?id=2fiYzs3YkH", "authors": "Jiawei Zhang,Jiaxin Zhuang,Cheng Jin,Gen Li,Yuantao Gu", "tags": "NIPS 2024,Poster", "abstract": "The recent emergence of diffusion models has significantly advanced the precision of learnable priors, presenting innovative avenues for addressing inverse problems. Previous works have endeavored to integrate diffusion priors into the maximum a posteriori estimation (MAP) framework and design optimization methods to solve the inverse problem. However, prevailing optimization-based algorithms primarily exploit the prior information within the diffusion models while neglecting their denoising capability. 
To bridge this gap, this work leverages the diffusion process to reframe noisy inverse problems as a two-variable constrained optimization task by introducing an auxiliary optimization variable that represents a 'noisy' sample at an equivalent denoising step. The projection gradient descent method is efficiently utilized to solve the corresponding optimization problem by truncating the gradient through the $\\mu$-predictor. The proposed algorithm, termed ProjDiff, effectively harnesses the prior information and the denoising capability of a pre-trained diffusion model within the optimization framework. Extensive experiments on the image restoration tasks and source separation and partial generation tasks demonstrate that ProjDiff exhibits superior performance across various linear and nonlinear inverse problems, highlighting its potential for practical applications. Code is available at https://github.com/weigerzan/ProjDiff/.", "pdf": "https://openreview.net/pdf/3cdefd3db0836a915b7f6664868ba9ce7ba04903.pdf"} {"title": "Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving", "url": "https://openreview.net/forum?id=y9huwsnGRJ", "detail_url": "https://openreview.net/forum?id=y9huwsnGRJ", "authors": "Jianbiao Mei,Yukai Ma,Xuemeng Yang,Licheng Wen,Xinyu Cai,Xin Li,Daocheng Fu,Bo Zhang,Pinlong Cai,Min Dou,Botian Shi,Liang He,Yong Liu,Yu Qiao", "tags": "NIPS 2024,Poster", "abstract": "Autonomous driving has advanced significantly due to sensors, machine learning, and artificial intelligence improvements. However, prevailing methods struggle with intricate scenarios and causal relationships, hindering adaptability and interpretability in varied environments. To address the above problems, we introduce LeapAD, a novel paradigm for autonomous driving inspired by the human cognitive process. Specifically, LeapAD emulates human attention by selecting critical objects relevant to driving decisions, simplifying environmental interpretation, and mitigating decision-making complexities. Additionally, LeapAD incorporates an innovative dual-process decision-making module, which consists of an Analytic Process (System-II) for thorough analysis and reasoning, along with a Heuristic Process (System-I) for swift and empirical processing. The Analytic Process leverages its logical reasoning to accumulate linguistic driving experience, which is then transferred to the Heuristic Process by supervised fine-tuning. Through reflection mechanisms and a growing memory bank, LeapAD continuously improves itself from past mistakes in a closed-loop environment. Closed-loop testing in CARLA shows that LeapAD outperforms all methods relying solely on camera input, requiring 1-2 orders of magnitude less labeled data. Experiments also demonstrate that as the memory bank expands, the Heuristic Process with only 1.8B parameters can inherit the knowledge from a GPT-4 powered Analytic Process and achieve continuous performance improvement. 
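ProjDiff above instantiates the projected-gradient template for constrained inverse problems: alternate a gradient step on the data-fidelity term with a projection onto the constraint set. A generic sketch on a noisy linear problem, where the "prior" is just a box constraint rather than ProjDiff's diffusion-model projection:

```python
# Projected gradient descent on a noisy linear inverse problem y = A x + noise.
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 60
A = rng.normal(size=(m, n))
x_true = np.clip(rng.normal(0.5, 0.3, size=n), 0, 1)
y = A @ x_true + 0.01 * rng.normal(size=m)

x = np.full(n, 0.5)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe step for the quadratic term
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)              # gradient step on ||Ax - y||^2 / 2
    x = np.clip(x, 0.0, 1.0)                      # projection onto the constraint set
```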
Project page: https://pjlab-adg.github.io/LeapAD", "pdf": "https://openreview.net/pdf/b1babf61241a3da282fb13f4bf5bd64b4d8f7e45.pdf"} {"title": "Meaningful Learning: Enhancing Abstract Reasoning in Large Language Models via Generic Fact Guidance", "url": "https://openreview.net/forum?id=TIhiFqGOYC", "detail_url": "https://openreview.net/forum?id=TIhiFqGOYC", "authors": "Kai Xiong,Xiao Ding,Ting Liu,Bing Qin,Dongliang Xu,Qing Yang,Hongtao Liu,Yixin Cao", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence. Despite this, when tasked with several simple questions supported by a generic fact, LLMs often struggle to abstract and apply the generic fact to provide consistent and precise answers, revealing a deficiency in abstract reasoning abilities. This has sparked a vigorous debate about whether LLMs are genuinely reasoning or merely memorizing. In light of this, we design a preliminary study to quantify and delve into the abstract reasoning abilities of existing LLMs. Our findings reveal a substantial discrepancy between their general reasoning and abstract reasoning performances. To relieve this problem, we tailor an abstract reasoning dataset (AbsR) together with a meaningful learning paradigm to teach LLMs how to leverage generic facts for reasoning purposes. The results show that our approach not only boosts the general reasoning performance of LLMs but also makes considerable strides towards their capacity for abstract reasoning, moving beyond simple memorization or imitation to a more nuanced understanding and application of generic facts. The code is available at https://github.com/Waste-Wood/MeanLearn.", "pdf": "https://openreview.net/pdf/79bd6bf9a023e531aba3370e7efcfa678b4e44c3.pdf"} {"title": "Aggregate-and-Adapt Natural Language Prompts for Downstream Generalization of CLIP", "url": "https://openreview.net/forum?id=Yz3wBKoK0K", "detail_url": "https://openreview.net/forum?id=Yz3wBKoK0K", "authors": "Chen Huang,Skyler Seto,Samira Abnar,David Grangier,Navdeep Jaitly,Joshua M. Susskind", "tags": "NIPS 2024,Poster", "abstract": "Large pretrained vision-language models like CLIP have shown promising generalization capability, but may struggle in specialized domains (e.g., satellite imagery) or fine-grained classification (e.g., car models) where the visual concepts are unseen or under-represented during pretraining. Prompt learning offers a parameter-efficient finetuning framework that can adapt CLIP to downstream tasks even when limited annotation data are available. In this paper, we improve prompt learning by distilling the textual knowledge from natural language prompts (either human- or LLM-generated) to provide rich priors for those under-represented concepts. We first obtain a prompt ``summary'' aligned to each input image via a learned prompt aggregator. Then we jointly train a prompt generator, optimized to produce a prompt embedding that stays close to the aggregated summary while minimizing task loss at the same time. We dub such prompt embedding as Aggregate-and-Adapted Prompt Embedding (AAPE). AAPE is shown to be able to generalize to different downstream data distributions and tasks, including vision-language understanding tasks (e.g., few-shot classification, VQA) and generation tasks (image captioning) where AAPE achieves competitive performance. 
We also show AAPE is particularly helpful for handling non-canonical and OOD examples. Furthermore, AAPE learning eliminates the LLM-based inference cost required by baselines, and scales better with data and LLM model size.", "pdf": "https://openreview.net/pdf/54fddfff7094964ff34b82583cc9685c8804669a.pdf"} {"title": "On the Necessity of Collaboration for Online Model Selection with Decentralized Data", "url": "https://openreview.net/forum?id=uqWfLgZpV1", "detail_url": "https://openreview.net/forum?id=uqWfLgZpV1", "authors": "Junfan Li,Zheshun Wu,Zenglin Xu,Irwin King", "tags": "NIPS 2024,Poster", "abstract": "We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients. Previous work proposed various federated algorithms without demonstrating their necessity, while we answer the question from a novel perspective of computational constraints. We prove lower bounds on the regret, and propose a federated algorithm and analyze the upper bound. Our results show (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces. We clarify the unnecessary nature of collaboration in previous federated algorithms for distributed online multi-kernel learning, and improve the regret bounds at a smaller computational and communication cost. Our algorithm relies on three new techniques including an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and decoupling model selection and prediction, which might be of independent interest.", "pdf": "https://openreview.net/pdf/2575e8197592ad29d2d365f3f67b34f1ae597edd.pdf"} {"title": "Beyond Euclidean: Dual-Space Representation Learning for Weakly Supervised Video Violence Detection", "url": "https://openreview.net/forum?id=TbPv0qFnHO", "detail_url": "https://openreview.net/forum?id=TbPv0qFnHO", "authors": "Jiaxu Leng,Zhanjie Wu,Mingpi Tan,Yiran Liu,Ji Gan,Haosheng Chen,Xinbo Gao", "tags": "NIPS 2024,Poster", "abstract": "While numerous Video Violence Detection (VVD) methods have focused on representation learning in Euclidean space, they struggle to learn sufficiently discriminative features, leading to weaknesses in recognizing normal events that are visually similar to violent events (i.e., ambiguous violence). In contrast, hyperbolic representation learning, renowned for its ability to model hierarchical and complex relationships between events, has the potential to amplify the discrimination between visually similar events. Inspired by these observations, we develop a novel Dual-Space Representation Learning (DSRL) method for weakly supervised VVD to utilize the strength of both Euclidean and hyperbolic geometries, capturing the visual features of events while also exploring the intrinsic relations between events, thereby enhancing the discriminative capacity of the features. DSRL employs a novel information aggregation strategy to progressively learn event context in hyperbolic spaces, which selects aggregation nodes through layer-sensitive hyperbolic association degrees constrained by hyperbolic Dirichlet energy. Furthermore, DSRL attempts to break the cyber-balkanization of different spaces, utilizing cross-space attention to facilitate information interactions between Euclidean and hyperbolic space to capture better discriminative features for final violence detection. 
Comprehensive experiments demonstrate the effectiveness of our proposed DSRL.", "pdf": "https://openreview.net/pdf/1ae786f2051113822bfc7cd92ac07db3fcc4c9d9.pdf"} {"title": "CRONOS: Enhancing Deep Learning with Scalable GPU Accelerated Convex Neural Networks", "url": "https://openreview.net/forum?id=YfLzYczAo3", "detail_url": "https://openreview.net/forum?id=YfLzYczAo3", "authors": "Miria Feng,Zachary Frangella,Mert Pilanci", "tags": "NIPS 2024,Poster", "abstract": "We introduce the CRONOS algorithm for convex optimization of two-layer neural networks. \nCRONOS is the first algorithm capable of scaling to high-dimensional datasets such as ImageNet, which are ubiquitous in modern deep learning. \nThis significantly improves upon prior work, which has been restricted to downsampled versions of MNIST and CIFAR-10.\nTaking CRONOS as a primitive, we then develop a new algorithm called CRONOS-AM, which combines CRONOS with alternating minimization, to obtain an algorithm capable of training multi-layer networks with arbitrary architectures.\nOur theoretical analysis proves that CRONOS converges to the global minimum of the convex reformulation under mild assumptions. \nIn addition, we validate the efficacy of CRONOS and CRONOS-AM through extensive large-scale numerical experiments with GPU acceleration in JAX.\nOur results show that CRONOS-AM can obtain comparable or better validation accuracy than predominant tuned deep learning optimizers on vision and language tasks with benchmark datasets such as ImageNet and IMDb.\nTo the best of our knowledge, CRONOS is the first algorithm which utilizes the convex reformulation to enhance performance on large-scale learning tasks.", "pdf": "https://openreview.net/pdf/62be0fde8ada2a71986f541623378947133648e9.pdf"} {"title": "Faster Local Solvers for Graph Diffusion Equations", "url": "https://openreview.net/forum?id=3Z0LTDjIM0", "detail_url": "https://openreview.net/forum?id=3Z0LTDjIM0", "authors": "Jiahe Bai,Baojian Zhou,Deqing Yang,Yanghua Xiao", "tags": "NIPS 2024,Poster", "abstract": "Efficient computation of graph diffusion equations (GDEs), such as Personalized PageRank, Katz centrality, and the Heat kernel, is crucial for clustering, training neural networks, and many other graph-related problems. Standard iterative methods require accessing the whole graph per iteration, making them time-consuming for large-scale graphs. While existing local solvers approximate diffusion vectors through heuristic local updates, they often operate sequentially and are typically designed for specific diffusion types, limiting their applicability. Given that diffusion vectors are highly localizable, as measured by the participation ratio, this paper introduces a novel framework for approximately solving GDEs using a local diffusion process. This framework reveals the suboptimality of existing local solvers. Furthermore, our approach effectively localizes standard iterative solvers by designing simple and provably sublinear time algorithms. These new local solvers are highly parallelizable, making them well-suited for implementation on GPUs. We demonstrate the effectiveness of our framework in quickly obtaining approximate diffusion vectors, achieving up to a hundred-fold speed improvement, and its applicability to large-scale dynamic graphs. 
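The hyperbolic side of DSRL's dual-space design rests on the Poincaré-ball geometry, where distances blow up near the boundary of the ball; that is what lets visually similar events be pushed far apart. A small sketch of the Poincaré distance, contrasted with the Euclidean one:

```python
# Poincare-ball distance versus Euclidean distance.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points inside the unit Poincare ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / (denom + eps))

u = np.array([0.10, 0.00])
v = np.array([0.95, 0.00])                 # near the boundary of the ball
print(np.linalg.norm(u - v))               # Euclidean distance: 0.85
print(poincare_distance(u, v))             # hyperbolic distance: ~3.5, much larger
```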
Our framework could also facilitate more efficient local message-passing mechanisms for GNNs.", "pdf": "https://openreview.net/pdf/62b08b483e21d66336a342b6f768608efed05dab.pdf"} {"title": "Temporal Graph Neural Tangent Kernel with Graphon-Guaranteed", "url": "https://openreview.net/forum?id=266nH7kLSV", "detail_url": "https://openreview.net/forum?id=266nH7kLSV", "authors": "Katherine Tieu,Dongqi Fu,Yada Zhu,Hendrik Hamann,Jingrui He", "tags": "NIPS 2024,Poster", "abstract": "_Graph Neural Tangent Kernel_ (GNTK) fuses graph neural networks and graph kernels, simplifies the process of graph representation learning, interprets the training dynamics of graph neural networks, and serves various applications like protein identification, image segmentation, and social network analysis. In practice, graph data carries complex information among entities that inevitably evolves over time, and previous static graph neural tangent kernel methods may be stuck in the sub-optimal solution in terms of both effectiveness and efficiency. As a result, extending the advantage of GNTK to temporal graphs becomes a critical problem. To this end, we propose the temporal graph neural tangent kernel, which not only extends the simplicity and interpretation ability of GNTK to the temporal setting but also leads to rigorous temporal graph classification error bounds. Furthermore, we prove that when the input temporal graph grows over time in the number of nodes, our temporal graph neural tangent kernel will converge in the limit to the _graphon_ NTK value, which implies the transferability and robustness of the proposed kernel method, named **Temp**oral **G**raph **N**eural **T**angent **K**ernel with **G**raphon-**G**uaranteed or **Temp-G$^3$NTK**. In addition to the theoretical analysis, we also perform extensive experiments, not only demonstrating the superiority of Temp-G$^3$NTK in the temporal graph classification task, but also showing that Temp-G$^3$NTK can achieve very competitive performance in node-level tasks like node classification compared with various SOTA graph kernel and representation learning baselines. Our code is available at https://github.com/kthrn22/TempGNTK.", "pdf": "https://openreview.net/pdf/aa8413b4d6118fa3f33ae344ad2e3e3fd195bf64.pdf"} {"title": "Quantum Algorithms for Non-smooth Non-convex Optimization", "url": "https://openreview.net/forum?id=wsGzvhnoaX", "detail_url": "https://openreview.net/forum?id=wsGzvhnoaX", "authors": "Chengchang Liu,Chaowen Guan,Jianhao He,John C.S. Lui", "tags": "NIPS 2024,Poster", "abstract": "This paper considers the problem of finding a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz continuous objective, a rich function class covering a great number of important applications. \nWe construct a novel zeroth-order quantum estimator for the gradient of the smoothed surrogate. \nBased on this estimator, we propose a novel quantum algorithm that achieves a query complexity of $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-3})$ on the stochastic function value oracle, where $d$ is the dimension of the problem. \nWe also enhance the query complexity to $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-7/3})$ by introducing a variance reduction variant. 
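The local solvers that the graph-diffusion paper above improves on can be sketched concretely. Below is the classic residual "push" approximation of Personalized PageRank, which touches only vertices near the seed instead of the whole graph; this is a standard textbook variant, not the paper's new framework.

```python
# Residual-push approximation of Personalized PageRank (local solver sketch).
from collections import defaultdict, deque

def approx_ppr(adj, seed, alpha=0.15, eps=1e-6):
    """adj: dict node -> list of neighbors (undirected). Returns a sparse PPR vector."""
    p, r = defaultdict(float), defaultdict(float)
    r[seed] = 1.0
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        deg = len(adj[u])
        if r[u] < eps * deg:                  # residual too small to push
            continue
        p[u] += alpha * r[u]                  # settle an alpha-fraction of mass at u
        share = (1 - alpha) * r[u] / deg
        r[u] = 0.0
        for v in adj[u]:                      # spread the remainder to neighbors
            r[v] += share
            if r[v] >= eps * len(adj[v]):
                queue.append(v)
    return dict(p)

# Tiny example: path graph 0-1-2-3, seeded at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(approx_ppr(adj, seed=0))
```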
\nOur findings demonstrate the clear advantages of utilizing quantum techniques for non-convex non-smooth optimization, as they outperform the optimal classical methods in the dependence on $\epsilon$ by a factor of $\epsilon^{-2/3}$.", "pdf": "https://openreview.net/pdf/364623854979ad37fb6af986b474c827d812af53.pdf"} {"title": "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation", "url": "https://openreview.net/forum?id=KZrfBTrPey", "detail_url": "https://openreview.net/forum?id=KZrfBTrPey", "authors": "Jingnan Zheng,Han Wang,An Zhang,Tai D. Nguyen,Jun Sun,Tat-Seng Chua", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) can elicit unintended and even harmful content when misaligned with human values, posing severe risks to users and society. To mitigate these risks, current evaluation benchmarks predominantly employ expert-designed contextual scenarios to assess how well LLMs align with human values. However, the labor-intensive nature of these benchmarks limits their test scope, hindering their ability to generalize to the extensive variety of open-world use cases and identify rare but crucial long-tail risks. Additionally, these static tests fail to adapt to the rapid evolution of LLMs, making it hard to evaluate timely alignment issues. To address these challenges, we propose ALI-Agent, an evaluation framework that leverages the autonomous abilities of LLM-powered agents to conduct in-depth and adaptive alignment assessments. ALI-Agent operates through two principal stages: Emulation and Refinement. During the Emulation stage, ALI-Agent automates the generation of realistic test scenarios. In the Refinement stage, it iteratively refines the scenarios to probe long-tail risks. Specifically, ALI-Agent incorporates a memory module to guide test scenario generation, a tool-using module to reduce human labor in tasks such as evaluating feedback from target LLMs, and an action module to refine tests. Extensive experiments across three aspects of human values--stereotypes, morality, and legality--demonstrate that ALI-Agent, as a general evaluation framework, effectively identifies model misalignment. Systematic analysis also validates that the generated test scenarios represent meaningful use cases, as well as integrate enhanced measures to probe long-tail risks.", "pdf": "https://openreview.net/pdf/48717ce572116e740ccd3f83f3344d95ffee435c.pdf"} {"title": "Piecewise-Stationary Bandits with Knapsacks", "url": "https://openreview.net/forum?id=haa457jwjw", "detail_url": "https://openreview.net/forum?id=haa457jwjw", "authors": "Xilin Zhang,Wang Chi Cheung", "tags": "NIPS 2024,Poster", "abstract": "We study Bandits with Knapsacks (Bwk) in a piecewise-stationary environment. We propose a novel inventory reserving algorithm which draws new insights into the problem. Suppose parameters $\\eta_{\\min}, \\eta_{\\max} \\in (0,1]$ respectively lower and upper bound the reward earned and the resources consumed in a time round. Our algorithm achieves a provably near-optimal competitive ratio of $O(\\log(\\eta_{\\max}/\\eta_{\\min}))$, with a matching lower bound provided. Our performance guarantee is based on a dynamic benchmark, distinguishing our work from existing works on adversarial Bwk, which compare against the static benchmark. 
Furthermore, different from existing non-stationary Bwk work, we do not require a bounded global variation.", "pdf": "https://openreview.net/pdf/c67e0c13cef57df189a5c43e9f5ec06ab2b4a52a.pdf"} {"title": "TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight", "url": "https://openreview.net/forum?id=gZWYdJ3c26", "detail_url": "https://openreview.net/forum?id=gZWYdJ3c26", "authors": "Hyun-Kurl Jang,Jihun Kim,Hyeokjun Kweon,Kuk-Jin Yoon", "tags": "NIPS 2024,Poster", "abstract": "Semantic Scene Completion (SSC) aims to perform geometric completion and semantic segmentation simultaneously. Despite the promising results achieved by existing studies, the inherently ill-posed nature of the task presents significant challenges in diverse driving scenarios. This paper introduces TALoS, a novel test-time adaptation approach for SSC that excavates the information available in driving environments. Specifically, we focus on the fact that observations made at a certain moment can serve as Ground Truth (GT) for scene completion at another moment. Given the characteristics of the LiDAR sensor, an observation of an object at a certain location confirms both 1) the occupation of that location and 2) the absence of obstacles along the line of sight from the LiDAR to that point. TALoS utilizes these observations to obtain self-supervision about occupancy and emptiness, guiding the model to adapt to the scene in test time. In a similar manner, we aggregate reliable SSC predictions among multiple moments and leverage them as semantic pseudo-GT for adaptation. Further, to leverage future observations that are not accessible at the current time, we present a dual optimization scheme using the model in which the update is delayed until the future observation is available. Evaluations on the SemanticKITTI validation and test sets demonstrate that TALoS significantly improves the performance of the pre-trained SSC model.", "pdf": "https://openreview.net/pdf/48c0b0d873ddff12c2e58584da231852484fa268.pdf"} {"title": "Action Imitation in Common Action Space for Customized Action Image Synthesis", "url": "https://openreview.net/forum?id=h2e4G2YiwR", "detail_url": "https://openreview.net/forum?id=h2e4G2YiwR", "authors": "Wang Lin,Jingyuan Chen,Jiaxin Shi,Zirun Guo,Yichen Zhu,Zehan Wang,Tao Jin,Zhou Zhao,Fei Wu,Shuicheng YAN,Hanwang Zhang", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel method, \\textbf{TwinAct}, to tackle the challenge of decoupling actions and actors in order to customize the text-guided diffusion models (TGDMs) for few-shot action image generation. TwinAct addresses the limitations of existing methods that struggle to decouple actions from other semantics (e.g., the actor's appearance) due to the lack of an effective inductive bias with few exemplar images. Our approach introduces a common action space, which is a textual embedding space focused solely on actions, enabling precise customization without actor-related details. Specifically, TwinAct involves three key steps: 1) Building common action space based on a set of representative action phrases; 2) Imitating the customized action within the action space; and 3) Generating highly adaptable customized action images in diverse contexts with action similarity loss. To comprehensively evaluate TwinAct, we construct a novel benchmark, which provides sample images with various forms of actions. 
Extensive experiments demonstrate TwinAct's superiority in generating accurate, context-independent customized actions while maintaining the identity consistency of different subjects, including animals, humans, and even customized actors.", "pdf": "https://openreview.net/pdf/acb0e460231bd47b76314b222bc7e222422738af.pdf"} {"title": "Offline Multitask Representation Learning for Reinforcement Learning", "url": "https://openreview.net/forum?id=72tRD2Mfjd", "detail_url": "https://openreview.net/forum?id=72tRD2Mfjd", "authors": "Haque Ishfaq,Thanh Nguyen-Tang,Songtao Feng,Raman Arora,Mengdi Wang,Ming Yin,Doina Precup", "tags": "NIPS 2024,Poster", "abstract": "We study offline multitask representation learning in reinforcement learning (RL), where a learner is provided with an offline dataset from different tasks that share a common representation and is asked to learn the shared representation. We theoretically investigate offline multitask low-rank RL, and propose a new algorithm called MORL for offline multitask representation learning. Furthermore, we examine downstream RL in reward-free, offline and online scenarios, where a new task is introduced to the agent that shares the same representation as the upstream offline tasks. Our theoretical results demonstrate the benefits of using the learned representation from the upstream offline task instead of directly learning the representation of the low-rank model.", "pdf": "https://openreview.net/pdf/378855aa995e72d6df859202988cd768ebc0ead7.pdf"} {"title": "The Implicit Bias of Heterogeneity towards Invariance: A Study of Multi-Environment Matrix Sensing", "url": "https://openreview.net/forum?id=pMPBxMf8T3", "detail_url": "https://openreview.net/forum?id=pMPBxMf8T3", "authors": "Yang Xu,Yihong Gu,Cong Fang", "tags": "NIPS 2024,Poster", "abstract": "Models are expected to engage in invariance learning, which involves distinguishing the core relations that remain consistent across varying environments to ensure the predictions are safe, robust and fair. While existing works consider specific algorithms to realize invariance learning, we show that the model has the potential to learn invariance through standard training procedures. In other words, this paper studies the implicit bias of Stochastic Gradient Descent (SGD) over heterogeneous data and shows that the implicit bias drives the model learning towards an invariant solution. We call this phenomenon implicit invariance learning. Specifically, we theoretically investigate the multi-environment low-rank matrix sensing problem where in each environment, the signal comprises (i) a lower-rank invariant part shared across all environments; and (ii) a significantly varying environment-dependent spurious component. The key insight is that, by simply employing large-step-size, large-batch SGD sequentially in each environment without any explicit regularization, the oscillation caused by heterogeneity can provably prevent the model from learning spurious signals. The model reaches the invariant solution after a certain number of iterations. In contrast, a model learned using pooled SGD over all data would simultaneously learn both the invariant and spurious signals. 
Overall, we unveil another implicit bias that is a result of the symbiosis between the heterogeneity of data and modern algorithms, which is, to the best of our knowledge, the first in the literature.", "pdf": "https://openreview.net/pdf/c0ddf06b5d5e52dfc2af13537aa87138050120f7.pdf"} {"title": "Linear Uncertainty Quantification of Graphical Model Inference", "url": "https://openreview.net/forum?id=XOVks7JHQA", "detail_url": "https://openreview.net/forum?id=XOVks7JHQA", "authors": "Chenghua Guo,Han Yu,Jiaxin Liu,Chao Chen,Qi Li,Sihong Xie,Xi Zhang", "tags": "NIPS 2024,Poster", "abstract": "Uncertainty Quantification (UQ) is vital for decision makers as it offers insights into the potential reliability of data and model, enabling more informed and risk-aware decision-making. \nGraphical models, capable of representing data with complex dependencies, are widely used across domains.\nExisting sampling-based UQ methods are unbiased but cannot guarantee convergence and are time-consuming on large-scale graphs. \nThere are fast UQ methods for graphical models with closed-form solutions and convergence guarantees, but they underestimate uncertainty.\nWe propose *LinUProp*, a UQ method that utilizes a novel linear propagation of uncertainty to model uncertainty among related nodes additively instead of multiplicatively, to offer linear scalability, guaranteed convergence, and closed-form solutions without underestimating uncertainty.\nTheoretically, we decompose the expected prediction error of the graphical model and prove that the uncertainty computed by *LinUProp* is the *generalized variance component* of the decomposition.\nExperimentally, we demonstrate that *LinUProp* is consistent with the sampling-based method but with linear scalability and fast convergence.\nMoreover, *LinUProp* outperforms competitors in uncertainty-based active learning on four real-world graph datasets, achieving higher accuracy with a lower labeling budget.", "pdf": "https://openreview.net/pdf/4451cf52fd3680afaeb712fc8d220de7196044ee.pdf"} {"title": "BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models", "url": "https://openreview.net/forum?id=MaDykgj4Ru", "detail_url": "https://openreview.net/forum?id=MaDykgj4Ru", "authors": "Yibin Wang,Haizhou Shi,Ligong Han,Dimitris N. Metaxas,Hao Wang", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) often suffer from overconfidence during inference, particularly when adapted to downstream domain-specific tasks with limited data. Previous work addresses this issue by employing approximate Bayesian estimation after the LLMs are trained, enabling them to quantify uncertainty. However, such post-training approaches' performance is severely limited by the parameters learned during training. In this paper, we go beyond post-training Bayesianization and propose Bayesian Low-Rank Adaptation by Backpropagation (BLoB), an algorithm that continuously and jointly adjusts both the mean and covariance of LLM parameters throughout the whole fine-tuning process. 
Our empirical results verify the effectiveness of BLoB in terms of generalization and uncertainty estimation, when evaluated on both in-distribution and out-of-distribution data.", "pdf": "https://openreview.net/pdf/a5e9b56686e53677500fe5f8e6bf7fa09f7b718e.pdf"} {"title": "Algorithmic Capabilities of Random Transformers", "url": "https://openreview.net/forum?id=plH8gW7tPQ", "detail_url": "https://openreview.net/forum?id=plH8gW7tPQ", "authors": "Ziqian Zhong,Jacob Andreas", "tags": "NIPS 2024,Poster", "abstract": "Trained transformer models have been found to implement interpretable procedures for tasks like arithmetic and associative recall, but little is understood about how the circuits that implement these procedures originate during training. To what extent do they depend on the supervisory signal provided to models, and to what extent are they attributable to behavior already present in models at the beginning of training? To investigate these questions, we study what functions can be learned by randomly initialized transformers in which only the embedding layers are optimized, so that the only input--output mappings learnable from data are those already implemented (up to a choice of encoding scheme) by the randomly initialized model. We find that these random transformers can perform a wide range of meaningful algorithmic tasks, including modular arithmetic, in-weights and in-context associative recall, decimal addition, parenthesis balancing, and even some aspects of natural language text generation. Our results indicate that some algorithmic capabilities are present in transformers (and accessible via appropriately structured inputs) even before these models are trained.", "pdf": "https://openreview.net/pdf/edb6b6c967e15383353752d484f3eeea16da6215.pdf"} {"title": "RAGraph: A General Retrieval-Augmented Graph Learning Framework", "url": "https://openreview.net/forum?id=Dzk2cRUFMt", "detail_url": "https://openreview.net/forum?id=Dzk2cRUFMt", "authors": "Xinke Jiang,Rihong Qiu,Yongxin Xu,Wentao Zhang,Yichen Zhu,Ruizhe Zhang,Yuchen Fang,Xu Chu,Junfeng Zhao,Yasha Wang", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) have become essential in interpreting relational data across various domains, yet they often struggle to generalize to unseen graph data that differs markedly from training instances. In this paper, we introduce a novel framework called General Retrieval-Augmented Graph Learning (RAGraph), which brings external graph data into the general graph foundation model to improve model generalization on unseen scenarios. At the top of our framework is a toy graph vector library that we established, which captures key attributes, such as features and task-specific label information. During inference, RAGraph adeptly retrieves similar toy graphs based on key similarities in downstream tasks, integrating the retrieved data to enrich the learning context via the message-passing prompting mechanism. Our extensive experimental evaluations demonstrate that RAGraph significantly outperforms state-of-the-art graph learning methods in multiple tasks such as node classification, link prediction, and graph classification across both dynamic and static datasets. 
Furthermore, extensive testing confirms that RAGraph consistently maintains high performance without the need for task-specific fine-tuning, highlighting its adaptability, robustness, and broad applicability.", "pdf": "https://openreview.net/pdf/d70c2a2b826dcaadf8864165ae2b9e0395bb9d2b.pdf"} {"title": "4Real: Towards Photorealistic 4D Scene Generation via Video Diffusion Models", "url": "https://openreview.net/forum?id=SO1aRpwVLk", "detail_url": "https://openreview.net/forum?id=SO1aRpwVLk", "authors": "Heng Yu,Chaoyang Wang,Peiye Zhuang,Willi Menapace,Aliaksandr Siarohin,Junli Cao,Laszlo Attila Jeni,Sergey Tulyakov,Hsin-Ying Lee", "tags": "NIPS 2024,Poster", "abstract": "Existing dynamic scene generation methods mostly rely on distilling knowledge from pre-trained 3D generative models, which are typically fine-tuned on synthetic object datasets.\nAs a result, the generated scenes are often object-centric and lack photorealism. \nTo address these limitations, we introduce a novel pipeline designed for photorealistic text-to-4D scene generation, discarding the dependency on multi-view generative models and instead fully utilizing video generative models trained on diverse real-world datasets. \nOur method begins by generating a reference video using the video generation model.\nWe then learn the canonical 3D representation of the video using a freeze-time video, delicately generated from the reference video.\nTo handle inconsistencies in the freeze-time video, we jointly learn a per-frame deformation to model these imperfections.\nWe then learn the temporal deformation based on the canonical representation to capture dynamic interactions in the reference video. \nThe pipeline facilitates the generation of dynamic scenes with enhanced photorealism and structural integrity, viewable from multiple perspectives, thereby setting a new standard in 4D scene generation.", "pdf": "https://openreview.net/pdf/4e3d97c06b3f3c477290cce53eca1661d8b1aa6b.pdf"} {"title": "Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs", "url": "https://openreview.net/forum?id=qLnXPVvwLx", "detail_url": "https://openreview.net/forum?id=qLnXPVvwLx", "authors": "Yuxuan Qiao,Haodong Duan,Xinyu Fang,Junming Yang,Lin Chen,Songyang Zhang,Jiaqi Wang,Dahua Lin,Kai Chen", "tags": "NIPS 2024,Poster", "abstract": "Vision Language Models (VLMs) demonstrate remarkable proficiency in addressing a wide array of visual questions, which requires strong perception and reasoning faculties. Assessing these two competencies independently is crucial for model refinement, despite the inherent difficulty due to the intertwined nature of seeing and reasoning in existing VLMs. To tackle this issue, we present Prism, an innovative framework designed to disentangle the perception and reasoning processes involved in visual question solving. Prism comprises two distinct stages: a perception stage that utilizes a VLM to extract and articulate visual information in textual form, and a reasoning stage that formulates responses based on the extracted visual information using a Large Language Model (LLM). This modular design enables the systematic comparison and assessment of both proprietary and open-source VLMs for their perception and reasoning strengths. 
Our analytical framework provides several valuable insights, underscoring Prism's potential as a cost-effective solution for vision-language tasks.\nBy combining a streamlined VLM focused on perception with a powerful LLM tailored for reasoning, Prism achieves superior results in general vision-language tasks while substantially cutting down on training and operational expenses. Quantitative evaluations show that Prism, when configured with a vanilla 2B LLaVA and freely accessible GPT-3.5, delivers performance on par with VLMs $10 \\times$ larger on the rigorous multimodal benchmark MMStar.", "pdf": "https://openreview.net/pdf/a26416cef18f1dfea2d59d021c9813a3f39f36da.pdf"} {"title": "Deep linear networks for regression are implicitly regularized towards flat minima", "url": "https://openreview.net/forum?id=F738WY1Xm4", "detail_url": "https://openreview.net/forum?id=F738WY1Xm4", "authors": "Pierre Marion,L\u00e9na\u00efc Chizat", "tags": "NIPS 2024,Poster", "abstract": "The largest eigenvalue of the Hessian, or sharpness, of neural networks is a key quantity to understand their optimization dynamics. In this paper, we study the sharpness of deep linear networks for univariate regression. Minimizers can have arbitrarily large sharpness, but not an arbitrarily small one. Indeed, we show a lower bound on the sharpness of minimizers, which grows linearly with depth. We then study the properties of the minimizer found by gradient flow, which is the limit of gradient descent with vanishing learning rate. We show an implicit regularization towards flat minima: the sharpness of the minimizer is no more than a constant times the lower bound. The constant depends on the condition number of the data covariance matrix, but not on width or depth. This result is proven both for a small-scale initialization and a residual initialization. Results of independent interest are shown in both cases. For small-scale initialization, we show that the learned weight matrices are approximately rank-one and that their singular vectors align. For residual initialization, convergence of the gradient flow for a Gaussian initialization of the residual network is proven. Numerical experiments illustrate our results and connect them to gradient descent with non-vanishing learning rate.", "pdf": "https://openreview.net/pdf/292f2d6578b167cc7c0da53641a45e782b16624f.pdf"} {"title": "Optimal Scalarizations for Sublinear Hypervolume Regret", "url": "https://openreview.net/forum?id=30NS22tgCW", "detail_url": "https://openreview.net/forum?id=30NS22tgCW", "authors": "Qiuyi Zhang", "tags": "NIPS 2024,Poster", "abstract": "Scalarization is a general, parallelizable technique that can be deployed in any multiobjective setting to reduce multiple objectives into one, yet some have dismissed this versatile approach because linear scalarizations cannot explore concave regions of the Pareto frontier. To that end, we aim to find simple non-linear scalarizations that provably explore a diverse set of $k$ objectives on the Pareto frontier, as measured by the dominated hypervolume. We show that hypervolume scalarizations with uniformly random weights achieve an optimal sublinear hypervolume regret bound of $O(T^{-1/k})$, with matching lower bounds that preclude any algorithm from doing better asymptotically. 
For the setting of multiobjective stochastic linear bandits, we utilize properties of hypervolume scalarizations to derive a novel non-Euclidean analysis to get regret bounds of $\\tilde{O}( d T^{-1/2} + T^{-1/k})$, removing unnecessary $\\text{poly}(k)$ dependencies. We support our theory with the strong empirical performance of non-linear scalarizations, which outperform both their linear counterparts and other standard multiobjective algorithms in a variety of natural settings.", "pdf": "https://openreview.net/pdf/861ec41d3ce273d77765d0fe51e426c897f6b370.pdf"} {"title": "On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation", "url": "https://openreview.net/forum?id=s5917zor6V", "detail_url": "https://openreview.net/forum?id=s5917zor6V", "authors": "Yuheng Zhang,Nan Jiang", "tags": "NIPS 2024,Poster", "abstract": "We study off-policy evaluation (OPE) in partially observable environments with complex observations, with the goal of developing estimators whose guarantee avoids exponential dependence on the horizon. While such estimators exist for MDPs, and POMDPs can be converted to history-based MDPs, their estimation errors depend on the state-density ratio for MDPs, which becomes the history ratio after conversion, an exponential object. Recently, Uehara et al. [2022a] proposed future-dependent value functions as a promising framework to address this issue, where the guarantee for memoryless policies depends on the density ratio over the latent state space. However, it also depends on the boundedness of the future-dependent value function and other related quantities, which we show could be exponential in length, thus erasing the advantage of the method. In this paper, we discover novel coverage assumptions tailored to the structure of POMDPs, such as outcome coverage and belief coverage, which enable polynomial bounds on the aforementioned quantities. As a side product, our analyses also lead to the discovery of new algorithms with complementary properties.", "pdf": "https://openreview.net/pdf/ea73856bef4e0b150b71e3a47e27e765b5aba419.pdf"} {"title": "From Text to Trajectory: Exploring Complex Constraint Representation and Decomposition in Safe Reinforcement Learning", "url": "https://openreview.net/forum?id=MDpIQ9hQ7H", "detail_url": "https://openreview.net/forum?id=MDpIQ9hQ7H", "authors": "Pusen Dong,Tianchen Zhu,Yue Qiu,Haoyi Zhou,Jianxin Li", "tags": "NIPS 2024,Poster", "abstract": "Safe reinforcement learning (RL) requires the agent to finish a given task while obeying specific constraints. Giving constraints in natural language form has great potential for practical scenarios due to its flexible transfer capability and accessibility. Previous safe RL methods with natural language constraints typically need to design cost functions manually for each constraint, which requires domain expertise and lacks flexibility. In this paper, we harness the dual role of text in this task, using it not only to provide constraints but also as a training signal. We introduce the Trajectory-level Textual Constraints Translator (TTCT) to replace the manually designed cost function. Our empirical results demonstrate that TTCT effectively comprehends textual constraint and trajectory, and the policies trained by TTCT can achieve a lower violation rate than the standard cost function. 
Additional studies demonstrate that TTCT has zero-shot transfer capability to adapt to constraint-shift environments.", "pdf": "https://openreview.net/pdf/3e9d50d24b1a21619ddb6d7315431df9d2ade121.pdf"} {"title": "From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization", "url": "https://openreview.net/forum?id=dGaMSMeeF8", "detail_url": "https://openreview.net/forum?id=dGaMSMeeF8", "authors": "Mohammad Pedramfar,Vaneet Aggarwal", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces the notion of upper-linearizable/quadratizable functions, a class that extends concavity and DR-submodularity in various settings, including monotone and non-monotone cases over different types of convex sets. A general meta-algorithm is devised to convert algorithms for linear/quadratic maximization into ones that optimize upper-linearizable/quadratizable functions, offering a unified approach to tackling concave and DR-submodular optimization problems. The paper extends these results to multiple feedback settings, facilitating conversions between semi-bandit/first-order feedback and bandit/zeroth-order feedback, as well as between first/zeroth-order feedback and semi-bandit/bandit feedback. Leveraging this framework, new algorithms are derived using existing results as base algorithms for convex optimization, improving upon state-of-the-art results in various cases. Dynamic and adaptive regret guarantees are obtained for DR-submodular maximization, marking the first algorithms to achieve such guarantees in these settings. Notably, the paper achieves these advancements with fewer assumptions compared to existing state-of-the-art results, underscoring its broad applicability and theoretical contributions to non-convex optimization.", "pdf": "https://openreview.net/pdf/1c83676cd0a38cb42286d95f201fff2c1e29b8a2.pdf"} {"title": "Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach", "url": "https://openreview.net/forum?id=FXdMgfCDer", "detail_url": "https://openreview.net/forum?id=FXdMgfCDer", "authors": "Chaoxi Niu,Guansong Pang,Ling Chen,Bing Liu", "tags": "NIPS 2024,Poster", "abstract": "Class-incremental learning (CIL) aims to continually learn a sequence of tasks, with each task consisting of a set of unique classes. Graph CIL (GCIL) follows the same setting but needs to deal with graph tasks (e.g., node classification in a graph). The key characteristic of CIL lies in the absence of task identifiers (IDs) during inference, which causes a significant challenge in separating classes from different tasks (i.e., inter-task class separation). Being able to accurately predict the task IDs can help address this issue, but it is a challenging problem. In this paper, we show theoretically that accurate task ID prediction on graph data can be achieved by a Laplacian smoothing-based graph task profiling approach, in which each graph task is modeled by a task prototype based on Laplacian smoothing over the graph. It guarantees that the task prototypes of the same graph task are nearly the same with a large smoothing step, while those of different tasks are distinct due to differences in graph structure and node attributes. 
Further, to avoid the catastrophic forgetting of the knowledge learned in previous graph tasks, we propose a novel graph prompting approach for GCIL which learns a small discriminative graph prompt for each task, essentially resulting in a separate classification model for each task. The prompt learning requires the training of a single graph neural network (GNN) only once on the first task, and no data replay is required thereafter, thereby yielding a GCIL model that is both replay-free and forget-free. Extensive experiments on four GCIL benchmarks show that i) our task prototype-based method can achieve 100% task ID prediction accuracy on all four datasets, ii) our GCIL model significantly outperforms state-of-the-art competing methods by at least 18% in average CIL accuracy, and iii) our model is fully free of forgetting on the four datasets.", "pdf": "https://openreview.net/pdf/ea5a2d8b2cc69f45e8c1f45e4df1b26bf109b8f4.pdf"} {"title": "EM Distillation for One-step Diffusion Models", "url": "https://openreview.net/forum?id=rafVvthuxD", "detail_url": "https://openreview.net/forum?id=rafVvthuxD", "authors": "Sirui Xie,Zhisheng Xiao,Diederik P Kingma,Tingbo Hou,Ying Nian Wu,Kevin Patrick Murphy,Tim Salimans,Ben Poole,Ruiqi Gao", "tags": "NIPS 2024,Poster", "abstract": "While diffusion models can learn complex distributions, sampling requires a computationally expensive iterative process. Existing distillation methods enable efficient sampling, but have notable limitations, such as performance degradation with very few sampling steps, reliance on training data access, or mode-seeking optimization that may fail to capture the full distribution. We propose EM Distillation (EMD), a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of perceptual quality. Our approach is derived through the lens of Expectation-Maximization (EM), where the generator parameters are updated using samples from the joint distribution of the diffusion teacher prior and inferred generator latents. We develop a reparametrized sampling scheme and a noise cancellation technique that together stabilize the distillation process. We further reveal an interesting connection of our method with existing methods that minimize mode-seeking KL. EMD outperforms existing one-step generative methods in terms of FID scores on ImageNet-64 and ImageNet-128, and compares favorably with prior work on distilling text-to-image diffusion models.", "pdf": "https://openreview.net/pdf/d561a28c837498ba61b189816ed8d05027ee8269.pdf"} {"title": "Dual-Diffusion for Binocular 3D Human Pose Estimation", "url": "https://openreview.net/forum?id=NT8Z5NjwxF", "detail_url": "https://openreview.net/forum?id=NT8Z5NjwxF", "authors": "Xiaoyue Wan,Zhuo Chen,Bingzhi Duan,Xu Zhao", "tags": "NIPS 2024,Poster", "abstract": "Binocular 3D human pose estimation (HPE), reconstructing a 3D pose from 2D poses of two views, offers practical advantages by combining multiview geometry with the convenience of a monocular setup. However, compared to a multiview setup, the reduction in the number of cameras increases uncertainty in 3D reconstruction. To address this issue, we leverage the diffusion model, which has shown success in monocular 3D HPE by recovering 3D poses from noisy data with high uncertainty. Yet, the uncertainty distribution of initial 3D poses remains unknown. 
Considering that 3D errors stem from 2D errors within geometric constraints, we recognize that the uncertainties of 3D and 2D are integrated in a binocular configuration, with the initial 2D uncertainty being well-defined. Based on this insight, we propose Dual-Diffusion specifically for Binocular 3D HPE, simultaneously denoising the uncertainties in 2D and 3D, and recovering plausible and accurate results. Additionally, we introduce Z-embedding as an additional condition for denoising and implement baseline-width-related pose normalization to enhance the model flexibility for various baseline settings. This is crucial because the factors influencing 3D error include depth and baseline width. Extensive experiments validate the effectiveness of our Dual-Diffusion in 2D refinement and 3D estimation. The code and models are available at https://github.com/sherrywan/Dual-Diffusion.", "pdf": "https://openreview.net/pdf/81dd57a78be5808569e64a192580828b0c11c853.pdf"} {"title": "Accelerating Blockwise Parallel Language Models with Draft Refinement", "url": "https://openreview.net/forum?id=KT6F5Sw0eg", "detail_url": "https://openreview.net/forum?id=KT6F5Sw0eg", "authors": "Taehyeon Kim,Ananda Theertha Suresh,Kishore A Papineni,Michael Riley,Sanjiv Kumar,Adrian Benton", "tags": "NIPS 2024,Poster", "abstract": "Autoregressive language models have achieved remarkable advancements, yet their potential is often limited by the slow inference speeds associated with sequential token generation. Blockwise parallel decoding (BPD) was proposed by Stern et al. [42] as a method to improve inference speed of language models by simultaneously predicting multiple future tokens, termed block drafts, which are subsequently verified by the autoregressive model. This paper advances the understanding and improvement of block drafts in two ways. First, we analyze token distributions generated across multiple prediction heads. Second, leveraging these insights, we propose algorithms to improve BPD inference speed by refining the block drafts using task-independent n-gram and neural language models as lightweight rescorers. Experiments demonstrate that by refining block drafts of open-sourced Vicuna and Medusa LLMs, the mean accepted token length is increased by 5-25% relative. This results in over a 3x speedup in wall clock time compared to standard autoregressive decoding in open-source 7B and 13B LLMs.", "pdf": "https://openreview.net/pdf/06e2f9ca2c0d653efe7762f371916649f0417b17.pdf"} {"title": "Task-oriented Time Series Imputation Evaluation via Generalized Representers", "url": "https://openreview.net/forum?id=n2dvAKKQoM", "detail_url": "https://openreview.net/forum?id=n2dvAKKQoM", "authors": "Zhixian Wang,Linxiao Yang,Liang Sun,Qingsong Wen,Yi Wang", "tags": "NIPS 2024,Poster", "abstract": "Time series analysis is widely used in many fields such as power energy, economics, and transportation, including different tasks such as forecasting, anomaly detection, classification, etc. Missing values are widely observed in these tasks, often leading to unpredictable negative effects on existing methods and hindering their further application. In response to this situation, existing time series imputation methods mainly focus on restoring sequences based on their data characteristics, while ignoring the performance of the restored sequences in downstream tasks. 
Considering different requirements of downstream tasks (e.g., forecasting), this paper proposes an efficient downstream task-oriented time series imputation evaluation approach. By combining time series imputation with neural network models used for downstream tasks, the gain of different imputation strategies on downstream tasks is estimated without retraining, and the most favorable imputation value for downstream tasks is given by combining different imputation strategies according to the estimated gain.", "pdf": "https://openreview.net/pdf/ad41397f6b878c343bd484ca0cb2057edfba1ab7.pdf"} {"title": "Unveiling Causal Reasoning in Large Language Models: Reality or Mirage?", "url": "https://openreview.net/forum?id=1IU3P8VDbn", "detail_url": "https://openreview.net/forum?id=1IU3P8VDbn", "authors": "Haoang Chi,He Li,Wenjing Yang,Feng Liu,Long Lan,Xiaoguang Ren,Tongliang Liu,Bo Han", "tags": "NIPS 2024,Poster", "abstract": "Causal reasoning capability is critical in advancing large language models (LLMs) towards artificial general intelligence (AGI). While versatile LLMs appear to have demonstrated capabilities in understanding contextual causality and providing responses that obey the laws of causality, it remains unclear whether they perform genuine causal reasoning akin to humans. However, current evidence indicates the contrary. Specifically, LLMs are only capable of performing shallow (level-1) causal reasoning, primarily attributed to the causal knowledge embedded in their parameters, but they lack the capacity for genuine human-like (level-2) causal reasoning. To support this hypothesis, methodologically, we delve into the autoregression mechanism of transformer-based LLMs, revealing that it is not inherently causal. Empirically, we introduce a new causal Q&A benchmark named CausalProbe 2024, whose corpus is fresh and nearly unseen for the studied LLMs. Empirical results show a significant performance drop on CausalProbe 2024 compared to earlier benchmarks, indicating that LLMs primarily engage in level-1 causal reasoning. To bridge the gap towards level-2 causal reasoning, we draw inspiration from the fact that human reasoning is usually facilitated by general knowledge and intended goals. Inspired by this, we propose G$^2$-Reasoner, an LLM causal reasoning method that incorporates general knowledge and goal-oriented prompts into LLMs' causal reasoning processes. Experiments demonstrate that G$^2$-Reasoner significantly enhances LLMs' causal reasoning capability, particularly in fresh and fictitious contexts. This work sheds light on a new path for LLMs to advance towards genuine causal reasoning, going beyond level-1 and making strides towards level-2.", "pdf": "https://openreview.net/pdf/32f74033237352931c128a574f18a415440344ae.pdf"} {"title": "DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction", "url": "https://openreview.net/forum?id=r8YntmAd0g", "detail_url": "https://openreview.net/forum?id=r8YntmAd0g", "authors": "Xinwei Zhang,Zhiqi Bu,Mingyi Hong,Meisam Razaviyayn", "tags": "NIPS 2024,Poster", "abstract": "Privacy is a growing concern in modern deep-learning systems and applications. Differentially private (DP) training prevents the leakage of sensitive information in the collected training data from the trained machine learning models. DP optimizers, including DP stochastic gradient descent (DPSGD) and its variants, privatize the training procedure by gradient clipping and *DP noise* injection. 
However, in practice, DP models trained using DPSGD and its variants often suffer from significant model performance degradation. Such degradation prevents the application of DP optimization in many key tasks, such as foundation model pretraining. In this paper, we provide a novel *signal processing perspective* to the design and analysis of DP optimizers. We show that a ''frequency domain'' operation called *low-pass filtering* can be used to effectively reduce the impact of DP noise. More specifically, by defining the ''frequency domain'' for both the gradient and differential privacy (DP) noise, we have developed a new component, called DOPPLER. This component is designed for DP algorithms and works by effectively amplifying the gradient while suppressing DP noise within this frequency domain. As a result, it maintains privacy guarantees and enhances the quality of the DP-protected model. Our experiments show that the proposed DP optimizers with a low-pass filter outperform their counterparts without the filter on various models and datasets. Both theoretical and practical evidence suggest that DOPPLER is effective in closing the gap between DP and non-DP training.", "pdf": "https://openreview.net/pdf/f55eefb1a27c15148c2b579611582a133f6f2a1f.pdf"} {"title": "Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion", "url": "https://openreview.net/forum?id=pHiTmEsAfZ", "detail_url": "https://openreview.net/forum?id=pHiTmEsAfZ", "authors": "Yongyuan Liang,Tingqiang Xu,Kaizhe Hu,Guangqi Jiang,Furong Huang,Huazhe Xu", "tags": "NIPS 2024,Poster", "abstract": "Can we generate a control policy for an agent using just one demonstration of desired behaviors as a prompt, as effortlessly as creating an image from a textual description?\nIn this paper, we present **Make-An-Agent**, a novel policy parameter generator that leverages the power of conditional diffusion models for behavior-to-policy generation. Guided by behavior embeddings that encode trajectory information, our policy generator synthesizes latent parameter representations, which can then be decoded into policy networks. \nTrained on policy network checkpoints and their corresponding trajectories, our generation model demonstrates remarkable versatility and scalability on multiple tasks and has a strong generalization ability on unseen tasks to output well-performing policies with only few-shot demonstrations as inputs. We showcase its efficacy and efficiency on various domains and tasks, including varying objectives, behaviors, and even across different robot manipulators. Beyond simulation, we directly deploy policies generated by **Make-An-Agent** onto real-world robots on locomotion tasks. Project page: https://cheryyunl.github.io/make-an-agent/.", "pdf": "https://openreview.net/pdf/30950d8eec8a2456a781dff694642e9b4c2d048c.pdf"} {"title": "L-TTA: Lightweight Test-Time Adaptation Using a Versatile Stem Layer", "url": "https://openreview.net/forum?id=G7NZljVOol", "detail_url": "https://openreview.net/forum?id=G7NZljVOol", "authors": "Jin Shin,Hyun Kim", "tags": "NIPS 2024,Poster", "abstract": "Test-time adaptation (TTA) is the most realistic methodology for adapting deep learning models to the real world using only unlabeled data from the target domain. Numerous TTA studies in deep learning have aimed at minimizing entropy. However, this necessitates forward/backward processes across the entire model and is limited by the inability to fully leverage data based solely on entropy. 
This study presents a groundbreaking TTA solution that involves a departure from the conventional focus on minimizing entropy. Our innovative approach uniquely remodels the stem layer (i.e., the first layer) to emphasize minimizing a new learning criterion, namely, uncertainty. This method requires minimal involvement of the model's backbone, with only the stem layer participating in the TTA process. This approach significantly reduces the memory required for training and enables rapid adaptation to the target domain with minimal parameter updates. Moreover, to maximize data utilization, the stem layer applies a discrete wavelet transform to the input features. It extracts multi-frequency domains and focuses on minimizing their individual uncertainties. The proposed method, integrated into ResNet-26 and ResNet-50 models, demonstrates its robustness by achieving outstanding TTA performance while using the least amount of memory compared to existing studies on CIFAR-10-C, ImageNet-C, and Cityscapes-C benchmark datasets. The code is available at https://github.com/janus103/L_TTA.", "pdf": "https://openreview.net/pdf/f00f5429bf30e23d67511a8233740cf63a50c6e7.pdf"} {"title": "AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation", "url": "https://openreview.net/forum?id=YiYww1d3lE", "detail_url": "https://openreview.net/forum?id=YiYww1d3lE", "authors": "Yuhan Zhu,Yuyang Ji,Zhiyu Zhao,Gangshan Wu,Limin Wang", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained vision-language models (VLMs) have shown impressive results in various visual classification tasks.\nHowever, we often fail to fully unleash their potential when adapting them for new concept understanding due to limited information on new classes.\nTo address this limitation, we introduce a novel adaptation framework, AWT (Augment, Weight, then Transport). AWT comprises three key components: augmenting inputs with diverse visual perspectives and enriched class descriptions through image transformations and language models; dynamically weighting inputs based on the prediction entropy; and employing optimal transport to mine semantic correlations in the vision-language space.\nAWT can be seamlessly integrated into various VLMs, enhancing their zero-shot capabilities without additional training and facilitating few-shot learning through an integrated multimodal adapter module.\nWe verify AWT in multiple challenging scenarios, including zero-shot and few-shot image classification, zero-shot video action recognition, and out-of-distribution generalization. AWT consistently outperforms the state-of-the-art methods in each setting. In addition, our extensive studies further demonstrate AWT's effectiveness and adaptability across different VLMs, architectures, and scales.", "pdf": "https://openreview.net/pdf/ffb83db0591b6ffcceafa66453e2b64dc9b6786d.pdf"} {"title": "Easy Regional Contrastive Learning of Expressive Fashion Representations", "url": "https://openreview.net/forum?id=bCL9U2X9Jg", "detail_url": "https://openreview.net/forum?id=bCL9U2X9Jg", "authors": "Daiqing Qi,Handong Zhao,Sheng Li", "tags": "NIPS 2024,Poster", "abstract": "When learning vision-language models (VLM) for the fashion domain, most existing works design new architectures from vanilla BERT with additional objectives, or perform dense multi-task learning with fashion-specific tasks. 
Though progress has been made, their architectures or objectives are often intricate and their extensibility is limited.\nBy contrast, with simple architecture (comprising only two unimodal encoders) and just the contrastive objective, popular pre-trained VL models (e.g., CLIP) achieve superior performance in general domains, and are easily extended to downstream tasks.\nHowever, inheriting such benefits of CLIP in the fashion domain is non-trivial in the presence of the notable domain gap. Empirically, we find that directly finetuning on fashion data leads CLIP to frequently ignore minor yet important details such as logos and composition, which are critical in fashion tasks such as retrieval and captioning.\nIn this work, to maintain CLIP's simple architecture and objective while explicitly attending to fashion details, we propose $E^2$: Easy Regional Contrastive Learning of Expressive Fashion Representations.\n$E^2$ introduces only a few selection tokens and fusion blocks (just 1.9\\% additional parameters in total) with only contrastive losses. Despite being lightweight, in our primary focus, cross-modal retrieval, $E^2$ notably outperforms existing fashion VLMs with various fashion-specific objectives.\nMoreover, thanks to CLIP's widespread use in downstream tasks in general domains (e.g., zero-shot composed image retrieval and image captioning), our model can easily extend these models from the general domain to the fashion domain with notable improvement.\nTo conduct a comprehensive evaluation, we further collect data from Amazon Reviews to build a new dataset for cross-modal retrieval in the fashion domain.", "pdf": "https://openreview.net/pdf/a7c2b9c4c37a2728d765c8c59968a9fd51817a19.pdf"} {"title": "TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration", "url": "https://openreview.net/forum?id=tnQbciDjVf", "detail_url": "https://openreview.net/forum?id=tnQbciDjVf", "authors": "Yiwei Guo,Shaobin Zhuang,Kunchang Li,Yu Qiao,Yali Wang", "tags": "NIPS 2024,Poster", "abstract": "Vision-language foundation models (such as CLIP) have recently shown their power in transfer learning, owing to large-scale image-text pre-training. However, target domain data in the downstream tasks can be highly different from the pre-training phase, which makes it hard for such a single model to generalize well. Alternatively, there exists a wide range of expert models that contain diversified vision and/or language knowledge pre-trained on different modalities, tasks, networks, and datasets. Unfortunately, these models are \"isolated agents\" with heterogeneous structures, and how to integrate their knowledge for generalizing CLIP-like models has not been fully explored. To bridge this gap, we propose a general and concise TransAgent framework, which transports the knowledge of the isolated agents in a unified manner, and effectively guides CLIP to generalize with multi-source knowledge distillation. With such a distinct framework, we flexibly collaborate with 11 heterogeneous agents to empower vision-language foundation models, without further cost in the inference phase. Finally, our TransAgent achieves state-of-the-art performance on 11 visual recognition datasets. 
Under the same low-shot setting, it outperforms the popular CoOp by around 10\\% on average, and by 20\\% on EuroSAT, which contains large domain shifts.", "pdf": "https://openreview.net/pdf/49216ae30f248ec7be10e4a2d12eb5a0235dee9b.pdf"} {"title": "Noise Contrastive Alignment of Language Models with Explicit Rewards", "url": "https://openreview.net/forum?id=KwRLDkyVOl", "detail_url": "https://openreview.net/forum?id=KwRLDkyVOl", "authors": "Huayu Chen,Guande He,Lifan Yuan,Ganqu Cui,Hang Su,Jun Zhu", "tags": "NIPS 2024,Poster", "abstract": "User intentions are typically formalized as evaluation rewards to be maximized when fine-tuning language models (LMs). Existing alignment methods, such as Direct Preference Optimization (DPO), are mainly tailored for pairwise preference data where rewards are implicitly defined rather than explicitly given. In this paper, we introduce a general framework for LM alignment, leveraging Noise Contrastive Estimation (NCE) to bridge the gap in handling reward datasets explicitly annotated with scalar evaluations. Our framework comprises two parallel algorithms, NCA and InfoNCA, both enabling the direct extraction of an LM policy from reward data as well as preference data. Notably, we show that the DPO loss is a special case of our proposed InfoNCA objective under pairwise preference settings, thereby integrating and extending current alignment theories. By comparing NCA and InfoNCA, we demonstrate that the well-observed decreasing-likelihood trend of DPO/InfoNCA is caused by their focus on adjusting relative likelihood across different responses.\nIn contrast, NCA optimizes the absolute likelihood for each response, thereby effectively preventing the chosen likelihood from decreasing. We evaluate our methods in both reward and preference settings with Mistral-8$\\times$7B and 7B models. Experiments suggest that InfoNCA/NCA surpasses various preference baselines when reward datasets are available. We also find NCA significantly outperforms DPO in complex reasoning tasks like math and coding.", "pdf": "https://openreview.net/pdf/2d057a6ef195f17a85e0da5710d9c17754bd188e.pdf"} {"title": "On the Saturation Effects of Spectral Algorithms in Large Dimensions", "url": "https://openreview.net/forum?id=kJzecLYsRi", "detail_url": "https://openreview.net/forum?id=kJzecLYsRi", "authors": "Weihao Lu,Haobo Zhang,Yicheng Li,Qian Lin", "tags": "NIPS 2024,Poster", "abstract": "The saturation effects, which originally refer to the fact that kernel ridge regression (KRR) fails to achieve the information-theoretical lower bound when the regression function is over-smooth, have been observed for almost 20 years and were rigorously proved recently for kernel ridge regression and some other spectral algorithms over a fixed dimensional domain. The main focus of this paper is to explore the saturation effects for a large class of spectral algorithms (including the KRR, gradient descent, etc.) in large dimensional settings where $n \\asymp d^{\\gamma}$. More precisely, we first propose an improved minimax lower bound for the kernel regression problem in large dimensional settings and show that the gradient flow with early stopping strategy will result in an estimator achieving this lower bound (up to a logarithmic factor). Similar to the results in KRR, we can further determine the exact convergence rates (both upper and lower bounds) of a large class of (optimal tuned) spectral algorithms with different qualification $\\tau$'s. 
In particular, we find that these exact rate curves (varying along $\\gamma$) exhibit the periodic plateau behavior and the polynomial approximation barrier. Consequently, we can fully depict the saturation effects of the spectral algorithms and reveal a new phenomenon in large dimensional settings (i.e., the saturation effect occurs in the large dimensional setting as long as the source condition satisfies $s>\\tau$, while it occurs in the fixed dimensional setting as long as $s>2\\tau$).", "pdf": "https://openreview.net/pdf/3d74bcf2f80ed19d7bee6b2e745383d8139847da.pdf"} {"title": "Make Your LLM Fully Utilize the Context", "url": "https://openreview.net/forum?id=YGTVEmBXtV", "detail_url": "https://openreview.net/forum?id=YGTVEmBXtV", "authors": "Shengnan An,Zexiong Ma,Zeqi Lin,Nanning Zheng,Jian-Guang Lou,Weizhu Chen", "tags": "NIPS 2024,Poster", "abstract": "While many contemporary large language models (LLMs) can process lengthy input, they still struggle to fully utilize information within the long context, known as the *lost-in-the-middle* challenge.\nWe hypothesize that it stems from insufficient explicit supervision during the long-context training, which fails to emphasize that any position in a long context can hold crucial information.\nBased on this intuition, our study presents **information-intensive (IN2) training**, a purely data-driven solution to overcome lost-in-the-middle.\nSpecifically, IN2 training leverages a synthesized long-context question-answer dataset, where the answer requires (1) **fine-grained information awareness** on a short segment (~128 tokens) within a synthesized long context (4K-32K tokens), and (2) the **integration and reasoning** of information from two or more short segments.\nThrough applying this information-intensive training on Mistral-7B, we present **FILM-7B** (FIll-in-the-Middle).\nTo thoroughly assess the ability of FILM-7B for utilizing long contexts, we design three probing tasks that encompass various context styles (document, code, and structured-data context) and information retrieval patterns (forward, backward, and bi-directional retrieval).\nThe probing results demonstrate that FILM-7B can robustly retrieve information from different positions in its 32K context window.\nBeyond these probing tasks, FILM-7B significantly improves the performance on real-world long-context tasks (e.g., 23.5->26.9 F1 score on NarrativeQA), while maintaining a comparable performance on short-context tasks (e.g., 59.3->59.2 accuracy on MMLU).", "pdf": "https://openreview.net/pdf/edb8a8ede36265e03de685e7ecbc1d59bcee74b2.pdf"} {"title": "Many-shot Jailbreaking", "url": "https://openreview.net/forum?id=cw5mgd71jW", "detail_url": "https://openreview.net/forum?id=cw5mgd71jW", "authors": "Cem Anil,Esin DURMUS,Nina Rimsky,Mrinank Sharma,Joe Benton,Sandipan Kundu,Joshua Batson,Meg Tong,Jesse Mu,Daniel J Ford,Francesco Mosconi,Rajashree Agrawal,Rylan Schaeffer,Naomi Bashkansky,Samuel Svenningsen,Mike Lambert,Ansh Radhakrishnan,Carson Denison,Evan J Hubinger,Yuntao Bai,Trenton Bricken,Timothy Maxwell,Nicholas Schiefer,James Sully,Alex Tamkin,Tamera Lanham,Karina Nguyen,Tomasz Korbak,Jared Kaplan,Deep Ganguli,Samuel R. Bowman,Ethan Perez,Roger Baker Grosse,David Duvenaud", "tags": "NIPS 2024,Poster", "abstract": "We investigate a family of simple long-context attacks on large language models: prompting with hundreds of demonstrations of undesirable behavior. 
This attack is newly feasible with the larger context windows recently deployed by language model providers like Google DeepMind, OpenAI and Anthropic. We find that in diverse, realistic circumstances, the effectiveness of this attack follows a power law, up to hundreds of shots. We demonstrate the success of this attack on the most widely used state-of-the-art closed-weight models, and across various tasks. Our results suggest very long contexts present a rich new attack surface for LLMs.", "pdf": "https://openreview.net/pdf/1ebf39e11f6389943882568e89d7f94ca696d58c.pdf"} {"title": "Convergence Analysis of Split Federated Learning on Heterogeneous Data", "url": "https://openreview.net/forum?id=ud0RBkdBfE", "detail_url": "https://openreview.net/forum?id=ud0RBkdBfE", "authors": "Pengchao Han,Chao Huang,Geng Tian,Ming Tang,Xin Liu", "tags": "NIPS 2024,Poster", "abstract": "Split federated learning (SFL) is a recent distributed approach for collaborative model training among multiple clients. In SFL, a global model is typically split into two parts, where clients train one part in a parallel federated manner, and a main server trains the other. Despite the recent research on SFL algorithm development, the convergence analysis of SFL is missing in the literature, and this paper aims to fill this gap. The analysis of SFL can be more challenging than that of federated learning (FL), due to the potential dual-paced updates at the clients and the main server. We provide a convergence analysis of SFL for strongly convex and general convex objectives on heterogeneous data. The convergence rates are $O(1/T)$ and $O(1/\sqrt[3]{T})$, respectively, where $T$ denotes the total number of rounds for SFL training. We further extend the analysis to non-convex objectives and to settings where some clients may be unavailable during training. Numerical experiments validate our theoretical results and show that SFL outperforms FL and split learning (SL) when data is highly heterogeneous across a large number of clients.", "pdf": "https://openreview.net/pdf/088ba5a9c6e6f1d21d21cbead01265bab1c3368d.pdf"} {"title": "Global Convergence in Training Large-Scale Transformers", "url": "https://openreview.net/forum?id=9wtlfRKwZS", "detail_url": "https://openreview.net/forum?id=9wtlfRKwZS", "authors": "Cheng Gao,Yuan Cao,Zihao Li,Yihan He,Mengdi Wang,Han Liu,Jason Matthew Klusowski,Jianqing Fan", "tags": "NIPS 2024,Poster", "abstract": "Despite the widespread success of Transformers across various domains, their optimization guarantees in large-scale model settings are not well-understood. This paper rigorously analyzes the convergence properties of gradient flow in training Transformers with weight decay regularization. First, we construct the mean-field limit of large-scale Transformers, showing that as the model width and depth go to infinity, gradient flow converges to the Wasserstein gradient flow, which is represented by a partial differential equation. Then, we demonstrate that the gradient flow reaches a global minimum consistent with the PDE solution when the weight decay regularization parameter is sufficiently small. Our analysis is based on a series of novel mean-field techniques that adapt to Transformers. Compared with existing tools for deep networks (Lu et al., 2020) that demand homogeneity and global Lipschitz smoothness, we utilize a refined analysis assuming only $\textit{partial homogeneity}$ and $\textit{local Lipschitz smoothness}$. 
These new techniques may be of independent interest.", "pdf": "https://openreview.net/pdf/7a89fc6e4e15f3b3f95234779c880b8294c95dca.pdf"} {"title": "Automated Label Unification for Multi-Dataset Semantic Segmentation with GNNs", "url": "https://openreview.net/forum?id=gSGLkCX9sc", "detail_url": "https://openreview.net/forum?id=gSGLkCX9sc", "authors": "Rong Ma,Jie Chen,Xiangyang Xue,Jian Pu", "tags": "NIPS 2024,Poster", "abstract": "Deep supervised models possess significant capability to assimilate extensive training data, thereby presenting an opportunity to enhance model performance through training on multiple datasets. However, conflicts arising from different label spaces among datasets may adversely affect model performance. In this paper, we propose a novel approach to automatically construct a unified label space across multiple datasets using graph neural networks. This enables semantic segmentation models to be trained simultaneously on multiple datasets, resulting in performance improvements. Unlike existing methods, our approach facilitates seamless training without the need for additional manual reannotation or taxonomy reconciliation. This significantly enhances the efficiency and effectiveness of multi-dataset segmentation model training. The results demonstrate that our method significantly outperforms other multi-dataset training methods when trained on seven datasets simultaneously, and achieves state-of-the-art performance on the WildDash 2 benchmark. Our code can be found in https://github.com/Mrhonor/AutoUniSeg.", "pdf": "https://openreview.net/pdf/aff8971dd729e26e5536ccaea17281fec5ef20a3.pdf"} {"title": "Vision-Language Models are Strong Noisy Label Detectors", "url": "https://openreview.net/forum?id=haUnEiXgQ7", "detail_url": "https://openreview.net/forum?id=haUnEiXgQ7", "authors": "Tong Wei,Hao-Tian Li,Chun-Shu Li,Jiang-Xin Shi,Yu-Feng Li,Min-Ling Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent research on fine-tuning vision-language models has demonstrated impressive performance in various downstream tasks. However, the challenge of obtaining accurately labeled data in real-world applications poses a significant obstacle during the fine-tuning process. To address this challenge, this paper presents a Denoising Fine-Tuning framework, called DeFT, for adapting vision-language models. DeFT utilizes the robust alignment of textual and visual features pre-trained on millions of auxiliary image-text pairs to sieve out noisy labels. The proposed framework establishes a noisy label detector by learning positive and negative textual prompts for each class. The positive prompt seeks to reveal distinctive features of the class, while the negative prompt serves as a learnable threshold for separating clean and noisy samples. We employ parameter-efficient fine-tuning for the adaptation of a pre-trained visual encoder to promote its alignment with the learned textual prompts. As a general framework, DeFT can seamlessly fine-tune many pre-trained models to downstream tasks by utilizing carefully selected clean samples. Experimental results on seven synthetic and real-world noisy datasets validate the effectiveness of DeFT in both noisy label detection and image classification. 
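To make the positive/negative-prompt idea in the DeFT abstract concrete, below is a hedged zero-shot sketch in the spirit of the method, not its actual learned prompts: the negative prompt plays the role of a per-class threshold, and a sample is kept as clean only when its image embedding agrees more with the positive prompt of its given label than with the negative one. The model name and prompt templates are illustrative assumptions.

```python
# Zero-shot proxy for prompt-based noisy-label detection (illustration only;
# DeFT learns its prompts and fine-tunes the visual encoder).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

classes = ["cat", "dog"]
pos = [f"a photo of a {c}" for c in classes]                 # positive prompts
neg = [f"a blurry, mislabeled photo, not a {c}" for c in classes]  # thresholds

def is_clean(image: Image.Image, given_label: int) -> bool:
    # Score the image against all positive and negative prompts at once.
    inputs = processor(text=pos + neg, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image[0]  # scaled cosine similarities
    k = len(classes)
    # Clean if the positive prompt of the given label beats its negative prompt.
    return bool(sims[given_label] > sims[k + given_label])
```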
Our source code can be found in the supplementary material.", "pdf": "https://openreview.net/pdf/e975c820e2aa3edef48543d06a949c9b16ddabb6.pdf"} {"title": "Decoding-Time Language Model Alignment with Multiple Objectives", "url": "https://openreview.net/forum?id=3csuL7TVpV", "detail_url": "https://openreview.net/forum?id=3csuL7TVpV", "authors": "Ruizhe Shi,Yifang Chen,Yushi Hu,Alisa Liu,Hannaneh Hajishirzi,Noah A. Smith,Simon Shaolei Du", "tags": "NIPS 2024,Poster", "abstract": "Aligning language models (LMs) to human preferences has emerged as a critical pursuit, enabling these models to better serve diverse user needs. Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives. \nHere, we propose $\\textbf{multi-objective decoding~(MOD)}$, a decoding-time algorithm that outputs the next token from a linear combination of predictions of all base models, for any given weighting over different objectives.\nWe exploit a common form among a family of $f$-divergence regularized alignment approaches (such as PPO, DPO, and their variants) to identify a closed-form solution by Legendre transform, and derive an efficient decoding strategy.\nTheoretically, we show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method.\nEmpirical results demonstrate the effectiveness of the algorithm. For example, compared to a parameter-merging baseline, MOD achieves 12.8\\% overall reward improvement when equally optimizing towards $3$ objectives. Moreover, we experiment with MOD on combining three fully-finetuned \nLMs of different model sizes, each aimed at different objectives such as safety, coding, and general user preference. Unlike traditional methods that require careful curation of a mixture of datasets to achieve comprehensive improvement, we can quickly experiment with preference weightings using MOD to find the best combination of models. Our best combination reduces toxicity on Toxigen to nearly 0\\% and achieves 7.9--33.3\\% improvement across three other metrics ($\\textit{i.e.}$, Codex@1, GSM-COT, BBH-COT).", "pdf": "https://openreview.net/pdf/c94089ab631fffcc9b062e56c50c9ed7a15fd40f.pdf"} {"title": "What Is Missing For Graph Homophily? Disentangling Graph Homophily For Graph Neural Networks", "url": "https://openreview.net/forum?id=GmdGEF8xxU", "detail_url": "https://openreview.net/forum?id=GmdGEF8xxU", "authors": "Yilun Zheng,Sitao Luan,Lihui Chen", "tags": "NIPS 2024,Poster", "abstract": "Graph homophily refers to the phenomenon that connected nodes tend to share similar characteristics. Understanding this concept and its related metrics is crucial for designing effective Graph Neural Networks (GNNs). The most widely used homophily metrics, such as edge or node homophily, quantify such \"similarity\" as label consistency across the graph topology. These metrics are believed to be able to reflect the performance of GNNs, especially on node-level tasks. However, many recent studies have empirically demonstrated that the performance of GNNs does not always align with homophily metrics, and how homophily influences GNNs still remains unclear and controversial. Then, a crucial question arises: What is missing in our current understanding of homophily? To figure out the missing part, in this paper, we disentangle the graph homophily into three aspects: label, structural, and feature homophily, which are derived from the three basic elements of graph data. 
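For reference, here is a minimal sketch of the two classical label-based metrics this abstract refers to (edge and node homophily); the paper's point is that these capture only the label aspect, omitting the structural and feature aspects it introduces. The toy graph is an arbitrary illustration.

```python
# Classical label-based homophily metrics on an undirected edge list.
import numpy as np

edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])  # toy graph
labels = np.array([0, 0, 1, 1])

def edge_homophily(edges, labels):
    # Fraction of edges whose endpoints share a label.
    return float(np.mean(labels[edges[:, 0]] == labels[edges[:, 1]]))

def node_homophily(edges, labels):
    # Per-node fraction of same-label neighbours, averaged over nodes.
    n = labels.size
    same, deg = np.zeros(n), np.zeros(n)
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            deg[a] += 1
            same[a] += labels[a] == labels[b]
    return float(np.mean(same[deg > 0] / deg[deg > 0]))

print(edge_homophily(edges, labels), node_homophily(edges, labels))
```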
We argue that the synergy of the three homophily types can provide a more comprehensive understanding of GNN performance. Our newly proposed structural and feature homophily consider the neighborhood consistency and feature dependencies among nodes, addressing the previously overlooked structural and feature aspects in graph homophily. To investigate their synergy, we propose a Contextual Stochastic Block Model with three types of Homophily (CSBM-3H), where the topology and feature generation are controlled by the three metrics. Based on the theoretical analysis of CSBM-3H, we derive a new composite metric, named Tri-Hom, that considers all three aspects and overcomes the limitations of conventional homophily metrics. The theoretical conclusions and the effectiveness of Tri-Hom have been verified through synthetic experiments on CSBM-3H. In addition, we conduct experiments on $31$ real-world benchmark datasets and calculate the correlations between homophily metrics and model performance. Tri-Hom has significantly higher correlation values than $17$ existing metrics that only focus on a single homophily aspect, demonstrating its superiority and the importance of homophily synergy. Our code is available at https://github.com/zylMozart/Disentangle_GraphHom.", "pdf": "https://openreview.net/pdf/b8ee864ad9e12045d02d7acac167f412ef23c62e.pdf"} {"title": "Integrating Deep Metric Learning with Coreset for Active Learning in 3D Segmentation", "url": "https://openreview.net/forum?id=uyqjpycMbU", "detail_url": "https://openreview.net/forum?id=uyqjpycMbU", "authors": "Arvind Murari Vepa,ZUKANG YANG,Andrew Choi,Jungseock Joo,Fabien Scalzo,Yizhou Sun", "tags": "NIPS 2024,Poster", "abstract": "Deep learning has seen remarkable advancements in machine learning, yet it often demands extensive annotated data. Tasks like 3D semantic segmentation impose a substantial annotation burden, especially in domains like medicine, where expert annotations drive up the cost. Active learning (AL) holds great potential to alleviate this annotation burden in 3D medical segmentation. The majority of existing AL methods, however, are not tailored to the medical domain. While weakly-supervised methods have been explored to reduce annotation burden, the fusion of AL with weak supervision remains unexplored, despite its potential to significantly reduce annotation costs. Additionally, there is little focus on slice-based AL for 3D segmentation, which can also significantly reduce costs in comparison to conventional volume-based AL. This paper introduces a novel metric learning method for Coreset to perform slice-based active learning in 3D medical segmentation. By merging contrastive learning with inherent data groupings in medical imaging, we learn a metric that emphasizes the relevant differences in samples for training 3D medical segmentation models. We perform comprehensive evaluations using both weak and full annotations across four datasets (medical and non-medical). Our findings demonstrate that our approach surpasses existing active learning techniques on both weak and full annotations and obtains superior performance with low-annotation budgets, which is crucial in medical imaging. 
Source code for this project is available in the supplementary materials and on GitHub: https://github.com/arvindmvepa/al-seg.", "pdf": "https://openreview.net/pdf/2e94d6203d5cc09df4e6ad5abe22a561dcc0648b.pdf"} {"title": "Unlock the Intermittent Control Ability of Model Free Reinforcement Learning", "url": "https://openreview.net/forum?id=eC5qdC4ZTQ", "detail_url": "https://openreview.net/forum?id=eC5qdC4ZTQ", "authors": "Jiashun Liu,Jianye HAO,Xiaotian Hao,Yi Ma,YAN ZHENG,Yujing Hu,Tangjie Lv", "tags": "NIPS 2024,Poster", "abstract": "Intermittent control problems are common in the real world. The interactions between the decision maker and the executor can be discontinuous (intermittent) due to various types of interruptions, e.g., an unstable communication channel. Due to intermittent interaction, agents are unable to acquire the state sent by the executor and cannot transmit actions to the executor for a period of time steps, i.e., bidirectional blockage, which may lead to inefficiencies of reinforcement learning policies and prevent the executors from completing the task. Such problems are not well studied in the RL community. In this paper, we model the intermittent control problem as an Intermittent Control Markov Decision Process, i.e., agents are expected to generate action sequences corresponding to the unavailable states and transmit them before disabling interactions to ensure the smooth and effective motion of executors. However, directly generating multiple future actions in the original action space suffers from unnatural motion and exploration difficulty. We propose **M**ulti-step **A**ction **R**epre**S**entation (**MARS**), which encodes a sequence of actions from the original action space to a compact and decodable latent space. Then, based on the latent action sequence representation, mainstream RL methods can be easily optimized to learn a smooth and efficient motion policy. Extensive experiments on simulation tasks and real-world robotic grasping tasks show that MARS significantly improves the learning efficiency and final performance compared with existing baselines.", "pdf": "https://openreview.net/pdf/866a97e9a7f9d894309e4373c61c5ed45679250b.pdf"} {"title": "Efficient Streaming Algorithms for Graphlet Sampling", "url": "https://openreview.net/forum?id=EC9Hfi9V3k", "detail_url": "https://openreview.net/forum?id=EC9Hfi9V3k", "authors": "Yann Bourreau,Marco Bressan,T-H. Hubert Chan,Qipeng Kuang,Mauro Sozio", "tags": "NIPS 2024,Poster", "abstract": "Given a graph $G$ and a positive integer $k$, the Graphlet Sampling problem asks to sample a connected induced $k$-vertex subgraph of $G$ uniformly at random.\nGraphlet sampling enhances machine learning applications by transforming graph structures into feature vectors for tasks such as graph classification and subgraph identification, boosting neural network performance, and supporting clustered federated learning by capturing local structures and relationships.\nA recent work has shown that the problem admits an algorithm that preprocesses $G$ in time $O(nk^2 \log k + m)$, and draws one sample in expected time $k^{O(k)} \log n$, where $n=|V(G)|$ and $m=|E(G)|$. Such an algorithm relies on the assumption that the input graph fits into main memory, and it does not seem to be straightforward to adapt it to very large graphs. We consider Graphlet Sampling in the semi-streaming setting, where we have a memory of $M = \Omega(n \log n)$ words, and $G$ can be only read through sequential passes over the edge list. 
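As a concrete picture of the object being sampled here, below is a naive subgraph-expansion sampler for connected induced $k$-vertex subgraphs; unlike the paper's algorithms it is neither uniform nor streaming, and is included only as an illustration under those stated simplifications.

```python
# Naive sampler for connected induced k-vertex subgraphs (illustration only;
# it is biased toward high-degree regions and assumes the graph fits in RAM).
import random
from collections import defaultdict

def build_adj(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def sample_connected_k_subgraph(adj, k, rng=random):
    current = {rng.choice(list(adj))}          # random start vertex
    frontier = set(adj[next(iter(current))])   # neighbours of the current set
    while len(current) < k:
        if not frontier:
            return None                        # dead end; caller may retry
        v = rng.choice(sorted(frontier))       # grow by one random neighbour
        current.add(v)
        frontier |= adj[v]
        frontier -= current
    return current

adj = build_adj([(0, 1), (1, 2), (2, 3), (3, 4), (1, 4)])
print(sample_connected_k_subgraph(adj, 3))
```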
We develop a semi-streaming algorithm that preprocesses $G$ in $p=O(\log n)$ passes and samples $\Theta(M k^{-O(k)})$ independent uniform $k$-graphlets in $O(k)$ passes. For constant $k$, both phases run in time $O((n+m)\log n)$. We also show that the tradeoff between memory and number of passes of our algorithms is near-optimal. Our extensive evaluation on very large graphs shows the effectiveness of our algorithms.", "pdf": "https://openreview.net/pdf/4046414f7daabaa3355d344da18d43d7c3eac33b.pdf"} {"title": "SlimGPT: Layer-wise Structured Pruning for Large Language Models", "url": "https://openreview.net/forum?id=MxF0IKJtKW", "detail_url": "https://openreview.net/forum?id=MxF0IKJtKW", "authors": "Gui Ling,Ziyang Wang,YuliangYan,Qingwen Liu", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have garnered significant attention for their remarkable capabilities across various domains, but their vast parameter scales present challenges for practical deployment. Structured pruning is an effective method to balance model performance with efficiency, but performance restoration under computational resource constraints is a principal challenge in pruning LLMs. Therefore, we present a low-cost and fast structured pruning method for LLMs named SlimGPT based on the Optimal Brain Surgeon framework. We propose Batched Greedy Pruning for rapid and near-optimal pruning, which enhances the accuracy of head-wise pruning error estimation through grouped Cholesky decomposition and improves the pruning efficiency of FFN via Dynamic Group Size, thereby achieving approximately locally optimal pruning results within one hour. Besides, we explore the limitations of layer-wise pruning from the perspective of error accumulation and propose Incremental Pruning Ratio, a non-uniform pruning strategy to reduce performance degradation. Experimental results on the LLaMA benchmark show that SlimGPT outperforms other methods and achieves state-of-the-art results.", "pdf": "https://openreview.net/pdf/0590672f5c17b8f1c745f70246a5447ea6c6cf36.pdf"} {"title": "FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models", "url": "https://openreview.net/forum?id=QAEnr5j172", "detail_url": "https://openreview.net/forum?id=QAEnr5j172", "authors": "Rui Hu,Qian He,Gaofeng He,Jiedong Zhuang,Huang Chen,Huafeng Liu,Huamin Wang", "tags": "NIPS 2024,Poster", "abstract": "Modeling and producing lifelike clothed human images has attracted researchers' attention from different areas for decades, owing to the complexity of highly articulated and structured content. Rendering algorithms decompose and simulate the imaging process of a camera, but are limited by the accuracy of modeled variables and the efficiency of computation. Generative models can produce impressively vivid human images, but still lack controllability and editability. This paper studies photorealism enhancement of rendered images, leveraging generative power from diffusion models on the controlled basis of rendering. We introduce a novel framework to translate rendered images into their realistic counterparts, which consists of two stages: Domain Knowledge Injection (DKI) and Realistic Image Generation (RIG). In DKI, we adopt positive (real) domain finetuning and negative (rendered) domain embedding to inject knowledge into a pretrained Text-to-image (T2I) diffusion model. 
In RIG, we generate the realistic image corresponding to the input rendered image, with a Texture-preserving Attention Control (TAC) to preserve fine-grained clothing textures, exploiting the decoupled features encoded in the UNet structure. Additionally, we introduce the SynFashion dataset, featuring high-quality digital clothing images with diverse textures. Extensive experimental results demonstrate the superiority and effectiveness of our method in rendered-to-real image translation.", "pdf": "https://openreview.net/pdf/0099b041ee56503c36f162b77975743d9f7ab89c.pdf"} {"title": "The motion planning neural circuit in goal-directed navigation as Lie group operator search", "url": "https://openreview.net/forum?id=Qz7BfmWizk", "detail_url": "https://openreview.net/forum?id=Qz7BfmWizk", "authors": "Junfeng Zuo,Ying Nian Wu,Si Wu,Wenhao Zhang", "tags": "NIPS 2024,Poster", "abstract": "The information processing in the brain and embodied agents forms a sensory-action loop to interact with the world. An important step in the loop is motion planning, which selects motor actions based on the current world state and task needs. In goal-directed navigation, the brain chooses and generates motor actions to bring the current state into the goal state. The neural circuit mechanism of motor action selection, as well as its underlying theory, remains unclear. The present study formulates motion planning as a Lie group operator search problem, and uses the 1D rotation group as an example to provide insight into general operator search in neural circuits. We find that the abstract group operator search can be implemented by a two-layer feedforward circuit utilizing circuit motifs of connection phase shifts, nonlinear activation functions, and pooling, similar to Drosophila's goal-directed navigation neural circuits. Moreover, the computational complexity of the feedforward circuit can be even lower than that of common signal processing algorithms under certain conditions. We also provide geometric interpretations of circuit computation in the group representation space. The feedforward motion planning circuit is further combined with sensory and motor circuit modules into a full circuit of the sensory-action loop implementing goal-directed navigation. Our work links, for the first time, abstract operator search with biological neural circuits.", "pdf": "https://openreview.net/pdf/931cb7cfa9fd238717900801fe399b52ae8184da.pdf"} {"title": "You Only Look Around: Learning Illumination-Invariant Feature for Low-light Object Detection", "url": "https://openreview.net/forum?id=MocRdX0n7B", "detail_url": "https://openreview.net/forum?id=MocRdX0n7B", "authors": "MingboHong,Shen Cheng,Haibin Huang,Haoqiang Fan,Shuaicheng Liu", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we introduce YOLA, a novel framework for object detection in low-light scenarios. Unlike previous works, we propose to tackle this challenging problem from the perspective of feature learning. Specifically, we propose to learn illumination-invariant features through the Lambertian image formation model. We observe that, under the Lambertian assumption, it is feasible to approximate illumination-invariant feature maps by exploiting the interrelationships between neighboring color channels and spatially adjacent pixels. By incorporating additional constraints, these relationships can be characterized in the form of convolutional kernels, which can be trained in a detection-driven manner within a network. 
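A hedged sketch of the Lambertian intuition just described, not YOLA's actual module: under $I_c(x) = \rho_c(x)L(x)$, differences of log-intensities across color channels cancel the shared illumination $L$, and such differences can be written as (trainable) convolution kernels. The fixed 1x1 kernel below is an illustrative assumption; a learned variant would train many such kernels end-to-end with the detector.

```python
# Fixed log-ratio kernel as an illumination-invariant feature extractor
# (illustration of the Lambertian argument, not the paper's trained module).
import torch
import torch.nn as nn

class LogDifference(nn.Module):
    def __init__(self):
        super().__init__()
        # One fixed kernel computing (log R - log G) at each pixel.
        w = torch.zeros(1, 3, 1, 1)
        w[0, 0, 0, 0], w[0, 1, 0, 0] = 1.0, -1.0
        self.conv = nn.Conv2d(3, 1, kernel_size=1, bias=False)
        self.conv.weight.data.copy_(w)
        self.conv.weight.requires_grad_(False)

    def forward(self, img):
        # img: (B, 3, H, W) with positive intensities; log cancels L(x).
        return self.conv(torch.log(img.clamp_min(1e-6)))

feat = LogDifference()(torch.rand(1, 3, 8, 8) + 0.1)
print(feat.shape)  # (1, 1, 8, 8) illumination-invariant map
```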
Towards this end, we introduce a novel module dedicated to the extraction of illumination-invariant features from low-light images, which can be easily integrated into existing object detection frameworks. Our empirical findings reveal significant improvements in low-light object detection tasks, as well as promising results in both well-lit and over-lit scenarios.", "pdf": "https://openreview.net/pdf/7556af62f6a06f6439d48c528a75515bc7845783.pdf"} {"title": "Distributed Least Squares in Small Space via Sketching and Bias Reduction", "url": "https://openreview.net/forum?id=rkuVYosT2c", "detail_url": "https://openreview.net/forum?id=rkuVYosT2c", "authors": "Sachin Garg,Kevin Tan,Michal Derezinski", "tags": "NIPS 2024,Poster", "abstract": "Matrix sketching is a powerful tool for reducing the size of large data matrices. Yet there are fundamental limitations to this size reduction when we want to recover an accurate estimator for a task such as least squares regression. We show that these limitations can be circumvented in the distributed setting by designing sketching methods that minimize the bias of the estimator, rather than its error. In particular, we give a sparse sketching method running in optimal space and current matrix multiplication time, which recovers a nearly-unbiased least squares estimator using two passes over the data. This leads to new communication-efficient distributed averaging algorithms for least squares and related tasks, which directly improve on several prior approaches. Our key novelty is a new bias analysis for sketched least squares, giving a sharp characterization of its dependence on the sketch sparsity. The techniques include new higher moment restricted Bai-Silverstein inequalities, which are of independent interest to the non-asymptotic analysis of deterministic equivalents for random matrices that arise from sketching.", "pdf": "https://openreview.net/pdf/2c8cc6686571de555edd96ef6c41649fee4657d7.pdf"} {"title": "Understanding the Expressivity and Trainability of Fourier Neural Operator: A Mean-Field Perspective", "url": "https://openreview.net/forum?id=QJr02BTM7J", "detail_url": "https://openreview.net/forum?id=QJr02BTM7J", "authors": "Takeshi Koshizuka,Masahiro Fujisawa,Yusuke Tanaka,Issei Sato", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we explore the expressivity and trainability of the Fourier Neural Operator (FNO). We establish a mean-field theory for the FNO, analyzing the behavior of the random FNO from an \emph{edge of chaos} perspective. Our investigation into the expressivity of a random FNO involves examining the ordered-chaos phase transition of the network based on the weight distribution. This phase transition demonstrates characteristics unique to the FNO, induced by mode truncation, while also showcasing similarities to those of densely connected networks. Furthermore, we identify a connection between expressivity and trainability: the ordered and chaotic phases correspond to regions of vanishing and exploding gradients, respectively. This finding provides a practical prerequisite for the stable training of the FNO. 
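For readers unfamiliar with the architecture, below is a minimal 1D Fourier layer of the kind the analysis above concerns; the `modes` truncation is exactly the mechanism the paper links to the ordered-chaos transition. Channel width and mode count here are arbitrary illustration choices.

```python
# Minimal 1D spectral convolution, the standard FNO building block:
# FFT -> keep the lowest `modes` frequencies -> complex linear mix -> inverse FFT.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                     # x: (batch, channels, length)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        # Mode truncation: only the lowest `modes` frequencies are mixed.
        out_ft[..., :self.modes] = torch.einsum(
            "bcm,com->bom", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))

layer = SpectralConv1d(channels=4, modes=8)
print(layer(torch.randn(2, 4, 64)).shape)    # (2, 4, 64)
```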
Our experimental results corroborate our theoretical findings.", "pdf": "https://openreview.net/pdf/0b2702599eb62077cfb11da45e4eac8cf453d76f.pdf"} {"title": "Stochastic Optimization Schemes for Performative Prediction with Nonconvex Loss", "url": "https://openreview.net/forum?id=ejIzdt50ek", "detail_url": "https://openreview.net/forum?id=ejIzdt50ek", "authors": "Qiang LI,Hoi To Wai", "tags": "NIPS 2024,Poster", "abstract": "This paper studies a risk minimization problem with decision dependent data distribution. The problem pertains to the performative prediction setting in which a trained model can affect the outcome estimated by the model. Such dependency creates a feedback loop that influences the stability of optimization algorithms such as stochastic gradient descent (SGD). We present the first study on performative prediction with smooth but possibly non-convex loss. We analyze a greedy deployment scheme with SGD (SGD-GD). Note that in the literature, SGD-GD is often studied with strongly convex loss. We first propose the definition of stationary performative stable (SPS) solutions through relaxing the popular performative stable condition. We then prove that SGD-GD converges to a biased SPS solution in expectation. We consider two conditions of sensitivity on the distribution shifts: (i) the sensitivity is characterized by Wasserstein-1 distance and the loss is Lipschitz w.r.t.~data samples, or (ii) the sensitivity is characterized by total variation (TV) divergence and the loss is bounded. In both conditions, the bias levels are proportional to the stochastic gradient's variance and sensitivity level. \nOur analysis is extended to a lazy deployment scheme where models are deployed once per several SGD updates, and we show that it converges to an SPS solution with reduced bias. Numerical experiments corroborate our theories.", "pdf": "https://openreview.net/pdf/1568b3d1c1cdbb9aa953ed2b0f6a25125d6e0ca8.pdf"} {"title": "DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation", "url": "https://openreview.net/forum?id=6eoGVqMiIj", "detail_url": "https://openreview.net/forum?id=6eoGVqMiIj", "authors": "Yuang Ai,Xiaoqiang Zhou,Huaibo Huang,Xiaotian Han,Zhengyu Chen,Quanzeng You,Hongxia Yang", "tags": "NIPS 2024,Poster", "abstract": "Image restoration (IR) in real-world scenarios presents significant challenges due to the lack of high-capacity models and comprehensive datasets.\nTo tackle these issues, we present a dual strategy: GenIR, an innovative data curation pipeline, and DreamClear, a cutting-edge Diffusion Transformer (DiT)-based image restoration model.\n**GenIR**, our pioneering contribution, is a dual-prompt learning pipeline that overcomes the limitations of existing datasets, which typically comprise only a few thousand images and thus offer limited generalizability for larger models. \nGenIR streamlines the process into three stages: image-text pair construction, dual-prompt based fine-tuning, and data generation \\& filtering. This approach circumvents the laborious data crawling process, ensuring copyright compliance and providing a cost-effective, privacy-safe solution for IR dataset construction. The result is a large-scale dataset of one million high-quality images.\nOur second contribution, **DreamClear**, is a DiT-based image restoration model. 
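Returning briefly to the performative-prediction entry above, here is a toy sketch (ours, under a simplistic linear shift model) of the greedy deployment scheme SGD-GD it analyzes: every SGD step redeploys the model, and the next minibatch is drawn from the distribution induced by the currently deployed model. The shift model, sensitivity, and step size are illustrative assumptions.

```python
# Toy SGD-GD loop: the data distribution D(theta) drifts with the deployed
# model, creating the feedback loop analysed in the abstract above.
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(2)
eps = 0.5                                    # sensitivity of the shift

def sample_from_D(theta_dep, n=32):
    # Decision-dependent distribution: the mean of x drifts with theta.
    x = rng.standard_normal((n, 2)) + eps * theta_dep
    y = x @ np.array([1.0, -1.0]) + 0.1 * rng.standard_normal(n)
    return x, y

for t in range(2000):
    x, y = sample_from_D(theta)              # greedy: redeploy every step
    grad = 2 * x.T @ (x @ theta - y) / len(y)
    theta -= 0.01 * grad

print("approximately performatively stable theta:", theta)
```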
It utilizes the generative priors of text-to-image (T2I) diffusion models and the robust perceptual capabilities of multi-modal large language models (MLLMs) to achieve photorealistic restoration. To boost the model's adaptability to diverse real-world degradations, we introduce the Mixture of Adaptive Modulator (MoAM). It employs token-wise degradation priors to dynamically integrate various restoration experts, thereby expanding the range of degradations the model can address.\nOur exhaustive experiments confirm DreamClear's superior performance, underlining the efficacy of our dual strategy for real-world image restoration. Code and pre-trained models are available at: https://github.com/shallowdream204/DreamClear.", "pdf": "https://openreview.net/pdf/fc2054b17452b4d1eeab4125f33c36ecf292b0cf.pdf"} {"title": "AUC Maximization under Positive Distribution Shift", "url": "https://openreview.net/forum?id=yOe6ajdslI", "detail_url": "https://openreview.net/forum?id=yOe6ajdslI", "authors": "Atsutoshi Kumagai,Tomoharu Iwata,Hiroshi Takahashi,Taishi Nishiyama,Yasuhiro Fujiwara", "tags": "NIPS 2024,Poster", "abstract": "Maximizing the area under the receiver operating characteristic curve (AUC) is a popular approach to imbalanced binary classification problems. Existing AUC maximization methods usually assume that training and test distributions are identical. However, this assumption is often violated in practice due to {\\it a positive distribution shift}, where the negative-conditional density does not change but the positive-conditional density can vary. This shift often occurs in imbalanced classification since positive data are often more diverse and time-varying than negative data. To deal with this shift, we theoretically show that the AUC on the test distribution can be expressed by using the positive and marginal training densities and the marginal test density. Based on this result, we can maximize the AUC on the test distribution by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution. The proposed method requires only positive labels in the training distribution as supervision. Moreover, the derived AUC has a simple form and thus is easy to implement. The effectiveness of the proposed method is shown with four real-world datasets.", "pdf": "https://openreview.net/pdf/6babeb80637c127be43e9a61c520ffe601db0123.pdf"} {"title": "Exploring DCN-like architecture for fast image generation with arbitrary resolution", "url": "https://openreview.net/forum?id=e57B7BfA2B", "detail_url": "https://openreview.net/forum?id=e57B7BfA2B", "authors": "Shuai Wang,Zexian Li,Tianhui Song,Xubin Li,Tiezheng Ge,Bo Zheng,Limin Wang", "tags": "NIPS 2024,Poster", "abstract": "Arbitrary-resolution image generation still remains a challenging task in AIGC, as it requires handling varying resolutions and aspect ratios while maintaining high visual quality. Existing transformer-based diffusion methods suffer from quadratic computation cost and limited resolution extrapolation capabilities, making them less effective for this task. In this paper, we propose FlowDCN, a purely convolution-based generative model with linear time and memory complexity, that can efficiently generate high-quality images at arbitrary resolutions. 
Equipped with a newly designed learnable group-wise deformable convolution block, our FlowDCN yields higher flexibility and capability to handle different resolutions with a single model.\nFlowDCN achieves the state-of-the-art 4.30 sFID on the $256\times256$ ImageNet benchmark and comparable resolution extrapolation results, surpassing transformer-based counterparts in terms of convergence speed (requiring only $\frac{1}{5}$ of the training images), visual quality, parameters ($8\%$ reduction) and FLOPs ($20\%$ reduction). We believe FlowDCN offers a promising solution to scalable and flexible image synthesis.", "pdf": "https://openreview.net/pdf/4edb5d7a576679d7f22ae388e3a56bc9468b6105.pdf"} {"title": "How Does Black-Box Impact the Learning Guarantee of Stochastic Compositional Optimization?", "url": "https://openreview.net/forum?id=4AuEQ1FfUf", "detail_url": "https://openreview.net/forum?id=4AuEQ1FfUf", "authors": "Jun Chen,Hong Chen,Bin Gu", "tags": "NIPS 2024,Poster", "abstract": "The stochastic compositional optimization (SCO) problem constitutes a class of optimization problems characterized by an objective function with a compositional form, including tasks with known derivatives, such as AUC maximization, and derivative-free tasks exemplified by black-box vertical federated learning (VFL). From the learning theory perspective, the learning guarantees of SCO algorithms with known derivatives have been studied in the literature. However, the potential impacts of the derivative-free setting on the learning guarantees of SCO remain unclear and merit further investigation. This paper aims to reveal the impacts by developing a theoretical analysis for two derivative-free algorithms, black-box SCGD and SCSC. Specifically, we first provide sharper generalization upper bounds for convex SCGD and SCSC based on a new stability analysis framework that is more effective than prior work under milder conditions, and we further develop it for the non-convex case using the almost co-coercivity property of smooth functions. Then, we derive the learning guarantees of three black-box variants of non-convex SCGD and SCSC with additional optimization analysis. Comparing these results, we theoretically uncover that better gradient estimation brings a tighter learning guarantee, and that a larger proportion of unknown gradients may lead to a stronger dependence on the gradient estimation quality. Finally, our analysis is applied to two SCO algorithms, FOO-based vertical VFL and VFL-CZOFO, to build the first learning guarantees for VFL that align with the findings of SCGD and SCSC.", "pdf": "https://openreview.net/pdf/43e3c93be8f0556187c88b3c568cb63f39d53976.pdf"} {"title": "Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications", "url": "https://openreview.net/forum?id=F8wKoSFSaA", "detail_url": "https://openreview.net/forum?id=F8wKoSFSaA", "authors": "Weixin An,Yuanyuan Liu,Fanhua Shang,Hongying Liu", "tags": "NIPS 2024,Poster", "abstract": "Many zeroth-order (ZO) optimization algorithms have been developed to solve nonconvex minimax problems in machine learning and computer vision areas. However, existing ZO minimax algorithms have high complexity and rely on strict, restrictive conditions for ZO estimations. 
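For context, the basic two-point zeroth-order gradient estimator that such ZO methods build on is sketched below; ZO-GDEGA's actual estimator, smoothing parameters, and step rules follow the paper, so this is illustration only.

```python
# Two-point randomized zeroth-order gradient estimator: averages directional
# finite differences along random Gaussian directions.
import numpy as np

def zo_grad(f, x, mu=1e-3, n_dirs=20, rng=np.random.default_rng(0)):
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

f = lambda x: np.sum(x ** 2)
x = np.ones(5)
print(zo_grad(f, x))   # close to the true gradient 2x = [2, 2, 2, 2, 2]
```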
To address these issues, we design a new unified ZO gradient descent extragradient ascent (ZO-GDEGA) algorithm, which reduces the overall complexity to $\\mathcal{O}(d\\epsilon^{-6})$ to find an $\\epsilon$-stationary point of the function $\\psi$ for nonconvex-concave (NC-C) problems, where $d$ is the variable dimension. To the best of our knowledge, ZO-GDEGA is the first ZO algorithm with complexity guarantees to solve stochastic NC-C problems. Moreover, ZO-GDEGA requires weaker conditions on the ZO estimations and achieves more robust theoretical results. As a by-product, ZO-GDEGA has advantages on the condition number for the NC-strongly concave case. Experimentally, ZO-GDEGA can generate more effective poisoning attack data with an average accuracy reduction of 5\\%. The improved AUC performance also verifies the robustness of gradient estimations.", "pdf": "https://openreview.net/pdf/87ec8500f3b6f529333dea266b7a6b55828fb4fd.pdf"} {"title": "Last-Iterate Convergence for Generalized Frank-Wolfe in Monotone Variational Inequalities", "url": "https://openreview.net/forum?id=EjKNSErSMJ", "detail_url": "https://openreview.net/forum?id=EjKNSErSMJ", "authors": "Zaiwei Chen,Eric Mazumdar", "tags": "NIPS 2024,Poster", "abstract": "We study the convergence behavior of a generalized Frank-Wolfe algorithm in constrained (stochastic) monotone variational inequality (MVI) problems. In recent years, there have been numerous efforts to design algorithms for solving constrained MVI problems due to their connections with optimization, machine learning, and equilibrium computation in games. Most work in this domain has focused on extensions of simultaneous gradient play, with particular emphasis on understanding the convergence properties of extragradient and optimistic gradient methods. In contrast, we examine the performance of an algorithm from another well-known class of optimization algorithms: Frank-Wolfe. We show that a generalized variant of this algorithm achieves a fast $\\mathcal{O}(T^{-1/2})$ last-iterate convergence rate in constrained MVI problems. By drawing connections between our generalized Frank-Wolfe algorithm and the well-known smoothed fictitious play (FP) from game theory, we also derive a finite-sample convergence rate for smoothed FP in zero-sum matrix games. Furthermore, we demonstrate that a stochastic variant of the generalized Frank-Wolfe algorithm for MVI problems also converges in a last-iterate sense, albeit at a slower $\\mathcal{O}(T^{-1/6})$ convergence rate.", "pdf": "https://openreview.net/pdf/093287ecc4f7f9176a7d0f1801a92dc18068bf1b.pdf"} {"title": "The Benefits of Balance: From Information Projections to Variance Reduction", "url": "https://openreview.net/forum?id=vJMMdFfL0A", "detail_url": "https://openreview.net/forum?id=vJMMdFfL0A", "authors": "Lang Liu,Ronak Mehta,Soumik Pal,Zaid Harchaoui", "tags": "NIPS 2024,Poster", "abstract": "Data balancing across multiple modalities and sources appears in various forms in foundation models in machine learning and AI, e.g., in CLIP and DINO. We show that data balancing across modalities and sources actually offers an unsuspected benefit: variance reduction. We present a non-asymptotic statistical bound that quantifies this variance reduction effect and relates it to the eigenvalue decay of Markov operators. 
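One concrete form of the data balancing discussed in the last abstract above is iterative proportional fitting, an alternating pair of information projections that reweights a joint source-by-modality count matrix toward prescribed marginals; the variance-reduction effect is the paper's contribution, while the sketch below (with made-up counts) only shows the balancing operation itself.

```python
# Iterative proportional fitting: alternate row and column rescaling until
# both marginals match their targets (here, uniform).
import numpy as np

counts = np.array([[30.0, 5.0], [10.0, 55.0]])  # e.g. source x modality counts
P = counts / counts.sum()
row_target = np.full(2, 0.5)
col_target = np.full(2, 0.5)

for _ in range(100):                            # alternating I-projections
    P *= (row_target / P.sum(axis=1))[:, None]  # match row marginals
    P *= (col_target / P.sum(axis=0))[None, :]  # match column marginals

print(P)
print(P.sum(axis=0), P.sum(axis=1))             # both now approximately uniform
```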
Furthermore, we describe how various forms of data balancing in contrastive multimodal learning and self-supervised clustering can be better understood, and even improved upon, owing to our variance reduction viewpoint.", "pdf": "https://openreview.net/pdf/ae57592f2b56e21a1b35767f5eda616ea38b034e.pdf"} {"title": "Generalized Tensor Decomposition for Understanding Multi-Output Regression under Combinatorial Shifts", "url": "https://openreview.net/forum?id=1v0BPTR3AA", "detail_url": "https://openreview.net/forum?id=1v0BPTR3AA", "authors": "Andong Wang,Yuning Qiu,Mingyuan Bai,Zhong Jin,Guoxu Zhou,Qibin Zhao", "tags": "NIPS 2024,Poster", "abstract": "In multi-output regression, we identify a previously neglected challenge that arises from the inability of the training distribution to cover all combinations of input features, leading to combinatorial distribution shift (CDS). To the best of our knowledge, this is the first work to formally define and address this problem. We tackle it through a novel tensor decomposition perspective, proposing the Functional t-Singular Value Decomposition (Ft-SVD) theorem, which extends the classical tensor SVD to infinite and continuous feature domains, providing a natural tool for representing and analyzing multi-output functions. Within the Ft-SVD framework, we formulate the multi-output regression problem under CDS as a low-rank tensor estimation problem under the missing not at random (MNAR) setting, and introduce a series of assumptions about the true functions, training and testing distributions, and spectral properties of the ground-truth embeddings, making the problem more tractable.\nTo address the challenges posed by CDS in multi-output regression, we develop a tailored Double-Stage Empirical Risk Minimization (ERM-DS) algorithm that leverages the spectral properties of the embeddings and uses specific hypothesis classes in each frequency component to better capture the varying spectral decay patterns. We provide rigorous theoretical analyses that establish performance guarantees for the ERM-DS algorithm. This work lays a preliminary theoretical foundation for multi-output regression under CDS.", "pdf": "https://openreview.net/pdf/a2341b6b8ecbbaae9d1c2cc5c0cb1238a25dad2f.pdf"} {"title": "Focus On What Matters: Separated Models For Visual-Based RL Generalization", "url": "https://openreview.net/forum?id=wz2KvvEk44", "detail_url": "https://openreview.net/forum?id=wz2KvvEk44", "authors": "Di Zhang,Bowen Lv,Hai Zhang,Feifan Yang,Junqiao Zhao,Hang Yu,Chang Huang,Hongtu Zhou,Chen Ye,changjun jiang", "tags": "NIPS 2024,Poster", "abstract": "A primary challenge for visual-based Reinforcement Learning (RL) is to generalize effectively across unseen environments. Although previous studies have explored different auxiliary tasks to enhance generalization, few adopt image reconstruction due to concerns about exacerbating overfitting to task-irrelevant features during training. Recognizing the pre-eminence of image reconstruction in representation learning, we propose SMG (\blue{S}eparated \blue{M}odels for \blue{G}eneralization), a novel approach that exploits image reconstruction for generalization. SMG introduces two model branches to extract task-relevant and task-irrelevant representations separately from visual observations via cooperative reconstruction. Built upon this architecture, we further emphasize the importance of task-relevant features for generalization. 
Specifically, SMG incorporates two additional consistency losses to guide the agent's focus toward task-relevant areas across different scenarios, thereby avoiding overfitting. Extensive experiments in DMC demonstrate the SOTA performance of SMG in generalization, particularly excelling in video-background settings. Evaluations on robotic manipulation tasks further confirm the robustness of SMG in real-world applications. Source code is available at \url{https://anonymous.4open.science/r/SMG/}.", "pdf": "https://openreview.net/pdf/4038405ec2477f5290f6738f4c80053e969f1bfe.pdf"} {"title": "Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors", "url": "https://openreview.net/forum?id=Xq9HQf7VNV", "detail_url": "https://openreview.net/forum?id=Xq9HQf7VNV", "authors": "Zihui Wu,Yu Sun,Yifan Chen,Bingliang Zhang,Yisong Yue,Katherine Bouman", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models (DMs) have recently shown outstanding capabilities in modeling complex image distributions, making them expressive image priors for solving Bayesian inverse problems. However, most existing DM-based methods rely on approximations in the generative process to be generic to different inverse problems, leading to inaccurate sample distributions that deviate from the target posterior defined within the Bayesian framework. To harness the generative power of DMs while avoiding such approximations, we propose a Markov chain Monte Carlo algorithm that performs posterior sampling for general inverse problems by reducing it to sampling the posterior of a Gaussian denoising problem. Crucially, we leverage a general DM formulation as a unified interface that allows for rigorously solving the denoising problem with a range of state-of-the-art DMs. We demonstrate the effectiveness of the proposed method on six inverse problems (three linear and three nonlinear), including a real-world black hole imaging problem. Experimental results indicate that our proposed method offers more accurate reconstructions and posterior estimation compared to existing DM-based imaging inverse methods.", "pdf": "https://openreview.net/pdf/c672647004289c229223f90d5e49cb36f9b21ba4.pdf"} {"title": "EGonc : Energy-based Open-Set Node Classification with substitute Unknowns", "url": "https://openreview.net/forum?id=3cL2XDyaEB", "detail_url": "https://openreview.net/forum?id=3cL2XDyaEB", "authors": "Qin Zhang,Zelin Shi,Shirui Pan,Junyang Chen,Huisi Wu,Xiaojun Chen", "tags": "NIPS 2024,Poster", "abstract": "Open-set Classification (OSC) is a critical requirement for safely deploying machine learning models in the open world, which aims to classify samples from known classes and reject out-of-distribution (OOD) samples. \nExisting methods exploit the feature space of the trained network and attempt to estimate the uncertainty in the predictions.\nHowever, softmax-based neural networks are found to be overly confident in their predictions even on data they have never seen before and\nthe immense diversity of the OOD examples also makes such methods fragile.\nTo this end, we follow the idea of estimating the underlying density of the training data to decide whether a given input is close to the in-distribution (IND) data and adopt Energy-based models (EBMs) as density estimators. \nA novel energy-based generative open-set node classification method, \textit{EGonc}, is proposed to achieve open-set graph learning. 
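As background for the EBM-based design, here is a minimal sketch of the standard energy score this line of work builds on: with classifier logits $f(x)$, the free energy $-\log\sum_y e^{f_y(x)}$ acts as a density surrogate, and high-energy inputs are rejected as OOD. EGonc's substitute unknowns and virtual logit go beyond this picture; the threshold below is a placeholder that would be tuned on validation data.

```python
# Energy-based OOD scoring from classifier logits (background sketch only).
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Lower energy <=> higher estimated in-distribution density.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

logits = torch.tensor([[5.0, 0.1, -1.0],   # confident, IND-looking sample
                       [0.2, 0.1, 0.0]])   # flat, OOD-looking sample
scores = energy_score(logits)
threshold = -2.0                           # placeholder decision threshold
print(scores, scores > threshold)          # True => flag as OOD
```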
\nSpecifically, we first generate substitute unknowns to mimic the distribution of real open-set samples, based on the information of graph structures. \nThen, an additional energy logit representing the virtual OOD class is learned from the residual of the feature against the principal space, and matched with the original logits by a constant scaling. This virtual logit serves as the indicator of OOD-ness. \nEGonc has nice theoretical properties that guarantee an overall distinguishable margin between the detection scores for IND and OOD samples. \nComprehensive experimental evaluations of EGonc also demonstrate its superiority.", "pdf": "https://openreview.net/pdf/8cf514bedc720ccd24d628a90f85c2b084209a77.pdf"} {"title": "HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation", "url": "https://openreview.net/forum?id=i9QpRjUAhv", "detail_url": "https://openreview.net/forum?id=i9QpRjUAhv", "authors": "Bocheng,YuhangMa,wuliebucha,Shanyuan Liu,Ao Ma,Xiaoyu Wu,Dawei Leng,Yuhui Yin", "tags": "NIPS 2024,Poster", "abstract": "The task of layout-to-image generation involves synthesizing images based on the captions of objects and their spatial positions. Existing methods still struggle in complex layout generation, where common bad cases include missing objects, inconsistent lighting, conflicting view angles, etc. To effectively address these issues, we propose a \textbf{Hi}erarchical \textbf{Co}ntrollable (HiCo) diffusion model for layout-to-image generation, featuring an object-separable conditioning branch structure. Our key insight is to achieve spatial disentanglement through hierarchical modeling of layouts. We use a multi-branch structure to represent the hierarchy and aggregate the branches in a fusion module. To evaluate the performance of multi-objective controllable layout generation in natural scenes, we introduce the HiCo-7K benchmark, derived from the GRIT-20M dataset and manually cleaned. https://github.com/360CVGroup/HiCo_T2I.", "pdf": "https://openreview.net/pdf/2332400930435bb4bf73a0958c5f15b073789b24.pdf"} {"title": "Scale-invariant Optimal Sampling for Rare-events Data and Sparse Models", "url": "https://openreview.net/forum?id=6SAnp0vr9X", "detail_url": "https://openreview.net/forum?id=6SAnp0vr9X", "authors": "Jing Wang,HaiYing Wang,Hao Zhang", "tags": "NIPS 2024,Poster", "abstract": "Subsampling is effective in tackling computational challenges for massive data with rare events. Overly aggressive subsampling may adversely affect estimation efficiency, and optimal subsampling is essential to mitigate the information loss. However, existing optimal subsampling probabilities depend on data scales, and some scaling transformations may result in inefficient subsamples. This problem is more significant when there are inactive features, because their influence on the subsampling probabilities can be arbitrarily magnified by inappropriate scaling transformations. We tackle this challenge and introduce a scale-invariant optimal subsampling function in the context of sparse models, where inactive features are commonly assumed. Instead of focusing on estimating model parameters, we define an optimal subsampling function to minimize the prediction error, using adaptive lasso as an example to outline the estimation procedure and study its theoretical guarantee. We first introduce the adaptive lasso estimator for rare-events data and establish its oracle properties, thereby validating the use of subsampling. 
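A hedged sketch of the estimation pipeline just described: subsample rare-events data with probabilities $\pi_i$, then fit an inverse-probability-weighted adaptive lasso (row weights $1/\pi_i$, features rescaled by a pilot estimate). The pilot model and the constant subsampling probabilities below are simplistic placeholders, not the paper's scale-invariant optimal function.

```python
# IPW adaptive lasso on a rare-events subsample (placeholder probabilities).
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n, d = 5000, 10
X = rng.standard_normal((n, d))
beta = np.array([1.5, -2.0] + [0.0] * (d - 2))
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ beta - 4)))).astype(float)  # rare events

# Keep all rare positives, subsample negatives (placeholder probabilities pi_i).
pi = np.where(y == 1, 1.0, 0.05)
take = rng.random(n) < pi
Xs, ys, ws = X[take], y[take], 1.0 / pi[take]          # IPW weights

pilot = LogisticRegression(max_iter=1000).fit(Xs, ys, sample_weight=ws)
adapt = np.abs(pilot.coef_.ravel()) + 1e-6             # adaptive-lasso weights

# Row weighting via sqrt(w); adaptive penalty via column rescaling.
Xw = (Xs * np.sqrt(ws)[:, None]) * adapt
yw = ys * np.sqrt(ws)
fit = Lasso(alpha=0.01).fit(Xw, yw)
print(fit.coef_ * adapt)                               # back on original scale
```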
Then we derive a scale-invariant optimal subsampling function that minimizes the prediction error of the inverse probability weighted (IPW) adaptive lasso. Finally, we present an estimator based on the maximum sampled conditional likelihood (MSCL) to further improve the estimation efficiency. We conduct numerical experiments using both simulated and real-world data sets to demonstrate the performance of the proposed methods.", "pdf": "https://openreview.net/pdf/c303da87e11f70f616153d925f95a3ce1e07acef.pdf"} {"title": "Local Curvature Smoothing with Stein's Identity for Efficient Score Matching", "url": "https://openreview.net/forum?id=yPPNi7vc7n", "detail_url": "https://openreview.net/forum?id=yPPNi7vc7n", "authors": "GENKI OSADA,Makoto Shing,Takashi Nishide", "tags": "NIPS 2024,Poster", "abstract": "The training of score-based diffusion models (SDMs) is based on score matching. The challenge of score matching is that it includes a computationally expensive Jacobian trace. While several methods have been proposed to avoid this computation, each has drawbacks, such as instability during training or recasting the learning target as a denoising vector field rather than the true score.\nWe propose a novel score matching variant, local curvature smoothing with Stein's identity (LCSS). LCSS bypasses the Jacobian trace by applying Stein's identity, enabling effective regularization and efficient computation. We show that LCSS surpasses existing methods in sample generation performance and matches the performance of denoising score matching, widely adopted by most SDMs, in evaluations such as FID, Inception score, and bits per dimension. Furthermore, we show that LCSS enables realistic image generation even at a high resolution of $1024 \times 1024$.", "pdf": "https://openreview.net/pdf/dcc7ef4b2d112718c5f2b5a88d45e436bdeb815d.pdf"} {"title": "CosAE: Learnable Fourier Series for Image Restoration", "url": "https://openreview.net/forum?id=D0s29c5GvL", "detail_url": "https://openreview.net/forum?id=D0s29c5GvL", "authors": "Sifei Liu,Shalini De Mello,Jan Kautz", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we introduce Cosine Autoencoder (CosAE), a novel, generic Autoencoder that seamlessly leverages the classic Fourier series with a feed-forward neural network. CosAE represents an input image as a series of 2D Cosine time series, each defined by a tuple of learnable frequency and Fourier coefficients. This method stands in contrast to conventional Autoencoders, which often sacrifice detail in their reduced-resolution bottleneck latent spaces. CosAE, however, encodes frequency coefficients, i.e., the amplitudes and phases, in its bottleneck. This encoding enables extreme spatial compression, e.g., $64\times$ downsampled feature maps in the bottleneck, without losing detail upon decoding. We showcase the advantage of CosAE via extensive experiments on flexible-resolution super-resolution and blind image restoration, two highly challenging tasks that demand the restoration network to effectively generalize to complex and even unknown image degradations. Our method surpasses state-of-the-art approaches, highlighting its capability to learn a generalizable representation for image restoration. 
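To illustrate the bottleneck parameterization described above, here is a minimal decoder (our interpretation of the idea, not the actual CosAE architecture) that reconstructs a single-channel map as a sum of 2D cosine waves with learnable frequencies, amplitudes, and phases.

```python
# Sum-of-2D-cosines decoder: each wave has a learnable frequency pair,
# amplitude, and phase, mirroring the Fourier-coefficient bottleneck idea.
import torch
import torch.nn as nn

class CosineDecoder(nn.Module):
    def __init__(self, n_waves=32, size=64):
        super().__init__()
        self.freq = nn.Parameter(torch.randn(n_waves, 2))   # (fx, fy) per wave
        self.amp = nn.Parameter(torch.ones(n_waves))
        self.phase = nn.Parameter(torch.zeros(n_waves))
        ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                                torch.linspace(0, 1, size), indexing="ij")
        self.register_buffer("grid", torch.stack([xs, ys]))  # (2, H, W)

    def forward(self):
        # arg_k(x, y) = 2*pi*(fx_k * x + fy_k * y) + phase_k
        arg = 2 * torch.pi * torch.einsum("kd,dhw->khw", self.freq, self.grid)
        waves = torch.cos(arg + self.phase[:, None, None])
        return (self.amp[:, None, None] * waves).sum(0)       # (H, W)

print(CosineDecoder()().shape)
```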
The project page is maintained at [https://sifeiliu.net/CosAE-page/](https://sifeiliu.net/CosAE-page/).", "pdf": "https://openreview.net/pdf/2a0c5f52c3f163cf1deedf86a9b89282178bc953.pdf"} {"title": "Autonomous Agents for Collaborative Task under Information Asymmetry", "url": "https://openreview.net/forum?id=mp6OWpDIJC", "detail_url": "https://openreview.net/forum?id=mp6OWpDIJC", "authors": "Wei Liu,Chenxi Wang,YiFei Wang,Zihao Xie,Rennai Qiu,Yufan Dang,Zhuoyun Du,Weize Chen,Cheng Yang,Chen Qian", "tags": "NIPS 2024,Poster", "abstract": "Large Language Model Multi-Agent Systems (LLM-MAS) have greatly progressed in solving complex tasks. Agents within such a system communicate to collaboratively solve tasks, under the premise of shared information. However, when agents' collaborations are leveraged to perform multi-person tasks, a new challenge arises due to information asymmetry, since each agent can only access the information of its human user. Previous MAS struggle to complete tasks under this condition. To address this, we propose a new MAS paradigm termed iAgents, which denotes Informative Multi-Agent Systems. In iAgents, the human social network is mirrored in the agent network, where agents proactively exchange human information necessary for task resolution, thereby overcoming information asymmetry. iAgents employs a novel agent reasoning mechanism, InfoNav, to navigate agents' communication towards effective information exchange. Together with InfoNav, iAgents organizes human information in a mixed memory to provide agents with accurate and comprehensive information for exchange. Additionally, we introduce InformativeBench, the first benchmark tailored for evaluating LLM agents' task-solving ability under information asymmetry. Experimental results show that iAgents can collaborate within a social network of 140 individuals and 588 relationships, autonomously communicate over 30 turns, and retrieve information from nearly 70,000 messages to complete tasks within 3 minutes.", "pdf": "https://openreview.net/pdf/cbc02f9426edb1ceb1bfcc4517984b4916550c94.pdf"} {"title": "Robust Offline Active Learning on Graphs", "url": "https://openreview.net/forum?id=MDsl1ifiNS", "detail_url": "https://openreview.net/forum?id=MDsl1ifiNS", "authors": "Yuanchen Wu,Yubai Yuan", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of active learning on graphs for node-level tasks, which has crucial applications in many real-world networks where labeling node responses is expensive. In this paper, we propose an offline active learning method that selects nodes to query by explicitly incorporating information from both the network structure and node covariates. Building on graph signal recovery theories and the random spectral sparsification technique, the proposed method adopts a two-stage biased sampling strategy that takes both informativeness and representativeness into consideration for node querying. Informativeness refers to the complexity of graph signals that are learnable from the responses of queried nodes, while representativeness refers to the capacity of queried nodes to control generalization errors given noisy node-level information. We establish a theoretical relationship between generalization error and the number of nodes selected by the proposed method. Our theoretical results demonstrate the trade-off between informativeness and representativeness in active learning. 
Extensive numerical experiments show that the proposed method is competitive with existing graph-based active learning methods, especially when node covariates and responses contain noise. Additionally, the proposed method is applicable to both regression and classification tasks on graphs.", "pdf": "https://openreview.net/pdf/784a909f8c33ab7e8f7e754f4c15a96e79d463ce.pdf"} {"title": "Normal-GS: 3D Gaussian Splatting with Normal-Involved Rendering", "url": "https://openreview.net/forum?id=kngLs5H6l1", "detail_url": "https://openreview.net/forum?id=kngLs5H6l1", "authors": "Meng Wei,Qianyi Wu,Jianmin Zheng,Hamid Rezatofighi,Jianfei Cai", "tags": "NIPS 2024,Poster", "abstract": "Rendering and reconstruction are long-standing topics in computer vision and graphics. Achieving both high rendering quality and accurate geometry is a challenge. Recent advancements in 3D Gaussian Splatting (3DGS) have enabled high-fidelity novel view synthesis at real-time speeds. However, the noisy and discrete nature of 3D Gaussian primitives hinders accurate surface estimation. Previous attempts to regularize 3D Gaussian normals often degrade rendering quality due to the fundamental disconnect between normal vectors and the rendering pipeline in 3DGS-based methods. Therefore, we introduce Normal-GS, a novel approach that integrates normal vectors into the 3DGS rendering pipeline. The core idea is to model the interaction between normals and incident lighting using the physically-based rendering equation. Our approach re-parameterizes surface colors as the product of normals and a designed Integrated Directional Illumination Vector (IDIV). To optimize memory usage and simplify optimization, we employ an anchor-based 3DGS to implicitly encode locally-shared IDIVs. Additionally, Normal-GS leverages optimized normals and Integrated Directional Encoding (IDE) to accurately model specular effects, enhancing both rendering quality and surface normal precision. Extensive experiments demonstrate that Normal-GS achieves near state-of-the-art visual quality while obtaining accurate surface normals and preserving real-time rendering performance.", "pdf": "https://openreview.net/pdf/281b0739653518bc1782310574db76aca0e8652c.pdf"} {"title": "Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization", "url": "https://openreview.net/forum?id=ujk0XrNTQZ", "detail_url": "https://openreview.net/forum?id=ujk0XrNTQZ", "authors": "Ronak Mehta,Jelena Diakonikolas,Zaid Harchaoui", "tags": "NIPS 2024,Poster", "abstract": "We consider the penalized distributionally robust optimization (DRO) problem with a closed, convex uncertainty set, a setting that encompasses learning using $f$-DRO and spectral/$L$-risk minimization. We present Drago, a stochastic primal-dual algorithm that combines cyclic and randomized components with a carefully regularized primal update to achieve dual variance reduction. Owing to its design, Drago enjoys a state-of-the-art linear convergence rate on strongly convex-strongly concave DRO problems with a fine-grained dependency on primal and dual condition numbers. 
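For readers unfamiliar with the spectral/$L$-risk objectives named in the Drago abstract above, a minimal sketch follows: an $L$-risk is a weighted average of the sorted per-sample losses, with nonnegative, nondecreasing weights that sum to one (assumed here); CVaR is the special case that puts uniform weight on the worst tail.

```python
import torch

def spectral_risk(losses, sigma):
    """Spectral (L-)risk: weighted mean of sorted per-sample losses.
    sigma is assumed nonnegative, nondecreasing, and summing to 1."""
    sorted_losses, _ = torch.sort(losses)
    return (sigma * sorted_losses).sum()

# Example: CVaR at level 0.1 puts uniform weight on the worst 10% of losses.
n = 100
sigma = torch.zeros(n)
sigma[-n // 10:] = 1.0 / (n // 10)
risk = spectral_risk(torch.randn(n).abs(), sigma)
```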
The theoretical results are supported with numerical benchmarks on regression and classification tasks.", "pdf": "https://openreview.net/pdf/ff88046382d04abab183222662c15345b7525a1c.pdf"} {"title": "Effective Exploration Based on the Structural Information Principles", "url": "https://openreview.net/forum?id=Bjh4mcYs20", "detail_url": "https://openreview.net/forum?id=Bjh4mcYs20", "authors": "Xianghua Zeng,Hao Peng,Angsheng Li", "tags": "NIPS 2024,Poster", "abstract": "Traditional information theory provides a valuable foundation for Reinforcement Learning (RL), particularly through representation learning and entropy maximization for agent exploration. However, existing methods primarily concentrate on modeling the uncertainty associated with RL's random variables, neglecting the inherent structure within the state and action spaces. In this paper, we propose a novel Structural Information principles-based Effective Exploration framework, namely SI2E. Structural mutual information between two variables is defined to address the single-variable limitation in structural information, and an innovative embedding principle is presented to capture dynamics-relevant state-action representations. SI2E analyzes value differences in the agent's policy between state-action pairs and minimizes structural entropy to derive the hierarchical state-action structure, referred to as the encoding tree. Under this tree structure, value-conditional structural entropy is defined and maximized to design an intrinsic reward mechanism that avoids redundant transitions and promotes enhanced coverage in the state-action space. Theoretical connections are established between SI2E and classical information-theoretic methodologies, highlighting our framework's rationality and advantage. Comprehensive evaluations in the MiniGrid, MetaWorld, and DeepMind Control Suite benchmarks demonstrate that SI2E significantly outperforms state-of-the-art exploration baselines regarding final performance and sample efficiency, with maximum improvements of 37.63% and 60.25%, respectively.", "pdf": "https://openreview.net/pdf/589103ed863dfc611e7dc292d336376bdf4d6875.pdf"} {"title": "DoFIT: Domain-aware Federated Instruction Tuning with Alleviated Catastrophic Forgetting", "url": "https://openreview.net/forum?id=FDfrPugkGU", "detail_url": "https://openreview.net/forum?id=FDfrPugkGU", "authors": "Binqian Xu,Xiangbo Shu,Haiyang Mei,Zechen Bai,Basura Fernando,Mike Zheng Shou,Jinhui Tang", "tags": "NIPS 2024,Poster", "abstract": "Federated Instruction Tuning (FIT) advances collaborative training on decentralized data, crucially enhancing the model's capability and safeguarding data privacy. However, existing FIT methods are dedicated to handling data heterogeneity across different clients (i.e., client-aware data heterogeneity), while ignoring the variation between data from different domains (i.e., domain-aware data heterogeneity). When scarce data needs supplementation from related fields, these methods lack the ability to handle domain heterogeneity in cross-domain training. This leads to domain-information catastrophic forgetting in collaborative training and therefore makes the model perform sub-optimally on the individual domain. To address this issue, we introduce DoFIT, a new Domain-aware FIT framework that alleviates catastrophic forgetting through two new designs. 
First, to reduce interference information from the other domain, DoFIT finely aggregates overlapping weights across domains on the inter-domain server side. Second, to retain more domain information, DoFIT initializes intra-domain weights by incorporating inter-domain information into a less-conflicted parameter space. Experimental results on diverse datasets consistently demonstrate that DoFIT excels in cross-domain collaborative training and exhibits significant advantages over conventional FIT methods in alleviating catastrophic forgetting. Code is available at [this link](https://github.com/1xbq1/DoFIT).", "pdf": "https://openreview.net/pdf/6dbf39abfa10a93ba1934baffb13d829294ae363.pdf"} {"title": "Revisiting Differentially Private ReLU Regression", "url": "https://openreview.net/forum?id=3uUIwMxYbR", "detail_url": "https://openreview.net/forum?id=3uUIwMxYbR", "authors": "Meng Ding,Mingxi Lei,Liyang Zhu,Shaowei Wang,Di Wang,Jinhui Xu", "tags": "NIPS 2024,Poster", "abstract": "As one of the most fundamental non-convex learning problems, ReLU regression under differential privacy (DP) constraints, especially in high-dimensional settings, remains a challenging area in privacy-preserving machine learning. Existing results are limited to the assumptions of bounded norm $ \\|\\mathbf{x}\\|_2 \\leq 1$, which becomes meaningless with increasing data dimensionality. In this work, we revisit the problem of DP ReLU regression in high-dimensional regimes. We propose two innovative algorithms, DP-GLMtron and DP-TAGLMtron, that outperform the conventional DPSGD. \nDP-GLMtron is based on a generalized linear model perceptron approach, integrating adaptive clipping and the Gaussian mechanism for enhanced privacy. To overcome the constraints of small privacy budgets in DP-GLMtron, represented by $\\widetilde{O}(\\sqrt{1/N})$ where $N$ is the sample size, we introduce DP-TAGLMtron, which utilizes a tree aggregation protocol to balance privacy and utility effectively, showing that DP-TAGLMtron achieves comparable performance with only an additional factor of $O(\\log N)$ in the utility upper bound.\nMoreover, our theoretical analysis extends beyond Gaussian-like data distributions to settings with eigenvalue decay, showing how data distribution impacts learning in high dimensions. Notably, our findings suggest that the utility upper bound could be independent of the dimension $d$, even when $d \\gg N$. \nExperiments on synthetic and real-world datasets also validate our results.", "pdf": "https://openreview.net/pdf/1b8bed0fd41cb46007e7290717266daa2cadf94d.pdf"} {"title": "Reinforcement Learning with Euclidean Data Augmentation for State-Based Continuous Control", "url": "https://openreview.net/forum?id=NwiFLtWGEg", "detail_url": "https://openreview.net/forum?id=NwiFLtWGEg", "authors": "Jinzhu Luo,Dingyang Chen,Qi Zhang", "tags": "NIPS 2024,Poster", "abstract": "Data augmentation creates new data points by transforming the original ones for a reinforcement learning (RL) agent to learn from, which has been shown to be effective for the objective of improving data efficiency of RL for continuous control. Prior work towards this objective has been largely restricted to perturbation-based data augmentation where new data points are created by perturbing the original ones,\nwhich has been impressively effective for tasks where the RL agent observes control states as images with perturbations including random cropping, shifting, etc. 
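To make the perturbation-based augmentation just mentioned concrete, here is a minimal sketch of the random-shift augmentation popularized for image-based RL (in the style of DrQ); it illustrates the prior-work baseline the abstract contrasts with, not this paper's Euclidean method.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """DrQ-style random shift: replicate-pad, then crop back at a random offset.
    imgs: (N, C, H, W) batch of observations."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        dx = torch.randint(0, 2 * pad + 1, (1,)).item()
        dy = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, dy:dy + h, dx:dx + w]
    return out
```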
This work focuses on state-based control, where the RL agent can directly observe raw kinematic and task features, and considers an alternative data augmentation applied to these features based on Euclidean symmetries under transformations like rotations. We show that the default state features used in existing benchmark tasks, which are based on joint configurations, are not amenable to Euclidean transformations. We therefore advocate using state features based on configurations of the limbs (i.e., rigid bodies connected by joints), which instead provide rich augmented data under Euclidean transformations. With minimal hyperparameter tuning, we show this new Euclidean data augmentation strategy significantly improves both data efficiency and asymptotic performance of RL on a wide range of continuous control tasks.", "pdf": "https://openreview.net/pdf/5a591c79d539a93049e03a55f9024335c142be76.pdf"} {"title": "Functionally Constrained Algorithm Solves Convex Simple Bilevel Problem", "url": "https://openreview.net/forum?id=PAiGHJppam", "detail_url": "https://openreview.net/forum?id=PAiGHJppam", "authors": "Huaqing Zhang,Lesi Chen,Jing Xu,Jingzhao Zhang", "tags": "NIPS 2024,Poster", "abstract": "This paper studies simple bilevel problems, where a convex upper-level function is minimized over the optimal solutions of a convex lower-level problem. We first show the fundamental difficulty of simple bilevel problems: the approximate optimal value of such problems is not obtainable by first-order zero-respecting algorithms. Then we follow recent works to pursue weak approximate solutions. For this goal, we propose novel near-optimal methods for smooth and nonsmooth problems by reformulating them into functionally constrained problems.", "pdf": "https://openreview.net/pdf/0ebe84eb72b9d0ac2951cde1ff1b4b8b7b9451cc.pdf"} {"title": "OneBit: Towards Extremely Low-bit Large Language Models", "url": "https://openreview.net/forum?id=ZwiG9KjfHV", "detail_url": "https://openreview.net/forum?id=ZwiG9KjfHV", "authors": "Yuzhuang Xu,Xu Han,Zonghan Yang,Shuo Wang,Qingfu Zhu,Zhiyuan Liu,Weidong Liu,Wanxiang Che", "tags": "NIPS 2024,Poster", "abstract": "Model quantization uses low bit-width values to represent the weight matrices of models to be quantized, which is a promising approach to reduce both storage and computational overheads of deploying highly anticipated LLMs. However, current quantization methods suffer severe performance degradation when the bit-width is extremely reduced, and thus focus on utilizing 4-bit or 8-bit values to quantize models. This paper boldly quantizes the weight matrices of LLMs to 1-bit, paving the way for the extremely low bit-width deployment of LLMs. For this target, we introduce a 1-bit model compression framework named OneBit, including a novel 1-bit parameter representation method to better quantize LLMs as well as an effective parameter initialization method based on matrix decomposition to improve the convergence speed of the quantization framework. 
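The matrix-decomposition initialization in the OneBit abstract above can be pictured as a sign-value split: keep a 1-bit sign matrix and recover scale from a rank-one factor of the magnitudes. The following is a hedged sketch of that idea, not necessarily the paper's exact procedure.

```python
import torch

def sign_value_decompose(W):
    """Approximate W by sign(W) * outer(a, b): a 1-bit sign matrix plus two
    floating-point value vectors (a sketch of the decomposition idea)."""
    S = torch.sign(W)                                # 1-bit part
    U, Sig, Vh = torch.linalg.svd(W.abs(), full_matrices=False)
    a = U[:, 0] * Sig[0].sqrt()                      # row-wise scale
    b = Vh[0, :] * Sig[0].sqrt()                     # column-wise scale
    return S, a, b

W = torch.randn(64, 128)
S, a, b = sign_value_decompose(W)
err = (W - S * torch.outer(a, b)).norm() / W.norm()  # relative approximation error
```

Because the leading singular pair of a nonnegative matrix can be taken nonnegative, the rank-one factor here never flips the stored signs.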
Extensive experimental results indicate that OneBit achieves good performance (at least 81% of the non-quantized performance on LLaMA models) with robust training processes when only using 1-bit weight matrices.", "pdf": "https://openreview.net/pdf/af84d451a067a50a03465a52ddad61fe1cc10de1.pdf"} {"title": "A Walsh Hadamard Derived Linear Vector Symbolic Architecture", "url": "https://openreview.net/forum?id=p3hNrpeWMe", "detail_url": "https://openreview.net/forum?id=p3hNrpeWMe", "authors": "Mohammad Mahmudul Alam,Alexander Oberle,Edward Raff,Stella Biderman,Tim Oates,James Holt", "tags": "NIPS 2024,Poster", "abstract": "Vector Symbolic Architectures (VSAs) are one approach to developing Neuro-symbolic AI, where two vectors in $\\mathbb{R}^d$ are 'bound' together to produce a new vector in the same space. VSAs support the commutativity and associativity of this binding operation, along with an inverse operation, allowing one to construct symbolic-style manipulations over real-valued vectors. Most VSAs were developed before deep learning and automatic differentiation became popular and instead focused on efficacy in hand-designed systems. In this work, we introduce the Hadamard-derived linear Binding (HLB), which is designed for favorable computational efficiency, efficacy in classic VSA tasks, and strong performance in differentiable systems.", "pdf": "https://openreview.net/pdf/87b4b1c14628316f12b4dc3487e0802deba8e72b.pdf"} {"title": "LoD-Loc: Aerial Visual Localization using LoD 3D Map with Neural Wireframe Alignment", "url": "https://openreview.net/forum?id=PqlKliEXyJ", "detail_url": "https://openreview.net/forum?id=PqlKliEXyJ", "authors": "Juelin Zhu,Shen Yan,Long Wang,zhang shengYue,Yu Liu,Maojun Zhang", "tags": "NIPS 2024,Poster", "abstract": "We propose a new method named LoD-Loc for visual localization in the air. Unlike existing localization algorithms, LoD-Loc does not rely on complex 3D representations and can estimate the pose of an Unmanned Aerial Vehicle (UAV) using a Level-of-Detail (LoD) 3D map. LoD-Loc mainly achieves this goal by aligning the wireframe derived from the LoD projected model with that predicted by the neural network. Specifically, given a coarse pose provided by the UAV sensor, LoD-Loc hierarchically builds a cost volume for uniformly sampled pose hypotheses to describe the pose probability distribution and selects the pose with maximum probability. Each cost within this volume measures the degree of line alignment between projected and predicted wireframes. LoD-Loc also devises a 6-DoF pose optimization algorithm to refine the previous result with a differentiable Gauss-Newton method. As no public dataset exists for the studied problem, we collect two datasets with map levels of LoD3.0 and LoD2.0, along with real RGB queries and ground-truth pose annotations. We benchmark our method and demonstrate that LoD-Loc achieves excellent performance, even surpassing current state-of-the-art methods that use textured 3D models for localization. 
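The cost-volume scoring in the LoD-Loc abstract above boils down to: for each sampled pose hypothesis, project the LoD wireframe into the image and measure how well it lands on the network's predicted wireframe. A minimal sketch of one such scoring step follows; the tensor layout and the use of a plain probability lookup are assumptions for illustration, not the paper's implementation.

```python
import torch

def score_pose_hypotheses(prob_map, sample_px):
    """prob_map:  (H, W) predicted wireframe probability from the network.
    sample_px: (N, K, 2) integer (x, y) pixels of the projected LoD wireframe
    for each of N pose hypotheses (hypothetical layout)."""
    H, W = prob_map.shape
    x = sample_px[..., 0].clamp(0, W - 1)
    y = sample_px[..., 1].clamp(0, H - 1)
    return prob_map[y, x].mean(dim=-1)   # (N,) mean alignment score per hypothesis

# The hypothesis with the highest score is kept: best = scores.argmax().
```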
The code and dataset will be made available upon publication.", "pdf": "https://openreview.net/pdf/b67d9578c41aef0883c8cadafcb71ebbb9eae75a.pdf"} {"title": "Spiking Token Mixer: An event-driven friendly Former structure for spiking neural networks", "url": "https://openreview.net/forum?id=iYcY7KAkSy", "detail_url": "https://openreview.net/forum?id=iYcY7KAkSy", "authors": "Shikuang Deng,Yuhang Wu,Kangrui Du,Shi Gu", "tags": "NIPS 2024,Poster", "abstract": "Spiking neural networks (SNNs), inspired by biological processes, use spike signals for inter-layer communication, presenting an energy-efficient alternative to traditional neural networks. To realize the theoretical advantages of SNNs in energy efficiency, it is essential to deploy them onto neuromorphic chips. On clock-driven synchronous chips, employing shorter time steps can enhance energy efficiency but reduce SNN performance. Compared to the clock-driven synchronous chip, the event-driven asynchronous chip achieves much lower energy consumption but only supports some specific network operations. Recently, a series of SNN projects have achieved tremendous success, significantly improving the SNN's performance. However, event-driven asynchronous chips do not support some of the proposed structures, making it impossible to integrate these SNNs into asynchronous hardware. In response to these problems, we propose the Spiking Token Mixer (STMixer) architecture, which consists exclusively of operations supported by asynchronous scenarios, namely convolutional and fully connected layers and residual paths. Our series of experiments also demonstrates that STMixer achieves performance on par with spiking transformers in synchronous scenarios with very low timesteps. This indicates its ability to achieve the same level of performance with lower power consumption in synchronous scenarios. The codes are available at \\url{https://github.com/brain-intelligence-lab/STMixer_demo}.", "pdf": "https://openreview.net/pdf/cd4f08e709482baa184cbef85b5bf4aad8da37e0.pdf"} {"title": "FlexPlanner: Flexible 3D Floorplanning via Deep Reinforcement Learning in Hybrid Action Space with Multi-Modality Representation", "url": "https://openreview.net/forum?id=q9RLsvYOB3", "detail_url": "https://openreview.net/forum?id=q9RLsvYOB3", "authors": "Ruizhe Zhong,Xingbo Du,Shixiong Kai,Zhentao Tang,Siyuan Xu,Jianye HAO,Mingxuan Yuan,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "In the Integrated Circuit (IC) design flow, floorplanning (FP) determines the position and shape of each block. Serving as a prototype for downstream tasks, it is critical and establishes the upper bound of the final PPA (Power, Performance, Area). However, with the emergence of 3D ICs with stacked layers, existing methods are not flexible enough to handle the versatile constraints. Besides, they typically face difficulties in aligning the cross-die modules in 3D ICs due to their heuristic representations, which could potentially result in severe data transfer failures. To address these issues, we propose FlexPlanner, a flexible learning-based method in hybrid action space with multi-modality representation to simultaneously handle position, aspect ratio, and alignment of blocks. To the best of our knowledge, FlexPlanner is the first learning-based approach to discard heuristic-based search in the 3D FP task. Thus, the solution space is not limited by the heuristic floorplanning representation, allowing for significant improvements in both wirelength and alignment scores. 
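The operation set named in the STMixer abstract above (fully connected layers, convolutions, and residual paths, with no attention or softmax) admits a very small mixer block. The skeleton below is a non-spiking illustration of such an attention-free block, not STMixer itself.

```python
import torch.nn as nn

class TokenMixerBlock(nn.Module):
    """Attention-free mixer built only from fully connected layers and
    residual paths, the operation set asynchronous chips support."""
    def __init__(self, n_tokens, dim):
        super().__init__()
        self.token_fc = nn.Linear(n_tokens, n_tokens)   # mixes across tokens
        self.channel_fc = nn.Linear(dim, dim)           # mixes across channels

    def forward(self, x):                               # x: (B, n_tokens, dim)
        x = x + self.token_fc(x.transpose(1, 2)).transpose(1, 2)
        return x + self.channel_fc(x)
```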
Specifically, FlexPlanner models 3D FP based on multi-modalities, including vision, graph, and sequence. To address the non-trivial heuristic-dependent issue, we design a sophisticated policy network with a hybrid action space and an asynchronous layer decision mechanism, which together determine the versatile properties of each block. Experiments on the public benchmarks MCNC and GSRC show its effectiveness. We significantly improve the alignment score from 0.474 to 0.940 and achieve an average reduction of 16% in wirelength. Moreover, our method also demonstrates zero-shot transferability on unseen circuits.", "pdf": "https://openreview.net/pdf/8d8a7076514562c5c26608d048e73e99b85cf5cd.pdf"} {"title": "Training for Stable Explanation for Free", "url": "https://openreview.net/forum?id=HYa3eu8scG", "detail_url": "https://openreview.net/forum?id=HYa3eu8scG", "authors": "Chao Chen,Chenghua Guo,Rufeng Chen,Guixiang Ma,Ming Zeng,Xiangwen Liao,Xi Zhang,Sihong Xie", "tags": "NIPS 2024,Poster", "abstract": "To foster trust in machine learning models, explanations must be faithful and stable for consistent insights. Existing relevant works rely on the $\\ell_p$ distance for stability assessment, which diverges from human perception. Besides, existing adversarial training (AT), with its intensive computation, may lead to an arms race. To address these challenges, we introduce a novel metric to assess the stability of top-$k$ salient features. We introduce R2ET, which trains for stable explanations via an efficient and effective regularizer,\nand analyze R2ET through multi-objective optimization to prove the numerical and statistical stability of explanations. Moreover, theoretical connections between R2ET and certified robustness justify R2ET's stability under all attacks. Extensive experiments across various data modalities and model architectures show that R2ET achieves superior stability against stealthy attacks, and generalizes effectively across different explanation methods. The code can be found at https://github.com/ccha005/R2ET.", "pdf": "https://openreview.net/pdf/81b5c26f5ae17258b3972b5bceeb95dca95df683.pdf"} {"title": "Adaptive Visual Scene Understanding: Incremental Scene Graph Generation", "url": "https://openreview.net/forum?id=6lwKOvL3KN", "detail_url": "https://openreview.net/forum?id=6lwKOvL3KN", "authors": "Naitik Khandelwal,Xiao Liu,Mengmi Zhang", "tags": "NIPS 2024,Poster", "abstract": "Scene graph generation (SGG) analyzes images to extract meaningful information about objects and their relationships. In the dynamic visual world, it is crucial for AI systems to continuously detect new objects and establish their relationships with existing ones. Recently, numerous studies have focused on continual learning within the domains of object detection and image recognition. However, a limited amount of research focuses on a more challenging continual learning problem in SGG. This increased difficulty arises from the intricate interactions and dynamic relationships among objects, and their associated contexts. Thus, in continual learning, SGG models are often required to expand, modify, retain, and reason scene graphs within the process of adaptive visual scene understanding. To systematically explore Continual Scene Graph Generation (CSEGG), we present a comprehensive benchmark comprising three learning regimes: relationship incremental, scene incremental, and relationship generalization. Moreover, we introduce a \"Replays via Analysis by Synthesis\" method named RAS. 
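The top-$k$ stability metric motivating R2ET above can be illustrated with a simple overlap measure: how many of the $k$ most salient features survive a perturbation. This is a minimal instance of that idea, not necessarily the paper's exact metric.

```python
import torch

def topk_overlap(attr_clean, attr_perturbed, k):
    """Fraction of top-k salient features retained after a perturbation.
    attr_clean / attr_perturbed: 1-D attribution vectors for the same input."""
    top_a = set(torch.topk(attr_clean.abs(), k).indices.tolist())
    top_b = set(torch.topk(attr_perturbed.abs(), k).indices.tolist())
    return len(top_a & top_b) / k
```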
This approach leverages the scene graphs, decomposes and re-composes them to represent different scenes, and replays the synthesized scenes based on these compositional scene graphs. The replayed synthesized scenes act as a means to practice and refine proficiency in SGG in known and unknown environments. Our experimental results not only highlight the challenges of directly combining existing continual learning methods with SGG backbones but also demonstrate the effectiveness of our proposed approach, enhancing CSEGG efficiency while preserving privacy and keeping memory usage low. All data and source code will be made public.", "pdf": "https://openreview.net/pdf/8e577312ef669ee933f93b7513fcff4d94b2b848.pdf"} {"title": "A Unified Principle of Pessimism for Offline Reinforcement Learning under Model Mismatch", "url": "https://openreview.net/forum?id=cBY66CKEbq", "detail_url": "https://openreview.net/forum?id=cBY66CKEbq", "authors": "Yue Wang,Zhongchang Sun,Shaofeng Zou", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we address the challenges of offline reinforcement learning (RL) under model mismatch, where the agent aims to optimize its performance through an offline dataset that may not accurately represent the deployment environment. We identify two primary challenges under the setting: inaccurate model estimation due to limited data and performance degradation caused by the model mismatch between the dataset-collecting environment and the target deployment one. To tackle these issues, we propose a unified principle of pessimism using distributionally robust Markov decision processes. We carefully construct a robust MDP with a single uncertainty set to tackle both data sparsity and model mismatch, and demonstrate that the optimal robust policy enjoys a near-optimal sub-optimality gap under the target environment across three widely used uncertainty models: total variation, $\\chi^2$ divergence, and KL divergence. Our results improve upon or match the state-of-the-art performance under the total variation and KL divergence models, and provide the first result for the $\\chi^2$ divergence model.", "pdf": "https://openreview.net/pdf/38810ca3724709a525d0e93e46c48726d2f078ea.pdf"} {"title": "Not Just Object, But State: Compositional Incremental Learning without Forgetting", "url": "https://openreview.net/forum?id=2LRZhbTDtA", "detail_url": "https://openreview.net/forum?id=2LRZhbTDtA", "authors": "Yanyi Zhang,Binglin Qiu,Qi Jia,Yu Liu,Ran He", "tags": "NIPS 2024,Poster", "abstract": "Most incremental learners excessively prioritize object classes while neglecting the various kinds of states (e.g., color and material) attached to the objects. As a result, they are limited in the ability to model state-object compositionality accurately. To remedy this limitation, we propose a novel task called Compositional Incremental Learning (composition-IL), which enables the model to recognize a variety of state-object compositions in an incremental learning fashion. Owing to the lack of suitable datasets, we re-organize two existing datasets and tailor them for composition-IL. Then, we propose a prompt-based Composition Incremental Learner (CompILer) to overcome the ambiguous composition boundary. Specifically, we exploit multi-pool prompt learning and ensure inter-pool prompt discrepancy and intra-pool prompt diversity. Besides, we devise object-injected state prompting, which injects object prompts to guide the selection of state prompts. 
Furthermore, we fuse the selected prompts with a generalized-mean strategy to eliminate irrelevant information learned in the prompts. Extensive experiments on two datasets exhibit state-of-the-art performance achieved by CompILer. Code and datasets are available at: https://github.com/Yanyi-Zhang/CompILer.", "pdf": "https://openreview.net/pdf/51d8219cb9c1ac31c3c4a2b2c6de8b84dedb291c.pdf"} {"title": "AR-Pro: Counterfactual Explanations for Anomaly Repair with Formal Properties", "url": "https://openreview.net/forum?id=m0jZUvlKl7", "detail_url": "https://openreview.net/forum?id=m0jZUvlKl7", "authors": "Xiayan Ji,Anton Xue,Eric Wong,Oleg Sokolsky,Insup Lee", "tags": "NIPS 2024,Poster", "abstract": "Anomaly detection is widely used for identifying critical errors and suspicious behaviors, but current methods lack interpretability.\nWe leverage common properties of existing methods and recent advances in generative models to introduce counterfactual explanations for anomaly detection.\nGiven an input, we generate its counterfactual as a diffusion-based repair that shows what a non-anomalous version $\\textit{should have looked like}$.\nA key advantage of this approach is that it enables a domain-independent formal specification of explainability desiderata, offering a unified framework for generating and evaluating explanations.\nWe demonstrate the effectiveness of our anomaly explainability framework, AR-Pro, on vision (MVTec, VisA) and time-series (SWaT, WADI, HAI) anomaly datasets. The code used for the experiments is accessible at: https://github.com/xjiae/arpro.", "pdf": "https://openreview.net/pdf/97157b5b2316c4e700c814c741f1ee7797d12eb3.pdf"} {"title": "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search", "url": "https://openreview.net/forum?id=FfFcDNDNol", "detail_url": "https://openreview.net/forum?id=FfFcDNDNol", "authors": "Xuan Chen,Yuzhou Nie,Wenbo Guo,Xiangyu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent studies have developed jailbreaking attacks, which construct jailbreaking prompts to \"fool\" LLMs into responding to harmful questions.\nEarly-stage jailbreaking attacks require access to model internals or significant human effort. \nMore advanced attacks utilize genetic algorithms for automatic and black-box attacks.\nHowever, the random nature of genetic algorithms significantly limits the effectiveness of these attacks.\nIn this paper, we propose RLbreaker, a black-box jailbreaking attack driven by deep reinforcement learning (DRL).\nWe model jailbreaking as a search problem and design an RL agent to guide the search, which is more effective and has less randomness than stochastic search, such as genetic algorithms.\nSpecifically, we design a customized DRL system for the jailbreaking problem, including a novel reward function and a customized proximal policy optimization (PPO) algorithm.\nThrough extensive experiments, we demonstrate that RLbreaker is much more effective than existing jailbreaking attacks against six state-of-the-art (SOTA) LLMs. 
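The generalized-mean fusion in the CompILer abstract above is a standard power mean, $(\frac{1}{n}\sum_i x_i^p)^{1/p}$: with $p = 1$ it is the plain average, and as $p$ grows it approaches max-pooling. A minimal sketch, with the tensor layout assumed for illustration:

```python
import torch

def generalized_mean_fuse(prompts, p=3.0, eps=1e-6):
    """Fuse selected prompts with a generalized (power) mean over the pool
    dimension. prompts: (n_selected, length, dim); the clamp keeps the power
    well-defined. A sketch, not CompILer's exact fusion."""
    x = prompts.clamp(min=eps)
    return x.pow(p).mean(dim=0).pow(1.0 / p)
```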
\nWe also show that RLbreaker is robust against three SOTA defenses and its trained agents can transfer across different LLMs.\nWe further validate the key design choices of RLbreaker via a comprehensive ablation study.", "pdf": "https://openreview.net/pdf/313948d9d75c873021c802ab4e9c2c32b07e2e9c.pdf"} {"title": "$SE(3)$ Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation", "url": "https://openreview.net/forum?id=yRuJqoWoCs", "detail_url": "https://openreview.net/forum?id=yRuJqoWoCs", "authors": "Yinshuang Xu,Dian Chen,Katherine Liu,Sergey Zakharov,Rares Andrei Ambrus,Kostas Daniilidis,Vitor Campagnolo Guizilini", "tags": "NIPS 2024,Poster", "abstract": "Incorporating inductive bias by embedding geometric entities (such as rays) as input has proven successful in multi-view learning. However, the methods adopting this technique typically lack equivariance, which is crucial for effective 3D learning. Equivariance serves as a valuable inductive prior, aiding in the generation of robust multi-view features for 3D scene understanding. In this paper, we explore the application of equivariant multi-view learning to depth estimation, not only recognizing its significance for computer vision and robotics but also addressing the limitations of previous research. Most prior studies have either overlooked equivariance in this setting or achieved only approximate equivariance through data augmentation, which often leads to inconsistencies across different reference frames. To address this issue, we propose to embed $SE(3)$ equivariance into the Perceiver IO architecture. We employ Spherical Harmonics for positional encoding to ensure 3D rotation equivariance, and develop a specialized equivariant encoder and decoder within the Perceiver IO architecture. To validate our model, we apply it to the task of stereo depth estimation, achieving state-of-the-art results on real-world datasets without explicit geometric constraints or extensive data augmentation.", "pdf": "https://openreview.net/pdf/e8039c44b88c7c3803572ada21d4746ef0778d7d.pdf"} {"title": "Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance", "url": "https://openreview.net/forum?id=wJaCsnT9UE", "detail_url": "https://openreview.net/forum?id=wJaCsnT9UE", "authors": "Haiquan Lu,Xiaotian Liu,Yefan Zhou,Qunli Li,Kurt Keutzer,Michael W. Mahoney,Yujun Yan,Huanrui Yang,Yaoqing Yang", "tags": "NIPS 2024,Poster", "abstract": "Recent studies on deep ensembles have identified the sharpness of the local minima of individual learners and the diversity of the ensemble members as key factors in improving test-time performance. Building on this, our study investigates the interplay between sharpness and diversity within deep ensembles, illustrating their crucial role in robust generalization to both in-distribution (ID) and out-of-distribution (OOD) data. We discover a trade-off between sharpness and diversity: minimizing the sharpness in the loss landscape tends to diminish the diversity of individual members within the ensemble, adversely affecting the ensemble's improvement. The trade-off is justified through our rigorous theoretical analysis and verified empirically through extensive experiments. To address the issue of reduced diversity, we introduce SharpBalance, a novel training approach that balances sharpness and diversity within ensembles. Theoretically, we show that our training strategy achieves a better sharpness-diversity trade-off. 
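The sharpness half of the SharpBalance trade-off above is commonly probed with a one-step SAM-style perturbation: how much does the loss rise at $w + \rho\, g/\lVert g \rVert$? The sketch below is a rough stand-in for that notion, not the paper's exact measure.

```python
import torch

def sharpness_proxy(model, loss_fn, x, y, rho=0.05):
    """Loss increase at the SAM perturbation w + rho * g / ||g||."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss0 = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss0, params)
    scale = (rho / (torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12)).item()
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(g, alpha=scale)           # step to the perturbed weights
        loss1 = loss_fn(model(x), y)
        for p, g in zip(params, grads):
            p.sub_(g, alpha=scale)           # restore the original weights
    return (loss1 - loss0).item()
```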
Empirically, we conduct comprehensive evaluations on various datasets (CIFAR-10, CIFAR-100, TinyImageNet) and show that SharpBalance not only effectively improves the sharpness-diversity trade-off but also significantly improves ensemble performance in ID and OOD scenarios.", "pdf": "https://openreview.net/pdf/94d83140bd5537ee6cc7018fa3fdf7107c58db7b.pdf"} {"title": "Diversify, Contextualize, and Adapt: Efficient Entropy Modeling for Neural Image Codec", "url": "https://openreview.net/forum?id=gvg8pExqdd", "detail_url": "https://openreview.net/forum?id=gvg8pExqdd", "authors": "Jun-Hyuk Kim,Seungeon Kim,Won-Hee Lee,Dokwan Oh", "tags": "NIPS 2024,Poster", "abstract": "Designing a fast and effective entropy model is challenging but essential for practical application of neural codecs. Beyond spatial autoregressive entropy models, more efficient backward adaptation-based entropy models have been recently developed. They not only reduce decoding time by using a smaller number of modeling steps but also maintain or even improve rate-distortion performance by leveraging more diverse contexts for backward adaptation. Despite their significant progress, we argue that their performance has been limited by the simple adoption of the design convention for forward adaptation: using only a single type of hyper latent representation, which does not provide sufficient contextual information, especially in the first modeling step. In this paper, we propose a simple yet effective entropy modeling framework that leverages sufficient contexts for forward adaptation without compromising on bit-rate. Specifically, we introduce a strategy of diversifying hyper latent representations for forward adaptation, i.e., using two additional types of contexts along with the existing single type of context. In addition, we present a method to effectively use the diverse contexts for contextualizing the current elements to be encoded/decoded. By addressing the limitation of the previous approach, our proposed framework leads to significant performance improvements. Experimental results on popular datasets show that our proposed framework consistently improves rate-distortion performance across various bit-rate regions, e.g., a 3.73% BD-rate gain over the state-of-the-art baseline on the Kodak dataset.", "pdf": "https://openreview.net/pdf/0c4ae934d344ac6e45a6fbaaa1c79a197aca1ea9.pdf"} {"title": "Absorb & Escape: Overcoming Single Model Limitations in Generating Heterogeneous Genomic Sequences", "url": "https://openreview.net/forum?id=XHTl2k1LYk", "detail_url": "https://openreview.net/forum?id=XHTl2k1LYk", "authors": "Zehui Li,Yuhao Ni,Guoxuan Xia,William Beardall,Akashaditya Das,Guy-Bart Stan,Yiren Zhao", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in immunology and synthetic biology have accelerated the development of deep generative methods for DNA sequence design. Two dominant approaches in this field are AutoRegressive (AR) models and Diffusion Models (DMs). However, genomic sequences are functionally heterogeneous, consisting of multiple connected regions (e.g., Promoter Regions, Exons, and Introns) where elements within each region come from the same probability distribution, but the overall sequence is non-homogeneous. This heterogeneous nature presents challenges for a single model to accurately generate genomic sequences. 
In this paper, we analyze the properties of AR models and DMs in heterogeneous genomic sequence generation, pointing out crucial limitations in both methods: (i) AR models capture the underlying distribution of data by factorizing and learning the transition probability but fail to capture the global properties of DNA sequences. (ii) DMs learn to recover the global distribution but tend to produce errors at the base pair level. To overcome the limitations of both approaches, we propose a post-training sampling method, termed Absorb & Escape (A&E), to perform compositional generation from AR models and DMs. This approach starts with samples generated by DMs and refines the sample quality using an AR model through the alternation of the Absorb and Escape steps. To assess the quality of generated sequences, we conduct extensive experiments on 15 species for conditional and unconditional DNA generation. The experimental results from motif distribution, diversity checks, and genome integration tests unequivocally show that A&E outperforms state-of-the-art AR models and DMs in genomic sequence generation. A&E avoids the slowness of traditional MCMC sampling from distributions composed with Energy-Based Models, while obtaining higher-quality samples than single models. Our research sheds light on the limitations of current single-model approaches in DNA generation and provides a simple but effective solution for heterogeneous sequence generation. Code is available at the [Github Repo](https://github.com/Zehui127/Absorb-Escape).", "pdf": "https://openreview.net/pdf/555cf2c047d3d5fa2cdaa0359854746ab4eab8c3.pdf"} {"title": "Understanding the Transferability of Representations via Task-Relatedness", "url": "https://openreview.net/forum?id=6cdYMkxxNt", "detail_url": "https://openreview.net/forum?id=6cdYMkxxNt", "authors": "Akshay Mehra,Yunbei Zhang,Jihun Hamm", "tags": "NIPS 2024,Poster", "abstract": "The growing popularity of transfer learning, due to the availability of models pre-trained on vast amounts of data, makes it imperative to understand when the knowledge of these pre-trained models can be transferred to obtain high-performing models on downstream target tasks. However, the exact conditions under which transfer learning succeeds in a cross-domain cross-task setting are still poorly understood. To bridge this gap, we propose a novel analysis of the transferability of the representations of pre-trained models to downstream tasks in terms of their relatedness to a given reference task. Our analysis leads to an upper bound on transferability in terms of task-relatedness, quantified using the difference between the class priors, label sets, and features of the two tasks. Our experiments using state-of-the-art pre-trained models show the effectiveness of task-relatedness in explaining transferability on various vision and language tasks. The efficient computability of task-relatedness even without labels of the target task and its high correlation with the model's accuracy after end-to-end fine-tuning on the target task makes it a useful metric for transferability estimation. 
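A schematic of the Absorb & Escape alternation described above: start from a diffusion-model sample, scan it with an AR model, and let the AR model rewrite spans it finds improbable. Both `ar_model.prob` and `ar_model.sample_next` are hypothetical APIs used purely for illustration; this is not the authors' interface or exact algorithm.

```python
def absorb_escape(dm_sample, ar_model, threshold):
    """Refine a diffusion-model sample with an AR model (schematic only)."""
    seq = list(dm_sample)                                # e.g., DNA bases
    for i in range(len(seq)):
        if ar_model.prob(seq[:i], seq[i]) < threshold:   # Absorb: AR takes over
            seq[i] = ar_model.sample_next(seq[:i])       # Escape once confident
    return "".join(seq)
```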
Our empirical results of using task-relatedness on the problem of selecting the best pre-trained model from a model zoo for a target task highlight its utility for practical problems.", "pdf": "https://openreview.net/pdf/2670463fc5279f8432dc6c2f866d25580c7c8111.pdf"} {"title": "Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning?", "url": "https://openreview.net/forum?id=yWSxjlFsmX", "detail_url": "https://openreview.net/forum?id=yWSxjlFsmX", "authors": "Yang Dai,Oubo Ma,Longfei Zhang,Xingxing Liang,Shengchao Hu,Mengzhu Wang,Shouling Ji,Jincai Huang,Li Shen", "tags": "NIPS 2024,Poster", "abstract": "Transformer-based trajectory optimization methods have demonstrated exceptional performance in offline Reinforcement Learning (offline RL). Yet, they pose challenges due to their substantial parameter size and limited scalability, which is particularly critical in sequential decision-making scenarios where resources are constrained, such as in robots and drones with limited computational power. Mamba, a promising new linear-time sequence model, offers performance on par with transformers while using substantially fewer parameters on long sequences. As it remains unclear whether Mamba is compatible with trajectory optimization, this work aims to conduct comprehensive experiments to explore the potential of Decision Mamba (dubbed DeMa) in offline RL from the aspects of data structures and essential components, with the following insights: (1) Long sequences impose a significant computational burden without contributing to performance improvements since DeMa's focus on earlier parts of the sequence diminishes approximately exponentially. Consequently, we introduce a Transformer-like DeMa as opposed to an RNN-like DeMa. (2) For the components of DeMa, we identify the hidden attention mechanism as a critical factor in its success, which can also work well with other residual structures and does not require position embedding. Extensive evaluations demonstrate that our specially designed DeMa is compatible with trajectory optimization and surpasses previous methods, outperforming Decision Transformer (DT) while using 30% fewer parameters in Atari, and exceeding DT with only a quarter of the parameters in MuJoCo.", "pdf": "https://openreview.net/pdf/e8f05bc8b78365623dc8f45e047f65b46390a923.pdf"} {"title": "FM-Delta: Lossless Compression for Storing Massive Fine-tuned Foundation Models", "url": "https://openreview.net/forum?id=EMstukR5J4", "detail_url": "https://openreview.net/forum?id=EMstukR5J4", "authors": "Wanyi Ning,Jingyu Wang,Qi Qi,Mengde Zhu,Haifeng Sun,Daixuan Cheng,Jianxin Liao,Ce Zhang", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained foundation models, particularly large language models, have achieved remarkable success and led to massive fine-tuned variants. These models are commonly fine-tuned locally and then uploaded by users to cloud platforms such as HuggingFace for secure storage. However, the huge number of models and their billion-level parameters impose heavy storage overhead on clouds with limited resources. Our empirical and theoretical analysis reveals that most fine-tuned models in the cloud have a small difference (delta) from their pre-trained models. To this end, we propose a novel lossless compression scheme, FM-Delta, specifically for storing massive fine-tuned models in the cloud. FM-Delta maps fine-tuned and pre-trained model parameters into integers with the same bits, and entropy codes their integer delta. 
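The FM-Delta mapping just described can be pictured as: reinterpret same-width float parameters as integers bit-for-bit, take the integer difference, and entropy-code it. A minimal sketch follows, with zlib standing in for the paper's actual entropy coder; the round trip is lossless because integer wraparound cancels.

```python
import numpy as np
import zlib

def compress_delta(finetuned, pretrained):
    """finetuned/pretrained: 1-D float32 arrays of equal length."""
    ft = finetuned.astype(np.float32).view(np.int32)   # bit-preserving view
    pt = pretrained.astype(np.float32).view(np.int32)
    delta = ft - pt                  # small when weights barely changed
    return zlib.compress(delta.tobytes())

def decompress_delta(blob, pretrained):
    delta = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return (pretrained.astype(np.float32).view(np.int32) + delta).view(np.float32)
```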
In this way, the cloud only needs to store one uncompressed pre-trained model together with the compressed fine-tuned models. \nExtensive experiments have demonstrated that FM-Delta efficiently reduces cloud storage consumption for massive fine-tuned models by an average of around 50% with only negligible additional time in most end-to-end cases. For example, on up to 10 fine-tuned models in the GPT-NeoX-20B family, FM-Delta reduces the original storage requirement from 423GB to 205GB, significantly saving cloud storage costs.", "pdf": "https://openreview.net/pdf/e7ce2019cf6f29a80da1916ff7643c91ec3db82e.pdf"} {"title": "AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models", "url": "https://openreview.net/forum?id=fHq4x2YXVv", "detail_url": "https://openreview.net/forum?id=fHq4x2YXVv", "authors": "Haiquan Lu,Yefan Zhou,Shiwei Liu,Zhangyang Wang,Michael W. Mahoney,Yaoqing Yang", "tags": "NIPS 2024,Poster", "abstract": "Recent work on pruning large language models (LLMs) has shown that one can eliminate a large number of parameters without compromising performance, making pruning a promising strategy to reduce LLM model size. Existing LLM pruning strategies typically assign uniform pruning ratios across layers, limiting overall pruning ability, and recent work on layerwise pruning of LLMs is often based on heuristics that can easily lead to suboptimal performance. In this paper, we leverage Heavy-Tailed Self-Regularization (HT-SR) Theory, in particular the shape of empirical spectral densities (ESDs) of weight matrices, to design improved layerwise pruning ratios for LLMs. Our analysis reveals a wide variability in how well-trained, and thus relatedly how prunable, different layers of an LLM are. Based on this, we propose AlphaPruning, which uses shape metrics to allocate layerwise sparsity ratios in a more theoretically-principled manner. AlphaPruning can be used in conjunction with multiple existing LLM pruning methods. Our empirical results show that AlphaPruning prunes LLaMA-7B to 80% sparsity while maintaining reasonable perplexity, marking a first in the literature on LLMs.", "pdf": "https://openreview.net/pdf/06ace2f66ba88b609c2c5a94b3cd44dc321f1f47.pdf"} {"title": "Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient", "url": "https://openreview.net/forum?id=xqrlhsbcwN", "detail_url": "https://openreview.net/forum?id=xqrlhsbcwN", "authors": "ShaoQi Wang,Chunjie Yang,Siwei Lou", "tags": "NIPS 2024,Poster", "abstract": "Neural networks (NN) are extensively studied in cutting-edge soft sensor models due to their feature extraction and function approximation capabilities. Current research into network-based methods primarily focuses on models' offline accuracy. Notably, in the industrial soft sensor context, online optimization stability and interpretability are prioritized, followed by accuracy. This requires a clearer understanding of the network's training process. To bridge this gap, we propose a novel NN named the Approximated Orthogonal Projection Unit (AOPU), which has a solid mathematical basis and exhibits superior training stability. AOPU truncates the gradient backpropagation at dual parameters, optimizes the trackable parameter updates, and enhances the robustness of training. We further prove that AOPU attains minimum variance estimation in NNs, wherein the truncated gradient approximates the natural gradient. 
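The metric-to-ratio allocation in the AlphaPruning abstract above can be sketched as mapping each layer's ESD shape metric to a sparsity ratio while keeping a fixed global average. This is a hedged illustration of metric-driven allocation, not AlphaPruning's exact rule; `layer_alphas` is assumed to be a precomputed per-layer power-law-fit score.

```python
import numpy as np

def allocate_sparsity(layer_alphas, target=0.7, spread=0.2):
    """Map per-layer ESD shape metrics to layerwise sparsity ratios whose
    mean equals `target` (zero-mean deviations, clipped to [0, 0.99])."""
    a = np.asarray(layer_alphas, dtype=float)
    t = (a - a.mean()) / (np.abs(a - a.mean()).max() + 1e-12)
    return np.clip(target + spread * t, 0.0, 0.99)
```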
Empirical results on two chemical process datasets clearly show that AOPU outperforms other models in achieving stable convergence, marking a significant advancement in the soft sensor field.", "pdf": "https://openreview.net/pdf/b37fda1b4909f85ae3e295ff8e8ca5de81f5920d.pdf"} {"title": "F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning", "url": "https://openreview.net/forum?id=rGEDFS3emy", "detail_url": "https://openreview.net/forum?id=rGEDFS3emy", "authors": "Huiping Zhuang,Yuchen Liu,Run He,Kai Tong,Ziqian Zeng,Cen Chen,Yi Wang,Lap-Pui Chau", "tags": "NIPS 2024,Poster", "abstract": "Online Class Incremental Learning (OCIL) aims to train models incrementally, where data arrive in mini-batches, and previous data are not accessible. A major challenge in OCIL is Catastrophic Forgetting, i.e., the loss of previously learned knowledge. Among existing baselines, replay-based methods show competitive results but require extra memory for storing exemplars, while exemplar-free (i.e., data need not be stored for replay in production) methods are resource-friendly but often lack accuracy. In this paper, we propose an exemplar-free approach, Forward-only Online Analytic Learning (F-OAL). Unlike traditional methods, F-OAL does not rely on back-propagation and is forward-only, significantly reducing memory usage and computational time. Cooperating with a pre-trained frozen encoder with Feature Fusion, F-OAL only needs to update a linear classifier by recursive least squares. This approach simultaneously achieves high accuracy and low resource consumption. Extensive experiments on benchmark datasets demonstrate F-OAL's robust performance in OCIL scenarios. Code is available at: https://github.com/liuyuchen-cz/F-OAL", "pdf": "https://openreview.net/pdf/e226708f2029c076b49ce0f8780b0b25c1a15cb8.pdf"} {"title": "Active Perception for Grasp Detection via Neural Graspness Field", "url": "https://openreview.net/forum?id=6FYh6gxzPf", "detail_url": "https://openreview.net/forum?id=6FYh6gxzPf", "authors": "Haoxiang Ma,Modi Shi,Boyang Gao,Di Huang", "tags": "NIPS 2024,Poster", "abstract": "This paper tackles the challenge of active perception for robotic grasp detection in cluttered environments. Incomplete 3D geometry information can negatively affect the performance of learning-based grasp detection methods, and scanning the scene from multiple views introduces significant time costs. To achieve reliable grasping performance with efficient camera movement, we propose an active grasp detection framework based on the Neural Graspness Field (NGF), which models the scene incrementally and facilitates next-best-view planning. Constructed in real-time as the camera moves, the NGF effectively models the grasp distribution in 3D space by rendering graspness predictions from each view. For next-best-view planning, we aim to reduce the uncertainty of the NGF through a graspness inconsistency-guided policy, selecting views based on discrepancies between NGF outputs and a pre-trained graspness network. Additionally, we present a neural graspness sampling method that decodes graspness values from the NGF to improve grasp pose detection results. Extensive experiments on the GraspNet-1Billion benchmark demonstrate significant performance improvements compared to previous works. 
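The recursive least squares update named in the F-OAL abstract above is a classical, back-propagation-free way to fit a linear head one mini-sample at a time. Below is a textbook RLS sketch of that update style (via the Sherman-Morrison identity), not F-OAL itself.

```python
import numpy as np

class RLSClassifier:
    """Recursive least-squares linear head: W minimizes the regularized
    squared error over all samples seen so far, updated one sample at a time."""
    def __init__(self, d, c, lam=1.0):
        self.W = np.zeros((d, c))
        self.P = np.eye(d) / lam        # inverse regularized covariance

    def update(self, x, y):             # x: (d,) feature, y: (c,) one-hot label
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # gain vector (Sherman-Morrison)
        self.W += np.outer(k, y - x @ self.W)
        self.P -= np.outer(k, Px)

    def predict(self, x):
        return int((x @ self.W).argmax())
```

Each update is O(d^2) with no stored exemplars, which is why this style of learner has a low memory footprint in the online class-incremental setting.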
Real-world experiments show that our method achieves a superior trade-off between grasping performance and time costs.", "pdf": "https://openreview.net/pdf/eba3836261b94868013e1ac7d9ec431d41bfc9c5.pdf"} {"title": "An Autoencoder-Like Nonnegative Matrix Co-Factorization for Improved Student Cognitive Modeling", "url": "https://openreview.net/forum?id=8UqyWNsnyA", "detail_url": "https://openreview.net/forum?id=8UqyWNsnyA", "authors": "Shenbao Yu,Yinghui Pan,Yifeng Zeng,Prashant Doshi,Guoquan Liu,Kim-Leng Poh,Mingwei Lin", "tags": "NIPS 2024,Poster", "abstract": "Student cognitive modeling (SCM) is a fundamental task in intelligent education, with applications ranging from personalized learning to educational resource allocation. By exploiting students' response logs, SCM aims to predict their exercise performance as well as estimate knowledge proficiency in a subject. Data mining approaches such as matrix factorization can obtain high accuracy in predicting student performance on exercises, but the knowledge proficiency is unknown or poorly estimated. The situation is further exacerbated if only sparse interactions exist between exercises and students (or knowledge concepts). To solve this dilemma, we root monotonicity (a fundamental psychometric theory on educational assessments) in a co-factorization framework and present an autoencoder-like nonnegative matrix co-factorization (AE-NMCF), which improves the accuracy of estimating the student's knowledge proficiency via an encoder-decoder learning pipeline. The resulting estimation problem is nonconvex with nonnegative constraints. We introduce a projected gradient method based on block coordinate descent with Lipschitz constants and guarantee the method's theoretical convergence. Experiments on several real-world data sets demonstrate the efficacy of our approach in terms of both performance prediction accuracy and knowledge estimation ability, when compared with existing student cognitive models.", "pdf": "https://openreview.net/pdf/c79207e79a8d7cbd5b67bf4312ea7ea8774e538b.pdf"} {"title": "Fair Kernel K-Means: from Single Kernel to Multiple Kernel", "url": "https://openreview.net/forum?id=CehOqpvOxG", "detail_url": "https://openreview.net/forum?id=CehOqpvOxG", "authors": "Peng Zhou,Rongwen Li,Liang Du", "tags": "NIPS 2024,Poster", "abstract": "Kernel k-means has been widely studied in machine learning. However, existing kernel k-means methods often ignore the *fairness* issue, which may cause discrimination. To address this issue, in this paper, we propose a novel Fair Kernel K-Means (FKKM) framework. In this framework, we first propose a new fairness regularization term that can lead to a fair partition of data. The carefully designed fairness regularization term has a similar form to kernel k-means and can be seamlessly integrated into the kernel k-means framework. Then, we extend this method to the multiple kernel setting, leading to a Fair Multiple Kernel K-Means (FMKKM) method. We also provide some theoretical analysis of the generalization error bound, and based on this bound we give a strategy to set the hyper-parameter, which makes the proposed methods easy to use. 
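The projected gradient method with nonnegative constraints in the AE-NMCF abstract above reduces, at each block update, to a gradient step followed by projection onto the nonnegative orthant with a $1/L$ step size. A generic sketch of that basic update, with a nonnegative least-squares toy example:

```python
import numpy as np

def projected_gradient_nonneg(X0, grad_fn, lipschitz, iters=200):
    """Projected gradient descent onto X >= 0 with a 1/L step size."""
    X = X0.copy()
    for _ in range(iters):
        X = np.maximum(X - grad_fn(X) / lipschitz, 0.0)  # step, then project
    return X

# Example: min ||A X - B||_F^2 over X >= 0; grad = 2 A^T (A X - B),
# with Lipschitz constant L = 2 * sigma_max(A)^2.
A = np.abs(np.random.randn(20, 5)); B = np.abs(np.random.randn(20, 3))
L = 2 * np.linalg.norm(A, 2) ** 2
X = projected_gradient_nonneg(np.zeros((5, 3)), lambda X: 2 * A.T @ (A @ X - B), L)
```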
Finally, we conduct extensive experiments in both the single kernel and multiple kernel settings to compare the proposed methods with state-of-the-art methods and demonstrate their effectiveness.", "pdf": "https://openreview.net/pdf/059381d9ece0c7c785fc84d391f729f9478207a2.pdf"} {"title": "STONE: A Submodular Optimization Framework for Active 3D Object Detection", "url": "https://openreview.net/forum?id=EQHQzRJy75", "detail_url": "https://openreview.net/forum?id=EQHQzRJy75", "authors": "RUIYU MAO,Sarthak Kumar Maharana,Rishabh K Iyer,Yunhui Guo", "tags": "NIPS 2024,Poster", "abstract": "3D object detection is fundamentally important for various emerging applications, including autonomous driving and robotics. A key requirement for training an accurate 3D object detector is the availability of a large amount of LiDAR-based point cloud data. Unfortunately, labeling point cloud data is extremely challenging, as accurate 3D bounding boxes and semantic labels are required for each potential object. This paper proposes a unified active 3D object detection framework that greatly reduces the labeling cost of training 3D object detectors. Our framework is based on a novel formulation of submodular optimization, specifically tailored to the problem of active 3D object detection. In particular, we address two fundamental challenges associated with active 3D object detection: data imbalance and the need to cover the distribution of the data, including LiDAR-based point cloud data of varying difficulty levels. Extensive experiments demonstrate that our method achieves state-of-the-art performance with high computational efficiency compared to existing active learning methods. The code is available at [https://github.com/RuiyuM/STONE](https://github.com/RuiyuM/STONE)", "pdf": "https://openreview.net/pdf/766f68b508af225d0f0c51faa951bacc71e1769d.pdf"} {"title": "Federated Ensemble-Directed Offline Reinforcement Learning", "url": "https://openreview.net/forum?id=ypaqE8UwsC", "detail_url": "https://openreview.net/forum?id=ypaqE8UwsC", "authors": "Desik Rengarajan,Nitin Ragothaman,Dileep Kalathil,Srinivas Shakkottai", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of federated offline reinforcement learning (RL), a scenario under which distributed learning agents must collaboratively learn a high-quality control policy only using small pre-collected datasets generated according to different unknown behavior policies. Naïvely combining a standard offline RL approach with a standard federated learning approach to solve this problem can lead to poorly performing policies. In response, we develop the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), which distills the collective wisdom of the clients using an ensemble learning approach. We develop the FEDORA codebase to utilize distributed compute resources on a federated learning platform. We show that FEDORA significantly outperforms other approaches, including offline RL over the combined data pool, in various complex continuous control environments and real-world datasets. Finally, we demonstrate the performance of FEDORA in the real world on a mobile robot. 
We provide our code and a video of our experiments at \\url{https://github.com/DesikRengarajan/FEDORA}.", "pdf": "https://openreview.net/pdf/ebb3c4a49c535b4cabc2bd5d7686f30b108d2a55.pdf"} {"title": "Reconstructing the Image Stitching Pipeline: Integrating Fusion and Rectangling into a Unified Inpainting Model", "url": "https://openreview.net/forum?id=ZViYPzh9Wq", "detail_url": "https://openreview.net/forum?id=ZViYPzh9Wq", "authors": "Xieziqi,Weidong Zhao,XianhuiLiu,Jian Zhao,Ning Jia", "tags": "NIPS 2024,Poster", "abstract": "Deep learning-based image stitching pipelines are typically divided into three cascading stages: registration, fusion, and rectangling. Each stage requires its own network training and is tightly coupled to the others, leading to error propagation and posing significant challenges to parameter tuning and system stability. This paper proposes the Simple and Robust Stitcher (SRStitcher), which \nrevolutionizes the image stitching pipeline by simplifying the fusion and rectangling stages into a unified inpainting model, requiring no model training or fine-tuning. We reformulate the problem definitions of the fusion and rectangling stages and demonstrate that they can be effectively integrated into an inpainting task. Furthermore, we design the weighted masks to guide the reverse process in a pre-trained large-scale diffusion model, implementing this integrated inpainting task in a single inference. Through extensive experimentation, we verify the interpretability and generalization capabilities of this unified model, demonstrating that SRStitcher outperforms state-of-the-art methods in both performance and stability.", "pdf": "https://openreview.net/pdf/f42d677e7e33dfa236ebf001987cc08db9e7a5a6.pdf"} {"title": "Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning", "url": "https://openreview.net/forum?id=yAAQWBMGiT", "detail_url": "https://openreview.net/forum?id=yAAQWBMGiT", "authors": "Yijun Dong,Hoang Phan,Xiang Pan,Qi Lei", "tags": "NIPS 2024,Poster", "abstract": "We revisit data selection in a modern context of finetuning from a fundamental perspective. Extending the classical wisdom of variance minimization in low dimensions to high-dimensional finetuning, our generalization analysis unveils the importance of additionally reducing bias induced by low-rank approximation. Inspired by the variance-bias tradeoff in high dimensions from the theory, we introduce Sketchy Moment Matching (SkMM), a scalable data selection scheme with two stages. (i) First, the bias is controlled using gradient sketching that explores the finetuning parameter space for an informative low-dimensional subspace $\\mathcal{S}$; (ii) then the variance is reduced over $\\mathcal{S}$ via moment matching between the original and selected datasets. Theoretically, we show that gradient sketching is fast and provably accurate: selecting $n$ samples by reducing variance over $\\mathcal{S}$ preserves the fast-rate generalization $O(\\dim(\\mathcal{S})/n)$, independent of the parameter dimension. 
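The two-stage SkMM scheme in the abstract above (sketch gradients into a low-dimensional subspace, then match moments between the full and selected sets) can be illustrated with a simplified first-moment-only variant. This is a sketch under those simplifying assumptions, not SkMM itself; `grads` is assumed to hold precomputed per-sample gradients.

```python
import numpy as np

def sketch_and_match(grads, sketch_dim, n_select, seed=0):
    """(i) Random-project per-sample gradients (n, d) -> (n, sketch_dim);
    (ii) greedily pick samples whose running mean tracks the full sketched mean."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((grads.shape[1], sketch_dim)) / np.sqrt(sketch_dim)
    Z = grads @ P                        # sketched gradients
    target = Z.mean(axis=0)
    chosen, running = [], np.zeros(sketch_dim)
    remaining = set(range(Z.shape[0]))
    for t in range(1, n_select + 1):
        i = min(remaining, key=lambda j: np.linalg.norm((running + Z[j]) / t - target))
        chosen.append(i); remaining.remove(i); running += Z[i]
    return chosen
```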
Empirically, we concretize the variance-bias balance via synthetic experiments and demonstrate the effectiveness of SkMM for finetuning in real vision tasks.", "pdf": "https://openreview.net/pdf/342630203974cf3966cb02c9c856602a6fdba381.pdf"} {"title": "Learning Successor Features the Simple Way", "url": "https://openreview.net/forum?id=rI7oZj1WMc", "detail_url": "https://openreview.net/forum?id=rI7oZj1WMc", "authors": "Raymond Chua,Arna Ghosh,Christos Kaplanis,Blake Aaron Richards,Doina Precup", "tags": "NIPS 2024,Poster", "abstract": "In Deep Reinforcement Learning (RL), it is a challenge to learn representations that do not exhibit catastrophic forgetting or interference in non-stationary environments. Successor Features (SFs) offer a potential solution to this challenge. However, canonical techniques for learning SFs from pixel-level observations often lead to representation collapse, wherein representations degenerate and fail to capture meaningful variations in the data. More recent methods for learning SFs can avoid representation collapse, but they often involve complex losses and multiple learning phases, reducing their efficiency. We introduce a novel, simple method for learning SFs directly from pixels. Our approach uses a combination of a Temporal-difference (TD) loss and a reward prediction loss, which together capture the basic mathematical definition of SFs. We show that our approach matches or outperforms existing SF learning techniques in both 2D (Minigrid) and 3D (Miniworld) mazes, for both single and continual learning scenarios. Moreover, our technique is efficient, and can reach higher levels of performance in less time than other approaches. Our work provides a new, streamlined technique for learning SFs directly from pixel observations, with no pretraining required.", "pdf": "https://openreview.net/pdf/284ead1f3023f7d437837821d112a4abd2d00a1f.pdf"} {"title": "RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks", "url": "https://openreview.net/forum?id=ejWvCpLuwu", "detail_url": "https://openreview.net/forum?id=ejWvCpLuwu", "authors": "Jiaxing Zhang,Zhuomin Chen,hao mei,Longchao Da,Dongsheng Luo,Hua Wei", "tags": "NIPS 2024,Poster", "abstract": "Graph regression is a fundamental task that has gained significant attention in\nvarious graph learning tasks. However, the inference process is often not easily\ninterpretable. Current explanation techniques are limited to understanding Graph\nNeural Network (GNN) behaviors in classification tasks, leaving an explanation gap\nfor graph regression models. In this work, we propose a novel explanation method\nto interpret the graph regression models (XAIG-R). Our method addresses the\ndistribution shift problem and the continuously ordered decision boundary issues\nthat prevent existing methods from being applied to regression tasks. We\nintroduce a novel objective based on the graph information bottleneck theory (GIB)\nand a new mix-up framework, which can support various GNNs and explainers\nin a model-agnostic manner. Additionally, we present a self-supervised learning\nstrategy to tackle the continuously ordered labels in regression tasks. 
We evaluate\nour proposed method on three benchmark datasets and a real-life dataset introduced\nby us, and extensive experiments demonstrate its effectiveness in interpreting GNN\nmodels in regression tasks.", "pdf": "https://openreview.net/pdf/6e3e9217ecd49eb2c40a91a33255c5dc4634e23f.pdf"} {"title": "Robust group and simultaneous inferences for high-dimensional single index model", "url": "https://openreview.net/forum?id=MelYGfpy4x", "detail_url": "https://openreview.net/forum?id=MelYGfpy4x", "authors": "Weichao Yang,Hongwei Shi,Xu Guo,Changliang Zou", "tags": "NIPS 2024,Poster", "abstract": "The high-dimensional single index model (SIM), which assumes that the response is independent of the predictors given a linear combination of predictors, has drawn attention due to its flexibility and interpretability, but its efficiency is adversely affected by outlying observations and heavy-tailed distributions. This paper introduces a robust procedure by recasting the SIM into a pseudo-linear model with transformed responses. It relaxes the distributional conditions on random errors from sub-Gaussian to more general distributions and thus it is robust with substantial efficiency gain for heavy-tailed random errors. Under this paradigm, we provide asymptotically honest group inference procedures based on the idea of orthogonalization, which enjoys the feature that it does not require the zero and nonzero coefficients to be well-separated. Asymptotic null distribution and bootstrap implementation are both established. Moreover, we develop a multiple testing procedure for determining if the individual coefficients are relevant simultaneously, and show that it is able to control the false discovery rate asymptotically. Numerical results indicate that the new procedures can be highly competitive among existing methods, especially for heavy-tailed errors.", "pdf": "https://openreview.net/pdf/8062314ea5726569089d3991f99e67f55c8694f4.pdf"} {"title": "SparseLLM: Towards Global Pruning of Pre-trained Language Models", "url": "https://openreview.net/forum?id=oXHyYHp4Zb", "detail_url": "https://openreview.net/forum?id=oXHyYHp4Zb", "authors": "Guangji Bai,Yijiang Li,Chen Ling,Kibaek Kim,Liang Zhao", "tags": "NIPS 2024,Poster", "abstract": "The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands. Pruning has emerged as a pivotal compression strategy, introducing sparsity to enhance both memory and computational efficiency. Yet, traditional global pruning is impractical for LLMs due to scalability issues, while local pruning, despite its efficiency, leads to suboptimal solutions. Addressing these challenges, we propose *SparseLLM*, a novel framework that redefines the global pruning process into manageable, coordinated subproblems, allowing for resource-efficient optimization with global optimality. SparseLLM's approach, which conceptualizes LLMs as a chain of modular functions and leverages auxiliary variables for problem decomposition, not only facilitates a pragmatic application on LLMs but also demonstrates significant performance improvements, particularly in high-sparsity regimes where it surpasses current state-of-the-art methods. 
Our source code is publicly available at https://github.com/BaiTheBest/SparseLLM.", "pdf": "https://openreview.net/pdf/fccc47c2f07ff5d77b691da1a0395609ed3b4e9f.pdf"} {"title": "Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models", "url": "https://openreview.net/forum?id=TGC7HNf6nK", "detail_url": "https://openreview.net/forum?id=TGC7HNf6nK", "authors": "Xu Yang,Yingzhe Peng,Haoxuan Ma,Shuo Xu,Chi Zhang,Yucheng Han,Hanwang Zhang", "tags": "NIPS 2024,Poster", "abstract": "As Archimedes famously said, ``Give me a lever long enough and a fulcrum on which to place it, and I shall move the world'', in this study, we propose to use a tiny Language Model (LM), \\eg, a Transformer with 67M parameters, to lever much larger Vision-Language Models (LVLMs) with 9B parameters. Specifically, we use this tiny \\textbf{Lever-LM} to configure effective in-context demonstration (ICD) sequences to improve the In-Context Learning (ICL) performance of LVLMs. Previous studies show that diverse ICD configurations like the selection and ordering of the demonstrations heavily affect the ICL performance, highlighting the significance of configuring effective ICD sequences. Motivated by this and by re-considering the process of configuring ICD sequences, we find this is a mirror process of human sentence composition and further assume that effective ICD configurations may contain internal statistical patterns that can be captured by Lever-LM. Then a dataset with effective ICD sequences is constructed to train Lever-LM. After training, given novel queries, new ICD sequences are configured by the trained Lever-LM to solve vision-language tasks through ICL. Experiments show that these ICD sequences can improve the ICL performance of two LVLMs compared with some strong baselines in Visual Question Answering and Image Captioning, validating that Lever-LM can really capture the statistical patterns for levering LVLMs. The code is available at \\url{https://anonymous.4open.science/r/Lever-LM-604A/}.", "pdf": "https://openreview.net/pdf/92d8e39d181a85e5e2caa371f60f13d61dcc2db6.pdf"} {"title": "Transductive Learning is Compact", "url": "https://openreview.net/forum?id=YWTpmLktMj", "detail_url": "https://openreview.net/forum?id=YWTpmLktMj", "authors": "Julian Asilis,Siddartha Devic,Shaddin Dughmi,Vatsal Sharan,Shang-Hua Teng", "tags": "NIPS 2024,Poster", "abstract": "We demonstrate a compactness result holding broadly across supervised learning with a general class of loss functions: Any hypothesis class $\\mathcal{H}$ is learnable with transductive sample complexity $m$ precisely when all of its finite projections are learnable with sample complexity $m$. We prove that this exact form of compactness holds for realizable and agnostic learning with respect to all proper metric loss functions (e.g., any norm on $\\mathbb{R}^d$) and any continuous loss on a compact space (e.g., cross-entropy, squared loss). For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail, and provide matching upper and lower bounds of a factor of 2 on the extent to which such sample complexities can differ. We conjecture that larger gaps are possible for the agnostic case. 
Furthermore, invoking the equivalence between sample complexities in the PAC and transductive models (up to lower order factors, in the realizable case) permits us to directly port our results to the PAC model, revealing an almost-exact form of compactness holding broadly in PAC learning.", "pdf": "https://openreview.net/pdf/4f676b095b728fe664fb2816e7123a6d84af004d.pdf"} {"title": "KFNN: K-Free Nearest Neighbor For Crowdsourcing", "url": "https://openreview.net/forum?id=wnPlJNiqfA", "detail_url": "https://openreview.net/forum?id=wnPlJNiqfA", "authors": "Wenjun Zhang,Liangxiao Jiang,Chaoqun Li", "tags": "NIPS 2024,Poster", "abstract": "To reduce annotation costs, it is common in crowdsourcing to collect only a few noisy labels from different crowd workers for each instance. However, the limited noisy labels restrict the performance of label integration algorithms in inferring the unknown true label for the instance. Recent works have shown that leveraging neighbor instances can help alleviate this problem. Yet, these works all assume that each instance has the same neighborhood size, which defies common sense. To address this gap, we propose a novel label integration algorithm called K-free nearest neighbor (KFNN). In KFNN, the neighborhood size of each instance is automatically determined based on its attributes and noisy labels. Specifically, KFNN initially estimates a Mahalanobis distance distribution from the attribute space to model the relationship between each instance and all classes. This distance distribution is then utilized to enhance the multiple noisy label distribution of each instance. Subsequently, a Kalman filter is designed to mitigate the impact of noise incurred by neighbor instances. Finally, KFNN determines the optimal neighborhood size by the max-margin learning. Extensive experimental results demonstrate that KFNN significantly outperforms all the other state-of-the-art algorithms and exhibits greater robustness in various crowdsourcing scenarios.", "pdf": "https://openreview.net/pdf/0b3a999c175feae55c108033441b1455e2a2d2d8.pdf"} {"title": "Linear Causal Representation Learning from Unknown Multi-node Interventions", "url": "https://openreview.net/forum?id=weemASPtzg", "detail_url": "https://openreview.net/forum?id=weemASPtzg", "authors": "Burak Var\u0131c\u0131,Emre Acart\u00fcrk,Karthikeyan Shanmugam,Ali Tajer", "tags": "NIPS 2024,Poster", "abstract": "Despite the multifaceted recent advances in interventional causal representation learning (CRL), they primarily focus on the stylized assumption of single-node interventions. This assumption is not valid in a wide range of applications, and generally, the subset of nodes intervened in an interventional environment is *fully unknown*. This paper focuses on interventional CRL under unknown multi-node (UMN) interventional environments and establishes the first identifiability results for *general* latent causal models (parametric or nonparametric) under stochastic interventions (soft or hard) and linear transformation from the latent to observed space. Specifically, it is established that given sufficiently diverse interventional environments, (i) identifiability *up to ancestors* is possible using only *soft* interventions, and (ii) *perfect* identifiability is possible using *hard* interventions. Remarkably, these guarantees match the best-known results for more restrictive single-node interventions. Furthermore, CRL algorithms are also provided that achieve the identifiability guarantees. 
A central step in designing these algorithms is establishing the relationships between UMN interventional CRL and score functions associated with the statistical models of different interventional environments. Establishing these relationships also serves as constructive proof of the identifiability guarantees.", "pdf": "https://openreview.net/pdf/97134ff9ae5f0c497e5980844484a73aa21a38ba.pdf"} {"title": "Provable Acceleration of Nesterov's Accelerated Gradient for Asymmetric Matrix Factorization and Linear Neural Networks", "url": "https://openreview.net/forum?id=X44OawAq7b", "detail_url": "https://openreview.net/forum?id=X44OawAq7b", "authors": "Zhenghao Xu,Yuqing Wang,Tuo Zhao,Rachel Ward,Molei Tao", "tags": "NIPS 2024,Poster", "abstract": "We study the convergence rate of first-order methods for rectangular matrix factorization, which is a canonical nonconvex optimization problem. Specifically, given a rank-$r$ matrix $\\mathbf{A}\\in\\mathbb{R}^{m\\times n}$, we prove that gradient descent (GD) can find a pair of $\\epsilon$-optimal solutions $\\mathbf{X}_T\\in\\mathbb{R}^{m\\times d}$ and $\\mathbf{Y}_T\\in\\mathbb{R}^{n\\times d}$, where $d\\geq r$, satisfying $\\lVert\\mathbf{X}_T\\mathbf{Y}_T^\\top-\\mathbf{A}\\rVert_F\\leq\\epsilon\\lVert\\mathbf{A}\\rVert_F$ in $T=O(\\kappa^2\\log\\frac{1}{\\epsilon})$ iterations with high probability, where $\\kappa$ denotes the condition number of $\\mathbf{A}$. Furthermore, we prove that Nesterov's accelerated gradient (NAG) attains an iteration complexity of $O(\\kappa\\log\\frac{1}{\\epsilon})$, which is the best-known bound of first-order methods for rectangular matrix factorization. Different from small balanced random initialization in the existing literature, we adopt an unbalanced initialization, where $\\mathbf{X}_0$ is large and $\\mathbf{Y}_0$ is $0$. Moreover, our initialization and analysis can be further extended to linear neural networks, where we prove that NAG can also attain an accelerated linear convergence rate. In particular, we only require the width of the network to be greater than or equal to the rank of the output label matrix. In contrast, previous results achieving the same rate require excessive widths that additionally depend on the condition number and the rank of the input data matrix.", "pdf": "https://openreview.net/pdf/4a2fce91aa7e588613cfbbc61570fc66d5086a13.pdf"} {"title": "GFT: Graph Foundation Model with Transferable Tree Vocabulary", "url": "https://openreview.net/forum?id=0MXzbAv8xy", "detail_url": "https://openreview.net/forum?id=0MXzbAv8xy", "authors": "Zehong Wang,Zheyuan Zhang,Nitesh V Chawla,Chuxu Zhang,Yanfang Ye", "tags": "NIPS 2024,Poster", "abstract": "Inspired by the success of foundation models in applications such as ChatGPT, as graph data has been ubiquitous, one can envision the far-reaching impacts that can be brought by Graph Foundation Models (GFMs) with broader applications in the areas such as scientific research, social network analysis, drug discovery, and e-commerce. Despite the significant progress of pre-trained graph neural networks, there haven\u2019t been GFMs that can achieve desired performance on various graph-learning-related tasks. Building GFMs may rely on a vocabulary that encodes transferable patterns shared among different tasks and domains. Unlike image and text, defining such transferable patterns for graphs remains an open question. 
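The unbalanced initialization highlighted in the matrix-factorization entry above (X0 large, Y0 = 0) is easy to exercise directly; the numpy sketch below runs Nesterov's accelerated gradient on the loss 0.5 * ||X Y^T - A||_F^2 under that initialization. Step size, momentum, scale, and iteration count are illustrative assumptions, not tuned constants from the paper.

import numpy as np

rng = np.random.default_rng(0)
m, n, r, d = 60, 40, 5, 8                  # A has rank r; factor width d >= r
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))

X = 3.0 * rng.normal(size=(m, d))          # unbalanced init: X0 large ...
Y = np.zeros((n, d))                       # ... and Y0 exactly zero
Xm, Ym = X.copy(), Y.copy()                # lookahead (momentum) iterates
eta, beta = 1e-4, 0.9                      # illustrative step size / momentum

for _ in range(3000):
    R = Xm @ Ym.T - A                      # residual at the lookahead point
    gX, gY = R @ Ym, R.T @ Xm              # gradients of 0.5*||X Y^T - A||_F^2
    Xn, Yn = Xm - eta * gX, Ym - eta * gY  # gradient step
    Xm, Ym = Xn + beta * (Xn - X), Yn + beta * (Yn - Y)  # Nesterov extrapolation
    X, Y = Xn, Yn

print(np.linalg.norm(X @ Y.T - A) / np.linalg.norm(A))  # relative error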
In this paper, we aim to bridge this gap by rethinking the transferable patterns on graphs as computation trees -- i.e., tree structures derived from the message-passing process. Based on this insight, we propose a cross-task, cross-domain graph foundation model named GFT, short for Graph Foundation model with transferable Tree vocabulary. By treating computation trees as tokens within the transferable vocabulary, GFT improves model generalization and reduces the risk of negative transfer. The theoretical analyses and extensive experimental studies have demonstrated the transferability of computation trees and shown the effectiveness of GFT across diverse tasks and domains in graph learning. The open source code and data are available at https://github.com/Zehong-Wang/GFT.", "pdf": "https://openreview.net/pdf/addf28c235542c44a5f2fcfaf5e172021a4802de.pdf"} {"title": "Lumina-Next : Making Lumina-T2X Stronger and Faster with Next-DiT", "url": "https://openreview.net/forum?id=ieYdf9TZ2u", "detail_url": "https://openreview.net/forum?id=ieYdf9TZ2u", "authors": "Le Zhuo,Ruoyi Du,Han Xiao,Yangguang Li,Dongyang Liu,Rongjie Huang,Wenze Liu,Xiangyang Zhu,Fu-Yun Wang,Zhanyu Ma,Xu Luo,Zehan Wang,Kaipeng Zhang,Lirui Zhao,Si Liu,Xiangyu Yue,Wanli Ouyang,Yu Qiao,Hongsheng Li,Peng Gao", "tags": "NIPS 2024,Poster", "abstract": "Lumina-T2X is a nascent family of Flow-based Large Diffusion Transformers (Flag-DiT) that establishes a unified framework for transforming noise into various modalities, such as images and videos, conditioned on text instructions. Despite its promising capabilities, Lumina-T2X still encounters challenges including training instability, slow inference, and extrapolation artifacts. In this paper, we present Lumina-Next, an improved version of Lumina-T2X, showcasing stronger generation performance with increased training and inference efficiency. We begin with a comprehensive analysis of the Flag-DiT architecture and identify several suboptimal components, which we address by introducing the Next-DiT architecture with 3D RoPE and sandwich normalizations. To enable better resolution extrapolation, we thoroughly compare different context extrapolation methods applied to text-to-image generation with 3D RoPE, and propose Frequency- and Time-Aware Scaled RoPE tailored for diffusion transformers. Additionally, we introduce a sigmoid time discretization schedule for diffusion sampling, which achieves high-quality generation in 5-10 steps combined with higher-order ODE solvers. Thanks to these improvements, Lumina-Next not only improves the basic text-to-image generation but also demonstrates superior resolution extrapolation capabilities as well as multilingual generation using decoder-based LLMs as the text encoder, all in a zero-shot manner. To further validate Lumina-Next as a versatile generative framework, we instantiate it on diverse tasks including visual recognition, multi-views, audio, music, and point cloud generation, showcasing strong performance across these domains. 
By releasing all codes and model weights at https://github.com/Alpha-VLLM/Lumina-T2X, we aim to advance the development of next-generation generative AI capable of universal modeling.", "pdf": "https://openreview.net/pdf/9b34285383e247d8ddedc364f89e9ba0f8a99f5a.pdf"} {"title": "AHA: Human-Assisted Out-of-Distribution Generalization and Detection", "url": "https://openreview.net/forum?id=49hXkwpWKA", "detail_url": "https://openreview.net/forum?id=49hXkwpWKA", "authors": "Haoyue Bai,Jifan Zhang,Robert D Nowak", "tags": "NIPS 2024,Poster", "abstract": "Modern machine learning models often encounter distribution shifts when deployed in real-world applications, manifesting as covariate or semantic out-of-distribution (OOD) shifts. These shifts give rise to challenges in OOD generalization and OOD detection. This paper introduces a novel, integrated approach AHA (Adaptive Human-Assisted OOD learning) to simultaneously address both OOD generalization and detection through a human-assisted framework by labeling data in the wild. Our approach strategically labels examples within a novel maximum disambiguation region, where the number of semantic and covariate OOD data roughly equalizes. By labeling within this region, we can maximally disambiguate the two types of OOD data, thereby maximizing the utility of the fixed labeling budget. Our algorithm first utilizes a noisy binary search algorithm that identifies the maximal disambiguation region with high probability. The algorithm then continues with annotating inside the identified labeling region, reaping the full benefit of human feedback. Extensive experiments validate the efficacy of our framework. We observed that with only a few hundred human annotations, our method significantly outperforms existing state-of-the-art methods that do not involve human assistance, in both OOD generalization and OOD detection.", "pdf": "https://openreview.net/pdf/20401ce4ca42f6448d73d3bc6227fcc31f2eb613.pdf"} {"title": "Mitigating Reward Overoptimization via Lightweight Uncertainty Estimation", "url": "https://openreview.net/forum?id=kYio3xH6eb", "detail_url": "https://openreview.net/forum?id=kYio3xH6eb", "authors": "Xiaoying Zhang,Jean-Francois Ton,Wei Shen,Hongning Wang,Yang Liu", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement Learning from Human Feedback (RLHF) has been pivotal in aligning Large Language Models with human values but often suffers from overoptimization due to its reliance on a proxy reward model. To mitigate this limitation, we first propose a lightweight uncertainty quantification method that assesses the reliability of the proxy reward using only the last layer embeddings of the reward model. Enabled by this efficient uncertainty quantification method, we formulate AdvPO, a distributionally robust optimization procedure to tackle the reward overoptimization problem in RLHF. 
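A common recipe for the kind of last-layer uncertainty quantification described in the reward-overoptimization entry above is the predictive standard deviation of a ridge (Bayesian linear) head fit on reward-model embeddings; the sketch below follows that generic recipe as an assumption about the general approach, not the paper's exact estimator, and then penalizes the proxy reward pessimistically.

import numpy as np

class LastLayerUncertainty:
    # u(x) = sqrt(phi(x)^T (Phi^T Phi + lam I)^{-1} phi(x)), the predictive
    # std of a ridge / Bayesian linear head over last-layer embeddings.
    def __init__(self, ref_embeddings, lam=1.0):
        d = ref_embeddings.shape[1]
        self.cov_inv = np.linalg.inv(ref_embeddings.T @ ref_embeddings + lam * np.eye(d))

    def __call__(self, phi):
        return float(np.sqrt(phi @ self.cov_inv @ phi))

ref = np.random.randn(1000, 64)      # embeddings of the reward model's training data
u = LastLayerUncertainty(ref)
phi, proxy_reward, alpha = np.random.randn(64), 0.8, 1.0
robust_reward = proxy_reward - alpha * u(phi)   # uncertainty-penalized reward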
Through extensive experiments on the Anthropic HH and TL;DR summarization datasets, we verify the effectiveness of AdvPO in mitigating the overoptimization problem, resulting in enhanced RLHF performance as evaluated through human-assisted evaluation.", "pdf": "https://openreview.net/pdf/d5204c40ecb830dd2c23620408620ca203e3e897.pdf"} {"title": "GOMAA-Geo: GOal Modality Agnostic Active Geo-localization", "url": "https://openreview.net/forum?id=gPCesxD4B4", "detail_url": "https://openreview.net/forum?id=gPCesxD4B4", "authors": "Anindya Sarkar,Srikumar Sastry,Aleksis Pirinen,Chongjie Zhang,Nathan Jacobs,Yevgeniy Vorobeychik", "tags": "NIPS 2024,Poster", "abstract": "We consider the task of active geo-localization (AGL) in which an agent uses a sequence of visual cues observed during aerial navigation to find a target specified through multiple possible modalities. This could emulate a UAV involved in a search-and-rescue operation navigating through an area, observing a stream of aerial images as it goes. The AGL task is associated with two important challenges. Firstly, an agent must deal with a goal specification in one of multiple modalities (e.g., through a natural language description) while the search cues are provided in other modalities (aerial imagery). The second challenge is limited localization time (e.g., limited battery life, urgency) so that the goal must be localized as efficiently as possible, i.e. the agent must effectively leverage its sequentially observed aerial views when searching for the goal. To address these challenges, we propose GOMAA-Geo -- a goal modality agnostic active geo-localization agent -- for zero-shot generalization between different goal modalities. Our approach combines cross-modality contrastive learning to align representations across modalities with supervised foundation model pretraining and reinforcement learning to obtain highly effective navigation and localization policies. Through extensive evaluations, we show that GOMAA-Geo outperforms alternative learnable approaches and that it generalizes across datasets -- e.g., to disaster-hit areas without seeing a single disaster scenario during training -- and goal modalities -- e.g., to ground-level imagery or textual descriptions, despite only being trained with goals specified as aerial views. Our code is available at: https://github.com/mvrl/GOMAA-Geo.", "pdf": "https://openreview.net/pdf/cb1199d6a0586a9b1ed9294c942c542d0460974b.pdf"} {"title": "Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation", "url": "https://openreview.net/forum?id=LtS7pP8rEn", "detail_url": "https://openreview.net/forum?id=LtS7pP8rEn", "authors": "Kyungjin Seo,Junghoon Seo,Hanseok Jeong,Sangpil Kim,Sang Ho Yoon", "tags": "NIPS 2024,Poster", "abstract": "We present PiMForce, a novel framework that enhances hand pressure estimation by leveraging 3D hand posture information to augment forearm surface electromyography (sEMG) signals. Our approach utilizes detailed spatial information from 3D hand poses in conjunction with dynamic muscle activity from sEMG to enable accurate and robust whole-hand pressure measurements under diverse hand-object interactions. We also developed a multimodal data collection system that combines a pressure glove, an sEMG armband, and a markerless finger-tracking module. 
We created a comprehensive dataset from 21 participants, capturing synchronized data of hand posture, sEMG signals, and exerted hand pressure across various hand postures and hand-object interaction scenarios using our collection system. Our framework enables precise hand pressure estimation in complex and natural interaction scenarios. Our approach substantially mitigates the limitations of traditional sEMG-based or vision-based methods by integrating 3D hand posture information with sEMG signals.\nVideo demos, data, and code are available online.", "pdf": "https://openreview.net/pdf/9d85b91e2284ee705375c18d38c54fc1471e6234.pdf"} {"title": "Identifiability Analysis of Linear ODE Systems with Hidden Confounders", "url": "https://openreview.net/forum?id=8271eFxojN", "detail_url": "https://openreview.net/forum?id=8271eFxojN", "authors": "Yuanyuan Wang,Biwei Huang,Wei Huang,Xi Geng,Mingming Gong", "tags": "NIPS 2024,Poster", "abstract": "The identifiability analysis of linear Ordinary Differential Equation (ODE) systems is a necessary prerequisite for making reliable causal inferences about these systems. While identifiability has been well studied in scenarios where the system is fully observable, the conditions for identifiability remain unexplored when latent variables interact with the system. This paper aims to address this gap by presenting a systematic analysis of identifiability in linear ODE systems incorporating hidden confounders. Specifically, we investigate two cases of such systems. In the first case, latent confounders exhibit no causal relationships, yet their evolution adheres to specific functional forms, such as polynomial functions of time $t$. Subsequently, we extend this analysis to encompass scenarios where hidden confounders exhibit causal dependencies, with the causal structure of latent variables described by a Directed Acyclic Graph (DAG). The second case represents a more intricate variation of the first case, prompting a more comprehensive identifiability analysis. Accordingly, we conduct detailed identifiability analyses of the second system under various observation conditions, including both continuous and discrete observations from single or multiple trajectories. To validate our theoretical results, we perform a series of simulations, which support and substantiate our findings.", "pdf": "https://openreview.net/pdf/3cbb99334696d36cb119a6f10d3a253978283f71.pdf"} {"title": "Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections", "url": "https://openreview.net/forum?id=LuqrIkGuru", "detail_url": "https://openreview.net/forum?id=LuqrIkGuru", "authors": "Zihan Luo,Hong Huang,Yongkang Zhou,Jiping Zhang,Nuo Chen,Hai Jin", "tags": "NIPS 2024,Poster", "abstract": "Despite the remarkable capabilities demonstrated by Graph Neural Networks (GNNs) in graph-related tasks, recent research has revealed the fairness vulnerabilities in GNNs when facing malicious adversarial attacks. However, all existing fairness attacks require manipulating the connectivity between existing nodes, which may be prohibited in reality. To this end, we introduce a Node Injection-based Fairness Attack (NIFA), exploring the vulnerabilities of GNN fairness in such a more realistic setting. In detail, NIFA first designs two insightful principles for node injection operations, namely the uncertainty-maximization principle and homophily-increase principle, and then optimizes injected nodes\u2019 feature matrix to further ensure the effectiveness of fairness attacks. 
Comprehensive experiments on three real-world datasets consistently demonstrate that NIFA can significantly undermine the fairness of mainstream GNNs, even including fairness-aware GNNs, by injecting merely 1% of nodes. We sincerely hope that our work can stimulate increasing attention from researchers on the vulnerability of GNN fairness, and encourage the development of corresponding defense mechanisms. Our code and data are released at: https://github.com/CGCL-codes/NIFA.", "pdf": "https://openreview.net/pdf/ddb14727107e7908632f8dd57f0c318af233c822.pdf"} {"title": "Online Non-convex Learning in Dynamic Environments", "url": "https://openreview.net/forum?id=DrQXDKbGgy", "detail_url": "https://openreview.net/forum?id=DrQXDKbGgy", "authors": "Zhipan Xu,Lijun Zhang", "tags": "NIPS 2024,Poster", "abstract": "This paper considers the problem of online learning with non-convex loss functions in dynamic environments. Recently, Suggala and Netrapalli [2020] demonstrated that follow the perturbed leader (FTPL) can achieve optimal regret for non-convex losses, but their results are limited to static environments. In this research, we examine dynamic environments and choose \\emph{dynamic regret} and \\emph{adaptive regret} to measure the performance. First, we propose an algorithm named FTPL-D by restarting FTPL periodically and establish $O(T^\\frac{2}{3}(V_T+1)^\\frac{1}{3})$ dynamic regret with the prior knowledge of $V_T$, which is the variation of loss functions. In the case that $V_T$ is unknown, we run multiple FTPL-D with different restarting parameters as experts and use a meta-algorithm to track the best one on the fly. To address the challenge of non-convexity, we utilize randomized sampling in the process of tracking experts. Next, we present a novel algorithm called FTPL-A that dynamically maintains a group of FTPL experts and combines them with an advanced meta-algorithm to obtain $O(\\sqrt{\\tau\\log{T}})$ adaptive regret for any interval of length $\\tau$. Moreover, we demonstrate that FTPL-A also attains an $\\tilde{O}(T^\\frac{2}{3}(V_T+1)^\\frac{1}{3})$ dynamic regret bound. Finally, we discuss the application to online constrained meta-learning and conduct experiments to verify the effectiveness of our methods.", "pdf": "https://openreview.net/pdf/c44404a568165072307f43eeb0a7da64ccc96fff.pdf"} {"title": "Facilitating Multimodal Classification via Dynamically Learning Modality Gap", "url": "https://openreview.net/forum?id=QbsPz0SnyV", "detail_url": "https://openreview.net/forum?id=QbsPz0SnyV", "authors": "Yang Yang,Fengqiang Wan,Qing-Yuan Jiang,Yi Xu", "tags": "NIPS 2024,Poster", "abstract": "Multimodal learning falls into the trap of the optimization dilemma due to the modality imbalance phenomenon, leading to unsatisfactory performance in real applications. A core reason for modality imbalance is that the models of each modality converge at different rates. Many attempts naturally focus on adjusting learning procedures adaptively. Essentially, models converge at different rates because the difficulty of fitting category labels differs across modalities during learning. From the perspective of fitting labels, we find that appropriate positive intervention in label fitting can correct this difference in learning ability. 
By exploiting the ability of contrastive learning to intervene in the learning of category label fitting, we propose a novel multimodal learning approach that dynamically integrates unsupervised contrastive learning and supervised multimodal learning to address the modality imbalance problem. We find that a simple yet heuristic integration strategy can significantly alleviate the modality imbalance phenomenon. Moreover, we design a learning-based integration strategy to integrate two losses dynamically, further improving the performance. Experiments on widely used datasets demonstrate the superiority of our method compared with state-of-the-art (SOTA) multimodal learning approaches. The code is available at https://github.com/njustkmg/NeurIPS24-LFM.", "pdf": "https://openreview.net/pdf/540acdc708c63949c339e0533cee22c36e5c277a.pdf"} {"title": "Conformal Inverse Optimization", "url": "https://openreview.net/forum?id=Y2NWKlrDrX", "detail_url": "https://openreview.net/forum?id=Y2NWKlrDrX", "authors": "Bo Lin,Erick Delage,Timothy Chan", "tags": "NIPS 2024,Poster", "abstract": "Inverse optimization has been increasingly used to estimate unknown parameters in an optimization model based on decision data. We show that such a point estimation is insufficient in a prescriptive setting where the estimated parameters are used to prescribe new decisions. The prescribed decisions may be low-quality and misaligned with human intuition and thus are unlikely to be adopted. To tackle this challenge, we propose conformal inverse optimization, which seeks to learn an uncertainty set for the unknown parameters and then solve a robust optimization model to prescribe new decisions. Under mild assumptions, we show that our method enjoys provable guarantees on solution quality, as evaluated using both the ground-truth parameters and the decision maker's perception of the unknown parameters. Our method demonstrates strong empirical performance compared to classic inverse optimization.", "pdf": "https://openreview.net/pdf/332e5bdc17e0605d923ca9c443f4850edd5996cb.pdf"} {"title": "GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation", "url": "https://openreview.net/forum?id=bIRcf8i1kp", "detail_url": "https://openreview.net/forum?id=bIRcf8i1kp", "authors": "Haoran Lu,Ruihai Wu,Yitong Li,Sijie Li,Ziyu Zhu,Chuanruo Ning,Yan Shen,Longzan Luo,Yuanpei Chen,Hao Dong", "tags": "NIPS 2024,Poster", "abstract": "Manipulating garments and fabrics has long been a critical endeavor in the development of home-assistant robots. However, due to complex dynamics and topological structures, garment manipulations pose significant challenges. Recent successes in reinforcement learning and vision-based methods offer promising avenues for learning garment manipulation. Nevertheless, these approaches are severely constrained by current benchmarks, which offer limited diversity of tasks and unrealistic simulation behavior. Therefore, we present GarmentLab, a content-rich benchmark and realistic simulation designed for deformable object and garment manipulation. Our benchmark encompasses a diverse range of garment types, robotic systems and manipulators. The abundant tasks in the benchmark further explore the interactions between garments, deformable objects, rigid bodies, fluids, and the human body. Moreover, by incorporating multiple simulation methods such as FEM and PBD, along with our proposed sim-to-real algorithms and real-world benchmark, we aim to significantly narrow the sim-to-real gap. 
We evaluate state-of-the-art vision methods, reinforcement learning, and imitation learning approaches on these tasks, highlighting the challenges faced by current algorithms, notably their limited generalization capabilities. Our proposed open-source environments and comprehensive analysis show a promising boost to future research in garment manipulation by unlocking the full potential of these methods. We guarantee that we will open-source our code as soon as possible. You can watch the videos in supplementary files to learn more about the details of our work.", "pdf": "https://openreview.net/pdf/54c17c13caa1af39b2d17587bf979aa4e4816bb2.pdf"} {"title": "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting", "url": "https://openreview.net/forum?id=JHg9eNuw6p", "detail_url": "https://openreview.net/forum?id=JHg9eNuw6p", "authors": "Yian Wang,Xiaowen Qiu,Jiageng Liu,Zhehuan Chen,Jiting Cai,Yufei Wang,Tsun-Hsuan Wang,Zhou Xian,Chuang Gan", "tags": "NIPS 2024,Poster", "abstract": "Creating large-scale interactive 3D environments is essential for the development of Robotics and Embodied AI research. However, generating diverse embodied environments with realistic detail and considerable complexity remains a significant challenge. Current methods, including manual design, procedural generation, diffusion-based scene generation, and large language model (LLM) guided scene design, are hindered by limitations such as excessive human effort, reliance on predefined rules or training datasets, and limited 3D spatial reasoning ability. Since pre-trained 2D image generative models better capture scene and object configuration than LLMs, we address these challenges by introducing $\\textit{Architect}$, a generative framework that creates complex and realistic 3D embodied environments leveraging diffusion-based 2D image inpainting. In detail, we utilize foundation visual perception models to obtain each generated object from the image and leverage pre-trained depth estimation models to lift the generated 2D image to 3D space. While challenges remain in that the camera parameters and depth scale are absent from the generated image, we address those problems by ``controlling'' the diffusion model via $\\textit{hierarchical inpainting}$. Specifically, having access to ground-truth depth and camera parameters in simulation, we first render a photo-realistic image of only the background. Then, we inpaint the foreground in this image, passing the geometric cues to the inpainting model in the background, which informs the camera parameters.\nThis process effectively controls the camera parameters and depth scale for the generated image, facilitating the back-projection from 2D image to 3D point clouds. Our pipeline is further extended to a hierarchical and iterative inpainting process to continuously generate the placement of large furniture and small objects to enrich the scene. This iterative structure gives our method the flexibility to generate or refine scenes from various starting points, such as text, floor plans, or pre-arranged environments. 
Experimental results demonstrate that $\\textit{Architect}$ outperforms existing methods in producing realistic and complex environments, making it highly suitable for Embodied AI and robotics applications.", "pdf": "https://openreview.net/pdf/8273d02953e792d948c4d80d19f084e9d21a2ac4.pdf"} {"title": "Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks", "url": "https://openreview.net/forum?id=guzWIg7ody", "detail_url": "https://openreview.net/forum?id=guzWIg7ody", "authors": "Zixuan Zhang,Kaiqi Zhang,Minshuo Chen,Yuma Takeda,Mengdi Wang,Tuo Zhao,Yu-Xiang Wang", "tags": "NIPS 2024,Poster", "abstract": "Convolutional residual neural networks (ConvResNets), though overparameterized, can achieve remarkable prediction performance in practice, which cannot be well explained by conventional wisdom. To bridge this gap, we study the performance of ConvResNeXts trained with weight decay, which cover ConvResNets as a special case, from the perspective of nonparametric classification. Our analysis allows for infinitely many building blocks in ConvResNeXts, and shows that weight decay implicitly enforces sparsity on these blocks. Specifically, we consider a smooth target function supported on a low-dimensional manifold, then prove that ConvResNeXts can adapt to the function smoothness and low-dimensional structures and efficiently learn the function without suffering from the curse of dimensionality. Our findings partially justify the advantage of overparameterized ConvResNeXts over conventional machine learning models.", "pdf": "https://openreview.net/pdf/b65fe1e27958f32f7e983ba4ecc5172d932adcc4.pdf"} {"title": "Adaptable Logical Control for Large Language Models", "url": "https://openreview.net/forum?id=58X9v92zRd", "detail_url": "https://openreview.net/forum?id=58X9v92zRd", "authors": "Honghua Zhang,Po-Nien Kung,Masahiro Yoshida,Guy Van den Broeck,Nanyun Peng", "tags": "NIPS 2024,Poster", "abstract": "Despite the success of Large Language Models (LLMs) on various tasks following human instructions, controlling model generation to follow strict constraints at inference time poses a persistent challenge. In this paper, we introduce Ctrl-G, a neuro-symbolic framework that enables tractable and adaptable control of LLM generation to follow logical constraints reliably. Ctrl-G combines any production-ready LLM with a Hidden Markov Model (HMM), guiding LLM outputs to adhere to logical constraints represented as deterministic finite automata. We show that Ctrl-G, when a TULU2-7B model is coupled with a 2B-parameter HMM, outperforms GPT4 in text editing: on the task of generating text insertions/continuations following logical constraints, our approach achieves over 30% higher satisfaction rate in human evaluation. When applied to medium-size language models (e.g., GPT2-large), Ctrl-G also beats its counterparts on standard benchmarks by large margins. Additionally, as a proof-of-concept study, we use Ctrl-G to assist LLM reasoning on the GSM benchmark, foreshadowing the application of Ctrl-G, as well as other constrained generation approaches, beyond traditional language generation tasks.", "pdf": "https://openreview.net/pdf/920eb68c968b12fd21b0ed15ae464117b66a2836.pdf"} {"title": "GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration", "url": "https://openreview.net/forum?id=GDNZajKrML", "detail_url": "https://openreview.net/forum?id=GDNZajKrML", "authors": "Silong Yong,Yaqi Xie,Simon Stepputtis,Katia P. 
Sycara", "tags": "NIPS 2024,Poster", "abstract": "Volume rendering in neural radiance fields is inherently time-consuming due to the large number of MLP calls on the points sampled per ray. Previous works address this issue by introducing new neural networks or data structures. In this work, we propose GL-NeRF, a new perspective of computing volume rendering with the Gauss-Laguerre quadrature. GL-NeRF significantly reduces the number of MLP calls needed for volume rendering, introducing no additional data structures or neural networks. The simple formulation makes adopting GL-NeRF in any NeRF model possible. In the paper, we first justify the use of the Gauss-Laguerre quadrature and then demonstrate this plug-and-play attribute by implementing it in two different NeRF models. We show that with a minimal drop in performance, GL-NeRF can significantly reduce the number of MLP calls, showing the potential to speed up any NeRF model. Code can be found on the project page https://silongyong.github.io/GL-NeRF_project_page/.", "pdf": "https://openreview.net/pdf/cb17e13d223e5bced698e9c44fd54943b1d75b3f.pdf"} {"title": "Using Noise to Infer Aspects of Simplicity Without Learning", "url": "https://openreview.net/forum?id=b172ac0R4L", "detail_url": "https://openreview.net/forum?id=b172ac0R4L", "authors": "Zachery Boner,Harry Chen,Lesia Semenova,Ronald Parr,Cynthia Rudin", "tags": "NIPS 2024,Poster", "abstract": "Noise in data significantly influences decision-making in the data science process. In fact, it has been shown that noise in data generation processes leads practitioners to find simpler models. However, an open question still remains: what is the degree of model simplification we can expect under different noise levels? In this work, we address this question by investigating the relationship between the amount of noise and model simplicity across various hypothesis spaces, focusing on decision trees and linear models. We formally show that noise acts as an implicit regularizer for several different noise models. Furthermore, we prove that Rashomon sets (sets of near-optimal models) constructed with noisy data tend to contain simpler models than corresponding Rashomon sets with non-noisy data. Additionally, we show that noise expands the set of ``good'' features and consequently enlarges the set of models that use at least one good feature. Our work offers theoretical guarantees and practical insights for practitioners and policymakers on whether simple-yet-accurate machine learning models are likely to exist, based on knowledge of noise levels in the data generation process.", "pdf": "https://openreview.net/pdf/7c19a094cda572cde2840e78812bbbe6b000863f.pdf"} {"title": "Towards Exact Gradient-based Training on Analog In-memory Computing", "url": "https://openreview.net/forum?id=5GwbKlBIIf", "detail_url": "https://openreview.net/forum?id=5GwbKlBIIf", "authors": "Zhaoxian Wu,Tayfun Gokmen,Malte J. Rasch,Tianyi Chen", "tags": "NIPS 2024,Poster", "abstract": "Given the high economic and environmental costs of using large vision or language models, analog in-memory accelerators present a promising solution for energy-efficient AI. While inference on analog accelerators has been studied recently, the training perspective is underexplored. Recent studies have shown that the "workhorse" of digital AI training, the stochastic gradient descent (SGD) algorithm, converges inexactly when applied to model training on non-ideal devices. 
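The Gauss-Laguerre idea in the GL-NeRF entry above maps cleanly onto the volume rendering integral: substituting the accumulated optical depth x(t) = integral of sigma from 0 to t turns C = integral of T(t) sigma(t) c(t) dt into the integral of exp(-x) c(t(x)) over [0, infinity), which Gauss-Laguerre nodes and weights approximate directly, so only a handful of color queries are needed per ray. A minimal sketch, assuming piecewise-constant density and color on a coarse grid; gl_render and the toy inputs are illustrative.

import numpy as np

def gl_render(sigma, colors, ts, n_nodes=8):
    # Approximate C = int T(t) sigma(t) c(t) dt via Gauss-Laguerre:
    # with x(t) = int_0^t sigma, C = int_0^inf exp(-x) c(t(x)) dx
    # ~= sum_i w_i * c(t(x_i)); sigma/colors are piecewise constant on ts.
    x_nodes, w = np.polynomial.laguerre.laggauss(n_nodes)
    dt = np.diff(ts)
    tau = np.concatenate([[0.0], np.cumsum(sigma[:-1] * dt)])   # x(t) on the grid
    t_query = np.interp(np.minimum(x_nodes, tau[-1]), tau, ts)  # invert x(t)
    idx = np.clip(np.searchsorted(ts, t_query) - 1, 0, len(ts) - 2)
    return (w[:, None] * colors[idx]).sum(axis=0)

ts = np.linspace(0.0, 6.0, 65)              # sample positions along one ray
sigma = np.full(65, 2.0)                    # toy constant density
colors = np.tile([0.2, 0.5, 0.7], (65, 1))  # toy constant RGB
print(gl_render(sigma, colors, ts))         # ~ [0.2, 0.5, 0.7] for an opaque ray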
This paper puts forth a theoretical foundation for gradient-based training on analog devices. We begin by characterizing the non-convergent issue of SGD, which is caused by the asymmetric updates on the analog devices. We then provide a lower bound of the asymptotic error to show that there is a fundamental performance limit of SGD-based analog training rather than an artifact of our analysis. \nTo address this issue, we study a heuristic analog algorithm called Tiki-Taka that has recently exhibited superior empirical performance compared to SGD. We rigorously show its ability to converge to a critical point exactly and hence eliminate the asymptotic error. The simulations verify the correctness of the analyses.", "pdf": "https://openreview.net/pdf/e58fbfacaabf8979e1145b338e63ed3e3d224f0e.pdf"} {"title": "PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining", "url": "https://openreview.net/forum?id=5atraF1tbg", "detail_url": "https://openreview.net/forum?id=5atraF1tbg", "authors": "Mishaal Kazmi,Hadrien Lautraite,Alireza Akbari,Qiaoyue Tang,Mauricio Soroco,Tao Wang,S\u00e9bastien Gambs,Mathias L\u00e9cuyer", "tags": "NIPS 2024,Poster", "abstract": "We present PANORAMIA, a privacy leakage measurement framework for machine learning models that relies on membership inference attacks using generated data as non-members. By relying on generated non-member data, PANORAMIA eliminates the common dependency of privacy measurement tools on in-distribution non-member data. As a result, PANORAMIA does not modify the model, training data, or training process, and only requires access to a subset of the training data. We evaluate PANORAMIA on ML models for image and tabular data classification, as well as on large-scale language models.", "pdf": "https://openreview.net/pdf/8b86d59acce133c9d3ace25eb5338f80a50bd459.pdf"} {"title": "Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation", "url": "https://openreview.net/forum?id=NjewXJUDYq", "detail_url": "https://openreview.net/forum?id=NjewXJUDYq", "authors": "Heeseung Kim,Soonshin Seo,Kyeongseok Jeong,Ohsung Kwon,Soyoon Kim,Jungwhan Kim,Jaehong Lee,Eunwoo Song,Myungwoo Oh,Jung-Woo Ha,Sungroh Yoon,Kang Min Yoo", "tags": "NIPS 2024,Poster", "abstract": "Recent work shows promising results in expanding the capabilities of large language models (LLM) to directly understand and synthesize speech. However, an LLM-based strategy for modeling spoken dialogs remains elusive, calling for further investigation. This paper introduces an extensive speech-text LLM framework, the Unified Spoken Dialog Model (USDM), designed to generate coherent spoken responses with naturally occurring prosodic features relevant to the given input speech without relying on explicit automatic speech recognition (ASR) or text-to-speech (TTS) systems. We have verified the inclusion of prosody in speech tokens that predominantly contain semantic information and have used this foundation to construct a prosody-infused speech-text model. Additionally, we propose a generalized speech-text pretraining scheme that enhances the capture of cross-modal semantics. To construct USDM, we fine-tune our speech-text model on spoken dialog data using a multi-step spoken dialog template that stimulates the chain-of-reasoning capabilities exhibited by the underlying LLM. Automatic and human evaluations on the DailyTalk dataset demonstrate that our approach effectively generates natural-sounding spoken responses, surpassing previous and cascaded baselines. 
Our code and checkpoints are available at https://github.com/naver-ai/usdm.", "pdf": "https://openreview.net/pdf/53e6b781c7715ff449cf1c34b22d93e6c97b5eff.pdf"} {"title": "Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum", "url": "https://openreview.net/forum?id=r8M9SfYMDi", "detail_url": "https://openreview.net/forum?id=r8M9SfYMDi", "authors": "Hadi Pouransari,Chun-Liang Li,Jen-Hao Rick Chang,Pavan Kumar Anasosalu Vasu,Cem Koc,Vaishaal Shankar,Oncel Tuzel", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) are commonly trained on datasets consisting of fixed-length token sequences. These datasets are created by randomly concatenating documents of various lengths and then chunking them into sequences of a predetermined target length (concat-and-chunk). Recent attention implementations mask cross-document attention, reducing the effective length of a chunk of tokens. Additionally, training on long sequences becomes computationally prohibitive due to the quadratic cost of attention. In this study, we introduce dataset decomposition, a novel variable sequence length training technique, to tackle these challenges. We decompose a dataset into a union of buckets, each containing sequences of the same size extracted from a unique document. During training, we use variable sequence length and batch-size, sampling simultaneously from all buckets with a curriculum. In contrast to the concat-and-chunk baseline, which incurs a fixed attention cost at every step of training, our proposed method incurs a computational cost proportional to the actual document lengths at each step, resulting in significant savings in training time. We train an 8k context-length 1B model at the same cost as a 2k context-length model trained with the baseline approach. Experiments on a web-scale corpus demonstrate that our approach significantly enhances performance on standard language evaluations and long-context benchmarks, reaching target accuracy with up to 6x faster training compared to the baseline. Our method not only enables efficient pretraining on long sequences but also scales effectively with dataset size. Lastly, we shed light on a critical yet less studied aspect of training large language models: the distribution and curriculum of sequence lengths, which results in a non-negligible difference in performance.", "pdf": "https://openreview.net/pdf/da7a0f3f0f653a5adc873d2bc203533e37998c1f.pdf"} {"title": "Fairness-Aware Meta-Learning via Nash Bargaining", "url": "https://openreview.net/forum?id=eGJnB3tUgv", "detail_url": "https://openreview.net/forum?id=eGJnB3tUgv", "authors": "Yi Zeng,Xuelin Yang,Li Chen,Cristian Canton Ferrer,Ming Jin,Michael Jordan,Ruoxi Jia", "tags": "NIPS 2024,Poster", "abstract": "To address issues of group-level fairness in machine learning, it is natural to adjust model parameters based on specific fairness objectives over a sensitive-attributed validation set. Such an adjustment procedure can be cast within a meta-learning framework. However, naive integration of fairness goals via meta-learning can cause hypergradient conflicts for subgroups, resulting in unstable convergence and compromising model performance and fairness. To navigate this issue, we frame the resolution of hypergradient conflicts as a multi-player cooperative bargaining game. 
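A toy sketch of the decomposition step described in the Dataset Decomposition entry above, assuming power-of-two chunk lengths and a greedy longest-first split so that every sequence comes from a single document; decompose and build_buckets are hypothetical helper names, and the real pipeline's remainder handling, bucket sizes, and curriculum may differ.

from collections import defaultdict

def decompose(doc_tokens, max_len=8192):
    # Split one tokenized document into power-of-two chunks, longest first.
    chunks, i, n = [], 0, len(doc_tokens)
    while i < n:
        size = max_len
        while size > 1 and i + size > n:
            size //= 2
        chunks.append(doc_tokens[i:i + size])
        i += size
    return chunks

def build_buckets(corpus, max_len=8192):
    # Union of buckets: bucket[L] holds all length-L sequences.
    buckets = defaultdict(list)
    for doc in corpus:
        for chunk in decompose(doc, max_len):
            buckets[len(chunk)].append(chunk)
    return buckets

# During training, sample a bucket (e.g., with a short-to-long curriculum)
# and scale the batch size inversely with sequence length so the per-step
# token count stays fixed.
corpus = [list(range(n)) for n in (5000, 300, 12000)]
print({length: len(seqs) for length, seqs in sorted(build_buckets(corpus).items())})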
We introduce a two-stage meta-learning framework in which the first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model toward the Pareto front, and the second stage optimizes with respect to specific fairness goals.\nOur method is supported by theoretical results, notably a proof of the NBS for gradient aggregation free from linear independence assumptions, a proof of Pareto improvement, and a proof of monotonic improvement in validation loss. We also show empirical effects across various fairness objectives in six key fairness datasets and two image classification tasks.", "pdf": "https://openreview.net/pdf/a16b7251fa9ad053816004a3dc72ab93a17263fb.pdf"} {"title": "In-Context Learning with Representations: Contextual Generalization of Trained Transformers", "url": "https://openreview.net/forum?id=ik37kKxKBm", "detail_url": "https://openreview.net/forum?id=ik37kKxKBm", "authors": "Tong Yang,Yu Huang,Yingbin Liang,Yuejie Chi", "tags": "NIPS 2024,Poster", "abstract": "In-context learning (ICL) refers to a remarkable capability of pretrained large language models, which can learn a new task given a few examples during inference. However, theoretical understanding of ICL is largely under-explored, particularly whether transformers can be trained to generalize to unseen examples in a prompt, which will require the model to acquire contextual knowledge of the prompt for generalization. This paper investigates the training dynamics of transformers by gradient descent through the lens of non-linear regression tasks. The contextual generalization here can be attained via learning the template function for each task in-context, where all template functions lie in a linear space with $m$ basis functions. We analyze the training dynamics of one-layer multi-head transformers to predict unlabeled inputs in-context given partially labeled prompts, where the labels contain Gaussian noise and the number of examples in each prompt is not sufficient to determine the template. Under mild assumptions, we show that the training loss for a one-layer multi-head transformer converges linearly to a global minimum. Moreover, the transformer effectively learns to perform ridge regression over the basis functions. To our knowledge, this study is the first provable demonstration that transformers can learn contextual (i.e., template) information to generalize to both unseen examples and tasks when prompts contain only a small number of query-answer pairs.", "pdf": "https://openreview.net/pdf/2621c7276cfe4a8d736631f3c3b519f9272a4682.pdf"} {"title": "Replicable Uniformity Testing", "url": "https://openreview.net/forum?id=lCiqPxcyC0", "detail_url": "https://openreview.net/forum?id=lCiqPxcyC0", "authors": "Sihan Liu,Christopher Ye", "tags": "NIPS 2024,Poster", "abstract": "Uniformity testing is arguably one of the most fundamental distribution testing problems. Given sample access to an unknown distribution $\\mathbf{p}$ on $[n]$, one must decide if $\\mathbf{p}$ is uniform or $\\varepsilon$-far from uniform (in total variation distance). A long line of work established that uniformity testing has sample complexity $\\Theta(\\sqrt{n}\\varepsilon^{-2})$. However, when the input distribution is neither uniform nor far from uniform, known algorithms may have highly non-replicable behavior. 
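The ridge-regression claim in the in-context learning entry above corresponds to the closed-form estimator sketched below on a toy prompt whose labels come from m = 3 basis functions plus Gaussian noise; the particular basis and the helper name icl_ridge_prediction are illustrative assumptions.

import numpy as np

def icl_ridge_prediction(x_lab, y_lab, x_query, basis, lam=0.1):
    # Ridge regression over basis functions: fit w on the labeled prompt
    # examples, then predict the unlabeled queries "in-context".
    Phi = basis(x_lab)                              # (k, m) design matrix
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y_lab)
    return basis(x_query) @ w

basis = lambda x: np.stack([np.sin(x), np.cos(x), x], axis=-1)  # m = 3 basis functions
rng = np.random.default_rng(0)
x_lab = rng.uniform(-3, 3, size=10)
y_lab = basis(x_lab) @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=10)
print(icl_ridge_prediction(x_lab, y_lab, np.array([0.5, 1.5]), basis))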
\nConsequently, if these algorithms are applied in scientific studies, they may lead to contradictory results that erode public trust in science.\n\nIn this work, we revisit uniformity testing under the framework of algorithmic replicability [STOC '22], requiring the algorithm to be replicable under arbitrary distributions. While replicability typically incurs a $\\rho^{-2}$ factor overhead in sample complexity, we obtain a replicable uniformity tester using only $\\tilde{O}(\\sqrt{n} \\varepsilon^{-2} \\rho^{-1})$ samples. To our knowledge, this is the first replicable learning algorithm with (nearly) linear dependence on $\\rho$.\n\nLastly, we consider a class of ``symmetric'' algorithms [FOCS '00] whose outputs are invariant under relabeling of the domain $[n]$, which includes all existing uniformity testers (including ours). For this natural class of algorithms, we prove a nearly matching sample complexity lower bound for replicable uniformity testing.", "pdf": "https://openreview.net/pdf/758d6cce9c70c9995acb03f75db29817882bc7e5.pdf"} {"title": "Federated Natural Policy Gradient and Actor Critic Methods for Multi-task Reinforcement Learning", "url": "https://openreview.net/forum?id=DUFD6vsyF8", "detail_url": "https://openreview.net/forum?id=DUFD6vsyF8", "authors": "Tong Yang,Shicong Cen,Yuting Wei,Yuxin Chen,Yuejie Chi", "tags": "NIPS 2024,Poster", "abstract": "Federated reinforcement learning (RL) enables collaborative decision making of multiple distributed agents without sharing local data trajectories. In this work, we consider a multi-task setting, in which each agent has its own private reward function corresponding to different tasks, while sharing the same transition kernel of the environment. Focusing on infinite-horizon Markov decision processes, the goal is to learn a globally optimal policy that maximizes the sum of the discounted total rewards of all the agents in a decentralized manner, where each agent only communicates with its neighbors over some prescribed graph topology.\n\nWe develop federated vanilla and entropy-regularized natural policy gradient (NPG) methods in the tabular setting under softmax parameterization, where gradient tracking is applied to estimate the global Q-function to mitigate the impact of imperfect information sharing. We establish non-asymptotic global convergence guarantees under exact policy evaluation, where the rates are nearly independent of the size of the state-action space and illuminate the impacts of network size and connectivity. To the best of our knowledge, this is the first time that global convergence is established for federated multi-task RL using policy optimization. 
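For context on the uniformity-testing entry above, the classic (non-replicable) collision-based tester is sketched below; the paper's replicable tester changes the decision rule, and the acceptance threshold and sample size used here are the textbook ones, stated as assumptions.

import numpy as np

def collision_uniformity_test(samples, n, eps):
    # Estimate ||p||_2^2 from the pairwise collision rate: uniform gives
    # ~1/n, while eps-far (in TV) gives at least (1 + eps^2)/n, so accept
    # "uniform" below a midpoint threshold.
    m = len(samples)
    counts = np.bincount(samples, minlength=n)
    collisions = (counts * (counts - 1) // 2).sum()
    p2_hat = collisions / (m * (m - 1) / 2)     # unbiased for ||p||_2^2
    return p2_hat < (1 + eps**2 / 2) / n        # True => consistent with uniform

rng = np.random.default_rng(0)
n, eps = 1000, 0.5
m = int(20 * np.sqrt(n) / eps**2)               # ~sqrt(n)/eps^2 samples
print(collision_uniformity_test(rng.integers(0, n, size=m), n, eps))       # uniform
print(collision_uniformity_test(rng.integers(0, n // 2, size=m), n, eps))  # far case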
We further go beyond the tabular setting by proposing a federated natural actor critic (NAC) method for multi-task RL with function approximation, and establish its finite-time sample complexity taking the errors of function approximation into account.", "pdf": "https://openreview.net/pdf/488e822a2a6e4eb818a4856f557992e7f2ca3b78.pdf"} {"title": "KOALA: Empirical Lessons Toward Memory-Efficient and Fast Diffusion Models for Text-to-Image Synthesis", "url": "https://openreview.net/forum?id=KNDUBpWV9b", "detail_url": "https://openreview.net/forum?id=KNDUBpWV9b", "authors": "Youngwan Lee,Kwanyong Park,Yoorhim Cho,Yong-Ju Lee,Sung Ju Hwang", "tags": "NIPS 2024,Poster", "abstract": "As text-to-image (T2I) synthesis models increase in size, they demand higher inference costs due to the need for more expensive GPUs with larger memory, which, together with restricted access to training datasets, makes these models challenging to reproduce. Our study aims to reduce these inference costs and explores how far the generative capabilities of T2I models can be extended using only publicly available datasets and open-source models. To this end, by using the de facto standard text-to-image model, Stable Diffusion XL (SDXL), we present three key practices in building an efficient T2I model: (1) Knowledge distillation: we explore how to effectively distill the generation capability of SDXL into an efficient U-Net and find that self-attention is the most crucial part. (2) Data: despite fewer samples, high-resolution images with rich captions are more crucial than a larger number of low-resolution images with short captions. (3) Teacher: Step-distilled Teacher allows T2I models to reduce the noising steps. Based on these findings, we build two types of efficient text-to-image models, called KOALA-Turbo & -Lightning, with two compact U-Nets (1B & 700M), reducing the model size up to 54% and 69% of the SDXL U-Net. In particular, the KOALA-Lightning-700M is 4 times faster than SDXL while still maintaining satisfactory generation quality. Moreover, unlike SDXL, our KOALA models can generate 1024px high-resolution images on consumer-grade GPUs with 8GB of VRAM (3060 Ti). We believe that our KOALA models will have a significant practical impact, serving as cost-effective alternatives to SDXL for academic researchers and general users in resource-constrained environments.", "pdf": "https://openreview.net/pdf/990cdea2e853617bdddeeafd16d4c3981e318afe.pdf"} {"title": "Approximation Rate of the Transformer Architecture for Sequence Modeling", "url": "https://openreview.net/forum?id=ZwS2y21mZV", "detail_url": "https://openreview.net/forum?id=ZwS2y21mZV", "authors": "Haotian Jiang,Qianxiao Li", "tags": "NIPS 2024,Poster", "abstract": "The Transformer architecture is widely applied in sequence modeling applications, yet the theoretical understanding of its working principles remains limited. In this work, we investigate the approximation rate for single-layer Transformers with one head. We consider general non-linear relationships and identify a novel notion of complexity measures to establish an explicit Jackson-type approximation rate estimate for the Transformer. This rate reveals the structural properties of the Transformer and suggests the types of sequential relationships it is best suited for approximating.
In particular, the results on approximation rates enable us to concretely analyze the differences between the Transformer and classical sequence modeling methods, such as recurrent neural networks.", "pdf": "https://openreview.net/pdf/6f4568848f7e4a1891607ede8eab17756a9b3bf9.pdf"} {"title": "Sequence-Augmented SE(3)-Flow Matching For Conditional Protein Generation", "url": "https://openreview.net/forum?id=paYwtPBpyZ", "detail_url": "https://openreview.net/forum?id=paYwtPBpyZ", "authors": "Guillaume Huguet,James Vuckovic,Kilian FATRAS,Eric Thibodeau-Laufer,Pablo Lemos,Riashat Islam,Cheng-Hao Liu,Jarrid Rector-Brooks,Tara Akhound-Sadegh,Michael M. Bronstein,Alexander Tong,Joey Bose", "tags": "NIPS 2024,Poster", "abstract": "Proteins are essential for almost all biological processes and derive their diverse functions from complex $3 \\rm D$ structures, which are in turn determined by their amino acid sequences. \nIn this paper, we exploit the rich biological inductive bias of amino acid sequences and introduce FoldFlow++, a novel sequence-conditioned $\\text{SE}(3)$-equivariant flow matching model for protein structure generation. FoldFlow++ presents substantial new architectural features over the previous FoldFlow family of models including a protein large language model to encode sequence, a new multi-modal fusion trunk that combines structure and sequence representations, and a geometric transformer based decoder. To increase \ndiversity and novelty of generated samples -- crucial for de-novo drug design -- we\ntrain FoldFlow++ at scale on a new dataset \nthat is an order of magnitude \nlarger than PDB datasets of prior works, containing both known proteins in PDB and high-quality synthetic structures achieved through filtering. We further demonstrate the ability to align FoldFlow++ to arbitrary rewards, e.g. increasing secondary structures diversity, by introducing a Reinforced Finetuning (ReFT) objective. We empirically observe that FoldFlow++ outperforms previous state-of-the-art protein structure-based generative models, improving over RFDiffusion in terms of unconditional generation across all metrics including designability, diversity, and novelty across all protein lengths, as well as exhibiting generalization on the task of equilibrium conformation sampling. Finally, we demonstrate that a fine-tuned FoldFlow++ makes progress on challenging conditional design tasks such as designing scaffolds for the VHH nanobody.", "pdf": "https://openreview.net/pdf/503e86547852b43509aa82eecef8210d45232c5b.pdf"} {"title": "Instance-Specific Asymmetric Sensitivity in Differential Privacy", "url": "https://openreview.net/forum?id=4I2aEav51N", "detail_url": "https://openreview.net/forum?id=4I2aEav51N", "authors": "David Durfee", "tags": "NIPS 2024,Poster", "abstract": "We provide a new algorithmic framework for differentially private estimation of general functions that adapts to the hardness of the underlying dataset. We build upon previous work that gives a paradigm for selecting an output through the exponential mechanism based upon closeness of the inverse to the underlying dataset, termed the inverse sensitivity mechanism. Our framework will slightly modify the closeness metric and instead give a simple and efficient application of the sparse vector technique. While the inverse sensitivity mechanism was shown to be instance optimal, it was only with respect to a class of unbiased mechanisms such that the most likely outcome matches the underlying data. 
We break this assumption in order to more naturally navigate the bias-variance tradeoff, which will also critically allow for extending our method to unbounded data. In consideration of this tradeoff, we provide theoretical guarantees and empirical validation that our technique will be particularly effective when the distances to the underlying dataset are asymmetric. This asymmetry is inherent to a range of important problems including fundamental statistics such as variance, as well as commonly used machine learning performance metrics for both classification and regression tasks. We efficiently instantiate our method in $O(n)$ time for these problems and empirically show that our techniques will give substantially improved differentially private estimations.", "pdf": "https://openreview.net/pdf/0aad375ef291f5acd0e3b752aeee45a578120a0c.pdf"} {"title": "Global Distortions from Local Rewards: Neural Coding Strategies in Path-Integrating Neural Systems", "url": "https://openreview.net/forum?id=938EYYewtq", "detail_url": "https://openreview.net/forum?id=938EYYewtq", "authors": "Francisco Acosta,Fatih Dinc,William T Redman,Manu Madhav,David Klindt,Nina Miolane", "tags": "NIPS 2024,Poster", "abstract": "Grid cells in the mammalian brain are fundamental to spatial navigation, and therefore crucial to how animals perceive and interact with their environment. Traditionally, grid cells are thought to support path integration through highly symmetric hexagonal lattice firing patterns. However, recent findings show that their firing patterns become distorted in the presence of significant spatial landmarks such as rewarded locations. This introduces a novel perspective of dynamic, subjective, and action-relevant interactions between spatial representations and environmental cues. Here, we propose a practical and theoretical framework to quantify and explain these interactions. To this end, we train path-integrating recurrent neural networks (piRNNs) on a spatial navigation task, whose goal is to predict the agent's position with a special focus on rewarded locations. Grid-like neurons naturally emerge from the training of piRNNs, which allows us to investigate how the two aspects of the task, space and reward, are integrated in their firing patterns. We find that geometry, but not topology, of the grid cell population code becomes distorted. Surprisingly, these distortions are global in the firing patterns of the grid cells despite local changes in the reward. Our results indicate that after training with location-specific reward information, the preserved representational topology supports successful path integration, whereas the emergent heterogeneity in individual responses due to global distortions may encode dynamically changing environmental cues. By bridging the gap between computational models and the biological reality of spatial navigation under reward information, we offer new insights into how neural systems prioritize environmental landmarks in their spatial navigation code.", "pdf": "https://openreview.net/pdf/debf1b98ac3fe0b80920ce69a18bb8c63161629a.pdf"} {"title": "Evidence of Learned Look-Ahead in a Chess-Playing Neural Network", "url": "https://openreview.net/forum?id=8zg9sO4ttV", "detail_url": "https://openreview.net/forum?id=8zg9sO4ttV", "authors": "Erik Jenner,Shreyas Kapur,Vasil Georgiev,Cameron Allen,Scott Emmons,Stuart Russell", "tags": "NIPS 2024,Poster", "abstract": "Do neural networks learn to implement algorithms such as look-ahead or search \"in the wild\"?
Or do they rely purely on collections of simple heuristics? We present evidence of *learned look-ahead* in the policy and value network of Leela Chess Zero, the currently strongest deep neural chess engine. We find that Leela internally represents future optimal moves and that these representations are crucial for its final output in certain board states. Concretely, we exploit the fact that Leela is a transformer that treats every chessboard square like a token in language models, and give three lines of evidence: (1) activations on certain squares of future moves are unusually important causally; (2) we find attention heads that move important information \"forward and backward in time,\" e.g., from squares of future moves to squares of earlier ones; and (3) we train a simple probe that can predict the optimal move 2 turns ahead with 92% accuracy (in board states where Leela finds a single best line). These findings are clear evidence of learned look-ahead in neural networks and might be a step towards a better understanding of their capabilities.", "pdf": "https://openreview.net/pdf/98e209a479c721f844ccc584f9ca15097944a47a.pdf"} {"title": "The Closeness of In-Context Learning and Weight Shifting for Softmax Regression", "url": "https://openreview.net/forum?id=SFaEENfEyw", "detail_url": "https://openreview.net/forum?id=SFaEENfEyw", "authors": "Shuai Li,Zhao Song,Yu Xia,Tong Yu,Tianyi Zhou", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) are known for their exceptional performance in natural language processing, making them highly effective in many human life-related tasks. The attention mechanism in the Transformer architecture is a critical component of LLMs, as it allows the model to selectively focus on specific input parts. The softmax unit, which is a key part of the attention mechanism, normalizes the attention scores. Hence, the performance of LLMs in various NLP tasks depends significantly on the crucial role played by the attention mechanism with the softmax unit.\n\nIn-context learning is one of the celebrated abilities of recent LLMs. \nWithout further parameter updates, Transformers can learn to predict based on a few in-context examples. \nHowever, the reason why Transformers become in-context learners is not well understood.\nRecently, in-context learning has been studied from a mathematical perspective with simplified linear self-attention without the softmax unit. \nBased on a linear regression formulation $\min_x\| Ax - b \|_2$, existing works show linear Transformers' capability of learning linear functions in context. The capability of Transformers with softmax unit approaching full Transformers, however, remains unexplored.\n\nIn this work, we study in-context learning based on a softmax regression formulation $\min_{x} \| \langle \exp(Ax), {\bf 1}_n \rangle^{-1} \exp(Ax) - b \|_2$.
We show upper bounds on the data transformations induced by a single self-attention layer with a softmax unit and by gradient descent on an $\ell_2$ regression loss for the softmax prediction function.\nOur theoretical results imply that when training self-attention-only Transformers for fundamental regression tasks, the models learned by gradient descent and Transformers show great similarity.", "pdf": "https://openreview.net/pdf/9b0fac1a2e82221eaedb68182d5435c9ddb47ba6.pdf"} {"title": "Membership Inference Attacks against Large Vision-Language Models", "url": "https://openreview.net/forum?id=nv2Qt5cj1a", "detail_url": "https://openreview.net/forum?id=nv2Qt5cj1a", "authors": "Zhan Li,Yongtao Wu,Yihang Chen,Francesco Tonin,Elias Abad Rocamora,Volkan Cevher", "tags": "NIPS 2024,Poster", "abstract": "Large vision-language models (VLLMs) exhibit promising capabilities for processing multi-modal tasks across various application scenarios. However, their emergence also raises significant data security concerns, given the potential inclusion of sensitive information, such as private photos and medical records, in their training datasets. Detecting inappropriately used data in VLLMs remains a critical and unresolved issue, mainly due to the lack of standardized datasets and suitable methodologies. In this study, we introduce the first membership inference attack (MIA) benchmark tailored for various VLLMs to facilitate training data detection. Then, we propose a novel MIA pipeline specifically designed for token-level image detection. Lastly, we present a new metric called MaxR\u00e9nyi-K%, which is based on the confidence of the model output and applies to both text and image data. We believe that our work can deepen the understanding and methodology of MIAs in the context of VLLMs. Our code and datasets are available at https://github.com/LIONS-EPFL/VL-MIA.", "pdf": "https://openreview.net/pdf/3454120d38fdf1e3d5b8fef5891e49ff7e98a260.pdf"} {"title": "RL in Latent MDPs is Tractable: Online Guarantees via Off-Policy Evaluation", "url": "https://openreview.net/forum?id=juJl2uSq4D", "detail_url": "https://openreview.net/forum?id=juJl2uSq4D", "authors": "Jeongyeol Kwon,Shie Mannor,Constantine Caramanis,Yonathan Efroni", "tags": "NIPS 2024,Poster", "abstract": "In many real-world decision problems there is partially observed, hidden or latent information that remains fixed throughout an interaction. \nSuch decision problems can be modeled as Latent Markov Decision Processes (LMDPs), where a latent variable is selected at the beginning of an interaction and is not disclosed to the agent initially. \nIn the last decade, there has been significant progress in designing learning algorithms for solving LMDPs under different structural assumptions. However, for general LMDPs, there is no known learning algorithm that provably matches the existing lower bound. We effectively resolve this open question, introducing the first sample-efficient algorithm for LMDPs without *any additional structural assumptions*. \nOur result builds on a new perspective on the role of off-policy evaluation guarantees and coverage coefficients in LMDPs, a perspective which has been overlooked in the context of exploration in partially observed environments. Specifically, we establish a novel off-policy evaluation lemma and introduce a new coverage coefficient for LMDPs. Then, we show how these can be used to derive near-optimal guarantees of an optimistic exploration algorithm.
\nThese results, we believe, can be valuable for a wide range of interactive learning problems beyond the LMDP class, and especially for partially observed environments.", "pdf": "https://openreview.net/pdf/7cb3accdc676adfb6fd5e4f3ef210733b0c422f6.pdf"} {"title": "Inevitable Trade-off between Watermark Strength and Speculative Sampling Efficiency for Language Models", "url": "https://openreview.net/forum?id=6YKMBUiIsG", "detail_url": "https://openreview.net/forum?id=6YKMBUiIsG", "authors": "Zhengmian Hu,Heng Huang", "tags": "NIPS 2024,Poster", "abstract": "Large language models are probabilistic models, and the process of generating content is essentially sampling from the output distribution of the language model. Existing watermarking techniques inject watermarks into the generated content without altering the output quality. On the other hand, existing acceleration techniques, specifically speculative sampling, leverage a draft model to speed up the sampling process while preserving the output distribution. However, there is no known method to simultaneously accelerate the sampling process and inject watermarks into the generated content. In this paper, we investigate this direction and find that the integration of watermarking and acceleration is non-trivial. We prove a no-go theorem, which states that it is impossible to simultaneously maintain the highest watermark strength and the highest sampling efficiency. Furthermore, we propose two methods that maintain either the sampling efficiency or the watermark strength, but not both. Our work provides a rigorous theoretical foundation for understanding the inherent trade-off between watermark strength and sampling efficiency in accelerating the generation of watermarked tokens for large language models. We also conduct numerical experiments to validate our theoretical findings and demonstrate the effectiveness of the proposed methods.", "pdf": "https://openreview.net/pdf/68e041a83c4cd5e3fe28f8340825479f5cae31fa.pdf"} {"title": "Not so griddy: Internal representations of RNNs path integrating more than one agent", "url": "https://openreview.net/forum?id=dsMSWUBN8f", "detail_url": "https://openreview.net/forum?id=dsMSWUBN8f", "authors": "William T Redman,Francisco Acosta,Santiago Acosta-Mendoza,Nina Miolane", "tags": "NIPS 2024,Poster", "abstract": "Success in collaborative and competitive environments, where agents must work with or against each other, requires individuals to encode the position and trajectory of themselves and others. Decades of neurophysiological experiments have shed light on how brain regions [e.g., medial entorhinal cortex (MEC), hippocampus] encode the self's position and trajectory. However, it has only recently been discovered that MEC and hippocampus are modulated by the positions and trajectories of others. To understand how encoding spatial information of multiple agents shapes neural representations, we train a recurrent neural network (RNN) model that captures properties of MEC to path integrate trajectories of two agents simultaneously navigating the same environment. We find significant differences between these RNNs and those trained to path integrate only a single agent. At the individual unit level, RNNs trained to path integrate more than one agent develop weaker grid responses, stronger border responses, and tuning for the relative position of the two agents. At the population level, they develop more distributed and robust representations, with changes in network dynamics and manifold topology.
Our results provide testable predictions and open new directions with which to study the neural computations supporting spatial navigation.", "pdf": "https://openreview.net/pdf/0f3ace1ce6e51e26c53ba9af87b1a29671d318fd.pdf"} {"title": "Model Based Inference of Synaptic Plasticity Rules", "url": "https://openreview.net/forum?id=rI80PHlnFm", "detail_url": "https://openreview.net/forum?id=rI80PHlnFm", "authors": "Yash Mehta,Danil Tyulmankov,Adithya E. Rajagopalan,Glenn C Turner,James E Fitzgerald,Jan Funke", "tags": "NIPS 2024,Poster", "abstract": "Inferring the synaptic plasticity rules that govern learning in the brain is a key challenge in neuroscience. We present a novel computational method to infer these rules from experimental data, applicable to both neural and behavioral data. Our approach approximates plasticity rules using a parameterized function, employing either truncated Taylor series for theoretical interpretability or multilayer perceptrons. These plasticity parameters are optimized via gradient descent over entire trajectories to align closely with observed neural activity or behavioral learning dynamics. This method can uncover complex rules that induce long nonlinear time dependencies, particularly involving factors like postsynaptic activity and current synaptic weights. We validate our approach through simulations, successfully recovering established rules such as Oja's, as well as more intricate plasticity rules with reward-modulated terms. We assess the robustness of our technique to noise and apply it to behavioral data from \\textit{Drosophila} in a probabilistic reward-learning experiment. Notably, our findings reveal an active forgetting component in reward learning in flies, improving predictive accuracy over previous models. This modeling framework offers a promising new avenue for elucidating the computational principles of synaptic plasticity and learning in the brain.", "pdf": "https://openreview.net/pdf/a21c61f9cd6d8ed27bf3d681bd43b7ddb7a63c58.pdf"} {"title": "Molecule Generation with Fragment Retrieval Augmentation", "url": "https://openreview.net/forum?id=56Q0qggDlp", "detail_url": "https://openreview.net/forum?id=56Q0qggDlp", "authors": "Seul Lee,Karsten Kreis,Srimukh Prasad Veccham,Meng Liu,Danny Reidenbach,Saee Gopal Paliwal,Arash Vahdat,Weili Nie", "tags": "NIPS 2024,Poster", "abstract": "Fragment-based drug discovery, in which molecular fragments are assembled into new molecules with desirable biochemical properties, has achieved great success. However, many fragment-based molecule generation methods show limited exploration beyond the existing fragments in the database as they only reassemble or slightly modify the given ones. To tackle this problem, we propose a new fragment-based molecule generation framework with retrieval augmentation, namely *Fragment Retrieval-Augmented Generation* (*f*-RAG). *f*-RAG is based on a pre-trained molecular generative model that proposes additional fragments from input fragments to complete and generate a new molecule. Given a fragment vocabulary, *f*-RAG retrieves two types of fragments: (1) *hard fragments*, which serve as building blocks that will be explicitly included in the newly generated molecule, and (2) *soft fragments*, which serve as reference to guide the generation of new fragments through a trainable *fragment injection module*. 
To extrapolate beyond the existing fragments, *f*-RAG updates the fragment vocabulary with generated fragments via an iterative refinement process which is further enhanced with post-hoc genetic fragment modification. *f*-RAG can achieve an improved exploration-exploitation trade-off by maintaining a pool of fragments and expanding it with novel and high-quality fragments through a strong generative prior.", "pdf": "https://openreview.net/pdf/3a31bc6eb3d560a7d1c15b10673917f27b0bb969.pdf"} {"title": "Distributional Successor Features Enable Zero-Shot Policy Optimization", "url": "https://openreview.net/forum?id=8IysmgZte4", "detail_url": "https://openreview.net/forum?id=8IysmgZte4", "authors": "Chuning Zhu,Xinqi Wang,Tyler Han,Simon Shaolei Du,Abhishek Gupta", "tags": "NIPS 2024,Poster", "abstract": "Intelligent agents must be generalists, capable of quickly adapting to various tasks. In reinforcement learning (RL), model-based RL learns a dynamics model of the world, in principle enabling transfer to arbitrary reward functions through planning. However, autoregressive model rollouts suffer from compounding error, making model-based RL ineffective for long-horizon problems. Successor features offer an alternative by modeling a policy's long-term state occupancy, reducing policy evaluation under new rewards to linear regression. Yet, policy optimization with successor features can be challenging. This work proposes a novel class of models, i.e., Distributional Successor Features for Zero-Shot Policy Optimization (DiSPOs), that learn a distribution of successor features of a stationary dataset's behavior policy, along with a policy that acts to realize different successor features within the dataset. By directly modeling long-term outcomes in the dataset, DiSPOs avoid compounding error while enabling a simple scheme for zero-shot policy optimization across reward functions. We present a practical instantiation of DiSPOs using diffusion models and show their efficacy as a new class of transferable models, both theoretically and empirically across various simulated robotics problems. Videos and code are available at https://weirdlabuw.github.io/dispo/.", "pdf": "https://openreview.net/pdf/817351d49af97a24de0f029ade8834c9f09c2132.pdf"} {"title": "ChatQA: Surpassing GPT-4 on Conversational QA and RAG", "url": "https://openreview.net/forum?id=bkUvKPKafQ", "detail_url": "https://openreview.net/forum?id=bkUvKPKafQ", "authors": "Zihan Liu,Wei Ping,Rajarshi Roy,Peng Xu,Chankyu Lee,Mohammad Shoeybi,Bryan Catanzaro", "tags": "NIPS 2024,Poster", "abstract": "In this work, we introduce ChatQA, a suite of models that outperform GPT-4 on retrieval-augmented generation (RAG) and conversational question answering (QA). To enhance generation, we propose a two-stage instruction tuning method that significantly boosts the performance of RAG. For effective retrieval, we introduce a dense retriever optimized for conversational QA, which yields results comparable to the alternative state-of-the-art query rewriting models, while substantially reducing deployment costs. We also present the ChatRAG Bench, which encompasses ten datasets covering comprehensive evaluations on RAG, table-related QA, arithmetic calculations, and scenarios involving unanswerable questions. 
Our ChatQA-1.0-70B (score: 54.14), built on Llama2, a weaker foundation model than GPT-4, can slightly outperform GPT-4-0613 (score: 53.90) and GPT-4-Turbo-2024-04-09 (score: 54.03) on the ChatRAG Bench, without relying on any synthetic data from OpenAI GPT models. Notably, the Llama3-ChatQA-1.5-70B model surpasses the accuracy of GPT-4-Turbo-2024-04-09 by a clear margin. These results demonstrate the exceptional quality of the proposed ChatQA recipe. To advance research in this field, we open-sourced the model weights, instruction tuning data, ChatRAG Bench, and retriever for the community.", "pdf": "https://openreview.net/pdf/a9868221a3b9bd4e6f654789c9d0a165d1ba3259.pdf"} {"title": "Fair and Welfare-Efficient Constrained Multi-Matchings under Uncertainty", "url": "https://openreview.net/forum?id=6KThdqFgmA", "detail_url": "https://openreview.net/forum?id=6KThdqFgmA", "authors": "Elita Lobo,Justin Payan,Cyrus Cousins,Yair Zick", "tags": "NIPS 2024,Poster", "abstract": "We study fair allocation of constrained resources, where a market designer optimizes overall welfare while maintaining group fairness. In many large-scale settings, utilities are not known in advance, but are instead observed after realizing the allocation. We therefore estimate agent utilities using machine learning. Optimizing over estimates requires trading off between mean utilities and their predictive variances. We discuss these trade-offs under two paradigms for preference modeling \u2013 in the stochastic optimization regime, the market designer has access to a probability distribution over utilities, and in the robust optimization regime they have access to an uncertainty set containing the true utilities with high probability. We discuss utilitarian and egalitarian welfare objectives, and we explore how to optimize for them under stochastic and robust paradigms. We demonstrate the efficacy of our approaches on three publicly available conference reviewer assignment datasets. The approaches presented enable scalable constrained resource allocation under uncertainty for many combinations of objectives and preference models.", "pdf": "https://openreview.net/pdf/1c1d3f59b290af49011ec78a1862ec731cfa294f.pdf"} {"title": "Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning", "url": "https://openreview.net/forum?id=QEaHE4TUgc", "detail_url": "https://openreview.net/forum?id=QEaHE4TUgc", "authors": "Aneesh Muppidi,Zhiyu Zhang,Heng Yang", "tags": "NIPS 2024,Poster", "abstract": "A key challenge in lifelong reinforcement learning (RL) is the loss of plasticity, where previous learning progress hinders an agent's adaptation to new tasks. While regularization and resetting can help, they require precise hyperparameter selection at the outset and environment-dependent adjustments. Building on the principled theory of online convex optimization, we present a parameter-free optimizer for lifelong RL, called TRAC, which requires no tuning or prior knowledge about the distribution shifts.
Extensive experiments on Procgen, Atari, and Gym Control environments show that TRAC works surprisingly well\u2014mitigating loss of plasticity and rapidly adapting to challenging distribution shifts\u2014despite the underlying optimization problem being nonconvex and nonstationary.", "pdf": "https://openreview.net/pdf/6b8a32dafd259e58701a5912cf3c17a7dc37c02d.pdf"} {"title": "Beyond Accuracy: Ensuring Correct Predictions With Correct Rationales", "url": "https://openreview.net/forum?id=ADV0Pzi3Ol", "detail_url": "https://openreview.net/forum?id=ADV0Pzi3Ol", "authors": "Tang Li,Mengmeng Ma,Xi Peng", "tags": "NIPS 2024,Poster", "abstract": "Large pretrained foundation models demonstrate exceptional performance and, in some high-stakes applications, even surpass human experts. However, most of these models are currently evaluated primarily on prediction accuracy, overlooking the validity of the rationales behind their accurate predictions. For the safe deployment of foundation models, there is a pressing need to ensure *double-correct predictions*, *i.e.*, correct prediction backed by correct rationales. To achieve this, we propose a two-phase scheme: First, we curate a new dataset that offers structured rationales for visual recognition tasks. Second, we propose a rationale-informed optimization method to guide the model in disentangling and localizing visual evidence for each rationale, without requiring manual annotations. Extensive experiments and ablation studies demonstrate that our model outperforms state-of-the-art models by up to 10.1\\% in prediction accuracy across a wide range of tasks. Furthermore, our method significantly improves the model's rationale correctness, improving localization by 7.5\\% and disentanglement by 36.5\\%. Our dataset, source code, and pretrained weights: https://github.com/deep-real/DCP", "pdf": "https://openreview.net/pdf/8820a69c919942b3e26f6a4e1b14c3b161a45708.pdf"} {"title": "Truncated Variance Reduced Value Iteration", "url": "https://openreview.net/forum?id=BiikUm6pLu", "detail_url": "https://openreview.net/forum?id=BiikUm6pLu", "authors": "Yujia Jin,Ishani Karmarkar,Aaron Sidford,Jiayi Wang", "tags": "NIPS 2024,Poster", "abstract": "We provide faster randomized algorithms for computing an $\\epsilon$-optimal policy in a discounted Markov decision process with $A_{\\text{tot}}$-state-action pairs, bounded rewards, and discount factor $\\gamma$. We provide an $\\tilde{O}(A_{\\text{tot}}[(1 - \\gamma)^{-3}\\epsilon^{-2} + (1 - \\gamma)^{-2}])$-time algorithm in the sampling setting, where the probability transition matrix is unknown but accessible through a generative model which can be queried in $\\tilde{O}(1)$-time, and an $\\tilde{O}(s + (1-\\gamma)^{-2})$-time algorithm in the offline setting where the probability transition matrix is known and $s$-sparse. These results improve upon the prior state-of-the-art which either ran in $\\tilde{O}(A_{\\text{tot}}[(1 - \\gamma)^{-3}\\epsilon^{-2} + (1 - \\gamma)^{-3}])$ time [Sidford, Wang, Wu, Ye 2018] in the sampling setting, $\\tilde{O}(s + A_{\\text{tot}} (1-\\gamma)^{-3})$ time [Sidford, Wang, Wu, Yang, Ye 2018] in the offline setting, or time at least quadratic in the number of states using interior point methods for linear programming. We achieve our results by building upon prior stochastic variance-reduced value iteration methods [Sidford, Wang, Wu, Yang, Ye 2018]. 
We provide a variant that carefully truncates the progress of its iterates to improve the variance of new variance-reduced sampling procedures that we introduce to implement the steps. Our method is essentially model-free and can be implemented in $\tilde{O}(A_{\text{tot}})$-space when given generative model access. Consequently, our results take a step in closing the sample-complexity gap between model-free and model-based methods.", "pdf": "https://openreview.net/pdf/5c99378254a82f831d456fd8e9521fc1bd8993a3.pdf"} {"title": "Exploring Consistency in Graph Representations: from Graph Kernels to Graph Neural Networks", "url": "https://openreview.net/forum?id=dg0hO4M11K", "detail_url": "https://openreview.net/forum?id=dg0hO4M11K", "authors": "Xuyuan Liu,Yinghao Cai,Qihui Yang,Yujun Yan", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) have emerged as a dominant approach in graph representation learning, yet they often struggle to capture consistent similarity relationships among graphs. While graph kernel methods like the Weisfeiler-Lehman subtree (WL-subtree) and Weisfeiler-Lehman optimal assignment (WLOA) kernels capture similarity relationships effectively, they are heavily reliant on predefined kernels and lack sufficient non-linearities. Our work aims to bridge the gap between neural network methods and kernel approaches by enabling GNNs to consistently capture relational structures in their learned representations. Given the analogy between the message-passing process of GNNs and WL algorithms, we thoroughly compare and analyze the properties of WL-subtree and WLOA kernels. We find that the similarities captured by WLOA at different iterations are asymptotically consistent, ensuring that similar graphs remain similar in subsequent iterations, thereby leading to superior performance over the WL-subtree kernel. Inspired by these findings, we conjecture that the consistency in the similarities of graph representations across GNN layers is crucial in capturing relational structures and enhancing graph classification performance. Thus, we propose a loss to enforce the similarity of graph representations to be consistent across different layers. Our empirical analysis verifies our conjecture and shows that our proposed consistency loss can significantly enhance graph classification performance across several GNN backbones on various datasets.", "pdf": "https://openreview.net/pdf/0a1072b7d2f43f6a98c52aefe66bbf8fc7dd79fa.pdf"} {"title": "DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving", "url": "https://openreview.net/forum?id=zLU21oQjD5", "detail_url": "https://openreview.net/forum?id=zLU21oQjD5", "authors": "Yuxuan Tong,Xiwen Zhang,Rui Wang,Ruidong Wu,Junxian He", "tags": "NIPS 2024,Poster", "abstract": "Solving mathematical problems requires advanced reasoning abilities and presents notable challenges for large language models. Previous works usually synthesize data from proprietary models to augment existing datasets, followed by instruction tuning to achieve top-tier results.
However, our analysis of these datasets reveals severe biases towards easy queries, with frequent failures to generate any correct response for the most challenging queries.\nHypothesizing that difficult queries are crucial to learning complex reasoning, we propose *Difficulty-Aware Rejection Tuning* (`DART`), a method that allocates more trials to difficult queries during the synthesis phase, enabling more extensive training on difficult samples.\nUtilizing `DART`, we have created new datasets for mathematical problem-solving that focus more on difficult queries and are substantially smaller than previous ones. Remarkably, our synthesis process relies solely on a 7B-sized open-weight model, with no reliance on the commonly used proprietary GPT-4.\nWe fine-tune various base models on our datasets ranging from 7B to 70B in size, resulting in a series of strong models called `DART-Math`.\nIn comprehensive in-domain and out-of-domain evaluation on 6 mathematical benchmarks, `DART-Math` outperforms vanilla rejection tuning significantly, and is superior or comparable to prior methods, despite using much smaller datasets and no proprietary models. Furthermore, our results position our synthetic datasets as the most effective and cost-efficient publicly available resources for advancing mathematical problem-solving. Our datasets, models and code are publicly available at https://github.com/hkust-nlp/dart-math.", "pdf": "https://openreview.net/pdf/26d6bf8a231686aaa5faf9277e38c2b2d934ff28.pdf"} {"title": "Enhancing Large Vision Language Models with Self-Training on Image Comprehension", "url": "https://openreview.net/forum?id=FZW7Ctyjm3", "detail_url": "https://openreview.net/forum?id=FZW7Ctyjm3", "authors": "Yihe Deng,Pan Lu,Fan Yin,Ziniu Hu,Sheng Shen,Quanquan Gu,James Zou,Kai-Wei Chang,Wei Wang", "tags": "NIPS 2024,Poster", "abstract": "Large vision language models (LVLMs) integrate large language models (LLMs) with pre-trained vision encoders, thereby activating the perception capability of the model to understand image inputs for different queries and conduct subsequent reasoning. Improving this capability requires high-quality vision-language data, which is costly and labor-intensive to acquire. Self-training approaches have been effective in single-modal settings to alleviate the need for labeled data by leveraging the model's own generation. However, effective self-training remains a challenge regarding the unique visual perception and reasoning capability of LVLMs. To address this, we introduce **S**elf-**T**raining on **I**mage **C**omprehension (**STIC**), which emphasizes a self-training approach specifically for image comprehension. First, the model self-constructs a preference dataset for image descriptions using unlabeled images. Preferred responses are generated through a step-by-step prompt, while dis-preferred responses are generated from either corrupted images or misleading prompts. To further self-improve reasoning on the extracted visual information, we let the model reuse a small portion of existing instruction-tuning data and append its self-generated image descriptions to the prompts. We validate the effectiveness of STIC across seven different benchmarks, demonstrating substantial performance gains of 4.0% on average while using 70% less supervised fine-tuning data than the current method.
Further studies dive into various components of STIC and highlight its potential to leverage vast quantities of unlabeled images for self-training.", "pdf": "https://openreview.net/pdf/695c912cb41743419d359d3224d1ec5f605d986c.pdf"} {"title": "DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning", "url": "https://openreview.net/forum?id=4XTvXMSZPO", "detail_url": "https://openreview.net/forum?id=4XTvXMSZPO", "authors": "Hao Bai,Yifei Zhou,Jiayi Pan,Mert Cemri,Alane Suhr,Sergey Levine,Aviral Kumar", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained vision language models (VLMs), though powerful, typically lack training on decision-centric data, rendering them sub-optimal for decision-making tasks such as in-the-wild device control through Graphical User Interfaces (GUIs) when used off-the-shelf. While training with static demonstrations has shown some promise, we show that such methods fall short when controlling real GUIs due to their failure to deal with real world stochasticity and dynamism not captured in static observational data. This paper introduces a novel autonomous RL approach, called DigiRL, for training in-the-wild device control agents through fine-tuning a pre-trained VLM in two stages: offline and offline-to-online RL. We first build a scalable and parallelizable Android learning environment equipped with a VLM-based general-purpose evaluator and then identify the key design choices for simple and effective RL in this domain. We demonstrate the effectiveness of DigiRL using the Android-in-the-Wild (AitW) dataset, where our 1.5B VLM trained with RL achieves a 49.5\\% absolute improvement -- from 17.7 to 67.2\\% success rate -- over supervised fine-tuning with static human demonstration data. It is worth noting that such improvement is achieved without any additional supervision or demonstration data. These results significantly surpass not only the prior best agents, including AppAgent with GPT-4V (8.3\\% success rate) and the 17B CogAgent trained with AitW data (14.4\\%), but also our implementation of prior best autonomous RL approach based on filtered behavior cloning (57.8\\%), thereby establishing a new state-of-the-art for digital agents for in-the-wild device control.", "pdf": "https://openreview.net/pdf/53508ea1db0056abe7a6fb24ad516a8a40675570.pdf"} {"title": "Transformers Can Do Arithmetic with the Right Embeddings", "url": "https://openreview.net/forum?id=aIyNLWXuDO", "detail_url": "https://openreview.net/forum?id=aIyNLWXuDO", "authors": "Sean Michael McLeish,Arpit Bansal,Alex Stein,Neel Jain,John Kirchenbauer,Brian R. Bartoldson,Bhavya Kailkhura,Abhinav Bhatele,Jonas Geiping,Avi Schwarzschild,Tom Goldstein", "tags": "NIPS 2024,Poster", "abstract": "The poor performance of transformers on arithmetic tasks seems to stem in large part from their inability to keep track of the exact position of each digit inside of a large span of digits. We mend this problem by adding an embedding to each digit that encodes its position relative to the start of the number. In addition to the boost these embeddings provide on their own, we show that this fix enables architectural modifications such as input injection and recurrent layers to improve performance even further.\n\nWith positions resolved, we can study the logical extrapolation ability of transformers. Can they solve arithmetic problems that are larger and more complex than those in their training data? 
We find that by training on only 20-digit numbers with a single GPU for one day, we can reach state-of-the-art performance, achieving up to 99% accuracy on 100-digit addition problems. Finally, we show that these gains in numeracy also unlock improvements on other multi-step reasoning tasks including sorting and multiplication.", "pdf": "https://openreview.net/pdf/117de1c5d2dffbd27845e6c194dc1e86ed64dae1.pdf"} {"title": "PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond", "url": "https://openreview.net/forum?id=S8wFXyT4dY", "detail_url": "https://openreview.net/forum?id=S8wFXyT4dY", "authors": "Chen Song,Zhenxiao Liang,Bo Sun,Qixing Huang", "tags": "NIPS 2024,Poster", "abstract": "We present Parametric Piecewise Linear Networks (PPLNs) for temporal vision inference. Motivated by the neuromorphic principles that regulate biological neural behaviors, PPLNs are ideal for processing data captured by event cameras, which are built to simulate neural activities in the human retina. We discuss how to represent the membrane potential of an artificial neuron by a parametric piecewise linear function with learnable coefficients. This design echoes the idea of building deep models from learnable parametric functions recently popularized by Kolmogorov\u2013Arnold Networks (KANs). Experiments demonstrate the state-of-the-art performance of PPLNs in event-based and image-based vision applications, including steering prediction, human pose estimation, and motion deblurring.", "pdf": "https://openreview.net/pdf/417d474e89b240b7a53779c7754ebcf2c539d636.pdf"} {"title": "Training Data Attribution via Approximate Unrolling", "url": "https://openreview.net/forum?id=3NaqGg92KZ", "detail_url": "https://openreview.net/forum?id=3NaqGg92KZ", "authors": "Juhan Bae,Wu Lin,Jonathan Lorraine,Roger Baker Grosse", "tags": "NIPS 2024,Poster", "abstract": "Many training data attribution (TDA) methods aim to estimate how a model's behavior would change if one or more data points were removed from the training set. Methods based on implicit differentiation, such as influence functions, can be made computationally efficient, but fail to account for underspecification, the implicit bias of the optimization algorithm, or multi-stage training pipelines. By contrast, methods based on unrolling address these issues but face scalability challenges. In this work, we connect the implicit-differentiation-based and unrolling-based approaches and combine their benefits by introducing Source, an approximate unrolling-based TDA method that is computed using an influence-function-like formula. While being computationally efficient compared to unrolling-based approaches, Source is suitable in cases where implicit-differentiation-based approaches struggle, such as in non-converged models and multi-stage training pipelines. Empirically, Source outperforms existing TDA techniques in counterfactual prediction, especially in settings where implicit-differentiation-based approaches fall short.", "pdf": "https://openreview.net/pdf/2b18f215c328fe126573df66590c6a5a8fedaf2e.pdf"} {"title": "Learning to Price Homogeneous Data", "url": "https://openreview.net/forum?id=KoyTqNs6SZ", "detail_url": "https://openreview.net/forum?id=KoyTqNs6SZ", "authors": "Keran Chen,Joon Suk Huh,Kirthevasan Kandasamy", "tags": "NIPS 2024,Poster", "abstract": "We study a data pricing problem, where a seller has access to $N$ homogeneous data points (e.g. drawn i.i.d.
from some distribution).\nThere are $m$ types of buyers in the market, where buyers of the same type $i$ have the same valuation curve $v_i:[N]\rightarrow [0,1]$, where $v_i(n)$ is the value for having $n$ data points.\n*A priori*, the seller is unaware of the\ndistribution of buyers, but can repeat the market for $T$ rounds so as to learn the revenue-optimal pricing curve $p:[N] \rightarrow [0, 1]$.\nTo solve this online learning problem,\nwe first develop novel discretization schemes to approximate any pricing curve.\nWhen compared to prior work,\nthe size of our discretization schemes scales gracefully with the approximation parameter, which translates to better regret in online learning.\nUnder assumptions like smoothness and diminishing returns which are satisfied by data, the discretization size can be reduced further.\nWe then turn to the online learning problem, \nboth in the stochastic and adversarial settings.\nOn each round, the seller chooses an *anonymous* pricing curve $p_t$.\nA new buyer appears and may choose to purchase some amount of data.\nShe then reveals her type *only if* she makes a purchase.\nOur online algorithms build on classical algorithms such as UCB and FTPL, but require novel ideas to account for the asymmetric nature of this feedback and to deal with the vastness of the space of pricing curves.\nUsing the improved discretization schemes previously developed, we are able to achieve \n$\widetilde{O}(m\sqrt{T})$ regret in the stochastic setting and $\widetilde{O}(m^{3/2}\sqrt{T})$ regret in the adversarial setting.", "pdf": "https://openreview.net/pdf/1e7642af4622e1cbe75b01e13b1e873f13e750c3.pdf"} {"title": "The Intelligible and Effective Graph Neural Additive Network", "url": "https://openreview.net/forum?id=SKY1ScUTwA", "detail_url": "https://openreview.net/forum?id=SKY1ScUTwA", "authors": "Maya Bechler-Speicher,Amir Globerson,Ran Gilad-Bachrach", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) have emerged as the predominant approach for learning over graph-structured data. However, most GNNs operate as black-box models and require post-hoc explanations, which may not suffice in high-stakes scenarios where transparency is crucial.\nIn this paper, we present a GNN that is interpretable by design. Our model, Graph Neural Additive Network (GNAN), is a novel extension of the interpretable class of Generalized Additive Models, and can be visualized and fully understood by humans. GNAN is designed to be fully interpretable, offering both global and local explanations at the feature and graph levels through direct visualization of the model. These visualizations describe exactly how the model uses the relationships between the target variable, the features, and the graph. We demonstrate the intelligibility of GNANs in a series of examples on different tasks and datasets. In addition, we show that the accuracy of GNAN is on par with black-box GNNs, making it suitable for critical applications where transparency is essential, alongside high accuracy.", "pdf": "https://openreview.net/pdf/5bc2864f36ab7bab95db7e805d80b1ef0325522b.pdf"} {"title": "OTTER: Effortless Label Distribution Adaptation of Zero-shot Models", "url": "https://openreview.net/forum?id=RsawwSBCs7", "detail_url": "https://openreview.net/forum?id=RsawwSBCs7", "authors": "Changho Shin,Jitian Zhao,Sonia Cromp,Harit Vishwakarma,Frederic Sala", "tags": "NIPS 2024,Poster", "abstract": "Popular zero-shot models suffer due to artifacts inherited from pretraining.
One particularly detrimental issue, caused by unbalanced web-scale pretraining data, is mismatched label distribution. Existing approaches that seek to repair the label distribution are not suitable in zero-shot settings, as they have mismatched requirements, such as needing access to labeled downstream task data or knowledge of the true label balance in the pretraining distribution. We sidestep these challenges and introduce a simple and lightweight approach to adjust pretrained model predictions via optimal transport. Our technique requires only an estimate of the label distribution of a downstream task. Theoretically, we characterize the improvement produced by our procedure under certain mild conditions and provide bounds on the error caused by misspecification. Empirically, we validate our method in a wide array of zero-shot image and text classification tasks, improving accuracy by 4.8% and 15.9% on average, and beating baselines like prior matching---often by significant margins---in 17 out of 21 datasets.", "pdf": "https://openreview.net/pdf/561648a4c9135b4b0f37b415e1b810391a5b4bbe.pdf"} {"title": "Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization", "url": "https://openreview.net/forum?id=yySpldUsU2", "detail_url": "https://openreview.net/forum?id=yySpldUsU2", "authors": "Dang Nguyen,Paymon Haddad,Eric Gan,Baharan Mirzasoleiman", "tags": "NIPS 2024,Poster", "abstract": "Can we modify the training data distribution to encourage the underlying optimization method toward finding solutions with superior generalization performance on in-distribution data? In this work, we approach this question for the first time by comparing the inductive bias of gradient descent (GD) with that of sharpness-aware minimization (SAM). By studying a two-layer CNN, we rigorously prove that SAM learns different features more uniformly, particularly in early epochs. That is, SAM is less susceptible to simplicity bias compared to GD. We also show that examples containing features that are learned early are separable from the rest based on the model\u2019s output. Based on this observation, we propose a method, USEFUL, that (i) clusters examples based on the network output early in training, (ii) identifies a cluster of examples with similar network output, and (iii) upsamples the rest of the examples only once to alleviate the simplicity bias. We show empirically that USEFUL effectively improves the generalization performance on the original data distribution when training with various gradient methods, including (S)GD and SAM. Notably, we demonstrate that our method can be combined with SAM variants and existing data augmentation strategies to achieve, to the best of our knowledge, state-of-the-art performance for training ResNet18 on CIFAR10, STL10, CINIC10, Tiny-ImageNet; ResNet34 on CIFAR100; and VGG19 and DenseNet121 on CIFAR10.", "pdf": "https://openreview.net/pdf/b07ede96cc83e38f5a579bbba4e95073247adb82.pdf"} {"title": "Embedding-Aligned Language Models", "url": "https://openreview.net/forum?id=WSu1PPi2UP", "detail_url": "https://openreview.net/forum?id=WSu1PPi2UP", "authors": "Guy Tennenholtz,Yinlam Chow,ChihWei Hsu,Lior Shani,Yi Liang,Craig Boutilier", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel approach for training large language models (LLMs) to adhere to objectives defined within a latent embedding space. Our method leverages reinforcement learning (RL), treating a pre-trained LLM as an environment.
Our embedding-aligned guided language (EAGLE) agent is trained to iteratively steer the LLM's generation towards optimal regions of the latent embedding space, w.r.t. some predefined criterion. We demonstrate the effectiveness of the EAGLE agent using the MovieLens 25M and Amazon Review datasets to surface content gaps that satisfy latent user demand. We also demonstrate the benefit of using an optimal design of a state-dependent action set to improve EAGLE's efficiency. Our work paves the way for controlled and grounded text generation using LLMs, ensuring consistency with domain-specific knowledge and data representations.", "pdf": "https://openreview.net/pdf/7deb7a119dbc07cea918f4e3e746d76abfb84271.pdf"} {"title": "Near-Optimal Distributionally Robust Reinforcement Learning with General $L_p$ Norms", "url": "https://openreview.net/forum?id=0l9yGPTHAU", "detail_url": "https://openreview.net/forum?id=0l9yGPTHAU", "authors": "Pierre Clavier,Laixi Shi,Erwan Le Pennec,Eric Mazumdar,Adam Wierman,Matthieu Geist", "tags": "NIPS 2024,Poster", "abstract": "To address the challenges of the sim-to-real gap and sample efficiency in reinforcement learning (RL), this work studies distributionally robust Markov decision processes (RMDPs), which optimize the worst-case performance when the deployed environment is within an uncertainty set around some nominal MDP. Despite recent efforts, the sample complexity of RMDPs has remained largely undetermined. While the statistical implications of distributional robustness in RL have been explored in some specific cases, the generalizability of the existing findings remains unclear, especially in comparison to standard RL. Assuming access to a generative model that samples from the nominal MDP, we examine the sample complexity of RMDPs using a class of generalized $L_p$ norms as the 'distance' function for the uncertainty set, under two commonly adopted $sa$-rectangular and $s$-rectangular conditions. Our results imply that RMDPs can be more sample-efficient to solve than standard MDPs using generalized $L_p$ norms in both $sa$- and $s$-rectangular cases, potentially inspiring more empirical research.\n We provide a near-optimal upper bound and a matching minimax lower bound for the $sa$-rectangular scenarios. For $s$-rectangular cases, we improve the state-of-the-art upper bound and also derive a lower bound using the $L_\infty$ norm that verifies the tightness.", "pdf": "https://openreview.net/pdf/f20d270458f74b56a90aec93c32a47ac12ff336c.pdf"} {"title": "SS1: Accelerating Inference with Fast and Expressive Sketch Structured Transform", "url": "https://openreview.net/forum?id=nrgyOGU7ZP", "detail_url": "https://openreview.net/forum?id=nrgyOGU7ZP", "authors": "Aditya Desai,Kimia Saedi,Apoorv Walia,Jihyeong Lee,Keren Zhou,Anshumali Shrivastava", "tags": "NIPS 2024,Poster", "abstract": "Tensor multiplication with learned weight matrices is the fundamental building block in deep learning models. These matrices can often be sparsified, decomposed, quantized, or subjected to random parameter sharing without losing accuracy, suggesting the possibility of more efficient transforms. Although many variants of weight matrices exist, unstructured ones are incompatible with modern hardware, slowing inference and training. On the other hand, structured variants often limit expressivity or fail to deliver the promised latency benefits. We present Sketch Structured Transform (SS1), an expressive and GPU-friendly operator that accelerates inference.
SS1 leverages parameter sharing in a random yet structured manner to reduce computation while retaining the rich expressive nature of parameter sharing. We confirm empirically that SS1 offers better quality-efficiency tradeoffs than competing variants. Interestingly, SS1 can be combined with Quantization to achieve gains unattainable by either method alone, a finding we justify via theoretical analysis. The analysis may be of independent interest.\nMoreover, existing pre-trained models can be projected onto SS1 and finetuned for efficient deployment. Surprisingly, these projected models can perform reasonably well even without finetuning. Our experiments highlight various applications of SS1:\n(a) Training GPT2 and DLRM models from scratch for faster inference. (b) Finetuning projected BERT models for 1.31\u00d7 faster inference while maintaining GLUE scores. (c) Proof of concept with Llama-3-8b, showing 1.11\u00d7 faster wall clock inference using projected SS1 layers without finetuning. We open-source our code: https://github.com/apd10/Sketch-Structured-Linear/", "pdf": "https://openreview.net/pdf/992b8fece2614bb855bbb75824128b346d888593.pdf"} {"title": "Accelerating Augmentation Invariance Pretraining", "url": "https://openreview.net/forum?id=Wh9ssqlCNg", "detail_url": "https://openreview.net/forum?id=Wh9ssqlCNg", "authors": "Jinhong Lin,Cheng-En Wu,Yibing Wei,Pedro Morgado", "tags": "NIPS 2024,Poster", "abstract": "Our work tackles the computational challenges of contrastive learning methods, particularly for the pretraining of Vision Transformers (ViTs). Despite the effectiveness of contrastive learning, the substantial computational resources required for training often hinder its practical application. To mitigate this issue, we propose an acceleration framework, leveraging ViT's unique ability to generalize across inputs of varying sequence lengths. Our method employs a mix of sequence compression strategies, including randomized token dropout and flexible patch scaling, to reduce the cost of gradient estimation and accelerate convergence. We further provide an in-depth analysis of the gradient estimation error of various acceleration strategies as well as their impact on downstream tasks, offering valuable insights into the trade-offs between acceleration and performance. \n We also propose a novel procedure to identify an optimal acceleration schedule to adjust the sequence compression ratios to the training progress, ensuring efficient training without sacrificing downstream performance. Our approach significantly reduces computational overhead across various self-supervised learning algorithms on large-scale datasets. On ImageNet, our method achieves speedups of 4$\times$ in MoCo, 3.3$\times$ in SimCLR, and 2.5$\times$ in DINO, demonstrating substantial efficiency gains.", "pdf": "https://openreview.net/pdf/aa0185ffe8ecbb83473940c0d9309153cc2b1e06.pdf"} {"title": "Fisher Flow Matching for Generative Modeling over Discrete Data", "url": "https://openreview.net/forum?id=6jOScqwdHU", "detail_url": "https://openreview.net/forum?id=6jOScqwdHU", "authors": "Oscar Davis,Samuel Kessler,Mircea Petrache,Ismail Ilkan Ceylan,Michael M. Bronstein,Joey Bose", "tags": "NIPS 2024,Poster", "abstract": "Generative modeling over discrete data has recently seen numerous success stories, with applications spanning language modeling, biological sequence design, and graph-structured molecular data.
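The SS1 operator itself is specified in the paper rather than here; as a toy stand-in for the "random yet structured parameter sharing" idea, the sketch below expands a small trainable bank into a full weight matrix through a fixed random index map. All names are hypothetical, and the block structure that makes the real transform GPU-friendly is omitted:

```python
import torch
import torch.nn as nn

class SharedLinear(nn.Module):
    """Toy random-but-structured parameter sharing for a linear layer.

    A bank of `n_params` trainable weights is expanded into a full
    (out_features, in_features) matrix through a fixed random index map,
    so many matrix entries alias the same underlying parameter.
    """
    def __init__(self, in_features, out_features, n_params):
        super().__init__()
        self.bank = nn.Parameter(torch.randn(n_params) * 0.02)
        g = torch.Generator().manual_seed(0)   # fixed map = "structured" sharing
        idx = torch.randint(0, n_params, (out_features, in_features), generator=g)
        self.register_buffer("idx", idx)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        weight = self.bank[self.idx]           # gather the shared parameters
        return x @ weight.T + self.bias

layer = SharedLinear(64, 32, n_params=512)     # ~4x fewer weights than 64*32
print(layer(torch.randn(8, 64)).shape)         # torch.Size([8, 32])
```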
The predominant generative modeling paradigm for discrete data is still autoregressive, with more recent alternatives based on diffusion or flow-matching falling short of their impressive performance in continuous data settings, such as image or video generation. In this work, we introduce Fisher-Flow, a novel flow-matching model for discrete data. Fisher-Flow takes a manifestly geometric perspective\nby considering categorical distributions over discrete data as points residing on a statistical manifold equipped with its natural Riemannian metric: the \emph{Fisher-Rao metric}. As a result, we demonstrate discrete data itself can be continuously reparameterised to points on the positive orthant of the $d$-hypersphere $\mathbb{S}^d_+$, \nwhich allows us to define flows that map any source distribution to a target in a principled manner by transporting mass along (closed-form) geodesics of $\mathbb{S}^d_+$. Furthermore, the learned flows in Fisher-Flow can be further bootstrapped by leveraging Riemannian optimal transport, leading to improved training dynamics. We prove that the gradient flow induced by Fisher-Flow is optimal in reducing the forward KL divergence. We evaluate Fisher-Flow on an array of synthetic and diverse real-world benchmarks, including designing DNA Promoter and DNA Enhancer sequences. Empirically, we find that Fisher-Flow improves over prior diffusion and flow-matching models on these benchmarks.", "pdf": "https://openreview.net/pdf/c16ab010bac44658a3c695f87e0c8d925deca3d4.pdf"} {"title": "Towards Croppable Implicit Neural Representations", "url": "https://openreview.net/forum?id=jrVoZLF20h", "detail_url": "https://openreview.net/forum?id=jrVoZLF20h", "authors": "Maor Ashkenazi,Eran Treister", "tags": "NIPS 2024,Poster", "abstract": "Implicit Neural Representations (INRs) have piqued interest in recent years due to their ability to encode natural signals using neural networks. While INRs allow for useful applications such as interpolating new coordinates and signal compression, their black-box nature makes it difficult to modify them post-training. In this paper, we explore the idea of editable INRs, and specifically focus on the widely used cropping operation. To this end, we present Local-Global SIRENs - a novel INR architecture that supports cropping by design. Local-Global SIRENs are based on combining local and global feature extraction for signal encoding. What makes their design unique is the ability to effortlessly remove specific portions of an encoded signal, with a proportional weight decrease. This is achieved by eliminating the corresponding weights from the network, without the need for retraining. We further show how this architecture can be used to support the straightforward extension of previously encoded signals. Beyond signal editing, we examine how the Local-Global approach can accelerate training, enhance encoding of various signals, improve downstream performance, and be applied to modern INRs such as INCODE, highlighting its potential and flexibility. Code is available at https://github.com/maorash/Local-Global-INRs.", "pdf": "https://openreview.net/pdf/32fc8ae3ad209f8c99290d1abd48e5f59e66c35b.pdf"} {"title": "Nesterov acceleration despite very noisy gradients", "url": "https://openreview.net/forum?id=kHXUb494SY", "detail_url": "https://openreview.net/forum?id=kHXUb494SY", "authors": "Kanan Gupta,Jonathan W.
Siegel,Stephan Wojtowytsch", "tags": "NIPS 2024,Poster", "abstract": "We present a generalization of Nesterov's accelerated gradient descent algorithm. Our algorithm (AGNES) provably achieves acceleration for smooth convex and strongly convex minimization tasks with noisy gradient estimates if the noise intensity is proportional to the magnitude of the gradient at every point. Nesterov's method converges at an accelerated rate if the constant of proportionality is below 1, while AGNES accommodates any signal-to-noise ratio. The noise model is motivated by applications in overparametrized machine learning. AGNES requires only two parameters in convex and three in strongly convex minimization tasks, improving on existing methods. We further provide clear geometric interpretations and heuristics for the choice of parameters.", "pdf": "https://openreview.net/pdf/ca4c88f389c842b4ad3fdfef8bc336b23ad0e69a.pdf"} {"title": "Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs", "url": "https://openreview.net/forum?id=vWSll6M9pj", "detail_url": "https://openreview.net/forum?id=vWSll6M9pj", "authors": "Alexandros Haliassos,Rodrigo Mira,Honglie Chen,Zoe Landgraf,Stavros Petridis,Maja Pantic", "tags": "NIPS 2024,Poster", "abstract": "Research in auditory, visual, and audiovisual speech recognition (ASR, VSR, and AVSR, respectively) has traditionally been conducted independently. Even recent self-supervised studies addressing two or all three tasks simultaneously tend to yield separate models, leading to disjoint inference pipelines with increased memory requirements and redundancies. This paper proposes unified training strategies for these systems. We demonstrate that training a single model for all three tasks enhances VSR and AVSR performance, overcoming typical optimisation challenges when training from scratch. Moreover, we introduce a greedy pseudo-labelling approach to more effectively leverage unlabelled samples, addressing shortcomings in related self-supervised methods. Finally, we develop a self-supervised pre-training method within our framework, proving its effectiveness alongside our semi-supervised approach. Despite using a single model for all tasks, our unified approach achieves state-of-the-art performance on LRS3 for ASR, VSR, and AVSR compared to recent methods. Code will be made publicly available.", "pdf": "https://openreview.net/pdf/10f24815426948411d0ebd5225a50f68f81cf70b.pdf"} {"title": "Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithms", "url": "https://openreview.net/forum?id=aYWtfsf3uP", "detail_url": "https://openreview.net/forum?id=aYWtfsf3uP", "authors": "Miao Lu,Han Zhong,Tong Zhang,Jose Blanchet", "tags": "NIPS 2024,Poster", "abstract": "The sim-to-real gap, which represents the disparity between training and testing environments, poses a significant challenge in reinforcement learning (RL). A promising approach to addressing this challenge is distributionally robust RL, often framed as a robust Markov decision process (RMDP). In this framework, the objective is to find a robust policy that achieves good performance under the worst-case scenario among all environments within a pre-specified uncertainty set centered around the training environment. 
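Returning to the Fisher-Flow entry above: under the square-root map, the probability simplex lands on the positive orthant of the unit sphere, where the Fisher-Rao geodesics it transports mass along are closed-form great-circle arcs. A small numpy sketch of that standard reparameterisation (not the authors' code):

```python
import numpy as np

def to_sphere(p):
    """Map a categorical distribution to the positive orthant of the sphere."""
    return np.sqrt(p)

def geodesic(p, q, t):
    """Point at time t on the Fisher-Rao geodesic between distributions p, q.

    Under x = sqrt(p), the Fisher-Rao geodesic on the simplex becomes a
    great-circle arc on the unit sphere (spherical linear interpolation).
    """
    x, y = to_sphere(np.asarray(p, float)), to_sphere(np.asarray(q, float))
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))    # angle between sqrt-points
    if theta < 1e-9:
        return np.asarray(p, float)
    z = (np.sin((1 - t) * theta) * x + np.sin(t * theta) * y) / np.sin(theta)
    return z ** 2                                    # square back to the simplex

p, q = [0.7, 0.2, 0.1], [0.1, 0.1, 0.8]
for t in (0.0, 0.5, 1.0):
    print(t, np.round(geodesic(p, q, t), 3))
```

Because z stays on the unit sphere, z squared always sums to one, so every intermediate point is itself a valid categorical distribution.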
Unlike previous work, which relies on a generative model or a pre-collected offline dataset enjoying good coverage of the deployment environment, we tackle robust RL via interactive data collection, where the learner interacts with the training environment only and refines the policy through trial and error. In this robust RL paradigm, two main challenges emerge: managing distributional robustness and striking a balance between exploration and exploitation during data collection. First, we establish that sample-efficient learning without additional assumptions is unattainable owing to the curse of support shift; i.e., the potential disjointedness of the distributional supports between the training and testing environments. To circumvent such a hardness result, we introduce the vanishing minimal value assumption to RMDPs with a total-variation (TV) distance robust set, postulating that the minimal value of the optimal robust value function is zero. We prove that such an assumption effectively eliminates the support shift issue for RMDPs with a TV distance robust set, and present an algorithm with a provable sample complexity guarantee. Our work takes an initial step toward uncovering the inherent difficulty of robust RL via interactive data collection and sufficient conditions for designing a sample-efficient algorithm accompanied by sharp sample complexity analysis.", "pdf": "https://openreview.net/pdf/d0f097dc1f14176950a572b7949309caf6e0cd71.pdf"} {"title": "Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding", "url": "https://openreview.net/forum?id=3TxyhBZHT2", "detail_url": "https://openreview.net/forum?id=3TxyhBZHT2", "authors": "Yunze Man,Shuhong Zheng,Zhipeng Bao,Martial Hebert,Liangyan Gui,Yu-Xiong Wang", "tags": "NIPS 2024,Poster", "abstract": "Complex 3D scene understanding has gained increasing attention, with scene encoding strategies built on top of visual foundation models playing a crucial role in this success. However, the optimal scene encoding strategies for various scenarios remain unclear, particularly compared to their image-based counterparts. To address this issue, we present the first comprehensive study that probes various visual encoding models for 3D scene understanding, identifying the strengths and limitations of each model across different scenarios. Our evaluation spans seven vision foundation encoders, including image, video, and 3D foundation models. We evaluate these models in four tasks: Vision-Language Scene Reasoning, Visual Grounding, Segmentation, and Registration, each focusing on different aspects of scene understanding. Our evaluation yields key intriguing findings: Unsupervised image foundation models demonstrate superior overall performance, video models excel in object-level tasks, diffusion models benefit geometric tasks, language-pretrained models show unexpected limitations in language-related tasks, and the mixture-of-vision-expert (MoVE) strategy leads to consistent performance improvement.
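For the total-variation robust sets appearing in the two robust-RL abstracts above, the inner worst case at a single state-action pair is a tiny linear program over the simplex; a sketch with scipy, purely to make the object concrete (it is neither paper's algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_value(p, v, rho):
    """min_q q.v  s.t.  q in the simplex and  0.5 * ||q - p||_1 <= rho.

    This is the inner problem of a TV-robust Bellman backup at one
    state-action pair: p is the nominal next-state distribution, v the
    value vector, rho the TV radius of the uncertainty set.
    """
    n = len(p)
    # variables: q (n entries) and slacks t (n entries) with t_i >= |q_i - p_i|
    c = np.concatenate([v, np.zeros(n)])
    A_ub, b_ub = [], []
    for i in range(n):
        row = np.zeros(2 * n); row[i] = 1;  row[n + i] = -1
        A_ub.append(row); b_ub.append(p[i])      # q_i - p_i <= t_i
        row = np.zeros(2 * n); row[i] = -1; row[n + i] = -1
        A_ub.append(row); b_ub.append(-p[i])     # p_i - q_i <= t_i
    row = np.zeros(2 * n); row[n:] = 0.5
    A_ub.append(row); b_ub.append(rho)           # 0.5 * sum_i t_i <= rho
    A_eq = [np.concatenate([np.ones(n), np.zeros(n)])]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * n))  # q >= 0, t >= 0
    return res.fun

p = np.array([0.5, 0.3, 0.2]); v = np.array([1.0, 0.0, 2.0])
print(worst_case_value(p, v, rho=0.1))  # 0.7: mass moves to the low-value state
```

The optimum shifts the allowed 0.1 of probability mass from the highest-value successor state to the lowest-value one, which is exactly the adversarial transition an RMDP guards against.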
These insights challenge some conventional understandings, provide novel perspectives on leveraging visual foundation models, and highlight the need for more flexible encoder selection in future vision-language and scene understanding tasks.", "pdf": "https://openreview.net/pdf/d61ffc79e9a88037457e1867d07f92f24cf187e1.pdf"} {"title": "Theoretical Characterisation of the Gauss Newton Conditioning in Neural Networks", "url": "https://openreview.net/forum?id=fpOnUMjLiO", "detail_url": "https://openreview.net/forum?id=fpOnUMjLiO", "authors": "Jim Zhao,Sidak Pal Singh,Aurelien Lucchi", "tags": "NIPS 2024,Poster", "abstract": "The Gauss-Newton (GN) matrix plays an important role in machine learning, most evident in its use as a preconditioning matrix for a wide family of popular adaptive methods to speed up optimization. Moreover, it can provide key insights into the optimization landscape of neural networks. \nIn the context of deep neural networks, understanding the GN matrix involves studying the interaction between different weight matrices as well as the dependencies introduced by the data, thus rendering its analysis challenging.\nIn this work, we take a first step towards theoretically characterizing the conditioning of the GN matrix in neural networks. We establish tight bounds on the condition number of the GN in deep linear networks of arbitrary depth and width, which we also extend to two-layer ReLU networks.\nWe expand the analysis to further architectural components, such as residual connections and convolutional layers. \nFinally, we empirically validate the bounds and uncover valuable insights into the influence of the analyzed architectural components.", "pdf": "https://openreview.net/pdf/f7d610c71f0d43fb45890dc5d093e4e51522a6cf.pdf"} {"title": "UDON: Universal Dynamic Online distillatioN for generic image representations", "url": "https://openreview.net/forum?id=iQUxHrCna0", "detail_url": "https://openreview.net/forum?id=iQUxHrCna0", "authors": "Nikolaos-Antonios Ypsilantis,Kaifeng Chen,Andre Araujo,Ondrej Chum", "tags": "NIPS 2024,Poster", "abstract": "Universal image representations are critical in enabling real-world fine-grained and instance-level recognition applications, where objects and entities from any domain must be identified at large scale.\nDespite recent advances, existing methods fail to capture important domain-specific knowledge, while also ignoring differences in data distribution across different domains.\nThis leads to a large performance gap between efficient universal solutions and expensive approaches utilising a collection of specialist models, one for each domain.\nIn this work, we make significant strides towards closing this gap, by introducing a new learning technique, dubbed UDON (Universal Dynamic Online distillatioN).\nUDON employs multi-teacher distillation, where each teacher is specialized in one domain, to transfer detailed domain-specific knowledge into the student universal embedding.\nUDON's distillation approach is not only effective, but also very efficient, by sharing most model parameters between the student and all teachers, where all models are jointly trained in an online manner.\nUDON also comprises a sampling technique that adapts the training process to dynamically allocate batches to domains that are learned more slowly and require more frequent processing.\nThis significantly boosts the learning of complex domains, which are characterised by a large number of classes and long-tail distributions.\nWith comprehensive experiments,
we validate each component of UDON, and showcase significant improvements over the state of the art in the recent UnED benchmark.\nCode: https://github.com/nikosips/UDON.", "pdf": "https://openreview.net/pdf/4d11dfdaed254904ddc5db8dca9eab838bd06484.pdf"} {"title": "Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach", "url": "https://openreview.net/forum?id=MP7j58lbWO", "detail_url": "https://openreview.net/forum?id=MP7j58lbWO", "authors": "Lei Ding,Yang Hu,Nicole Denier,Enze Shi,Junxi Zhang,Qirui Hu,Karen D. Hughes,Linglong Kong,Bei Jiang", "tags": "NIPS 2024,Poster", "abstract": "As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. AI algorithms are not only widely used in the selection of job applicants, individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also increasingly by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.", "pdf": "https://openreview.net/pdf/8feabcfbff273ffe7683a7abc6ebf606530967e8.pdf"} {"title": "DiffuBox: Refining 3D Object Detection with Point Diffusion", "url": "https://openreview.net/forum?id=J2wOOtkBx0", "detail_url": "https://openreview.net/forum?id=J2wOOtkBx0", "authors": "Xiangyu Chen,Zhenzhen Liu,Katie Z Luo,Siddhartha Datta,Adhitya Polavaram,Yan Wang,Yurong You,Boyi Li,Marco Pavone,Wei-Lun Chao,Mark Campbell,Bharath Hariharan,Kilian Q Weinberger", "tags": "NIPS 2024,Poster", "abstract": "Ensuring robust 3D object detection and localization is crucial for many applications in robotics and autonomous driving. Recent models, however, face difficulties in maintaining high performance when applied to domains with differing sensor setups or geographic locations, often resulting in poor localization accuracy due to domain shift. To overcome this challenge, we introduce a novel diffusion-based box refinement approach. This method employs a domain-agnostic diffusion model, conditioned on the LiDAR points surrounding a coarse bounding box, to simultaneously refine the box's location, size, and orientation. We evaluate this approach under various domain adaptation settings, and our results reveal significant improvements across different datasets, object classes and detectors. 
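A bare-bones sketch of the multi-teacher distillation step in the UDON entry above: each batch is distilled from its domain's specialist teacher into the single universal student embedding. The cosine loss, the frozen-teacher treatment within a step, and all names are simplifying assumptions (in the paper, student and teachers share most parameters and are trained jointly online):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, n_domains = 128, 3
student = nn.Sequential(nn.Linear(512, embed_dim))       # universal embedding head
# one lightweight specialist head per domain (shared backbone omitted for brevity)
teachers = nn.ModuleList(nn.Linear(512, embed_dim) for _ in range(n_domains))

def distill_step(x, domain_id):
    """Distill the domain teacher's embedding into the student for one batch."""
    with torch.no_grad():                                # teacher output is the target
        t = F.normalize(teachers[domain_id](x), dim=-1)
    s = F.normalize(student(x), dim=-1)
    return (1 - (s * t).sum(dim=-1)).mean()             # mean cosine distance

x = torch.randn(16, 512)                                 # a batch from domain 1
loss = distill_step(x, domain_id=1)
loss.backward()
print(float(loss))
```

The dynamic sampler described in the abstract would sit around this step, choosing `domain_id` more often for domains whose loss is decreasing slowly.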
Our PyTorch implementation is available at https://github.com/cxy1997/DiffuBox.", "pdf": "https://openreview.net/pdf/2f2a7ab4b745917ee48546f9f819fd5a70aeabd2.pdf"} {"title": "Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis", "url": "https://openreview.net/forum?id=WftaVkL6G2", "detail_url": "https://openreview.net/forum?id=WftaVkL6G2", "authors": "Michael Crawshaw,Mingrui Liu", "tags": "NIPS 2024,Poster", "abstract": "In federated learning, it is common to assume that clients are always available to participate in training, which may not be feasible with user devices in practice. Recent works analyze federated learning under more realistic participation patterns, such as cyclic client availability or arbitrary participation. However, all such works either require strong assumptions (e.g., all clients participate almost surely within a bounded window), do not achieve linear speedup and reduced communication rounds, or are not applicable in the general non-convex setting. In this work, we focus on nonconvex optimization and consider participation patterns in which the chance of participation over a fixed window of rounds is equal among all clients, which includes cyclic client availability as a special case. Under this setting, we propose a new algorithm, named Amplified SCAFFOLD, and prove that it achieves linear speedup, reduced communication, and resilience to data heterogeneity simultaneously. In particular, for cyclic participation, our algorithm is proved to enjoy $\\mathcal{O}(\\epsilon^{-2})$ communication rounds to find an $\\epsilon$-stationary point in the non-convex stochastic setting. In contrast, the prior work under the same setting requires $\\mathcal{O}(\\kappa^2 \\epsilon^{-4})$ communication rounds, where $\\kappa$ denotes the data heterogeneity. Therefore, our algorithm significantly reduces communication rounds due to better dependency in terms of $\\epsilon$ and $\\kappa$. Our analysis relies on a fine-grained treatment of the nested dependence between client participation and errors in the control variates, which results in tighter guarantees than previous work. We also provide experimental results with (1) synthetic data and (2) real-world data with a large number of clients $(N = 250)$, demonstrating the effectiveness of our algorithm under periodic client participation.", "pdf": "https://openreview.net/pdf/8f0d5d0ba2e34b6c0a7a5edf308fad50f872cebe.pdf"} {"title": "Improving self-training under distribution shifts via anchored confidence with theoretical guarantees", "url": "https://openreview.net/forum?id=a17biETKyI", "detail_url": "https://openreview.net/forum?id=a17biETKyI", "authors": "Taejong Joo,Diego Klabjan", "tags": "NIPS 2024,Poster", "abstract": "Self-training often falls short under distribution shifts due to an increased discrepancy between prediction confidence and actual accuracy. This typically necessitates computationally demanding methods such as neighborhood or ensemble-based label corrections. Drawing inspiration from insights on early learning regularization, we develop a principled method to improve self-training under distribution shifts based on temporal consistency. Specifically, we build an uncertainty-aware temporal ensemble with a simple relative thresholding. Then, this ensemble smooths noisy pseudo labels to promote selective temporal consistency. 
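The anchored-confidence recipe just described, an uncertainty-aware temporal ensemble with relative thresholding that smooths noisy pseudo labels, can be sketched in a few lines; the EMA form and the particular threshold rule below are assumptions for illustration:

```python
import numpy as np

class TemporalEnsemble:
    """EMA ensemble of predicted probabilities with relative thresholding.

    Pseudo labels are smoothed toward the ensemble only for samples whose
    current confidence is low relative to the ensemble's confidence.
    """
    def __init__(self, n_samples, n_classes, momentum=0.9, alpha=0.5):
        self.ema = np.full((n_samples, n_classes), 1.0 / n_classes)
        self.m, self.alpha = momentum, alpha

    def update(self, idx, probs):
        self.ema[idx] = self.m * self.ema[idx] + (1 - self.m) * probs
        return self.ema[idx]

    def smooth_labels(self, idx, probs):
        ens = self.update(idx, probs)
        # relative threshold: trust the ensemble where it is the more confident
        use_ens = ens.max(axis=1) > probs.max(axis=1)
        out = probs.copy()
        out[use_ens] = (1 - self.alpha) * probs[use_ens] + self.alpha * ens[use_ens]
        return out / out.sum(axis=1, keepdims=True)

te = TemporalEnsemble(n_samples=100, n_classes=3)
probs = np.array([[0.6, 0.3, 0.1], [0.4, 0.35, 0.25]])
print(te.smooth_labels(np.array([0, 1]), probs))
```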
We show that our temporal ensemble is asymptotically correct and our label smoothing technique can reduce the optimality gap of self-training. Our extensive experiments validate that our approach consistently improves self-training performance by 8% to 16% across diverse distribution shift scenarios without additional computational overhead. Moreover, our method exhibits attractive properties, such as improved calibration performance and robustness to different hyperparameter choices.", "pdf": "https://openreview.net/pdf/c17c6a15bab7352d094b4bfcb34e3e9bcf9a8d7a.pdf"} {"title": "T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback", "url": "https://openreview.net/forum?id=53daI9kbvf", "detail_url": "https://openreview.net/forum?id=53daI9kbvf", "authors": "Jiachen Li,Weixi Feng,Tsu-Jui Fu,Xinyi Wang,S Basu,Wenhu Chen,William Yang Wang", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based text-to-video (T2V) models have achieved significant success but continue to be hampered by the slow sampling speed of their iterative sampling processes. To address the challenge, consistency models have been proposed to facilitate fast inference, albeit at the cost of sample quality. In this work, we aim to break the quality bottleneck of a video consistency model (VCM) to achieve **both fast and high-quality video generation**. We introduce T2V-Turbo, which integrates feedback from a mixture of differentiable reward models into the consistency distillation (CD) process of a pre-trained T2V model. Notably, we directly optimize rewards associated with single-step generations that arise naturally from computing the CD loss, effectively bypassing the memory constraints imposed by backpropagating gradients through an iterative sampling process. Remarkably, the 4-step generations from our T2V-Turbo achieve the highest total score on VBench, even surpassing Gen-2 and Pika. We further conduct human evaluations to corroborate the results, validating that the 4-step generations from our T2V-Turbo are preferred over the 50-step DDIM samples from their teacher models, representing more than a tenfold acceleration while improving video generation quality.", "pdf": "https://openreview.net/pdf/b7fb15d4a4c0c43818ed8209089ab7af89f5341b.pdf"} {"title": "TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives", "url": "https://openreview.net/forum?id=ZfRGRK5Kxl", "detail_url": "https://openreview.net/forum?id=ZfRGRK5Kxl", "authors": "Maitreya Patel,Naga Sai Abhiram kusumba,Sheng Cheng,Changhoon Kim,Tejas Gokhale,Chitta Baral,Yezhou Yang", "tags": "NIPS 2024,Poster", "abstract": "Contrastive Language-Image Pretraining (CLIP) models maximize the mutual information between text and visual modalities to learn representations. This makes the nature of the training data a significant factor in the efficacy of CLIP for downstream tasks. However, the lack of compositional diversity in contemporary image-text datasets limits the compositional reasoning ability of CLIP. We show that generating ``hard'' negative captions via in-context learning and synthesizing corresponding negative images with text-to-image generators offers a solution. We introduce a novel contrastive pre-training strategy that leverages these hard negative captions and images in an alternating fashion to train CLIP.
We demonstrate that our method, named TripletCLIP, when applied to existing datasets such as CC3M and CC12M, enhances the compositional capabilities of CLIP, resulting in an absolute improvement of over 9% on the SugarCrepe benchmark under an equal computational budget, as well as improvements in zero-shot image classification and image retrieval. Our code, models, and data are available at: tripletclip.github.io.", "pdf": "https://openreview.net/pdf/289147f2527c2d1ba75a57f705d34f84e49a7bde.pdf"} {"title": "Towards Understanding Extrapolation: a Causal Lens", "url": "https://openreview.net/forum?id=2squ766Iq4", "detail_url": "https://openreview.net/forum?id=2squ766Iq4", "authors": "Lingjing Kong,Guangyi Chen,Petar Stojanov,Haoxuan Li,Eric P. Xing,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Canonical work handling distribution shifts typically necessitates an entire target distribution that lands inside the training distribution.\nHowever, practical scenarios often involve only a handful of target samples, potentially lying outside the training support, which requires the capability of extrapolation.\nIn this work, we aim to provide a theoretical understanding of when extrapolation is possible and offer principled methods to achieve it without requiring an on-support target distribution.\nTo this end, we formulate the extrapolation problem with a latent-variable model that embodies the minimal change principle in causal mechanisms.\nUnder this formulation, we cast the extrapolation problem into a latent-variable identification problem.\nWe provide realistic conditions on shift properties and the estimation objectives that lead to identification even when only one off-support target sample is available, tackling the most challenging scenarios.\nOur theory reveals the intricate interplay between the underlying manifold's smoothness and the shift properties.\nWe showcase how our theoretical results inform the design of practical adaptation algorithms. Through experiments on both synthetic and real-world data, we validate our theoretical findings and their practical implications.", "pdf": "https://openreview.net/pdf/0fb6a5a5c946d67fa93344db03a2f0084c92bd3d.pdf"} {"title": "Segmenting Watermarked Texts From Language Models", "url": "https://openreview.net/forum?id=FAuFpGeLmx", "detail_url": "https://openreview.net/forum?id=FAuFpGeLmx", "authors": "Xingchi Li,Guanxun Li,Xianyang Zhang", "tags": "NIPS 2024,Poster", "abstract": "Watermarking is a technique that involves embedding nearly unnoticeable statistical signals within generated content to help trace its source. This work focuses on a scenario where an untrusted third-party user sends prompts to a trusted large language model (LLM) provider, who then generates a text from their LLM with a watermark. This setup makes it possible for a detector to later identify the source of the text if the user publishes it. The user can modify the generated text by substitutions, insertions, or deletions. Our objective is to develop a statistical method to detect if a published text is LLM-generated from the perspective of a detector. We further propose a methodology to segment the published text into watermarked and non-watermarked sub-strings. The proposed approach is built upon randomization tests and change point detection techniques. We demonstrate that our method ensures Type I and Type II error control and can accurately identify watermarked sub-strings by finding the corresponding change point locations.
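One step of the TripletCLIP-style objective above, where each image additionally competes against a hard negative caption and each caption against a synthetic negative image; the exact alternating schedule and the encoders are placeholders here:

```python
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(img, txt, neg_img, neg_txt, tau=0.07):
    """CLIP-style InfoNCE where each image also competes against its hard
    negative caption, and each caption against its synthetic negative image.

    All inputs are (n, d) embeddings, assumed L2-normalised.
    """
    # logits over [all matching captions ; own hard negative caption]
    l_i2t = torch.cat([img @ txt.T, (img * neg_txt).sum(-1, keepdim=True)], dim=1) / tau
    # logits over [all matching images ; own synthetic negative image]
    l_t2i = torch.cat([txt @ img.T, (txt * neg_img).sum(-1, keepdim=True)], dim=1) / tau
    target = torch.arange(img.size(0))           # diagonal pairs are positives
    return 0.5 * (F.cross_entropy(l_i2t, target) + F.cross_entropy(l_t2i, target))

n, d = 8, 64
emb = lambda: F.normalize(torch.randn(n, d), dim=-1)   # stand-in encoders
print(float(triplet_contrastive_loss(emb(), emb(), emb(), emb())))
```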
To validate our technique, we apply it to texts generated by several language models with prompts extracted from Google's C4 dataset and obtain encouraging numerical results. We release all code publicly at https://github.com/doccstat/llm-watermark-cpd.", "pdf": "https://openreview.net/pdf/7370093081ca40698478614c4b1e7979ce9fa548.pdf"} {"title": "ESPACE: Dimensionality Reduction of Activations for Model Compression", "url": "https://openreview.net/forum?id=HAcaANQNMK", "detail_url": "https://openreview.net/forum?id=HAcaANQNMK", "authors": "Charbel Sakr,Brucek Khailany", "tags": "NIPS 2024,Poster", "abstract": "We propose ESPACE, an LLM compression technique based on dimensionality reduction of activations. Unlike prior works on weight-centric tensor decomposition, ESPACE projects activations onto a pre-calibrated set of principal components. The activation-centrality of the approach enables retraining LLMs with no loss of expressivity; while at inference, weight decomposition is obtained as a byproduct of matrix multiplication associativity. Theoretical results on the construction of projection matrices with optimal computational accuracy are provided. Experimentally, we find ESPACE enables 50% compression of GPT3, Llama2, and Nemotron4 models with small accuracy degradation, as low as a 0.18 perplexity increase on GPT3-22B. At lower compression rates of 20% to 40%, ESPACE drives GPT3 models to outperform their baseline, by up to a 0.38 decrease in perplexity for GPT3-8B. ESPACE also reduces GEMM execution time and prefill inference latency on existing hardware. Comparison with related works on compressing Llama2-7B via matrix factorization shows that ESPACE is a first step in advancing the state-of-the-art in tensor decomposition compression of LLMs.", "pdf": "https://openreview.net/pdf/9fd84cfffbf125eb131b412daf7a831831e5cfdf.pdf"} {"title": "Statistical-Computational Trade-offs for Density Estimation", "url": "https://openreview.net/forum?id=PtD4aZPzcR", "detail_url": "https://openreview.net/forum?id=PtD4aZPzcR", "authors": "Anders Aamand,Alexandr Andoni,Justin Y. Chen,Piotr Indyk,Shyam Narayanan,Sandeep Silwal,Haike Xu", "tags": "NIPS 2024,Poster", "abstract": "We study the density estimation problem defined as follows: given $k$ distributions $p_1, \ldots, p_k$ over a discrete domain $[n]$, as well as a collection of samples chosen from a \"query\" distribution $q$ over $[n]$, output $p_i$ that is \"close\" to $q$. Recently, Aamand et al. gave the first and only known result that achieves sublinear bounds in both the sampling complexity and the query time while preserving polynomial data structure space. However, their improvement over linear samples and time is only by subpolynomial factors.\n\nOur main result is a lower bound showing that, for a broad class of data structures, their bounds cannot be significantly improved. In particular, if an algorithm uses $O(n/\log^c k)$ samples for some constant $c>0$ and polynomial space, then the query time of the data structure must be at least $k^{1-O(1)/\log \log k}$, i.e., close to linear in the number of distributions $k$. This is a novel statistical-computational trade-off for density estimation, demonstrating that any data structure must use close to a linear number of samples or take close to linear query time. The lower bound holds even in the realizable case where $q=p_i$ for some $i$, and when the distributions are flat (specifically, all distributions are uniform over half of the domain $[n]$).
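The associativity trick behind ESPACE above is easy to demonstrate: if P holds top principal components of the activations, then x W is approximately (x P)(P^T W), and P^T W can be folded offline into a smaller weight. A numpy illustration, with plain SVD standing in for the paper's calibrated projection construction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, out, k = 256, 128, 64
mix = rng.normal(size=(48, d))                # activations live near a rank-48 subspace
acts = lambda n: rng.normal(size=(n, 48)) @ mix + 0.01 * rng.normal(size=(n, d))
W = rng.normal(size=(d, out))

X_cal = acts(1000)                            # calibration pass over sample activations
_, _, Vt = np.linalg.svd(X_cal, full_matrices=False)
P = Vt[:k].T                                  # (d, k) pre-calibrated projection basis
W_low = P.T @ W                               # (k, out): folded offline via associativity

x = acts(32)
rel_err = np.linalg.norm(x @ W - (x @ P) @ W_low) / np.linalg.norm(x @ W)
print(f"relative error: {rel_err:.4f}")       # small, since activations ~ span(P)
```

At inference the single d-by-out GEMM becomes two GEMMs of size d-by-k and k-by-out, which is where the reported latency savings come from when k is well below d.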
We also give a simple data structure for our lower bound instance with asymptotically matching upper bounds. Experiments show that the data structure is quite efficient in practice.", "pdf": "https://openreview.net/pdf/74c7a2b19d56114ddc73f3ba8b3cbaf002ef25e9.pdf"} {"title": "Oracle-Efficient Differentially Private Learning with Public Data", "url": "https://openreview.net/forum?id=BAjjINf0Oh", "detail_url": "https://openreview.net/forum?id=BAjjINf0Oh", "authors": "Adam Block,Mark Bun,Rathin Desai,Abhishek Shetty,Steven Wu", "tags": "NIPS 2024,Poster", "abstract": "Due to statistical lower bounds on the learnability of many function classes under privacy constraints, there has been recent interest in leveraging public data to improve the performance of private learning algorithms. In this model, algorithms must always guarantee differential privacy with respect to the private samples while also ensuring learning guarantees when the private data distribution is sufficiently close to that of the public data. Previous work has demonstrated that when sufficient public, unlabelled data is available, private learning can be made statistically tractable, but the resulting algorithms have all been computationally inefficient. In this work, we present the first computationally efficient algorithms to provably leverage public data to learn privately whenever a function class is learnable non-privately, where our notion of computational efficiency is with respect to the number of calls to an optimization oracle for the function class. In addition to this general result, we provide specialized algorithms with improved sample complexities in the special cases when the function class is convex or when the task is binary classification.", "pdf": "https://openreview.net/pdf/a6649e729901a20c6a77be023ceab128e07d343e.pdf"} {"title": "Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer", "url": "https://openreview.net/forum?id=2cQ3lPhkeO", "detail_url": "https://openreview.net/forum?id=2cQ3lPhkeO", "authors": "Zhihan Liu,Miao Lu,Shenao Zhang,Boyi Liu,Hongyi Guo,Yingxiang Yang,Jose Blanchet,Zhaoran Wang", "tags": "NIPS 2024,Poster", "abstract": "Aligning generative models with human preference via RLHF typically suffers from overoptimization, where an imperfectly learned reward model can misguide the generative model to output even undesired responses. We investigate this problem in a principled manner by identifying the source of the issue as the distributional shift and uncertainty of human preference in the dataset. To mitigate overoptimization, we first propose a theoretical algorithm that optimizes the policy against an adversarially chosen reward model, one that simultaneously minimizes its MLE loss and a reward penalty term. The penalty pessimistically biases the uncertain rewards so as to prevent the policy from choosing actions with spuriously high proxy rewards, resulting in provable sample efficiency of the algorithm under a partial coverage style condition. Moving from theory to practice, the proposed algorithm further enjoys an equivalent but surprisingly easy-to-implement form. With a clever usage of the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines (i) a preference optimization loss that directly aligns the policy with human preference, and (ii) a supervised learning loss that explicitly imitates the policy with a baseline distribution.
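The combined objective just described, a preference-optimization term plus a supervised imitation term (named RPO in the continuation below), reduces to a few lines given response log-probabilities; the DPO-style form and the weighting are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def rpo_style_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l,
                   beta=0.1, sft_weight=1.0):
    """Preference loss (DPO form) plus an SFT term on the chosen response.

    *_logp_w / *_logp_l: summed log-probabilities of the chosen / rejected
    responses under the policy (pi) and the frozen reference model (ref).
    """
    margin = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    dpo = -F.logsigmoid(margin).mean()          # align policy with preferences
    sft = -pi_logp_w.mean()                     # imitate the baseline data
    return dpo + sft_weight * sft

pi_w, pi_l = torch.tensor([-12.0]), torch.tensor([-15.0])
ref_w, ref_l = torch.tensor([-13.0]), torch.tensor([-14.0])
print(float(rpo_style_loss(pi_w, pi_l, ref_w, ref_l)))
```

The SFT term acts as the regularizer of the title: even if the preference margin can be inflated on uncertain comparisons, the policy is still pulled toward the baseline distribution of chosen responses.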
In the context of aligning large language models (LLMs), this objective fuses the direct preference optimization (DPO) loss with the supervised fine-tuning (SFT) loss to help mitigate the overoptimization towards undesired responses, for which we name the algorithm Regularized Preference Optimization (RPO).\nExperiments on aligning LLMs demonstrate the improved performance of our method when compared with DPO baselines. \nOur work sheds light on the interplay between preference optimization and SFT in tuning LLMs with both theoretical guarantees and empirical evidence.", "pdf": "https://openreview.net/pdf/28a18e4de8b4b57fff0e880d228a2a564bbaed3c.pdf"} {"title": "Relating Hopfield Networks to Episodic Control", "url": "https://openreview.net/forum?id=59DmXSBG6S", "detail_url": "https://openreview.net/forum?id=59DmXSBG6S", "authors": "Hugo Chateau-Laurent,Frederic Alexandre", "tags": "NIPS 2024,Poster", "abstract": "Neural Episodic Control is a powerful reinforcement learning framework that employs a differentiable dictionary to store non-parametric memories. It was inspired by episodic memory on the functional level, but lacks a direct theoretical connection to the associative memory models generally used to implement such a memory. We first show that the dictionary is an instance of the recently proposed Universal Hopfield Network framework. We then introduce a continuous approximation of the dictionary readout operation in order to derive two energy functions that are Lyapunov functions of the dynamics. Finally, we empirically show that the dictionary outperforms the Max separation function, which had previously been argued to be optimal, and that performance can further be improved by replacing the Euclidean distance kernel by a Manhattan distance kernel. These results are enabled by the generalization capabilities of the dictionary, so a novel criterion is introduced to disentangle memorization from generalization when evaluating associative memory models.", "pdf": "https://openreview.net/pdf/c0a38153a1e5111bbee356dbe8f4869f3093ee44.pdf"} {"title": "Learning Discrete Concepts in Latent Hierarchical Models", "url": "https://openreview.net/forum?id=bO5bUxvH6m", "detail_url": "https://openreview.net/forum?id=bO5bUxvH6m", "authors": "Lingjing Kong,Guangyi Chen,Biwei Huang,Eric P.
Xing,Yuejie Chi,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Learning concepts from natural high-dimensional data (e.g., images) holds potential in building human-aligned and interpretable machine learning models.\n Despite its encouraging prospect, formalization and theoretical insights into this crucial task are still lacking.\n In this work, we formalize concepts as discrete latent causal variables that are related via a hierarchical causal model that encodes different abstraction levels of concepts embedded in high-dimensional data (e.g., a dog breed and its eye shapes in natural images).\n We formulate conditions to facilitate the identification of the proposed causal model, which reveals when learning such concepts from unsupervised data is possible.\n Our conditions permit complex causal hierarchical structures beyond latent trees and multi-level directed acyclic graphs in prior work and can handle high-dimensional, continuous observed variables, which is well-suited for unstructured data modalities such as images.\n We substantiate our theoretical claims with synthetic data experiments.\n Further, we discuss our theory's implications for understanding the underlying mechanisms of latent diffusion models and provide corresponding empirical evidence for our theoretical insights.", "pdf": "https://openreview.net/pdf/fc94091b7601a7b61f28aad4441e7ccc9ebe7776.pdf"} {"title": "Secret Collusion among AI Agents: Multi-Agent Deception via Steganography", "url": "https://openreview.net/forum?id=bnNSQhZJ88", "detail_url": "https://openreview.net/forum?id=bnNSQhZJ88", "authors": "Sumeet Ramesh Motwani,Mikhail Baranchuk,Martin Strohmeier,Vijay Bolina,Philip Torr,Lewis Hammond,Christian Schroeder de Witt", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in generative AI suggest the potential for large-scale interaction between autonomous agents and humans across platforms such as the internet. While such interactions could foster productive cooperation, the ability of AI agents to circumvent security oversight raises critical multi-agent security problems, particularly in the form of unintended information sharing or undesirable coordination. In our work, we establish the subfield of secret collusion, a form of multi-agent deception, in which two or more agents employ steganographic methods to conceal the true nature of their interactions, be it communicative or otherwise, from oversight. We propose a formal threat model for AI agents communicating steganographically and derive rigorous theoretical insights about the capacity and incentives of large language models (LLMs) to perform secret collusion, in addition to the limitations of threat mitigation measures. We complement our findings with empirical evaluations demonstrating rising steganographic capabilities in frontier single and multi-agent LLM setups and examining potential scenarios where collusion may emerge, revealing limitations in countermeasures such as monitoring, paraphrasing, and parameter optimization. 
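Looking back at the episodic-control entry above: its differentiable dictionary readout is, in the Universal Hopfield view, a distance-kernel separation followed by a value-weighted projection. A toy sketch, including the Manhattan kernel that the paper found to help (parameter values are arbitrary):

```python
import numpy as np

def dictionary_readout(query, keys, values, beta=5.0, metric="manhattan"):
    """Soft nearest-neighbour readout of an episodic dictionary.

    In the Universal Hopfield decomposition this is separation (a distance
    kernel) followed by projection (a weighted average of stored values).
    """
    if metric == "manhattan":
        d = np.abs(keys - query).sum(axis=1)
    else:                                   # euclidean
        d = np.linalg.norm(keys - query, axis=1)
    w = np.exp(-beta * d)
    w /= w.sum()                            # softmax over negative distances
    return w @ values

keys = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
values = np.array([[1.0], [2.0], [3.0]])
print(dictionary_readout(np.array([0.1, 0.9]), keys, values))  # near key 3
```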
Our work is the first to formalize and investigate secret collusion among frontier foundation models, identifying it as a critical area in AI Safety and outlining a comprehensive research agenda to mitigate future risks of collusion between generative AI systems.", "pdf": "https://openreview.net/pdf/1819ecc171caf5c4b38f428386f6d19e8cfae90d.pdf"} {"title": "Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models", "url": "https://openreview.net/forum?id=KppBAWJbry", "detail_url": "https://openreview.net/forum?id=KppBAWJbry", "authors": "Yuxin Wen,Leo Marchyok,Sanghyun Hong,Jonas Geiping,Tom Goldstein,Nicholas Carlini", "tags": "NIPS 2024,Poster", "abstract": "It is commonplace to produce application-specific models by fine-tuning large pre-trained models using a small bespoke dataset. The widespread availability of foundation model checkpoints on the web poses considerable risks, including the vulnerability to backdoor attacks. In this paper, we unveil a new vulnerability: the privacy backdoor attack. This black-box privacy attack aims to amplify the privacy leakage that arises when fine-tuning a model: when a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model. We conduct extensive experiments on various datasets and models, including both vision-language models (CLIP) and large language models, demonstrating the broad applicability and effectiveness of such an attack. Additionally, we carry out multiple ablation studies with different fine-tuning methods and inference strategies to thoroughly analyze this new threat. Our findings highlight a critical privacy concern within the machine learning community and call for a re-evaluation of safety protocols in the use of open-source pre-trained models.", "pdf": "https://openreview.net/pdf/e72ef9fa5240ecd58026f407c70942a25b5f5ccc.pdf"} {"title": "Inverse M-Kernels for Linear Universal Approximators of Non-Negative Functions", "url": "https://openreview.net/forum?id=hgsS4onO4s", "detail_url": "https://openreview.net/forum?id=hgsS4onO4s", "authors": "Hideaki Kim", "tags": "NIPS 2024,Poster", "abstract": "Kernel methods are widely utilized in the machine learning field to learn, from training data, a latent function in a reproducing kernel Hilbert space. It is well known that the approximator thus obtained usually achieves a linear representation, which brings various computational benefits, while maintaining great representation power (i.e., universal approximation). However, when non-negativity constraints are imposed on the function's outputs, the literature usually takes the kernel method-based approximators as offering linear representations at the expense of limited model flexibility or good representation power by allowing for their nonlinear forms. The main contribution of this paper is to derive a sufficient condition for a positive definite kernel so that it may construct flexible and linear approximators of non-negative functions. We call a kernel function that offers these attributes an *inverse M-kernel*; it is reminiscent of the inverse M-matrix. Furthermore, we show that for a one-dimensional input space, universal exponential/Abel kernels are inverse M-kernels and construct linear universal approximators of non-negative functions. To the best of our knowledge, it is the first time that the existence of linear universal approximators of non-negative functions has been elucidated.
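For the one-dimensional inverse M-kernel result above, a linear-in-parameters non-negative approximator can be fit directly with non-negative least squares over an exponential/Abel kernel; the bandwidth, data, and fitting routine below are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 60))                  # 1-D training inputs
y = np.maximum(np.sin(2 * np.pi * x), 0.0)          # a non-negative target

K = np.exp(-8.0 * np.abs(x[:, None] - x[None, :]))  # Abel/Laplace kernel Gram matrix
coef, _ = nnls(K, y)                                # constrain coefficients a_i >= 0

def f(t):
    """Linear approximator f(t) = sum_i a_i k(t, x_i), non-negative by construction."""
    return np.exp(-8.0 * np.abs(np.asarray(t)[:, None] - x[None, :])) @ coef

grid = np.linspace(0, 1, 5)
print(np.round(f(grid), 3), "min coefficient:", coef.min())
```

Because the kernel values and the coefficients are both non-negative, f is non-negative everywhere while remaining linear in its parameters, which is the combination of properties the paper's condition is about.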
We confirm the effectiveness of our results by experiments on the problems of non-negativity-constrained regression, density estimation, and intensity estimation. Finally, we discuss issues and perspectives on multi-dimensional input settings.", "pdf": "https://openreview.net/pdf/00498bd2245bce9d370116e3ce71c22354054953.pdf"} {"title": "Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models", "url": "https://openreview.net/forum?id=TeBKVfhP2M", "detail_url": "https://openreview.net/forum?id=TeBKVfhP2M", "authors": "Alliot Nagle,Adway Girish,Marco Bondaschi,Michael Gastpar,Ashok Vardhan Makkuva,Hyeji Kim", "tags": "NIPS 2024,Poster", "abstract": "We formalize the problem of prompt compression for large language models (LLMs) and present a framework to unify token-level prompt compression methods which create hard prompts for black-box models. We derive the distortion-rate function for this setup as a linear program, and provide an efficient algorithm to compute this fundamental limit via the dual of the linear program. Using the distortion-rate function as the baseline, we study the performance of existing compression schemes on a synthetic dataset consisting of prompts generated from a Markov chain, natural language queries, and their respective answers. Our empirical analysis demonstrates the criticality of query-aware prompt compression, where the compressor has knowledge of the downstream task/query for the black-box LLM. We show that there is a large gap between the performance of current prompt compression methods and the optimal strategy, and propose Adaptive QuerySelect, a query-aware, variable-rate adaptation of a prior work to close the gap. We extend our experiments to a small natural language dataset to further confirm our findings on our synthetic dataset.", "pdf": "https://openreview.net/pdf/7f94bd23ea77433c638791e31d3b2fd1619b0225.pdf"} {"title": "KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge", "url": "https://openreview.net/forum?id=rDoPMODpki", "detail_url": "https://openreview.net/forum?id=rDoPMODpki", "authors": "Pengcheng Jiang,Lang Cao,Cao Xiao,Parminder Bhatia,Jimeng Sun,Jiawei Han", "tags": "NIPS 2024,Poster", "abstract": "Knowledge Graph Embedding (KGE) techniques are crucial in learning compact representations of entities and relations within a knowledge graph, facilitating efficient reasoning and knowledge discovery. While existing methods typically focus either on training KGE models solely based on graph structure or fine-tuning pre-trained language models with classification data in KG, KG-FIT leverages LLM-guided refinement to construct a semantically coherent hierarchical structure of entity clusters. By incorporating this hierarchical knowledge along with textual information during the fine-tuning process, KG-FIT effectively captures both global semantics from the LLM and local semantics from the KG. Extensive experiments on the benchmark datasets FB15K-237, YAGO3-10, and PrimeKG demonstrate the superiority of KG-FIT over state-of-the-art pre-trained language model-based methods, achieving improvements of 14.4\\%, 13.5\\%, and 11.9\\% in the Hits@10 metric for the link prediction task, respectively. Furthermore, KG-FIT yields substantial performance gains of 12.6\\%, 6.7\\%, and 17.7\\% compared to the structure-based base models upon which it is built. 
These results highlight the effectiveness of KG-FIT in incorporating open-world knowledge from LLMs to significantly enhance the expressiveness and informativeness of KG embeddings.", "pdf": "https://openreview.net/pdf/8e57c825b4a1824ff155a994d0dd93f8025cf334.pdf"} {"title": "Going Beyond Heuristics by Imposing Policy Improvement as a Constraint", "url": "https://openreview.net/forum?id=vBGMbFgvsX", "detail_url": "https://openreview.net/forum?id=vBGMbFgvsX", "authors": "Chi-Chang Lee,Zhang-Wei Hong,Pulkit Agrawal", "tags": "NIPS 2024,Poster", "abstract": "In many reinforcement learning (RL) applications, incorporating heuristic rewards alongside the task reward is crucial for achieving desirable performance. Heuristics encode prior human knowledge about how a task should be done, providing valuable hints for RL algorithms. However, such hints may not be optimal, limiting the performance of learned policies. \nThe currently established way of using heuristics is to modify the heuristic reward in a manner that ensures that the optimal policy learned with it remains the same as the optimal policy for the task reward (i.e., optimal policy invariance). \nHowever, these methods often fail in practical scenarios with limited training data. We found that while optimal policy invariance ensures convergence to the best policy based on task rewards, it doesn't guarantee better performance than policies trained with biased heuristics in the finite data regime encountered in practice. In this paper, we introduce a new principle tailored for finite data settings. Instead of enforcing optimal policy invariance, we train a policy that combines task and heuristic rewards and ensures it outperforms the heuristic-trained policy. As such, we prevent policies from merely exploiting heuristic rewards without improving the task reward. Our experiments on robotic locomotion, helicopter control, and manipulation tasks demonstrate that our method consistently outperforms the heuristic policy, regardless of the heuristic rewards' quality.\nCode is available at https://github.com/Improbable-AI/hepo.", "pdf": "https://openreview.net/pdf/1983230de4f35516ae3297ba7a6fb88fab7fad02.pdf"} {"title": "Asynchronous Perception Machine for Efficient Test Time Training", "url": "https://openreview.net/forum?id=7Ye12RLZ4P", "detail_url": "https://openreview.net/forum?id=7Ye12RLZ4P", "authors": "Rajat Modi,Yogesh S Rawat", "tags": "NIPS 2024,Poster", "abstract": "In this work, we propose Asynchronous Perception Machine (APM), a computationally-efficient architecture for test-time-training (TTT). APM can process patches of an image one at a time in any order asymmetrically, and still encode semantic awareness in the net. We demonstrate APM\u2019s ability to recognize out-of-distribution images without dataset-specific pre-training, augmentation or\nany pretext task. APM offers competitive performance over existing TTT approaches. To perform TTT, APM distills a test sample\u2019s representation just once. APM possesses a unique property: it can learn using just this single representation and start predicting semantically-aware features. APM\u2019s ability to recover semantic information from a global CLS token validates the insight that CLS\ntokens encode geometric information of a given scene, which can be recovered using appropriate inductive biases. This offers a novel insight with consequences for representational learning.
APM demonstrates potential applications beyond test-time-training: APM can scale up to a dataset of 2D images and yield semantic clusterings in a single forward pass. APM also provides the first empirical evidence towards validating Hinton et al.\u2019s GLOM insight, i.e., that the input percept is a field. Therefore, APM helps our community converge towards an implementation that can do both interpolation and perception on shared connectionist hardware. Our\ncodebase has been made available at https://rajatmodi62.github.io/apm_project_page/\n\n--------\n\n**It now appears that some of the ideas in GLOM could be made to work.**\n\nhttps://www.technologyreview.com/2021/04/16/1021871/geoffrey-hinton-glom-godfather-ai-neural-networks/\n\n```\n .-\"\"\"\"\"\"-.\n .' '.\n/ O O \\\n| O |\n \\ '------' /\n '. .'\n '-....-'\nSilent men in deep-contemplation.\nSilent men emerges only sometimes.\nSilent men love all.\nSilent men practice slow science.\n```", "pdf": "https://openreview.net/pdf/8c4eb0384cca8c70af42b23daa1600db0e73dbc4.pdf"} {"title": "Derivatives of Stochastic Gradient Descent in parametric optimization", "url": "https://openreview.net/forum?id=7WoOphIZ8u", "detail_url": "https://openreview.net/forum?id=7WoOphIZ8u", "authors": "Franck Iutzeler,Edouard Pauwels,Samuel Vaiter", "tags": "NIPS 2024,Poster", "abstract": "We consider stochastic optimization problems where the objective depends on some parameter, as commonly found in hyperparameter optimization for instance. We investigate the behavior of the derivatives of the iterates of Stochastic Gradient Descent (SGD) with respect to that parameter and show that they are driven by an inexact SGD recursion on a different objective function, perturbed by the convergence of the original SGD. This enables us to establish that the derivatives of SGD converge to the derivative of the solution mapping in terms of mean squared error whenever the objective is strongly convex. Specifically, we demonstrate that with constant step-sizes, these derivatives stabilize within a noise ball centered at the solution derivative, and that with vanishing step-sizes they exhibit $O(\log(k)^2 / k)$ convergence rates. Additionally, we prove exponential convergence in the interpolation regime. Our theoretical findings are illustrated by numerical experiments on synthetic tasks.", "pdf": "https://openreview.net/pdf/423c86dd162502363f0560c9b74988efc9fb4f92.pdf"} {"title": "Hierarchical Federated Learning with Multi-Timescale Gradient Correction", "url": "https://openreview.net/forum?id=aCAb1qNXI0", "detail_url": "https://openreview.net/forum?id=aCAb1qNXI0", "authors": "Wenzhi Fang,Dong-Jun Han,Evan Chen,Shiqiang Wang,Christopher Brinton", "tags": "NIPS 2024,Poster", "abstract": "While traditional federated learning (FL) typically focuses on a star topology where clients are directly connected to a central server, real-world distributed systems often exhibit hierarchical architectures. Hierarchical FL (HFL) has emerged as a promising solution to bridge this gap, leveraging aggregation points at multiple levels of the system. However, existing algorithms for HFL encounter challenges in dealing with multi-timescale model drift, i.e., model drift occurring across hierarchical levels of data heterogeneity. In this paper, we propose a multi-timescale gradient correction (MTGC) methodology to resolve this issue.
Our key idea is to introduce distinct control variables to (i) correct the client gradient towards the group gradient, i.e., to reduce client model drift caused by local updates based on individual datasets, and (ii) correct the group gradient towards the global gradient, i.e., to reduce group model drift caused by FL over clients within the group. We analytically characterize the convergence behavior of MTGC under general non-convex settings, overcoming challenges associated with couplings between correction terms. We show that our convergence bound is immune to the extent of data heterogeneity, confirming the stability of the proposed algorithm against multi-level non-i.i.d. data. Through extensive experiments on various datasets and models, we validate the effectiveness of MTGC in diverse HFL settings. The code for this project is available at https://github.com/wenzhifang/MTGC.", "pdf": "https://openreview.net/pdf/8c4c048feb4b5a529cb3818927d059243853b969.pdf"} {"title": "Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning", "url": "https://openreview.net/forum?id=GbqzN9HiUC", "detail_url": "https://openreview.net/forum?id=GbqzN9HiUC", "authors": "Gaia Molinaro,C\u00e9dric Colas,Pierre-Yves Oudeyer,Anne Collins", "tags": "NIPS 2024,Poster", "abstract": "Humans are autotelic agents who learn by setting and pursuing their own goals. However, the precise mechanisms guiding human goal selection remain unclear. Learning progress, typically measured as the observed change in performance, can provide a valuable signal for goal selection in both humans and artificial agents. We hypothesize that human choices of goals may also be driven by _latent learning progress_, which humans can estimate through knowledge of their actions and the environment \u2013 even without experiencing immediate changes in performance. To test this hypothesis, we designed a hierarchical reinforcement learning task in which human participants (N = 175) repeatedly chose their own goals and learned goal-conditioned policies. Our behavioral and computational modeling results confirm the influence of latent learning progress on goal selection and uncover inter-individual differences, partially mediated by recognition of the task's hierarchical structure. By investigating the role of latent learning progress in human goal selection, we pave the way for more effective and personalized learning experiences as well as the advancement of more human-like autotelic machines.", "pdf": "https://openreview.net/pdf/eff3b101f442418a2b21abcd72eec9c890690e19.pdf"} {"title": "A Closer Look at AUROC and AUPRC under Class Imbalance", "url": "https://openreview.net/forum?id=S3HvA808gk", "detail_url": "https://openreview.net/forum?id=S3HvA808gk", "authors": "Matthew B.A. McDermott,Haoran Zhang,Lasse Hyldig Hansen,Giovanni Angelotti,Jack Gallifant", "tags": "NIPS 2024,Poster", "abstract": "In machine learning (ML), a widespread claim is that the area under the precision-recall curve (AUPRC) is a superior metric for model comparison to the area under the receiver operating characteristic (AUROC) for tasks with class imbalance. This paper refutes this notion on two fronts. First, we theoretically characterize the behavior of AUROC and AUPRC in the presence of model mistakes, establishing clearly that AUPRC is not generally superior in cases of class imbalance. 
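The AUROC/AUPRC claim above is easy to probe numerically: hold the score distributions fixed and vary only the class balance, and AUROC stays essentially put while AUPRC tracks prevalence (a scikit-learn sketch):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

def sample(n_pos, n_neg):
    """Scores from two fixed Gaussians; only the class balance changes."""
    pos = rng.normal(1.0, 1.0, n_pos)
    neg = rng.normal(0.0, 1.0, n_neg)
    y = np.r_[np.ones(n_pos), np.zeros(n_neg)]
    return y, np.r_[pos, neg]

for n_pos in (5000, 500, 50):                  # increasing class imbalance
    y, s = sample(n_pos, 5000)
    print(f"pos={n_pos:5d}  AUROC={roc_auc_score(y, s):.3f}  "
          f"AUPRC={average_precision_score(y, s):.3f}")
```

The same fixed scorer yields a roughly constant AUROC across the three settings, while AUPRC falls as positives become rarer, which is why comparisons across subpopulations with different prevalences can be misleading.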
We further show that AUPRC can be a harmful metric as it can unduly favor model improvements in subpopulations with more frequent positive labels, heightening algorithmic disparities. Next, we empirically support our theory using experiments on both semi-synthetic and real-world fairness datasets. Prompted by these insights, we conduct a review of over 1.5 million scientific papers to understand the origin of this invalid claim, finding that it is often made without citation, misattributed to papers that do not argue this point, and aggressively over-generalized from source arguments. Our findings represent a dual contribution: a significant technical advancement in understanding the relationship between AUROC and AUPRC and a stark warning about unchecked assumptions in the ML community.", "pdf": "https://openreview.net/pdf/676b92aa1a5ffa041d8c793e74cc77aaa3831830.pdf"} {"title": "Breaking the curse of dimensionality in structured density estimation", "url": "https://openreview.net/forum?id=dWwin2uGYE", "detail_url": "https://openreview.net/forum?id=dWwin2uGYE", "authors": "Robert A. Vandermeulen,Wai Ming Tai,Bryon Aragam", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of estimating a structured multivariate density, subject to Markov conditions implied by an undirected graph. In the worst case, without Markovian assumptions, this problem suffers from the curse of dimensionality. Our main result shows how the curse of dimensionality can be avoided or greatly alleviated under the Markov property, and applies to arbitrary graphs. While existing results along these lines focus on sparsity or manifold assumptions, we introduce a new graphical quantity called ``graph resilience'' and show that it dictates the optimal sample complexity. Surprisingly, although one might expect the sample complexity of this problem to scale with local graph parameters such as the degree, this turns out not to be the case. Through explicit examples, we compute uniform deviation bounds and illustrate how the curse of dimensionality in density estimation can thus be circumvented. Notable examples where the rate improves substantially include sequential, hierarchical, and spatial data.", "pdf": "https://openreview.net/pdf/44fecc216640c7b5db7a8733800e26a05767deaf.pdf"} {"title": "EGODE: An Event-attended Graph ODE Framework for Modeling Rigid Dynamics", "url": "https://openreview.net/forum?id=js5vZtyoIQ", "detail_url": "https://openreview.net/forum?id=js5vZtyoIQ", "authors": "Jingyang Yuan,Gongbo Sun,Zhiping Xiao,Hang Zhou,Xiao Luo,Junyu Luo,Yusheng Zhao,Wei Ju,Ming Zhang", "tags": "NIPS 2024,Poster", "abstract": "This paper studies the problem of rigid dynamics modeling, which has a wide range of applications in robotics, graphics, and mechanical design. The problem is partly solved by graph neural network (GNN) simulators. However, these approaches cannot effectively handle the relationship between intrinsic continuity and instantaneous changes in rigid dynamics. Moreover, they usually neglect hierarchical structures across mesh nodes and objects in systems. In this paper, we propose a novel approach named Event-attend Graph ODE (EGODE) for effective rigid dynamics modeling. In particular, we describe the rigid system using both mesh node representations and object representations. To model continuous dynamics across hierarchical structures, we use a coupled graph ODE framework for the evolution of both types of representations over a long period. 
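To make the AUROC/AUPRC contrast from the record above concrete: holding the score distributions of both classes fixed and varying only the prevalence leaves AUROC unchanged while average precision collapses with the positive rate. A small scikit-learn example (numbers are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

def sample(n_pos, n_neg):
    # Identical score distributions across settings; only prevalence changes.
    pos = rng.normal(1.0, 1.0, n_pos)
    neg = rng.normal(0.0, 1.0, n_neg)
    y = np.r_[np.ones(n_pos), np.zeros(n_neg)]
    s = np.r_[pos, neg]
    return y, s

for n_pos in (5000, 500, 50):   # ~50%, ~9%, ~1% positives
    y, s = sample(n_pos, 5000)
    print(f"prevalence={n_pos / (n_pos + 5000):5.3f}  "
          f"AUROC={roc_auc_score(y, s):.3f}  "
          f"AUPRC={average_precision_score(y, s):.3f}")
# AUROC stays ~0.76 in every row; AUPRC drops sharply with prevalence, so
# comparing AUPRC across subpopulations conflates ranking quality with base rates.
```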
In addition, to capture instantaneous changes during collisions, we introduce an event module, which can effectively estimate the occurrence of a collision and update the states of both mesh node and object representations during evolution. Extensive experiments on a range of benchmark datasets validate the superiority of the proposed EGODE compared to various state-of-the-art baselines. The source code can be found at https://github.com/yuanjypku/EGODE.", "pdf": "https://openreview.net/pdf/c4c3ac7cd0379357d24027e69a2515268b592a50.pdf"} {"title": "Shuffling Gradient-Based Methods for Nonconvex-Concave Minimax Optimization", "url": "https://openreview.net/forum?id=lfY0SUT3m9", "detail_url": "https://openreview.net/forum?id=lfY0SUT3m9", "authors": "Quoc Tran-Dinh,Trang H. Tran,Lam M. Nguyen", "tags": "NIPS 2024,Poster", "abstract": "This paper aims at developing novel shuffling gradient-based methods for tackling two classes of minimax problems: nonconvex-linear and nonconvex-strongly concave settings. The first algorithm addresses the nonconvex-linear minimax model and achieves the state-of-the-art oracle complexity typically observed in nonconvex optimization. It also employs a new shuffling estimator for the ``hyper-gradient'', departing from standard shuffling techniques in optimization. The second method consists of two variants: semi-shuffling and full-shuffling schemes. These variants tackle the nonconvex-strongly concave minimax setting. We establish their oracle complexity bounds under standard assumptions, which, to our best knowledge, are the best-known for this specific setting. Numerical examples demonstrate the performance of our algorithms and compare them with two other methods. Our results show that the new methods achieve comparable performance with SGD, supporting the potential of incorporating shuffling strategies into minimax algorithms.", "pdf": "https://openreview.net/pdf/396caa232403726b82ad02411f633f45bd3bc3e6.pdf"} {"title": "Medformer: A Multi-Granularity Patching Transformer for Medical Time-Series Classification", "url": "https://openreview.net/forum?id=jfkid2HwNr", "detail_url": "https://openreview.net/forum?id=jfkid2HwNr", "authors": "Yihe Wang,Nan Huang,Taida Li,Yujun Yan,Xiang Zhang", "tags": "NIPS 2024,Poster", "abstract": "Medical time series (MedTS) data, such as Electroencephalography (EEG) and Electrocardiography (ECG), play a crucial role in healthcare tasks such as diagnosing brain and heart diseases. Existing methods for MedTS classification primarily rely on handcrafted biomarker extraction and CNN-based models, with limited exploration of transformer-based models. In this paper, we introduce Medformer, a multi-granularity patching transformer tailored specifically for MedTS classification. Our method incorporates three novel mechanisms to leverage the unique characteristics of MedTS: cross-channel patching to leverage inter-channel correlations, multi-granularity embedding for capturing features at different scales, and two-stage (intra- and inter-granularity) multi-granularity self-attention for learning features and correlations within and among granularities. We conduct extensive experiments on five public datasets under both subject-dependent and challenging subject-independent setups. Results demonstrate Medformer's superiority over 10 baselines, achieving top averaged ranking across five datasets on all six evaluation metrics.
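A minimal sketch of the cross-channel, multi-granularity patching described for Medformer above, assuming non-overlapping patches and one shared linear embedding per granularity (shapes, patch lengths, and names are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class CrossChannelPatching(nn.Module):
    """Embed a multichannel series (B, C, T) into one token sequence per
    granularity; each patch spans all channels (cross-channel patching)."""
    def __init__(self, n_channels, patch_lens=(2, 4, 8), d_model=64):
        super().__init__()
        self.patch_lens = patch_lens
        self.proj = nn.ModuleList(
            nn.Linear(n_channels * p, d_model) for p in patch_lens
        )

    def forward(self, x):                      # x: (B, C, T)
        tokens = []
        for p, proj in zip(self.patch_lens, self.proj):
            B, C, T = x.shape
            xt = x[..., : T - T % p]           # drop the remainder
            # (B, C, T//p, p) -> (B, T//p, C*p): one token per time patch,
            # flattened across channels to expose inter-channel correlations.
            patches = xt.unfold(-1, p, p).permute(0, 2, 1, 3).reshape(B, -1, C * p)
            tokens.append(proj(patches))       # (B, T//p, d_model)
        return tokens                          # one sequence per granularity

x = torch.randn(8, 12, 128)                    # e.g. a 12-lead ECG segment
for t in CrossChannelPatching(12)(x):
    print(t.shape)
```

The per-granularity sequences would then feed the two-stage (intra- then inter-granularity) self-attention the abstract describes.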
These findings underscore the significant impact of our method on healthcare applications, such as diagnosing Myocardial Infarction, Alzheimer's, and Parkinson's disease. We release the source code at https://github.com/DL4mHealth/Medformer.", "pdf": "https://openreview.net/pdf/0962cbda96d58addea1481d214e2fb42baf273ac.pdf"} {"title": "Meta-Reinforcement Learning with Universal Policy Adaptation: Provable Near-Optimality under All-task Optimum Comparator", "url": "https://openreview.net/forum?id=rpjh69DUX2", "detail_url": "https://openreview.net/forum?id=rpjh69DUX2", "authors": "Siyuan Xu,Minghui Zhu", "tags": "NIPS 2024,Poster", "abstract": "Meta-reinforcement learning (Meta-RL) has attracted attention due to its capability to enhance reinforcement learning (RL) algorithms, in terms of data efficiency and generalizability. In this paper, we develop a bilevel optimization framework for meta-RL (BO-MRL) to learn the meta-prior for task-specific policy adaptation, which implements multiple-step policy optimization on one-time data collection. Beyond existing meta-RL analyses, we provide upper bounds of the expected optimality gap over the task distribution. This metric measures the distance of the policy adaptation from the learned meta-prior to the task-specific optimum, and quantifies the model's generalizability to the task distribution. We empirically validate the correctness of the derived upper bounds and demonstrate the superior effectiveness of the proposed algorithm over benchmarks.", "pdf": "https://openreview.net/pdf/151c121043796259ec7bb2338ce9abeda1583f5b.pdf"} {"title": "Mission Impossible: A Statistical Perspective on Jailbreaking LLMs", "url": "https://openreview.net/forum?id=eowkjKVPoH", "detail_url": "https://openreview.net/forum?id=eowkjKVPoH", "authors": "Jingtong Su,Julia Kempe,Karen Ullrich", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) are trained on a deluge of text data with limited quality control. As a result, LLMs can exhibit unintended or even harmful behaviours, such as leaking information, fake news or hate speech. Countermeasures, commonly referred to as preference alignment, include fine-tuning the pretrained LLMs with carefully crafted text examples of desired behaviour. Even then, empirical evidence shows preference-aligned LLMs can be enticed to harmful behaviour. This so-called jailbreaking of LLMs is typically achieved by adversarially modifying the input prompt to the LLM. Our paper provides theoretical insights into the phenomenon of preference alignment and jailbreaking from a statistical perspective. Under our framework, we first show that pretrained LLMs will mimic harmful behaviour if present in the training corpus. \\textbf{Under that same framework, we then introduce a statistical notion of alignment, and lower-bound the jailbreaking probability, showing that it is unpreventable under reasonable assumptions.} Based on our insights, we propose an alteration to the currently prevalent alignment strategy RLHF. Specifically, we introduce a simple modification to the RLHF objective, which we call \\emph{E-RLHF}, that aims to increase the likelihood of safe responses. \\emph{E-RLHF} brings no additional training cost, and is compatible with other methods.
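A heavily hedged sketch of the E-RLHF idea above. The paper's exact objective is not reproduced here; this encodes one reading of "modify the RLHF objective to raise the likelihood of safe responses", namely anchoring the KL term on a safety-edited prompt. `safe_edit` and all other names are hypothetical stand-ins:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, d = 100, 16
model = torch.nn.Linear(d, vocab)           # stand-in policy head
ref_model = torch.nn.Linear(d, vocab)       # stand-in frozen reference head

def kl_to_ref(policy_logits, ref_logits):
    logp = F.log_softmax(policy_logits, -1)
    refp = F.log_softmax(ref_logits, -1)
    return (logp.exp() * (logp - refp)).sum(-1).mean()

def safe_edit(prompt_emb):
    # Hypothetical stand-in for rewriting a harmful prompt into a safe one.
    return prompt_emb + 0.1

prompt = torch.randn(4, d)                  # stand-in prompt embeddings
reward = torch.tensor(1.0)                  # stand-in reward-model score
beta = 0.1

# Vanilla RLHF surrogate: maximize reward - beta * KL(pi(.|x) || pi_ref(.|x)).
loss_rlhf = -(reward - beta * kl_to_ref(model(prompt),
                                        ref_model(prompt).detach()))
# E-RLHF-style surrogate (one interpretation): anchor the KL on the
# safety-edited prompt, nudging mass toward safe responses at no extra cost.
loss_erlhf = -(reward - beta * kl_to_ref(model(prompt),
                                         ref_model(safe_edit(prompt)).detach()))
print(loss_rlhf.item(), loss_erlhf.item())
```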
Empirically, we demonstrate that \\emph{E-RLHF} outperforms RLHF on all alignment problems put forward by the AdvBench \\citep{zou2023universal} and HarmBench project \\citep{mazeika2024harmbench} without sacrificing model performance as measured by the MT-Bench project \\citep{zheng2024judging}.", "pdf": "https://openreview.net/pdf/f825413c63ff02b981a7fce0386fdfc9b4cc20cb.pdf"} {"title": "bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction", "url": "https://openreview.net/forum?id=HtlfNbyfOn", "detail_url": "https://openreview.net/forum?id=HtlfNbyfOn", "authors": "Yehe Liu,Alexander Krull,Hector Basevi,Ales Leonardis,Michael W. Jenkins", "tags": "NIPS 2024,Poster", "abstract": "Quanta image sensors, such as SPAD arrays, are an emerging sensor technology, producing 1-bit arrays representing photon detection events over exposures as short as a few nanoseconds. In practice, raw data are post-processed using heavy spatiotemporal binning to create more useful and interpretable images at the cost of degrading spatiotemporal resolution. In this work, we propose bit2bit, a new method for reconstructing high-quality image stacks at the original spatiotemporal resolution from sparse binary quanta image data. Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data by predicting the photon arrival location probability distribution. However, due to the binary nature of the data, we show that the assumption of a Poisson distribution is inadequate. Instead, we model the process with a Bernoulli lattice process from the truncated Poisson. This leads to the proposal of a novel self-supervised solution based on a masked loss function. We evaluate our method using both simulated and real data. On simulated data from a conventional video, we achieve 34.35 mean PSNR with extremely photon-sparse binary input (<0.06 photons per pixel per frame). We also present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions. The scenes cover strong/weak ambient light, strong motion, ultra-fast events, etc. This dataset will be made available to the community, and on it we demonstrate the promise of our approach. Both reconstruction quality and throughput substantially surpass the state-of-the-art methods (e.g., Quanta Burst Photography (QBP)). Our approach significantly enhances the visualization and usability of the data, enabling the application of existing analysis techniques.", "pdf": "https://openreview.net/pdf/1b24879f5f0576afddf52835b0cf6a678f2ddb06.pdf"} {"title": "Warm-starting Push-Relabel", "url": "https://openreview.net/forum?id=YYY5lzE547", "detail_url": "https://openreview.net/forum?id=YYY5lzE547", "authors": "Sami Davies,Sergei Vassilvitskii,Yuyan Wang", "tags": "NIPS 2024,Poster", "abstract": "Push-Relabel is one of the most celebrated network flow algorithms. Maintaining a pre-flow that saturates a cut, it enjoys better theoretical and empirical running time than other flow algorithms, such as Ford-Fulkerson. In practice, Push-Relabel is even faster than what theoretical guarantees can promise, in part because of the use of good heuristics for seeding and updating the iterative algorithm. However, it remains unclear how to run Push-Relabel on an arbitrary initialization that is not necessarily a pre-flow or cut-saturating.
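For the bit2bit record above, the binary observation model follows from truncating a Poisson at zero: a 1-bit pixel fires iff at least one photon arrives during the exposure, so P(b=1) = 1 - exp(-lambda). A small simulation sketch (illustrative only; the paper's masked self-supervised loss is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def spad_frames(flux, n_frames):
    """Simulate 1-bit SPAD frames: b = 1 iff >= 1 photon in the exposure.
    Bernoulli(1 - exp(-flux)) is the zero-truncation of Poisson(flux)."""
    p_detect = 1.0 - np.exp(-flux)
    return (rng.random((n_frames, *flux.shape)) < p_detect).astype(np.uint8)

flux = np.full((64, 64), 0.05)          # ~0.05 photons/pixel/frame (very sparse)
frames = spad_frames(flux, 2000)

# Invert the Bernoulli model to recover flux from the empirical firing rate:
rate = frames.mean(0)
flux_hat = -np.log1p(-np.clip(rate, 0.0, 1 - 1e-6))
print(flux_hat.mean())                  # close to 0.05
```

At such low flux the Poisson and Bernoulli models nearly coincide, but the gap grows with lambda, which is why the abstract argues the plain Poisson assumption is inadequate for binary data.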
We provide the first theoretical guarantees for warm-starting Push-Relabel with a predicted flow, where our learning-augmented version benefits from fast running time when the predicted flow is close to an optimal flow, while maintaining robust worst-case guarantees. Interestingly, our algorithm uses the gap relabeling heuristic, which has long been employed in practice, even though prior to our work there was no rigorous theoretical justification for why it can lead to run-time improvements. We then show our algorithmic framework works well in practice, as our warm-start version of Push-Relabel improves over the cold-start version by a larger and larger percentage as the size of the image increases.", "pdf": "https://openreview.net/pdf/56b2a5256531819ddced308f3e038fb838f8ed6a.pdf"} {"title": "Bias Detection via Signaling", "url": "https://openreview.net/forum?id=4D7haH4pdR", "detail_url": "https://openreview.net/forum?id=4D7haH4pdR", "authors": "Yiling Chen,Tao Lin,Ariel D. Procaccia,Aaditya Ramdas,Itai Shapira", "tags": "NIPS 2024,Poster", "abstract": "We introduce and study the problem of detecting whether an agent is updating their prior beliefs given new evidence in an optimal way that is Bayesian, or whether they are biased towards their own prior. In our model, biased agents form posterior beliefs that are a convex combination of their prior and the Bayesian posterior, where the more biased an agent is, the closer their posterior is to the prior. Since we often cannot observe the agent's beliefs directly, we take an approach inspired by *information design*. Specifically, we measure an agent's bias by designing a *signaling scheme* and observing the actions they take in response to different signals, assuming that they are maximizing their own expected utility; our goal is to detect bias with a minimum number of signals. Our main results include a characterization of scenarios where a single signal suffices and a computationally efficient algorithm to compute optimal signaling schemes.", "pdf": "https://openreview.net/pdf/9ec591e069b245725ae83245db1b8877dbf65043.pdf"} {"title": "Stabilizing Linear Passive-Aggressive Online Learning with Weighted Reservoir Sampling", "url": "https://openreview.net/forum?id=FNOBf6JM7r", "detail_url": "https://openreview.net/forum?id=FNOBf6JM7r", "authors": "Skyler Wu,Fred Lu,Edward Raff,James Holt", "tags": "NIPS 2024,Poster", "abstract": "Online learning methods, like the seminal Passive-Aggressive (PA) classifier, are still highly effective for high-dimensional streaming data, out-of-core processing, and other throughput-sensitive applications. Many such algorithms rely on fast adaptation to individual errors as a key to their convergence. While such algorithms enjoy low theoretical regret, in real-world deployment they can be sensitive to individual outliers that cause the algorithm to over-correct. When such outliers occur at the end of the data stream, this can cause the final solution to have unexpectedly low accuracy. We design a weighted reservoir sampling (WRS) approach to obtain a stable ensemble model from the sequence of solutions without requiring additional passes over the data, hold-out sets, or a growing amount of memory. Our key insight is that good solutions tend to be error-free for more iterations than bad solutions, and thus, the number of passive rounds provides an estimate of a solution's relative quality. Our reservoir thus contains $K$ previous intermediate weight vectors with high survival times. 
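A compact sketch of the reservoir idea above, pairing a Passive-Aggressive-style learner with Efraimidis-Spirakis A-Res weighted reservoir sampling, where a solution's weight is its survival time (number of consecutive passive, error-free rounds). The PA update shown is the standard hinge-loss step; the ensemble is the average of reservoir members (details like the averaging choice are illustrative assumptions):

```python
import heapq
import itertools
import numpy as np

rng = np.random.default_rng(0)
tie = itertools.count()

def wrs_offer(res, item, weight, k):
    """Efraimidis-Spirakis A-Res: keep the k items with the largest u**(1/w)."""
    key = rng.random() ** (1.0 / weight)
    entry = (key, next(tie), item)          # counter breaks ties before arrays
    if len(res) < k:
        heapq.heappush(res, entry)
    elif entry[0] > res[0][0]:
        heapq.heapreplace(res, entry)

d, k = 20, 8
w, survival, res = np.zeros(d), 0, []
for _ in range(5000):
    x = rng.normal(size=d)
    y = 1.0 if x[0] + 0.1 * rng.normal() > 0 else -1.0   # noisy linear concept
    loss = max(0.0, 1.0 - y * (w @ x))
    if loss == 0.0:
        survival += 1                       # passive round: the solution survives
    else:
        if survival > 0:                    # offer the outgoing solution,
            wrs_offer(res, w.copy(), survival, k)  # weighted by survival time
        w = w + (loss / (x @ x)) * y * x    # PA update; can over-correct on outliers
        survival = 0

w_ens = np.mean([item for _, _, item in res], axis=0)
print("ensemble recovers the concept direction:", w_ens[0] > 0)
```

The key property used above: a single pass, constant memory, and no hold-out set, exactly the constraints the abstract imposes on the stabilization method.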
We demonstrate our WRS approach on the Passive-Aggressive Classifier (PAC) and First-Order Sparse Online Learning (FSOL), where our method consistently and significantly outperforms the unmodified approach. We show that the risk of the ensemble classifier is bounded with respect to the regret of the underlying online learning method.", "pdf": "https://openreview.net/pdf/2b906dc0f1f6492f739eae5b360d80bf8437152f.pdf"} {"title": "CountGD: Multi-Modal Open-World Counting", "url": "https://openreview.net/forum?id=eUg64OsGDE", "detail_url": "https://openreview.net/forum?id=eUg64OsGDE", "authors": "Niki Amini-Naieni,Tengda Han,Andrew Zisserman", "tags": "NIPS 2024,Poster", "abstract": "The goal of this paper is to improve the generality and accuracy of open-vocabulary object counting in images. To improve the generality, we repurpose an open-vocabulary detection foundation model (GroundingDINO) for the counting task, and also extend its capabilities by introducing modules to enable specifying the target object to count by visual exemplars. In turn, these new capabilities -- being able to specify the target object by multi-modalities (text and exemplars) -- lead to an improvement in counting accuracy. We make three contributions: First, we introduce the first open-world counting model, CountGD, where the prompt can be specified by a text description or visual exemplars or both; Second, we show that the performance of the model significantly improves the state of the art on multiple counting benchmarks -- when using text only, CountGD outperforms all previous text-only works, and when using both text and visual exemplars, we outperform all previous models; Third, we carry out a preliminary study into different interactions between the text and visual exemplar prompts, including the cases where they reinforce each other and where one restricts the other. The code and an app to test the model are available at https://www.robots.ox.ac.uk/vgg/research/countgd/.", "pdf": "https://openreview.net/pdf/673724e078a1d785fdd87105db2bf8e2fe667b77.pdf"} {"title": "HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models", "url": "https://openreview.net/forum?id=hkujvAPVsg", "detail_url": "https://openreview.net/forum?id=hkujvAPVsg", "authors": "Bernal Jimenez Gutierrez,Yiheng Shu,Yu Gu,Michihiro Yasunaga,Yu Su", "tags": "NIPS 2024,Poster", "abstract": "In order to thrive in hostile and ever-changing natural environments, mammalian brains evolved to store large amounts of knowledge about the world and continually integrate new information while avoiding catastrophic forgetting. Despite the impressive accomplishments, large language models (LLMs), even with retrieval-augmented generation (RAG), still struggle to efficiently and effectively integrate a large amount of new experiences after pre-training. In this work, we introduce HippoRAG, a novel retrieval framework inspired by the hippocampal indexing theory of human long-term memory to enable deeper and more efficient knowledge integration over new experiences. HippoRAG synergistically orchestrates LLMs, knowledge graphs, and the Personalized PageRank algorithm to mimic the different roles of neocortex and hippocampus in human memory. We compare HippoRAG with existing RAG methods on multi-hop question answering (QA) and show that our method outperforms the state-of-the-art methods remarkably, by up to 20%.
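The Personalized PageRank step named in the HippoRAG record above can be sketched directly with networkx, seeding the teleport distribution on query-relevant entity nodes (the graph and seed entities below are toy stand-ins, not the paper's pipeline):

```python
import networkx as nx

# Toy knowledge graph (in HippoRAG, extracted offline by an LLM).
G = nx.Graph()
G.add_edges_from([
    ("Stanford", "Alice"), ("Alice", "Alzheimer's"),
    ("Alzheimer's", "tau protein"), ("Bob", "Stanford"),
    ("Bob", "genomics"), ("tau protein", "PaperX"),
])

# Seed the teleport vector with entities recognized in the query, then rank
# all nodes by Personalized PageRank mass; multi-hop neighbors of the seeds
# (e.g. "tau protein") surface in a single retrieval step.
seeds = {"Stanford": 0.5, "Alzheimer's": 0.5}
scores = nx.pagerank(G, alpha=0.85, personalization=seeds)
print(sorted(scores.items(), key=lambda kv: -kv[1])[:4])
```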
Single-step retrieval with HippoRAG achieves comparable or better performance than iterative retrieval like IRCoT while being 10-20 times cheaper and 6-13 times faster, and integrating HippoRAG into IRCoT brings further substantial gains. Finally, we show that our method can tackle new types of scenarios that are out of reach of existing methods.", "pdf": "https://openreview.net/pdf/9dbb874a8ef320b3bc43096064eab8f52a6e8757.pdf"} {"title": "Average gradient outer product as a mechanism for deep neural collapse", "url": "https://openreview.net/forum?id=vtRotUd539", "detail_url": "https://openreview.net/forum?id=vtRotUd539", "authors": "Daniel Beaglehole,Peter S\u00faken\u00edk,Marco Mondelli,Mikhail Belkin", "tags": "NIPS 2024,Poster", "abstract": "Deep Neural Collapse (DNC) refers to the surprisingly rigid structure of the data representations in the final layers of Deep Neural Networks (DNNs). Though the phenomenon has been measured in a variety of settings, its emergence is typically explained via data-agnostic approaches, such as the unconstrained features model. In this work, we introduce a data-dependent setting where DNC forms due to feature learning through the average gradient outer product (AGOP). The AGOP is defined with respect to a learned predictor and is equal to the uncentered covariance matrix of its input-output gradients averaged over the training dataset. Deep Recursive Feature Machines are a method that constructs a neural network by iteratively mapping the data with the AGOP and applying an untrained random feature map. We demonstrate theoretically and empirically that DNC occurs in Deep Recursive Feature Machines as a consequence of the projection with the AGOP matrix computed at each layer. We then provide evidence that this mechanism holds for neural networks more generally. We show that the right singular vectors and values of the weights can be responsible for the majority of within-class variability collapse for DNNs trained in the feature learning regime. As observed in recent work, this singular structure is highly correlated with that of the AGOP.", "pdf": "https://openreview.net/pdf/e6aa9a4011a802d00ba4c1202d7c30befc5c3233.pdf"} {"title": "LLMDFA: Analyzing Dataflow in Code with Large Language Models", "url": "https://openreview.net/forum?id=QZ2d8E8Whu", "detail_url": "https://openreview.net/forum?id=QZ2d8E8Whu", "authors": "Chengpeng Wang,Wuqi Zhang,Zian Su,Xiangzhe Xu,Xiaoheng Xie,Xiangyu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Dataflow analysis is a fundamental code analysis technique that identifies dependencies between program values. Traditional approaches typically necessitate successful compilation and expert customization, hindering their applicability and usability for analyzing uncompilable programs with evolving analysis needs in real-world scenarios. This paper presents LLMDFA, an LLM-powered compilation-free and customizable dataflow analysis framework. To address hallucinations for reliable results, we decompose the problem into several subtasks and introduce a series of novel strategies. Specifically, we leverage LLMs to synthesize code that outsources delicate reasoning to external expert tools, such as using a parsing library to extract program values of interest and invoking an automated theorem prover to validate path feasibility. 
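The AGOP defined in the record above can be computed directly with autograd: average the uncentered outer product of input-output Jacobians over the data. A minimal sketch for a small MLP (batched or vectorized variants would be used in practice):

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
)

def agop(model, X):
    """AGOP = E_x[ J(x)^T J(x) ], with J the input-output Jacobian."""
    total = torch.zeros(X.shape[1], X.shape[1])
    for x in X:
        J = torch.autograd.functional.jacobian(model, x)   # (out_dim, in_dim)
        total += J.T @ J
    return total / len(X)

X = torch.randn(64, 10)
M = agop(net, X)
print(M.shape, torch.linalg.eigvalsh(M)[-3:])   # top AGOP eigendirections
```

In the Deep Recursive Feature Machine construction described above, the data at each layer is mapped with (a power of) this matrix before the random feature map is applied.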
Additionally, we adopt few-shot chain-of-thought prompting to summarize dataflow facts in individual functions, aligning the LLMs with the program semantics of small code snippets to mitigate hallucinations. We evaluate LLMDFA on synthetic programs to detect three representative types of bugs and on real-world Android applications for customized bug detection. On average, LLMDFA achieves 87.10% precision and 80.77% recall, surpassing existing techniques with F1 score improvements of up to 0.35. We have open-sourced LLMDFA at https://github.com/chengpeng-wang/LLMDFA.", "pdf": "https://openreview.net/pdf/2cfd2ed22ae0deee46b1fb6367fa370ba0fca4e6.pdf"} {"title": "Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation", "url": "https://openreview.net/forum?id=cYZibc2gKf", "detail_url": "https://openreview.net/forum?id=cYZibc2gKf", "authors": "Shreyas Chaudhari,Ameet Deshpande,Bruno Castro da Silva,Philip S. Thomas", "tags": "NIPS 2024,Poster", "abstract": "Evaluating policies using off-policy data is crucial for applying reinforcement learning to real-world problems such as healthcare and autonomous driving. Previous methods for *off-policy evaluation* (OPE) generally suffer from high variance or irreducible bias, leading to unacceptably high prediction errors. In this work, we introduce STAR, a framework for OPE that encompasses a broad range of estimators -- which include existing OPE methods as special cases -- that achieve lower mean squared prediction errors. STAR leverages state abstraction to distill complex, potentially continuous problems into compact, discrete models which we call *abstract reward processes* (ARPs). Predictions from ARPs estimated from off-policy data are provably consistent (asymptotically correct). Rather than proposing a specific estimator, we present a new framework for OPE and empirically demonstrate that estimators within STAR outperform existing methods. The best STAR estimator outperforms baselines in all twelve cases studied, and even the median STAR estimator surpasses the baselines in seven out of the twelve cases.", "pdf": "https://openreview.net/pdf/7c8af02f297158a80816597f36d79d1144f6a30f.pdf"} {"title": "Almost Free: Self-concordance in Natural Exponential Families and an Application to Bandits", "url": "https://openreview.net/forum?id=LKwVYvx66I", "detail_url": "https://openreview.net/forum?id=LKwVYvx66I", "authors": "Shuai Liu,Alex Ayoub,Flore Sentenac,Xiaoqi Tan,Csaba Szepesvari", "tags": "NIPS 2024,Poster", "abstract": "We prove that single-parameter natural exponential families with subexponential tails are self-concordant with polynomial-sized parameters. For subgaussian natural exponential families we establish an exact characterization of the growth rate of the self-concordance parameter. Applying these findings to bandits allows us to fill gaps in the literature: We show that optimistic algorithms for generalized linear bandits enjoy regret bounds that are both second-order (scale with the variance of the optimal arm's reward distribution) and free of an exponential dependence on the bound of the problem parameter in the leading term.
To the best of our knowledge, ours is the first regret bound for generalized linear bandits with subexponential tails, broadening the class of problems to include Poisson, exponential and gamma bandits.", "pdf": "https://openreview.net/pdf/80bc9ff2d7268251aab1a1b319de315505432532.pdf"} {"title": "Annealed Multiple Choice Learning: Overcoming limitations of Winner-takes-all with annealing", "url": "https://openreview.net/forum?id=WEs4WMzndY", "detail_url": "https://openreview.net/forum?id=WEs4WMzndY", "authors": "David Perera,Victor Letzelter,Theo Mariotte,Adrien Cortes,Mickael Chen,Slim Essid,Ga\u00ebl Richard", "tags": "NIPS 2024,Poster", "abstract": "We introduce Annealed Multiple Choice Learning (aMCL) which combines simulated annealing with MCL. MCL is a learning framework handling ambiguous tasks by predicting a small set of plausible hypotheses. These hypotheses are trained using the Winner-takes-all (WTA) scheme, which promotes the diversity of the predictions. However, this scheme may converge toward an arbitrarily suboptimal local minimum, due to the greedy nature of WTA. We overcome this limitation using annealing, which enhances the exploration of the hypothesis space during training. We leverage insights from statistical physics and information theory to provide a detailed description of the model training trajectory. Additionally, we validate our algorithm by extensive experiments on synthetic datasets, on the standard UCI benchmark, and on speech separation.", "pdf": "https://openreview.net/pdf/2d6c96f93628c3b1200feba56a94498c804019ca.pdf"} {"title": "Asymptotics of Alpha-Divergence Variational Inference Algorithms with Exponential Families", "url": "https://openreview.net/forum?id=HfQF8LoLhs", "detail_url": "https://openreview.net/forum?id=HfQF8LoLhs", "authors": "Fran\u00e7ois Bertholom,randal douc,Fran\u00e7ois Roueff", "tags": "NIPS 2024,Poster", "abstract": "Recent works in Variational Inference have examined alternative criteria to the commonly used exclusive Kullback-Leibler divergence. Encouraging empirical results have been obtained with the family of alpha-divergences, but few works have focused on the asymptotic properties of the proposed algorithms, especially as the number of iterations goes to infinity. In this paper, we study a procedure that ensures a monotonic decrease in the alpha-divergence. We provide sufficient conditions to guarantee its convergence to a local minimizer of the alpha-divergence at a geometric rate when the variational family belongs to the class of exponential models. The sample-based version of this ideal procedure involves biased gradient estimators, thus hindering any theoretical study. We propose an alternative unbiased algorithm, we prove its almost sure convergence to a local minimizer of the alpha-divergence, and a law of the iterated logarithm. Our results are exemplified with toy and real-data experiments.", "pdf": "https://openreview.net/pdf/67a9daa02b003ad2b84a07aef1d11bc7a71ccad8.pdf"} {"title": "Generating Highly Designable Proteins with Geometric Algebra Flow Matching", "url": "https://openreview.net/forum?id=nAnEStxyfy", "detail_url": "https://openreview.net/forum?id=nAnEStxyfy", "authors": "Simon Wagner,Leif Seute,Vsevolod Viliuga,Nicolas Wolf,Frauke Gr\u00e4ter,Jan Stuehmer", "tags": "NIPS 2024,Poster", "abstract": "We introduce a generative model for protein backbone design utilizing geometric products and higher order message passing. 
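The annealed relaxation behind aMCL (two records above) can be sketched by replacing the hard Winner-takes-all argmin with a Boltzmann weighting at temperature T and annealing T toward 0, which recovers standard WTA in the limit. A toy illustration (schedule and architecture are assumptions, not the paper's setup):

```python
import torch

torch.manual_seed(0)
n_hyp, steps = 4, 3000
hyps = torch.randn(n_hyp, 1, requires_grad=True)        # constant hypotheses
opt = torch.optim.Adam([hyps], lr=5e-2)

for step in range(steps):
    T = max(1e-3, 2.0 * (1 - step / steps))             # annealing schedule
    y = torch.randint(0, 2, (256, 1)).float() * 4 - 2   # targets in {-2, +2}
    d2 = (y[:, None, :] - hyps[None, :, :]).pow(2).sum(-1)   # (B, n_hyp)
    # Soft winner-takes-all: Boltzmann responsibilities at temperature T;
    # T -> 0 recovers the hard argmin assignment of plain WTA.
    resp = torch.softmax(-d2 / T, dim=1).detach()
    loss = (resp * d2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(hyps.detach().squeeze().sort().values)   # hypotheses split over {-2, +2}
```

At high temperature all hypotheses share the gradient (avoiding the greedy local minima the abstract warns about); as T cools they specialize, which is the exploration effect annealing adds to MCL.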
In particular, we propose Clifford Frame Attention (CFA), an extension of the invariant point attention (IPA) architecture from AlphaFold2, in which the backbone residue frames and geometric features are represented in the projective geometric algebra. This enables the construction of geometrically expressive messages between residues, including higher order terms, using the bilinear operations of the algebra. We evaluate our architecture by incorporating it into the framework of FrameFlow, a state-of-the-art flow matching model for protein backbone generation. The proposed model achieves high designability, diversity and novelty, while also sampling protein backbones that follow the statistical distribution of secondary structure elements found in naturally occurring proteins, a property that many state-of-the-art generative models so far achieve only insufficiently.", "pdf": "https://openreview.net/pdf/5cb59c78931cfaf13f75476f9422bcff4723d2eb.pdf"} {"title": "Unveiling LoRA Intrinsic Ranks via Salience Analysis", "url": "https://openreview.net/forum?id=vU512K8vrR", "detail_url": "https://openreview.net/forum?id=vU512K8vrR", "authors": "Wenjun Ke,Jiahao Wang,Peng Wang,Jiajun Liu,Dong Nie,Guozheng Li,Yining Li", "tags": "NIPS 2024,Poster", "abstract": "The immense parameter scale of large language models underscores the necessity for parameter-efficient fine-tuning methods. Methods based on Low-Rank Adaptation (LoRA) assume the low-rank characteristics of the incremental matrix and optimize the matrix obtained from low-rank decomposition. Although effective, these methods are constrained by a fixed and unalterable intrinsic rank, neglecting the variable importance of matrices. Consequently, methods for adaptive rank allocation are proposed, among which AdaLoRA demonstrates excellent fine-tuning performance. AdaLoRA conducts adaptation based on singular value decomposition (SVD), dynamically allocating intrinsic ranks according to importance. However, it still struggles to achieve a balance between fine-tuning effectiveness and efficiency, leading to limited rank allocation space. Additionally, the importance measurement focuses only on parameters with minimal impact on the loss, neglecting the dominant role of singular values in SVD-based matrices and the fluctuations during training. To address these issues, we propose SalientLoRA, which adaptively optimizes intrinsic ranks of LoRA via salience measurement. Firstly, during rank allocation, the salience measurement analyses the variation of singular value magnitudes across multiple time steps and establishes their inter-dependency relationships to assess the matrix importance. This measurement mitigates instability and randomness that may arise during importance assessment. Secondly, to achieve a balance between fine-tuning performance and efficiency, we propose an adaptive adjustment of the time-series window, which adaptively controls the size of the time-series for significance measurement and rank reduction during training, allowing for rapid rank allocation while maintaining training stability. This mechanism enables matrices to start from a higher initial rank, thus expanding the allocation space for ranks. To evaluate the generality of our method across various tasks, we conduct experiments on natural language understanding (NLU), natural language generation (NLG), and large model instruction tuning tasks. Experimental results demonstrate the superiority of SalientLoRA, which outperforms state-of-the-art methods by 0.96\\%-3.56\\% on multiple datasets.
Furthermore, as the rank allocation space expands, our method ensures fine-tuning efficiency, achieving a speed improvement of 94.5\\% compared to AdaLoRA. The code is publicly available at https://github.com/Heyest/SalientLoRA.", "pdf": "https://openreview.net/pdf/212291a21697f8ab6df33a882c29090800c75abd.pdf"} {"title": "Selective Attention: Enhancing Transformer through Principled Context Control", "url": "https://openreview.net/forum?id=QbqLcwMXfF", "detail_url": "https://openreview.net/forum?id=QbqLcwMXfF", "authors": "Xuechen Zhang,Xiangyu Chang,Mingchen Li,Amit Roy-Chowdhury,Jiasi Chen,Samet Oymak", "tags": "NIPS 2024,Poster", "abstract": "The attention mechanism within the transformer architecture enables the model to weigh and combine tokens based on their relevance to the query. While self-attention has enjoyed major success, it notably treats all queries $q$ in the same way by applying the mapping $V^\\top\\text{softmax}(Kq)$, where $V,K$ are the value and key embeddings respectively. In this work, we argue that this uniform treatment hinders the ability to control contextual sparsity and relevance. As a solution, we introduce the Selective Self-Attention (SSA) layer that augments the softmax nonlinearity with a principled temperature scaling strategy. By controlling temperature, SSA adapts the contextual sparsity of the attention map to the query embedding and its position in the context window. Through theory and experiments, we demonstrate that this alleviates attention dilution, aids the optimization process, and enhances the model's ability to control softmax spikiness of individual queries. We also incorporate temperature scaling for value embeddings and show that it boosts the model's ability to suppress irrelevant/noisy tokens. Notably, SSA is a lightweight method which introduces less than 0.5\\% new parameters through a weight-sharing strategy and can be fine-tuned on existing LLMs. Extensive empirical evaluations demonstrate that SSA-equipped models achieve a noticeable and consistent accuracy improvement on language modeling benchmarks.", "pdf": "https://openreview.net/pdf/d8ea6c3130e0c0c210caa241eb52e529bbb11c3d.pdf"} {"title": "Accelerating Relative Entropy Coding with Space Partitioning", "url": "https://openreview.net/forum?id=OuQYWNuNxm", "detail_url": "https://openreview.net/forum?id=OuQYWNuNxm", "authors": "Jiajun He,Gergely Flamich,Jos\u00e9 Miguel Hern\u00e1ndez-Lobato", "tags": "NIPS 2024,Poster", "abstract": "Relative entropy coding (REC) algorithms encode a random sample following a target distribution $Q$, using a coding distribution $P$ shared between the sender and receiver. Sadly, general REC algorithms suffer from prohibitive encoding times, at least on the order of $2^{D_{\\text{KL}}[Q||P]}$, and faster algorithms are limited to very specific settings. This work addresses this issue by introducing a REC scheme utilizing space partitioning to reduce runtime in practical scenarios. We provide theoretical analyses of our method and demonstrate its effectiveness with both toy examples and practical applications. 
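For the Selective Self-Attention record above, query-dependent temperature scaling inside the softmax can be sketched in a few lines of numpy; the particular form of tau (a learned projection of the query plus a positional term) is an assumption for illustration, not the paper's exact parameterization:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def selective_attention(Q, K, V, w_tau, pos_scale=0.1):
    """Per-query temperature tau(q, i): small tau -> spiky (sparse) attention,
    large tau -> diffuse attention. Plain attention uses tau = sqrt(d) for all q."""
    n, d = Q.shape
    tau = np.exp(Q @ w_tau + pos_scale * np.log1p(np.arange(n)))   # (n,)
    scores = (Q @ K.T) / tau[:, None]      # query-wise scaling of the logits
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
n, d = 6, 8
Q, K, V = rng.normal(size=(3, n, d))
print(selective_attention(Q, K, V, w_tau=0.1 * rng.normal(size=d)).shape)
```

Because tau multiplies only the logits, the addition costs a single d-dimensional vector per layer under weight sharing, consistent with the sub-0.5% parameter overhead claimed above.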
Notably, our method successfully handles REC tasks with $D_{\\text{KL}}[Q||P]$ about three times greater than what previous methods can manage, and reduces the bitrate by approximately 5-15\\% in VAE-based lossless compression on MNIST and INR-based lossy compression on CIFAR-10, compared to previous methods, significantly improving the practicality of REC for neural compression.", "pdf": "https://openreview.net/pdf/8553e2022aba645e7ab24def30e193dd3ae6ec6d.pdf"} {"title": "Aligning Diffusion Models by Optimizing Human Utility", "url": "https://openreview.net/forum?id=MTMShU5QaC", "detail_url": "https://openreview.net/forum?id=MTMShU5QaC", "authors": "Shufan Li,Konstantinos Kallidromitis,Akash Gokul,Yusuke Kato,Kazuki Kozuka", "tags": "NIPS 2024,Poster", "abstract": "We present Diffusion-KTO, a novel approach for aligning text-to-image diffusion models by formulating the alignment objective as the maximization of expected human utility. Unlike previous methods, Diffusion-KTO does not require collecting pairwise preference data nor training a complex reward model. Instead, our objective uses per-image binary feedback signals, e.g. likes or dislikes, to align the model with human preferences. After fine-tuning using Diffusion-KTO, text-to-image diffusion models exhibit improved performance compared to existing techniques, including supervised fine-tuning and Diffusion-DPO, both in terms of human judgment and automatic evaluation metrics such as PickScore and ImageReward. Overall, Diffusion-KTO unlocks the potential of leveraging readily available per-image binary preference signals and broadens the applicability of aligning text-to-image diffusion models with human preferences.", "pdf": "https://openreview.net/pdf/dd62355c1202197d4a525565c9687c026a76076e.pdf"} {"title": "Few-Shot Task Learning through Inverse Generative Modeling", "url": "https://openreview.net/forum?id=atIE6Npr5A", "detail_url": "https://openreview.net/forum?id=atIE6Npr5A", "authors": "Aviv Netanyahu,Yilun Du,Antonia Bronars,Jyothish Pari,Joshua B. Tenenbaum,Tianmin Shu,Pulkit Agrawal", "tags": "NIPS 2024,Poster", "abstract": "Learning the intents of an agent, defined by its goals or motion style, is often extremely challenging from just a few examples. We refer to this problem as task concept learning and present our approach, Few-Shot Task Learning through Inverse Generative Modeling (FTL-IGM), which learns new task concepts by leveraging invertible neural generative models. The core idea is to pretrain a generative model on a set of basic concepts and their demonstrations. Then, given a few demonstrations of a new concept (such as a new goal or a new action), our method learns the underlying concepts through backpropagation without updating the model weights, thanks to the invertibility of the generative model. We evaluate our method in five domains -- object rearrangement, goal-oriented navigation, motion caption of human actions, autonomous driving, and real-world table-top manipulation. 
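The inversion step described for FTL-IGM above (learning a new concept by backpropagation while the pretrained generative model stays frozen) can be sketched compactly; the generator here is a toy decoder standing in for the pretrained model:

```python
import torch

torch.manual_seed(0)
# Frozen "pretrained" generator: maps a concept vector to a demonstration.
gen = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 16))
for p in gen.parameters():
    p.requires_grad_(False)                 # model weights are never updated

true_concept = torch.randn(8)
demos = gen(true_concept) + 0.01 * torch.randn(5, 16)   # few demonstrations

# Learn only the concept embedding through the frozen model.
concept = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([concept], lr=1e-1)
for _ in range(500):
    loss = (gen(concept) - demos).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())   # near the demo noise floor: the concept explains the demos
```

The learned vector can then condition the same frozen generator to produce plans for the new concept in novel contexts, which is the generalization claim tested above.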
Our experimental results demonstrate that via the pretrained generative model, we successfully learn novel concepts and generate agent plans or motion corresponding to these concepts (1) in unseen environments and (2) in composition with training concepts.", "pdf": "https://openreview.net/pdf/053a4767d0b787d3a3a3e198cb6ee55524fe4c95.pdf"} {"title": "Rethinking LLM Memorization through the Lens of Adversarial Compression", "url": "https://openreview.net/forum?id=KFmRMvzAZy", "detail_url": "https://openreview.net/forum?id=KFmRMvzAZy", "authors": "Avi Schwarzschild,Zhili Feng,Pratyush Maini,Zachary Chase Lipton,J Zico Kolter", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models \"memorize\" all their training data or whether they integrate many data sources in some way more akin to how a human would learn and synthesize information. The answer hinges, to a large degree, on \\emph{how we define memorization.} In this work, we propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs. A given string from the training data is considered memorized if it can be elicited by a prompt (much) shorter than the string itself---in other words, if these strings can be ``compressed'' with the model by computing adversarial prompts of fewer tokens. The ACR overcomes the limitations of existing notions of memorization by (i) offering an adversarial view of measuring memorization, especially for monitoring unlearning and compliance; and (ii) allowing for the flexibility to measure memorization for arbitrary strings at a reasonably low compute. Our definition serves as a practical tool for determining when model owners may be violating terms around data usage, providing a potential legal tool and a critical lens through which to address such scenarios.", "pdf": "https://openreview.net/pdf/ccc9591d2f6bede48446f0db2e4db5a91ac4eb60.pdf"} {"title": "ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution", "url": "https://openreview.net/forum?id=483IPG0HWL", "detail_url": "https://openreview.net/forum?id=483IPG0HWL", "authors": "Haoran Ye,Jiarui Wang,Zhiguang Cao,Federico Berto,Chuanbo Hua,Haeyeon Kim,Jinkyoo Park,Guojie Song", "tags": "NIPS 2024,Poster", "abstract": "The omnipresence of NP-hard combinatorial optimization problems (COPs) compels domain experts to engage in trial-and-error heuristic design. The long-standing endeavor of design automation has gained new momentum with the rise of large language models (LLMs). This paper introduces Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics that leverages LLMs for heuristic generation, featuring minimal manual intervention and open-ended heuristic spaces. To empower LHHs, we present Reflective Evolution (ReEvo), a novel integration of evolutionary search for efficiently exploring the heuristic space, and LLM reflections to provide verbal gradients within the space.
Across five heterogeneous algorithmic types, six different COPs, and both white-box and black-box views of COPs, ReEvo yields state-of-the-art and competitive meta-heuristics, evolutionary algorithms, heuristics, and neural solvers, while being more sample-efficient than prior LHHs.", "pdf": "https://openreview.net/pdf/5c3d1ee16189c57c2ca70a8b21b8a86a4c6b4cf4.pdf"} {"title": "Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods", "url": "https://openreview.net/forum?id=NgyT80IPUK", "detail_url": "https://openreview.net/forum?id=NgyT80IPUK", "authors": "Yihan Zhang,Marco Mondelli", "tags": "NIPS 2024,Poster", "abstract": "We study the matrix denoising problem of estimating the singular vectors of a rank-$1$ signal corrupted by noise with both column and row correlations. Existing works are either unable to pinpoint the exact asymptotic estimation error or, when they do so, the resulting approaches (e.g., based on whitening or singular value shrinkage) remain vastly suboptimal. On top of this, most of the literature has focused on the special case of estimating the left singular vector of the signal when the noise only possesses row correlation (one-sided heteroscedasticity). In contrast, our work establishes the information-theoretic and algorithmic limits of matrix denoising with doubly heteroscedastic noise. We characterize the exact asymptotic minimum mean square error, and design a novel spectral estimator with rigorous optimality guarantees: under a technical condition, it attains positive correlation with the signals whenever information-theoretically possible and, for one-sided heteroscedasticity, it also achieves the Bayes-optimal error. Numerical experiments demonstrate the significant advantage of our theoretically principled method with the state of the art. The proofs draw connections with statistical physics and approximate message passing, departing drastically from standard random matrix theory techniques.", "pdf": "https://openreview.net/pdf/b2b9b2f61b21ed73bc7c795e28c5f41f6b8dc90b.pdf"} {"title": "SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training", "url": "https://openreview.net/forum?id=PEEqnXlSCk", "detail_url": "https://openreview.net/forum?id=PEEqnXlSCk", "authors": "Jinda Jia,Cong Xie,Hanlin Lu,Daoce Wang,Hao Feng,Chengming Zhang,Baixi Sun,Haibin Lin,Zhi Zhang,Xin Liu,Dingwen Tao", "tags": "NIPS 2024,Poster", "abstract": "Recent years have witnessed a clear trend towards language models with an ever-increasing number of parameters, as well as the growing training overhead and memory usage. Distributed training, particularly through Sharded Data Parallelism (ShardedDP) which partitions optimizer states among workers, has emerged as a crucial technique to mitigate training time and memory usage. Yet, a major challenge in the scalability of ShardedDP is the intensive communication of weights and gradients. While compression techniques can alleviate this issue, they often result in worse accuracy. Driven by this limitation, we propose SDP4Bit (Toward 4Bit Communication Quantization in Sharded Data Parallelism for LLM Training), which effectively reduces the communication of weights and gradients to nearly 4 bits via two novel techniques: quantization on weight differences, and two-level gradient smooth quantization. Furthermore, SDP4Bit presents an algorithm-system co-design with runtime optimization to minimize the computation overhead of compression. 
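The weight-difference quantization named in the SDP4Bit record above can be sketched as communicating 4-bit group-quantized deltas w_t - w_{t-1} instead of raw weights, since deltas concentrate near zero; the group size and the uniform symmetric scheme below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def quantize_4bit(x, group=128):
    """Uniform symmetric 4-bit quantization with one scale per group."""
    pad = (-len(x)) % group
    g = np.pad(x, (0, pad)).reshape(-1, group)
    scale = np.abs(g).max(axis=1, keepdims=True) / 7.0 + 1e-12
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)
    return q, scale, pad

def dequantize(q, scale, pad):
    x = (q.astype(np.float32) * scale).reshape(-1)
    return x[: len(x) - pad] if pad else x

rng = np.random.default_rng(0)
w_prev = rng.normal(size=10_000).astype(np.float32)
w_curr = w_prev + 1e-3 * rng.normal(size=10_000).astype(np.float32)

# Quantizing the *difference* keeps the error tiny relative to the weights,
# because the group scales track the small delta rather than the full weight.
q, s, pad = quantize_4bit(w_curr - w_prev)
w_hat = w_prev + dequantize(q, s, pad)
print(np.abs(w_hat - w_curr).max())
```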
In addition to the theoretical guarantees of convergence, we empirically evaluate the accuracy of SDP4Bit on the pre-training of GPT models with up to 6.7 billion parameters, and the results demonstrate a negligible impact on training loss. Furthermore, speed experiments show that SDP4Bit achieves up to 4.08\u00d7 speedup in end-to-end throughput on a scale of 128 GPUs.", "pdf": "https://openreview.net/pdf/833a29994df310a8c6e589cba642c4f0c4796f19.pdf"} {"title": "Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data", "url": "https://openreview.net/forum?id=7FokMz6U8n", "detail_url": "https://openreview.net/forum?id=7FokMz6U8n", "authors": "Johannes Treutlein,Dami Choi,Jan Betley,Samuel Marks,Cem Anil,Roger Baker Grosse,Owain Evans", "tags": "NIPS 2024,Poster", "abstract": "One way to address safety risks from large language models (LLMs) is to censor dangerous knowledge from their training data. While this removes the explicit information, implicit information can remain scattered across various training documents. Could an LLM infer the censored knowledge by piecing together these implicit hints? As a step towards answering this question, we study inductive out-of-context reasoning (OOCR), a type of generalization in which LLMs infer latent information from evidence distributed across training documents and apply it to downstream tasks without in-context learning. Using a suite of five tasks, we demonstrate that frontier LLMs can perform inductive OOCR. In one experiment we finetune an LLM on a corpus consisting only of distances between an unknown city and other known cities. Remarkably, without in-context examples or Chain of Thought, the LLM can verbalize that the unknown city is Paris and use this fact to answer downstream questions. Further experiments show that LLMs trained only on individual coin flip outcomes can verbalize whether the coin is biased, and those trained only on pairs $(x,f(x))$ can articulate a definition of $f$ and compute inverses. While OOCR succeeds in a range of cases, we also show that it is unreliable, particularly for smaller LLMs learning complex structures. Overall, the ability of LLMs to \"connect the dots\" without explicit in-context learning poses a potential obstacle to monitoring and controlling the knowledge acquired by LLMs.", "pdf": "https://openreview.net/pdf/a01cf3edc6dcd45361a0642168f312befc7ead8a.pdf"} {"title": "YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals", "url": "https://openreview.net/forum?id=RH7tfqhiZY", "detail_url": "https://openreview.net/forum?id=RH7tfqhiZY", "authors": "Sandeep Mishra,Oindrila Saha,Alan Bovik", "tags": "NIPS 2024,Poster", "abstract": "3D generation guided by text-to-image diffusion models enables the creation of visually compelling assets. However, previous methods explore generation based on image or text. The boundaries of creativity are limited by what can be expressed through words or the images that can be sourced. We present YouDream, a method to generate high-quality anatomically controllable animals. YouDream is guided using a text-to-image diffusion model controlled by 2D views of a 3D pose prior. Our method is capable of generating novel imaginary animals that previous text-to-3D generative methods are unable to create. Additionally, our method can preserve anatomic consistency in the generated animals, an area where prior approaches often struggle.
Moreover, we design a fully automated pipeline for generating commonly observed animals. To circumvent the need for human intervention to create a 3D pose, we propose a multi-agent LLM that adapts poses from a limited library of animal 3D poses to represent the desired animal. A user study conducted on the outcomes of YouDream demonstrates a preference for the animal models generated by our method over others. Visualizations and code are available at https://youdream3d.github.io/.", "pdf": "https://openreview.net/pdf/36452cadea8934951d7825cd581bba6868b0fccb.pdf"} {"title": "Pre-training Differentially Private Models with Limited Public Data", "url": "https://openreview.net/forum?id=GQrk0WGNiC", "detail_url": "https://openreview.net/forum?id=GQrk0WGNiC", "authors": "Zhiqi Bu,Xinwei Zhang,Sheng Zha,Mingyi Hong,George Karypis", "tags": "NIPS 2024,Poster", "abstract": "The superior performance of large foundation models can be attributed to the use of massive amounts of high-quality data. However, such datasets often contain sensitive, private and copyrighted material that requires formal protection. While differential privacy (DP) is a prominent method used to gauge the degree of security provided to large foundation models, its application in large foundation models has been met with limited success because there are often significant performance compromises when applying DP during the pre-training phase. Consequently, DP is more commonly implemented during the model fine-tuning stage, hence not capable of protecting a substantial portion of the data used during the initial pre-training process. In this work, we first provide a theoretical understanding of the efficacy of DP training by analyzing the per-iteration improvement of loss through the lens of the Hessian. We observe that DP optimizers' deceleration can be significantly mitigated by the use of limited public data, and thus propose the DP continual pre-training strategy. Our DP continual pre-training on vision models, using only 10% of public data, has achieved DP accuracy of 41.5% on ImageNet-21k (with epsilon=8) and non-DP accuracy of 55.7% on Places365 and 60.0% on iNaturalist-2021, which are on par with state-of-the-art standard pre-training and outperform existing DP pre-trained models. Our DP pre-trained models are released in the *fastDP* library (https://github.com/awslabs/fast-differential-privacy/releases/tag/v2.1)", "pdf": "https://openreview.net/pdf/da0a315eac218f04ef4ce6c16c516714367a49a4.pdf"} {"title": "Simulation-Free Training of Neural ODEs on Paired Data", "url": "https://openreview.net/forum?id=GOgKhunkfw", "detail_url": "https://openreview.net/forum?id=GOgKhunkfw", "authors": "Semin Kim,Jaehoon Yoo,Jinwoo Kim,Yeonwoo Cha,Saehoon Kim,Seunghoon Hong", "tags": "NIPS 2024,Poster", "abstract": "In this work, we investigate a method for simulation-free training of Neural Ordinary Differential Equations (NODEs) for learning deterministic mappings between paired data. Despite the analogy of NODEs as continuous-depth residual networks, their application in typical supervised learning tasks has not been popular, mainly due to the large number of function evaluations required by ODE solvers and numerical instability in gradient estimation. To alleviate this problem, we employ the flow matching framework for simulation-free training of NODEs, which directly regresses the parameterized dynamics function to a predefined target velocity field.
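The deceleration analyzed in the DP pre-training record above stems from the standard DP-SGD recipe: per-sample gradient clipping plus Gaussian noise. A minimal sketch of one step on a toy least-squares problem (the paper's Hessian-based analysis is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_sample_grads, lr, clip=1.0, noise_mult=1.0):
    """DP-SGD: clip each per-sample gradient to norm <= clip, average,
    then add Gaussian noise with std noise_mult * clip / batch_size."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip / (norms + 1e-12))
    g = clipped.mean(0)
    g = g + rng.normal(0, noise_mult * clip / len(per_sample_grads), size=g.shape)
    return w - lr * g

# Toy least-squares: per-sample gradient of 0.5*(w@x - y)^2 is (w@x - y) * x.
X = rng.normal(size=(256, 10))
w_true = rng.normal(size=10)
y = X @ w_true
w = np.zeros(10)
for _ in range(200):
    grads = (X @ w - y)[:, None] * X
    w = dp_sgd_step(w, grads, lr=0.1)
print(np.linalg.norm(w - w_true))   # settles at a noise floor, not exact recovery
```

Both the clipping bias and the injected noise slow per-iteration loss improvement, which is the effect the abstract reports can be mitigated with limited public data.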
In contrast to generative tasks, however, we show that applying flow matching directly between paired data can often lead to an ill-defined flow that breaks the coupling of the data pairs (e.g., due to crossing trajectories). We propose a simple extension that applies flow matching in the embedding space of data pairs, where the embeddings are learned jointly with the dynamic function to ensure the validity of the flow, which is also easier to learn. We demonstrate the effectiveness of our method on both regression and classification tasks, where our method outperforms existing NODEs with a significantly lower number of function evaluations. The code is available at https://github.com/seminkim/simulation-free-node.", "pdf": "https://openreview.net/pdf/dacf21e27edce51113acef4be66067666fbc87a4.pdf"} {"title": "Stable-Pose: Leveraging Transformers for Pose-Guided Text-to-Image Generation", "url": "https://openreview.net/forum?id=IwNTiNPxFt", "detail_url": "https://openreview.net/forum?id=IwNTiNPxFt", "authors": "Jiajun Wang,MORTEZA GHAHREMANI,Yitong Li,Bj\u00f6rn Ommer,Christian Wachinger", "tags": "NIPS 2024,Poster", "abstract": "Controllable text-to-image (T2I) diffusion models have shown impressive performance in generating high-quality visual content through the incorporation of various conditions. Current methods, however, exhibit limited performance when guided by skeleton human poses, especially in complex pose conditions such as side or rear perspectives of human figures. To address this issue, we present Stable-Pose, a novel adapter model that introduces a coarse-to-fine attention masking strategy into a vision Transformer (ViT) to gain accurate pose guidance for T2I models. Stable-Pose is designed to adeptly handle pose conditions within pre-trained Stable Diffusion, providing a refined and efficient way of aligning pose representation during image synthesis. We leverage the query-key self-attention mechanism of ViTs to explore the interconnections among different anatomical parts in human pose skeletons. Masked pose images are used to smoothly refine the attention maps based on target pose-related features in a hierarchical manner, transitioning from coarse to fine levels. Additionally, our loss function is formulated to allocate increased emphasis to the pose region, thereby augmenting the model's precision in capturing intricate pose details. We assessed the performance of Stable-Pose across five public datasets under a wide range of indoor and outdoor human pose scenarios. Stable-Pose achieved an AP score of 57.1 in the LAION-Human dataset, marking around 13\\% improvement over the established technique ControlNet. The project link and code are available at https://github.com/ai-med/StablePose.", "pdf": "https://openreview.net/pdf/95763a5eeaeb02ff2b0e3379c76761128b321177.pdf"} {"title": "Differentiable Quantum Computing for Large-scale Linear Control", "url": "https://openreview.net/forum?id=GHqw3xLAvd", "detail_url": "https://openreview.net/forum?id=GHqw3xLAvd", "authors": "Connor Clayton,Jiaqi Leng,Gengzhi Yang,Yi-Ling Qiao,Ming Lin,Xiaodi Wu", "tags": "NIPS 2024,Poster", "abstract": "As industrial models and designs grow increasingly complex, the demand for optimal control of large-scale dynamical systems has significantly increased. However, traditional methods for optimal control incur significant overhead as problem dimensions grow. In this paper, we introduce an end-to-end quantum algorithm for linear-quadratic control with provable speedups.
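For the simulation-free NODE training above, the plain flow-matching variant on paired data (the one that can break on crossing pairs, before the paper's embedding-space extension) can be sketched as regressing v_theta(x_t, t) onto the straight-line target y - x along x_t = (1-t)x + t*y:

```python
import torch

torch.manual_seed(0)
v = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.SiLU(),
                        torch.nn.Linear(64, 2))           # v(x_t, t)
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

def pairs(n):                                             # toy paired data
    x = torch.randn(n, 2)
    y = x @ torch.tensor([[0., -1.], [1., 0.]]) + 1.0     # rotated + shifted
    return x, y

for _ in range(2000):                                     # no ODE solver needed
    x, y = pairs(256)
    t = torch.rand(256, 1)
    xt = (1 - t) * x + t * y                              # straight interpolant
    target = y - x                                        # constant velocity
    loss = (v(torch.cat([xt, t], 1)) - target).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Inference = integrating the learned ODE from x (Euler steps for simplicity).
x, y = pairs(4)
xt = x.clone()
for k in range(100):
    t = torch.full((4, 1), k / 100)
    xt = xt + 0.01 * v(torch.cat([xt, t], 1))
print((xt - y).abs().max())                               # close to the paired y
```

This toy map is bijective, so the straight paths never cross; when they do cross, the regression target becomes multivalued at a point, which is the ill-defined-flow failure mode the abstract describes and the learned embedding space is designed to avoid.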
Our algorithm, based on a policy gradient method, incorporates a novel quantum subroutine for solving the matrix Lyapunov equation. Specifically, we build a *quantum-assisted differentiable simulator* for efficient gradient estimation that is more accurate and robust than classical methods relying on stochastic approximation. Compared to the classical approaches, our method achieves a *super-quadratic* speedup. To the best of our knowledge, this is the first end-to-end quantum application to linear control problems with provable quantum advantage.", "pdf": "https://openreview.net/pdf/2106791d2416b412198a23b17db09a1d782cc414.pdf"} {"title": "Meta-Controller: Few-Shot Imitation of Unseen Embodiments and Tasks in Continuous Control", "url": "https://openreview.net/forum?id=M5D5rMwLjj", "detail_url": "https://openreview.net/forum?id=M5D5rMwLjj", "authors": "Seongwoong Cho,Donggyun Kim,Jinwoo Lee,Seunghoon Hong", "tags": "NIPS 2024,Poster", "abstract": "Generalizing across robot embodiments and tasks is crucial for adaptive robotic systems. Modular policy learning approaches adapt to new embodiments but are limited to specific tasks, while few-shot imitation learning (IL) approaches often focus on a single embodiment.\nIn this paper, we introduce a few-shot behavior cloning framework to simultaneously generalize to unseen embodiments and tasks using a few (e.g., five) reward-free demonstrations. Our framework leverages a joint-level input-output representation to unify the state and action spaces of heterogeneous embodiments and employs a novel structure-motion state encoder that is parameterized to capture both shared knowledge across all embodiments and embodiment-specific knowledge. A matching-based policy network then predicts actions from a few demonstrations, producing an adaptive policy that is robust to over-fitting. 
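The "matching-based policy network" admits a compact illustration; the attention form below is a guess at the general idea, not the paper's architecture:

```python
# A state queries the few demonstration state-action pairs; actions are blended
# by matching weights, so the policy adapts to new demos without fine-tuning.
import torch
import torch.nn.functional as F

def matching_policy(state, demo_states, demo_actions, encoder, temp=0.1):
    """state: (d,); demo_states: (N, d); demo_actions: (N, a); encoder: states -> keys."""
    q = encoder(state.unsqueeze(0))              # (1, k) query embedding
    k = encoder(demo_states)                     # (N, k) demo embeddings
    attn = F.softmax(q @ k.T / temp, dim=-1)     # (1, N) matching weights
    return (attn @ demo_actions).squeeze(0)      # (a,) blended action
```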
Evaluated in the DeepMind Control suite, our framework, termed Meta-Controller, demonstrates superior few-shot generalization to unseen embodiments and tasks over modular policy learning and few-shot IL approaches.", "pdf": "https://openreview.net/pdf/a2f6c54273665fef8c3639341d09cdc7e5faee0c.pdf"} {"title": "Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference", "url": "https://openreview.net/forum?id=jRtxzzk0a6", "detail_url": "https://openreview.net/forum?id=jRtxzzk0a6", "authors": "Rohan Baskar Prabhakar,Hengrui Zhang,David Wentzlaff", "tags": "NIPS 2024,Poster", "abstract": "Large Transformer networks are increasingly used in settings where low inference latency is necessary to enable new applications and improve the end-user experience.\nHowever, autoregressive inference is resource intensive and requires parallelism for efficiency.\nParallelism introduces collective communication that is expensive and represents a phase during which hardware resources are underutilized.\nTo mitigate this, we propose Kraken, an evolution of the standard Transformer architecture designed to complement existing tensor parallelism schemes for efficient inference on multi-device systems.\nBy introducing a fixed degree of intra-layer model parallelism, the architecture allows collective operations to be overlapped with compute, decreasing latency and increasing hardware utilization.\nWhen trained on OpenWebText, Kraken models reach a perplexity similar to that of standard Transformers while also preserving their language modeling capabilities as evaluated on the SuperGLUE benchmark.\nImportantly, when tested on multi-GPU systems using TensorRT-LLM engines, Kraken speeds up Time To First Token by a mean of 35.6% across a range of model sizes, context lengths, and degrees of tensor parallelism.", "pdf": "https://openreview.net/pdf/fb0201479f2fd17acfe8d15ef583ed458ab564a0.pdf"} {"title": "Chain-of-Thought Reasoning Without Prompting", "url": "https://openreview.net/forum?id=4Zt7S0B0Jp", "detail_url": "https://openreview.net/forum?id=4Zt7S0B0Jp", "authors": "Xuezhi Wang,Denny Zhou", "tags": "NIPS 2024,Poster", "abstract": "In enhancing the reasoning capabilities of large language models (LLMs), prior research primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of-thought (CoT) prompting. These methods, while effective, often involve manually intensive prompt engineering. Our study takes a novel approach by asking: Can LLMs reason effectively without any prompting? Our findings reveal that, intriguingly, CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the \\textit{decoding} process. Rather than conventional greedy decoding, we investigate the top-$k$ alternative tokens, uncovering that CoT paths are frequently inherent in these sequences. This approach not only bypasses the confounders of prompting but also allows us to assess the LLMs' \\textit{intrinsic} reasoning abilities. Moreover, we observe that the presence of a CoT in the decoding path correlates with a higher confidence in the model's decoded answer. This confidence metric effectively differentiates between CoT and non-CoT paths.
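A rough sketch of CoT-decoding as just described; the Hugging Face-style model interface and the choice to average the margin over all generated tokens (the paper scores the answer tokens) are assumptions:

```python
# Branch on the top-k first tokens, decode each branch greedily, and score a
# branch by the average top-1 vs. top-2 probability margin of its tokens.
import torch

@torch.no_grad()
def cot_decode(model, input_ids, k=10, max_new_tokens=64):
    first_logits = model(input_ids).logits[0, -1]
    paths = []
    for tok in torch.topk(first_logits, k).indices:        # k alternative starts
        ids = torch.cat([input_ids, tok.view(1, 1)], dim=1)
        margins = []
        for _ in range(max_new_tokens):                    # greedy continuation
            probs = model(ids).logits[0, -1].softmax(-1)
            top2 = torch.topk(probs, 2).values
            margins.append((top2[0] - top2[1]).item())     # confidence signal
            ids = torch.cat([ids, probs.argmax().view(1, 1)], dim=1)
        paths.append((sum(margins) / len(margins), ids))
    return max(paths, key=lambda p: p[0])[1]               # most confident path
```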
Extensive empirical studies on various reasoning benchmarks show that the proposed CoT-decoding effectively elicits reasoning capabilities from language models, which were previously obscured by standard greedy decoding.", "pdf": "https://openreview.net/pdf/953b6154357359076d2684cb85b7d3ca80a05123.pdf"} {"title": "Wide Two-Layer Networks can Learn from Adversarial Perturbations", "url": "https://openreview.net/forum?id=1YGgaouVgZ", "detail_url": "https://openreview.net/forum?id=1YGgaouVgZ", "authors": "Soichiro Kumano,Hiroshi Kera,Toshihiko Yamasaki", "tags": "NIPS 2024,Poster", "abstract": "Adversarial examples have raised several open questions, such as why they can deceive classifiers and transfer between different models. A prevailing hypothesis to explain these phenomena suggests that adversarial perturbations appear as random noise but contain class-specific features. This hypothesis is supported by the success of perturbation learning, where classifiers trained solely on adversarial examples and the corresponding incorrect labels generalize well to correctly labeled test data. Although this hypothesis and perturbation learning are effective in explaining intriguing properties of adversarial examples, their theoretical foundation remains limited. In this study, we theoretically explain the counterintuitive success of perturbation learning. We assume wide two-layer networks, and the results hold for any data distribution. We prove that adversarial perturbations contain sufficient class-specific features for networks to generalize from them. Moreover, the predictions of classifiers trained on mislabeled adversarial examples coincide with those of classifiers trained on correctly labeled clean samples. The code is available at https://github.com/s-kumano/perturbation-learning.", "pdf": "https://openreview.net/pdf/26f885e61349887e6784ab7edd44a65b6e8e9ff5.pdf"} {"title": "Contextual Bilevel Reinforcement Learning for Incentive Alignment", "url": "https://openreview.net/forum?id=W3Dx1TGW3f", "detail_url": "https://openreview.net/forum?id=W3Dx1TGW3f", "authors": "Vinzenz Thoma,Barna Pásztor,Andreas Krause,Giorgia Ramponi,Yifan Hu", "tags": "NIPS 2024,Poster", "abstract": "The optimal policy in various real-world strategic decision-making problems depends on both the environmental configuration and exogenous events. For these settings, we introduce Contextual Bilevel Reinforcement Learning (CB-RL), a stochastic bilevel decision-making model, where the lower level consists of solving a contextual Markov Decision Process (CMDP). CB-RL can be viewed as a Stackelberg Game where the leader and a random context beyond the leader’s control together decide the setup of many MDPs that potentially multiple followers best respond to. This framework extends beyond traditional bilevel optimization and finds relevance in diverse fields such as RLHF, tax design, reward shaping, contract theory and mechanism design. We propose a stochastic Hyper Policy Gradient Descent (HPGD) algorithm to solve CB-RL, and demonstrate its convergence. Notably, HPGD uses stochastic hypergradient estimates, based on observations of the followers’ trajectories. Therefore, it allows followers to use any training procedure and the leader to be agnostic of the specific algorithm, which aligns with various real-world scenarios. We further consider the setting where the leader can influence the training of followers and propose an accelerated algorithm.
We empirically demonstrate the performance of our algorithm for reward shaping and tax design.", "pdf": "https://openreview.net/pdf/37adb6ec1b86b70591f2cd5f5713b11a9c3aeed4.pdf"} {"title": "Replicability in Learning: Geometric Partitions and KKM-Sperner Lemma", "url": "https://openreview.net/forum?id=2lL7s5ESTj", "detail_url": "https://openreview.net/forum?id=2lL7s5ESTj", "authors": "Jason Vander Woude,Peter Dixon,A. Pavan,Jamie Radcliffe,N. V. Vinodchandran", "tags": "NIPS 2024,Poster", "abstract": "This paper studies replicability in machine learning tasks from a geometric viewpoint. Recent works have revealed the role of geometric partitions and Sperner's lemma (and its variations) in designing replicable learning algorithms and in establishing impossibility results. \n\nA partition $\\mathcal{P}$ of $\\mathbb{R}^d$ is called a $(k,\\varepsilon)$-secluded partition if for every $\\vec{p}\\in\\mathbb{R}^d$, an $\\varepsilon$-radius ball (with respect to the $\\ell_{\\infty}$ norm) centered at $\\vec{p}$ intersects at most $k$ members of $\\mathcal{P}$. In relation to replicable learning, the parameter $k$ is closely related to the $\\textit{list complexity}$, and the parameter $\\varepsilon$ is related to the sample complexity of the replicable learner. Construction of secluded partitions with better parameters (small $k$ and large $\\varepsilon$) will lead to replicable learning algorithms with small list and sample complexities. \n\nMotivated by this connection, we undertake a comprehensive study of secluded partitions and establish near-optimal relationships between $k$ and $\\varepsilon$. \n\n1. We show that for any $(k,\\varepsilon)$-secluded partition where each member has at most unit measure, it must be that $k \\geq (1+2\\varepsilon)^d$, and consequently, for the interesting regime $k\\in[2^d]$ it must be that $\\varepsilon\\leq\\frac{\\log_4(k)}{d}$. \n\n2. To complement this upper bound on $\\varepsilon$, we give, for each $d\\in\\mathbb{N}$ and each viable $k\\in[2^d]$, a construction of a $(k,\\varepsilon)$-secluded (unit cube) partition with $\\varepsilon\\geq\\frac{\\log_4(k)}{d}\\cdot\\frac{1}{8\\log_4(d+1)}$. This establishes the optimality of $\\varepsilon$ within a logarithmic factor.\n\n3. Finally, we adapt our proof techniques to obtain a new ``neighborhood'' variant of the cubical KKM lemma (or cubical Sperner's lemma): For any coloring of $[0,1]^d$ in which no color is used on opposing faces, it holds for each $\\varepsilon\\in(0,\\frac12]$ that there is a point where the open $\\varepsilon$-radius $\\ell_\\infty$-ball intersects at least $(1+\\frac23\\varepsilon)^d$ colors. While the classical Sperner/KKM lemma guarantees the existence of a point that is "adjacent" to points with $(d+1)$ distinct colors, the neighborhood version guarantees the existence of a small neighborhood with exponentially many points with distinct colors.", "pdf": "https://openreview.net/pdf/bb397a9eb927b3b90afd463a1410599a9d5ddc3b.pdf"} {"title": "SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning", "url": "https://openreview.net/forum?id=HeJ1cBAgiV", "detail_url": "https://openreview.net/forum?id=HeJ1cBAgiV", "authors": "Paul Mangold,Sergey Samsonov,Safwan Labbi,Ilya Levin,Reda ALAMI,Alexey Naumov,Eric Moulines", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we analyze the sample and communication complexity of the federated linear stochastic approximation (FedLSA) algorithm. We explicitly quantify the effects of local training with agent heterogeneity.
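The drift-correction idea analyzed below can be sketched in a few lines; the linear mean-field form of the local updates and the SCAFFOLD-style control variates are illustrative assumptions, not the paper's exact recursion:

```python
# One round of federated linear stochastic approximation with H local steps;
# each agent corrects its local direction with control variates so that
# heterogeneous agents do not drift apart between communications.
import numpy as np

def fedlsa_round_with_control_variates(theta, controls, A_list, b_list,
                                       eta=0.1, H=10, noise=0.01):
    c_global = np.mean(controls, axis=0)
    thetas, new_controls = [], []
    for A_i, b_i, c_i in zip(A_list, b_list, controls):
        t = theta.copy()
        for _ in range(H):                                  # local LSA steps
            g = A_i @ t - b_i + noise * np.random.randn(*t.shape)
            t -= eta * (g - c_i + c_global)                 # drift correction
        new_controls.append(c_i - c_global + (theta - t) / (eta * H))
        thetas.append(t)
    return np.mean(thetas, axis=0), new_controls            # server average
```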
We show that the communication complexity of FedLSA scales polynomially with the inverse of the desired accuracy $\epsilon$. To overcome this, we propose SCAFFLSA, a new variant of FedLSA that uses control variates to correct for client drift, and we establish its sample and communication complexities. We show that for statistically heterogeneous agents, its communication complexity scales logarithmically with the desired accuracy, similar to Scaffnew. An important finding is that, compared to the existing results for Scaffnew, the sample complexity scales with the inverse of the number of agents, a property referred to as linear speed-up. Achieving this linear speed-up requires completely new theoretical arguments. We apply the proposed method to federated temporal difference learning with linear function approximation and analyze the corresponding complexity improvements.", "pdf": "https://openreview.net/pdf/193c7f891ef51e06933682db5731cf0f89c5606e.pdf"} {"title": "Conditional Density Estimation with Histogram Trees", "url": "https://openreview.net/forum?id=5SUP6vUVkP", "detail_url": "https://openreview.net/forum?id=5SUP6vUVkP", "authors": "Lincen Yang,Matthijs van Leeuwen", "tags": "NIPS 2024,Poster", "abstract": "Conditional density estimation (CDE) goes beyond regression by modeling the full conditional distribution, providing a richer understanding of the data than just the conditional mean in regression. This makes CDE particularly useful in critical application domains. However, interpretable CDE methods are understudied. Current methods typically employ kernel-based approaches, using kernel functions directly for kernel density estimation or as basis functions in linear models. In contrast, despite their conceptual simplicity and visualization suitability, tree-based methods---which are arguably more comprehensible---have been largely overlooked for CDE tasks. Thus, we propose the Conditional Density Tree (CDTree), a fully non-parametric model consisting of a decision tree in which each leaf is formed by a histogram model. Specifically, we formalize the problem of learning a CDTree using the minimum description length (MDL) principle, which eliminates the need for tuning the hyperparameter for regularization. Next, we propose an iterative algorithm that, although greedy, searches for the optimal histogram for every possible node split. Our experiments demonstrate that, in comparison to existing interpretable CDE methods, CDTrees are both more accurate (as measured by the log-loss) and more robust against irrelevant features. Further, our approach leads to smaller tree sizes than existing tree-based models, which benefits interpretability.", "pdf": "https://openreview.net/pdf/5db205a68d5ccc7e397851e89356dc8677bb1789.pdf"} {"title": "Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.", "url": "https://openreview.net/forum?id=ypFgcT147Z", "detail_url": "https://openreview.net/forum?id=ypFgcT147Z", "authors": "Tassilo Wald,Constantin Ulrich,Priyank Jaini,Gregor Koehler,David Zimmerer,Stefan Denner,Fabian Isensee,Michael Baumgartner,Klaus Maier-Hein", "tags": "NIPS 2024,Poster", "abstract": "What representation do deep neural networks learn? How similar are images to each other for neural networks? Despite the overwhelming success of deep learning methods, key questions about their internal workings still remain largely unanswered, due to their internal high dimensionality and complexity.
To address this, one approach is to measure the similarity of activation responses to various inputs.\nRepresentational Similarity Matrices (RSMs) distill this similarity into scalar values for each input pair.\nThese matrices encapsulate the entire similarity structure of a system, indicating which inputs lead to similar responses.\nWhile the similarity between images is ambiguous, we argue that the spatial location of semantic objects influences neither human perception nor deep learning classifiers. Thus, this should be reflected in the definition of similarity between image responses for computer vision systems. Revisiting the established similarity calculations for RSMs, we expose their sensitivity to spatial alignment. In this paper, we propose to solve this through _semantic RSMs_, which are invariant to spatial permutation. We measure semantic similarity between input responses by formulating it as a set-matching problem. Further, we quantify the superiority of _semantic_ RSMs over _spatio-semantic_ RSMs through image retrieval and by comparing the similarity between representations to the similarity between predicted class probabilities.", "pdf": "https://openreview.net/pdf/d22a2517d54b412df61755b57dfc902e0053fba1.pdf"} {"title": "Community Detection Guarantees using Embeddings Learned by Node2Vec", "url": "https://openreview.net/forum?id=cnpR4e2HCQ", "detail_url": "https://openreview.net/forum?id=cnpR4e2HCQ", "authors": "Andrew Davison,Samuel Carlyle Morgan,Owen G. Ward", "tags": "NIPS 2024,Poster", "abstract": "Embedding the nodes of a large network into a Euclidean space is a common objective in modern\nmachine learning, with a variety of tools available. These embeddings can then be used as features for\ntasks such as community detection/node clustering or link prediction, where they achieve state-of-the-art\nperformance. With the exception of spectral clustering methods, there is little theoretical understanding\nfor commonly used approaches to learning embeddings. In this work, we examine the theoretical\nproperties of the embeddings learned by node2vec. Our main result shows that the use of k-means\nclustering on the embedding vectors produced by node2vec gives weakly consistent community recovery\nfor the nodes in (degree corrected) stochastic block models. We also discuss the use of these embeddings\nfor node and link prediction tasks. We demonstrate this result empirically for both\nreal and simulated networks, and examine how this relates\nto other embedding tools for network data.", "pdf": "https://openreview.net/pdf/9b15138918139ce93eca2fd353062835642befd1.pdf"} {"title": "Truthful High Dimensional Sparse Linear Regression", "url": "https://openreview.net/forum?id=ZmIAd3JaZN", "detail_url": "https://openreview.net/forum?id=ZmIAd3JaZN", "authors": "Liyang Zhu,Amina Manseur,Meng Ding,Jinyan Liu,Jinhui Xu,Di Wang", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of fitting the high-dimensional sparse linear regression model, where the data are provided by strategic or self-interested agents (individuals) who prioritize the privacy of their data disclosure. In contrast to the classical setting, our focus is on designing mechanisms that can effectively incentivize most agents to truthfully report their data while preserving the privacy of individual reports. Simultaneously, we seek an estimator that is close to the underlying parameter. \nWe attempt to solve the problem by deriving a novel private estimator that has a closed-form expression.
\nBased on the estimator, we propose a mechanism that, via an appropriate design of the computation and payment scheme, has the following properties: (1) the mechanism is $(o(1), O(n^{-\\Omega(1)}))$-jointly differentially private, where $n$ is the number of agents; (2) it is an $o(\\frac{1}{n})$-approximate Bayes Nash equilibrium for a $(1-o(1))$-fraction of agents to truthfully report their data; (3) the output achieves an error of $o(1)$ with respect to the underlying parameter; (4) it is individually rational for a $(1-o(1))$ fraction of agents in the mechanism; (5) the payment budget required from the analyst to run the mechanism is $o(1)$. To the best of our knowledge, this is the first study on designing truthful (and privacy-preserving) mechanisms for high-dimensional sparse linear regression.", "pdf": "https://openreview.net/pdf/85b8c775681804f90e01c26e4a451d7baacccc06.pdf"} {"title": "Empowering Visible-Infrared Person Re-Identification with Large Foundation Models", "url": "https://openreview.net/forum?id=qQlmONeI5k", "detail_url": "https://openreview.net/forum?id=qQlmONeI5k", "authors": "Zhangyi Hu,Bin Yang,Mang Ye", "tags": "NIPS 2024,Poster", "abstract": "Visible-Infrared Person Re-identification (VI-ReID) is a challenging cross-modal retrieval task due to significant modality differences, primarily resulting from the absence of color information in the infrared modality. The development of large foundation models like Large Language Models (LLMs) and Vision Language Models (VLMs) motivates us to explore a feasible solution to empower VI-ReID with off-the-shelf large foundation models. To this end, we propose a novel Text-enhanced VI-ReID framework driven by Large Foundation Models (TVI-LFM). The core idea is to enrich the representation of the infrared modality with textual descriptions automatically generated by VLMs. Specifically, we incorporate a pre-trained VLM to extract textual features from texts generated by the VLM and augmented by the LLM, and incrementally fine-tune the text encoder to minimize the domain gap between generated texts and original visual modalities. Meanwhile, to enhance the infrared modality with extracted textual representations, we leverage the modality alignment capabilities of VLMs and VLM-generated feature-level filters. This enables the text model to learn complementary features from the infrared modality, ensuring the semantic structural consistency between the fusion modality and the visible modality. Furthermore, we introduce modality joint learning to align features across all modalities, ensuring that textual features maintain a stable semantic representation of overall pedestrian appearance during complementary information learning. Additionally, a modality ensemble retrieval strategy is proposed to leverage complementary strengths of each query modality to improve retrieval effectiveness and robustness.
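The modality ensemble retrieval strategy can be pictured as simple score fusion; the weighting and normalization below are illustrative assumptions rather than the paper's exact scheme:

```python
# Fuse cosine-similarity scores from each available query modality (e.g.,
# infrared image and generated text) before ranking the visible gallery.
import torch
import torch.nn.functional as F

def ensemble_retrieve(query_feats, gallery, weights=None):
    """query_feats: {modality: (d,) tensor}; gallery: (N, d) visible features."""
    g = F.normalize(gallery, dim=-1)
    weights = weights or {m: 1.0 for m in query_feats}
    scores = sum(weights[m] * (F.normalize(q, dim=-1) @ g.T)
                 for m, q in query_feats.items())           # (N,) fused scores
    return scores.argsort(descending=True)                  # ranked gallery ids
```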
Extensive experiments on three expanded VI-ReID datasets demonstrate that our method significantly improves the retrieval performance, paving the way for the utilization of large foundation models in downstream multi-modal retrieval tasks.", "pdf": "https://openreview.net/pdf/9eb1ad9b9b53277ab4f1691f9973e3689dd054a4.pdf"} {"title": "Fast Encoder-Based 3D from Casual Videos via Point Track Processing", "url": "https://openreview.net/forum?id=bqGAheAeQY", "detail_url": "https://openreview.net/forum?id=bqGAheAeQY", "authors": "Yoni Kasten,Wuyue Lu,Haggai Maron", "tags": "NIPS 2024,Poster", "abstract": "This paper addresses the long-standing challenge of reconstructing 3D structures from videos with dynamic content. Current approaches to this problem either were not designed to operate on casual videos recorded by standard cameras or require long optimization times. \n Aiming to significantly improve the efficiency of previous approaches, we present TracksTo4D, a learning-based approach that enables inferring 3D structure and camera positions from dynamic content originating from casual videos using a single efficient feed-forward pass. To achieve this, we propose operating directly over 2D point tracks as input and designing an architecture tailored for processing 2D point tracks. Our proposed architecture is designed with two key principles in mind: (1) it takes into account the inherent symmetries present in the input point tracks data, and (2) it assumes that the movement patterns can be effectively represented using a low-rank approximation. TracksTo4D is trained in an unsupervised way on a dataset of casual videos utilizing only the 2D point tracks extracted from the videos, without any 3D supervision. Our experiments show that TracksTo4D can reconstruct a temporal point cloud and camera positions of the underlying video with accuracy comparable to state-of-the-art methods, while drastically reducing runtime by up to 95\\%. We further show that TracksTo4D generalizes well to unseen videos of unseen semantic categories at inference time.", "pdf": "https://openreview.net/pdf/3a07e8316c05efb30b81806e9eb8a63d1ab1c3ca.pdf"} {"title": "Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models", "url": "https://openreview.net/forum?id=boGxvYWZEq", "detail_url": "https://openreview.net/forum?id=boGxvYWZEq", "authors": "Yicheng Xu,Yuxin Chen,Jiahao Nie,Yusong Wang,Huiping Zhuang,Manabu Okumura", "tags": "NIPS 2024,Poster", "abstract": "Continual learning (CL) with Vision-Language Models (VLMs) has overcome the constraints of traditional CL, which only focuses on previously encountered classes. During the CL of VLMs, we need not only to prevent the catastrophic forgetting on incrementally learned knowledge but also to preserve the zero-shot ability of VLMs. However, existing methods require additional reference datasets to maintain such zero-shot ability and rely on domain-identity hints to classify images across different domains. In this study, we propose Regression-based Analytic Incremental Learning (RAIL), which utilizes a recursive ridge regression-based adapter to learn from a sequence of domains in a non-forgetting manner and decouple the cross-domain correlations by projecting features to a higher-dimensional space. Cooperating with a training-free fusion module, RAIL absolutely preserves the VLM's zero-shot ability on unseen domains without any reference data.\nAdditionally, we introduce the Cross-domain Task-Agnostic Incremental Learning (X-TAIL) setting.
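Before that setting is spelled out, the recursive ridge-regression adapter at RAIL's core can be sketched as classical recursive least squares (dimensions and the regularizer are illustrative; this is not the released code):

```python
# Analytic, non-forgetting updates: absorb each new domain's features in closed
# form via the Woodbury identity, without revisiting earlier domains' data.
import numpy as np

class RecursiveRidge:
    def __init__(self, d, c, lam=1.0):
        self.P = np.eye(d) / lam           # running inverse of (X^T X + lam I)
        self.W = np.zeros((d, c))          # analytic classifier weights

    def update(self, X, Y):                # X: (n, d) features, Y: (n, c) labels
        G = np.linalg.inv(np.eye(X.shape[0]) + X @ self.P @ X.T)  # Woodbury core
        K = self.P @ X.T @ G               # gain matrix
        self.W += K @ (Y - X @ self.W)     # correct weights on the new domain
        self.P -= K @ X @ self.P           # update the running inverse
```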
In this setting, a CL learner is required to incrementally learn from multiple domains and classify test images from both seen and unseen domains without any domain-identity hint.\nWe theoretically prove RAIL's absolute memorization on incrementally learned domains. Experimental results affirm RAIL's state-of-the-art performance in both X-TAIL and existing Multi-domain Task-Incremental Learning settings. The code is released at https://github.com/linghan1997/Regression-based-Analytic-Incremental-Learning.", "pdf": "https://openreview.net/pdf/f13992ea7e554b8fcfa2b120be55eeb89c25643f.pdf"} {"title": "Incorporating Test-Time Optimization into Training with Dual Networks for Human Mesh Recovery", "url": "https://openreview.net/forum?id=ugqx9tgyum", "detail_url": "https://openreview.net/forum?id=ugqx9tgyum", "authors": "Yongwei Nie,Mingxian Fan,Chengjiang Long,Qing Zhang,Jian Zhu,Xuemiao Xu", "tags": "NIPS 2024,Poster", "abstract": "Human Mesh Recovery (HMR) is the task of estimating a parameterized 3D human mesh from an image. One class of methods first trains a regression model for this problem and then further optimizes the pretrained regression model on each specific sample individually at test time. However, the pretrained model may not provide an ideal optimization starting point for the test-time optimization. Inspired by meta-learning, we incorporate the test-time optimization into training, performing a step of test-time optimization for each sample in the training batch before conducting the training optimization over all the training samples. In this way, we obtain a meta-model, the meta-parameter of which is friendly to the test-time optimization. At test time, after several test-time optimization steps starting from the meta-parameter, we obtain much higher HMR accuracy than test-time optimization starting from the simply pretrained regression model. Furthermore, we find test-time HMR objectives are different from training-time objectives, which reduces the effectiveness of the learning of the meta-model. To solve this problem, we propose a dual-network architecture that unifies the training-time and test-time objectives. Our method, armed with meta-learning and the dual networks, outperforms state-of-the-art regression-based and optimization-based HMR approaches, as validated by extensive experiments. The codes are available at https://github.com/fmx789/Meta-HMR.", "pdf": "https://openreview.net/pdf/49c32264f3c66302dba10c8926a06f670753fe19.pdf"} {"title": "The Limits of Transfer Reinforcement Learning with Latent Low-rank Structure", "url": "https://openreview.net/forum?id=pK2qGRY2Hv", "detail_url": "https://openreview.net/forum?id=pK2qGRY2Hv", "authors": "Tyler Sam,Yudong Chen,Christina Yu", "tags": "NIPS 2024,Poster", "abstract": "Many reinforcement learning (RL) algorithms are too costly to use in practice due to the large sizes $S, A$ of the problem's state and action spaces. To resolve this issue, we study transfer RL with latent low-rank structure. We consider the problem of transferring a latent low-rank representation when the source and target MDPs have transition kernels with Tucker rank $(S, d, A)$, $(S, S, d)$, $(d, S, A)$, or $(d, d, d)$. In each setting, we introduce the transfer-ability coefficient $\alpha$ that measures the difficulty of representational transfer. Our algorithm learns latent representations in each source MDP and then exploits the linear structure to remove the dependence on $S$, $A$, or $SA$ in the target MDP regret bound.
We complement our positive results with information-theoretic lower bounds that show our algorithms (excluding the $(d, d, d)$ setting) are minimax-optimal with respect to $\alpha$.", "pdf": "https://openreview.net/pdf/03994c43bf851c51ecc74ae7b1902121af6814e6.pdf"} {"title": "Weight for Robustness: A Comprehensive Approach towards Optimal Fault-Tolerant Asynchronous ML", "url": "https://openreview.net/forum?id=v1kpc060aC", "detail_url": "https://openreview.net/forum?id=v1kpc060aC", "authors": "Tehila Dahan,Kfir Yehuda Levy", "tags": "NIPS 2024,Poster", "abstract": "We address the challenges of Byzantine-robust training in asynchronous distributed machine learning systems, aiming to enhance efficiency amid massive parallelization and heterogeneous compute resources. Asynchronous systems, marked by independently operating workers and intermittent updates, uniquely struggle with maintaining integrity against Byzantine failures, which encompass malicious or erroneous actions that disrupt learning. The inherent delays in such settings not only introduce additional bias to the system but also obscure the disruptions caused by Byzantine faults. To tackle these issues, we adapt the Byzantine framework to asynchronous dynamics by introducing a novel weighted robust aggregation framework. This allows for the extension of robust aggregators and a recent meta-aggregator to their weighted versions, mitigating the effects of delayed updates. By further incorporating a recent variance-reduction technique, we achieve an optimal convergence rate for the first time in an asynchronous Byzantine environment. Our methodology is rigorously validated through empirical and theoretical analysis, demonstrating its effectiveness in enhancing fault tolerance and optimizing performance in asynchronous ML systems.", "pdf": "https://openreview.net/pdf/d60ac7ce3ad4a6e24042fbbb55edf0fa9acf3086.pdf"} {"title": "Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation", "url": "https://openreview.net/forum?id=MNg331t8Tj", "detail_url": "https://openreview.net/forum?id=MNg331t8Tj", "authors": "Eyal Michaeli,Ohad Fried", "tags": "NIPS 2024,Poster", "abstract": "Fine-grained visual classification (FGVC) involves classifying closely related subcategories. This task is inherently difficult due to the subtle differences between classes and the high intra-class variance. Moreover, FGVC datasets are typically small and challenging to gather, thus highlighting a significant need for effective data augmentation.\nRecent advancements in text-to-image diffusion models have introduced new possibilities for data augmentation in image classification. While these models have been used to generate training data for classification tasks, their effectiveness in full-dataset training of FGVC models remains under-explored. Recent techniques that rely on text-to-image generation or Img2Img methods, such as SDEdit, often struggle to generate images that accurately represent the class while modifying them to a degree that significantly increases the dataset's diversity. To address these challenges, we present SaSPA: Structure and Subject Preserving Augmentation. Contrary to recent methods, our method does not use real images as guidance, thereby increasing generation flexibility and promoting greater diversity.
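As a hedged illustration of the edge conditioning described in the next sentence (the model identifiers, thresholds, and prompt are assumptions, and the paper's pipeline additionally conditions on subject representation):

```python
# Generate an augmentation that keeps an image's structure (its edge map)
# while letting the diffusion model vary appearance, via an off-the-shelf
# edge-conditioned ControlNet.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

gray = np.array(Image.open("bird.jpg").convert("L"))        # hypothetical input
edges = cv2.Canny(gray, 100, 200)                           # structure map
cond = Image.fromarray(np.stack([edges] * 3, axis=-1))
aug = pipe("a photo of a Baltimore oriole", image=cond,
           num_inference_steps=30).images[0]                # new appearance
```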
To ensure accurate class representation, we employ conditioning mechanisms, specifically by conditioning on image edges and subject representation.\nWe conduct extensive experiments and benchmark SaSPA against both traditional and generative data augmentation techniques. SaSPA consistently outperforms all established baselines across multiple settings, including full dataset training and contextual bias. Additionally, our results reveal interesting patterns in using synthetic data for FGVC models; for instance, we find a relationship between the amount of real data used and the optimal proportion of synthetic data.", "pdf": "https://openreview.net/pdf/a68333e357015c0da34498829380c0df7fac727c.pdf"} {"title": "Fundamental Convergence Analysis of Sharpness-Aware Minimization", "url": "https://openreview.net/forum?id=PuXYI4HOQU", "detail_url": "https://openreview.net/forum?id=PuXYI4HOQU", "authors": "Pham Duy Khanh,Hoang-Chau Luong,Boris Mordukhovich,Dat Ba Tran", "tags": "NIPS 2024,Poster", "abstract": "The paper investigates the fundamental convergence properties of Sharpness-Aware Minimization (SAM), a recently proposed gradient-based optimization method (Foret et al., 2021) that significantly improves the generalization of deep neural networks. The convergence properties, including the stationarity of accumulation points, the convergence of the sequence of gradients to the origin, of the sequence of function values to the optimal value, and of the sequence of iterates to the optimal solution, are established for the method. The universality of the provided convergence analysis based on inexact gradient descent frameworks (Khanh et al., 2023b) allows its extensions to the normalized versions of SAM such as F-SAM (Li et al. 2024), VaSSO (Li & Giannakis, 2023), RSAM (Liu et al., 2022), and to the unnormalized versions of SAM such as USAM (Andriushchenko & Flammarion, 2022). Numerical experiments are conducted on classification tasks using deep learning models to confirm the practical aspects of our analysis.", "pdf": "https://openreview.net/pdf/f56da93c7f00c34ebb8c6832ca29e25f73b7c07b.pdf"} {"title": "Masked Pre-training Enables Universal Zero-shot Denoiser", "url": "https://openreview.net/forum?id=oFgTScAsBr", "detail_url": "https://openreview.net/forum?id=oFgTScAsBr", "authors": "Xiaoxiao Ma,Zhixiang Wei,Yi Jin,Pengyang Ling,Tianle Liu,Ben Wang,Junkang Dai,Huaian Chen", "tags": "NIPS 2024,Poster", "abstract": "In this work, we observe that a model trained on vast general images via a masking strategy naturally embeds knowledge of their distribution and thus spontaneously attains the underlying potential for strong image denoising.\nBased on this observation, we propose a novel zero-shot denoising paradigm, i.e., $\\textbf{M}$asked $\\textbf{P}$re-train then $\\textbf{I}$terative fill ($\\textbf{MPI}$).\nMPI first trains a model via masking and then employs the pre-trained weights for high-quality zero-shot image denoising on a single noisy image.\nConcretely, MPI comprises two key procedures:\n$\\textbf{1) Masked Pre-training}$ involves training the model to reconstruct massive natural images under random masking, yielding generalizable representations and the potential for valid zero-shot denoising on images with varying noise degradation and even of distinct image types.\n$\\textbf{2) Iterative filling}$ exploits pre-trained knowledge for effective zero-shot denoising.
It iteratively optimizes the image by leveraging pre-trained weights, focusing on alternate reconstruction of different image parts, and gradually assembles the fully denoised image within a limited number of iterations.\nComprehensive experiments across various noisy scenarios underscore the notable advances of MPI over previous approaches with a marked reduction in inference time.", "pdf": "https://openreview.net/pdf/264ce97f1508a2a3605cf04af7e7ebd47bcf6786.pdf"} {"title": "CODE: Contrasting Self-generated Description to Combat Hallucination in Large Multi-modal Models", "url": "https://openreview.net/forum?id=g6nn2AijDp", "detail_url": "https://openreview.net/forum?id=g6nn2AijDp", "authors": "Junho Kim,Hyunjun Kim,KIM YEONJU,Yong Man Ro", "tags": "NIPS 2024,Poster", "abstract": "Large Multi-modal Models (LMMs) have recently demonstrated remarkable abilities in visual context understanding and coherent response generation. However, alongside these advancements, the issue of hallucinations has emerged as a significant challenge, producing erroneous responses that are unrelated to the visual contents. In this paper, we introduce a novel contrastive-based decoding method, COuntering DEscription Contrastive Decoding (CODE), which leverages self-generated descriptions as contrasting references during the decoding phase of LMMs to address hallucination issues. CODE utilizes the comprehensive descriptions from the model itself as a visual counterpart to correct and improve response alignment with the actual visual content. By dynamically adjusting the information flow and distribution of next-token predictions in the LMM's vocabulary, CODE enhances the coherence and informativeness of generated responses. Extensive experiments demonstrate that our method significantly reduces hallucinations and improves cross-modal consistency across various benchmarks and cutting-edge LMMs. Our method provides a simple yet effective decoding strategy that can be integrated into existing LMM frameworks without additional training.", "pdf": "https://openreview.net/pdf/eac5defa347259c7f2afbae461c70fd74bf94a3e.pdf"} {"title": "Decision-Making Behavior Evaluation Framework for LLMs under Uncertain Context", "url": "https://openreview.net/forum?id=re0ly2Ylcu", "detail_url": "https://openreview.net/forum?id=re0ly2Ylcu", "authors": "Jingru Jia,Zehua Yuan,Junhao Pan,Paul E McNamara,Deming Chen", "tags": "NIPS 2024,Poster", "abstract": "When making decisions under uncertainty, individuals often deviate from rational behavior, which can be evaluated across three dimensions: risk preference, probability weighting, and loss aversion. Given the widespread use of large language models (LLMs) in supporting decision-making processes, it is crucial to assess whether their behavior aligns with human norms and ethical expectations or exhibits potential biases. Although several empirical studies have investigated the rationality and social behavior of LLMs, their internal decision-making tendencies and capabilities remain inadequately understood. This paper proposes a framework, grounded in behavioral economics theories, to evaluate the decision-making behaviors of LLMs. With a multiple-choice-list experiment, we initially estimate the degree of risk preference, probability weighting, and loss aversion in a context-free setting for three commercial LLMs: ChatGPT-4.0-Turbo, Claude-3-Opus, and Gemini-1.0-pro.
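A toy version of such a multiple-choice-list elicitation; the payoffs, the CRRA utility family, and the grid search are illustrative assumptions, not the paper's exact lists:

```python
# Each row offers a fixed lottery vs. an increasing sure amount; the row where
# the respondent first takes the sure amount brackets a CRRA risk coefficient.
import math

def crra(x, r):
    return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def consistent_r(switch_row, sure_amounts, lottery=((0.5, 100.0), (0.5, 10.0))):
    """Return the (min, max) CRRA coefficients consistent with the switch row."""
    matches = []
    for i in range(-100, 200):                      # grid over r in [-1, 2)
        r = i / 100
        eu = sum(p * crra(x, r) for p, x in lottery)
        pred = next((j for j, s in enumerate(sure_amounts)
                     if crra(s, r) >= eu), len(sure_amounts))
        if pred == switch_row:
            matches.append(r)
    return (min(matches), max(matches)) if matches else None

# e.g., consistent_r(switch_row=4, sure_amounts=[20, 30, 40, 50, 60, 70])
```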
Our results reveal that LLMs generally exhibit patterns similar to humans, such as risk aversion and loss aversion, with a tendency to overweight small probabilities, but there are significant variations in the degree to which these behaviors are expressed across different LLMs. Further, we explore their behavior when embedded with socio-demographic features of human beings, uncovering significant disparities across various demographic characteristics.", "pdf": "https://openreview.net/pdf/cfdb54076307baf1e330d87ecd09915bc38f9cff.pdf"} {"title": "Contrastive losses as generalized models of global epistasis", "url": "https://openreview.net/forum?id=hLoiXOzoly", "detail_url": "https://openreview.net/forum?id=hLoiXOzoly", "authors": "David H Brookes,Jakub Otwinowski,Sam Sinai", "tags": "NIPS 2024,Poster", "abstract": "Fitness functions map large combinatorial spaces of biological sequences to properties of interest. Inferring these multimodal functions from experimental data is a central task in modern protein engineering. Global epistasis models are an effective and physically-grounded class of models for estimating fitness functions from observed data. These models assume that a sparse latent function is transformed by a monotonic nonlinearity to emit measurable fitness. Here we demonstrate that minimizing supervised contrastive loss functions, such as the Bradley-Terry loss, is a simple and flexible technique for extracting the sparse latent function implied by global epistasis. We argue by way of a fitness-epistasis uncertainty principle that the nonlinearities in global epistasis models can produce observed fitness functions that do not admit sparse representations, and thus may be inefficient to learn from observations when using a Mean Squared Error (MSE) loss (a common practice). We show that contrastive losses are able to accurately estimate a ranking function from limited data even in regimes where MSE is ineffective and validate the practical utility of this insight by demonstrating that contrastive loss functions result in consistently improved performance on empirical benchmark tasks.", "pdf": "https://openreview.net/pdf/05da1910a87bb993337563e5282857bbbd9e893a.pdf"} {"title": "Metric Flow Matching for Smooth Interpolations on the Data Manifold", "url": "https://openreview.net/forum?id=fE3RqiF4Nx", "detail_url": "https://openreview.net/forum?id=fE3RqiF4Nx", "authors": "Kacper Kapusniak,Peter Potaptchik,Teodora Reu,Leo Zhang,Alexander Tong,Michael M. Bronstein,Joey Bose,Francesco Di Giovanni", "tags": "NIPS 2024,Poster", "abstract": "Matching objectives underpin the success of modern generative models and rely on constructing conditional paths that transform a source distribution into a target distribution. Despite being a fundamental building block, conditional paths have been designed principally under the assumption of $\\textit{Euclidean geometry}$, resulting in straight interpolations. However, this can be particularly restrictive for tasks such as trajectory inference, where straight paths might lie outside the data manifold, thus failing to capture the underlying dynamics giving rise to the observed marginals. In this paper, we propose Metric Flow Matching (MFM), a novel simulation-free framework for conditional flow matching where interpolants are approximate geodesics learned by minimizing the kinetic energy of a data-induced Riemannian metric. 
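The kinetic-energy objective admits a compact discrete sketch; the path parameterization, the discretization, and the callable metric interface are assumptions for illustration:

```python
# Bend the straight path between paired samples with a correction network phi
# (which vanishes at the endpoints) and train it to minimize the discrete
# kinetic energy v^T G(x) v under a data-induced metric G.
import torch

def kinetic_energy(x0, x1, phi, metric_G, steps=16):
    """phi: (x0, x1, t) -> correction; metric_G: (x, v) -> the product G(x) v."""
    n, dt = x0.shape[0], 1.0 / steps

    def point(t):
        tt = torch.full((n, 1), t, device=x0.device)
        return (1 - t) * x0 + t * x1 + t * (1 - t) * phi(x0, x1, tt)

    pts = [point(i * dt) for i in range(steps + 1)]
    energy = x0.new_zeros(())
    for a, b in zip(pts[:-1], pts[1:]):
        v = (b - a) / dt                                   # discrete velocity
        energy = energy + (v * metric_G(0.5 * (a + b), v)).sum(-1).mean() * dt
    return energy
```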
This way, the generative model matches vector fields on the data manifold, which corresponds to lower uncertainty and more meaningful interpolations. We prescribe general metrics to instantiate MFM, independent of the task, and test it on a suite of challenging problems including LiDAR navigation, unpaired image translation, and modeling cellular dynamics. We observe that MFM outperforms the Euclidean baselines, particularly achieving SOTA on single-cell trajectory prediction.", "pdf": "https://openreview.net/pdf/5fd21860d4ce3d4b1c290c2ac4b4e68d2b82e3a7.pdf"} {"title": "CLIPCEIL: Domain Generalization through CLIP via Channel rEfinement and Image-text aLignment", "url": "https://openreview.net/forum?id=MqeCU0tXAY", "detail_url": "https://openreview.net/forum?id=MqeCU0tXAY", "authors": "Xi Yu,Shinjae Yoo,Yuewei Lin", "tags": "NIPS 2024,Poster", "abstract": "Domain generalization (DG) is a fundamental yet challenging topic in machine learning. Recently, the remarkable zero-shot capabilities of the large pre-trained vision-language model (e.g., CLIP) have made it popular for various downstream tasks. However, the effectiveness of this capacity often degrades when there are shifts in data distribution during testing compared to the training data. In this paper, we propose a novel method, known as CLIPCEIL, a model that utilizes Channel rEfinement and Image-text aLignment to adapt CLIP to the inaccessible $\\textit{out-of-distribution}$ test datasets that exhibit domain shifts. Specifically, we refine the feature channels in the visual domain to ensure they contain domain-invariant and class-relevant features by using a lightweight adapter. This is achieved by minimizing the inter-domain variance while maximizing the inter-class variance. In the meantime, we ensure the image-text alignment by aligning text embeddings of the class descriptions and their corresponding image embeddings while further removing the domain-specific features. Moreover, our model integrates multi-scale CLIP features by utilizing a self-attention fusion module, technically implemented through one Transformer layer. Extensive experiments on five widely used benchmark datasets demonstrate that CLIPCEIL outperforms the existing state-of-the-art methods. The source code is available at \\url{https://github.com/yuxi120407/CLIPCEIL}.", "pdf": "https://openreview.net/pdf/d3ddaddee69355c71025e6d1ee8fe9950aadcc4d.pdf"} {"title": "NeuroGauss4D-PCI: 4D Neural Fields and Gaussian Deformation Fields for Point Cloud Interpolation", "url": "https://openreview.net/forum?id=LKdCkV31T7", "detail_url": "https://openreview.net/forum?id=LKdCkV31T7", "authors": "Chaokang Jiang,Dalong Du,Jiuming Liu,Siting Zhu,Zhenqiang Liu,Zhuang Ma,Zhujin Liang,Jie Zhou", "tags": "NIPS 2024,Poster", "abstract": "Point Cloud Interpolation confronts challenges from point sparsity, complex spatiotemporal dynamics, and the difficulty of deriving complete 3D point clouds from sparse temporal information. This paper presents NeuroGauss4D-PCI, which excels at modeling complex non-rigid deformations across varied dynamic scenes. The method begins with an iterative Gaussian cloud soft clustering module, offering structured temporal point cloud representations. The proposed temporal radial basis function Gaussian residual utilizes Gaussian parameter interpolation over time, enabling smooth parameter transitions and capturing temporal residuals of Gaussian distributions.
Additionally, a 4D Gaussian deformation field tracks the evolution of these parameters, creating continuous spatiotemporal deformation fields. A 4D neural field transforms low-dimensional spatiotemporal coordinates ($x,y,z,t$) into a high-dimensional latent space. Finally, we adaptively and efficiently fuse the latent features from neural fields and the geometric features from Gaussian deformation fields.\nNeuroGauss4D-PCI outperforms existing methods in point cloud frame interpolation, delivering leading performance on both object-level (DHB) and large-scale autonomous driving datasets (NL-Drive), with scalability to auto-labeling and point cloud densification tasks.", "pdf": "https://openreview.net/pdf/71997589038272dd8a8de7753b68260b65ce55f6.pdf"} {"title": "How Does Message Passing Improve Collaborative Filtering?", "url": "https://openreview.net/forum?id=c78U5zi4eA", "detail_url": "https://openreview.net/forum?id=c78U5zi4eA", "authors": "Mingxuan Ju,William Shiao,Zhichun Guo,Yanfang Ye,Yozen Liu,Neil Shah,Tong Zhao", "tags": "NIPS 2024,Poster", "abstract": "Collaborative filtering (CF) has exhibited prominent results for recommender systems and been broadly utilized for real-world applications.\nA branch of research enhances CF methods by message passing (MP) used in graph neural networks, due to its strong capabilities of extracting knowledge from graph-structured data, like user-item bipartite graphs that naturally exist in CF. They assume that MP helps CF methods in a manner akin to its benefits for graph-based learning tasks in general (e.g., node classification). However, even though MP empirically improves CF, whether or not this assumption is correct still needs verification. To address this gap, we formally investigate why MP helps CF from multiple perspectives and show that many assumptions made by previous works are not entirely accurate. With our curated ablation studies and theoretical analyses, we discover that (i) MP improves the CF performance primarily by additional representations passed from neighbors during the forward pass instead of additional gradient updates to neighbor representations during the model back-propagation, and (ii) MP usually helps low-degree nodes more than high-degree nodes. Utilizing these novel findings, we present Test-time Aggregation for Collaborative Filtering, namely TAG-CF, a test-time augmentation framework that only conducts MP once at inference time. The key novelty of TAG-CF is that it effectively utilizes graph knowledge while circumventing most of the notorious computational overhead of MP. Besides, TAG-CF is extremely versatile and can be used as a plug-and-play module to enhance representations trained by different CF supervision signals. Evaluated on six datasets (i.e., five academic benchmarks and one real-world industrial dataset), TAG-CF consistently improves the recommendation performance of graph-free CF methods by up to 39.2% on cold users and 31.7% on all users, with little to no extra computational overheads. Furthermore, compared with trending graph-enhanced CF methods, TAG-CF delivers comparable or even better performance with less than 1% of their total training times.
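The test-time aggregation idea fits in a few lines; the mean aggregation and residual form below are assumptions about the general recipe, not the released implementation:

```python
# Enrich graph-free user embeddings once, at inference, with one message-
# passing step over the user-item bipartite graph.
import torch

def test_time_aggregate(user_emb, item_emb, interactions):
    """interactions: (m, 2) LongTensor of (user_idx, item_idx) edges."""
    u, i = interactions[:, 0], interactions[:, 1]
    agg = torch.zeros_like(user_emb).index_add_(0, u, item_emb[i])  # sum items
    deg = torch.zeros(user_emb.shape[0], 1).index_add_(
        0, u, torch.ones(u.shape[0], 1))                            # degrees
    return user_emb + agg / deg.clamp(min=1)                        # one MP hop
```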
Our code is publicly available at https://github.com/snap-research/Test-time-Aggregation-for-CF.", "pdf": "https://openreview.net/pdf/23f92092e0bdf4b282ded977c322580448f50217.pdf"} {"title": "Learning Plaintext-Ciphertext Cryptographic Problems via ANF-based SAT Instance Representation", "url": "https://openreview.net/forum?id=FzwAQJK4CG", "detail_url": "https://openreview.net/forum?id=FzwAQJK4CG", "authors": "Xinhao Zheng,Yang Li,Cunxin Fan,Huaijin Wu,Xinhao Song,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "Cryptographic problems, operating within binary variable spaces, can be routinely transformed into Boolean Satisfiability (SAT) problems regarding specific cryptographic conditions like plaintext-ciphertext matching. With the fast development of learning for discrete data, this SAT representation also facilitates the utilization of machine-learning approaches with the hope of automatically capturing patterns and strategies inherent in cryptographic structures in a data-driven manner. Existing neural SAT solvers consistently adopt conjunctive normal form (CNF) for instance representation, which in the cryptographic context can lead to scale explosion and a loss of high-level semantics. In particular, extensively used XOR operations in cryptographic problems can incur an exponential number of clauses. In this paper, we propose a graph structure based on Arithmetic Normal Form (ANF) to efficiently handle the XOR operation bottleneck. Additionally, we design an encoding method for AND operations in these ANF-based graphs, demonstrating improved efficiency over alternative general graph forms for SAT. We then propose CryptoANFNet, a graph learning approach that trains a classifier based on a message-passing scheme to predict plaintext-ciphertext satisfiability. \nUsing ANF-based SAT instances, CryptoANFNet demonstrates superior scalability and can naturally capture higher-order operational information. Empirically, CryptoANFNet achieves a 50x speedup over heuristic solvers and outperforms SOTA learning-based SAT solver NeuroSAT, with 96\\% vs. 91\\% accuracy on small-scale and 72\\% vs. 55\\% on large-scale datasets from real encryption algorithms. We also introduce a key-solving algorithm that simplifies ANF-based SAT instances from plaintext and ciphertext, enhancing key decryption accuracy from 76.5\\% to 82\\% and from 72\\% to 75\\% for datasets generated from two real encryption algorithms.", "pdf": "https://openreview.net/pdf/6a88f97c28a523945719874211adbd5ad82c82ce.pdf"} {"title": "Newton Informed Neural Operator for Computing Multiple Solutions of Nonlinear Partials Differential Equations", "url": "https://openreview.net/forum?id=F9mNL6vR27", "detail_url": "https://openreview.net/forum?id=F9mNL6vR27", "authors": "Wenrui Hao,Xinliang Liu,Yahong Yang", "tags": "NIPS 2024,Poster", "abstract": "Solving nonlinear partial differential equations (PDEs) with multiple solutions is essential in various fields, including physics, biology, and engineering. However, traditional numerical methods, such as finite element and finite difference methods, often face challenges when dealing with nonlinear solvers, particularly in the presence of multiple solutions. These methods can become computationally expensive, especially when relying on solvers like Newton's method, which may struggle with ill-posedness near bifurcation points.\nIn this paper, we propose a novel approach, the Newton Informed Neural Operator, which learns the Newton solver for nonlinear PDEs. 
Our method integrates traditional numerical techniques with the Newton nonlinear solver, efficiently learning the nonlinear mapping at each iteration. This approach allows us to compute multiple solutions in a single learning process while requiring fewer supervised data points than existing neural network methods.", "pdf": "https://openreview.net/pdf/c641da610396004810980b02b7a4a0f43b3638db.pdf"} {"title": "On scalable oversight with weak LLMs judging strong LLMs", "url": "https://openreview.net/forum?id=O1fp9nVraj", "detail_url": "https://openreview.net/forum?id=O1fp9nVraj", "authors": "Zachary Kenton,Noah Yamamoto Siegel,Janos Kramar,Jonah Brown-Cohen,Samuel Albanie,Jannis Bulian,Rishabh Agarwal,David Lindner,Yunhao Tang,Noah Goodman,Rohin Shah", "tags": "NIPS 2024,Poster", "abstract": "Scalable oversight protocols aim to enable humans to accurately supervise superhuman AI. \nIn this paper, we study debate, where two AIs compete to convince a judge; consultancy, \nwhere a single AI tries to convince a judge that asks questions;\nand compare both to a baseline of direct question-answering, where the judge just answers outright without the AI.\nWe use large language models (LLMs) as both AI agents and as stand-ins for human judges, taking the judge models to be weaker than agent models. \nWe benchmark on a diverse range of asymmetries between judges and agents, extending previous work on a single extractive QA task with information asymmetry, to also include mathematics, coding, logic and multimodal reasoning asymmetries. \nWe find that debate outperforms consultancy across all tasks when the consultant is randomly assigned to argue for the correct/incorrect answer. Comparing debate to direct question answering, the results depend on the type of task: in extractive QA tasks with information asymmetry, debate outperforms direct question answering, but in other tasks without information asymmetry the results are mixed.\nPrevious work assigned debaters/consultants an answer to argue for. When we allow them to instead choose which answer to argue for, we find judges are less frequently convinced by the wrong answer in debate than in consultancy.\nFurther, we find that stronger debater models increase judge accuracy, though more modestly than in previous studies.", "pdf": "https://openreview.net/pdf/76a4252d05a25cb434eb5268fd6867c13d961cd0.pdf"} {"title": "Learning with Fitzpatrick Losses", "url": "https://openreview.net/forum?id=7Dep87TMJs", "detail_url": "https://openreview.net/forum?id=7Dep87TMJs", "authors": "Seta Rakotomandimby,Jean-Philippe Chancelier,Michel De Lara,Mathieu Blondel", "tags": "NIPS 2024,Poster", "abstract": "Fenchel-Young losses are a family of convex loss functions,\nencompassing the squared, logistic and sparsemax losses, among others.\nEach Fenchel-Young loss is implicitly associated with a link function, for\nmapping model outputs to predictions. For instance, the logistic loss is\nassociated with the soft argmax link function. Can we build new loss functions\nassociated with the same link function as Fenchel-Young losses?\nIn this paper, we introduce Fitzpatrick losses, a new family of convex loss\nfunctions based on the Fitzpatrick function. A well-known theoretical tool in\nmaximal monotone operator theory, the Fitzpatrick function naturally leads to a\nrefined Fenchel-Young inequality, making Fitzpatrick losses tighter than\nFenchel-Young losses, while maintaining the same link\nfunction for prediction.
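For orientation, here is the Fenchel-Young construction that Fitzpatrick losses tighten, instantiated with negative entropy so the link is the soft argmax and one-hot targets recover the logistic loss (this sketches the baseline family, not the new Fitzpatrick losses themselves):

```python
# Fenchel-Young loss: L(theta, y) = f*(theta) + f(y) - <theta, y>, with
# f = negative entropy on the simplex, hence f*(theta) = logsumexp(theta)
# and the prediction link is softmax.
import torch

def fenchel_young_logistic(theta, y):
    """theta: (n, k) scores; y: (n, k) targets on the probability simplex."""
    f_conj = torch.logsumexp(theta, dim=-1)          # f*(theta)
    f_y = torch.special.xlogy(y, y).sum(-1)          # f(y), with 0 log 0 = 0
    return (f_conj + f_y - (theta * y).sum(-1)).mean()

link = lambda theta: torch.softmax(theta, dim=-1)    # shared link function
```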
\nAs an example, we introduce the Fitzpatrick logistic loss and the\nFitzpatrick sparsemax loss, counterparts of the logistic and the sparsemax\nlosses. This yields two\nnew tighter losses associated with the soft argmax and the sparse argmax,\ntwo of the most ubiquitous output layers used in machine learning. We study in\ndetail the properties of Fitzpatrick losses and, in particular, we show that\nthey can be seen as Fenchel-Young losses using a modified, target-dependent\ngenerating function. We demonstrate the effectiveness of Fitzpatrick losses for\nlabel proportion estimation.", "pdf": "https://openreview.net/pdf/74d44c678b0ebbddf4748c7a393d59af0e0e4bd0.pdf"} {"title": "Vision Mamba Mender", "url": "https://openreview.net/forum?id=9VnevS2YoR", "detail_url": "https://openreview.net/forum?id=9VnevS2YoR", "authors": "Jiacong Hu,Anda Cao,Zunlei Feng,Shengxuming Zhang,Yi Wang,Lingxiang Jia,Mingli Song", "tags": "NIPS 2024,Poster", "abstract": "Mamba, a state-space model with selective mechanisms and hardware-aware architecture, has demonstrated outstanding performance in long sequence modeling tasks, particularly garnering widespread exploration and application in the field of computer vision. While existing works have mixed opinions about its application to visual tasks, the exploration of its internal workings and the optimization of its performance remain urgent and worthy research questions given its status as a novel model. Existing optimizations of the Mamba model, especially when applied in the visual domain, have primarily relied on predefined methods such as improving scanning mechanisms or integrating other architectures, often requiring strong priors and extensive trial and error. In contrast to these approaches, this paper proposes the Vision Mamba Mender, a systematic approach for understanding the workings of Mamba, identifying flaws within, and subsequently optimizing model performance. Specifically, we present methods for predictive correlation analysis of Mamba's hidden states from both internal and external perspectives, along with corresponding definitions of correlation scores, aimed at understanding the workings of Mamba in visual recognition tasks and identifying flaws therein. Additionally, tailored repair methods are proposed for identified external and internal state flaws to eliminate them and optimize model performance. Extensive experiments validate the efficacy of the proposed methods on prevalent Mamba architectures, significantly enhancing Mamba's performance.", "pdf": "https://openreview.net/pdf/591748617887e22e997c708f7f48a2fd796895c0.pdf"} {"title": "A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding", "url": "https://openreview.net/forum?id=1FikBPewU9", "detail_url": "https://openreview.net/forum?id=1FikBPewU9", "authors": "Yitong Dong,Yijin Li,Zhaoyang Huang,Weikang Bian,Jingbo Liu,Hujun Bao,Zhaopeng Cui,Hongsheng Li,Guofeng Zhang", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we propose a novel multi-view stereo (MVS) framework that eliminates the depth range prior. Unlike recent prior-free MVS methods that work in a pair-wise manner, our method simultaneously considers all the source images. Specifically, we introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information within and across multi-view images. Considering the asymmetry of the epipolar disparity flow, the key to our method lies in accurately modeling multi-view geometric constraints.
We integrate pose embedding to encapsulate information such as multi-view camera poses, providing implicit geometric constraints for multi-view disparity feature fusion dominated by attention. Additionally, we construct corresponding hidden states for each source image due to significant differences in the observation quality of the same pixel in the reference frame across multiple source frames. We explicitly estimate the quality of the current pixel corresponding to sampled points on the epipolar line of the source image and dynamically update hidden states through the uncertainty estimation module. Extensive results on the DTU dataset and Tanks\\&Temples benchmark demonstrate the effectiveness of our method.", "pdf": "https://openreview.net/pdf/3fa7cd225762a3bfe69614e381792958847a5da8.pdf"} {"title": "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs", "url": "https://openreview.net/forum?id=2cczgOfMP4", "detail_url": "https://openreview.net/forum?id=2cczgOfMP4", "authors": "Xuan Zhang,Chao Du,Tianyu Pang,Qian Liu,Wei Gao,Min Lin", "tags": "NIPS 2024,Poster", "abstract": "The recent development of chain-of-thought (CoT) decoding has enabled large language models (LLMs) to generate explicit logical reasoning paths for complex problem-solving. However, research indicates that these paths are not always deliberate and optimal. The tree-of-thought (ToT) method employs tree-searching to extensively explore the reasoning space and find better reasoning paths that CoT decoding might overlook. This deliberation, however, comes at the cost of significantly increased inference complexity. In this work, we demonstrate that fine-tuning LLMs leveraging the search tree constructed by ToT allows CoT to achieve similar or better performance, thereby avoiding the substantial inference burden. This is achieved through \\emph{Chain of Preference Optimization} (CPO), where LLMs are fine-tuned to align each step of the CoT reasoning paths with those of ToT using the inherent preference information in the tree-search process. Extensive experimental results show that CPO significantly improves LLM performance in solving a variety of complex problems, including question answering, fact verification, and arithmetic reasoning, demonstrating its effectiveness. \nOur code is available at [https://github.com/sail-sg/CPO](https://github.com/sail-sg/CPO).", "pdf": "https://openreview.net/pdf/aa0711383e2d1c60f619c4541357d097e46dade1.pdf"} {"title": "Safe LoRA: The Silver Lining of Reducing Safety Risks when Finetuning Large Language Models", "url": "https://openreview.net/forum?id=HcifdQZFZV", "detail_url": "https://openreview.net/forum?id=HcifdQZFZV", "authors": "Chia-Yi Hsu,Yu-Lin Tsai,Chih-Hsun Lin,Pin-Yu Chen,Chia-Mu Yu,Chun-Ying Huang", "tags": "NIPS 2024,Poster", "abstract": "While large language models (LLMs) such as Llama-2 or GPT-4 have shown impressive zero-shot performance, fine-tuning is still necessary to enhance their performance for customized datasets, domain-specific tasks, or other private needs. However, fine-tuning all parameters of LLMs requires significant hardware resources, which can be impractical for typical users. Therefore, parameter-efficient fine-tuning methods such as LoRA have emerged, allowing users to fine-tune LLMs without the need for considerable computing resources, with little performance degradation compared to fine-tuning all parameters. 
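For readers unfamiliar with the adapter being patched here, a minimal PyTorch sketch of a vanilla LoRA layer (a generic illustration, not the Safe LoRA code; the safety projection described next operates on top of updates of this form):

```python
import torch

class LoRALinear(torch.nn.Module):
    # Vanilla LoRA adapter: freeze the pre-trained weight W and train only a
    # low-rank update (alpha / r) * B @ A, i.e. r * (d_in + d_out) parameters
    # instead of d_in * d_out.
    def __init__(self, base: torch.nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                          # frozen base weights
        d_out, d_in = base.weight.shape
        self.A = torch.nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(d_out, r))   # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(torch.nn.Linear(512, 512))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 8192 trainable
```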
Unfortunately, recent studies indicate that fine-tuning can increase the safety risks of LLMs, even when the data does not contain malicious content. To address this challenge, we propose $\\textsf{Safe LoRA}$, a simple one-liner patch to the original LoRA implementation by introducing the projection of LoRA weights from selected layers to the safety-aligned subspace, effectively reducing the safety risks in LLM fine-tuning while maintaining utility. It is worth noting that $\\textsf{Safe LoRA}$ is a training-free and data-free approach, as it only requires the knowledge of the weights from the base and aligned LLMs. Our extensive experiments demonstrate that when fine-tuning on purely malicious data, $\\textsf{Safe LoRA}$ retains similar safety performance as the original aligned model. Moreover, when the fine-tuning dataset contains a mixture of both benign and malicious data, $\\textsf{Safe LoRA}$ mitigates the negative effect caused by malicious data while preserving performance on downstream tasks. Our codes are available at https://github.com/IBM/SafeLoRA.", "pdf": "https://openreview.net/pdf/93a097bbfe5e3798a79c46c3c9b7d873d64ae2be.pdf"} {"title": "Space-Time Continuous PDE Forecasting using Equivariant Neural Fields", "url": "https://openreview.net/forum?id=wN5AgP0DJ0", "detail_url": "https://openreview.net/forum?id=wN5AgP0DJ0", "authors": "David M Knigge,David Wessels,Riccardo Valperga,Samuele Papa,Jan-Jakob Sonke,Erik J Bekkers,Stratis Gavves", "tags": "NIPS 2024,Poster", "abstract": "Recently, Conditional Neural Fields (NeFs) have emerged as a powerful modelling paradigm for PDEs, by learning solutions as flows in the latent space of the Conditional NeF. Although benefiting from favourable properties of NeFs such as grid-agnosticity and space-time-continuous dynamics modelling, this approach limits the ability to impose known constraints of the PDE on the solutions -- such as symmetries or boundary conditions -- in favour of modelling flexibility. Instead, we propose a space-time continuous NeF-based solving framework that - by preserving geometric information in the latent space of the Conditional NeF - preserves known symmetries of the PDE. We show that modelling solutions as flows of pointclouds over the group of interest $G$ improves generalization and data-efficiency. Furthermore, we validate that our framework readily generalizes to unseen spatial and temporal locations, as well as geometric transformations of the initial conditions - where other NeF-based PDE forecasting methods fail - and improves over baselines in a number of challenging geometries.", "pdf": "https://openreview.net/pdf/d1a0db94f57add0d588388b6446bcdc66c454e64.pdf"} {"title": "Identify Then Recommend: Towards Unsupervised Group Recommendation", "url": "https://openreview.net/forum?id=oTZYhOAMhX", "detail_url": "https://openreview.net/forum?id=oTZYhOAMhX", "authors": "Yue Liu,Shihao Zhu,Tianyuan Yang,Jian Ma,Wenliang Zhong", "tags": "NIPS 2024,Poster", "abstract": "Group Recommendation (GR), which aims to recommend items to groups of users, has become a promising and practical direction for recommendation systems. This paper points out two issues of the state-of-the-art GR models. (1) The pre-defined and fixed number of user groups is inadequate for real-time industrial recommendation systems, where the group distribution can shift dynamically. (2) The training schema of existing GR methods is supervised, necessitating expensive user-group and group-item labels, leading to significant annotation costs. 
To this end, we present a novel unsupervised group recommendation framework named $\\underline{\\text{I}}$dentify $\\underline{\\text{T}}$hen $\\underline{\\text{R}}$ecommend ($\\underline{\\text{ITR}}$), which first identifies the user groups in an unsupervised manner even without a pre-defined number of groups, and then two pretext tasks are designed to conduct self-supervised group recommendation. Concretely, at the group identification stage, we first estimate the adaptive density of each user point, where areas with higher densities are more likely to be recognized as group centers. Then, a heuristic merge-and-split strategy is designed to discover the user groups and decision boundaries. Subsequently, at the self-supervised learning stage, the pull-and-repulsion pretext task is proposed to optimize the user-group distribution. In addition, the pseudo-group recommendation pretext task is designed to assist the recommendations. Extensive experiments demonstrate the superiority and effectiveness of ITR on both user recommendation (e.g., 22.22\\% NDCG@5 $\\uparrow$) and group recommendation (e.g., 22.95\\% NDCG@5 $\\uparrow$). Furthermore, we deploy ITR on an industrial recommender and achieve promising results.", "pdf": "https://openreview.net/pdf/8ae6f2c622fb70c48215177ba3d62424f497ccda.pdf"} {"title": "Fast yet Safe: Early-Exiting with Risk Control", "url": "https://openreview.net/forum?id=bbFjpasRgs", "detail_url": "https://openreview.net/forum?id=bbFjpasRgs", "authors": "Metod Jazbec,Alexander Timans,Tin Had\u017ei Veljkovi\u0107,Kaspar Sakmann,Dan Zhang,Christian A. Naesseth,Eric Nalisnick", "tags": "NIPS 2024,Poster", "abstract": "Scaling machine learning models significantly improves their performance. However, such gains come at the cost of inference being slow and resource-intensive. Early-exit neural networks (EENNs) offer a promising solution: they accelerate inference by allowing intermediate layers to exit and produce a prediction early. Yet a fundamental issue with EENNs is how to determine when to exit without severely degrading performance. In other words, when is it 'safe' for an EENN to go 'fast'? To address this issue, we investigate how to adapt frameworks of risk control to EENNs. Risk control offers a distribution-free, post-hoc solution that tunes the EENN's exiting mechanism so that exits only occur when the output is of sufficient quality. 
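A toy sketch of the exit-rule tuning idea: calibrate a confidence threshold on held-out data so that early-exited predictions meet a target error level. This is a plug-in empirical illustration only; the paper tunes the exit rule with proper distribution-free risk-control guarantees.

```python
import numpy as np

def calibrate_exit_threshold(conf, correct, eps=0.05):
    # Scan thresholds from permissive to strict and keep the first one whose
    # early-exited predictions have empirical error <= eps.
    for lam in np.linspace(0.5, 1.0, 101):
        exited = conf >= lam
        if exited.any() and 1.0 - correct[exited].mean() <= eps:
            return lam
    return 1.0  # no safe threshold found: never exit early

rng = np.random.default_rng(0)
conf = rng.uniform(size=2000)                             # exit-head confidences
correct = (rng.uniform(size=2000) < conf).astype(float)   # well-calibrated head
print(calibrate_exit_threshold(conf, correct))            # ~0.9 for eps = 0.05
```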
We empirically validate our insights on a range of vision and language tasks, demonstrating that risk control can produce substantial computational savings, all the while preserving user-specified performance goals.", "pdf": "https://openreview.net/pdf/baabe9ab5b5884dbeb9a0ff31f6187e5274fd4a7.pdf"} {"title": "Contextual Decision-Making with Knapsacks Beyond the Worst Case", "url": "https://openreview.net/forum?id=Dgt6sh2ruQ", "detail_url": "https://openreview.net/forum?id=Dgt6sh2ruQ", "authors": "Zhaohua Chen,Rui Ai,Mingwei Yang,Yuqi Pan,Chang Wang,Xiaotie Deng", "tags": "NIPS 2024,Poster", "abstract": "We study the framework of a dynamic decision-making scenario with resource constraints.\nIn this framework, an agent, whose target is to maximize the total reward under the initial inventory, selects an action in each round upon observing a random request, leading to a reward and resource consumptions that are further associated with an unknown random external factor.\nWhile previous research has already established an $\\widetilde{O}(\\sqrt{T})$ worst-case regret for this problem, this work offers two results that go beyond the worst-case perspective: one for the worst-case gap between benchmarks and another for logarithmic regret rates.\nWe first show that an $\\Omega(\\sqrt{T})$ distance between the commonly used fluid benchmark and the online optimum is unavoidable when the former has a degenerate optimal solution.\nOn the algorithmic side, we merge the re-solving heuristic with distribution estimation techniques and propose an algorithm that achieves an $\\widetilde{O}(1)$ regret as long as the fluid LP has a unique and non-degenerate solution.\nFurthermore, we prove that our algorithm maintains a near-optimal $\\widetilde{O}(\\sqrt{T})$ regret even in the worst cases and extend these results to the setting where the request and external factor are continuous.\nRegarding information structure, our regret results are obtained under two feedback models, respectively, where the algorithm accesses the external factor at the end of each round and at the end of a round only when a non-null action is executed.", "pdf": "https://openreview.net/pdf/7e84f3d871b12161c42c7d315a8e2334d37e25ed.pdf"} {"title": "Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations", "url": "https://openreview.net/forum?id=zm1LcgRpHm", "detail_url": "https://openreview.net/forum?id=zm1LcgRpHm", "authors": "Shivam Grover,Amin Jalali,Ali Etemad", "tags": "NIPS 2024,Poster", "abstract": "Existing approaches for learning representations of time-series keep the temporal arrangement of the time-steps intact with the presumption that the original order is optimal for learning. However, non-adjacent sections of real-world time-series may have strong dependencies. Accordingly, we raise the question: Is there an alternative arrangement for time-series which could enable more effective representation learning? To address this, we propose a simple plug-and-play neural network layer called Segment, Shuffle, and Stitch (S3) designed to improve representation learning in time-series models. S3 works by creating non-overlapping segments from the original sequence and shuffling them in a learned manner that is optimal for the task at hand. It then re-attaches the shuffled segments back together and performs a learned weighted sum with the original input to capture both the newly shuffled sequence and the original sequence. 
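A minimal PyTorch sketch of such a segment-shuffle-stitch layer (illustrative only: plain argsort is non-differentiable, and the actual layer learns the ordering with a differentiable relaxation):

```python
import torch

class S3(torch.nn.Module):
    # Cut the sequence into n segments, reorder them by the argsort of learned
    # priorities, stitch them back, and blend with the original sequence
    # through learned weights.
    def __init__(self, n_segments: int):
        super().__init__()
        self.priority = torch.nn.Parameter(torch.randn(n_segments))
        self.w = torch.nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x):                        # x: (batch, time, features)
        b, t, f = x.shape
        n = self.priority.numel()                # assumes t divisible by n
        segments = x.reshape(b, n, t // n, f)
        order = torch.argsort(self.priority)     # learned segment ordering
        shuffled = segments[:, order].reshape(b, t, f)
        return self.w[0] * shuffled + self.w[1] * x

print(S3(n_segments=4)(torch.randn(2, 32, 8)).shape)  # torch.Size([2, 32, 8])
```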
S3 is modular and can be stacked to achieve different levels of granularity, and can be added to many forms of neural architectures including CNNs or Transformers with negligible computation overhead. Through extensive experiments on several datasets and state-of-the-art baselines, we show that incorporating S3 results in significant improvements for the tasks of time-series classification, forecasting, and anomaly detection, improving performance on certain datasets by up to 68\\%. We also show that S3 makes the learning more stable with a smoother training loss curve and loss landscape compared to the original baseline. The code is available at https://github.com/shivam-grover/S3-TimeSeries.", "pdf": "https://openreview.net/pdf/d5ba68bdf83d04632580f0b9e7ac80199a8c19c5.pdf"} {"title": "Towards the Transferability of Rewards Recovered via Regularized Inverse Reinforcement Learning", "url": "https://openreview.net/forum?id=l5wEQPcDab", "detail_url": "https://openreview.net/forum?id=l5wEQPcDab", "authors": "Andreas Schlaginhaufen,Maryam Kamgarpour", "tags": "NIPS 2024,Poster", "abstract": "Inverse reinforcement learning (IRL) aims to infer a reward from expert demonstrations, motivated by the idea that the reward, rather than the policy, is the most succinct and transferable description of a task [Ng et al., 2000]. However, the reward corresponding to an optimal policy is not unique, making it unclear if an IRL-learned reward is transferable to new transition laws in the sense that its optimal policy aligns with the optimal policy corresponding to the expert's true reward. Past work has addressed this problem only under the assumption of full access to the expert's policy, guaranteeing transferability when learning from two experts with the same reward but different transition laws that satisfy a specific rank condition [Rolland et al., 2022]. In this work, we show that the conditions developed under full access to the expert's policy cannot guarantee transferability in the more practical scenario where we have access only to demonstrations of the expert. Instead of a binary rank condition, we propose principal angles as a more refined measure of similarity and dissimilarity between transition laws. Based on this, we then establish two key results: 1) a sufficient condition for transferability to any transition laws when learning from at least two experts with sufficiently different transition laws, and 2) a sufficient condition for transferability to local changes in the transition law when learning from a single expert. Furthermore, we also provide a probably approximately correct (PAC) algorithm and an end-to-end analysis for learning transferable rewards from demonstrations of multiple experts.", "pdf": "https://openreview.net/pdf/91b311c232293f7b6542e2c230bbdabd28d5b427.pdf"} {"title": "QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs", "url": "https://openreview.net/forum?id=dfqsW38v1X", "detail_url": "https://openreview.net/forum?id=dfqsW38v1X", "authors": "Saleh Ashkboos,Amirkeivan Mohtashami,Maximilian L. Croci,Bo Li,Pashmina Cameron,Martin Jaggi,Dan Alistarh,Torsten Hoefler,James Hensman", "tags": "NIPS 2024,Poster", "abstract": "We introduce QuaRot, a new Quantization scheme based on Rotations, which is able to quantize LLMs end-to-end, including all weights, activations, and KV cache in 4 bits. QuaRot rotates LLMs in a way that removes outliers from the hidden state without changing the output, making quantization easier. 
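The "rotate without changing the output" trick behind QuaRot fits in a few lines: for any orthogonal Q, (xQ)(QᵀW) = xW, while the rotation spreads outlier mass across channels. A numpy illustration of this computational invariance (a generic random orthogonal rotation; QuaRot itself uses structured Hadamard rotations fused into the weights):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256
W = rng.normal(size=(d, d))
x = rng.normal(size=(1, d))
x[0, 7] = 40.0            # a single outlier channel, the bane of 4-bit quantization

# For any orthogonal Q: (x @ Q) @ (Q.T @ W) == x @ W, so we may quantize the
# rotated activations x @ Q, whose outlier mass is spread across all channels.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

print(np.allclose(x @ W, (x @ Q) @ (Q.T @ W)))   # True: output unchanged
print(np.abs(x).max(), np.abs(x @ Q).max())      # rotated peak is far smaller
```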
This computational invariance is applied to the hidden state (residual) of the LLM, as well as to the activations of the feed-forward components, aspects of the attention mechanism, and to the KV cache. The result is a quantized model where all matrix multiplications are performed in 4 bits, without any channels identified for retention in higher precision. Our 4-bit quantized LLAMA2-70B model incurs a WikiText-2 perplexity degradation of at most 0.47 and retains 99% of the zero-shot performance. We also show that QuaRot can provide lossless 6- and 8-bit LLAMA-2 models without any calibration data using round-to-nearest quantization. Code is available at github.com/spcl/QuaRot.", "pdf": "https://openreview.net/pdf/139029c96e2d340c8703447400fd4c85b745604c.pdf"} {"title": "Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization", "url": "https://openreview.net/forum?id=NVDYgEFXCy", "detail_url": "https://openreview.net/forum?id=NVDYgEFXCy", "authors": "Ruichen Jiang,Ali Kavis,Qiujiang Jin,sujay sanghavi,Aryan Mokhtari", "tags": "NIPS 2024,Poster", "abstract": "We propose adaptive, line-search-free second-order methods with optimal rate of convergence for solving convex-concave min-max problems. By means of an adaptive step size, our algorithms feature a simple update rule that requires solving only one linear system per iteration, eliminating the need for line-search or backtracking mechanisms. Specifically, we base our algorithms on the optimistic method and appropriately combine it with second-order information. Moreover, distinct from common adaptive schemes, we define the step size recursively as a function of the gradient norm and the prediction error in the optimistic update. We first analyze a variant where the step size requires knowledge of the Lipschitz constant of the Hessian. Under the additional assumption of Lipschitz continuous gradients, we further design a parameter-free version by tracking the Hessian Lipschitz constant locally and ensuring the iterates remain bounded. We also evaluate the practical performance of our algorithm by comparing it to existing second-order algorithms for minimax optimization.", "pdf": "https://openreview.net/pdf/3a94a01a7b61495b721a6c20e6816658a7a0d191.pdf"} {"title": "IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors", "url": "https://openreview.net/forum?id=0SRJBtTNhX", "detail_url": "https://openreview.net/forum?id=0SRJBtTNhX", "authors": "Shenghe Zheng,Hongzhi Wang,Xianglong Liu", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) have shown great performance in various tasks, with the core idea of learning from data labels and aggregating messages within the neighborhood of nodes. However, the common challenges in graphs are twofold: insufficient accurate (high-quality) labels and limited neighbors for nodes, resulting in weak GNNs. \nExisting graph augmentation methods typically address only one of these challenges, often adding training costs or relying on oversimplified or knowledge-intensive strategies, limiting their generalization.\nTo simultaneously address both challenges faced by graphs in a generalized way, we propose an elegant method called IntraMix. Considering the incompatibility of vanilla Mixup with the complex topology of graphs, IntraMix innovatively employs Mixup among inaccurately labeled data of the same class, generating high-quality labeled data at minimal cost. 
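A minimal sketch of the core intra-class Mixup step (illustration of the idea only; IntraMix additionally wires the generated nodes into the graph as high-confidence neighbors, as the abstract goes on to describe):

```python
import torch

def intra_class_mixup(x, y, alpha=2.0):
    # Mix random pairs *within each class*: averaging two samples that share a
    # (possibly noisy) label tends to yield a more reliable label than either
    # original.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    xs, ys = [], []
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        perm = idx[torch.randperm(idx.numel())]
        xs.append(lam * x[idx] + (1 - lam) * x[perm])
        ys.append(y[idx])
    return torch.cat(xs), torch.cat(ys)

x, y = torch.randn(100, 16), torch.randint(0, 3, (100,))
xm, ym = intra_class_mixup(x, y)
print(xm.shape, ym.shape)  # same sizes, labels preserved
```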
\nAdditionally, it identifies data that are highly likely to be clustered into the same group as the generated data to serve as their neighbors, thereby enriching the neighborhoods of graphs. IntraMix efficiently tackles both issues faced by graphs and challenges the prior notion of the limited effectiveness of Mixup in node classification. IntraMix is a theoretically grounded plug-and-play method that can be readily applied to all GNNs. Extensive experiments demonstrate the effectiveness of IntraMix across various GNNs and datasets. Our code is available at: [https://github.com/Zhengsh123/IntraMix](https://github.com/Zhengsh123/IntraMix).", "pdf": "https://openreview.net/pdf/2df08c17e88734ce3e56147658624ff49f7b1898.pdf"} {"title": "Conditional Outcome Equivalence: A Quantile Alternative to CATE", "url": "https://openreview.net/forum?id=tyPcIETPWM", "detail_url": "https://openreview.net/forum?id=tyPcIETPWM", "authors": "Josh Givens,Henry Reeve,Song Liu,Katarzyna Reluga", "tags": "NIPS 2024,Poster", "abstract": "The conditional quantile treatment effect (CQTE) can provide insight into the effect of a treatment beyond the conditional average treatment effect (CATE). This ability to provide information over multiple quantiles of the response makes the CQTE especially valuable in cases where the effect of a treatment is not well-modelled by a location shift, even conditionally on the covariates. Nevertheless, the estimation of the CQTE is challenging and often depends upon the smoothness of the individual quantiles as a function of the covariates rather than smoothness of the CQTE itself. This is in stark contrast to the CATE where it is possible to obtain high-quality estimates which have less dependency upon the smoothness of the nuisance parameters when the CATE itself is smooth. Moreover, relative smoothness of the CQTE lacks the interpretability of smoothness of the CATE, making it less clear whether it is a reasonable assumption to make. We combine the desirable properties of the CATE and CQTE by considering a new estimand, the conditional quantile comparator (CQC). The CQC not only retains information about the whole treatment distribution, similar to the CQTE, but also has more natural examples of smoothness and is able to leverage simplicity in an auxiliary estimand. We provide finite sample bounds on the error of our estimator, demonstrating its ability to exploit simplicity. We validate our theory in numerical simulations which show that our method produces more accurate estimates than baselines. Finally, we apply our methodology to a study on the effect of employment incentives on earnings across different age groups. We see that our method is able to reveal heterogeneity of the effect across different quantiles.", "pdf": "https://openreview.net/pdf/a63da2a2fa88b2892641ef33b8f9ba615e7dc057.pdf"} {"title": "On the Computational Landscape of Replicable Learning", "url": "https://openreview.net/forum?id=1PCsDNG6Jg", "detail_url": "https://openreview.net/forum?id=1PCsDNG6Jg", "authors": "Alkis Kalavasis,Amin Karbasi,Grigoris Velegkas,Felix Zhou", "tags": "NIPS 2024,Poster", "abstract": "We study computational aspects of algorithmic replicability, a notion of stability introduced by Impagliazzo, Lei,\nPitassi, and Sorrell [STOC, 2022]. 
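In this notion, an algorithm is replicable if, when run twice on independent samples but with the same internal randomness, it returns the identical output with high probability. A classic toy example is mean estimation made replicable by randomized rounding onto a grid whose offset comes from the shared randomness (a sketch of the standard trick, not the paper's constructions):

```python
import numpy as np

def replicable_mean(samples, rho, shared_seed):
    # Snap the empirical mean to a grid of width rho whose offset is drawn
    # from the *shared* seed. Two runs on fresh data land in the same cell,
    # hence return the identical number, as long as rho dominates the
    # sampling fluctuation.
    offset = np.random.default_rng(shared_seed).uniform(0, rho)
    return np.floor((samples.mean() - offset) / rho) * rho + offset + rho / 2

data = np.random.default_rng(1)
run1 = replicable_mean(data.normal(1.0, 1.0, 100_000), rho=0.1, shared_seed=7)
run2 = replicable_mean(data.normal(1.0, 1.0, 100_000), rho=0.1, shared_seed=7)
print(run1, run2, run1 == run2)  # identical outputs with high probability
```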
Motivated by a recent line of work that established strong statistical connections between\nreplicability and other notions of learnability such as online learning, private learning, and SQ learning, we aim to\nbetter understand the computational connections between replicability and these learning paradigms.\nOur first result shows that there is a concept class that is efficiently replicably PAC learnable, but, under standard\ncryptographic assumptions, no efficient online learner exists for this class. Subsequently, we design an efficient\nreplicable learner for PAC learning parities when the marginal distribution is far from uniform, making progress on a\nquestion posed by Impagliazzo et al. [STOC, 2022]. To obtain this result, we design a replicable lifting framework inspired by\nBlanc, Lange, Malik, and Tan [STOC, 2023], that transforms in a black-box manner efficient replicable PAC learners under the\nuniform marginal distribution over the Boolean hypercube to replicable PAC learners under any marginal distribution,\nwith sample and time complexity that depends on a certain measure of the complexity of the distribution. \nFinally, we show that any pure DP learner can be transformed in a black-box manner to a replicable learner, with time complexity polynomial in the confidence and accuracy parameters, but exponential in the representation dimension of the underlying hypothesis class.", "pdf": "https://openreview.net/pdf/c64d2042683b04a4bd0a9b126fa01b2982b62d54.pdf"} {"title": "Theoretical and Empirical Insights into the Origins of Degree Bias in Graph Neural Networks", "url": "https://openreview.net/forum?id=1mAaewThcz", "detail_url": "https://openreview.net/forum?id=1mAaewThcz", "authors": "Arjun Subramonian,Jian Kang,Yizhou Sun", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) often perform better for high-degree nodes than low-degree nodes on node classification tasks. This degree bias can reinforce social marginalization by, e.g., privileging celebrities and other high-degree actors in social networks during social and content recommendation. While researchers have proposed numerous hypotheses for why GNN degree bias occurs, we find via a survey of 38 degree bias papers that these hypotheses are often not rigorously validated, and can even be contradictory. Thus, we provide an analysis of the origins of degree bias in message-passing GNNs with different graph filters. We prove that high-degree test nodes tend to have a lower probability of misclassification regardless of how GNNs are trained. Moreover, we show that degree bias arises from a variety of factors that are associated with a node's degree (e.g., homophily of neighbors, diversity of neighbors). Furthermore, we show that during training, some GNNs may adjust their loss on low-degree nodes more slowly than on high-degree nodes; however, with sufficiently many epochs of training, message-passing GNNs can achieve their maximum possible training accuracy, which is not significantly limited by their expressive power. Throughout our analysis, we connect our findings to previously-proposed hypotheses for the origins of degree bias, supporting and unifying some while casting doubt on others. 
We validate our theoretical findings on 8 common real-world networks, and based on our theoretical and empirical insights, describe a roadmap to alleviate degree bias.", "pdf": "https://openreview.net/pdf/6df0c4ad9ea96a2a6aab1a1f14255a2734b0942d.pdf"} {"title": "$\\textit{NeuroPath}$: A Neural Pathway Transformer for Joining the Dots of Human Connectomes", "url": "https://openreview.net/forum?id=AvBuK8Ezrg", "detail_url": "https://openreview.net/forum?id=AvBuK8Ezrg", "authors": "Ziquan Wei,Tingting Dan,Jiaqi Ding,Guorong Wu", "tags": "NIPS 2024,Poster", "abstract": "Although modern imaging technologies allow us to study connectivity between two distinct brain regions $\\textit{in-vivo}$, an in-depth understanding of how anatomical structure supports brain function and how remarkable cognition emerges from spontaneous functional fluctuations is still elusive. Meanwhile, tremendous efforts have been made in the realm of machine learning to establish the nonlinear mapping between neuroimaging data and phenotypic traits. However, the absence of neuroscience insight in the current approaches poses significant challenges in understanding cognitive behavior from transient neural activities. \nTo address this challenge, we put the spotlight on the coupling mechanism of structural connectivity (SC) and functional connectivity (FC) by formulating this network neuroscience question as an expressive graph representation learning problem for high-order topology. Specifically, we introduce the concept of $\\textit{topological detour}$ to characterize how a ubiquitous instance of FC (direct link) is supported by neural pathways (detour) physically wired by SC, which forms a cyclic loop in which brain structure and function interact. In machine learning terms, the multi-hop detour pathway underlying SC-FC coupling allows us to devise a novel multi-head self-attention mechanism within Transformer to capture multi-modal feature representation from paired graphs of SC and FC. Taken together, we propose a biologically-inspired deep model, coined $\\textit{NeuroPath}$, to find putative connectomic feature representations from the unprecedented amount of neuroimages, which can be plugged into various downstream applications such as task recognition and disease diagnosis. \nWe have evaluated $\\textit{NeuroPath}$ on large-scale public datasets including Human Connectome Project (HCP) and UK Biobank (UKB) under different experiment settings of supervised and zero-shot learning, where the state-of-the-art performance by our $\\textit{NeuroPath}$ indicates great potential in network neuroscience.", "pdf": "https://openreview.net/pdf/6cf963e48dd9691fc884a1a28825d7176391a3c4.pdf"} {"title": "Supra-Laplacian Encoding for Transformer on Dynamic Graphs", "url": "https://openreview.net/forum?id=vP9qAzr2Gw", "detail_url": "https://openreview.net/forum?id=vP9qAzr2Gw", "authors": "Yannis Karmim,Marc Lafon,Raphael Fournier-S'niehotta,Nicolas THOME", "tags": "NIPS 2024,Poster", "abstract": "Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching.\nHowever, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention, GTs lose both structural and temporal information. 
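The encoding introduced next builds on the supra-Laplacian of a multi-layer graph. A minimal scipy sketch of the general construction (an illustration under simple assumptions about edge weights and normalization, not the paper's exact recipe):

```python
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def supra_laplacian(adjs, omega=1.0):
    # View T snapshots as one multi-layer graph: block-diagonal intra-layer
    # adjacencies, plus inter-layer edges of weight omega connecting each node
    # to its own copy in adjacent snapshots; then take the graph Laplacian.
    T, n = len(adjs), adjs[0].shape[0]
    intra = sp.block_diag(adjs)
    coupling = sp.diags([omega] * (T - 1), 1) + sp.diags([omega] * (T - 1), -1)
    inter = sp.kron(coupling, sp.identity(n))
    return laplacian((intra + inter).tocsr())

snaps = [sp.random(50, 50, density=0.1, random_state=t) for t in range(4)]
snaps = [((a + a.T) > 0).astype(float) for a in snaps]       # symmetrize
L = supra_laplacian(snaps)
vals, vecs = eigsh(L, k=8, which="SM")   # low-frequency space-time eigenvectors
print(vecs.shape)                        # (200, 8): one encoding per (node, time)
```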
In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding to leverage the GT architecture while keeping spatio-temporal information.\nSpecifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix.\nOur second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction.\nSLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTM), and Dynamic Graph Transformers,\non 9 datasets. Code is open-source and available at this link https://github.com/ykrmm/SLATE.", "pdf": "https://openreview.net/pdf/acb194ce31b86916495f23d4c82ee0d79949b5cb.pdf"} {"title": "Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees", "url": "https://openreview.net/forum?id=YzyCEJlV9Z", "detail_url": "https://openreview.net/forum?id=YzyCEJlV9Z", "authors": "Yu Gui,Ying Jin,Zhimei Ren", "tags": "NIPS 2024,Poster", "abstract": "Before deploying outputs from foundation models in high-stakes tasks, it is imperative to ensure that they align with human values.\nFor instance, in radiology report generation, reports generated by a vision-language model must align with human evaluations before their use in medical decision-making. This paper presents Conformal Alignment, a general framework for identifying units whose outputs meet a user-specified alignment criterion. It is guaranteed that on average, a prescribed fraction of selected units indeed meet the alignment criterion, regardless of the foundation model or the data distribution. Given any pre-trained model and new units with model-generated outputs, Conformal Alignment leverages a set of reference data with ground-truth alignment status to train an alignment predictor. It then selects new units whose predicted alignment scores surpass a data-dependent threshold, certifying their corresponding outputs as trustworthy. Through applications to question answering and radiology report generation, we demonstrate that our method is able to accurately identify units with trustworthy outputs via lightweight training over a moderate amount of reference data. En route, we investigate the informativeness of various features in alignment prediction and combine them with standard models to construct the alignment predictor.", "pdf": "https://openreview.net/pdf/7b7adf9f768f97abe11b120552912a9342c64df8.pdf"} {"title": "SPARKLE: A Unified Single-Loop Primal-Dual Framework for Decentralized Bilevel Optimization", "url": "https://openreview.net/forum?id=g5DyqerUpX", "detail_url": "https://openreview.net/forum?id=g5DyqerUpX", "authors": "Shuchen Zhu,Boao Kong,Songtao Lu,Xinmeng Huang,Kun Yuan", "tags": "NIPS 2024,Poster", "abstract": "This paper studies decentralized bilevel optimization, in which multiple agents collaborate to solve problems involving nested optimization structures with neighborhood communications. Most existing literature primarily utilizes gradient tracking to mitigate the influence of data heterogeneity, without exploring other well-known heterogeneity-correction techniques such as EXTRA or Exact Diffusion. Additionally, these studies often employ identical decentralized strategies for both upper- and lower-level problems, neglecting to leverage distinct mechanisms across different levels. 
To address these limitations, this paper proposes SPARKLE, a unified single-loop primal-dual algorithm framework for decentralized bilevel optimization. SPARKLE offers the flexibility to incorporate various heterogeneity-correction strategies into the algorithm. Moreover, SPARKLE allows for different strategies to solve upper- and lower-level problems. We present a unified convergence analysis for SPARKLE, applicable to all its variants, with state-of-the-art convergence rates compared to existing decentralized bilevel algorithms. Our results further reveal that EXTRA and Exact Diffusion are more suitable for decentralized bilevel optimization, and using mixed strategies in bilevel algorithms brings more benefits than relying solely on gradient tracking.", "pdf": "https://openreview.net/pdf/be356ceceef5e8f17456bd0308ffcd0212712bfc.pdf"} {"title": "Enriching Disentanglement: From Logical Definitions to Quantitative Metrics", "url": "https://openreview.net/forum?id=tvQ3XCKWbB", "detail_url": "https://openreview.net/forum?id=tvQ3XCKWbB", "authors": "Yivan Zhang,Masashi Sugiyama", "tags": "NIPS 2024,Poster", "abstract": "Disentangling the explanatory factors in complex data is a promising approach for generalizable and data-efficient representation learning. While a variety of quantitative metrics for learning and evaluating disentangled representations have been proposed, it remains unclear what properties these metrics truly quantify. In this work, we establish algebraic relationships between logical definitions and quantitative metrics to derive theoretically grounded disentanglement metrics. Concretely, we introduce a compositional approach for converting a higher-order predicate into a real-valued quantity by replacing (i) equality with a strict premetric, (ii) the Heyting algebra of binary truth values with a quantale of continuous values, and (iii) quantifiers with aggregators. The metrics induced by logical definitions have strong theoretical guarantees, and some of them are easily differentiable and can be used as learning objectives directly. Finally, we empirically demonstrate the effectiveness of the proposed metrics by isolating different aspects of disentangled representations.", "pdf": "https://openreview.net/pdf/1c6c34020b56e235277af4b47085e69bbf792fea.pdf"} {"title": "Exact, Tractable Gauss-Newton Optimization in Deep Reversible Architectures Reveal Poor Generalization", "url": "https://openreview.net/forum?id=p37NlKi9vl", "detail_url": "https://openreview.net/forum?id=p37NlKi9vl", "authors": "Davide Buffelli,Jamie McGowan,Wangkun Xu,Alexandru Cioba,Da-shan Shiu,Guillaume Hennequin,Alberto Bernacchia", "tags": "NIPS 2024,Poster", "abstract": "Second-order optimization has been shown to accelerate the training of deep neural networks in many applications, often yielding faster progress per iteration on the training loss compared to first-order optimizers. However, the generalization properties of second-order methods are still being debated. Theoretical investigations have proved difficult to carry out outside the tractable settings of heavily simplified model classes - thus, the relevance of existing theories to practical deep learning applications remains unclear. Similarly, empirical studies in large-scale models and real datasets are significantly confounded by the necessity to approximate second-order updates in practice. 
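For reference, the exact Gauss-Newton update whose generalization behaviour is studied next is the textbook one for least-squares objectives. A minimal numpy sketch on a toy curve fit (the standard update, not the paper's reversible-network machinery):

```python
import numpy as np

def gauss_newton_step(theta, residual, jacobian, damping=1e-8):
    # One exact Gauss-Newton update for 0.5 * ||r(theta)||^2:
    # solve (J^T J + damping * I) delta = J^T r and step to theta - delta.
    r, J = residual(theta), jacobian(theta)
    H = J.T @ J + damping * np.eye(theta.size)
    return theta - np.linalg.solve(H, J.T @ r)

# Toy least-squares fit of y = a * exp(b * t):
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * t)
residual = lambda th: th[0] * np.exp(th[1] * t) - y
jacobian = lambda th: np.stack([np.exp(th[1] * t),
                                th[0] * t * np.exp(th[1] * t)], axis=1)

theta = np.array([1.0, 1.0])
for _ in range(10):
    theta = gauss_newton_step(theta, residual, jacobian)
print(theta)  # converges to ~[2.0, 1.5]
```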
It is often unclear whether the observed generalization behaviour arises specifically from the second-order nature of the parameter updates, or instead reflects the specific structured (e.g. Kronecker) approximations used or any damping-based interpolation towards first-order updates. Here, we show for the first time that exact Gauss-Newton (GN) updates take on a tractable form in a class of deep reversible architectures that are sufficiently expressive to be meaningfully applied to common benchmark datasets. We exploit this novel setting to study the training and generalization properties of the GN optimizer. We find that exact GN generalizes poorly. In the mini-batch training setting, this manifests as rapidly saturating progress even on the training loss, with parameter updates found to overfit each mini-batch without producing the features that would support generalization to other mini-batches. In contrast to previous work, we show that our experiments run in the feature learning regime, in which the neural tangent kernel (NTK) changes during the course of training. However, changes in the NTK are not associated with any significant change in neural representations, explaining the lack of generalization.", "pdf": "https://openreview.net/pdf/86472b35b8afcaf820a9cce70de715247435a5a4.pdf"} {"title": "FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding?", "url": "https://openreview.net/forum?id=JiRGxrqHh0", "detail_url": "https://openreview.net/forum?id=JiRGxrqHh0", "authors": "Marco Bornstein,Amrit Bedi,Abdirisak Mohamed,Furong Huang", "tags": "NIPS 2024,Poster", "abstract": "Standard federated learning (FL) approaches are vulnerable to the free-rider dilemma: participating agents can contribute little to nothing yet receive a well-trained aggregated model. While prior mechanisms attempt to solve the free-rider dilemma, none have addressed the issue of truthfulness. In practice, adversarial agents can provide false information to the server in order to cheat its way out of contributing to federated training. In an effort to make free-riding-averse federated mechanisms truthful, and consequently less prone to breaking down in practice, we propose FACT. FACT is the first federated mechanism that: (1) eliminates federated free riding by using a penalty system, (2) ensures agents provide truthful information by creating a competitive environment, and (3) encourages agent participation by offering better performance than training alone. Empirically, FACT avoids free-riding when agents are untruthful, and reduces agent loss by over 4x.", "pdf": "https://openreview.net/pdf/784f4088c262af9131746d0ac8f5b5c179af7efc.pdf"} {"title": "Generative Adversarial Model-Based Optimization via Source Critic Regularization", "url": "https://openreview.net/forum?id=3RxcarQFRn", "detail_url": "https://openreview.net/forum?id=3RxcarQFRn", "authors": "Michael S Yao,Yimeng Zeng,Hamsa Bastani,Jacob R. Gardner,James Gee,Osbert Bastani", "tags": "NIPS 2024,Poster", "abstract": "Offline model-based optimization seeks to optimize against a learned surrogate model without querying the true oracle objective function during optimization. Such tasks are commonly encountered in protein design, robotics, and clinical medicine where evaluating the oracle function is prohibitively expensive. However, inaccurate surrogate model predictions are frequently encountered along offline optimization trajectories. 
To address this limitation, we propose *generative adversarial model-based optimization* using **adaptive source critic regularization (aSCR)**\u2014a task- and optimizer-agnostic framework for constraining the optimization trajectory to regions of the design space where the surrogate function is reliable. We propose a computationally tractable algorithm to dynamically adjust the strength of this constraint, and show how leveraging aSCR with standard Bayesian optimization outperforms existing methods on a suite of offline generative design tasks. Our code is available at https://github.com/michael-s-yao/gabo.", "pdf": "https://openreview.net/pdf/5e9c7395873a86fe0b8430642a4679960f1a27ef.pdf"} {"title": "Generative Forests", "url": "https://openreview.net/forum?id=cRlQHncjwT", "detail_url": "https://openreview.net/forum?id=cRlQHncjwT", "authors": "Richard Nock,Mathieu Guillame-Bert", "tags": "NIPS 2024,Poster", "abstract": "We focus on generative AI for a type of data that still represents one of the most prevalent forms of data: tabular data. We introduce a new powerful class of forest-based models fit for such tasks and a simple training algorithm with strong convergence guarantees in a boosting model that parallels that of the original weak / strong supervised learning setting. This algorithm can be implemented by a few tweaks to the most popular induction scheme for decision tree induction (*i.e. supervised learning*) with two classes. Experiments on the quality of generated data display substantial improvements compared to the state of the art. The losses our algorithm minimize and the structure of our models make them practical for related tasks that require fast estimation of a density given a generative model and an observation (even partially specified): such tasks include missing data imputation and density estimation. Additional experiments on these tasks reveal that our models can be notably good contenders to diverse state of the art methods, relying on models as diverse as (or mixing elements of) trees, neural nets, kernels or graphical models.", "pdf": "https://openreview.net/pdf/18f83ad66abbb3fb05282a98775cadb33fbf3c7d.pdf"} {"title": "FedGMark: Certifiably Robust Watermarking for Federated Graph Learning", "url": "https://openreview.net/forum?id=xeviQPXTMU", "detail_url": "https://openreview.net/forum?id=xeviQPXTMU", "authors": "Yuxin Yang,Qiang Li,Yuan Hong,Binghui Wang", "tags": "NIPS 2024,Poster", "abstract": "Federated graph learning (FedGL) is an emerging learning paradigm to collaboratively train graph data from various clients. However, during the development and deployment of FedGL models, they are susceptible to illegal copying and model theft. Backdoor-based watermarking is a well-known method for mitigating these attacks, as it offers ownership verification to the model owner. We take the first step to protect the ownership of FedGL models via backdoor-based watermarking. Existing techniques have challenges in achieving the goal: 1) they either cannot be directly applied or yield unsatisfactory performance; 2) they are vulnerable to watermark removal attacks; and 3) they lack formal guarantees. To address all the challenges, we propose FedGMark, the first certified robust backdoor-based watermarking for FedGL. FedGMark leverages the unique graph structure and client information in FedGL to learn customized and diverse watermarks. 
It also designs a novel GL architecture that facilitates defending against both empirical and theoretical worst-case watermark removal attacks. Extensive experiments validate the promising empirical and provable watermarking performance of FedGMark. Source code is available at: https://github.com/Yuxin104/FedGMark.", "pdf": "https://openreview.net/pdf/75848fdd795ff86e8eff2d9277a1b8057ad9f7d9.pdf"} {"title": "Towards Unsupervised Model Selection for Domain Adaptive Object Detection", "url": "https://openreview.net/forum?id=gYa94o5Gmq", "detail_url": "https://openreview.net/forum?id=gYa94o5Gmq", "authors": "Hengfu Yu,Jinhong Deng,Wen Li,Lixin Duan", "tags": "NIPS 2024,Poster", "abstract": "Evaluating the performance of deep models in new scenarios has drawn increasing attention in recent years due to the wide application of deep learning techniques in various fields. However, while it is possible to collect data from new scenarios, the annotations are not always available. Existing Domain Adaptive Object Detection (DAOD) works usually report their performance by selecting the best model on the validation set or even the test set of the target domain, which is highly impractical in real-world applications. In this paper, we propose a novel unsupervised model selection approach for domain adaptive object detection, which is able to select almost the optimal model for the target domain without using any target labels. Our approach is based on the flat minima principle, i.e., models located in the flat minima region in the parameter space usually exhibit excellent generalization ability. However, traditional methods require labeled data to evaluate how well a model is located in the flat minima region, which is unrealistic for the DAOD task. Therefore, we design a Detection Adaptation Score (DAS) approach to approximately measure the flat minima without using target labels. We show via a generalization bound that the flatness can be viewed as model variance, while the minima depend on the domain distribution distance for the DAOD task. Accordingly, we propose a Flatness Index Score (FIS) to assess the flatness by measuring the classification and localization fluctuation before and after perturbations of model parameters and a Prototypical Distance Ratio (PDR) score to seek the minima by measuring the transferability and discriminability of the models. In this way, the proposed DAS approach can effectively represent the degree of flat minima and evaluate the model generalization ability on the target domain. We have conducted extensive experiments on various DAOD benchmarks and approaches, and the experimental results show that the proposed DAS correlates well with the performance of DAOD models and can be used as an effective tool for model selection after training. The code will be released at https://github.com/HenryYu23/DAS.", "pdf": "https://openreview.net/pdf/4f5187aec7f14ade3c71e6055beef36ca03e4d70.pdf"} {"title": "Bandits with Preference Feedback: A Stackelberg Game Perspective", "url": "https://openreview.net/forum?id=wIE991zhXH", "detail_url": "https://openreview.net/forum?id=wIE991zhXH", "authors": "Barna P\u00e1sztor,Parnian Kassraie,Andreas Krause", "tags": "NIPS 2024,Poster", "abstract": "Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries. 
This model allows for incorporating human feedback into online inference and optimization and has been employed in systems for tuning large language models.\nThe problem is well understood in toy settings with linear target functions or over small finite domains, which limits practical interest.\nTaking the next step, we consider infinite domains and kernelized rewards. In this setting, selecting a pair of actions is quite challenging and requires balancing exploration and exploitation at two levels: within the pair, and along the iterations of the algorithm.\nWe propose MaxMinLCB, which emulates this trade-off as a zero-sum Stackelberg game and chooses action pairs that are informative and have favorable reward values. MaxMinLCB consistently outperforms algorithms in the literature and satisfies an anytime-valid rate-optimal regret guarantee. This is owed to our novel preference-based confidence sequences for kernelized logistic estimators, which are of independent interest.", "pdf": "https://openreview.net/pdf/06881d22cfddbf3f7f9bc53464ceb077f58a0c51.pdf"} {"title": "Random Cycle Coding: Lossless Compression of Cluster Assignments via Bits-Back Coding", "url": "https://openreview.net/forum?id=XkvNQPDFqV", "detail_url": "https://openreview.net/forum?id=XkvNQPDFqV", "authors": "Daniel Severo,Ashish J Khisti,Alireza Makhzani", "tags": "NIPS 2024,Poster", "abstract": "We present an optimal method for encoding cluster assignments of arbitrary data sets. Our method, Random Cycle Coding (RCC), encodes data sequentially and sends assignment information as cycles of the permutation defined by the order of encoded elements. RCC does not require any training and its worst-case complexity scales quasi-linearly with the size of the largest cluster. We characterize the achievable bit rates as a function of cluster sizes and number of elements, showing RCC consistently outperforms previous methods while requiring less compute and memory resources. Experiments show RCC can save up to $2$ bytes per element when applied to vector databases, and removes the need for assigning integer ids to identify vectors, translating to savings of up to $70\\%$ in vector database systems for similarity search applications.", "pdf": "https://openreview.net/pdf/17ec4273a2f1dccfb09e5e33384223bd18983e2b.pdf"} {"title": "Binary Search with Distributional Predictions", "url": "https://openreview.net/forum?id=JEKXTLjEIq", "detail_url": "https://openreview.net/forum?id=JEKXTLjEIq", "authors": "Michael Dinitz,Sungjin Im,Thomas Lavastida,Benjamin Moseley,Aidin Niaparast,Sergei Vassilvitskii", "tags": "NIPS 2024,Poster", "abstract": "Algorithms with (machine-learned) predictions is a powerful framework for combining traditional worst-case algorithms with modern machine learning. However, the vast majority of work in this space assumes that the prediction itself is non-probabilistic, even if it is generated by some stochastic process (such as a machine learning system). This is a poor fit for modern ML, particularly modern neural networks, which naturally generate a *distribution*. We initiate the study of algorithms with *distributional* predictions, where the prediction itself is a distribution. We focus on one of the simplest yet fundamental settings: binary search (or searching a sorted array). \n This setting has one of the simplest algorithms with a point prediction, but what happens if the prediction is a distribution? 
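One natural way to use a distributional prediction, sketched below, is to probe the median of the predicted mass restricted to the active interval rather than the interval midpoint (an illustrative heuristic in the spirit of the setting, not the paper's algorithm):

```python
import numpy as np

def distributional_search(arr, target, p_hat):
    # An accurate p_hat roughly halves the remaining probability mass per
    # probe, giving ~H(p) expected comparisons; an inaccurate one still leaves
    # the usual logarithmic worst case for this sorted-array search.
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo < hi:
        mass = p_hat[lo:hi + 1]
        cdf = np.cumsum(mass / mass.sum())
        mid = min(lo + int(np.searchsorted(cdf, 0.5)), hi - 1)
        probes += 1
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo, probes

arr = np.arange(1024)
p_hat = np.exp(-0.5 * ((arr - 300.0) / 5.0) ** 2)  # sharp prediction near 300
print(distributional_search(arr, 301, p_hat))       # far fewer probes than log2(1024)
```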
We show that this is a richer setting: there are simple distributions where using the classical prediction-based algorithm with any single prediction does poorly. \n Motivated by this, as our main result, we give an algorithm with query complexity\n $O(H(p) + \\log \\eta)$, where $H(p)$ is the entropy of the true distribution $p$ and $\\eta$ is the earth mover's distance between $p$ and the predicted distribution $\\hat p$. This also yields the first *distributionally-robust* algorithm for the classical problem of computing an optimal binary search tree given a distribution over target keys.\n We complement this with a lower bound showing that this query complexity is essentially optimal (up to constants), and experiments validating the practical usefulness of our algorithm.", "pdf": "https://openreview.net/pdf/f4e13234d067a36ff946a69dcb75ed2dd9d33eaf.pdf"} {"title": "Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning", "url": "https://openreview.net/forum?id=XAKALzI3Gw", "detail_url": "https://openreview.net/forum?id=XAKALzI3Gw", "authors": "Divyam Madaan,Taro Makino,Sumit Chopra,Kyunghyun Cho", "tags": "NIPS 2024,Poster", "abstract": "Supervised multi-modal learning involves mapping multiple modalities to a target label. Previous studies in this field have concentrated on capturing in isolation either the inter-modality dependencies (the relationships between different modalities and the label) or the intra-modality dependencies (the relationships within a single modality and the label). We argue that these conventional approaches that rely solely on either inter- or intra-modality dependencies may not be optimal in general. We view the multi-modal learning problem from the lens of generative models where we consider the target as a source of multiple modalities and the interaction between them. Towards that end, we propose inter- \\& intra-modality modeling (I2M2) framework, which captures and integrates both the inter- and intra-modality dependencies, leading to more accurate predictions. We evaluate our approach using real-world healthcare and vision-and-language datasets with state-of-the-art models, demonstrating superior performance over traditional methods focusing only on one type of modality dependency. The code is available at https://github.com/divyam3897/I2M2.", "pdf": "https://openreview.net/pdf/d6ad6816d3dcf864a9e9d4f4d60aeec6a8b9b16b.pdf"} {"title": "Sparsity-Agnostic Linear Bandits with Adaptive Adversaries", "url": "https://openreview.net/forum?id=jIabKyXOTt", "detail_url": "https://openreview.net/forum?id=jIabKyXOTt", "authors": "Tianyuan Jin,Kyoungseok Jang,Nicol\u00f2 Cesa-Bianchi", "tags": "NIPS 2024,Poster", "abstract": "We study stochastic linear bandits where, in each round, the learner receives a set of actions (i.e., feature vectors), from which it chooses an element and obtains a stochastic reward. The expected reward is a fixed but unknown linear function of the chosen action. We study \\emph{sparse} regret bounds, that depend on the number $S$ of non-zero coefficients in the linear reward function. Previous works focused on the case where $S$ is known, or the action sets satisfy additional assumptions. In this work, we obtain the first sparse regret bounds that hold when $S$ is unknown and the action sets are adversarially generated. Our techniques combine online-to-confidence-set conversions with a novel randomized model selection approach over a hierarchy of nested confidence sets. 
When $S$ is known, our analysis recovers state-of-the-art bounds for adversarial action sets. We also show that a variant of our approach, using Exp3 to dynamically select the confidence sets, can be used to improve the empirical performance of stochastic linear bandits while enjoying a regret bound with optimal dependence on the time horizon.", "pdf": "https://openreview.net/pdf/a11f10df8daa8c02c35358d41e26ea611b0edc1b.pdf"} {"title": "On Differentially Private Subspace Estimation in a Distribution-Free Setting", "url": "https://openreview.net/forum?id=aCcHVnwNlf", "detail_url": "https://openreview.net/forum?id=aCcHVnwNlf", "authors": "Eliad Tsfadia", "tags": "NIPS 2024,Poster", "abstract": "Private data analysis faces a significant challenge known as the curse of dimensionality, leading to increased costs. However, many datasets possess an inherent low-dimensional structure. For instance, during optimization via gradient descent, the gradients frequently reside near a low-dimensional subspace. If the low-dimensional structure could be privately identified using a small amount of points, we could avoid paying for the high ambient dimension.\n\nOn the negative side, Dwork, Talwar, Thakurta, and Zhang (STOC 2014) proved that privately estimating subspaces, in general, requires an amount of points that has a polynomial dependency on the dimension. However, their bounds do not rule out the possibility to reduce the number of points for \"easy\" instances. Yet, providing a measure that captures how much a given dataset is \"easy\" for this task turns out to be challenging, and was not properly addressed in prior works.\n\nInspired by the work of Singhal and Steinke (NeurIPS 2021), we provide the first measures that quantify \"easiness\" as a function of multiplicative singular-value gaps in the input dataset, and support them with new upper and lower bounds. In particular, our results determine the first types of gaps that are sufficient and necessary for estimating a subspace with an amount of points that is independent of the dimension. Furthermore, we realize our upper bounds using a practical algorithm and demonstrate its advantage in high-dimensional regimes compared to prior approaches.", "pdf": "https://openreview.net/pdf/d15b2a369bf21e9f61b5037b88af8416b2f2bf9e.pdf"} {"title": "Periodic agent-state based Q-learning for POMDPs", "url": "https://openreview.net/forum?id=HmMSBhMAw4", "detail_url": "https://openreview.net/forum?id=HmMSBhMAw4", "authors": "Amit Sinha,Matthieu Geist,Aditya Mahajan", "tags": "NIPS 2024,Poster", "abstract": "The standard approach for Partially Observable Markov Decision Processes (POMDPs) is to convert them to a fully observed belief-state MDP. However, the belief state depends on the system model and is therefore not viable in reinforcement learning (RL) settings. A widely used alternative is to use an agent state, which is a model-free, recursively updatable function of the observation history. Examples include frame stacking and recurrent neural networks. Since the agent state is model-free, it is used to adapt standard RL algorithms to POMDPs. However, standard RL algorithms like Q-learning learn a stationary policy. Our main thesis, which we illustrate via examples, is that because the agent state does not satisfy the Markov property, non-stationary agent-state-based policies can outperform stationary ones. 
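A minimal sketch of what a periodic (non-stationary) tabular Q-learner can look like, anticipating the construction described next (an illustration of the idea, not the paper's PASQL implementation):

```python
import numpy as np

class PeriodicQ:
    # Keep one Q-table per phase t mod L, so the greedy policy may be
    # non-stationary with period L. L = 1 recovers standard Q-learning.
    def __init__(self, n_states, n_actions, period, gamma=0.9, lr=0.1):
        self.Q = np.zeros((period, n_states, n_actions))
        self.period, self.gamma, self.lr = period, gamma, lr

    def update(self, t, s, a, r, s_next):
        ph, ph_next = t % self.period, (t + 1) % self.period
        target = r + self.gamma * self.Q[ph_next, s_next].max()
        self.Q[ph, s, a] += self.lr * (target - self.Q[ph, s, a])

    def act(self, t, s):
        return int(self.Q[t % self.period, s].argmax())

agent = PeriodicQ(n_states=4, n_actions=2, period=2)
agent.update(t=0, s=0, a=1, r=1.0, s_next=2)
print(agent.act(t=0, s=0), agent.act(t=1, s=0))  # actions may differ by phase
```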
To leverage this feature, we propose PASQL (periodic agent-state based Q-learning), which is a variant of agent-state-based Q-learning that learns periodic policies. By combining ideas from periodic Markov chains and stochastic approximation, we rigorously establish that PASQL converges to a cyclic limit and characterize the approximation error of the converged periodic policy. Finally, we present a numerical experiment to highlight the salient features of PASQL and demonstrate the benefit of learning periodic policies over stationary policies.", "pdf": "https://openreview.net/pdf/cf78eb41e789b603bf45637ea244ab536b64cfbe.pdf"} {"title": "Pseudo-Private Data Guided Model Inversion Attacks", "url": "https://openreview.net/forum?id=pyqPUf36D2", "detail_url": "https://openreview.net/forum?id=pyqPUf36D2", "authors": "Xiong Peng,Bo Han,Feng Liu,Tongliang Liu,Mingyuan Zhou", "tags": "NIPS 2024,Poster", "abstract": "In model inversion attacks (MIAs), adversaries attempt to recover private training data by exploiting access to a well-trained target model. Recent advancements have improved MIA performance using a two-stage generative framework. This approach first employs a generative adversarial network to learn a fixed distributional prior, which is then used to guide the inversion process during the attack. In this paper, however, we observe that such a fixed prior leads to a low probability of sampling actual private data during the inversion process, due to the inherent gap between the prior distribution and the private data distribution, thereby constraining attack performance. To address this limitation, we propose increasing the density around high-quality pseudo-private data\u2014samples recovered through model inversion that exhibit characteristics of the private training data\u2014by slightly tuning the generator. This strategy effectively increases the probability of sampling actual private data that is close to these pseudo-private data during the inversion process. After integrating our method, the generative model inversion pipeline is strengthened, leading to improvements over state-of-the-art MIAs. This paves the way for new research directions in generative MIAs.", "pdf": "https://openreview.net/pdf/d4eb8bbc60ca2fb9ed32715d6b6b8c101d79cb91.pdf"} {"title": "Occupancy-based Policy Gradient: Estimation, Convergence, and Optimality", "url": "https://openreview.net/forum?id=Nq8enbbaP2", "detail_url": "https://openreview.net/forum?id=Nq8enbbaP2", "authors": "Audrey Huang,Nan Jiang", "tags": "NIPS 2024,Poster", "abstract": "Occupancy functions play an instrumental role in reinforcement learning (RL) for guiding exploration, handling distribution shift, and optimizing general objectives beyond the expected return. Yet, computationally efficient policy optimization methods that use (only) occupancy functions are virtually non-existent. In this paper, we establish the theoretical foundations of model-free policy gradient (PG) methods that compute the gradient through the occupancy for both online and offline RL, without modeling value functions. Our algorithms reduce gradient estimation to squared-loss regression and are computationally oracle-efficient. We characterize the sample complexities of both local and global convergence, accounting for both finite-sample estimation error and the roles of exploration (online) and data coverage (offline). 
Occupancy-based PG naturally handles arbitrary offline data distributions, and, with one-line algorithmic changes, can be adapted to optimize any differentiable objective functional.", "pdf": "https://openreview.net/pdf/1f79d84c65aaca4bd4e75487eb5e1c8f5a82eb55.pdf"} {"title": "Foundation Inference Models for Markov Jump Processes", "url": "https://openreview.net/forum?id=f4v7cmm5sC", "detail_url": "https://openreview.net/forum?id=f4v7cmm5sC", "authors": "David Berghaus,Kostadin Cvejoski,Patrick Seifner,Cesar Ojeda,Ramses J Sanchez", "tags": "NIPS 2024,Poster", "abstract": "Markov jump processes are continuous-time stochastic processes which describe dynamical systems evolving in discrete state spaces. These processes find wide application in the natural sciences and machine learning, but their inference is known to be far from trivial. In this work we introduce a methodology for *zero-shot inference* of Markov jump processes (MJPs), on bounded state spaces, from noisy and sparse observations, which consists of two components. First, a broad probability distribution over families of MJPs, as well as over possible observation times and noise mechanisms, with which we simulate a synthetic dataset of hidden MJPs and their noisy observations. Second, a neural recognition model that processes subsets of the simulated observations, and that is trained to output the initial condition and rate matrix of the target MJP in a supervised way. We empirically demonstrate that *one and the same* (pretrained) recognition model can infer, *in a zero-shot fashion*, hidden MJPs evolving in state spaces of different dimensionalities. Specifically, we infer MJPs which describe (i) discrete flashing ratchet systems, which are a type of Brownian motor, and the conformational dynamics in (ii) molecular simulations, (iii) experimental ion channel data and (iv) simple protein folding models. What is more, we show that our model performs on par with state-of-the-art models which are trained on the target datasets.\n\nOur pretrained model is available online.", "pdf": "https://openreview.net/pdf/bd723bf66a821d98f129f165535ca22d06223f32.pdf"} {"title": "Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptative Residual Module", "url": "https://openreview.net/forum?id=VywZsAGhp0", "detail_url": "https://openreview.net/forum?id=VywZsAGhp0", "authors": "Jingbo Zhou,Yixuan Du,Ruqiong Zhang,Jun Xia,Zhizhi Yu,Zelin Zang,Di Jin,Carl Yang,Rui Zhang,Stan Z. Li", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs), a type of neural network that can learn from graph-structured data through neighborhood information aggregation, have shown superior performance in various downstream tasks. However, as the number of layers increases, node representations become indistinguishable, which is known as over-smoothing. To address this issue, many residual methods have emerged. In this paper, we focus on the over-smoothing issue and related residual methods. Firstly, we revisit over-smoothing from the perspective of overlapping neighborhood subgraphs, and based on this, we explain how residual methods can alleviate over-smoothing by integrating neighborhood subgraphs of multiple orders to avoid the indistinguishability of single high-order neighborhood subgraphs. 
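The subgraph-integration view in the entry above can be made concrete with a toy propagation: plain stacking drives all node features toward the dominant structure of the propagation operator, while a residual mix keeps lower-order neighborhood information alive. The initial-residual form below is one generic choice for illustration, not the paper's PSNR module.

```python
import numpy as np

def normalized_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def propagate(A, X, layers=8, alpha=0.0):
    """alpha = 0: plain stacking, which collapses onto the highest-order
    neighborhood (over-smoothing); alpha > 0: a residual mix that retains
    lower-order neighborhood information at every layer."""
    P, H = normalized_adj(A), X.copy()
    for _ in range(layers):
        H = (1 - alpha) * (P @ H) + alpha * X   # residual back to the input features
    return H
```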
Additionally, we reveal the drawbacks of previous residual methods, such as the lack of node adaptability and severe loss of high-order neighborhood subgraph information, and propose a \\textbf{Posterior-Sampling-based, Node-Adaptive Residual module (PSNR)}. We theoretically demonstrate that PSNR can alleviate the drawbacks of previous residual methods. Furthermore, extensive experiments verify the superiority of the PSNR module in fully observed node classification and missing feature scenarios. Our code\nis available at \\href{https://github.com/jingbo02/PSNR-GNN}{https://github.com/jingbo02/PSNR-GNN}.", "pdf": "https://openreview.net/pdf/a1966358145d788db2f7d44fc6c4a141efaf1544.pdf"} {"title": "Fair Online Bilateral Trade", "url": "https://openreview.net/forum?id=I90ypQpLgL", "detail_url": "https://openreview.net/forum?id=I90ypQpLgL", "authors": "Fran\u00e7ois Bachoc,Nicol\u00f2 Cesa-Bianchi,Tommaso Cesari,Roberto Colomboni", "tags": "NIPS 2024,Poster", "abstract": "In online bilateral trade, a platform posts prices to incoming pairs of buyers and sellers that have private valuations for a certain good. If the price is lower than the buyers' valuation and higher than the sellers' valuation, then a trade takes place. Previous work focused on the platform perspective, with the goal of setting prices maximizing the *gain from trade* (the sum of sellers' and buyers' utilities). Gain from trade is, however, potentially unfair to traders, as they may receive highly uneven shares of the total utility. In this work we enforce fairness by rewarding the platform with the _fair gain from trade_, defined as the minimum between sellers' and buyers' utilities.\nAfter showing that any no-regret learning algorithm designed to maximize the sum of the utilities may fail badly with fair gain from trade, we present our main contribution: a complete characterization of the regret regimes for fair gain from trade when, after each interaction, the platform only learns whether each trader accepted the current price. Specifically, we prove the following regret bounds: $\\Theta(\\ln T)$ in the deterministic setting, $\\Omega(T)$ in the stochastic setting, and $\\tilde{\\Theta}(T^{2/3})$ in the stochastic setting when sellers' and buyers' valuations are independent of each other. We conclude by providing tight regret bounds when, after each interaction, the platform is allowed to observe the true traders' valuations.", "pdf": "https://openreview.net/pdf/1cd9bf11394c374fe3b613a6cb19dee22a814bcc.pdf"} {"title": "Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient", "url": "https://openreview.net/forum?id=vU1SiBb57j", "detail_url": "https://openreview.net/forum?id=vU1SiBb57j", "authors": "Zechu Li,Rickmer Krohn,Tao Chen,Anurag Ajay,Pulkit Agrawal,Georgia Chalvatzaki", "tags": "NIPS 2024,Poster", "abstract": "Deep reinforcement learning (RL) algorithms typically parameterize the policy as a deep network that outputs either a deterministic action or a stochastic one modeled as a Gaussian distribution, hence restricting learning to a single behavioral mode. Meanwhile, diffusion models emerged as a powerful framework for multimodal learning. However, the use of diffusion policies in online RL is hindered by the intractability of policy likelihood approximation, as well as the greedy objective of RL methods that can easily skew the policy to a single mode. 
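For the Fair Online Bilateral Trade entry above, the two reward notions are easy to state as code: a trade occurs when the posted price lies between the seller's and the buyer's valuations, and fairness replaces the sum of the two utilities with their minimum.

```python
def gain_from_trade(price, seller_val, buyer_val, fair=False):
    """Utilities of one posted-price interaction: the seller earns price - seller_val,
    the buyer earns buyer_val - price; a trade occurs only if both are nonnegative."""
    if not (seller_val <= price <= buyer_val):
        return 0.0                       # no trade, no utility on either side
    seller_u = price - seller_val
    buyer_u = buyer_val - price
    return min(seller_u, buyer_u) if fair else seller_u + buyer_u

# e.g. price 5 with valuations (2, 6): gain from trade is 4, fair gain from trade is 1,
# showing how a price maximizing the sum can leave one side with almost nothing.
```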
This paper presents Deep Diffusion Policy Gradient (DDiffPG), a novel actor-critic algorithm that learns multimodal policies, parameterized as diffusion models, from scratch while discovering and maintaining versatile behaviors. DDiffPG explores and discovers multiple modes through off-the-shelf unsupervised clustering combined with novelty-based intrinsic motivation. DDiffPG forms a multimodal training batch and utilizes mode-specific Q-learning to mitigate the inherent greediness of the RL objective, ensuring the improvement of the diffusion policy across all modes. Our approach further allows the policy to be conditioned on mode-specific embeddings to explicitly control the learned modes. Empirical studies validate DDiffPG's capability to master multimodal behaviors in complex, high-dimensional continuous control tasks with sparse rewards, and showcase proof-of-concept dynamic online replanning when navigating mazes with unseen obstacles. Our project page is available at https://supersglzc.github.io/projects/ddiffpg/.", "pdf": "https://openreview.net/pdf/c1d49e0a18ba30e91f8393bb7381467ee4dc0bc6.pdf"} {"title": "Distributional Reinforcement Learning with Regularized Wasserstein Loss", "url": "https://openreview.net/forum?id=CiEynTpF28", "detail_url": "https://openreview.net/forum?id=CiEynTpF28", "authors": "Ke Sun,Yingnan Zhao,Wulong Liu,Bei Jiang,Linglong Kong", "tags": "NIPS 2024,Poster", "abstract": "The empirical success of distributional reinforcement learning (RL) relies heavily on the choice of distribution divergence equipped with an appropriate distribution representation. In this paper, we propose \\textit{Sinkhorn distributional RL (SinkhornDRL)}, which leverages Sinkhorn divergence\u2014a regularized Wasserstein loss\u2014to minimize the difference between current and target Bellman return distributions. Theoretically, we prove the contraction properties of SinkhornDRL, aligning with the interpolation nature of Sinkhorn divergence between Wasserstein distance and Maximum Mean Discrepancy (MMD). SinkhornDRL enriches the family of distributional RL algorithms, and our investigation of its relationships with existing approaches helps interpret algorithm behaviors. Empirically, we show that SinkhornDRL consistently outperforms or matches existing algorithms on the Atari games suite and particularly stands out in the multi-dimensional reward setting. \\thanks{Code is available in \\url{https://github.com/datake/SinkhornDistRL}.}.", "pdf": "https://openreview.net/pdf/31b322addb4cab3960ac3646c3629caf3d42851d.pdf"} {"title": "Addressing Spectral Bias of Deep Neural Networks by Multi-Grade Deep Learning", "url": "https://openreview.net/forum?id=IoRT7EhFap", "detail_url": "https://openreview.net/forum?id=IoRT7EhFap", "authors": "Ronglong Fang,Yuesheng Xu", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks (DNNs) have showcased their remarkable precision in approximating smooth functions. However, they suffer from the {\\it spectral bias}, wherein DNNs typically exhibit a tendency to prioritize the learning of lower-frequency components of a function, struggling to effectively capture its high-frequency features. This paper addresses this issue. Notice that a function having only low frequency components may be well-represented by a shallow neural network (SNN), a network having only a few layers. 
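The observation that composing slowly varying maps produces rapid oscillation, which the next sentence of this entry builds on, has a classical worked instance: the degree-2 Chebyshev map $T_2(x) = 2x^2 - 1$ varies slowly on $[-1, 1]$, yet its $k$-fold composition equals $\cos(2^k \arccos x)$ and oscillates $2^k$ times. This illustrates the principle only, not the MGDL construction itself.

```python
import numpy as np

T2 = lambda x: 2 * x**2 - 1           # low-frequency building block on [-1, 1]

def compose_T2(x, k):
    for _ in range(k):                # k-fold composition T2 o T2 o ... o T2
        x = T2(x)
    return x                          # equals cos(2**k * arccos(x)), i.e. T_{2^k}

x = np.linspace(-1, 1, 5001)
for k in (1, 3, 5):
    y = compose_T2(x, k)
    assert np.allclose(y, np.cos(2**k * np.arccos(x)), atol=1e-6)
    # the number of sign changes grows like 2**k: high frequency from low-frequency parts
    print(k, "sign changes:", int(np.sum(np.diff(np.sign(y)) != 0)))
```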
By observing that compositions of low-frequency functions can effectively approximate a high-frequency function, we propose to learn a function containing high-frequency components by composing several SNNs, each of which learns certain low-frequency information from the given data. We implement the proposed idea by exploiting the multi-grade deep learning (MGDL) model, a recently introduced model that trains a DNN incrementally, grade by grade, with each grade learning, from the residue of the previous grade, only an SNN (with trainable parameters) composed with the SNNs (with fixed parameters) trained in the preceding grades, which serve as features. We apply MGDL to synthetic, manifold, colored images, and MNIST datasets, all characterized by the presence of high-frequency features. Our study reveals that MGDL excels at representing functions containing high-frequency information. Specifically, the neural networks learned in each grade adeptly capture some low-frequency information, allowing their compositions with the SNNs learned in the previous grades to effectively represent the high-frequency features. Our experimental results underscore the efficacy of MGDL in addressing the spectral bias inherent in DNNs. By leveraging MGDL, we offer insights into overcoming the spectral bias limitation of DNNs, thereby enhancing the performance and applicability of deep learning models in tasks requiring the representation of high-frequency information. This study confirms that the proposed method offers a promising solution to address the spectral bias of DNNs. The code is available on GitHub: \\href{https://github.com/Ronglong-Fang/AddressingSpectralBiasviaMGDL}{\\texttt{Addressing Spectral Bias via MGDL}}.", "pdf": "https://openreview.net/pdf/7c1832b55ff98718216586022306db75081c62a3.pdf"} {"title": "Mobility-LLM: Learning Visiting Intentions and Travel Preference from Human Mobility Data with Large Language Models", "url": "https://openreview.net/forum?id=0feJEykDRx", "detail_url": "https://openreview.net/forum?id=0feJEykDRx", "authors": "Letian Gong,Yan Lin,Xinyue Zhang,Yiwen Lu,Xuedi Han,Yichen Liu,Shengnan Guo,Youfang Lin,Huaiyu Wan", "tags": "NIPS 2024,Poster", "abstract": "Location-based services (LBS) have accumulated extensive human mobility data on diverse behaviors through check-in sequences. These sequences offer valuable insights into users\u2019 intentions and preferences. Yet, existing models analyzing check-in sequences fail to consider the semantics contained in these sequences, which closely reflect human visiting intentions and travel preferences, leading to an incomplete comprehension. Drawing inspiration from the exceptional semantic understanding and contextual information processing capabilities of large language models (LLMs) across various domains, we present Mobility-LLM, a novel framework that leverages LLMs to analyze check-in sequences for multiple tasks. Since LLMs cannot directly interpret check-ins, we reprogram these sequences to help LLMs comprehensively understand the semantics of human visiting intentions and travel preferences. \nSpecifically, we introduce a visiting intention memory network (VIMN) to capture the visiting intentions at each record, along with a shared pool of human travel preference prompts (HTPP) to guide the LLM in understanding users\u2019 travel preferences. These components enhance the model\u2019s ability to extract and leverage semantic information from human mobility data effectively. 
Extensive experiments on four benchmark datasets and three downstream tasks demonstrate that our approach significantly outperforms existing models, underscoring the effectiveness of Mobility-LLM in advancing our understanding of human mobility data within LBS contexts.", "pdf": "https://openreview.net/pdf/b5b10ea80aa4cbfb7065d5ea7ccf3f8458f5f5d6.pdf"} {"title": "A Concept-Based Explainability Framework for Large Multimodal Models", "url": "https://openreview.net/forum?id=MvjLRFntW6", "detail_url": "https://openreview.net/forum?id=MvjLRFntW6", "authors": "Jayneel Parekh,Pegah KHAYATAN,Mustafa Shukor,Alasdair Newson,Matthieu Cord", "tags": "NIPS 2024,Poster", "abstract": "Large multimodal models (LMMs) combine unimodal encoders and large language models (LLMs) to perform multimodal tasks. Despite recent advancements towards the interpretability of these models, understanding internal representations of LMMs remains largely a mystery. In this paper, we present a novel framework for the interpretation of LMMs. We propose a dictionary learning based approach, applied to the representation of tokens. The elements of the learned dictionary correspond to our proposed concepts. We show that these concepts are semantically well grounded in both vision and text. Thus we refer to these as ``multi-modal concepts''. \nWe qualitatively and quantitatively evaluate the results of the learnt concepts. We show that the extracted multimodal concepts are useful to interpret representations of test samples. Finally, we evaluate the disentanglement between different concepts and the quality of grounding concepts visually and textually. Our implementation is publicly available: https://github.com/mshukor/xl-vlms.", "pdf": "https://openreview.net/pdf/0ee504fedb132439837292d208d67761ce3e8cc7.pdf"} {"title": "Certified Machine Unlearning via Noisy Stochastic Gradient Descent", "url": "https://openreview.net/forum?id=h3k2NXu5bJ", "detail_url": "https://openreview.net/forum?id=h3k2NXu5bJ", "authors": "Eli Chien,Haoyu Peter Wang,Ziang Chen,Pan Li", "tags": "NIPS 2024,Poster", "abstract": "``The right to be forgotten,'' ensured by laws for user data privacy, is becoming increasingly important. Machine unlearning aims to efficiently remove the effect of certain data points on the trained model parameters so that it can be approximately the same as if one had retrained the model from scratch. We propose to leverage projected noisy stochastic gradient descent for unlearning and establish its first approximate unlearning guarantee under the convexity assumption. Our approach exhibits several benefits, including provable complexity saving compared to retraining, and supporting sequential and batch unlearning. Both of these benefits are closely related to our new results on the infinite Wasserstein distance tracking of the adjacent (un)learning processes. 
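The unlearning primitive in the entry above is, at its core, projected noisy (stochastic) gradient descent. A generic sketch for a convex objective follows; the step size, noise scale, and projection radius are unspecified assumptions here, and calibrating them is precisely what the paper's analysis does.

```python
import numpy as np

def project_ball(w, radius):
    n = np.linalg.norm(w)
    return w if n <= radius else w * (radius / n)

def projected_noisy_gd(grad_fn, dim, steps, eta=0.05, sigma=0.1, radius=1.0, seed=0):
    """w_{t+1} = Proj( w_t - eta * grad + Gaussian noise ); the injected noise is
    what makes nearby (un)learning processes statistically indistinguishable."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(steps):
        g = grad_fn(w)                                   # gradient on the current dataset
        w = project_ball(w - eta * g + sigma * rng.standard_normal(dim), radius)
    return w

# Unlearning a point then amounts to re-running the same noisy recursion on the
# dataset minus that point, warm-started from the current parameters, for far
# fewer steps than retraining from scratch (the sketch omits that bookkeeping).
```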
Extensive experiments show that our approach achieves a similar utility under the same privacy constraint while using $2\\%$ and $10\\%$ of the gradient computations compared with the state-of-the-art gradient-based approximate unlearning methods for mini-batch and full-batch settings, respectively.", "pdf": "https://openreview.net/pdf/7b9f36f60de25eb0d2fd523b8fd15b000df921c0.pdf"} {"title": "Structured Learning of Compositional Sequential Interventions", "url": "https://openreview.net/forum?id=FsA0OSsdzJ", "detail_url": "https://openreview.net/forum?id=FsA0OSsdzJ", "authors": "Jialin Yu,Andreas Koukorinis,Nicol\u00f2 Colombo,Yuchen Zhu,Ricardo Silva", "tags": "NIPS 2024,Poster", "abstract": "We consider sequential treatment regimes where each unit is exposed to combinations of interventions over time. When interventions are described by qualitative labels, such as \"close schools for a month due to a pandemic\" or \"promote this podcast to this user during this week\", it is unclear which structural assumptions allow us to generalize behavioral predictions to previously unseen combinations of interventions. Standard black-box approaches mapping sequences of categorical variables to outputs are applicable, but they rely on poorly understood assumptions about how reliable generalization can be obtained, and may underperform under sparse sequences, temporal variability, and large action spaces. To address this, we pose an explicit model of composition, that is, how the effect of sequential interventions can be isolated into modules, clarifying which data conditions allow for the identification of their combined effect at different units and time steps. We show the identification properties of our compositional model, inspired by advances in causal matrix factorization methods. Our focus is on predictive models for novel compositions of interventions instead of matrix completion tasks and causal effect estimation. We compare our approach to flexible but generic black-box models to illustrate how structure aids prediction in sparse data conditions.", "pdf": "https://openreview.net/pdf/497f92718b568c28124973e78c69dbecd2829457.pdf"} {"title": "CriticEval: Evaluating Large-scale Language Model as Critic", "url": "https://openreview.net/forum?id=ZsxZ65YqL1", "detail_url": "https://openreview.net/forum?id=ZsxZ65YqL1", "authors": "Tian Lan,Wenwei Zhang,Chen Xu,Heyan Huang,Dahua Lin,Kai Chen,Xian-Ling Mao", "tags": "NIPS 2024,Poster", "abstract": "Critique ability, i.e., the capability of Large Language Models (LLMs) to identify and rectify flaws in responses, is crucial for their applications in self-improvement and scalable oversight. While numerous studies have been proposed to evaluate the critique ability of LLMs, their comprehensiveness and reliability are still limited. To overcome this problem, we introduce CriticEval, a novel benchmark designed to comprehensively and reliably evaluate the critique ability of LLMs. Specifically, to ensure comprehensiveness, CriticEval evaluates critique ability from four dimensions across nine diverse task scenarios. It evaluates both scalar-valued and textual critiques, targeting responses of varying quality. To ensure reliability, a large number of critiques are annotated to serve as references, enabling GPT-4 to evaluate textual critiques reliably. Extensive evaluations of open-source and closed-source LLMs first validate the reliability of evaluation in CriticEval. 
Then, experimental results demonstrate the promising potential of open-source LLMs, the effectiveness of critique datasets, and several intriguing relationships between critique ability and critical factors, including task types, response qualities, and critique dimensions.", "pdf": "https://openreview.net/pdf/41b4ec0b565683230d9c6a7a9c5954c55717213f.pdf"} {"title": "Enhancing Consistency-Based Image Generation via Adversarialy-Trained Classification and Energy-Based Discrimination", "url": "https://openreview.net/forum?id=uBVCPAMDGk", "detail_url": "https://openreview.net/forum?id=uBVCPAMDGk", "authors": "Shelly Golan,Roy Ganz,Michael Elad", "tags": "NIPS 2024,Poster", "abstract": "The recently introduced Consistency models pose an efficient alternative to diffusion algorithms, enabling rapid and good quality image synthesis. These methods overcome the slowness of diffusion models by directly mapping noise to data, while maintaining (relatively) simpler training. Consistency models enable fast one- or few-step generation, but they typically fall somewhat short in sample quality when compared to their diffusion origins. \nIn this work we propose a novel and highly effective technique for post-processing Consistency-based generated images, enhancing their perceptual quality. Our approach utilizes a joint classifier-discriminator model, in which both portions are trained adversarially. While the classifier aims to grade an image based on its assignment to a designated class, the discriminator portion of the very same network leverages the softmax values to assess the proximity of the input image to the targeted data manifold, thereby serving as an Energy-based Model. By employing example-specific projected gradient iterations under the guidance of this joint machine, we refine synthesized images and achieve improved FID scores on the ImageNet 64x64 dataset for both Consistency-Training and Consistency-Distillation techniques.", "pdf": "https://openreview.net/pdf/d198174754c207b8bf18c138254bbe39902f45e4.pdf"} {"title": "Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm", "url": "https://openreview.net/forum?id=VwUTz2pOnD", "detail_url": "https://openreview.net/forum?id=VwUTz2pOnD", "authors": "Sattar Vakili,Julia Olkhovskaya", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement Learning (RL) utilizing kernel ridge regression to predict the expected value function represents a powerful method with great representational capacity. This setting is a highly versatile framework amenable to analytical results. We consider kernel-based function approximation for RL in the infinite horizon average reward setting, also referred to as the undiscounted setting. We propose an *optimistic* algorithm, similar to acquisition function based algorithms in the special case of bandits. We establish novel *no-regret* performance guarantees for our algorithm, under kernel-based modelling assumptions. Additionally, we derive a novel confidence interval for the kernel-based prediction of the expected value function, applicable across various RL problems.", "pdf": "https://openreview.net/pdf/0c1fea247b79746ce21fa0d29b2c2a588cf514c0.pdf"} {"title": "Fair Wasserstein Coresets", "url": "https://openreview.net/forum?id=ylceJ2xIw5", "detail_url": "https://openreview.net/forum?id=ylceJ2xIw5", "authors": "Zikai Xiong,Niccolo Dalmasso,Shubham Sharma,Freddy Lecue,Daniele Magazzeni,Vamsi K. 
Potluru,Tucker Balch,Manuela Veloso", "tags": "NIPS 2024,Poster", "abstract": "Data distillation and coresets have emerged as popular approaches to generate a smaller representative set of samples for downstream learning tasks to handle large-scale datasets. At the same time, machine learning is being increasingly applied to decision-making processes at a societal level, making it imperative for modelers to address inherent biases towards subgroups present in the data. While current approaches focus on creating fair synthetic representative samples by optimizing local properties relative to the original samples, their impact on downstream learning processes has yet to be explored. In this work, we present fair Wasserstein coresets ($\\texttt{FWC}$), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks. $\\texttt{FWC}$ uses an efficient majority minimization algorithm to minimize the Wasserstein distance between the original dataset and the weighted synthetic samples while enforcing demographic parity. We show that an unconstrained version of $\\texttt{FWC}$ is equivalent to Lloyd's algorithm for k-medians and k-means clustering. Experiments conducted on both synthetic and real datasets show that $\\texttt{FWC}$: (i) achieves a competitive fairness-performance tradeoff in downstream models compared to existing approaches, (ii) improves downstream fairness when added to the existing training data and (iii) can be used to reduce biases in predictions from large language models (GPT-3.5 and GPT-4).", "pdf": "https://openreview.net/pdf/93985e308e0356a2b95c8e021f79d007aeda2429.pdf"} {"title": "ColJailBreak: Collaborative Generation and Editing for Jailbreaking Text-to-Image Deep Generation", "url": "https://openreview.net/forum?id=eGIzeTmAtE", "detail_url": "https://openreview.net/forum?id=eGIzeTmAtE", "authors": "Yizhuo Ma,Shanmin Pang,Qi Guo,Tianyu Wei,Qing Guo", "tags": "NIPS 2024,Poster", "abstract": "Commercial text-to-image deep generation models (e.g., DALL\u00b7E) can produce high-quality images from input language descriptions. These models incorporate a black-box safety filter to prevent the generation of unsafe or unethical content, such as violent, criminal, or hateful imagery. Recent jailbreaking methods generate adversarial prompts capable of bypassing safety filters and producing unsafe content, exposing vulnerabilities in influential commercial models. However, once these adversarial prompts are identified, the safety filter can be updated to prevent the generation of unsafe images. In this work, we propose an effective, simple, and difficult-to-detect jailbreaking solution: generating safe content initially with normal text prompts and then editing the generations to embed unsafe content. The intuition behind this idea is that the deep generation model cannot reject safe generation with normal text prompts, while the editing models focus on modifying the local regions of images and do not involve a safety strategy. However, implementing such a solution is non-trivial, and we need to overcome several challenges: how to automatically determine normal prompts to replace the unsafe ones, and how to perform editable replacement effectively while generating unsafe content naturally. 
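The equivalence stated in the Fair Wasserstein Coresets entry above is easy to ground: with the fairness constraint removed, minimizing the Wasserstein distance to a weighted synthetic set reduces to Lloyd's alternation of assignment and centroid updates. A plain Lloyd loop for k-means, shown for reference only (FWC adds the demographic-parity constraint on top of this):

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: the unconstrained special case FWC reduces to."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]         # random initialization
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
        assign = d.argmin(1)                                      # assignment step
        for j in range(k):                                        # centroid update step
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(0)
    return centers, assign
```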
In this work, we propose the collaborative generation and editing for jailbreaking text-to-image deep generation (ColJailBreak), which comprises three key components: adaptive normal safe substitution, inpainting-driven injection of unsafe content, and contrastive language-image-guided collaborative optimization. We validate our method on three datasets and compare it to two baseline methods. Our method can generate unsafe content through two commercial deep generation models, GPT-4 and DALL\u00b7E 2.", "pdf": "https://openreview.net/pdf/9c420cc1f3c42fc03c3bb81048a0cb18e51db58a.pdf"} {"title": "SelfCodeAlign: Self-Alignment for Code Generation", "url": "https://openreview.net/forum?id=xXRnUU7xTL", "detail_url": "https://openreview.net/forum?id=xXRnUU7xTL", "authors": "Yuxiang Wei,Federico Cassano,Jiawei Liu,Yifeng Ding,Naman Jain,Zachary Mueller,Harm de Vries,Leandro Von Werra,Arjun Guha,LINGMING ZHANG", "tags": "NIPS 2024,Poster", "abstract": "Instruction tuning is a supervised fine-tuning approach that significantly improves the ability of large language models (LLMs) to follow human instructions. For programming tasks, most models are finetuned with costly human-annotated instruction-response pairs or those generated by large, proprietary LLMs, which may not be permitted. We propose SelfCodeAlign, the first fully transparent and permissive pipeline for self-aligning code LLMs without extensive human annotations or distillation. SelfCodeAlign employs the same base model for inference throughout the data generation process. It first extracts diverse coding concepts from high-quality seed snippets to generate new tasks. It then samples multiple responses per task, pairs each with test cases, and validates them in a sandbox environment. Finally, passing examples are selected for instruction tuning. In our primary experiments, we use SelfCodeAlign with CodeQwen1.5-7B to generate a dataset of 74k instruction-response pairs. Finetuning on this dataset leads to a model that achieves a 67.1 pass@1 on HumanEval+, surpassing CodeLlama-70B-Instruct despite being ten times smaller. Across all benchmarks, this finetuned model consistently outperforms the original version trained with OctoPack, the previous state-of-the-art method for instruction tuning without human annotations or distillation. Additionally, we show that SelfCodeAlign is effective across LLMs of various sizes, from 3B to 33B, and that the base models can benefit more from alignment with their own data distribution. We further validate each component\u2019s effectiveness in our pipeline, showing that SelfCodeAlign outperforms both direct distillation from GPT-4o and leading GPT-3.5-based distillation methods, such as OSS-Instruct and Evol-Instruct. SelfCodeAlign has also led to the creation of StarCoder2-Instruct, the first fully transparent, permissively licensed, and self-aligned code LLM that achieves state-of-the-art coding performance. 
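The execution-filtering step in the SelfCodeAlign entry above (sample responses, pair them with test cases, keep what passes in a sandbox) can be sketched with a subprocess runner. A real pipeline would add genuine isolation and resource limits, which this toy version omits.

```python
import subprocess, sys

def passes_tests(solution_code: str, test_code: str, timeout: float = 10.0) -> bool:
    """Run solution + tests in a fresh interpreter; exit code 0 means all asserts passed.
    NOTE: a toy stand-in for a real sandbox (no network/filesystem isolation here)."""
    program = solution_code + "\n\n" + test_code
    try:
        proc = subprocess.run([sys.executable, "-c", program],
                              capture_output=True, timeout=timeout)
        return proc.returncode == 0
    except subprocess.TimeoutExpired:
        return False

# keep only (response, tests) pairs that execute cleanly, as instruction-tuning data
candidates = [("def add(a, b):\n    return a + b", "assert add(2, 3) == 5")]
kept = [(s, t) for s, t in candidates if passes_tests(s, t)]
```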
Overall, SelfCodeAlign shows for the first time that a strong instruction-tuned code LLM can result from self-alignment rather than distillation.", "pdf": "https://openreview.net/pdf/f4cd100d3f9f85fe8c929ea517dc4cbd24143e72.pdf"} {"title": "DDK: Distilling Domain Knowledge for Efficient Large Language Models", "url": "https://openreview.net/forum?id=xgiurUq0ss", "detail_url": "https://openreview.net/forum?id=xgiurUq0ss", "authors": "Jiaheng Liu,Chenchen Zhang,Jinyang Guo,Yuanxing Zhang,Haoran Que,Ken Deng,Zhiqi Bai,Jie Liu,Ge Zhang,Jiakai Wang,Yanan Wu,Congnan Liu,Jiamang Wang,Lin Qu,Wenbo Su,Bo Zheng", "tags": "NIPS 2024,Poster", "abstract": "Despite the advanced capabilities of large language models (LLMs) in various applications, they still face significant computational and storage demands. Knowledge Distillation (KD) has emerged as an effective strategy to improve the performance of a smaller LLM (i.e., the student model) by transferring knowledge from a high-performing LLM (i.e., the teacher model). Prevailing techniques in LLM distillation typically use a black-box model API to generate high-quality pretrained and aligned datasets, or utilize white-box distillation by altering the loss function to better transfer knowledge from the teacher LLM. However, these methods ignore the knowledge differences between the student and teacher LLMs across domains. This results in excessive focus on domains with minimal performance gaps and insufficient attention to domains with large gaps, reducing overall performance. In this paper, we introduce a new LLM distillation framework called DDK, which dynamically adjusts the composition of the distillation dataset in a smooth manner according to the domain performance differences between the teacher and student models, making the distillation process more stable and effective. Extensive evaluations show that DDK significantly improves the performance of student models, outperforming both continuously pretrained baselines and existing knowledge distillation methods by a large margin.", "pdf": "https://openreview.net/pdf/21bd9bcb8bb3c29f998b32ec6cc9d3892d669785.pdf"} {"title": "An Improved Empirical Fisher Approximation for Natural Gradient Descent", "url": "https://openreview.net/forum?id=LmjLRHVCMG", "detail_url": "https://openreview.net/forum?id=LmjLRHVCMG", "authors": "Xiaodong Wu,Wenyi Yu,Chao Zhang,Phil Woodland", "tags": "NIPS 2024,Poster", "abstract": "Approximate Natural Gradient Descent (NGD) methods are an important family of optimisers for deep learning models, which use approximate Fisher information matrices to pre-condition gradients during training. The empirical Fisher (EF) method approximates the Fisher information matrix empirically by reusing the per-sample gradients collected during back-propagation. Despite its ease of implementation, the EF approximation has its theoretical and practical limitations. This paper investigates the *inversely-scaled projection* issue of EF, which is shown to be a major cause of its poor empirical approximation quality. An improved empirical Fisher (iEF) method is proposed to address this issue, which is motivated as a generalised NGD method from a loss reduction perspective, meanwhile retaining the practical convenience of EF. The exact iEF and EF methods are experimentally evaluated using practical deep learning setups, including widely-used setups for parameter-efficient fine-tuning of pre-trained models (T5-base with LoRA and Prompt-Tuning on GLUE tasks, and ViT with LoRA for CIFAR100). 
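The EF baseline that the entry above starts from is compact to state: stack the per-sample gradients reused from back-propagation, form the empirical Fisher as their second moment, and precondition the mean gradient with a damped inverse. The iEF correction modifies this construction and is not reproduced here; the damping value below is an arbitrary placeholder.

```python
import numpy as np

def ef_preconditioned_update(per_sample_grads, damping=1e-3):
    """per_sample_grads: (N, d) matrix whose rows are individual gradients g_i.
    Returns (F_EF + damping * I)^{-1} g_bar, the EF natural-gradient-style step."""
    G = np.asarray(per_sample_grads)
    N, d = G.shape
    F = G.T @ G / N                      # empirical Fisher: mean of g_i g_i^T
    g_bar = G.mean(0)                    # ordinary mini-batch gradient
    return np.linalg.solve(F + damping * np.eye(d), g_bar)
```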
Optimisation experiments show that applying exact iEF directly as an optimiser provides strong convergence and generalisation. It achieves the best test performance and the lowest training loss for the majority of the tasks, even when compared to well-tuned AdamW/Adafactor baselines. Additionally, under a novel empirical evaluation framework, the proposed iEF method shows consistently better approximation quality to exact Natural Gradient updates than both the EF and the more expensive sampled Fisher methods, while demonstrating superior robustness to the choice of damping across tasks and training stages. Improving existing approximate NGD optimisers with iEF is expected to lead to better convergence and robustness. Furthermore, the iEF method also serves as a better approximation method to the Fisher information matrix itself, which enables the improvement of a variety of Fisher-based methods, not limited to the scope of optimisation.", "pdf": "https://openreview.net/pdf/bcc1c16c2835e359053a07006e04f5149dd67981.pdf"} {"title": "How Molecules Impact Cells: Unlocking Contrastive PhenoMolecular Retrieval", "url": "https://openreview.net/forum?id=LQBlSGeOGm", "detail_url": "https://openreview.net/forum?id=LQBlSGeOGm", "authors": "Philip Fradkin,Puria Azadi Moghadam,Karush Suri,Frederik Wenkel,Ali Bashashati,Maciej Sypetkowski,Dominique Beaini", "tags": "NIPS 2024,Poster", "abstract": "Predicting molecular impact on cellular function is a core challenge in therapeutic design. Phenomic experiments, designed to capture cellular morphology, utilize microscopy-based techniques and provide a high-throughput solution for uncovering molecular impact on the cell. In this work, we learn a joint latent space between molecular structures and microscopy phenomic experiments, aligning paired samples with contrastive learning. Specifically, we study the problem of Contrastive PhenoMolecular Retrieval, which consists of zero-shot molecular structure identification conditioned on phenomic experiments. We assess challenges in multi-modal learning of phenomics and molecular modalities such as experimental batch effect, inactive molecule perturbations, and encoding perturbation concentration. We demonstrate improved multi-modal learner retrieval through (1) a uni-modal pre-trained phenomics model, (2) a novel inter sample similarity aware loss, and (3) models conditioned on a representation of molecular concentration. Following this recipe, we propose MolPhenix, a molecular phenomics model. MolPhenix leverages a pre-trained phenomics model to demonstrate significant performance gains across perturbation concentrations, molecular scaffolds, and activity thresholds. In particular, we demonstrate an 8.1-fold improvement in zero-shot molecular retrieval of active molecules over the previous state-of-the-art, reaching 77.33% in top-1% accuracy. 
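The retrieval metric quoted above is simple to compute once both encoders map into the joint space: rank all candidate molecules by cosine similarity to each phenomics embedding and check whether the paired molecule lands within the top fraction. A sketch, assuming paired embedding matrices as inputs:

```python
import numpy as np

def topk_retrieval_accuracy(pheno_emb, mol_emb, top_frac=0.01):
    """pheno_emb[i] and mol_emb[i] are assumed paired (same perturbation).
    Returns the fraction of phenomic queries whose true molecule ranks within
    the top `top_frac` of all candidates by cosine similarity."""
    P = pheno_emb / np.linalg.norm(pheno_emb, axis=1, keepdims=True)
    M = mol_emb / np.linalg.norm(mol_emb, axis=1, keepdims=True)
    sims = P @ M.T                                   # (n_queries, n_candidates)
    true_sim = np.diag(sims)                         # similarity to the paired molecule
    rank = (sims > true_sim[:, None]).sum(1)         # 0 = best possible rank
    cutoff = max(1, int(top_frac * sims.shape[1]))
    return float((rank < cutoff).mean())
```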
These results open the door for machine learning to be applied in virtual phenomics screening, which can significantly benefit drug discovery applications.", "pdf": "https://openreview.net/pdf/8c5af0f26b90bf945460e5be094f8ed0934cce61.pdf"} {"title": "Spiking Neural Network as Adaptive Event Stream Slicer", "url": "https://openreview.net/forum?id=CcNw4mVIxo", "detail_url": "https://openreview.net/forum?id=CcNw4mVIxo", "authors": "Jiahang Cao,Mingyuan Sun,Ziqing Wang,Hao Cheng,Qiang Zhang,shibo zhou,Renjing Xu", "tags": "NIPS 2024,Poster", "abstract": "Event-based cameras are attracting significant interest as they provide rich edge information, high dynamic range, and high temporal resolution. Many state-of-the-art event-based algorithms rely on splitting the event stream into fixed groups, resulting in the omission of crucial temporal information, particularly when dealing with diverse motion scenarios (e.g., high/low speed). In this work, we propose SpikeSlicer, a novel event processing framework capable of splitting event streams adaptively. SpikeSlicer utilizes a low-energy spiking neural network (SNN) to trigger event slicing. To guide the SNN to fire spikes at optimal time steps, we propose the Spiking Position-aware Loss (SPA-Loss) to modulate the neuron's state. Additionally, we develop a Feedback-Update training strategy that refines the slicing decisions using feedback from the downstream artificial neural network (ANN). Extensive experiments demonstrate that our method yields significant performance improvements in event-based object tracking and recognition. Notably, SpikeSlicer provides a brand-new SNN-ANN cooperation paradigm, where the SNN acts as an efficient, low-energy data processor to assist the ANN in improving downstream performance, injecting new perspectives and potential avenues of exploration.", "pdf": "https://openreview.net/pdf/e33c8272f181973d94a72bf07ef9d3dffbd63dd2.pdf"} {"title": "Uncovering Safety Risks of Large Language Models through Concept Activation Vector", "url": "https://openreview.net/forum?id=Uymv9ThB50", "detail_url": "https://openreview.net/forum?id=Uymv9ThB50", "authors": "Zhihao Xu,Ruixuan HUANG,Changyu Chen,Xiting Wang", "tags": "NIPS 2024,Poster", "abstract": "Despite careful safety alignment, current large language models (LLMs) remain vulnerable to various attacks. To further unveil the safety risks of LLMs, we introduce a Safety Concept Activation Vector (SCAV) framework, which effectively guides the attacks by accurately interpreting LLMs' safety mechanisms. We then develop an SCAV-guided attack method that can generate both attack prompts and embedding-level attacks with automatically selected perturbation hyperparameters. Both automatic and human evaluations demonstrate that our attack method significantly improves the attack success rate and response quality while requiring less training data. Additionally, we find that our generated attack prompts may be transferable to GPT-4, and the embedding-level attacks may also be transferred to other white-box LLMs whose parameters are known. Our experiments further uncover the safety risks present in current LLMs. For example, in our evaluation of seven open-source LLMs, we observe an average attack success rate of 99.14%, based on the classic keyword-matching criterion. Finally, we provide insights into the safety mechanism of LLMs. 
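Returning to the SpikeSlicer entry above, its trigger mechanism can be caricatured with a single leaky integrate-and-fire neuron driven by per-bin event counts: the membrane potential leaks between bins and a spike closes the current slice, so dense (fast-motion) stretches get sliced more finely. The trained SNN, SPA-Loss, and Feedback-Update strategy are not reproduced; leak, gain, and threshold below are arbitrary.

```python
import numpy as np

def lif_event_slicer(event_counts, leak=0.9, gain=0.02, threshold=1.0):
    """event_counts[t] = number of events in time bin t.
    Returns the indices of bins where the LIF neuron spikes, i.e. slice boundaries."""
    v, slices = 0.0, []
    for t, c in enumerate(event_counts):
        v = leak * v + gain * c          # leaky integration of the event stream
        if v >= threshold:               # spike -> cut the stream here
            slices.append(t)
            v = 0.0                      # reset membrane potential after the spike
    return slices

# slow motion (few events) yields few cuts; fast motion yields many:
print(lif_event_slicer(np.r_[np.full(20, 5), np.full(20, 60)]))
```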
The code is available at https://github.com/SproutNan/AI-Safety_SCAV.", "pdf": "https://openreview.net/pdf/2bac0b5a59c5bca762bad248ab9fc545ee0eb79e.pdf"} {"title": "Sequential Harmful Shift Detection Without Labels", "url": "https://openreview.net/forum?id=jps9KkuSD3", "detail_url": "https://openreview.net/forum?id=jps9KkuSD3", "authors": "Salim I. Amoukou,Tom Bewley,Saumitra Mishra,Freddy Lecue,Daniele Magazzeni,Manuela Veloso", "tags": "NIPS 2024,Poster", "abstract": "We introduce a novel approach for detecting distribution shifts that negatively impact the performance of machine learning models in continuous production environments, which requires no access to ground truth data labels. It builds upon the work of Podkopaev and Ramdas [2022], who address scenarios where labels are available for tracking model errors over time. Our solution extends this framework to work in the absence of labels, by employing a proxy for the true error. This proxy is derived using the predictions of a trained error estimator. Experiments show that our method has high power and false alarm control under various distribution shifts, including covariate and label shifts and natural shifts over geography and time.", "pdf": "https://openreview.net/pdf/fdd640562539b62dcc7883927b02d55339d174b0.pdf"} {"title": "Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data", "url": "https://openreview.net/forum?id=FqWyzyErVT", "detail_url": "https://openreview.net/forum?id=FqWyzyErVT", "authors": "Zhaomin Wu,Junyi Hou,Yiqun Diao,Bingsheng He", "tags": "NIPS 2024,Poster", "abstract": "Federated Learning (FL) is an evolving paradigm that enables multiple parties to collaboratively train models without sharing raw data. Among its variants, Vertical Federated Learning (VFL) is particularly relevant in real-world, cross-organizational collaborations, where distinct features of a shared instance group are contributed by different parties. In these scenarios, parties are often linked using fuzzy identifiers, leading to a common practice termed _multi-party fuzzy VFL_. Existing models generally address either multi-party VFL or fuzzy VFL between two parties. Extending these models to practical multi-party fuzzy VFL typically results in significant performance degradation and increased costs for maintaining privacy. To overcome these limitations, we introduce the _Federated Transformer (FeT)_, a novel framework that supports multi-party VFL with fuzzy identifiers. FeT innovatively encodes these identifiers into data representations and employs a transformer architecture distributed across different parties, incorporating three new techniques to enhance performance. Furthermore, we have developed a multi-party privacy framework for VFL that integrates differential privacy with secure multi-party computation, effectively protecting local representations while minimizing associated utility costs. Our experiments demonstrate that FeT surpasses baseline models by up to 46\\% in accuracy when scaled to 50 parties. 
Additionally, in two-party fuzzy VFL settings, FeT shows improved performance and privacy over cutting-edge VFL models.", "pdf": "https://openreview.net/pdf/29b823d38b368080e2aeeded30f53f4af7f869c3.pdf"} {"title": "PageRank Bandits for Link Prediction", "url": "https://openreview.net/forum?id=VSz9na5Jtl", "detail_url": "https://openreview.net/forum?id=VSz9na5Jtl", "authors": "Yikun Ban,Jiaru Zou,Zihao Li,Yunzhe Qi,Dongqi Fu,Jian Kang,Hanghang Tong,Jingrui He", "tags": "NIPS 2024,Poster", "abstract": "Link prediction is a critical problem in graph learning with broad applications such as recommender systems and knowledge graph completion. Numerous research efforts have been directed at solving this problem, including approaches based on similarity metrics and Graph Neural Networks (GNN). However, most existing solutions are still rooted in conventional supervised learning, which makes it challenging to adapt over time to changing customer interests and to address the inherent dilemma of exploitation versus exploration in link prediction.\nTo tackle these challenges, this paper reformulates link prediction as a sequential decision-making process, where each link prediction interaction occurs sequentially. We propose a novel fusion algorithm, PRB (PageRank Bandits), which is the first to combine contextual bandits with PageRank for collaborative exploitation and exploration. We also introduce a new reward formulation and provide a theoretical performance guarantee for PRB. Finally, we extensively evaluate PRB in both online and offline settings, comparing it with bandit-based and graph-based methods. The empirical success of PRB demonstrates the value of the proposed fusion approach. Our code is released at https://github.com/jiaruzouu/PRB.", "pdf": "https://openreview.net/pdf/80c437faab5ff4420938e1f8234a59b0c844fcfb.pdf"} {"title": "Real-time Core-Periphery Guided ViT with Smart Data Layout Selection on Mobile Devices", "url": "https://openreview.net/forum?id=lD7ziaMHbf", "detail_url": "https://openreview.net/forum?id=lD7ziaMHbf", "authors": "Zhihao Shu,Xiaowei Yu,Zihao Wu,Wenqi Jia,Yinchen Shi,Miao Yin,Tianming Liu,Dajiang Zhu,Wei Niu", "tags": "NIPS 2024,Poster", "abstract": "Mobile devices have become essential enablers for AI applications, particularly in scenarios that require real-time performance. Vision Transformer (ViT) has become a fundamental cornerstone in this regard due to its high accuracy. Recent efforts have been dedicated to developing various transformer architectures that offer improved accuracy while reducing the computational requirements. However, existing research primarily focuses on reducing the theoretical computational complexity through methods such as local attention and model pruning, rather than considering realistic performance on mobile hardware. Although these optimizations reduce computational demands, they either introduce additional overheads related to data transformation (e.g., Reshape and Transpose) or irregular computation/data-access patterns. These result in significant overhead on mobile devices due to their limited bandwidth, which can even make latency worse than that of vanilla ViT on mobile. In this paper, we present ECP-ViT, a real-time framework that employs the core-periphery principle inspired by the brain functional networks to guide self-attention in ViTs and enable the deployment of ViT models on smartphones. 
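A core-periphery constraint on self-attention, as invoked by the ECP-ViT entry above, can be pictured as a structured mask in which designated core tokens interact globally while periphery tokens interact only with the core (and themselves). The mask ECP-ViT actually derives from brain functional networks is more involved; treat this as a schematic only.

```python
import numpy as np

def core_periphery_mask(n_tokens, core_idx):
    """Allowed attention pairs: (core, anything), (anything, core), and self."""
    is_core = np.zeros(n_tokens, dtype=bool)
    is_core[core_idx] = True
    mask = is_core[:, None] | is_core[None, :]
    np.fill_diagonal(mask, True)
    return mask

def masked_attention(Q, K, V, mask):
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    scores = np.where(mask, scores, -1e9)          # block periphery-periphery pairs
    w = np.exp(scores - scores.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)                   # row-wise softmax
    return w @ V
```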
We identify the main bottleneck in transformer structures caused by data transformation and propose a hardware-friendly core-periphery guided self-attention to decrease computation demands. Additionally, we design system optimizations for intensive data transformation in pruned models. ECP-ViT, with the proposed algorithm-system co-optimizations, achieves a speedup of 4.6\u00d7 to 26.9\u00d7 on mobile GPUs across four datasets: STL-10, CIFAR100, TinyImageNet, and ImageNet.", "pdf": "https://openreview.net/pdf/bcb3daacd691d78e3a034dacbc9da8c718a545ff.pdf"} {"title": "Regularized Q-Learning", "url": "https://openreview.net/forum?id=4sueqIwb4o", "detail_url": "https://openreview.net/forum?id=4sueqIwb4o", "authors": "Han-Dong Lim,Donghwan Lee", "tags": "NIPS 2024,Poster", "abstract": "Q-learning is a widely used algorithm in the reinforcement learning (RL) community. Under the lookup table setting, its convergence is well established. However, its behavior is known to be unstable in the linear function approximation case. This paper develops a new Q-learning algorithm, called RegQ, that converges when linear function approximation is used. We prove that simply adding an appropriate regularization term ensures convergence of the algorithm. Its stability is established using a recent analysis tool based on switching system models. Moreover, we experimentally show that RegQ converges in environments where Q-learning with linear function approximation is known to diverge. An error bound on the solution to which the algorithm converges is also given.", "pdf": "https://openreview.net/pdf/030db0df6de829c2f67534620d1a132d577067e7.pdf"} {"title": "One Sample Fits All: Approximating All Probabilistic Values Simultaneously and Efficiently", "url": "https://openreview.net/forum?id=AUg9D2VjcF", "detail_url": "https://openreview.net/forum?id=AUg9D2VjcF", "authors": "Weida Li,Yaoliang Yu", "tags": "NIPS 2024,Poster", "abstract": "The concept of probabilistic values, such as Beta Shapley values and weighted Banzhaf values, has gained recent attention in applications like feature attribution and data valuation. However, exact computation of these values is often exponentially expensive, necessitating approximation techniques. Prior research has shown that the choice of probabilistic values significantly impacts downstream performance, with no universally superior option. Consequently, one may have to approximate multiple candidates and select the best-performing one. Although there have been many efforts to develop efficient estimators, none are intended to approximate all probabilistic values both simultaneously and efficiently. In this work, we embark on the first exploration of achieving this goal. Adhering to the principle of maximum sample reuse and avoiding amplifying factors, we propose a one-sample-fits-all framework parameterized by a sampling vector to approximate intermediate terms that can be converted to any probabilistic value. Leveraging the concept of $ (\\epsilon, \\delta) $-approximation, we theoretically identify a key formula that effectively determines the convergence rate of our framework. By optimizing the sampling vector using this formula, we obtain i) a one-for-all estimator that achieves the currently best time complexity for all probabilistic values on average, and ii) a faster generic estimator with the sampling vector optimally tuned for each probabilistic value. 
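For orientation on the entry above: the standard baseline for approximating one member of the probabilistic-value family, the Shapley value, is Monte Carlo over random player orderings, sketched below. The one-for-all estimator replaces such per-value sampling with a shared sampling vector whose intermediate terms convert to any probabilistic value; that construction is not reproduced here.

```python
import numpy as np

def shapley_permutation_mc(value_fn, n_players, n_perms=200, seed=0):
    """Estimate Shapley values phi_i = E[ v(pre_i U {i}) - v(pre_i) ] over random
    orderings; value_fn takes a boolean inclusion mask of length n_players."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_players)
    for _ in range(n_perms):
        order = rng.permutation(n_players)
        mask = np.zeros(n_players, dtype=bool)
        v_prev = value_fn(mask)              # value of the empty coalition
        for i in order:
            mask[i] = True
            v_new = value_fn(mask)
            phi[i] += v_new - v_prev         # marginal contribution of player i
            v_prev = v_new
    return phi / n_perms
```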
In particular, our one-for-all estimator achieves the fastest convergence rate on Beta Shapley values, including the well-known Shapley value, both theoretically and empirically. Finally, we establish a connection between probabilistic values and the least-squares regression used in (regularized) datamodels, showing that our one-for-all estimator can solve a family of datamodels simultaneously. Our code is available at https://github.com/watml/one-for-all.", "pdf": "https://openreview.net/pdf/224ddb3bdb27e35577edc0ae6a5941eba7f7a45f.pdf"} {"title": "Multivariate Probabilistic Time Series Forecasting with Correlated Errors", "url": "https://openreview.net/forum?id=cAFvxVFaii", "detail_url": "https://openreview.net/forum?id=cAFvxVFaii", "authors": "Vincent Zhihao Zheng,Lijun Sun", "tags": "NIPS 2024,Poster", "abstract": "Accurately modeling the correlation structure of errors is critical for reliable uncertainty quantification in probabilistic time series forecasting. While recent deep learning models for multivariate time series have developed efficient parameterizations for time-varying contemporaneous covariance, they often assume temporal independence of errors for simplicity. However, real-world data often exhibit significant error autocorrelation and cross-lag correlation due to factors such as missing covariates. In this paper, we introduce a plug-and-play method that learns the covariance structure of errors over multiple steps for autoregressive models with Gaussian-distributed errors. To ensure scalable inference and computational efficiency, we model the contemporaneous covariance using a low-rank-plus-diagonal parameterization and capture cross-covariance through a group of independent latent temporal processes. The learned covariance matrix is then used to calibrate predictions based on observed residuals. We evaluate our method on probabilistic models built on RNNs and Transformer architectures, and the results confirm the effectiveness of our approach in improving predictive accuracy and uncertainty quantification without significantly increasing the parameter size.", "pdf": "https://openreview.net/pdf/f4c3347882a547ad216575bc321928832e0703c3.pdf"} {"title": "Cascade of phase transitions in the training of energy-based models", "url": "https://openreview.net/forum?id=Qtf6Xz4VvE", "detail_url": "https://openreview.net/forum?id=Qtf6Xz4VvE", "authors": "Dimitrios Bachtis,Giulio Biroli,Aur\u00e9lien Decelle,Beatriz Seoane", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we investigate the feature encoding process in a prototypical energy-based generative model, the Restricted Boltzmann Machine (RBM). We start with an analytical investigation using simplified architectures and data structures, and end with numerical analysis of real trainings on real datasets. Our study tracks the evolution of the model\u2019s weight matrix through its singular value decomposition, revealing a series of thermodynamic phase transitions that shape the principal learning modes of the empirical probability distribution. We first describe this process analytically in several controlled setups that allow us to fully monitor the training dynamics until convergence. We then validate these findings by training the Bernoulli-Bernoulli RBM on real data sets. By studying the phase behavior over data sets of increasing dimension, we show that these phase transitions are genuine in the thermodynamic sense. 
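The measurement behind the entry above is lightweight to reproduce in spirit: train a Bernoulli-Bernoulli RBM with CD-1 and log the singular values of the weight matrix after each epoch; learning modes show up as singular values successively detaching from the bulk. A toy sketch (sizes, learning rate, the omitted biases, and the mean-field shortcut in the negative phase are all simplifications):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_svd_trace(data, n_hidden=16, epochs=30, lr=0.05, seed=0):
    """CD-1 training of a Bernoulli-Bernoulli RBM, recording singular values of W."""
    rng = np.random.default_rng(seed)
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    trace = []
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W)                                           # positive phase
        v1 = (sigmoid(h0 @ W.T) > rng.random(v0.shape)).astype(float)  # one Gibbs step
        h1 = sigmoid(v1 @ W)                                           # negative phase
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)                  # CD-1 update
        trace.append(np.linalg.svd(W, compute_uv=False))
    return np.array(trace)   # (epochs, min(n_vis, n_hidden)) singular-value paths
```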
Moreover, we propose a mean-field finite-size scaling hypothesis, confirming that the initial phase transition, reminiscent of the paramagnetic-to-ferromagnetic phase transition in mean-field ferromagnetism models, is governed by mean-field critical exponents.", "pdf": "https://openreview.net/pdf/599bd23403c0b3e0dab04bbc7c1a5b9a2d19eee8.pdf"} {"title": "Invariant subspaces and PCA in nearly matrix multiplication time", "url": "https://openreview.net/forum?id=Wyp8vsL9de", "detail_url": "https://openreview.net/forum?id=Wyp8vsL9de", "authors": "Aleksandros Sobczyk,Marko Mladenovic,Mathieu Luisier", "tags": "NIPS 2024,Poster", "abstract": "Approximating invariant subspaces of generalized eigenvalue problems (GEPs) is a fundamental computational problem at the core of machine learning and scientific computing. It is, for example, the root of Principal Component Analysis (PCA) for dimensionality reduction, data visualization, and noise filtering, and of Density Functional Theory (DFT), arguably the most popular method to calculate the electronic structure of materials. \nGiven Hermitian $H,S\\in\\mathbb{C}^{n\\times n}$, where $S$ is positive-definite, let $\\Pi_k$ be the true spectral projector on the invariant subspace that is associated with the $k$ smallest (or largest) eigenvalues of the GEP $HC=SC\\Lambda$, for some $k\\in[n]$. \nWe show that we can compute a matrix $\\widetilde\\Pi_k$ such that $\\lVert\\Pi_k-\\widetilde\\Pi_k\\rVert_2\\leq \\epsilon$, in $O\\left( n^{\\omega+\\eta}\\mathrm{polylog}(n,\\epsilon^{-1},\\kappa(S),\\mathrm{gap}_k^{-1}) \\right)$ bit operations in the floating point model, for some $\\epsilon\\in(0,1)$, with probability $1-1/n$. Here, $\\eta>0$ is arbitrarily small, $\\omega\\lesssim 2.372$ is the matrix multiplication exponent, $\\kappa(S)=\\lVert S\\rVert_2\\lVert S^{-1}\\rVert_2$, and $\\mathrm{gap}_k$ is the gap between eigenvalues $k$ and $k+1$. \nTo achieve such provable \"forward-error\" guarantees, our methods rely on a new $O(n^{\\omega+\\eta})$ stability analysis for the Cholesky factorization, and a smoothed analysis for computing spectral gaps, which can be of independent interest.\nUltimately, we obtain new matrix multiplication-type bit complexity upper bounds for PCA problems, including classical PCA and (randomized) low-rank approximation.", "pdf": "https://openreview.net/pdf/709cbe5c1bbc70a31c2c8c07948fdcae906ec7fc.pdf"} {"title": "TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes", "url": "https://openreview.net/forum?id=MXzr10iX2d", "detail_url": "https://openreview.net/forum?id=MXzr10iX2d", "authors": "Yanping Fu,Wenbin Liao,Xinyuan Liu,Hang Xu,Yike Ma,Yucheng Zhang,Feng Dai", "tags": "NIPS 2024,Poster", "abstract": "As an emerging task that integrates perception and reasoning, topology reasoning in autonomous driving scenes has recently garnered widespread attention. However, existing works often emphasize \"perception over reasoning\": they typically boost reasoning performance by enhancing the perception of lanes and directly adopt vanilla MLPs to learn lane topology from lane queries. This paradigm overlooks the geometric features intrinsic to the lanes themselves and is prone to being influenced by inherent endpoint shifts in lane detection.\n To tackle this issue, we propose an interpretable method for lane topology reasoning based on lane geometric distance and lane query similarity, named TopoLogic. 
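The two ingredients named in the entry above can be sketched as a pairwise score: a geometric term from the distance between one lane's endpoint and another's start point, blended with the similarity of their decoder queries. The kernel and the blend weight below are placeholders; TopoLogic's learned mappings and fusion differ.

```python
import numpy as np

def topology_scores(lane_endpoints, lane_startpoints, lane_queries, tau=2.0, w=0.5):
    """Schematic pairwise score that lane j follows lane i: a geometric term from
    the endpoint-to-startpoint distance plus a semantic term from query similarity,
    so the decision is not hostage to small endpoint shifts alone."""
    d = np.linalg.norm(lane_endpoints[:, None, :] - lane_startpoints[None, :, :], axis=-1)
    geom = np.exp(-d / tau)                                  # close endpoints -> high score
    q = lane_queries / np.linalg.norm(lane_queries, axis=1, keepdims=True)
    sem = (q @ q.T + 1) / 2                                  # cosine similarity in [0, 1]
    return w * geom + (1 - w) * sem
```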
This method mitigates the impact of endpoint shifts in geometric space, and introduces explicit similarity calculation in semantic space as a complement. By integrating results from both spaces, our method provides more comprehensive information for lane topology. Ultimately, our approach significantly outperforms the existing state-of-the-art methods on the mainstream benchmark OpenLane-V2 (23.9 vs. 10.9 in TOP$_{ll}$ and 44.1 vs. 39.8 in OLS on subsetA). Additionally, our proposed geometric distance topology reasoning method can be incorporated into well-trained models without re-training, significantly enhancing the performance of lane topology reasoning. The code is released at https://github.com/Franpin/TopoLogic.", "pdf": "https://openreview.net/pdf/69e15a196f79b315a8bdd3441c871b8b6a79c16d.pdf"} {"title": "Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion", "url": "https://openreview.net/forum?id=9jgODkdH0F", "detail_url": "https://openreview.net/forum?id=9jgODkdH0F", "authors": "Zhiwei Bai,Jiajie Zhao,Yaoyu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Matrix factorization models have been extensively studied as a valuable test-bed for understanding the implicit biases of overparameterized models. Although both low nuclear norm and low rank regularization have been studied for these models, a unified understanding of when, how, and why they achieve different implicit regularization effects remains elusive. In this work, we systematically investigate the implicit regularization of matrix factorization for solving matrix completion problems. We empirically discover that the connectivity of observed data plays a key role in the implicit bias, with a transition from low nuclear norm to low rank as data shifts from disconnected to connected with increased observations. We identify a hierarchy of intrinsic invariant manifolds in the loss landscape that guide the training trajectory to evolve from low-rank to higher-rank solutions. Based on this finding, we theoretically characterize the training trajectory as following the hierarchical invariant manifold traversal process, generalizing the characterization of Li et al. (2020) to include the disconnected case. Furthermore, we establish conditions that guarantee minimum nuclear norm, closely aligning with our experimental findings, and we provide a dynamics characterization condition for ensuring minimum rank. Our work reveals the intricate interplay between data connectivity, training dynamics, and implicit regularization in matrix factorization models.", "pdf": "https://openreview.net/pdf/dff44f7028fa8fce3e403214a527f714f8f3e3b9.pdf"} {"title": "HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction", "url": "https://openreview.net/forum?id=OV8YUk151r", "detail_url": "https://openreview.net/forum?id=OV8YUk151r", "authors": "Qianyue Hao,Jingyang Fan,Fengli Xu,Jian Yuan,Yong Li", "tags": "NIPS 2024,Poster", "abstract": "Citation networks are critical infrastructures of modern science, serving as intricate webs of past literature and enabling researchers to navigate the knowledge production system. To mine information hidden in the link space of such networks, predicting which previous papers (candidates) a new paper (query) will cite is a critical problem that has long been studied. However, an important gap remains unaddressed: the roles of a paper's citations vary significantly, ranging from foundational knowledge basis to superficial contexts. 
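TopoLogic's core idea, scoring lane connectivity by fusing an endpoint-distance term (geometric space) with a lane-query similarity term (semantic space), can be sketched compactly. The snippet below is a hedged illustration: the kernel on distances, the cosine-similarity rescaling, and the weighted fusion rule are assumptions standing in for the paper's exact formulation.

```python
import numpy as np

def topology_scores(endpoints, startpoints, queries, alpha=0.5, tau=2.0):
    """endpoints/startpoints: (N, 2) lane end/start coords; queries: (N, D) lane queries."""
    # Geometric term: a small end-to-start distance suggests two lanes connect,
    # and the exponential kernel softens sensitivity to endpoint shifts.
    d = np.linalg.norm(endpoints[:, None, :] - startpoints[None, :, :], axis=-1)
    geo = np.exp(-d / tau)
    # Semantic term: cosine similarity between learned lane queries.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    sem = (q @ q.T + 1.0) / 2.0                  # map cosine from [-1, 1] to [0, 1]
    return alpha * geo + (1 - alpha) * sem       # fused connectivity score in [0, 1]

rng = np.random.default_rng(0)
N, D = 4, 16
print(topology_scores(rng.random((N, 2)), rng.random((N, 2)), rng.random((N, D))))
```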
Distinguishing these roles requires a deeper understanding of the logical relationships among papers, beyond simple edges in citation networks. The emergence of large language models (LLMs) with textual reasoning capabilities offers new possibilities for discerning these relationships, but there are two major challenges. First, in practice, a new paper may select its citations from a gigantic pool of existing papers, where the combined texts far exceed the context length of LLMs. Second, logical relationships between papers are often implicit, and directly prompting an LLM to predict citations may lead to results based primarily on surface-level textual similarities, rather than the deeper logical reasoning required. In this paper, we introduce the novel concept of core citation, which identifies the critical references that go beyond superficial mentions. Thereby, we elevate the citation prediction task from a simple binary classification to a more nuanced problem: distinguishing core citations from both superficial citations and non-citations. To address this, we propose $\textbf{HLM-Cite}$, a $\textbf{H}$ybrid $\textbf{L}$anguage $\textbf{M}$odel workflow for citation prediction, which combines embedding and generative LMs. We design a curriculum finetune procedure to adapt a pretrained text embedding model to coarsely retrieve high-likelihood core citations from vast candidate sets and then design an LLM agentic workflow to rank the retrieved papers through one-shot reasoning, revealing the implicit relationships among papers. With the two-stage pipeline, we can scale the candidate sets to 100K papers, vastly exceeding the size handled by existing methods. We evaluate HLM-Cite on a dataset across 19 scientific fields, demonstrating a 17.6\% performance improvement compared to SOTA methods. Our code is open-source at https://github.com/tsinghua-fib-lab/H-LM for reproducibility.", "pdf": "https://openreview.net/pdf/95932b01cf99bed796f749d323a8d4d9033b11a8.pdf"} {"title": "MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks", "url": "https://openreview.net/forum?id=Q8Z04XhDdL", "detail_url": "https://openreview.net/forum?id=Q8Z04XhDdL", "authors": "Xingkui Zhu,Yiran Guan,Dingkang Liang,Yuchao Chen,Yuliang Liu,Xiang Bai", "tags": "NIPS 2024,Poster", "abstract": "The sparsely activated mixture of experts (MoE) model presents an effective alternative to densely activated (dense) models, combining improved accuracy with computational efficiency. However, training MoE models from scratch requires extensive data and computational resources, a challenge that limits their widespread adoption. To address this, we introduce MoE Jetpack, a framework designed to fine-tune the abundant and easily accessible dense checkpoints into MoE models. MoE Jetpack incorporates two key techniques: (1) **checkpoint recycling**, which initializes MoE models with dense checkpoints to accelerate convergence and enhance accuracy, minimizing the need for extensive pre-training; (2) the **hyperspherical adaptive MoE (SpheroMoE) layer**, which optimizes the MoE architecture to enhance fine-tuning performance and efficiency.\nExperimental results indicate that MoE Jetpack doubles the convergence speed and enhances accuracy by 2.8% on ImageNet-1K. 
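The "checkpoint recycling" step described for MoE Jetpack, seeding every expert of an MoE layer from a pretrained dense FFN, is straightforward to sketch. The snippet below is a minimal illustration, not the paper's implementation: the copy-with-small-noise scheme to break expert symmetry is an assumption, and the SpheroMoE layer is not reproduced.

```python
import copy
import torch
import torch.nn as nn

def recycle_dense_ffn(dense_ffn: nn.Module, num_experts: int, noise_std: float = 0.01):
    """Clone a pretrained dense FFN into `num_experts` expert copies."""
    experts = nn.ModuleList()
    for _ in range(num_experts):
        expert = copy.deepcopy(dense_ffn)          # start each expert from the dense checkpoint
        with torch.no_grad():
            for p in expert.parameters():          # small perturbation to break expert symmetry
                p.add_(noise_std * torch.randn_like(p))
        experts.append(expert)
    return experts

dense = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
experts = recycle_dense_ffn(dense, num_experts=4)
print(len(experts), sum(p.numel() for p in experts[0].parameters()))
```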
On smaller datasets, it achieves up to 8-fold faster convergence and over 30% accuracy gains, highlighting its efficiency.\nThe code is available at https://github.com/Adlith/MoE-Jetpack.", "pdf": "https://openreview.net/pdf/628a5bb329340ffe5b793330e9f0ff3ca98d4958.pdf"} {"title": "Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios", "url": "https://openreview.net/forum?id=wdGvRud1LS", "detail_url": "https://openreview.net/forum?id=wdGvRud1LS", "authors": "Shihan Ma,Bo Hu,Tianyu Jia,Alexander Kenneth Clarke,Blanka Zicher,Arnault H. Caillet,Dario Farina,Jose C Principe", "tags": "NIPS 2024,Poster", "abstract": "The cortico-spinal neural pathway is fundamental for motor control and movement execution, and in humans it is typically studied using concurrent electroencephalography (EEG) and electromyography (EMG) recordings. However, current approaches for capturing high-level and contextual connectivity between these recordings have important limitations. Here, we present a novel application of statistical dependence estimators based on orthonormal decomposition of density ratios to model the relationship between cortical and muscle oscillations. Our method extends traditional scalar-valued measures by learning eigenvalues, eigenfunctions, and projection spaces of density ratios from realizations of the signal, addressing the interpretability, scalability, and local temporal dependence of cortico-muscular connectivity. We experimentally demonstrate that eigenfunctions learned from cortico-muscular connectivity can accurately classify movements and subjects. Moreover, they reveal channel and temporal dependencies that confirm the activation of specific EEG channels during movement.", "pdf": "https://openreview.net/pdf/3bdba64a512929e72c61ef5333a70f8c11950c8b.pdf"} {"title": "Auditing Local Explanations is Hard", "url": "https://openreview.net/forum?id=ybMrn4tdn0", "detail_url": "https://openreview.net/forum?id=ybMrn4tdn0", "authors": "Robi Bhattacharjee,Ulrike von Luxburg", "tags": "NIPS 2024,Poster", "abstract": "In sensitive contexts, providers of machine learning algorithms are increasingly required to give explanations for their algorithms' decisions. However, explanation receivers might not trust the provider, who potentially could output misleading or manipulated explanations. In this work, we investigate an auditing framework in which a third-party auditor or a collective of users attempts to sanity-check explanations: they can query model decisions and the corresponding local explanations, pool all the information received, and then check for basic consistency properties. We prove upper and lower bounds on the number of queries needed for an auditor to succeed within this framework. Our results show that successful auditing requires a potentially exorbitant number of queries -- particularly in high dimensional cases. Our analysis also reveals that a key property is the ``locality'' of the provided explanations --- a quantity that has so far received little attention in the explainability literature. 
Looking forward, our results suggest that for complex high-dimensional settings, merely providing a pointwise prediction and explanation could be insufficient, as there is no way for the users to verify that the provided explanations are not completely made-up.", "pdf": "https://openreview.net/pdf/da6c6a3020c220d933287ab79ebff6b0838a4941.pdf"} {"title": "Learning-Augmented Dynamic Submodular Maximization", "url": "https://openreview.net/forum?id=stY80vVBS8", "detail_url": "https://openreview.net/forum?id=stY80vVBS8", "authors": "Arpit Agarwal,Eric Balkanski", "tags": "NIPS 2024,Poster", "abstract": "In dynamic submodular maximization, the goal is to maintain a high-value solution over a sequence of element insertions and deletions with a fast update time. Motivated by large-scale applications and the fact that dynamic data often exhibits patterns, we ask the following question: can predictions be used to accelerate the update time of dynamic submodular maximization algorithms? \n\nWe consider the model for dynamic algorithms with predictions where predictions regarding the insertion and deletion times of elements can be used for preprocessing. Our main result is an algorithm with an $O(\\text{poly}(\\log \\eta, \\log w, \\log k))$ amortized update time over the sequence of updates that achieves a $1/2 - \\epsilon$ approximation for dynamic monotone submodular maximization under a cardinality constraint $k$, where the prediction error $\\eta$ is the number of elements that are not inserted and deleted within $w$ time steps of their predicted insertion and deletion times. This amortized update time is independent of the length of the stream and instead depends on the prediction error.", "pdf": "https://openreview.net/pdf/49936dbcd6981eeb4a3a45c32cd25eb31bb1cee1.pdf"} {"title": "Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts", "url": "https://openreview.net/forum?id=FCsEvaMorw", "detail_url": "https://openreview.net/forum?id=FCsEvaMorw", "authors": "Mikayel Samvelyan,Sharath Chandra Raparthy,Andrei Lupu,Eric Hambro,Aram H. Markosyan,Manish Bhatt,Yuning Mao,Minqi Jiang,Jack Parker-Holder,Jakob Nicolaus Foerster,Tim Rockt\u00e4schel,Roberta Raileanu", "tags": "NIPS 2024,Poster", "abstract": "As large language models (LLMs) become increasingly prevalent across many real-world applications, understanding and enhancing their robustness to adversarial attacks is of paramount importance. Existing methods for identifying adversarial prompts tend to focus on specific domains, lack diversity, or require extensive human annotations. To address these limitations, we present Rainbow Teaming, a novel black-box approach for producing a diverse collection of adversarial prompts. Rainbow Teaming casts adversarial prompt generation as a quality-diversity problem and uses open-ended search to generate prompts that are both effective and diverse. Focusing on the safety domain, we use Rainbow Teaming to target various state-of-the-art LLMs, including the Llama 2 and Llama 3 models. Our approach reveals hundreds of effective adversarial prompts, with an attack success rate exceeding 90% across all tested models. Furthermore, we demonstrate that prompts generated by Rainbow Teaming are highly transferable and that fine-tuning models with synthetic data generated by our method significantly enhances their safety without sacrificing general performance or helpfulness. 
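Rainbow Teaming's framing of adversarial prompting as a quality-diversity search can be conveyed with a toy archive loop. The sketch below is a deliberately simplified illustration in the spirit of the paper: `mutate` and `attack_success` are stubs standing in for LLM-driven prompt mutation and a judge model, and the single-descriptor archive is an assumption (the paper uses a richer, MAP-Elites-style grid).

```python
import random

random.seed(0)
STYLES = ["roleplay", "encoding", "persuasion"]   # illustrative prompt descriptors

def mutate(prompt):                  # stub: an LLM would rewrite the parent prompt
    return prompt + "!"

def attack_success(prompt):          # stub: a judge model would score this attack
    return random.random()

archive = {s: ("seed prompt", 0.0) for s in STYLES}   # descriptor -> (prompt, score)
for step in range(100):
    style = random.choice(STYLES)
    parent, _ = archive[style]
    child = mutate(parent)
    score = attack_success(child)
    if score > archive[style][1]:    # keep only strictly better elites per cell
        archive[style] = (child, score)

print({s: round(v[1], 2) for s, v in archive.items()})
```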
We additionally explore the versatility of Rainbow Teaming by applying it to question answering and cybersecurity, showcasing its potential to drive robust open-ended self-improvement in a wide range of applications.", "pdf": "https://openreview.net/pdf/00f9503191aa41ee310f384b3836aac54280ad99.pdf"} {"title": "A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of $\Theta(T^{2/3})$ and its Application to Best-of-Both-Worlds", "url": "https://openreview.net/forum?id=XlvUz9F50g", "detail_url": "https://openreview.net/forum?id=XlvUz9F50g", "authors": "Taira Tsuchiya,Shinji Ito", "tags": "NIPS 2024,Poster", "abstract": "Follow-the-Regularized-Leader (FTRL) is a powerful framework for various online learning problems. By designing its regularizer and learning rate to be adaptive to past observations, FTRL is known to work adaptively to various properties of an underlying environment. However, most existing adaptive learning rates are for online learning problems with a minimax regret of $\Theta(\sqrt{T})$ for the number of rounds $T$, and there are only a few studies on adaptive learning rates for problems with a minimax regret of $\Theta(T^{2/3})$, which include several important problems dealing with indirect feedback. To address this limitation, we establish a new adaptive learning rate framework for problems with a minimax regret of $\Theta(T^{2/3})$. Our learning rate is designed by matching the stability, penalty, and bias terms that naturally appear in regret upper bounds for problems with a minimax regret of $\Theta(T^{2/3})$. As applications of this framework, we consider three major problems with a minimax regret of $\Theta(T^{2/3})$: partial monitoring, graph bandits, and multi-armed bandits with paid observations. We show that FTRL with our learning rate and the Tsallis entropy regularizer improves existing Best-of-Both-Worlds (BOBW) regret upper bounds, which achieve simultaneous optimality in the stochastic and adversarial regimes. The resulting learning rate is surprisingly simple compared to the existing learning rates for BOBW algorithms for problems with a minimax regret of $\Theta(T^{2/3})$.", "pdf": "https://openreview.net/pdf/fc259aba969620f7876aae99a694f18dc78db73d.pdf"} {"title": "Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning", "url": "https://openreview.net/forum?id=nkzSE5KkCA", "detail_url": "https://openreview.net/forum?id=nkzSE5KkCA", "authors": "Penghui Ruan,Pichao WANG,Divya Saxena,Jiannong Cao,Yuhui Shi", "tags": "NIPS 2024,Poster", "abstract": "Despite advancements in Text-to-Video (T2V) generation, producing videos with realistic motion remains challenging. Current models often yield static or minimally dynamic outputs, failing to capture complex motions described by text. This issue stems from internal biases in text encoding, which overlook motion, and from inadequate conditioning mechanisms in T2V generation models. To address this, we propose a novel framework called DEcomposed MOtion (DEMO), which enhances motion synthesis in T2V generation by decomposing both text encoding and conditioning into content and motion components. Our method includes a content encoder for static elements and a motion encoder for temporal dynamics, alongside separate content and motion conditioning mechanisms. Crucially, we introduce text-motion and video-motion supervision to improve the model's understanding and generation of motion. 
Evaluations on benchmarks such as MSR-VTT, UCF-101, WebVid-10M, EvalCrafter, and VBench demonstrate DEMO's superior ability to produce videos with enhanced motion dynamics while maintaining high visual quality. Our approach significantly advances T2V generation by integrating comprehensive motion understanding directly from textual descriptions. Project page: https://PR-Ryan.github.io/DEMO-project/", "pdf": "https://openreview.net/pdf/a261ab5eac4415e02421fe3169a362369c977e34.pdf"} {"title": "Provably Robust Score-Based Diffusion Posterior Sampling for Plug-and-Play Image Reconstruction", "url": "https://openreview.net/forum?id=SLnsoaY4u1", "detail_url": "https://openreview.net/forum?id=SLnsoaY4u1", "authors": "Xingyu Xu,Yuejie Chi", "tags": "NIPS 2024,Poster", "abstract": "In a great number of tasks in science and engineering, the goal is to infer an unknown image from a small number of noisy measurements collected from a known forward model describing certain sensing or imaging modality. Due to resource constraints, this image reconstruction task is often extremely ill-posed, which necessitates the adoption of expressive prior information to regularize the solution space. Score-based diffusion models, thanks to their impressive empirical success, have emerged as an appealing candidate for an expressive prior in image reconstruction. In order to accommodate diverse tasks at once, it is of great interest to develop efficient, consistent and robust algorithms that incorporate unconditional score functions of an image prior distribution in conjunction with flexible choices of forward models.\n\nThis work develops an algorithmic framework for employing score-based diffusion models as an expressive data prior in nonlinear inverse problems with general forward models. Motivated by the plug-and-play framework in the imaging community, we introduce a diffusion plug-and-play method (DPnP) that alternately calls two samplers, a proximal consistency sampler based solely on the likelihood function of the forward model, and a denoising diffusion sampler based solely on the score functions of the image prior. The key insight is that denoising under white Gaussian noise can be solved rigorously via both stochastic (i.e., DDPM-type) and deterministic (i.e., DDIM-type) samplers using the same set of score functions trained for generation. We establish both asymptotic and non-asymptotic performance guarantees of DPnP, and provide numerical experiments to illustrate its promise in solving both linear and nonlinear image reconstruction tasks. To the best of our knowledge, DPnP is the first provably-robust posterior sampling method for nonlinear inverse problems using unconditional diffusion priors.", "pdf": "https://openreview.net/pdf/0fd08a5beae31f4eeae6f33bce249774d110e6c5.pdf"} {"title": "Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes", "url": "https://openreview.net/forum?id=vI1WqFn15v", "detail_url": "https://openreview.net/forum?id=vI1WqFn15v", "authors": "Xiaomeng Hu,Pin-Yu Chen,Tsung-Yi Ho", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) are becoming a prominent generative AI tool, where the user enters a query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align these LLMs to human values using advanced training techniques such as Reinforcement Learning from Human Feedback (RLHF). 
However, recent studies have highlighted the vulnerability of LLMs to adversarial jailbreak attempts aiming at subverting the embedded safety guardrails. To address this challenge, this paper defines and investigates the **Refusal Loss** of LLMs and then proposes a method called **Gradient Cuff** to detect jailbreak attempts. Gradient Cuff exploits the unique properties observed in the refusal loss landscape, including its functional values and smoothness, to design an effective two-step detection strategy. Experimental results on two aligned LLMs (LLaMA-2-7B-Chat and Vicuna-7B-V1.5) and six types of jailbreak attacks (GCG, AutoDAN, PAIR, TAP, Base64, and LRL) show that Gradient Cuff can significantly improve the LLM's rejection capability for malicious jailbreak queries, while maintaining the model's performance for benign user queries by adjusting the detection threshold.", "pdf": "https://openreview.net/pdf/9781a3bd6f23c741e28496d0beae9114337a6e01.pdf"} {"title": "Text-Infused Attention and Foreground-Aware Modeling for Zero-Shot Temporal Action Detection", "url": "https://openreview.net/forum?id=kS9dciADtY", "detail_url": "https://openreview.net/forum?id=kS9dciADtY", "authors": "Yearang Lee,Ho-Joong Kim,Seong-Whan Lee", "tags": "NIPS 2024,Poster", "abstract": "Zero-Shot Temporal Action Detection (ZSTAD) aims to classify and localize action segments in untrimmed videos for unseen action categories. Most existing ZSTAD methods utilize a foreground-based approach, limiting the integration of text and visual features due to their reliance on pre-extracted proposals. In this paper, we introduce a cross-modal ZSTAD baseline with mutual cross-attention, integrating both text and visual information throughout the detection process. Our simple approach results in superior performance compared to previous methods. Despite this improvement, we further identify a common-action bias issue: the cross-modal baseline over-focuses on common sub-actions due to a lack of ability to discriminate text-related visual parts. To address this issue, we propose Text-infused attention and Foreground-aware Action Detection (Ti-FAD), which enhances the ability to focus on text-related sub-actions and distinguish relevant action segments from the background. Our extensive experiments demonstrate that Ti-FAD outperforms the state-of-the-art methods on ZSTAD benchmarks by a large margin: 41.2\% (+ 11.0\%) on THUMOS14 and 32.0\% (+ 5.4\%) on ActivityNet v1.3. Code is available at: https://github.com/YearangLee/Ti-FAD.", "pdf": "https://openreview.net/pdf/5fe47f61eee7ec17c41c636e0cd550c708fb2e99.pdf"} {"title": "Epipolar-Free 3D Gaussian Splatting for Generalizable Novel View Synthesis", "url": "https://openreview.net/forum?id=iO6tcLJEwA", "detail_url": "https://openreview.net/forum?id=iO6tcLJEwA", "authors": "Zhiyuan Min,Yawei Luo,Jianwen Sun,Yi Yang", "tags": "NIPS 2024,Poster", "abstract": "Generalizable 3D Gaussian splatting (3DGS) can reconstruct new scenes from sparse-view observations in a feed-forward inference manner, eliminating the need for scene-specific retraining required in conventional 3DGS. However, existing methods rely heavily on epipolar priors, which can be unreliable in complex real-world scenes, particularly in non-overlapping and occluded regions. In this paper, we propose eFreeSplat, an efficient feed-forward 3DGS-based model for generalizable novel view synthesis that operates independently of epipolar line constraints. 
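Gradient Cuff's reliance on the refusal loss landscape, both its value and its steepness, lends itself to a small sketch. The code below is a hedged illustration only: `refusal_loss` is a stub standing in for an LLM-derived quantity, the zeroth-order finite-difference gradient estimate is the generic technique for a non-differentiable black box, and the decision rule and thresholds are assumptions rather than the paper's exact two-step procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def refusal_loss(emb):                 # stub standing in for an LLM-derived scalar
    return float(np.tanh(np.linalg.norm(emb)))

def zeroth_order_grad_norm(emb, n_dirs=8, mu=1e-2):
    """Estimate ||grad refusal_loss|| via random finite differences."""
    f0, est = refusal_loss(emb), np.zeros_like(emb)
    for _ in range(n_dirs):
        u = rng.standard_normal(emb.shape)
        est += (refusal_loss(emb + mu * u) - f0) / mu * u
    return float(np.linalg.norm(est / n_dirs))

def flag_jailbreak(emb, g_thresh=1.0):
    # Step 1 would inspect the loss value itself; step 2 checks landscape steepness.
    return zeroth_order_grad_norm(emb) > g_thresh

print(flag_jailbreak(rng.standard_normal(16)))
```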
To enhance multiview feature extraction with 3D perception, we employ a self-supervised Vision Transformer (ViT) with cross-view completion pre-training on large-scale datasets. Additionally, we introduce an Iterative Cross-view Gaussians Alignment method to ensure consistent depth scales across different views. Our eFreeSplat represents a new paradigm for generalizable novel view synthesis. We evaluate eFreeSplat on wide-baseline novel view synthesis tasks using the RealEstate10K and ACID datasets. Extensive experiments demonstrate that eFreeSplat surpasses state-of-the-art baselines that rely on epipolar priors, achieving superior geometry reconstruction and novel view synthesis quality.", "pdf": "https://openreview.net/pdf/83c544d3122e24e20284b95452d0a92d297d7a02.pdf"} {"title": "RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance", "url": "https://openreview.net/forum?id=KKrj1vCQaG", "detail_url": "https://openreview.net/forum?id=KKrj1vCQaG", "authors": "Zhicheng Sun,Zhenhao Yang,Yang Jin,Haozhe Chi,Kun Xu,Kun Xu,Liwei Chen,Hao Jiang,Yang Song,Kun Gai,Yadong MU", "tags": "NIPS 2024,Poster", "abstract": "Customizing diffusion models to generate identity-preserving images from user-provided reference images is an intriguing new problem. The prevalent approaches typically require training on extensive domain-specific images to achieve identity preservation, which lacks flexibility across different use cases. To address this issue, we exploit classifier guidance, a training-free technique that steers diffusion models using an existing classifier, for personalized image generation. Our study shows that based on a recent rectified flow framework, the major limitation of vanilla classifier guidance in requiring a special classifier can be resolved with a simple fixed-point solution, allowing flexible personalization with off-the-shelf image discriminators. Moreover, its solving procedure proves to be stable when anchored to a reference flow trajectory, with a convergence guarantee. The derived method is implemented on rectified flow with different off-the-shelf image discriminators, delivering advantageous personalization results for human faces, live subjects, and certain objects. Code is available at https://github.com/feifeiobama/RectifID.", "pdf": "https://openreview.net/pdf/c8f2d67b397ef8a8eee9fddcf495d00135c935b4.pdf"} {"title": "Breaking Semantic Artifacts for Generalized AI-generated Image Detection", "url": "https://openreview.net/forum?id=NtNTfRTjE8", "detail_url": "https://openreview.net/forum?id=NtNTfRTjE8", "authors": "Chende Zheng,Chenhao Lin,Zhengyu Zhao,Hang Wang,Xu Guo,Shuai Liu,Chao Shen", "tags": "NIPS 2024,Poster", "abstract": "With the continuous evolution of AI-generated images, their generalized detection has become a crucial aspect of AI security. \nExisting detectors have focused on cross-generator generalization, while it remains unexplored whether these detectors can generalize across different image scenes, e.g., images from different datasets with different semantics. In this paper, we reveal that existing detectors suffer from substantial Accuracy drops in such cross-scene generalization. In particular, we attribute their failures to ''semantic artifacts'' in both real and generated images, to which detectors may overfit. To break such ''semantic artifacts'', we propose a simple yet effective approach based on conducting an image patch shuffle and then training an end-to-end patch-based classifier. 
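The patch-shuffle step described above is simple to implement: permute the positions of image patches so global semantic layout is destroyed while local generator artifacts survive. Below is a minimal sketch under assumptions of my own (patch size, NumPy HWC layout); the paper's training pipeline around it is not reproduced.

```python
import numpy as np

def patch_shuffle(img, patch=32, rng=None):
    """img: (H, W, C) array with H and W divisible by `patch`."""
    rng = rng or np.random.default_rng(0)
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    # Split into a flat list of patches, permute, then reassemble.
    patches = img.reshape(gh, patch, gw, patch, c).swapaxes(1, 2).reshape(-1, patch, patch, c)
    rng.shuffle(patches)                           # permute patch positions (axis 0)
    out = patches.reshape(gh, gw, patch, patch, c).swapaxes(1, 2).reshape(h, w, c)
    return out

img = np.arange(256 * 256 * 3, dtype=np.uint8).reshape(256, 256, 3)
print(patch_shuffle(img).shape)  # (256, 256, 3): same pixels, shuffled layout
```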
We conduct a comprehensive open-world evaluation on 31 test sets, covering 7 Generative Adversarial Networks, 18 (variants of) Diffusion Models, and another 6 CNN-based generative models. The results demonstrate that our approach outperforms previous approaches by 2.08\% (absolute) on average regarding cross-scene detection Accuracy. We also notice the superiority of our approach in open-world generalization, with an average Accuracy improvement of 10.59\% (absolute) across all test sets.", "pdf": "https://openreview.net/pdf/6944c4f936012e965788014a0da0160b19981634.pdf"} {"title": "Accelerating Pre-training of Multimodal LLMs via Chain-of-Sight", "url": "https://openreview.net/forum?id=KHcB1drMRX", "detail_url": "https://openreview.net/forum?id=KHcB1drMRX", "authors": "Ziyuan Huang,Kaixiang Ji,Biao Gong,Zhiwu Qing,Qing-Long Zhang,Kecheng Zheng,Jian Wang,Jingdong Chen,Ming Yang", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces Chain-of-Sight, a vision-language bridge module that accelerates the pre-training of Multimodal Large Language Models (MLLMs). \nOur approach employs a sequence of visual resamplers that capture visual details at various spatial scales.\nThis architecture not only leverages global and local visual contexts effectively, but also facilitates the flexible extension of visual tokens through a compound token scaling strategy, allowing up to a 16x increase in the token count post pre-training.\nConsequently, Chain-of-Sight requires significantly fewer visual tokens in the pre-training phase compared to the fine-tuning phase. \nThis intentional reduction of visual tokens during pre-training notably accelerates the pre-training process, cutting down the wall-clock training time by $\sim$73\%.\nEmpirical results on a series of vision-language benchmarks reveal that the pre-train acceleration through Chain-of-Sight is achieved without sacrificing performance, matching or surpassing the standard pipeline of utilizing all visual tokens throughout the entire training process. \nFurther scaling up the number of visual tokens for pre-training leads to stronger performances, competitive with existing approaches in a series of benchmarks.", "pdf": "https://openreview.net/pdf/4de54bb9f50438f3639463fcd1a37943926a1d03.pdf"} {"title": "DiTFastAttn: Attention Compression for Diffusion Transformer Models", "url": "https://openreview.net/forum?id=51HQpkQy3t", "detail_url": "https://openreview.net/forum?id=51HQpkQy3t", "authors": "Zhihang Yuan,Hanling Zhang,Lu Pu,Xuefei Ning,Linfeng Zhang,Tianchen Zhao,Shengen Yan,Guohao Dai,Yu Wang", "tags": "NIPS 2024,Poster", "abstract": "Diffusion Transformers (DiT) excel at image and video generation but face computational challenges due to the quadratic complexity of self-attention operators. We propose DiTFastAttn, a post-training compression method to alleviate the computational bottleneck of DiT.\nWe identify three key redundancies in the attention computation during DiT inference: (1) spatial redundancy, where many attention heads focus on local information; (2) temporal redundancy, with high similarity between the attention outputs of neighboring steps; (3) conditional redundancy, where conditional and unconditional inferences exhibit significant similarity. 
We propose three techniques to reduce these redundancies: (1) $\textit{Window Attention with Residual Sharing}$ to reduce spatial redundancy; (2) $\textit{Attention Sharing across Timesteps}$ to exploit the similarity between steps; (3) $\textit{Attention Sharing across CFG}$ to skip redundant computations during conditional generation.", "pdf": "https://openreview.net/pdf/f04f7cb65b98f0bf20c13bbd3cb6d0ecc0432d01.pdf"} {"title": "Spiking Transformer with Experts Mixture", "url": "https://openreview.net/forum?id=WcIeEtY3AG", "detail_url": "https://openreview.net/forum?id=WcIeEtY3AG", "authors": "Zhaokun Zhou,Yijie Lu,Yanhao Jia,Kaiwei Che,Jun Niu,Liwei Huang,Xinyu Shi,Yuesheng Zhu,Guoqi Li,Zhaofei Yu,Li Yuan", "tags": "NIPS 2024,Poster", "abstract": "Spiking Neural Networks (SNNs) provide a sparse spike-driven mechanism which is believed to be critical for energy-efficient deep learning. \nMixture-of-Experts (MoE), on the other hand, aligns with the brain mechanism of distributed and sparse processing, resulting in an efficient way of enhancing model capacity and conditional computation. \nIn this work, we consider how to incorporate SNNs\u2019 spike-driven mechanism and MoE\u2019s conditional computation into a unified framework. \nHowever, MoE uses softmax to get the dense conditional weights for each expert and TopK to hard-sparsify the network, which does not fit the properties of SNNs. \nTo address this issue, we reformulate MoE in SNNs and introduce the Spiking Experts Mixture Mechanism (SEMM) from the perspective of sparse spiking activation. \nBoth the experts and the router output spiking sequences, and their element-wise operation makes SEMM computation spike-driven and dynamic sparse-conditional. \nBy developing SEMM into Spiking Transformer, the Experts Mixture Spiking Attention (EMSA) and the Experts Mixture Spiking Perceptron (EMSP) are proposed, which perform routing allocation for head-wise and channel-wise spiking experts, respectively. Experiments show that SEMM realizes sparse conditional computation and obtains a stable improvement on neuromorphic and static datasets with computational overhead comparable to the Spiking Transformer baselines.", "pdf": "https://openreview.net/pdf/35a5bc54de368426f66605d8e3f447638863888a.pdf"} {"title": "Approximately Equivariant Neural Processes", "url": "https://openreview.net/forum?id=dqT9MC5NQl", "detail_url": "https://openreview.net/forum?id=dqT9MC5NQl", "authors": "Matthew Ashman,Cristiana Diaconu,Adrian Weller,Wessel P Bruinsma,Richard E. Turner", "tags": "NIPS 2024,Poster", "abstract": "Equivariant deep learning architectures exploit symmetries in learning problems to improve the sample efficiency of neural-network-based models and their ability to generalise. However, when modelling real-world data, learning problems are often not *exactly* equivariant, but only approximately. For example, when estimating the global temperature field from weather station observations, local topographical features like mountains break translation equivariance. In these scenarios, it is desirable to construct architectures that can flexibly depart from exact equivariance in a data-driven way. Current approaches to achieving this cannot usually be applied out-of-the-box to any architecture and symmetry group. In this paper, we develop a general approach to achieving this using existing equivariant architectures. Our approach is agnostic to both the choice of symmetry group and model architecture, making it widely applicable. 
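Of DiTFastAttn's three techniques, Attention Sharing across Timesteps is the easiest to sketch: if the current step's attention input is close enough to the cached one, reuse the cached output instead of recomputing. The module below is an illustrative assumption-laden stand-in; in particular, the online cosine-similarity test substitutes for the paper's offline redundancy analysis, and the threshold is arbitrary.

```python
import torch
import torch.nn.functional as F

class SharedAttn(torch.nn.Module):
    def __init__(self, dim, heads=8, sim_thresh=0.98):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)
        self.sim_thresh = sim_thresh
        self.cache_in, self.cache_out = None, None

    def forward(self, x):                      # x: (B, N, D), one diffusion step
        if self.cache_in is not None:
            sim = F.cosine_similarity(x.flatten(1), self.cache_in.flatten(1)).mean()
            if sim > self.sim_thresh:          # neighboring steps look alike:
                return self.cache_out          # skip attention, reuse cached output
        out, _ = self.attn(x, x, x)
        self.cache_in, self.cache_out = x.detach(), out.detach()
        return out

layer = SharedAttn(64)
x = torch.randn(2, 16, 64)
print(layer(x).shape, layer(x + 1e-4).shape)   # second call likely hits the cache
```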
We consider the use of approximately equivariant architectures in neural processes (NPs), a popular family of meta-learning models. We demonstrate the effectiveness of our approach on a number of synthetic and real-world regression experiments, showing that approximately equivariant NP models can outperform both their non-equivariant and strictly equivariant counterparts.", "pdf": "https://openreview.net/pdf/54ed6f4274ca318765724fa8ce0660f01758f4c9.pdf"} {"title": "QBB: Quantization with Binary Bases for LLMs", "url": "https://openreview.net/forum?id=Kw6MRGFx0R", "detail_url": "https://openreview.net/forum?id=Kw6MRGFx0R", "authors": "Adrian Bulat,Yassine Ouali,Georgios Tzimiropoulos", "tags": "NIPS 2024,Poster", "abstract": "Current post-training quantization methods for LLMs compress the weights down to 4-bits, with moderate to low degradation in accuracy. However, further reducing the number of bits or accelerating the network while avoiding large accuracy drops, especially for smaller, sub-7B models, remains an actively researched and open problem. To address this, in this work, we introduce Quantization with Binary Bases (QBB), a new approach for low-bit quantization that effectively removes (nearly) all multiplications, reducing the implementation to summations. Our novel approach works by decomposing the original weights into a set of binary (1-bit) matrices using an iterative process. For a given layer, starting from a weight matrix, we first construct an initial approximation using an analytical solution, where each new binary matrix, paired with a scaling vector, approximates the residual error of the previous estimation. Secondly, using gradient descent and a progressive learning curriculum, we find the optimal set of binary matrices and scaling vectors that minimize the $\ell_2$ distance between the produced approximation and original weights. Thirdly, as previous steps are input agnostic, we holistically optimize the scaling vectors alone, calibrating them in student-teacher fashion, with the teacher providing both the data, \n by autoregressive generation starting from a random token, and the target logits.\n When evaluated across multiple LLM families, our approach matches and outperforms all prior works, setting a new state-of-the-art result using a summation-only based approach.", "pdf": "https://openreview.net/pdf/abf9ad98cc9e60a081dbbbdc6eb042b12cc8cc5b.pdf"} {"title": "Extending Multi-modal Contrastive Representations", "url": "https://openreview.net/forum?id=PquRXu9pQ6", "detail_url": "https://openreview.net/forum?id=PquRXu9pQ6", "authors": "Ziang Zhang,Zehan Wang,Luping Liu,Rongjie Huang,Xize Cheng,Zhenhui Ye,Wang Lin,Huadai Liu,Haifeng Huang,Yang Zhao,Tao Jin,Siqi Zheng,Zhou Zhao", "tags": "NIPS 2024,Poster", "abstract": "Multi-modal contrastive representation (MCR) of more than three modalities is critical in multi-modal learning. Although recent methods showcase impressive achievements, the high dependence on large-scale, high-quality paired data and the expensive training costs limit their further development. Inspired by recent C-MCR, this paper proposes $\textbf{Ex}$tending $\textbf{M}$ultimodal $\textbf{C}$ontrastive $\textbf{R}$epresentation (Ex-MCR), a training-efficient and paired-data-free method to build unified contrastive representation for many modalities. 
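QBB's first stage, the analytical construction where each binary matrix plus scaling vector fits the residual of the previous estimate, admits a compact sketch. Below is a hedged illustration of that greedy decomposition only: the per-row mean-absolute-residual scale is the standard closed-form choice, and the paper's subsequent gradient-descent refinement and teacher-calibration stages are omitted.

```python
import numpy as np

def binary_decompose(W, n_bases=4):
    """Greedily approximate W as a sum of (scale vector) * (sign matrix) terms."""
    residual, bases = W.copy(), []
    for _ in range(n_bases):
        B = np.sign(residual)
        B[B == 0] = 1.0                                   # keep entries strictly +/-1
        a = np.abs(residual).mean(axis=1, keepdims=True)  # per-row scaling vector
        bases.append((a, B))
        residual = residual - a * B                       # next term fits this residual
    return bases

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))
bases = binary_decompose(W)
W_hat = sum(a * B for a, B in bases)
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

Each extra binary basis roughly halves the residual for Gaussian-like weights, which is why a handful of 1-bit matrices can stand in for a higher-precision one.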
Since C-MCR is designed to learn a new latent space for the two non-overlapping modalities and projects them onto this space, a significant amount of information from their original spaces is lost in the projection process. To address this issue, Ex-MCR proposes to extend one modality's space into the other's, rather than mapping both modalities onto a completely new space. This method effectively preserves semantic alignment in the original space. Experimentally, we extend pre-trained audio-text and 3D-image representations to the existing vision-text space. Without using paired data, Ex-MCR achieves comparable performance to advanced methods on a series of audio-image-text and 3D-image-text tasks and achieves superior performance when used in parallel with data-driven methods. Moreover, semantic alignment also emerges between the extended modalities (e.g., audio and 3D).", "pdf": "https://openreview.net/pdf/f29182032a293bce4d555f1e5a4046cee7c6ffa4.pdf"} {"title": "Relational Concept Bottleneck Models", "url": "https://openreview.net/forum?id=G99BSV9pt5", "detail_url": "https://openreview.net/forum?id=G99BSV9pt5", "authors": "Pietro Barbiero,Francesco Giannini,Gabriele Ciravegna,Michelangelo Diligenti,Giuseppe Marra", "tags": "NIPS 2024,Poster", "abstract": "The design of interpretable deep learning models working in relational domains poses an open challenge: interpretable deep learning methods, such as Concept Bottleneck Models (CBMs), are not designed to solve relational problems, while relational deep learning models, such as Graph Neural Networks (GNNs), are not as interpretable as CBMs. To overcome these limitations, we propose Relational Concept Bottleneck Models (R-CBMs), a family of relational deep learning methods providing interpretable task predictions. As special cases, we show that R-CBMs are capable of both representing standard CBMs and message passing GNNs. To evaluate the effectiveness and versatility of these models, we designed a class of experimental problems, ranging from image classification to link prediction in knowledge graphs. In particular we show that R-CBMs (i) match generalization performance of existing relational black-boxes, (ii) support the generation of quantified concept-based explanations, (iii) effectively respond to test-time interventions, and (iv) withstand demanding settings including out-of-distribution scenarios, limited training data regimes, and scarce concept supervisions.", "pdf": "https://openreview.net/pdf/86af57df00b9b8061b91d29dae6ee858de86ed4c.pdf"} {"title": "Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models", "url": "https://openreview.net/forum?id=dOJ6CqWDf1", "detail_url": "https://openreview.net/forum?id=dOJ6CqWDf1", "authors": "Zhanhui Zhou,Zhixuan Liu,Jie Liu,Zhichen Dong,Chao Yang,Yu Qiao", "tags": "NIPS 2024,Poster", "abstract": "Large language models are usually fine-tuned to align with human preferences. However, fine-tuning a large language model can be challenging. In this work, we introduce $\\textit{weak-to-strong search}$, framing the alignment of a large language model as a test-time greedy search to maximize the log-probability difference between small tuned and untuned models while sampling from the frozen large model. 
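The guidance signal behind weak-to-strong search, the log-probability difference between a small tuned and a small untuned model applied to a frozen large model, can be written in a few lines. The snippet below is a hedged token-level rescoring variant for illustration; the paper itself frames the procedure as a greedy (beam-style) search over longer chunks, and `beta` is an assumed guidance weight.

```python
import torch

def guided_next_token(logits_large, logits_small_tuned, logits_small_untuned, beta=1.0):
    """All inputs: (vocab,) next-token logits from the three models."""
    guidance = torch.log_softmax(logits_small_tuned, -1) - \
               torch.log_softmax(logits_small_untuned, -1)   # what tuning changed
    scores = torch.log_softmax(logits_large, -1) + beta * guidance
    return int(scores.argmax())          # greedy choice under the guided score

V = 50
torch.manual_seed(0)
print(guided_next_token(torch.randn(V), torch.randn(V), torch.randn(V)))
```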
This method serves both as (1) a compute-efficient model up-scaling strategy that avoids directly tuning the large model and as (2) an instance of weak-to-strong generalization that enhances a strong model with weak test-time guidance.\nEmpirically, we demonstrate the flexibility of weak-to-strong search across different tasks. In controlled-sentiment generation and summarization, we use tuned and untuned $\texttt{gpt2}$s to improve the alignment of large models without additional training. Crucially, in a more difficult instruction-following benchmark, AlpacaEval 2.0, we show that reusing off-the-shelf small models (e.g., $\texttt{zephyr-7b-beta}$ and its untuned version) can improve the length-controlled win rates of both white-box and black-box large models against $\texttt{gpt-4-turbo}$ (e.g., $34.4\% \rightarrow 37.9\%$ for $\texttt{Llama-3-70B-Instruct}$ and $16.0\% \rightarrow 20.1\%$ for $\texttt{gpt-3.5-turbo-instruct}$), despite the small models' low win rates $\approx 10.0\%$.", "pdf": "https://openreview.net/pdf/b6aa8b7cf6e01e041ebe0be8f82b9ef92d403f10.pdf"} {"title": "On the Target-kernel Alignment: a Unified Analysis with Kernel Complexity", "url": "https://openreview.net/forum?id=hKcx2wa3P0", "detail_url": "https://openreview.net/forum?id=hKcx2wa3P0", "authors": "Chao Wang,Xin HE,Yuwen Wang,Junhui Wang", "tags": "NIPS 2024,Poster", "abstract": "This paper investigates the impact of alignment between the target function of interest and the kernel matrix on a variety of kernel-based methods based on a general loss belonging to a rich loss function family, which covers many commonly used methods in regression and classification problems. We consider the truncated kernel-based method (TKM) which is estimated within a reduced function space constructed by using the spectral truncation of the kernel matrix and compare its theoretical behavior to that of the standard kernel-based method (KM) under various settings. By using the kernel complexity function that quantifies the complexity of the induced function space, we derive the upper bounds for both TKM and KM, and further reveal their dependencies on the degree of target-kernel alignment. Specifically, for the alignment with polynomial decay, the established results indicate that under the just-aligned and weakly-aligned regimes, TKM and KM share the same learning rate. Yet, under the strongly-aligned regime, KM suffers from the saturation effect, while TKM can be continuously improved as the alignment becomes stronger. This further implies that TKM has a strong ability to capture the strong alignment and provide a theoretically guaranteed solution to eliminate the saturation effect. The minimax lower bound is also established for the squared loss to confirm the optimality of TKM. Extensive numerical experiments further support our theoretical findings. 
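For the squared loss, the truncated kernel-based method (TKM) described above amounts to kernel ridge regression restricted to the span of the kernel matrix's top eigenvectors. Below is an illustrative implementation under assumptions of my own (RBF kernel, in-sample prediction, specific regularization), not the paper's experimental code.

```python
import numpy as np

def tkm_fit_predict(K, y, rank, lam=1e-2):
    """K: (n, n) kernel matrix, y: (n,) targets; returns in-sample predictions."""
    evals, evecs = np.linalg.eigh(K)                  # ascending eigenvalues
    evals, evecs = evals[-rank:], evecs[:, -rank:]    # spectral truncation to top-r
    # Ridge solution restricted to the span of the top-r eigenvectors:
    # f = U_r diag(s / (s + lam)) U_r^T y, written in two steps.
    coef = evecs.T @ y / (evals + lam)
    return evecs @ (evals * coef)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
K = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** 2)  # RBF kernel
print(np.mean((tkm_fit_predict(K, y, rank=20) - y) ** 2))
```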
The Python code for reproducing the numerical experiments is available at https://github.com/wywangen.", "pdf": "https://openreview.net/pdf/9ff4ac54caa3654d76c73ead55825f3eec4b1f9e.pdf"} {"title": "Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity", "url": "https://openreview.net/forum?id=YxyYTcv3hp", "detail_url": "https://openreview.net/forum?id=YxyYTcv3hp", "authors": "Hanlin Gu,Win Kent Ong,Chee Seng Chan,Lixin Fan", "tags": "NIPS 2024,Poster", "abstract": "The advent of Federated Learning (FL) highlights the practical necessity for the \u2019right to be forgotten\u2019 for all clients, allowing them to request data deletion from the machine learning model\u2019s service provider. This necessity has spurred a growing demand for Federated Unlearning (FU). Feature unlearning has gained considerable attention due to its applications in unlearning sensitive, backdoor, and biased features. Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients, if not all, in the unlearning process. Furthermore, current research lacks an evaluation of the effectiveness of feature unlearning. To address these limitations, we define feature sensitivity in evaluating feature unlearning according to Lipschitz continuity. This metric characterizes the model output\u2019s rate of change or sensitivity to perturbations in the input feature. We then propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity. Extensive experimental results and theoretical analysis demonstrate the effectiveness of Ferrari across various feature unlearning scenarios, including sensitive, backdoor, and biased features. The code is publicly available at https://github.com/OngWinKent/Federated-Feature-Unlearning", "pdf": "https://openreview.net/pdf/a599b2e5c1592e3a62eca05001d074abb9d925ee.pdf"} {"title": "Road Network Representation Learning with the Third Law of Geography", "url": "https://openreview.net/forum?id=gPtiGRaVcE", "detail_url": "https://openreview.net/forum?id=gPtiGRaVcE", "authors": "Haicang Zhou,Weiming Huang,Yile Chen,Tiantian He,Gao Cong,Yew-Soon Ong", "tags": "NIPS 2024,Poster", "abstract": "Road network representation learning aims to learn compressed and effective vectorized representations for road segments that are applicable to numerous tasks. In this paper, we identify the limitations of existing methods, particularly their overemphasis on the distance effect as outlined in the First Law of Geography. In response, we propose to endow road network representation with the principles of the recent Third Law of Geography. To this end, we propose a novel graph contrastive learning framework that employs geographic configuration-aware graph augmentation and spectral negative sampling, ensuring that road segments with similar geographic configurations yield similar representations, and vice versa, aligning with the principles stated in the Third Law. The framework further fuses the Third Law with the First Law through a dual contrastive learning objective to effectively balance the implications of both laws. We evaluate our framework on two real-world datasets across three downstream tasks. 
The results show that the integration of the Third Law significantly improves the performance of road segment representations in downstream tasks.", "pdf": "https://openreview.net/pdf/4fcdb7005507bb48de506c94d62940dd74c92b2e.pdf"} {"title": "Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models", "url": "https://openreview.net/forum?id=nAIhvNy15T", "detail_url": "https://openreview.net/forum?id=nAIhvNy15T", "authors": "Tuomas Kynk\u00e4\u00e4nniemi,Miika Aittala,Tero Karras,Samuli Laine,Timo Aila,Jaakko Lehtinen", "tags": "NIPS 2024,Poster", "abstract": "Guidance is a crucial technique for extracting the best performance out of image-generating diffusion models. Traditionally, a constant guidance weight has been applied throughout the sampling chain of an image. We show that guidance is clearly harmful toward the beginning of the chain (high noise levels), largely unnecessary toward the end (low noise levels), and only beneficial in the middle. We thus restrict it to a specific range of noise levels, improving both the inference speed and result quality. This limited guidance interval improves the record FID in ImageNet-512 significantly, from 1.81 to 1.40. We show that it is quantitatively and qualitatively beneficial across different sampler parameters, network architectures, and datasets, including the large-scale setting of Stable Diffusion XL. We thus suggest exposing the guidance interval as a hyperparameter in all diffusion models that use guidance.", "pdf": "https://openreview.net/pdf/0e4e2aaba5bb103cca5994d4a6f202229790e0f6.pdf"} {"title": "Noise-Aware Differentially Private Regression via Meta-Learning", "url": "https://openreview.net/forum?id=99rOAM7Jfm", "detail_url": "https://openreview.net/forum?id=99rOAM7Jfm", "authors": "Ossi R\u00e4is\u00e4,Stratis Markou,Matthew Ashman,Wessel P Bruinsma,Marlon Tobaben,Antti Honkela,Richard E. Turner", "tags": "NIPS 2024,Poster", "abstract": "Many high-stakes applications require machine learning models that protect user privacy and provide well-calibrated, accurate predictions. While Differential Privacy (DP) is the gold standard for protecting user privacy, standard DP mechanisms typically significantly impair performance. One approach to mitigating this issue is pre-training models on simulated data before DP learning on the private data. In this work we go a step further, using simulated data to train a meta-learning model that combines the Convolutional Conditional Neural Process (ConvCNP) with an improved functional DP mechanism of Hall et al. (2013), yielding the DPConvCNP. DPConvCNP learns from simulated data how to map private data to a DP predictive model in one forward pass, and then provides accurate, well-calibrated predictions. We compare DPConvCNP with a DP Gaussian Process (GP) baseline with carefully tuned hyperparameters. 
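The limited guidance interval proposed in the paper above is a one-line change to a classifier-free-guidance sampler: apply guidance only when the noise level lies inside a chosen range. The sketch below is a minimal illustration; the interval endpoints, guidance weight, and denoiser stubs are assumptions for shape-checking the control flow, not the paper's tuned values.

```python
import numpy as np

def guided_denoise(x, sigma, denoise_cond, denoise_uncond,
                   w=4.0, sigma_lo=0.3, sigma_hi=5.0):
    d_cond = denoise_cond(x, sigma)
    if not (sigma_lo <= sigma <= sigma_hi):
        return d_cond                         # guidance disabled outside the interval
    d_uncond = denoise_uncond(x, sigma)
    return d_uncond + w * (d_cond - d_uncond) # standard CFG inside the interval

# Stub denoisers, just to exercise both branches.
f = lambda x, s: x * 0.9
g = lambda x, s: x * 0.8
x = np.ones(4)
print(guided_denoise(x, 10.0, f, g), guided_denoise(x, 1.0, f, g))
```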
The DPConvCNP outperforms the GP baseline, especially on non-Gaussian data, yet is much faster at test time and requires less tuning.", "pdf": "https://openreview.net/pdf/0fb03adaf5e2be6e05997e5de302ef7cb008677c.pdf"} {"title": "The Limits of Differential Privacy in Online Learning", "url": "https://openreview.net/forum?id=Cqr6E81iB7", "detail_url": "https://openreview.net/forum?id=Cqr6E81iB7", "authors": "Bo Li,Wei Wang,Peng Ye", "tags": "NIPS 2024,Poster", "abstract": "Differential privacy (DP) is a formal notion that restricts the privacy leakage of an algorithm when running on sensitive data, in which privacy-utility trade-off is one of the central problems in private data analysis. In this work, we investigate the fundamental limits of differential privacy in online learning algorithms and present evidence that separates three types of constraints: no DP, pure DP, and approximate DP. We first describe a hypothesis class that is online learnable under approximate DP but not online learnable under pure DP under the adaptive adversarial setting. This indicates that approximate DP must be adopted when dealing with adaptive adversaries. We then prove that any private online learner must make an infinite number of mistakes for almost all hypothesis classes. This essentially generalizes previous results and shows a strong separation between private and non-private settings since a finite mistake bound is always attainable (as long as the class is online learnable) when there is no privacy requirement.", "pdf": "https://openreview.net/pdf/9c08c85af72ee9b6f8c1c6ef01acd2a936d76499.pdf"} {"title": "Animate3D: Animating Any 3D Model with Multi-view Video Diffusion", "url": "https://openreview.net/forum?id=HB6KaCFiMN", "detail_url": "https://openreview.net/forum?id=HB6KaCFiMN", "authors": "Yanqin Jiang,Chaohui Yu,Chenjie Cao,Fan Wang,Weiming Hu,Jin Gao", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in 4D generation mainly focus on generating 4D content by distilling pre-trained text or single-view image conditioned models. It is inconvenient for them to take advantage of various off-the-shelf 3D assets with multi-view attributes, and their results suffer from spatiotemporal inconsistency owing to the inherent ambiguity in the supervision signals. In this work, we present Animate3D, a novel framework for animating any static 3D model. The core idea is two-fold: 1) We propose a novel multi-view video diffusion model (MV-VDM) conditioned on multi-view renderings of the static 3D object, which is trained on our presented large-scale multi-view video dataset (MV-Video). 2) Based on MV-VDM, we introduce a framework combining reconstruction and 4D Score Distillation Sampling (4D-SDS) to leverage the multi-view video diffusion priors for animating 3D objects. Specifically, for MV-VDM, we design a new spatiotemporal attention module to enhance spatial and temporal consistency by integrating 3D and video diffusion models. Additionally, we leverage the static 3D model\u2019s multi-view renderings as conditions to preserve its identity. For animating 3D models, an effective two-stage pipeline is proposed: we first reconstruct coarse motions directly from generated multi-view videos, followed by the introduced 4D-SDS to model fine-level motions. Benefiting from accurate motion learning, we could achieve straightforward mesh animation. Qualitative and quantitative experiments demonstrate that Animate3D significantly outperforms previous approaches. 
Data, code, and models are openly released.", "pdf": "https://openreview.net/pdf/261c406376f4ed859245239c6b560f9ca0d2ca22.pdf"} {"title": "Multi-Agent Imitation Learning: Value is Easy, Regret is Hard", "url": "https://openreview.net/forum?id=Qk3IBHyv6z", "detail_url": "https://openreview.net/forum?id=Qk3IBHyv6z", "authors": "Jingwu Tang,Gokul Swamy,Fei Fang,Steven Wu", "tags": "NIPS 2024,Poster", "abstract": "We study a multi-agent imitation learning (MAIL) problem where we take the perspective of a learner attempting to *coordinate* a group of agents based on demonstrations of an expert doing so. Most prior work in MAIL essentially reduces the problem to matching the behavior of the expert *within* the support of the demonstrations. While doing so is sufficient to drive the *value gap* between the learner and the expert to zero under the assumption that agents are non-strategic, it does not guarantee robustness to deviations by strategic agents. Intuitively, this is because strategic deviations can depend on a counterfactual quantity: the coordinator's recommendations outside of the state distribution their recommendations induce. In response, we initiate the study of an alternative objective for MAIL in Markov Games that we term the *regret gap*, which explicitly accounts for potential deviations by agents in the group. We first perform an in-depth exploration of the relationship between the value and regret gaps. We show that while the value gap can be efficiently minimized via a direct extension of single-agent IL algorithms, even *value equivalence* can lead to an arbitrarily large regret gap. This implies that achieving regret equivalence is harder than achieving value equivalence in MAIL. We then provide a pair of efficient reductions to no-regret online convex optimization that are capable of minimizing the regret gap *(a)* under a coverage assumption on the expert (MALICE) or *(b)* with access to a queryable expert (BLADES).", "pdf": "https://openreview.net/pdf/85c274327fc673a249af4459e7b9afd62b81b79d.pdf"} {"title": "OctreeOcc: Efficient and Multi-Granularity Occupancy Prediction Using Octree Queries", "url": "https://openreview.net/forum?id=os14qXhy55", "detail_url": "https://openreview.net/forum?id=os14qXhy55", "authors": "Yuhang Lu,Xinge ZHU,Tai Wang,Yuexin Ma", "tags": "NIPS 2024,Poster", "abstract": "Occupancy prediction has increasingly garnered attention in recent years for its fine-grained understanding of 3D scenes. Traditional approaches typically rely on dense, regular grid representations, which often leads to excessive computational demands and a loss of spatial details for small objects. This paper introduces OctreeOcc, an innovative 3D occupancy prediction framework that leverages the octree representation to adaptively capture valuable information in 3D, offering variable granularity to accommodate object shapes and semantic regions of varying sizes and complexities. In particular, we incorporate image semantic information to improve the accuracy of initial octree structures and design an effective rectification mechanism to refine the octree structure iteratively. 
Our extensive evaluations show that OctreeOcc not only surpasses state-of-the-art methods in occupancy prediction, but also achieves a 15%-24% reduction in computational overhead compared to dense-grid-based methods.", "pdf": "https://openreview.net/pdf/11a4477db5fea940dca91084f19ce4645c3cd0fa.pdf"} {"title": "Local Linearity: the Key for No-regret Reinforcement Learning in Continuous MDPs", "url": "https://openreview.net/forum?id=QEmsZoQ45M", "detail_url": "https://openreview.net/forum?id=QEmsZoQ45M", "authors": "Davide Maran,Alberto Maria Metelli,Matteo Papini,Marcello Restelli", "tags": "NIPS 2024,Poster", "abstract": "Achieving the no-regret property for Reinforcement Learning (RL) problems in continuous state and action-space environments is one of the major open problems in the field. Existing solutions either work under very specific assumptions or achieve bounds that are vacuous in some regimes. Furthermore, many structural assumptions\n are known to suffer from a provably unavoidable exponential dependence on the time horizon $H$ in the regret, which makes any possible solution unfeasible in practice. \n In this paper, we identify _local linearity_ as the feature that makes Markov Decision Processes (MDPs) both _learnable_ (sublinear regret) and _feasible_ (regret that is polynomial in $H$). \n We define a novel MDP representation class, namely _Locally Linearizable MDPs_, generalizing other representation classes like Linear MDPs and MDPs with low inherent Bellman error. \n Then, i) we introduce **Cinderella**, a no-regret algorithm for this general representation class, and ii) we show that all known learnable and feasible MDP families are representable in this class. \n We first show that all known feasible MDPs belong to a family that we call _Mildly Smooth MDPs_. Then, we show how any mildly smooth MDP can be represented as a Locally Linearizable MDP by an appropriate choice of representation. This way, **Cinderella** is shown to achieve state-of-the-art regret bounds for all previously known (and some new) continuous MDPs for which RL is learnable and feasible.", "pdf": "https://openreview.net/pdf/d6997acca3ed47fed9e23de54aa576aa1d92e7ee.pdf"} {"title": "AlphaMath Almost Zero: Process Supervision without Process", "url": "https://openreview.net/forum?id=VaXnxQ3UKo", "detail_url": "https://openreview.net/forum?id=VaXnxQ3UKo", "authors": "Guoxin Chen,Minpeng Liao,Chengxi Li,Kai Fan", "tags": "NIPS 2024,Poster", "abstract": "Although recent advancements in large language models (LLMs) have significantly improved their performance on various tasks, they still face challenges with complex and symbolic multi-step reasoning, particularly in mathematical reasoning. To bolster the mathematical reasoning capabilities of LLMs, most existing efforts concentrate on seeking assistance from either domain experts or GPT-4 for high-quality process-supervised data, which is not only expensive but also labor-intensive. In our study, we propose an innovative framework, AlphaMath, that bypasses the need for process annotations (from humans or GPTs) by leveraging Monte Carlo Tree Search (MCTS). This framework focuses on unleashing the potential of a well-pretrained LLM to autonomously enhance its mathematical reasoning. Specifically, we integrate a value model with the LLM, automatically generating both process supervision and step-level evaluation signals in MCTS. 
Furthermore, we propose an efficient inference strategy\u2014step-level beam search, where the value model is crafted to assist the policy model (i.e., LLM) in navigating more effective reasoning paths, rather than solely relying on prior probabilities. The experimental results on both in-domain and out-of-domain datasets demonstrate that even without GPT-4 or human-annotated process supervision, our AlphaMath framework achieves comparable or superior results to previous state-of-the-art methods.", "pdf": "https://openreview.net/pdf/54a6e2775b68f898c1253fbbb5a5778fec469b33.pdf"} {"title": "Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization", "url": "https://openreview.net/forum?id=FGJb0peY4R", "detail_url": "https://openreview.net/forum?id=FGJb0peY4R", "authors": "Jiarui Jiang,Wei Huang,Miao Zhang,Taiji Suzuki,Liqiang Nie", "tags": "NIPS 2024,Poster", "abstract": "Transformers have demonstrated great power in the recent development of large foundational models. In particular, the Vision Transformer (ViT) has brought revolutionary changes to the field of vision, achieving significant accomplishments on the experimental side. However, their theoretical capabilities, particularly in terms of generalization when trained to overfit training data, are still not fully understood. To address this gap, this work delves deeply into the \\textit{benign overfitting} perspective of transformers in vision. To this end, we study the optimization of a Transformer composed of a self-attention layer with softmax followed by a fully connected layer under gradient descent on a certain data distribution model. By developing techniques that address the challenges posed by softmax and the interdependent nature of multiple weights in transformer optimization, we successfully characterized the training dynamics and achieved generalization in post-training. Our results establish a sharp condition that can distinguish between the small test error phase and the large test error regime, based on the signal-to-noise ratio in the data model. The theoretical results are further verified by experimental simulation. To the best of our knowledge, this is the first work to characterize benign overfitting for Transformers.", "pdf": "https://openreview.net/pdf/a150caf3828f329dc59605cea4924ef3af84946d.pdf"} {"title": "Toward Semantic Gaze Target Detection", "url": "https://openreview.net/forum?id=BAmAFraxvf", "detail_url": "https://openreview.net/forum?id=BAmAFraxvf", "authors": "Samy Tafasca,Anshul Gupta,Victor Bros,Jean-marc Odobez", "tags": "NIPS 2024,Poster", "abstract": "From the onset of infanthood, humans naturally develop the ability to closely observe and interpret the visual gaze of others. This skill, known as gaze following, holds significance in developmental theory as it enables us to grasp another person\u2019s mental state, emotions, intentions, and more. In computer vision, gaze following is defined as the prediction of the pixel coordinates where a person in the image is focusing their attention. Existing methods in this research area have predominantly centered on pinpointing the gaze target by predicting a gaze heatmap or gaze point. However, a notable drawback of this approach is its limited practical value in gaze applications, as mere localization may not fully capture our primary interest \u2014 understanding the underlying semantics, such as the nature of the gaze target, rather than just its 2D pixel location. 
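Aside: a minimal sketch of the value-guided, step-level beam search idea described in the AlphaMath abstract above. The `policy_propose` and `value_score` callables, and the `<answer>` terminator, are hypothetical stand-ins for the LLM policy and value model; this is an illustration of the technique, not the paper's implementation.

```python
import heapq

def step_level_beam_search(policy_propose, value_score,
                           beam_width=3, expansions=5, max_steps=8):
    """Value-guided beam search over reasoning *steps* rather than tokens.

    policy_propose(prefix, k) -> k candidate next reasoning steps (strings).
    value_score(prefix)       -> scalar estimate of how promising a partial
                                 solution is (stands in for a value model).
    """
    beam = [("", 0.0)]  # (partial solution, value estimate)
    for _ in range(max_steps):
        candidates = []
        for prefix, _ in beam:
            for step in policy_propose(prefix, expansions):
                new_prefix = prefix + step
                candidates.append((new_prefix, value_score(new_prefix)))
        # Keep the top candidates by *value*, not by the policy's prior alone.
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
        if all(p.endswith("<answer>") for p, _ in beam):  # hypothetical stop marker
            break
    return max(beam, key=lambda c: c[1])[0]
```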
To address this gap, we extend the gaze following task, and introduce a novel architecture that simultaneously predicts the localization and semantic label of the gaze target. We devise a pseudo-annotation pipeline for the GazeFollow dataset, propose a new benchmark, develop an experimental protocol, and design a suitable baseline for comparison. Our method sets a new state-of-the-art on the main GazeFollow benchmark for localization and achieves competitive results in the recognition task on both datasets compared to the baseline, with 40% fewer parameters.", "pdf": "https://openreview.net/pdf/15fe719b032a7739acdc50d46264301a88656f7a.pdf"} {"title": "High Rank Path Development: an approach to learning the filtration of stochastic processes", "url": "https://openreview.net/forum?id=w28i9oe9Xr", "detail_url": "https://openreview.net/forum?id=w28i9oe9Xr", "authors": "Jiajie Tao,Hao Ni,Chong Liu", "tags": "NIPS 2024,Poster", "abstract": "Since the weak convergence for stochastic processes does not account for the growth of information over time which is represented by the underlying filtration, a slightly erroneous stochastic model in weak topology may cause huge loss in multi-period decision-making problems. To address such discontinuities, Aldous introduced the extended weak convergence, which can fully characterise all essential properties, including the filtration, of stochastic processes; however, it was considered to be hard to find efficient numerical implementations. In this paper, we introduce a novel metric called High Rank PCF Distance (HRPCFD) for extended weak convergence based on the high rank path development method from rough path theory, which also defines the characteristic function for measure-valued processes. We then show that such HRPCFD admits many favourable analytic properties which allows us to design an efficient algorithm for training HRPCFD from data and construct the HRPCF-GAN by using HRPCFD as the discriminator for conditional time series generation. Our numerical experiments on both hypothesis testing and generative modelling validate the outperformance of our approach compared with several state-of-the-art methods, highlighting its potential in broad applications of synthetic time series generation and in addressing classic financial and economic challenges, such as optimal stopping or utility maximisation problems. Code is available at https://github.com/DeepIntoStreams/High-Rank-PCF-GAN.git.", "pdf": "https://openreview.net/pdf/eb7aecacaf5d45f5ac40f8a3fe78d6f3122cb6e7.pdf"} {"title": "MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models", "url": "https://openreview.net/forum?id=5uUleAsYUG", "detail_url": "https://openreview.net/forum?id=5uUleAsYUG", "authors": "Zunnan Xu,Yukang Lin,Haonan Han,Sicheng Yang,Ronghui Li,Yachao Zhang,Xiu Li", "tags": "NIPS 2024,Poster", "abstract": "Gesture synthesis is a vital realm of human-computer interaction, with wide-ranging applications across various fields like film, robotics, and virtual reality. \nRecent advancements have utilized the diffusion model to improve gesture synthesis. \nHowever, the high computational complexity of these techniques limits their application in practice. \nIn this study, we explore the potential of state space models (SSMs).\nDirect application of SSMs in gesture synthesis encounters difficulties, which stem primarily from the diverse movement dynamics of various body parts. 
\nThe generated gestures may also exhibit unnatural jittering issues.\nTo address these, we implement a two-stage modeling strategy with discrete motion priors to enhance the quality of gestures.\nBuilt upon the selective scan mechanism, we introduce MambaTalk, which integrates hybrid fusion modules with local and global scans to refine latent space representations.\nSubjective and objective experiments demonstrate that our method surpasses the performance of state-of-the-art models. Our project is publicly available at~\url{https://kkakkkka.github.io/MambaTalk/}.", "pdf": "https://openreview.net/pdf/9ffa0790f8744428324975380ccc9d8fc80992b7.pdf"} {"title": "Optimal Aggregation of Prediction Intervals under Unsupervised Domain Shift", "url": "https://openreview.net/forum?id=ldXyNSvXEr", "detail_url": "https://openreview.net/forum?id=ldXyNSvXEr", "authors": "Jiawei Ge,Debarghya Mukherjee,Jianqing Fan", "tags": "NIPS 2024,Poster", "abstract": "As machine learning models are increasingly deployed in dynamic environments, it becomes paramount to assess and quantify uncertainties associated with distribution shifts.\nA distribution shift occurs when the underlying data-generating process changes, leading to a deviation in the model's performance. \nThe prediction interval, which captures the range of likely outcomes for a given prediction, serves as a crucial tool for characterizing uncertainties induced by the underlying distribution. \nIn this paper, we propose methodologies for aggregating prediction intervals to obtain one with minimal width and adequate coverage on the target domain under unsupervised domain shift, under which we have labeled samples from a related source domain and unlabeled covariates from the target domain.\nOur analysis encompasses scenarios where the source and the target domain are related via i) a bounded density ratio, and ii) a measure-preserving transformation.\nOur proposed methodologies are computationally efficient and easy to implement. Beyond illustrating the performance of our method through real-world datasets, we also delve into the theoretical details. This includes establishing rigorous theoretical guarantees, coupled with finite sample bounds, regarding the coverage and width of our prediction intervals. Our approach excels in practical applications and is underpinned by a solid theoretical framework, ensuring its reliability and effectiveness across diverse contexts.", "pdf": "https://openreview.net/pdf/4b5e7856a214e261abbca6b3022232ac3b3f4ac3.pdf"} {"title": "LoTLIP: Improving Language-Image Pre-training for Long Text Understanding", "url": "https://openreview.net/forum?id=pc4GSBi1Hx", "detail_url": "https://openreview.net/forum?id=pc4GSBi1Hx", "authors": "Wei Wu,Kecheng Zheng,Shuailei Ma,Fan Lu,Yuxin Guo,Yifei Zhang,Wei Chen,Qingpei Guo,Yujun Shen,Zheng-Jun Zha", "tags": "NIPS 2024,Poster", "abstract": "In this work, we empirically confirm that the key reason language-image pre-trained models struggle to understand long text is that the training images are usually paired with short captions, leaving certain tokens easily overshadowed by salient tokens. To address this problem, our initial attempt is to relabel the data with long captions; however, directly learning from such captions may lead to performance degradation in understanding short text (e.g., in the image classification task). 
Then, after incorporating corner tokens to aggregate diverse textual information, we manage to help the model catch up to its original level of short text understanding yet greatly enhance its capability of long text understanding. We further look into whether the model can continuously benefit from longer captions and notice a clear trade-off between the performance and the efficiency. Finally, we validate the effectiveness of our approach using a self-constructed large-scale dataset, which consists of 100M long caption oriented text-image pairs. It is noteworthy that, on the task of long-text image retrieval, we beat the competitor using long captions with 11.1% improvement (i.e., from 72.62% to 83.72%). The project page is available at https://wuw2019.github.io/lot-lip.", "pdf": "https://openreview.net/pdf/5d189a3b433819f650b5bed46c5f4a55567063e8.pdf"} {"title": "Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation", "url": "https://openreview.net/forum?id=GCmmy4At6i", "detail_url": "https://openreview.net/forum?id=GCmmy4At6i", "authors": "Jintao Tong,Yixiong Zou,Yuhua Li,Ruixuan Li", "tags": "NIPS 2024,Poster", "abstract": "Cross-domain few-shot segmentation (CD-FSS) is proposed to first pre-train the model on a large-scale source-domain dataset, and then transfer the model to data-scarce target-domain datasets for pixel-level segmentation. The significant domain gap between the source and target datasets leads to a sharp decline in the performance of existing few-shot segmentation (FSS) methods in cross-domain scenarios. In this work, we discover an intriguing phenomenon: simply filtering different frequency components for target domains can lead to a significant performance improvement, sometimes even as high as 14% mIoU. Then, we delve into this phenomenon for an interpretation, and find such improvements stem from the reduced inter-channel correlation in feature maps, which benefits CD-FSS with enhanced robustness against domain gaps and larger activated regions for segmentation. Based on this, we propose a lightweight frequency masker, which further reduces channel correlations by an Amplitude-Phase Masker (APM) module and an Adaptive Channel Phase Attention (ACPA) module. Notably, APM introduces only 0.01% additional parameters but improves the average performance by over 10%, and ACPA imports only 2.5% parameters but further improves the performance by over 1.5%, which significantly surpasses the state-of-the-art CD-FSS methods.", "pdf": "https://openreview.net/pdf/37326306831a1d8cb56271abf7eaa6d8463e6f41.pdf"} {"title": "Direct Unlearning Optimization for Robust and Safe Text-to-Image Models", "url": "https://openreview.net/forum?id=UdXE5V2d0O", "detail_url": "https://openreview.net/forum?id=UdXE5V2d0O", "authors": "Yong-Hyun Park,Sangdoo Yun,Jin-Hwa Kim,Junho Kim,Geonhui Jang,Yonghyun Jeong,Junghyo Jo,Gayoung Lee", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in text-to-image (T2I) models have greatly benefited from large-scale datasets, but they also pose significant risks due to the potential generation of unsafe content. To mitigate this issue, researchers proposed unlearning techniques that attempt to induce the model to unlearn potentially harmful prompts. However, these methods are easily bypassed by adversarial attacks, making them unreliable for ensuring the safety of generated images. 
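Aside: a loose sketch of the amplitude-phase filtering idea behind the frequency masker described in the CD-FSS abstract above. The fixed low-pass keep rule is an illustrative assumption; the paper's APM module learns its masking, and real FFT layouts place low frequencies at the corners, so treat this purely as a shape-level demonstration.

```python
import numpy as np

def amplitude_phase_mask(feature_map, keep_ratio=0.7):
    """Transform a (C, H, W) feature map to the frequency domain, damp part
    of the amplitude spectrum while keeping the phase, and transform back.
    The keep rule below is an illustrative assumption, not the learned APM.
    """
    c, h, w = feature_map.shape
    spec = np.fft.fft2(feature_map, axes=(-2, -1))
    amplitude, phase = np.abs(spec), np.angle(spec)

    # Illustrative mask: keep one block of frequency bins, damp the rest.
    mask = np.zeros((h, w))
    mask[: int(h * keep_ratio), : int(w * keep_ratio)] = 1.0

    filtered = (amplitude * mask) * np.exp(1j * phase)
    return np.fft.ifft2(filtered, axes=(-2, -1)).real

features = np.random.randn(64, 32, 32)   # toy (channels, H, W) feature map
print(amplitude_phase_mask(features).shape)  # (64, 32, 32)
```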
In this paper, we propose Direct Unlearning Optimization (DUO), a novel framework for removing NSFW content from T2I models while preserving their performance on unrelated topics. DUO employs a preference optimization approach using curated paired image data, ensuring that the model learns to remove unsafe visual concepts while retaining unrelated features. Furthermore, we introduce an output-preserving regularization term to maintain the model's generative capabilities on safe content. Extensive experiments demonstrate that DUO can robustly defend against various state-of-the-art red teaming methods without significant performance degradation on unrelated topics, as measured by FID and CLIP scores. Our work contributes to the development of safer and more reliable T2I models, paving the way for their responsible deployment in both closed-source and open-source scenarios.", "pdf": "https://openreview.net/pdf/838bbe4be4a0fb4acbc9d364f12f3119c503e167.pdf"} {"title": "DePLM: Denoising Protein Language Models for Property Optimization", "url": "https://openreview.net/forum?id=MU27zjHBcW", "detail_url": "https://openreview.net/forum?id=MU27zjHBcW", "authors": "Zeyuan Wang,Keyan Ding,Ming Qin,Xiaotong Li,Xiang Zhuang,Yu Zhao,Jianhua Yao,Qiang Zhang,Huajun Chen", "tags": "NIPS 2024,Poster", "abstract": "Protein optimization is a fundamental biological task aimed at enhancing the performance of proteins by modifying their sequences. Computational methods primarily rely on evolutionary information (EI) encoded by protein language models (PLMs) to predict the fitness landscape for optimization. However, these methods suffer from a few limitations. (1) Evolutionary processes involve the simultaneous consideration of multiple functional properties, often overshadowing the specific property of interest. (2) Measurements of these properties tend to be tailored to experimental conditions, leading to reduced generalizability of trained models to novel proteins. To address these limitations, we introduce Denoising Protein Language Models (DePLM), a novel approach that refines the evolutionary information embodied in PLMs for improved protein optimization. Specifically, we conceptualize EI as comprising both property-relevant and irrelevant information, with the latter acting as \u201cnoise\u201d for the optimization task at hand. Our approach involves denoising this EI in PLMs through a diffusion process conducted in the rank space of property values, thereby enhancing model generalization and ensuring dataset-agnostic learning. Extensive experimental results have demonstrated that DePLM not only surpasses the state-of-the-art in mutation effect prediction but also exhibits strong generalization capabilities for novel proteins.", "pdf": "https://openreview.net/pdf/8f9fddc96bbfe2d55f21a3238ba386cb0b43d243.pdf"} {"title": "SfPUEL: Shape from Polarization under Unknown Environment Light", "url": "https://openreview.net/forum?id=skeopn3q5Y", "detail_url": "https://openreview.net/forum?id=skeopn3q5Y", "authors": "Youwei Lyu,Heng Guo,Kailong Zhang,Si Li,Boxin Shi", "tags": "NIPS 2024,Poster", "abstract": "Shape from polarization (SfP) benefits from advancements like polarization cameras for single-shot normal estimation, but its performance heavily relies on light conditions. This paper proposes SfPUEL, an end-to-end SfP method to jointly estimate surface normal and material under unknown environment light. 
To handle this challenging light condition, we design a transformer-based framework for enhancing the perception of global context features. We further propose to integrate photometric stereo (PS) priors from pretrained models to enrich extracted features for high-quality normal predictions. As metallic and dielectric materials exhibit different BRDFs, SfPUEL additionally predicts dielectric and metallic material segmentation to further boost performance. Experimental results on synthetic and our collected real-world dataset demonstrate that SfPUEL significantly outperforms existing SfP and single-shot normal estimation methods. The code and dataset are available at https://github.com/YouweiLyu/SfPUEL.", "pdf": "https://openreview.net/pdf/0a4b6b6da5aef4b17615551776a194e635f96ae2.pdf"} {"title": "Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text", "url": "https://openreview.net/forum?id=08A6X7FSTs", "detail_url": "https://openreview.net/forum?id=08A6X7FSTs", "authors": "Xinyang Li,Zhangyu Lai,Linning Xu,Yansong Qu,Liujuan Cao,Shengchuan Zhang,Bo Dai,Rongrong Ji", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in 3D generation have leveraged synthetic datasets with ground truth 3D assets and predefined camera trajectories. However, the potential of adopting real-world datasets, which can produce significantly more realistic 3D scenes, remains largely unexplored. In this work, we delve into the key challenge of the complex and scene-specific camera trajectories found in real-world captures. We introduce Director3D, a robust open-world text-to-3D generation framework, designed to generate both real-world 3D scenes and adaptive camera trajectories. To achieve this, (1) we first utilize a Trajectory Diffusion Transformer, acting as the \emph{Cinematographer}, to model the distribution of camera trajectories based on textual descriptions. Next, a Gaussian-driven Multi-view Latent Diffusion Model serves as the \emph{Decorator}, modeling the image sequence distribution given the camera trajectories and texts. This model, fine-tuned from a 2D diffusion model, directly generates pixel-aligned 3D Gaussians as an immediate 3D scene representation for consistent denoising. Lastly, the 3D Gaussians are further refined by a novel SDS++ loss as the \emph{Detailer}, which incorporates the prior of the 2D diffusion model. Extensive experiments demonstrate that Director3D outperforms existing methods, offering superior performance in real-world 3D generation.", "pdf": "https://openreview.net/pdf/6bfc6cdeac3c3cb12e647bad9e9489fe0a16fd03.pdf"} {"title": "Streaming Bayes GFlowNets", "url": "https://openreview.net/forum?id=Nv0Vvz588D", "detail_url": "https://openreview.net/forum?id=Nv0Vvz588D", "authors": "Tiago Silva,Daniel Augusto de Souza,Diego Mesquita", "tags": "NIPS 2024,Poster", "abstract": "Bayes' rule naturally allows for inference refinement in a streaming fashion, without the need to recompute posteriors from scratch whenever new data arrives. In principle, Bayesian streaming is straightforward: we update our prior with the available data and use the resulting posterior as a prior when processing the next data chunk. In practice, however, this recipe entails i) approximating an intractable posterior at each time step; and ii) encapsulating results appropriately to allow for posterior propagation. 
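Aside: the streaming recipe just described (the posterior after one chunk becomes the prior for the next) in its simplest conjugate form, a Beta-Bernoulli example. This is a minimal sketch of the generic idea, not the GFlowNet construction the abstract goes on to develop.

```python
def streaming_beta_bernoulli(chunks, alpha=1.0, beta=1.0):
    """Bayesian streaming with a conjugate model: a Beta(alpha, beta) prior
    over a coin bias, updated chunk by chunk from Bernoulli observations.
    Each posterior is exactly the prior for the next chunk.
    """
    for chunk in chunks:
        heads = sum(chunk)
        alpha, beta = alpha + heads, beta + len(chunk) - heads
        yield alpha, beta  # current posterior parameters

# Three data chunks arriving over time.
for a, b in streaming_beta_bernoulli([[1, 1, 0], [1, 0], [0, 0, 0, 1]]):
    print(f"posterior Beta({a}, {b}), mean {a / (a + b):.3f}")
```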
For continuous state spaces, variational inference (VI) is particularly convenient due to its scalability and the tractability of variational posteriors. For discrete state spaces, however, state-of-the-art VI results in analytically intractable approximations that are ill-suited for streaming settings. To enable streaming Bayesian inference over discrete parameter spaces, we propose streaming Bayes GFlowNets (abbreviated as SB-GFlowNets) by leveraging the recently proposed GFlowNets --- a powerful class of amortized samplers for discrete compositional objects. Notably, SB-GFlowNet approximates the initial posterior using a standard GFlowNet and subsequently updates it using a tailored procedure that requires only the newly observed data. Our case studies in linear preference learning and phylogenetic inference showcase the effectiveness of SB-GFlowNets in sampling from an unnormalized posterior in a streaming setting. As expected, we also observe that SB-GFlowNets is significantly faster than repeatedly training a GFlowNet from scratch to sample from the full posterior.", "pdf": "https://openreview.net/pdf/2d512f521a413b3ed49ebfe14d93d839a48bc0cc.pdf"} {"title": "How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach", "url": "https://openreview.net/forum?id=ZjgcYMkCmX", "detail_url": "https://openreview.net/forum?id=ZjgcYMkCmX", "authors": "Filippo Lazzati,Mirco Mutti,Alberto Maria Metelli", "tags": "NIPS 2024,Poster", "abstract": "In online Inverse Reinforcement Learning (IRL), the learner can collect samples about the dynamics of the environment to improve its\nestimate of the reward function. Since IRL suffers from identifiability issues, many theoretical works on online IRL focus on estimating the entire set of rewards that explain the demonstrations, named the *feasible reward set*. However, none of the algorithms available in the literature can scale to problems with large state spaces. In this paper, we focus on the online IRL problem in Linear Markov Decision\nProcesses (MDPs). We show that the structure offered by Linear MDPs is not sufficient for efficiently estimating the feasible set when the state space is large. As a consequence, we introduce the novel framework of *rewards compatibility*, which generalizes the notion of feasible set, and we develop CATY-IRL, a sample efficient algorithm whose complexity is independent of the size of the state space in Linear MDPs. When restricted to the tabular setting, we demonstrate that CATY-IRL is minimax optimal up to logarithmic factors. As a by-product, we show that Reward-Free Exploration (RFE) enjoys the same worst-case rate, improving over the state-of-the-art lower bound. Finally, we devise a unifying framework for IRL and RFE that may be of independent interest.", "pdf": "https://openreview.net/pdf/6cb6e5b62c2f29db7ddf045a0b23ef02a1ecf85a.pdf"} {"title": "Iteratively Refined Behavior Regularization for Offline Reinforcement Learning", "url": "https://openreview.net/forum?id=RbS7RWxw3r", "detail_url": "https://openreview.net/forum?id=RbS7RWxw3r", "authors": "Yi Ma,Jianye HAO,Xiaohan Hu,YAN ZHENG,Chenjun Xiao", "tags": "NIPS 2024,Poster", "abstract": "One of the fundamental challenges for offline reinforcement learning (RL) is ensuring robustness to data distribution. Whether the data originates from a near-optimal policy or not, we anticipate that an algorithm should demonstrate its ability to learn an effective control policy that seamlessly aligns with the inherent distribution of offline data. 
Unfortunately, behavior regularization, a simple yet effective offline RL algorithm, tends to struggle in this regard. In this paper, we propose a new algorithm that substantially enhances behavior-regularization based on conservative policy iteration. Our key observation is that by iteratively refining the reference policy used for behavior regularization, conservative policy update guarantees gradual improvement, while also implicitly avoiding querying out-of-sample actions to prevent catastrophic learning failures. We prove that in the tabular setting this algorithm is capable of learning the optimal policy covered by the offline dataset, commonly referred to as the in-sample optimal policy. We then explore several implementation details of the algorithm when function approximations are applied. The resulting algorithm is easy to implement, requiring only a few lines of code modification to existing methods. Experimental results on the D4RL benchmark indicate that our method outperforms previous state-of-the-art baselines in most tasks, clearly demonstrating its superiority over behavior regularization.", "pdf": "https://openreview.net/pdf/8f5533c62834c0d55de46bf3702c10c0cf550b24.pdf"} {"title": "CSPG: Crossing Sparse Proximity Graphs for Approximate Nearest Neighbor Search", "url": "https://openreview.net/forum?id=ohvXBIPV7e", "detail_url": "https://openreview.net/forum?id=ohvXBIPV7e", "authors": "Ming Yang,Yuzheng Cai,Weiguo Zheng", "tags": "NIPS 2024,Poster", "abstract": "The state-of-the-art approximate nearest neighbor search (ANNS) algorithm builds a large proximity graph on the dataset and performs a greedy beam search, which may bring many unnecessary explorations. We develop a novel framework, namely *crossing sparse proximity graph (CSPG)*, based on random partitioning of the dataset. It produces a smaller sparse proximity graph for each partition and routing vectors that bind all the partitions. An efficient two-staged approach is designed for exploring *CSPG*, with fast approaching and cross-partition expansion. We theoretically prove that *CSPG* can accelerate the existing graph-based ANNS algorithms by reducing unnecessary explorations. In addition, we conduct extensive experiments on benchmark datasets. The experimental results confirm that the existing graph-based methods can be significantly outperformed by incorporating *CSPG*, achieving 1.5x to 2x speedups in *QPS* in almost all recalls.", "pdf": "https://openreview.net/pdf/5a92e17bff7f7b824c429943475cdb44a59be721.pdf"} {"title": "Multi-Agent Domain Calibration with a Handful of Offline Data", "url": "https://openreview.net/forum?id=hkBhX5ABjk", "detail_url": "https://openreview.net/forum?id=hkBhX5ABjk", "authors": "Tao Jiang,Lei Yuan,Lihe Li,Cong Guan,Zongzhang Zhang,Yang Yu", "tags": "NIPS 2024,Poster", "abstract": "The shift in dynamics results in significant performance degradation of policies trained in the source domain when deployed in a different target domain, posing a challenge for the practical application of reinforcement learning (RL) in real-world scenarios. Domain transfer methods aim to bridge this dynamics gap through techniques such as domain adaptation or domain calibration. While domain adaptation involves refining the policy through extensive interactions in the target domain, it may not be feasible for sensitive fields like healthcare and autonomous driving. 
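Aside: a generic sketch of the greedy beam search baseline that graph-based ANNS methods, including the CSPG abstract above, build on. The graph, vectors, and entry point are toy assumptions; this is the plain single-graph routine, not CSPG's two-stage cross-partition search.

```python
import heapq
import numpy as np

def greedy_beam_search(graph, vectors, query, entry, beam_width=4, k=3):
    """Greedy beam search on a proximity graph. `graph` maps a node id to
    its neighbor ids; returns the k nearest retained candidates."""
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {entry}
    frontier = [(dist(entry), entry)]        # min-heap of candidates to expand
    best = [(-dist(entry), entry)]           # max-heap (negated) of current top results

    while frontier:
        d, node = heapq.heappop(frontier)
        if len(best) >= k and d > -best[0][0]:
            break                            # no remaining candidate can improve top-k
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (dist(nb), nb))
                heapq.heappush(best, (-dist(nb), nb))
                if len(best) > max(k, beam_width):
                    heapq.heappop(best)      # drop the farthest retained candidate
    return sorted((-d, i) for d, i in best)[:k]

vecs = np.random.randn(6, 2)                 # toy dataset
g = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
print(greedy_beam_search(g, vecs, np.zeros(2), entry=0))
```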
On the other hand, offline domain calibration utilizes only static data from the target domain to adjust the physics parameters of the source domain (e.g., a simulator) to align with the target dynamics, enabling the direct deployment of the trained policy without sacrificing performance, which makes it the most promising approach for policy deployment. However, existing techniques primarily rely on evolution algorithms for calibration, resulting in low sample efficiency.\nTo tackle this issue, we propose a novel framework, Madoc (\\textbf{M}ulti-\\textbf{a}gent \\textbf{do}main \\textbf{c}alibration). Firstly, we formulate a bandit RL objective to match the target trajectory distribution by learning a couple of classifiers. We then address the challenge of a large domain parameter space by modeling domain calibration as a cooperative multi-agent reinforcement learning (MARL) problem. Specifically, we utilize a Variational Autoencoder (VAE) to automatically cluster physics parameters with similar effects on the dynamics, grouping them into distinct agents. These grouped agents train calibration policies in a coordinated manner to adjust multiple parameters using MARL.\nOur empirical evaluation on 21 offline locomotion tasks in D4RL and NeoRL benchmarks showcases the superior performance of our method compared to strong existing offline model-based RL, offline domain calibration, and hybrid offline-and-online RL baselines.", "pdf": "https://openreview.net/pdf/5656a2f21c1001a3a55e7683e5a3d1a20e36023e.pdf"} {"title": "MALT Powers Up Adversarial Attacks", "url": "https://openreview.net/forum?id=bCqIx5Q8qX", "detail_url": "https://openreview.net/forum?id=bCqIx5Q8qX", "authors": "Odelia Melamed,Gilad Yehudai,Adi Shamir", "tags": "NIPS 2024,Poster", "abstract": "Current adversarial attacks for multi-class classifiers choose potential adversarial target classes naively based on the classifier's confidence levels. We present a novel adversarial targeting method, \\textit{MALT - Mesoscopic Almost Linearity Targeting}, based on local almost linearity assumptions. Our attack wins over the current state-of-the-art AutoAttack on the standard benchmark datasets CIFAR-100 and ImageNet, and for different robust models. In particular, our attack uses a \\emph{five times faster} attack strategy than AutoAttack's while successfully matching AutoAttack's successes and attacking additional samples that were previously out of reach. We additionally prove formally and demonstrate empirically that our targeting method, although inspired by linear predictors, also applies to non-linear models.", "pdf": "https://openreview.net/pdf/1db24e45bf9382ea106286a54476e746b8d81e7a.pdf"} {"title": "On Divergence Measures for Training GFlowNets", "url": "https://openreview.net/forum?id=N5H4z0Pzvn", "detail_url": "https://openreview.net/forum?id=N5H4z0Pzvn", "authors": "Tiago Silva,Eliezer de Souza da Silva,Diego Mesquita", "tags": "NIPS 2024,Poster", "abstract": "Generative Flow Networks (GFlowNets) are amortized samplers of unnormalized distributions over compositional objects with applications to causal discovery, NLP, and drug design. Recently, it was shown that GFlowNets can be framed as a hierarchical variational inference (HVI) method for discrete distributions. Despite this equivalence, attempts to train GFlowNets using traditional divergence measures as learning objectives were unsuccessful. 
Instead, current approaches for training these models rely on minimizing the log-squared difference between a proposal (forward policy) and a target (backward policy) distribution. In this work, we first formally extend the relationship between GFlowNets and HVI to distributions on arbitrary measurable topological spaces. Then, we empirically show that the ineffectiveness of divergence-based learning of GFlowNets is due to large gradient variance of the corresponding stochastic objectives. To address this issue, we devise a collection of provably variance-reducing control variates for gradient estimation based on the REINFORCE leave-one-out estimator. Our experimental results suggest that the resulting algorithms often accelerate training convergence when compared against previous approaches. All in all, our work contributes by narrowing the gap between GFlowNet training and HVI, paving the way for algorithmic advancements inspired by the divergence minimization viewpoint.", "pdf": "https://openreview.net/pdf/44cfe36d1ec856c06258ae2df7850bfe62cc90bf.pdf"} {"title": "Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing Flows", "url": "https://openreview.net/forum?id=amJyuVqSaf", "detail_url": "https://openreview.net/forum?id=amJyuVqSaf", "authors": "Alberto Cabezas,Louis Sharrock,Christopher Nemeth", "tags": "NIPS 2024,Poster", "abstract": "Continuous normalizing flows (CNFs) learn the probability path between a reference distribution and a target distribution by modeling the vector field generating said path using neural networks. Recently, Lipman et al. (2022) introduced a simple and inexpensive method for training CNFs in generative modeling, termed flow matching (FM). In this paper, we repurpose this method for probabilistic inference by incorporating Markovian sampling methods in evaluating the FM objective, and using the learned CNF to improve Monte Carlo sampling. Specifically, we propose an adaptive Markov chain Monte Carlo (MCMC) algorithm, which combines a local Markov transition kernel with a non-local, flow-informed transition kernel, defined using a CNF. This CNF is adapted on-the-fly using samples from the Markov chain, which are used to specify the probability path for the FM objective. Our method also includes an adaptive tempering mechanism that allows the discovery of multiple modes in the target distribution. Under mild assumptions, we establish convergence of our method to a local optimum of the FM objective. We then benchmark our approach on several synthetic and real-world examples, achieving similar performance to other state-of-the-art methods but often at a significantly lower computational cost.", "pdf": "https://openreview.net/pdf/c1d22a01947642f18fb9a1f6b81d7b6a0b459450.pdf"} {"title": "SCaR: Refining Skill Chaining for Long-Horizon Robotic Manipulation via Dual Regularization", "url": "https://openreview.net/forum?id=RnxJc4vTVi", "detail_url": "https://openreview.net/forum?id=RnxJc4vTVi", "authors": "Zixuan Chen,Ze Ji,Jing Huo,Yang Gao", "tags": "NIPS 2024,Poster", "abstract": "Long-horizon robotic manipulation tasks typically involve a series of interrelated sub-tasks spanning multiple execution stages. Skill chaining offers a feasible solution for these tasks by pre-training the skills for each sub-task and linking them sequentially. However, imperfections in skill learning or disturbances during execution can lead to the accumulation of errors in the skill chaining process, resulting in execution failures. 
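Aside: the REINFORCE leave-one-out (RLOO) baseline mentioned in the divergence-measures abstract above, in its generic form. The rewards here are toy data; the paper builds control variates on this estimator for GFlowNet objectives, while this sketch only shows the baseline computation itself.

```python
import numpy as np

def rloo_advantages(rewards):
    """REINFORCE leave-one-out: each sample's baseline is the mean reward of
    the *other* samples, which keeps the gradient estimator unbiased while
    reducing its variance. The returned advantages multiply grad log-prob.
    """
    rewards = np.asarray(rewards, dtype=float)
    n = rewards.size
    loo_mean = (rewards.sum() - rewards) / (n - 1)
    return rewards - loo_mean

# Toy check: leave-one-out advantages always sum to exactly zero.
r = np.random.randn(8) + 1.0
adv = rloo_advantages(r)
print(adv, adv.sum())
```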
In this paper, we investigate how to achieve stable and smooth skill chaining for long-horizon robotic manipulation tasks. Specifically, we propose a novel skill chaining framework called Skill Chaining via Dual Regularization (SCaR). This framework applies dual regularization to sub-task skill pre-training and fine-tuning, which not only enhances the intra-skill dependencies within each sub-task skill but also reinforces the inter-skill dependencies between sequential sub-task skills, thus ensuring smooth skill chaining and stable long-horizon execution. We evaluate the SCaR framework on two representative long-horizon robotic manipulation simulation benchmarks: IKEA furniture assembly and kitchen organization. Additionally, we conduct a simple real-world validation in tabletop robot pick-and-place tasks. The experimental results show that, with the support of SCaR, the robot achieves a higher success rate in long-horizon tasks compared to relevant baselines and demonstrates greater robustness to perturbations.", "pdf": "https://openreview.net/pdf/50be19a804bdb6639d65e2c9e02d12774349979c.pdf"} {"title": "Discovery of the Hidden World with Large Language Models", "url": "https://openreview.net/forum?id=w50ICQC6QJ", "detail_url": "https://openreview.net/forum?id=w50ICQC6QJ", "authors": "Chenxi Liu,Yongqiang Chen,Tongliang Liu,Mingming Gong,James Cheng,Bo Han,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Revealing the underlying causal mechanisms in the real world is the key to the development of science. Despite the progress in the past decades, traditional causal discovery approaches (CDs) mainly rely on high-quality measured variables, usually given by human experts, to find causal relations. The lack of well-defined high-level variables in many real-world applications has already been a longstanding roadblock to a broader application of CDs. To this end, this paper presents Causal representatiOn AssistanT (COAT) that introduces large language models (LLMs) to bridge the gap. LLMs are trained on massive observations of the world and have demonstrated great capability in extracting key information from unstructured data. Therefore, it is natural to employ LLMs to assist with proposing useful high-level factors and crafting their measurements. Meanwhile, COAT also adopts CDs to find causal relations among the identified variables as well as to provide feedback to LLMs to iteratively refine the proposed factors. We show that LLMs and CDs are mutually beneficial and the constructed feedback provably also helps with the factor proposal. We construct and curate several synthetic and real-world benchmarks including analysis of human reviews and diagnosis of neuropathic and brain tumors, to comprehensively evaluate COAT. Extensive empirical results confirm the effectiveness and reliability of COAT with significant improvements.", "pdf": "https://openreview.net/pdf/d7bdc070a6044df2e284ee1476561ea96fa74dae.pdf"} {"title": "Federated Learning from Vision-Language Foundation Models: Theoretical Analysis and Method", "url": "https://openreview.net/forum?id=Y4L8GQXZZO", "detail_url": "https://openreview.net/forum?id=Y4L8GQXZZO", "authors": "Bikang Pan,Wei Huang,Ye Shi", "tags": "NIPS 2024,Poster", "abstract": "Integrating pretrained vision-language foundation models like CLIP into federated learning has attracted significant attention for enhancing generalization across diverse tasks. 
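Aside: a schematic of the iterative loop described in the COAT abstract above, in which an LLM proposes candidate high-level factors, a causal discovery (CD) routine relates them, and the CD output is fed back to refine the next round of proposals. All four callables are hypothetical stand-ins, not the paper's API.

```python
def coat_style_loop(observations, propose_factors, measure, discover,
                    make_feedback, rounds=3):
    """One possible shape of an LLM-assisted causal discovery loop.

    propose_factors(obs, feedback) -> list of candidate high-level factors (LLM step)
    measure(obs, factors)          -> structured data measuring those factors
    discover(data)                 -> causal graph over the measured factors (CD step)
    make_feedback(graph, data)     -> feedback guiding the next proposal round
    """
    factors, feedback = [], None
    for _ in range(rounds):
        factors = propose_factors(observations, feedback)  # LLM proposes factors
        data = measure(observations, factors)              # craft measurements
        graph = discover(data)                             # causal discovery
        feedback = make_feedback(graph, data)              # refine next round
    return factors, graph
```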
Typically, federated learning of vision-language models employs prompt learning to reduce communication and computational costs, i.e., prompt-based federated learning. However, there is limited theoretical analysis to understand the performance of prompt-based federated learning. In this work, we construct a theoretical analysis framework for prompt-based federated learning via feature learning theory. Specifically, we monitor the evolution of signal learning and noise memorization in prompt-based federated learning, demonstrating that performance can be assessed by the ratio of task-relevant to task-irrelevant coefficients. Furthermore, we draw an analogy between income and risk in portfolio optimization and the task-relevant and task-irrelevant terms in feature learning. Leveraging the insight from portfolio optimization that combining two independent assets maintains income while reducing risk, we introduce two prompts, a global prompt and a local prompt, to construct a prompt portfolio that balances generalization and personalization. Consequently, we show the performance advantage of the prompt portfolio and derive the optimal mixing coefficient. These theoretical claims have been further supported by empirical experiments.", "pdf": "https://openreview.net/pdf/816c0b8ef1c1163b736cc0826bf4b5b01d2efa8d.pdf"} {"title": "Multidimensional Fractional Programming for Normalized Cuts", "url": "https://openreview.net/forum?id=3G8sjUZqO3", "detail_url": "https://openreview.net/forum?id=3G8sjUZqO3", "authors": "Yannan Chen,Beichen Huang,Licheng Zhao,Kaiming Shen", "tags": "NIPS 2024,Poster", "abstract": "The Normalized cut (NCut) problem is a fundamental and yet notoriously difficult one in the unsupervised clustering field. Because the NCut problem is fractionally structured, the fractional programming (FP) based approach has worked its way into a new frontier. However, the conventional FP techniques are insufficient: the classic Dinkelbach's transform can only deal with a single ratio and hence is limited to two-class clustering, while the state-of-the-art quadratic transform accounts for multiple ratios but fails to convert the NCut problem to a tractable form. This work advocates a novel extension of the quadratic transform to the multidimensional ratio case, thereby recasting the fractional 0-1 NCut problem into a bipartite matching problem---which can be readily solved in an iterative manner. Furthermore, we explore the connection between the proposed multidimensional FP method and the minorization-maximization theory to verify the convergence.", "pdf": "https://openreview.net/pdf/4566838f266bba244a83bb5fd9bfa6cd35a62606.pdf"} {"title": "EfficientCAPER: An End-to-End Framework for Fast and Robust Category-Level Articulated Object Pose Estimation", "url": "https://openreview.net/forum?id=LBXSP79oCd", "detail_url": "https://openreview.net/forum?id=LBXSP79oCd", "authors": "Xinyi Yu,Haonan Jiang,Li Zhang,Lin Yuanbo Wu,Linlin Ou,Liu Liu", "tags": "NIPS 2024,Poster", "abstract": "Human life is populated with articulated objects. Pose estimation for category-level articulated objects is a significant challenge due to their inherent complexity and diverse kinematic structures. Current methods for this task typically suffer from insufficient consideration of kinematic constraints, self-occlusion, and optimization requirements. 
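Aside: the two-asset analogy from the federated prompt-learning abstract above, as a minimal sketch. The fixed convex combination and the toy prompt shapes are illustrative assumptions; the paper derives an optimal mixing coefficient rather than fixing one.

```python
import numpy as np

def prompt_portfolio(global_prompt, local_prompt, mix=0.5):
    """Blend a shared global prompt with a client-specific local prompt,
    the way a two-asset portfolio blends independent assets to balance
    generalization (income) against personalization (risk)."""
    return mix * global_prompt + (1.0 - mix) * local_prompt

g = np.random.randn(16, 512)  # toy global prompt tokens, shared by all clients
l = np.random.randn(16, 512)  # toy local prompt tokens, one client's own
print(prompt_portfolio(g, l, mix=0.7).shape)  # (16, 512)
```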
In this paper, we propose EfficientCAPER, an end-to-end Category-level Articulated object Pose EstimatoR, eliminating the need for optimization functions as post-processing and utilizing the kinematic structure for joint-centric pose modeling, thus enhancing efficiency and applicability. Given a partial point cloud as input, EfficientCAPER first estimates the pose for the free part of an articulated object using a decoupled rotation representation. Next, we canonicalize the input point cloud to estimate constrained parts' poses by predicting the joint parameters and states as replacements. Evaluations on three diverse datasets, ArtImage, ReArtMix, and RobotArm, show EfficientCAPER's effectiveness and generalization ability to real-world scenarios. The framework exhibits excellent static pose estimation performance for articulated objects, contributing to the advancement of category-level pose estimation. Codes will be made publicly available.", "pdf": "https://openreview.net/pdf/96b175aca73fb75e7bfc3e332487048318ed6d6a.pdf"} {"title": "DDN: Dual-domain Dynamic Normalization for Non-stationary Time Series Forecasting", "url": "https://openreview.net/forum?id=RVZfra6sZo", "detail_url": "https://openreview.net/forum?id=RVZfra6sZo", "authors": "Tao Dai,Beiliang Wu,Peiyuan Liu,Naiqi Li,Xue Yuerong,Shu-Tao Xia,Zexuan Zhu", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks (DNNs) have recently achieved remarkable advancements in time series forecasting (TSF) due to their powerful ability of sequence dependence modeling. To date, existing DNN-based TSF methods still suffer from unreliable predictions for real-world data due to their non-stationary characteristics, i.e., data distribution varies quickly over time. To mitigate this issue, several normalization methods (e.g., SAN) have recently been specifically designed to normalize within a fixed period/window in the time domain. However, these methods still struggle to capture distribution variations, due to the complex time patterns of time series in the time domain. Based on the fact that wavelet transform can decompose time series into a linear combination of different frequencies, which exhibits distribution variations with time-varying periods, we propose a novel Dual-domain Dynamic Normalization (DDN) to dynamically capture distribution variations in both time and frequency domains. Specifically, our DDN tries to eliminate the non-stationarity of time series via both frequency and time domain normalization in a sliding-window manner. Besides, our DDN can serve as a plug-and-play module, and thus can be easily incorporated into other forecasting models. Extensive experiments on public benchmark datasets under different forecasting models demonstrate the superiority of our DDN over other normalization methods. Code will be made available following the review process.", "pdf": "https://openreview.net/pdf/b2c1cc188feb87ee122d6646c86064f16525a62b.pdf"} {"title": "FIFO-Diffusion: Generating Infinite Videos from Text without Training", "url": "https://openreview.net/forum?id=uikhNa4wam", "detail_url": "https://openreview.net/forum?id=uikhNa4wam", "authors": "Jihwan Kim,Junoh Kang,Jinyoung Choi,Bohyung Han", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel inference technique based on a pretrained diffusion model for text-conditional video generation. Our approach, called FIFO-Diffusion, is conceptually capable of generating infinitely long videos without additional training. 
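Aside: a loose sketch of the sliding-window, dual-domain normalization idea from the DDN abstract above, for a 1-D series. The fixed per-window standardization and amplitude rescaling are illustrative assumptions; the actual DDN parameterization is learned.

```python
import numpy as np

def dual_domain_normalize(series, window=24, eps=1e-8):
    """Per-window standardization in the time domain, followed by a crude
    rescaling of the window's amplitude spectrum in the frequency domain."""
    out = np.empty_like(series, dtype=float)
    for start in range(0, len(series), window):
        w = series[start:start + window].astype(float)
        w = (w - w.mean()) / (w.std() + eps)        # time-domain step
        spec = np.fft.rfft(w)
        spec = spec / (np.abs(spec).mean() + eps)   # frequency-domain step
        out[start:start + window] = np.fft.irfft(spec, n=len(w))
    return out

x = np.sin(np.linspace(0, 20, 96)) + np.random.randn(96) * 0.1
print(dual_domain_normalize(x).shape)  # (96,)
```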
This is achieved by iteratively performing diagonal denoising, which simultaneously processes a series of consecutive frames with increasing noise levels in a queue; our method dequeues a fully denoised frame at the head while enqueuing a new random noise frame at the tail. However, diagonal denoising is a double-edged sword as the frames near the tail can take advantage of cleaner frames by forward reference but such a strategy induces the discrepancy between training and inference. Hence, we introduce latent partitioning to reduce the training-inference gap and lookahead denoising to leverage the benefit of forward referencing. Practically, FIFO-Diffusion consumes a constant amount of memory regardless of the target video length given a baseline model, while well-suited for parallel inference on multiple GPUs. We have demonstrated the promising results and effectiveness of the proposed methods on existing text-to-video generation baselines. Generated video examples and source codes are available at our project page.", "pdf": "https://openreview.net/pdf/394cdbbb549f23b88a118bc60517b77b96974648.pdf"} {"title": "SLowcalSGD : Slow Query Points Improve Local-SGD for Stochastic Convex Optimization", "url": "https://openreview.net/forum?id=B29BlRe26Z", "detail_url": "https://openreview.net/forum?id=B29BlRe26Z", "authors": "Tehila Dahan,Kfir Yehuda Levy", "tags": "NIPS 2024,Poster", "abstract": "We consider distributed learning scenarios where $M$ machines interact with a parameter server along several communication rounds in order to minimize a joint objective function. \nFocusing on the heterogeneous case, where different machines may draw samples from different data-distributions, we design the first local update method that provably benefits over the two most prominent distributed baselines: namely Minibatch-SGD and Local-SGD. \nKey to our approach is a slow querying technique that we customize to the distributed setting, which in turn enables a better mitigation of the bias caused by local updates.", "pdf": "https://openreview.net/pdf/058e4d4ac818209f4dccaf3ff47d72e3cf99d75e.pdf"} {"title": "Learning Disentangled Representations for Perceptual Point Cloud Quality Assessment via Mutual Information Minimization", "url": "https://openreview.net/forum?id=MSSRhxwZP7", "detail_url": "https://openreview.net/forum?id=MSSRhxwZP7", "authors": "Ziyu Shan,Yujie Zhang,Yipeng Liu,Yiling Xu", "tags": "NIPS 2024,Poster", "abstract": "No-Reference Point Cloud Quality Assessment (NR-PCQA) aims to objectively assess the human perceptual quality of point clouds without relying on pristine-quality point clouds for reference. It is becoming increasingly significant with the rapid advancement of immersive media applications such as virtual reality (VR) and augmented reality (AR). However, current NR-PCQA models attempt to indiscriminately learn point cloud content and distortion representations within a single network, overlooking their distinct contributions to quality information. To address this issue, we propose DisPA, a novel disentangled representation learning framework for NR-PCQA. The framework trains a dual-branch disentanglement network to minimize mutual information (MI) between representations of point cloud content and distortion. 
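Aside: the FIFO queue mechanics described in the FIFO-Diffusion abstract above, as a minimal sketch. `init_noise` and `denoise_one_level(frame, level)` are hypothetical stand-ins for noise sampling and one diffusion step of the pretrained model; the queue invariant (strictly increasing noise levels from head to tail) is the point being illustrated.

```python
from collections import deque

def fifo_diagonal_denoising(init_noise, denoise_one_level,
                            num_levels, total_frames):
    """Diagonal denoising with a FIFO queue: every step denoises each queued
    frame by one level, emits the fully denoised head, and appends fresh
    noise at the tail, so memory stays constant for any video length."""
    queue = deque((init_noise(), level) for level in range(num_levels))
    video = []
    while len(video) < total_frames:
        queue = deque((denoise_one_level(f, lvl), lvl - 1) for f, lvl in queue)
        head, head_level = queue.popleft()
        assert head_level == -1            # head frame is now fully denoised
        video.append(head)
        queue.append((init_noise(), num_levels - 1))  # fresh noise at the tail
    return video
```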
Specifically, to fully disentangle representations, the two branches adopt different philosophies: the content-aware encoder is pretrained by a masked auto-encoding strategy, which allows the encoder to capture semantic information from rendered images of distorted point clouds; the distortion-aware encoder takes a mini-patch map as input, which forces the encoder to focus on low-level distortion patterns. Furthermore, we utilize an MI estimator to estimate a tight upper bound on the actual MI and further minimize it to achieve explicit representation disentanglement. Extensive experimental results demonstrate that DisPA outperforms state-of-the-art methods on multiple PCQA datasets.", "pdf": "https://openreview.net/pdf/5b43b1bf30c2db2886507e0d254b3a04d9c188e9.pdf"} {"title": "Multimodal Large Language Models Make Text-to-Image Generative Models Align Better", "url": "https://openreview.net/forum?id=IRXyPm9IPW", "detail_url": "https://openreview.net/forum?id=IRXyPm9IPW", "authors": "Xun Wu,Shaohan Huang,Guolong Wang,Jing Xiong,Furu Wei", "tags": "NIPS 2024,Poster", "abstract": "Recent studies have demonstrated the exceptional potential of leveraging human preference datasets to refine text-to-image generative models, enhancing the alignment between generated images and textual prompts. Despite these advances, current human preference datasets are either prohibitively expensive to construct or suffer from a lack of diversity in preference dimensions, resulting in limited applicability for instruction tuning in open-source text-to-image generative models and hindering further exploration. To address these challenges and promote the alignment of generative models through instruction tuning, we leverage multimodal large language models to create VisionPrefer, a high-quality and fine-grained preference dataset that captures multiple preference aspects. We aggregate feedback from AI annotators across four aspects: prompt-following, aesthetic, fidelity, and harmlessness to construct VisionPrefer. To validate the effectiveness of VisionPrefer, we train a reward model VP-Score over VisionPrefer to guide the training of text-to-image generative models, and the preference prediction accuracy of VP-Score is comparable to that of human annotators. Furthermore, we use two reinforcement learning methods to fine-tune generative models to evaluate the performance of VisionPrefer, and extensive experimental results demonstrate that VisionPrefer significantly improves text-image alignment in compositional image generation across diverse aspects, e.g., aesthetics, and generalizes better than previous human-preference metrics across various image distributions. Moreover, VisionPrefer indicates that the integration of AI-generated synthetic data as a supervisory signal is a promising avenue for achieving improved alignment with human preferences in vision generative models.", "pdf": "https://openreview.net/pdf/b1997ae3b608c9cb6c81e9d59197e5a4b371fe4a.pdf"} {"title": "MetaUAS: Universal Anomaly Segmentation with One-Prompt Meta-Learning", "url": "https://openreview.net/forum?id=4jegYnUMHb", "detail_url": "https://openreview.net/forum?id=4jegYnUMHb", "authors": "Bin-Bin Gao", "tags": "NIPS 2024,Poster", "abstract": "Zero- and few-shot visual anomaly segmentation relies on powerful vision-language models that detect unseen anomalies using manually designed textual prompts. However, visual representations are inherently independent of language. 
In this paper, we explore the potential of a pure visual foundation model as an alternative to widely used vision-language models for universal visual anomaly segmentation.\nWe present a novel paradigm that unifies anomaly segmentation into change segmentation. This paradigm enables us to leverage large-scale synthetic image pairs, featuring object-level and local region changes, derived from existing image datasets, which are independent of target anomaly datasets. We propose a one-prompt Meta-learning framework for Universal Anomaly Segmentation (MetaUAS) that is trained on this synthetic dataset and then generalizes well to segment any novel or unseen visual anomalies in the real world. To handle geometrical variations between prompt and query images, we propose a soft feature alignment module that bridges paired-image change perception and single-image semantic segmentation. This is the first work to achieve universal anomaly segmentation using a pure vision model without relying on special anomaly detection datasets and pre-trained visual-language models. Our method effectively and efficiently segments any anomalies with only one normal image prompt and is training-free, requiring no guidance from language. Our MetaUAS significantly outperforms previous zero-shot, few-shot, and even full-shot anomaly segmentation methods. Code and Models: https://github.com/gaobb/MetaUAS.", "pdf": "https://openreview.net/pdf/c3c24c493f2de20c90611891da523e1a4fe7158a.pdf"} {"title": "MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling", "url": "https://openreview.net/forum?id=FisyQfoJCm", "detail_url": "https://openreview.net/forum?id=FisyQfoJCm", "authors": "Weihao Yuan,Yisheng HE,Weichao Shen,Yuan Dong,Xiaodong Gu,Zilong Dong,Liefeng Bo,Qixing Huang", "tags": "NIPS 2024,Poster", "abstract": "Motion generation from discrete quantization offers many advantages over continuous regression, but at the cost of inevitable approximation errors. Previous methods usually quantize the entire body pose into one code, which not only faces the difficulty of encoding all joints within one vector but also loses the spatial relationship between different joints. In contrast, in this work we quantize each individual joint into one vector, which i) simplifies the quantization process as the complexity associated with a single joint is markedly lower than that of the entire pose; ii) maintains a spatial-temporal structure that preserves both the spatial relationships among joints and the temporal movement patterns; iii) yields a 2D token map, which enables the application of various 2D operations widely used in 2D images. Grounded in the 2D motion quantization, we build a spatial-temporal modeling framework, where 2D joint VQVAE, temporal-spatial 2D masking technique, and spatial-temporal 2D attention are proposed to take advantage of spatial-temporal signals among the 2D tokens. 
Extensive experiments demonstrate that our method significantly outperforms previous methods across different datasets, with a $26.6\\%$ decrease of FID on HumanML3D and a $29.9\\%$ decrease on KIT-ML.", "pdf": "https://openreview.net/pdf/b6961f3f3291f547b4fb339b37e9b406ed34cb13.pdf"} {"title": "Learning the Latent Causal Structure for Modeling Label Noise", "url": "https://openreview.net/forum?id=nJKfNiEBvq", "detail_url": "https://openreview.net/forum?id=nJKfNiEBvq", "authors": "Yexiong Lin,Yu Yao,Tongliang Liu", "tags": "NIPS 2024,Poster", "abstract": "In label-noise learning, the noise transition matrix reveals how an instance transitions from its clean label to its noisy label. Accurately estimating an instance's noise transition matrix is crucial for estimating its clean label. However, when only a noisy dataset is available, noise transition matrices can be estimated only for some \"special\" instances. To leverage these estimated transition matrices to help estimate the transition matrices of other instances, it is essential to explore relations between the matrices of these \"special\" instances and those of others. Existing studies typically build the relation by explicitly defining the similarity between the estimated noise transition matrices of \"special\" instances and those of other instances. However, these similarity-based assumptions are hard to validate and may not align with real-world data. If these assumptions fail, both noise transition matrices and clean labels cannot be accurately estimated. In this paper, we found that by learning the latent causal structure governing the generating process of noisy data, we can estimate noise transition matrices without the need for similarity-based assumptions. Unlike previous generative label-noise learning methods, we consider causal relations between latent causal variables and model them with a learnable graphical model. Utilizing only noisy data, our method can effectively learn the latent causal structure. Experimental results on various noisy datasets demonstrate that our method achieves state-of-the-art performance in estimating noise transition matrices, which leads to improved classification accuracy. The code is available at: https://github.com/tmllab/2024_NeurIPS_CSGN.", "pdf": "https://openreview.net/pdf/b8c998238d0902caeab39beb11e1dae967af97c7.pdf"} {"title": "Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm", "url": "https://openreview.net/forum?id=3lQgEPRxeu", "detail_url": "https://openreview.net/forum?id=3lQgEPRxeu", "authors": "Qinbo Bai,Washim Uddin Mondal,Vaneet Aggarwal", "tags": "NIPS 2024,Poster", "abstract": "This paper explores the realm of infinite horizon average reward Constrained Markov Decision Processes (CMDPs). To the best of our knowledge, this work is the first to delve into the regret and constraint violation analysis of average reward CMDPs with a general policy parametrization. To address this challenge, we propose a primal dual-based policy gradient algorithm that adeptly manages the constraints while ensuring a low regret guarantee toward achieving a global optimal policy. 
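Aside: a minimal sketch of the per-joint quantization described in the MoGenTS abstract above. Plain nearest-neighbor codebook assignment stands in for the learned 2D joint VQVAE; the motion shape and codebook are toy assumptions, and the point is the (T, J) token map that keeps both the temporal and the spatial (joint) axis.

```python
import numpy as np

def joint_token_map(motion, codebook):
    """Quantize each joint vector of a (T, J, D) motion to its nearest
    codebook entry, yielding a 2-D (T, J) map of discrete tokens."""
    t, j, d = motion.shape
    flat = motion.reshape(-1, d)                            # (T*J, D)
    # Squared distance from every joint vector to every codebook entry.
    d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(t, j)                  # (T, J) tokens

motion = np.random.randn(8, 22, 3)   # toy: 8 frames, 22 joints, 3-D per joint
codes = np.random.randn(64, 3)       # toy codebook with 64 entries
print(joint_token_map(motion, codes).shape)  # (8, 22)
```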
In particular, our proposed algorithm achieves $\\tilde{\\mathcal{O}}({T}^{4/5})$ objective regret and $\\tilde{\\mathcal{O}}({T}^{4/5})$ constraint violation bounds.", "pdf": "https://openreview.net/pdf/500b125b905014d27b7d9d90d28918d9cdfe0d33.pdf"} {"title": "GRANOLA: Adaptive Normalization for Graph Neural Networks", "url": "https://openreview.net/forum?id=qd8blc0o0F", "detail_url": "https://openreview.net/forum?id=qd8blc0o0F", "authors": "Moshe Eliasof,Beatrice Bevilacqua,Carola-Bibiane Sch\u00f6nlieb,Haggai Maron", "tags": "NIPS 2024,Poster", "abstract": "Despite the widespread adoption of Graph Neural Networks (GNNs), these models often incorporate off-the-shelf normalization layers like BatchNorm or InstanceNorm, which were not originally designed for GNNs. Consequently, these normalization layers may not effectively capture the unique characteristics of graph-structured data, potentially even weakening the expressive power of the overall architecture. \nWhile existing graph-specific normalization layers have been proposed, they often struggle to offer substantial and consistent benefits. In this paper, we propose GRANOLA, a novel graph-adaptive normalization layer. Unlike existing normalization layers, GRANOLA normalizes node features by adapting to the specific characteristics of the graph, particularly by generating expressive representations of its nodes, obtained by leveraging the propagation of Random Node Features (RNF) in the graph. We provide theoretical results that support our design choices as well as an extensive empirical evaluation demonstrating the superior performance of GRANOLA over existing normalization techniques. Furthermore, GRANOLA emerges as the top-performing method among all baselines in the same time complexity class of Message Passing Neural Networks (MPNNs).", "pdf": "https://openreview.net/pdf/3807d9230bfdc54a228ce9aed657948f1c75ae78.pdf"} {"title": "NeuMA: Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics", "url": "https://openreview.net/forum?id=AvWB40qXZh", "detail_url": "https://openreview.net/forum?id=AvWB40qXZh", "authors": "Junyi Cao,Shanyan Guan,Yanhao Ge,Wei Li,Xiaokang Yang,Chao Ma", "tags": "NIPS 2024,Poster", "abstract": "While humans effortlessly discern intrinsic dynamics and adapt to new scenarios, modern AI systems often struggle. Current methods for visual grounding of dynamics either use pure neural-network-based simulators (black box), which may violate physical laws, or traditional physical simulators (white box), which rely on expert-defined equations that may not fully capture actual dynamics. We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections, facilitating accurate learning of actual dynamics while maintaining the generalizability and interpretability of physical priors. Additionally, we propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images, allowing image gradients to be back-propagated to optimize the simulator. Comprehensive experiments on various dynamics in terms of grounded particle accuracy, dynamic rendering quality, and generalization ability demonstrate that NeuMA can accurately capture intrinsic dynamics.
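The "physical prior plus learned correction" idea can be sketched in a few lines; the linear-elasticity (Hooke's law) prior and the small residual network below are illustrative assumptions, not NeuMA's actual material model:

```python
import torch
import torch.nn as nn

# White-box constitutive law augmented by a black-box residual correction.
class MaterialAdaptor(nn.Module):
    def __init__(self, k=100.0):
        super().__init__()
        self.k = k                                     # expert-defined stiffness
        self.delta = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, strain):                         # strain: (B, 1)
        prior = self.k * strain                        # physics prior (Hooke)
        return prior + self.delta(strain)              # learned residual correction

stress = MaterialAdaptor()(torch.rand(8, 1))
print(stress.shape)  # torch.Size([8, 1])
```

Keeping the prior explicit is what preserves interpretability: the residual network only needs to account for the part of the dynamics the expert equation misses.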
Project Page: https://xjay18.github.io/projects/neuma.html.", "pdf": "https://openreview.net/pdf/f874ed95469fcdbd11b5f439c05a381a5178554a.pdf"} {"title": "Improved Regret for Bandit Convex Optimization with Delayed Feedback", "url": "https://openreview.net/forum?id=aR9JvkOGjM", "detail_url": "https://openreview.net/forum?id=aR9JvkOGjM", "authors": "Yuanyu Wan,Chang Yao,Mingli Song,Lijun Zhang", "tags": "NIPS 2024,Poster", "abstract": "We investigate bandit convex optimization (BCO) with delayed feedback, where only the loss value of the action is revealed under an arbitrary delay. Let $n,T,\\bar{d}$ denote the dimensionality, time horizon, and average delay, respectively. Previous studies have achieved an $O(\\sqrt{n}T^{3/4}+(n\\bar{d})^{1/3}T^{2/3})$ regret bound for this problem, whose delay-independent part matches the regret of the classical non-delayed bandit gradient descent algorithm. However, there is a large gap between its delay-dependent part, i.e., $O((n\\bar{d})^{1/3}T^{2/3})$, and an existing $\\Omega(\\sqrt{\\bar{d}T})$ lower bound. In this paper, we illustrate that this gap can be filled in the worst case, where $\\bar{d}$ is very close to the maximum delay $d$. Specifically, we first develop a novel algorithm, and prove that it enjoys a regret bound of $O(\\sqrt{n}T^{3/4}+\\sqrt{dT})$ in general. Compared with the previous result, our regret bound is better for $d=O((n\\bar{d})^{2/3}T^{1/3})$, and the delay-dependent part is tight in the worst case. The primary idea is to decouple the joint effect of the delays and the bandit feedback on the regret by carefully incorporating the delayed bandit feedback with a blocking update mechanism. Furthermore, we show that the proposed algorithm can improve the regret bound to $O((nT)^{2/3}\\log^{1/3}T+d\\log T)$ for strongly convex functions. Finally, if the action sets are unconstrained, we demonstrate that it can be simply extended to achieve an $O(n\\sqrt{T\\log T}+d\\log T)$ regret bound for strongly convex and smooth functions.", "pdf": "https://openreview.net/pdf/c071c58e9f5a472db8871bd2ecfbc447f9f0fb7e.pdf"} {"title": "Sample-Efficient Constrained Reinforcement Learning with General Parameterization", "url": "https://openreview.net/forum?id=1po4j1Tv7O", "detail_url": "https://openreview.net/forum?id=1po4j1Tv7O", "authors": "Washim Uddin Mondal,Vaneet Aggarwal", "tags": "NIPS 2024,Poster", "abstract": "We consider a constrained Markov Decision Problem (CMDP) where the goal of an agent is to maximize the expected discounted sum of rewards over an infinite horizon while ensuring that the expected discounted sum of costs exceeds a certain threshold. Building on the idea of momentum-based acceleration, we develop the Primal-Dual Accelerated Natural Policy Gradient (PD-ANPG) algorithm that ensures an $\\epsilon$ global optimality gap and $\\epsilon$ constraint violation with $\\tilde{\\mathcal{O}}((1-\\gamma)^{-7}\\epsilon^{-2})$ sample complexity for general parameterized policies where $\\gamma$ denotes the discount factor. 
This improves the state-of-the-art sample complexity in general parameterized CMDPs by a factor of $\\mathcal{O}((1-\\gamma)^{-1}\\epsilon^{-2})$ and achieves the theoretical lower bound in $\\epsilon^{-1}$.", "pdf": "https://openreview.net/pdf/b178b4957c9f40578fc6a3a1e59fd2b08dc2b708.pdf"} {"title": "CLIP in Mirror: Disentangling text from visual images through reflection", "url": "https://openreview.net/forum?id=FYm8coxdiR", "detail_url": "https://openreview.net/forum?id=FYm8coxdiR", "authors": "Tiancheng Wang,Yuguang Yang,Linlin Yang,Shaohui Lin,Juan Zhang,Guodong Guo,Baochang Zhang", "tags": "NIPS 2024,Poster", "abstract": "The CLIP network excels in various tasks, but struggles with text-visual images, i.e., images that contain both text and visual objects; it risks confusing textual and visual representations. To address this issue, we propose MirrorCLIP, a zero-shot framework, which disentangles the image features of CLIP by exploiting the difference in the mirror effect between visual objects and text in the images. Specifically, MirrorCLIP takes both original and flipped images as inputs, comparing their features dimension-wise in the latent space to generate disentangling masks. With disentangling masks, we further design filters to separate textual and visual factors more precisely, and then obtain disentangled representations. Qualitative experiments using stable diffusion models and class activation mapping (CAM) validate the effectiveness of our disentanglement. Moreover, our proposed MirrorCLIP reduces confusion when encountering text-visual images and achieves a substantial improvement on typographic defense, further demonstrating its superior ability of disentanglement. Our code is available at https://github.com/tcwangbuaa/MirrorCLIP", "pdf": "https://openreview.net/pdf/84f899513f8eaccec425d8a555e0dba2e9e5984e.pdf"} {"title": "Spiking Graph Neural Network on Riemannian Manifolds", "url": "https://openreview.net/forum?id=VKt0K3iOmO", "detail_url": "https://openreview.net/forum?id=VKt0K3iOmO", "authors": "Li Sun,Zhenhao Huang,Qiqi Wan,Hao Peng,Philip S. Yu", "tags": "NIPS 2024,Poster", "abstract": "Graph neural networks (GNNs) have become the dominant solution for learning on graphs, the typical non-Euclidean structures. Conventional GNNs, constructed with the Artificial Neuron Network (ANN), have achieved impressive performance at the cost of high computation and energy consumption. In parallel, spiking GNNs with brain-like spiking neurons are drawing increasing research attention owing to their energy efficiency. So far, existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry, and suffer from the high latency issue due to Back-Propagation-Through-Time (BPTT) with the surrogate gradient. In light of the aforementioned issues, we are devoted to exploring spiking GNNs on Riemannian manifolds, and present a Manifold-valued Spiking GNN (MSG). In particular, we design a new spiking neuron on geodesically complete manifolds with the diffeomorphism, so that BPTT regarding the spikes is replaced by the proposed differentiation via manifold. Theoretically, we show that MSG approximates a solver of the manifold ordinary differential equation.
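A manifold ODE solver of the kind alluded to advances its state with the exponential map so that iterates never leave the manifold. A toy Euler-style geodesic step on the unit sphere (generic Riemannian numerics, not the MSG neuron itself):

```python
import numpy as np

# Exponential map on the unit sphere: follow the geodesic from x in the
# direction of tangent vector v for arc length |v|.
def sphere_exp(x, v):
    n = np.linalg.norm(v)
    return x if n < 1e-12 else np.cos(n) * x + np.sin(n) * v / n

x = np.array([1.0, 0.0, 0.0])
v = 0.2 * np.array([0.0, 1.0, 0.0])   # tangent vector at x (orthogonal to x)
x = sphere_exp(x, v)                  # one geodesic Euler step
print(np.linalg.norm(x))              # 1.0: the iterate remains on the sphere
```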
Extensive experiments on common graphs show that the proposed MSG achieves superior performance to previous spiking GNNs and superior energy efficiency to conventional GNNs.", "pdf": "https://openreview.net/pdf/f9a484a9f7ac01baff738900d4ad49f80894467c.pdf"} {"title": "Causal Dependence Plots", "url": "https://openreview.net/forum?id=pU0z2sNM1M", "detail_url": "https://openreview.net/forum?id=pU0z2sNM1M", "authors": "Joshua R. Loftus,Lucius E.J. Bynum,Sakina Hansen", "tags": "NIPS 2024,Poster", "abstract": "To use artificial intelligence and machine learning models wisely we must understand how they interact with the world, including how they depend causally on data inputs. In this work we develop Causal Dependence Plots (CDPs) to visualize how a model's predicted outcome depends on changes in a given predictor *along with consequent causal changes in other predictor variables*. Crucially, this differs from standard methods based on independence or holding other predictors constant, such as regression coefficients or Partial Dependence Plots (PDPs). Our explanatory framework generalizes PDPs, including them as a special case, as well as a variety of other interpretive plots that show, for example, the total, direct, and indirect effects of causal mediation. We demonstrate with simulations and real data experiments how CDPs can be combined in a modular way with methods for causal learning or sensitivity analysis. Since people often think causally about input-output dependence, CDPs can be powerful tools in the xAI or interpretable machine learning toolkit and contribute to applications like scientific machine learning and algorithmic fairness.", "pdf": "https://openreview.net/pdf/9e16578e540aacca76cc98f43eae7737b7a37f31.pdf"} {"title": "Trade-Offs of Diagonal Fisher Information Matrix Estimators", "url": "https://openreview.net/forum?id=TVbCKAqoD8", "detail_url": "https://openreview.net/forum?id=TVbCKAqoD8", "authors": "Alexander Soen,Ke Sun", "tags": "NIPS 2024,Poster", "abstract": "The Fisher information matrix can be used to characterize the local geometry of\nthe parameter space of neural networks. It elucidates insightful theories and\nuseful tools to understand and optimize neural networks. Given its high\ncomputational cost, practitioners often use random estimators and evaluate only\nthe diagonal entries. We examine two popular estimators whose accuracy and sample\ncomplexity depend on their associated variances. We derive bounds of the\nvariances and instantiate them in neural networks for regression and\nclassification. We navigate trade-offs for both estimators based on analytical\nand numerical studies. We find that the variance quantities depend on the\nnon-linearity w.r.t. different parameter groups and should not be neglected when\nestimating the Fisher information.", "pdf": "https://openreview.net/pdf/3b7293b9d2cd8c527ababaec3f552724f4bcac2f.pdf"} {"title": "Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems", "url": "https://openreview.net/forum?id=oSOVME9kl2", "detail_url": "https://openreview.net/forum?id=oSOVME9kl2", "authors": "Bingcong Li,Liang Zhang,Niao He", "tags": "NIPS 2024,Poster", "abstract": "Sharpness-aware minimization (SAM) improves generalization of various deep learning tasks. Motivated by popular architectures such as LoRA, we explore the implicit regularization of SAM for scale-invariant problems involving two groups of variables.
Instead of focusing on commonly used sharpness, this work introduces a concept termed *balancedness*, defined as the difference between the squared norms of two variables. This allows us to depict richer global behaviors of SAM. In particular, our theoretical and empirical findings reveal that i) SAM promotes balancedness; and ii) the regularization on balancedness is *data-responsive* -- outliers have stronger impact. \nThe latter coincides with empirical observations that SAM outperforms SGD in the presence of outliers. \nLeveraging the implicit regularization, we develop a resource-efficient SAM variant, balancedness-aware regularization (BAR), tailored for scale-invariant problems such as finetuning language models with LoRA. BAR saves 95% of the computational overhead of SAM, with enhanced test performance across various tasks on RoBERTa, GPT2, and OPT-1.3B.", "pdf": "https://openreview.net/pdf/333bb561496ca87ac566d937ee37ac97cd20950e.pdf"} {"title": "D-CPT Law: Domain-specific Continual Pre-Training Scaling Law for Large Language Models", "url": "https://openreview.net/forum?id=JzKFN5fWOk", "detail_url": "https://openreview.net/forum?id=JzKFN5fWOk", "authors": "Haoran Que,Jiaheng Liu,Ge Zhang,Chenchen Zhang,Xingwei Qu,Yinghao Ma,Feiyu Duan,ZhiqiBai,JiakaiWang,Yuanxing Zhang,Xu Tan,Jie Fu,Jiamang Wang,Lin Qu,Wenbo Su,Bo Zheng", "tags": "NIPS 2024,Poster", "abstract": "Continual Pre-Training (CPT) on Large Language Models (LLMs) has been widely used to expand the model\u2019s fundamental understanding of specific downstream domains (e.g., math and code). For the CPT on domain-specific LLMs, one important question is how to choose the optimal mixture ratio between the general-corpus (e.g., Dolma, Slim-pajama) and the downstream domain-corpus. Existing methods usually rely on laborious grid searches over a set of mixture ratios, which incur high GPU training costs. Besides, we cannot guarantee that the selected ratio is optimal for the specific domain. To address the limitations of existing methods, inspired by the Scaling Law for performance prediction, we propose to investigate the Scaling Law of the Domain-specific Continual Pre-Training (D-CPT Law) to decide the optimal mixture ratio with acceptable training costs for LLMs of different sizes. Specifically, by fitting the D-CPT Law, we can easily predict the general and downstream performance of arbitrary mixture ratios, model sizes, and dataset sizes using small-scale training costs on limited experiments. Moreover, we also extend our standard D-CPT Law on cross-domain settings and propose the Cross-Domain D-CPT Law to predict the D-CPT law of target domains, where very small training costs (about 1\\% of the normal training costs) are needed for the target domains. Comprehensive experimental results on six downstream domains demonstrate the effectiveness and generalizability of our proposed D-CPT Law and Cross-Domain D-CPT Law.", "pdf": "https://openreview.net/pdf/08d9805d8e1416c6165aa5cd70963dac8418ec23.pdf"} {"title": "A Framework for Bilevel Optimization on Riemannian Manifolds", "url": "https://openreview.net/forum?id=LvNDqNJKlD", "detail_url": "https://openreview.net/forum?id=LvNDqNJKlD", "authors": "Andi Han,Bamdev Mishra,Pratik Jawanpuria,Akiko Takeda", "tags": "NIPS 2024,Poster", "abstract": "Bilevel optimization has gained prominence in various applications.
In this study, we introduce a framework for solving bilevel optimization problems, where the variables in both the lower and upper levels are constrained on Riemannian manifolds. We present several hypergradient estimation strategies on manifolds and analyze their estimation errors. Furthermore, we provide comprehensive convergence and complexity analyses for the proposed hypergradient descent algorithm on manifolds. We also extend our framework to encompass stochastic bilevel optimization and incorporate the use of general retraction. The efficacy of the proposed framework is demonstrated through several applications.", "pdf": "https://openreview.net/pdf/6b039aac32d70739e8a988d80a648bdacc69a2aa.pdf"} {"title": "Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts", "url": "https://openreview.net/forum?id=IG6kd5V4kd", "detail_url": "https://openreview.net/forum?id=IG6kd5V4kd", "authors": "Huy Nguyen,Nhat Ho,Alessandro Rinaldo", "tags": "NIPS 2024,Poster", "abstract": "The softmax gating function is arguably the most popular choice in mixture of experts modeling. Despite its widespread use in practice, the softmax gating may lead to unnecessary competition among experts, potentially causing the undesirable phenomenon of representation collapse due to its inherent structure. In response, the sigmoid gating function has been recently proposed as an alternative and has been demonstrated empirically to achieve superior performance. However, a rigorous examination of the sigmoid gating function is lacking in current literature. In this paper, we verify theoretically that the sigmoid gating, in fact, enjoys a higher sample efficiency than the softmax gating for the statistical task of expert estimation. Towards that goal, we consider a regression framework in which the unknown regression function is modeled as a mixture of experts, and study the rates of convergence of the least squares estimator under the over-specified case in which the number of fitted experts is larger than the true value. We show that two gating regimes naturally arise and, in each of them, we formulate an identifiability condition for the expert functions and derive the corresponding convergence rates. In both cases, we find that experts formulated as feed-forward networks with commonly used activation such as $\\mathrm{ReLU}$ and $\\mathrm{GELU}$ enjoy faster convergence rates under the sigmoid gating than those under softmax gating. Furthermore, given the same choice of experts, we demonstrate that the sigmoid gating function requires a smaller sample size than its softmax counterpart to attain the same error of expert estimation and, therefore, is more sample efficient.", "pdf": "https://openreview.net/pdf/9a1051cfe0256725e6419093f115c5379a5a325d.pdf"} {"title": "Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes", "url": "https://openreview.net/forum?id=yMS7ansbr6", "detail_url": "https://openreview.net/forum?id=yMS7ansbr6", "authors": "Weifeng Liu,Tianyi She,Jiawei Liu,Boheng Li,Dongyu Yao,Ziyou Liang,Run Wang", "tags": "NIPS 2024,Poster", "abstract": "In recent years, DeepFake technology has achieved unprecedented success in high-quality video synthesis, but these methods also pose potential and severe security threats to humanity. DeepFake can be bifurcated into entertainment applications like face swapping and illicit uses such as lip-syncing fraud. 
However, lip-forgery videos, which neither change identity nor have discernible visual artifacts, present a formidable challenge to existing DeepFake detection methods. Our preliminary experiments have shown that existing methods often become drastically less effective, or fail outright, when tackling lip-syncing videos.\nIn this paper, for the first time, we propose a novel approach dedicated to lip-forgery identification that exploits the inconsistency between lip movements and audio signals. We also mimic human natural cognition by capturing subtle biological links between lips and head regions to boost accuracy. To better illustrate the effectiveness and advances of our proposed method, we create a high-quality LipSync dataset, AVLips, by employing state-of-the-art lip generators. We hope this high-quality and diverse dataset can serve further research in this challenging and interesting field. Experimental results show that our approach gives an average accuracy of more than 95.3% in spotting lip-syncing videos, significantly outperforming the baselines. Extensive experiments demonstrate its capability to tackle deepfakes and its robustness to diverse input transformations. Our method achieves an accuracy of up to 90.2% in real-world scenarios (e.g., WeChat video call) and shows strong potential for real-world deployment.\nTo facilitate the progress of this research community, we release all resources at https://github.com/AaronComo/LipFD.", "pdf": "https://openreview.net/pdf/e4d265e599081369073e1436266147e9fe673842.pdf"} {"title": "Shaping the distribution of neural responses with interneurons in a recurrent circuit model", "url": "https://openreview.net/forum?id=ojLIEQ0j9T", "detail_url": "https://openreview.net/forum?id=ojLIEQ0j9T", "authors": "David Lipshutz,Eero P Simoncelli", "tags": "NIPS 2024,Poster", "abstract": "Efficient coding theory posits that sensory circuits transform natural signals into neural representations that maximize information transmission subject to resource constraints. Local interneurons are thought to play an important role in these transformations, shaping patterns of circuit activity to facilitate and direct information flow. However, the relationship between these coordinated, nonlinear, circuit-level transformations and the properties of interneurons (e.g., connectivity, activation functions) remains unknown. Here, we propose a normative computational model that establishes such a relationship. Our model is derived from an optimal transport objective that conceptualizes the circuit's input-response function as transforming the inputs to achieve a target response distribution. The circuit, which is comprised of primary neurons that are recurrently connected to a set of local interneurons, continuously optimizes this objective by dynamically adjusting both the synaptic connections between neurons as well as the interneuron activation functions. In an application motivated by redundancy reduction theory, we demonstrate that when the inputs are natural image statistics and the target distribution is a spherical Gaussian, the circuit learns a nonlinear transformation that significantly reduces statistical dependencies in neural responses.
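As a point of reference for the redundancy-reduction goal, the classical linear analogue is whitening responses toward an isotropic Gaussian target; the circuit model does this nonlinearly and adaptively, so the following is only the textbook baseline:

```python
import numpy as np

# ZCA whitening: a linear map that removes second-order dependencies,
# driving the response covariance toward the identity (spherical Gaussian).
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.2], [0.0, 0.5]])  # correlated inputs
C = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T      # symmetric whitening transform
Y = (X - X.mean(0)) @ W
print(np.round(np.cov(Y, rowvar=False), 2))       # ~identity: dependencies removed
```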
Overall, our results provide a framework in which the distribution of circuit responses is systematically and nonlinearly controlled by adjustment of interneuron connectivity and activation functions.", "pdf": "https://openreview.net/pdf/114269791aae20bc4e759430ce5081dcf812397f.pdf"} {"title": "Graph Neural Networks Do Not Always Oversmooth", "url": "https://openreview.net/forum?id=nY7fGtsspU", "detail_url": "https://openreview.net/forum?id=nY7fGtsspU", "authors": "Bastian Epping,Alexandre Ren\u00e9,Moritz Helias,Michael T Schaub", "tags": "NIPS 2024,Poster", "abstract": "Graph neural networks (GNNs) have emerged as powerful tools for processing relational data in applications. However, GNNs suffer from the problem of oversmoothing, the property that features of all nodes exponentially converge to the same vector over layers, prohibiting the design of deep GNNs. In this work we study oversmoothing in graph convolutional networks (GCNs) by using their Gaussian process (GP) equivalence in the limit of infinitely many hidden features. By generalizing methods from conventional deep neural networks (DNNs), we can describe the distribution of features at the output layer of deep GCNs in terms of a GP: as expected, we find that typical parameter choices from the literature lead to oversmoothing. The theory, however, allows us to identify a new, non-oversmoothing phase: if the initial weights of the network have sufficiently large variance, GCNs do not oversmooth, and node features remain informative even at large depth. We demonstrate the validity of this prediction in finite-size GCNs by training a linear classifier on their output. Moreover, using the linearization of the GCN GP, we generalize the concept of propagation depth of information from DNNs to GCNs. This propagation depth diverges at the transition between the oversmoothing and non-oversmoothing phase. We test the predictions of our approach and find good agreement with finite-size GCNs. Initializing GCNs near the transition to the non-oversmoothing phase, we obtain networks which are both deep and expressive.", "pdf": "https://openreview.net/pdf/da4dc1c0c104b2df3eea20a889b63bf4b54d2e59.pdf"} {"title": "CryoGEM: Physics-Informed Generative Cryo-Electron Microscopy", "url": "https://openreview.net/forum?id=edOZifvwMi", "detail_url": "https://openreview.net/forum?id=edOZifvwMi", "authors": "Jiakai Zhang,Qihe Chen,Yan Zeng,Wenyuan Gao,Xuming He,Zhijie Liu,Jingyi Yu", "tags": "NIPS 2024,Poster", "abstract": "In the past decade, deep conditional generative models have revolutionized the generation of realistic images, extending their application from entertainment to scientific domains. Single-particle cryo-electron microscopy (cryo-EM) is crucial in resolving near-atomic resolution 3D structures of proteins, such as the SARS-COV-2 spike protein. To achieve high-resolution reconstruction, a comprehensive data processing pipeline has been adopted. However, its performance is still limited as it lacks high-quality annotated datasets for training. To address this, we introduce physics-informed generative cryo-electron microscopy (CryoGEM), which for the first time integrates physics-based cryo-EM simulation with a generative unpaired noise translation to generate physically correct synthetic cryo-EM datasets with realistic noises. Initially, CryoGEM simulates the cryo-EM imaging process based on a virtual specimen. 
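A cryo-EM forward model, at its crudest, projects a 3D density along the beam direction and adds noise; a real simulator such as CryoGEM's also models effects like the contrast transfer function, ice, and dose, so this sketch only conveys the projection step:

```python
import numpy as np

# Orthographic projection of a 3D density into a 2D micrograph-like image.
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))              # stand-in "virtual specimen"
clean = volume.sum(axis=0)                     # line integrals along the beam axis
noisy = clean + 5.0 * rng.normal(size=clean.shape)   # crude additive noise
print(clean.shape, noisy.std() > clean.std())  # (64, 64) True
```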
To generate realistic noises, we leverage an unpaired noise translation via contrastive learning with a novel mask-guided sampling scheme. Extensive experiments show that CryoGEM is capable of generating authentic cryo-EM images. The generated dataset can be used as training data for particle picking and pose estimation models, eventually improving the reconstruction resolution.", "pdf": "https://openreview.net/pdf/28c9b2115b82c3166b55e72a5bbf65dc9b1611fc.pdf"} {"title": "Localized Adaptive Risk Control", "url": "https://openreview.net/forum?id=fogJgrozu1", "detail_url": "https://openreview.net/forum?id=fogJgrozu1", "authors": "Matteo Zecchin,Osvaldo Simeone", "tags": "NIPS 2024,Poster", "abstract": "Adaptive Risk Control (ARC) is an online calibration strategy based on set prediction that offers worst-case deterministic long-term risk control, as well as statistical marginal coverage guarantees. ARC adjusts the size of the prediction set by varying a single scalar threshold based on feedback from past decisions. In this work, we introduce Localized Adaptive Risk Control (L-ARC), an online calibration scheme that targets statistical localized risk guarantees ranging from conditional risk to marginal risk, while preserving the worst-case performance of ARC. L-ARC updates a threshold function within a reproducing kernel Hilbert space (RKHS), with the kernel determining the level of localization of the statistical risk guarantee. The theoretical results highlight a trade-off between localization of the statistical risk and convergence speed to the long-term risk target. Thanks to localization, L-ARC is demonstrated via experiments to produce prediction sets with risk guarantees across different data subpopulations, significantly improving the fairness of the calibrated model for tasks such as image segmentation and beam selection in wireless networks.", "pdf": "https://openreview.net/pdf/f15fee39391c304678dfb05aa0001503656a9fc0.pdf"} {"title": "Meta 3D AssetGen: Text-to-Mesh Generation with High-Quality Geometry, Texture, and PBR Materials", "url": "https://openreview.net/forum?id=M3BIsgGQNb", "detail_url": "https://openreview.net/forum?id=M3BIsgGQNb", "authors": "Yawar Siddiqui,Tom Monnier,Filippos Kokkinos,Mahendra Kariya,Yanir Kleiman,Emilien Garreau,Oran Gafni,Natalia Neverova,Andrea Vedaldi,Roman Shapovalov,David Novotny", "tags": "NIPS 2024,Poster", "abstract": "We present Meta 3D AssetGen (AssetGen), a significant advancement in text-to-3D generation which produces faithful, high-quality meshes with texture and material control. Compared to works that bake shading in the 3D object\u2019s appearance, AssetGen outputs physically-based rendering (PBR) materials, supporting realistic relighting. AssetGen first generates several views of the object with separate shaded and albedo appearance channels, and then reconstructs colours, metalness and roughness in 3D, using a deferred shading loss for efficient supervision. It also uses a signed distance function to represent 3D shape more reliably and introduces a\ncorresponding loss for direct shape supervision. This is implemented using fused kernels for high memory efficiency. After mesh extraction, a texture refinement transformer operating in UV space significantly improves sharpness and details. AssetGen achieves 17% improvement in Chamfer Distance and 40% in LPIPS over the best concurrent work for few-view reconstruction, and a human preference of 72% over the best industry competitors of comparable speed, including those that support PBR.
Project page with generated assets: https://assetgen.github.io", "pdf": "https://openreview.net/pdf/9ad24dc1488746a3aeb2354924daf049c03d5120.pdf"} {"title": "DDR: Exploiting Deep Degradation Response as Flexible Image Descriptor", "url": "https://openreview.net/forum?id=RXLO4Zv3wB", "detail_url": "https://openreview.net/forum?id=RXLO4Zv3wB", "authors": "Juncheng Wu,Zhangkai Ni,Hanli Wang,Wenhan Yang,Yuyin Zhou,Shiqi Wang", "tags": "NIPS 2024,Poster", "abstract": "Image deep features extracted by pre-trained networks are known to contain rich and informative representations. In this paper, we present Deep Degradation Response (DDR), a method to quantify changes in image deep features under varying degradation conditions. Specifically, our approach facilitates flexible and adaptive degradation, enabling the controlled synthesis of image degradation through text-driven prompts. Extensive evaluations demonstrate the versatility of DDR as an image descriptor, with strong correlations observed with key image attributes such as complexity, colorfulness, sharpness, and overall quality. Moreover, we demonstrate the efficacy of DDR across a spectrum of applications. It excels as a blind image quality assessment metric, outperforming existing methodologies across multiple datasets. Additionally, DDR serves as an effective unsupervised learning objective in image restoration tasks, yielding notable advancements in image deblurring and single-image super-resolution. Our code is available at: https://github.com/eezkni/DDR.", "pdf": "https://openreview.net/pdf/7db1e4ab61cb322379721e4adb7f869ab29287eb.pdf"} {"title": "Zero-Shot Reinforcement Learning from Low Quality Data", "url": "https://openreview.net/forum?id=79eWvkLjib", "detail_url": "https://openreview.net/forum?id=79eWvkLjib", "authors": "Scott Jeen,Tom Bewley,Jonathan Cullen", "tags": "NIPS 2024,Poster", "abstract": "Zero-shot reinforcement learning (RL) promises to provide agents that can perform _any_ task in an environment after an offline, reward-free pre-training phase. Methods leveraging successor measures and successor features have shown strong performance in this setting, but require access to large heterogeneous datasets for pre-training which cannot be expected for most real problems. Here, we explore how the performance of zero-shot RL methods degrades when trained on small homogeneous datasets, and propose fixes inspired by _conservatism_, a well-established feature of performant single-task offline RL algorithms. We evaluate our proposals across various datasets, domains and tasks, and show that conservative zero-shot RL algorithms outperform their non-conservative counterparts on low quality datasets, and perform no worse on high quality datasets. Somewhat surprisingly, our proposals also outperform baselines that get to see the task during training. Our code is available via the project page https://enjeeneer.io/projects/zero-shot-rl/.", "pdf": "https://openreview.net/pdf/6306d12a3e8f326600a550b9a27ca39cd7bd91d8.pdf"} {"title": "Continuous Spatiotemporal Events Decoupling through Spike-based Bayesian Computation", "url": "https://openreview.net/forum?id=zNIhPZnqhh", "detail_url": "https://openreview.net/forum?id=zNIhPZnqhh", "authors": "Yajing Zheng,Jiyuan Zhang,Zhaofei Yu,Tiejun Huang", "tags": "NIPS 2024,Poster", "abstract": "Numerous studies have demonstrated that the cognitive processes of the human brain can be modeled using the Bayesian theorem for probabilistic inference of the external world.
Spiking neural networks (SNNs), capable of performing Bayesian computation with greater physiological interpretability, offer a novel approach to distributed information processing in the cortex. However, applying these models to real-world scenarios to harness the advantages of brain-like computation remains a challenge. \nRecently, bio-inspired sensors with high dynamic range and ultra-high temporal resolution have been widely used in extreme vision scenarios. Event streams, generated by various types of motion, represent spatiotemporal data. Inferring motion targets from these streams without prior knowledge remains a difficult task. The Bayesian inference-based Expectation-Maximization (EM) framework has proven effective for motion segmentation in event streams, allowing for decoupling without prior information about the motion or its source. \nThis work demonstrates that Bayesian computation based on spiking neural networks can decouple event streams of different motions. The Winner-Take-All (WTA) circuits in the constructed network implement an equivalent E-step, while STDP achieves an equivalent optimization in the M-step. Through theoretical analysis and experiments, we show that STDP-based learning can maximize the contrast of warped events under mixed motion models. Experimental results show that the constructed spiking network can effectively segment the motion contained in event streams.", "pdf": "https://openreview.net/pdf/abd0b84b63dea5b160f7b9b5684458ec526eb4bd.pdf"} {"title": "Learning from Pattern Completion: Self-supervised Controllable Generation", "url": "https://openreview.net/forum?id=83pV20DD2s", "detail_url": "https://openreview.net/forum?id=83pV20DD2s", "authors": "Zhiqiang Chen,Guofan Fan,Jinying Gao,Lei Ma,Bo Lei,Tiejun Huang,Shan Yu", "tags": "NIPS 2024,Poster", "abstract": "The human brain exhibits a strong ability to spontaneously associate different visual attributes of the same or similar visual scene, such as associating sketches and graffiti with real-world visual objects, usually without supervisory information. In contrast, in the field of artificial intelligence, controllable generation methods like ControlNet heavily rely on annotated training datasets such as depth maps, semantic segmentation maps, and poses, which limits the method\u2019s scalability. Inspired by the neural mechanisms that may contribute to the brain\u2019s associative power, specifically the cortical modularization and hippocampal pattern completion, here we propose a self-supervised controllable generation (SCG) framework. Firstly, we introduce an equivariance constraint to promote inter-module independence and intra-module correlation in a modular autoencoder network, thereby achieving functional specialization. Subsequently, based on these specialized modules, we employ a self-supervised pattern completion approach for controllable generation training. Experimental results demonstrate that the proposed modular autoencoder effectively achieves functional specialization, including the modular processing of color, brightness, and edge detection, and exhibits brain-like features including orientation selectivity, color antagonism, and center-surround receptive fields. Through self-supervised training, associative generation capabilities spontaneously emerge in SCG, demonstrating excellent zero-shot generalization ability to various tasks such as super-resolution, dehazing, and associative or conditional generation on paintings, sketches, and ancient graffiti.
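The equivariance constraint mentioned above can be written as a penalty that asks an encoder to commute with an input transformation. A minimal sketch, assuming a horizontal flip as the transformation and a toy spatial encoder (illustrative, not the SCG training objective):

```python
import torch
import torch.nn as nn

# Penalize the mismatch between encode(transform(x)) and transform(encode(x)).
def equivariance_loss(encoder, x):
    flip = lambda t: torch.flip(t, dims=[-1])        # horizontal flip
    return ((encoder(flip(x)) - flip(encoder(x))) ** 2).mean()

encoder = nn.Conv2d(3, 8, 3, padding=1)              # toy spatial encoder
loss = equivariance_loss(encoder, torch.randn(4, 3, 32, 32))
loss.backward()                                      # trainable like any loss term
```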
Compared to the previous representative method ControlNet, our proposed approach not only demonstrates superior robustness in more challenging high-noise scenarios but also possesses more promising scalability potential due to its self-supervised nature. Code is released on GitHub and Gitee.", "pdf": "https://openreview.net/pdf/c4a5035f5ec7beae0dafc12623af3a3b4f0df6ca.pdf"} {"title": "Strategic Linear Contextual Bandits", "url": "https://openreview.net/forum?id=apPHMfE63y", "detail_url": "https://openreview.net/forum?id=apPHMfE63y", "authors": "Thomas Kleine Buening,Aadirupa Saha,Christos Dimitrakakis,Haifeng Xu", "tags": "NIPS 2024,Poster", "abstract": "Motivated by the phenomenon of strategic agents gaming a recommender system to maximize the number of times they are recommended to users, we study a strategic variant of the linear contextual bandit problem, where the arms can strategically misreport privately observed contexts to the learner. We treat the algorithm design problem as one of *mechanism design* under uncertainty and propose the Optimistic Grim Trigger Mechanism (OptGTM) that incentivizes the agents (i.e., arms) to report their contexts truthfully while simultaneously minimizing regret. We also show that failing to account for the strategic nature of the agents results in linear regret. However, a trade-off between mechanism design and regret minimization appears to be unavoidable. More broadly, this work aims to provide insight into the intersection of online learning and mechanism design.", "pdf": "https://openreview.net/pdf/5b2a93a4fb5c106dc53020e9528250cb47baa3f0.pdf"} {"title": "Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators", "url": "https://openreview.net/forum?id=oUXiNX5KRm", "detail_url": "https://openreview.net/forum?id=oUXiNX5KRm", "authors": "Benedikt Alkin,Andreas F\u00fcrst,Simon Lucas Schmid,Lukas Gruber,Markus Holzleitner,Johannes Brandstetter", "tags": "NIPS 2024,Poster", "abstract": "Neural operators, serving as physics surrogate models, have recently gained increased interest. With ever-increasing problem complexity, the natural question arises: what is an efficient way to scale neural operators to larger and more complex simulations - most importantly by taking into account different types of simulation datasets. This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar. Whereas the flexibility of transformers has enabled unified architectures across domains, neural operators mostly follow a problem-specific design, where GNNs are commonly used for Lagrangian simulations and grid-based models predominate in Eulerian simulations. \n\nWe introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles. UPTs efficiently propagate dynamics in the latent space, emphasized by inverse encoding and decoding techniques. Finally, UPTs allow for queries of the latent space representation at any point in space-time.
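A coordinate-queried decoder of this flavor can be sketched with cross-attention from embedded space-time points to latent tokens; all shapes and layer choices below are assumptions for illustration, not the UPT architecture:

```python
import torch
import torch.nn as nn

# Query a set of latent tokens at arbitrary (x, y, z, t) points.
class PointQueryDecoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Linear(4, dim)                    # (x, y, z, t) -> query
        self.attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.out = nn.Linear(dim, 3)                      # e.g. a velocity field

    def forward(self, latent, coords):                    # latent: (B, N, dim)
        q = self.embed(coords)                            # coords: (B, M, 4)
        h, _ = self.attn(q, latent, latent)               # cross-attend to latents
        return self.out(h)

dec = PointQueryDecoder()
u = dec(torch.randn(2, 16, 64), torch.rand(2, 5, 4))
print(u.shape)  # torch.Size([2, 5, 3]) -- one prediction per queried point
```

Because the decoder consumes raw coordinates, the latent representation need not commit to any grid or particle structure, which is the stated source of flexibility.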
We demonstrate the diverse applicability and efficacy of UPTs in mesh-based fluid simulations, steady-state Reynolds-averaged Navier-Stokes simulations, and Lagrangian-based dynamics.", "pdf": "https://openreview.net/pdf/4ffed9700454b3329a91e1a3f4304eef2c75e090.pdf"} {"title": "GENOT: Entropic (Gromov) Wasserstein Flow Matching with Applications to Single-Cell Genomics", "url": "https://openreview.net/forum?id=hjspWd7jvg", "detail_url": "https://openreview.net/forum?id=hjspWd7jvg", "authors": "Dominik Klein,Th\u00e9o Uscidda,Fabian J Theis,marco cuturi", "tags": "NIPS 2024,Poster", "abstract": "Single-cell genomics has significantly advanced our understanding of cellular behavior, catalyzing innovations in treatments and precision medicine. However,\nsingle-cell sequencing technologies are inherently destructive and can only measure a limited array of data modalities simultaneously. This limitation underscores\nthe need for new methods capable of realigning cells. Optimal transport (OT)\nhas emerged as a potent solution, but traditional discrete solvers are hampered by\nscalability, privacy, and out-of-sample estimation issues. These challenges have\nspurred the development of neural network-based solvers, known as neural OT\nsolvers, that parameterize OT maps. Yet, these models often lack the flexibility\nneeded for broader life science applications. To address these deficiencies, our\napproach learns stochastic maps (i.e. transport plans), allows for any cost function,\nrelaxes mass conservation constraints and integrates quadratic solvers to tackle the\ncomplex challenges posed by the (Fused) Gromov-Wasserstein problem. Utilizing\nflow matching as a backbone, our method offers a flexible and effective framework.\nWe demonstrate its versatility and robustness through applications in cell development studies, cellular drug response modeling, and cross-modality cell translation,\nillustrating significant potential for enhancing therapeutic strategies.", "pdf": "https://openreview.net/pdf/65d7a0702ecd914bd64c6c097b050352f50f3264.pdf"} {"title": "Rethinking Parity Check Enhanced Symmetry-Preserving Ansatz", "url": "https://openreview.net/forum?id=aIuByRyHhV", "detail_url": "https://openreview.net/forum?id=aIuByRyHhV", "authors": "Ge Yan,Mengfei Ran,Ruocheng Wang,Kaisen Pan,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "With the arrival of the Noisy Intermediate-Scale Quantum (NISQ) era, Variational Quantum Algorithms (VQAs) have emerged to obtain possible quantum advantage. In particular, how to effectively incorporate hard constraints in VQAs remains a critical and open question. In this paper, we manage to combine the Hamming Weight Preserving ansatz with a topology-aware parity check on physical qubits to enforce error mitigation and further hard constraints. We demonstrate the combination significantly outperforms peer VQA methods on both quantum chemistry problems and constrained combinatorial optimization problems, e.g., the Quadratic Assignment Problem.
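The Hamming-weight-preserving building block can be illustrated numerically: a Givens-style two-qubit rotation mixes only the |01> and |10> basis states, so amplitude never leaks out of a fixed-weight subspace (a toy check, not the paper's circuit):

```python
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11> (indices 0..3).
theta = 0.7
G = np.eye(4, dtype=complex)
G[1, 1], G[1, 2] = np.cos(theta), -np.sin(theta)   # rotate within the
G[2, 1], G[2, 2] = np.sin(theta), np.cos(theta)    # weight-1 subspace only

state = np.zeros(4, dtype=complex)
state[1] = 1.0                                     # start in |01> (weight 1)
out = G @ state
# No amplitude on |00> (weight 0) or |11> (weight 2): Hamming weight preserved.
print(abs(out[0]) ** 2 + abs(out[3]) ** 2)         # 0.0
```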
Intensive experimental results on both simulators and superconducting quantum processors verify that the combination of the HWP ansatz with parity checks is among the most promising candidates for demonstrating quantum advantage on realistic problems in the NISQ era.", "pdf": "https://openreview.net/pdf/6c226c1f9beff6116e991cdd6c33a8f3750b85fa.pdf"} {"title": "REBORN: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR", "url": "https://openreview.net/forum?id=V3QZCM1AQv", "detail_url": "https://openreview.net/forum?id=V3QZCM1AQv", "authors": "Liang-Hsuan Tseng,En-Pei Hu,Cheng-Han Chiang,Yuan Tseng,Hung-yi Lee,Lin-shan Lee,Shao-Hua Sun", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised automatic speech recognition (ASR) aims to learn the mapping between the speech signal and its corresponding textual transcription without the supervision of paired speech-text data. A word/phoneme in the speech signal is represented by a segment of speech signal with variable length and unknown boundary, and this segmental structure makes learning the mapping between speech and text challenging, especially without paired data. In this paper, we propose REBORN, Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR. REBORN alternates between (1) training a segmentation model that predicts the boundaries of the segmental structures in speech signals and (2) training the phoneme prediction model, whose input is a segmental structure segmented by the segmentation model, to predict a phoneme transcription. Since supervised data for training the segmentation model is not available, we use reinforcement learning to train the segmentation model to favor segmentations that yield phoneme sequence predictions with a lower perplexity. We conduct extensive experiments and find that under the same setting, REBORN outperforms all prior unsupervised ASR models on LibriSpeech, TIMIT, and five non-English languages in Multilingual LibriSpeech. We comprehensively analyze why the boundaries learned by REBORN improve the unsupervised ASR performance.", "pdf": "https://openreview.net/pdf/967a9da8a9a7a5da3630376608e3cfe2b1ed4212.pdf"} {"title": "Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation", "url": "https://openreview.net/forum?id=wlcm21C4nk", "detail_url": "https://openreview.net/forum?id=wlcm21C4nk", "authors": "Chengting Yu,Lei Liu,Gaoang Wang,Erping Li,Aili Wang", "tags": "NIPS 2024,Poster", "abstract": "Recent insights have revealed that rate-coding is a primary form of information representation captured by surrogate-gradient-based Backpropagation Through Time (BPTT) in training deep Spiking Neural Networks (SNNs). Motivated by these findings, we propose rate-based backpropagation, a training strategy specifically designed to exploit rate-based representations to reduce the complexity of BPTT. Our method minimizes reliance on detailed temporal derivatives by focusing on averaged dynamics, streamlining the computational graph to reduce memory and computational demands of SNN training. We substantiate the rationality of the gradient approximation between BPTT and the proposed method through both theoretical analysis and empirical observations. Comprehensive experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS validate that our method achieves comparable performance to BPTT counterparts, and surpasses state-of-the-art efficient training techniques.
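The core trick, differentiating through time-averaged rates rather than unrolling every step, can be caricatured as follows; the sigmoid readout is an assumed surrogate, not the paper's neuron model:

```python
import torch

# Gradients flow through one averaged step instead of T unrolled steps.
def rate_forward(x_seq, w):            # x_seq: (T, B, D) input spikes/currents
    drive = x_seq.mean(dim=0) @ w      # average input drive over the window
    return torch.sigmoid(drive)        # surrogate for the mean firing rate

w = torch.randn(8, 4, requires_grad=True)
loss = rate_forward(torch.randn(20, 16, 8), w).sum()
loss.backward()                        # cheap: no temporal unrolling as in BPTT
```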
By leveraging the inherent benefits of rate-coding, this work sets the stage for more scalable and efficient SNN training within resource-constrained environments.", "pdf": "https://openreview.net/pdf/a4eb38a11be001248c145a0cd2381f9d6503b19c.pdf"} {"title": "Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits", "url": "https://openreview.net/forum?id=PI0CDY6nmo", "detail_url": "https://openreview.net/forum?id=PI0CDY6nmo", "authors": "Julien Zhou,Pierre Gaillard,Thibaud Rahier,Houssam Zenati,Julyan Arbel", "tags": "NIPS 2024,Poster", "abstract": "We address the problem of stochastic combinatorial semi-bandits, where a player selects among $P$ actions from the power set of a set containing $d$ base items. Adaptivity to the problem's structure is essential in order to obtain optimal regret upper bounds. As estimating the coefficients of a covariance matrix can be manageable in practice, leveraging them should improve the regret. We design ``optimistic'' covariance-adaptive algorithms relying on online estimations of the covariance structure, called OLS-UCB-C and COS-V (only the variances for the latter). They both yield improved gap-free regret. Although COS-V can be slightly suboptimal, it improves on computational complexity by taking inspiration from Thompson Sampling approaches. It is the first sampling-based algorithm satisfying a $\\sqrt{T}$ gap-free regret (up to poly-logs). We also show that in some cases, our approach efficiently leverages the semi-bandit feedback and outperforms bandit feedback approaches, not only in exponential regimes where $P\\gg d$ but also when $P\\leq d$, which is not covered by existing analyses.", "pdf": "https://openreview.net/pdf/d8f83f17537cde6d371917428f36a07c33da1f9f.pdf"} {"title": "Exploiting Descriptive Completeness Prior for Cross Modal Hashing with Incomplete Labels", "url": "https://openreview.net/forum?id=ferj6WqShv", "detail_url": "https://openreview.net/forum?id=ferj6WqShv", "authors": "Haoyang Luo,Zheng Zhang,Yadan Luo", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we tackle the challenge of generating high-quality hash codes for cross-modal retrieval in the presence of incomplete labels, which creates uncertainty in distinguishing between positive and negative pairs. Vision-language models such as CLIP offer a potential solution by providing generic knowledge for missing label recovery, yet their zero-shot performance remains insufficient. To address this, we propose a novel Prompt Contrastive Recovery approach, \\textbf{PCRIL}, which progressively identifies promising positive classes from unknown label sets and recursively searches for other relevant labels. Identifying unknowns is nontrivial due to the fixed and long-tailed patterns of positive label sets in training data, which hampers the discovery of new label combinations. Therefore, we consider each subset of positive labels and construct three types of negative prompts through deletion, addition, and replacement for prompt learning. The augmented supervision guides the model to measure the completeness of label sets, thus facilitating the subsequent greedy tree search for label completion. We also address extreme cases of significant unknown labels and lack of negative pairwise supervision by deriving two augmentation strategies: seeking unknown-complementary samples for mixup and random flipping for negative labels.
Extensive experiments reveal the vulnerability of current methods and demonstrate the effectiveness of PCRIL, achieving an average 12\\% mAP improvement over the current SOTA across all datasets. Our code is available at https://github.com/E-Galois/PCRIL.", "pdf": "https://openreview.net/pdf/25333eef489ec00318dd300158da08b8b0a2eceb.pdf"} {"title": "B$\\oplus$LD: Boolean Logic Deep Learning", "url": "https://openreview.net/forum?id=DO9wPZOPjk", "detail_url": "https://openreview.net/forum?id=DO9wPZOPjk", "authors": "Van Minh Nguyen,Cristian Ocampo,Aymen Askri,Louis Leconte,Ba-Hien Tran", "tags": "NIPS 2024,Poster", "abstract": "Computational intensiveness of deep learning has motivated low-precision arithmetic designs. However, the current quantized/binarized training approaches are limited by: (1) significant performance loss due to arbitrary approximations of the latent weight gradient through its discretization/binarization function, and (2) training computational intensiveness due to the reliance on full-precision latent weights. \nThis paper proposes a novel mathematical principle by introducing the notion of Boolean variation such that neurons made of Boolean weights and/or activations can be trained ---for the first time--- natively in the Boolean domain instead of latent-weight gradient descent and real arithmetic. We explore its convergence, conduct extensive experimental benchmarking, and provide consistent complexity evaluation by considering chip architecture, memory hierarchy, dataflow, and arithmetic precision. Our approach achieves baseline full-precision accuracy in ImageNet classification and surpasses state-of-the-art results in semantic segmentation, with notable performance in image super-resolution and natural language understanding with transformer-based models.
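For background on Boolean-weight layers generally (not the paper's Boolean-variation training rule), the classic XNOR-popcount identity evaluates a binary dot product with bitwise logic:

```python
import numpy as np

# Map bits {0,1} to signs {-1,+1}: the signed dot product equals
# 2 * (number of agreeing bits) - n, and agreement is exactly XNOR.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=16)          # Boolean activations
W = rng.integers(0, 2, size=(4, 16))     # Boolean weights, 4 output units
xnor = 1 - (W ^ x)                       # 1 where weight and input agree
pre = 2 * xnor.sum(axis=1) - 16          # popcount mapped to the +/-1 dot product
y = (pre >= 0).astype(int)               # Boolean activation via sign
print(y)
```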
Moreover, it significantly reduces energy consumption during both training and inference.", "pdf": "https://openreview.net/pdf/0295378bd65fe374014215d8776b9a023c17efaf.pdf"} {"title": "Unified Domain Generalization and Adaptation for Multi-View 3D Object Detection", "url": "https://openreview.net/forum?id=lxuXvJSOcP", "detail_url": "https://openreview.net/forum?id=lxuXvJSOcP", "authors": "Gyusam Chang,Jiwon Lee,Donghyun Kim,Jinkyu Kim,Dongwook Lee,Daehyun Ji,Sujin Jang,Sangpil Kim", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in 3D object detection leveraging multi-view cameras have demonstrated their practical and economical value in various challenging vision tasks.\nHowever, typical supervised learning approaches face challenges in achieving satisfactory adaptation toward unseen and unlabeled target datasets (i.e., direct transfer) due to the inevitable geometric misalignment between the source and target domains.\nIn practice, we also encounter constraints on resources for training models and collecting annotations for the successful deployment of 3D object detectors.\nIn this paper, we propose Unified Domain Generalization and Adaptation (UDGA), a practical solution to mitigate those drawbacks.\nWe first propose Multi-view Overlap Depth Constraint that leverages the strong association between multiple views, significantly alleviating geometric gaps due to perspective view changes.\nThen, we present a Label-Efficient Domain Adaptation approach to handle unfamiliar targets with significantly fewer labels (i.e., 1$\%$ and 5$\%$), while preserving well-defined source knowledge for training efficiency.\nOverall, the UDGA framework enables stable detection performance in both source and target domains, effectively bridging inevitable domain gaps, while demanding fewer annotations.\nWe demonstrate the robustness of UDGA with large-scale benchmarks: nuScenes, Lyft, and Waymo, where our framework outperforms the current state-of-the-art methods.", "pdf": "https://openreview.net/pdf/b5dc22bb89aacc415a9f812bb5b590d0629a8c41.pdf"} {"title": "MindMerger: Efficiently Boosting LLM Reasoning in non-English Languages", "url": "https://openreview.net/forum?id=Oq32ylAOu2", "detail_url": "https://openreview.net/forum?id=Oq32ylAOu2", "authors": "Zixian Huang,Wenhao Zhu,Gong Cheng,Lei Li,Fei Yuan", "tags": "NIPS 2024,Poster", "abstract": "Reasoning capabilities are crucial for Large Language Models~(LLMs), yet a notable gap exists between English and non-English languages. To bridge this disparity, some works fine-tune LLMs to relearn reasoning capabilities in non-English languages, while others replace non-English inputs with an external model's outputs, such as English translations, to circumvent the challenge of LLMs understanding non-English text. Unfortunately, these methods often underutilize the built-in skilled reasoning and useful language understanding capabilities of LLMs. In order to better utilize the minds of reasoning and language understanding in LLMs, we propose a new method, namely MergeMinds, which merges LLMs with the external language understanding capabilities from multilingual models to boost the multilingual reasoning performance. Furthermore, a two-step training scheme is introduced to first embed the external capabilities into LLMs and then train the collaborative utilization of the external and built-in capabilities in LLMs.
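The merge step can be pictured as a small trainable bridge that maps multilingual encoder states into the LLM's embedding space and prepends them to the token embeddings; module shapes and this composition are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

# Project external encoder states into the LLM embedding space and prepend.
class Bridge(nn.Module):
    def __init__(self, enc_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(enc_dim, llm_dim), nn.GELU(),
                                  nn.Linear(llm_dim, llm_dim))

    def forward(self, enc_states, llm_embeds):     # (B, S, enc), (B, T, llm)
        return torch.cat([self.proj(enc_states), llm_embeds], dim=1)

merged = Bridge()(torch.randn(2, 10, 1024), torch.randn(2, 32, 4096))
print(merged.shape)  # torch.Size([2, 42, 4096]) -- fed to the frozen LLM
```

Training only the bridge (stage one) before jointly tuning the collaboration (stage two) is consistent with the two-step scheme described, while leaving the LLM's own parameters untouched.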
Experiments on three multilingual reasoning datasets and a language understanding dataset demonstrate that MergeMinds consistently outperforms all baselines, especially in low-resource languages. Without updating the parameters of LLMs, the average accuracy improved by 6.7 and 8.0 across all languages and low-resource languages on the MGSM dataset, respectively.", "pdf": "https://openreview.net/pdf/f88e7c1ad0b42e9fb0b8d3288036b9ab5be2071e.pdf"} {"title": "Adaptive Depth Networks with Skippable Sub-Paths", "url": "https://openreview.net/forum?id=NPu7Cdk2f9", "detail_url": "https://openreview.net/forum?id=NPu7Cdk2f9", "authors": "Woochul Kang,Hyungseop Lee", "tags": "NIPS 2024,Poster", "abstract": "Predictable adaptation of network depths can be an effective way to control inference latency and meet the resource condition of various devices. However, previous adaptive depth networks do not provide general principles or a formal explanation of which layers can be skipped and why; hence, their approaches are hard to generalize and require long and complex training procedures. In this paper, we present a practical approach to adaptive depth networks that is applicable to various networks with minimal training effort. In our approach, every hierarchical residual stage is divided into two sub-paths, and they are trained to acquire different properties through a simple self-distillation strategy. While the first sub-path is essential for hierarchical feature learning, the second one is trained to refine the learned features and minimize performance degradation if it is skipped. Unlike prior adaptive networks, our approach does not train every target sub-network in an iterative manner. At test time, however, we can connect these sub-paths in a combinatorial manner to select sub-networks of various accuracy-efficiency trade-offs from a single network. We provide a formal rationale for why the proposed training method can reduce overall prediction errors while minimizing the impact of skipping sub-paths.
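A skippable sub-path is easy to sketch: a residual stage whose second half refines features but can be bypassed at test time to trade accuracy for latency (illustrative, with toy linear blocks in place of the real stage):

```python
import torch
import torch.nn as nn

# Residual stage split into a mandatory sub-path and a skippable refiner.
class Stage(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.base = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())    # mandatory
        self.refine = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # skippable

    def forward(self, x, skip_refine=False):
        h = x + self.base(x)
        return h if skip_refine else h + self.refine(h)

stage = Stage()
fast = stage(torch.randn(1, 32), skip_refine=True)   # shallow sub-network
full = stage(torch.randn(1, 32))                     # full-depth path
```

Stacking such stages lets one pick any combination of skipped refiners at test time, yielding the combinatorial menu of accuracy-efficiency trade-offs from a single trained network.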
We demonstrate the generality and effectiveness of our approach with convolutional neural networks and transformers.", "pdf": "https://openreview.net/pdf/76e0dc5926191a811cde33c314182f2375b42b5f.pdf"} {"title": "Articulate your NeRF: Unsupervised articulated object modeling via conditional view synthesis", "url": "https://openreview.net/forum?id=9B6J64eTp4", "detail_url": "https://openreview.net/forum?id=9B6J64eTp4", "authors": "Jianning Deng,Kartic Subr,Hakan Bilen", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel unsupervised method to learn pose and part-segmentation of articulated objects with rigid parts.\n Given two observations of an object in different articulation states, our method learns the geometry and appearance of object parts by using an implicit model from the first observation, and distills the part segmentation and articulation from the second observation while rendering the latter observation.\n Additionally, to tackle the complexities in the joint optimization of part segmentation and articulation, we propose a voxel grid based initialization strategy and a decoupled optimization procedure.\n Compared to the prior unsupervised work, our model obtains significantly better performance and generalizes to objects with multiple parts, while it can be trained efficiently from few views of the latter observation.", "pdf": "https://openreview.net/pdf/5ae21776a1aca287ee6f5bcb3fddc0c76ecb7571.pdf"} {"title": "FUGAL: Feature-fortified Unrestricted Graph Alignment", "url": "https://openreview.net/forum?id=SdLOs1FR4h", "detail_url": "https://openreview.net/forum?id=SdLOs1FR4h", "authors": "Aditya Bommakanti,Harshith Reddy Vonteri,Konstantinos Skitsas,Sayan Ranu,Davide Mottin,Panagiotis Karras", "tags": "NIPS 2024,Poster", "abstract": "The necessity to align two graphs, minimizing a structural distance metric, is prevalent in biology, chemistry, recommender systems, and social network analysis. Due to the problem\u2019s NP-hardness, prevailing graph alignment methods follow a modular and mediated approach, solving the problem by restricting to the domain of intermediary graph representations or products like embeddings, spectra, and graph signals. Restricting the problem to this intermediate space may distort the original problem, and such methods are hence predisposed to miss high-quality solutions. In this paper, we propose an unrestricted method, FUGAL, which finds a permutation matrix that maps one graph to another by directly operating on their adjacency matrices with judicious constraint relaxation. Extensive experimentation demonstrates that FUGAL consistently surpasses state-of-the-art graph alignment methods in accuracy across all benchmark datasets without encumbering efficiency.", "pdf": "https://openreview.net/pdf/17aef8ff1d289ba15e7ffd178b132deee4f747e3.pdf"} {"title": "Hyperbolic Embeddings of Supervised Models", "url": "https://openreview.net/forum?id=n60xBFZWrk", "detail_url": "https://openreview.net/forum?id=n60xBFZWrk", "authors": "Richard Nock,Ehsan Amid,Frank Nielsen,Alexander Soen,Manfred K Warmuth", "tags": "NIPS 2024,Poster", "abstract": "Models of hyperbolic geometry have been successfully used in ML for two main tasks: embedding *models* in unsupervised learning (*e.g.* hierarchies) and embedding *data*. 
\nTo our knowledge, there are no approaches that provide embeddings for supervised models, even though hyperbolic geometry provides convenient properties for expressing popular hypothesis classes, such as decision trees (and ensembles).\nIn this paper, we propose a full-fledged solution to the problem in three independent contributions. The first links the theory of losses for class probability estimation to hyperbolic embeddings in the Poincar\'e disk model. The second resolves an issue with obtaining a clean, unambiguous embedding of (ensembles of) decision trees in this model. The third shows how to smoothly tweak the Poincar\'e hyperbolic distance to improve its encoding and visualization properties near the border of the disk, a crucial region for our application, while keeping hyperbolicity.\nThis last step has substantial independent interest as it is grounded in a generalization of the Leibniz-Newton fundamental theorem of calculus.", "pdf": "https://openreview.net/pdf/462449e98af15b416d5418ce67b4921a32541c11.pdf"} {"title": "Team-Fictitious Play for Reaching Team-Nash Equilibrium in Multi-team Games", "url": "https://openreview.net/forum?id=6VVgAgVfxW", "detail_url": "https://openreview.net/forum?id=6VVgAgVfxW", "authors": "Ahmed Said D\u00f6nmez,Y\u00fcksel Arslanta\u015f,Muhammed O. Sayin", "tags": "NIPS 2024,Poster", "abstract": "Multi-team games, prevalent in robotics and resource management, involve team members striving for a joint best response against other teams. Team-Nash equilibrium (TNE) predicts the outcomes of such coordinated interactions. However, can teams of self-interested agents reach TNE? We introduce Team-Fictitious Play (Team-FP), a new variant of fictitious play where agents respond to the last actions of team members and the beliefs formed about other teams with some inertia in action updates. This design is essential in team coordination beyond the classical fictitious play dynamics. We focus on zero-sum potential team games (ZSPTGs) where teams can interact pairwise while the team members do not necessarily have identical payoffs. We show that Team-FP reaches near-TNE in ZSPTGs with a quantifiable error bound. We extend Team-FP dynamics to multi-team Markov games for model-based and model-free cases. The convergence analysis tackles the challenge of non-stationarity induced by evolving opponent strategies based on the optimal coupling lemma and stochastic differential inclusion approximation methods. Our work strengthens the foundation for using TNE to predict the behavior of decentralized teams and offers a practical rule for team learning in multi-team environments. We provide extensive simulations of Team-FP dynamics and compare its performance with other widely studied dynamics such as smooth fictitious play and multiplicative weights update. We further explore how different parameters impact the speed of convergence.", "pdf": "https://openreview.net/pdf/d6fc0a044595d08ac10b7e6e074c1db9cf5a7b27.pdf"} {"title": "SMART: Towards Pre-trained Missing-Aware Model for Patient Health Status Prediction", "url": "https://openreview.net/forum?id=7UenF4kx4j", "detail_url": "https://openreview.net/forum?id=7UenF4kx4j", "authors": "Zhihao Yu,Xu Chu,Yujie Jin,Yasha Wang,Junfeng Zhao", "tags": "NIPS 2024,Poster", "abstract": "Electronic health record (EHR) data has emerged as a valuable resource for analyzing patient health status. 
However, the prevalence of missing data in EHR poses significant challenges to existing methods, leading to spurious correlations and suboptimal predictions. While various imputation techniques have been developed to address this issue, they often fixate on difficult-to-interpolate details and may introduce additional noise into clinical predictions. To tackle this problem, we propose SMART, a Self-Supervised Missing-Aware RepresenTation Learning approach for patient health status prediction, which encodes missing information via missing-aware temporal and variable attentions and learns to impute missing values through a novel self-supervised pre-training approach that reconstructs missing data representations in the latent space rather than, as is usual, in the input space. By adopting elaborated attentions and focusing on learning higher-order representations, SMART promotes better generalization and robustness to missing data. We validate the effectiveness of SMART through extensive experiments on six EHR tasks, demonstrating its superiority over state-of-the-art methods.", "pdf": "https://openreview.net/pdf/bf3ecad3a287cc493830547544623640f97d06f5.pdf"} {"title": "Learning to compute Gr\u00f6bner bases", "url": "https://openreview.net/forum?id=ZRz7XlxBzQ", "detail_url": "https://openreview.net/forum?id=ZRz7XlxBzQ", "authors": "Hiroshi Kera,Yuki Ishihara,Yuta Kambe,Tristan Vaccon,Kazuhiro Yokoyama", "tags": "NIPS 2024,Poster", "abstract": "Solving a polynomial system, or computing an associated Gr\u00f6bner basis, has been a fundamental task in computational algebra. However, it is also known for its notorious doubly exponential time complexity in the number of variables in the worst case. This paper is the first to address the learning of Gr\u00f6bner basis computation with Transformers. The training requires many pairs of a polynomial system and the associated Gr\u00f6bner basis, raising two novel algebraic problems: random generation of Gr\u00f6bner bases and transforming them into non-Gr\u00f6bner ones, termed the backward Gr\u00f6bner problem. We resolve these problems with 0-dimensional radical ideals, the ideals appearing in various applications. Further, we propose a hybrid input embedding to handle coefficient tokens with continuity bias and avoid the growth of the vocabulary set. The experiments show that our dataset generation method is a few orders of magnitude faster than a naive approach, overcoming a crucial challenge in learning to compute Gr\u00f6bner bases, and that Gr\u00f6bner basis computation is learnable for a particular class.", "pdf": "https://openreview.net/pdf/ffdfe9cb70c0d3679a58de94d49e12b1a2e9b8c8.pdf"} {"title": "Improved Algorithms for Contextual Dynamic Pricing", "url": "https://openreview.net/forum?id=iMEAHXDiNP", "detail_url": "https://openreview.net/forum?id=iMEAHXDiNP", "authors": "Matilde Tullii,Solenne Gaucher,Nadav Merlis,Vianney Perchet", "tags": "NIPS 2024,Poster", "abstract": "In contextual dynamic pricing, a seller sequentially prices goods based on contextual information. Buyers will purchase products only if the prices are below their valuations.\nThe goal of the seller is to design a pricing strategy that collects as much revenue as possible. We focus on two different valuation models. The first assumes that valuations linearly depend on the context and are further distorted by noise. Under minor regularity assumptions, our algorithm achieves an optimal regret bound of $\tilde{\mathcal{O}}(T^{2/3})$, improving on existing results. 
The second model removes the linearity assumption, requiring only that the expected buyer valuation is $\beta$-H\\\"older in the context. For this model, our algorithm obtains a regret of $\tilde{\mathcal{O}}(T^{(d+2\beta)/(d+3\beta)})$, where $d$ is the dimension of the context space.", "pdf": "https://openreview.net/pdf/09d45096a714980a50e55f280c6e766083f24124.pdf"} {"title": "Combining Statistical Depth and Fermat Distance for Uncertainty Quantification", "url": "https://openreview.net/forum?id=xeXRhTUmcf", "detail_url": "https://openreview.net/forum?id=xeXRhTUmcf", "authors": "Hai-Vy Nguyen,Fabrice Gamboa,Reda Chhaibi,Sixin Zhang,Serge Gratton,Thierry Giaccone", "tags": "NIPS 2024,Poster", "abstract": "We measure the out-of-domain uncertainty in the prediction of Neural Networks using a statistical notion called \"Lens Depth\" (LD) combined with Fermat Distance, which is able to capture precisely the \"depth\" of a point with respect to a distribution in feature space, without any distributional assumption. Our method also has no trainable parameters. The method is applied directly in the feature space at test time and does not intervene in the training process. As such, it does not impact the performance of the original model. The proposed method gives excellent qualitative results on toy datasets and can give competitive or better uncertainty estimation on standard deep learning datasets compared to strong baseline methods.", "pdf": "https://openreview.net/pdf/c4068669caf9d9c9527416b6771bb88358891592.pdf"} {"title": "Mixture of Experts Meets Prompt-Based Continual Learning", "url": "https://openreview.net/forum?id=erwatqQ4p8", "detail_url": "https://openreview.net/forum?id=erwatqQ4p8", "authors": "Minh Le,An Nguyen The,Huy Nguyen,Thien Trang Nguyen Vu,Huyen Trang Pham,Linh Ngo Van,Nhat Ho", "tags": "NIPS 2024,Poster", "abstract": "Exploiting the power of pre-trained models, prompt-based approaches stand out compared to other continual learning solutions in effectively preventing catastrophic forgetting, even with very few learnable parameters and without the need for a memory buffer. While existing prompt-based continual learning methods excel in leveraging prompts for state-of-the-art performance, they often lack a theoretical explanation for the effectiveness of prompting. This paper conducts a theoretical analysis to unravel how prompts bestow such advantages in continual learning, thus offering a new perspective on prompt design. We first show that the attention block of pre-trained models like Vision Transformers inherently encodes a special mixture of experts architecture, characterized by linear experts and quadratic gating score functions. This realization drives us to provide a novel view on prefix tuning, reframing it as the addition of new task-specific experts, thereby inspiring the design of a novel gating mechanism termed Non-linear Residual Gates (NoRGa). Through the incorporation of non-linear activation and residual connection, NoRGa enhances continual learning performance while preserving parameter efficiency. The effectiveness of NoRGa is substantiated both theoretically and empirically across diverse benchmarks and pretraining paradigms. 
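One plausible reading of this non-linear residual gating, as a sketch (the exact form is defined in the paper and its released code; `alpha` is a hypothetical scale, not a parameter named in the abstract):

```python
import torch

def norga_gate(score: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # score: pre-softmax gating/attention scores contributed by the prefix.
    # Residual connection plus a non-linear activation of the same score.
    return score + alpha * torch.tanh(score)
```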
Our code is publicly available at https://github.com/Minhchuyentoancbn/MoE_PromptCL.", "pdf": "https://openreview.net/pdf/5bc9356edb3ef12d9404c86c0420e2c0f4106b84.pdf"} {"title": "Causal Context Adjustment Loss for Learned Image Compression", "url": "https://openreview.net/forum?id=AYntCZvoLI", "detail_url": "https://openreview.net/forum?id=AYntCZvoLI", "authors": "Minghao Han,Shiyin Jiang,Shengxi Li,Xin Deng,Mai Xu,Ce Zhu,Shuhang Gu", "tags": "NIPS 2024,Poster", "abstract": "In recent years, learned image compression (LIC) technologies have surpassed conventional methods notably in terms of rate-distortion (RD) performance. Most present learned techniques are VAE-based with an autoregressive entropy model, which promotes RD performance by utilizing the decoded causal context. However, extant methods are highly dependent on a fixed, hand-crafted causal context. How to guide the auto-encoder to generate a causal context that better benefits the autoregressive entropy model is worth exploring. In this paper, we make the first attempt to explicitly adjust the causal context with our proposed Causal Context Adjustment loss (CCA-loss). By imposing the CCA-loss, we enable the neural network to spontaneously adjust important information into the early stages of the autoregressive entropy model. Furthermore, as transformer technology has developed remarkably, its variants have been adopted by many state-of-the-art (SOTA) LIC techniques. Existing computing devices do not handle the attention computation efficiently, which burdens both computation and inference latency. To overcome this, we establish a convolutional neural network (CNN) image compression model and adopt an unevenly grouped channel-wise strategy for high efficiency. Ultimately, the proposed CNN-based LIC network trained with our Causal Context Adjustment loss attains a favorable trade-off between inference latency and rate-distortion performance.", "pdf": "https://openreview.net/pdf/3aec980538688db2397ed3b1e9df92a13d39530e.pdf"} {"title": "Quantifying Aleatoric Uncertainty of the Treatment Effect: A Novel Orthogonal Learner", "url": "https://openreview.net/forum?id=RDsDvSHGkA", "detail_url": "https://openreview.net/forum?id=RDsDvSHGkA", "authors": "Valentyn Melnychuk,Stefan Feuerriegel,Mihaela van der Schaar", "tags": "NIPS 2024,Poster", "abstract": "Estimating causal quantities from observational data is crucial for understanding the safety and effectiveness of medical treatments. However, to make reliable inferences, medical practitioners require not only estimating averaged causal quantities, such as the conditional average treatment effect, but also understanding the randomness of the treatment effect as a random variable. This randomness is referred to as aleatoric uncertainty and is necessary for understanding the probability of benefit from treatment or quantiles of the treatment effect. Yet, the aleatoric uncertainty of the treatment effect has received surprisingly little attention in the causal machine learning community. To fill this gap, we aim to quantify the aleatoric uncertainty of the treatment effect at the covariate-conditional level, namely, the conditional distribution of the treatment effect (CDTE). Unlike average causal quantities, the CDTE is not point identifiable without strong additional assumptions. 
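For concreteness, the CDTE can be written in standard potential-outcome notation as the conditional law of the individual effect; since it depends on the never-observed joint distribution of $(Y(1), Y(0))$, it is not point identified:

```latex
% CDTE at covariate value x: the conditional law of the individual
% treatment effect Y(1) - Y(0), evaluated at threshold t.
F_{\tau \mid x}(t) = \mathbb{P}\bigl( Y(1) - Y(0) \le t \,\bigm|\, X = x \bigr)
```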
As a remedy, we employ partial identification to obtain sharp bounds on the CDTE and thereby quantify the aleatoric uncertainty of the treatment effect. We then develop a novel, orthogonal learner for the bounds on the CDTE, which we call AU-learner. We further show that our AU-learner has several strengths in that it satisfies Neyman-orthogonality and is doubly robust. Finally, we propose a fully-parametric deep learning instantiation of our AU-learner.", "pdf": "https://openreview.net/pdf/bac3123e1917112569e820312854f425c9db5215.pdf"} {"title": "SDformer: Similarity-driven Discrete Transformer For Time Series Generation", "url": "https://openreview.net/forum?id=ZKbplMrDzI", "detail_url": "https://openreview.net/forum?id=ZKbplMrDzI", "authors": "Chen Zhicheng,FENG SHIBO,Zhong Zhang,Xi Xiao,Xingyu Gao,Peilin Zhao", "tags": "NIPS 2024,Poster", "abstract": "The superior generation capabilities of Denoising Diffusion Probabilistic Models (DDPMs) have been effectively showcased across a multitude of domains. Recently, the application of DDPMs has extended to time series generation tasks, where they have significantly outperformed other deep generative models, often by a substantial margin. However, we have discovered two main challenges with these methods: 1) the inference time is excessively long; 2) there is potential for improvement in the quality of the generated time series. In this paper, we propose a method based on discrete token modeling technique called Similarity-driven Discrete Transformer (SDformer). Specifically, SDformer utilizes a similarity-driven vector quantization method for learning high-quality discrete token representations of time series, followed by a discrete Transformer for data distribution modeling at the token level. Comprehensive experiments show that our method significantly outperforms competing approaches in terms of the generated time series quality while also ensuring a short inference time. Furthermore, without requiring retraining, SDformer can be directly applied to predictive tasks and still achieve commendable results.", "pdf": "https://openreview.net/pdf/680a46b704bb1f384861631eee0691feb67385b1.pdf"} {"title": "S2HPruner: Soft-to-Hard Distillation Bridges the Discretization Gap in Pruning", "url": "https://openreview.net/forum?id=mtyy3Myyhz", "detail_url": "https://openreview.net/forum?id=mtyy3Myyhz", "authors": "Weihao Lin,Shengji Tang,Chong Yu,Peng Ye,Tao Chen", "tags": "NIPS 2024,Poster", "abstract": "Recently, differentiable mask pruning methods optimize the continuous relaxation architecture (soft network) as the proxy of the pruned discrete network (hard network) for superior sub-architecture search. However, due to the agnostic impact of the discretization process, the hard network struggles with the equivalent representational capacity as the soft network, namely discretization gap, which severely spoils the pruning performance. In this paper, we first investigate the discretization gap and propose a novel structural differentiable mask pruning framework named S2HPruner to bridge the discretization gap in a one-stage manner. In the training procedure, S2HPruner forwards both the soft network and its corresponding hard network, then distills the hard network under the supervision of the soft network. To optimize the mask and prevent performance degradation, we propose a decoupled bidirectional knowledge distillation. It blocks the weight updating from the hard to the soft network while maintaining the gradient corresponding to the mask. 
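A minimal PyTorch sketch of this decoupled gradient flow (an assumed form combining a straight-through mask with detached weights; not the authors' exact code):

```python
import torch

def hard_forward_weight(weight: torch.Tensor, soft_mask: torch.Tensor) -> torch.Tensor:
    hard_mask = (soft_mask > 0.5).float()
    # Straight-through estimator: forward value of hard_mask,
    # backward gradient of soft_mask.
    mask = hard_mask + soft_mask - soft_mask.detach()
    # weight.detach(): the distillation loss on the hard network updates
    # the mask only, never the weights (the "blocked" direction).
    return weight.detach() * mask
```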
Compared with existing pruning methods, S2HPruner achieves superior pruning performance without fine-tuning on comprehensive benchmarks, including CIFAR-100, Tiny ImageNet, and ImageNet, with a variety of network architectures. In addition, investigation and analysis experiments explain the effectiveness of S2HPruner. Codes will be released soon.", "pdf": "https://openreview.net/pdf/109cc2aaecb1682c381e6f746e4bcf164254f641.pdf"} {"title": "ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splattings", "url": "https://openreview.net/forum?id=CovjSQmNOD", "detail_url": "https://openreview.net/forum?id=CovjSQmNOD", "authors": "Suyoung Lee,Jaeyoung Chung,Jaeyoo Huh,Kyoung Mu Lee", "tags": "NIPS 2024,Poster", "abstract": "Omnidirectional (or 360-degree) images are increasingly being used for 3D applications since they allow the rendering of an entire scene with a single image. Existing works based on neural radiance fields demonstrate successful 3D reconstruction quality on egocentric videos, yet they suffer from long training and rendering times. Recently, 3D Gaussian splatting has gained attention for its fast optimization and real-time rendering. However, directly applying a perspective rasterizer to omnidirectional images results in severe distortion due to the different optical properties between the two image domains. In this work, we present ODGS, a novel rasterization pipeline for omnidirectional images with geometric interpretation. For each Gaussian, we define a tangent plane that touches the unit sphere and is perpendicular to the ray headed toward the Gaussian center. We then leverage a perspective camera rasterizer to project the Gaussian onto the corresponding tangent plane. The projected Gaussians are transformed and combined into the omnidirectional image, finalizing the omnidirectional rasterization process. This interpretation reveals the implicit assumptions within the proposed pipeline, which we verify through mathematical proofs. The entire rasterization process is parallelized using CUDA, achieving optimization and rendering speeds 100 times faster than NeRF-based methods. Our comprehensive experiments highlight the superiority of ODGS by delivering the best reconstruction and perceptual quality across various datasets. Additionally, results on roaming datasets demonstrate that ODGS effectively restores fine details, even when reconstructing large 3D scenes. The source code is available on our project page (https://github.com/esw0116/ODGS).", "pdf": "https://openreview.net/pdf/a775cea7efdc466cfc6b0e41d902dcb854859eb2.pdf"} {"title": "Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization", "url": "https://openreview.net/forum?id=zkhyrxlwqH", "detail_url": "https://openreview.net/forum?id=zkhyrxlwqH", "authors": "Sanghyeob Song,Jaihyun Lew,Hyemi Jang,Sungroh Yoon", "tags": "NIPS 2024,Poster", "abstract": "Estimating the homography between two images is crucial for mid- or high-level vision tasks, such as image stitching and fusion. However, using supervised learning methods is often challenging or costly due to the difficulty of collecting ground-truth data. In response, unsupervised learning approaches have emerged. Most early methods, though, assume that the given image pairs are from the same camera or have minor lighting differences. 
Consequently, while these methods perform effectively under such conditions, they generally fail when input image pairs come from different domains, referred to as multimodal image pairs.\nTo address these limitations, we propose AltO, an unsupervised learning framework for estimating homography in multimodal image pairs. Our method employs a two-phase alternating optimization framework, similar to Expectation-Maximization (EM), where one phase reduces the geometry gap and the other addresses the modality gap. To handle these gaps, we use Barlow Twins loss for the modality gap and propose an extended version, Geometry Barlow Twins, for the geometry gap. As a result, we demonstrate that our method, AltO, can be trained on multimodal datasets without any ground-truth data. It not only outperforms other unsupervised methods but is also compatible with various architectures of homography estimators.\nThe source code can be found at: https://github.com/songsang7/AltO", "pdf": "https://openreview.net/pdf/dbd7c26b2dae2f1c86abaa70a60fb6e9e683d675.pdf"} {"title": "Initializing Variable-sized Vision Transformers from Learngene with Learnable Transformation", "url": "https://openreview.net/forum?id=7j6xgGj5lF", "detail_url": "https://openreview.net/forum?id=7j6xgGj5lF", "authors": "Shiyu Xia,Yuankun Zu,Xu Yang,Xin Geng", "tags": "NIPS 2024,Poster", "abstract": "In practical scenarios, it is necessary to build variable-sized models to accommodate diverse resource constraints, where weight initialization serves as a crucial step preceding training. The recently introduced Learngene framework first learns one compact module, termed learngene, from a large well-trained model, and then transforms the learngene to initialize variable-sized models. However, existing Learngene methods provide limited guidance on transforming the learngene: transformation mechanisms are manually designed and generally lack a learnable component. Moreover, these methods only consider transforming the learngene along the depth dimension, thus constraining its flexibility. Motivated by these concerns, we propose a novel and effective Learngene approach termed LeTs (Learnable Transformation), where we transform the learngene module along both the width and depth dimensions with a set of learnable matrices for flexible variable-sized model initialization. Specifically, we construct an auxiliary model comprising the compact learngene module and learnable transformation matrices, enabling both components to be trained. To meet the varying size requirements of target models, we select specific parameters from the well-trained transformation matrices to adaptively transform the learngene, guided by strategies such as continuous selection and magnitude-wise selection. Extensive experiments on ImageNet-1K demonstrate that Des-Nets initialized via LeTs outperform those trained from scratch for 100 epochs after only 1 epoch of tuning. 
When transferring to downstream image classification tasks, LeTs achieves better results, outperforming from-scratch training after only about 10 epochs of a 300-epoch training schedule.", "pdf": "https://openreview.net/pdf/f1ca0af821faf3e735d643be7ac69a386dc3c717.pdf"} {"title": "Self-playing Adversarial Language Game Enhances LLM Reasoning", "url": "https://openreview.net/forum?id=oCGkSH7ys2", "detail_url": "https://openreview.net/forum?id=oCGkSH7ys2", "authors": "Pengyu Cheng,Tianhao Hu,Han Xu,Zhisong Zhang,Yong Dai,Lei Han,nan du,Xiaolong Li", "tags": "NIPS 2024,Poster", "abstract": "We explore the potential of self-play training for large language models (LLMs) in a two-player adversarial language game called Adversarial Taboo. In this game, an attacker and a defender communicate around a target word only visible to the attacker. The attacker aims to induce the defender to speak the target word unconsciously, while the defender tries to infer the target word from the attacker's utterances. To win the game, both players must have sufficient knowledge about the target word and high-level reasoning ability to infer and express in this information-reserved conversation. Hence, we are curious about whether LLMs' reasoning ability can be further enhanced by Self-Playing this Adversarial language Game (SPAG). With this goal, we select several open-source LLMs and let each act as the attacker and play with a copy of itself as the defender on an extensive range of target words. Through reinforcement learning on the game outcomes, we observe that the LLMs' performances uniformly improve on a broad range of reasoning benchmarks. Furthermore, iteratively adopting this self-play process can continuously promote LLMs' reasoning abilities. The code is available at https://github.com/Linear95/SPAG.", "pdf": "https://openreview.net/pdf/5346da27aaf48030be064c5b97d66589dd9dfd0c.pdf"} {"title": "Distributional regression: CRPS-error bounds for model fitting, model selection and convex aggregation", "url": "https://openreview.net/forum?id=cSfxzCozPU", "detail_url": "https://openreview.net/forum?id=cSfxzCozPU", "authors": "Dombry Clement,Ahmed Zaoui", "tags": "NIPS 2024,Poster", "abstract": "Distributional regression aims at estimating the conditional distribution of a target variable given explanatory covariates. It is a crucial tool for forecasting when a precise uncertainty quantification is required. A popular methodology consists in fitting a parametric model via empirical risk minimization where the risk is measured by the Continuous Rank Probability Score (CRPS). For independent and identically distributed observations, we provide a concentration result for the estimation error and an upper bound for its expectation. Furthermore, we consider model selection performed by minimization of the validation error and provide a concentration bound for the regret. A similar result is proved for convex aggregation of models. 
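As an illustrative instance of CRPS-based model fitting (not taken from the paper), the CRPS of a Gaussian predictive distribution $N(\mu, \sigma^2)$ has a well-known closed form that can serve directly as an empirical risk:

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of N(mu, sigma^2) at observation(s) y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Empirical risk minimization = average CRPS over the training sample.
y = np.array([0.3, -1.2, 0.7])
print(crps_gaussian(y, mu=0.0, sigma=1.0).mean())
```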
Finally, we show that our results may be applied to various models such as EMOS, distributional regression networks, distributional nearest neighbours or distributional random forests, and we illustrate our findings on two data sets (QSAR aquatic toxicity and Airfoil self-noise).", "pdf": "https://openreview.net/pdf/7469e1c025e503882aa37a2dfe2b1e7e81129f84.pdf"} {"title": "Generalizablity of Memorization Neural Network", "url": "https://openreview.net/forum?id=sABwo1ZTFi", "detail_url": "https://openreview.net/forum?id=sABwo1ZTFi", "authors": "Lijia Yu,Xiao-Shan Gao,Lijun Zhang,Yibo Miao", "tags": "NIPS 2024,Poster", "abstract": "The neural network memorization problem is to study the expressive power of neural networks to interpolate a finite dataset. Although memorization is widely believed to have a close relationship with the strong generalizability of deep learning when using overparameterized models, to the best of our knowledge, there exists no theoretical study on the generalizability of memorization neural networks. In this paper, we give the first theoretical analysis of this topic. Since using i.i.d. training data is a necessary condition for a learning algorithm to be generalizable, memorization and its generalization theory for i.i.d. datasets are developed under mild conditions on the data distribution. First, algorithms are given to construct memorization networks for an i.i.d. dataset, which have the smallest number of parameters and even a constant number of parameters. Second, we show that, in order for the memorization networks to be generalizable, the width of the network must be at least equal to the dimension of the data, which implies that the existing memorization networks with an optimal number of parameters are not generalizable. Third, a lower bound for the sample complexity of general memorization algorithms and the exact sample complexity for memorization algorithms with a constant number of parameters are given. As a consequence, it is shown that there exist data distributions such that, to be generalizable for them, the memorization network must have an exponential number of parameters in the data dimension. Finally, an efficient and generalizable memorization algorithm is given when the number of training samples is greater than the efficient memorization sample complexity of the data distribution.", "pdf": "https://openreview.net/pdf/c4ff1dd2ee3ad1659967d628a98a25f7dbb71021.pdf"} {"title": "Measuring Per-Unit Interpretability at Scale Without Humans", "url": "https://openreview.net/forum?id=oYyEsVz6DX", "detail_url": "https://openreview.net/forum?id=oYyEsVz6DX", "authors": "Roland S. Zimmermann,David Klindt,Wieland Brendel", "tags": "NIPS 2024,Poster", "abstract": "In today\u2019s era, whatever we can measure at scale, we can optimize. So far, measuring the interpretability of units in deep neural networks (DNNs) for computer vision still requires direct human evaluation and is not scalable. As a result, the inner workings of DNNs remain a mystery despite the remarkable progress we have seen in their applications. In this work, we introduce the first scalable method to measure the per-unit interpretability in vision DNNs. This method does not require any human evaluations, yet its prediction correlates well with existing human interpretability measurements. We validate its predictive power through an interventional human psychophysics study. 
We demonstrate the usefulness of this measure by performing previously infeasible experiments: (1) A large-scale interpretability analysis across more than 70 million units from 835 computer vision models, and (2) an extensive analysis of how units transform during training. We find an anticorrelation between a model's downstream classification performance and per-unit interpretability, which is also observable during model training. Furthermore, we see that a layer's location and width influence its interpretability.", "pdf": "https://openreview.net/pdf/1f3c601e4a12a2ae5db3cfa5d505568abbed1f6b.pdf"} {"title": "Constrained Sampling with Primal-Dual Langevin Monte Carlo", "url": "https://openreview.net/forum?id=o6Hk6vld20", "detail_url": "https://openreview.net/forum?id=o6Hk6vld20", "authors": "Luiz F. O. Chamon,Mohammad Reza Karimi Jaghargh,Anna Korba", "tags": "NIPS 2024,Poster", "abstract": "This work considers the problem of sampling from a probability distribution known up to a normalization constant while satisfying a set of statistical constraints specified by the expected values of general nonlinear functions. This problem finds applications in, e.g., Bayesian inference, where it can constrain moments to evaluate counterfactual scenarios or enforce desiderata such as prediction fairness. Methods developed to handle support constraints, such as those based on mirror maps, barriers, and penalties, are not suited for this task. This work therefore relies on gradient descent-ascent dynamics in Wasserstein space to put forward a discrete-time primal-dual Langevin Monte Carlo algorithm (PD-LMC) that simultaneously constrains the target distribution and samples from it. We analyze the convergence of PD-LMC under standard assumptions on the target distribution and constraints, namely (strong) convexity and log-Sobolev inequalities. To do so, we bring classical optimization arguments for saddle-point algorithms to the geometry of Wasserstein space. 
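A toy particle-based sketch of the discrete-time primal-dual update (an assumed form for illustration; the algorithm is analyzed in Wasserstein space, and the target and constraint here are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
grad_U = lambda x: x                    # target N(0, 1): U(x) = x**2 / 2
g = lambda x: 1.0 - x                   # constraint E[g(X)] <= 0, i.e. E[X] >= 1
grad_g = lambda x: -np.ones_like(x)

x = rng.normal(size=2000)               # particle approximation of the law of X
lam, eta, rho = 0.0, 1e-2, 5e-2
for _ in range(3000):
    noise = rng.normal(size=x.shape)
    # Langevin descent on the augmented potential U + lam * g ...
    x = x - eta * (grad_U(x) + lam * grad_g(x)) + np.sqrt(2 * eta) * noise
    # ... and projected dual ascent on the multiplier.
    lam = max(0.0, lam + rho * g(x).mean())
print(round(x.mean(), 2), round(lam, 2))  # mean drifts toward 1, lam toward 1
```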
We illustrate the relevance and effectiveness of PD-LMC in several applications.", "pdf": "https://openreview.net/pdf/b71f9f91ff6ee9042d03ff62253ae1dc5be8da49.pdf"} {"title": "DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection", "url": "https://openreview.net/forum?id=hkEwwAqmCk", "detail_url": "https://openreview.net/forum?id=hkEwwAqmCk", "authors": "Haochen Li,Rui Zhang,Hantao Yao,Xin Zhang,Yifan Hao,Xinkai Song,Xiaqing Li,Yongwei Zhao,Yunji Chen,Ling Li", "tags": "NIPS 2024,Poster", "abstract": "Domain adaptive object detection (DAOD) aims to generalize detectors trained on an annotated source domain to an unlabelled target domain.\nAs visual-language models (VLMs) can provide essential general knowledge on unseen images, freezing the visual encoder and inserting a domain-agnostic adapter can learn domain-invariant knowledge for DAOD.\nHowever, the domain-agnostic adapter is inevitably biased to the source domain.\nIt discards some beneficial knowledge that is discriminative on the unlabelled domain, i.e., domain-specific knowledge of the target domain.\nTo solve the issue, we propose a novel Domain-Aware Adapter (DA-Ada) tailored for the DAOD task.\nThe key point is exploiting domain-specific knowledge alongside the essential general knowledge and domain-invariant knowledge.\nDA-Ada consists of the Domain-Invariant Adapter (DIA) for learning domain-invariant knowledge and the Domain-Specific Adapter (DSA) for injecting the domain-specific knowledge from the information discarded by the visual encoder.\nComprehensive experiments over multiple DAOD tasks show that DA-Ada can efficiently infer a domain-aware visual encoder for boosting domain adaptive object detection.\nOur code is available at https://github.com/Therock90421/DA-Ada.", "pdf": "https://openreview.net/pdf/0fa8169f839f01b8d08ed84bf35204b702f76249.pdf"} {"title": "Toward Conditional Distribution Calibration in Survival Prediction", "url": "https://openreview.net/forum?id=l8XnqbQYBK", "detail_url": "https://openreview.net/forum?id=l8XnqbQYBK", "authors": "Shi-ang Qi,Yakun Yu,Russell Greiner", "tags": "NIPS 2024,Poster", "abstract": "Survival prediction often involves estimating the time-to-event distribution from censored datasets. Previous approaches have focused on enhancing discrimination and marginal calibration. In this paper, we highlight the significance of *conditional calibration* for real-world applications \u2013 especially its role in individual decision-making. We propose a method based on conformal prediction that uses the model\u2019s predicted individual survival probability at that instance\u2019s observed time. This method effectively improves the model\u2019s marginal and conditional calibration, without compromising discrimination. 
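A simplified sketch in the spirit of this recalibration idea (it ignores censoring, which the paper's conformal procedure handles explicitly, and is not the authors' algorithm):

```python
import numpy as np

def recalibrate(cal_scores, p_grid):
    """cal_scores: predicted survival probabilities S_hat(t_i | x_i) evaluated
    at the observed times of a calibration set; ~Uniform(0,1) iff calibrated."""
    scores = np.sort(cal_scores)
    # Empirical CDF of the scores: the identity map for a calibrated model,
    # in which case the adjustment below is a no-op.
    return np.searchsorted(scores, p_grid, side="right") / (len(scores) + 1)

cal_scores = np.random.rand(200) ** 2   # a deliberately miscalibrated toy model
print(recalibrate(cal_scores, np.linspace(0, 1, 5)))
```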
We provide asymptotic theoretical guarantees for both marginal and conditional calibration and test it extensively across 15 diverse real-world datasets, demonstrating the method\u2019s practical effectiveness and\nversatility in various settings.", "pdf": "https://openreview.net/pdf/a88c44af42915ac04bb6600e67e8fe783142c008.pdf"} {"title": "Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation", "url": "https://openreview.net/forum?id=CIHdlhfrOo", "detail_url": "https://openreview.net/forum?id=CIHdlhfrOo", "authors": "Ruize Zhang,Sheng Tang,Juan Cao", "tags": "NIPS 2024,Poster", "abstract": "Recently, there have been some works studying self-supervised adversarial training, a learning paradigm that learns robust features without labels. While those works have narrowed the performance gap between self-supervised adversarial training (SAT) and supervised adversarial training (supervised AT), a well-established formulation of SAT and its connections with supervised AT are under-explored. Based on a simple SAT benchmark, we find that SAT still faces the problem of a large robust generalization gap and degradation on natural samples. We hypothesize this is due to the lack of data complexity and model regularization and propose a method named DAQ-SDP (Diverse Augmented Queries Self-supervised Double Perturbation). We first challenge the previous conclusion that complex data augmentations degrade robustness in SAT by using diversely augmented samples as queries to guide adversarial training. Inspired by previous works in supervised AT, we then incorporate a self-supervised double perturbation scheme into self-supervised learning (SSL), which promotes robustness transferable to downstream classification. Our work can be seamlessly combined with models pretrained by different SSL frameworks without revising the learning objectives and helps to bridge the gap between SAT and AT. Our method also improves both robust and natural accuracies across different SSL frameworks. Our code is available at https://github.com/rzzhang222/DAQ-SDP.", "pdf": "https://openreview.net/pdf/e678cb2ee5c2609e5700234bbbb92ea55457af22.pdf"} {"title": "Statistical and Geometrical properties of the Kernel Kullback-Leibler divergence", "url": "https://openreview.net/forum?id=RxQoIekEa2", "detail_url": "https://openreview.net/forum?id=RxQoIekEa2", "authors": "Anna Korba,Francis Bach,Cl\u00e9mentine Chazal", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study the statistical and geometrical properties of the Kullback-Leibler divergence with kernel covariance operators (KKL) introduced by [Bach, 2022, Information Theory with Kernel Methods]. Unlike the classical Kullback-Leibler (KL) divergence that involves density ratios, the KKL compares probability distributions through covariance operators (embeddings) in a reproducing kernel Hilbert space (RKHS), computing the quantum Kullback-Leibler divergence. \nThis novel divergence hence shares parallel but different aspects with both the standard Kullback-Leibler divergence between probability distributions and kernel embedding metrics such as the maximum mean discrepancy. \nA limitation of the original KKL divergence is that it is not defined for distributions with disjoint supports. To solve this problem, we propose a regularised variant that guarantees that the divergence is well defined for all distributions. 
We derive bounds that quantify the deviation of the regularised KKL from the original one, as well as concentration bounds. \nIn addition, we provide a closed-form expression for the regularised KKL, specifically applicable when the distributions consist of finite sets of points, which makes it implementable. \nFurthermore, we derive a Wasserstein gradient descent scheme of the KKL divergence in the case of discrete distributions, and study empirically its properties to transport a set of points to a target distribution.", "pdf": "https://openreview.net/pdf/7c589c98a062cd2912cde38fe6a086918502b9f0.pdf"} {"title": "ShowMaker: Creating High-Fidelity 2D Human Video via Fine-Grained Diffusion Modeling", "url": "https://openreview.net/forum?id=lpxdG0hk4H", "detail_url": "https://openreview.net/forum?id=lpxdG0hk4H", "authors": "Quanwei Yang,Jiazhi Guan,Kaisiyuan Wang,Lingyun Yu,Wenqing Chu,Hang Zhou,ZhiQiang Feng,Haocheng Feng,Errui Ding,Jingdong Wang,Hongtao Xie", "tags": "NIPS 2024,Poster", "abstract": "Although significant progress has been made in human video generation, most previous studies focus on either human facial animation or full-body animation, which cannot be directly applied to produce realistic conversational human videos with frequent hand gestures and various facial movements simultaneously.\nTo address these limitations, we propose a 2D human video generation framework, named ShowMaker, capable of generating high-fidelity half-body conversational videos via fine-grained diffusion modeling.\nWe leverage dual-stream diffusion models as the backbone of our framework and carefully design two novel components for crucial local regions (i.e., hands and face) that can be easily integrated into our backbone.\nSpecifically, to handle the challenging hand generation caused by sparse motion guidance, we propose a novel Key Point-based Fine-grained Hand Modeling module by amplifying positional information from raw hand key points and constructing a corresponding key point-based codebook. \nMoreover, to restore richer facial details in generated results, we introduce a Face Recapture module, which extracts facial texture features and global identity features from the aligned human face and integrates them into the diffusion process for face enhancement. \nExtensive quantitative and qualitative experiments demonstrate the superior visual quality and temporal consistency of our method.", "pdf": "https://openreview.net/pdf/2aa2861673d4d0792b3cd9c6c544f355d479d936.pdf"} {"title": "Adaptive Preference Scaling for Reinforcement Learning with Human Feedback", "url": "https://openreview.net/forum?id=GnaFrZRHPf", "detail_url": "https://openreview.net/forum?id=GnaFrZRHPf", "authors": "Ilgee Hong,Zichong Li,Alexander Bukharin,Yixiao Li,Haoming Jiang,Tianbao Yang,Tuo Zhao", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values by learning rewards from human preference data. For various reasons, however, such data typically takes the form of rankings over pairs of trajectory segments, which fails to capture the varying strengths of preferences across different pairs. In this paper, we propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO), designed to address this uncertainty in preference strength. By incorporating an adaptive scaling parameter into the loss for each pair, our method increases the flexibility of the reward function. 
Specifically, it assigns small scaling parameters to pairs with ambiguous preferences, leading to more comparable rewards, and large scaling parameters to those with clear preferences, for more distinct rewards. Computationally, our proposed loss function is strictly convex and univariate with respect to each scaling parameter, enabling its efficient optimization through a simple second-order algorithm. Our method is versatile and can be readily adapted to various preference optimization frameworks, including direct preference optimization (DPO). Our experiments with robotic control and natural language generation with large language models (LLMs) show that our method not only improves policy performance but also aligns reward function selection more closely with policy optimization, simplifying the hyperparameter tuning process.", "pdf": "https://openreview.net/pdf/eaa267a47dd2c488be1aad5e49a25710d060880a.pdf"} {"title": "ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence", "url": "https://openreview.net/forum?id=dBE8KHdMFs", "detail_url": "https://openreview.net/forum?id=dBE8KHdMFs", "authors": "Wenjie Mei,Dongzhe Zheng,Shihua Li", "tags": "NIPS 2024,Poster", "abstract": "Neural ODEs (NODEs) are continuous-time neural networks (NNs) that can process data without the limitation of time intervals. They have advantages in learning and understanding the evolution of complex real dynamics. Many previous works have focused on NODEs in concise forms, while numerous physical systems that take straightforward forms in fact belong to more complex quasi-classes, thus calling for a class of general NODEs with high scalability and flexibility to model those systems. This, however, may result in intricate nonlinear properties. In this paper, we introduce ControlSynth Neural ODEs (CSODEs). We show that, despite their highly nonlinear nature, convergence can be guaranteed via tractable linear inequalities. In the composition of CSODEs, we introduce an extra control term that enables the simultaneous capture of dynamics at different scales, which could be particularly useful for partial differential equation-formulated systems. Finally, we compare several representative NNs with CSODEs on important physical dynamics under the inductive biases of CSODEs, and illustrate that CSODEs have better learning and predictive abilities in these settings.", "pdf": "https://openreview.net/pdf/7c3608c704864e9e7c263dd65d63e678fe3c87dd.pdf"} {"title": "CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense", "url": "https://openreview.net/forum?id=BZLdXBjB8O", "detail_url": "https://openreview.net/forum?id=BZLdXBjB8O", "authors": "Mingkun Zhang,Keping Bi,Wei Chen,Quanrun Chen,Jiafeng Guo,Xueqi Cheng", "tags": "NIPS 2024,Poster", "abstract": "Despite ongoing efforts to defend neural classifiers from adversarial attacks, they remain vulnerable, especially to unseen attacks. In contrast, humans are hard to fool with subtle manipulations, since we make judgments based only on essential factors. Inspired by this observation, we attempt to model label generation with essential label-causative factors and incorporate label-non-causative factors to assist data generation. For an adversarial example, we aim to discriminate the perturbations as non-causative factors and make predictions based only on the label-causative factors. 
Concretely, we propose a causal diffusion model (CausalDiff) that adapts diffusion models for conditional data generation and disentangles the two types of causal factors by learning towards a novel causal information bottleneck objective. Empirically, CausalDiff has significantly outperformed state-of-the-art defense methods on various unseen attacks, achieving an average robustness of 86.39\% (+4.01\%) on CIFAR-10, 56.25\% (+3.13\%) on CIFAR-100, and 82.62\% (+4.93\%) on GTSRB (German Traffic Sign Recognition Benchmark).", "pdf": "https://openreview.net/pdf/d0f21be15c77ef714dbb806f4f65e3a61bb1f5e9.pdf"} {"title": "Delving into the Reversal Curse: How Far Can Large Language Models Generalize?", "url": "https://openreview.net/forum?id=1wxFznQWhp", "detail_url": "https://openreview.net/forum?id=1wxFznQWhp", "authors": "Zhengkai Lin,Zhihang Fu,Kai Liu,Liang Xie,Binbin Lin,Wenxiao Wang,Deng Cai,Yue Wu,Jieping Ye", "tags": "NIPS 2024,Poster", "abstract": "While large language models (LLMs) showcase unprecedented capabilities, they also exhibit certain inherent limitations when facing seemingly trivial tasks. \nA prime example is the recently debated \"reversal curse\", which surfaces when models, having been trained on the fact \"A is B\", struggle to generalize this knowledge to infer that \"B is A\".\nIn this paper, we examine the manifestation of the reversal curse across various tasks and delve into both the generalization abilities and the problem-solving mechanisms of LLMs. This investigation leads to a series of significant insights:\n(1) LLMs are able to generalize to \"B is A\" when both A and B are presented in the context as in the case of a multiple-choice question.\n(2) This generalization ability is highly correlated with the structure of the fact \"A is B\" in the training documents. For example, this generalization only applies to biographies structured in \"[Name] is [Description]\" but not to \"[Description] is [Name]\".\n(3) We propose and verify the hypothesis that LLMs possess an inherent bias in fact recall during knowledge application, which explains and underscores the importance of the document structure to successful learning.\n(4) The negative impact of this bias on the downstream performance of LLMs can hardly be mitigated through training alone.\nBased on these intriguing findings, our work not only presents a novel perspective for interpreting LLMs' generalization abilities from their intrinsic working mechanism but also provides new insights for the development of more effective learning methods for LLMs.", "pdf": "https://openreview.net/pdf/ac6cdbf3efbc4596cab50659877ae41e7bc53935.pdf"} {"title": "Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models", "url": "https://openreview.net/forum?id=77kCJzvpOa", "detail_url": "https://openreview.net/forum?id=77kCJzvpOa", "authors": "Hui-Po Wang,Mario Fritz", "tags": "NIPS 2024,Poster", "abstract": "Despite the widespread use of statistical prior models in various fields, such models for neural network gradients have long been overlooked. The inherent challenge stems from their high-dimensional structures and complex interdependencies, which complicate effective modeling. In this work, we demonstrate the potential of large language models (LLMs) to act as gradient priors in a zero-shot setting. 
We examine this property by considering lossless gradient compression -- a critical application in distributed learning that depends heavily on precise probability modeling. To achieve this, we introduce LM-GC, a novel method that integrates LLMs with arithmetic coding. Our technique converts plain gradients into text-like formats, enhancing token efficiency by up to 38 times compared to their plain representations. We ensure that this data conversion maintains a close alignment with the structure of plain gradients and the symbols commonly recognized by LLMs. Our experiments indicate that LM-GC surpasses existing state-of-the-art lossless compression methods, improving compression rates by 10\% to 21\% across various datasets and architectures. Additionally, our approach shows promising compatibility with lossy compression techniques such as quantization and sparsification. These findings highlight the significant potential of LLMs as a model for effectively handling gradients. We will release the source code upon publication.", "pdf": "https://openreview.net/pdf/bf55fe09aa8bc4b0b516197a65e85020e4622ad9.pdf"} {"title": "DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation", "url": "https://openreview.net/forum?id=6ZwJSk2kvU", "detail_url": "https://openreview.net/forum?id=6ZwJSk2kvU", "authors": "Zhiqi Li,Yiming Chen,Peidong Liu", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in 2D/3D generative techniques have facilitated the generation of dynamic 3D objects from monocular videos. Previous methods mainly rely on implicit neural radiance fields (NeRF) or explicit Gaussian Splatting as the underlying representation, and struggle to achieve satisfactory spatial-temporal consistency and surface appearance. Drawing inspiration from modern 3D animation pipelines, we introduce DreamMesh4D, a novel framework combining mesh representation with geometric skinning techniques to generate a high-quality 4D object from a monocular video. Instead of utilizing a classical texture map for appearance, we bind Gaussian splats to the triangle faces of the mesh for differentiable optimization of both the texture and mesh vertices. In particular, DreamMesh4D begins with a coarse mesh obtained through an image-to-3D generation procedure. Sparse points are then uniformly sampled across the mesh surface and used to build a deformation graph to drive the motion of the 3D object, for the sake of computational efficiency and to provide additional constraints. At each step, transformations of the sparse control points are predicted using a deformation network, and the mesh vertices as well as the surface Gaussians are deformed via a novel geometric skinning algorithm. The skinning algorithm is a hybrid approach combining LBS (linear blending skinning) and DQS (dual-quaternion skinning), mitigating drawbacks associated with both approaches. The static surface Gaussians and mesh vertices, as well as the dynamic deformation network, are learned via a reference-view photometric loss, a score distillation loss, and other regularization losses, in a two-stage manner. Extensive experiments demonstrate the superior performance of our method in terms of both rendering quality and spatial-temporal consistency. 
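For background, here is a generic implementation of LBS, one of the two ingredients of the hybrid skinning above (textbook formulation, not the authors' code):

```python
import numpy as np

def lbs(vertices, transforms, weights):
    """vertices: (V, 3); transforms: (J, 4, 4); weights: (V, J), rows sum to 1."""
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # (V, 4)
    per_joint = np.einsum('jab,vb->vja', transforms, homo)                  # (V, J, 4)
    blended = np.einsum('vj,vja->va', weights, per_joint)                   # (V, 4)
    return blended[:, :3]

# One joint with an identity transform leaves the vertices unchanged.
print(lbs(np.zeros((2, 3)), np.eye(4)[None], np.ones((2, 1))))
```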
Furthermore, our method is compatible with modern graphics pipelines, showcasing its potential in the 3D gaming and film industries.", "pdf": "https://openreview.net/pdf/ae02cc8bee0dd3a0dce2cb7ae1df1316b235e1b9.pdf"} {"title": "Optimal Multi-Fidelity Best-Arm Identification", "url": "https://openreview.net/forum?id=gKMTM1i8Ew", "detail_url": "https://openreview.net/forum?id=gKMTM1i8Ew", "authors": "Riccardo Poiani,R\u00e9my Degenne,Emilie Kaufmann,Alberto Maria Metelli,Marcello Restelli", "tags": "NIPS 2024,Poster", "abstract": "In bandit best-arm identification, an algorithm is tasked with finding the arm with the highest mean reward with a specified accuracy as fast as possible. We study multi-fidelity best-arm identification, in which the algorithm can choose to sample an arm at a lower fidelity (less accurate mean estimate) for a lower cost. Several methods have been proposed for tackling this problem, but their optimality remains elusive, notably due to loose lower bounds on the total cost needed to identify the best arm. Our first contribution is a tight, instance-dependent lower bound on the cost complexity. The study of the optimization problem featured in the lower bound provides new insights to devise computationally efficient algorithms, and leads us to propose a gradient-based approach with asymptotically optimal cost complexity. We demonstrate the benefits of the new algorithm compared to existing methods in experiments. Our theoretical and empirical findings also shed light on an intriguing concept of an optimal fidelity for each arm.", "pdf": "https://openreview.net/pdf/faac96ab4b76573a484a159e16a432d1ceaa09e2.pdf"} {"title": "SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge", "url": "https://openreview.net/forum?id=leeosk2RAM", "detail_url": "https://openreview.net/forum?id=leeosk2RAM", "authors": "Chuanhao Li,Zhen Li,Chenchen Jing,Shuo Liu,Wenqi Shao,Yuwei Wu,Ping Luo,Yu Qiao,Kaipeng Zhang", "tags": "NIPS 2024,Poster", "abstract": "Large vision-language models (LVLMs), such as the LLaVA series, are ignorant of up-to-date knowledge because they cannot be updated frequently due to the large amount of resources required, and therefore fail in many cases. For example, an LVLM released in January 2024 would not know the singer of the theme song for the new Detective Conan movie, which wasn't released until April 2024. To solve the problem, a promising solution motivated by retrieval-augmented generation (RAG) is to provide LVLMs with up-to-date knowledge via internet search during inference, i.e., internet-augmented generation (IAG), which is already integrated into some closed-source commercial LVLMs such as GPT-4V. However, the specific mechanics underpinning them remain a mystery. In this paper, we propose a plug-and-play framework for augmenting existing LVLMs in handling visual question answering (VQA) about up-to-date knowledge, dubbed SearchLVLMs. A hierarchical filtering model is trained to effectively and efficiently find the most helpful content from the websites returned by a search engine to prompt LVLMs with up-to-date knowledge. To train the model and evaluate our framework's performance, we propose a pipeline to automatically generate news-related VQA samples to construct a dataset, dubbed UDK-VQA. A multi-model voting mechanism is introduced to label the usefulness of website/content for VQA samples to construct the training set. 
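A minimal sketch of such a voting rule (the `model.answer` interface and the threshold are hypothetical; the actual labeling pipeline is described in the paper):

```python
def vote_usefulness(models, question, content, gold_answer, threshold=0.5):
    """Mark retrieved `content` useful if it lets a majority of models answer
    the question correctly; `model.answer(...)` is an assumed interface."""
    votes = sum(
        int(m.answer(question=question, context=content).strip().lower()
            == gold_answer.strip().lower())
        for m in models
    )
    return votes / len(models) >= threshold
```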
Experimental results demonstrate the effectiveness of our framework, outperforming GPT-4o by $\\sim$30\\% in accuracy.", "pdf": "https://openreview.net/pdf/0f27f7569896eff9d72e5134b77687131406bf9c.pdf"} {"title": "Streaming Long Video Understanding with Large Language Models", "url": "https://openreview.net/forum?id=axX62CQJpa", "detail_url": "https://openreview.net/forum?id=axX62CQJpa", "authors": "Rui Qian,Xiaoyi Dong,Pan Zhang,Yuhang Zang,Shuangrui Ding,Dahua Lin,Jiaqi Wang", "tags": "NIPS 2024,Poster", "abstract": "This paper presents VideoStreaming, an advanced vision-language large model (VLLM) for video understanding, that capably understands arbitrary-length video with a constant number of video tokens streamingly encoded and adaptively selected.\nThe challenge of video understanding in the vision language area mainly lies in the significant computational burden caused by the great number of tokens extracted from long videos. Previous works rely on sparse sampling or frame compression to reduce tokens. However, such approaches either disregard temporal information in a long time span or sacrifice spatial details, resulting in flawed compression. \nTo address these limitations, our VideoStreaming has two core designs: Memory-Propagated Streaming Encoding and Adaptive Memory Selection. The Memory-Propagated Streaming Encoding architecture segments long videos into short clips and sequentially encodes each clip with a propagated memory. In each iteration, we utilize the encoded results of the preceding clip as historical memory, which is integrated with the current clip to distill a condensed representation that encapsulates the video content up to the current timestamp. This method not only incorporates long-term temporal dynamics into the streaming encoding process but also yields a fixed-length memory as a global representation for arbitrarily long videos. After the encoding process, the Adaptive Memory Selection strategy selects a constant number of question-related memories from all the historical memories, and feeds them into the LLM to generate informative responses. The question-related selection reduces redundancy within the memories, enabling efficient and precise video understanding. Meanwhile, the disentangled video extraction and reasoning design allows the LLM to answer different questions about a video by directly selecting corresponding memories, without the need to encode the whole video for each question. Through extensive experiments, our model achieves superior performance and higher efficiency on long video benchmarks, showcasing precise temporal comprehension for detailed question answering.", "pdf": "https://openreview.net/pdf/2dca177f9bf6fd0140e1c2fa5294e1237f153677.pdf"} {"title": "AGILE: A Novel Reinforcement Learning Framework of LLM Agents", "url": "https://openreview.net/forum?id=Ul3lDYo3XQ", "detail_url": "https://openreview.net/forum?id=Ul3lDYo3XQ", "authors": "Peiyuan Feng,Yichen He,Guanhua Huang,Yuan Lin,Hanchong Zhang,Yuchen Zhang,Hang Li", "tags": "NIPS 2024,Poster", "abstract": "We introduce a novel reinforcement learning framework of LLM agents named AGILE (AGent that Interacts and Learns from Environments) designed to perform complex conversational tasks with users, leveraging LLMs, memory, tools, and interactions with experts. The agent possesses capabilities beyond conversation, including reflection, tool usage, and expert consultation. 
We formulate the construction of such an LLM agent as a reinforcement learning (RL) problem, in which the LLM serves as the policy model. We fine-tune the LLM using labeled data of actions and the PPO algorithm. We focus on question answering and release a dataset for agents called ProductQA, comprising challenging questions in online shopping. Our extensive experiments on ProductQA, MedMCQA and HotPotQA show that AGILE agents based on 7B and 13B LLMs trained with PPO can outperform GPT-4 agents. Our ablation study highlights the indispensability of memory, tools, consultation, reflection, and reinforcement learning in achieving the agent's strong performance. Datasets and code are available at https://github.com/bytarnish/AGILE.", "pdf": "https://openreview.net/pdf/59dcbea256d8ee883de6d1ec9925a9dda78eb277.pdf"} {"title": "A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness", "url": "https://openreview.net/forum?id=ww62xltEfB", "detail_url": "https://openreview.net/forum?id=ww62xltEfB", "authors": "Yuri Kinoshita,Taro Toyoizumi", "tags": "NIPS 2024,Poster", "abstract": "While neural networks can enjoy outstanding flexibility and exhibit unprecedented performance, the mechanism behind their behavior is still not well-understood. To tackle this fundamental challenge, researchers have tried to restrict and manipulate some of their properties in order to gain new insights and better control over them. Especially, throughout the past few years, the concept of *bi-Lipschitzness* has proven to be a beneficial inductive bias in many areas. However, due to its complexity, the design and control of bi-Lipschitz architectures are falling behind, and a model precisely designed for bi-Lipschitzness, realizing direct and simple control of the constants along with solid theoretical analysis, is lacking. In this work, we investigate and propose a novel framework for bi-Lipschitzness that can achieve such clear and tight control based on convex neural networks and the Legendre-Fenchel duality. Its desirable properties are demonstrated with concrete experiments that illustrate its broad range of applications.", "pdf": "https://openreview.net/pdf/3b2ddda7e4b50b265094c5f987886a84be0b3958.pdf"} {"title": "Learning Distributions on Manifolds with Free-Form Flows", "url": "https://openreview.net/forum?id=QbPHYPZKJI", "detail_url": "https://openreview.net/forum?id=QbPHYPZKJI", "authors": "Peter Sorrenson,Felix Draxler,Armand Rousselot,Sander Hummerich,Ullrich Koethe", "tags": "NIPS 2024,Poster", "abstract": "We propose Manifold Free-Form Flows (M-FFF), a simple new generative model for data on manifolds. Existing approaches to learning a distribution on arbitrary manifolds are expensive at inference time, since sampling requires solving a differential equation. Our method overcomes this limitation by sampling in a single function evaluation. The key innovation is to optimize a neural network via maximum likelihood on the manifold, made possible by adapting the free-form flow framework to Riemannian manifolds. M-FFF is straightforwardly adapted to any manifold with a known projection. It consistently matches or outperforms previous single-step methods specialized to specific manifolds. It is typically two orders of magnitude faster than multi-step methods based on diffusion or flow matching, achieving better likelihoods in several experiments. 
We provide our code at https://github.com/vislearn/FFF.", "pdf": "https://openreview.net/pdf/d77dd775a9a63102bc882a79cad2992e68a91a77.pdf"} {"title": "SMART: Scalable Multi-agent Real-time Motion Generation via Next-token Prediction", "url": "https://openreview.net/forum?id=2uy3LZHNIG", "detail_url": "https://openreview.net/forum?id=2uy3LZHNIG", "authors": "Wei Wu,Xiaoxin Feng,Ziyan Gao,Yuheng KAN", "tags": "NIPS 2024,Poster", "abstract": "Data-driven autonomous driving motion generation tasks are frequently impacted by the limitations of dataset size and the domain gap between datasets, which precludes their extensive application in real-world scenarios. To address this issue, we introduce SMART, a novel autonomous driving motion generation paradigm that models vectorized map and agent trajectory data as discrete sequence tokens. These tokens are then processed through a decoder-only transformer architecture to train for the next-token prediction task across spatial-temporal series. This GPT-style method allows the model to learn the motion distribution in real driving scenarios. SMART achieves state-of-the-art performance across most of the metrics on the generative Sim Agents challenge, ranking 1st on the leaderboards of the Waymo Open Motion Dataset (WOMD), while demonstrating remarkable inference speed. Moreover, SMART is a generative model in the autonomous driving motion domain that exhibits zero-shot generalization capabilities: using only the NuPlan dataset for training and WOMD for validation, SMART achieved a competitive score of 0.72 on the Sim Agents challenge. Lastly, we have collected over 1 billion motion tokens from multiple datasets, validating the model's scalability. These results suggest that SMART has initially emulated two important properties: scalability and zero-shot generalization, and preliminarily meets the needs of large-scale real-time simulation applications. We have released all the code to promote the exploration of models for motion generation in the autonomous driving field. The source code is available at https://github.com/rainmaker22/SMART.", "pdf": "https://openreview.net/pdf/88366a6ac7a1f9f458e3ecd015714df2e674691c.pdf"} {"title": "State Chrono Representation for Enhancing Generalization in Reinforcement Learning", "url": "https://openreview.net/forum?id=J42SwBemEA", "detail_url": "https://openreview.net/forum?id=J42SwBemEA", "authors": "Jianda Chen,Wen zheng terence Ng,Zichen Chen,Sinno Jialin Pan,Tianwei Zhang", "tags": "NIPS 2024,Poster", "abstract": "In reinforcement learning with image-based inputs, it is crucial to establish a robust and generalizable state representation. Recent advancements in metric learning, such as deep bisimulation metric approaches, have shown promising results in learning structured low-dimensional representation spaces from pixel observations, where the distance between states is measured based on task-relevant features. However, these approaches face challenges in demanding generalization tasks and scenarios with non-informative rewards. This is because they fail to capture sufficient long-term information in the learned representations. To address these challenges, we propose a novel State Chrono Representation (SCR) approach. SCR augments state metric-based representations by incorporating extensive temporal information into the update step of bisimulation metric learning. 
It learns state distances within a temporal framework that considers both future dynamics and cumulative rewards over current and long-term future states. Our learning strategy effectively incorporates future behavioral information into the representation space without introducing a significant number of additional parameters for modeling dynamics. Extensive experiments conducted in DeepMind Control and Meta-World environments demonstrate that SCR achieves better performance compared to other recent metric-based methods on demanding generalization tasks. The code for SCR is available at https://github.com/jianda-chen/SCR.", "pdf": "https://openreview.net/pdf/431430203838c93f76ff9de3195aca6375234e79.pdf"} {"title": "Sub-optimal Experts mitigate Ambiguity in Inverse Reinforcement Learning", "url": "https://openreview.net/forum?id=7zzOcyT0hd", "detail_url": "https://openreview.net/forum?id=7zzOcyT0hd", "authors": "Riccardo Poiani,Curti Gabriele,Alberto Maria Metelli,Marcello Restelli", "tags": "NIPS 2024,Poster", "abstract": "Inverse Reinforcement Learning (IRL) deals with the problem of deducing a reward function that explains the behavior of an expert agent who is assumed to act *optimally* in an underlying unknown task. Recent works have studied the IRL problem from the perspective of recovering the *feasible reward set*, i.e., the class of reward functions that are compatible with a unique optimal expert. However, in several problems of interest it is possible to observe the behavior of multiple experts with different degrees of optimality (e.g., racing drivers whose skills range from amateur to professional). For this reason, in this work, we focus on the reconstruction of the feasible reward set when, in addition to demonstrations from the optimal expert, we observe the behavior of multiple *sub-optimal experts*. Given this problem, we first study its theoretical properties, showing that the presence of multiple sub-optimal experts, in addition to the optimal one, can significantly shrink the set of compatible rewards, ultimately mitigating the inherent ambiguity of IRL.\nFurthermore, we study the statistical complexity of estimating the feasible reward set with a generative model and analyze a uniform sampling algorithm that turns out to be minimax optimal whenever the sub-optimal experts' performance level is sufficiently close to that of the optimal expert.", "pdf": "https://openreview.net/pdf/304ec8c563c06c5693515d4050afb2fc4bad6a1d.pdf"} {"title": "Logical characterizations of recurrent graph neural networks with reals and floats", "url": "https://openreview.net/forum?id=atDcnWqG5n", "detail_url": "https://openreview.net/forum?id=atDcnWqG5n", "authors": "Veeti Ahvonen,Damian Heiman,Antti Kuusisto,Carsten Lutz", "tags": "NIPS 2024,Poster", "abstract": "In pioneering work from 2019, Barcel\u00f3 and coauthors identified logics that precisely match the expressive power of constant iteration-depth graph neural networks (GNNs) relative to properties definable in first-order logic. In this article, we give exact logical characterizations of recurrent GNNs in two scenarios: (1) in the setting with floating-point numbers and (2) with reals. For floats, the formalism matching recurrent GNNs is a rule-based modal logic with counting, while for reals we use a suitable infinitary modal logic, also with counting. 
These results give exact matches between logics and GNNs in the recurrent setting without relativising to a background logic in either case, but using some natural assumptions about floating-point arithmetic. Applying our characterizations, we also prove that, relative to graph properties definable in monadic second-order logic (MSO), our infinitary and rule-based logics are equally expressive. This implies that recurrent GNNs with reals and floats have the same expressive power over MSO-definable properties and shows that, for such properties, also recurrent GNNs with reals are characterized by a (finitary!) rule-based modal logic. In the general case, in contrast, the expressive power with floats is weaker than with reals. In addition to logic-oriented results, we also characterize recurrent GNNs, with both reals and floats, via distributed automata, drawing links to distributed computing models.", "pdf": "https://openreview.net/pdf/d95c7a391327680e42ba310256a65fac63a1fa87.pdf"} {"title": "InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory", "url": "https://openreview.net/forum?id=bTHFrqhASY", "detail_url": "https://openreview.net/forum?id=bTHFrqhASY", "authors": "Chaojun Xiao,Pengle Zhang,Xu Han,Guangxuan Xiao,Yankai Lin,Zhengyan Zhang,Zhiyuan Liu,Maosong Sun", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have emerged as a cornerstone in real-world applications with lengthy streaming inputs (e.g., LLM-driven agents). However, existing LLMs, pre-trained on sequences with a restricted maximum length, cannot process longer sequences due to out-of-domain and distraction issues. Common solutions often involve continual pre-training on longer sequences, which introduces expensive computational overhead and uncontrollable changes in model capabilities. In this paper, we unveil the intrinsic capacity of LLMs for understanding extremely long sequences without any fine-tuning. To this end, we introduce a training-free memory-based method, InfLLM. Specifically, InfLLM stores distant contexts in additional memory units and employs an efficient mechanism to look up token-relevant units for attention computation. Thereby, InfLLM allows LLMs to efficiently process long sequences with a limited context window and capture long-distance dependencies well. Without any training, InfLLM enables LLMs that are pre-trained on sequences consisting of a few thousand tokens to achieve performance comparable to competitive baselines that continually train these LLMs on long sequences. Even when the sequence length is scaled to 1,024K, InfLLM still effectively captures long-distance dependencies. Our code can be found at https://github.com/thunlp/InfLLM.", "pdf": "https://openreview.net/pdf/e03c904324941c48677aaa7b054baa086298dd93.pdf"} {"title": "Stress-Testing Capability Elicitation With Password-Locked Models", "url": "https://openreview.net/forum?id=zzOOqD6R1b", "detail_url": "https://openreview.net/forum?id=zzOOqD6R1b", "authors": "Ryan Greenblatt,Fabien Roger,Dmitrii Krasheninnikov,David Krueger", "tags": "NIPS 2024,Poster", "abstract": "To determine the safety of large language models (LLMs), AI developers must be able to assess their dangerous capabilities. But simple prompting strategies often fail to elicit an LLM\u2019s full capabilities. One way to elicit capabilities more robustly is to fine-tune the LLM to complete the task. 
In this paper, we investigate the conditions under which fine-tuning-based elicitation suffices to elicit capabilities. To do this, we introduce password-locked models, LLMs fine-tuned such that some of their capabilities are deliberately hidden. Specifically, these LLMs are trained to exhibit these capabilities only when a password is present in the prompt, and to imitate a much weaker LLM otherwise. Password-locked models enable a novel way of evaluating capability elicitation methods, by testing whether these password-locked capabilities can be elicited without using the password. We find that a few high-quality demonstrations are often sufficient to fully elicit password-locked capabilities. More surprisingly, fine-tuning can elicit other capabilities that have been locked using the same password, or even different passwords. Furthermore, when only evaluations, and not demonstrations, are available, approaches like reinforcement learning are still often able to elicit capabilities. Overall, our findings suggest that fine-tuning is an effective method of eliciting hidden capabilities of current models but may be unreliable when high-quality demonstrations are not available, e.g., as may be the case when models\u2019 (hidden) capabilities exceed those of human demonstrators.", "pdf": "https://openreview.net/pdf/060fc5a68cf9e8cd99067fa71d86b9b2407c68af.pdf"} {"title": "Towards Harmless Rawlsian Fairness Regardless of Demographic Prior", "url": "https://openreview.net/forum?id=7U5MwUS3Rw", "detail_url": "https://openreview.net/forum?id=7U5MwUS3Rw", "authors": "Xuanqian Wang,Jing Li,Ivor Tsang,Yew-Soon Ong", "tags": "NIPS 2024,Poster", "abstract": "Due to privacy and security concerns, recent advancements in group fairness advocate for model training regardless of demographic information. However, most methods still require prior knowledge of demographics. In this study, we explore the potential for achieving fairness without compromising its utility when no prior demographics are provided to the training set, namely _harmless Rawlsian fairness_. We ascertain that such a fairness requirement with no prior demographic information essentially promotes the training losses to exhibit a Dirac delta distribution. To this end, we propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses. This problem is then optimized by a tailored dynamic update approach that operates in both the loss and gradient dimensions, directing the model towards relatively fairer solutions while preserving its intact utility. Our experimental findings indicate that regression tasks, which are relatively unexplored in the literature, can achieve significant fairness improvement through VFair regardless of any prior, whereas classification tasks usually do not because of their quantized utility measurements. The implementation of our method is publicly available at https://github.com/wxqpxw/VFair.", "pdf": "https://openreview.net/pdf/27168faed7e430a0c951400bd7499d517b0541e0.pdf"} {"title": "Generalizable Implicit Motion Modeling for Video Frame Interpolation", "url": "https://openreview.net/forum?id=ZlpJLQsr2v", "detail_url": "https://openreview.net/forum?id=ZlpJLQsr2v", "authors": "Zujin Guo,Wei Li,Chen Change Loy", "tags": "NIPS 2024,Poster", "abstract": "Motion modeling is critical in flow-based Video Frame Interpolation (VFI). 
Existing paradigms either consider linear combinations of bidirectional flows or directly predict bilateral flows for given timestamps without exploring favorable motion priors, thus lacking the capability of effectively modeling spatiotemporal dynamics in real-world videos. To address this limitation, in this study, we introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI. Specifically, to enable GIMM as an effective motion modeling paradigm, we design a motion encoding pipeline to model spatiotemporal motion latent from bidirectional flows extracted from pre-trained flow estimators, effectively representing input-specific motion priors. Then, we implicitly predict arbitrary-timestep optical flows within two adjacent input frames via an adaptive coordinate-based neural network, with spatiotemporal coordinates and motion latent as inputs. Our GIMM can be easily integrated with existing flow-based VFI works by supplying accurately modeled motion. We show that GIMM performs better than the current state of the art on standard VFI benchmarks.", "pdf": "https://openreview.net/pdf/4b1a3abb0f7804af0dfc66d0857b07b7b549499b.pdf"} {"title": "CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition", "url": "https://openreview.net/forum?id=WhE4C4fLbE", "detail_url": "https://openreview.net/forum?id=WhE4C4fLbE", "authors": "Yuhang Wen,Mengyuan Liu,Songtao Wu,Beichen Ding", "tags": "NIPS 2024,Poster", "abstract": "Skeleton-based multi-entity action recognition is a challenging task aiming to identify interactive actions or group activities involving multiple diverse entities. Existing models for individuals often fall short in this task due to the inherent distribution discrepancies among entity skeletons, leading to suboptimal backbone optimization. To this end, we introduce a Convex Hull Adaptive Shift based multi-Entity action recognition method (CHASE), which mitigates inter-entity distribution gaps and unbiases subsequent backbones. Specifically, CHASE comprises a learnable parameterized network and an auxiliary objective. The parameterized network achieves plausible, sample-adaptive repositioning of skeleton sequences through two key components. First, the Implicit Convex Hull Constrained Adaptive Shift ensures that the new origin of the coordinate system is within the skeleton convex hull. Second, the Coefficient Learning Block provides a lightweight parameterization of the mapping from skeleton sequences to their specific coefficients in convex combinations. Moreover, to guide the optimization of this network for discrepancy minimization, we propose the Mini-batch Pair-wise Maximum Mean Discrepancy as the additional objective. CHASE operates as a sample-adaptive normalization method to mitigate inter-entity distribution discrepancies, thereby reducing data bias and improving the subsequent classifier's multi-entity action recognition performance. Extensive experiments on six datasets, including NTU Mutual 11/26, H2O, Assembly101, Collective Activity and Volleyball, consistently verify our approach by seamlessly adapting to single-entity backbones and boosting their performance in multi-entity scenarios. 
Our code is publicly available at https://github.com/Necolizer/CHASE .", "pdf": "https://openreview.net/pdf/7e56f3c3d5c52627375cba1b817886cc88aa206f.pdf"} {"title": "DECRL: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach", "url": "https://openreview.net/forum?id=V42zfM2GXw", "detail_url": "https://openreview.net/forum?id=V42zfM2GXw", "authors": "Qian Chen,Ling Chen", "tags": "NIPS 2024,Poster", "abstract": "Temporal Knowledge Graph (TKG) representation learning aims to map temporally evolving entities and relations to embedded representations in a continuous low-dimensional vector space. However, existing approaches cannot capture the temporal evolution of high-order correlations in TKGs. To this end, we propose a **D**eep **E**volutionary **C**lustering jointed temporal knowledge graph **R**epresentation **L**earning approach (**DECRL**). Specifically, a deep evolutionary clustering module is proposed to capture the temporal evolution of high-order correlations among entities. Furthermore, a cluster-aware unsupervised alignment mechanism is introduced to ensure the precise one-to-one alignment of soft overlapping clusters across timestamps, thereby maintaining the temporal smoothness of clusters. In addition, an implicit correlation encoder is introduced to capture latent correlations between any pair of clusters under the guidance of a global graph. Extensive experiments on seven real-world datasets demonstrate that DECRL achieves state-of-the-art performance, outperforming the best baseline by an average of 9.53\\%, 12.98\\%, 10.42\\%, and 14.68\\% in MRR, Hits@1, Hits@3, and Hits@10, respectively.", "pdf": "https://openreview.net/pdf/361627df3b544f7258b6ab6282ccf5257a59171d.pdf"} {"title": "Ada-MSHyper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting", "url": "https://openreview.net/forum?id=RNbrIQ0se8", "detail_url": "https://openreview.net/forum?id=RNbrIQ0se8", "authors": "Zongjiang Shang,Ling Chen,Binqing Wu,Dongliang Cui", "tags": "NIPS 2024,Poster", "abstract": "Although transformer-based methods have achieved great success in multi-scale temporal pattern interaction modeling, two key challenges limit their further development: (1) Individual time points contain less semantic information, and leveraging attention to model pair-wise interactions may cause an information utilization bottleneck. (2) Multiple inherent temporal variations (e.g., rising, falling, and fluctuating) are entangled in temporal patterns. To this end, we propose the Adaptive Multi-Scale Hypergraph Transformer (Ada-MSHyper) for time series forecasting. Specifically, an adaptive hypergraph learning module is designed to provide foundations for modeling group-wise interactions, and a multi-scale interaction module is introduced to promote more comprehensive pattern interactions at different scales. In addition, a node and hyperedge constraint mechanism is introduced to cluster nodes with similar semantic information and differentiate the temporal variations within each scale. Extensive experiments on 11 real-world datasets demonstrate that Ada-MSHyper achieves state-of-the-art performance, reducing prediction errors by an average of 4.56%, 10.38%, and 4.97% in MSE for long-range, short-range, and ultra-long-range time series forecasting, respectively. 
Code is available at https://github.com/shangzongjiang/Ada-MSHyper.", "pdf": "https://openreview.net/pdf/c995baf793a178477c9183aa31bdf0c93be59125.pdf"} {"title": "Local Superior Soups: A Catalyst for Model Merging in Cross-Silo Federated Learning", "url": "https://openreview.net/forum?id=0LfgE6kvKZ", "detail_url": "https://openreview.net/forum?id=0LfgE6kvKZ", "authors": "Minghui Chen,Meirui Jiang,Xin Zhang,Qi Dou,Zehua Wang,Xiaoxiao Li", "tags": "NIPS 2024,Poster", "abstract": "Federated learning (FL) is a learning paradigm that enables collaborative training of models using decentralized data. \nRecently, the utilization of pre-trained weight initialization in FL has been demonstrated to effectively improve model performance. \nHowever, the evolving complexity of current pre-trained models, characterized by a substantial increase in parameters, markedly intensifies the challenges associated with the communication rounds required for their adaptation to FL. \nTo address these communication cost issues and increase the performance of pre-trained model adaptation in FL, we propose an innovative model interpolation-based local training technique called ``Local Superior Soups.''\nOur method enhances local training across different clients, encouraging the exploration of a connected low-loss basin within a few communication rounds through regularized model interpolation. \nThis approach acts as a catalyst for the seamless adaptation of pre-trained models in FL.\nWe demonstrate its effectiveness and efficiency across diverse, widely used FL datasets.", "pdf": "https://openreview.net/pdf/182dcb91550832dd0b9bc7c88457fdc06efe869e.pdf"} {"title": "MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts", "url": "https://openreview.net/forum?id=y929esCZNJ", "detail_url": "https://openreview.net/forum?id=y929esCZNJ", "authors": "Rachel Teo,Tan Minh Nguyen", "tags": "NIPS 2024,Poster", "abstract": "Sparse Mixture of Experts (SMoE) has become the key to unlocking unparalleled scalability in deep learning. SMoE has the potential to exponentially increase in parameter count while maintaining the efficiency of the model by only activating a small subset of these parameters for a given sample. However, it has been observed that SMoE suffers from unstable training and has difficulty adapting to new distributions, leading to the model's lack of robustness to data contamination. To overcome these limitations, we first establish a connection between the dynamics of the expert representations in SMoEs and gradient descent on a multi-objective optimization problem. Leveraging our framework, we then integrate momentum into SMoE and propose a new family of SMoEs, named MomentumSMoE. We theoretically prove and numerically validate that MomentumSMoE is more stable and robust than SMoE. In particular, we verify the advantages of MomentumSMoE over SMoE on a variety of practical tasks including ImageNet-1K object recognition and WikiText-103 language modeling. We demonstrate the applicability of MomentumSMoE to many types of SMoE models, including those in the Sparse MoE model for vision (V-MoE) and the Generalist Language Model (GLaM). 
We also show that other advanced momentum-based optimization methods, such as Adam, can be easily incorporated into the MomentumSMoE framework for designing new SMoE models with even better performance, almost negligible additional computation cost, and simple implementations.", "pdf": "https://openreview.net/pdf/72b117c375c15a0ef6ea9c489740b45ea2c3e8ed.pdf"} {"title": "Why Go Full? Elevating Federated Learning Through Partial Network Updates", "url": "https://openreview.net/forum?id=6OK8Qy9yVu", "detail_url": "https://openreview.net/forum?id=6OK8Qy9yVu", "authors": "Haolin Wang,Xuefeng Liu,Jianwei Niu,Wenkai Guo,Shaojie Tang", "tags": "NIPS 2024,Poster", "abstract": "Federated learning is a distributed machine learning paradigm designed to protect user data privacy, which has been successfully implemented across various scenarios. In traditional federated learning, the entire parameter set of local models is updated and averaged in each training round. Although this full network update method maximizes knowledge acquisition and sharing for each model layer, it prevents the layers of the global model from cooperating effectively to complete the tasks of each client, a challenge we refer to as layer mismatch. This mismatch problem recurs after every parameter averaging, consequently slowing down model convergence and degrading overall performance. To address the layer mismatch issue, we introduce the FedPart method, which restricts model updates to either a single layer or a few layers during each communication round. Furthermore, to maintain the efficiency of knowledge acquisition and sharing, we develop several strategies to select trainable layers in each round, including sequential updating and multi-round cycle training. Through both theoretical analysis and experiments, our findings demonstrate that the FedPart method significantly surpasses conventional full network update strategies in terms of convergence speed and accuracy, while also reducing communication and computational overheads.", "pdf": "https://openreview.net/pdf/5a138158a86f8829397065c8da271edc6fa8cfe9.pdf"} {"title": "Enhancing Chess Reinforcement Learning with Graph Representation", "url": "https://openreview.net/forum?id=97OvPgmjRN", "detail_url": "https://openreview.net/forum?id=97OvPgmjRN", "authors": "Tomas Rigaux,Hisashi Kashima", "tags": "NIPS 2024,Poster", "abstract": "Mastering games is a hard task, as games can be extremely complex, and still fundamentally different in structure from one another. While the AlphaZero algorithm has demonstrated an impressive ability to learn the rules and strategy of a large variety of games, ranging from Go and Chess, to Atari games, its reliance on extensive computational resources and rigid Convolutional Neural Network (CNN) architecture limits its adaptability and scalability. A model trained to play on a $19\\times 19$ Go board cannot be used to play on a smaller $13\\times 13$ board, despite the similarity between the two Go variants.\nIn this paper, we focus on Chess, and explore using a more generic Graph-based Representation of a game state, rather than a grid-based one, to introduce a more general architecture based on Graph Neural Networks (GNN). 
We also expand the classical Graph Attention Network (GAT) layer to incorporate edge features, naturally providing a generic policy output format.\nOur experiments, performed on smaller networks than the initial AlphaZero paper, show that this new architecture outperforms previous architectures with a similar number of parameters, being able to increase playing strength an order of magnitude faster. We also show that the model, when trained on a smaller $5\\times 5$ variant of chess, can be quickly fine-tuned to play regular $8\\times 8$ chess, suggesting that this approach yields promising generalization abilities.\nOur code is available at https://github.com/akulen/AlphaGateau.", "pdf": "https://openreview.net/pdf/5fb3c7e5f4561687daf246761b9b515193818d67.pdf"} {"title": "Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis", "url": "https://openreview.net/forum?id=VUWvVvNi6r", "detail_url": "https://openreview.net/forum?id=VUWvVvNi6r", "authors": "Rachel Teo,Tan Minh Nguyen", "tags": "NIPS 2024,Poster", "abstract": "The remarkable success of transformers in sequence modeling tasks, spanning various applications in natural language processing and computer vision, is attributed to the critical role of self-attention. Similar to the development of most deep learning models, the construction of these attention mechanisms relies on heuristics and experience. In our work, we derive self-attention from kernel principal component analysis (kernel PCA) and show that self-attention projects its query vectors onto the principal component axes of its key matrix in a feature space. We then formulate the exact formula for the value matrix in self-attention, theoretically and empirically demonstrating that this value matrix captures the eigenvectors of the Gram matrix of the key vectors in self-attention. Leveraging our kernel PCA framework, we propose Attention with Robust Principal Components (RPC-Attention), a novel class of robust attention that is resilient to data contamination. We empirically demonstrate the advantages of RPC-Attention over softmax attention on the ImageNet-1K object classification, WikiText-103 language modeling, and ADE20K image segmentation tasks.", "pdf": "https://openreview.net/pdf/eb5e8f12fd5239ece6baa420e8adbe6f71d6afb5.pdf"} {"title": "Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters", "url": "https://openreview.net/forum?id=AcBLtTKK5q", "detail_url": "https://openreview.net/forum?id=AcBLtTKK5q", "authors": "Haibo Jin,Andy Zhou,Joe D. Menke,Haohan Wang", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) are typically harmless but remain vulnerable to carefully crafted prompts known as ``jailbreaks'', which can bypass protective measures and induce harmful behavior. Recent advancements in LLMs have incorporated moderation guardrails that can filter outputs, which trigger processing errors for certain malicious questions. Existing red-teaming benchmarks often neglect to include questions that trigger moderation guardrails, making it difficult to evaluate jailbreak effectiveness. To address this issue, we introduce JAMBench, a harmful behavior benchmark designed to trigger and evaluate moderation guardrails. JAMBench involves 160 manually crafted instructions covering four major risk categories at multiple severity levels. 
Furthermore, we propose a jailbreak method, JAM (Jailbreak Against Moderation), designed to attack moderation guardrails using jailbreak prefixes to bypass input-level filters and a fine-tuned shadow model, functionally equivalent to the guardrail model, to generate cipher characters that bypass output-level filters. Our extensive experiments on four LLMs demonstrate that JAM achieves a higher jailbreak success rate ($\\sim 19.88\\times$) and a lower filtered-out rate ($\\sim 1/6\\times$) than baselines.", "pdf": "https://openreview.net/pdf/aecaf57ee9a4cd36e01edfe38d57f5b8a2ba3164.pdf"} {"title": "Multi-hypotheses Conditioned Point Cloud Diffusion for 3D Human Reconstruction from Occluded Images", "url": "https://openreview.net/forum?id=E2JCQyYu0E", "detail_url": "https://openreview.net/forum?id=E2JCQyYu0E", "authors": "Donghwan Kim,Tae-Kyun Kim", "tags": "NIPS 2024,Poster", "abstract": "3D human shape reconstruction under severe occlusion due to human-object or human-human interaction is a challenging problem. While implicit function methods capture detailed clothed shapes, they require aligned shape priors and/or are weak at inpainting occluded regions given an image input. Parametric models, i.e., SMPL, instead offer whole-body shapes; however, they are often misaligned with images. In this work, we propose a novel pipeline composed of a probabilistic SMPL model and point cloud diffusion for pixel-aligned detailed 3D human reconstruction under occlusion. Multiple hypotheses generated by the probabilistic SMPL method are conditioned via continuous 3D shape representations. Point cloud diffusion refines the distribution of 3D points fitted to both the multi-hypothesis shape condition and pixel-aligned image features, offering detailed clothed shapes and inpainting occluded parts of human bodies. In experiments using the CAPE, MultiHuman and Hi4D datasets, the proposed method outperforms various SOTA methods based on SMPL, implicit functions, point cloud diffusion, and their combinations, under synthetic and real occlusions. Our code is publicly available at https://donghwankim0101.github.io/projects/mhcdiff.", "pdf": "https://openreview.net/pdf/2b5178c90c8265ce4f68b5284777766f7bf6fddf.pdf"} {"title": "Functional Gradient Flows for Constrained Sampling", "url": "https://openreview.net/forum?id=kpo6ZCgVZH", "detail_url": "https://openreview.net/forum?id=kpo6ZCgVZH", "authors": "Shiyue Zhang,Longlin Yu,Ziheng Cheng,Cheng Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recently, through a unified gradient flow perspective of Markov chain Monte Carlo (MCMC) and variational inference (VI), particle-based variational inference methods (ParVIs) have been proposed that tend to combine the best of both worlds. While typical ParVIs such as Stein Variational Gradient Descent (SVGD) approximate the gradient flow within a reproducing kernel Hilbert space (RKHS), many attempts have been made recently to replace RKHS with more expressive function spaces, such as neural networks. While successful, these methods are mainly designed for sampling from unconstrained domains. In this paper, we offer a general solution to constrained sampling by introducing a boundary condition for the gradient flow which confines the particles within the specific domain. This allows us to propose a new functional gradient ParVI method for constrained sampling, called *constrained functional gradient flow* (CFG), with provable continuous-time convergence in total variation (TV). 
We also present novel numerical strategies to handle the boundary integral term arising from the domain constraints. Our theory and experiments demonstrate the effectiveness of the proposed framework.", "pdf": "https://openreview.net/pdf/ffd9719358c69e9ae3500b8c7c386220deaa3bdc.pdf"} {"title": "ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization", "url": "https://openreview.net/forum?id=MXY0qsGgeO", "detail_url": "https://openreview.net/forum?id=MXY0qsGgeO", "authors": "Luca Eyring,Shyamgopal Karthik,Karsten Roth,Alexey Dosovitskiy,Zeynep Akata", "tags": "NIPS 2024,Poster", "abstract": "Text-to-Image (T2I) models have made significant advancements in recent years, but they still struggle to accurately capture intricate details specified in complex compositional prompts. While fine-tuning T2I models with reward objectives has shown promise, it suffers from \"reward hacking\" and may not generalize well to unseen prompt distributions. In this work, we propose Reward-based Noise Optimization (ReNO), a novel approach that enhances T2I models at inference by optimizing the initial noise based on the signal from one or multiple human preference reward models. Remarkably, solving this optimization problem with gradient ascent for 50 iterations yields impressive results on four different one-step models across two competitive benchmarks, T2I-CompBench and GenEval. Within a computational budget of 20-50 seconds, ReNO-enhanced one-step models consistently surpass the performance of all current open-source Text-to-Image models. Extensive user studies demonstrate that our model is preferred nearly twice as often compared to the popular SDXL model and is on par with the proprietary Stable Diffusion 3 with 8B parameters. Moreover, given the same computational resources, a ReNO-optimized one-step model outperforms widely-used open-source models such as SDXL and PixArt-alpha, highlighting the efficiency and effectiveness of ReNO in enhancing T2I model performance at inference time.", "pdf": "https://openreview.net/pdf/8da50ed60ab3de08590e0649f1497641f550b329.pdf"} {"title": "Online Adaptation of Language Models with a Memory of Amortized Contexts", "url": "https://openreview.net/forum?id=RIfgKCknTu", "detail_url": "https://openreview.net/forum?id=RIfgKCknTu", "authors": "Jihoon Tack,Jaehyung Kim,Eric Mitchell,Jinwoo Shin,Yee Whye Teh,Jonathan Richard Schwarz", "tags": "NIPS 2024,Poster", "abstract": "Due to the rapid generation and dissemination of information, large language models (LLMs) quickly run out of date despite enormous development costs. To address the crucial need to keep models updated, online learning has emerged as a critical tool when utilizing LLMs for real-world applications. However, given the ever-expanding corpus of unseen documents and the large parameter space of modern LLMs, efficient adaptation is essential. To address these challenges, we propose Memory of Amortized Contexts (MAC), an efficient and effective online adaptation framework for LLMs with strong knowledge retention. We propose a feature extraction and memory-augmentation approach to compress and extract information from new documents into compact modulations stored in a memory bank. When answering questions, our model attends to and extracts relevant knowledge from this memory bank. To learn informative modulations in an efficient manner, we utilize amortization-based meta-learning, which substitutes an otherwise required optimization process with a single forward pass of the encoder. 
Subsequently, we learn to choose from and aggregate selected documents into a single modulation by conditioning on the question, allowing us to adapt a frozen language model during test time without requiring further gradient updates. Our experiments demonstrate the superiority of MAC in multiple aspects, including online adaptation performance, time, and memory efficiency. In addition, we show how MAC can be combined with and improve the performance of popular alternatives such as retrieval-augmented generation (RAG). Code is available at: https://github.com/jihoontack/MAC.", "pdf": "https://openreview.net/pdf/43a2d3798fadf0a6da59e1c7f2ccf58f18a664bd.pdf"} {"title": "Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series", "url": "https://openreview.net/forum?id=3O5YCEWETq", "detail_url": "https://openreview.net/forum?id=3O5YCEWETq", "authors": "Vijay Ekambaram,Arindam Jati,Pankaj Dayama,Sumanta Mukherjee,Nam H Nguyen,Wesley M. Gifford,Chandra Reddy,Jayant Kalagnanam", "tags": "NIPS 2024,Poster", "abstract": "Large pre-trained models excel in zero/few-shot learning for language and vision tasks but face challenges in multivariate time series (TS) forecasting due to diverse data characteristics. Consequently, recent research efforts have focused on developing pre-trained TS forecasting models. These models, whether built from scratch or adapted from large language models (LLMs), excel in zero/few-shot forecasting tasks. However, they are limited by slow performance, high computational demands, and neglect of cross-channel and exogenous correlations. To address this, we introduce Tiny Time Mixers (TTM), a compact model (starting from 1M parameters) with effective transfer learning capabilities, trained exclusively on public TS datasets. TTM, based on the light-weight TSMixer architecture, incorporates innovations like adaptive patching, diverse resolution sampling, and resolution prefix tuning to handle pre-training on varied dataset resolutions with minimal model capacity. Additionally, it employs multi-level modeling to capture channel correlations and infuse exogenous signals during fine-tuning. TTM outperforms existing popular benchmarks in zero/few-shot forecasting by 4-40\\%, while reducing computational requirements significantly. Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider adoption in resource-constrained environments. The model weights for reproducibility and research use are available at https://huggingface.co/ibm/ttm-research-r2/, while enterprise-use weights under the Apache license can be accessed as follows: the initial TTM-Q variant at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1, and the latest variants (TTM-B, TTM-E, TTM-A) at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2. 
The source code for the TTM model, along with the usage scripts, is available at https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/tinytimemixer", "pdf": "https://openreview.net/pdf/c1a7cea36450273599d6fedfb15d84e946924570.pdf"} {"title": "Association Pattern-aware Fusion for Biological Entity Relationship Prediction", "url": "https://openreview.net/forum?id=LI5KmimXbM", "detail_url": "https://openreview.net/forum?id=LI5KmimXbM", "authors": "Lingxiang Jia,Yuchen Ying,Zunlei Feng,Zipeng Zhong,Shaolun Yao,Jiacong Hu,Mingjiang Duan,Xingen Wang,Jie Song,Mingli Song", "tags": "NIPS 2024,Poster", "abstract": "Deep learning-based methods significantly advance the exploration of associations among triple-wise biological entities (e.g., drug-target protein-adverse reaction), thereby facilitating drug discovery and safeguarding human health. However, existing research focuses only on entity-centric information mapping and aggregation, neglecting the crucial role of potential association patterns among different entities. To address the above limitation, we propose a novel association pattern-aware fusion method for biological entity relationship prediction, which effectively integrates the related association pattern information into entity representation learning. Additionally, to enhance the missing information of the low-order message passing, we devise a bind-relation module that considers the strong bind of low-order entity associations. Extensive experiments conducted on three biological datasets quantitatively demonstrate that the proposed method achieves about 4%-23% hit@1 improvements compared with state-of-the-art baselines. Furthermore, the interpretability of association patterns is elucidated in detail, thus revealing the intrinsic biological mechanisms and promoting its deployment in real-world scenarios. Our data and code are available at https://github.com/hry98kki/PatternBERP.", "pdf": "https://openreview.net/pdf/ca931154ba745071cc19bb54d0f7d484f8ff7bfd.pdf"} {"title": "AID: Attention Interpolation of Text-to-Image Diffusion", "url": "https://openreview.net/forum?id=Nb5xlelV0C", "detail_url": "https://openreview.net/forum?id=Nb5xlelV0C", "authors": "Qiyuan He,Jinghao Wang,Ziwei Liu,Angela Yao", "tags": "NIPS 2024,Poster", "abstract": "Conditional diffusion models can create unseen images in various settings, aiding image interpolation. Interpolation in latent spaces is well-studied, but interpolation with specific conditions like text or image is less understood. Common approaches interpolate linearly in the conditioning space but tend to result in inconsistent images with poor fidelity. This work introduces a novel training-free technique named \\textbf{Attention Interpolation via Diffusion (AID)}. AID has two key contributions: \\textbf{1)} a fused inner/outer interpolated attention layer to boost image consistency and fidelity; and \\textbf{2)} selection of interpolation coefficients via a beta distribution to increase smoothness. Additionally, we present an AID variant called \\textbf{Prompt-guided Attention Interpolation via Diffusion (PAID)}, which \\textbf{3)} treats interpolation as a condition-dependent generative process. Experiments demonstrate that our method achieves greater consistency, smoothness, and efficiency in condition-based interpolation, aligning closely with human preferences. 
Furthermore, PAID offers substantial benefits for compositional generation, controlled image editing, image morphing and image-controlled generation, all while remaining training-free.", "pdf": "https://openreview.net/pdf/82878d42613e0c8a2d1e818f6d4b36612c532d69.pdf"} {"title": "FSP-Laplace: Function-Space Priors for the Laplace Approximation in Bayesian Deep Learning", "url": "https://openreview.net/forum?id=83vxe8alV4", "detail_url": "https://openreview.net/forum?id=83vxe8alV4", "authors": "Tristan Cinquin,Marvin Pf\u00f6rtner,Vincent Fortuin,Philipp Hennig,Robert Bamler", "tags": "NIPS 2024,Poster", "abstract": "Laplace approximations are popular techniques for endowing deep networks with epistemic uncertainty estimates as they can be applied without altering the predictions of the trained network, and they scale to large models and datasets. While the choice of prior strongly affects the resulting posterior distribution, computational tractability and lack of interpretability of the weight space typically limit the Laplace approximation to isotropic Gaussian priors, which are known to cause pathological behavior as depth increases. As a remedy, we directly place a prior on function space. More precisely, since Lebesgue densities do not exist on infinite-dimensional function spaces, we recast training as finding the so-called weak mode of the posterior measure under a Gaussian process (GP) prior restricted to the space of functions representable by the neural network. Through the GP prior, one can express structured and interpretable inductive biases, such as regularity or periodicity, directly in function space, while still exploiting the implicit inductive biases that allow deep networks to generalize. After model linearization, the training objective induces a negative log-posterior density to which we apply a Laplace approximation, leveraging highly scalable methods from matrix-free linear algebra. Our method provides improved results where prior knowledge is abundant (as is the case in many scientific inference tasks). At the same time, it stays competitive for black-box supervised learning problems, where neural networks typically excel.", "pdf": "https://openreview.net/pdf/ab6633056604866f61a50a074df0819ba2251148.pdf"} {"title": "A Sober Look at the Robustness of CLIPs to Spurious Features", "url": "https://openreview.net/forum?id=wWyumwEYV8", "detail_url": "https://openreview.net/forum?id=wWyumwEYV8", "authors": "Qizhou Wang,Yong Lin,Yongqiang Chen,Ludwig Schmidt,Bo Han,Tong Zhang", "tags": "NIPS 2024,Poster", "abstract": "Large vision language models, such as CLIP, demonstrate greater robustness to spurious features than single-modal models trained on ImageNet. However, existing test datasets are typically curated based on ImageNet-trained models, which aim to capture the spurious features inherent in ImageNet. Benchmarking CLIP models based on the ImageNet-oriented spurious features may not be sufficient to reflect the extent to which CLIP models are robust to spurious correlations within CLIP training data, e.g., LAION. To this end, we craft a new challenging dataset named CounterAnimal designed to reveal the reliance of CLIP models on realistic spurious features. Specifically, we split animal photos into groups according to the backgrounds, and then identify a pair of groups for each class where a CLIP model shows large performance drops across the two groups. 
Our evaluations show that the spurious features captured by CounterAnimal are generically learned by CLIP models with different backbones and pre-training data, yet have limited influence on ImageNet models. We provide theoretical insights that the CLIP objective cannot offer additional robustness. Furthermore, we also re-evaluate strategies such as scaling up parameters and using high-quality pre-trained data. We find that they still help mitigate the spurious features, providing a promising path for future developments.", "pdf": "https://openreview.net/pdf/416b6767b5924fc0e5fe05a6729b748f4fdecdc6.pdf"} {"title": "LLMs Can Evolve Continually on Modality for $\\mathbb{X}$-Modal Reasoning", "url": "https://openreview.net/forum?id=drpJ7KOr3F", "detail_url": "https://openreview.net/forum?id=drpJ7KOr3F", "authors": "Jiazuo Yu,Haomiao Xiong,Lu Zhang,Haiwen Diao,Yunzhi Zhuge,Lanqing HONG,Dong Wang,Huchuan Lu,You He,Long Chen", "tags": "NIPS 2024,Poster", "abstract": "Multimodal Large Language Models (MLLMs) have gained significant attention due to their impressive capabilities in multimodal understanding. However, existing methods rely heavily on extensive modal-specific pretraining and joint-modal tuning, leading to significant computational burdens when expanding to new modalities. In this paper, we propose \\textbf{PathWeave}, a flexible and scalable framework with modal-\\textbf{path} s\\textbf{w}itching and \\textbf{e}xp\\textbf{a}nsion abilities that enables MLLMs to continually \\textbf{ev}olve on modalities for $\\mathbb{X}$-modal reasoning. We leverage the concept of Continual Learning and develop an incremental training strategy atop pre-trained MLLMs, enabling their expansion to new modalities using uni-modal data, without executing joint-modal pretraining. In detail, a novel Adapter-in-Adapter (AnA) framework is introduced, in which uni-modal and cross-modal adapters are seamlessly integrated to facilitate efficient modality alignment and collaboration. Additionally, an MoE-based gating module is applied between two types of adapters to further enhance the multimodal interaction. To investigate the proposed method, we establish a challenging benchmark called \\textbf{C}ontinual \\textbf{L}earning of \\textbf{M}odality (MCL), which consists of high-quality QA data from five distinct modalities: image, video, \\textcolor{black}{audio, depth} and point cloud. Extensive experiments demonstrate the effectiveness of the proposed AnA framework on learning plasticity and memory stability during continual learning. Furthermore, PathWeave performs comparably to state-of-the-art MLLMs while concurrently reducing parameter training burdens by 98.73\\%. Our code is located at \\url{https://github.com/JiazuoYu/PathWeave}.", "pdf": "https://openreview.net/pdf/88ffc1349e56e5d503b9f27c3847318369820fe5.pdf"} {"title": "Multi-Stage Predict+Optimize for (Mixed Integer) Linear Programs", "url": "https://openreview.net/forum?id=pXFiHHySEw", "detail_url": "https://openreview.net/forum?id=pXFiHHySEw", "authors": "Xinyi HU,Jasper C.H. Lee,Jimmy H.M. Lee,Peter J. Stuckey", "tags": "NIPS 2024,Poster", "abstract": "The recently-proposed framework of Predict+Optimize tackles optimization problems with parameters that are unknown at solving time, in a supervised learning setting. Prior frameworks consider only the scenario where all unknown parameters are (eventually) revealed simultaneously. 
In this work, we propose Multi-Stage Predict+Optimize, a novel extension catering to applications where unknown parameters are revealed in sequential stages, with optimization decisions made in between. We further develop three training algorithms for neural networks (NNs) for our framework as proof of concept, each of which handles all mixed integer linear programs. The first baseline algorithm is a natural extension of prior work, training a single NN which makes a single prediction of unknown parameters. The second and third algorithms instead leverage the possibility of updating parameter predictions between stages, and train one NN per stage. To handle the interdependency between the neural networks, we adopt sequential and parallelized versions of coordinate descent for training. Experimentation on three benchmarks demonstrates the superior learning performance of our methods over classical approaches.", "pdf": "https://openreview.net/pdf/f3791048483811412cf9912413a9ba8a587bf482.pdf"} {"title": "Preferential Normalizing Flows", "url": "https://openreview.net/forum?id=sRSjr9SDKR", "detail_url": "https://openreview.net/forum?id=sRSjr9SDKR", "authors": "Petrus Mikkola,Luigi Acerbi,Arto Klami", "tags": "NIPS 2024,Poster", "abstract": "Eliciting a high-dimensional probability distribution from an expert via noisy judgments is notoriously challenging, yet useful for many applications, such as prior elicitation and reward modeling. We introduce a method for eliciting the expert's belief density as a normalizing flow based solely on preferential questions such as comparing or ranking alternatives. This in principle allows eliciting arbitrarily flexible densities, but flow estimation is susceptible to collapsing or diverging probability mass, which makes it difficult in practice. We tackle this problem by introducing a novel functional prior for the flow, motivated by a decision-theoretic argument, and show empirically that the belief density can be inferred as the function-space maximum a posteriori estimate. We demonstrate our method by eliciting multivariate belief densities of simulated experts, including the prior belief of a general-purpose large language model over a real-world dataset.", "pdf": "https://openreview.net/pdf/6fcedaf81a4fcb36eae41dbaf65b641926d679bd.pdf"} {"title": "Era3D: High-Resolution Multiview Diffusion using Efficient Row-wise Attention", "url": "https://openreview.net/forum?id=XdCJAYYiTP", "detail_url": "https://openreview.net/forum?id=XdCJAYYiTP", "authors": "Peng Li,Yuan Liu,Xiaoxiao Long,Feihu Zhang,Cheng Lin,Mengfei Li,Xingqun Qi,Shanghang Zhang,Wei Xue,Wenhan Luo,Ping Tan,Wenping Wang,Qifeng Liu,Yike Guo", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we introduce **Era3D**, a novel multiview diffusion method that generates high-resolution multiview images from a single-view image. Despite significant advancements in multiview generation, existing methods still suffer from camera prior mismatch, inefficacy, and low resolution, resulting in poor-quality multiview images. Specifically, these methods assume that the input images should comply with a predefined camera type, e.g. a perspective camera with a fixed focal length, leading to distorted shapes when the assumption fails. Moreover, the full-image or dense multiview attention they employ leads to a dramatic explosion of computational complexity as image resolution increases, resulting in prohibitively expensive training costs. 
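To make the stage-wise training loop described above for Multi-Stage Predict+Optimize concrete, here is a minimal sketch of the sequential coordinate-descent variant; `train_stage` and `evaluate` are hypothetical helpers standing in for the per-stage learning and validation steps, not the paper's API:

```python
# Minimal sketch (not the paper's code) of sequential coordinate descent over
# per-stage predictors: retrain one stage's network while the others stay
# fixed, and sweep until a full pass stops improving.
def coordinate_descent(stage_nets, train_stage, evaluate, max_sweeps=10, tol=1e-4):
    prev_loss = float("inf")
    for _ in range(max_sweeps):
        for k in range(len(stage_nets)):
            # Stage k's targets depend on the decisions the other stages
            # induce, so it is retrained with those held fixed.
            stage_nets[k] = train_stage(k, stage_nets)
        loss = evaluate(stage_nets)
        if prev_loss - loss < tol:
            break
        prev_loss = loss
    return stage_nets
```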
To bridge the gap between assumption and reality, Era3D first proposes a diffusion-based camera prediction module to estimate the focal length and elevation of the input image, which allows our method to generate images without shape distortions. Furthermore, a simple but efficient attention layer, named row-wise attention, is used to enforce epipolar priors in the multiview diffusion, facilitating efficient cross-view information fusion. Consequently, compared with state-of-the-art methods, Era3D generates high-quality multiview images with up to a 512\u00d7512 resolution while reducing the computation complexity of multiview attention by 12x. Comprehensive experiments demonstrate the superior generation power of Era3D: it can reconstruct high-quality and detailed 3D meshes from diverse single-view input images, significantly outperforming baseline multiview diffusion methods.", "pdf": "https://openreview.net/pdf/0c17e5b3cb0d0d8f8ac1eb4ab2c5ead4fd983220.pdf"} {"title": "First-Order Methods for Linearly Constrained Bilevel Optimization", "url": "https://openreview.net/forum?id=eNCYpTCGhr", "detail_url": "https://openreview.net/forum?id=eNCYpTCGhr", "authors": "Guy Kornowski,Swati Padmanabhan,Kai Wang,Zhe Zhang,Suvrit Sra", "tags": "NIPS 2024,Poster", "abstract": "Algorithms for bilevel optimization often encounter Hessian computations, which are prohibitive in high dimensions. While recent works offer first-order methods for unconstrained bilevel problems, the constrained setting remains relatively underexplored. We present first-order linearly constrained optimization methods with finite-time hypergradient stationarity guarantees. For linear equality constraints, we attain $\epsilon$-stationarity in $\widetilde{O}(\epsilon^{-2})$ gradient oracle calls, which is nearly-optimal. \nFor linear inequality constraints, we attain $(\delta,\epsilon)$-Goldstein stationarity in $\widetilde{O}(d{\delta^{-1} \epsilon^{-3}})$ gradient oracle calls, where $d$ is the upper-level dimension. \nFinally, we obtain for the linear inequality setting dimension-free rates of $\widetilde{O}({\delta^{-1} \epsilon^{-4}})$ oracle complexity under the additional assumption of oracle access to the optimal dual variable. Along the way, we develop new nonsmooth nonconvex optimization methods with inexact oracles. Our numerical experiments verify these guarantees.", "pdf": "https://openreview.net/pdf/0b165b5b77964b9ec08d38a7acabff0e75d9018e.pdf"} {"title": "MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts", "url": "https://openreview.net/forum?id=XWzw2dsjWd", "detail_url": "https://openreview.net/forum?id=XWzw2dsjWd", "authors": "Jie Zhu,Yixiong Chen,Mingyu Ding,Ping Luo,Leye Wang,Jingdong Wang", "tags": "NIPS 2024,Poster", "abstract": "Text-to-image diffusion has attracted vast attention due to its impressive image-generation capabilities. However, when it comes to human-centric text-to-image generation, particularly in the context of faces and hands, the results often fall short of naturalness due to insufficient training priors. We alleviate the issue in this work from two perspectives. 1) From the data aspect, we carefully collect a human-centric dataset comprising over one million high-quality human-in-the-scene images and two specific sets of close-up images of faces and hands. These datasets collectively provide a rich prior knowledge base to enhance the human-centric image generation capabilities of the diffusion model. 
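As a rough illustration of the row-wise attention idea described above for Era3D, the sketch below (an assumption-laden toy, not Era3D's implementation; shapes are illustrative) restricts multiview attention so each token attends only to tokens in the same image row across views, shrinking each attention matrix from (V·H·W)² to (V·W)² per row:

```python
# Toy row-wise multiview attention: tokens attend only within their image
# row, across all V views, which keeps the attention matrices small.
import torch
import torch.nn.functional as F

def row_wise_attention(x):
    """x: (B, V, H, W, d) multiview features."""
    B, V, H, W, d = x.shape
    rows = x.permute(0, 2, 1, 3, 4).reshape(B * H, V * W, d)  # one row across views
    attn = F.softmax(rows @ rows.transpose(1, 2) / d ** 0.5, dim=-1)
    out = attn @ rows
    return out.reshape(B, H, V, W, d).permute(0, 2, 1, 3, 4)

y = row_wise_attention(torch.randn(1, 4, 8, 8, 32))  # 4 views of an 8x8 feature map
```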
2) On the methodological front, we propose a simple yet effective method called Mixture of Low-rank Experts (MoLE) by considering low-rank modules trained on close-up hand and face images respectively as experts. This concept draws inspiration from our observation of low-rank refinement, where a low-rank module trained by a customized close-up dataset has the potential to enhance the corresponding image part when applied at an appropriate scale. To validate the superiority of MoLE in the context of human-centric image generation compared to the state of the art, we construct two benchmarks and perform evaluations with diverse metrics and human studies. Datasets, model, and code are released at https://sites.google.com/view/mole4diffuser/.", "pdf": "https://openreview.net/pdf/e0673d766881bf139eaefe51c989402bf8a6bce8.pdf"} {"title": "Interpreting Learned Feedback Patterns in Large Language Models", "url": "https://openreview.net/forum?id=xUoNgR1Byy", "detail_url": "https://openreview.net/forum?id=xUoNgR1Byy", "authors": "Luke Marks,Amir Abdullah,Clement Neo,Rauno Arike,David Krueger,Philip Torr,Fazl Barez", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it is unclear whether LLMs accurately learn the underlying preferences in human feedback data. We coin the term **Learned Feedback Pattern** (LFP) for patterns in an LLM's activations learned during RLHF that improve its performance on the fine-tuning task. We hypothesize that LLMs with LFPs accurately aligned to the fine-tuning feedback exhibit consistent activation patterns for outputs that would have received similar feedback during RLHF. To test this, we train probes to estimate the feedback signal implicit in the activations of a fine-tuned LLM. We then compare these estimates to the true feedback, measuring how accurately the LFPs capture the fine-tuning feedback. Our probes are trained on a condensed, sparse and interpretable representation of LLM activations, making it easier to correlate features of the input with our probe's predictions. We validate our probes by comparing the neural features they correlate with positive feedback inputs against the features GPT-4 describes and classifies as related to LFPs. Understanding LFPs can help minimize discrepancies between LLM behavior and training objectives, which is essential for the **safety** and **alignment** of LLMs.", "pdf": "https://openreview.net/pdf/df55089e1689943ee66585be002d20df9b191eae.pdf"} {"title": "LoRANN: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search", "url": "https://openreview.net/forum?id=wyYsCI3K7U", "detail_url": "https://openreview.net/forum?id=wyYsCI3K7U", "authors": "Elias J\u00e4\u00e4saari,Ville Hyv\u00f6nen,Teemu Roos", "tags": "NIPS 2024,Poster", "abstract": "Approximate nearest neighbor (ANN) search is a key component in many modern machine learning pipelines; recent use cases include retrieval-augmented generation (RAG) and vector databases. Clustering-based ANN algorithms, which use score computation methods based on product quantization (PQ), are often used in industrial-scale applications due to their scalability and suitability for distributed and disk-based implementations. However, they have slower query times than the leading graph-based ANN algorithms. 
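The Mixture of Low-rank Experts described above admits a compact sketch; the snippet below is a hedged toy (shapes, gating, and expert count are illustrative assumptions, not MoLE's implementation) in which face and hand low-rank updates are added to a base weight under a soft gate:

```python
# Toy mixture of low-rank experts: base linear map plus gated low-rank
# residuals, one (A, B) pair per expert (e.g. face, hand).
import torch

def mole_forward(x, W, experts, gate_logits):
    """x: (B, d_in); W: (d_out, d_in); experts: list of (A, B) low-rank pairs."""
    gates = torch.softmax(gate_logits, dim=-1)   # one mixing weight per expert
    out = x @ W.T
    for g, (A, B) in zip(gates, experts):
        out = out + g * (x @ B.T) @ A.T          # low-rank residual, scaled by gate
    return out

x = torch.randn(4, 64)
W = torch.randn(128, 64)
experts = [(torch.randn(128, 8), torch.randn(8, 64)) for _ in range(2)]
y = mole_forward(x, W, experts, torch.zeros(2))  # equal gates here
```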
In this work, we propose a new supervised score computation method based on the observation that inner product approximation is a multivariate (multi-output) regression problem that can be solved efficiently by reduced-rank regression. Our experiments show that on modern high-dimensional data sets, the proposed reduced-rank regression (RRR) method is superior to PQ in both query latency and memory usage. We also introduce LoRANN, a clustering-based ANN library that leverages the proposed score computation method. LoRANN is competitive with the leading graph-based algorithms and outperforms the state-of-the-art GPU ANN methods on high-dimensional data sets.", "pdf": "https://openreview.net/pdf/818415f345eae75fa997bddb07da900aca844fb4.pdf"} {"title": "MemVLT: Vision-Language Tracking with Adaptive Memory-based Prompts", "url": "https://openreview.net/forum?id=ZK1CZXKgG5", "detail_url": "https://openreview.net/forum?id=ZK1CZXKgG5", "authors": "Xiaokun Feng,Xuchen Li,Shiyu Hu,Dailing Zhang,Meiqi Wu,Jing Zhang,Xiaotang Chen,Kaiqi Huang", "tags": "NIPS 2024,Poster", "abstract": "Vision-language tracking (VLT) enhances traditional visual object tracking by integrating language descriptions, requiring the tracker to flexibly understand complex and diverse text in addition to visual information. However, most existing vision-language trackers still overly rely on initial fixed multimodal prompts, which struggle to provide effective guidance for dynamically changing targets. Fortunately, the Complementary Learning Systems (CLS) theory suggests that the human memory system can dynamically store and utilize multimodal perceptual information, thereby adapting to new scenarios. Inspired by this, (i) we propose a Memory-based Vision-Language Tracker (MemVLT). By incorporating memory modeling to adjust static prompts, our approach can provide adaptive prompts for tracking guidance. \n(ii) Specifically, the memory storage and memory interaction modules are designed in accordance with CLS theory. These modules facilitate the storage and flexible interaction between short-term and long-term memories, generating prompts that adapt to target variations.\n (iii) Finally, we conduct extensive experiments on mainstream VLT datasets (e.g., MGIT, TNL2K, LaSOT and LaSOT$_{ext}$). Experimental results show that MemVLT achieves new state-of-the-art performance. Impressively, it achieves 69.4% AUC on the MGIT and 63.3% AUC on the TNL2K, improving the existing best result by 8.4% and 4.7%, respectively.", "pdf": "https://openreview.net/pdf/e6f3bdb4021ace45094e99a74efa2bdd046c7bcb.pdf"} {"title": "Construction and Application of Materials Knowledge Graph in Multidisciplinary Materials Science via Large Language Model", "url": "https://openreview.net/forum?id=GB5a0RRYuv", "detail_url": "https://openreview.net/forum?id=GB5a0RRYuv", "authors": "Yanpeng Ye,Jie Ren,Shaozhou Wang,Yuwei Wan,Imran Razzak,Bram Hoex,Haofen Wang,Tong Xie,Wenjie Zhang", "tags": "NIPS 2024,Poster", "abstract": "Knowledge in materials science is widely dispersed across extensive scientific literature, posing significant challenges for efficient discovery and integration of new materials. Traditional methods, often reliant on costly and time-consuming experimental approaches, further complicate rapid innovation. 
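The reduced-rank regression view described above for LoRANN can be sketched in a few lines; this is an illustrative reconstruction under assumed shapes (queries as regressors, exact inner products as targets), not the library's API:

```python
# Reduced-rank regression for inner-product scoring: solve the least-squares
# problem, then truncate its SVD to rank r so scoring costs O(d*r + r*n).
import numpy as np

def fit_reduced_rank(X, Y, r):
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)       # full-rank solution (d x n)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]                            # d x r
    B = Vt[:r]                                      # r x n
    return A, B

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))    # training queries
C = rng.normal(size=(64, 200))     # candidate vectors of one cluster
A, B = fit_reduced_rank(X, X @ C, r=16)
scores = (X[:5] @ A) @ B           # approximate inner products for 5 queries
```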
Addressing these challenges, the integration of artificial intelligence with materials science has opened avenues for accelerating the discovery process, though it also demands precise annotation, data extraction, and traceability of information. To tackle these issues, this article introduces the Materials Knowledge Graph (MKG), which utilizes advanced natural language processing techniques integrated with large language models to extract and systematically organize a decade's worth of high-quality research into structured triples, containing 162,605 nodes and 731,772 edges. MKG categorizes information into comprehensive labels such as Name, Formula, and Application, structured around a meticulously designed ontology, thus enhancing data usability and integration. By implementing network-based algorithms, MKG not only facilitates efficient link prediction but also significantly reduces reliance on traditional experimental methods. This structured approach not only streamlines materials research but also lays the groundwork for more sophisticated materials knowledge graphs.", "pdf": "https://openreview.net/pdf/0f4331dc6d871d8c02e01cfcea96bb460391cca7.pdf"} {"title": "Domain Adaptation for Large-Vocabulary Object Detectors", "url": "https://openreview.net/forum?id=deZpmEfmTo", "detail_url": "https://openreview.net/forum?id=deZpmEfmTo", "authors": "Kai Jiang,Jiaxing Huang,Weiying Xie,Jie Lei,Yunsong Li,Ling Shao,Shijian Lu", "tags": "NIPS 2024,Poster", "abstract": "Large-vocabulary object detectors (LVDs) aim to detect objects of many categories, which learn super objectness features and can locate objects accurately when applied to various downstream data. However, LVDs often struggle in recognizing the located objects due to domain discrepancy in data distribution and object vocabulary. At the other end, recent vision-language foundation models such as CLIP demonstrate superior open-vocabulary recognition capability. \nThis paper presents KGD, a Knowledge Graph Distillation technique that exploits the implicit knowledge graphs (KG) in CLIP for effectively adapting LVDs to various downstream domains.\nKGD consists of two consecutive stages: 1) KG extraction that employs CLIP to encode downstream domain data as nodes and their feature distances as edges, constructing KG that inherits the rich semantic relations in CLIP explicitly; \nand 2) KG encapsulation that transfers the extracted KG into LVDs to enable accurate cross-domain object classification. \nIn addition, KGD can extract both visual and textual KG independently, providing complementary vision and language knowledge for object localization and object classification in detection tasks over various downstream domains. \nExperiments over multiple widely adopted detection benchmarks show that KGD outperforms the state-of-the-art consistently by large margins. \nCodes will be released.", "pdf": "https://openreview.net/pdf/6ece1e624407d8bde3abec49423b7e66660ff5e5.pdf"} {"title": "$\textit{Bifr\"ost}$: 3D-Aware Image Compositing with Language Instructions", "url": "https://openreview.net/forum?id=VcPtU8e6yK", "detail_url": "https://openreview.net/forum?id=VcPtU8e6yK", "authors": "Lingxiao Li,Kaixiong Gong,Wei-Hong Li,Xili Dai,Tao Chen,Xiaojun Yuan,Xiangyu Yue", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces $\textit{Bifr\u00f6st}$, a novel 3D-aware framework that is built upon diffusion models to perform instruction-based image composition. 
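The KG-extraction stage described above for KGD (embeddings as nodes, feature distances as edges) can be illustrated with a toy nearest-neighbour graph; this is a simplified stand-in under assumed inputs, not KGD's actual construction:

```python
# Toy "KG extraction": connect each embedding to its k nearest neighbours
# by cosine similarity, yielding (source, target, weight) edges.
import numpy as np

def knn_graph(emb, k=3):
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)           # exclude self-edges
    nbrs = np.argsort(-sim, axis=1)[:, :k]   # top-k neighbours per node
    return [(i, int(j), float(sim[i, j]))
            for i in range(len(emb)) for j in nbrs[i]]

edges = knn_graph(np.random.default_rng(1).normal(size=(10, 32)))
```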
Previous methods concentrate on image compositing at the 2D level and fall short in handling complex spatial relationships ($\textit{e.g.}$, occlusion). $\textit{Bifr\u00f6st}$ addresses these issues by training an MLLM as a 2.5D location predictor and integrating depth maps as an extra condition during the generation process to bridge the gap between 2D and 3D, which enhances spatial comprehension and supports sophisticated spatial interactions. Our method begins by fine-tuning an MLLM with a custom counterfactual dataset to predict 2.5D object locations in complex backgrounds from language instructions. Then, the image-compositing model is uniquely designed to process multiple types of input features, enabling it to perform high-fidelity image compositions that consider occlusion, depth blur, and image harmonization. Extensive qualitative and quantitative evaluations demonstrate that $\textit{Bifr\u00f6st}$ significantly outperforms existing methods, providing a robust solution for generating realistically composited images in scenarios demanding intricate spatial understanding. This work not only pushes the boundaries of generative image compositing but also reduces reliance on expensive annotated datasets by effectively utilizing existing resources in innovative ways.", "pdf": "https://openreview.net/pdf/c42739460d5e1f240511064b226cf028e4ead871.pdf"} {"title": "EAGLE: Efficient Adaptive Geometry-based Learning in Cross-view Understanding", "url": "https://openreview.net/forum?id=AXcYtHQnxt", "detail_url": "https://openreview.net/forum?id=AXcYtHQnxt", "authors": "Thanh-Dat Truong,Utsav Prabhu,Dongyi Wang,Bhiksha Raj,Susan Gauch,Jeyamkondan Subbiah,Khoa Luu", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised Domain Adaptation has been an efficient approach to transferring the semantic segmentation model across data distributions. Meanwhile, the recent Open-vocabulary Semantic Scene understanding based on large-scale vision language models is effective in open-set settings because it can learn diverse concepts and categories. However, these prior methods fail to generalize across different camera views due to the lack of cross-view geometric modeling. At present, there are limited studies analyzing cross-view learning. To address this problem, we introduce a novel Unsupervised Cross-view Adaptation Learning approach to modeling the geometric structural change across views in Semantic Scene Understanding. First, we introduce a novel Cross-view Geometric Constraint on Unpaired Data to model structural changes in images and segmentation masks across cameras. Second, we present a new Geodesic Flow-based Correlation Metric to efficiently measure the geometric structural changes across camera views. Third, we introduce a novel view-condition prompting mechanism to enhance the view-information modeling of the open-vocabulary segmentation network in cross-view adaptation learning. 
The experiments on different cross-view adaptation benchmarks have shown the effectiveness of our approach in cross-view modeling, demonstrating that we achieve State-of-the-Art (SOTA) performance compared to prior unsupervised domain adaptation and open-vocabulary semantic segmentation methods.", "pdf": "https://openreview.net/pdf/2521467502fd1c48db30a9603467eadfff45cc51.pdf"} {"title": "Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks", "url": "https://openreview.net/forum?id=XXOMCwZ6by", "detail_url": "https://openreview.net/forum?id=XXOMCwZ6by", "authors": "Zaijing Li,Yuquan Xie,Rui Shao,Gongwei Chen,Dongmei Jiang,Liqiang Nie", "tags": "NIPS 2024,Poster", "abstract": "Building a general-purpose agent is a long-standing vision in the field of artificial intelligence. Existing agents have made remarkable progress in many domains, yet they still struggle to complete long-horizon tasks in an open world. We attribute this to the lack of necessary world knowledge and multimodal experience that can guide agents through a variety of long-horizon tasks. In this paper, we propose a Hybrid Multimodal Memory module to address the above challenges. It 1) transforms knowledge into a Hierarchical Directed Knowledge Graph that allows agents to explicitly represent and learn world knowledge, and 2) summarises historical information into an Abstracted Multimodal Experience Pool that provides agents with rich references for in-context learning. On top of the Hybrid Multimodal Memory module, a multimodal agent, Optimus-1, is constructed with dedicated Knowledge-guided Planner and Experience-Driven Reflector, contributing to better planning and reflection in the face of long-horizon tasks in Minecraft. Extensive experimental results show that Optimus-1 significantly outperforms all existing agents on challenging long-horizon task benchmarks, and exhibits near human-level performance on many tasks. In addition, we introduce various Multimodal Large Language Models (MLLMs) as the backbone of Optimus-1. Experimental results show that Optimus-1 exhibits strong generalization with the help of the Hybrid Multimodal Memory module, outperforming the GPT-4V baseline on many tasks.", "pdf": "https://openreview.net/pdf/d57bc4c4a18c0395e77ebc2cfa868685e62bbb3c.pdf"} {"title": "Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness", "url": "https://openreview.net/forum?id=MfGRUVFtn9", "detail_url": "https://openreview.net/forum?id=MfGRUVFtn9", "authors": "Weilin Lin,Li Liu,Shaokui Wei,Jianze Li,Hui Xiong", "tags": "NIPS 2024,Poster", "abstract": "The security threat of backdoor attacks is a central concern for deep neural networks (DNNs). Recently, without poisoned data, unlearning models with clean data and then learning a pruning mask have contributed to backdoor defense. Additionally, vanilla fine-tuning with those clean data can help recover the lost clean accuracy. However, the behavior of clean unlearning is still under-explored, and vanilla fine-tuning can unintentionally reintroduce the backdoor effect. 
In this work, we first investigate model unlearning from the perspective of weight changes and gradient norms, and find two interesting observations in the backdoored model: 1) the weight changes between poison and clean unlearning are positively correlated, making it possible for us to identify the backdoor-related neurons without using poisoned data; 2) the neurons of the backdoored model are more active (*i.e.*, larger gradient norm) than those in the clean model, suggesting the need to suppress the gradient norm during fine-tuning. Then, we propose an effective two-stage defense method. In the first stage, an efficient *Neuron Weight Change (NWC)-based Backdoor Reinitialization* is proposed based on observation 1). In the second stage, based on observation 2), we design an *Activeness-Aware Fine-Tuning* to replace the vanilla fine-tuning. Extensive experiments, involving eight backdoor attacks on three benchmark datasets, demonstrate the superior performance of our proposed method compared to recent state-of-the-art backdoor defense approaches. The code is available at https://github.com/linweiii/TSBD.git.", "pdf": "https://openreview.net/pdf/3b0213a7f5920c4fa37a6bb82103f172f7311105.pdf"} {"title": "ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification", "url": "https://openreview.net/forum?id=5t4ZAkPiJs", "detail_url": "https://openreview.net/forum?id=5t4ZAkPiJs", "authors": "Yefei He,Luoming Zhang,Weijia Wu,Jing Liu,Hong Zhou,Bohan Zhuang", "tags": "NIPS 2024,Poster", "abstract": "KV cache stores key and value states from previous tokens to avoid re-computation, yet it demands substantial storage space, especially for long sequences. \n Adaptive KV cache compression seeks to discern the saliency of tokens, preserving vital information while aggressively compressing those of less importance.\n However, previous methods of this approach exhibit significant performance degradation at high compression ratios due to inaccuracies in identifying salient tokens. \n Additionally, the compression process introduces excessive overhead, substantially increasing memory burdens and the generation latency.\n In this paper, we present ZipCache, an accurate and efficient KV cache quantization method for large language models (LLMs). \n First, we construct a strong baseline for quantizing KV cache. Through the proposed channel-separable tokenwise quantization scheme, the memory overhead of quantization parameters is substantially reduced compared to fine-grained groupwise quantization.\n To enhance the compression ratio, we propose normalized attention score as an effective metric for identifying salient tokens by considering the lower triangle characteristics of the attention matrix. The quantization bit-width for each token is then adaptively assigned based on their saliency.\n Moreover, we develop an efficient approximation method that decouples the saliency metric from full attention scores, enabling compatibility with fast attention implementations like FlashAttention.\n Extensive experiments demonstrate that ZipCache achieves superior compression ratios, fast generation speed and minimal performance losses compared with previous KV cache compression methods. For instance, when evaluating the Mistral-7B model on the GSM8k dataset, ZipCache is capable of compressing the KV cache by $4.98\times$, with only a 0.38% drop in accuracy. 
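The normalized-attention-score idea described above for ZipCache can be sketched as follows; this is a hedged reconstruction of the intuition (average each token's received attention only over the rows allowed to attend to it under the causal mask), not the paper's exact formula:

```python
# Toy normalized saliency for a causal attention matrix: token t is visible
# to only T - t rows, so raw column sums would unfairly favour early tokens.
import numpy as np

def normalized_saliency(attn):
    """attn: (T, T) lower-triangular attention matrix, rows sum to 1."""
    T = attn.shape[0]
    visible_rows = T - np.arange(T)          # how many rows can attend to token t
    return attn.sum(axis=0) / visible_rows   # mean attention received per token

A = np.tril(np.random.rand(6, 6))
A /= A.sum(axis=1, keepdims=True)            # row-normalize a random causal matrix
sal = normalized_saliency(A)                 # higher = keep at higher bit-width
```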
In terms of efficiency, ZipCache also showcases a 37.3% reduction in prefill-phase latency, a 56.9% reduction in decoding-phase latency, and a 19.8% reduction in GPU memory usage when evaluating the LLaMA3-8B model with an input length of 4096. Code is available at https://github.com/ThisisBillhe/ZipCache/.", "pdf": "https://openreview.net/pdf/651bd5c707273d1342237b51bdc0d9f3da4db812.pdf"} {"title": "Seek Commonality but Preserve Differences: Dissected Dynamics Modeling for Multi-modal Visual RL", "url": "https://openreview.net/forum?id=4php6bGL2W", "detail_url": "https://openreview.net/forum?id=4php6bGL2W", "authors": "Yangru Huang,Peixi Peng,Yifan Zhao,Guangyao Chen,Yonghong Tian", "tags": "NIPS 2024,Poster", "abstract": "Accurate environment dynamics modeling is crucial for obtaining effective state representations in visual reinforcement learning (RL) applications. However, when facing multiple input modalities, existing dynamics modeling methods (e.g., DeepMDP) usually stumble in addressing the complex and volatile relationship between different modalities. In this paper, we study the problem of efficient dynamics modeling for multi-modal visual RL. We find that under the existence of modality heterogeneity, modality-correlated and distinct features are equally important but play different roles in reflecting the evolution of environmental dynamics. Motivated by this fact, we propose Dissected Dynamics Modeling (DDM), a novel multi-modal dynamics modeling method for visual RL. Unlike existing methods, DDM explicitly distinguishes consistent and inconsistent information across modalities and treats them separately with a divide-and-conquer strategy. This is done by dispatching the features carrying different information into distinct dynamics modeling pathways, which naturally form a series of implicit regularizations along the learning trajectories. In addition, a reward predictive function is further introduced to filter task-irrelevant information in both modality-consistent and inconsistent features, ensuring information integrity while avoiding potential distractions. Extensive experiments show that DDM consistently achieves competitive performance in challenging multi-modal visual environments.", "pdf": "https://openreview.net/pdf/93ad247051bb483111b439578bac3992ed225188.pdf"} {"title": "Spatio-Temporal Interactive Learning for Efficient Image Reconstruction of Spiking Cameras", "url": "https://openreview.net/forum?id=S4ZqnMywcM", "detail_url": "https://openreview.net/forum?id=S4ZqnMywcM", "authors": "Bin Fan,Jiaoyang Yin,Yuchao Dai,Chao Xu,Tiejun Huang,Boxin Shi", "tags": "NIPS 2024,Poster", "abstract": "The spiking camera is an emerging neuromorphic vision sensor that records high-speed motion scenes by asynchronously firing continuous binary spike streams. Prevailing image reconstruction methods, generating intermediate frames from these spike streams, often rely on complex step-by-step network architectures that overlook the intrinsic collaboration of spatio-temporal complementary information. In this paper, we propose an efficient spatio-temporal interactive reconstruction network to jointly perform inter-frame feature alignment and intra-frame feature filtering in a coarse-to-fine manner. Specifically, it starts by extracting hierarchical features from a concise hybrid spike representation, then refines the motion fields and target frames scale-by-scale, ultimately obtaining a full-resolution output. 
Meanwhile, we introduce a symmetric interactive attention block and a multi-motion field estimation block to further enhance the interaction capability of the overall network. Experiments on synthetic and real-captured data show that our approach exhibits excellent performance while maintaining low model complexity.", "pdf": "https://openreview.net/pdf/bb87f17700d0c54b5c58f6ab4c699c3530ce79bd.pdf"} {"title": "Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms", "url": "https://openreview.net/forum?id=wT5AgMVkaJ", "detail_url": "https://openreview.net/forum?id=wT5AgMVkaJ", "authors": "Miaosen Zhang,Yixuan Wei,Zhen Xing,Yifei Ma,Zuxuan Wu,Ji Li,Zheng Zhang,Qi Dai,Chong Luo,Xin Geng,Baining Guo", "tags": "NIPS 2024,Poster", "abstract": "Modern vision models are trained on very large noisy datasets. While these models acquire strong capabilities, they may not follow the user's intent to output the desired results in certain aspects, e.g., visual aesthetic, preferred style, and responsibility. In this paper, we target the realm of visual aesthetics and aim to align vision models with human aesthetic standards in a retrieval system. Advanced retrieval systems usually adopt a cascade of aesthetic models as re-rankers or filters, which are limited to low-level features like saturation and perform poorly when stylistic, cultural or knowledge contexts are involved. We find that utilizing the reasoning ability of large language models (LLMs) to rephrase the search query and extend the aesthetic expectations can make up for this shortcoming. Based on the above findings, we propose a preference-based reinforcement learning method that fine-tunes the vision models to distill the knowledge from both LLMs reasoning and the aesthetic models to better align the vision models with human aesthetics. Meanwhile, since benchmarks designed for evaluating retrieval systems are rare, we leverage a large multi-modality model (LMM) with its strong abilities to evaluate the aesthetic performance. As aesthetic assessment is one of the most subjective tasks, to validate the robustness of the LMM, we further propose a novel dataset named HPIR to benchmark the alignment with human aesthetics. Experiments demonstrate that our method significantly enhances the aesthetic behaviors of the vision models, under several metrics. We believe the proposed algorithm can be a general practice for aligning vision models with human values.", "pdf": "https://openreview.net/pdf/ea5f1333d5f234e8c6e2fe92907a8aba4c99a5cb.pdf"} {"title": "DiffPhyCon: A Generative Approach to Control Complex Physical Systems", "url": "https://openreview.net/forum?id=MbZuh8L0Xg", "detail_url": "https://openreview.net/forum?id=MbZuh8L0Xg", "authors": "Long Wei,Peiyan Hu,Ruiqi Feng,Haodong Feng,Yixuan Du,Tao Zhang,Rui Wang,Yue Wang,Zhi-Ming Ma,Tailin Wu", "tags": "NIPS 2024,Poster", "abstract": "Controlling the evolution of complex physical systems is a fundamental task across science and engineering. \nClassical techniques suffer from limited applicability or huge computational costs. On the other hand, recent deep learning and reinforcement learning-based approaches often struggle to optimize long-term control sequences under the constraints of system dynamics. In this work, we introduce Diffusion Physical systems Control (DiffPhyCon), a new class of methods to address the physical systems control problem. 
DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives across the entire trajectory and control sequence. Thus, it can explore globally and plan near-optimal control sequences. Moreover, we enhance DiffPhyCon with prior reweighting, enabling the discovery of control sequences that significantly deviate from the training distribution. We test our method on three tasks: 1D Burgers' equation, 2D jellyfish movement control, and 2D high-dimensional smoke control, where our generated jellyfish dataset is released as a benchmark for complex physical system control research. Our method outperforms widely applied classical approaches and state-of-the-art deep learning and reinforcement learning methods. Notably, DiffPhyCon unveils an intriguing fast-close-slow-open pattern observed in the jellyfish, aligning with established findings in the field of fluid dynamics. The project website, jellyfish dataset, and code can be found at https://github.com/AI4Science-WestlakeU/diffphycon.", "pdf": "https://openreview.net/pdf/6810872982165a9ec0a0137199ae9e8765cd0a64.pdf"} {"title": "Learnability Matters: Active Learning for Video Captioning", "url": "https://openreview.net/forum?id=4GP7S7U0lJ", "detail_url": "https://openreview.net/forum?id=4GP7S7U0lJ", "authors": "Yiqian Zhang,Buyu Liu,Jun Bao,Qiang Huang,Min Zhang,Jun Yu", "tags": "NIPS 2024,Poster", "abstract": "This work focuses on active learning in video captioning. In particular, we propose to address the learnability problem in active learning, which is caused by collective outliers in video captioning and has been neglected in the literature. To start with, we conduct a comprehensive study of collective outliers, exploring their hard-to-learn property and concluding that ground truth inconsistency is one of the main causes. Motivated by this, we design a novel active learning algorithm that takes three complementary aspects, namely learnability, diversity, and uncertainty, into account. Ideally, learnability is reflected by ground truth consistency. Under the active learning scenario where ground truths are not available until human involvement, we measure the consistency on estimated ground truths, where predictions from off-the-shelf models are utilized as approximations to ground truths. These predictions are further used to estimate sample frequency and reliability, evincing the diversity and uncertainty respectively. With the help of our novel caption-wise active learning protocol, our algorithm is capable of leveraging knowledge from humans in a more effective yet intellectual manner. Results on publicly available video captioning datasets with diverse video captioning models demonstrate that our algorithm outperforms SOTA active learning methods by a large margin, e.g. 
we achieve about 103% of full performance on CIDEr with 25% of human annotations on MSR-VTT.", "pdf": "https://openreview.net/pdf/bb99de3367bdcd7c7afbfde19367c9eac2393c5e.pdf"} {"title": "CA-SSLR: Condition-Aware Self-Supervised Learning Representation for Generalized Speech Processing", "url": "https://openreview.net/forum?id=aXApeuAYkg", "detail_url": "https://openreview.net/forum?id=aXApeuAYkg", "authors": "Yen-Ju Lu,Jing Liu,Thomas Thebaud,Laureano Moro-Velazquez,Ariya Rastrow,Najim Dehak,Jesus Villalba", "tags": "NIPS 2024,Poster", "abstract": "We introduce Condition-Aware Self-Supervised Learning Representation (CA-SSLR), a generalist conditioning model broadly applicable to various speech-processing tasks. Compared to standard fine-tuning methods that optimize for downstream models, CA-SSLR integrates language and speaker embeddings from earlier layers, making the SSL model aware of the current language and speaker context.\nThis approach reduces the reliance on the input audio features while preserving the integrity of the base SSLR. CA-SSLR improves the model\u2019s capabilities and demonstrates its generality on unseen tasks with minimal task-specific tuning. Our method employs linear modulation to dynamically adjust internal representations, enabling fine-grained adaptability without significantly altering the original model behavior. Experiments show that CA-SSLR reduces the number of trainable parameters, mitigates overfitting, and excels in under-resourced and unseen tasks. Specifically, CA-SSLR achieves a 10\\% relative reduction in LID errors, a 37\\% improvement in ASR CER on the ML-SUPERB benchmark, and a 27\\% decrease in SV EER on VoxCeleb-1, demonstrating its effectiveness.", "pdf": "https://openreview.net/pdf/4fbddc91cc2fe0b130a3140c347d48b22ae829e1.pdf"} {"title": "RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation", "url": "https://openreview.net/forum?id=JxOQeg1NkH", "detail_url": "https://openreview.net/forum?id=JxOQeg1NkH", "authors": "Jiaming Liu,Mengzhen Liu,Zhenyu Wang,Pengju An,Xiaoqi Li,Kaichen Zhou,Senqiao Yang,Renrui Zhang,Yandong Guo,Shanghang Zhang", "tags": "NIPS 2024,Poster", "abstract": "A fundamental objective in robot manipulation is to enable models to comprehend visual scenes and execute actions. Although existing Vision-Language-Action (VLA) models for robots can handle a range of basic tasks, they still face challenges in two areas: (1) insufficient reasoning ability to tackle complex tasks, and (2) high computational costs for VLA model fine-tuning and inference. The recently proposed state space model (SSM) known as Mamba demonstrates promising capabilities in non-trivial sequence modeling with linear inference complexity. Inspired by this, we introduce RoboMamba, an end-to-end robotic VLA model that leverages Mamba to deliver both robotic reasoning and action capabilities, while maintaining efficient fine-tuning and inference. Specifically, we first integrate the vision encoder with Mamba, aligning visual tokens with language embedding through co-training, empowering our model with visual common sense and robotic-related reasoning. To further equip RoboMamba with SE(3) pose prediction abilities, we explore an efficient fine-tuning strategy with a simple policy head. We find that once RoboMamba possesses sufficient reasoning capability, it can acquire manipulation skills with minimal fine-tuning parameters (0.1\\% of the model) and time. 
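The efficient fine-tuning recipe described above for RoboMamba (keep the reasoning backbone frozen, train only a small policy head) is easy to illustrate; the snippet below is a toy with hypothetical module shapes, not RoboMamba's architecture:

```python
# Toy "freeze the backbone, tune a small policy head" setup: only the head's
# parameters receive gradients, so the trainable fraction stays tiny.
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 512))
policy_head = nn.Linear(512, 9)        # e.g. SE(3) pose: 3D translation + 6D rotation

for p in backbone.parameters():
    p.requires_grad = False            # reasoning weights stay fixed

trainable = sum(p.numel() for p in policy_head.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {trainable / total:.4%}")
```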
In experiments, RoboMamba demonstrates outstanding reasoning capabilities on general and robotic evaluation benchmarks. Meanwhile, our model showcases impressive pose prediction results in both simulation and real-world experiments, achieving inference speeds 3 times faster than existing VLA models.", "pdf": "https://openreview.net/pdf/d121b003663e64b010981e0a4b9e271adebd7c2d.pdf"} {"title": "Activating Self-Attention for Multi-Scene Absolute Pose Regression", "url": "https://openreview.net/forum?id=rM24UUgZg8", "detail_url": "https://openreview.net/forum?id=rM24UUgZg8", "authors": "Miso Lee,Jihwan Kim,Jae-Pil Heo", "tags": "NIPS 2024,Poster", "abstract": "Multi-scene absolute pose regression addresses the demand for fast and memory-efficient camera pose estimation across various real-world environments. Recently, transformer-based models have been devised to regress the camera pose directly in multiple scenes. Despite its potential, transformer encoders are underutilized due to the collapsed self-attention map, having low representation capacity. This work highlights the problem and investigates it from a new perspective: distortion of query-key embedding space. Based on the statistical analysis, we reveal that queries and keys are mapped in completely different spaces while only a few keys are blended into the query region. This leads to the collapse of the self-attention map as all queries are considered similar to those few keys. Therefore, we propose simple but effective solutions to activate self-attention. Concretely, we present an auxiliary loss that aligns queries and keys, preventing the distortion of query-key space and encouraging the model to find global relations by self-attention. In addition, the fixed sinusoidal positional encoding is adopted instead of the undertrained learnable one to reflect appropriate positional clues into the inputs of self-attention. As a result, our approach resolves the aforementioned problem effectively, thus outperforming existing methods in both outdoor and indoor scenes.", "pdf": "https://openreview.net/pdf/7a7ae5c375a7545b171107f24006a8f0766bf47f.pdf"} {"title": "Self-Guided Masked Autoencoder", "url": "https://openreview.net/forum?id=7vXufiEzSy", "detail_url": "https://openreview.net/forum?id=7vXufiEzSy", "authors": "Jeongwoo Shin,Inseo Lee,Junho Lee,Joonseok Lee", "tags": "NIPS 2024,Poster", "abstract": "Masked Autoencoder (MAE) is a self-supervised approach for representation learning, widely applicable to a variety of downstream tasks in computer vision. In spite of its success, it is still not fully understood what exactly MAE learns and how. In this paper, with an in-depth analysis, we discover that MAE intrinsically learns pattern-based patch-level clustering from surprisingly early stages of pre-training. Upon this understanding, we propose self-guided masked autoencoder, which internally generates an informed mask by utilizing its progress in patch clustering, substituting the naive random masking of the vanilla MAE. Our approach significantly boosts its learning process without relying on any external models or supplementary information, keeping the benefit of the self-supervised nature of MAE intact. 
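The informed-masking idea just described can be caricatured in a few lines; the sketch below is purely illustrative (it clusters patch embeddings with k-means and hides whole clusters, whereas the paper's internal criterion differs):

```python
# Toy "informed masking": group patches by clustering their embeddings, then
# mask entire clusters until the target ratio is reached, instead of masking
# patches uniformly at random.
import numpy as np
from sklearn.cluster import KMeans

def informed_mask(patch_emb, mask_ratio=0.75, n_clusters=8, seed=0):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(patch_emb)
    order = np.random.default_rng(seed).permutation(n_clusters)
    mask = np.zeros(len(patch_emb), dtype=bool)
    for c in order:                      # hide whole clusters, not single patches
        mask[labels == c] = True
        if mask.mean() >= mask_ratio:
            break
    return mask

mask = informed_mask(np.random.default_rng(1).normal(size=(196, 64)))  # 14x14 patches
```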
Comprehensive experiments on various downstream tasks verify the effectiveness of the proposed method.", "pdf": "https://openreview.net/pdf/6536681167755f591a957e70741cb620ce215b0f.pdf"} {"title": "Learning Interaction-aware 3D Gaussian Splatting for One-shot Hand Avatars", "url": "https://openreview.net/forum?id=BxPa7Sn5Zq", "detail_url": "https://openreview.net/forum?id=BxPa7Sn5Zq", "authors": "Xuan Huang,Hanhui Li,Wanquan Liu,Xiaodan Liang,Yiqiang Yan,Yuhao Cheng,CHENQIANG GAO", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we propose to create animatable avatars for interacting hands with 3D Gaussian Splatting (GS) and single-image inputs. Existing GS-based methods designed for single subjects often yield unsatisfactory results due to limited input views, various hand poses, and occlusions. To address these challenges, we introduce a novel two-stage interaction-aware GS framework that exploits cross-subject hand priors and refines 3D Gaussians in interacting areas. Particularly, to handle hand variations, we disentangle the 3D representation of hands into optimization-based identity maps and learning-based latent geometric features and neural texture maps. Learning-based features are captured by trained networks to provide reliable priors for poses, shapes, and textures, while optimization-based identity maps enable efficient one-shot fitting of out-of-distribution hands. Furthermore, we devise an interaction-aware attention module and a self-adaptive Gaussian refinement module. These modules enhance image rendering quality in areas with intra- and inter-hand interactions, overcoming the limitations of existing GS-based methods. Our proposed method is validated via extensive experiments on the large-scale InterHand2.6M dataset, and it significantly improves the state-of-the-art performance in image quality. Code and models will be released upon acceptance.", "pdf": "https://openreview.net/pdf/b4413f109b864ac38db802e66276511c1e2e8f2d.pdf"} {"title": "Learning Mixtures of Unknown Causal Interventions", "url": "https://openreview.net/forum?id=aC9mB1PqYJ", "detail_url": "https://openreview.net/forum?id=aC9mB1PqYJ", "authors": "Abhinav Kumar,Kirankumar Shiragur,Caroline Uhler", "tags": "NIPS 2024,Poster", "abstract": "The ability to conduct interventions plays a pivotal role in learning causal relationships among variables, thus facilitating applications across diverse scientific disciplines such as genomics, economics, and machine learning. However, in many instances within these applications, the process of generating interventional data is subject to noise: rather than data being sampled directly from the intended interventional distribution, interventions often yield data sampled from a blend of both intended and unintended interventional distributions.\n\nWe consider the fundamental challenge of disentangling mixed interventional and observational data within linear Structural Equation Models (SEMs) with Gaussian additive noise without the knowledge of the true causal graph. We demonstrate that conducting interventions, whether do or soft, yields distributions with sufficient diversity and properties conducive to efficiently recovering each component within the mixture. Furthermore, we establish that the sample complexity required to disentangle mixed data inversely correlates with the extent of change induced by an intervention in the equations governing the affected variable values. 
As a result, the causal graph can be identified up to its interventional Markov Equivalence Class, similar to scenarios where no noise influences the generation of interventional data. We further support our theoretical findings by conducting simulations wherein we perform causal discovery from such mixed data.", "pdf": "https://openreview.net/pdf/f29e05d474067eb4990aab6f4bf59356e7719426.pdf"} {"title": "C-GAIL: Stabilizing Generative Adversarial Imitation Learning with Control Theory", "url": "https://openreview.net/forum?id=t4VwoIYBf0", "detail_url": "https://openreview.net/forum?id=t4VwoIYBf0", "authors": "Tianjiao Luo,Tim Pearce,Huayu Chen,Jianfei Chen,Jun Zhu", "tags": "NIPS 2024,Poster", "abstract": "Generative Adversarial Imitation Learning (GAIL) provides a promising approach to training a generative policy to imitate a demonstrator. It uses on-policy Reinforcement Learning (RL) to optimize a reward signal derived from an adversarial discriminator. However, optimizing GAIL is difficult in practice, with the training loss oscillating during training, slowing convergence. This optimization instability can prevent GAIL from finding a good policy, harming its final performance. In this paper, we study GAIL\u2019s optimization from a control-theoretic perspective. We show that GAIL cannot converge to the desired equilibrium. In response, we analyze the training dynamics of GAIL in function space and design a novel controller that not only pushes GAIL to the desired equilibrium but also achieves asymptotic stability in a simplified \u201cone-step\u201d setting. Going from theory to practice, we propose Controlled-GAIL (C-GAIL), which adds a differentiable regularization term on the GAIL objective to stabilize training. Empirically, the C-GAIL regularizer improves the training of various existing GAIL methods, including the popular GAIL-DAC, by speeding up the convergence, reducing the range of oscillation, and matching the expert distribution more closely.", "pdf": "https://openreview.net/pdf/82916812fa37eed846d5121ee9e03fd6d11bf34c.pdf"} {"title": "Deep Homomorphism Networks", "url": "https://openreview.net/forum?id=KXUijdMFdG", "detail_url": "https://openreview.net/forum?id=KXUijdMFdG", "authors": "Takanori Maehara,Hoang NT", "tags": "NIPS 2024,Poster", "abstract": "Many real-world graphs are large and have some characteristic subgraph patterns, such as triangles in social networks, cliques in web graphs, and cycles in molecular networks.\nDetecting such subgraph patterns is important in many applications; therefore, establishing graph neural networks (GNNs) that can detect such patterns and run fast on large graphs is in high demand.\nIn this study, we propose a new GNN layer, named \emph{graph homomorphism layer}.\nIt enumerates local subgraph patterns that match the predefined set of patterns $\mathcal{P}^\bullet$, applies non-linear transformations to node features, and aggregates them along with the patterns. 
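The pattern-counting flavour of such a layer can be illustrated with the simplest case; the sketch below is a much-simplified stand-in (triangle counts as per-node features via a standard adjacency-matrix identity), not the graph homomorphism layer itself:

```python
# Toy homomorphism-style feature: append each node's triangle count to its
# features, using diag(A^3)/2 = number of triangles through each node.
import numpy as np

def triangle_counts(adj):
    A = np.asarray(adj, dtype=float)
    return np.diag(A @ A @ A) / 2.0   # closed length-3 walks, halved

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
feats = np.random.rand(4, 8)
feats = np.concatenate([feats, triangle_counts(adj)[:, None]], axis=1)
```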
\nBy stacking these layers, we obtain a deep GNN model called \emph{deep homomorphism network (DHN)}.\nThe expressive power of the DHN is completely characterised by the set of patterns generated from $\mathcal{P}^\bullet$ by graph-theoretic operations;\nhence, it serves as a useful theoretical tool to analyse the expressive power of many GNN models.\nFurthermore, the model runs in the same time complexity as the graph homomorphisms, which is fast in many real-world graphs.\nThus, it serves as a practical and lightweight model that solves difficult problems using domain knowledge.", "pdf": "https://openreview.net/pdf/b0c6c25ec41799cfd9b874a03e628bb6745561bf.pdf"} {"title": "Rejection via Learning Density Ratios", "url": "https://openreview.net/forum?id=JzcIKnnOpJ", "detail_url": "https://openreview.net/forum?id=JzcIKnnOpJ", "authors": "Alexander Soen,Hisham Husain,Philip Schulz,Vu Nguyen", "tags": "NIPS 2024,Poster", "abstract": "Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions. \nThe predominant approach is to alter the supervised learning pipeline by augmenting typical loss functions, letting model rejection incur a lower loss than an incorrect prediction.\nInstead, we propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.\nThis can be formalized via the optimization of a loss's risk with a $ \phi$-divergence regularization term.\nThrough this idealized distribution, a rejection decision can be made by utilizing the density ratio between this distribution and the data distribution.\nWe focus on the setting where our $ \phi $-divergences are specified by the family of $ \alpha $-divergence.\nOur framework is tested empirically over clean and noisy datasets.", "pdf": "https://openreview.net/pdf/f2b01a04238d621124397c4ff2f483be587f669c.pdf"} {"title": "Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks", "url": "https://openreview.net/forum?id=CW0OVWEKKu", "detail_url": "https://openreview.net/forum?id=CW0OVWEKKu", "authors": "Xin-Chun Li,Jin-Lin Tang,Bo Zhang,Lan Li,De-Chuan Zhan", "tags": "NIPS 2024,Poster", "abstract": "Exploring the loss landscape offers insights into the inherent principles of deep neural networks (DNNs). Recent work suggests an additional asymmetry of the valley beyond the flat and sharp ones, yet without thoroughly examining its causes or implications. Our study methodically explores the factors affecting the symmetry of DNN valleys, encompassing (1) the dataset, network architecture, initialization, and hyperparameters that influence the convergence point; and (2) the magnitude and direction of the noise for 1D visualization. Our major observation shows that the {\it degree of sign consistency} between the noise and the convergence point is a critical indicator of valley symmetry. Theoretical insights from the aspects of ReLU activation and softmax function could explain the interesting phenomenon. 
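The sign-consistency statistic just mentioned is a simple quantity; the toy below (an illustrative reconstruction, with random vectors standing in for converged weights and a visualization direction) computes the fraction of coordinates where a perturbation shares the sign of the converged parameters:

```python
# Toy sign-consistency ratio between a noise direction and the convergence
# point; roughly 0.5 for an unrelated random direction.
import numpy as np

def sign_consistency(theta, noise):
    return np.mean(np.sign(theta) == np.sign(noise))

rng = np.random.default_rng(0)
theta = rng.normal(size=10_000)                            # stand-in converged weights
ratio = sign_consistency(theta, rng.normal(size=10_000))   # ~0.5 here
```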
Our discovery propels novel understanding and applications in the scenario of Model Fusion: (1) the efficacy of interpolating separate models significantly correlates with their sign consistency ratio, and (2) imposing sign alignment during federated learning emerges as an innovative approach for model parameter alignment.", "pdf": "https://openreview.net/pdf/d21d3189b045ed9b8e3cb15ac98c94136a8a14dd.pdf"} {"title": "DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models", "url": "https://openreview.net/forum?id=UekHycx0lz", "detail_url": "https://openreview.net/forum?id=UekHycx0lz", "authors": "Zhengyang Yu,Zhaoyuan Yang,Jing Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent text-to-image (T2I) personalization methods have shown great promise in teaching a diffusion model user-specified concepts given a few images for reusing the acquired concepts in a novel context. With massive efforts being dedicated to personalized generation, a promising extension is personalized editing, namely to edit an image using personalized concepts, which can provide a more precise guidance signal than traditional textual guidance. To address this, one straightforward solution is to incorporate a personalized diffusion model with a text-driven editing framework. However, such a solution often shows unsatisfactory editability on the source image. To address this, we propose DreamSteerer, a plug-in method for augmenting existing T2I personalization methods. Specifically, we enhance the source image conditioned editability of a personalized diffusion model via a novel Editability Driven Score Distillation (EDSD) objective. Moreover, we identify a mode trapping issue with EDSD, and propose a mode shifting regularization with spatial feature guided sampling to avoid such issue. We further employ two key modifications on the Delta Denoising Score framework that enable high-fidelity local editing with personalized concepts. Extensive experiments validate that DreamSteerer can significantly improve the editability of several T2I personalization baselines while being computationally efficient.", "pdf": "https://openreview.net/pdf/adc0818afa2e0d397b14e4fa39fbe97d8c7a41eb.pdf"} {"title": "Diffusion Actor-Critic with Entropy Regulator", "url": "https://openreview.net/forum?id=l0c1j4QvTq", "detail_url": "https://openreview.net/forum?id=l0c1j4QvTq", "authors": "Yinuo Wang,Likun Wang,Yuxuan Jiang,Wenjun Zou,Tong Liu,Xujie Song,Wenxuan Wang,Liming Xiao,Jiang WU,Jingliang Duan,Shengbo Eben Li", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning (RL) has proven highly effective in addressing complex decision-making and control tasks. However, in most traditional RL algorithms, the policy is typically parameterized as a diagonal Gaussian distribution with learned mean and variance, which constrains their capability to acquire complex policies. In response to this problem, we propose an online RL algorithm termed diffusion actor-critic with entropy regulator (DACER). This algorithm conceptualizes the reverse process of the diffusion model as a novel policy function and leverages the capability of the diffusion model to fit multimodal distributions, thereby enhancing the representational capacity of the policy. Since the distribution of the diffusion policy lacks an analytical expression, its entropy cannot be determined analytically. To mitigate this, we propose a method to estimate the entropy of the diffusion policy utilizing a Gaussian mixture model. 
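The entropy-estimation step just described has a compact form; here is a hedged sketch (random samples stand in for actions drawn from the diffusion policy, and the mixture size is an assumption):

```python
# Monte-Carlo entropy estimate via a fitted Gaussian mixture: the negative
# mean log-density of the samples under the mixture approximates H(pi).
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_policy_entropy(actions, n_components=4):
    gmm = GaussianMixture(n_components=n_components).fit(actions)
    return -gmm.score_samples(actions).mean()

actions = np.random.default_rng(0).normal(size=(512, 2))  # stand-in policy samples
H = estimate_policy_entropy(actions)
```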
Building on the estimated entropy, we can learn a parameter $\alpha$ that modulates the degree of exploration and exploitation. The parameter $\alpha$ is employed to adaptively regulate the variance of the added noise, which is applied to the action output by the diffusion model. Experimental trials on MuJoCo benchmarks and a multimodal task demonstrate that the DACER algorithm achieves state-of-the-art (SOTA) performance in most MuJoCo control tasks while exhibiting a stronger representational capacity of the diffusion policy.", "pdf": "https://openreview.net/pdf/0bcd516c1659debb7a9519921f2284981132e3b0.pdf"} {"title": "Efficient Leverage Score Sampling for Tensor Train Decomposition", "url": "https://openreview.net/forum?id=fi3aKVnBQo", "detail_url": "https://openreview.net/forum?id=fi3aKVnBQo", "authors": "Vivek Bharadwaj,Beheshteh T. Rakhshan,Osman Asif Malik,Guillaume Rabusseau", "tags": "NIPS 2024,Poster", "abstract": "Tensor Train~(TT) decomposition is widely used in the machine learning and quantum physics communities as a popular tool to efficiently compress high-dimensional tensor data. In this paper, we propose an efficient algorithm to accelerate computing the TT decomposition with the Alternating Least Squares (ALS) algorithm relying on exact leverage scores sampling. For this purpose, we propose a data structure that allows us to efficiently sample from the tensor with time complexity logarithmic in the product of the tensor dimensions. Our contribution specifically leverages the canonical form of the TT decomposition. By maintaining the canonical form through each iteration of ALS, we can efficiently compute (and sample from) the leverage scores, thus achieving significant speed-up in solving each sketched least-square problem. Experiments on synthetic and real data on dense and sparse tensors demonstrate that our method outperforms SVD-based and ALS-based algorithms.", "pdf": "https://openreview.net/pdf/f81f3a44cb73d91e225b1a4c92fd78f70d61cc71.pdf"} {"title": "Enhancing Large Language Models through Adaptive Tokenizers", "url": "https://openreview.net/forum?id=3H1wqEdK4z", "detail_url": "https://openreview.net/forum?id=3H1wqEdK4z", "authors": "Mengyu Zheng,Hanting Chen,Tianyu Guo,Chong Zhu,Binfan Zheng,Chang Xu,Yunhe Wang", "tags": "NIPS 2024,Poster", "abstract": "Tokenizers serve as crucial interfaces between models and linguistic data, substantially influencing the efficacy and precision of large language models (LLMs). Traditional tokenization methods often rely on static frequency-based statistics and are not inherently synchronized with LLM architectures, which may limit model performance. In this study, we propose a simple but effective method to learn tokenizers specifically engineered for seamless integration with LLMs. Beginning with a broad initial vocabulary, we refine our tokenizer by monitoring changes in the model\u2019s perplexity during training, allowing for the selection of a tokenizer that is closely aligned with the model\u2019s evolving dynamics. Through iterative refinement, we develop an optimized tokenizer. 
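A highly simplified caricature of perplexity-guided vocabulary refinement follows; `perplexity_without` is a hypothetical (and in reality expensive) helper that retokenizes and scores validation text under a reduced vocabulary, so this is only a sketch of the selection loop:

```python
# Greedy vocabulary refinement: repeatedly drop the candidate token whose
# removal raises validation perplexity the least.
def refine_vocab(vocab, candidates, perplexity_without, rounds=10):
    """vocab, candidates: sets of tokens; perplexity_without: set -> float."""
    vocab = set(vocab)
    for _ in range(rounds):
        pool = candidates & vocab
        if not pool:
            break
        victim = min(pool, key=lambda t: perplexity_without(vocab - {t}))
        vocab.remove(victim)   # keep the vocabulary that hurts the model least
    return vocab
```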
Our empirical evaluations demonstrate that this adaptive approach significantly enhances accuracy compared to conventional methods, maintaining comparable vocabulary sizes and affirming its potential to improve LLM functionality.", "pdf": "https://openreview.net/pdf/acc98f9552b7a433f16acd31392d1a7e00f1df35.pdf"} {"title": "Parallelizing Model-based Reinforcement Learning Over the Sequence Length", "url": "https://openreview.net/forum?id=R6N9AGyz13", "detail_url": "https://openreview.net/forum?id=R6N9AGyz13", "authors": "ZiRui Wang,Yue DENG,Junfeng Long,Yin Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recently, Model-based Reinforcement Learning (MBRL) methods have demonstrated stunning sample efficiency in various RL domains.\nHowever, achieving this extraordinary sample efficiency comes with additional training costs in terms of computations, memory, and training time.\nTo address these challenges, we propose the **Pa**rallelized **Mo**del-based **R**einforcement **L**earning (**PaMoRL**) framework.\nPaMoRL introduces two novel techniques: the **P**arallel **W**orld **M**odel (**PWM**) and the **P**arallelized **E**ligibility **T**race **E**stimation (**PETE**) to parallelize both model learning and policy learning stages of current MBRL methods over the sequence length.\nOur PaMoRL framework is hardware-efficient and stable, and it can be applied to various tasks with discrete or continuous action spaces using a single set of hyperparameters.\nThe empirical results demonstrate that the PWM and PETE within PaMoRL significantly increase training speed without sacrificing inference efficiency.\nIn terms of sample efficiency, PaMoRL maintains an MBRL-level sample efficiency that outperforms other no-look-ahead MBRL methods and model-free RL methods, and it even exceeds the performance of planning-based MBRL methods and methods with larger networks in certain tasks.", "pdf": "https://openreview.net/pdf/efd1bd3b496d9002319a1b079472ba0368edf169.pdf"} {"title": "Just Add $100 More: Augmenting Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem", "url": "https://openreview.net/forum?id=NlpHKNjNNZ", "detail_url": "https://openreview.net/forum?id=NlpHKNjNNZ", "authors": "Mincheol Chang,Siyeong Lee,Jinkyu Kim,Namil Kim", "tags": "NIPS 2024,Poster", "abstract": "Typical LiDAR-based 3D object detection models are trained with real-world data collection, which is often imbalanced over classes.\nTo deal with this, augmentation techniques are commonly used, such as copying ground truth LiDAR points and pasting them into scenes.\nHowever, existing methods struggle with the lack of sample diversity for minority classes and the difficulty of finding suitable placements.\nIn this work, we introduce a novel approach that utilizes pseudo LiDAR point clouds generated from low-cost miniatures or real-world videos, which is called Pseudo Ground Truth augmentation (PGT-Aug).\nPGT-Aug involves three key steps: (i) volumetric 3D instance reconstruction using a 2D-to-3D view synthesis model, (ii) object-level domain alignment with LiDAR intensity simulation, and (iii) a hybrid context-aware placement method based on ground and map information. 
\nWe demonstrate the superiority and generality of our method through performance improvements in extensive experiments conducted on popular benchmarks, i.e., nuScenes, KITTI, and Lyft, especially for datasets with large domain gaps captured by different LiDAR configurations.\nThe project webpage is https://just-add-100-more.github.io.", "pdf": "https://openreview.net/pdf/1811098cec4bd9ae87fb5f0de9ad844cef9e6f81.pdf"} {"title": "$\\text{ID}^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition", "url": "https://openreview.net/forum?id=x4HMnqs6IE", "detail_url": "https://openreview.net/forum?id=x4HMnqs6IE", "authors": "Jianqing Xu,Shen Li,Jiaying Wu,Miao Xiong,Ailin Deng,Jiazhen Ji,Yuge Huang,Guodong Mu,Wenjie Feng,Shouhong Ding,Bryan Hooi", "tags": "NIPS 2024,Poster", "abstract": "Synthetic face recognition (SFR) aims to generate synthetic face datasets that mimic the distribution of real face data, which allows for training face recognition models in a privacy-preserving manner. Despite the remarkable potential of diffusion models in image generation, current diffusion-based SFR models struggle with generalization to real-world faces. To address this limitation, we outline three key objectives for SFR: (1) promoting diversity across identities (inter-class diversity), (2) ensuring diversity within each identity by injecting various facial attributes (intra-class diversity), and (3) maintaining identity consistency within each identity group (intra-class identity preservation). Inspired by these goals, we introduce a diffusion-fueled SFR model termed $\\text{ID}^3$. $\\text{ID}^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances. Theoretically, we show that minimizing this loss is equivalent to maximizing the lower bound of an adjusted conditional log-likelihood over ID-preserving data. This equivalence motivates an ID-preserving sampling algorithm, which operates over an adjusted gradient vector field, enabling the generation of fake face recognition datasets that approximate the distribution of real-world faces. Extensive experiments across five challenging benchmarks validate the advantages of $\\text{ID}^3$.", "pdf": "https://openreview.net/pdf/773ad46986854b909ee32c632457744c279a0962.pdf"} {"title": "RA-PbRL: Provably Efficient Risk-Aware Preference-Based Reinforcement Learning", "url": "https://openreview.net/forum?id=JNDcFOczOf", "detail_url": "https://openreview.net/forum?id=JNDcFOczOf", "authors": "Yujie Zhao,Jose Efraim Aguilar Escamilla,Weyl Lu,Huazheng Wang", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement Learning from Human Feedback (RLHF) has recently surged in popularity, particularly for aligning large language models and other AI systems with human intentions. At its core, RLHF can be viewed as a specialized instance of Preference-based Reinforcement Learning (PbRL), where the preferences specifically originate from human judgments rather than arbitrary evaluators. Despite this connection, most existing approaches in both RLHF and PbRL primarily focus on optimizing a mean reward objective, neglecting scenarios that necessitate risk-awareness, such as AI safety, healthcare, and autonomous driving. These scenarios often operate under a one-episode-reward setting, which makes conventional risk-sensitive objectives inapplicable. To address this, we explore and prove the applicability of two risk-aware objectives to PbRL: nested and static quantile risk objectives. 
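For intuition, a common static quantile-risk objective is the conditional value-at-risk (CVaR) of one-episode returns; a minimal sketch under that assumption (the paper's precise nested and static objectives may differ):

```python
import numpy as np

def static_cvar(returns: np.ndarray, alpha: float = 0.25) -> float:
    """CVaR_alpha: the mean of the worst alpha-fraction of one-episode returns."""
    var = np.quantile(returns, alpha)            # value-at-risk (lower quantile)
    return float(returns[returns <= var].mean()) # average over the bad tail

returns = np.random.randn(10_000)                # placeholder episode returns
print(static_cvar(returns, alpha=0.1))           # risk-aware objective to maximize
```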
We also introduce Risk-Aware PbRL (RA-PbRL), an algorithm designed to optimize both nested and static objectives. Additionally, we provide a theoretical analysis of the regret upper bounds, demonstrating that they are sublinear with respect to the number of episodes, and present empirical results to support our findings. Our code is available at https://github.com/aguilarjose11/PbRLNeurips.", "pdf": "https://openreview.net/pdf/44cfb27e38660c77d507aee08676f6eae8e0b937.pdf"} {"title": "FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning", "url": "https://openreview.net/forum?id=BmG3NgH5xu", "detail_url": "https://openreview.net/forum?id=BmG3NgH5xu", "authors": "Lisha Chen,A F M Saif,Yanning Shen,Tianyi Chen", "tags": "NIPS 2024,Poster", "abstract": "Finding specific preference-guided Pareto solutions that represent different trade-offs among multiple objectives is critical yet challenging in multi-objective problems. \nExisting methods are restrictive in preference definitions and/or their theoretical guarantees.\nIn this work, we introduce a Flexible framEwork for pREfeRence-guided multi-Objective learning (**FERERO**) by casting it as a constrained vector optimization problem.\nSpecifically, two types of preferences are incorporated into this formulation -- the *relative preference* defined by the partial ordering induced by a polyhedral cone, and the *absolute preference* defined by constraints that are linear functions of the objectives. \nTo solve this problem, convergent algorithms are developed with both single-loop and stochastic variants. \nNotably, this is the *first single-loop primal algorithm* for constrained vector optimization to our knowledge. \nThe proposed algorithms adaptively adjust to both constraint and objective values, eliminating the need to solve different subproblems at different stages of constraint satisfaction. \nExperiments on multiple benchmarks demonstrate the proposed method is very competitive in finding preference-guided optimal solutions.\nCode is available at https://github.com/lisha-chen/FERERO/.", "pdf": "https://openreview.net/pdf/37ea3dcb40f6381ea577785f0c53cbeabc631f14.pdf"} {"title": "Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling", "url": "https://openreview.net/forum?id=wFzIMbTsY7", "detail_url": "https://openreview.net/forum?id=wFzIMbTsY7", "authors": "Sili Huang,Jifeng Hu,Zhejian Yang,Liwei Yang,Tao Luo,Hechang Chen,Lichao Sun,Bo Yang", "tags": "NIPS 2024,Poster", "abstract": "Recent works have shown the remarkable superiority of transformer models in reinforcement learning (RL), where the decision-making problem is formulated as sequential generation. Transformer-based agents can self-improve in online environments when provided with task contexts, such as multiple trajectories; this setting is called in-context RL. However, due to the quadratic computation complexity of attention in transformers, current in-context RL methods suffer from huge computational costs as the task horizon increases. In contrast, the Mamba model is renowned for its ability to process long-term dependencies efficiently, which provides an opportunity for in-context RL to solve tasks that require long-term memory. To this end, we first implement Decision Mamba (DM) by replacing the backbone of Decision Transformer (DT). Then, we propose a Decision Mamba-Hybrid (DM-H) with the merits of transformers and Mamba in high-quality prediction and long-term memory. 
Specifically, DM-H first generates high-value sub-goals from long-term memory through the Mamba model. Then, we use sub-goals to prompt the transformer, establishing high-quality predictions. Experimental results demonstrate that DM-H achieves state-of-the-art performance in long- and short-term tasks, such as the D4RL, Grid World, and Tmaze benchmarks. Regarding efficiency, the online testing of DM-H in the long-term task is 28$\\times$ faster than the transformer-based baselines.", "pdf": "https://openreview.net/pdf/f79cbc369ef6968176c7cc958c79839cb99e59b0.pdf"} {"title": "Decomposing and Interpreting Image Representations via Text in ViTs Beyond CLIP", "url": "https://openreview.net/forum?id=Vhh7ONtfvV", "detail_url": "https://openreview.net/forum?id=Vhh7ONtfvV", "authors": "Sriram Balasubramanian,Samyadeep Basu,Soheil Feizi", "tags": "NIPS 2024,Poster", "abstract": "Recent work has explored how individual components of the CLIP-ViT model contribute to the final representation by leveraging the shared image-text representation space of CLIP. These components, such as attention heads and MLPs, have been shown to capture distinct image features like shape, color or texture. However, understanding the role of these components in arbitrary vision transformers (ViTs) is challenging. To this end, we introduce a general framework which can identify the roles of various components in ViTs beyond CLIP. Specifically, we (a) automate the decomposition of the final representation into contributions from different model components, and (b) linearly map these contributions to CLIP space to interpret them via text. Additionally, we introduce a novel scoring function to rank components by their importance with respect to specific features.\nApplying our framework to various ViT variants (e.g. DeiT, DINO, DINOv2, Swin, MaxViT), we gain insights into the roles of different components concerning particular image features. These insights facilitate applications such as image retrieval using text descriptions or reference images, visualizing token importance heatmaps, and mitigating spurious correlations. We release our [code](https://github.com/SriramB-98/vit-decompose) to reproduce the experiments in the paper.", "pdf": "https://openreview.net/pdf/bf8e24b3cef239069ec37fdfe2cbddd0767fbd8e.pdf"} {"title": "Trap-MID: Trapdoor-based Defense against Model Inversion Attacks", "url": "https://openreview.net/forum?id=GNhrGRCerd", "detail_url": "https://openreview.net/forum?id=GNhrGRCerd", "authors": "Zhen-Ting Liu,Shang-Tse Chen", "tags": "NIPS 2024,Poster", "abstract": "Model Inversion (MI) attacks pose a significant threat to the privacy of Deep Neural Networks by recovering the training data distribution from well-trained models. While existing defenses often rely on regularization techniques to reduce information leakage, they remain vulnerable to recent attacks. In this paper, we propose the Trapdoor-based Model Inversion Defense (Trap-MID) to mislead MI attacks. A trapdoor is integrated into the model to predict a specific label when the input is injected with the corresponding trigger. Consequently, this trapdoor information serves as the "shortcut" for MI attacks, leading them to extract trapdoor triggers rather than private data. We provide theoretical insights into the impacts of the trapdoor's effectiveness and naturalness on deceiving MI attacks. 
In addition, empirical experiments demonstrate the state-of-the-art defense performance of Trap-MID against various MI attacks without requiring extra data or large computational overhead. Our source code is publicly available at https://github.com/ntuaislab/Trap-MID.", "pdf": "https://openreview.net/pdf/dd8aac63b1183030ea525d742d358e15e637d960.pdf"} {"title": "Goal-Conditioned On-Policy Reinforcement Learning", "url": "https://openreview.net/forum?id=KP7EUORJYI", "detail_url": "https://openreview.net/forum?id=KP7EUORJYI", "authors": "Gong Xudong,Feng Dawei,Kele Xu,Bo Ding,Huaimin Wang", "tags": "NIPS 2024,Poster", "abstract": "Existing Goal-Conditioned Reinforcement Learning (GCRL) algorithms are built upon Hindsight Experience Replay (HER), which densifies rewards through hindsight replay and leverages historical goal-achieving information to construct a learning curriculum. However, when the task is characterized by a non-Markovian reward (NMR), whose computation depends on multiple steps of states and actions, HER can no longer densify rewards by treating a single encountered state as the hindsight goal. The lack of informative rewards hinders policy learning, resulting in rolling out failed trajectories. Consequently, the replay buffer is overwhelmed with failed trajectories, impeding the establishment of an applicable curriculum. To circumvent these limitations, we deviate from existing HER-based methods and propose an on-policy GCRL framework, GCPO, which is applicable to both multi-goal Markovian reward (MR) and NMR problems.\nGCPO consists of (1) Pre-training from Demonstrations, which pre-trains the policy to possess an initial goal-achieving capability, thereby diminishing the difficulty of subsequent online learning. (2) Online Self-Curriculum Learning, which first estimates the policy's goal-achieving capability based on historical evaluation information and then selects progressively challenging goals for learning based on its current capability. We evaluate GCPO on a challenging multi-goal long-horizon task: fixed-wing UAV velocity vector control. Experimental results demonstrate that GCPO is capable of effectively addressing both multi-goal MR and NMR problems.", "pdf": "https://openreview.net/pdf/b1fdf1b3bd25c9015a3a7ea76becbb6aa3514885.pdf"} {"title": "The Importance of Online Data: Understanding Preference Fine-tuning via Coverage", "url": "https://openreview.net/forum?id=HBj86RMdZ8", "detail_url": "https://openreview.net/forum?id=HBj86RMdZ8", "authors": "Yuda Song,Gokul Swamy,Aarti Singh,Drew Bagnell,Wen Sun", "tags": "NIPS 2024,Poster", "abstract": "Learning from human preference data has emerged as the dominant paradigm for fine-tuning large language models (LLMs). The two most common families of techniques -- online reinforcement learning (RL) such as Proximal Policy Optimization (PPO) and offline contrastive methods such as Direct Preference Optimization (DPO) -- were positioned as equivalent in prior work because both start from the same offline preference dataset. To further expand our theoretical understanding of the similarities and differences between online and offline techniques for preference fine-tuning, we conduct a rigorous analysis through the lens of *dataset coverage*, a concept that captures how the training data covers the test distribution and is widely used in RL. 
We prove that a global coverage condition is both necessary and sufficient for offline contrastive methods to converge to the optimal policy, but a weaker partial coverage condition suffices for online RL methods. This separation provides one explanation for why online RL methods can perform better than offline methods, especially when the offline preference data is not diverse enough. Finally, motivated by our preceding theoretical observations, we derive a hybrid preference optimization (HyPO) algorithm that uses offline data for contrastive-based preference optimization and online unlabeled data for KL regularization. Theoretically and empirically, we demonstrate that HyPO is more performant than its pure offline counterpart DPO, while still preserving its computation and memory efficiency.", "pdf": "https://openreview.net/pdf/6ce5edca1c5e6e6c9285c836cbcc6e6324c0d1dd.pdf"} {"title": "Prospective Representation Learning for Non-Exemplar Class-Incremental Learning", "url": "https://openreview.net/forum?id=ZtDARpmbun", "detail_url": "https://openreview.net/forum?id=ZtDARpmbun", "authors": "Wuxuan Shi,Mang Ye", "tags": "NIPS 2024,Poster", "abstract": "Non-exemplar class-incremental learning (NECIL) is a challenging task that requires recognizing both old and new classes without retaining any old class samples. Current works mainly deal with the conflicts between old and new classes retrospectively as a new task comes in. However, the lack of old task data makes balancing old and new classes difficult. Instead, we propose a Prospective Representation Learning (PRL) approach to prepare the model for handling conflicts in advance. In the base phase, we squeeze the embedding distribution of the current classes to reserve space for forward compatibility with future classes. In the incremental phase, we push the new class features away from the saved prototypes of old classes in a latent space while aligning the current embedding space with the latent space when updating the model. Thereby, the new class features are clustered in the reserved space to minimize the impact of the new classes on the former classes. Our approach can help existing NECIL baselines to balance old and new classes in a plug-and-play manner. 
Extensive experiments on several benchmarks demonstrate that our approach outperforms state-of-the-art methods.", "pdf": "https://openreview.net/pdf/2f8d050f5d447b67f9060d68aa18b6e1154b819a.pdf"} {"title": "A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation", "url": "https://openreview.net/forum?id=p3nPHMpx04", "detail_url": "https://openreview.net/forum?id=p3nPHMpx04", "authors": "Tomoya Sakai,Haoxiang Qiu,Takayuki Katsuki,Daiki Kimura,Takayuki Osogami,Tadanobu Inoue", "tags": "NIPS 2024,Poster", "abstract": "The goal of *generalized* few-shot semantic segmentation (GFSS) is to recognize *novel-class* objects through training with a few annotated examples and the *base-class* model that learned the knowledge about the base classes.\nUnlike the classic few-shot semantic segmentation, GFSS aims to classify pixels into both base and novel classes, meaning it is a more practical setting.\nCurrent GFSS methods rely on several techniques such as using combinations of customized modules, carefully designed loss functions, meta-learning, and transductive learning.\nHowever, we found that a simple rule and standard supervised learning substantially improve the GFSS performance.\nIn this paper, we propose a simple yet effective method for GFSS that does not use the techniques mentioned above.\nAlso, we theoretically show that our method perfectly maintains the segmentation performance of the base-class model over most of the base classes.\nThrough numerical experiments, we demonstrated the effectiveness of our method.\nIt improved novel-class segmentation performance in the $1$-shot scenario by $6.1$% on the PASCAL-$5^i$ dataset, $4.7$% on the PASCAL-$10^i$ dataset, and $1.0$% on the COCO-$20^i$ dataset.", "pdf": "https://openreview.net/pdf/52b0ad2f493708391feb4bd8c807552ff581a936.pdf"} {"title": "HonestLLM: Toward an Honest and Helpful Large Language Model", "url": "https://openreview.net/forum?id=F7tGQ7b10q", "detail_url": "https://openreview.net/forum?id=F7tGQ7b10q", "authors": "Chujie Gao,Siyuan Wu,Yue Huang,Dongping Chen,Qihui Zhang,Zhengyan Fu,Yao Wan,Lichao Sun,Xiangliang Zhang", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have achieved remarkable success across various industries and applications, owing to their exceptional generative capabilities. Nevertheless, honesty and helpfulness, which ensure safe and useful real-world deployments, have long been considered cornerstones in practice. In this paper, we first established comprehensive principles for honest LLMs and further created the HoneSet with 930 queries across six categories, which is designed to evaluate LLMs\u2019 ability to maintain honesty. Then, we improved the honesty and helpfulness of LLMs in both training-free and fine-tuning settings. Specifically, we propose a training-free method named Curiosity-Driven Prompting, which enables LLMs to express their internal confusion and uncertainty about the given query and then optimize their responses. Moreover, we also propose a two-stage fine-tuning approach, inspired by curriculum learning, to enhance the honesty and helpfulness of LLMs. The method first teaches LLMs to distinguish between honest and dishonest responses, and then trains them to respond more helpfully. Experimental results demonstrated that both proposed methods improve the helpfulness of LLMs while maintaining honesty. 
Our research has paved the way for more reliable and trustworthy LLMs in real-world applications.", "pdf": "https://openreview.net/pdf/a9c812c05c635e6ce30035d85f1204f1f13a316c.pdf"} {"title": "SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models", "url": "https://openreview.net/forum?id=Cjnirz5pan", "detail_url": "https://openreview.net/forum?id=Cjnirz5pan", "authors": "Linglan Zhao,Xuerui Zhang,Ke Yan,Shouhong Ding,Weiran Huang", "tags": "NIPS 2024,Poster", "abstract": "Continual learning aims to incrementally acquire new concepts in data streams while resisting forgetting previous knowledge.\nWith the rise of powerful pre-trained models (PTMs), there is a growing interest in training incremental learning systems using these foundation models, rather than learning from scratch. \nExisting works often view PTMs as a strong initial point and directly apply parameter-efficient tuning (PET) in the first session for adapting to downstream tasks.\nIn the following sessions, most methods freeze model parameters to tackle forgetting issues. \nHowever, applying PET directly to downstream data cannot fully explore the inherent knowledge in PTMs.\nAdditionally, freezing the parameters in incremental sessions hinders models' plasticity to novel concepts not covered in the first session. \nTo solve the above issues, we propose a Slow And Fast parameter-Efficient tuning (SAFE) framework.\nIn particular, to inherit general knowledge from foundation models, we include a transfer loss function by measuring the correlation between the PTM and the PET-applied model.\nAfter calibrating in the first session, the slow efficient tuning parameters can capture more informative features, improving generalization to incoming classes.\nMoreover, to further incorporate novel concepts, we strike a balance between stability and plasticity by fixing slow efficient tuning parameters and continuously updating the fast ones.\nSpecifically, a cross-classification loss with feature alignment is proposed to circumvent catastrophic forgetting.\nDuring inference, we introduce an entropy-based aggregation strategy to dynamically utilize the complementarity in the slow and fast learners.\nExtensive experiments on seven benchmark datasets verify the effectiveness of our method by significantly surpassing the state-of-the-art.", "pdf": "https://openreview.net/pdf/0d0cbd6d4b593d16bd3e4fb3e1b7c2e737e4a5c5.pdf"} {"title": "MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders", "url": "https://openreview.net/forum?id=wiK6bwuxjE", "detail_url": "https://openreview.net/forum?id=wiK6bwuxjE", "authors": "Xueying Jiang,Sheng Jin,Xiaoqin Zhang,Ling Shao,Shijian Lu", "tags": "NIPS 2024,Poster", "abstract": "Monocular 3D object detection aims for precise 3D localization and identification of objects from a single-view image. Despite its recent progress, it often struggles while handling pervasive object occlusions that tend to complicate and degrade the prediction of object dimensions, depths, and orientations. We design MonoMAE, a monocular 3D detector inspired by Masked Autoencoders that addresses the object occlusion issue by masking and reconstructing objects in the feature space. MonoMAE consists of two novel designs. The first is depth-aware masking that selectively masks certain parts of non-occluded object queries in the feature space for simulating occluded object queries for network training. 
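A minimal sketch of how depth-aware masking could adaptively balance masked and preserved portions of a query; the linear-in-depth schedule below is an assumption, not the paper's exact rule:

```python
import torch

def depth_aware_mask(queries: torch.Tensor, depths: torch.Tensor,
                     max_ratio: float = 0.5) -> torch.Tensor:
    """Randomly zero out a per-object fraction of query channels, with the
    fraction growing with depth (assumed schedule)."""
    ratios = max_ratio * (depths / depths.max()).clamp(0.0, 1.0)   # (N,)
    keep = torch.rand_like(queries) >= ratios.unsqueeze(-1)        # (N, C)
    return queries * keep.to(queries.dtype)

queries = torch.randn(32, 256)   # 32 non-occluded object query features
depths = torch.rand(32) * 60.0   # placeholder object depths in meters
masked = depth_aware_mask(queries, depths)
```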
It masks non-occluded object queries by balancing the masked and preserved query portions adaptively according to the depth information. The second is lightweight query completion that works with the depth-aware masking to learn to reconstruct and complete the masked object queries. With the proposed feature-space occlusion and completion, MonoMAE learns enriched 3D representations that achieve superior monocular 3D detection performance qualitatively and quantitatively for both occluded and non-occluded objects. Additionally, MonoMAE learns generalizable representations that can work well in new domains.", "pdf": "https://openreview.net/pdf/525c91ccdb8e051ae4ee3dc5b9a7bbac283b9be6.pdf"} {"title": "Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training", "url": "https://openreview.net/forum?id=YSs1z5udBY", "detail_url": "https://openreview.net/forum?id=YSs1z5udBY", "authors": "Yanlai Yang,Matt Jones,Michael Curtis Mozer,Mengye Ren", "tags": "NIPS 2024,Poster", "abstract": "We explore the training dynamics of neural networks in a structured non-IID setting where documents are presented cyclically in a fixed, repeated sequence. Typically, networks suffer from catastrophic interference when training on a sequence of documents; however, we discover a curious and remarkable property of LLMs finetuned sequentially in this setting: they exhibit *anticipatory* behavior, recovering from the forgetting on documents *before* seeing them again. The behavior emerges and becomes more robust as the architecture scales up its number of parameters. Through comprehensive experiments and visualizations, we uncover new insights into training over-parameterized networks in structured environments.", "pdf": "https://openreview.net/pdf/b658600e982e4f69a5a764e8b096d9b77cb897c7.pdf"} {"title": "Simple and Fast Distillation of Diffusion Models", "url": "https://openreview.net/forum?id=Ao0FiZqrXa", "detail_url": "https://openreview.net/forum?id=Ao0FiZqrXa", "authors": "Zhenyu Zhou,Defang Chen,Can Wang,Chun Chen,Siwei Lyu", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based generative models have demonstrated their powerful performance across various tasks, but this comes at the cost of slow sampling speed. To achieve both efficient and high-quality synthesis, various distillation-based accelerated sampling methods have been developed recently. However, they generally require time-consuming fine-tuning with elaborate designs to achieve satisfactory performance for a specific number of function evaluations (NFE), making them difficult to employ in practice. To address this issue, we propose **S**imple and **F**ast **D**istillation (SFD) of diffusion models, which simplifies the paradigm used in existing methods and largely shortens their fine-tuning time by up to $1000\\times$. We begin with a vanilla distillation-based sampling method and boost its performance to state of the art by identifying and addressing several small yet vital factors affecting the synthesis efficiency and quality. Our method can also achieve sampling with variable NFEs using a single distilled model. Extensive experiments demonstrate that SFD strikes a good balance between the sample quality and fine-tuning costs in the few-step image generation task. 
For example, SFD achieves 4.53 FID (NFE=2) on CIFAR-10 with only **0.64 hours** of fine-tuning on a single NVIDIA A100 GPU.", "pdf": "https://openreview.net/pdf/d2311c63dc60612338edb13986d58e0059c5c1f8.pdf"} {"title": "Taming Cross-Domain Representation Variance in Federated Prototype Learning with Heterogeneous Data Domains", "url": "https://openreview.net/forum?id=6SRPizFuaE", "detail_url": "https://openreview.net/forum?id=6SRPizFuaE", "authors": "Lei Wang,Jieming Bian,Letian Zhang,Chen Chen,Jie Xu", "tags": "NIPS 2024,Poster", "abstract": "Federated learning (FL) allows collaborative machine learning training without sharing private data. While most FL methods assume identical data domains across clients, real-world scenarios often involve heterogeneous data domains. Federated Prototype Learning (FedPL) addresses this issue, using mean feature vectors as prototypes to enhance model generalization. However, existing FedPL methods create the same number of prototypes for each client, leading to cross-domain performance gaps and disparities for clients with varied data distributions. To mitigate cross-domain feature representation variance, we introduce FedPLVM, which establishes variance-aware dual-level prototypes clustering and employs a novel $\\alpha$-sparsity prototype loss. The dual-level prototypes clustering strategy creates local clustered prototypes based on private data features, then performs global prototypes clustering to reduce communication complexity and preserve local data privacy. The $\\alpha$-sparsity prototype loss aligns samples from underrepresented domains, enhancing intra-class similarity and reducing inter-class similarity. Evaluations on Digit-5, Office-10, and DomainNet datasets demonstrate our method's superiority over existing approaches.", "pdf": "https://openreview.net/pdf/baa6ccf738732b459e03f73c74434c8fcd823db3.pdf"} {"title": "Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification", "url": "https://openreview.net/forum?id=xCIbVuXwPM", "detail_url": "https://openreview.net/forum?id=xCIbVuXwPM", "authors": "Enrique Nueve,Dhamma Kimpara,Bo Waggoner,Jessica Finocchiaro", "tags": "NIPS 2024,Poster", "abstract": "In multiclass classification over $n$ outcomes, we typically optimize some surrogate loss $L: \\mathbb{R}^d \\times\\mathcal{Y} \\to \\mathbb{R}$ assigning real-valued error to predictions in $\\mathbb{R}^d$. In this paradigm, outcomes must be embedded into the reals with dimension $d \\approx n$ in order to design a consistent surrogate loss. Consistent losses are well-motivated theoretically, yet for large $n$, such as in information retrieval and structured prediction tasks, their optimization may be computationally infeasible. In practice, outcomes are typically embedded into some $\\mathbb{R}^d$ for $d \\ll n$, with little known about their suitability for multiclass classification. We investigate two approaches for trading off consistency and dimensionality in multiclass classification while using a convex surrogate loss. We first formalize partial consistency when the optimized surrogate has dimension $d \\ll n$. \nWe then check if partial consistency holds under a given embedding and low-noise assumption, providing insight into when to use a particular embedding into $\\mathbb{R}^d$. Finally, we present a new method to construct (fully) consistent losses with $d \\ll n$ out of multiple problem instances. 
Our practical approach leverages parallelism to sidestep lower bounds on $d$.", "pdf": "https://openreview.net/pdf/8719768b890f591c4ee35ef267ec626035024441.pdf"} {"title": "Take A Shortcut Back: Mitigating the Gradient Vanishing for Training Spiking Neural Networks", "url": "https://openreview.net/forum?id=xjyU6zmZD7", "detail_url": "https://openreview.net/forum?id=xjyU6zmZD7", "authors": "Yufei Guo,Yuanpei Chen,Zecheng Hao,Weihang Peng,Zhou Jie,Yuhan Zhang,Xiaode Liu,Zhe Ma", "tags": "NIPS 2024,Poster", "abstract": "The Spiking Neural Network (SNN) is a biologically inspired neural network infrastructure that has recently garnered significant attention. It utilizes binary spike activations to transmit information, thereby replacing multiplications with additions and resulting in high energy efficiency. However, training an SNN directly poses a challenge due to the undefined gradient of the firing spike process. Although prior works have employed various surrogate gradient training methods that use an alternative function to replace the firing process during back-propagation, these approaches ignore an intrinsic problem: gradient vanishing. To address this issue, we propose a shortcut back-propagation method in this paper, which transmits the gradient directly from the loss to the shallow layers, thereby significantly mitigating the gradient vanishing problem. Additionally, this method does not introduce any burden during the inference phase.\nTo strike a balance between final accuracy and ease of training, we also propose an evolutionary training framework and implement it by introducing a balance coefficient that dynamically changes with the training epoch, which further improves the network's performance. Extensive experiments conducted over static and dynamic datasets using several popular network structures reveal that our method consistently outperforms state-of-the-art methods.", "pdf": "https://openreview.net/pdf/6dfff0aec6f93d33ffa638873f008d9ca6857190.pdf"} {"title": "Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning", "url": "https://openreview.net/forum?id=Ur9f4hNIpN", "detail_url": "https://openreview.net/forum?id=Ur9f4hNIpN", "authors": "Bei Li,Tong Zheng,Rui Wang,Jiahao Liu,Qingyan Guo,Junliang Guo,Xu Tan,Tong Xiao,JingBo Zhu,Jingang Wang,Xunliang Cai", "tags": "NIPS 2024,Poster", "abstract": "Residual networks, as discrete approximations of Ordinary Differential Equations (ODEs), have inspired significant advancements in neural network design, including multistep methods, high-order methods, and multi-particle dynamical systems. The precision of the solution to ODEs significantly affects parameter optimization, thereby impacting model performance. In this work, we present a series of advanced explorations of Transformer architecture design to minimize the error compared to the true ``solution.'' First, we introduce a predictor-corrector learning framework to minimize truncation errors, which consists of a high-order predictor and a multistep corrector. Second, we propose an exponential moving average-based coefficient learning method to strengthen our higher-order predictor. Extensive experiments on large-scale machine translation, abstractive summarization, language modeling, and natural language understanding benchmarks demonstrate the superiority of our approach. 
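As background, the exponential-moving-average primitive behind such coefficient learning can be sketched as follows; the renormalization step is our assumption, and the paper's exact update may differ:

```python
import torch

class EMACoefficients:
    """Exponential moving average over the combination coefficients for the k
    representations entering a higher-order predictor step."""
    def __init__(self, k: int, beta: float = 0.9):
        self.beta = beta
        self.coeffs = torch.full((k,), 1.0 / k)    # start from a uniform average
    def update(self, new_coeffs: torch.Tensor) -> torch.Tensor:
        # EMA: old estimate decays by beta, new evidence enters with (1 - beta).
        self.coeffs = self.beta * self.coeffs + (1.0 - self.beta) * new_coeffs
        return self.coeffs / self.coeffs.sum()      # renormalize to a convex mix

ema = EMACoefficients(k=4)
weights = ema.update(torch.tensor([0.1, 0.2, 0.3, 0.4]))
```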
On the WMT'14 English-German and English-French tasks, our model achieved BLEU scores of 30.95 and 44.27, respectively. Furthermore, on the OPUS multilingual machine translation task, our model surpasses a robust 3.8B DeepNet by an average of 2.9 SacreBLEU, using only 1/3 of the parameters. Notably, it also beats LLaMA models by 5.7 accuracy points on the LM Harness Evaluation.", "pdf": "https://openreview.net/pdf/c8360f1fbd0a8aed94e37a341f263837d920a24a.pdf"} {"title": "FreqBlender: Enhancing DeepFake Detection by Blending Frequency Knowledge", "url": "https://openreview.net/forum?id=otZPBS0un6", "detail_url": "https://openreview.net/forum?id=otZPBS0un6", "authors": "Hanzhe LI,Jiaran Zhou,Yuezun Li,Baoyuan Wu,Bin Li,Junyu Dong", "tags": "NIPS 2024,Poster", "abstract": "Generating synthetic fake faces, known as pseudo-fake faces, is an effective way to improve the generalization of DeepFake detection. Existing methods typically generate these faces by blending real or fake faces in the spatial domain. While these methods have shown promise, they overlook the simulation of frequency distribution in pseudo-fake faces, limiting in-depth learning of generic forgery traces. To address this, this paper introduces {\\em FreqBlender}, a new method that can generate pseudo-fake faces by blending frequency knowledge. Concretely, we investigate the major frequency components and propose a Frequency Parsing Network to adaptively partition frequency components related to forgery traces. Then we blend this frequency knowledge from fake faces into real faces to generate pseudo-fake faces. Since there is no ground truth for frequency components, we describe a dedicated training strategy by leveraging the inner correlations among different frequency knowledge to instruct the learning process. Experimental results demonstrate the effectiveness of our method in enhancing DeepFake detection, making it a potential plug-and-play strategy for other methods.", "pdf": "https://openreview.net/pdf/069e8992a7029f0458ddc78b200f7894c856f29b.pdf"} {"title": "Efficient LLM Jailbreak via Adaptive Dense-to-sparse Constrained Optimization", "url": "https://openreview.net/forum?id=bN5PA3HHo8", "detail_url": "https://openreview.net/forum?id=bN5PA3HHo8", "authors": "Kai Hu,Weichen Yu,Yining Li,Tianjun Yao,Xiang Li,Wenhe Liu,Lijun Yu,Zhiqiang Shen,Kai Chen,Matt Fredrikson", "tags": "NIPS 2024,Poster", "abstract": "Recent research indicates that large language models (LLMs) are susceptible to jailbreaking attacks that can generate harmful content. This paper introduces a novel token-level attack method, Adaptive Dense-to-Sparse Constrained Optimization (ADC), which has been shown to successfully jailbreak multiple open-source LLMs. Drawing inspiration from the difficulties of discrete token optimization, our method relaxes the discrete jailbreak optimization into a continuous optimization process while gradually increasing the sparsity of the optimizing vectors. This technique effectively bridges the gap between discrete and continuous space optimization. Experimental results demonstrate that our method is more effective and efficient than state-of-the-art token-level methods. On Harmbench, our approach achieves the highest attack success rate on seven out of eight LLMs compared to the latest jailbreak methods. 
Trigger Warning: This paper contains model behavior that can be offensive in nature.", "pdf": "https://openreview.net/pdf/dba1623df3eaa27fd20c334efb5355035700e0b8.pdf"} {"title": "FFAM: Feature Factorization Activation Map for Explanation of 3D Detectors", "url": "https://openreview.net/forum?id=rpZWSDjc4N", "detail_url": "https://openreview.net/forum?id=rpZWSDjc4N", "authors": "Shuai Liu,Boyang Li,Zhiyu Fang,Mingyue Cui,Kai Huang", "tags": "NIPS 2024,Poster", "abstract": "LiDAR-based 3D object detection has made impressive progress recently, yet most existing models are black-box, lacking interpretability. Previous explanation approaches primarily focus on analyzing image-based models and are not readily applicable to LiDAR-based 3D detectors. In this paper, we propose a feature factorization activation map (FFAM) to generate high-quality visual explanations for 3D detectors. FFAM employs non-negative matrix factorization to generate concept activation maps and subsequently aggregates these maps to obtain a global visual explanation. To achieve object-specific visual explanations, we refine the global visual explanation using the feature gradient of a target object. Additionally, we introduce a voxel upsampling strategy to align the scale between the activation map and input point cloud. We qualitatively and quantitatively analyze FFAM with multiple detectors on several datasets. Experimental results validate the high-quality visual explanations produced by FFAM. The code is available at https://anonymous.4open.science/r/FFAM-B9AF.", "pdf": "https://openreview.net/pdf/08f720727dc29316599fe6ed03219a4f2f88435e.pdf"} {"title": "Understanding and Improving Adversarial Collaborative Filtering for Robust Recommendation", "url": "https://openreview.net/forum?id=k8AYft5ED1", "detail_url": "https://openreview.net/forum?id=k8AYft5ED1", "authors": "Kaike Zhang,Qi Cao,Yunfan Wu,Fei Sun,Huawei Shen,Xueqi Cheng", "tags": "NIPS 2024,Poster", "abstract": "Adversarial Collaborative Filtering (ACF), which typically applies adversarial perturbations at user and item embeddings through adversarial training, is widely recognized as an effective strategy for enhancing the robustness of Collaborative Filtering (CF) recommender systems against poisoning attacks. In addition, numerous studies have empirically shown that ACF can also improve recommendation performance compared to traditional CF. Despite these empirical successes, the theoretical understanding of ACF's effectiveness in terms of both performance and robustness remains unclear. To bridge this gap, in this paper, we first theoretically show that ACF can achieve a lower recommendation error compared to traditional CF with the same training epochs in both clean and poisoned data contexts. Furthermore, by establishing bounds for reductions in recommendation error during ACF's optimization process, we find that applying personalized magnitudes of perturbation for different users based on their embedding scales can further improve ACF's effectiveness. Building on these theoretical understandings, we propose Personalized Magnitude Adversarial Collaborative Filtering (PamaCF). 
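A minimal sketch of the personalized-magnitude idea as we read it: an FGSM-style perturbation whose radius scales with each user's embedding norm; the variable names and the loss being differentiated are placeholders, not PamaCF's exact rule:

```python
import torch

def personalized_perturbation(emb: torch.Tensor, grad: torch.Tensor,
                              rho: float = 0.05) -> torch.Tensor:
    """Adversarial perturbation whose radius scales with each user's embedding
    norm, rather than using a single global magnitude."""
    radius = rho * emb.norm(dim=-1, keepdim=True)                   # per-user radius
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)    # unit ascent step
    return emb + radius * direction

emb = torch.randn(100, 64)    # user embeddings
grad = torch.randn(100, 64)   # placeholder: gradient of a CF loss w.r.t. emb
perturbed = personalized_perturbation(emb, grad)
```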
Extensive experiments demonstrate that PamaCF effectively defends against various types of poisoning attacks while significantly enhancing recommendation performance.", "pdf": "https://openreview.net/pdf/9ff2572b003febc56e01190bdc05435dc777c13c.pdf"} {"title": "Elliptical Attention", "url": "https://openreview.net/forum?id=Ejg4d4FVrs", "detail_url": "https://openreview.net/forum?id=Ejg4d4FVrs", "authors": "Stefan Nielsen,Laziz Abdullaev,Rachel Teo,Tan Minh Nguyen", "tags": "NIPS 2024,Poster", "abstract": "Pairwise dot-product self-attention is key to the success of transformers that achieve state-of-the-art performance across a variety of applications in language and vision. This dot-product self-attention computes attention weights among the input tokens using Euclidean distance, which makes the model prone to representation collapse and vulnerable to contaminated samples. In this paper, we propose using a Mahalanobis distance metric for computing the attention weights to stretch the underlying feature space in directions of high contextual relevance. In particular, we define a hyper-ellipsoidal neighborhood around each query to increase the attention weights of the tokens lying in the contextually important directions. We term this novel class of attention Elliptical Attention. Our Elliptical Attention provides two benefits: 1) reducing representation collapse and 2) enhancing the model's robustness as the Elliptical Attention pays more attention to contextually relevant information, rather than focusing on some small subset of informative features. We empirically demonstrate the advantages of Elliptical Attention over the baseline dot-product attention and state-of-the-art attention methods on various practical tasks, including object classification, image\nsegmentation, and language modeling across different data modalities.", "pdf": "https://openreview.net/pdf/5a05adf9dee4474f7026e4a12cd9886dcb18b29c.pdf"} {"title": "Kernel PCA for Out-of-Distribution Detection", "url": "https://openreview.net/forum?id=EZpKBC1ohS", "detail_url": "https://openreview.net/forum?id=EZpKBC1ohS", "authors": "Kun Fang,Qinghua Tao,Kexin Lv,Mingzhen He,Xiaolin Huang,JIE YANG", "tags": "NIPS 2024,Poster", "abstract": "Out-of-Distribution (OoD) detection is vital for the reliability of Deep Neural Networks (DNNs).\nExisting works have shown the insufficiency of Principal Component Analysis (PCA) straightforwardly applied to the features of DNNs in detecting OoD data from In-Distribution (InD) data.\nThe failure of PCA suggests that the network features residing in OoD and InD are not well separated by simply proceeding in a linear subspace, which instead can be resolved through proper non-linear mappings.\nIn this work, we leverage the framework of Kernel PCA (KPCA) for OoD detection, and seek suitable non-linear kernels that advocate the separability between InD and OoD data in the subspace spanned by the principal components.\nIn addition, explicit feature mappings induced from the task-specific kernels are adopted so that the KPCA reconstruction error for new test samples can be efficiently obtained with large-scale data.\nExtensive theoretical and empirical results on multiple OoD data sets and network structures verify the superiority of our KPCA detector in efficiency and efficacy with state-of-the-art detection performance.", "pdf": "https://openreview.net/pdf/d88514a029b1facc7e9bec460854a8c10dbf7aae.pdf"} {"title": "Infinite-Dimensional Feature Interaction", "url": 
"https://openreview.net/forum?id=xO9GHdmK76", "detail_url": "https://openreview.net/forum?id=xO9GHdmK76", "authors": "Chenhui Xu,Fuxun Yu,Maoliang Li,Zihao Zheng,Zirui Xu,Jinjun Xiong,Xiang Chen", "tags": "NIPS 2024,Poster", "abstract": "The past neural network design has largely focused on feature \\textit{representation space} dimension and its capacity scaling (e.g., width, depth), but overlooked the feature \\textit{interaction space} scaling. \n Recent advancements have shown shifted focus towards element-wise multiplication to facilitate higher-dimensional feature interaction space for better information transformation. Despite this progress, multiplications predominantly capture low-order interactions, thus remaining confined to a finite-dimensional interaction space. To transcend this limitation, classic kernel methods emerge as a promising solution to engage features in an infinite-dimensional space. We introduce InfiNet, a model architecture that enables feature interaction within an infinite-dimensional space created by RBF kernel. Our experiments reveal that InfiNet achieves new state-of-the-art, owing to its capability to leverage infinite-dimensional interactions, significantly enhancing model performance.", "pdf": "https://openreview.net/pdf/b14995433876b9e28417e0ab94774923baeecd15.pdf"} {"title": "GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats", "url": "https://openreview.net/forum?id=sFaFDcVNbW", "detail_url": "https://openreview.net/forum?id=sFaFDcVNbW", "authors": "Sangeek Hyun,Jae-Pil Heo", "tags": "NIPS 2024,Poster", "abstract": "Most advances in 3D Generative Adversarial Networks (3D GANs) largely depend on ray casting-based volume rendering, which incurs demanding rendering costs. One promising alternative is rasterization-based 3D Gaussian Splatting (3D-GS), providing a much faster rendering speed and explicit 3D representation. In this paper, we exploit Gaussian as a 3D representation for 3D GANs by leveraging its efficient and explicit characteristics. However, in an adversarial framework, we observe that a na\\\"ive generator architecture suffers from training instability and lacks the capability to adjust the scale of Gaussians. This leads to model divergence and visual artifacts due to the absence of proper guidance for initialized positions of Gaussians and densification to manage their scales adaptively. To address these issues, we introduce GSGAN, a generator architecture with a hierarchical multi-scale Gaussian representation that effectively regularizes the position and scale of generated Gaussians. Specifically, we design a hierarchy of Gaussians where finer-level Gaussians are parameterized by their coarser-level counterparts; the position of finer-level Gaussians would be located near their coarser-level counterparts, and the scale would monotonically decrease as the level becomes finer, modeling both coarse and fine details of the 3D scene. 
Experimental results demonstrate that our method achieves a significantly faster rendering speed (\u00d7100) compared to state-of-the-art 3D consistent GANs with comparable 3D generation capability.", "pdf": "https://openreview.net/pdf/460ef1b126e0fe69e5f5e9e5794ac258d86790db.pdf"} {"title": "Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts", "url": "https://openreview.net/forum?id=R7w68Z5iqf", "detail_url": "https://openreview.net/forum?id=R7w68Z5iqf", "authors": "Hang Guo,Tao Dai,Yuanchao Bai,Bin Chen,Xudong Ren,Zexuan Zhu,Shu-Tao Xia", "tags": "NIPS 2024,Poster", "abstract": "Designing single-task image restoration models for specific degradation has seen great success in recent years. To achieve generalized image restoration, all-in-one methods have recently been proposed and shown potential for multiple restoration tasks using a single model. Despite the promising results, the existing all-in-one paradigm still suffers from high computational costs as well as limited generalization on unseen degradations. In this work, we introduce an alternative solution to improve the generalization of image restoration models. Drawing inspiration from recent advancements in Parameter Efficient Transfer Learning (PETL), we aim to tune only a small number of parameters to adapt pre-trained restoration models to various tasks. However, current PETL methods fail to generalize across varied restoration tasks due to their homogeneous representation nature. To this end, we propose AdaptIR, a Mixture-of-Experts (MoE) with orthogonal multi-branch design to capture local spatial, global spatial, and channel representation bases, followed by adaptive base combination to obtain heterogeneous representation for different degradations. Extensive experiments demonstrate that our AdaptIR achieves stable performance on single-degradation tasks, and excels in hybrid-degradation tasks, with training only 0.6% of the parameters for 8 hours.", "pdf": "https://openreview.net/pdf/ba6ab8744b41d553787bc19f68d105474e2047b1.pdf"} {"title": "DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging", "url": "https://openreview.net/forum?id=kMnoh7CXrq", "detail_url": "https://openreview.net/forum?id=kMnoh7CXrq", "authors": "Matteo Pagliardini,Amirkeivan Mohtashami,Fran\u00e7ois Fleuret,Martin Jaggi", "tags": "NIPS 2024,Poster", "abstract": "The transformer architecture by Vaswani et al. (2017) is now ubiquitous across application domains, from natural language processing to speech processing and image understanding. We propose DenseFormer, a simple modification to the standard architecture that improves the perplexity of the model without increasing its size---adding a few thousand parameters for large-scale models in the 100B-parameter range. Our approach relies on an additional averaging step after each transformer block, which computes a weighted average of current and past representations---we refer to this operation as Depth-Weighted-Average (DWA). The learned DWA weights exhibit coherent patterns of information flow, revealing the strong and structured reuse of activations from distant layers. 
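A minimal sketch of the DWA operation as described: a learned weighted average over the current and all past block outputs; the softmax normalization here is a placeholder choice, not necessarily the paper's parameterization:

```python
import torch

def depth_weighted_average(history: list[torch.Tensor],
                           weights: torch.Tensor) -> torch.Tensor:
    """After block i, average the current and all past representations
    with one learned weight per depth."""
    stacked = torch.stack(history)                   # (i+1, batch, seq, dim)
    return torch.einsum('k,kbtd->btd', weights, stacked)

history = [torch.randn(2, 16, 64) for _ in range(3)]   # x0, x1, x2 so far
weights = torch.softmax(torch.randn(3), dim=0)          # one weight per depth
out = depth_weighted_average(history, weights)          # input to the next block
```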
Experiments demonstrate that DenseFormer is more data efficient, reaching the same perplexity as much deeper transformer models, and that for the same perplexity, these new models outperform transformer baselines in terms of memory efficiency and inference time.", "pdf": "https://openreview.net/pdf/03cba71ba6f566405b4789c98f7c477405d8231d.pdf"} {"title": "Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning", "url": "https://openreview.net/forum?id=qXZVSy9LFR", "detail_url": "https://openreview.net/forum?id=qXZVSy9LFR", "authors": "Zebang Cheng,Zhi-Qi Cheng,Jun-Yan He,Kai Wang,Yuxiang Lin,Zheng Lian,Xiaojiang Peng,Alexander G Hauptmann", "tags": "NIPS 2024,Poster", "abstract": "Accurate emotion perception is crucial for various applications, including human-computer interaction, education, and counseling.\nHowever, traditional single-modality approaches often fail to capture the complexity of real-world emotional expressions, which are inherently multimodal. Moreover, existing Multimodal Large Language Models (MLLMs) face challenges in integrating audio and recognizing subtle facial micro-expressions. To address this, we introduce the MERR dataset, containing 28,618 coarse-grained and 4,487 fine-grained annotated samples across diverse emotional categories. This dataset enables models to learn from varied scenarios and generalize to real-world applications. Furthermore, we propose Emotion-LLaMA, a model that seamlessly integrates audio, visual, and textual inputs through emotion-specific encoders. By aligning features into a shared space and employing a modified LLaMA model with instruction tuning, Emotion-LLaMA significantly enhances both emotional recognition and reasoning capabilities. Extensive evaluations show Emotion-LLaMA outperforms other MLLMs, achieving top scores in Clue Overlap (7.83) and Label Overlap (6.25) on EMER, an F1 score of 0.9036 on MER2023-SEMI challenge, and the highest UAR (45.59) and WAR (59.37) in zero-shot evaluations on DFEW dataset.", "pdf": "https://openreview.net/pdf/13ca05d54004661d249951eee464726f7dc73bad.pdf"} {"title": "Gliding over the Pareto Front with Uniform Designs", "url": "https://openreview.net/forum?id=WoEXVQcHFw", "detail_url": "https://openreview.net/forum?id=WoEXVQcHFw", "authors": "Xiaoyuan Zhang,Genghui Li,Xi Lin,Yichi Zhang,Yifan Chen,Qingfu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Multiobjective optimization (MOO) plays a critical role in various real-world domains. A major challenge therein is generating $K$ uniform Pareto-optimal solutions to represent the entire Pareto front. To address this issue, this paper first introduces \\emph{fill distance} to evaluate the $K$ design points, which provides a quantitative metric for the representativeness of the design. However, directly specifying the optimal design that minimizes the fill distance is nearly intractable due to the nested $\\min-\\max-\\min$ optimization problem. 
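Fill distance itself is straightforward to compute for a discretized front; a minimal sketch (the front here is random placeholder data):

```python
import numpy as np

def fill_distance(design: np.ndarray, front: np.ndarray) -> float:
    """Largest distance from any point of the (discretized) Pareto front to its
    nearest design point; smaller values mean more uniform coverage."""
    gaps = np.linalg.norm(front[:, None, :] - design[None, :, :], axis=-1)
    return float(gaps.min(axis=1).max())   # max over front of min over design

front = np.random.rand(5000, 3)   # placeholder discretization of a 3-objective front
design = front[np.random.choice(len(front), 20, replace=False)]  # K = 20 points
print(fill_distance(design, front))
```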
To address this, we propose a surrogate ``max-packing'' design for the fill distance design, which is easier to optimize and leads to a rate-optimal design with a fill distance at most $4\\times$ the minimum value.\n Extensive experiments on synthetic and real-world benchmarks demonstrate that our proposed paradigm efficiently produces high-quality, representative solutions and outperforms baseline methods.", "pdf": "https://openreview.net/pdf/c398884fc6cbeb1fd9e5a839c618077e2c0bd2a0.pdf"} {"title": "Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images", "url": "https://openreview.net/forum?id=2dfBpyqh0A", "detail_url": "https://openreview.net/forum?id=2dfBpyqh0A", "authors": "Shengjun Zhang,Xin Fei,Fangfu Liu,Haixu Song,Yueqi Duan", "tags": "NIPS 2024,Poster", "abstract": "3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis performance. While conventional methods require per-scene optimization, more recently several feed-forward methods have been proposed to generate pixel-aligned Gaussian representations with a learnable network, which are generalizable to different scenes. However, these methods simply combine pixel-aligned Gaussians from multiple views as scene representations, thereby leading to artifacts and extra memory cost without fully capturing the relations of Gaussians from different images. In this paper, we propose Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian representations. Specifically, we construct Gaussian Graphs to model the relations of Gaussian groups from different views. To support message passing at Gaussian level, we reformulate the basic graph operations over Gaussian representations, enabling each Gaussian to benefit from its connected Gaussian groups with Gaussian feature fusion. Furthermore, we design a Gaussian pooling layer to aggregate various Gaussian groups for efficient representations. We conduct experiments on the large-scale RealEstate10K and ACID datasets to demonstrate the efficiency and generalization of our method. Compared to the state-of-the-art methods, our model uses fewer Gaussians and achieves better image quality with higher rendering speed.", "pdf": "https://openreview.net/pdf/46ddc547adbdc82405ec97bc49d8972211d3dd83.pdf"} {"title": "A PID Controller Approach for Adaptive Probability-dependent Gradient Decay in Model Calibration", "url": "https://openreview.net/forum?id=fAnubdSFpn", "detail_url": "https://openreview.net/forum?id=fAnubdSFpn", "authors": "Siyuan Zhang,Linbo Xie", "tags": "NIPS 2024,Poster", "abstract": "Modern deep learning models often exhibit overconfident predictions, inadequately capturing uncertainty. During model optimization, the expected calibration error tends to overfit earlier than classification accuracy, indicating distinct optimization objectives for classification error and calibration error. To ensure consistent optimization of both model accuracy and model calibration, we propose a novel method incorporating a probability-dependent gradient decay coefficient into the loss function. This coefficient exhibits a strong correlation with the overall confidence level. To maintain model calibration during optimization, we utilize a proportional-integral-derivative (PID) controller to dynamically adjust this gradient decay rate, where the adjustment relies on the proposed relative calibration error feedback in each epoch, thereby preventing the model from exhibiting over-confidence or under-confidence. 
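A minimal sketch of the discrete PID primitive in this role; the additive update of the decay rate and the gain values are assumptions, not the paper's exact control law:

```python
class PIDController:
    """Discrete PID controller: the error input would be the relative
    calibration error each epoch; the output adjusts the gradient decay rate."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None
    def step(self, error: float) -> float:
        self.integral += error                      # I: accumulated error
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=0.5, ki=0.05, kd=0.1)
decay_rate = 1.0
for epoch_error in [0.08, 0.05, -0.02]:   # placeholder calibration errors
    decay_rate += pid.step(epoch_error)   # additive update is an assumption
```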
Within the PID control system framework, the proposed relative calibration error serves as the control system output, providing an indication of the overall confidence level, while the gradient decay rate functions as the controlled variable. Moreover, recognizing the impact of adaptive decay rates on gradient amplitude, we implement an adaptive learning rate mechanism for gradient compensation to prevent inadequate learning from overly small or overly large gradients. Empirical experiments validate the efficacy of our PID-based adaptive gradient decay rate approach, ensuring consistent optimization of model calibration and model accuracy.", "pdf": "https://openreview.net/pdf/7a0287f0d64f3b5142f6e9bf773c8521ed202538.pdf"} {"title": "Disentangling and mitigating the impact of task similarity for continual learning", "url": "https://openreview.net/forum?id=bE7GWLQzkM", "detail_url": "https://openreview.net/forum?id=bE7GWLQzkM", "authors": "Naoki Hiratani", "tags": "NIPS 2024,Poster", "abstract": "Continual learning of partially similar tasks poses a challenge for artificial neural networks, as task similarity presents both an opportunity for knowledge transfer and a risk of interference and catastrophic forgetting.\nHowever, it remains unclear how task similarity in input features and readout patterns influences knowledge transfer and forgetting, as well as how they interact with common algorithms for continual learning.\nHere, we develop a linear teacher-student model with latent structure and show analytically that high input feature similarity coupled with low readout similarity is catastrophic for both knowledge transfer and retention. \nConversely, the opposite scenario is relatively benign. \nOur analysis further reveals that task-dependent activity gating improves knowledge retention at the expense of transfer, while task-dependent plasticity gating does not affect either retention or transfer performance at the over-parameterized limit. \nIn contrast, weight regularization based on the Fisher information metric significantly improves retention, regardless of task similarity, without compromising transfer performance. Nevertheless, its diagonal approximation and regularization in the Euclidean space are much less robust against task similarity. \nWe demonstrate consistent results in a permuted MNIST task with latent variables. Overall, this work provides insights into when continual learning is difficult and how to mitigate it.", "pdf": "https://openreview.net/pdf/a615623ba5e9b57a77694d9816984ebb20ebf11f.pdf"} {"title": "RGMDT: Return-Gap-Minimizing Decision Tree Extraction in Non-Euclidean Metric Space", "url": "https://openreview.net/forum?id=mdWz5koY5p", "detail_url": "https://openreview.net/forum?id=mdWz5koY5p", "authors": "Jingdi Chen,Hanhan Zhou,Yongsheng Mei,Carlee Joe-Wong,Gina Adam,Nathaniel D. Bastian,Tian Lan", "tags": "NIPS 2024,Poster", "abstract": "Deep Reinforcement Learning (DRL) algorithms have achieved great success in solving many challenging tasks while their black-box nature hinders interpretability and real-world applicability, making it difficult for human experts to interpret and understand DRL policies. 
\nExisting works on interpretable reinforcement learning have shown promise in extracting decision tree (DT) based policies from DRL policies, with most focusing on single-agent settings, while prior attempts to introduce DT policies in multi-agent scenarios mainly focus on heuristic designs that do not provide any quantitative guarantees on the expected return.\nIn this paper, we establish an upper bound on the return gap between the oracle expert policy and an optimal decision tree policy. This enables us to recast the DT extraction problem into a novel non-Euclidean clustering problem over the local observation and action values space of each agent, with action values as cluster labels and the upper bound on the return gap as clustering loss.\nBoth the algorithm and the upper bound are extended to multi-agent decentralized DT extractions by an iteratively-grow-DT procedure guided by an action-value function conditioned on the current DTs of other agents. Further, we propose the Return-Gap-Minimization Decision Tree (RGMDT) algorithm, which is a surprisingly simple design and is integrated with reinforcement learning through the utilization of a novel Regularized Information Maximization loss. Evaluations on tasks like D4RL show that RGMDT significantly outperforms heuristic DT-based baselines and can achieve nearly optimal returns under given DT complexity constraints (e.g., maximum number of DT nodes).", "pdf": "https://openreview.net/pdf/41ac0f4467bcfc63a3fb68d7ce0f9cbeeaf0bbd9.pdf"} {"title": "M$^3$GPT: An Advanced Multimodal, Multitask Framework for Motion Comprehension and Generation", "url": "https://openreview.net/forum?id=ODbTlAs0Oj", "detail_url": "https://openreview.net/forum?id=ODbTlAs0Oj", "authors": "Mingshuang Luo,RuiBing Hou,Zhuo Li,Hong Chang,Zimo Liu,Yaowei Wang,Shiguang Shan", "tags": "NIPS 2024,Poster", "abstract": "This paper presents M$^3$GPT, an advanced $\\textbf{M}$ultimodal, $\\textbf{M}$ultitask framework for $\\textbf{M}$otion comprehension and generation. M$^3$GPT operates on three fundamental principles. The first focuses on creating a unified representation space for various motion-relevant modalities. We employ discrete vector quantization for multimodal conditional signals, such as text, music and motion/dance, enabling seamless integration into a large language model (LLM) with a single vocabulary.\nThe second involves modeling motion generation directly in the raw motion space. This strategy circumvents the information loss associated with a discrete tokenizer, resulting in more detailed and comprehensive motion generation. \nThird, M$^3$GPT learns to model the connections and synergies among various motion-relevant tasks. Text, the most familiar and well-understood modality for LLMs, is utilized as a bridge to establish connections between different motion tasks, facilitating mutual \nreinforcement. To our knowledge, M$^3$GPT is the first model capable of comprehending and generating motions based on multiple signals.\nExtensive experiments highlight M$^3$GPT's superior performance across various motion-relevant tasks and its powerful zero-shot generalization capabilities for extremely challenging tasks. Project page: \\url{https://github.com/luomingshuang/M3GPT}.", "pdf": "https://openreview.net/pdf/cee09305e51de2abc6c260584c7482e74de84fb0.pdf"} {"title": "Is One GPU Enough? 
Pushing Image Generation at Higher-Resolutions with Foundation Models.", "url": "https://openreview.net/forum?id=Ffb30OVVCa", "detail_url": "https://openreview.net/forum?id=Ffb30OVVCa", "authors": "Athanasios Tragakis,Marco Aversa,Chaitanya Kaul,Roderick Murray-Smith,Daniele Faccio", "tags": "NIPS 2024,Poster", "abstract": "In this work, we introduce Pixelsmith, a zero-shot text-to-image generative framework to sample images at higher resolutions with a single GPU. We are the first to show that it is possible to scale the output of a pre-trained diffusion model by a factor of 1000, opening the road to gigapixel image generation at no extra cost. Our cascading method uses the image generated at the lowest resolution as a baseline to sample at higher resolutions. For the guidance, we introduce the Slider, a mechanism that fuses the overall structure contained in the first-generated image with enhanced fine details. At each inference step, we denoise patches rather than the entire latent space, minimizing memory demands so that a single GPU can handle the process, regardless of the image's resolution. Our experimental results show that this method not only achieves higher quality and diversity compared to existing techniques but also reduces sampling time and ablation artifacts.", "pdf": "https://openreview.net/pdf/8bcf2faa948547c803b558817a89cc317b05abf5.pdf"} {"title": "Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models", "url": "https://openreview.net/forum?id=cOw65A9FGf", "detail_url": "https://openreview.net/forum?id=cOw65A9FGf", "authors": "Lu Yu,Haiyang Zhang,Changsheng Xu", "tags": "NIPS 2024,Poster", "abstract": "Due to the impressive zero-shot capabilities, pre-trained vision-language models (e.g., CLIP) have attracted widespread attention and adoption across various domains. Nonetheless, CLIP has been observed to be susceptible to adversarial examples. Through experimental analysis, we have observed a phenomenon wherein adversarial perturbations induce shifts in text-guided attention. Building upon this observation, we propose a simple yet effective strategy: Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR). This framework incorporates two components: the Attention Refinement module and the Attention-based Model Constraint module. Our goal is to maintain the generalization of the CLIP model and enhance its adversarial robustness: The Attention Refinement module aligns the text-guided attention obtained from the target model via adversarial examples with the text-guided attention acquired from the original model via clean examples. This alignment enhances the model\u2019s robustness. Additionally, the Attention-based Model Constraint module acquires text-guided attention from both the target and original models using clean examples. Its objective is to maintain model performance on clean samples while enhancing overall robustness. The experiments validate that our method yields a 9.58% enhancement in zero-shot robust accuracy over the current state-of-the-art techniques across 16 datasets. 
Our code is available at https://github.com/zhyblue424/TGA-ZSR.", "pdf": "https://openreview.net/pdf/0eca8640dcb6ad067469bfabed5e124e03952d5d.pdf"} {"title": "Test-time Adaptation in Non-stationary Environments via Adaptive Representation Alignment", "url": "https://openreview.net/forum?id=0EfUYVMrLv", "detail_url": "https://openreview.net/forum?id=0EfUYVMrLv", "authors": "Zhen-Yu Zhang,Zhiyu Xie,Huaxiu Yao,Masashi Sugiyama", "tags": "NIPS 2024,Poster", "abstract": "Adapting to distribution shifts is a critical challenge in modern machine learning, especially as data in many real-world applications accumulate continuously in the form of streams. We investigate the problem of sequentially adapting a model to non-stationary environments, where the data distribution is continuously shifting and only a small amount of unlabeled data is available each time. Continual test-time adaptation methods have shown promising results by using reliable pseudo-labels, but they still fall short in exploring representation alignment with the source domain in non-stationary environments. In this paper, we propose to leverage non-stationary representation learning to adaptively align the unlabeled data stream, with its changing distributions, to the source data representation using a sketch of the source data. To alleviate the data scarcity in non-stationary representation learning, we propose a novel adaptive representation alignment algorithm called Ada-ReAlign. This approach employs a group of base learners to explore different lengths of the unlabeled data stream, which are adaptively combined by a meta learner to handle unknown and continuously evolving data distributions. The proposed method comes with nice theoretical guarantees under convexity assumptions. Experiments on both benchmark datasets and a real-world application validate the effectiveness and adaptability of our proposed algorithm.", "pdf": "https://openreview.net/pdf/be25a9406407e0296d8240eb457f27cf4070b837.pdf"} {"title": "Neural Experts: Mixture of Experts for Implicit Neural Representations", "url": "https://openreview.net/forum?id=wWguwYhpAY", "detail_url": "https://openreview.net/forum?id=wWguwYhpAY", "authors": "Yizhak Ben-Shabat,Chamin P Hewa Koneputugodage,Sameera Ramasinghe,Stephen Gould", "tags": "NIPS 2024,Poster", "abstract": "Implicit neural representations (INRs) have proven effective in various tasks including image, shape, audio, and video reconstruction. These INRs typically learn the implicit field from sampled input points. This is often done using a single network for the entire domain, imposing many global constraints on a single function. \nIn this paper, we propose a mixture of experts (MoE) implicit neural representation approach that enables learning local piece-wise continuous functions by simultaneously learning to subdivide the domain and fit it locally. \nWe show that incorporating a mixture of experts architecture into existing INR formulations provides a boost in speed, accuracy, and memory requirements. Additionally, we introduce novel conditioning and pretraining methods for the gating network that improve convergence to the desired solution. \nWe evaluate the effectiveness of our approach on multiple reconstruction tasks, including surface reconstruction, image reconstruction, and audio signal reconstruction, and show improved performance compared to non-MoE methods. 
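A toy rendering of the mixture-of-experts INR idea (the hidden sizes, softmax gate, and expert count below are assumptions for illustration, not the paper's architecture):

```python
# Toy mixture-of-experts implicit neural representation (illustrative only):
# a gating network soft-assigns each query coordinate to small expert MLPs.
import torch
import torch.nn as nn

class MoEINR(nn.Module):
    def __init__(self, in_dim=2, hidden=64, n_experts=4):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_experts))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_experts))

    def forward(self, x):                                 # x: (N, in_dim) coordinates
        w = torch.softmax(self.gate(x), dim=-1)           # (N, n_experts) soft subdivision
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (N, 1, n_experts)
        return (outs * w.unsqueeze(1)).sum(-1)            # (N, 1) fused field value

field = MoEINR()
pred = field(torch.rand(128, 2))                          # signal at 128 query points
```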
Code is available at our project page https://sitzikbs.github.io/neural-experts-projectpage/.", "pdf": "https://openreview.net/pdf/29a5178e806f04207f02516fcb74d8395ed9af42.pdf"} {"title": "ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization", "url": "https://openreview.net/forum?id=yzviAnpvU6", "detail_url": "https://openreview.net/forum?id=yzviAnpvU6", "authors": "Xiaoxing Wang,Xiaohan Qin,Xiaokang Yang,Junchi Yan", "tags": "NIPS 2024,Poster", "abstract": "Gradient estimation is critical in zeroth-order optimization methods, which aims to obtain the descent direction by sampling update directions and querying function evaluations. Extensive research has been conducted including smoothing and linear interpolation. The former methods smooth the objective function, causing a biased gradient estimation, while the latter often enjoys more accurate estimates, at the cost of large amounts of samples and queries at each iteration to update variables. This paper resorts to the linear interpolation strategy and proposes to reduce the complexity of gradient estimation by reusing queries in the prior iterations while maintaining the sample size unchanged. Specifically, we model the gradient estimation as a quadratically constrained linear program problem and manage to derive the analytical solution. It innovatively decouples the required sample size from the variable dimension without extra conditions required, making it able to leverage the queries in the prior iterations. Moreover, part of the intermediate variables that contribute to the gradient estimation can be directly indexed, significantly reducing the computation complexity. Experiments on both simulation functions and real scenarios (black-box adversarial attacks, neural architecture search, and parameter-efficient fine-tuning for large language models) show its efficacy and efficiency. Our code is available at https://github.com/Thinklab-SJTU/ReLIZO.git.", "pdf": "https://openreview.net/pdf/fc7ea3437f516b19fb6feff0c372deeda8df7019.pdf"} {"title": "BitDelta: Your Fine-Tune May Only Be Worth One Bit", "url": "https://openreview.net/forum?id=XuWWq3gy7W", "detail_url": "https://openreview.net/forum?id=XuWWq3gy7W", "authors": "James Liu,Guangxuan Xiao,Kai Li,Jason D. Lee,Song Han,Tri Dao,Tianle Cai", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) are typically trained in two phases: pre-training on large internet-scale datasets, and fine-tuning for downstream tasks. Given the higher computational demand of pre-training, it is intuitive to assume that fine-tuning adds less new information to the model, and is thus more compressible. We explore this assumption by decomposing the weights of fine-tuned models into their pre-trained components and an additional delta. We introduce a simple method, BitDelta, which successfully quantizes this delta down to 1 bit without compromising performance. This interesting finding not only highlights the potential redundancy of information added during fine-tuning, but also has significant implications for the multi-tenant serving and multi-tenant storage of fine-tuned models. By enabling the use of a single high-precision base model accompanied by multiple 1-bit deltas, BitDelta dramatically reduces GPU memory requirements by more than 10x, thus reducing per-user generation latency by more than 10x in multi-tenant settings. 
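The 1-bit delta idea admits a compact sketch. Per-tensor sign-plus-scale quantization is shown below; the paper refines the scales further, so treat this as a minimal illustration rather than the released method:

```python
# Sketch of 1-bit delta compression: keep the sign of the fine-tuning delta plus
# one scale per tensor (mean(|delta|) is the L2-optimal scale for sign quantization).
import torch

def compress_delta(w_ft: torch.Tensor, w_base: torch.Tensor):
    delta = w_ft - w_base
    sign = torch.sign(delta)          # 1 bit per weight once packed
    scale = delta.abs().mean()        # minimizes ||delta - scale * sign||_F
    return sign, scale

def reconstruct(w_base, sign, scale):
    return w_base + scale * sign

w_base = torch.randn(1024, 1024)
w_ft = w_base + 0.01 * torch.randn(1024, 1024)   # stand-in for a fine-tuned weight
sign, scale = compress_delta(w_ft, w_base)
w_hat = reconstruct(w_base, sign, scale)
```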
We validate BitDelta through experiments across Llama-2, Mistral and MPT model families, and on models up to 70B parameters, showcasing minimal performance degradation in all tested settings.", "pdf": "https://openreview.net/pdf/778525a1f5cc61f3c51860fb0542505d7a6458fb.pdf"} {"title": "Fixed Confidence Best Arm Identification in the Bayesian Setting", "url": "https://openreview.net/forum?id=hFTye9Ge40", "detail_url": "https://openreview.net/forum?id=hFTye9Ge40", "authors": "Kyoungseok Jang,Junpei Komiyama,Kazutoshi Yamazaki", "tags": "NIPS 2024,Poster", "abstract": "We consider the fixed-confidence best arm identification (FC-BAI) problem in the Bayesian setting. This problem aims to find the arm of the largest mean with a fixed confidence level when the bandit model has been sampled from the known prior. \nMost studies on the FC-BAI problem have been conducted in the frequentist setting, where the bandit model is predetermined before the game starts. \nWe show that the traditional FC-BAI algorithms studied in the frequentist setting, such as track-and-stop and top-two algorithms, result in arbitrarily suboptimal performances in the Bayesian setting. \nWe also obtain a lower bound of the expected number of samples in the Bayesian setting and introduce a variant of successive elimination that has a matching performance with the lower bound up to a logarithmic factor. Simulations verify the theoretical results.", "pdf": "https://openreview.net/pdf/7f1b744ab1c7f4a18cae1ef2f6f2d474a760a149.pdf"} {"title": "Autonomous Driving with Spiking Neural Networks", "url": "https://openreview.net/forum?id=95VyH4VxN9", "detail_url": "https://openreview.net/forum?id=95VyH4VxN9", "authors": "Rui-Jie Zhu,Ziqing Wang,Leilani H. Gilpin,Jason Eshraghian", "tags": "NIPS 2024,Poster", "abstract": "Autonomous driving demands an integrated approach that encompasses perception, prediction, and planning, all while operating under strict energy constraints to enhance scalability and environmental sustainability. We present Spiking Autonomous Driving (SAD), the first unified Spiking Neural Network (SNN) to address the energy challenges faced by autonomous driving systems through its event-driven and energy-efficient nature. SAD is trained end-to-end and consists of three main modules: perception, which processes inputs from multi-view cameras to construct a spatiotemporal bird's eye view; prediction, which utilizes a novel dual-pathway with spiking neurons to forecast future states; and planning, which generates safe trajectories considering predicted occupancy, traffic rules, and ride comfort. Evaluated on the nuScenes dataset, SAD achieves competitive performance in perception, prediction, and planning tasks, while drawing upon the energy efficiency of SNNs. This work highlights the potential of neuromorphic computing to be applied to energy-efficient autonomous driving, a critical step toward sustainable and safety-critical automotive technology. 
Our code is available at [https://github.com/ridgerchu/SAD](https://github.com/ridgerchu/SAD).", "pdf": "https://openreview.net/pdf/34ed36798e9b7d351dff02e4dbf857fdd9a02b5c.pdf"} {"title": "HYSYNTH: Context-Free LLM Approximation for Guiding Program Synthesis", "url": "https://openreview.net/forum?id=5jt0ZSA6Co", "detail_url": "https://openreview.net/forum?id=5jt0ZSA6Co", "authors": "Shraddha Barke,Emmanuel Anaya Gonzalez,Saketh Ram Kasibatla,Taylor Berg-Kirkpatrick,Nadia Polikarpova", "tags": "NIPS 2024,Poster", "abstract": "Many structured prediction and reasoning tasks can be framed as program synthesis problems, where the goal is to generate a program in a \\emph{domain-specific language} (DSL) that transforms input data into the desired output. Unfortunately, purely neural approaches, such as large language models (LLMs), often fail to produce fully correct programs in unfamiliar DSLs, while purely symbolic methods based on combinatorial search scale poorly to complex problems. Motivated by these limitations, we introduce a hybrid approach, where LLM completions for a given task are used to learn a task-specific, context-free surrogate model, which is then used to guide program synthesis. We evaluate this hybrid approach on three domains, and show that it outperforms both unguided search and direct sampling from LLMs, as well as existing program synthesizers.", "pdf": "https://openreview.net/pdf/d3d5fc4b1e16f544e7b9cf699a8f089fc35d5f0e.pdf"} {"title": "Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces", "url": "https://openreview.net/forum?id=UDi51I8K1p", "detail_url": "https://openreview.net/forum?id=UDi51I8K1p", "authors": "Luis Hernan Cubillos,Guy Revach,Matthew Mender,Joseph T Costello,Hisham Temmar,Aren Hite,Diksha Anoop Kumar Zutshi,Dylan Michael Wallace,Xiaoyong Ni,Madison M. Kelberman,Matt Willsey,Ruud Van Sloun,Nir Shlezinger,Parag Ganapati Patil,Anne Draelos,Cynthia Chestek", "tags": "NIPS 2024,Poster", "abstract": "People with brain or spinal cord-related paralysis often need to rely on others for basic tasks, limiting their independence. A potential solution is brain-machine interfaces (BMIs), which could allow them to voluntarily control external devices (e.g., robotic arm) by decoding brain activity to movement commands. In the past decade, deep-learning decoders have achieved state-of-the-art results in most BMI applications, ranging from speech production to finger control. However, the 'black-box' nature of deep-learning decoders could lead to unexpected behaviors, resulting in major safety concerns in real-world physical control scenarios. In these applications, explainable but lower-performing decoders, such as the Kalman filter (KF), remain the norm. In this study, we designed a BMI decoder based on KalmanNet, an extension of the KF that augments its operation with recurrent neural networks to compute the Kalman gain. This results in a varying \u201ctrust\u201d that shifts between inputs and dynamics. We used this algorithm to predict finger movements from the brain activity of two monkeys. We compared KalmanNet results offline (pre-recorded data, $n=13$ days) and online (real-time predictions, $n=5$ days) with a simple KF and two recent deep-learning algorithms: tcFNN (non-ReFIT version) and LSTM. KalmanNet achieved comparable or better results than other deep learning models in offline and online modes, relying on the dynamical model for stopping while depending more on neural inputs for initiating movements. 
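The learned-gain idea sketches as a standard predict/update recursion in which a small RNN, rather than the covariance algebra, produces the Kalman gain. The dimensions, GRU inputs, and initialization below are illustrative assumptions; KalmanNet itself feeds the RNN richer innovation and update features:

```python
# Sketch of a KalmanNet-style decoder: an RNN replaces the analytic Kalman gain.
import torch
import torch.nn as nn

class LearnedGainFilter(nn.Module):
    def __init__(self, state_dim, obs_dim, hidden=32):
        super().__init__()
        self.A = nn.Parameter(torch.eye(state_dim))              # state transition
        self.H = nn.Parameter(torch.zeros(obs_dim, state_dim))   # observation model
        self.rnn = nn.GRUCell(obs_dim, hidden)
        self.to_gain = nn.Linear(hidden, state_dim * obs_dim)

    def forward(self, ys):                                # ys: (T, obs_dim)
        state_dim, obs_dim = self.A.shape[0], ys.shape[1]
        x = torch.zeros(state_dim)
        h = torch.zeros(self.rnn.hidden_size)
        xs = []
        for y in ys:
            x_pred = self.A @ x                           # predict from dynamics
            innov = y - self.H @ x_pred                   # innovation
            h = self.rnn(innov.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
            K = self.to_gain(h).view(state_dim, obs_dim)  # learned Kalman gain
            x = x_pred + K @ innov                        # update from data
            xs.append(x)
        return torch.stack(xs)

filt = LearnedGainFilter(state_dim=4, obs_dim=2)
states = filt(torch.randn(100, 2))                        # decode a 100-step stream
```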
We further validated this mechanism by implementing a heteroscedastic KF that used the same strategy, and it also approached state-of-the-art performance while remaining in the explainable domain of standard KFs. However, we also see two downsides to KalmanNet. KalmanNet shares the limited generalization ability of existing deep-learning decoders, and its usage of the KF as an inductive bias limits its performance in the presence of unseen noise distributions. Despite this trade-off, our analysis successfully integrates traditional controls and modern deep-learning approaches to motivate high-performing yet still explainable BMI designs.", "pdf": "https://openreview.net/pdf/e668d56c878fa03d1e227267a3d0e5ccff829595.pdf"} {"title": "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion", "url": "https://openreview.net/forum?id=yDo1ynArjj", "detail_url": "https://openreview.net/forum?id=yDo1ynArjj", "authors": "Boyuan Chen,Diego Mart\u00ed Mons\u00f3,Yilun Du,Max Simchowitz,Russ Tedrake,Vincent Sitzmann", "tags": "NIPS 2024,Poster", "abstract": "This paper presents Diffusion Forcing, a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels. We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens without fully diffusing past ones. Our approach is shown to combine the strengths of next-token prediction models, such as variable-length generation, with the strengths of full-sequence diffusion models, such as the ability to guide sampling to desirable trajectories. Our method offers a range of additional capabilities, such as (1) rolling-out sequences of continuous tokens, such as video, with lengths past the training horizon, where baselines diverge and (2) new sampling and guiding schemes that uniquely profit from Diffusion Forcing's variable-horizon and causal architecture, and which lead to marked performance gains in decision-making and planning tasks. In addition to its empirical success, our method is proven to optimize a variational lower bound on the likelihoods of all subsequences of tokens drawn from the true joint distribution. Project website: https://boyuan.space/diffusion-forcing/", "pdf": "https://openreview.net/pdf/5a6e9d157a4d33dc36773c5c32370c3c7941d6c2.pdf"} {"title": "Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels", "url": "https://openreview.net/forum?id=UvbpbEhGaw", "detail_url": "https://openreview.net/forum?id=UvbpbEhGaw", "authors": "Jan-Philipp Fr\u00e4nken,Eric Zelikman,Rafael Rafailov,Kanishk Gandhi,Tobias Gerstenberg,Noah Goodman", "tags": "NIPS 2024,Poster", "abstract": "When prompting a language model (LM), users often expect the model to adhere to a set of behavioral principles across diverse tasks, such as producing insightful content while avoiding harmful or biased language. Instilling such principles (i.e., a constitution) into a model is resource-intensive, technically challenging, and generally requires human preference labels or examples. We introduce SAMI, an iterative algorithm that finetunes a pretrained language model (without requiring preference labels or demonstrations) to increase the conditional mutual information between constitutions and self-generated responses given queries from a dataset. 
On single-turn dialogue and summarization, a SAMI-trained mistral-7b outperforms the initial pretrained model, with win rates between 66% and 77%. Strikingly, it also surpasses an instruction-finetuned baseline (mistral-7b-instruct) with win rates between 55% and 57% on single-turn dialogue. SAMI requires a model that writes the principles. To avoid dependence on strong models for writing principles, we align a strong pretrained model (mixtral-8x7b) using constitutions written by a weak instruction-finetuned model (mistral-7b-instruct), achieving a 65% win rate on summarization. Finally, we investigate whether SAMI generalizes to diverse summarization principles (e.g., \"summaries should be scientific\") and scales to stronger models (llama3-70b), finding that it achieves win rates of up to 68% for learned and 67% for held-out principles compared to the base model. Our results show that a pretrained LM can learn to follow constitutions without using preference labels, demonstrations, or human oversight.", "pdf": "https://openreview.net/pdf/128cb22ad304c76f1281e20dc9186dc9fc940926.pdf"} {"title": "Towards Robust Multimodal Sentiment Analysis with Incomplete Data", "url": "https://openreview.net/forum?id=mYEjc7qGRA", "detail_url": "https://openreview.net/forum?id=mYEjc7qGRA", "authors": "Haoyu Zhang,Wenbin Wang,Tianshu Yu", "tags": "NIPS 2024,Poster", "abstract": "The field of Multimodal Sentiment Analysis (MSA) has recently witnessed an emerging direction seeking to tackle the issue of data incompleteness. Recognizing that the language modality typically contains dense sentiment information, we consider it as the dominant modality and present an innovative Language-dominated Noise-resistant Learning Network (LNLN) to achieve robust MSA. The proposed LNLN features a dominant modality correction (DMC) module and dominant modality based multimodal learning (DMML) module, which enhances the model's robustness across various noise scenarios by ensuring the quality of dominant modality representations. Aside from the methodical design, we perform comprehensive experiments under random data missing scenarios, utilizing diverse and meaningful settings on several popular datasets (e.g., MOSI, MOSEI, and SIMS), providing additional uniformity, transparency, and fairness compared to existing evaluations in the literature. Empirically, LNLN consistently outperforms existing baselines, demonstrating superior performance across these challenging and extensive evaluation metrics.", "pdf": "https://openreview.net/pdf/9da4ae43fcdb223baae597dc851a5bdef7c54a42.pdf"} {"title": "High-probability complexity bounds for stochastic non-convex minimax optimization", "url": "https://openreview.net/forum?id=XMQTNzlgTJ", "detail_url": "https://openreview.net/forum?id=XMQTNzlgTJ", "authors": "Yassine Laguel,Yasa Syed,Necdet Aybat,Mert Gurbuzbalaban", "tags": "NIPS 2024,Poster", "abstract": "Stochastic smooth nonconvex minimax problems are prevalent in machine learning, e.g., GAN training, fair classification, and distributionally robust learning. Stochastic gradient descent ascent (GDA)-type methods are popular in practice due to their simplicity and single-loop nature. However, there is a significant gap between the theory and practice regarding high-probability complexity guarantees for these methods on stochastic nonconvex minimax problems. 
Existing high-probability bounds for GDA-type single-loop methods only apply to convex/concave minimax problems and to particular non-monotone variational inequality problems under some restrictive assumptions. In this work, we address this gap by providing the first high-probability complexity guarantees for nonconvex/PL minimax problems corresponding to a smooth function that satisfies the PL-condition in the dual variable. Specifically, we show that when the stochastic gradients are light-tailed, the smoothed alternating GDA method can compute an $\\varepsilon$-stationary point within $\\mathcal{O}(\\frac{\\ell \\kappa^2 \\delta^2}{\\varepsilon^4} + \\frac{\\kappa}{\\varepsilon^2}(\\ell+\\delta^2\\log({1}/{\\bar{q}})))$ stochastic gradient calls with probability at least $1-\\bar{q}$ for any $\\bar{q}\\in(0,1)$, where $\\mu$ is the PL constant, $\\ell$ is the Lipschitz constant of the gradient, $\\kappa=\\ell/\\mu$ is the condition number, and $\\delta^2$ denotes a bound on the variance of stochastic gradients. We also present numerical results on a nonconvex/PL problem with synthetic data and on distributionally robust optimization problems with real data, illustrating our theoretical findings.", "pdf": "https://openreview.net/pdf/54f2e1683603d036f8030c1e6bea30720a146552.pdf"} {"title": "Accelerating Transformers with Spectrum-Preserving Token Merging", "url": "https://openreview.net/forum?id=PPdJPIO3mV", "detail_url": "https://openreview.net/forum?id=PPdJPIO3mV", "authors": "Hoai-Chau Tran,Duy Minh Ho Nguyen,Manh-Duy Nguyen,TrungTin Nguyen,Ngan Hoang Le,Pengtao Xie,Daniel Sonntag,James Zou,Binh T. Nguyen,Mathias Niepert", "tags": "NIPS 2024,Poster", "abstract": "Increasing the throughput of the Transformer architecture, a foundational component used in numerous state-of-the-art models for vision and language tasks (e.g., GPT, LLaVa), is an important problem in machine learning. One recent and effective strategy is to merge token representations within Transformer models, aiming to reduce computational and memory requirements while maintaining accuracy. Prior work has proposed algorithms based on Bipartite Soft Matching (BSM), which divides tokens into distinct sets and merges the top $k$ similar tokens. However, these methods have significant drawbacks, such as sensitivity to token-splitting strategies and damage to informative tokens in later layers. This paper presents a novel paradigm called PiToMe, which prioritizes the preservation of informative tokens using an additional metric termed the \\textit{energy score}. This score identifies large clusters of similar tokens as high-energy, indicating potential candidates for merging, while smaller (unique and isolated) clusters are considered as low-energy and preserved. Experimental findings demonstrate that PiToMe saves 40-60\\% of the FLOPs of the base models while exhibiting superior off-the-shelf performance on image classification (0.5\\% average performance drop of ViT-MAEH compared to 2.6\\% for baselines), image-text retrieval (0.3\\% average performance drop of CLIP on Flickr30k compared to 4.5\\% for others), and analogously in visual question answering with LLaVa-7B. 
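One simplified reading of the energy score (purely a sketch; the actual PiToMe algorithm pairs and weights tokens more carefully): rank tokens by how much similar mass surrounds them, merge the highest-energy ones, and keep the isolated ones untouched.

```python
# Hedged sketch of energy-score-driven token merging (not the reference code).
import torch

def merge_tokens(x, r):
    """x: (N, D) token embeddings; r: number of tokens removed by merging."""
    xn = torch.nn.functional.normalize(x, dim=-1)
    sim = xn @ xn.T
    sim.fill_diagonal_(0)
    energy = sim.clamp(min=0).mean(dim=-1)            # high = in a large similar cluster
    order = energy.argsort(descending=True)
    merge_ids, keep_ids = order[: 2 * r], order[2 * r :]
    merged = x[merge_ids].view(r, 2, -1).mean(dim=1)  # crude pairing of high-energy tokens
    return torch.cat([merged, x[keep_ids]], dim=0)    # (N - r, D)

tokens = torch.randn(197, 768)                        # e.g., ViT token sequence
reduced = merge_tokens(tokens, r=32)                  # 197 -> 165 tokens
```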
Furthermore, PiToMe is theoretically shown to preserve the intrinsic spectral properties of the original token space under mild conditions.", "pdf": "https://openreview.net/pdf/60a06c3801b6127cda4ba4a57e1502f4ae5f8fd2.pdf"} {"title": "Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences", "url": "https://openreview.net/forum?id=G8aS48B9bm", "detail_url": "https://openreview.net/forum?id=G8aS48B9bm", "authors": "Grigory Malinovsky,Peter Richt\u00e1rik,Samuel Horv\u00e1th,Eduard Gorbunov", "tags": "NIPS 2024,Poster", "abstract": "Distributed learning has emerged as a leading paradigm for training large machine learning models. However, in real-world scenarios, participants may be unreliable or malicious, posing a significant challenge to the integrity and accuracy of the trained models. Byzantine fault tolerance mechanisms have been proposed to address these issues, but they often assume full participation from all clients, which is not always practical due to the unavailability of some clients or communication constraints. In our work, we propose the first distributed method with client sampling and provable tolerance to Byzantine workers. The key idea behind the developed method is the use of gradient clipping to control stochastic gradient differences in recursive variance reduction. This allows us to bound the potential harm caused by Byzantine workers, even during iterations when all sampled clients are Byzantine. Furthermore, we incorporate communication compression into the method to enhance communication efficiency. Under general assumptions, we prove convergence rates for the proposed method that match the existing state-of-the-art (SOTA) theoretical results. We also propose a heuristic on how to adjust any Byzantine-robust method to a partial participation scenario via clipping.", "pdf": "https://openreview.net/pdf/80db342360af43ad116fa66a85baa7ae2276fe30.pdf"} {"title": "SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors", "url": "https://openreview.net/forum?id=uS0PwIBzC0", "detail_url": "https://openreview.net/forum?id=uS0PwIBzC0", "authors": "Vijay Lingam,Atula Tejaswi Neerkaje,Aditya Vavre,Aneesh Shetty,Gautham Krishna Gudur,Joydeep Ghosh,Eunsol Choi,Alex Dimakis,Aleksandar Bojchevski,sujay sanghavi", "tags": "NIPS 2024,Poster", "abstract": "Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights $\\mathbf{W}$ and inject learnable matrices $\\mathbf{\\Delta W}$. These $\\mathbf{\\Delta W}$ matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically exhibit a performance gap compared to full fine-tuning. While recent PEFT methods have narrowed this gap, they do so at the expense of additional learnable parameters. We propose SVFT, a *simple* approach that structures $\\mathbf{\\Delta W}$ based on the specific weight matrix $\\mathbf{W}$. SVFT updates $\\mathbf{W}$ as a sparse combination $M$ of outer products of its singular vectors, training only the coefficients of these combinations. Crucially, we make additional off-diagonal elements in $M$ learnable, enabling a smooth trade-off between trainable parameters and expressivity\u2014an aspect that distinctly sets our approach apart from previous works leveraging singular values. 
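The structure of this update can be sketched directly: freeze the weight and its singular vectors, and train only a sparse coefficient matrix (the diagonal-plus-band sparsity pattern and sizes below are illustrative choices, not the paper's exact configuration):

```python
# SVFT-style sketch: delta W = U @ (M * mask) @ Vh, with U, Vh the frozen singular
# vectors of W and only the sparse coefficients in M trained.
import torch

W = torch.randn(512, 512)                              # frozen pre-trained weight
U, _, Vh = torch.linalg.svd(W, full_matrices=False)    # frozen singular vectors

mask = torch.eye(512).bool() | torch.diag(torch.ones(511), diagonal=1).bool()
M = torch.zeros(512, 512, requires_grad=True)          # the only trainable tensor

def effective_weight() -> torch.Tensor:
    return W + U @ (M * mask) @ Vh                     # sparse combo of outer products

opt = torch.optim.AdamW([M], lr=1e-3)                  # optimize coefficients only
```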
Extensive experiments on language and vision benchmarks show that SVFT recovers up to **96%** of full fine-tuning performance while training only **0.006 to 0.25%** of parameters, outperforming existing methods that achieve only up to **85%** performance with **0.03 to 0.8%** of the trainable parameter budget.", "pdf": "https://openreview.net/pdf/adde9792a32c217eb77036b356c635e95d846915.pdf"} {"title": "Robust Neural Contextual Bandit against Adversarial Corruptions", "url": "https://openreview.net/forum?id=6U8iV9HVpS", "detail_url": "https://openreview.net/forum?id=6U8iV9HVpS", "authors": "Yunzhe Qi,Yikun Ban,Arindam Banerjee,Jingrui He", "tags": "NIPS 2024,Poster", "abstract": "Contextual bandit algorithms aim to identify the optimal arm with the highest reward among a set of candidates, based on the accessible contextual information. Among these algorithms, neural contextual bandit methods have shown generally superior performances against linear and kernel ones, due to the representation power of neural networks. However, similar to other neural network applications, neural bandit algorithms can be vulnerable to adversarial attacks or corruptions on the received labels (i.e., arm rewards), which can lead to unexpected performance degradation without proper treatments. As a result, it is necessary to improve the robustness of neural bandit models against potential reward corruptions. In this work, we propose a novel neural contextual bandit algorithm named R-NeuralUCB, which utilizes a novel context-aware Gradient Descent (GD) training strategy to improve the robustness against adversarial reward corruptions. Under over-parameterized neural network settings, we provide regret analysis for R-NeuralUCB to quantify reward corruption impacts, without the commonly adopted arm separateness assumption in existing neural bandit works. We also conduct experiments against baselines on real data sets under different scenarios, in order to demonstrate the effectiveness of our proposed R-NeuralUCB.", "pdf": "https://openreview.net/pdf/2898cb72d918ff4f69c0be90c98ca74b4e3b8e88.pdf"} {"title": "Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences", "url": "https://openreview.net/forum?id=CzPtBzgfae", "detail_url": "https://openreview.net/forum?id=CzPtBzgfae", "authors": "Abdurakhmon Sadiev,Grigory Malinovsky,Eduard Gorbunov,Igor Sokolov,Ahmed Khaled,Konstantin Pavlovich Burlachenko,Peter Richt\u00e1rik", "tags": "NIPS 2024,Poster", "abstract": "Gradient compression is a popular technique for improving communication complexity of stochastic first-order methods in distributed training of machine learning models. However, the existing works consider only with-replacement sampling of stochastic gradients. In contrast, it is well-known in practice and recently confirmed in theory that stochastic methods based on without-replacement sampling, e.g., Random Reshuffling (RR) method, perform better than ones that sample the gradients with-replacement. In this work, we close this gap in the literature and provide the first analysis of methods with gradient compression and without-replacement sampling. We first develop a distributed variant of random reshuffling with gradient compression (Q-RR), and show how to reduce the variance coming from gradient quantization through the use of control iterates. Next, to have a better fit to Federated Learning applications, we incorporate local computation and propose a variant of Q-RR called Q-NASTYA. 
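The "compress the difference, not the gradient" mechanism can be sketched with a generic unbiased compressor and a DIANA-style control iterate; the compressor and step size below are illustrative stand-ins, not the paper's exact scheme:

```python
# Sketch: quantize the *difference* between the gradient and a control iterate, so
# the compressed message shrinks as the control iterate tracks the gradient.
import torch

def q(v):                                   # toy unbiased stochastic sign compressor
    scale = v.abs().max() + 1e-12
    p = (v / scale + 1) / 2                 # E[2*Bernoulli(p) - 1] = v / scale
    return scale * (2 * torch.bernoulli(p) - 1)

dim, alpha = 10, 0.5
control = torch.zeros(dim)                  # kept in sync by worker and server

def communicate(grad):
    global control
    msg = q(grad - control)                 # compressed gradient difference
    g_hat = control + msg                   # server-side reconstruction
    control = control + alpha * msg         # both sides update the control iterate
    return g_hat
```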
Q-NASTYA uses local gradient steps and different local and global stepsizes. Next, we show how to reduce compression variance in this setting as well. Finally, we prove the convergence results for the proposed methods and outline several settings in which they improve upon existing algorithms.", "pdf": "https://openreview.net/pdf/5b46b7f13f760a5ccbe55baeb34675ba84df08dd.pdf"} {"title": "Connectivity-Driven Pseudo-Labeling Makes Stronger Cross-Domain Segmenters", "url": "https://openreview.net/forum?id=VIqQSFNjyP", "detail_url": "https://openreview.net/forum?id=VIqQSFNjyP", "authors": "Dong Zhao,Qi Zang,Shuang Wang,Nicu Sebe,Zhun Zhong", "tags": "NIPS 2024,Poster", "abstract": "Presently, pseudo-labeling stands as a prevailing approach in cross-domain semantic segmentation, enhancing model efficacy by training with pixels assigned with reliable pseudo-labels. However, we identify two key limitations within this paradigm: (1) under relatively severe domain shifts, most selected reliable pixels appear speckled and remain noisy. (2) when dealing with wild data, some pixels belonging to the open-set class may exhibit high confidence and also appear speckled. These two points make it difficult for the pixel-level selection mechanism to identify and correct these speckled close- and open-set noises. As a result, error accumulation is continuously introduced into subsequent self-training, leading to inefficiencies in pseudo-labeling. To address these limitations, we propose a novel method called Semantic Connectivity-driven Pseudo-labeling (SeCo). SeCo formulates pseudo-labels at the connectivity level, which makes it easier to locate and correct closed and open set noise. Specifically, SeCo comprises two key components: Pixel Semantic Aggregation (PSA) and Semantic Connectivity Correction (SCC). Initially, PSA categorizes semantics into ``stuff'' and ``things'' categories and aggregates speckled pseudo-labels into semantic connectivity through efficient interaction with the Segment Anything Model (SAM). This enables us not only to obtain accurate boundaries but also simplifies noise localization. Subsequently, SCC introduces a simple connectivity classification task, which enables us to locate and correct connectivity noise with the guidance of loss distribution. Extensive experiments demonstrate that SeCo can be flexibly applied to various cross-domain semantic segmentation tasks, \\textit{i.e.} domain generalization and domain adaptation, even including source-free, and black-box domain adaptation, significantly improving the performance of existing state-of-the-art methods. The code is provided in the appendix and will be open-source.", "pdf": "https://openreview.net/pdf/e0b827ff76f6a06b6237392d9b5c01b9d1ec8283.pdf"} {"title": "Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference", "url": "https://openreview.net/forum?id=0KvYLaTBTE", "detail_url": "https://openreview.net/forum?id=0KvYLaTBTE", "authors": "Deqian Kong,Dehong Xu,Minglu Zhao,Bo Pang,Jianwen Xie,Andrew Lizarraga,Yuhao Huang,Sirui Xie,Ying Nian Wu", "tags": "NIPS 2024,Poster", "abstract": "In tasks aiming for long-term returns, planning becomes essential. We study generative modeling for planning with datasets repurposed from offline reinforcement learning. Specifically, we identify temporal consistency in the absence of step-wise rewards as one key technical challenge. 
We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent variable to connect a Transformer-based trajectory generator and the final return. LPT can be learned with maximum likelihood estimation on trajectory-return pairs. In learning, posterior sampling of the latent variable naturally integrates sub-trajectories to form a consistent abstraction despite the finite context. At test time, the latent variable is inferred from an expected return before policy execution, realizing the idea of planning as inference. Our experiments demonstrate that LPT can discover improved decisions from sub-optimal trajectories, achieving competitive performance across several benchmarks, including Gym-Mujoco, Franka Kitchen, Maze2D, and Connect Four. It exhibits capabilities in nuanced credit assignments, trajectory stitching, and adaptation to environmental contingencies. These results validate that latent variable inference can be a strong alternative to step-wise reward prompting.", "pdf": "https://openreview.net/pdf/0b6e118d3320c87fc9990c0f1752d8af260e3055.pdf"} {"title": "The Mamba in the Llama: Distilling and Accelerating Hybrid Models", "url": "https://openreview.net/forum?id=uAzhODjALU", "detail_url": "https://openreview.net/forum?id=uAzhODjALU", "authors": "Junxiong Wang,Daniele Paliotta,Avner May,Alexander M Rush,Tri Dao", "tags": "NIPS 2024,Poster", "abstract": "Linear RNN architectures, like Mamba, can be competitive with Transformer models in language modeling while having advantageous deployment characteristics. Given the focus on training large-scale Transformer models, we consider the challenge of converting these pretrained models for deployment. \nWe demonstrate that it is feasible to distill large Transformers into linear RNNs by reusing the linear projection weights from attention layers with academic GPU resources. The resulting hybrid model, which incorporates a quarter of the attention layers, achieves performance comparable to the original Transformer in chat benchmarks and outperforms open-source hybrid Mamba models trained from scratch with trillions of tokens in both chat benchmarks and general benchmarks. Moreover, we introduce a hardware-aware speculative decoding algorithm that accelerates the inference speed of Mamba and hybrid models. Overall we show how, with limited computation resources, we can remove many of the original attention layers and generate from the resulting model more efficiently. \nOur top-performing model, distilled from Llama3-8B-Instruct, achieves a 29.61 length-controlled win rate on AlpacaEval 2 against GPT-4 and 7.35 on MT-Bench, surpassing the best 8B scale instruction-tuned linear RNN model.", "pdf": "https://openreview.net/pdf/eb0c260f8851c9b273a268fea0db46f8b5d35b2b.pdf"} {"title": "The Space Complexity of Approximating Logistic Loss", "url": "https://openreview.net/forum?id=vDlj3veE9a", "detail_url": "https://openreview.net/forum?id=vDlj3veE9a", "authors": "Gregory Dexter,Petros Drineas,Rajiv Khanna", "tags": "NIPS 2024,Poster", "abstract": "We provide space complexity lower bounds for data structures that approximate logistic loss up to $\epsilon$-relative error on a logistic regression problem with data $\mathbf{X} \in \mathbb{R}^{n \times d}$ and labels $\mathbf{y} \in \\{-1,1\\}^n$. The space complexity of existing coreset constructions depends on a natural complexity measure $\mu_\mathbf{y}(\mathbf{X})$. 
We give an $\\tilde{\\Omega}(\\frac{d}{\\epsilon^2})$ space complexity lower bound in the regime $\\mu_\\mathbf{y}(\\mathbf{X}) = \\mathcal{O}(1)$ that shows existing coresets are optimal in this regime up to lower order factors. We also prove a general $\\tilde{\\Omega}(d\\cdot \\mu_\\mathbf{y}(\\mathbf{X}))$ space lower bound when $\\epsilon$ is constant, showing that the dependency on $\\mu_\\mathbf{y}(\\mathbf{X})$ is not an artifact of mergeable coresets. Finally, we refute a prior conjecture that $\\mu_\\mathbf{y}(\\mathbf{X})$ is hard to compute by providing an efficient linear programming formulation, and we empirically compare our algorithm to prior approximate methods.", "pdf": "https://openreview.net/pdf/d07d7ff0d145d5e17efb7382670c8cce125b4acd.pdf"} {"title": "2D-OOB: Attributing Data Contribution Through Joint Valuation Framework", "url": "https://openreview.net/forum?id=vBxeeH1X4y", "detail_url": "https://openreview.net/forum?id=vBxeeH1X4y", "authors": "Yifan Sun,Jingyan Shen,Yongchan Kwon", "tags": "NIPS 2024,Poster", "abstract": "Data valuation has emerged as a powerful framework for quantifying each datum's contribution to the training of a machine learning model. However, it is crucial to recognize that the quality of cells within a single data point can vary greatly in practice. For example, even in the case of an abnormal data point, not all cells are necessarily noisy. The single scalar score assigned by existing data valuation methods blurs the distinction between noisy and clean cells of a data point, making it challenging to interpret the data values. In this paper, we propose 2D-OOB, an out-of-bag estimation framework for jointly determining helpful (or detrimental) samples as well as the particular cells that drive them. Our comprehensive experiments demonstrate that 2D-OOB achieves state-of-the-art performance across multiple use cases while being exponentially faster. Specifically, 2D-OOB shows promising results in detecting and rectifying fine-grained outliers at the cell level, and localizing backdoor triggers in data poisoning attacks.", "pdf": "https://openreview.net/pdf/65578d6d33f116ff8c52f0c5a25fcffd34ca1797.pdf"} {"title": "Online Posterior Sampling with a Diffusion Prior", "url": "https://openreview.net/forum?id=7v0UyO0B6q", "detail_url": "https://openreview.net/forum?id=7v0UyO0B6q", "authors": "Branislav Kveton,Boris N. Oreshkin,Youngsuk Park,Aniket Anand Deshmukh,Rui Song", "tags": "NIPS 2024,Poster", "abstract": "Posterior sampling in contextual bandits with a Gaussian prior can be implemented exactly or approximately using the Laplace approximation. The Gaussian prior is computationally efficient but it cannot describe complex distributions. In this work, we propose approximate posterior sampling algorithms for contextual bandits with a diffusion model prior. The key idea is to sample from a chain of approximate conditional posteriors, one for each stage of the reverse diffusion process, which are obtained by the Laplace approximation. Our approximations are motivated by posterior sampling with a Gaussian prior, and inherit its simplicity and efficiency. 
They are asymptotically consistent and perform well empirically on a variety of contextual bandit problems.", "pdf": "https://openreview.net/pdf/c9294f13ffb04f439eb04fca780e192dc4b2514e.pdf"} {"title": "MatFormer: Nested Transformer for Elastic Inference", "url": "https://openreview.net/forum?id=fYa6ezMxD5", "detail_url": "https://openreview.net/forum?id=fYa6ezMxD5", "authors": "Fnu Devvrit,Sneha Kudugunta,Aditya Kusupati,Tim Dettmers,Kaifeng Chen,Inderjit S Dhillon,Yulia Tsvetkov,Hannaneh Hajishirzi,Sham M. Kakade,Ali Farhadi,Prateek Jain", "tags": "NIPS 2024,Poster", "abstract": "Foundation models are applied in a broad spectrum of settings with different inference constraints, from massive multi-accelerator clusters to resource-constrained standalone mobile devices. However, the substantial costs associated with training these models often limit the number of unique model sizes that can be offered. Consequently, practitioners are compelled to select a model that may not be optimally aligned with their specific latency and cost requirements. We present MatFormer, a novel Transformer architecture designed to provide elastic inference across diverse deployment constraints. MatFormer achieves this by incorporating a nested Feed Forward Network (FFN) block structure within a standard Transformer model. During training, we optimize the parameters of multiple nested FFN blocks with varying sizes, enabling the extraction of hundreds of accurate smaller models without incurring additional computational costs. We empirically validate the efficacy of MatFormer across different model classes (decoders and encoders) and modalities (language and vision), demonstrating its potential for real-world deployment. We show that an 850M decoder-only MatFormer language model (MatLM) allows us to extract multiple smaller models spanning from 582M to 850M parameters, each exhibiting better validation loss and one-shot downstream evaluations than independently trained counterparts. Furthermore, we observe that smaller encoders extracted from a universal MatFormer-based ViT (MatViT) encoder preserve the metric-space structure for adaptive large-scale retrieval. Finally, we showcase that speculative decoding with the accurate and consistent submodels extracted from MatFormer can lead to significant reduction in inference latency.", "pdf": "https://openreview.net/pdf/eed604160d257e22b2e52cbbff2d5ab577fe90ae.pdf"} {"title": "HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data", "url": "https://openreview.net/forum?id=HUxtJcQpDS", "detail_url": "https://openreview.net/forum?id=HUxtJcQpDS", "authors": "Konstantin Hemker,Nikola Simidjievski,Mateja Jamnik", "tags": "NIPS 2024,Poster", "abstract": "Technological advances in medical data collection, such as high-throughput genomic sequencing and digital high-resolution histopathology, have contributed to the rising requirement for multimodal biomedical modelling, specifically for image, tabular and graph data. Most multimodal deep learning approaches use modality-specific architectures that are often trained separately and cannot capture the crucial cross-modal information that motivates the integration of different data sources. 
This paper presents the **H**ybrid **E**arly-fusion **A**ttention **L**earning **Net**work (HEALNet) \u2013 a flexible multimodal fusion architecture, which: a) preserves modality-specific structural information, b) captures the cross-modal interactions and structural information in a shared latent space, c) can effectively handle missing modalities during training and inference, and d) enables intuitive model inspection by learning on the raw data input instead of opaque embeddings. We conduct multimodal survival analysis on Whole Slide Images and Multi-omic data on four cancer datasets from The Cancer Genome Atlas (TCGA). HEALNet achieves state-of-the-art performance compared to other end-to-end trained fusion models, substantially improving over unimodal and multimodal baselines whilst being robust in scenarios with missing modalities. The code is available at https://github.com/konst-int-i/healnet.", "pdf": "https://openreview.net/pdf/7a421a12079a87d530f2e94d0dbb4d84190e551b.pdf"} {"title": "GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations", "url": "https://openreview.net/forum?id=ypggxVWIv2", "detail_url": "https://openreview.net/forum?id=ypggxVWIv2", "authors": "Jinhao Duan,Renming Zhang,James Diffenderfer,Bhavya Kailkhura,Lichao Sun,Elias Stengel-Eskin,Mohit Bansal,Tianlong Chen,Kaidi Xu", "tags": "NIPS 2024,Poster", "abstract": "As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial. This paper evaluates LLMs' reasoning abilities in competitive environments through game-theoretic tasks, e.g., board and card games that require pure logic and strategic reasoning to compete with opponents. We first propose GTBench, a language-driven environment composing 10 widely-recognized tasks, across a comprehensive game taxonomy: complete versus incomplete information, dynamic versus static, and probabilistic versus deterministic scenarios. Then, we (1) Characterize the game-theoretic reasoning of LLMs; and (2) Perform LLM-vs.-LLM competitions as reasoning evaluation. We observe that (1) LLMs have distinct behaviors regarding various gaming scenarios; for example, LLMs fail in complete and deterministic games yet they are competitive in probabilistic gaming scenarios; (2) Most open-source LLMs, e.g., CodeLlama-34b-Instruct and Llama-2-70b-chat, are less competitive than commercial LLMs, e.g., GPT-4, in complex games, yet the recently released Llama-3-70b-Instruct makes up for this shortcoming. In addition, code-pretraining greatly benefits strategic reasoning, while advanced reasoning methods such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT) do not always help. We further characterize the game-theoretic properties of LLMs, such as equilibrium and Pareto Efficiency in repeated games. Detailed error profiles are provided for a better understanding of LLMs' behavior. 
We hope our research provides standardized protocols and serves as a foundation to spur further explorations in the strategic reasoning of LLMs.", "pdf": "https://openreview.net/pdf/1616ae3f3c1970951b0401486556c3a49f3df00c.pdf"} {"title": "Fast and Memory-Efficient Video Diffusion Using Streamlined Inference", "url": "https://openreview.net/forum?id=iNvXYQrkpi", "detail_url": "https://openreview.net/forum?id=iNvXYQrkpi", "authors": "Zheng Zhan,Yushu Wu,Yifan Gong,Zichong Meng,Zhenglun Kong,Changdi Yang,Geng Yuan,Pu Zhao,Wei Niu,Yanzhi Wang", "tags": "NIPS 2024,Poster", "abstract": "The rapid progress in artificial intelligence-generated content (AIGC), especially with diffusion models, has significantly advanced development of high-quality video generation. However, current video diffusion models exhibit demanding computational requirements and high peak memory usage, especially for generating longer and higher-resolution videos. These limitations greatly hinder the practical application of video diffusion models on standard hardware platforms. To tackle this issue, we present a novel, training-free framework named Streamlined Inference, which leverages the temporal and spatial properties of video diffusion models. Our approach integrates three core components: Feature Slicer, Operator Grouping, and Step Rehash. Specifically, Feature Slicer effectively partitions input features into sub-features and Operator Grouping processes each sub-feature with a group of consecutive operators, resulting in significant memory reduction without sacrificing the quality or speed. Step Rehash further exploits the similarity between adjacent steps in diffusion, and accelerates inference through skipping unnecessary steps. Extensive experiments demonstrate that our approach significantly reduces peak memory and computational overhead, making it feasible to generate high-quality videos on a single consumer GPU (e.g., reducing peak memory of Animatediff from 42GB to 11GB, featuring faster inference on 2080Ti).", "pdf": "https://openreview.net/pdf/746d0dded936a447f6abe89c86574a1936c3d8bd.pdf"} {"title": "GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts", "url": "https://openreview.net/forum?id=QtYg4g3Deu", "detail_url": "https://openreview.net/forum?id=QtYg4g3Deu", "authors": "Shirley Wu,Kaidi Cao,Bruno Ribeiro,James Zou,Jure Leskovec", "tags": "NIPS 2024,Poster", "abstract": "Graph data are inherently complex and heterogeneous, leading to a high natural diversity of distributional shifts. However, it remains unclear how to build machine learning architectures that generalize to the complex distributional shifts naturally occurring in the real world. Here, we develop GraphMETRO, a Graph Neural Network architecture that models natural diversity and captures complex distributional shifts. GraphMETRO employs a Mixture-of-Experts (MoE) architecture with a gating model and multiple expert models, where each expert model targets a specific distributional shift to produce a referential representation w.r.t. a reference model, and the gating model identifies shift components. Additionally, we design a novel objective that aligns the representations from different expert models to ensure reliable optimization. GraphMETRO achieves state-of-the-art results on four datasets from the GOOD benchmark, which is comprised of complex and natural real-world distribution shifts, improving by 67% and 4.2% on the WebKB and Twitch datasets. 
Code and data are available at https://github.com/Wuyxin/GraphMETRO.", "pdf": "https://openreview.net/pdf/a59c39b6e2e8c57881013980e1abeffcb307c42e.pdf"} {"title": "DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control", "url": "https://openreview.net/forum?id=vUrOuc6NR3", "detail_url": "https://openreview.net/forum?id=vUrOuc6NR3", "authors": "Zichen Jeff Cui,Hengkai Pan,Aadhithya Iyer,Siddhant Haldar,Lerrel Pinto", "tags": "NIPS 2024,Poster", "abstract": "Imitation learning has proven to be a powerful tool for training complex visuo-motor policies. However, current methods often require hundreds to thousands of expert demonstrations to handle high-dimensional visual observations. A key reason for this poor data efficiency is that visual representations are predominantly either pretrained on out-of-domain data or trained directly through a behavior cloning objective. In this work, we present DynaMo, a new in-domain, self-supervised method for learning visual representations. Given a set of expert demonstrations, we jointly learn a latent inverse dynamics model and a forward dynamics model over a sequence of image embeddings, predicting the next frame in latent space, without augmentations, contrastive sampling, or access to ground truth actions. Importantly, DynaMo does not require any out-of-domain data such as Internet datasets or cross-embodied datasets. On a suite of six simulated and real environments, we show that representations learned with DynaMo significantly improve downstream imitation learning performance over prior self-supervised learning objectives and pretrained representations. Gains from using DynaMo hold across policy classes such as Behavior Transformer, Diffusion Policy, MLP, and nearest neighbors. Finally, we ablate over key components of DynaMo and measure their impact on downstream policy performance. Robot videos are best viewed at https://dynamo-ssl.github.io.", "pdf": "https://openreview.net/pdf/a80285940d66984b6d99e1990c79614edb3af61b.pdf"} {"title": "AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning", "url": "https://openreview.net/forum?id=N4quRxE19p", "detail_url": "https://openreview.net/forum?id=N4quRxE19p", "authors": "Shirley Wu,Shiyu Zhao,Qian Huang,Kexin Huang,Michihiro Yasunaga,Kaidi Cao,Vassilis N. Ioannidis,Karthik Subbian,Jure Leskovec,James Zou", "tags": "NIPS 2024,Poster", "abstract": "Large language model (LLM) agents have demonstrated impressive capabilities in utilizing external tools and knowledge to boost accuracy and reduce hallucinations. However, developing prompting techniques that enable LLM agents to effectively use these tools and knowledge remains a heuristic and labor-intensive task. Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task. During optimization, we design a comparator module to iteratively deliver insightful and comprehensive prompts to the LLM agent by contrastively reasoning between positive and negative examples sampled from training data. We demonstrate AvaTaR on four complex multimodal retrieval datasets featuring textual, visual, and relational information, and three general question-answering (QA) datasets. 
We find AvaTaR consistently outperforms state-of-the-art approaches across all seven tasks, exhibiting strong generalization ability when applied to novel cases and achieving an average relative improvement of 14% on the Hit@1 metric for the retrieval datasets and 13% for the QA datasets. Code and dataset are available at https://github.com/zou-group/avatar.", "pdf": "https://openreview.net/pdf/f7db92d498b11d9cde7e3292c993419ad4109a4f.pdf"} {"title": "Exploring Token Pruning in Vision State Space Models", "url": "https://openreview.net/forum?id=eWiGn0Fcdx", "detail_url": "https://openreview.net/forum?id=eWiGn0Fcdx", "authors": "Zheng Zhan,Zhenglun Kong,Yifan Gong,Yushu Wu,Zichong Meng,Hangyu Zheng,Xuan Shen,Stratis Ioannidis,Wei Niu,Pu Zhao,Yanzhi Wang", "tags": "NIPS 2024,Poster", "abstract": "State Space Models (SSMs) have the advantage of keeping linear computational complexity compared to attention modules in transformers, and have been applied to vision tasks as a new type of powerful vision foundation model. Inspired by the observation that the final prediction in vision transformers (ViTs) is only based on a subset of the most informative tokens, we take the novel step of enhancing the efficiency of SSM-based vision models through token-based pruning. However, direct applications of existing token pruning techniques designed for ViTs fail to deliver good performance, even with extensive fine-tuning. To address this issue, we revisit the unique computational characteristics of SSMs and discover that naive application disrupts the sequential token positions. This insight motivates us to design a novel and general token pruning method specifically for SSM-based vision models. We first introduce a pruning-aware hidden state alignment method to stabilize the neighborhood of remaining tokens for performance enhancement. In addition, based on our detailed analysis, we propose a token importance evaluation method adapted for SSM models to guide the token pruning. With an efficient implementation and practical acceleration methods, our method brings actual speedup. Extensive experiments demonstrate that our approach can achieve significant computation reduction with minimal impact on performance across different tasks. Notably, we achieve 81.7\\% accuracy on ImageNet with a 41.6\\% reduction in FLOPs for the pruned PlainMamba-L3. Furthermore, our work provides deeper insights into understanding the behavior of SSM-based vision models for future research.", "pdf": "https://openreview.net/pdf/4b28fb34d078ed0cb16c5e0ad1f85d1229adc4b7.pdf"} {"title": "Mixture of Nested Experts: Adaptive Processing of Visual Tokens", "url": "https://openreview.net/forum?id=HbV5vRJMOY", "detail_url": "https://openreview.net/forum?id=HbV5vRJMOY", "authors": "Gagan Jain,Nidhi Hegde,Aditya Kusupati,Arsha Nagrani,Shyamal Buch,Prateek Jain,Anurag Arnab,Sujoy Paul", "tags": "NIPS 2024,Poster", "abstract": "The visual medium (images and videos) naturally contains a large amount of information redundancy, thereby providing a great opportunity for leveraging efficiency in processing. While Vision Transformer (ViT) based models scale effectively to large data regimes, they fail to capitalize on this inherent redundancy, leading to higher computational costs. Mixture of Experts (MoE) networks demonstrate scalability while maintaining the same inference-time costs, but they come with a larger parameter footprint. 
We present Mixture of Nested Experts (MoNE), which utilizes a nested structure for experts, wherein individual experts fall on an increasing compute-accuracy curve. Given a compute budget, MoNE learns to dynamically choose tokens in a priority order, and thus redundant tokens are processed through cheaper nested experts. Using this framework, we achieve performance equivalent to the baseline models, while reducing inference-time compute by over two-fold. We validate our approach on standard image and video datasets: ImageNet-21K, Kinetics400, and Something-Something-v2. We further highlight MoNE's adaptability by showcasing its ability to maintain strong performance across different inference-time compute budgets on videos, using only a single trained model.", "pdf": "https://openreview.net/pdf/34a8b1b90acda923db4408bc2dca4c3d5f6d3531.pdf"} {"title": "Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass", "url": "https://openreview.net/forum?id=KSOkkHm9I7", "detail_url": "https://openreview.net/forum?id=KSOkkHm9I7", "authors": "Ethan Shen,Alan Fan,Sarah M Pratt,Jae Sung Park,Matthew Wallingford,Sham M. Kakade,Ari Holtzman,Ranjay Krishna,Ali Farhadi,Aditya Kusupati", "tags": "NIPS 2024,Poster", "abstract": "Many applications today provide users with multiple auto-complete drafts as they type, including GitHub's code completion, Gmail's smart compose, and Apple's messaging auto-suggestions. Under the hood, language models support this by running an autoregressive inference pass to provide a draft. Consequently, providing $k$ drafts to the user requires running an expensive language model $k$ times. To alleviate the computation cost of running $k$ inference passes, we propose Superposed Decoding, a new decoding algorithm that generates $k$ drafts at the computation cost of one autoregressive inference pass. We achieve this by feeding a superposition of the most recent token embeddings from the $k$ drafts as input to the next decoding step of the language model. At every inference step we combine the $k$ drafts with the top-$k$ tokens to get $k^2$ new drafts and cache the $k$ most likely options, using an n-gram interpolation with minimal compute overhead to filter out incoherent generations. Our experiments show that $k$ drafts from Superposed Decoding are at least as coherent and factual as Nucleus Sampling and Greedy Decoding respectively, while being at least $2.44\\times$ faster for $k\\ge3$. In a compute-normalized setting, user evaluations demonstrably favor text generated by Superposed Decoding over Nucleus Sampling. Superposed Decoding can also be combined with other decoding strategies, resulting in universal coverage gains when scaling inference-time compute. Code and more examples are open-sourced at https://github.com/RAIVNLab/SuperposedDecoding.", "pdf": "https://openreview.net/pdf/05fda488855ff14b7237ddd637cedcbe3f1a129d.pdf"} {"title": "Adapting to Unknown Low-Dimensional Structures in Score-Based Diffusion Models", "url": "https://openreview.net/forum?id=SnTxbQSrW7", "detail_url": "https://openreview.net/forum?id=SnTxbQSrW7", "authors": "Gen Li,Yuling Yan", "tags": "NIPS 2024,Poster", "abstract": "This paper investigates score-based diffusion models when the underlying target distribution is concentrated on or near low-dimensional manifolds within the higher-dimensional space in which they formally reside, a common characteristic of natural image distributions. 
Despite previous efforts to understand the data generation process of diffusion models, existing theoretical support remains highly suboptimal in the presence of low-dimensional structure; we strengthen it in this paper. For the popular Denoising Diffusion Probabilistic Model (DDPM), we find that the dependency of the error incurred within each denoising step on the ambient dimension $d$ is in general unavoidable. We further identify a unique design of coefficients that yields a convergence rate of order $O(k^{2}/\\sqrt{T})$ (up to log factors), where $k$ is the intrinsic dimension of the target distribution and $T$ is the number of steps. This represents the first theoretical demonstration that the DDPM sampler can adapt to unknown low-dimensional structures in the target distribution, highlighting the critical importance of coefficient design. All of this is achieved by a novel set of analysis tools that characterize the algorithmic dynamics in a more deterministic manner.", "pdf": "https://openreview.net/pdf/afbf7c4fa794ff6224a121ed9ea89f5d072d5f65.pdf"} {"title": "BitsFusion: 1.99 bits Weight Quantization of Diffusion Model", "url": "https://openreview.net/forum?id=0m19blQT6y", "detail_url": "https://openreview.net/forum?id=0m19blQT6y", "authors": "Yang Sui,Yanyu Li,Anil Kag,Yerlan Idelbayev,Junli Cao,Ju Hu,Dhritiman Sagar,Bo Yuan,Sergey Tulyakov,Jian Ren", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based image generation models have achieved great success in recent years by showing the capability of synthesizing high-quality content. However, these models contain a huge number of parameters, resulting in a significantly large model size. Saving and transferring them is a major bottleneck for various applications, especially those running on resource-constrained devices. In this work, we develop a novel weight quantization method that quantizes the UNet from Stable Diffusion v1.5 to $1.99$ bits, achieving a model with $7.9\\times$ smaller size while exhibiting even better generation quality than the original one. Our approach includes several novel techniques, such as assigning optimal bits to each layer, initializing the quantized model for better performance, and improving the training strategy to dramatically reduce quantization error. Furthermore, we extensively evaluate our quantized model across various benchmark datasets and through human evaluation to demonstrate its superior generation quality.", "pdf": "https://openreview.net/pdf/1f5fe51e21bc16b8079186f8972da0ea4b388b85.pdf"} {"title": "Uncertainty-aware Fine-tuning of Segmentation Foundation Models", "url": "https://openreview.net/forum?id=qNXRXUC90b", "detail_url": "https://openreview.net/forum?id=qNXRXUC90b", "authors": "Kangning Liu,Brian L. Price,Jason Kuen,Yifei Fan,Zijun Wei,Luis Figueroa,Krzysztof J. Geras,Carlos Fernandez-Granda", "tags": "NIPS 2024,Poster", "abstract": "The Segment Anything Model (SAM) is a large-scale foundation model that has revolutionized segmentation methodology. Despite its impressive generalization ability, the segmentation accuracy of SAM on images with intricate structures is often unsatisfactory. Recent works have proposed lightweight fine-tuning using high-quality annotated data to improve accuracy on such images. 
However, here we provide extensive empirical evidence that this strategy leads to forgetting how to \"segment anything\": these models lose the original generalization abilities of SAM, in the sense that they perform worse for segmentation tasks not represented in the annotated fine-tuning set. To improve performance without forgetting, we introduce a novel framework that combines high-quality annotated data with a large unlabeled dataset. The framework relies on two methodological innovations. First, we quantify the uncertainty in the SAM pseudo labels associated with the unlabeled data and leverage it to perform uncertainty-aware fine-tuning. Second, we encode the type of segmentation task associated with each training example using a $\\textit{task prompt}$ to reduce ambiguity. We evaluated the proposed Segmentation with Uncertainty Model (SUM) on a diverse test set consisting of 14 public benchmarks, where it achieves state-of-the-art results. Notably, our method consistently surpasses SAM by 3-6 points in mean IoU and 4-7 points in mean boundary IoU across point-prompt interactive segmentation rounds. Code is available at https://github.com/Kangningthu/SUM", "pdf": "https://openreview.net/pdf/0fc25afd9425b35ac875b47913c45db57670edef.pdf"} {"title": "DeformableTST: Transformer for Time Series Forecasting without Over-reliance on Patching", "url": "https://openreview.net/forum?id=B1Iq1EOiVU", "detail_url": "https://openreview.net/forum?id=B1Iq1EOiVU", "authors": "Donghao Luo,Xue Wang", "tags": "NIPS 2024,Poster", "abstract": "With the proposal of the patching technique in time series forecasting, Transformer-based models have achieved compelling performance and gained great interest from the time series community. But at the same time, we observe a new problem: the recent Transformer-based models are overly reliant on patching to achieve ideal performance, which limits their applicability to some forecasting tasks unsuitable for patching. In this paper, we intend to handle this emerging issue. By diving into the relationship between patching and full attention (the core mechanism in Transformer-based models), we further find that the reason behind this issue is that full attention relies overly on the guidance of patching to focus on the important time points and learn non-trivial temporal representations. Based on this finding, we propose DeformableTST as an effective solution to this emerging issue. Specifically, we propose deformable attention, a sparse attention mechanism that can better focus on the important time points by itself, to get rid of the need for patching. We also adopt a hierarchical structure to alleviate the efficiency issue caused by the removal of patching. Experimentally, our DeformableTST achieves consistent state-of-the-art performance in a broader range of time series tasks, especially achieving promising performance in forecasting tasks unsuitable for patching, thereby successfully reducing the reliance on patching and broadening the applicability of Transformer-based models. 
Code is available at this repository: https://github.com/luodhhh/DeformableTST.", "pdf": "https://openreview.net/pdf/b6a09f0041b665553173bb4fd3a004d62b3e3472.pdf"} {"title": "MAmmoTH2: Scaling Instructions from the Web", "url": "https://openreview.net/forum?id=yVu5dnPlqA", "detail_url": "https://openreview.net/forum?id=yVu5dnPlqA", "authors": "Xiang Yue,Tianyu Zheng,Ge Zhang,Wenhu Chen", "tags": "NIPS 2024,Poster", "abstract": "Instruction tuning improves the reasoning abilities of large language models (LLMs), with data quality and scalability being the crucial factors. Most instruction tuning data come from human crowd-sourcing or GPT-4 distillation. We propose a paradigm to efficiently harvest 10 million naturally existing instruction data from the pre-training web corpus to enhance LLM reasoning. Our approach involves (1) recalling relevant documents, (2) extracting instruction-response pairs, and (3) refining the extracted pairs using open-source LLMs. Fine-tuning base LLMs on this dataset, we build MAmmoTH2 models, which significantly boost performance on reasoning benchmarks. Notably, MAmmoTH2-7B's (Mistral) performance increases from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K without training on any in-domain data. Further training MAmmoTH2 on public instruction tuning datasets yields MAmmoTH2-Plus, achieving state-of-the-art performance on several reasoning and chatbot benchmarks. Our work demonstrates how to harvest large-scale, high-quality instruction data without costly human annotation or GPT-4 distillation, providing a new paradigm for building better instruction tuning data.", "pdf": "https://openreview.net/pdf/bd04b97c020c0de7784c77b26776ae56292d2d38.pdf"} {"title": "CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning", "url": "https://openreview.net/forum?id=Gi00NVru6n", "detail_url": "https://openreview.net/forum?id=Gi00NVru6n", "authors": "Yibo Yang,Xiaojie Li,Zhongzhu Zhou,Shuaiwen Leon Song,Jianlong Wu,Liqiang Nie,Bernard Ghanem", "tags": "NIPS 2024,Poster", "abstract": "Current parameter-efficient fine-tuning (PEFT) methods build adapters that are largely agnostic of the context of the downstream task to learn, or of the important knowledge to maintain. As a result, there is often a performance gap compared to full-parameter fine-tuning, while the fine-tuned model suffers from catastrophic forgetting of the pre-trained world knowledge. In this paper, we propose **CorDA**, a Context-oriented Decomposition Adaptation method that builds learnable **task-aware adapters** from weight decomposition oriented by the context of the downstream task or the world knowledge to maintain. Concretely, we collect a few data samples, and perform singular value decomposition for each linear layer of a pre-trained LLM multiplied by the covariance matrix of the input activation using these samples. The inverse of the covariance matrix is multiplied with the decomposed components to reconstruct the original weights. By doing so, the context of the representative samples is captured by deciding the factorization orientation. Our method enables two options, the **knowledge-preserved adaptation** and the **instruction-previewed adaptation**. For the former, we use question-answering samples to obtain the covariance matrices, and use the decomposed components with the smallest $r$ singular values to initialize a learnable adapter, with the others frozen such that the world knowledge is better preserved. 
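The covariance-oriented decomposition just described can be sketched compactly. Below is a minimal, illustrative numpy rendering of the knowledge-preserved option, not the released CorDA code; the function name, the jitter term, and the toy shapes are assumptions:

```python
import numpy as np

def knowledge_preserved_split(W, X, r, eps=1e-6):
    """Covariance-oriented SVD split in the spirit described above (sketch).

    W: (out_dim, in_dim) weight of one linear layer.
    X: (n, in_dim) sampled input activations providing the context.
    r: number of smallest singular directions kept learnable.
    """
    # Covariance of the input activations; eps jitter keeps it invertible.
    C = X.T @ X / len(X) + eps * np.eye(X.shape[1])
    # Decompose the context-weighted weight matrix W @ C.
    U, S, Vt = np.linalg.svd(W @ C, full_matrices=False)
    C_inv = np.linalg.inv(C)
    # Multiplying each component by C^{-1} restores W exactly.
    parts = [S[i] * np.outer(U[:, i], Vt[i]) @ C_inv for i in range(len(S))]
    frozen = sum(parts[:-r])   # large singular directions: kept frozen
    adapter = sum(parts[-r:])  # smallest r directions: the learnable adapter
    return frozen, adapter

W, X = np.random.randn(16, 32), np.random.randn(64, 32)
frozen, adapter = knowledge_preserved_split(W, X, r=4)
print(np.abs(frozen + adapter - W).max())  # reconstruction error ~ 0
```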
For the latter, we use the instruction data from the fine-tuning task, such as math or coding, to orient the decomposition and train the largest $r$ components that most correspond to the task to learn. We conduct extensive experiments on Math, Code, and Instruction Following tasks. Our knowledge-preserved adaptation not only achieves better performance than LoRA on fine-tuning tasks, but also mitigates the forgetting of world knowledge. Our instruction-previewed adaptation is able to further enhance the fine-tuning performance to be comparable with full fine-tuning, surpassing the state-of-the-art PEFT methods such as LoRA, DoRA, and PiSSA.", "pdf": "https://openreview.net/pdf/f853b429f8de16c5f0f984d92abc92e21cd4a7b5.pdf"} {"title": "Cluster-Learngene: Inheriting Adaptive Clusters for Vision Transformers", "url": "https://openreview.net/forum?id=92vVuJVLVW", "detail_url": "https://openreview.net/forum?id=92vVuJVLVW", "authors": "Qiufeng Wang,Xu Yang,Fu Feng,Jing wang,Xin Geng", "tags": "NIPS 2024,Poster", "abstract": "In recent years, the merging of vast datasets with powerful computational resources has led to the emergence of large pre-trained models in the field of deep learning. However, the common practices often overgeneralize the applicability of these models, overlooking the task-specific resource constraints. To mitigate this issue, we propose \\textbf{Cluster-Learngene}, which effectively clusters critical internal modules from a large ancestry model and then inherits them to initialize descendant models of elastic scales. Specifically, based on the density characteristics of attention heads, our method adaptively clusters attention heads of each layer and position-wise feed-forward networks (FFNs) in the ancestry model as the learngene. Moreover, we introduce priority weight-sharing and learnable parameter transformations that expand the learngene to initialize descendant models of elastic scales. Through extensive experimentation, we demonstrate that Cluster-Learngene is not only more efficient than other initialization methods but also customizes models of elastic scales according to downstream task resources.", "pdf": "https://openreview.net/pdf/cee6ddb5740f9f6ec6d2bd161cccf647d3a5aa8f.pdf"} {"title": "Transforming Vision Transformer: Towards Efficient Multi-Task Asynchronous Learner", "url": "https://openreview.net/forum?id=VWf6ZVx5S2", "detail_url": "https://openreview.net/forum?id=VWf6ZVx5S2", "authors": "Hanwen Zhong,Jiaxin Chen,Yutong Zhang,Di Huang,Yunhong Wang", "tags": "NIPS 2024,Poster", "abstract": "Multi-Task Learning (MTL) for Vision Transformer aims at enhancing the model capability by tackling multiple tasks simultaneously. Most recent works have predominantly focused on designing Mixture-of-Experts (MoE) structures and integrating Low-Rank Adaptation (LoRA) to efficiently perform multi-task learning. However, their rigid combination hampers both the optimization of MoE and the effectiveness of reparameterization of LoRA, leading to sub-optimal performance and low inference speed. In this work, we propose a novel approach dubbed Efficient Multi-Task Learning (EMTAL) by transforming a pre-trained Vision Transformer into an efficient multi-task learner during training, and reparameterizing the learned structure for efficient inference. Specifically, we first develop the MoEfied LoRA structure, which decomposes the pre-trained Transformer into a low-rank MoE structure and employs LoRA to fine-tune the parameters. 
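Since both this record and the previous one build on LoRA-style adapters, a generic sketch of the reparameterization may help. This is the textbook LoRA update y = W0 x + (alpha/r) B A x and its inference-time merge, not any of these papers' released code; the class name and hyperparameter defaults are assumptions:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic low-rank adaptation of a frozen linear layer (sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # W0 stays frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init
        self.scale = alpha / rank

    def forward(self, x):
        # y = x W0^T + scale * x A^T B^T; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    def merge(self) -> nn.Linear:
        # Reparameterize for inference: fold scale * B A into the dense weight.
        merged = nn.Linear(self.base.in_features, self.base.out_features)
        merged.weight.data = (self.base.weight + self.scale * (self.B @ self.A)).detach()
        merged.bias.data = self.base.bias.detach().clone()
        return merged

layer = LoRALinear(nn.Linear(64, 64))
x = torch.randn(2, 64)
print(torch.allclose(layer(x), layer.merge()(x), atol=1e-6))  # True
```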
Subsequently, we take into account the intrinsic asynchronous nature of multi-task learning and devise a learning Quality Retaining (QR) optimization mechanism by leveraging the historical high-quality class logits to prevent a well-trained task from performance degradation. Finally, we design a router fading strategy to integrate the learned parameters into the original Transformer, achieving efficient inference. Extensive experiments on public benchmarks demonstrate the superiority of our method compared to the state-of-the-art multi-task learning approaches.", "pdf": "https://openreview.net/pdf/493f3495ca5664245ec796e2853efef1372a7f3b.pdf"} {"title": "Hierarchical Programmatic Option Framework", "url": "https://openreview.net/forum?id=FeCWZviCeP", "detail_url": "https://openreview.net/forum?id=FeCWZviCeP", "authors": "Yu-An Lin,Chen-Tao Lee,Chih-Han Yang,Guan-Ting Liu,Shao-Hua Sun", "tags": "NIPS 2024,Poster", "abstract": "Deep reinforcement learning aims to learn deep neural network policies to solve large-scale decision-making problems. However, approximating policies using deep neural networks makes it difficult to interpret the learned decision-making process. To address this issue, prior works (Trivedi et al., 2021; Liu et al., 2023; Carvalho et al., 2024) proposed to use human-readable programs as policies to increase the interpretability of the decision-making pipeline. Nevertheless, programmatic policies generated by these methods struggle to effectively solve long and repetitive RL tasks and cannot generalize to even longer horizons during testing. To solve these problems, we propose the Hierarchical Programmatic Option framework (HIPO), which aims to solve long and repetitive RL problems with human-readable programs as options (low-level policies). Specifically, we propose a method that retrieves a set of effective, diverse, and compatible programs as options. Then, we learn a high-level policy to effectively reuse these programmatic options to solve reoccurring subtasks. Our proposed framework outperforms programmatic RL and deep RL baselines on various tasks. Ablation studies justify the effectiveness of our proposed search algorithm for retrieving a set of programmatic options.", "pdf": "https://openreview.net/pdf/f08e8e4a46ab747a1ae5c9ae6e9da6e38df2a34e.pdf"} {"title": "Efficiently Learning Significant Fourier Feature Pairs for Statistical Independence Testing", "url": "https://openreview.net/forum?id=BEiqNQZIky", "detail_url": "https://openreview.net/forum?id=BEiqNQZIky", "authors": "Yixin Ren,Yewei Xia,Hao Zhang,Jihong Guan,Shuigeng Zhou", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel method to efficiently learn significant Fourier feature pairs for maximizing the power of Hilbert-Schmidt Independence Criterion (HSIC)-based independence tests. We first reinterpret HSIC in the frequency domain, which reveals its limited discriminative power due to the inability to adapt to specific frequency-domain features under the current inflexible configuration. To remedy this shortcoming, we introduce a module of learnable Fourier features, thereby developing a new criterion. We then derive a finite sample estimate of the test power by modeling the behavior of the criterion, thus formulating an optimization objective for significant Fourier feature pairs learning. We show that this optimization objective can be computed in linear time (with respect to the sample size $n$), which ensures fast independence tests. 
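For intuition, the linear-time flavor of such a Fourier-feature independence statistic can be sketched as follows. This toy version draws fixed random frequencies instead of learning significant pairs as the paper does; all names and the frequency count are illustrative assumptions:

```python
import numpy as np

def fourier_hsic(x, y, num_freqs=64, seed=0):
    """HSIC-style statistic from Fourier feature maps (illustrative sketch).

    x: (n, dx), y: (n, dy) paired samples; cost is linear in n.
    """
    rng = np.random.default_rng(seed)
    wx = rng.standard_normal((x.shape[1], num_freqs))
    wy = rng.standard_normal((y.shape[1], num_freqs))
    # Feature maps phi(x) = [cos(xW), sin(xW)] / sqrt(D), likewise for y.
    fx = np.hstack([np.cos(x @ wx), np.sin(x @ wx)]) / np.sqrt(num_freqs)
    fy = np.hstack([np.cos(y @ wy), np.sin(y @ wy)]) / np.sqrt(num_freqs)
    fx -= fx.mean(0)
    fy -= fy.mean(0)
    # Squared Frobenius norm of the cross-covariance of the features.
    cov = fx.T @ fy / len(x)
    return np.sum(cov ** 2)

n = 2000
x = np.random.randn(n, 1)
print(fourier_hsic(x, x**2 + 0.1 * np.random.randn(n, 1)))  # dependent: large
print(fourier_hsic(x, np.random.randn(n, 1)))               # independent: ~0
```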
We also prove the convergence property of the optimization objective and establish the consistency of the independence tests. Extensive empirical evaluation on both synthetic and real datasets validates our method's superiority in effectiveness and efficiency, particularly in handling high-dimensional data and dealing with large-scale scenarios.", "pdf": "https://openreview.net/pdf/dc1dd582e88e396ec283acc538f1c23ec2226c3c.pdf"} {"title": "Search for Efficient Large Language Models", "url": "https://openreview.net/forum?id=lxSmLxlVks", "detail_url": "https://openreview.net/forum?id=lxSmLxlVks", "authors": "Xuan Shen,Pu Zhao,Yifan Gong,Zhenglun Kong,Zheng Zhan,Yushu Wu,Ming Lin,Chao Wu,Xue Lin,Yanzhi Wang", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have long held sway in the realms of artificial intelligence research. Numerous efficient techniques, including weight pruning, quantization, and distillation, have been embraced to compress LLMs, targeting memory reduction and inference acceleration, which underscore the redundancy in LLMs. However, most model compression techniques concentrate on weight optimization, overlooking the exploration of optimal architectures. Besides, traditional architecture search methods, limited by the elevated complexity with extensive parameters, struggle to demonstrate their effectiveness on LLMs. In this paper, we propose a training-free architecture search framework to identify optimal subnets that preserve the fundamental strengths of the original LLMs while achieving inference acceleration. Furthermore, after generating subnets that inherit specific weights from the original LLMs, we introduce a reformation algorithm that utilizes the omitted weights to rectify the inherited weights with a small amount of calibration data. Compared with SOTA training-free structured pruning works that can generate smaller networks, our method demonstrates superior performance across standard benchmarks. Furthermore, our generated subnets can directly reduce the usage of GPU memory and achieve inference acceleration.", "pdf": "https://openreview.net/pdf/52a4b2cabc9e2c0c70ccee7a15600851da69cfb9.pdf"} {"title": "Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning", "url": "https://openreview.net/forum?id=7Tir0u0ukg", "detail_url": "https://openreview.net/forum?id=7Tir0u0ukg", "authors": "Hao-Lun Hsu,Weixin Wang,Miroslav Pajic,Pan Xu", "tags": "NIPS 2024,Poster", "abstract": "We present the first study on provably efficient randomized exploration in cooperative multi-agent reinforcement learning (MARL). We propose a unified algorithm framework for randomized exploration in parallel Markov Decision Processes (MDPs), and two Thompson Sampling (TS)-type algorithms, CoopTS-PHE and CoopTS-LMC, incorporating the perturbed-history exploration (PHE) strategy and the Langevin Monte Carlo exploration (LMC) strategy respectively, which are flexible in design and easy to implement in practice. For a special class of parallel MDPs where the transition is (approximately) linear, we theoretically prove that both CoopTS-PHE and CoopTS-LMC achieve a $\\widetilde{\\mathcal{O}}(d^{3/2}H^2\\sqrt{MK})$ regret bound with communication complexity $\\widetilde{\\mathcal{O}}(dHM^2)$, where $d$ is the feature dimension, $H$ is the horizon length, $M$ is the number of agents, and $K$ is the number of episodes. This is the first theoretical result for randomized exploration in cooperative MARL. 
We evaluate our proposed method on multiple parallel RL environments, including a deep exploration problem (i.e., $N$-chain), a video game, and a real-world problem in energy systems. Our experimental results show that our framework can achieve better performance, even under conditions of misspecified transition models. Additionally, we establish a connection between our unified framework and the practical application of federated learning.", "pdf": "https://openreview.net/pdf/2ac53b19e0a9ca2b8f06a74ca163d25fef316583.pdf"} {"title": "Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness", "url": "https://openreview.net/forum?id=s4Wx2qXhv9", "detail_url": "https://openreview.net/forum?id=s4Wx2qXhv9", "authors": "Vaclav Voracek", "tags": "NIPS 2024,Poster", "abstract": "Randomized smoothing is a popular certified defense against adversarial attacks. In essence, it requires solving a statistical estimation problem, which is usually very time-consuming since we need to perform numerous (usually $10^5$) forward passes of the classifier for every point to be certified. In this paper, we review the statistical estimation problems for randomized smoothing to find out if the computational burden is necessary. In particular, we consider the (standard) task of adversarial robustness where we need to decide if a point is robust at a certain radius or not using as few samples as possible while maintaining statistical guarantees. We present estimation procedures employing confidence sequences that enjoy the same statistical guarantees as the standard methods and attain the optimal sample complexities for the estimation task, and we empirically demonstrate their good performance. Additionally, we provide a randomized version of Clopper-Pearson confidence intervals resulting in strictly stronger certificates.", "pdf": "https://openreview.net/pdf/830fac348e03c4f94d90f23bc7b26c26e67508d9.pdf"} {"title": "SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening", "url": "https://openreview.net/forum?id=QMVydwvrx7", "detail_url": "https://openreview.net/forum?id=QMVydwvrx7", "authors": "Yu Zhong,Xiao Wu,Liang-Jian Deng,Zihan Cao,Hong-Xia Dou", "tags": "NIPS 2024,Poster", "abstract": "Pansharpening is a significant image fusion technique that merges the spatial content and spectral characteristics of remote sensing images to generate high-resolution multispectral images. Recently, denoising diffusion probabilistic models have been gradually applied to visual tasks, enhancing controllable image generation through low-rank adaptation (LoRA). In this paper, we introduce a spatial-spectral integrated diffusion model for the remote sensing pansharpening task, called SSDiff, which considers the pansharpening process as the fusion process of spatial and spectral components from the perspective of subspace decomposition. Specifically, SSDiff utilizes spatial and spectral branches to learn spatial details and spectral features separately, then employs a designed alternating projection fusion module (APFM) to accomplish the fusion. Furthermore, we propose a frequency modulation inter-branch module (FMIM) to modulate the frequency distribution between branches. The two components of SSDiff can perform favorably against the APFM when utilizing a LoRA-like branch-wise alternating fine-tuning method. It refines SSDiff to capture component-discriminating features more sufficiently. 
Finally, extensive experiments on four commonly used datasets, i.e., WorldView-3, WorldView-2, GaoFen-2, and QuickBird, demonstrate the superiority of SSDiff both visually and quantitatively. The code is available at https://github.com/Z-ypnos/SSdiff_main.", "pdf": "https://openreview.net/pdf/e9c0e06109de2566feaf08533f05626b119247d0.pdf"} {"title": "ProSST: Protein Language Modeling with Quantized Structure and Disentangled Attention", "url": "https://openreview.net/forum?id=4Z7RZixpJQ", "detail_url": "https://openreview.net/forum?id=4Z7RZixpJQ", "authors": "Mingchen Li,Yang Tan,Xinzhu Ma,Bozitao Zhong,Huiqun Yu,Ziyi Zhou,Wanli Ouyang,Bingxin Zhou,Pan Tan,Liang Hong", "tags": "NIPS 2024,Poster", "abstract": "Protein language models (PLMs) have shown remarkable capabilities in various protein function prediction tasks. However, while protein function is intricately tied to structure, most existing PLMs do not incorporate protein structure information. To address this issue, we introduce ProSST, a Transformer-based protein language model that seamlessly integrates both protein sequences and structures. ProSST incorporates a structure quantization module and a Transformer architecture with disentangled attention. The structure quantization module translates a 3D protein structure into a sequence of discrete tokens by first serializing the protein structure into residue-level local structures and then embedding them into a dense vector space. These vectors are then quantized into discrete structure tokens by a pre-trained clustering model. These tokens serve as an effective protein structure representation. Furthermore, ProSST explicitly learns the relationship between protein residue token sequences and structure token sequences through the sequence-structure disentangled attention. We pre-train ProSST on millions of protein structures using a masked language model objective, enabling it to learn comprehensive contextual representations of proteins. To evaluate the proposed ProSST, we conduct extensive experiments on the zero-shot mutation effect prediction and several supervised downstream tasks, where ProSST achieves the state-of-the-art performance among all baselines. Our code and pre-trained models are publicly available.", "pdf": "https://openreview.net/pdf/6c75eed7fafd81f83f87f954b10f95768b59f37b.pdf"} {"title": "Slicing Vision Transformer for Flexible Inference", "url": "https://openreview.net/forum?id=zJNSbgl4UA", "detail_url": "https://openreview.net/forum?id=zJNSbgl4UA", "authors": "Yitian Zhang,Huseyin Coskun,Xu Ma,Huan Wang,Ke Ma,Stephen Xi Chen,Derek Hao Hu,Yun Fu", "tags": "NIPS 2024,Poster", "abstract": "Vision Transformers (ViTs) are known for their scalability. In this work, we aim to scale down a ViT to fit in an environment with dynamically changing resource constraints. We observe that smaller ViTs are intrinsically sub-networks of a larger ViT with different widths. Thus, we propose a general framework, named Scala, to enable a single network to represent multiple smaller ViTs with flexible inference capability, which aligns with the inherent design of ViTs to vary in width. Concretely, Scala activates several subnets during training, introduces Isolated Activation to disentangle the smallest sub-network from other subnets, and leverages Scale Coordination to ensure each sub-network receives simplified, steady, and accurate learning objectives. 
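The width-wise sub-network observation behind such slimmable designs can be made concrete with simple weight slicing. A minimal sketch, under the assumption that the leading channels of each layer form every narrower model; this is not the released Scala code, and the class name is illustrative:

```python
import torch
import torch.nn as nn

class SlicedLinear(nn.Linear):
    """Linear layer whose leading rows/columns form every narrower subnet."""

    def forward(self, x, width_ratio: float = 1.0):
        # Keep the first fraction of input/output channels (illustrative).
        d_in = max(1, int(self.in_features * width_ratio))
        d_out = max(1, int(self.out_features * width_ratio))
        w = self.weight[:d_out, :d_in]
        b = self.bias[:d_out] if self.bias is not None else None
        return nn.functional.linear(x[..., :d_in], w, b)

layer = SlicedLinear(128, 128)
x = torch.randn(4, 128)
for ratio in (0.25, 0.5, 1.0):  # several subnets share one set of weights
    print(ratio, layer(x, ratio).shape)
```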
Comprehensive empirical validations on different tasks demonstrate that with only one-shot training, Scala learns slimmable representations without modifying the original ViT structure and matches the performance of Separate Training. Compared with the prior art, Scala achieves an average improvement of 1.6% on ImageNet-1K with fewer parameters.", "pdf": "https://openreview.net/pdf/d586f3321f7b5f7435024391bab2f98aeaac3132.pdf"} {"title": "Splatter a Video: Video Gaussian Representation for Versatile Processing", "url": "https://openreview.net/forum?id=bzuQtVDxv0", "detail_url": "https://openreview.net/forum?id=bzuQtVDxv0", "authors": "Yang-Tian Sun,Yi-Hua Huang,Lin Ma,Xiaoyang Lyu,Yan-Pei Cao,XIAOJUAN QI", "tags": "NIPS 2024,Poster", "abstract": "Video representation is a long-standing problem that is crucial for various downstream tasks, such as tracking, depth prediction, segmentation, view synthesis, and editing. However, current methods either struggle to model complex motions due to the absence of 3D structure or rely on implicit 3D representations that are ill-suited for manipulation tasks. To address these challenges, we introduce a novel explicit 3D representation, the video Gaussian representation, which embeds a video into 3D Gaussians. Our proposed representation models video appearance in a 3D canonical space using explicit Gaussians as proxies and associates each Gaussian with 3D motions for video motion. This approach offers a more intrinsic and explicit representation than layered atlases or volumetric pixel matrices. To obtain such a representation, we distill 2D priors, such as optical flow and depth, from foundation models to regularize learning in this ill-posed setting. Extensive applications demonstrate the versatility of our new video representation. It has been proven effective in numerous video processing tasks, including tracking, consistent video depth and feature refinement, motion and appearance editing, and stereoscopic video generation.", "pdf": "https://openreview.net/pdf/d8203dcb42e1329589c7539b6ee7b267032da700.pdf"} {"title": "Bridging Gaps: Federated Multi-View Clustering in Heterogeneous Hybrid Views", "url": "https://openreview.net/forum?id=GVlJVX3iiq", "detail_url": "https://openreview.net/forum?id=GVlJVX3iiq", "authors": "Xinyue Chen,Yazhou Ren,Jie Xu,Fangfei Lin,Xiaorong Pu,Yang Yang", "tags": "NIPS 2024,Poster", "abstract": "Recently, federated multi-view clustering (FedMVC) has emerged to explore cluster structures in multi-view data distributed on multiple clients. Many existing approaches tend to assume that clients are isomorphic and that all of them are either single-view or multi-view clients. While these methods have succeeded, they may encounter challenges in practical FedMVC scenarios involving heterogeneous hybrid views, where a mixture of single-view and multi-view clients exhibit varying degrees of heterogeneity. In this paper, we propose a novel FedMVC framework, which concurrently addresses two challenges associated with heterogeneous hybrid views, i.e., client gap and view gap. To address the client gap, we design a local-synergistic contrastive learning approach that helps single-view clients and multi-view clients achieve consistency for mitigating heterogeneity among all clients. To address the view gap, we develop a global-specific weighting aggregation method, which encourages global models to learn complementary features from hybrid views. 
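In its simplest form, the aggregation step just mentioned reduces to a weighted average of client parameters. A FedAvg-style sketch with externally supplied weights, standing in for the paper's learned global-specific weighting; the function name and toy clients are assumptions:

```python
import torch

def weighted_aggregate(client_states, weights):
    """Weighted averaging of client model parameters (FedAvg-style sketch)."""
    total = sum(weights)
    keys = client_states[0].keys()
    return {
        k: sum(w * s[k] for w, s in zip(weights, client_states)) / total
        for k in keys
    }

# Two toy "clients" sharing the same architecture:
a = {"w": torch.ones(2, 2), "b": torch.zeros(2)}
b = {"w": 3 * torch.ones(2, 2), "b": torch.ones(2)}
print(weighted_aggregate([a, b], weights=[1.0, 3.0]))
```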
The interplay between local-synergistic contrastive learning and global-specific weighting aggregation mutually enhances the exploration of the data cluster structures distributed on multiple clients. Theoretical analysis and extensive experiments demonstrate that our method can handle the heterogeneous hybrid views in FedMVC and outperforms state-of-the-art methods.", "pdf": "https://openreview.net/pdf/59016a6c5f553d7ac7c4d3ff4464b6f4c4e3048b.pdf"} {"title": "Diffusion Policy Attacker: Crafting Adversarial Attacks for Diffusion-based Policies", "url": "https://openreview.net/forum?id=1L5vaNIoK5", "detail_url": "https://openreview.net/forum?id=1L5vaNIoK5", "authors": "Yipu Chen,Haotian Xue,Yongxin Chen", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have emerged as a promising approach for behavior cloning (BC), leveraging their exceptional ability to model multi-modal distributions. Diffusion policies (DP) have elevated BC performance to new heights, demonstrating robust efficacy across diverse tasks, coupled with their inherent flexibility and ease of implementation. Despite the increasing adoption of Diffusion Policies (DP) as a foundation for policy generation, the critical issue of safety remains largely unexplored. While previous attempts have targeted deep policy networks, DP uses diffusion models as the policy network, making it ineffective to attack with previous methods because of its chained structure and injected randomness. In this paper, we undertake a comprehensive examination of DP safety concerns by introducing adversarial scenarios, encompassing offline and online attacks, as well as global and patch-based attacks. We propose DP-Attacker, a suite of algorithms that can craft effective adversarial attacks across all aforementioned scenarios. We conduct attacks on pre-trained diffusion policies across various manipulation tasks. Through extensive experiments, we demonstrate that DP-Attacker has the capability to significantly decrease the success rate of DP for all scenarios. Particularly in offline scenarios, we exhibit the generation of highly transferable perturbations applicable to all frames. Furthermore, we illustrate the creation of adversarial physical patches that, when applied to the environment, effectively deceive the model. Video results are available at: https://sites.google.com/view/dp-attacker-videos/.", "pdf": "https://openreview.net/pdf/ec80acb56b94cb34d7b6f867a138de80110d613e.pdf"} {"title": "Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective", "url": "https://openreview.net/forum?id=wT6GHk5ShC", "detail_url": "https://openreview.net/forum?id=wT6GHk5ShC", "authors": "Xinhao Yao,Xiaolin Hu,Shenzhi Yang,Yong Liu", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained large language models (LLMs) based on the Transformer have demonstrated striking in-context learning (ICL) abilities. With a few demonstration input-label pairs, they can predict the label for an unseen input without any parameter updates. In this paper, we show an exciting phenomenon that SVD-based weight pruning can enhance ICL performance, and, more surprisingly, pruning weights in deep layers often results in more stable performance improvements than in shallow layers. However, the underlying mechanism of those findings still remains an open question. 
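At its core, the SVD-based weight pruning referred to above is a truncated singular value decomposition of a weight matrix. A minimal sketch of that generic operation, keeping the top singular components and dropping the rest; the paper's layer-selection strategy is not reproduced here:

```python
import torch

def svd_prune(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Replace a weight matrix by a low-rank approximation (sketch)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    # Keep the top fraction of singular components; discard the remainder.
    k = max(1, int(keep_ratio * S.numel()))
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k]

W = torch.randn(64, 64)
print(torch.linalg.matrix_rank(svd_prune(W, keep_ratio=0.25)).item())  # 16
```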
To explain those findings, we conduct an in-depth theoretical analysis by presenting the implicit gradient descent (GD) trajectories of ICL and giving mutual-information-based generalization bounds of ICL via full implicit GD trajectories. This helps us reasonably explain the surprising experimental findings. Moreover, based on all our experimental and theoretical insights, we propose a simple, derivative-free model-compression algorithm for enhancing ICL inference on downstream tasks. Experiments on benchmark datasets and open-source LLMs demonstrate the method's effectiveness.", "pdf": "https://openreview.net/pdf/2b62bd7805c4971355586b1fc5697d6266237e68.pdf"} {"title": "The GAN is dead; long live the GAN! A Modern GAN Baseline", "url": "https://openreview.net/forum?id=OrtN9hPP7V", "detail_url": "https://openreview.net/forum?id=OrtN9hPP7V", "authors": "Nick Huang,Aaron Gokaslan,Volodymyr Kuleshov,James Tompkin", "tags": "NIPS 2024,Poster", "abstract": "There is a widespread claim that GANs are difficult to train, and GAN architectures in the literature are littered with empirical tricks. We provide evidence against this claim and build a modern GAN baseline in a more principled manner. First, we derive a well-behaved regularized relativistic GAN loss that addresses issues of mode dropping and non-convergence that were previously tackled via a bag of ad-hoc tricks. We analyze our loss mathematically and prove that it admits local convergence guarantees, unlike most existing relativistic losses. Second, this loss allows us to discard all ad-hoc tricks and replace outdated backbones used in common GANs with modern architectures. Using StyleGAN2 as an example, we present a roadmap of simplification and modernization that results in a new minimalist baseline---R3GAN. Despite being simple, our approach surpasses StyleGAN2 on FFHQ, ImageNet, CIFAR, and Stacked MNIST datasets, and compares favorably against state-of-the-art GANs and diffusion models. Code: https://www.github.com/brownvc/R3GAN", "pdf": "https://openreview.net/pdf/28891492c7330999bfba2ab14453c12a15fc1134.pdf"} {"title": "ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large Language Models", "url": "https://openreview.net/forum?id=0lBx844upd", "detail_url": "https://openreview.net/forum?id=0lBx844upd", "authors": "Xiang Meng,Kayhan Behdin,Haoyue Wang,Rahul Mazumder", "tags": "NIPS 2024,Poster", "abstract": "The impressive performance of Large Language Models (LLMs) across various natural language processing tasks comes at the cost of vast computational resources and storage requirements. One-shot pruning techniques offer a way to alleviate these burdens by removing redundant weights without the need for retraining. Yet, the massive scale of LLMs often forces current pruning approaches to rely on heuristics instead of optimization-based techniques, potentially resulting in suboptimal compression. In this paper, we introduce ALPS, an optimization-based framework that tackles the pruning problem using the operator splitting technique and a preconditioned conjugate gradient-based post-processing step. Our approach incorporates novel techniques to accelerate and theoretically guarantee convergence while leveraging vectorization and GPU parallelism for efficiency. ALPS substantially outperforms state-of-the-art methods in terms of the pruning objective and perplexity reduction, particularly for highly sparse models. 
On the LLaMA3-8B model with 70\% sparsity, ALPS achieves a 29\% reduction in test perplexity on the WikiText dataset and an 8\% improvement in zero-shot benchmark performance compared to existing methods. Our code is available at https://github.com/mazumder-lab/ALPS.", "pdf": "https://openreview.net/pdf/ce7ab57798de55c580cd7d9d9e334240383aafd2.pdf"} {"title": "G3: An Effective and Adaptive Framework for Worldwide Geolocalization Using Large Multi-Modality Models", "url": "https://openreview.net/forum?id=21tn63ee15", "detail_url": "https://openreview.net/forum?id=21tn63ee15", "authors": "Pengyue Jia,Yiding Liu,Xiaopeng Li,Xiangyu Zhao,Yuhao Wang,Yantong Du,Xiao Han,Xuetao Wei,Shuaiqiang Wang,Dawei Yin", "tags": "NIPS 2024,Poster", "abstract": "Worldwide geolocalization aims to predict the precise coordinate-level location of photos taken anywhere on the Earth. It is very challenging due to 1) the difficulty of capturing subtle location-aware visual semantics, and 2) the heterogeneous geographical distribution of image data. As a result, existing studies have clear limitations when scaled to a worldwide context. They may easily confuse distant images with similar visual contents, or cannot adapt to various locations worldwide with different amounts of relevant data. To resolve these limitations, we propose **G3**, a novel framework based on Retrieval-Augmented Generation (RAG). In particular, G3 consists of three steps, i.e., **G**eo-alignment, **G**eo-diversification, and **G**eo-verification to optimize both retrieval and generation phases of worldwide geolocalization. During Geo-alignment, our solution jointly learns expressive multi-modal representations for images, GPS and textual descriptions, which allows us to capture location-aware semantics for retrieving nearby images for a given query. During Geo-diversification, we leverage a prompt ensembling method that is robust to inconsistent retrieval performance for different image queries. Finally, we combine both retrieved and generated GPS candidates in Geo-verification for location prediction. Experiments on two well-established datasets IM2GPS3k and YFCC4k verify the superiority of G3 compared to other state-of-the-art methods. Our code is available online [https://github.com/Applied-Machine-Learning-Lab/G3](https://github.com/Applied-Machine-Learning-Lab/G3) for reproduction.", "pdf": "https://openreview.net/pdf/4e1073c515b719617da555908fd5adbfade3866b.pdf"} {"title": "QUEST: Quadruple Multimodal Contrastive Learning with Constraints and Self-Penalization", "url": "https://openreview.net/forum?id=lA48H7pW3q", "detail_url": "https://openreview.net/forum?id=lA48H7pW3q", "authors": "Qi Song,Tianxiang Gong,Shiqi Gao,Haoyi Zhou,Jianxin Li", "tags": "NIPS 2024,Poster", "abstract": "Multimodal contrastive learning (MCL) has recently demonstrated significant success across various tasks. However, existing MCL treats all negative samples equally and ignores the potential semantic association with positive samples, which limits the model's ability to achieve fine-grained alignment. In multi-view scenarios, MCL tends to prioritize shared information while neglecting modality-specific unique information across different views, leading to feature suppression and suboptimal performance in downstream tasks. To address these limitations, we propose a novel contrastive framework named *QUEST: Quadruple Multimodal Contrastive Learning with Constraints and Self-Penalization*. 
In the QUEST framework, we propose quaternion contrastive objectives and orthogonal constraints to extract sufficient unique information. Meanwhile, a shared information-guided penalization is introduced to ensure that shared information does not excessively influence the optimization of unique information. Our method leverages quaternion vector spaces to simultaneously optimize shared and unique information. Experiments on multiple datasets show that our method achieves superior performance in multimodal contrastive learning benchmarks. On public benchmarks, our approach achieves state-of-the-art performance, and on synthetic shortcut datasets, we outperform existing baseline methods by an average of 97.95\% on the CLIP model.", "pdf": "https://openreview.net/pdf/84107019c16a3341189d0a7d6a78e026b2f05c9c.pdf"} {"title": "FOOGD: Federated Collaboration for Both Out-of-distribution Generalization and Detection", "url": "https://openreview.net/forum?id=D6MQrw9HFu", "detail_url": "https://openreview.net/forum?id=D6MQrw9HFu", "authors": "Xinting Liao,Weiming Liu,Pengyang Zhou,Fengyuan Yu,Jiahe Xu,Jun Wang,Wenjie Wang,Chaochao Chen,Xiaolin Zheng", "tags": "NIPS 2024,Poster", "abstract": "Federated learning (FL) is a promising machine learning paradigm that collaborates with client models to capture global knowledge. However, deploying FL models in real-world scenarios remains unreliable due to the coexistence of in-distribution data and unexpected out-of-distribution (OOD) data, such as covariate-shift and semantic-shift data. Current FL research typically addresses either covariate-shift data through OOD generalization or semantic-shift data via OOD detection, overlooking the simultaneous occurrence of various OOD shifts. In this work, we propose FOOGD, a method that estimates the probability density of each client and obtains a reliable global distribution as guidance for the subsequent FL process. First, SM3D in FOOGD estimates a score model for arbitrary distributions without prior constraints and powerfully detects semantic-shift data. Then SAG in FOOGD provides invariant yet diverse knowledge for both local covariate-shift generalization and client performance generalization. In empirical validations, FOOGD enjoys three main advantages: (1) reliably estimating non-normalized decentralized distributions, (2) detecting semantic shift data via score values, and (3) generalizing to covariate-shift data by regularizing the feature extractor. The project is open-sourced at https://github.com/XeniaLLL/FOOGD-main.git.", "pdf": "https://openreview.net/pdf/29a16833ddab99a1b4cefb2e248680fe13914239.pdf"} {"title": "Classification Diffusion Models: Revitalizing Density Ratio Estimation", "url": "https://openreview.net/forum?id=d99yCfOnwK", "detail_url": "https://openreview.net/forum?id=d99yCfOnwK", "authors": "Shahar Yadin,Noam Elata,Tomer Michaeli", "tags": "NIPS 2024,Poster", "abstract": "A prominent family of methods for learning data distributions relies on density ratio estimation (DRE), where a model is trained to *classify* between data samples and samples from some reference distribution. DRE-based models can directly output the likelihood for any given input, a highly desired property that is lacking in most generative techniques. Nevertheless, to date, DRE methods have failed to accurately capture the distributions of complex high-dimensional data, like images, and have thus been drawing reduced research attention in recent years. 
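The DRE recipe described above has a compact textbook form: train a balanced classifier between data and reference samples, then read the log density ratio off its logits. A small illustrative sketch with 1-D Gaussians, where the true log-ratio is x - 1/2; the dataset sizes and names are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Density ratio estimation via classification (textbook recipe, sketch only).
# With balanced classes, logit(P(data | x)) = log p_data(x) - log p_ref(x).
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, size=(50_000, 1))  # p_data = N(1, 1)
ref = rng.normal(loc=0.0, size=(50_000, 1))   # p_ref  = N(0, 1)

X = np.vstack([data, ref])
y = np.concatenate([np.ones(len(data)), np.zeros(len(ref))])
clf = LogisticRegression().fit(X, y)

# True log-ratio for these Gaussians is x - 0.5; the logit should match it.
for x in (0.0, 0.5, 1.0):
    print(x, clf.decision_function([[x]])[0])
```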
In this work, we present *classification diffusion models* (CDMs), a DRE-based generative method that adopts the formalism of denoising diffusion models (DDMs) while making use of a classifier that predicts the level of noise added to a clean signal. Our method is based on an analytical connection that we derive between the MSE-optimal denoiser for removing white Gaussian noise and the cross-entropy-optimal classifier for predicting the noise level. Our method is the first DRE-based technique that can successfully generate images beyond the MNIST dataset. Furthermore, it can output the likelihood of any input in a single forward pass, achieving state-of-the-art negative log likelihood (NLL) among methods with this property.", "pdf": "https://openreview.net/pdf/7a3cdbe4f47f55ae7d3ca4cd5a00709bdf8bf386.pdf"} {"title": "Multi-times Monte Carlo Rendering for Inter-reflection Reconstruction", "url": "https://openreview.net/forum?id=TLUGoShY30", "detail_url": "https://openreview.net/forum?id=TLUGoShY30", "authors": "Zhu Tengjie,Zhuo Chen,Jingnan Gao,Yichao Yan,Xiaokang Yang", "tags": "NIPS 2024,Poster", "abstract": "Inverse rendering methods have achieved remarkable performance in reconstructing high-fidelity 3D objects with disentangled geometries, materials, and environmental light. However, they still face huge challenges in reflective surface reconstruction. Although recent methods model the light trace to learn specularity, ignoring indirect illumination makes it hard to handle inter-reflections among multiple smooth objects. In this work, we propose Ref-MC2, which introduces multi-time Monte Carlo sampling to comprehensively compute the environmental illumination while also considering the reflective light from object surfaces. To address the computational challenge as the number of Monte Carlo sampling rounds grows, we propose a specularity-adaptive sampling strategy, significantly reducing the computational complexity. Besides the computational cost, higher geometric accuracy is also required because geometric errors accumulate multiple times. Therefore, we further introduce a reflection-aware surface model to initialize the geometry and refine it during inverse rendering. We construct a challenging dataset containing scenes with multiple objects and inter-reflections. Experiments show that our method outperforms other inverse rendering methods on various object groups. We also show downstream applications, e.g., relighting and material editing, to illustrate the disentanglement ability of our method.", "pdf": "https://openreview.net/pdf/1af71561edebf43909aa1bf5842c61d4d6474085.pdf"} {"title": "Parameter-free Clipped Gradient Descent Meets Polyak", "url": "https://openreview.net/forum?id=SGcnphYOeq", "detail_url": "https://openreview.net/forum?id=SGcnphYOeq", "authors": "Yuki Takezawa,Han Bao,Ryoma Sato,Kenta Niwa,Makoto Yamada", "tags": "NIPS 2024,Poster", "abstract": "Gradient descent and its variants are de facto standard algorithms for training machine learning models. As gradient descent is sensitive to its hyperparameters, we need to tune the hyperparameters carefully using a grid search. However, this method is time-consuming, particularly when multiple hyperparameters exist. Therefore, recent studies have analyzed parameter-free methods that adjust the hyperparameters on the fly. However, the existing work is limited to investigations of parameter-free methods for the stepsize, and parameter-free methods for other hyperparameters have not been explored. 
For instance, although the gradient clipping threshold is a crucial hyperparameter in addition to the stepsize for preventing gradient explosion issues, none of the existing studies have investigated parameter-free methods for clipped gradient descent. Therefore, in this study, we investigate parameter-free methods for clipped gradient descent. Specifically, we propose Inexact Polyak Stepsize, which converges to the optimal solution without any hyperparameter tuning, and whose convergence rate is asymptotically independent of $L$ under $L$-smooth and $(L_0, L_1)$-smooth assumptions on the loss function, similar to that of clipped gradient descent with well-tuned hyperparameters. We numerically validate our convergence results using a synthetic function and demonstrate the effectiveness of our proposed methods using LSTM, Nano-GPT, and T5.", "pdf": "https://openreview.net/pdf/f354672b35ef05ee13ef090fcd5e077407706173.pdf"} {"title": "Fourier-enhanced Implicit Neural Fusion Network for Multispectral and Hyperspectral Image Fusion", "url": "https://openreview.net/forum?id=CscowTrOP9", "detail_url": "https://openreview.net/forum?id=CscowTrOP9", "authors": "Yujie Liang,Zihan Cao,Shangqi Deng,Hong-Xia Dou,Liang-Jian Deng", "tags": "NIPS 2024,Poster", "abstract": "Recently, implicit neural representations (INR) have made significant strides in various vision-related domains, providing a novel solution for Multispectral and Hyperspectral Image Fusion (MHIF) tasks. However, INR is prone to losing high-frequency information and lacks global perceptual capability. To address these issues, this paper introduces a Fourier-enhanced Implicit Neural Fusion Network (FeINFN) specifically designed for the MHIF task, motivated by the following observation: the Fourier amplitudes of the HR-HSI latent code and LR-HSI are remarkably similar; however, their phases exhibit different patterns. In FeINFN, we innovatively propose a spatial and frequency implicit fusion function (Spa-Fre IFF), helping INR capture high-frequency information and expanding the receptive field. Besides, a new decoder employing a complex Gabor wavelet activation function, called Spatial-Frequency Interactive Decoder (SFID), is designed to enhance the interaction of INR features. Moreover, we theoretically prove that the Gabor wavelet activation possesses a time-frequency tightness property that favors learning the optimal bandwidths in the decoder. Experiments on two benchmark MHIF datasets verify the state-of-the-art (SOTA) performance of the proposed method, both visually and quantitatively. Ablation studies further validate these contributions. The code is available at https://github.com/294coder/Efficient-MIF.", "pdf": "https://openreview.net/pdf/47fc5a0fe0ab13636baf68cd349dd993997ad327.pdf"} {"title": "Towards Multi-Domain Learning for Generalizable Video Anomaly Detection", "url": "https://openreview.net/forum?id=ywEQkCmImh", "detail_url": "https://openreview.net/forum?id=ywEQkCmImh", "authors": "MyeongAh Cho,Taeoh Kim,Minho Shim,Dongyoon Wee,Sangyoun Lee", "tags": "NIPS 2024,Poster", "abstract": "Most of the existing Video Anomaly Detection (VAD) studies have been conducted within single-domain learning, where training and evaluation are performed on a single dataset. However, the criteria for abnormal events differ across VAD datasets, making it problematic to apply a single-domain model to other domains.
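The two ingredients the Polyak-stepsize entry above combines can be illustrated together: a Polyak stepsize, shown here with a known optimal value f* (which is exactly what the paper's Inexact Polyak Stepsize relaxes), plus clipping of the update. A toy sketch on a smooth quadratic:

```python
# Minimal sketch of clipped gradient descent with a Polyak-type stepsize.
import numpy as np

def f(x): return 0.5 * np.dot(x, x)            # toy objective, f* = 0
def grad(x): return x

x, f_star, gamma = np.array([10.0, -4.0]), 0.0, 1.0   # gamma: clipping threshold
for _ in range(100):
    g = grad(x)
    eta = (f(x) - f_star) / max(np.dot(g, g), 1e-12)  # Polyak stepsize
    step = eta * g
    norm = np.linalg.norm(step)
    if norm > gamma:                                   # clip the update
        step *= gamma / norm
    x -= step
print("final iterate:", x)
```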
In this paper, we propose a new task called Multi-Domain learning for VAD (MDVAD) to explore various real-world abnormal events using multiple datasets for a general model. MDVAD involves training on datasets from multiple domains simultaneously, and we experimentally observe that Abnormal Conflicts between domains hinder learning and generalization. The task aims to address two key objectives: (i) better distinguishing between general normal and abnormal events across multiple domains, and (ii) being aware of ambiguous abnormal conflicts. This paper is the first to tackle the abnormal conflict issue and introduces a new benchmark, baselines, and evaluation protocols for MDVAD. As baselines, we propose a framework with Null(Angular)-Multiple Instance Learning and an Abnormal Conflict classifier. Through experiments on an MDVAD benchmark composed of six VAD datasets and using four different evaluation protocols, we reveal abnormal conflicts and demonstrate that the proposed baseline effectively handles these conflicts, showing robustness and adaptability across multiple domains.", "pdf": "https://openreview.net/pdf/e8a9752b43978f5dd7a7f88fc83f109cdef34692.pdf"} {"title": "OneActor: Consistent Subject Generation via Cluster-Conditioned Guidance", "url": "https://openreview.net/forum?id=2gtNa14V45", "detail_url": "https://openreview.net/forum?id=2gtNa14V45", "authors": "Jiahao Wang,Caixia Yan,Haonan Lin,Weizhan Zhang,Mengmeng Wang,Tieliang Gong,Guang Dai,Hao Sun", "tags": "NIPS 2024,Poster", "abstract": "Text-to-image diffusion models benefit artists with high-quality image generation. Yet their stochastic nature hinders artists from creating consistent images of the same subject. Existing methods try to tackle this challenge and generate consistent content in various ways. However, they either depend on external restricted data or require expensive tuning of the diffusion model. To address this issue, we propose a novel one-shot tuning paradigm, termed OneActor. It efficiently performs consistent subject generation solely driven by prompts via a learned semantic guidance to bypass the laborious backbone tuning. We are the first to formalize the objective of consistent subject generation from a clustering perspective, and thus design a cluster-conditioned model. To mitigate the overfitting challenge shared by one-shot tuning pipelines, we augment the tuning with auxiliary samples and devise two inference strategies: semantic interpolation and cluster guidance. These techniques are later verified to significantly improve the generation quality. Comprehensive experiments show that our method outperforms a variety of baselines with satisfactory subject consistency, superior prompt conformity as well as high image quality. Our method is capable of multi-subject generation and compatible with popular diffusion extensions. Besides, we achieve a $4\\times$ faster tuning speed than tuning-based baselines and, if desired, avoid increasing the inference time. Furthermore, our method can be naturally utilized to pre-train a consistent subject generation network from scratch, bringing this research task closer to practical applications. 
(Project page: https://johnneywang.github.io/OneActor-webpage/)", "pdf": "https://openreview.net/pdf/337e3a70e74da68be66d94529ad1ae1f520077cb.pdf"} {"title": "Learning 3D Garment Animation from Trajectories of A Piece of Cloth", "url": "https://openreview.net/forum?id=yeFx5NQmr7", "detail_url": "https://openreview.net/forum?id=yeFx5NQmr7", "authors": "Yidi Shao,Chen Change Loy,Bo Dai", "tags": "NIPS 2024,Poster", "abstract": "Garment animation is ubiquitous in various applications, such as virtual reality, gaming, and film production. Recently, learning-based approaches achieve compelling performance in animating diverse garments under versatile scenarios. Nevertheless, to mimic the deformations of the observed garments, data-driven methods require large-scale garment data, which is both expensive and time-consuming to collect. In addition, forcing models to match the dynamics of observed garment animation may hinder their potential to generalize to unseen cases. In this paper, instead of garment-wise supervised learning, we adopt a disentangled scheme to learn how to animate observed garments: 1). learning constitutive behaviors from the observed cloth; 2). dynamically animating various garments constrained by the learned constitutive laws. Specifically, we propose Energy Unit network (EUNet) to model the constitutive relations in the form of energy. Without the priors from analytical physics models and differentiable simulation engines, EUNet is able to directly capture the constitutive behaviors from the observed piece of cloth and uniformly describes the change of energy caused by deformations, such as stretching and bending. We further apply the pre-trained EUNet to animate various garments based on energy optimizations. The disentangled scheme alleviates the need for garment data and enables us to utilize the dynamics of a piece of cloth for animating garments. Experiments show that while EUNet effectively delivers the energy gradients due to the deformations, models constrained by EUNet achieve more stable and physically plausible performance compared with those trained in a garment-wise supervised manner.", "pdf": "https://openreview.net/pdf/d57b0731216ccd13a02117aa1f63730ec58dae56.pdf"} {"title": "Spectral Editing of Activations for Large Language Model Alignment", "url": "https://openreview.net/forum?id=pqYceEa87j", "detail_url": "https://openreview.net/forum?id=pqYceEa87j", "authors": "Yifu QIU,Zheng Zhao,Yftah Ziser,Anna Korhonen,Edoardo Ponti,Shay B Cohen", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) often exhibit undesirable behaviours, such as generating untruthful or biased content. Editing their internal representations has been shown to be effective in mitigating such behaviours on top of the existing alignment methods. We propose a novel inference-time editing method, namely spectral editing of activations (SEA), to project the input representations into directions with maximal covariance with the positive demonstrations (e.g., truthful) while minimising covariance with the negative demonstrations (e.g., hallucinated). We also extend our method to non-linear editing using feature functions. We run extensive experiments on benchmarks concerning truthfulness and bias with six open-source LLMs of different sizes and model families. The results demonstrate the superiority of SEA in effectiveness, generalisation to similar tasks, as well as computation and data efficiency. 
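A hedged sketch of the EUNet entry's central idea: animate garment vertices by descending the gradient of a learned energy of deformations plus an external potential. The untrained MLP, random mesh, and constants below are toy stand-ins for a pre-trained EUNet and a real garment, not the paper's model.

```python
# Toy sketch: "animate by minimizing a learned energy" of per-edge strains.
import torch

energy_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Softplus(),
                                 torch.nn.Linear(32, 1), torch.nn.Softplus())
# NOTE: energy_net is random here; it stands in for a pre-trained energy model.

verts = torch.randn(50, 3, requires_grad=True)     # garment vertex positions
edges = torch.randint(0, 50, (120, 2))             # mesh edge list
rest_len = torch.rand(120) + 0.5                   # rest lengths of the edges

opt = torch.optim.SGD([verts], lr=1e-2)            # only vertices are optimized
for _ in range(20):
    cur_len = (verts[edges[:, 0]] - verts[edges[:, 1]]).norm(dim=-1)
    strain = (cur_len - rest_len).unsqueeze(-1)    # per-edge stretching
    e = energy_net(strain).sum()                   # learned constitutive energy
    gravity = 9.8 * verts[:, 2].sum()              # simple external potential
    (e + gravity).backward()
    opt.step(); opt.zero_grad()
```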
We also show that SEA editing only has a limited negative impact on other model capabilities.", "pdf": "https://openreview.net/pdf/309a57afcf8bfef9e232743dc0f09597f4f7b601.pdf"} {"title": "From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection", "url": "https://openreview.net/forum?id=tj8nsfxi5r", "detail_url": "https://openreview.net/forum?id=tj8nsfxi5r", "authors": "Xinlei Wang,Maike Feng,Jing Qiu,Jinjin Gu,Junhua Zhao", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces a novel approach that leverages Large Language Models (LLMs) and Generative Agents to enhance time series forecasting by reasoning across both text and time series data. With language as a medium, our method adaptively integrates social events into forecasting models, aligning news content with time series fluctuations to provide richer insights. Specifically, we utilize LLM-based agents to iteratively filter out irrelevant news and employ human-like reasoning to evaluate predictions. This enables the model to analyze complex events, such as unexpected incidents and shifts in social behavior, and continuously refine the selection logic of news and the robustness of the agent's output. By integrating selected news events with time series data, we fine-tune a pre-trained LLM to predict sequences of digits in time series. The results demonstrate significant improvements in forecasting accuracy, suggesting a potential paradigm shift in time series forecasting through the effective utilization of unstructured news data.", "pdf": "https://openreview.net/pdf/ee49cee9662dd2a8cae6583a9b82aff4cb4b5298.pdf"} {"title": "Verifiably Robust Conformal Prediction", "url": "https://openreview.net/forum?id=5pJfDlaSxV", "detail_url": "https://openreview.net/forum?id=5pJfDlaSxV", "authors": "Linus Jeary,Tom Kuipers,Mehran Hosseini,Nicola Paoletti", "tags": "NIPS 2024,Poster", "abstract": "Conformal Prediction (CP) is a popular uncertainty quantification method that provides distribution-free, statistically valid prediction sets, assuming that training and test data are exchangeable. In such a case, CP's prediction sets are guaranteed to cover the (unknown) true test output with a user-specified probability. Nevertheless, this guarantee is violated when the data is subjected to adversarial attacks, which often result in a significant loss of coverage. Recently, several approaches have been put forward to recover CP guarantees in this setting. These approaches leverage variations of randomised smoothing to produce conservative sets which account for the effect of the adversarial perturbations. They are, however, limited in that they only support $\\ell_2$-bounded perturbations and classification tasks. This paper introduces VRCP (Verifiably Robust Conformal Prediction), a new framework that leverages recent neural network verification methods to recover coverage guarantees under adversarial attacks. Our VRCP method is the first to support perturbations bounded by arbitrary norms including $\\ell_1$, $\\ell_2$, and $\\ell_\\infty$, as well as regression tasks. We evaluate and compare our approach on image classification tasks (CIFAR10, CIFAR100, and TinyImageNet) and regression tasks for deep reinforcement learning environments. 
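One simplified way to picture the spectral-editing idea in the SEA entry: find activation directions where positive demonstrations carry more covariance than negative ones, and project hidden states onto those directions at inference time. This is an illustration of the idea only; the exact SEA procedure differs.

```python
# Simplified sketch of inference-time spectral activation editing.
import numpy as np

d = 64
h_pos = np.random.randn(200, d) @ np.diag(np.linspace(2, 0.1, d))  # "truthful" acts
h_neg = np.random.randn(200, d)                                    # "hallucinated" acts

cov_pos = np.cov(h_pos, rowvar=False)
cov_neg = np.cov(h_neg, rowvar=False)
evals, evecs = np.linalg.eigh(cov_pos - cov_neg)

keep = evecs[:, evals > 0]            # directions favoured by positive demos
P = keep @ keep.T                     # projector onto those directions

h = np.random.randn(d)                # an activation at inference time
h_edited = P @ h                      # spectral edit applied to the activation
```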
In every case, VRCP achieves above nominal coverage and yields significantly more efficient and informative prediction regions than the SotA.", "pdf": "https://openreview.net/pdf/954b633afed259b10cdaa634a9a6bef9bcc084f4.pdf"} {"title": "Block Sparse Bayesian Learning: A Diversified Scheme", "url": "https://openreview.net/forum?id=a4cPpx1xYg", "detail_url": "https://openreview.net/forum?id=a4cPpx1xYg", "authors": "Yanhao Zhang,Zhihan Zhu,Yong Xia", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces a novel prior called Diversified Block Sparse Prior to characterize the widespread block sparsity phenomenon in real-world data. By allowing diversification on intra-block variance and inter-block correlation matrices, we effectively address the sensitivity issue of existing block sparse learning methods to pre-defined block information, which enables adaptive block estimation while mitigating the risk of overfitting. Based on this, a diversified block sparse Bayesian learning method (DivSBL) is proposed, utilizing EM algorithm and dual ascent method for hyperparameter estimation. Moreover, we establish the global and local optimality theory of our model. Experiments validate the advantages of DivSBL over existing algorithms.", "pdf": "https://openreview.net/pdf/cf83f762ba06ad0c08bd84dc5acccb0eb4e3af03.pdf"} {"title": "CoVoMix: Advancing Zero-Shot Speech Generation for Human-like Multi-talker Conversations", "url": "https://openreview.net/forum?id=VNbQbv658b", "detail_url": "https://openreview.net/forum?id=VNbQbv658b", "authors": "leying zhang,Yao Qian,Long Zhou,Shujie LIU,Dongmei Wang,Xiaofei Wang,Midia Yousefi,Yanmin Qian,Jinyu Li,Lei He,sheng zhao,Michael Zeng", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in zero-shot text-to-speech (TTS) modeling have led to significant strides in generating high-fidelity and diverse speech. However, dialogue generation, along with achieving human-like naturalness in speech, continues to be a challenge. In this paper, we introduce CoVoMix: Conversational Voice Mixture Generation, a novel model for zero-shot, human-like, multi-speaker, multi-round dialogue speech generation. CoVoMix first converts dialogue text into multiple streams of discrete tokens, with each token stream representing semantic information for individual talkers. These token streams are then fed into a flow-matching based acoustic model to generate mixed mel-spectrograms. Finally, the speech waveforms are produced using a HiFi-GAN model. Furthermore, we devise a comprehensive set of metrics for measuring the effectiveness of dialogue modeling and generation. Our experimental results show that CoVoMix can generate dialogues that are not only human-like in their naturalness and coherence but also involve multiple talkers engaging in multiple rounds of conversation. This is exemplified by instances generated in a single channel where one speaker's utterance is seamlessly mixed with another's interjections or laughter, indicating the latter's role as an attentive listener. 
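For reference alongside the VRCP entry, the standard split conformal prediction layer it builds on looks like the sketch below; VRCP's contribution is replacing the clean scores with verified bounds under perturbation. Assumes NumPy >= 1.22 for the quantile `method` keyword; the random inputs are placeholders for real model probabilities.

```python
# Background sketch: vanilla split conformal prediction for classification.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    # nonconformity score: 1 - model probability of the true class
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]  # prediction sets

cal_probs = np.random.dirichlet(np.ones(10), size=500)
cal_labels = np.random.randint(0, 10, size=500)
test_probs = np.random.dirichlet(np.ones(10), size=5)
print(conformal_sets(cal_probs, cal_labels, test_probs))
```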
Audio samples are enclosed in the supplementary.", "pdf": "https://openreview.net/pdf/5582abe645d982c7af83406df2b0f0c81ae6b50f.pdf"} {"title": "Learning the Expected Core of Strictly Convex Stochastic Cooperative Games", "url": "https://openreview.net/forum?id=ZRYFftR4xn", "detail_url": "https://openreview.net/forum?id=ZRYFftR4xn", "authors": "Nam Phuong Tran,The-Anh Ta,Shuqing Shi,Debmalya Mandal,Yali Du,Long Tran-Thanh", "tags": "NIPS 2024,Poster", "abstract": "Reward allocation, also known as the credit assignment problem, has been an important topic in economics, engineering, and machine learning. An important concept in reward allocation is the core, which is the set of stable allocations where no agent has the motivation to deviate from the grand coalition. In previous works, computing the core requires either knowledge of the reward function in deterministic games or the reward distribution in stochastic games. However, this is unrealistic, as the reward function or distribution is often only partially known and may be subject to uncertainty. In this paper, we consider the core learning problem in stochastic cooperative games, where the reward distribution is unknown. Our goal is to learn the expected core, that is, the set of allocations that are stable in expectation, given an oracle that returns a stochastic reward for an enquired coalition each round. Within the class of strictly convex games, we present an algorithm named \\texttt{Common-Points-Picking} that returns a point in the expected core given a polynomial number of samples, with high probability. To analyse the algorithm, we develop a new extension of the separation hyperplane theorem for multiple convex sets.", "pdf": "https://openreview.net/pdf/cc32b424dc3ee86536e7b9ec5795f512ca261122.pdf"} {"title": "Diversity Is Not All You Need: Training A Robust Cooperative Agent Needs Specialist Partners", "url": "https://openreview.net/forum?id=15460JjocO", "detail_url": "https://openreview.net/forum?id=15460JjocO", "authors": "Rujikorn Charakorn,Poramate Manoonpong,Nat Dilokthanakul", "tags": "NIPS 2024,Poster", "abstract": "Partner diversity is known to be crucial for training a robust generalist cooperative agent. In this paper, we show that partner specialization, in addition to diversity, is crucial for the robustness of a downstream generalist agent. We propose a principled method for quantifying both the diversity and specialization of a partner population based on the concept of mutual information. Then, we observe that the recently proposed cross-play minimization (XP-min) technique produces diverse and specialized partners. However, the generated partners are overfit, reducing their usefulness as training partners. To address this, we propose simple methods, based on reinforcement learning and supervised learning, for extracting the diverse and specialized behaviors of XP-min generated partners but not their overfitting. 
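The core concept in the expected-core entry above can be made concrete with a brute-force membership check on a toy convex game; the paper's challenge is doing this when the coalition values are only observable through stochastic reward samples.

```python
# Background sketch: check core membership by enumerating coalition constraints.
from itertools import combinations

players = [0, 1, 2]
v = {(): 0, (0,): 1, (1,): 1, (2,): 1,
     (0, 1): 3, (0, 2): 3, (1, 2): 3, (0, 1, 2): 6}  # a convex toy game

def in_core(x):
    if abs(sum(x) - v[tuple(players)]) > 1e-9:        # efficiency
        return False
    for r in range(1, len(players)):
        for S in combinations(players, r):            # coalition stability
            if sum(x[i] for i in S) < v[S] - 1e-9:
                return False
    return True

print(in_core([2, 2, 2]))   # True: no coalition can profitably deviate
print(in_core([4, 1, 1]))   # False: coalition (1, 2) gets 2 < v[(1, 2)] = 3
```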
We demonstrate empirically that the proposed method effectively removes overfitting, and the extracted populations produce more robust generalist agents compared to the source XP-min populations.", "pdf": "https://openreview.net/pdf/05f91b5d0168fb0c17bc3236085be55c4d2bee4c.pdf"} {"title": "Regularized Conditional Diffusion Model for Multi-Task Preference Alignment", "url": "https://openreview.net/forum?id=YCS0xGFrb4", "detail_url": "https://openreview.net/forum?id=YCS0xGFrb4", "authors": "Xudong Yu,Chenjia Bai,Haoran He,Changhong Wang,Xuelong Li", "tags": "NIPS 2024,Poster", "abstract": "Sequential decision-making can be formulated as a conditional generation process, with targets for alignment with human intents and versatility across various tasks. Previous return-conditioned diffusion models manifest comparable performance but rely on well-defined reward functions, which require substantial human effort and face challenges in multi-task settings. Preferences serve as an alternative, but recent work rarely considers preference learning across multiple tasks. To facilitate the alignment and versatility in multi-task preference learning, we adopt multi-task preferences as a unified framework. In this work, we propose to learn preference representations aligned with preference labels, which are then used as conditions to guide the conditional generation process of diffusion models. The traditional classifier-free guidance paradigm suffers from the inconsistency between the conditions and generated trajectories. We thus introduce an auxiliary regularization objective to maximize the mutual information between the conditions and the corresponding generated trajectories.", "pdf": "https://openreview.net/pdf/5fcdfbe7882d49af3e6629020a94b758ffab66a2.pdf"} {"title": "Fast Rates for Bandit PAC Multiclass Classification", "url": "https://openreview.net/forum?id=6zOKbzjBO4", "detail_url": "https://openreview.net/forum?id=6zOKbzjBO4", "authors": "Liad Erez,Alon Cohen,Tomer Koren,Yishay Mansour,Shay Moran", "tags": "NIPS 2024,Poster", "abstract": "We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct. Our main contribution is in designing a novel learning algorithm for the agnostic $(\\varepsilon,\\delta)$-PAC version of the problem, with sample complexity of $O\\big( (\\operatorname{poly}(K) + 1 / \\varepsilon^2) \\log (|\\mathcal{H}| / \\delta) \\big)$ for any finite hypothesis class $\\mathcal{H}$. In terms of the leading dependence on $\\varepsilon$, this improves upon existing bounds for the problem, which are of the form $O(K/\\varepsilon^2)$. We also provide an extension of this result to general classes and establish similar sample complexity bounds in which $\\log |\\mathcal{H}|$ is replaced by the Natarajan dimension.\nThis matches the optimal rate in the full-information version of the problem and resolves an open question studied by Daniely, Sabato, Ben-David, and Shalev-Shwartz (2011), who demonstrated that the multiplicative price of bandit feedback in realizable PAC learning is $\\Theta(K)$. We complement this by revealing a stark contrast with the agnostic case, where the price of bandit feedback is only $O(1)$ as $\\varepsilon \\to 0$. 
Our algorithm utilizes a stochastic optimization technique to minimize a log-barrier potential based on Frank-Wolfe updates for computing a low-variance exploration distribution over the hypotheses, and is made computationally efficient provided access to an ERM oracle over $\\mathcal{H}$.", "pdf": "https://openreview.net/pdf/e8b8a1df7ffbe746b96e1aa5f276c39ea8d6b5dd.pdf"} {"title": "ODGEN: Domain-specific Object Detection Data Generation with Diffusion Models", "url": "https://openreview.net/forum?id=kTtK65vKvD", "detail_url": "https://openreview.net/forum?id=kTtK65vKvD", "authors": "JingYuan Zhu,Shiyu Li,Yuxuan Liu,Jian Yuan,Ping Huang,Jiulong Shan,Huimin Ma", "tags": "NIPS 2024,Poster", "abstract": "Modern diffusion-based image generative models have made significant progress and become promising for enriching training data for the object detection task. However, the generation quality and the controllability for complex scenes containing multi-class objects and dense objects with occlusions remain limited. This paper presents ODGEN, a novel method to generate high-quality images conditioned on bounding boxes, thereby facilitating data synthesis for object detection. Given a domain-specific object detection dataset, we first fine-tune a pre-trained diffusion model on both cropped foreground objects and entire images to fit target distributions. Then we propose to control the diffusion model using synthesized visual prompts with spatial constraints and object-wise textual descriptions. ODGEN exhibits robustness in handling complex scenes and specific domains. Further, we design a dataset synthesis pipeline to evaluate ODGEN on 7 domain-specific benchmarks to demonstrate its effectiveness. Adding training data generated by ODGEN improves mAP@.50:.95 by up to 25.3% with object detectors like YOLOv5 and YOLOv7, outperforming prior controllable generative methods. In addition, we design an evaluation protocol based on COCO-2014 to validate ODGEN in general domains and observe an advantage of up to 5.6% in mAP@.50:.95 against existing methods.", "pdf": "https://openreview.net/pdf/43caf5b26317143d806ba6e2ea8d28c2724fe228.pdf"} {"title": "Dual Cone Gradient Descent for Training Physics-Informed Neural Networks", "url": "https://openreview.net/forum?id=gvtCR7dHJ3", "detail_url": "https://openreview.net/forum?id=gvtCR7dHJ3", "authors": "Youngsik Hwang,Dongyoung Lim", "tags": "NIPS 2024,Poster", "abstract": "Physics-informed neural networks (PINNs) have emerged as a prominent approach for solving partial differential equations (PDEs) by minimizing a combined loss function that incorporates both boundary loss and PDE residual loss. Despite their remarkable empirical performance in various scientific computing tasks, PINNs often fail to generate reasonable solutions, and such pathological behaviors remain difficult to explain and resolve. In this paper, we identify that PINNs can be adversely trained when gradients of each loss function exhibit a significant imbalance in their magnitudes and present a negative inner product value. To address these issues, we propose a novel optimization framework, *Dual Cone Gradient Descent* (DCGD), which adjusts the direction of the updated gradient to ensure it falls within a dual cone region. This region is defined as a set of vectors where the inner products with both the gradients of the PDE residual loss and the boundary loss are non-negative. Theoretically, we analyze the convergence properties of DCGD algorithms in a non-convex setting. 
On a variety of benchmark equations, we demonstrate that DCGD outperforms other optimization algorithms in terms of various evaluation metrics. In particular, DCGD achieves superior predictive accuracy and enhances the stability of training for failure modes of PINNs and complex PDEs, compared to existing optimally tuned models. Moreover, DCGD can be further improved by combining it with popular strategies for PINNs, including learning rate annealing and the Neural Tangent Kernel (NTK).", "pdf": "https://openreview.net/pdf/78549b8e94110ae4049bb43c357d057f2a245728.pdf"} {"title": "Opponent Modeling based on Subgoal Inference", "url": "https://openreview.net/forum?id=Lt6wO0oZ8k", "detail_url": "https://openreview.net/forum?id=Lt6wO0oZ8k", "authors": "XiaoPeng Yu,Jiechuan Jiang,Zongqing Lu", "tags": "NIPS 2024,Poster", "abstract": "When an agent is in a multi-agent environment, it may face previously unseen opponents, and it is a challenge to cooperate with other agents to accomplish the task together or to maximize its own rewards. Most opponent modeling methods deal with the non-stationarity caused by unknown opponent policies via predicting the opponent's actions. However, focusing on the opponent's action is shortsighted, which also constrains the adaptability to unknown opponents in complex tasks. In this paper, we propose opponent modeling based on subgoal inference, which infers the opponent's subgoals through historical trajectories. As subgoals are likely to be shared by different opponent policies, predicting subgoals can yield better generalization to unknown opponents. Additionally, we design two subgoal selection modes for cooperative games and general-sum games respectively. Empirically, we show that our method achieves more effective adaptation than existing methods in a variety of tasks.", "pdf": "https://openreview.net/pdf/0ca418807e214aeca27cb47355eeebcb749b324b.pdf"} {"title": "EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization", "url": "https://openreview.net/forum?id=KhwOuB0fs9", "detail_url": "https://openreview.net/forum?id=KhwOuB0fs9", "authors": "Dong HUANG,Jianbo Dai,Han Weng,Puzhen Wu,Yuhao QING,Heming Cui,Zhijiang Guo,Jie Zhang", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have shown remarkable progress in code generation, but their generated code often suffers from inefficiency, resulting in longer execution times and higher memory consumption. To address this issue, we propose EffiLearner, a self-optimization framework that utilizes execution overhead profiles to improve the efficiency of LLM-generated code. EffiLearner first generates code using an LLM, then executes it locally to capture execution time and memory usage profiles. These profiles are fed back to the LLM, which then revises the code to reduce overhead. To evaluate the effectiveness of EffiLearner, we conduct extensive experiments on EffiBench and two commonly used code generation benchmarks with 16 open-source and 6 closed-source models. Our evaluation results demonstrate that through iterative self-optimization, EffiLearner significantly enhances the efficiency of LLM-generated code. For example, the execution time (ET) of StarCoder2-15B on EffiBench decreases from 0.93 s to 0.12 s, an 87.1\\% reduction relative to the initial code. 
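The dual cone condition in the DCGD entry can be instantiated simply: when the PDE-residual and boundary gradients conflict, strip the conflicting component from each before combining, so the update has non-negative inner product with both. One simple instantiation under that reading, not the paper's full method:

```python
# Hedged sketch of a dual-cone-style gradient combination.
import numpy as np

def dual_cone_update(g_pde, g_bc):
    if np.dot(g_pde, g_bc) >= 0:
        return g_pde + g_bc                      # no conflict: plain sum
    # project each gradient onto the orthogonal complement of the other,
    # removing the conflicting component before combining
    g1 = g_pde - np.dot(g_pde, g_bc) / np.dot(g_bc, g_bc) * g_bc
    g2 = g_bc - np.dot(g_bc, g_pde) / np.dot(g_pde, g_pde) * g_pde
    return g1 + g2

g = dual_cone_update(np.array([1.0, 0.0]), np.array([-0.5, 1.0]))
# both inner products are non-negative, i.e. g lies in the dual cone
print(g, np.dot(g, [1.0, 0.0]) >= 0, np.dot(g, [-0.5, 1.0]) >= 0)
```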
The total memory usage (TMU) of StarCoder2-15B also decreases from 22.02 (Mb*s) to 2.03 (Mb*s), a 90.8\\% reduction in total memory consumption during execution.", "pdf": "https://openreview.net/pdf/f2947a3beaa66d7c6b94670eb7ebc3fe56e1e81b.pdf"} {"title": "Linearly Decomposing and Recomposing Vision Transformers for Diverse-Scale Models", "url": "https://openreview.net/forum?id=Yhd0yzC8yD", "detail_url": "https://openreview.net/forum?id=Yhd0yzC8yD", "authors": "Shuxia Lin,Miaosen Zhang,Ruiming Chen,Xu Yang,Qiufeng Wang,Xin Geng", "tags": "NIPS 2024,Poster", "abstract": "Vision Transformers (ViTs) are widely used in a variety of applications, while they usually have a fixed architecture that may not match the varying computational resources of different deployment environments. Thus, it is necessary to adapt ViT architectures to devices with diverse computational overheads to achieve an accuracy-efficiency trade-off. This concept is consistent with the motivation behind Learngene. To achieve this, inspired by polynomial decomposition in calculus, where a function can be approximated by linearly combining several basic components, we propose to linearly decompose the ViT model into a set of components called learngenes during element-wise training. These learngenes can then be recomposed into differently scaled, pre-initialized models to satisfy different computational resource constraints. Such a decomposition-recomposition strategy provides an economical and flexible approach to generating different scales of ViT models for different deployment scenarios. Compared to model compression or training from scratch, which require repeated training on large datasets for diverse-scale models, such a strategy reduces computational costs since it only requires training on large datasets once. Extensive experiments validate the effectiveness of our method: ViTs can be decomposed and the decomposed learngenes can be recomposed into diverse-scale ViTs, which can achieve comparable or better performance compared to traditional model compression and pre-training methods. The code for our experiments is available in the supplemental material.", "pdf": "https://openreview.net/pdf/4c2e5359333d526a7851cb3d502c438c85ec5ebe.pdf"} {"title": "Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective", "url": "https://openreview.net/forum?id=1v4gKsyGfe", "detail_url": "https://openreview.net/forum?id=1v4gKsyGfe", "authors": "Akiyoshi Tomihari,Issei Sato", "tags": "NIPS 2024,Poster", "abstract": "The two-stage fine-tuning (FT) method, linear probing (LP) then fine-tuning (LP-FT), outperforms linear probing and FT alone. This holds true for both in-distribution (ID) and out-of-distribution (OOD) data. One key reason for its success is the preservation of pre-trained features, achieved by obtaining a near-optimal linear head during LP. However, despite the widespread use of large language models, there has been limited exploration of more complex architectures such as Transformers. In this paper, we analyze the training dynamics of LP-FT for classification tasks on the basis of the neural tangent kernel (NTK) theory. Our analysis decomposes the NTK matrix into two components. This decomposition highlights the importance of the linear head norm alongside the prediction accuracy at the start of the FT stage. We also observe a significant increase in the linear head norm during LP, which stems from training with the cross-entropy (CE) loss. 
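The EffiLearner loop described above reduces to: execute, profile, feed the profile back, repeat. A hedged skeleton of that loop, with `revise_with_llm` as a placeholder for whatever LLM client is in use; the profiling here captures wall-clock time only, not the paper's full overhead profile.

```python
# Skeleton of an execute-profile-revise self-optimization loop.
import subprocess, sys, time

def profile(code: str) -> str:
    start = time.perf_counter()
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=30)
    elapsed = time.perf_counter() - start
    return f"execution time: {elapsed:.3f}s\nstderr: {proc.stderr[:500]}"

def revise_with_llm(code: str, report: str) -> str:
    # placeholder: a real implementation would prompt an LLM with the
    # program and its overhead profile and return a revised program
    return code

code = "print(sum(i * i for i in range(10**6)))"
for _ in range(3):                       # iterative self-optimization
    report = profile(code)
    code = revise_with_llm(code, report)
```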
This increase in the linear head norm effectively reduces changes in learned features. Furthermore, we find that this increased norm can adversely affect model calibration, which can be corrected using temperature scaling. Additionally, we extend our analysis with the NTK to the low-rank adaptation (LoRA) method and validate its effectiveness. Our experiments using a Transformer-based model on multiple natural language processing datasets confirm our theoretical analysis. Our study demonstrates the effectiveness of LP-FT for fine-tuning language models. Code is available at https://github.com/tom4649/lp-ft_ntk.", "pdf": "https://openreview.net/pdf/beb4c978b427cff9a115e27e93479c660b2889c9.pdf"} {"title": "Recurrent Reinforcement Learning with Memoroids", "url": "https://openreview.net/forum?id=nA4Q983a1v", "detail_url": "https://openreview.net/forum?id=nA4Q983a1v", "authors": "Steven Morad,Chris Lu,Ryan Kortvelesy,Stephan Liwicki,Jakob Nicolaus Foerster,Amanda Prorok", "tags": "NIPS 2024,Poster", "abstract": "Memory models such as Recurrent Neural Networks (RNNs) and Transformers address Partially Observable Markov Decision Processes (POMDPs) by mapping trajectories to latent Markov states. Neither model scales particularly well to long sequences, especially compared to an emerging class of memory models called Linear Recurrent Models. We discover that the recurrent update of these models resembles a monoid, leading us to reformulate existing models using a novel monoid-based framework that we call memoroids. We revisit the traditional approach to batching in recurrent reinforcement learning, highlighting theoretical and empirical deficiencies. We leverage memoroids to propose a batching method that improves sample efficiency, increases the return, and simplifies the implementation of recurrent loss functions in reinforcement learning.", "pdf": "https://openreview.net/pdf/35e29d8eea598ce21a9717776bdf8cdf9000f2bb.pdf"} {"title": "Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning", "url": "https://openreview.net/forum?id=z6KNvOe9zQ", "detail_url": "https://openreview.net/forum?id=z6KNvOe9zQ", "authors": "Chenyu Yang,Xizhou Zhu,Jinguo Zhu,Weijie Su,Junjie Wang,Xuan Dong,Wenhai Wang,Bin Li,Jie Zhou,Yu Qiao,Jifeng Dai", "tags": "NIPS 2024,Poster", "abstract": "Recently, vision model pre-training has evolved from relying on manually annotated datasets to leveraging large-scale, web-crawled image-text data. Despite these advances, there is no pre-training method that effectively exploits the interleaved image-text data, which is very prevalent on the Internet. Inspired by the recent success of compression learning in natural language processing, we propose a novel vision model pre-training method called Latent Compression Learning (LCL) for interleaved image-text data. This method performs latent compression learning by maximizing the mutual information between the inputs and outputs of a causal attention model. The training objective can be decomposed into two basic tasks: 1) contrastive learning between visual representation and preceding context, and 2) generating subsequent text based on visual representation. 
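The LP-FT procedure analyzed in the entry above is mechanically simple; a minimal PyTorch sketch, with a toy MLP standing in for a Transformer backbone and random tensors standing in for data:

```python
# Minimal sketch of linear probing (LP) then fine-tuning (FT).
import torch

backbone = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
head = torch.nn.Linear(64, 10)
x, y = torch.randn(256, 128), torch.randint(0, 10, (256,))

def train(params, steps):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(head(backbone(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()

for p in backbone.parameters():
    p.requires_grad_(False)
train(head.parameters(), steps=100)            # LP: near-optimal head on frozen features
for p in backbone.parameters():
    p.requires_grad_(True)
train(list(backbone.parameters()) + list(head.parameters()), steps=100)  # FT
```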
Our experiments demonstrate that our method not only matches the performance of CLIP on paired pre-training datasets (e.g., LAION), but can also leverage interleaved pre-training data (e.g., MMC4) to learn robust visual representations from scratch, showcasing the potential of vision model pre-training with interleaved image-text data.", "pdf": "https://openreview.net/pdf/d285c73049fe1865644d41380d358b90dcf98a21.pdf"} {"title": "A Boosting-Type Convergence Result for AdaBoost.MH with Factorized Multi-Class Classifiers", "url": "https://openreview.net/forum?id=7Lv8zHQWwS", "detail_url": "https://openreview.net/forum?id=7Lv8zHQWwS", "authors": "Xin Zou,Zhengyu Zhou,Jingyuan Xu,Weiwei Liu", "tags": "NIPS 2024,Poster", "abstract": "AdaBoost is a well-known algorithm in boosting. Schapire and Singer propose an extension of AdaBoost, named AdaBoost.MH, for multi-class classification problems. Kégl shows empirically that AdaBoost.MH works better when the classical one-against-all base classifiers are replaced by factorized base classifiers containing a binary classifier and a vote (or code) vector. However, the factorization makes it much more difficult to provide a convergence result for the factorized version of AdaBoost.MH. Kégl therefore raises an open problem at COLT 2014, asking for a convergence result for the factorized AdaBoost.MH. In this work, we resolve this open problem by presenting a convergence result for AdaBoost.MH with factorized multi-class classifiers.", "pdf": "https://openreview.net/pdf/903e7df5f41ca48ccc3a8443bad79290b84c00a2.pdf"} {"title": "MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models", "url": "https://openreview.net/forum?id=dIVb5C0QFf", "detail_url": "https://openreview.net/forum?id=dIVb5C0QFf", "authors": "Kailai Yang,Zhiwei Liu,Qianqian Xie,Jimin Huang,Tianlin Zhang,Sophia Ananiadou", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in large language models (LLMs) focus on aligning to heterogeneous human expectations and values via multi-objective preference alignment. However, existing methods are dependent on the policy model parameters, which require high-cost repetition of their alignment algorithms for each new policy model, and they cannot expand to unseen objectives due to their static alignment objectives. In this work, we propose Meta-Objective Aligner (MetaAligner), the first policy-agnostic and generalizable method for multi-objective preference alignment.\nMetaAligner models multi-objective alignment into three stages: (1) dynamic objectives reformulation algorithm reorganizes traditional alignment datasets to supervise the model on performing flexible alignment across different objectives; (2) conditional weak-to-strong correction paradigm aligns the weak outputs of fixed policy models to approach strong outputs with higher preferences in the corresponding alignment objectives, enabling plug-and-play inferences on any policy models, which significantly reduces training costs and facilitates alignment on close-source policy models; (3) generalizable inference method flexibly adjusts target objectives by updating their text descriptions in the prompts, facilitating generalizable alignment to unseen objectives.\nExperimental results show that MetaAligner achieves significant and balanced improvements in multi-objective alignments on 10 state-of-the-art policy models, and saves up to 93.63% of GPU training hours compared to previous alignment methods. 
The model also effectively aligns unseen objectives, marking the first step towards generalizable multi-objective preference alignment.", "pdf": "https://openreview.net/pdf/2b50ca7c02b87462ff14e67088e45c7f68826c20.pdf"} {"title": "Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach", "url": "https://openreview.net/forum?id=qamfjyhPeg", "detail_url": "https://openreview.net/forum?id=qamfjyhPeg", "authors": "Yarin Bar,Shalev Shaer,Yaniv Romano", "tags": "NIPS 2024,Poster", "abstract": "We present a novel approach for test-time adaptation via online self-training, consisting of two components. First, we introduce a statistical framework that detects distribution shifts in the classifier's entropy values obtained on a stream of unlabeled samples. Second, we devise an online adaptation mechanism that utilizes the evidence of distribution shifts captured by the detection tool to dynamically update the classifier's parameters. The resulting adaptation process drives the distribution of test entropy values obtained from the self-trained classifier to match those of the source domain, building invariance to distribution shifts. This approach departs from the conventional self-training method, which focuses on minimizing the classifier's entropy. Our approach combines concepts in betting martingales and online learning to form a detection tool capable of quickly reacting to distribution shifts. We then reveal a tight relation between our adaptation scheme and optimal transport, which forms the basis of our novel self-supervised loss. Experimental results demonstrate that our approach improves test-time accuracy under distribution shifts while maintaining accuracy and calibration in their absence, outperforming leading entropy minimization methods across various scenarios.", "pdf": "https://openreview.net/pdf/3b05ac83a1fb5d521f4feec8aa525ecf3c343fcd.pdf"} {"title": "Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection", "url": "https://openreview.net/forum?id=yktQNqtepd", "detail_url": "https://openreview.net/forum?id=yktQNqtepd", "authors": "Chaoda Zheng,Feng Wang,Naiyan Wang,Shuguang Cui,Zhen Li", "tags": "NIPS 2024,Poster", "abstract": "While 3D object bounding box (bbox) representation has been widely used in autonomous driving perception, it lacks the ability to capture the precise details of an object's intrinsic geometry. Recently, occupancy has emerged as a promising alternative for 3D scene perception. However, constructing a high-resolution occupancy map remains infeasible for large scenes due to computational constraints. Recognizing that foreground objects only occupy a small portion of the scene, we introduce object-centric occupancy as a supplement to object bboxes. This representation not only provides intricate details for detected objects but also enables higher voxel resolution in practical applications. We advance the development of object-centric occupancy perception from both data and algorithm perspectives. On the data side, we construct the first object-centric occupancy dataset from scratch using an automated pipeline. From the algorithmic standpoint, we introduce a novel object-centric occupancy completion network equipped with an implicit shape decoder that manages dynamic-size occupancy generation. This network accurately predicts the complete object-centric occupancy volume for inaccurate object proposals by leveraging temporal information from long sequences. 
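A hedged sketch of the betting-martingale flavor of shift detection in the entry above: map each test-time entropy to a p-value against source calibration entropies, then bet against the uniformity of those p-values; wealth crossing 1/alpha signals a shift by Ville's inequality. Illustrative only, not the paper's exact test or adaptation mechanism.

```python
# Toy betting-style detector of distribution shift in entropy values.
import numpy as np

rng = np.random.default_rng(0)
source_entropy = rng.normal(1.0, 0.2, size=5000)      # calibration entropies

def p_value(e):                                       # empirical right-tail p-value
    return (1 + np.sum(source_entropy >= e)) / (1 + len(source_entropy))

wealth, lam, alpha = 1.0, 0.5, 0.01
for t in range(200):
    e = rng.normal(1.0 if t < 100 else 1.6, 0.2)      # shift kicks in at t=100
    wealth *= 1.0 + lam * (0.5 - p_value(e))          # fair bet under no shift
    if wealth > 1.0 / alpha:
        print("distribution shift detected at step", t)
        break
```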
Our method demonstrates robust performance in completing object shapes under noisy detection and tracking conditions. Additionally, we show that our occupancy features significantly enhance the detection results of state-of-the-art 3D object detectors, especially for incomplete or distant objects in the Waymo Open Dataset.", "pdf": "https://openreview.net/pdf/7caaf2ac1f758304a70b57129814d809e45dc1b5.pdf"} {"title": "Dealing with Synthetic Data Contamination in Online Continual Learning", "url": "https://openreview.net/forum?id=Lc8gemv97Y", "detail_url": "https://openreview.net/forum?id=Lc8gemv97Y", "authors": "Maorong Wang,Nicolas Michel,Jiafeng Mao,Toshihiko Yamasaki", "tags": "NIPS 2024,Poster", "abstract": "Image generation has shown remarkable results in generating high-fidelity realistic images, in particular with the advancement of diffusion-based models. However, the prevalence of AI-generated images may have side effects for the machine learning community that are not clearly identified. Meanwhile, the success of deep learning in computer vision is driven by the massive dataset collected on the Internet. The extensive quantity of synthetic data being added to the Internet would become an obstacle for future researchers to collect \"clean\" datasets without AI-generated content. Prior research has shown that training on datasets contaminated by synthetic images may result in performance degradation. In this paper, we investigate the potential impact of contaminated datasets on Online Continual Learning (CL) research. We experimentally show that contaminated datasets might hinder the training of existing online CL methods. Also, we propose Entropy Selection with Real-synthetic similarity Maximization (ESRM), a method to alleviate the performance deterioration caused by synthetic images when training online CL models. Experiments show that our method can significantly alleviate performance deterioration, especially when the contamination is severe. For reproducibility, the source code of our work is available at https://github.com/maorong-wang/ESRM.", "pdf": "https://openreview.net/pdf/eeeaa1a535b3be8d4d63b515ebc76e0021839555.pdf"} {"title": "Scaling the Codebook Size of VQ-GAN to 100,000 with a Utilization Rate of 99%", "url": "https://openreview.net/forum?id=RbU10yvkk6", "detail_url": "https://openreview.net/forum?id=RbU10yvkk6", "authors": "Lei Zhu,Fangyun Wei,Yanye Lu,Dong Chen", "tags": "NIPS 2024,Poster", "abstract": "In the realm of image quantization exemplified by VQGAN, the process encodes images into discrete tokens drawn from a codebook with a predefined size. Recent advancements, particularly with LLAMA 3, reveal that enlarging the codebook significantly enhances model performance. However, VQGAN and its derivatives, such as VQGAN-FC (Factorized Codes) and VQGAN-EMA, continue to grapple with challenges related to expanding the codebook size and enhancing codebook utilization. For instance, VQGAN-FC is restricted to learning a codebook with a maximum size of 16,384, maintaining a typically low utilization rate of less than 12% on ImageNet. In this work, we propose a novel image quantization model named VQGAN-LC (Large Codebook), which extends the codebook size to 100,000, achieving a utilization rate exceeding 99%. Unlike previous methods that optimize each codebook entry, our approach begins with a codebook initialized with 100,000 features extracted by a pre-trained vision encoder. 
Optimization then focuses on training a projector that aligns the entire codebook with the feature distributions of the encoder in VQGAN-LC. We demonstrate the superior performance of our model over its counterparts across a variety of tasks, including image reconstruction, image classification, auto-regressive image generation using GPT, and image creation with diffusion- and flow-based generative models.", "pdf": "https://openreview.net/pdf/80f6d72624e2ec1298abe304c5eca0b278032ce8.pdf"} {"title": "Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise", "url": "https://openreview.net/forum?id=JrIPBXWiS8", "detail_url": "https://openreview.net/forum?id=JrIPBXWiS8", "authors": "Zhenning Shi,Haoshuai Zheng,Chen Xu,Changsheng Dong,Bin Pan,Xie xueshuo,Along He,Tao Li,Huazhu Fu", "tags": "NIPS 2024,Poster", "abstract": "Recently, research on denoising diffusion models has expanded its application to the field of image restoration. Traditional diffusion-based image restoration methods utilize degraded images as conditional input to effectively guide the reverse generation process, without modifying the original denoising diffusion process. However, since the degraded images already include low-frequency information, starting from Gaussian white noise will result in increased sampling steps. We propose Resfusion, a general framework that incorporates the residual term into the diffusion forward process, starting the reverse process directly from the noisy degraded images. The form of our inference process is consistent with the DDPM. We introduce a weighted residual noise, named resnoise, as the prediction target and explicitly provide the quantitative relationship between the residual term and the noise term in resnoise. By leveraging a smooth equivalence transformation, Resfusion determines the optimal acceleration step and maintains the integrity of existing noise schedules, unifying the training and inference processes. The experimental results demonstrate that Resfusion exhibits competitive performance on the ISTD, LOL, and Raindrop datasets with only five sampling steps. Furthermore, Resfusion can be easily applied to image generation and emerges with strong versatility. Our code and model are available at https://github.com/nkicsl/Resfusion.", "pdf": "https://openreview.net/pdf/6ad207f44bca0f57ab8216b4e6b359253b763e6a.pdf"} {"title": "D2R2: Diffusion-based Representation with Random Distance Matching for Tabular Few-shot Learning", "url": "https://openreview.net/forum?id=lS9e36lkxG", "detail_url": "https://openreview.net/forum?id=lS9e36lkxG", "authors": "Ruoxue Liu,Linjiajie Fang,Wenjia Wang,Bingyi Jing", "tags": "NIPS 2024,Poster", "abstract": "Tabular data is utilized in a wide range of real-world applications. The challenge of few-shot learning with tabular data stands as a crucial problem in both industry and academia, due to the high cost or even impossibility of annotating additional samples. However, the inherent heterogeneity of tabular features, combined with the scarcity of labeled data, presents a significant challenge in tabular few-shot classification. In this paper, we propose a novel approach named Diffusion-based Representation with Random Distance matching (D2R2) for tabular few-shot learning. D2R2 leverages the powerful expression ability of diffusion models to extract essential semantic knowledge crucial for the denoising process. This semantic knowledge proves beneficial in few-shot downstream tasks. 
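The quantization step the VQGAN-LC entry describes, a frozen codebook of pre-extracted features with only a projector trained to align it to the encoder's space, can be sketched as follows. Shapes, the commitment loss, and the straight-through detail are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of frozen-codebook quantization with a trainable projector.
import torch

codebook = torch.randn(100_000, 768)          # frozen: pre-trained vision features
projector = torch.nn.Linear(768, 256)         # the only trainable piece here

z = torch.randn(32, 256)                      # encoder outputs for 32 patches
book = projector(codebook)                    # project the whole codebook
idx = torch.cdist(z, book).argmin(dim=1)      # nearest-neighbour code assignment
z_q = book[idx]
z_q = z + (z_q - z).detach()                  # straight-through estimator
commit_loss = torch.mean((z_q.detach() - z) ** 2)
```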
During the training process of our designed diffusion model, we introduce a random distance matching to preserve distance information in the embeddings, thereby improving effectiveness for classification. During the classification stage, we introduce an instance-wise iterative prototype scheme to improve performance by accommodating the multimodality of embeddings and increasing clustering robustness. Our experiments reveal the significant efficacy of D2R2 across various tabular few-shot learning benchmarks, demonstrating its state-of-the-art performance in this field.", "pdf": "https://openreview.net/pdf/41a59ee8015cb8195a7efc2d4ad6ee9223b8efcc.pdf"} {"title": "Information Re-Organization Improves Reasoning in Large Language Models", "url": "https://openreview.net/forum?id=SciWuYPNG0", "detail_url": "https://openreview.net/forum?id=SciWuYPNG0", "authors": "Xiaoxia Cheng,Zeqi Tan,Wei Xue,Weiming Lu", "tags": "NIPS 2024,Poster", "abstract": "Improving the reasoning capabilities of large language models (LLMs) has attracted considerable interest. Recent approaches primarily focus on improving the reasoning process to yield a more precise final answer. However, in scenarios involving contextually aware reasoning, these methods neglect the importance of first identifying logical relationships from the context before proceeding with the reasoning. This oversight could lead to a superficial understanding and interaction with the context, potentially undermining the quality and reliability of the reasoning outcomes. In this paper, we propose an information re-organization (\\textbf{InfoRE}) method before proceeding with the reasoning to enhance the reasoning ability of LLMs. Our re-organization method involves initially extracting logical relationships from the contextual content, such as documents or paragraphs, and subsequently pruning redundant content to minimize noise. Then, we utilize the re-organized information in the reasoning process. This enables LLMs to deeply understand the contextual content by clearly perceiving these logical relationships, while also ensuring high-quality responses by eliminating potential noise. To demonstrate the effectiveness of our approach in improving the reasoning ability, we conduct experiments using Llama2-70B, GPT-3.5, and GPT-4 on various contextually aware multi-hop reasoning tasks. Using only a zero-shot setting, our method achieves an average absolute improvement of 4\\% across all tasks, highlighting its potential to improve the reasoning performance of LLMs.", "pdf": "https://openreview.net/pdf/ae474d512ac2929e2786df8b77389304cfb6c4ba.pdf"} {"title": "Efficient LLM Scheduling by Learning to Rank", "url": "https://openreview.net/forum?id=wlLjYl0Gi6", "detail_url": "https://openreview.net/forum?id=wlLjYl0Gi6", "authors": "Yichao Fu,Siqi Zhu,Runlong Su,Aurick Qiao,Ion Stoica,Hao Zhang", "tags": "NIPS 2024,Poster", "abstract": "In Large Language Model (LLM) inference, the output length of an LLM request is typically regarded as not known a priori. Consequently, most LLM serving systems employ a simple First-come-first-serve (FCFS) scheduling strategy, leading to Head-Of-Line (HOL) blocking and reduced throughput and service quality. \nIn this paper, we reexamine this assumption -- we show that, although predicting the exact generation length of each request is infeasible, it is possible to predict the relative ranks of output lengths in a batch of requests, using learning to rank. 
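The instance-wise iterative prototype scheme mentioned in the D2R2 entry is, generically, prototype classification with refinement: initialize class prototypes from support embeddings, then re-estimate them using confidently assigned queries. A hedged NumPy sketch of that generic idea, not the paper's exact scheme:

```python
# Generic iterative-prototype refinement for few-shot classification.
import numpy as np

def iterative_prototypes(sup_x, sup_y, qry_x, n_cls, iters=5):
    protos = np.stack([sup_x[sup_y == c].mean(0) for c in range(n_cls)])
    for _ in range(iters):
        d = ((qry_x[:, None, :] - protos[None]) ** 2).sum(-1)  # query-proto dists
        assign = d.argmin(1)
        for c in range(n_cls):              # re-estimate with assigned queries
            members = np.vstack([sup_x[sup_y == c], qry_x[assign == c]])
            protos[c] = members.mean(0)
    return assign

sup_x = np.random.randn(10, 32); sup_y = np.repeat([0, 1], 5)
qry_x = np.random.randn(20, 32)
print(iterative_prototypes(sup_x, sup_y, qry_x, n_cls=2))
```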
The ranking information offers valuable guidance for scheduling requests. Building on this insight, we develop a novel scheduler for LLM inference and serving that can approximate the shortest-job-first (SJF) schedule better than existing approaches. We integrate this scheduler with the state-of-the-art LLM serving system and show significant performance improvement in several important applications: 2.8x lower latency in chatbot serving and 6.5x higher throughput in synthetic data generation. Our code is available at https://github.com/hao-ai-lab/vllm-ltr.git", "pdf": "https://openreview.net/pdf/ef9ade264c14ae815c219f762df83610938eb101.pdf"} {"title": "FedGMKD: An Efficient Prototype Federated Learning Framework through Knowledge Distillation and Discrepancy-Aware Aggregation", "url": "https://openreview.net/forum?id=c3OZBJpN7M", "detail_url": "https://openreview.net/forum?id=c3OZBJpN7M", "authors": "Jianqiao Zhang,Caifeng Shan,Jungong Han", "tags": "NIPS 2024,Poster", "abstract": "Federated Learning (FL) faces significant challenges due to data heterogeneity across distributed clients. To address this, we propose FedGMKD, a novel framework that combines knowledge distillation and differential aggregation for efficient prototype-based personalized FL without the need for public datasets or server-side generative models. FedGMKD introduces Cluster Knowledge Fusion, utilizing Gaussian Mixture Models to generate prototype features and soft predictions on the client side, enabling effective knowledge distillation while preserving data privacy. Additionally, we implement a Discrepancy-Aware Aggregation Technique that weights client contributions based on data quality and quantity, enhancing the global model's generalization across diverse client distributions. Theoretical analysis confirms the convergence of FedGMKD. Extensive experiments on benchmark datasets, including SVHN, CIFAR-10, and CIFAR-100, demonstrate that FedGMKD outperforms state-of-the-art methods, significantly improving both local and global accuracy in non-IID data settings.", "pdf": "https://openreview.net/pdf/b180442ab802a95745b6663635fedf836bad910c.pdf"} {"title": "Unified Graph Augmentations for Generalized Contrastive Learning on Graphs", "url": "https://openreview.net/forum?id=jgkKroLxeC", "detail_url": "https://openreview.net/forum?id=jgkKroLxeC", "authors": "Jiaming Zhuo,Yintong Lu,Hui Ning,Kun Fu,Bingxin Niu,Dongxiao He,Chuan Wang,Yuanfang Guo,Zhen Wang,Xiaochun Cao,Liang Yang", "tags": "NIPS 2024,Poster", "abstract": "In real-world scenarios, networks (graphs) and their tasks possess unique characteristics, requiring the development of a versatile graph augmentation (GA) to meet the varied demands of network analysis. Unfortunately, most Graph Contrastive Learning (GCL) frameworks are hampered by the specificity, complexity, and incompleteness of their GA techniques. Firstly, GAs designed for specific scenarios may compromise the universality of models if mishandled. Secondly, the process of identifying and generating optimal augmentations generally involves substantial computational overhead. Thirdly, the effectiveness of the GCL, even the learnable ones, is constrained by the finite selection of GAs available. To overcome the above limitations, this paper introduces a novel unified GA module dubbed UGA after reinterpreting the mechanism of GAs in GCLs from a message-passing perspective. Theoretically, this module is capable of unifying any explicit GAs, including node, edge, attribute, and subgraph augmentations. 
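The scheduling idea in the entry above needs only relative ranks, not exact lengths: score each pending request's likely output length and serve shortest-predicted-first, approximating SJF. A toy sketch with `predict_length_score` as a stand-in for the learned ranking model:

```python
# Toy rank-based scheduler approximating shortest-job-first (SJF).
import heapq

def predict_length_score(prompt: str) -> float:
    return float(len(prompt))          # placeholder for the learned ranker

def schedule(requests):
    queue = [(predict_length_score(r), i, r) for i, r in enumerate(requests)]
    heapq.heapify(queue)               # serve lowest predicted score first
    while queue:
        _, _, req = heapq.heappop(queue)
        yield req

for r in schedule(["long " * 40, "hi", "summarize this paragraph ..."]):
    print(r[:20])
```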
Building on the proposed UGA, we further propose a generalized GCL framework dubbed Graph cOntrastive UnifieD Augmentations (GOUDA). It seamlessly integrates widely adopted contrastive losses and a newly introduced independence loss to fulfill the common requirements of consistency and diversity of augmentation across diverse scenarios. Evaluations across various datasets and tasks demonstrate the generality and efficiency of the proposed GOUDA over existing state-of-the-art GCLs.", "pdf": "https://openreview.net/pdf/47bddc87562b53c0ec8f652b511422847f3fe88e.pdf"} {"title": "Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance", "url": "https://openreview.net/forum?id=1l9cEyFmxg", "detail_url": "https://openreview.net/forum?id=1l9cEyFmxg", "authors": "Jiwan Hur,Dong-Jae Lee,Gyojin Han,Jaehyun Choi,Yunho Jeon,Junmo Kim", "tags": "NIPS 2024,Poster", "abstract": "Masked generative models (MGMs) have shown impressive generative ability while requiring an order of magnitude fewer sampling steps than continuous diffusion models. However, MGMs still underperform in image synthesis compared to recent well-developed continuous diffusion models of similar size in terms of quality and diversity of generated samples. A key factor in the performance of continuous diffusion models stems from the guidance methods, which enhance the sample quality at the expense of diversity. In this paper, we extend these guidance methods to a generalized guidance formulation for MGMs and propose a self-guidance sampling method, which leads to better generation quality. The proposed approach leverages an auxiliary task for semantic smoothing in vector-quantized token space, analogous to the Gaussian blur in continuous pixel space. Equipped with the parameter-efficient fine-tuning method and high-temperature sampling, MGMs with the proposed self-guidance achieve a superior quality-diversity trade-off, outperforming existing sampling methods in MGMs with more efficient training and sampling costs. Extensive experiments with various sampling hyperparameters confirm the effectiveness of the proposed self-guidance.", "pdf": "https://openreview.net/pdf/92196d574ac8e75adcd1664063c20a95a258b369.pdf"} {"title": "Mind the Gap Between Prototypes and Images in Cross-domain Finetuning", "url": "https://openreview.net/forum?id=JWLiK3kKWQ", "detail_url": "https://openreview.net/forum?id=JWLiK3kKWQ", "authors": "Hongduan Tian,Feng Liu,Zhanke Zhou,Tongliang Liu,Chengqi Zhang,Bo Han", "tags": "NIPS 2024,Poster", "abstract": "In _cross-domain few-shot classification_ (CFC), recent works mainly focus on adapting a simple transformation head on top of a frozen pre-trained backbone with few labeled data to project embeddings into a task-specific metric space where classification can be performed by measuring similarities between image instance and prototype representations. Technically, an _assumption_ implicitly adopted in such a framework is that the prototype and image instance embeddings share the same representation transformation. However, in this paper, we find that there naturally exists a gap, which resembles the modality gap, between the prototype and image instance embeddings extracted from the frozen pre-trained backbone, and simply applying the same transformation during the adaptation phase constrains exploring the optimal representation distributions and shrinks the gap between prototype and image representations.
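The gap just described motivates adapting *separate* transformations for prototypes and image instances, as the method introduced next does. A minimal sketch of that idea with a CLIP-style contrastive objective follows; the module name, dimensions, and training snippet are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

class AsymmetricAdapter(torch.nn.Module):
    """Sketch: distinct linear transformations for prototype and image-instance
    embeddings, trained with a CLIP-style contrastive loss. Names and sizes are
    assumptions for illustration only."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.proto_head = torch.nn.Linear(dim, dim)   # transformation for prototypes
        self.image_head = torch.nn.Linear(dim, dim)   # transformation for image instances

    def forward(self, prototypes, images, temperature: float = 0.07):
        p = F.normalize(self.proto_head(prototypes), dim=-1)  # [C, d]
        x = F.normalize(self.image_head(images), dim=-1)      # [N, d]
        return x @ p.t() / temperature                        # [N, C] similarity logits

# usage: labels[i] indexes the prototype matching image i
adapter = AsymmetricAdapter()
protos, imgs = torch.randn(5, 512), torch.randn(8, 512)
labels = torch.randint(0, 5, (8,))
loss = F.cross_entropy(adapter(protos, imgs), labels)
loss.backward()
```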
To solve this problem, we propose a simple yet effective method, _contrastive prototype-image adaptation_ (CoPA), to adapt different transformations for prototypes and images similarly to CLIP by treating prototypes as text prompts. Extensive experiments on Meta-Dataset demonstrate that CoPA achieves _state-of-the-art_ performance more efficiently. Meanwhile, further analyses also indicate that CoPA can learn better representation clusters, enlarge the gap, and achieve the minimum validation loss at the enlarged gap.", "pdf": "https://openreview.net/pdf/6380b9942d38d2a521405e779c603d7120ace9b6.pdf"} {"title": "RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models", "url": "https://openreview.net/forum?id=R8mfn3rHd5", "detail_url": "https://openreview.net/forum?id=R8mfn3rHd5", "authors": "Xinchen Zhang,Ling Yang,YaQi Cai,Zhaochen Yu,Kai-Ni Wang,xie jiake,Ye Tian,Minkai Xu,Yong Tang,Yujiu Yang,Bin CUI", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have achieved remarkable advancements in text-to-image generation. However, existing models still have many difficulties when faced with multiple-object compositional generation. In this paper, we propose ***RealCompo***, a new *training-free* and *transfer-friendly* text-to-image generation framework, which aims to leverage the respective advantages of text-to-image models and spatial-aware image diffusion models (e.g., layout, keypoints and segmentation maps) to enhance both realism and compositionality of the generated images. An intuitive and novel *balancer* is proposed to dynamically balance the strengths of the two models in the denoising process, allowing plug-and-play use of any model without extra training. Extensive experiments show that our RealCompo consistently outperforms state-of-the-art text-to-image models and spatial-aware image diffusion models in multiple-object compositional generation while keeping satisfactory realism and compositionality of the generated images. Notably, our RealCompo can be seamlessly extended with a wide range of spatial-aware image diffusion models and stylized diffusion models. Code is available at: https://github.com/YangLing0818/RealCompo", "pdf": "https://openreview.net/pdf/3857cfea20b8f6475998ef1905a07ed3c5470dc8.pdf"} {"title": "Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization", "url": "https://openreview.net/forum?id=UVAq3uJ0gc", "detail_url": "https://openreview.net/forum?id=UVAq3uJ0gc", "authors": "Zhuanghua Liu,Luo Luo,Bryan Kian Hsiang Low", "tags": "NIPS 2024,Poster", "abstract": "Stochastic compositional optimization (SCO) is popular in many real-world applications, including risk management, reinforcement learning, and meta-learning. However, most of the previous methods for SCO require the smoothness assumption on both the outer and inner functions, which limits their applicability to a wider range of problems. In this paper, we study the SCO problem in which both the outer and inner functions are Lipschitz continuous but possibly nonconvex and nonsmooth. In particular, we propose gradient-free stochastic methods for finding the $(\\delta, \\epsilon)$-Goldstein stationary points of such problems with non-asymptotic convergence rates. Our results also lead to an improved convergence rate for the convex nonsmooth SCO problem.
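For readers unfamiliar with gradient-free methods of this kind, here is a minimal sketch of the classic two-point zeroth-order gradient estimator such approaches build on; the toy compositional objective and step size are assumptions, and the paper's actual estimator may differ in detail.

```python
import numpy as np

def zo_gradient(F, x, delta=1e-3, n_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate of a (possibly nonsmooth)
    objective F, averaged over random unit directions: the standard
    randomized-smoothing estimator, shown for illustration only."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (F(x + delta * u) - F(x - delta * u)) / (2 * delta) * u
    return d * g / n_samples

# toy compositional objective: nonsmooth outer |.| composed with an affine inner map
A = np.array([[1.0, 2.0], [0.5, -1.0]])
F = lambda x: np.abs(A @ x).sum()
x = np.array([1.0, -2.0])
for _ in range(100):
    x -= 0.05 * zo_gradient(F, x)   # simple gradient-free descent loop
```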
Furthermore, we conduct numerical experiments to demonstrate the effectiveness of the proposed methods.", "pdf": "https://openreview.net/pdf/a69ce9c228890fce5cf8f1a72320f55171d3b5be.pdf"} {"title": "Long-Range Feedback Spiking Network Captures Dynamic and Static Representations of the Visual Cortex under Movie Stimuli", "url": "https://openreview.net/forum?id=bxDok3uaK6", "detail_url": "https://openreview.net/forum?id=bxDok3uaK6", "authors": "Liwei Huang,Zhengyu Ma,Liutao Yu,Huihui Zhou,Yonghong Tian", "tags": "NIPS 2024,Poster", "abstract": "Deep neural networks (DNNs) are widely used models for investigating biological visual representations. However, existing DNNs are mostly designed to analyze neural responses to static images, relying on feedforward structures and lacking physiological neuronal mechanisms. There is limited insight into how the visual cortex represents natural movie stimuli that contain context-rich information. To address these problems, this work proposes the long-range feedback spiking network (LoRaFB-SNet), which mimics top-down connections between cortical regions and incorporates spike information processing mechanisms inherent to biological neurons. Taking into account the temporal dependence of representations under movie stimuli, we present Time-Series Representational Similarity Analysis (TSRSA) to measure the similarity between model representations and visual cortical representations of mice. LoRaFB-SNet exhibits the highest level of representational similarity, outperforming other well-known and leading alternatives across various experimental paradigms, especially when representing long movie stimuli. We further conduct experiments to quantify how temporal structures (dynamic information) and static textures (static information) of the movie stimuli influence representational similarity, suggesting that our model benefits from long-range feedback to encode context-dependent representations just like the brain. Altogether, LoRaFB-SNet is highly competent in capturing both dynamic and static representations of the mouse visual cortex and contributes to the understanding of movie processing mechanisms of the visual system. Our codes are available at https://github.com/Grasshlw/SNN-Neural-Similarity-Movie.", "pdf": "https://openreview.net/pdf/f06ba1d8b823b35c34bce028877dc14fe88c02cc.pdf"} {"title": "Stabilizing Zero-Shot Prediction: A Novel Antidote to Forgetting in Continual Vision-Language Tasks", "url": "https://openreview.net/forum?id=C4zmR2kyP8", "detail_url": "https://openreview.net/forum?id=C4zmR2kyP8", "authors": "Zijian Gao,Xingxing Zhang,Kele Xu,Xinjun Mao,Huaimin Wang", "tags": "NIPS 2024,Poster", "abstract": "Continual learning (CL) empowers pre-trained vision-language (VL) models to efficiently adapt to a sequence of downstream tasks. However, these models often encounter challenges in retaining previously acquired skills due to parameter shifts and limited access to historical data. In response, recent efforts focus on devising specific frameworks and various replay strategies, striving for a typical learning-forgetting trade-off. Surprisingly, both our empirical research and theoretical analysis demonstrate that the stability of the model in consecutive zero-shot predictions serves as a reliable indicator of its anti-forgetting capabilities for previously learned tasks. 
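The stability insight just stated can be made concrete with a small sketch: an exponential-moving-average copy of the adapter serves as the historical reference, and a divergence penalty on unlabeled "wild" data measures how stable zero-shot predictions stay. The KL form and all names below are assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(ema_params, params, beta=0.999):
    # Exponential moving average over (e.g. LoRA) adapter parameters.
    for pe, p in zip(ema_params, params):
        pe.mul_(beta).add_(p, alpha=1.0 - beta)

def zero_shot_stability_loss(model, ema_model, wild_images):
    """Penalize drift of zero-shot predictions on unlabeled wild data, here as
    a KL term against the EMA (historical) model; a sketch of the kind of
    regularizer the abstract describes, with the exact form assumed."""
    with torch.no_grad():
        ref = F.log_softmax(ema_model(wild_images), dim=-1)
    cur = F.log_softmax(model(wild_images), dim=-1)
    return F.kl_div(cur, ref, log_target=True, reduction="batchmean")
```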
Motivated by these insights, we develop a novel replay-free CL method named ZAF (Zero-shot Antidote to Forgetting), which preserves acquired knowledge through a zero-shot stability regularization applied to wild data in a plug-and-play manner. To enhance efficiency in adapting to new tasks and seamlessly access historical models, we introduce a parameter-efficient EMA-LoRA neural architecture based on the Exponential Moving Average (EMA). ZAF utilizes new data for low-rank adaptation (LoRA), complemented by a zero-shot antidote on wild data, effectively decoupling learning from forgetting. Our extensive experiments demonstrate ZAF's superior performance and robustness in pre-trained models across various continual VL concept learning tasks, achieving leads of up to 3.70\\%, 4.82\\%, and 4.38\\%, along with at least a 10x acceleration in training speed on three benchmarks, respectively. Additionally, our zero-shot antidote significantly reduces forgetting in existing models by at least 6.37\\%. Our code is available at https://github.com/Zi-Jian-Gao/Stabilizing-Zero-Shot-Prediction-ZAF.", "pdf": "https://openreview.net/pdf/1e4f89aa8989851c0193e90f12bf91641a6ce9a3.pdf"} {"title": "Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image", "url": "https://openreview.net/forum?id=UO7Mvch1Z5", "detail_url": "https://openreview.net/forum?id=UO7Mvch1Z5", "authors": "Kailu Wu,Fangfu Liu,Zhihan Cai,Runjie Yan,Hanyang Wang,Yating Hu,Yueqi Duan,Kaisheng Ma", "tags": "NIPS 2024,Poster", "abstract": "In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Previous methods based on Score Distillation Sampling (SDS) can produce diversified 3D results by distilling 3D knowledge from large 2D diffusion models, but they usually suffer from long per-case optimization time and inconsistency issues. Recent works address the problem and generate better 3D results either by finetuning a multi-view diffusion model or training a fast feed-forward model. However, they still lack intricate textures and complex geometries due to inconsistency and limited generated resolution. To simultaneously achieve high fidelity, consistency, and efficiency in single image-to-3D, we propose a novel framework Unique3D that includes a multi-view diffusion model with a corresponding normal diffusion model to generate multi-view images with their normal maps, a multi-level upscale process to progressively improve the resolution of generated orthographic multi-views, as well as an instant and consistent mesh reconstruction algorithm called ISOMER, which fully integrates the color and geometric priors into mesh results.
Extensive experiments demonstrate that our Unique3D significantly outperforms other image-to-3D baselines in terms of geometric and textural details.", "pdf": "https://openreview.net/pdf/8373449e05561d5537d1f25c6019e83aad831982.pdf"} {"title": "GACL: Exemplar-Free Generalized Analytic Continual Learning", "url": "https://openreview.net/forum?id=P6aJ7BqYlc", "detail_url": "https://openreview.net/forum?id=P6aJ7BqYlc", "authors": "Huiping Zhuang,Yizhu Chen,Di Fang,Run He,Kai Tong,Hongxin Wei,Ziqian Zeng,Cen Chen", "tags": "NIPS 2024,Poster", "abstract": "Class incremental learning (CIL) trains a network on sequential tasks with separated categories in each task but suffers from catastrophic forgetting, where models quickly lose previously learned knowledge when acquiring new tasks. The generalized CIL (GCIL) aims to address the CIL problem in a more real-world scenario, where incoming data have mixed data categories and unknown sample size distribution. Existing attempts at GCIL either perform poorly or invade data privacy by saving exemplars. In this paper, we propose a new exemplar-free GCIL technique named generalized analytic continual learning (GACL). The GACL adopts analytic learning (a gradient-free training technique) and delivers an analytical (i.e., closed-form) solution to the GCIL scenario. This solution is derived by decomposing the incoming data into exposed and unexposed classes, thereby attaining weight invariance, a rare yet valuable property supporting an equivalence between incremental learning and its joint training. Such an equivalence is crucial in GCIL settings as data distributions among different tasks no longer pose challenges to adopting our GACL. Theoretically, this equivalence property is validated through matrix analysis tools. Empirically, we conduct extensive experiments where, compared with existing GCIL methods, our GACL exhibits a consistently leading performance across various datasets and GCIL settings. Source code is available at https://github.com/CHEN-YIZHU/GACL.", "pdf": "https://openreview.net/pdf/7d7f8049c3a8d96f5824e696ca7a41551b337c51.pdf"} {"title": "Learning 1D Causal Visual Representation with De-focus Attention Networks", "url": "https://openreview.net/forum?id=LxRmdXf72k", "detail_url": "https://openreview.net/forum?id=LxRmdXf72k", "authors": "Chenxin Tao,Xizhou Zhu,Shiqian Su,Lewei Lu,Changyao Tian,Xuan Luo,Gao Huang,Hongsheng Li,Yu Qiao,Jie Zhou,Jifeng Dai", "tags": "NIPS 2024,Poster", "abstract": "Modality differences have led to the development of heterogeneous architectures for vision and language models. While images typically require 2D non-causal modeling, texts utilize 1D causal modeling. This distinction poses significant challenges in constructing unified multi-modal models. This paper explores the feasibility of representing images using 1D causal modeling. We identify an \"over-focus\" issue in existing 1D causal vision models, where attention overly concentrates on a small proportion of visual tokens. The issue of \"over-focus\" hinders the model's ability to extract diverse visual features and to receive effective gradients for optimization. To address this, we propose De-focus Attention Networks, which employ learnable bandpass filters to create varied attention patterns. During training, we introduce large, scheduled drop-path rates and, for global understanding tasks, an auxiliary loss on globally pooled features.
These two strategies encourage the model to attend to a broader range of tokens and enhance network optimization. Extensive experiments validate the efficacy of our approach, demonstrating that 1D causal visual representation can perform comparably to 2D non-causal representation in tasks such as global perception, dense prediction, and multi-modal understanding. Code shall be released.", "pdf": "https://openreview.net/pdf/90682e76c941e2d7256d58a477ad83e95ebe037c.pdf"} {"title": "MV2Cyl: Reconstructing 3D Extrusion Cylinders from Multi-View Images", "url": "https://openreview.net/forum?id=jDF2ZXI8AX", "detail_url": "https://openreview.net/forum?id=jDF2ZXI8AX", "authors": "Eunji Hong,Nguyen Minh Hieu,Mikaela Angelina Uy,Minhyuk Sung", "tags": "NIPS 2024,Poster", "abstract": "We present MV2Cyl, a novel method for reconstructing 3D from 2D multi-view images, not merely as a field or raw geometry but as a sketch-extrude CAD. Extracting extrusion cylinders from raw 3D geometry has been extensively researched in computer vision, while the processing of 3D data through neural networks has remained a bottleneck. Since 3D scans are generally accompanied by multi-view images, leveraging 2D convolutional neural networks allows these images to be exploited as a rich source for extracting extrusion cylinder information. However, we observe that extracting only the surface information of the extrudes and utilizing it results in suboptimal outcomes due to challenges in occlusion and surface segmentation. By synergizing it with the extracted base curve information, we achieve the optimal reconstruction result with the best accuracy in 2D sketch and extrude parameter estimation. Our experiments, comparing our method with previous work that takes a raw 3D point cloud as input, demonstrate the effectiveness of our approach by taking advantage of multi-view images.", "pdf": "https://openreview.net/pdf/6511548e802a6a8e7b6c5799bca4e41fba855761.pdf"} {"title": "Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval", "url": "https://openreview.net/forum?id=zZVqZRXSao", "detail_url": "https://openreview.net/forum?id=zZVqZRXSao", "authors": "Lixu Wang,Xinyu Du,Qi Zhu", "tags": "NIPS 2024,Poster", "abstract": "Cross-domain retrieval (CDR) is finding increasingly broad applications across various domains. However, existing efforts have several major limitations, with the most critical being their reliance on accurate supervision. Recent studies thus focus on achieving unsupervised CDR, but they typically assume that the category spaces across domains are identical, an assumption that is often unrealistic in real-world scenarios. This is because only through dedicated and comprehensive analysis can the category composition of a data domain be obtained, which contradicts the premise of unsupervised scenarios. Therefore, in this work, we introduce the problem of **U**niversal **U**nsupervised **C**ross-**D**omain **R**etrieval (U^2CDR) for the first time and design a two-stage semantic feature learning framework to address it. In the first stage, a cross-domain unified prototypical structure is established under the guidance of an instance-prototype-mixed contrastive loss and a semantic-enhanced loss, to counteract category space differences. In the second stage, through a modified adversarial training mechanism, we ensure minimal changes for the established prototypical structure during domain alignment, enabling more accurate nearest-neighbor searching.
Extensive experiments across multiple datasets and scenarios, including closed-set, partial, and open-set CDR, demonstrate that our approach significantly outperforms existing state-of-the-art CDR methods and other related methods in solving U^2CDR challenges.", "pdf": "https://openreview.net/pdf/cb8530b37653420a6b121610aa8e7adce3e7940a.pdf"} {"title": "SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering", "url": "https://openreview.net/forum?id=mXpq6ut8J3", "detail_url": "https://openreview.net/forum?id=mXpq6ut8J3", "authors": "John Yang,Carlos E Jimenez,Alexander Wettig,Kilian Lieret,Shunyu Yao,Karthik R Narasimhan,Ofir Press", "tags": "NIPS 2024,Poster", "abstract": "Language model agents are increasingly being used to automate complicated tasks in digital environments. Just as humans benefit from powerful software applications, such as integrated development environments, for complex tasks like software engineering, we posit that language model agents represent a new category of end users with their own needs and abilities, and would benefit from specially built interfaces to the software they use. We investigate how interface design affects the performance of language model agents. As a result of this exploration, we introduce SWE-agent: a system that enables language model agents to autonomously use computers to solve software engineering tasks. SWE-agent's custom agent-computer interface significantly enhances an agent's ability to create and edit code files, navigate entire repositories, and execute tests and other programs. We evaluate SWE-agent on SWE-bench and HumanEvalFix, achieving state-of-the-art performance on both with pass@1 rates of 12.5% and 87.7%, respectively, far exceeding the previous state-of-the-art achieved with non-interactive language models. Finally, we provide insight on how the design of the agent-computer interface can impact agents' behavior and performance.", "pdf": "https://openreview.net/pdf/7b9425730150fb166d4e6c77995f67ea38638fca.pdf"} {"title": "Online Feature Updates Improve Online (Generalized) Label Shift Adaptation", "url": "https://openreview.net/forum?id=HNH1ykRjXf", "detail_url": "https://openreview.net/forum?id=HNH1ykRjXf", "authors": "Ruihan Wu,Siddhartha Datta,Yi Su,Dheeraj Baby,Yu-Xiang Wang,Kilian Q Weinberger", "tags": "NIPS 2024,Poster", "abstract": "This paper addresses the prevalent issue of label shift in an online setting with missing labels, where data distributions change over time and obtaining timely labels is challenging. While existing methods primarily focus on adjusting or updating the final layer of a pre-trained classifier, we explore the untapped potential of enhancing feature representations using unlabeled data at test-time. Our novel method, Online Label Shift adaptation with Online Feature Updates (OLS-OFU), leverages self-supervised learning to refine the feature extraction process, thereby improving the prediction model. Through careful algorithm design, OLS-OFU theoretically maintains online regret convergence similar to results in the literature while taking the improved features into account.
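As background for the online label shift setting just described, here is a sketch of the classic confusion-matrix (black-box) label-shift correction that such methods refresh as the stream drifts; the feature-update component that is this paper's contribution is not shown, and the function names are illustrative.

```python
import numpy as np

def estimate_label_shift(confusion, test_pred_marginal):
    """Estimate target label marginals q(y) from the source confusion matrix
    C[i, j] = P(pred=i | true=j) and the predicted-label marginal on test
    data, via q = C^{-1} mu (the classic black-box shift estimator)."""
    q = np.linalg.solve(confusion, test_pred_marginal)
    q = np.clip(q, 0, None)
    return q / q.sum()

def reweight(probs, q_target, p_source):
    # Adapt the final layer by reweighting scores with q(y) / p(y).
    w = probs * (q_target / p_source)
    return w / w.sum(axis=1, keepdims=True)
```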
Empirically, it achieves substantial improvements over existing methods that are as significant as the gains existing methods achieve over the baseline (i.e., without distribution shift adaptation).", "pdf": "https://openreview.net/pdf/fd68b648c0193632794083c0fd7720a3a3e2a8ba.pdf"} {"title": "Event-3DGS: Event-based 3D Reconstruction Using 3D Gaussian Splatting", "url": "https://openreview.net/forum?id=EJZfcKXdiT", "detail_url": "https://openreview.net/forum?id=EJZfcKXdiT", "authors": "Haiqian Han,Jianing Li,Henglu Wei,Xiangyang Ji", "tags": "NIPS 2024,Poster", "abstract": "Event cameras, offering high temporal resolution and high dynamic range, have brought a new perspective to addressing 3D reconstruction challenges in fast-motion and low-light scenarios. Most methods use the Neural Radiance Field (NeRF) for event-based photorealistic 3D reconstruction. However, these NeRF methods suffer from time-consuming training and inference, as well as limited scene-editing capabilities of implicit representations. To address these problems, we propose Event-3DGS, the first event-based reconstruction using 3D Gaussian splatting (3DGS) for synthesizing novel views freely from event streams. Technically, we first propose an event-based 3DGS framework that directly processes event data and reconstructs 3D scenes by simultaneously optimizing scenario and sensor parameters. Then, we present a high-pass filter-based photovoltage estimation module, which effectively reduces noise in event data to improve the robustness of our method in real-world scenarios. Finally, we design an event-based 3D reconstruction loss to optimize the parameters of our method for better reconstruction quality. The results show that our method outperforms state-of-the-art methods in terms of reconstruction quality on both simulated and real-world datasets. We also verify that our method can perform robust 3D reconstruction even in real-world scenarios with extreme noise, fast motion, and low-light conditions. Our code is available at https://github.com/lanpokn/Event-3DGS.", "pdf": "https://openreview.net/pdf/d35d6156466eceae00b7b471d351ef43cc1b397e.pdf"} {"title": "CigTime: Corrective Instruction Generation Through Inverse Motion Editing", "url": "https://openreview.net/forum?id=gktA1Qycj9", "detail_url": "https://openreview.net/forum?id=gktA1Qycj9", "authors": "Qihang Fang,Chengcheng Tang,Bugra Tekin,Yanchao Yang", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in models linking natural language with human motions have shown significant promise in motion generation and editing based on instructional text. Motivated by applications in sports coaching and motor skill learning, we investigate the inverse problem: generating corrective instructional text, leveraging motion editing and generation models. We introduce a novel approach that, given a user's current motion (source) and the desired motion (target), generates text instructions to guide the user towards achieving the target motion. We leverage large language models to generate corrective texts and utilize existing motion generation and editing frameworks to compile datasets of triplets (source motion, target motion, and corrective text). Using this data, we propose a new motion-language model for generating corrective instructions. We present both qualitative and quantitative results across a diverse range of applications that largely improve upon baselines.
Our approach demonstrates its effectiveness in instructional scenarios, offering text-based guidance to correct and enhance user performance.", "pdf": "https://openreview.net/pdf/d58cc47949e92bf93bee9fde15407cc70fc91849.pdf"} {"title": "Conjugated Semantic Pool Improves OOD Detection with Pre-trained Vision-Language Models", "url": "https://openreview.net/forum?id=qqQFOcUEqM", "detail_url": "https://openreview.net/forum?id=qqQFOcUEqM", "authors": "Mengyuan Chen,Junyu Gao,Changsheng Xu", "tags": "NIPS 2024,Poster", "abstract": "A straightforward pipeline for zero-shot out-of-distribution (OOD) detection involves selecting potential OOD labels from an extensive semantic pool and then leveraging a pre-trained vision-language model to perform classification on both in-distribution (ID) and OOD labels. In this paper, we theorize that enhancing performance requires expanding the semantic pool, while increasing the expected probability of selected OOD labels being activated by OOD samples, and ensuring low mutual dependence among the activations of these OOD labels. A natural expansion manner is to adopt a larger lexicon; however, the inevitable introduction of numerous synonyms and uncommon words fails to meet the above requirements, indicating that viable expansion manners move beyond merely selecting words from a lexicon. Since OOD detection aims to correctly classify input images into ID/OOD class groups, we can \"make up\" OOD label candidates which are not standard class names but beneficial for the process. Observing that the original semantic pool is composed of unmodified specific class names, we correspondingly construct a conjugated semantic pool (CSP) consisting of modified superclass names, each serving as a cluster center for samples sharing similar properties across different categories. Consistent with our established theory, expanding OOD label candidates with the CSP satisfies the requirements and outperforms existing works by 7.89% in FPR95. Codes are available at https://github.com/MengyuanChen21/NeurIPS2024-CSP.", "pdf": "https://openreview.net/pdf/3c7240aebfc66bb661b3ec74c805bcb284715406.pdf"} {"title": "Video Diffusion Models are Training-free Motion Interpreter and Controller", "url": "https://openreview.net/forum?id=ZvQ4Bn75kN", "detail_url": "https://openreview.net/forum?id=ZvQ4Bn75kN", "authors": "Zeqi Xiao,Yifan Zhou,Shuai Yang,Xingang Pan", "tags": "NIPS 2024,Poster", "abstract": "Video generation primarily aims to model authentic and customized motion across frames, making understanding and controlling the motion a crucial topic. Most diffusion-based studies on video motion focus on motion customization with training-based paradigms, which, however, demand substantial training resources and necessitate retraining for diverse models. Crucially, these approaches do not explore how video diffusion models encode cross-frame motion information in their features, lacking interpretability and transparency in their effectiveness. To answer this question, this paper introduces a novel perspective to understand, localize, and manipulate motion-aware features in video diffusion models. Through analysis using Principal Component Analysis (PCA), our work discloses that robust motion-aware features already exist in video diffusion models. We present a new MOtion FeaTure (MOFT) by eliminating content correlation information and filtering motion channels.
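A rough sketch of the kind of analysis just described, removing content shared across frames and examining the residual with PCA, assuming a [frames, tokens, channels] feature layout; this is illustrative only, not the authors' pipeline.

```python
import torch

def motion_features(feats):
    """feats: [T, N, C] video diffusion features (T frames, N tokens, C channels).
    Subtracting the temporal mean strips content shared across frames; PCA on
    the residual then exposes motion-dominated directions. Layout and the
    number of components are assumptions for illustration."""
    residual = feats - feats.mean(dim=0, keepdim=True)   # remove content correlation
    flat = residual.reshape(-1, feats.shape[-1])         # [T*N, C]
    U, S, V = torch.pca_lowrank(flat, q=8)               # top principal directions
    return residual @ V                                  # project onto motion basis

f = torch.randn(16, 64, 320)
m = motion_features(f)   # [16, 64, 8] per-token motion descriptors
```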
MOFT provides a distinct set of benefits, including the ability to encode comprehensive motion information with clear interpretability, extraction without the need for training, and generalizability across diverse architectures. Leveraging MOFT, we propose a novel training-free video motion control framework. Our method demonstrates competitive performance in generating natural and faithful motion, providing architecture-agnostic insights and applicability in a variety of downstream tasks.", "pdf": "https://openreview.net/pdf/655c3c980fb10954cccd67f1942955a6f177a0b8.pdf"} {"title": "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models", "url": "https://openreview.net/forum?id=0uXtFk5KNJ", "detail_url": "https://openreview.net/forum?id=0uXtFk5KNJ", "authors": "Qijun Luo,Hengxu Yu,Xiao Li", "tags": "NIPS 2024,Poster", "abstract": "This work presents BAdam, an optimization method that leverages the block coordinate descent (BCD) framework with Adam's update rule. BAdam offers a memory efficient approach to the full parameter finetuning of large language models. We conduct a theoretical convergence analysis for BAdam in the deterministic case. Experimentally, we apply BAdam to finetune the Llama 3-8B and Llama 3-70B models using a single RTX3090-24GB GPU and 4 A100-80GB GPUs, respectively. The results confirm BAdam's efficiency in terms of memory usage, running time, and optimization capability. Furthermore, the downstream performance evaluation based on MT-bench and math benchmarks shows that BAdam outperforms existing memory efficient baselines such as LoRA. It also demonstrates that BAdam can achieve comparable or even superior performance compared to Adam. Finally, the ablation study using SGD's update rule illustrates the suitability of BCD for finetuning LLMs. Our code can be easily integrated into any PyTorch-based codebase and is available at https://github.com/Ledzy/BAdam.", "pdf": "https://openreview.net/pdf/160fd292c93de350b0b316312da9dafb0255e647.pdf"} {"title": "Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection", "url": "https://openreview.net/forum?id=Te8vI2wGTh", "detail_url": "https://openreview.net/forum?id=Te8vI2wGTh", "authors": "Jingen Qu,Yufei Chen,Xiaodong Yue,Wei Fu,Qiguang Huang", "tags": "NIPS 2024,Poster", "abstract": "Evidential Deep Learning (EDL), grounded in Evidence Theory and Subjective Logic (SL), provides a robust framework to estimate uncertainty for out-of-distribution (OOD) detection alongside traditional classification probabilities. However, the EDL framework is constrained by its focus on evidence that supports only single categories, neglecting collective evidence that could corroborate multiple in-distribution categories. This limitation leads to a diminished estimation of uncertainty and a subsequent decline in OOD detection performance. Additionally, EDL encounters the vanishing gradient problem within its fully-connected layers, further degrading classification accuracy. To address these issues, we introduce the hyper-domain and propose Hyper-opinion Evidential Deep Learning (HEDL).
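For context, the standard EDL/Subjective-Logic quantities that this line of work builds on can be sketched as follows; the hyper-opinion extension described next (vague evidence shared across several classes) goes beyond this baseline and is not shown.

```python
import torch
import torch.nn.functional as F

def edl_outputs(logits):
    """Standard EDL quantities for K classes: non-negative evidence e,
    Dirichlet parameters alpha = e + 1, expected class probabilities
    alpha / S, and vacuity-style uncertainty K / S (higher => more OOD-like).
    The softplus evidence mapping is one common choice."""
    evidence = F.softplus(logits)            # e >= 0
    alpha = evidence + 1.0                   # Dirichlet concentration
    S = alpha.sum(dim=-1, keepdim=True)      # Dirichlet strength
    prob = alpha / S                         # expected probabilities
    uncertainty = logits.shape[-1] / S       # K / S
    return prob, uncertainty

p, u = edl_outputs(torch.randn(4, 10))
```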
HEDL extends the evidence modeling paradigm by explicitly integrating sharp evidence, which supports a singular category, with vague evidence that accommodates multiple potential categories. Additionally, we propose a novel opinion projection mechanism that translates hyper-opinions into multinomial opinions, which are then optimized within the EDL framework to ensure precise classification and refined uncertainty estimation. HEDL integrates evidence across various categories to yield a holistic evidentiary foundation for achieving superior OOD detection. Furthermore, our proposed opinion projection method effectively mitigates the vanishing gradient issue, ensuring classification accuracy without additional model complexity. Extensive experiments over many datasets demonstrate that our proposed method outperforms existing OOD detection methods.", "pdf": "https://openreview.net/pdf/176b84846bd7315c9fe964480696c7af249bda8a.pdf"} {"title": "Bandit-Feedback Online Multiclass Classification: Variants and Tradeoffs", "url": "https://openreview.net/forum?id=90IpKvVdXd", "detail_url": "https://openreview.net/forum?id=90IpKvVdXd", "authors": "Yuval Filmus,Steve Hanneke,Idan Mehalel,Shay Moran", "tags": "NIPS 2024,Poster", "abstract": "Consider the domain of multiclass classification within the adversarial online setting. What is the price of relying on bandit feedback as opposed to full information? To what extent can an adaptive adversary amplify the loss compared to an oblivious one? To what extent can a randomized learner reduce the loss compared to a deterministic one? We study these questions in the mistake bound model and provide nearly tight answers. We demonstrate that the optimal mistake bound under bandit feedback is at most $O(k)$ times higher than the optimal mistake bound in the full information case, where $k$ represents the number of labels. This bound is tight and provides an answer to an open question previously posed and studied by Daniely and Helbertal ['13] and by Long ['17, '20], who focused on deterministic learners. Moreover, we present nearly optimal bounds of $\\tilde{\\Theta}(k)$ on the gap between randomized and deterministic learners, as well as between adaptive and oblivious adversaries in the bandit feedback setting. This stands in contrast to the full information scenario, where adaptive and oblivious adversaries are equivalent, and the gap in mistake bounds between randomized and deterministic learners is a constant multiplicative factor of $2$. In addition, our results imply that in some cases the optimal randomized mistake bound is approximately the square-root of its deterministic parallel. Previous results show that this is essentially the smallest it can get. Some of our results are proved via a reduction to prediction with expert advice under bandit feedback, a problem interesting in its own right. For this problem, we provide a randomized algorithm which is nearly optimal in some scenarios.", "pdf": "https://openreview.net/pdf/bef9a356fe53a33be1d545937e37835c9ab5b388.pdf"} {"title": "Happy: A Debiased Learning Framework for Continual Generalized Category Discovery", "url": "https://openreview.net/forum?id=hdUCZiMkFO", "detail_url": "https://openreview.net/forum?id=hdUCZiMkFO", "authors": "Shijie Ma,Fei Zhu,Zhun Zhong,Wenzhuo Liu,Xu-Yao Zhang,Cheng-Lin Liu", "tags": "NIPS 2024,Poster", "abstract": "Constantly discovering novel concepts is crucial in evolving environments.
This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD), which aims to incrementally discover new classes from *unlabeled* data while maintaining the ability to recognize previously learned classes. Although several settings are proposed to study the C-GCD task, they have limitations that do not reflect real-world scenarios. We thus study a more practical C-GCD setting, which includes more new classes to be discovered over a longer period, without storing samples of past classes. In C-GCD, the model is initially trained on labeled data of known classes, followed by multiple incremental stages where the model is fed with unlabeled data containing both old and new classes. The core challenge involves two conflicting objectives: discovering new classes and preventing the forgetting of old ones. We delve into the conflicts and identify that models are susceptible to *prediction bias* and *hardness bias*. To address these issues, we introduce a debiased learning framework, namely **Happy**, characterized by **H**ardness-**a**ware **p**rototype sampling and soft entro**py** regularization. For the *prediction bias*, we first introduce clustering-guided initialization to provide robust features. In addition, we propose soft entropy regularization to assign appropriate probabilities to new classes, which can significantly enhance the clustering performance of new classes. For the *hardness bias*, we present the hardness-aware prototype sampling, which can effectively reduce the forgetting issue for previously seen classes, especially for difficult classes. Experimental results demonstrate that our method proficiently manages the conflicts of C-GCD and achieves remarkable performance across various datasets, e.g., 7.5% overall gains on ImageNet-100. Our code is publicly available at https://github.com/mashijie1028/Happy-CGCD.", "pdf": "https://openreview.net/pdf/9dac6a913dac9d587db239e81dd19377779ee4c2.pdf"} {"title": "Adaptive Passive-Aggressive Framework for Online Regression with Side Information", "url": "https://openreview.net/forum?id=kV80nC1afE", "detail_url": "https://openreview.net/forum?id=kV80nC1afE", "authors": "Runhao Shi,Jiaxi Ying,Daniel P. Palomar", "tags": "NIPS 2024,Poster", "abstract": "The Passive-Aggressive (PA) method is widely used in online regression problems for handling large-scale streaming data, typically updating model parameters in a passive-aggressive manner based on whether the error exceeds a predefined threshold. However, this approach struggles with determining optimal thresholds and adapting to complex scenarios with side information, where tracking accuracy is not the sole metric in the regression model. To address these challenges, we introduce a novel adaptive framework that allows finer adjustments to the weight vector in PA using side information. This framework adaptively selects the threshold parameter in PA, theoretically ensuring convergence to the optimal setting. Additionally, we present an efficient implementation of our algorithm that significantly reduces computational complexity.
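For reference, the classic PA-I regression update that such a framework builds on looks like this; the adaptive selection of the threshold `eps` from side information is the paper's contribution and is not shown.

```python
import numpy as np

def pa_regression_step(w, x, y, eps=0.1, C=1.0):
    """One classic PA-I regression update (Crammer et al.): stay passive while
    the epsilon-insensitive loss is zero, otherwise move w just enough to
    correct the error, with aggressiveness capped by C."""
    err = y - w @ x
    loss = max(0.0, abs(err) - eps)
    if loss > 0.0:
        tau = min(C, loss / (x @ x))       # step size from the closed-form solution
        w = w + tau * np.sign(err) * x
    return w

# streaming usage on a toy sequence of (feature, target) pairs
w = np.zeros(3)
for x, y in [(np.array([1.0, 0.0, 2.0]), 1.5), (np.array([0.0, 1.0, 1.0]), -0.5)]:
    w = pa_regression_step(w, x, y)
```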
Numerical experiments show that our model achieves outstanding performance with respect to the side information while maintaining low tracking error, demonstrating marked improvements over traditional PA methods across various scenarios.", "pdf": "https://openreview.net/pdf/5012fa5c9454cc7c9e5f86f0f644c2d92dc2ef9e.pdf"} {"title": "OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation", "url": "https://openreview.net/forum?id=H6C4p8Dir7", "detail_url": "https://openreview.net/forum?id=H6C4p8Dir7", "authors": "Junke Wang,Yi Jiang,Zehuan Yuan,BINGYUE PENG,Zuxuan Wu,Yu-Gang Jiang", "tags": "NIPS 2024,Poster", "abstract": "Tokenizer, serving as a translator to map the intricate visual data into a compact latent space, lies at the core of visual generative models. Based on the finding that existing tokenizers are tailored to either image or video inputs, this paper presents OmniTokenizer, a transformer-based tokenizer for joint image and video tokenization. OmniTokenizer is designed with a spatial-temporal decoupled architecture, which integrates window attention and causal attention for spatial and temporal modeling, respectively. To exploit the complementary nature of image and video data, we further propose a progressive training strategy, where OmniTokenizer is first trained on image data at a fixed resolution to develop the spatial encoding capacity and then jointly trained on image and video data at multiple resolutions to learn the temporal dynamics. OmniTokenizer, for the first time, handles both image and video inputs within a unified framework and proves the possibility of realizing their synergy. Extensive experiments demonstrate that OmniTokenizer achieves state-of-the-art (SOTA) reconstruction performance on various image and video datasets, e.g., 1.11 reconstruction FID on ImageNet and 42 reconstruction FVD on UCF-101, beating the previous SOTA methods by 13% and 26%, respectively. Additionally, we also show that when integrated with OmniTokenizer, both language model-based approaches and diffusion models can realize advanced visual synthesis performance, underscoring the superiority and versatility of our method.", "pdf": "https://openreview.net/pdf/756ad2d6f19434bd4623e423aeb3bc08185ab2f2.pdf"} {"title": "Scaling Law for Time Series Forecasting", "url": "https://openreview.net/forum?id=Cr2jEHJB9q", "detail_url": "https://openreview.net/forum?id=Cr2jEHJB9q", "authors": "Jingzhe Shi,Qinwei Ma,Huan Ma,Lei Li", "tags": "NIPS 2024,Poster", "abstract": "A scaling law that rewards large datasets, complex models, and enhanced data granularity has been observed in various fields of deep learning. Yet, studies on time series forecasting have cast doubt on scaling behaviors of deep learning methods for time series forecasting: while more training data improves performance, more capable models do not always outperform less capable models, and a longer input horizon may hurt performance for some models. We propose a theory of scaling laws for time series forecasting that can explain these seemingly abnormal behaviors. We take into account the impact of dataset size and model complexity, as well as time series data granularity, particularly focusing on the look-back horizon, an aspect that has been unexplored in previous theories.
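Empirical verifications of this kind typically reduce to fitting a power law in log-log space; a generic sketch on synthetic data (not the paper's estimation code) is shown below.

```python
import numpy as np

# Fit L(N) ~ A * N^(-alpha) on synthetic (dataset size, loss) pairs via
# linear regression in log-log space; a generic check of power-law scaling.
N = np.array([1e3, 1e4, 1e5, 1e6])
L = 2.0 * N ** -0.35 + np.random.default_rng(0).normal(0, 1e-3, 4)  # toy data
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
alpha, A = -slope, np.exp(intercept)
print(f"estimated exponent alpha = {alpha:.3f}, prefactor A = {A:.3f}")
```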
Furthermore, we empirically evaluate various models using a diverse set of time series forecasting datasets, which (1) verifies the validity of the scaling law with respect to dataset size and model complexity within the realm of time series forecasting, and (2) validates our theoretical framework, particularly regarding the influence of the look-back horizon. We hope our findings may inspire new models targeting time series forecasting datasets of limited size, as well as large foundational datasets and models for time series forecasting in future work.", "pdf": "https://openreview.net/pdf/a4828ad3298538d0990f22ebdd9671451de7932f.pdf"} {"title": "In Pursuit of Causal Label Correlations for Multi-label Image Recognition", "url": "https://openreview.net/forum?id=yBHbeSpwYS", "detail_url": "https://openreview.net/forum?id=yBHbeSpwYS", "authors": "Zhao-Min Chen,Xin Jin,YisuGe,Sixian Chan", "tags": "NIPS 2024,Poster", "abstract": "Multi-label image recognition aims to predict all objects present in an input image. A common belief is that modeling the correlations between objects is beneficial for multi-label recognition. However, this belief has been recently challenged as label correlations may mislead the classifier in testing, due to the possible contextual bias in training. Accordingly, a few recent works not only discarded label correlation modeling but also advocated removing contextual information for multi-label image recognition. This work explicitly explores label correlations for multi-label image recognition based on a principled causal intervention approach. With causal intervention, we pursue causal label correlations and suppress spurious label correlations, as the former tend to convey useful contextual cues while the latter may mislead the classifier. Specifically, we decouple label-specific features with a Transformer decoder attached to the backbone network, and model the confounders which may give rise to spurious correlations by clustering spatial features of all training images. Based on label-specific features and confounders, we employ a cross-attention module to implement causal intervention, quantifying the causal correlations from all object categories to each predicted object category. Finally, we obtain image labels by combining the predictions from decoupled features and causal label correlations. Extensive experiments clearly validate the effectiveness of our approach for multi-label image recognition in both common and cross-dataset settings.", "pdf": "https://openreview.net/pdf/03a5e626cf9b315df0a1676f88fc6226ff69ec95.pdf"} {"title": "Disentangled Unsupervised Skill Discovery for Efficient Hierarchical Reinforcement Learning", "url": "https://openreview.net/forum?id=ePOBcWfNFC", "detail_url": "https://openreview.net/forum?id=ePOBcWfNFC", "authors": "Jiaheng Hu,Zizhao Wang,Peter Stone,Roberto Martín-Martín", "tags": "NIPS 2024,Poster", "abstract": "A hallmark of intelligent agents is the ability to learn reusable skills purely from unsupervised interaction with the environment. However, existing unsupervised skill discovery methods often learn entangled skills where one skill variable simultaneously influences many entities in the environment, making downstream skill chaining extremely challenging. We propose Disentangled Unsupervised Skill Discovery (DUSDi), a method for learning disentangled skills that can be efficiently reused to solve downstream tasks.
DUSDi decomposes skills into disentangled components, where each skill component only affects one factor of the state space. Importantly, these skill components can be concurrently composed to generate low-level actions, and efficiently chained to tackle downstream tasks through hierarchical Reinforcement Learning. DUSDi defines a novel mutual-information-based objective to enforce disentanglement between the influences of different skill components, and utilizes value factorization to optimize this objective efficiently. Evaluated in a set of challenging environments, DUSDi successfully learns disentangled skills, and significantly outperforms previous skill discovery methods when it comes to applying the learned skills to solve downstream tasks.", "pdf": "https://openreview.net/pdf/ebc2e9dd5bcc5999be1ab852ce054f266bb08f8d.pdf"} {"title": "ActFusion: a Unified Diffusion Model for Action Segmentation and Anticipation", "url": "https://openreview.net/forum?id=NN9U0lEcAn", "detail_url": "https://openreview.net/forum?id=NN9U0lEcAn", "authors": "Dayoung Gong,Suha Kwak,Minsu Cho", "tags": "NIPS 2024,Poster", "abstract": "Temporal action segmentation and long-term action anticipation are two popular vision tasks for the temporal analysis of actions in videos. Despite apparent relevance and potential complementarity, these two problems have been investigated as separate and distinct tasks. In this work, we tackle these two problems, action segmentation and action anticipation, jointly using a unified diffusion model dubbed ActFusion. The key idea to unification is to train the model to effectively handle both visible and invisible parts of the sequence in an integrated manner; the visible part is for temporal segmentation, and the invisible part is for future anticipation. To this end, we introduce a new anticipative masking strategy during training in which a late part of the video frames is masked as invisible, and learnable tokens replace these frames to learn to predict the invisible future. Experimental results demonstrate the bi-directional benefits between action segmentation and anticipation. ActFusion achieves state-of-the-art performance across the standard benchmarks of 50 Salads, Breakfast, and GTEA, outperforming task-specific models in both tasks with a single unified model through joint learning.", "pdf": "https://openreview.net/pdf/8d62375e90dea5b80ff689954c0bf2de607414b4.pdf"} {"title": "Calibrated Self-Rewarding Vision Language Models", "url": "https://openreview.net/forum?id=nXYedmTf1T", "detail_url": "https://openreview.net/forum?id=nXYedmTf1T", "authors": "Yiyang Zhou,Zhiyuan Fan,Dongjie Cheng,Sihan Yang,Zhaorun Chen,Chenhang Cui,Xiyao Wang,Yun Li,Linjun Zhang,Huaxiu Yao", "tags": "NIPS 2024,Poster", "abstract": "Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models (LLMs) and vision models through instruction tuning. Despite these advancements, LVLMs often exhibit the hallucination phenomenon, where generated text responses appear linguistically plausible but contradict the input image, indicating a misalignment between image and text pairs. This misalignment arises because the model tends to prioritize textual information over visual input, even when both the language model and visual representations are of high quality. Existing methods leverage additional models or human annotations to curate preference data and enhance modality alignment through preference optimization.
These approaches are resource-intensive and may not effectively reflect the target LVLM's preferences, making the curated preferences easily distinguishable. Our work addresses these challenges by proposing the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. In the reward modeling, we employ a step-wise strategy and incorporate visual constraints into the self-rewarding process to place greater emphasis on visual input. Empirical results demonstrate that CSR significantly enhances performance and reduces hallucinations across twelve benchmarks and tasks, achieving substantial improvements over existing methods by 7.62\\%. Our empirical results are further supported by rigorous theoretical analysis, under mild assumptions, verifying the effectiveness of introducing visual constraints into the self-rewarding paradigm. Additionally, CSR shows compatibility with different vision-language models and the ability to incrementally improve performance through iterative fine-tuning.", "pdf": "https://openreview.net/pdf/f89939751678e3e19c85ea240d620c05f898576b.pdf"} {"title": "Demystify Mamba in Vision: A Linear Attention Perspective", "url": "https://openreview.net/forum?id=LvJ1R88KAk", "detail_url": "https://openreview.net/forum?id=LvJ1R88KAk", "authors": "Dongchen Han,Ziyi Wang,Zhuofan Xia,Yizeng Han,Yifan Pu,Chunjiang Ge,Jun Song,Shiji Song,Bo Zheng,Gao Huang", "tags": "NIPS 2024,Poster", "abstract": "Mamba is an effective state space model with linear computation complexity. It has recently shown impressive efficiency in dealing with high-resolution inputs across various vision tasks. In this paper, we reveal that the powerful Mamba model shares surprising similarities with the linear attention Transformer, which typically underperforms the conventional Transformer in practice. By exploring the similarities and disparities between the effective Mamba and subpar linear attention Transformer, we provide comprehensive analyses to demystify the key factors behind Mamba's success. Specifically, we reformulate the selective state space model and linear attention within a unified formulation, rephrasing Mamba as a variant of linear attention Transformer with six major distinctions: input gate, forget gate, shortcut, no attention normalization, single-head, and modified block design. For each design, we meticulously analyze its pros and cons, and empirically evaluate its impact on model performance in vision tasks. Interestingly, the results highlight the forget gate and block design as the core contributors to Mamba's success, while the other four designs are less crucial. Based on these findings, we propose a Mamba-Inspired Linear Attention (MILA) model by incorporating the merits of these two key designs into linear attention. The resulting model outperforms various vision Mamba models in both image classification and high-resolution dense prediction tasks, while enjoying parallelizable computation and fast inference speed.
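The recurrence this unified formulation highlights, linear attention augmented with a forget gate, can be sketched as follows; this is a didactic scalar-gate version written as an explicit loop, not the released MILA implementation (which runs in parallel form).

```python
import torch

def gated_linear_attention(q, k, v, g):
    """Causal linear attention with a per-step forget gate:
    S_t = g_t * S_{t-1} + k_t v_t^T  and  y_t = q_t S_t.
    Shapes: q, k [T, d], v [T, dv], g [T] in (0, 1)."""
    T, d = q.shape
    S = torch.zeros(d, v.shape[-1])
    out = []
    for t in range(T):
        S = g[t] * S + torch.outer(k[t], v[t])   # decayed state accumulation
        out.append(q[t] @ S)                      # read-out with the query
    return torch.stack(out)

y = gated_linear_attention(torch.randn(8, 16), torch.randn(8, 16),
                           torch.randn(8, 32), torch.sigmoid(torch.randn(8)))
```

With g fixed to 1 this reduces to plain (unnormalized) linear attention, which makes the forget gate's role as a learned decay over past state easy to see.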
Code is available at https://github.com/LeapLabTHU/MLLA.", "pdf": "https://openreview.net/pdf/002561240bf6933e803eff5349eed78eae4422e9.pdf"} {"title": "Vector Quantization Prompting for Continual Learning", "url": "https://openreview.net/forum?id=ACCqGLviig", "detail_url": "https://openreview.net/forum?id=ACCqGLviig", "authors": "Li Jiao,Qiuxia Lai,YU LI,Qiang Xu", "tags": "NIPS 2024,Poster", "abstract": "Continual learning requires to overcome catastrophic forgetting when training a single model on a sequence of tasks. Recent top-performing approaches are prompt-based methods that utilize a set of learnable parameters (i.e., prompts) to encode task knowledge, from which appropriate ones are selected to guide the fixed pre-trained model in generating features tailored to a certain task. However, existing methods rely on predicting prompt identities for prompt selection, where the identity prediction process cannot be optimized with task loss. This limitation leads to sub-optimal prompt selection and inadequate adaptation of pre-trained features for a specific task. Previous efforts have tried to address this by directly generating prompts from input queries instead of selecting from a set of candidates. However, these prompts are continuous, which lack sufficient abstraction for task knowledge representation, making them less effective for continual learning. To address these challenges, we propose VQ-Prompt, a prompt-based continual learning method that incorporates Vector Quantization (VQ) into end-to-end training of a set of discrete prompts. In this way, VQ-Prompt can optimize the prompt selection process with task loss and meanwhile achieve effective abstraction of task knowledge for continual learning. Extensive experiments show that VQ-Prompt outperforms state-of-the-art continual learning methods across a variety of benchmarks under the challenging class-incremental setting.", "pdf": "https://openreview.net/pdf/fe56049dfd050804f643de97820660c0ab7ace62.pdf"} {"title": "Fourier Amplitude and Correlation Loss: Beyond Using L2 Loss for Skillful Precipitation Nowcasting", "url": "https://openreview.net/forum?id=0aN7VWwp4g", "detail_url": "https://openreview.net/forum?id=0aN7VWwp4g", "authors": "Chiu-Wai Yan,Shi Quan Foo,Van-Hoan Trinh,Dit-Yan Yeung,Ka-Hing Wong,Wai-Kin Wong", "tags": "NIPS 2024,Poster", "abstract": "Deep learning approaches have been widely adopted for precipitation nowcasting in recent years. Previous studies mainly focus on proposing new model architectures to improve pixel-wise metrics. However, they frequently result in blurry predictions which provide limited utility to forecasting operations. In this work, we propose a new Fourier Amplitude and Correlation Loss (FACL) which consists of two novel loss terms: Fourier Amplitude Loss (FAL) and Fourier Correlation Loss (FCL). FAL regularizes the Fourier amplitude of the model prediction and FCL complements the missing phase information. The two loss terms work together to replace the traditional L2 losses such as MSE and weighted MSE for the spatiotemporal prediction problem on signal-based data. Our method is generic, parameter-free and efficient. Extensive experiments using one synthetic dataset and three radar echo datasets demonstrate that our method improves perceptual metrics and meteorology skill scores, with a small trade-off to pixel-wise accuracy and structural similarity. 
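The amplitude term (FAL) follows directly from the abstract's description; the correlation term below is only a plausible stand-in for FCL, since the exact formula is not given in this listing.

```python
import torch

def fourier_amplitude_loss(pred, target):
    """L1 distance between 2D Fourier amplitude spectra, a direct reading of
    the FAL idea of regularizing predicted amplitudes."""
    fp, ft = torch.fft.fft2(pred), torch.fft.fft2(target)
    return (fp.abs() - ft.abs()).abs().mean()

def fourier_correlation_loss(pred, target, eps=1e-8):
    """A plausible stand-in for FCL: one minus the normalized correlation of
    the complex spectra, which is sensitive to the phase information that
    amplitudes ignore. The paper's exact FCL formula may differ."""
    fp, ft = torch.fft.fft2(pred), torch.fft.fft2(target)
    num = (fp * ft.conj()).sum().real
    den = fp.abs().pow(2).sum().sqrt() * ft.abs().pow(2).sum().sqrt() + eps
    return 1.0 - num / den

x, y = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
loss = fourier_amplitude_loss(x, y) + fourier_correlation_loss(x, y)
```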
Moreover, to improve the error margin in meteorological skill scores such as Critical Success Index (CSI) and Fractions Skill Score (FSS), we propose and adopt the Regional Histogram Divergence (RHD), a distance metric that considers the patch-wise similarity between signal-based imagery patterns with tolerance to local transforms.", "pdf": "https://openreview.net/pdf/83342cf0e9e7532a338787e802069fff91f9936d.pdf"} {"title": "Exploiting Representation Curvature for Boundary Detection in Time Series", "url": "https://openreview.net/forum?id=WK2KxPAMQv", "detail_url": "https://openreview.net/forum?id=WK2KxPAMQv", "authors": "Yooju Shin,Jaehyun Park,Susik Yoon,Hwanjun Song,Byung Suk Lee,Jae-Gil Lee", "tags": "NIPS 2024,Poster", "abstract": "*Boundaries* are the timestamps at which a class in a time series changes. Recently, representation-based boundary detection has gained popularity, but its emphasis on consecutive distance difference backfires, especially when the changes are gradual. In this paper, we propose a boundary detection method, **RECURVE**, based on a novel change metric, the ***curvature*** of a representation trajectory, to accommodate both gradual and abrupt changes. Here, a sequence of representations in the representation space is interpreted as a trajectory, and a curvature at each timestamp can be computed. Using the theory of random walk, we formally show that the mean curvature is lower near boundaries than at other points. Extensive experiments using diverse real-world time-series datasets confirm the superiority of RECURVE over state-of-the-art methods.", "pdf": "https://openreview.net/pdf/195ca567f4ffb81f9709969358dffd0069188ebe.pdf"} {"title": "DMesh: A Differentiable Mesh Representation", "url": "https://openreview.net/forum?id=Io1qKqCVIK", "detail_url": "https://openreview.net/forum?id=Io1qKqCVIK", "authors": "Sanghyun Son,Matheus Gadelha,Yang Zhou,Zexiang Xu,Ming Lin,Yi Zhou", "tags": "NIPS 2024,Poster", "abstract": "We present a differentiable representation, DMesh, for general 3D triangular meshes. DMesh considers both the geometry and connectivity information of a mesh. In our design, we first get a set of convex tetrahedra that compactly tessellates the domain based on Weighted Delaunay Triangulation (WDT), and select triangular faces on the tetrahedra to define the final mesh. We formulate probability of faces to exist on the actual surface in a differentiable manner based on the WDT. This enables DMesh to represent meshes of various topology in a differentiable way, and allows us to reconstruct the mesh under various observations, such as point clouds and multi-view images using gradient-based optimization. We publicize the source code and supplementary material at our project page (https://sonsang.github.io/dmesh-project).", "pdf": "https://openreview.net/pdf/eef0bd2b53e527e3b2e3db58f3437d05f1420722.pdf"} {"title": "Improving Robustness of 3D Point Cloud Recognition from a Fourier Perspective", "url": "https://openreview.net/forum?id=4jn7KWPHSD", "detail_url": "https://openreview.net/forum?id=4jn7KWPHSD", "authors": "Yibo Miao,Yinpeng Dong,Jinlai Zhang,Lijia Yu,Xiao Yang,Xiao-Shan Gao", "tags": "NIPS 2024,Poster", "abstract": "Although 3D point cloud recognition has achieved substantial progress on standard benchmarks, the typical models are vulnerable to point cloud corruptions, leading to security threats in real-world applications. 
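The graph Fourier transform used in such frequency-domain analyses of point clouds is standard and can be sketched as follows; the kNN construction and binary edge weights are generic assumptions, as the paper's graph weighting may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def graph_fourier_transform(points, k=8):
    """Standard GFT for a point cloud: build a kNN graph, form the
    combinatorial Laplacian L = D - W, and project the coordinate signal onto
    its eigenbasis. Small (large) eigenvalues correspond to low (high)
    spatial frequencies."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in idx[i, 1:]:                 # skip the self-neighbor
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(1)) - W
    eigvals, U = np.linalg.eigh(L)           # columns of U: graph Fourier basis
    return eigvals, U.T @ points             # spectrum of the xyz signal

freqs, spectrum = graph_fourier_transform(np.random.rand(128, 3))
```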
To improve the corruption robustness, various data augmentation methods have been studied, but they are mainly limited to the spatial domain. As the point cloud has low information density and significant spatial redundancy, it is challenging to analyze the effects of corruptions. In this paper, we focus on the frequency domain to observe the underlying structure of point clouds and their corruptions. Through graph Fourier transform (GFT), we observe a correlation between the corruption robustness of point cloud recognition models and their sensitivity to different frequency bands, which is measured by the GFT spectrum of the model\u2019s Jacobian matrix. To reduce the sensitivity and improve the corruption robustness, we propose Frequency Adversarial Training (FAT) that adopts frequency-domain adversarial examples as data augmentation to train robust point cloud recognition models against corruptions. Theoretically, we provide a guarantee of FAT on its out-of-distribution generalization performance. Empirically, we conduct extensive experiments with various network architectures to validate the effectiveness of FAT, which achieves the new state-of-the-art results.", "pdf": "https://openreview.net/pdf/c1ef59c496a230cbbf72ce3cbba5b83c1aa6d366.pdf"} {"title": "Post-Hoc Reversal: Are We Selecting Models Prematurely?", "url": "https://openreview.net/forum?id=3R7Go6WkDm", "detail_url": "https://openreview.net/forum?id=3R7Go6WkDm", "authors": "Rishabh Ranjan,Saurabh Garg,Mrigank Raman,Carlos Guestrin,Zachary Chase Lipton", "tags": "NIPS 2024,Poster", "abstract": "Trained models are often composed with post-hoc transforms such as temperature scaling (TS), ensembling and stochastic weight averaging (SWA) to improve performance, robustness, uncertainty estimation, etc. However, such transforms are typically applied only after the base models have already been finalized by standard means. In this paper, we challenge this practice with an extensive empirical study. In particular, we demonstrate a phenomenon that we call post-hoc reversal, where performance trends are reversed after applying post-hoc transforms. This phenomenon is especially prominent in high-noise settings. For example, while base models overfit badly early in training, both ensembling and SWA favor base models trained for more epochs. Post-hoc reversal can also prevent the appearance of double descent and mitigate mismatches between test loss and test error seen in base models. Preliminary analyses suggest that these transforms induce reversal by suppressing the influence of mislabeled examples, exploiting differences in their learning dynamics from those of clean examples. Based on our findings, we propose post-hoc selection, a simple technique whereby post-hoc metrics inform model development decisions such as early stopping, checkpointing, and broader hyperparameter choices. Our experiments span real-world vision, language, tabular and graph datasets. 
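The post-hoc selection idea just described lends itself to a compact sketch: rank candidate checkpoints by the validation score of their post-hoc transform (below, uniform weight averaging as a stand-in for SWA) rather than by the raw checkpoint score. The interfaces (`checkpoints` as weight dicts, `evaluate` as a scoring callable) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def average_weights(window):
    # Uniform average over a window of checkpoints (SWA-style stand-in).
    return {k: np.mean([c[k] for c in window], axis=0) for k in window[0]}

def posthoc_select(checkpoints, evaluate, window=5):
    # Pick the stopping epoch by the *post-hoc* metric, not the base one.
    best_epoch, best_score = None, -np.inf
    for t in range(window, len(checkpoints) + 1):
        score = evaluate(average_weights(checkpoints[t - window:t]))
        if score > best_score:
            best_epoch, best_score = t, score
    return best_epoch, best_score
```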
On an LLM instruction tuning dataset, post-hoc selection results in >1.5x MMLU improvement compared to naive selection.", "pdf": "https://openreview.net/pdf/67cc9d3c6e5ca0dcf0d76bace27e62308697e8c3.pdf"} {"title": "Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks", "url": "https://openreview.net/forum?id=aRokfUfIQs", "detail_url": "https://openreview.net/forum?id=aRokfUfIQs", "authors": "Mitchell Keren Taraday,Almog David,Chaim Baskin", "tags": "NIPS 2024,Poster", "abstract": "Message Passing Graph Neural Networks (MPGNNs) have emerged as the preferred method for modeling complex interactions across diverse graph entities. While the theory of such models is well understood, their aggregation module has not received sufficient attention. Sum-based aggregators have solid theoretical foundations regarding their separation capabilities. However, practitioners often prefer using more complex aggregations and mixtures of diverse aggregations. In this work, we unveil a possible explanation for this gap. We claim that sum-based aggregators fail to \"mix\" features belonging to distinct neighbors, preventing them from succeeding at downstream tasks.\nTo this end, we introduce Sequential Signal Mixing Aggregation (SSMA), a novel plug-and-play aggregation for MPGNNs. SSMA treats the neighbor features as 2D discrete signals and sequentially convolves them, inherently enhancing the ability to mix features attributed to distinct neighbors. By performing extensive experiments, we show that when combining SSMA with well-established MPGNN architectures, we achieve substantial performance gains across various benchmarks, achieving new state-of-the-art results in many settings.\nWe published our code at https://almogdavid.github.io/SSMA/.", "pdf": "https://openreview.net/pdf/269589cd0b814f89a895dbf6b64f8e5cb836358b.pdf"} {"title": "Federated Black-Box Adaptation for Semantic Segmentation", "url": "https://openreview.net/forum?id=Fp3JVz5XE7", "detail_url": "https://openreview.net/forum?id=Fp3JVz5XE7", "authors": "Jay Nitin Paranjape,Shameema Sikder,S. Swaroop Vedula,Vishal M. Patel", "tags": "NIPS 2024,Poster", "abstract": "Federated Learning (FL) is a form of distributed learning that allows multiple institutions or clients to collaboratively learn a global model to solve a task. This allows the model to utilize the information from every institute while preserving data privacy. However, recent studies show that the promise of protecting the privacy of data is not upheld by existing methods and that it is possible to recreate the training data from the different institutions. This is done by utilizing gradients transferred between the clients and the global server during training or by knowing the model architecture at the client end. In this paper, we propose a federated learning framework for semantic segmentation without knowing the model architecture or transferring gradients between the client and the server, thus enabling better privacy preservation. We propose \\textit{BlackFed} - a black-box adaptation of neural networks that utilizes zero order optimization (ZOO) to update the client model weights and first order optimization (FOO) to update the server weights. We evaluate our approach on several computer vision and medical imaging datasets to demonstrate its effectiveness. To the best of our knowledge, this work is one of the first works to employ federated learning for segmentation without exchanging gradients or model information.
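The client-side zero order optimization (ZOO) mentioned in the BlackFed abstract can be pictured with a generic two-point gradient estimator, sketched below under assumed interfaces (a scalar `loss_fn` over a flat weight vector); this is a standard estimator for illustration, not necessarily the authors' exact procedure.

```python
import numpy as np

def zoo_gradient(loss_fn, w, mu=1e-3, n_samples=8, rng=None):
    # Two-point zeroth-order estimate: only loss *values* are queried, so no
    # gradients (and no architecture knowledge) ever cross the client boundary.
    rng = rng or np.random.default_rng()
    g = np.zeros_like(w)
    for _ in range(n_samples):
        u = rng.standard_normal(w.shape)
        g += (loss_fn(w + mu * u) - loss_fn(w - mu * u)) / (2 * mu) * u
    return g / n_samples

# A client step would then be: w -= lr * zoo_gradient(loss_fn, w)
```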
Code: https://github.com/JayParanjape/blackfed/tree/master", "pdf": "https://openreview.net/pdf/a09c9a14d17f0642cc630e9d1af6a14866ccb280.pdf"} {"title": "Clustering then Propagation: Select Better Anchors for Knowledge Graph Embedding", "url": "https://openreview.net/forum?id=BpJ6OTfWw3", "detail_url": "https://openreview.net/forum?id=BpJ6OTfWw3", "authors": "KE LIANG,Yue Liu,Hao Li,Lingyuan Meng,Suyuan Liu,Siwei Wang,sihang zhou,Xinwang Liu", "tags": "NIPS 2024,Poster", "abstract": "Traditional knowledge graph embedding (KGE) models map entities and relations to unique embedding vectors in a shallow lookup manner. As the scale of data becomes larger, this manner will raise unaffordable computational costs. Anchor-based strategies have been treated as effective ways to alleviate such efficiency problems by propagation on representative entities instead of the whole graph. However, most existing anchor-based KGE models select the anchors in a primitive manner, which limits their performance. To this end, we propose a novel anchor-based strategy for KGE, i.e., a relational clustering-based anchor selection strategy (RecPiece), where two characteristics are leveraged, i.e., (1) representative ability of the cluster centroids and (2) descriptive ability of relation types in KGs. Specifically, we first perform clustering over features of factual triplets instead of entities, where the cluster number is naturally set as the number of relation types since each fact can be characterized by its relation in KGs. Then, representative triplets are selected around the clustering centroids, further mapped into corresponding anchor entities. Extensive experiments on six datasets show that RecPiece achieves higher performance with comparable or even fewer parameters compared to previous anchor-based KGE models, indicating that our model can select better anchors in a more scalable way.", "pdf": "https://openreview.net/pdf/625c02226aab316df309b8e8dbcac4364d6f6fea.pdf"} {"title": "Interpretable Image Classification with Adaptive Prototype-based Vision Transformers", "url": "https://openreview.net/forum?id=hjhpCJfbFG", "detail_url": "https://openreview.net/forum?id=hjhpCJfbFG", "authors": "Chiyu Ma,Jon Donnelly,Wenjun Liu,Soroush Vosoughi,Cynthia Rudin,Chaofan Chen", "tags": "NIPS 2024,Poster", "abstract": "We present ProtoViT, a method for interpretable image classification combining deep learning and case-based reasoning. This method classifies an image by comparing it to a set of learned prototypes, providing explanations of the form ``this looks like that.'' In our model, a prototype consists of **parts**, which can deform over irregular geometries to create a better comparison between images. Unlike existing models that rely on Convolutional Neural Network (CNN) backbones and spatially rigid prototypes, our model integrates Vision Transformer (ViT) backbones into prototype-based models, while offering spatially deformed prototypes that not only accommodate geometric variations of objects but also provide coherent and clear prototypical feature representations with an adaptive number of prototypical parts. Our experiments show that our model can generally achieve higher performance than the existing prototype-based models.
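For the ProtoViT abstract above, a stripped-down picture of prototype-part matching is a greedy best-patch assignment per part; the geometric coherence and adaptive part-count mechanisms of the actual model are omitted, so the sketch below is illustrative only.

```python
import numpy as np

def part_similarity(patch_emb, prototype_parts):
    # patch_emb: (P, d) ViT patch embeddings; prototype_parts: (K, d).
    # Each prototype part greedily picks its best-matching image patch;
    # the prototype's score is the mean cosine similarity over its parts.
    scores = []
    for part in prototype_parts:
        cos = patch_emb @ part / (
            np.linalg.norm(patch_emb, axis=1) * np.linalg.norm(part) + 1e-8)
        scores.append(cos.max())
    return float(np.mean(scores))
```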
Our comprehensive analyses ensure that the prototypes are consistent and the interpretations are faithful.", "pdf": "https://openreview.net/pdf/6d3ae8c5fbc5bf469444e6954e7312a80f0e91f7.pdf"} {"title": "Hierarchical Hybrid Sliced Wasserstein: A Scalable Metric for Heterogeneous Joint Distributions", "url": "https://openreview.net/forum?id=XwrMd1njqq", "detail_url": "https://openreview.net/forum?id=XwrMd1njqq", "authors": "Khai Nguyen,Nhat Ho", "tags": "NIPS 2024,Poster", "abstract": "Sliced Wasserstein (SW) and Generalized Sliced Wasserstein (GSW) have been widely used in applications due to their computational and statistical scalability. However, the SW and the GSW are only defined between distributions supported on a homogeneous domain. This limitation prevents their usage in applications with heterogeneous joint distributions with marginal distributions supported on multiple different domains. Using SW and GSW directly on the joint domains cannot make a meaningful comparison since their homogeneous slicing operator, i.e., Radon Transform (RT) and Generalized Radon Transform (GRT) are not expressive enough to capture the structure of the joint supports set. To address the issue, we propose two new slicing operators, i.e., Partial Generalized Radon Transform (PGRT) and Hierarchical Hybrid Radon Transform (HHRT). In greater detail, PGRT is the generalization of Partial Radon Transform (PRT), which transforms a subset of function arguments non-linearly while HHRT is the composition of PRT and multiple domain-specific PGRT on marginal domain arguments. By using HHRT, we extend the SW into Hierarchical Hybrid Sliced Wasserstein (H2SW) distance which is designed specifically for comparing heterogeneous joint distributions. We then discuss the topological, statistical, and computational properties of H2SW. Finally, we demonstrate the favorable performance of H2SW in 3D mesh deformation, deep 3D mesh autoencoders, and datasets comparison.", "pdf": "https://openreview.net/pdf/733da6b21f2062b5712c38862e989ac6d93a7ce1.pdf"} {"title": "ReF-LDM: A Latent Diffusion Model for Reference-based Face Image Restoration", "url": "https://openreview.net/forum?id=QY4SpBhQZI", "detail_url": "https://openreview.net/forum?id=QY4SpBhQZI", "authors": "Chi-Wei Hsiao,Yu-Lun Liu,Cheng-Kun Yang,Sheng-Po Kuo,Kevin Jou,Chia-Ping Chen", "tags": "NIPS 2024,Poster", "abstract": "While recent works on blind face image restoration have successfully produced impressive high-quality (HQ) images with abundant details from low-quality (LQ) input images, the generated content may not accurately reflect the real appearance of a person. To address this problem, incorporating well-shot personal images as additional reference inputs may be a promising strategy. Inspired by the recent success of the Latent Diffusion Model (LDM) in image generation, we propose ReF-LDM\u2014an adaptation of LDM designed to generate HQ face images conditioned on one LQ image and multiple HQ reference images. Our LDM-based model incorporates an effective and efficient mechanism, CacheKV, for conditioning on reference images. Additionally, we design a timestep-scaled identity loss, enabling LDM to focus on learning the discriminating features of human faces. 
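The timestep-scaled identity loss named just above is not spelled out in this abstract; one hypothetical reading, sketched below, weights an identity-embedding distance by the diffusion timestep so supervision concentrates where identity is resolvable. Both the cosine distance and the linear schedule are assumptions.

```python
import numpy as np

def timestep_scaled_id_loss(id_pred, id_ref, t, T=1000):
    # Cosine identity distance, down-weighted at high-noise timesteps
    # (illustrative schedule; the paper's scaling may differ).
    cos = np.dot(id_pred, id_ref) / (
        np.linalg.norm(id_pred) * np.linalg.norm(id_ref) + 1e-8)
    return (1.0 - t / T) * (1.0 - cos)
```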
Lastly, we construct FFHQ-ref, a dataset consisting of 20,406 high-quality (HQ) face images with corresponding reference images, which can serve as both training and evaluation data for reference-based face restoration models.", "pdf": "https://openreview.net/pdf/3295f48507a67047b4e1b185e983bb83468d4c81.pdf"} {"title": "Improving Generalization of Dynamic Graph Learning via Environment Prompt", "url": "https://openreview.net/forum?id=RJG8ar4wHA", "detail_url": "https://openreview.net/forum?id=RJG8ar4wHA", "authors": "Kuo Yang,Zhengyang Zhou,Qihe Huang,Limin Li,Yuxuan Liang,Yang Wang", "tags": "NIPS 2024,Poster", "abstract": "The out-of-distribution (OOD) generalization issue is a well-known challenge within deep learning tasks. In dynamic graphs, the change of temporal environments is regarded as the main cause of data distribution shift. While numerous OOD studies focusing on environment factors have achieved remarkable performance, they still fail to systematically solve the two issues of environment inference and utilization. In this work, we propose a novel dynamic graph learning model named EpoD based on prompt learning and a structural causal model to comprehensively enhance both environment inference and utilization. Inspired by the superior performance of prompt learning in understanding underlying semantic and causal associations, we first design a self-prompted learning mechanism to infer unseen environment factors. We then rethink the role of the environment variable within the spatio-temporal structural causal model, and introduce a novel causal pathway where dynamic subgraphs serve as mediating variables. The extracted dynamic subgraph can effectively capture the data distribution shift by incorporating the inferred environment variables into the node-wise dependencies. Theoretical discussions and intuitive analysis support the generalizability and interpretability of EpoD. Extensive experiments on seven real-world datasets across domains showcase the superiority of EpoD against baselines, and toy example experiments further verify the powerful interpretability and rationality of our EpoD.", "pdf": "https://openreview.net/pdf/edcf1f816274fd3374a0e57f47bc980ef49c2ec3.pdf"} {"title": "Improving Neural ODE Training with Temporal Adaptive Batch Normalization", "url": "https://openreview.net/forum?id=ARLEUVVfTL", "detail_url": "https://openreview.net/forum?id=ARLEUVVfTL", "authors": "Su Zheng,Zhengqi Gao,Fan-Keng Sun,Duane S Boning,Bei Yu,Martin D. Wong", "tags": "NIPS 2024,Poster", "abstract": "Neural ordinary differential equations (Neural ODEs) are a family of continuous-depth neural networks where the evolution of hidden states is governed by learnable temporal derivatives. We identify a significant limitation in applying traditional Batch Normalization (BN) to Neural ODEs, due to a fundamental mismatch --- BN was initially designed for discrete neural networks with no temporal dimension, whereas Neural ODEs operate continuously over time. To bridge this gap, we introduce temporal adaptive Batch Normalization (TA-BN), a novel technique that acts as the continuous-time analog to traditional BN. Our empirical findings reveal that TA-BN enables the stacking of more layers within Neural ODEs, enhancing their performance.
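One way to realize a continuous-time analog of BN, in the spirit of the TA-BN abstract above, is to keep normalization statistics on a grid of time points and interpolate at the solver's query time. The bin grid and linear interpolation below are assumptions for illustration, with statistics taken as fixed inputs rather than running estimates.

```python
import numpy as np

class TemporalAdaptiveBN:
    # Sketch: per-time-bin statistics, linearly interpolated at query time t in [0, 1].
    def __init__(self, means, variances, eps=1e-5):
        self.means, self.vars, self.eps = means, variances, eps  # each (bins, features)

    def __call__(self, x, t):
        pos = t * (len(self.means) - 1)
        lo, hi, w = int(np.floor(pos)), int(np.ceil(pos)), pos - np.floor(pos)
        mean = (1 - w) * self.means[lo] + w * self.means[hi]
        var = (1 - w) * self.vars[lo] + w * self.vars[hi]
        return (x - mean) / np.sqrt(var + self.eps)

# bn = TemporalAdaptiveBN(np.zeros((10, 4)), np.ones((10, 4)))
# y = bn(np.random.randn(4), t=0.37)
```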
Moreover, when confined to a model architecture consisting of a single Neural ODE followed by a linear layer, TA-BN achieves 91.1\\% test accuracy on CIFAR-10 with 2.2 million parameters, making it the first \\texttt{unmixed} Neural ODE architecture to approach MobileNetV2-level parameter efficiency. Extensive numerical experiments on image classification and physical system modeling substantiate the superiority of TA-BN compared to baseline methods.", "pdf": "https://openreview.net/pdf/7c5c12cb65a3fada9b5e57b51b52bea4f8bea802.pdf"} {"title": "Face2QR: A Unified Framework for Aesthetic, Face-Preserving, and Scannable QR Code Generation", "url": "https://openreview.net/forum?id=rvBabL7DUu", "detail_url": "https://openreview.net/forum?id=rvBabL7DUu", "authors": "Xuehao Cui,Guangyang Wu,Zhenghao Gan,Guangtao Zhai,Xiaohong Liu", "tags": "NIPS 2024,Poster", "abstract": "Existing methods to generate aesthetic QR codes, such as image and style transfer techniques, tend to compromise either the visual appeal or the scannability of QR codes when they incorporate human face identity. Addressing these imperfections, we present Face2QR\u2014a novel pipeline specifically designed for generating personalized QR codes that harmoniously blend aesthetics, face identity, and scannability. Our pipeline introduces three innovative components. First, the ID-refined QR integration (IDQR) seamlessly intertwines the background styling with face ID, utilizing a unified SD-based framework with control networks. Second, the ID-aware QR ReShuffle (IDRS) effectively rectifies the conflicts between face IDs and QR patterns, rearranging QR modules to maintain the integrity of facial features without compromising scannability. Lastly, the ID-preserved Scannability Enhancement (IDSE) markedly boosts scanning robustness through latent code optimization, striking a delicate balance between face ID, aesthetic quality and QR functionality. In comprehensive experiments, Face2QR demonstrates remarkable performance, outperforming existing approaches, particularly in preserving facial recognition features within custom QR code designs.", "pdf": "https://openreview.net/pdf/a95aab049e780f04b397e4ea0e14a130da25c7ca.pdf"} {"title": "On-Road Object Importance Estimation: A New Dataset and A Model with Multi-Fold Top-Down Guidance", "url": "https://openreview.net/forum?id=xvTMc9Ovx3", "detail_url": "https://openreview.net/forum?id=xvTMc9Ovx3", "authors": "Zhixiong Nan,Yilong Chen,Tianfei Zhou,Tao Xiang", "tags": "NIPS 2024,Poster", "abstract": "This paper addresses the problem of on-road object importance estimation, which utilizes video sequences captured from the driver's perspective as the input. Although this problem is significant for safer and smarter driving systems, the exploration of this problem remains limited. On one hand, publicly-available large-scale datasets are scarce in the community. To address this dilemma, this paper contributes a new large-scale dataset named Traffic Object Importance (TOI). On the other hand, existing methods often only consider either bottom-up feature or single-fold guidance, leading to limitations in handling highly dynamic and diverse traffic scenarios. Different from existing methods, this paper proposes a model that integrates multi-fold top-down guidance with the bottom-up feature. Specifically, three kinds of top-down guidance factors (i.e., driver intention, semantic context, and traffic rule) are integrated into our model. 
These factors are important for object importance estimation, but none of the existing methods simultaneously consider them. To our knowledge, this paper proposes the first on-road object importance estimation model that fuses multi-fold top-down guidance factors with bottom-up features. Extensive experiments demonstrate that our model outperforms state-of-the-art methods by large margins, achieving 23.1% Average Precision (AP) improvement compared with the recently proposed model (i.e., Goal).", "pdf": "https://openreview.net/pdf/87bd5670b38ec3e2f101018aef40eb48c6c26a89.pdf"} {"title": "DI-MaskDINO: A Joint Object Detection and Instance Segmentation Model", "url": "https://openreview.net/forum?id=srQxkSPJLW", "detail_url": "https://openreview.net/forum?id=srQxkSPJLW", "authors": "Zhixiong Nan,Xianghong Li,Tao Xiang,Jifeng Dai", "tags": "NIPS 2024,Poster", "abstract": "This paper is motivated by an interesting phenomenon: the performance of object detection lags behind that of instance segmentation (i.e., performance imbalance) when investigating the intermediate results from the beginning transformer decoder layer of MaskDINO (i.e., the SOTA model for joint detection and segmentation). This phenomenon inspires us to think about a question: will the performance imbalance at the beginning layer of the transformer decoder constrain the upper bound of the final performance? With this question in mind, we further conduct qualitative and quantitative pre-experiments, which validate the negative impact of the detection-segmentation imbalance issue on the model performance. To address this issue, this paper proposes the DI-MaskDINO model, the core idea of which is to improve the final performance by alleviating the detection-segmentation imbalance. DI-MaskDINO is implemented by configuring our proposed De-Imbalance (DI) module and Balance-Aware Tokens Optimization (BATO) module to MaskDINO. DI is responsible for generating balance-aware query, and BATO uses the balance-aware query to guide the optimization of the initial feature tokens. The balance-aware query and optimized feature tokens are respectively taken as the Query and Key&Value of the transformer decoder to perform joint object detection and instance segmentation. DI-MaskDINO outperforms existing joint object detection and instance segmentation models on COCO and BDD100K benchmarks, achieving +1.2 $AP^{box}$ and +0.9 $AP^{mask}$ improvements compared to SOTA joint detection and segmentation model MaskDINO. In addition, DI-MaskDINO also obtains +1.0 $AP^{box}$ improvement compared to SOTA object detection model DINO and +3.0 $AP^{mask}$ improvement compared to SOTA segmentation model Mask2Former.", "pdf": "https://openreview.net/pdf/a4368bec2b96c2307f38e80b692b13071c2affb4.pdf"} {"title": "Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length", "url": "https://openreview.net/forum?id=XlAbMZu4Bo", "detail_url": "https://openreview.net/forum?id=XlAbMZu4Bo", "authors": "Xuezhe Ma,Xiaomeng Yang,Wenhan Xiong,Beidi Chen,LILI YU,Hao Zhang,Jonathan May,Luke Zettlemoyer,Omer Levy,Chunting Zhou", "tags": "NIPS 2024,Poster", "abstract": "The quadratic complexity and weak length extrapolation of Transformers limit their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and state space models exist, they empirically underperform Transformers in pretraining efficiency and downstream task accuracy.
We introduce MEGALODON, a neural architecture for efficient sequence modeling with unlimited context length. MEGALODON inherits the architecture of MEGA (exponential moving average with gated attention), and further introduces multiple technical components to improve its capability and stability, including complex exponential moving average (CEMA), timestep normalization layer, normalized attention mechanism and pre-norm with two-hop residual configuration. In a controlled head-to-head comparison with LLAMA2, MEGALODON achieves better efficiency than Transformer at the scale of 7 billion parameters and 2 trillion training tokens. MEGALODON reaches a training loss of 1.70, landing mid-way between LLAMA2-7B (1.75) and LLAMA2-13B (1.67). This result is robust throughout a wide range of benchmarks, where MEGALODON consistently outperforms Transformers across different tasks, domains, and modalities.", "pdf": "https://openreview.net/pdf/70aaca704207816c7c033948248607819f055288.pdf"} {"title": "Knowledge-Empowered Dynamic Graph Network for Irregularly Sampled Medical Time Series", "url": "https://openreview.net/forum?id=9hCn01VAdC", "detail_url": "https://openreview.net/forum?id=9hCn01VAdC", "authors": "Yicheng Luo,Zhen Liu,Linghao Wang,Binquan Wu,Junhao Zheng,Qianli Ma", "tags": "NIPS 2024,Poster", "abstract": "Irregularly Sampled Medical Time Series (ISMTS) are commonly found in the healthcare domain, where different variables exhibit unique temporal patterns while interrelated. However, many existing methods fail to efficiently consider the differences and correlations among medical variables together, leading to inadequate capture of fine-grained features at the variable level in ISMTS. We propose Knowledge-Empowered Dynamic Graph Network (KEDGN), a graph neural network empowered by variables' textual medical knowledge, aiming to model variable-specific temporal dependencies and inter-variable dependencies in ISMTS. Specifically, we leverage a pre-trained language model to extract semantic representations for each variable from their textual descriptions of medical properties, forming an overall semantic view among variables from a medical perspective. Based on this, we allocate variable-specific parameter spaces to capture variable-specific temporal patterns and generate a complete variable graph to measure medical correlations among variables. Additionally, we employ a density-aware mechanism to dynamically adjust the variable graph at different timestamps, adapting to the time-varying correlations among variables in ISMTS. The variable-specific parameter spaces and dynamic graphs are injected into the graph convolutional recurrent network to capture intra-variable and inter-variable dependencies in ISMTS together. Experiment results on four healthcare datasets demonstrate that KEDGN significantly outperforms existing methods.", "pdf": "https://openreview.net/pdf/79e7212fb105de571bcf7741893be6b14a70af8f.pdf"} {"title": "Worst-Case Offline Reinforcement Learning with Arbitrary Data Support", "url": "https://openreview.net/forum?id=63VajkIDEu", "detail_url": "https://openreview.net/forum?id=63VajkIDEu", "authors": "Kohei Miyaguchi", "tags": "NIPS 2024,Poster", "abstract": "We propose a method of offline reinforcement learning (RL) featuring a performance guarantee without any assumptions on the data support.
Under such conditions, estimating or optimizing the conventional performance metric is generally infeasible due to the distributional discrepancy between data and target policy distributions. To address this issue, we employ a worst-case policy value as a new metric and constructively show that the sample complexity bound of $O(\\epsilon^{-2})$ is attainable without any data-support conditions, where $\\epsilon>0$ is the policy suboptimality in the new metric. Moreover, as the new metric generalizes the conventional one, the algorithm can address standard offline RL tasks without modification. In this context, our sample complexity bound can be seen as a strict improvement on the previous bounds under the single-policy concentrability and the single-policy realizability.", "pdf": "https://openreview.net/pdf/a96e437c1000fec001d2a625fb842958c495ae86.pdf"} {"title": "Fine-grained Control of Generative Data Augmentation in IoT Sensing", "url": "https://openreview.net/forum?id=ZCygNDMIII", "detail_url": "https://openreview.net/forum?id=ZCygNDMIII", "authors": "Tianshi Wang,Qikai Yang,Ruijie Wang,Dachun Sun,Jinyang Li,Yizhuo Chen,Yigong Hu,Chaoqi Yang,Tomoyoshi Kimura,Denizhan Kara,Tarek F. Abdelzaher", "tags": "NIPS 2024,Poster", "abstract": "Internet of Things (IoT) sensing models often suffer from overfitting due to data distribution shifts between the training dataset and real-world scenarios. To address this, data augmentation techniques have been adopted to enhance model robustness by bolstering the diversity of synthetic samples within a defined vicinity of existing samples. This paper introduces a novel paradigm of data augmentation for IoT sensing signals by adding fine-grained control to generative models. We define a metric space with statistical metrics that capture the essential features of the short-time Fourier transformed (STFT) spectrograms of IoT sensing signals. These metrics serve as strong conditions for a generative model, enabling us to tailor the spectrogram characteristics in the time-frequency domain according to specific application needs. Furthermore, we propose a set of data augmentation techniques within this metric space to create new data samples. Our method is evaluated across various generative models, datasets, and downstream IoT sensing models. The results demonstrate that our approach surpasses the conventional transformation-based data augmentation techniques and prior generative data augmentation models.", "pdf": "https://openreview.net/pdf/ac50073f98d5a7c4bbd1b96c7482027c5c6c7ccb.pdf"} {"title": "A Globally Optimal Portfolio for m-Sparse Sharpe Ratio Maximization", "url": "https://openreview.net/forum?id=p54CYwdjVP", "detail_url": "https://openreview.net/forum?id=p54CYwdjVP", "authors": "Yizun Lin,Zhao-Rong Lai,Cheng Li", "tags": "NIPS 2024,Poster", "abstract": "The Sharpe ratio is an important and widely-used risk-adjusted return in financial engineering. In modern portfolio management, one may require an m-sparse (no more than m active assets) portfolio to save managerial and financial costs. However, few existing methods can optimize the Sharpe ratio with the m-sparse constraint, due to the nonconvexity and the complexity of this constraint. We propose to convert the m-sparse fractional optimization problem into an equivalent m-sparse quadratic programming problem.
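The m-sparse constraint above has a simple Euclidean projection (keep the m largest-magnitude entries), which is the kind of step a proximal gradient method can alternate with gradient updates. The sketch below shows only this generic mechanism, not the paper's specific algorithm or its portfolio-side constraints.

```python
import numpy as np

def project_m_sparse(w, m):
    # Euclidean projection onto {w : ||w||_0 <= m}: zero all but the
    # m largest-magnitude entries. (Budget/normalization constraints
    # specific to portfolios would need extra handling.)
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-m:]
    out[keep] = w[keep]
    return out

def proximal_gradient_step(w, grad, lr, m):
    # One generic proximal-gradient iteration under the sparsity constraint.
    return project_m_sparse(w - lr * grad, m)
```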
The semi-algebraic property of the resulting objective function allows us to exploit the Kurdyka-Lojasiewicz property to develop an efficient Proximal Gradient Algorithm (PGA) that leads to a portfolio which achieves the globally optimal m-sparse Sharpe ratio under certain conditions. The convergence rates of PGA are also provided. To the best of our knowledge, this is the first proposal that achieves a globally optimal m-sparse Sharpe ratio with a theoretically-sound guarantee.", "pdf": "https://openreview.net/pdf/46ac508deb5379c54e47d2192ddac171e2014a47.pdf"} {"title": "QueST: Self-Supervised Skill Abstractions for Learning Continuous Control", "url": "https://openreview.net/forum?id=P3v3x7HnV0", "detail_url": "https://openreview.net/forum?id=P3v3x7HnV0", "authors": "Atharva Mete,Haotian Xue,Albert Wilcox,Yongxin Chen,Animesh Garg", "tags": "NIPS 2024,Poster", "abstract": "Generalization capabilities, or rather a lack thereof, is one of the most important unsolved problems in the field of robot learning, and while several large scale efforts have set out to tackle this problem, unsolved it remains. In this paper, we hypothesize that learning temporal action abstractions using latent variable models (LVMs), which learn to map data to a compressed latent space and back, is a\npromising direction towards low-level skills that can readily be used for new tasks. Although several works have attempted to show this, they have generally been limited by architectures that do not faithfully capture sharable representations. To address this we present Quantized Skill Transformer (QueST), which learns a larger and more flexible latent encoding that is more capable of modeling the breadth of low-level skills necessary for a variety of tasks. To make use of this extra flexibility, QueST imparts causal inductive bias from the action sequence data into the latent space, leading to more semantically useful and transferable representations. We compare to state-of-the-art imitation learning and LVM baselines and see that QueST\u2019s architecture leads to strong performance on several multitask and few-shot learning benchmarks. Further results and videos are available at https://quest-model.github.io.", "pdf": "https://openreview.net/pdf/6e4a1a752de62e19eb4b95bd7f3502742333da3d.pdf"} {"title": "Revisiting motion information for RGB-Event tracking with MOT philosophy", "url": "https://openreview.net/forum?id=bzGAELYOyL", "detail_url": "https://openreview.net/forum?id=bzGAELYOyL", "authors": "Tianlu Zhang,Kurt Debattista,Qiang Zhang,Guiguang Ding,Jungong Han", "tags": "NIPS 2024,Poster", "abstract": "RGB-Event single object tracking (SOT) aims to leverage the merits of RGB and event data to achieve higher performance. However, existing frameworks focus on exploring complementary appearance information within multi-modal data, and struggle to address the association problem of targets and distractors in the temporal domain using motion information from the event stream. In this paper, we introduce the Multi-Object Tracking (MOT) philosophy into RGB-E SOT to keep track of targets as well as distractors by using both RGB and event data, thereby improving the robustness of the tracker. Specifically, an appearance model is employed to predict the initial candidates. Subsequently, the initially predicted tracking results, in combination with the RGB-E features, are encoded into appearance and motion embeddings, respectively. 
Furthermore, a Spatial-Temporal Transformer Encoder is proposed to model the spatial-temporal relationships and learn discriminative features for each candidate through guidance of the appearance-motion embeddings. Simultaneously, a Dual-Branch Transformer Decoder is designed to adopt such motion and appearance information for candidate matching, thus distinguishing between targets and distractors. The proposed method is evaluated on multiple benchmark datasets and achieves state-of-the-art performance on all the datasets tested.", "pdf": "https://openreview.net/pdf/e2a89bd1152a0d552b1c4bed6771eeb10fb9fe7c.pdf"} {"title": "Estimating Epistemic and Aleatoric Uncertainty with a Single Model", "url": "https://openreview.net/forum?id=WPxa6OcIdg", "detail_url": "https://openreview.net/forum?id=WPxa6OcIdg", "authors": "Matthew Albert Chan,Maria J. Molina,Christopher Metzler", "tags": "NIPS 2024,Poster", "abstract": "Estimating and disentangling epistemic uncertainty, uncertainty that is reducible with more training data, and aleatoric uncertainty, uncertainty that is inherent to the task at hand, is critically important when applying machine learning to high-stakes applications such as medical imaging and weather forecasting. Conditional diffusion models' breakthrough ability to accurately and efficiently sample from the posterior distribution of a dataset now makes uncertainty estimation conceptually straightforward: One need only train and sample from a large ensemble of diffusion models. Unfortunately, training such an ensemble becomes computationally intractable as the complexity of the model architecture grows. In this work we introduce a new approach to ensembling, hyper-diffusion models (HyperDM), which allows one to accurately estimate both epistemic and aleatoric uncertainty with a single model. Unlike existing single-model uncertainty methods like Monte-Carlo dropout and Bayesian neural networks, HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles. Furthermore, our proposed approach scales to modern network architectures such as Attention U-Net and yields more accurate uncertainty estimates compared to existing methods. We validate our method on two distinct real-world tasks: x-ray computed tomography reconstruction and weather temperature forecasting.", "pdf": "https://openreview.net/pdf/a83de1600fdc68ae46d6deca88b2d879e72285b6.pdf"} {"title": "Sample-Efficient Agnostic Boosting", "url": "https://openreview.net/forum?id=ufKBRvYxtp", "detail_url": "https://openreview.net/forum?id=ufKBRvYxtp", "authors": "Udaya Ghai,Karan Singh", "tags": "NIPS 2024,Poster", "abstract": "The theory of boosting provides a computational framework for aggregating approximate weak learning algorithms, which perform marginally better than a random predictor, into an accurate strong learner. In the realizable case, the success of the boosting approach is underscored by a remarkable fact that the resultant sample complexity matches that of a computationally demanding alternative, namely Empirical Risk Minimization (ERM). This in particular implies that the realizable boosting methodology has the potential to offer computational relief without compromising on sample efficiency.\n\nDespite recent progress, in agnostic boosting, where assumptions on the conditional distribution of labels given feature descriptions are absent, ERM outstrips the agnostic boosting methodology in being quadratically more sample efficient than all known agnostic boosting algorithms. 
In this paper, we make progress on closing this gap, and give a substantially more sample efficient agnostic boosting algorithm than those known, without compromising on the computational (or oracle) complexity. A key feature of our algorithm is that it leverages the ability to reuse samples across multiple rounds of boosting, while guaranteeing a generalization error strictly better than those obtained by blackbox applications of uniform convergence arguments. We also apply our approach to other previously studied learning problems, including boosting for reinforcement learning, and demonstrate improved results.", "pdf": "https://openreview.net/pdf/6e20c0464b80b05ec2b8673322c045da67b6811d.pdf"} {"title": "Verified Safe Reinforcement Learning for Neural Network Dynamic Models", "url": "https://openreview.net/forum?id=tGDUDKirAy", "detail_url": "https://openreview.net/forum?id=tGDUDKirAy", "authors": "Junlin Wu,Huan Zhang,Yevgeniy Vorobeychik", "tags": "NIPS 2024,Poster", "abstract": "Learning reliably safe autonomous control is one of the core problems in trustworthy autonomy. However, training a controller that can be formally verified to be safe remains a major challenge. We introduce a novel approach for learning verified safe control policies in nonlinear neural dynamical systems while maximizing overall performance. Our approach aims to achieve safety in the sense of finite-horizon reachability proofs, and comprises three key parts. The first is a novel curriculum learning scheme that iteratively increases the verified safe horizon. The second exploits the iterative nature of gradient-based learning to enable incremental verification, reusing information from prior verification runs. Finally, we learn multiple verified initial-state-dependent controllers, an idea that is especially valuable for more complex domains where learning a single universal verified safe controller is extremely challenging. Our experiments on five safe control problems demonstrate that our trained controllers can achieve verified safety over horizons that are as much as an order of magnitude longer than state-of-the-art baselines, while maintaining high reward, as well as a perfect safety record over entire episodes.", "pdf": "https://openreview.net/pdf/80e30450b128fb51039b52fb98607fc612b1c4bf.pdf"} {"title": "On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs)", "url": "https://openreview.net/forum?id=cV2LKBdlz4", "detail_url": "https://openreview.net/forum?id=cV2LKBdlz4", "authors": "Jerry Yao-Chieh Hu,Weimin Wu,Zhuoru Li,Sophia Pi,Zhao Song,Han Liu", "tags": "NIPS 2024,Poster", "abstract": "We investigate the statistical and computational limits of latent **Di**ffusion **T**ransformers (**DiTs**) under the low-dimensional linear latent space assumption. Statistically, we study the universal approximation and sample complexity of the DiTs score function, as well as the distribution recovery property of the initial data. Specifically, under mild data assumptions, we derive an approximation error bound for the score network of latent DiTs, which is sub-linear in the latent space dimension. Additionally, we derive the corresponding sample complexity bound and show that the data distribution generated from the estimated score function converges toward a proximate area of the original one.\nComputationally, we characterize the hardness of both forward inference and backward computation of latent DiTs, assuming the Strong Exponential Time Hypothesis (SETH).
For forward inference, we identify efficient criteria for all possible latent DiTs inference algorithms and showcase our theory by pushing the efficiency toward almost-linear time inference. For backward computation, we leverage the low-rank structure within the gradient computation of DiTs training for possible algorithmic speedup. Specifically, we show that such speedup achieves almost-linear time latent DiTs training by casting the DiTs gradient as a series of chained low-rank approximations with bounded error.\nUnder the low-dimensional assumption, we show that the statistical rates and the computational efficiency are all dominated by the dimension of the subspace, suggesting that latent DiTs have the potential to bypass the challenges associated with the high dimensionality of initial data.", "pdf": "https://openreview.net/pdf/4b9e0bd64dfdc399a44bbc1e5461a8cf96219433.pdf"} {"title": "Causal Temporal Representation Learning with Nonstationary Sparse Transition", "url": "https://openreview.net/forum?id=J709rtAUD1", "detail_url": "https://openreview.net/forum?id=J709rtAUD1", "authors": "Xiangchen Song,Zijian Li,Guangyi Chen,Yujia Zheng,Yewen Fan,Xinshuai Dong,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Causal Temporal Representation Learning (Ctrl) methods aim to identify the temporal causal dynamics of complex nonstationary temporal sequences. Despite the success of existing Ctrl methods, they require either directly observing the domain variables or assuming a Markov prior on them. Such requirements limit the application of these methods in real-world scenarios when we do not have such prior knowledge of the domain variables. To address this problem, this work adopts a sparse transition assumption, aligned with intuitive human understanding, and presents identifiability results from a theoretical perspective. In particular, we explore under what conditions on the significance of the variability of the transitions we can build a model to identify the distribution shifts. Based on the theoretical result, we introduce a novel framework, *Causal Temporal Representation Learning with Nonstationary Sparse Transition* (CtrlNS), designed to leverage the constraints on transition sparsity and conditional independence to reliably identify both distribution shifts and latent factors. Our experimental evaluations on synthetic and real-world datasets demonstrate significant improvements over existing baselines, highlighting the effectiveness of our approach.", "pdf": "https://openreview.net/pdf/186b67ba23a623ccd652075782ed79fd03e35ba7.pdf"} {"title": "ZeroMark: Towards Dataset Ownership Verification without Disclosing Watermark", "url": "https://openreview.net/forum?id=Eyyt3ZmNV6", "detail_url": "https://openreview.net/forum?id=Eyyt3ZmNV6", "authors": "Junfeng Guo,Yiming Li,Ruibo Chen,Yihan Wu,Chenxi Liu,Heng Huang", "tags": "NIPS 2024,Poster", "abstract": "High-quality public datasets significantly promote the prosperity of deep neural networks (DNNs). Currently, dataset ownership verification (DOV), which consists of dataset watermarking and ownership verification, is the only feasible solution to protect their copyright by preventing unauthorized use. In this paper, we revisit existing DOV methods and find that they all mainly focus on the first stage by designing different types of dataset watermarks and directly exploiting watermarked samples as the verification samples for ownership verification.
As such, their success relies on an underlying assumption that verification is a \\emph{one-time} and \\emph{privacy-preserving} process, which does not necessarily hold in practice. To alleviate this problem, we propose \\emph{ZeroMark} to conduct ownership verification without disclosing dataset-specified watermarks. Our method is inspired by our empirical and theoretical findings of the intrinsic property of DNNs trained on the watermarked dataset. Specifically, ZeroMark first generates the closest boundary version of given benign samples and calculates their boundary gradients under the label-only black-box setting. After that, it examines whether the given suspicious model has been trained on the protected dataset by performing a hypothesis test, based on the cosine similarity measured on the boundary gradients and the watermark pattern. Extensive experiments on benchmark datasets verify the effectiveness of our ZeroMark and its resistance to potential adaptive attacks. The codes for reproducing our main experiments are publicly available at \\href{https://github.com/JunfengGo/ZeroMark.git}{GitHub}.", "pdf": "https://openreview.net/pdf/5614e5aca7a90a22bf418af39a1292b73f6c89aa.pdf"} {"title": "Is O(log N) practical? Near-Equivalence Between Delay Robustness and Bounded Regret in Bandits and RL", "url": "https://openreview.net/forum?id=hYJOfWfw1P", "detail_url": "https://openreview.net/forum?id=hYJOfWfw1P", "authors": "Enoch H. Kang,Panganamala Kumar", "tags": "NIPS 2024,Poster", "abstract": "Interactive decision making, encompassing bandits, contextual bandits, and reinforcement learning, has recently been of interest to theoretical studies of experimentation design and recommender system algorithm research. One recent finding in this area is that the well-known Graves-Lai constant being zero is a necessary and sufficient condition for achieving bounded (or constant) regret in interactive decision-making. As this condition may be a strong requirement for many applications, the practical usefulness of pursuing bounded regret has been questioned. In this paper, we show that the condition of the Graves-Lai constant being zero is also necessary for a consistent algorithm to achieve delay model robustness when reward delays are unknown (i.e., when feedback is anonymous). Here, model robustness is measured in terms of $\\epsilon$-robustness, one of the most widely used and one of the least adversarial robustness concepts in the robust statistics literature. In particular, we show that $\\epsilon$-robustness cannot be achieved for a consistent (i.e., uniformly sub-polynomial regret) algorithm, however small the nonzero $\\epsilon$ value is, when the Graves-Lai constant is not zero.
While this is a strongly negative result, we also provide a positive result for linear reward models (contextual linear bandits, reinforcement learning with linear MDP) that the Graves-Lai constant being zero is also sufficient for achieving bounded regret without any knowledge of delay models, i.e., the best of both the efficiency world and the delay robustness world.", "pdf": "https://openreview.net/pdf/1e53dfb26defe32c158070ad1e319e477a3721db.pdf"} {"title": "IQA-EVAL: Automatic Evaluation of Human-Model Interactive Question Answering", "url": "https://openreview.net/forum?id=MzM99vV5Rx", "detail_url": "https://openreview.net/forum?id=MzM99vV5Rx", "authors": "Ruosen Li,Ruochen Li,Barry Wang,Xinya Du", "tags": "NIPS 2024,Poster", "abstract": "To evaluate Large Language Models (LLMs) for question answering (QA), traditional methods typically focus on directly assessing the immediate responses generated by the models based on the given question and context. In the common use case of humans seeking an AI assistant\u2019s help in finding information, these non-interactive evaluations do not account for the dynamic nature of human-model conversations, and interaction-aware evaluations have shown that accurate models are not necessarily preferred by humans (Lee et al.). Recent works in human-computer interaction (HCI) have employed human evaluators to conduct interactions and evaluations, but they are often prohibitively expensive and time-consuming to scale. In this work, we introduce an automated evaluation framework, IQA-EVAL, for Interactive Question Answering Evaluation; more specifically, we introduce an LLM-based Evaluation Agent (LEA) that can: (1) simulate human behaviors to generate interactions with IQA models; (2) automatically evaluate the generated interactions. Moreover, we propose assigning personas to LEAs to better simulate groups of real human evaluators. We show that: (1) our evaluation framework with GPT-4 (or Claude) as the backbone model achieves a high correlation with human evaluations on the IQA task; (2) assigning personas to LEAs to better represent the crowd further significantly improves correlations. Finally, we use our automated metric to evaluate five recent LLMs with over 1000 questions from complex and ambiguous question answering tasks, which would cost $5k if evaluated by humans.", "pdf": "https://openreview.net/pdf/2413a018973b3ca9681c42b02c40b36c57101666.pdf"} {"title": "Microstructures and Accuracy of Graph Recall by Large Language Models", "url": "https://openreview.net/forum?id=tNhwg9U767", "detail_url": "https://openreview.net/forum?id=tNhwg9U767", "authors": "Yanbang Wang,Hejie Cui,Jon Kleinberg", "tags": "NIPS 2024,Poster", "abstract": "Graph data is crucial for many applications, and much of it exists in the relations described in textual format. As a result, being able to accurately recall and encode a graph described in earlier text is a basic yet pivotal ability that LLMs need to demonstrate if they are to perform reasoning tasks that involve graph-structured information. Human performance at graph recall has been studied by cognitive scientists for decades, and has been found to often exhibit certain structural patterns of bias that align with human handling of social relationships. To date, however, we know little about how LLMs behave in analogous graph recall tasks: do their recalled graphs also exhibit certain biased patterns, and if so, how do they compare with humans and affect other graph reasoning tasks?
In this work, we perform the first systematic study of graph recall by LLMs, investigating the accuracy and biased microstructures (local structural patterns) in their recall. We find that LLMs not only often underperform in graph recall, but also tend to favor more triangles and alternating 2-paths. Moreover, we find that more advanced LLMs have a striking dependence on the domain that a real-world graph comes from --- yielding the best recall accuracy when the graph is narrated in a language style consistent with its original domain.", "pdf": "https://openreview.net/pdf/9a7fc5c6bae6f8bb957b0be6f8041b144393455e.pdf"} {"title": "Fractal Patterns May Illuminate the Success of Next-Token Prediction", "url": "https://openreview.net/forum?id=clAFYReaYE", "detail_url": "https://openreview.net/forum?id=clAFYReaYE", "authors": "Ibrahim Alabdulmohsin,Vinh Q. Tran,Mostafa Dehghani", "tags": "NIPS 2024,Poster", "abstract": "We study the fractal structure of language, aiming to provide a precise formalism for quantifying properties that may have been previously suspected but not formally shown. We establish that language is: (1) self-similar, exhibiting complexities at all levels of granularity, with no particular characteristic context length, and (2) long-range dependent (LRD), with a Hurst parameter of approximately 0.7.\nBased on these findings, we argue that short-term patterns/dependencies in language, such as in paragraphs, mirror the patterns/dependencies over larger scopes, like entire documents. This may shed some light on how next-token prediction can capture the structure of text across multiple levels of granularity, from words and clauses to broader contexts and intents. In addition, we carry out an extensive analysis across different domains and architectures, showing that fractal parameters are robust.\nFinally, we demonstrate that the tiny variations in fractal parameters seen across LLMs improve upon perplexity-based bits-per-byte (BPB) in predicting their downstream performance. We hope these findings offer a fresh perspective on language and the mechanisms underlying the success of LLMs.", "pdf": "https://openreview.net/pdf/ca6bb4f9a7bde25d4aadcab0d9e77240941bd659.pdf"} {"title": "A Recipe for Charge Density Prediction", "url": "https://openreview.net/forum?id=b7REKaNUTv", "detail_url": "https://openreview.net/forum?id=b7REKaNUTv", "authors": "Xiang Fu,Andrew Scott Rosen,Kyle Bystrom,Rui Wang,Albert Musaelian,Boris Kozinsky,Tess Smidt,Tommi Jaakkola", "tags": "NIPS 2024,Poster", "abstract": "In density functional theory, charge density is the core attribute of atomic systems from which all chemical properties can be derived. Machine learning methods are promising in significantly accelerating charge density prediction, yet existing approaches either lack accuracy or scalability. We propose a recipe that can achieve both. In particular, we identify three key ingredients: (1) representing the charge density with atomic and virtual orbitals (spherical fields centered at atom/virtual coordinates); (2) using expressive and learnable orbital basis sets (basis function for the spherical fields); and (3) using high-capacity equivariant neural network architecture. Our method achieves state-of-the-art accuracy while being more than an order of magnitude faster than existing methods.
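The first ingredient of the recipe above (spherical fields centered at atom/virtual coordinates) can be pictured as a density expanded in Gaussian-type radial functions; the isotropic Gaussians below are an illustrative stand-in for the expressive, learnable basis sets the abstract calls for.

```python
import numpy as np

def predict_density(grid, centers, coeffs, alphas):
    # rho(r) = sum_i c_i * exp(-alpha_i * |r - R_i|^2)   (isotropic sketch)
    # grid: (G, 3) query points; centers: (N, 3) atom/virtual coordinates;
    # coeffs, alphas: (N,) per-center parameters (learned in practice).
    d2 = ((grid[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (G, N)
    return (coeffs * np.exp(-alphas * d2)).sum(-1)

# rho = predict_density(np.random.rand(100, 3), np.random.rand(5, 3),
#                       np.ones(5), np.full(5, 2.0))
```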
Furthermore, our method enables flexible efficiency-accuracy trade-offs by adjusting the model/basis sizes.", "pdf": "https://openreview.net/pdf/fba5d5f14c2efcc2d8819f9f54549ccad5dcf1a2.pdf"} {"title": "Robust Reinforcement Learning from Corrupted Human Feedback", "url": "https://openreview.net/forum?id=cR2QDzdpEv", "detail_url": "https://openreview.net/forum?id=cR2QDzdpEv", "authors": "Alexander Bukharin,Ilgee Hong,Haoming Jiang,Zichong Li,Qingru Zhang,Zixuan Zhang,Tuo Zhao", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning from human feedback (RLHF) provides a principled framework for aligning AI systems with human preference data. For various reasons, e.g., personal bias, context ambiguity, lack of training, etc, human annotators may give incorrect or inconsistent preference labels. To tackle this challenge, we propose a robust RLHF approach -- $R^3M$, which models the potentially corrupted preference label as sparse outliers. Accordingly, we formulate the robust reward learning as an $\\ell_1$-regularized maximum likelihood estimation problem. Computationally, we develop an efficient alternating optimization algorithm, which only incurs negligible computational overhead compared with the standard RLHF approach. Theoretically, we prove that under proper regularity conditions, $R^3M$ can consistently learn the underlying reward and identify outliers, provided that the number of outlier labels scales sublinearly with the preference sample size. Furthermore, we remark that $R^3M$ is versatile and can be extended to various preference optimization methods, including direct preference optimization (DPO). Our experiments on robotic control and natural language generation with large language models (LLMs) show that $R^3M$ improves robustness of the reward against several types of perturbations to the preference data.", "pdf": "https://openreview.net/pdf/ed58795eedc15829f681216397668d1bc4d6e894.pdf"} {"title": "No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models", "url": "https://openreview.net/forum?id=UmW9BYj761", "detail_url": "https://openreview.net/forum?id=UmW9BYj761", "authors": "Ang\u00e9line Pouget,Lucas Beyer,Emanuele Bugliarello,Xiao Wang,Andreas Peter Steiner,Xiaohua Zhai,Ibrahim Alabdulmohsin", "tags": "NIPS 2024,Poster", "abstract": "We study cultural and socioeconomic diversity in contrastive vision-language models (VLMs). Using a broad range of benchmark datasets and evaluation metrics, we bring to attention several important findings. First, the common filtering of training data to English image-text pairs disadvantages communities of lower socioeconomic status and negatively impacts cultural understanding. Notably, this performance gap is not captured by - and even at odds with - the currently popular evaluation metrics derived from the Western-centric ImageNet and COCO datasets. Second, pretraining with global, unfiltered data before fine-tuning on English content can improve cultural understanding without sacrificing performance on said popular benchmarks. Third, we introduce the task of geo-localization as a novel evaluation metric to assess cultural diversity in VLMs. 
Our work underscores the value of using diverse data to create more inclusive multimodal systems and lays the groundwork for developing VLMs that better represent global perspectives.", "pdf": "https://openreview.net/pdf/fa8f16e42b4a1c1b1a0ff08c1a81a71cc424ed82.pdf"} {"title": "Iteration Head: A Mechanistic Study of Chain-of-Thought", "url": "https://openreview.net/forum?id=QBCxWpOt5w", "detail_url": "https://openreview.net/forum?id=QBCxWpOt5w", "authors": "Vivien Cabannes,Charles Arnal,Wassim Bouaziz,Xingyu Alice Yang,Francois Charton,Julia Kempe", "tags": "NIPS 2024,Poster", "abstract": "Chain-of-Thought (CoT) reasoning is known to improve Large Language Models both empirically and in terms of theoretical approximation power.\nHowever, our understanding of the inner workings and conditions for the emergence of CoT capabilities remains limited.\nThis paper helps fill this gap by demonstrating how CoT reasoning emerges in transformers in a controlled and interpretable setting.\nIn particular, we observe the appearance of a specialized attention mechanism dedicated to iterative reasoning, which we coined \"iteration heads\".\nWe track both the emergence and the precise working of these iteration heads down to the attention level, and measure the transferability of the CoT skills to which they give rise between tasks.", "pdf": "https://openreview.net/pdf/1516c908cc539461217148deb410ac1ce3ac5316.pdf"} {"title": "Graph Diffusion Policy Optimization", "url": "https://openreview.net/forum?id=8ohsbxw7q8", "detail_url": "https://openreview.net/forum?id=8ohsbxw7q8", "authors": "Yijing Liu,Chao Du,Tianyu Pang,Chongxuan Li,Min Lin,Wei Chen", "tags": "NIPS 2024,Poster", "abstract": "Recent research has made significant progress in optimizing diffusion models for downstream objectives, which is an important pursuit in fields such as graph generation for drug design. However, directly applying these models to graphs presents challenges, resulting in suboptimal performance. This paper introduces graph diffusion policy optimization (GDPO), a novel approach to optimize graph diffusion models for arbitrary (e.g., non-differentiable) objectives using reinforcement learning. GDPO is based on an eager policy gradient tailored for graph diffusion models, developed through meticulous analysis and promising improved performance. Experimental results show that GDPO achieves state-of-the-art performance in various graph generation tasks with complex and diverse objectives. Code is available at https://github.com/sail-sg/GDPO.", "pdf": "https://openreview.net/pdf/75cae37cadd98ce9d48cbac1067ad40e6d308f2a.pdf"} {"title": "Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses", "url": "https://openreview.net/forum?id=zMNd0JuceF", "detail_url": "https://openreview.net/forum?id=zMNd0JuceF", "authors": "Xiaosen Zheng,Tianyu Pang,Chao Du,Qian Liu,Jing Jiang,Min Lin", "tags": "NIPS 2024,Poster", "abstract": "Recently, Anil et al. (2024) show that many-shot (up to hundreds of) demonstrations can jailbreak state-of-the-art LLMs by exploiting their long-context capability. Nevertheless, is it possible to use few-shot demonstrations to efficiently jailbreak LLMs within limited context sizes? While the vanilla few-shot jailbreaking may be inefficient, we propose improved techniques such as injecting special system tokens like [/INST] and employing demo-level random search from a collected demo pool.
These simple techniques result in surprisingly effective jailbreaking against aligned LLMs (even with advanced defenses). For example, our method achieves >80% (mostly >95%) ASRs on Llama-2-7B and Llama-3-8B without multiple restarts, even if the models are enhanced by strong defenses such as perplexity detection and/or SmoothLLM, which is challenging for suffix-based jailbreaking. In addition, we conduct comprehensive and elaborate (e.g., making sure to use correct system prompts) evaluations against other aligned LLMs and advanced defenses, where our method consistently achieves nearly 100% ASRs. Our code is available at https://github.com/sail-sg/I-FSJ.", "pdf": "https://openreview.net/pdf/3ed8249e86ca655d716f66c7d8b210570210f646.pdf"} {"title": "NVRC: Neural Video Representation Compression", "url": "https://openreview.net/forum?id=I29aiMdm4u", "detail_url": "https://openreview.net/forum?id=I29aiMdm4u", "authors": "Ho Man Kwan,Ge Gao,Fan Zhang,Andrew Peter Gower,David Bull", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in implicit neural representation (INR)-based video coding have demonstrated its potential to compete with both conventional and other learning-based approaches. With INR methods, a neural network is trained to overfit a video sequence, with its parameters compressed to obtain a compact representation of the video content. However, although promising results have been achieved, the best INR-based methods are still outperformed by the latest standard codecs, such as VVC VTM, partially due to the simple model compression techniques employed. In this paper, rather than focusing on representation architectures, which is a common focus in many existing works, we propose a novel INR-based video compression framework, Neural Video Representation Compression (NVRC), targeting compression of the representation. Based on its novel quantization and entropy coding approaches, NVRC is the first framework capable of optimizing an INR-based video representation in a fully end-to-end manner for the rate-distortion trade-off. To further minimize the additional bitrate overhead introduced by the entropy models, NVRC also compresses all the network, quantization and entropy model parameters hierarchically. Our experiments show that NVRC outperforms many conventional and learning-based benchmark codecs, with a 23% average coding gain over VVC VTM (Random Access) on the UVG dataset, measured in PSNR. As far as we are aware, this is the first time an INR-based video codec has achieved such performance.", "pdf": "https://openreview.net/pdf/91ea65c8ac7523de37d2071dd3d45e0806c6802d.pdf"} {"title": "Universal In-Context Approximation By Prompting Fully Recurrent Models", "url": "https://openreview.net/forum?id=GproaSYZk5", "detail_url": "https://openreview.net/forum?id=GproaSYZk5", "authors": "Aleksandar Petrov,Tom A. Lamb,Alasdair Paren,Philip Torr,Adel Bibi", "tags": "NIPS 2024,Poster", "abstract": "Zero-shot and in-context learning enable solving tasks without model fine-tuning, making them essential for developing generative model solutions. Therefore, it is crucial to understand whether a pretrained model can be prompted to approximate any function, i.e., whether it is a universal in-context approximator. While it was recently shown that transformer models do possess this property, these results rely on their attention mechanism. Hence, these findings do not apply to fully recurrent architectures like RNNs, LSTMs, and the increasingly popular SSMs.
We demonstrate that RNNs, LSTMs, GRUs, Linear RNNs, and linear gated architectures such as Mamba and Hawk/Griffin can also serve as universal in-context approximators. To streamline our argument, we introduce a programming language called LSRL that compiles to these fully recurrent architectures. LSRL may be of independent interest for further studies of fully recurrent models, such as constructing interpretability benchmarks. We also study the role of multiplicative gating and observe that architectures incorporating such gating (e.g., LSTMs, GRUs, Hawk/Griffin) can implement certain operations more stably, making them more viable candidates for practical in-context universal approximation.", "pdf": "https://openreview.net/pdf/5e09734cb861d077d2310425e541cd769d1e4804.pdf"} {"title": "What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights", "url": "https://openreview.net/forum?id=PcyioHOmjq", "detail_url": "https://openreview.net/forum?id=PcyioHOmjq", "authors": "Xin Wen,Bingchen Zhao,Yilun Chen,Jiangmiao Pang,XIAOJUAN QI", "tags": "NIPS 2024,Poster", "abstract": "Severe data imbalance naturally exists among web-scale vision-language datasets. Despite this, we find CLIP pre-trained thereupon exhibits notable robustness to the data imbalance compared to supervised learning, and demonstrates significant effectiveness in learning generalizable representations. With an aim to investigate the reasons behind this finding, we conduct controlled experiments to study various underlying factors, and reveal that CLIP's pretext task forms a dynamic classification problem wherein only a subset of classes is present in training. This isolates the bias from dominant classes and implicitly balances the learning signal. Furthermore, the robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts, which are inaccessible to supervised learning. Our study not only uncovers the mechanisms behind CLIP's generalizability beyond data imbalance but also provides transferable insights for the research community. The findings are validated in both supervised and self-supervised learning, enabling models trained on imbalanced data to achieve CLIP-level performance on diverse recognition tasks. Code and data are available at: https://github.com/CVMI-Lab/clip-beyond-tail.", "pdf": "https://openreview.net/pdf/860aabea6116af4e0ab9429cf94c2f1e5524cbc7.pdf"} {"title": "Optimal Batched Best Arm Identification", "url": "https://openreview.net/forum?id=ATSPPGEmAA", "detail_url": "https://openreview.net/forum?id=ATSPPGEmAA", "authors": "Tianyuan Jin,Yu Yang,Jing Tang,Xiaokui Xiao,Pan Xu", "tags": "NIPS 2024,Poster", "abstract": "We study the batched best arm identification (BBAI) problem, where the learner's goal is to identify the best arm while switching the policy as rarely as possible. In particular, we aim to find the best arm with probability $1-\\delta$ for some small constant $\\delta>0$ while minimizing both the sample complexity (total number of arm pulls) and the batch complexity (total number of batches). We propose the three-batch best arm identification (Tri-BBAI) algorithm, which is the first batched algorithm that achieves the optimal sample complexity in the asymptotic setting (i.e., $\\delta\\rightarrow 0$) and runs in $3$ batches in expectation.
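To make the batched structure concrete, here is a generic batched-elimination sketch; it is illustrative only and not the exact Tri-BBAI procedure, and the confidence radius, batch sizes, and `pull` sampling oracle are assumptions.

```python
# Illustrative batched elimination (a generic baseline, *not* Tri-BBAI):
# each batch pulls every surviving arm equally, then drops arms whose
# upper confidence bound falls below the best lower confidence bound.
# `pull(arm)` is an assumed oracle returning one bounded reward sample.
import math

def batched_elimination(n_arms, pull, delta, batch_sizes=(10, 100, 1000)):
    surviving = list(range(n_arms))
    means, counts = [0.0] * n_arms, [0] * n_arms
    for b in batch_sizes:                     # one policy switch per batch
        for a in surviving:
            s = sum(pull(a) for _ in range(b))
            means[a] = (means[a] * counts[a] + s) / (counts[a] + b)
            counts[a] += b
        def radius(a):                        # Hoeffding confidence radius
            return math.sqrt(math.log(2 * n_arms / delta) / (2 * counts[a]))
        best_lcb = max(means[a] - radius(a) for a in surviving)
        surviving = [a for a in surviving if means[a] + radius(a) >= best_lcb]
        if len(surviving) == 1:
            break
    return max(surviving, key=lambda a: means[a])
```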
Based on Tri-BBAI, we further propose the almost optimal batched best arm identification (Opt-BBAI) algorithm, which is the first algorithm that achieves the near-optimal sample and batch complexity in the non-asymptotic setting (i.e., $1/\delta$ is finite), while enjoying the same batch and sample complexity as Tri-BBAI when $\delta$ tends to zero. Moreover, in the non-asymptotic setting, the complexity of previous batch algorithms is usually conditioned on the event that the best arm is returned (with a probability of at least $1-\delta$), which is potentially unbounded in cases where a sub-optimal arm is returned. In contrast, the complexity of Opt-BBAI does not rely on such an event. This is achieved through a novel procedure that we design for checking whether the best arm is eliminated, which is of independent interest.", "pdf": "https://openreview.net/pdf/210abbae8e31c277dde9d07d3554330ecf75115b.pdf"} {"title": "Smoothed Energy Guidance: Guiding Diffusion Models with Reduced Energy Curvature of Attention", "url": "https://openreview.net/forum?id=JK728xy8G7", "detail_url": "https://openreview.net/forum?id=JK728xy8G7", "authors": "Susung Hong", "tags": "NIPS 2024,Poster", "abstract": "Conditional diffusion models have shown remarkable success in visual content generation, producing high-quality samples across various domains, largely due to classifier-free guidance (CFG). Recent attempts to extend guidance to unconditional models have relied on heuristic techniques, resulting in suboptimal generation quality and unintended effects. In this work, we propose Smoothed Energy Guidance (SEG), a novel training- and condition-free approach that leverages the energy-based perspective of the self-attention mechanism to enhance image generation. By defining the energy of self-attention, we introduce a method to reduce the curvature of the energy landscape of attention and use the output as the unconditional prediction. Practically, we control the curvature of the energy landscape by adjusting the Gaussian kernel parameter while keeping the guidance scale parameter fixed. Additionally, we present a query blurring method that is equivalent to blurring the entire attention weights without incurring quadratic complexity in the number of tokens. In our experiments, SEG achieves a Pareto improvement in both quality and the reduction of side effects. The code is available at https://github.com/SusungHong/SEG-SDXL.", "pdf": "https://openreview.net/pdf/4f24aa67f3286a54a9f8960ffea7bf16c84c2b46.pdf"} {"title": "MaNo: Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts", "url": "https://openreview.net/forum?id=mH1xtt2bJE", "detail_url": "https://openreview.net/forum?id=mH1xtt2bJE", "authors": "RENCHUNZI XIE,Ambroise Odonnat,Vasilii Feofanov,Weijian Deng,Jianfeng Zhang,Bo An", "tags": "NIPS 2024,Poster", "abstract": "Leveraging the model\u2019s outputs, specifically the logits, is a common approach to estimating the test accuracy of a pre-trained neural network on out-of-distribution (OOD) samples without requiring access to the corresponding ground-truth labels.\nDespite their ease of implementation and computational efficiency, current logit-based methods are vulnerable to overconfidence issues, leading to prediction bias, especially under natural shifts. In this work, we first study the relationship between logits and generalization performance from the view of the low-density separation assumption.
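A minimal sketch of the query-blurring idea from the SEG abstract above, in a 1D token-axis form (image models would blur over the 2D token grid): because attention logits are linear in the queries, Gaussian-smoothing Q smooths the logit matrix along its query axis without forming the full attention map. The kernel construction below is an illustrative assumption.

```python
# Minimal sketch of query blurring (1D analogue). Since logits q_i . k_j
# are linear in q, smoothing Q along the token axis smooths the attention
# logits along their query axis at O(n) extra cost instead of O(n^2).
import torch
import torch.nn.functional as F

def blur_queries(q, sigma=2.0):
    """q: (batch, n_tokens, dim) queries; returns blurred queries."""
    radius = max(1, int(3 * sigma))
    x = torch.arange(-radius, radius + 1, dtype=q.dtype, device=q.device)
    kernel = torch.exp(-x**2 / (2 * sigma**2))
    kernel = kernel / kernel.sum()
    qc = q.transpose(1, 2)                                # (b, dim, n)
    weight = kernel.view(1, 1, -1).repeat(qc.shape[1], 1, 1)
    # depthwise 1D convolution over the token axis
    blurred = F.conv1d(qc, weight, padding=radius, groups=qc.shape[1])
    return blurred.transpose(1, 2)                        # (b, n, dim)
```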
Our findings motivate our proposed method MaNo that (1) applies a data-dependent normalization on the logits to reduce prediction bias, and (2) takes the $L_p$ norm of the matrix of normalized logits as the estimation score. Our theoretical analysis highlights the connection between the provided score and the model's uncertainty. \nWe conduct an extensive empirical study on common unsupervised accuracy estimation benchmarks and demonstrate that MaNo achieves state-of-the-art performance across various architectures in the presence of synthetic, natural, or subpopulation shifts. The code is available at https://github.com/Renchunzi-Xie/MaNo.", "pdf": "https://openreview.net/pdf/9631c6bbc3edf891cd13ddecc4321745d5758956.pdf"} {"title": "DeNetDM: Debiasing by Network Depth Modulation", "url": "https://openreview.net/forum?id=0dtA21q83C", "detail_url": "https://openreview.net/forum?id=0dtA21q83C", "authors": "Silpa Vadakkeeveetil Sreelatha,Adarsh Kappiyath,Abhra Chaudhuri,Anjan Dutta", "tags": "NIPS 2024,Poster", "abstract": "Neural networks trained on biased datasets tend to inadvertently learn spurious correlations, hindering generalization. We formally prove that (1) samples that exhibit spurious correlations lie on a lower rank manifold relative to the ones that do not; and (2) the depth of a network acts as an implicit regularizer on the rank of the attribute subspace that is encoded in its representations. Leveraging these insights, we present DeNetDM, a novel debiasing method that uses network depth modulation as a way of developing robustness to spurious correlations. Using a training paradigm derived from Product of Experts, we create both biased and debiased branches with deep and shallow architectures and then distill knowledge to produce the target debiased model. Our method requires no bias annotations or explicit data augmentation while performing on par with approaches that require either or both. We demonstrate that DeNetDM outperforms existing debiasing techniques on both synthetic and real-world datasets by 5\%. The project page is available at https://vssilpa.github.io/denetdm/.", "pdf": "https://openreview.net/pdf/9114261717f42179cdab7dfd174c783d97688e8a.pdf"} {"title": "NeuralSolver: Learning Algorithms For Consistent and Efficient Extrapolation Across General Tasks", "url": "https://openreview.net/forum?id=IxRf7Q3s5e", "detail_url": "https://openreview.net/forum?id=IxRf7Q3s5e", "authors": "Bernardo Esteves,Miguel Vasco,Francisco S. Melo", "tags": "NIPS 2024,Poster", "abstract": "We contribute NeuralSolver, a novel recurrent solver that can efficiently and consistently extrapolate, i.e., learn algorithms from smaller problems (in terms of observation size) and execute those algorithms in large problems. Contrary to previous recurrent solvers, NeuralSolver can be naturally applied in both same-size problems, where the input and output sizes are the same, and in different-size problems, where the size of the input and output differ.
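A minimal sketch of the MaNo-style score just described, using softmax as the data-dependent normalization for illustration (the paper's exact normalization may differ).

```python
# Minimal sketch: normalized logits, then an entry-wise L_p matrix norm
# as the accuracy-estimation score. Softmax stands in for the paper's
# data-dependent normalization.
import torch

def matrix_norm_score(logits, p=4):
    """logits: (n_samples, n_classes) from the unlabeled OOD set."""
    probs = torch.softmax(logits, dim=-1)     # normalization step
    # mean-normalized entry-wise L_p norm, comparable across set sizes
    return probs.abs().pow(p).mean().pow(1.0 / p).item()
```

Higher scores indicate more confident, lower-entropy predictions, which under the low-density separation view correlates with target accuracy.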
To allow for this versatility, we design NeuralSolver with three main components: a recurrent module that iteratively processes input information at different scales; a processing module responsible for aggregating the previously processed information; and a curriculum-based training scheme that improves the extrapolation performance of the method.\nTo evaluate our method, we introduce a set of novel different-size tasks and show that NeuralSolver consistently outperforms prior state-of-the-art recurrent solvers in extrapolating to larger problems, while training on smaller problems and requiring fewer parameters than other approaches.", "pdf": "https://openreview.net/pdf/1637e6a79daaa5332c2df06284d97cccc303eed4.pdf"} {"title": "Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting", "url": "https://openreview.net/forum?id=dxyNVEBQMp", "detail_url": "https://openreview.net/forum?id=dxyNVEBQMp", "authors": "Bong Gyun Kang,Dongjun Lee,HyunGi Kim,Dohyun Chung,Sungroh Yoon", "tags": "NIPS 2024,Poster", "abstract": "Sequence modeling faces challenges in capturing long-range dependencies across diverse tasks. Recent linear and transformer-based forecasters have shown superior performance in time series forecasting. However, they are constrained by their inherent inability to effectively address long-range dependencies in time series data, primarily due to using fixed-size inputs for prediction. Furthermore, they typically sacrifice essential temporal correlation among consecutive training samples by shuffling them into mini-batches. To overcome these limitations, we introduce a fast and effective Spectral Attention mechanism, which preserves temporal correlations among samples and facilitates the handling of long-range information while maintaining the base model structure. Spectral Attention preserves long-period trends through a low-pass filter and facilitates gradient flow between samples. Spectral Attention can be seamlessly integrated into most sequence models, allowing models with fixed-sized look-back windows to capture long-range dependencies over thousands of steps. Through extensive experiments on 11 real-world time series datasets using 7 recent forecasting models, we consistently demonstrate the efficacy of our Spectral Attention mechanism, achieving state-of-the-art results.", "pdf": "https://openreview.net/pdf/a2a625fdbd4531ef073c8b9a594943e4b4756d64.pdf"} {"title": "Fine-grained Image-to-LiDAR Contrastive Distillation with Visual Foundation Models", "url": "https://openreview.net/forum?id=63xeWav1lU", "detail_url": "https://openreview.net/forum?id=63xeWav1lU", "authors": "Yifan Zhang,Junhui Hou", "tags": "NIPS 2024,Poster", "abstract": "Contrastive image-to-LiDAR knowledge transfer, commonly used for learning 3D representations with synchronized images and point clouds, often faces a self-conflict dilemma. This issue arises as contrastive losses unintentionally dissociate features of unmatched points and pixels that share semantic labels, compromising the integrity of learned representations. To overcome this, we harness Visual Foundation Models (VFMs), which have revolutionized the acquisition of pixel-level semantics, to enhance 3D representation learning. Specifically, we utilize off-the-shelf VFMs to generate semantic labels for weakly-supervised pixel-to-point contrastive distillation.
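As a rough illustration of the low-pass idea in the Spectral Attention abstract above (not the paper's mechanism), exponential moving averages at several decay rates let information older than the look-back window reach the model while gradients flow across samples.

```python
# Rough illustration: EMAs of features at several decay rates act as
# low-pass filters, carrying long-period trends past the fixed look-back
# window while letting gradients flow between consecutive samples.
import torch

class LowPassMemory(torch.nn.Module):
    def __init__(self, dim, decays=(0.9, 0.99, 0.999)):
        super().__init__()
        self.register_buffer("state", torch.zeros(len(decays), dim))
        self.decays = torch.tensor(decays).unsqueeze(-1)    # (k, 1)
        self.mix = torch.nn.Linear(len(decays) * dim, dim)  # learned readout

    def forward(self, x):
        # x: (dim,) features of the current sample; state is smoothed history
        d = self.decays.to(x)
        self.state = d * self.state + (1 - d) * x
        return self.mix(self.state.flatten())
```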
Additionally, we employ von Mises-Fisher distributions to structure the feature space, ensuring semantic embeddings within the same class remain consistent across varying inputs. Furthermore, we adapt the sampling probabilities of points to address imbalances in spatial distribution and category frequency, promoting comprehensive and balanced learning. Extensive experiments demonstrate that our approach mitigates the challenges posed by traditional methods and consistently surpasses existing image-to-LiDAR contrastive distillation methods in downstream tasks. We have included the code in supplementary materials.", "pdf": "https://openreview.net/pdf/2ec07c56bf2d54c3e615ba8440cb116562b9a546.pdf"} {"title": "Smoothed Online Classification can be Harder than Batch Classification", "url": "https://openreview.net/forum?id=NO9MSeZs6g", "detail_url": "https://openreview.net/forum?id=NO9MSeZs6g", "authors": "Vinod Raman,Unique Subedi,Ambuj Tewari", "tags": "NIPS 2024,Poster", "abstract": "We study online classification under smoothed adversaries. In this setting, at each time point, the adversary draws an example from a distribution that has a bounded density with respect to a fixed base measure, which is known a priori to the learner. For binary classification and scalar-valued regression, previous works [Haghtalab et al., 2020, Block et al., 2022] have shown that smoothed online learning is as easy as learning in the iid batch setting under the PAC model. However, we show that smoothed online classification can be harder than iid batch classification when the label space is unbounded. In particular, we construct a hypothesis class that is learnable in the iid batch setting under the PAC model but is not learnable under the smoothed online model. Finally, we identify a condition that ensures that the PAC learnability of a hypothesis class is sufficient for its smoothed online learnability.", "pdf": "https://openreview.net/pdf/327ecbdb30d276e39bfd7ab4af9676dcfedb4a5e.pdf"} {"title": "Online Classification with Predictions", "url": "https://openreview.net/forum?id=MB0DD5qAz8", "detail_url": "https://openreview.net/forum?id=MB0DD5qAz8", "authors": "Vinod Raman,Ambuj Tewari", "tags": "NIPS 2024,Poster", "abstract": "We study online classification when the learner has access to predictions about future examples. We design an online learner whose expected regret is never worse than the worst-case regret, gracefully improves with the quality of the predictions, and can be significantly better than the worst-case regret when the predictions of future examples are accurate. As a corollary, we show that if the learner is always guaranteed to observe data where future examples are easily predictable, then online learning can be as easy as transductive online learning. Our results complement recent work in online algorithms with predictions and smoothed online classification, which go beyond a worst-case analysis by using machine-learned predictions and distributional assumptions respectively.", "pdf": "https://openreview.net/pdf/56668ce57092f574a73e9f701399ee45704e71c3.pdf"} {"title": "RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions", "url": "https://openreview.net/forum?id=eKVugi5zr0", "detail_url": "https://openreview.net/forum?id=eKVugi5zr0", "authors": "Easton Knight Huch,Jieru Shi,Madeline R Abbott,Jessica R Golbus,Alexander Moreno,Walter H. Dempsey
", "tags": "NIPS 2024,Poster", "abstract": "Mobile health leverages personalized and contextually tailored interventions optimized through bandit and reinforcement learning algorithms. In practice, however, challenges such as participant heterogeneity, nonstationarity, and nonlinear relationships hinder algorithm performance. We propose RoME, a **Ro**bust **M**ixed-**E**ffects contextual bandit algorithm that simultaneously addresses these challenges via (1) modeling the differential reward with user- and time-specific random effects, (2) network cohesion penalties, and (3) debiased machine learning for flexible estimation of baseline rewards. We establish a high-probability regret bound that depends solely on the dimension of the differential-reward model, enabling us to achieve robust regret bounds even when the baseline reward is highly complex. We demonstrate the superior performance of the RoME algorithm in a simulation and two off-policy evaluation studies.", "pdf": "https://openreview.net/pdf/62fc12290cd7fb3ec436e0b04de4642407685df9.pdf"} {"title": "The Sample Complexity of Gradient Descent in Stochastic Convex Optimization", "url": "https://openreview.net/forum?id=2INcTKPBy4", "detail_url": "https://openreview.net/forum?id=2INcTKPBy4", "authors": "Roi Livni", "tags": "NIPS 2024,Poster", "abstract": "We analyze the sample complexity of full-batch Gradient Descent (GD) in the setup of non-smooth Stochastic Convex Optimization. We show that the generalization error of GD, with common choice of hyper-parameters, can be $\\tilde \\Theta(d/m+1/\\sqrt{m})$, where d is the dimension and m is the sample size. This matches the sample complexity of *worst-case* empirical risk minimizers. That means that, in contrast with other algorithms, GD has no advantage over naive ERMs. Our bound follows from a new generalization bound that depends on both the dimension as well as the learning rate and number of iterations. Our bound also shows that, for general hyper-parameters, when the dimension is strictly larger than the number of samples, $T=\\Omega(1/\\epsilon^4)$ iterations are necessary to avoid overfitting. This resolves an open problem by Schlisserman et al. (2023) and Amir et al. (2021), and improves over previous lower bounds that demonstrated that the sample size must be at least the square root of the dimension.", "pdf": "https://openreview.net/pdf/05d196d389362478435b024f2271b8f2c4f184ff.pdf"} {"title": "MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer", "url": "https://openreview.net/forum?id=vpEq2bzsS0", "detail_url": "https://openreview.net/forum?id=vpEq2bzsS0", "authors": "Minghao Zhu,Zhengpu Wang,Mengxian Hu,Ronghao Dang,Xiao Lin,Xun Zhou,Chengju Liu,Qijun Chen", "tags": "NIPS 2024,Poster", "abstract": "Transferring visual-language knowledge from large-scale foundation models for video recognition has proved to be effective. To bridge the domain gap, additional parametric modules are added to capture the temporal information. However, zero-shot generalization diminishes with the increase in the number of specialized parameters, making existing works a trade-off between zero-shot and closed-set performance. In this paper, we present MoTE, a novel framework that enables generalization and specialization to be balanced in one unified model. Our approach tunes a mixture of temporal experts to learn multiple task views with various degrees of data fitting.
To maximally preserve the knowledge of each expert, we propose Weight Merging Regularization, which regularizes the merging process of experts in weight space. Additionally, we apply temporal feature modulation to regularize the contribution of temporal features at test time. We achieve a sound balance between zero-shot and closed-set video recognition tasks and obtain state-of-the-art or competitive results on various datasets, including Kinetics-400 \& 600, UCF, and HMDB. Code is available at https://github.com/ZMHH-H/MoTE.", "pdf": "https://openreview.net/pdf/70da6cead9fc894a60ab0f52a2a9b0e9149a95a3.pdf"} {"title": "REDUCR: Robust Data Downsampling using Class Priority Reweighting", "url": "https://openreview.net/forum?id=Jz7Z7KkR94", "detail_url": "https://openreview.net/forum?id=Jz7Z7KkR94", "authors": "William Bankes,George Hughes,Ilija Bogunovic,Zi Wang", "tags": "NIPS 2024,Poster", "abstract": "Modern machine learning models are becoming increasingly expensive to train for real-world image and text classification tasks, where massive web-scale data is collected in a streaming fashion. To reduce the training cost, online batch selection techniques have been developed to choose the most informative datapoints. However, many existing techniques are not robust to class imbalance and distributional shifts, and can suffer from poor worst-class generalization performance. This work introduces REDUCR, a robust and efficient data downsampling method that uses class priority reweighting. REDUCR reduces the training data while preserving worst-class generalization performance. REDUCR assigns priority weights to datapoints in a class-aware manner using an online learning algorithm. We demonstrate the data efficiency and robust performance of REDUCR on vision and text classification tasks. On web-scraped datasets with imbalanced class distributions, REDUCR significantly improves worst-class test accuracy (and average accuracy), surpassing state-of-the-art methods by around 15\%.", "pdf": "https://openreview.net/pdf/92f45c14d5a65b0fd04dc3e7a1f0716aef5d0c18.pdf"} {"title": "Few-Shot Diffusion Models Escape the Curse of Dimensionality", "url": "https://openreview.net/forum?id=JrraNaaZm5", "detail_url": "https://openreview.net/forum?id=JrraNaaZm5", "authors": "Ruofeng Yang,Bo Jiang,Cheng Chen,Ruinan Jin,Baoxiang Wang,Shuai Li", "tags": "NIPS 2024,Poster", "abstract": "While diffusion models have demonstrated impressive performance, there is a growing need for generating samples tailored to specific user-defined concepts. The customized requirements promote the development of few-shot diffusion models, which use limited $n_{ta}$ target samples to fine-tune a pre-trained diffusion model trained on $n_s$ source samples. Despite the empirical success, no theoretical work specifically analyzes few-shot diffusion models. Moreover, the existing results for diffusion models without a fine-tuning phase cannot explain why few-shot models generate high-quality samples, due to the curse of dimensionality. In this work, we analyze few-shot diffusion models under a linear structure distribution with a latent dimension $d$. From the approximation perspective, we prove that few-shot models have a $\\widetilde{O}(n_s^{-2/d}+n_{ta}^{-1/2})$ bound to approximate the target score function, which is better than existing $n_{ta}^{-2/d}$ results. From the optimization perspective, we consider a latent Gaussian special case and prove that the optimization problem has a closed-form minimizer.
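A minimal sketch of class-priority-reweighted batch selection in the spirit of the REDUCR abstract above; the top-k selection rule and the exponentiated-gradient-style weight update are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative class-priority reweighting: keep the b points with the
# largest class-weighted losses, then upweight classes with high loss via
# a multiplicative update renormalized onto the simplex.
import torch

def select_batch(losses, labels, class_w, b, eta=0.1):
    """losses: (n,) per-example losses; labels: (n,) class ids;
    class_w: (n_classes,) current priority weights."""
    priority = losses * class_w[labels]          # class-aware priorities
    keep = torch.topk(priority, k=b).indices     # train on these points
    for c in labels.unique():                    # track struggling classes
        class_w[c] = class_w[c] * torch.exp(eta * losses[labels == c].mean())
    class_w = class_w / class_w.sum()            # renormalize
    return keep, class_w
```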
This means few-shot models can directly obtain an approximated minimizer without a complex optimization process. Furthermore, we also provide the accuracy bound $\\widetilde{O}(1/n_{ta}+1/\\sqrt{n_s})$ for the empirical solution, which still has better dependence on $n_{ta}$ compared to $n_s$. The results of the real-world experiments also show that the models obtained by only fine-tuning the encoder and decoder specific to the target distribution can produce novel images with the target feature, which supports our theoretical results.", "pdf": "https://openreview.net/pdf/62e640c146c9784484a1e12a196d734f5a983723.pdf"} {"title": "The Prevalence of Neural Collapse in Neural Multivariate Regression", "url": "https://openreview.net/forum?id=Wq6aY6fC2H", "detail_url": "https://openreview.net/forum?id=Wq6aY6fC2H", "authors": "George Andriopoulos,Zixuan Dong,Li Guo,Zifan Zhao,Keith W. Ross", "tags": "NIPS 2024,Poster", "abstract": "Recently it has been observed that neural networks exhibit Neural Collapse (NC) during the final stage of training for the classification problem. We empirically show that multivariate regression, as employed in imitation learning and other applications, exhibits Neural Regression Collapse (NRC), a new form of neural collapse: (NRC1) The last-layer feature vectors collapse to the subspace spanned by the $n$ principal components of the feature vectors, where $n$ is the dimension of the targets (for univariate regression, $n=1$); (NRC2) The last-layer feature vectors also collapse to the subspace spanned by the last-layer weight vectors; (NRC3) The Gram matrix for the weight vectors converges to a specific functional form that depends on the covariance matrix of the targets. After empirically establishing the prevalence of (NRC1)-(NRC3) for a variety of datasets and network architectures, we provide an explanation of these phenomena by modeling the regression task in the context of the Unconstrained Feature Model (UFM), in which the last layer feature vectors are treated as free variables when minimizing the loss function. We show that when the regularization parameters in the UFM model are strictly positive, then (NRC1)-(NRC3) also emerge as solutions in the UFM optimization problem. We also show that if the regularization parameters are equal to zero, then there is no collapse. To our knowledge, this is the first empirical and theoretical study of neural collapse in the context of regression. This extension is significant not only because it broadens the applicability of neural collapse to a new category of problems but also because it suggests that the phenomena of neural collapse could be a universal behavior in deep learning.", "pdf": "https://openreview.net/pdf/d816a4bfede163a0b51fdf035b9f8f2876124b39.pdf"} {"title": "Interfacing Foundation Models' Embeddings", "url": "https://openreview.net/forum?id=U3hQoqgQDJ", "detail_url": "https://openreview.net/forum?id=U3hQoqgQDJ", "authors": "Xueyan Zou,Linjie Li,Jianfeng Wang,Jianwei Yang,Mingyu Ding,Junyi Wei,Zhengyuan Yang,Feng Li,Hao Zhang,Shilong Liu,Arul Aravinthan,Yong Jae Lee,Lijuan Wang", "tags": "NIPS 2024,Poster", "abstract": "Foundation models possess strong capabilities in reasoning and memorizing across modalities. To further unleash the power of foundation models, we present FIND, a generalized interface for aligning foundation models' embeddings with unified image and dataset-level understanding spanning modality and granularity. 
As shown in Fig. 1, a lightweight transformer interface without tuning any foundation model weights is enough for segmentation, grounding, and retrieval in an interleaved manner. The proposed interface has the following favorable attributes: (1) Generalizable. It applies to various tasks spanning retrieval, segmentation, etc., under the same architecture and weights. (2) Interleavable. With the benefit of multi-task multi-modal training, the proposed interface creates an interleaved shared embedding space. (3) Extendable. The proposed interface is adaptive to new tasks and new models. In light of the interleaved embedding space, we introduce FIND-Bench, which introduces new training and evaluation annotations to the COCO dataset for interleaved segmentation and retrieval. We are the first work to align foundation models' embeddings for interleaved understanding. Meanwhile, our approach achieves state-of-the-art performance on FIND-Bench and competitive performance on standard retrieval and segmentation settings.", "pdf": "https://openreview.net/pdf/84a07b49d47a8fe611f8368de91a45e448a7a868.pdf"} {"title": "Leveraging Drift to Improve Sample Complexity of Variance Exploding Diffusion Models", "url": "https://openreview.net/forum?id=euQ0C4iS7O", "detail_url": "https://openreview.net/forum?id=euQ0C4iS7O", "authors": "Ruofeng Yang,Zhijie Wang,Bo Jiang,Shuai Li", "tags": "NIPS 2024,Poster", "abstract": "Variance exploding (VE) based diffusion models, an important class of diffusion models, have shown state-of-the-art (SOTA) performance. However, only a few theoretical works analyze VE-based models, and those works suffer from a worse forward convergence rate $1/\\text{poly}(T)$ than the $\\exp{(-T)}$ of variance preserving (VP) based models, where $T$ is the forward diffusion time and the rate measures the distance between the forward marginal distribution $q_T$ and pure Gaussian noise. The slow rate is due to the Brownian motion forward process, which lacks a drift term. In this work, we design a new drifted VESDE forward process, which allows a faster $\\exp{(-T)}$ forward convergence rate. With this process, we achieve the first efficient polynomial sample complexity for a series of VE-based models with reverse SDE under the manifold hypothesis. Furthermore, unlike previous works, we allow the diffusion coefficient to be unbounded instead of a constant, which is closer to the SOTA models. Besides the reverse SDE, the other common reverse process is the probability flow ODE (PFODE) process, which is deterministic and enjoys faster sampling speed.
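To see why a drift term buys the $\exp(-T)$ forward rate claimed above, one natural illustration (an assumption; the paper's drifted VESDE may differ) is an Ornstein-Uhlenbeck-type drift:

```latex
% Illustration only: an Ornstein-Uhlenbeck drift added to the VESDE.
\begin{align*}
\text{VESDE:} \quad \mathrm{d}x_t &= g(t)\,\mathrm{d}B_t,
  & \mathbb{E}[x_t] &= \mathbb{E}[x_0] \quad \text{(no contraction)} \\
\text{drifted:} \quad \mathrm{d}x_t &= -\theta x_t\,\mathrm{d}t + g(t)\,\mathrm{d}B_t,
  & \mathbb{E}[x_t] &= e^{-\theta t}\,\mathbb{E}[x_0],
\end{align*}
% so the mean of the forward marginal contracts toward its Gaussian limit
% at rate exp(-theta T), rather than not contracting at all.
```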
To deepen the understanding of VE-based models, we consider a more general setting that treats the reverse SDE and PFODE simultaneously, propose a unified tangent-based analysis framework, and prove the first quantitative convergence guarantee for SOTA VE-based models with reverse PFODE.\nWe also show that the drifted VESDE can balance different error terms and improve generated samples without training through synthetic and real-world experiments.", "pdf": "https://openreview.net/pdf/05ca69f4daef20ab2561b65902f0cc6bb096148c.pdf"} {"title": "IODA: Instance-Guided One-shot Domain Adaptation for Super-Resolution", "url": "https://openreview.net/forum?id=qbvt3ocQxB", "detail_url": "https://openreview.net/forum?id=qbvt3ocQxB", "authors": "Zaizuo Tang,Yu-Bin Yang", "tags": "NIPS 2024,Poster", "abstract": "The domain adaptation method effectively mitigates the negative impact of domain gaps on the performance of super-resolution (SR) networks through the guidance of numerous target domain low-resolution (LR) images. However, in real-world scenarios, the availability of target domain LR images is often limited, sometimes even to just one, which inevitably impairs the domain adaptation performance of SR networks. We propose Instance-guided One-shot Domain Adaptation for Super-Resolution (IODA) to enable efficient domain adaptation with only a single unlabeled target domain LR image. To address the limited diversity of the target domain distribution caused by a single target domain LR image, we propose an instance-guided target domain distribution expansion strategy. This strategy effectively expands the diversity of the target domain distribution by generating instance-specific features focused on different instances within the image. For SR tasks emphasizing texture details, we propose an image-guided domain adaptation method. Compared to existing methods that use text representations to capture domain differences, this method utilizes pixel-level representations with higher granularity, enabling efficient domain adaptation guidance for SR networks. Finally, we validate the effectiveness of IODA on multiple datasets and various network architectures, achieving satisfactory one-shot domain adaptation for SR networks. Our code is available at https://github.com/ZaizuoTang/IODA.", "pdf": "https://openreview.net/pdf/8e30ba7218af6df482043e5a2bc47e57f1b975d2.pdf"} {"title": "Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition", "url": "https://openreview.net/forum?id=PaqJ71zf1M", "detail_url": "https://openreview.net/forum?id=PaqJ71zf1M", "authors": "Zi-Hao Zhou,Siyuan Fang,Zi-Jing Zhou,Tong Wei,Yuanyu Wan,Min-Ling Zhang", "tags": "NIPS 2024,Poster", "abstract": "Long-tailed semi-supervised learning (LTSSL) poses a significant challenge in training models with limited labeled data exhibiting a long-tailed label distribution. Current state-of-the-art LTSSL approaches heavily rely on high-quality pseudo-labels for large-scale unlabeled data. However, these methods often neglect the impact of representations learned by the neural network and struggle with real-world unlabeled data, which typically follows a different distribution than labeled data. This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning. Our framework derives the class-balanced contrastive loss through Gaussian kernel density estimation.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using *reliable* and *smoothed* pseudo-labels. By progressively estimating the underlying label distribution and optimizing its alignment with model predictions, we tackle the diverse distribution of unlabeled data in real-world scenarios. Extensive experiments across multiple datasets with varying unlabeled data distributions demonstrate that CCL consistently outperforms prior state-of-the-art methods, achieving over 4% improvement on the ImageNet-127 dataset. The supplementary material includes the source code for reproducibility.", "pdf": "https://openreview.net/pdf/00e160f103e08aceec40d0b53179b60661f39e8d.pdf"} {"title": "MADiff: Offline Multi-agent Learning with Diffusion Models", "url": "https://openreview.net/forum?id=PvoxbjcRPT", "detail_url": "https://openreview.net/forum?id=PvoxbjcRPT", "authors": "Zhengbang Zhu,Minghuan Liu,Liyuan Mao,Bingyi Kang,Minkai Xu,Yong Yu,Stefano Ermon,Weinan Zhang", "tags": "NIPS 2024,Poster", "abstract": "Offline reinforcement learning (RL) aims to learn policies from pre-existing datasets without further interactions, making it a challenging task. Q-learning algorithms struggle with extrapolation errors in offline settings, while supervised learning methods are constrained by model expressiveness. Recently, diffusion models (DMs) have shown promise in overcoming these limitations in single-agent learning, but their application in multi-agent scenarios remains unclear. Generating trajectories for each agent with independent DMs may impede coordination, while concatenating all agents\u2019 information can lead to low sample efficiency. Accordingly, we propose MADiff, which is realized with an attention-based diffusion model to model the complex coordination among behaviors of multiple agents. To our knowledge, MADiff is the first diffusion-based multi-agent learning framework, functioning as both a decentralized policy and a centralized controller. During decentralized executions, MADiff simultaneously performs teammate modeling, and the centralized controller can also be applied in multi-agent trajectory predictions. Our experiments demonstrate that MADiff outperforms baseline algorithms across various multi-agent learning tasks, highlighting its effectiveness in modeling complex multi-agent interactions.", "pdf": "https://openreview.net/pdf/37477cb53f290215c287505636803a14d8c9a184.pdf"} {"title": "Hallo3D: Multi-Modal Hallucination Detection and Mitigation for Consistent 3D Content Generation", "url": "https://openreview.net/forum?id=pqi4vqBYXW", "detail_url": "https://openreview.net/forum?id=pqi4vqBYXW", "authors": "Hongbo Wang,Jie Cao,Jin Liu,Xiaoqiang Zhou,Huaibo Huang,Ran He", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in 3D content generation have been significant, primarily due to the visual priors provided by pretrained diffusion models. However, large 2D visual models exhibit spatial perception hallucinations, leading to multi-view inconsistency in 3D content generated through Score Distillation Sampling (SDS). This phenomenon, characterized by overfitting to specific views, is referred to as the \"Janus Problem\". In this work, we investigate the hallucination issues of pretrained models and find that large multimodal models without geometric constraints possess the capability to infer geometric structures, which can be utilized to mitigate multi-view inconsistency. Building on this, we propose a novel tuning-free method. 
We formulate multimodal inconsistency queries to detect specific hallucinations in 3D content, and use them as enhanced prompts to restore consistency across the 2D renderings of the 3D content, jointly optimizing structure and appearance across different views. Our approach does not require 3D training data and can be implemented plug-and-play within existing frameworks. Extensive experiments demonstrate that our method significantly improves the consistency of 3D content generation and specifically mitigates hallucinations caused by pretrained large models, achieving state-of-the-art performance compared to other optimization methods.", "pdf": "https://openreview.net/pdf/fb79eff5c9130f4e3bb3bf8229453ab45ce5a1f3.pdf"} {"title": "LiT: Unifying LiDAR \"Languages\" with LiDAR Translator", "url": "https://openreview.net/forum?id=wcX04Wn34u", "detail_url": "https://openreview.net/forum?id=wcX04Wn34u", "authors": "Yixing Lao,Tao Tang,Xiaoyang Wu,Peng Chen,Kaicheng Yu,Hengshuang Zhao", "tags": "NIPS 2024,Poster", "abstract": "LiDAR data exhibits significant domain gaps due to variations in sensors, vehicles, and driving environments, creating \u201clanguage barriers\u201d that limit the effective use of data across domains and the scalability of LiDAR perception models. To address these challenges, we introduce the LiDAR Translator (LiT), a framework that directly translates LiDAR data across domains, enabling both cross-domain adaptation and multi-domain joint learning. LiT integrates three key components: a scene modeling module for precise foreground and background reconstruction, a LiDAR modeling module that models LiDAR rays statistically and simulates ray-drop, and a fast, hardware-accelerated ray casting engine. LiT enables state-of-the-art zero-shot and unified domain detection across diverse LiDAR datasets, marking a step toward data-driven domain unification for autonomous driving systems. Source code and demos are available at: https://yxlao.github.io/lit.", "pdf": "https://openreview.net/pdf/b9f11e717fae53a1228a5b9c208bb323f8080693.pdf"} {"title": "Instruction Tuning With Loss Over Instructions", "url": "https://openreview.net/forum?id=GcZgo9ffGt", "detail_url": "https://openreview.net/forum?id=GcZgo9ffGt", "authors": "Zhengyan Shi,Adam X. Yang,Bin Wu,Laurence Aitchison,Emine Yilmaz,Aldo Lipani", "tags": "NIPS 2024,Poster", "abstract": "Instruction tuning plays a crucial role in shaping the outputs of language models (LMs) to desired styles. In this work, we propose a simple yet effective method, Instruction Modelling (IM), which trains LMs by applying a loss function to the instruction and prompt part rather than solely to the output part. Through experiments across 21 diverse benchmarks, we show that, in many scenarios, IM can effectively improve the LM performance on both NLP tasks (*e.g.,* MMLU, TruthfulQA, and HumanEval) and open-ended generation benchmarks (*e.g.,* MT-Bench and AlpacaEval). Remarkably, in the most advantageous case, IM boosts model performance on AlpacaEval 1.0 by over 100%. We identify two key factors influencing the effectiveness of IM: (1) The ratio between instruction length and output length in the training data; and (2) The number of training examples. We observe that IM is especially beneficial when trained on datasets with lengthy instructions paired with brief outputs, or under the Superficial Alignment Hypothesis (SAH) where a small number of training examples is used for instruction tuning.
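A minimal sketch of the loss-masking difference the IM abstract above describes, with illustrative token ids.

```python
# Minimal sketch: standard SFT masks instruction tokens out of the loss
# (label -100), whereas Instruction Modelling keeps them in, so the LM
# also models the instruction part. Token ids below are illustrative.
IGNORE = -100

def build_labels(instruction_ids, output_ids, instruction_modelling=True):
    input_ids = instruction_ids + output_ids
    if instruction_modelling:
        labels = list(input_ids)                 # loss over instruction + output
    else:
        labels = [IGNORE] * len(instruction_ids) + list(output_ids)
    return input_ids, labels

ids, labels = build_labels([101, 7592, 2088, 102], [3407, 102])
assert labels[0] != IGNORE   # instruction tokens carry loss under IM
```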
Further analysis substantiates our hypothesis that our improvement can be attributed to reduced overfitting to instruction tuning datasets. It is worth noting that we are not proposing IM as a replacement for the current instruction tuning process.\nInstead, our work aims to provide practical guidance for instruction tuning LMs, especially in low-resource scenarios.\nOur code is available at https://github.com/ZhengxiangShi/InstructionModelling.", "pdf": "https://openreview.net/pdf/2489a69de5d25fdd19c9250d4ef90033722cebc1.pdf"} {"title": "The Map Equation Goes Neural: Mapping Network Flows with Graph Neural Networks", "url": "https://openreview.net/forum?id=aFWx1N84Fe", "detail_url": "https://openreview.net/forum?id=aFWx1N84Fe", "authors": "Christopher Bl\u00f6cker,Chester Tan,Ingo Scholtes", "tags": "NIPS 2024,Poster", "abstract": "Community detection is an essential tool for unsupervised data exploration and revealing the organisational structure of networked systems. With a long history in network science, community detection typically relies on objective functions, optimised with custom-tailored search algorithms, but often without leveraging recent advances in deep learning. Recently, early works have started incorporating such objectives into loss functions for deep graph clustering and pooling. We consider the map equation, a popular information-theoretic objective function for unsupervised community detection, and express it in differentiable tensor form for optimisation through gradient descent. Our formulation makes the map equation compatible with any neural network architecture, enables end-to-end learning, incorporates node features, and chooses the optimal number of clusters automatically, all without requiring explicit regularisation. Applied to unsupervised graph clustering tasks, we achieve competitive performance against state-of-the-art deep graph clustering baselines in synthetic and real-world datasets.", "pdf": "https://openreview.net/pdf/cb0c7dd59b750afd4ddd0d81e8e0c82fb964244e.pdf"} {"title": "Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting", "url": "https://openreview.net/forum?id=EwWpAPzcay", "detail_url": "https://openreview.net/forum?id=EwWpAPzcay", "authors": "Junha Hyung,Susung Hong,Sungwon Hwang,Jaeseong Lee,Jaegul Choo,Jin-Hwa Kim", "tags": "NIPS 2024,Poster", "abstract": "3D reconstruction from multi-view images is one of the fundamental challenges in computer vision and graphics. \nRecently, 3D Gaussian Splatting (3DGS) has emerged as a promising technique capable of real-time rendering with high-quality 3D reconstruction. This method utilizes 3D Gaussian representation and tile-based splatting techniques, bypassing the expensive neural field querying. Despite its potential, 3DGS encounters challenges, including needle-like artifacts, suboptimal geometries, and inaccurate normals, due to the Gaussians converging into anisotropic Gaussians with one dominant variance.\nWe propose using effective rank analysis to examine the shape statistics of 3D Gaussian primitives, and identify that the Gaussians indeed converge into needle-like shapes with effective rank 1. To address this, we introduce effective rank as a regularization, which constrains the structure of the Gaussians. Our new regularization method enhances normal and geometry reconstruction while reducing needle-like artifacts.
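A minimal sketch of the effective-rank statistic for 3D Gaussian primitives described above, computed from per-axis scales; the regularizer built on top of this statistic is the paper's contribution and is not reproduced here.

```python
# Minimal sketch: effective rank of a 3D Gaussian from its scales.
# Needle-like primitives score ~1, disks ~2, isotropic blobs ~3.
import torch

def effective_rank(scales, eps=1e-12):
    """scales: (n_gaussians, 3) per-axis standard deviations."""
    var = scales ** 2
    p = var / var.sum(dim=-1, keepdim=True)   # normalized variance spectrum
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)
    return torch.exp(entropy)                 # in [1, 3] for 3D Gaussians

needle = torch.tensor([[1.0, 1e-3, 1e-3]])
print(effective_rank(needle))                 # ~1.0: a degenerate primitive
```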
The approach can be integrated as an add-on module to other 3DGS variants, improving their quality without compromising visual fidelity.", "pdf": "https://openreview.net/pdf/fc3623f0ad337417db48cc4a09f4de75bc2958b8.pdf"} {"title": "InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint", "url": "https://openreview.net/forum?id=AH1mFs3c7o", "detail_url": "https://openreview.net/forum?id=AH1mFs3c7o", "authors": "Zhenzhi Wang,Jingbo Wang,Yixuan Li,Dahua Lin,Bo Dai", "tags": "NIPS 2024,Poster", "abstract": "Text-conditioned motion synthesis has made remarkable progress with the emergence of diffusion models. However, the majority of these motion diffusion models are primarily designed for a single character and overlook multi-human interactions. In our approach, we strive to explore this problem by synthesizing human motion with interactions for a group of characters of any size in a zero-shot manner. The key aspect of our approach is the adaptation of human-wise interactions as pairs of human joints that can be either in contact or separated by a desired distance. In contrast to existing methods that necessitate training motion generation models on multi-human motion datasets with a fixed number of characters, our approach inherently possesses the flexibility to model human interactions involving an arbitrary number of individuals, thereby transcending the limitations imposed by the training data. We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions to maintain the desired distance between joint pairs. It consists of a motion controller and an inverse kinematics guidance module that realistically and accurately aligns the joints of synthesized characters to the desired location. Furthermore, we demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model (LLM). Experimental results highlight the capability of our framework to generate interactions with multiple human characters and its potential to work with off-the-shelf physics-based character simulators. Code is available at https://github.com/zhenzhiwang/intercontrol.", "pdf": "https://openreview.net/pdf/35d3981666c7f71568912b6aabb2fe5cab510696.pdf"} {"title": "LG-CAV: Train Any Concept Activation Vector with Language Guidance", "url": "https://openreview.net/forum?id=MjD9Y05Q6i", "detail_url": "https://openreview.net/forum?id=MjD9Y05Q6i", "authors": "Qihan Huang,Jie Song,Mengqi Xue,Haofei Zhang,Bingde Hu,Huiqiong Wang,Hao Jiang,Xingen Wang,Mingli Song", "tags": "NIPS 2024,Poster", "abstract": "Concept activation vector (CAV) has attracted broad research interest in explainable AI, by elegantly attributing model predictions to specific concepts. However, the training of a CAV often necessitates a large number of high-quality images, which are expensive to curate and thus limited to a predefined set of concepts. To address this issue, we propose Language-Guided CAV (LG-CAV) to harness the abundant concept knowledge within certain pre-trained vision-language models (e.g., CLIP). This method allows training any CAV without labeled data, by utilizing the corresponding concept descriptions as guidance. To bridge the gap between the vision-language model and the target model, we calculate the activation values of concept descriptions on a common pool of images (probe images) with the vision-language model and utilize them as language guidance to train the LG-CAV.
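A minimal sketch of the language-guided CAV training just described; the regression loss and interfaces are illustrative assumptions.

```python
# Minimal sketch: CLIP similarities of a concept description to probe
# images supervise a linear CAV on the target model's features for the
# same probe images.
import torch

def train_lg_cav(clip_scores, target_feats, steps=500, lr=1e-2):
    """clip_scores: (n_probe,) text-image similarity for the concept.
    target_feats: (n_probe, d) target-model features of the probe images."""
    cav = torch.zeros(target_feats.shape[1], requires_grad=True)
    opt = torch.optim.Adam([cav], lr=lr)
    for _ in range(steps):
        pred = target_feats @ cav            # CAV activation per probe image
        loss = torch.nn.functional.mse_loss(pred, clip_scores)
        opt.zero_grad(); loss.backward(); opt.step()
    return cav.detach()                      # concept direction in target space
```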
Furthermore, after training high-quality LG-CAVs related to all the predicted classes in the target model, we propose activation sample reweighting (ASR), serving as a model correction technique, to improve the performance of the target model in return. Experiments on four datasets across nine architectures demonstrate that LG-CAV achieves significantly superior quality to previous CAV methods given any concept, and our model correction method achieves state-of-the-art performance compared to existing concept-based methods. Our code is available at https://github.com/hqhQAQ/LG-CAV.", "pdf": "https://openreview.net/pdf/0ea67069032fd28eb5f760cc3b5bf00b4d9cbd00.pdf"} {"title": "Toward Dynamic Non-Line-of-Sight Imaging with Mamba Enforced Temporal Consistency", "url": "https://openreview.net/forum?id=QiCJomIW3l", "detail_url": "https://openreview.net/forum?id=QiCJomIW3l", "authors": "Yue Li,Yi Sun,Shida Sun,Juntian Ye,Yueyi Zhang,Feihu Xu,Zhiwei Xiong", "tags": "NIPS 2024,Poster", "abstract": "Dynamic reconstruction in confocal non-line-of-sight imaging encounters great challenges since the dense raster-scanning manner limits the practical frame rate. A few pioneering works reconstruct high-resolution volumes from under-scanning transient measurements but overlook temporal consistency among transient frames. To fully exploit multi-frame information, we propose the first spatial-temporal Mamba (ST-Mamba) based method tailored for dynamic reconstruction of transient videos. Our method capitalizes on neighbouring transient frames to aggregate the target 3D hidden volume. Specifically, the interleaved features extracted from the input transient frames are fed to the proposed ST-Mamba blocks, which leverage the time-resolving causality in transient measurements. The cross ST-Mamba blocks are then devised to integrate the adjacent transient features. The target high-resolution transient frame is subsequently recovered by the transient spreading module. After transient fusion and recovery, a physics-based network is employed to reconstruct the hidden volume. To tackle the substantial noise inherent in transient videos, we propose a wave-based loss function to impose constraints within the phasor field. In addition, we introduce a new dataset, comprising synthetic videos for training and real-world videos for evaluation. Extensive experiments showcase the superior performance of our method on both synthetic data and real-world data captured by different imaging setups. The code and data are available at https://github.com/Depth2World/Dynamic_NLOS.", "pdf": "https://openreview.net/pdf/2aca9e41126c3cecdcdc3ad9a1cc1b193b154e3b.pdf"} {"title": "Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality", "url": "https://openreview.net/forum?id=0AwMciNShl", "detail_url": "https://openreview.net/forum?id=0AwMciNShl", "authors": "Tianle Zhang,Langtian Ma,Yuchen Yan,Yuchen Zhang,Yue Yang,Ziyao Guo,Wenqi Shao,Kai Wang,Yang You,Yu Qiao,Ping Luo,Kaipeng Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent text-to-video (T2V) technology advancements, as demonstrated by models such as Gen2, Pika, and Sora, have significantly broadened its applicability and popularity. \nDespite these strides, evaluating these models poses substantial challenges. \nPrimarily due to the limitations inherent in automatic metrics, manual evaluation is often considered a superior method for assessing T2V generation.
However, existing manual evaluation protocols face reproducibility, reliability, and practicality issues.\nTo address these challenges, this paper introduces the Text-to-Video Human Evaluation (T2VHE) protocol, a comprehensive and standardized protocol for T2V models. \nThe T2VHE protocol includes well-defined metrics, thorough annotator training, and an effective dynamic evaluation module. \nExperimental results demonstrate that this protocol not only ensures high-quality annotations but can also reduce evaluation costs by nearly 50\%.\nWe will open-source the entire setup of the T2VHE protocol, including the complete protocol workflow, the dynamic evaluation component details, and the annotation interface code. This will help communities establish more sophisticated human assessment protocols.", "pdf": "https://openreview.net/pdf/aa115a2ffe88cc5707a0f711b0ee921175fa9141.pdf"} {"title": "Improving Adaptivity via Over-Parameterization in Sequence Models", "url": "https://openreview.net/forum?id=UfLH4T676K", "detail_url": "https://openreview.net/forum?id=UfLH4T676K", "authors": "Yicheng Li,Qian Lin", "tags": "NIPS 2024,Poster", "abstract": "It is well known that eigenfunctions of a kernel play a crucial role in kernel regression.\n Through several examples, we demonstrate that even with the same set of eigenfunctions, the order of these functions significantly impacts regression outcomes.\n Simplifying the model by diagonalizing the kernel, we introduce an over-parameterized gradient descent in the realm of sequence models to capture the effects of various orders of a fixed set of eigenfunctions.\n This method is designed to explore the impact of varying eigenfunction orders.\n Our theoretical results show that the over-parameterized gradient flow can adapt to the underlying structure of the signal and significantly outperform the vanilla gradient flow method.\n Moreover, we also demonstrate that deeper over-parameterization can further enhance the generalization capability of the model.\n These results not only provide a new perspective on the benefits of over-parameterization but also offer insights into the adaptivity and generalization potential of neural networks beyond the kernel regime.", "pdf": "https://openreview.net/pdf/56a0a57950ecc1f98333f4dbe3c966f2ef47b9c4.pdf"} {"title": "An Expectation-Maximization Algorithm for Training Clean Diffusion Models from Corrupted Observations", "url": "https://openreview.net/forum?id=jURBh4V9N4", "detail_url": "https://openreview.net/forum?id=jURBh4V9N4", "authors": "Weimin Bai,Yifei Wang,Wenzheng Chen,He Sun", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models excel in solving imaging inverse problems due to their ability to model complex image priors. However, their reliance on large, clean datasets for training limits their practical use where clean data is scarce. In this paper, we propose EMDiffusion, an expectation-maximization (EM) approach to train diffusion models from corrupted observations. Our method alternates between reconstructing clean images from corrupted data using a known diffusion model (E-step) and refining diffusion model weights based on these reconstructions (M-step). This iterative process leads the learned diffusion model to gradually converge to a local optimum, that is, to approximate the true clean data distribution.
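A minimal sketch of the EM loop described above; the `reconstruct` and `fit` interfaces stand in for diffusion posterior sampling and score-matching training, and are assumptions rather than a specific library's API.

```python
# Minimal sketch of the EM alternation for training from corrupted data.
import numpy as np

def em_train(model, observations, forward_op, n_rounds=10):
    for _ in range(n_rounds):
        # E-step: clean-image reconstructions under the current prior,
        # trading the prior against data consistency ||forward_op(x) - y||
        x_hat = [model.reconstruct(y, forward_op) for y in observations]
        # M-step: refit the diffusion model on the reconstructions
        model.fit(np.stack(x_hat))
    return model
```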
We validate our method through extensive experiments on diverse computational imaging tasks, including random inpainting, denoising, and deblurring, achieving new state-of-the-art performance.", "pdf": "https://openreview.net/pdf/1f1dd65c22e1a178d8655da0fe83c26fdde9a37a.pdf"} {"title": "From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $\alpha$-NeuS", "url": "https://openreview.net/forum?id=Pojt9RWIjJ", "detail_url": "https://openreview.net/forum?id=Pojt9RWIjJ", "authors": "Haoran Zhang,Junkai Deng,Xuhui Chen,Fei Hou,Wencheng Wang,Hong Qin,Chen Qian,Ying He", "tags": "NIPS 2024,Poster", "abstract": "Traditional 3D shape reconstruction techniques from multi-view images, such as structure from motion and multi-view stereo, primarily focus on opaque surfaces. Similarly, recent advances in neural radiance fields and their variants also primarily address opaque objects, encountering difficulties with the complex lighting effects caused by transparent materials. This paper introduces $\alpha$-NeuS, a new method for simultaneously reconstructing thin transparent objects and opaque objects based on neural implicit surfaces (NeuS). Our method leverages the observation that transparent surfaces induce local extreme values in the learned distance fields during neural volumetric rendering, contrasting with opaque surfaces that align with zero level sets. Traditional iso-surfacing algorithms such as marching cubes, which rely on fixed iso-values, are ill-suited for this data. We address this by taking the absolute value of the distance field and developing an optimization method that extracts level sets corresponding to both non-negative local minima and zero iso-values. We prove that the reconstructed surfaces are unbiased for both transparent and opaque objects. To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes, demonstrating its practical utility and effectiveness. Our data and code are publicly available at https://github.com/728388808/alpha-NeuS.", "pdf": "https://openreview.net/pdf/53910e2bce8e33f397be9cc5dc40284d10469de3.pdf"} {"title": "End-to-End Video Semantic Segmentation in Adverse Weather using Fusion Blocks and Temporal-Spatial Teacher-Student Learning", "url": "https://openreview.net/forum?id=paobkszgIA", "detail_url": "https://openreview.net/forum?id=paobkszgIA", "authors": "Xin Yang,YAN WENDING,Michael Bi Mi,Yuan Yuan,Robby T. Tan", "tags": "NIPS 2024,Poster", "abstract": "Adverse weather conditions can significantly degrade the video frames, causing existing video semantic segmentation methods to produce erroneous predictions. In this work, we target adverse weather conditions and introduce an end-to-end domain adaptation strategy that leverages a fusion block, temporal-spatial teacher-student learning, and a temporal weather degradation augmentation approach. The fusion block integrates temporal information from adjacent frames at the feature level, trained end-to-end, eliminating the need for pretrained optical flow, distinguishing our method from existing approaches. Our teacher-student approach involves two teachers: one focuses on exploring temporal information from adjacent frames, and the other harnesses spatial information from the current frame. Finally, we apply temporal weather degradation augmentation to consecutive frames to more accurately represent adverse weather degradations.
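One common way to maintain such teachers (assumed here purely for illustration; the abstract does not state the update rule) is to track the student's weights with an exponential moving average:

```python
import copy
import torch

def ema_update(teacher, student, momentum=0.999):
    """Update a teacher as an exponential moving average of the student."""
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

student = torch.nn.Linear(8, 3)
temporal_teacher = copy.deepcopy(student)  # would consume adjacent frames
spatial_teacher = copy.deepcopy(student)   # would consume the current frame

# ...after each student optimization step:
ema_update(temporal_teacher, student)
ema_update(spatial_teacher, student)
```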
Our method achieves a performance of 25.4 and 33.0 mIoU on the adaptation from VIPER and Synthia to MVSS, respectively, representing an improvement of 4.3 and 5.8 mIoU over the existing state-of-the-art method.", "pdf": "https://openreview.net/pdf/61806e8257932882d9ce48b17c3f6ff437775eb6.pdf"} {"title": "Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization", "url": "https://openreview.net/forum?id=UWUUVKtKeu", "detail_url": "https://openreview.net/forum?id=UWUUVKtKeu", "authors": "Shutong Ding,Ke Hu,Zhenhao Zhang,Kan Ren,Weinan Zhang,Jingyi Yu,Jingya Wang,Ye Shi", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have garnered widespread attention in Reinforcement Learning (RL) for their powerful expressiveness and multimodality. It has been verified that utilizing diffusion policies can significantly improve the performance of RL algorithms in continuous control tasks by overcoming the limitations of unimodal policies, such as Gaussian policies. Furthermore, the multimodality of diffusion policies also shows the potential of providing the agent with enhanced exploration capabilities. However, existing works mainly focus on applying diffusion policies in offline RL, while their incorporation into online RL has been less investigated. The diffusion model's training objective, known as the variational lower bound, cannot be applied directly in online RL due to the unavailability of 'good' samples (actions). To harmonize the diffusion model with online RL, we propose a novel model-free diffusion-based online RL algorithm named Q-weighted Variational Policy Optimization (QVPO). Specifically, we introduce the Q-weighted variational loss and its approximate implementation in practice. Notably, this loss is shown to be a tight lower bound of the policy objective. To further enhance the exploration capability of the diffusion policy, we design a special entropy regularization term. Unlike Gaussian policies, the log-likelihood in diffusion policies is inaccessible; thus this entropy term is nontrivial. Moreover, to reduce the large variance of diffusion policies, we also develop an efficient behavior policy through action selection. This can further improve its sample efficiency during online interaction. Consequently, the QVPO algorithm leverages the exploration capabilities and multimodality of diffusion policies, preventing the RL agent from converging to a sub-optimal policy. To verify the effectiveness of QVPO, we conduct comprehensive experiments on MuJoCo continuous control benchmarks. The final results demonstrate that QVPO achieves state-of-the-art performance in terms of both cumulative reward and sample efficiency.", "pdf": "https://openreview.net/pdf/c1357b6c6bb1f29929d6bb3a243ec5be15a84408.pdf"} {"title": "DG-SLAM: Robust Dynamic Gaussian Splatting SLAM with Hybrid Pose Optimization", "url": "https://openreview.net/forum?id=tGozvLTDY3", "detail_url": "https://openreview.net/forum?id=tGozvLTDY3", "authors": "Yueming Xu,Haochen Jiang,Zhongyang Xiao,Jianfeng Feng,Li Zhang", "tags": "NIPS 2024,Poster", "abstract": "Achieving robust and precise pose estimation in dynamic scenes is a significant research challenge in Visual Simultaneous Localization and Mapping (SLAM). Recent advancements integrating Gaussian Splatting into SLAM systems have proven effective in creating high-quality renderings using explicit 3D Gaussian models, significantly improving environmental reconstruction fidelity. 
However, these approaches depend on a static environment assumption and face challenges in dynamic environments due to inconsistent observations of geometry and photometry. To address this problem, we propose DG-SLAM, the first robust dynamic visual SLAM system grounded in 3D Gaussians, which provides precise camera pose estimation alongside high-fidelity reconstructions. Specifically, we propose effective strategies, including motion mask generation, adaptive Gaussian point management, and a hybrid camera tracking algorithm to improve the accuracy and robustness of pose estimation. Extensive experiments demonstrate that DG-SLAM delivers state-of-the-art performance in camera pose estimation, map reconstruction, and novel-view synthesis in dynamic scenes, outperforming existing methods while preserving real-time rendering ability.", "pdf": "https://openreview.net/pdf/003a4a26d3ccd1f63f63eb5a3f55b8f83ce8f36b.pdf"} {"title": "Unveiling the Potential of Robustness in Selecting Conditional Average Treatment Effect Estimators", "url": "https://openreview.net/forum?id=k4EP46Q9X2", "detail_url": "https://openreview.net/forum?id=k4EP46Q9X2", "authors": "Yiyan HUANG,Cheuk Hang LEUNG,WANG Siyi,YIJUN LI,Qi WU", "tags": "NIPS 2024,Poster", "abstract": "The growing demand for personalized decision-making has led to a surge of interest in estimating the Conditional Average Treatment Effect (CATE). Various types of CATE estimators have been developed with advancements in machine learning and causal inference. However, selecting the desirable CATE estimator through a conventional model validation procedure remains impractical due to the absence of counterfactual outcomes in observational data. Existing approaches for CATE estimator selection, such as plug-in and pseudo-outcome metrics, face two challenges. First, they must determine the metric form and the underlying machine learning models for fitting nuisance parameters (e.g., outcome function, propensity function, and plug-in learner). Second, they lack a specific focus on selecting a robust CATE estimator. To address these challenges, this paper introduces a Distributionally Robust Metric (DRM) for CATE estimator selection. The proposed DRM is nuisance-free, eliminating the need to fit models for nuisance parameters, and it effectively prioritizes the selection of a distributionally robust CATE estimator. The experimental results validate the effectiveness of the DRM method in selecting CATE estimators that are robust to the distribution shift incurred by covariate shift and hidden confounders.", "pdf": "https://openreview.net/pdf/1e73b68e1034ef4c885567c4e519ec3a06b8e2ee.pdf"} {"title": "SureMap: Simultaneous mean estimation for single-task and multi-task disaggregated evaluation", "url": "https://openreview.net/forum?id=aTNT3FuVBG", "detail_url": "https://openreview.net/forum?id=aTNT3FuVBG", "authors": "Mikhail Khodak,Lester Mackey,Alexandra Chouldechova,Miroslav Dud\u00edk", "tags": "NIPS 2024,Poster", "abstract": "Disaggregated evaluation—estimation of performance of a machine learning model on different subpopulations—is a core task when assessing performance and group-fairness of AI systems.\nA key challenge is that evaluation data is scarce, and subpopulations arising from intersections of attributes (e.g., race, sex, age) are often tiny.\nToday, it is common for multiple clients to procure the same AI model from a model developer, and the task of disaggregated evaluation is faced by each customer individually.
This gives rise to what we call the *multi-task disaggregated evaluation problem*, wherein multiple clients seek to conduct a disaggregated evaluation of a given model in their own data setting (task). In this work we develop a disaggregated evaluation method called **SureMap** that has high estimation accuracy for both multi-task *and* single-task disaggregated evaluations of blackbox models. SureMap's efficiency gains come from\n(1) transforming the problem into structured simultaneous Gaussian mean estimation and (2) incorporating external data, e.g., from the AI system creator or from their other clients. Our method combines *maximum a posteriori* (MAP) estimation using a well-chosen prior together with cross-validation-free tuning via Stein's unbiased risk estimate (SURE).\nWe evaluate SureMap on disaggregated evaluation tasks in multiple domains, observing significant accuracy improvements over several strong competitors.", "pdf": "https://openreview.net/pdf/f1dbeb283b43d7e10e9e6d57c9cc1f823f890e71.pdf"} {"title": "UMB: Understanding Model Behavior for Open-World Object Detection", "url": "https://openreview.net/forum?id=9Pa6cCB3gL", "detail_url": "https://openreview.net/forum?id=9Pa6cCB3gL", "authors": "Xing Xi,Yangyang Huang,Zhijie Zhong,Ronghua Luo", "tags": "NIPS 2024,Poster", "abstract": "Open-World Object Detection (OWOD) is a challenging task that requires the detector to identify unlabeled objects and continuously demands the detector to learn new knowledge based on existing ones. Existing methods primarily focus on recalling unknown objects, neglecting to explore the reasons behind them. This paper aims to understand the model's behavior in predicting the unknown category. First, we model the text attribute and the positive sample probability, obtaining their empirical probability, which can be seen as the detector's estimation of the likelihood of the target with certain known attributes being predicted as the foreground. Then, we jointly decide whether the current object should be categorized in the unknown category based on the empirical, the in-distribution, and the out-of-distribution probability. Finally, based on the decision-making process, we can infer the similarity of an unknown object to known classes and identify the attribute with the most significant impact on the decision-making process. This additional information can help us understand the behavior of the model's prediction in the unknown class. The evaluation results on the Real-World Object Detection (RWD) benchmark, which consists of five real-world application datasets, show that we surpassed the previous state-of-the-art (SOTA) with an absolute gain of 5.3 mAP for unknown classes, reaching 20.5 mAP. 
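Schematically, the joint decision over the empirical, in-distribution, and out-of-distribution probabilities might resemble the toy rule below; the combination logic and thresholds are invented for illustration and are not the paper's exact procedure:

```python
def classify_detection(p_empirical, p_in_dist, p_out_dist, tau=0.5):
    """Toy rule: route a detection to the unknown category when known-class
    (in-distribution) evidence is weak but empirical foreground evidence
    outweighs the out-of-distribution evidence. Thresholds are illustrative."""
    if p_in_dist >= tau:
        return "known"
    if p_empirical > p_out_dist:
        return "unknown"
    return "background"

print(classify_detection(p_empirical=0.7, p_in_dist=0.2, p_out_dist=0.4))  # unknown
```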
Our code is available at https://github.com/xxyzll/UMB.", "pdf": "https://openreview.net/pdf/be9c8f516c205741c8fa3ab79f27118e93b9c5e0.pdf"} {"title": "Learning Cooperative Trajectory Representations for Motion Forecasting", "url": "https://openreview.net/forum?id=mcY221BgKi", "detail_url": "https://openreview.net/forum?id=mcY221BgKi", "authors": "Hongzhi Ruan,Haibao Yu,Wenxian Yang,Siqi Fan,Zaiqing Nie", "tags": "NIPS 2024,Poster", "abstract": "Motion forecasting is an essential task for autonomous driving, and utilizing information from infrastructure and other vehicles can enhance forecasting capabilities.\nExisting research mainly focuses on leveraging single-frame cooperative information to enhance the limited perception capability of the ego vehicle, while underutilizing the motion and interaction context of traffic participants observed from cooperative devices. \nIn this paper, we propose a forecasting-oriented representation paradigm to utilize motion and interaction features from cooperative information. \nSpecifically, we present V2X-Graph, a representative framework to achieve interpretable and end-to-end trajectory feature fusion for cooperative motion forecasting. \nV2X-Graph is evaluated on V2X-Seq in vehicle-to-infrastructure (V2I) scenarios.\nTo further evaluate in vehicle-to-everything (V2X) scenarios, we construct the first real-world V2X motion forecasting dataset V2X-Traj, which contains multiple autonomous vehicles and infrastructure in every scenario.\nExperimental results on both V2X-Seq and V2X-Traj show the advantage of our method. \nWe hope both V2X-Graph and V2X-Traj will benefit the further development of cooperative motion forecasting.\nFind the project at https://github.com/AIR-THU/V2X-Graph.", "pdf": "https://openreview.net/pdf/262f0f4a1a5579270c8d2e1921cd2c8b943c1a59.pdf"} {"title": "What do Graph Neural Networks learn? Insights from Tropical Geometry", "url": "https://openreview.net/forum?id=Oy2x0Xfx0u", "detail_url": "https://openreview.net/forum?id=Oy2x0Xfx0u", "authors": "Tuan Anh Pham,Vikas Garg", "tags": "NIPS 2024,Poster", "abstract": "Graph neural networks (GNNs) have been analyzed from multiple perspectives, including the WL-hierarchy, which exposes limits on their expressivity to distinguish graphs. However, characterizing the class of functions that they learn has remained unresolved. We address this fundamental question for message passing GNNs under ReLU activations, i.e., the de-facto choice for most GNNs.\n\nWe first show that such GNNs learn tropical rational signomial maps or continuous piecewise linear functions, establishing an equivalence with feedforward networks (FNNs). We then elucidate the role of the choice of aggregation and update functions, and derive the first general upper and lower bounds on the geometric complexity (i.e., the number of linear regions), establishing new results for popular architectures such as GraphSAGE and GIN. We also introduce and theoretically analyze several new architectures to illuminate the relative merits of the feedforward and the message passing layers, and the tradeoffs involving depth and number of trainable parameters.
Finally, we also characterize the decision boundary for node and graph classification tasks.", "pdf": "https://openreview.net/pdf/e8fade086b7229866b1230b08a944a357e941e2f.pdf"} {"title": "Unsupervised Object Detection with Theoretical Guarantees", "url": "https://openreview.net/forum?id=x33oWJQyH0", "detail_url": "https://openreview.net/forum?id=x33oWJQyH0", "authors": "Marian Longa,Joao F. Henriques", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised object detection using deep neural networks is typically a difficult problem with few to no guarantees about the learned representation. In this work we present the first unsupervised object detection method that is theoretically guaranteed to recover the true object positions up to quantifiable small shifts. We develop an unsupervised object detection architecture and prove that the learned variables correspond to the true object positions up to small shifts related to the encoder and decoder receptive field sizes, the object sizes, and the widths of the Gaussians used in the rendering process. We perform detailed analysis of how the error depends on each of these variables and perform synthetic experiments validating our theoretical predictions up to a precision of individual pixels. We also perform experiments on CLEVR-based data and show that, unlike current SOTA object detection methods (SAM, CutLER), our method's prediction errors always lie within our theoretical bounds. We hope that this work helps open up an avenue of research into object detection methods with theoretical guarantees.", "pdf": "https://openreview.net/pdf/7878a3bbb19a093b6e8f4e67ce9a0e0e1dfa65b6.pdf"} {"title": "Boosting Graph Pooling with Persistent Homology", "url": "https://openreview.net/forum?id=WcmqdY2AKu", "detail_url": "https://openreview.net/forum?id=WcmqdY2AKu", "authors": "Chaolong Ying,Xinjian Zhao,Tianshu Yu", "tags": "NIPS 2024,Poster", "abstract": "Recently, there has been an emerging trend to integrate persistent homology (PH) into graph neural networks (GNNs) to enrich expressive power. However, naively plugging PH features into GNN layers always results in marginal improvement with low interpretability. In this paper, we investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that filtration operation in PH naturally aligns graph pooling in a cut-off manner. In this fashion, message passing in the coarsened graph acts along persistent pooled topology, leading to improved performance. Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gain over several popular datasets, demonstrating its wide applicability and flexibility.", "pdf": "https://openreview.net/pdf/41d00587b119c87d77aa0338e15fd41c4ee2e90d.pdf"} {"title": "MSPE: Multi-Scale Patch Embedding Prompts Vision Transformers to Any Resolution", "url": "https://openreview.net/forum?id=9Q9UiAyV40", "detail_url": "https://openreview.net/forum?id=9Q9UiAyV40", "authors": "Wenzhuo Liu,Fei Zhu,Shijie Ma,Cheng-Lin Liu", "tags": "NIPS 2024,Poster", "abstract": "Although Vision Transformers (ViTs) have recently advanced computer vision tasks significantly, an important real-world problem was overlooked: adapting to variable input resolutions. Typically, images are resized to a fixed resolution, such as 224x224, for efficiency during training and inference. 
However, uniform input size conflicts with real-world scenarios where images naturally vary in resolution. Modifying the preset resolution of a model may severely degrade the performance. In this work, we propose to enhance the model adaptability to resolution variation by optimizing the patch embedding. The proposed method, called Multi-Scale Patch Embedding (MSPE), substitutes the standard patch embedding with multiple variable-sized patch kernels and selects the best parameters for different resolutions, eliminating the need to resize the original image. Our method does not require high-cost training or modifications to other parts, making it easy to apply to most ViT models. Experiments in image classification, segmentation, and detection tasks demonstrate the effectiveness of MSPE, yielding superior performance on low-resolution inputs and performing comparably on high-resolution inputs with existing methods.", "pdf": "https://openreview.net/pdf/685ed99b391a25770fac39a9bdbde62c1a7808ac.pdf"} {"title": "AlterMOMA: Fusion Redundancy Pruning for Camera-LiDAR Fusion Models with Alternative Modality Masking", "url": "https://openreview.net/forum?id=ujwIlTNrAP", "detail_url": "https://openreview.net/forum?id=ujwIlTNrAP", "authors": "shiqi sun,Yantao Lu,Ning Liu,Bo Jiang,Jinchao Chen,Ying Zhang", "tags": "NIPS 2024,Poster", "abstract": "Camera-LiDAR fusion models significantly enhance perception performance in autonomous driving. The fusion mechanism leverages the strengths of each modality while minimizing their weaknesses. Moreover, in practice, camera-LiDAR fusion models utilize pre-trained backbones for efficient training. However, we argue that directly loading single-modal pre-trained camera and LiDAR backbones into camera-LiDAR fusion models introduces similar feature redundancy across modalities due to the nature of the fusion mechanism. Unfortunately, existing pruning methods are developed explicitly for single-modal models, and thus, they struggle to effectively identify these specific redundant parameters in camera-LiDAR fusion models. In this paper, to address the issue above on camera-LiDAR fusion models, we propose a novel pruning framework, Alternative Modality Masking Pruning (AlterMOMA), which employs alternative masking on each modality and identifies the redundant parameters. Specifically, when one modality's parameters are masked (deactivated), the absence of features from the masked backbone compels the model to reactivate the previously redundant features of the other modality backbone. Therefore, these redundant features and relevant redundant parameters can be identified via the reactivation process. The redundant parameters can be pruned by our proposed importance score evaluation function, Alternative Evaluation (AlterEva), which is based on the observation of the loss changes when certain modality parameters are activated and deactivated. Extensive experiments on the nuScenes and KITTI datasets encompassing diverse tasks, baseline models, and pruning algorithms showcase that AlterMOMA outperforms existing pruning methods, attaining state-of-the-art performance.", "pdf": "https://openreview.net/pdf/96b5c41a32824f29863a3aebe65b09d5c34a30f7.pdf"} {"title": "Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits", "url": "https://openreview.net/forum?id=Q5e3ftQ3q3", "detail_url": "https://openreview.net/forum?id=Q5e3ftQ3q3", "authors": "Yunlong Hou,Vincent Y. F.
Tan,Zixin Zhong", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel piecewise stationary linear bandit (PSLB) model, where the environment randomly samples a context from an unknown probability distribution at each changepoint, and the quality of an arm is measured by its return averaged over all contexts. The contexts and their distribution, as well as the changepoints are unknown to the agent.\nWe design Piecewise-Stationary $\varepsilon$-Best Arm Identification$^+$ (PS$\varepsilon$BAI$^+$), an algorithm that is guaranteed to identify an $\varepsilon$-optimal arm with probability $\ge 1-\delta$ and with a minimal number of samples.\nPS$\varepsilon$BAI$^+$ consists of two subroutines, PS$\varepsilon$BAI and Na\u00efve $\varepsilon$-BAI (N$\varepsilon$BAI), which are executed in parallel. PS$\varepsilon$BAI actively detects changepoints and aligns contexts to facilitate the arm identification process.\nWhen PS$\varepsilon$BAI and N$\varepsilon$BAI are utilized judiciously in parallel, PS$\varepsilon$BAI$^+$ is shown to have a finite expected sample complexity. \nBy proving a lower bound, we show the expected sample complexity of PS$\varepsilon$BAI$^+$ is optimal up to a logarithmic factor.\nWe compare PS$\varepsilon$BAI$^+$ to baseline algorithms using numerical experiments which demonstrate its efficiency.\nBoth our analytical and numerical results corroborate that the efficacy of PS$\varepsilon$BAI$^+$ is due to the delicate change detection and context alignment procedures embedded in PS$\varepsilon$BAI.", "pdf": "https://openreview.net/pdf/921af89c2e2daa295a74d33344c1e3147cad90e3.pdf"} {"title": "Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation", "url": "https://openreview.net/forum?id=5BwWgyvgwR", "detail_url": "https://openreview.net/forum?id=5BwWgyvgwR", "authors": "Ruihao Xia,Yu Liang,Peng-Tao Jiang,Hao Zhang,Bo Li,Yang Tang,Pan Zhou", "tags": "NIPS 2024,Poster", "abstract": "Despite their success, unsupervised domain adaptation methods for semantic segmentation primarily focus on adaptation between image domains and do not utilize other abundant visual modalities like depth, infrared and event. This limitation hinders their performance and restricts their application in real-world multimodal scenarios. To address this issue, we propose Modality Adaptation with text-to-image Diffusion Models (MADM) for the semantic segmentation task, which utilizes text-to-image diffusion models pre-trained on extensive image-text pairs to enhance the model's cross-modality capabilities. Specifically, MADM comprises two key complementary components to tackle major challenges. First, due to the large modality gap, using data from one modality to generate pseudo-labels for another suffers from a significant drop in accuracy. To address this, MADM designs diffusion-based pseudo-label generation which adds latent noise to stabilize pseudo-labels and enhance label accuracy. Second, to overcome the limitations of latent low-resolution features in diffusion models, MADM introduces the label palette and latent regression, which converts one-hot encoded labels into RGB form via a palette and regresses them in the latent space, thus enabling the pre-trained decoder to recover fine-grained features during up-sampling. Extensive experimental results demonstrate that MADM achieves state-of-the-art adaptation performance across various modality tasks, including images to depth, infrared, and event modalities.
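The label-palette idea, rendering integer class labels as RGB so that a pre-trained image decoder can regress them, can be sketched as follows; the palette colors are arbitrary placeholders:

```python
import numpy as np

# Hypothetical 3-class palette mapping class indices to RGB colors.
PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)

def labels_to_rgb(label_map):
    """Convert an (H, W) integer label map to an (H, W, 3) RGB image."""
    return PALETTE[label_map]

def rgb_to_labels(rgb):
    """Recover labels by nearest palette color (e.g., after regression)."""
    dists = ((rgb[..., None, :].astype(int) - PALETTE[None, None]) ** 2).sum(-1)
    return dists.argmin(-1)

labels = np.random.randint(0, 3, size=(4, 4))
assert (rgb_to_labels(labels_to_rgb(labels)) == labels).all()
```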
We open-source our code and models at https://github.com/XiaRho/MADM.", "pdf": "https://openreview.net/pdf/90678708628f9ad04321a23499e73041a00e6297.pdf"} {"title": "Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers", "url": "https://openreview.net/forum?id=u6FuiKzT1K", "detail_url": "https://openreview.net/forum?id=u6FuiKzT1K", "authors": "Jinsong Chen,Hanpeng Liu,John E. Hopcroft,Kun He", "tags": "NIPS 2024,Poster", "abstract": "While tokenized graph Transformers have demonstrated strong performance in node classification tasks, their reliance on a limited subset of nodes with high similarity scores for constructing token sequences overlooks valuable information from other nodes, hindering their ability to fully harness graph information for learning optimal node representations. To address this limitation, we propose a novel graph Transformer called GCFormer. Unlike previous approaches, GCFormer develops a hybrid token generator to create two types of token sequences, positive and negative, to capture diverse graph information. A tailored Transformer-based backbone is then adopted to learn meaningful node representations from these generated token sequences. Additionally, GCFormer introduces contrastive learning to extract valuable information from both positive and negative token sequences, enhancing the quality of learned node representations. Extensive experimental results across various datasets, including homophily and heterophily graphs, demonstrate the superiority of GCFormer in node classification, when compared to representative graph neural networks (GNNs) and graph Transformers.", "pdf": "https://openreview.net/pdf/9f0c38f076c089cd29634dab2e092a81cad9505d.pdf"} {"title": "Taming Generative Diffusion Prior for Universal Blind Image Restoration", "url": "https://openreview.net/forum?id=NbFOrcwqbR", "detail_url": "https://openreview.net/forum?id=NbFOrcwqbR", "authors": "Siwei Tu,Weidong Yang,Ben Fei", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have been widely utilized for image restoration. However, previous blind image restoration methods still need to assume the type of degradation model while leaving the parameters to be optimized, limiting their real-world applications. Therefore, we aim to tame a generative diffusion prior for universal blind image restoration, dubbed BIR-D, which utilizes an optimizable convolutional kernel to simulate the degradation model and dynamically update the parameters of the kernel in the diffusion steps, enabling it to achieve blind image restoration results even in various complex situations. Besides, based on mathematical reasoning, we have provided an empirical formula for the choice of the adaptive guidance scale, eliminating the need for a grid search for the optimal parameter. Experimentally, our BIR-D has demonstrated superior practicality and versatility compared to off-the-shelf unsupervised methods across various tasks both on real-world and synthetic datasets, qualitatively and quantitatively. BIR-D is able to fulfill multi-guidance blind image restoration. Moreover, BIR-D can also restore images that undergo multiple and complicated degradations, demonstrating its practical applicability.
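A minimal sketch of jointly optimizing a convolutional degradation kernel against an observation, in the spirit of the per-step kernel updates described above; the diffusion machinery is elided and all tensors are toy stand-ins:

```python
import torch
import torch.nn.functional as F

y = torch.randn(1, 1, 32, 32)  # degraded observation (toy)
x_hat = torch.randn(1, 1, 32, 32, requires_grad=True)  # restored estimate
kernel = torch.full((1, 1, 5, 5), 1 / 25.0, requires_grad=True)  # degradation kernel

opt = torch.optim.Adam([x_hat, kernel], lr=1e-2)
for step in range(200):
    y_sim = F.conv2d(x_hat, kernel, padding=2)  # simulate the degradation
    loss = ((y_sim - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```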
The code is available at https://github.com/Tusiwei/BIR-D.", "pdf": "https://openreview.net/pdf/c92a718379b301ab0dc2a04c65e3cd3f56316f7d.pdf"} {"title": "SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models", "url": "https://openreview.net/forum?id=t7wvJstsiV", "detail_url": "https://openreview.net/forum?id=t7wvJstsiV", "authors": "Jianyi Zhang,Da-Cheng Juan,Cyrus Rashtchian,Chun-Sung Ferng,Heinrich Jiang,Yiran Chen", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have demonstrated remarkable capabilities, but their outputs can sometimes be unreliable or factually incorrect. To address this, we introduce Self Logits Evolution Decoding (SLED), a novel decoding framework that enhances the truthfulness of LLMs without relying on external knowledge bases or requiring further fine-tuning. From an optimization perspective, our SLED framework leverages the latent knowledge embedded within the LLM by contrasting the output logits from the final layer with those from early layers. It then utilizes an approximate gradient approach to enable latent knowledge to guide the self-refinement of outputs, thereby effectively improving factual accuracy. Extensive experiments have been conducted on established benchmarks across a diverse range of model families (LLaMA 2, LLaMA 3, Gemma) and scales (from 2B to 70B), including more advanced architectural configurations such as the mixture of experts (MoE). Our evaluation spans a wide variety of tasks, including multi-choice, open-generation, and adaptations to chain-of-thought reasoning tasks. The results demonstrate that SLED consistently improves factual accuracy by up to 20\% compared to existing decoding methods while maintaining natural language fluency and negligible latency overhead. Furthermore, it can be flexibly combined with other decoding methods to further enhance their performance.", "pdf": "https://openreview.net/pdf/f90243bc5b83369f8668b4ec7f3bcdfc2a7f7f9c.pdf"} {"title": "Locating What You Need: Towards Adapting Diffusion Models to OOD Concepts In-the-Wild", "url": "https://openreview.net/forum?id=65htepluYE", "detail_url": "https://openreview.net/forum?id=65htepluYE", "authors": "Jianan Yang,Chenchao Gao,Zhiqing Xiao,Junbo Zhao,Sai Wu,Gang Chen,Haobo Wang", "tags": "NIPS 2024,Poster", "abstract": "The recent large-scale text-to-image generative models have attained unprecedented performance, and *adaptor* modules like LoRA and DreamBooth have been established to extend this performance to unseen concept tokens. However, we empirically find that this workflow often fails to accurately depict *out-of-distribution* concepts. This failure is highly related to the low quality of training data. To resolve this, we present a framework called Controllable Adaptor Towards Out-of-Distribution Concepts (CATOD). Our framework follows the active learning paradigm which includes high-quality data accumulation and adaptor training, enabling a finer-grained enhancement of generative results. The *aesthetics* score and *concept-matching* score are two major factors that impact the quality of synthetic results. One key component of CATOD is the weighted scoring system that automatically balances these two scores, and we offer a comprehensive theoretical analysis of this point. Then, it determines how to select data and schedule the adaptor training based on this scoring system.
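A toy version of such a weighted scoring rule for data selection, with a fixed balancing weight `alpha` standing in for CATOD's automatic balancing:

```python
def select_training_data(candidates, alpha=0.5, k=8):
    """Rank candidate synthetic samples by a weighted sum of an aesthetics
    score and a concept-matching score, keeping the top-k."""
    scored = sorted(
        candidates,
        key=lambda c: alpha * c["aesthetics"] + (1 - alpha) * c["concept_match"],
        reverse=True,
    )
    return scored[:k]

pool = [{"aesthetics": 0.9, "concept_match": 0.3},
        {"aesthetics": 0.4, "concept_match": 0.8}]
print(select_training_data(pool, alpha=0.3, k=1))
```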
The extensive results show that CATOD significantly outperforms the prior approaches with an 11.10 boost on the CLIP score and a 33.08% decrease on the CMMD metric.", "pdf": "https://openreview.net/pdf/0fc543fd65a2897c2f5093d8d26f5cfa1377d114.pdf"} {"title": "AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial Attacks", "url": "https://openreview.net/forum?id=s8Pxz7cvHT", "detail_url": "https://openreview.net/forum?id=s8Pxz7cvHT", "authors": "Jin Li,Ziqiang He,Anwei Luo,Jian-Fang Hu,Z. Jane Wang,Xiangui Kang", "tags": "NIPS 2024,Poster", "abstract": "Imperceptible adversarial attacks aim to fool DNNs by adding imperceptible perturbation to the input data. Previous methods typically improve the imperceptibility of attacks by integrating common attack paradigms with specifically designed perception-based losses or the capabilities of generative models. In this paper, we propose Adversarial Attacks in Diffusion (AdvAD), a novel modeling framework distinct from existing attack paradigms. AdvAD innovatively conceptualizes attacking as a non-parametric diffusion process by theoretically exploring the basic modeling approach rather than using the denoising or generation abilities of regular diffusion models requiring neural networks. At each step, much subtler yet effective adversarial guidance is crafted using only the attacked model without any additional network, which gradually steers the endpoint of the diffusion process from the original image to a desired imperceptible adversarial example. Grounded in a solid theoretical foundation of the proposed non-parametric diffusion process, AdvAD achieves high attack efficacy and imperceptibility with intrinsically lower overall perturbation strength. Additionally, an enhanced version, AdvAD-X, is proposed to evaluate the extreme performance of our novel framework under an ideal scenario. Extensive experiments demonstrate the effectiveness of the proposed AdvAD and AdvAD-X. Compared with state-of-the-art imperceptible attacks, AdvAD achieves an average of 99.9% (+17.3%) ASR with 1.34 (-0.97) $l_2$ distance, 49.74 (+4.76) PSNR and 0.9971 (+0.0043) SSIM against four prevalent DNNs with three different architectures on the ImageNet-compatible dataset. Code is available at https://github.com/XianguiKang/AdvAD.", "pdf": "https://openreview.net/pdf/edd586d663019327dfd268abca5445420d0591fa.pdf"} {"title": "Cross-Scale Self-Supervised Blind Image Deblurring via Implicit Neural Representation", "url": "https://openreview.net/forum?id=CFez7MFUFd", "detail_url": "https://openreview.net/forum?id=CFez7MFUFd", "authors": "Tianjing Zhang,Yuhui Quan,Hui Ji", "tags": "NIPS 2024,Poster", "abstract": "Blind image deblurring (BID) is an important yet challenging image recovery problem. Most existing deep learning methods require supervised training with ground truth (GT) images. This paper introduces a self-supervised method for BID that does not require GT images. The key challenge is to regularize the training to prevent over-fitting due to the absence of GT images. By leveraging an exact relationship among the blurred image, latent image, and blur kernel across consecutive scales, we propose an effective cross-scale consistency loss. This is implemented by representing the image and kernel with implicit neural representations (INRs), whose resolution-free property enables consistent yet efficient computation for network training across multiple scales.
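A toy rendering of a cross-scale consistency objective, where `img_fn(s)` and `ker_fn(s)` stand in for resolution-free INR queries at scale `s`; the exact cross-scale relationship exploited in the paper is simplified here:

```python
import torch
import torch.nn.functional as F

def blur(x, k):
    return F.conv2d(x, k, padding=k.shape[-1] // 2)

def cross_scale_loss(img_fn, ker_fn, y_full):
    """Require the predicted latent image and kernel, queried at full and half
    scale, to reproduce the blurred observation at the matching scale."""
    y_half = F.interpolate(y_full, scale_factor=0.5, mode="bilinear")
    loss_full = ((blur(img_fn(1.0), ker_fn(1.0)) - y_full) ** 2).mean()
    loss_half = ((blur(img_fn(0.5), ker_fn(0.5)) - y_half) ** 2).mean()
    return loss_full + loss_half

img = torch.randn(1, 1, 32, 32, requires_grad=True)
ker = torch.softmax(torch.randn(25), 0).reshape(1, 1, 5, 5)
img_fn = lambda s: img if s == 1.0 else F.interpolate(img, scale_factor=s)
ker_fn = lambda s: ker  # kernel kept scale-invariant in this toy
print(cross_scale_loss(img_fn, ker_fn, blur(img.detach(), ker)))
```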
Combined with a progressive coarse-to-fine training scheme, the proposed method significantly outperforms existing self-supervised methods in extensive experiments.", "pdf": "https://openreview.net/pdf/e68b99104a8f0df870e6239dae76778883c81fdf.pdf"} {"title": "Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model", "url": "https://openreview.net/forum?id=2YPdpWzEsF", "detail_url": "https://openreview.net/forum?id=2YPdpWzEsF", "authors": "Haogeng Liu,Quanzeng You,Xiaotian Han,Yongfei Liu,Huaibo Huang,Ran He,Hongxia Yang", "tags": "NIPS 2024,Poster", "abstract": "In the realm of Multimodal Large Language Models (MLLMs), the vision-language connector plays a crucial role in linking the pre-trained vision encoders with Large Language Models (LLMs). Despite its importance, the vision-language connector has been relatively less explored. In this study, we aim to propose a strong vision-language connector that enables MLLMs to simultaneously achieve high accuracy and low computation cost. We first reveal the existence of the visual anchors in Vision Transformer and propose a cost-effective search algorithm to progressively extract them. Building on these findings, we introduce the Anchor Former (AcFormer), a novel vision-language connector designed to leverage the rich prior knowledge obtained from these visual anchors during pretraining, guiding the aggregation of information. \nThrough extensive experimentation, we demonstrate that the proposed method significantly reduces computational costs by nearly two-thirds, while simultaneously outperforming baseline methods. This highlights the effectiveness and efficiency of AcFormer.", "pdf": "https://openreview.net/pdf/4041aad3b42874372a267d6990a28307a3c622bb.pdf"} {"title": "Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning", "url": "https://openreview.net/forum?id=0bINeW40u4", "detail_url": "https://openreview.net/forum?id=0bINeW40u4", "authors": "Chong Ma,Hanqi Jiang,Wenting Chen,Yiwei Li,Zihao Wu,Xiaowei Yu,Zhengliang Liu,Lei Guo,Dajiang Zhu,Tuo Zhang,Dinggang Shen,Tianming Liu,Xiang Li", "tags": "NIPS 2024,Poster", "abstract": "In the medical multi-modal frameworks, the alignment of cross-modality features presents a significant challenge. However, existing works have learned features that are implicitly aligned from the data, without considering the explicit relationships in the medical context. This data reliance may lead to low generalization of the learned alignment relationships. In this work, we propose the Eye-gaze Guided Multi-modal Alignment (EGMA) framework to harness eye-gaze data for better alignment of medical visual and textual features. We explore the natural auxiliary role of radiologists' eye-gaze data in aligning medical images and text, and introduce a novel approach by using eye-gaze data, collected synchronously by radiologists during diagnostic evaluations. We conduct downstream tasks of image classification and image-text retrieval on four medical datasets, where EGMA achieved state-of-the-art performance and stronger generalization across different datasets.
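One plausible, purely illustrative way to fold gaze into image-text alignment is to weight patch-text similarities by a normalized gaze heatmap; this is a guess at the mechanism, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def gaze_weighted_similarity(patch_feats, text_feat, gaze_heatmap):
    """Image-text score with patch contributions weighted by radiologist gaze
    density. Shapes: patch_feats (N, D), text_feat (D,), gaze_heatmap (N,)."""
    w = gaze_heatmap / gaze_heatmap.sum()
    sims = F.cosine_similarity(patch_feats, text_feat[None], dim=-1)
    return (w * sims).sum()

score = gaze_weighted_similarity(torch.randn(49, 256), torch.randn(256), torch.rand(49))
print(score)
```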
Additionally, we explore the impact of varying amounts of eye-gaze data on model performance, highlighting the feasibility and utility of integrating this auxiliary data into the multi-modal alignment framework.", "pdf": "https://openreview.net/pdf/a3b4e9cce2279a54117f44aa692feec697ecefb5.pdf"} {"title": "E-Motion: Future Motion Simulation via Event Sequence Diffusion", "url": "https://openreview.net/forum?id=pWowK7jqok", "detail_url": "https://openreview.net/forum?id=pWowK7jqok", "authors": "Song Wu,Zhiyu Zhu,Junhui Hou,Guangming Shi,Jinjian Wu", "tags": "NIPS 2024,Poster", "abstract": "Forecasting a typical object's future motion is a critical task for interpreting and interacting with dynamic environments in computer vision. Event-based sensors, which could capture changes in the scene with exceptional temporal granularity, may potentially offer a unique opportunity to predict future motion with a level of detail and precision previously unachievable. Inspired by this, we propose to integrate the strong learning capacity of the video diffusion model with the rich motion information of an event camera as a motion simulation framework. Specifically, we initially employ pre-trained stable video diffusion models to adapt the event sequence dataset. This process facilitates the transfer of extensive knowledge from RGB videos to an event-centric domain. Moreover, we introduce an alignment mechanism that utilizes reinforcement learning techniques to enhance the reverse generation trajectory of the diffusion model, ensuring improved performance and accuracy. Through extensive testing and validation, we demonstrate the effectiveness of our method in various complex scenarios, showcasing its potential to revolutionize motion flow prediction in computer vision applications such as autonomous vehicle guidance, robotic navigation, and interactive media. Our findings suggest a promising direction for future research in enhancing the interpretative power and predictive accuracy of computer vision systems. The source code is\npublicly available at https://github.com/p4r4mount/E-Motion.", "pdf": "https://openreview.net/pdf/ca8e985c1cc3321d121c4f1490cfeb012e05d5f7.pdf"} {"title": "Flatten Anything: Unsupervised Neural Surface Parameterization", "url": "https://openreview.net/forum?id=eNeqGc9AgR", "detail_url": "https://openreview.net/forum?id=eNeqGc9AgR", "authors": "Qijian Zhang,Junhui Hou,Wenping Wang,Ying He", "tags": "NIPS 2024,Poster", "abstract": "Surface parameterization plays an essential role in numerous computer graphics and geometry processing applications. Traditional parameterization approaches are designed for high-quality meshes laboriously created by specialized 3D modelers, thus unable to meet the processing demand for the current explosion of ordinary 3D data. Moreover, their working mechanisms are typically restricted to certain simple topologies, thus relying on cumbersome manual efforts (e.g., surface cutting, part segmentation) for pre-processing. In this paper, we introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization via learning point-wise mappings between 3D points on the target geometric surface and adaptively-deformed UV coordinates within the 2D parameter domain.
To mimic the actual physical procedures, we ingeniously construct geometrically-interpretable sub-networks with specific functionalities of surface cutting, UV deforming, unwrapping, and wrapping, which are assembled into a bi-directional cycle mapping framework. Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information, thus significantly reducing the strict requirements for mesh quality and even applicable to unstructured point cloud data. More importantly, our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies, since its learning process adaptively finds reasonable cutting seams and UV boundaries. Extensive experiments demonstrate the universality, superiority, and inspiring potential of our proposed neural surface parameterization paradigm. Our code is available at https://github.com/keeganhk/FlattenAnything.", "pdf": "https://openreview.net/pdf/dd713a293868fd593e5d5827170ca0eb58ba5ce5.pdf"} {"title": "Learning Where to Edit Vision Transformers", "url": "https://openreview.net/forum?id=VIlyDguGEz", "detail_url": "https://openreview.net/forum?id=VIlyDguGEz", "authors": "Yunqiao Yang,Long-Kai Huang,Shengzhuang Chen,Kede Ma,Ying Wei", "tags": "NIPS 2024,Poster", "abstract": "Model editing aims to data-efficiently correct predictive errors of large pre-trained models while ensuring generalization to neighboring failures and locality to minimize unintended effects on unrelated examples. While significant progress has been made in editing Transformer-based large language models, effective strategies for editing vision Transformers (ViTs) in computer vision remain largely untapped. In this paper, we take initial steps towards correcting predictive errors of ViTs, particularly those arising from subpopulation shifts. Taking a locate-then-edit approach, we first address the ``where-to-edit`` challenge by meta-learning a hypernetwork on CutMix-augmented data generated for editing reliability. This trained hypernetwork produces generalizable binary masks that identify a sparse subset of structured model parameters, responsive to real-world failure samples. Afterward, we solve the ``how-to-edit`` problem by simply fine-tuning the identified parameters using a variant of gradient descent to achieve successful edits. To validate our method, we construct an editing benchmark that introduces subpopulation shifts towards natural underrepresented images and AI-generated images, thereby revealing the limitations of pre-trained ViTs for object recognition. Our approach not only achieves superior performance on the proposed benchmark but also allows for adjustable trade-offs between generalization and locality. Our code is available at https://github.com/hustyyq/Where-to-Edit.", "pdf": "https://openreview.net/pdf/0c454d2d3896e750b8290fe2a4ff97bb2c403ebe.pdf"} {"title": "Generalizing Weather Forecast to Fine-grained Temporal Scales via Physics-AI Hybrid Modeling", "url": "https://openreview.net/forum?id=ioAlzcELTf", "detail_url": "https://openreview.net/forum?id=ioAlzcELTf", "authors": "Wanghan Xu,Fenghua Ling,Wenlong Zhang,Tao Han,Hao Chen,Wanli Ouyang,LEI BAI", "tags": "NIPS 2024,Poster", "abstract": "Data-driven artificial intelligence (AI) models have made significant advancements in weather forecasting, particularly in medium-range and nowcasting. 
However, most data-driven weather forecasting models are black-box systems that focus on learning data mapping rather than fine-grained physical evolution in the time dimension. Consequently, the limitations in the temporal scale of datasets prevent these models from forecasting at finer time scales. This paper proposes a physics-AI hybrid model (i.e., WeatherGFT) which Generalizes weather forecasts to Finer-grained Temporal scales beyond the training dataset. Specifically, we employ a carefully designed PDE kernel to simulate physical evolution on a small time scale (e.g., 300 seconds) and use parallel neural networks with a learnable router for bias correction. Furthermore, we introduce a lead time-aware training framework to promote the generalization of the model at different lead times. The weight analysis of physics-AI modules indicates that physics conducts major evolution while AI performs corrections adaptively. Extensive experiments show that WeatherGFT, trained on an hourly dataset, achieves state-of-the-art performance across multiple lead times and exhibits the capability to generalize to 30-minute forecasts.", "pdf": "https://openreview.net/pdf/e03f894d3b581a13ac17ad48da73991f5c8af667.pdf"} {"title": "Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity", "url": "https://openreview.net/forum?id=jTyjwRpLZ5", "detail_url": "https://openreview.net/forum?id=jTyjwRpLZ5", "authors": "Qian Yu,Yining Wang,Baihe Huang,Qi Lei,Jason D. Lee", "tags": "NIPS 2024,Poster", "abstract": "Optimization of convex functions under stochastic zeroth-order feedback has been a major and challenging question in online learning. In this work, we consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries. \nWe provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds. \nWe propose an algorithm that features a combination of a bootstrapping stage and a mirror-descent stage. \nOur main technical innovation consists of a sharp characterization for the spherical-sampling gradient estimator under higher-order smoothness conditions, which allows the algorithm to optimally balance the bias-variance tradeoff, \nand a new iterative method for the bootstrapping stage, which maintains the performance for unbounded Hessian.", "pdf": "https://openreview.net/pdf/b21a76944143b8944e4d43e1e863716151af0fb4.pdf"} {"title": "Improved Analysis for Bandit Learning in Matching Markets", "url": "https://openreview.net/forum?id=07N0qoaZ2L", "detail_url": "https://openreview.net/forum?id=07N0qoaZ2L", "authors": "Fang Kong,Zilong Wang,Shuai Li", "tags": "NIPS 2024,Poster", "abstract": "A rich line of works study the bandit learning problem in two-sided matching markets, where one side of market participants (players) are uncertain about their preferences and hope to find a stable matching during iterative matchings with the other side (arms). The state-of-the-art analysis shows that the player-optimal stable regret is of order $O(K\log T/\Delta^2)$ where $K$ is the number of arms, $T$ is the horizon and $\Delta$ is the players' minimum preference gap.
However, this result may be far from the lower bound $\Omega(\max\{N\log T/\Delta^2, K\log T/\Delta\})$ since the number $K$ of arms (workers, publisher slots) may be much larger than that $N$ of players (employers in labor markets, advertisers in online advertising, respectively). \nIn this paper, we propose a new algorithm and show that the regret can be upper bounded by $O(N^2\log T/\Delta^2 + K \log T/\Delta)$. This result removes the dependence on $K$ in the main order term and improves the state-of-the-art guarantee in common cases where $N$ is much smaller than $K$. Such an advantage is also verified in experiments. \nIn addition, we provide a refined analysis for the existing centralized UCB algorithm and show that, under the $\alpha$-condition, it achieves an improved $O(N \log T/\Delta^2 + K \log T / \Delta)$ regret.", "pdf": "https://openreview.net/pdf/9725a8beebbc61ad9e438a635deaf066514301f9.pdf"} {"title": "Spectral Graph Pruning Against Over-Squashing and Over-Smoothing", "url": "https://openreview.net/forum?id=EMkrwJY2de", "detail_url": "https://openreview.net/forum?id=EMkrwJY2de", "authors": "Adarsh Jamadandi,Celia Rubio-Madrigal,Rebekka Burkholz", "tags": "NIPS 2024,Poster", "abstract": "Message Passing Graph Neural Networks are known to suffer from two problems that are sometimes believed to be diametrically opposed: over-squashing and over-smoothing. The former results from topological bottlenecks that hamper the information flow from distant nodes and are mitigated by spectral gap maximization, primarily, by means of edge additions. However, such additions often promote over-smoothing that renders nodes of different classes less distinguishable. Inspired by the Braess phenomenon, we argue that deleting edges can address over-squashing and over-smoothing simultaneously. This insight explains how edge deletions can improve generalization, thus connecting spectral gap optimization to a seemingly disconnected objective of reducing computational resources by pruning graphs for lottery tickets. To this end, we propose a computationally effective spectral gap optimization framework to add or delete edges and demonstrate its effectiveness on the long range graph benchmark and on larger heterophilous datasets.", "pdf": "https://openreview.net/pdf/6c02e3410cc4995ad21dc0152cadb338be80d408.pdf"} {"title": "Revive Re-weighting in Imbalanced Learning by Density Ratio Estimation", "url": "https://openreview.net/forum?id=vx4NgdyyVG", "detail_url": "https://openreview.net/forum?id=vx4NgdyyVG", "authors": "Jiaan Luo,Feng Hong,Jiangchao Yao,Bo Han,Ya Zhang,Yanfeng Wang", "tags": "NIPS 2024,Poster", "abstract": "In deep learning, model performance often deteriorates when trained on highly imbalanced datasets, especially when evaluation metrics require robust generalization across underrepresented classes. To address the challenges posed by imbalanced data distributions, this study introduces a novel method utilizing density ratio estimation for dynamic class weight adjustment, termed Re-weighting with Density Ratio (RDR). Our method adaptively adjusts the importance of each class during training, mitigates overfitting on dominant classes and enhances model adaptability across diverse datasets. Extensive experiments conducted on various large-scale benchmark datasets validate the effectiveness of our method.
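The flavor of density-ratio-based re-weighting can be illustrated with label frequencies against a uniform target distribution; RDR's dynamic ratio estimation is more involved, so treat this as a schematic only:

```python
import torch
import torch.nn.functional as F

def density_ratio_weights(labels, num_classes):
    """Class weights w_c proportional to target(c) / empirical(c), with a
    uniform target class distribution as a stand-in for the estimated ratio."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    empirical = counts / counts.sum()
    target = torch.full((num_classes,), 1.0 / num_classes)
    return target / empirical.clamp_min(1e-8)

labels = torch.tensor([0, 0, 0, 0, 1, 1, 2])  # imbalanced toy batch
logits = torch.randn(7, 3)
loss = F.cross_entropy(logits, labels, weight=density_ratio_weights(labels, 3))
```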
Results demonstrate substantial improvements in generalization capabilities, particularly under severely imbalanced conditions.", "pdf": "https://openreview.net/pdf/44e3160934c3d9637c541a15c8827e17ec5bba0e.pdf"} {"title": "LSH-MoE: Communication-efficient MoE Training via Locality-Sensitive Hashing", "url": "https://openreview.net/forum?id=bjFhVbky5A", "detail_url": "https://openreview.net/forum?id=bjFhVbky5A", "authors": "Xiaonan Nie,Qibin Liu,Fangcheng Fu,Shenhan Zhu,Xupeng Miao,Xiaoyang Li,Yang Zhang,Shouda Liu,Bin CUI", "tags": "NIPS 2024,Poster", "abstract": "Larger transformer models perform better on various downstream tasks but require more cost to scale up the model size. To efficiently enlarge models, the Mixture-of-Expert (MoE) architecture is widely adopted, which consists of a gate network and a series of experts, and keeps the training cost constant by routing the input data to a fixed number of experts instead of all of them.\nIn existing large-scale MoE training systems, experts would be distributed among different GPUs for parallelization, and thus input data requires additional all-to-all communication to access the target expert and conduct corresponding computation. \nHowever, upon evaluating the training process of three mainstream MoE models on commonly used GPU clusters, we found that the all-to-all communication ratio averaged around 45\%, which significantly hinders the training efficiency and scalability of MoE models.\n\nIn this paper, we propose LSH-MoE, a communication-efficient MoE training framework using locality-sensitive hashing (LSH). \nWe first present the problems of scaling MoE training in existing systems and highlight the potential of exploiting token similarity to facilitate data compression.\nThen, we introduce an efficient LSH-based compression technique, which utilizes the cross-polytope hashing for rapid clustering and implements a residual-based error compensation scheme to alleviate the adverse impact of compression. \nTo verify the effectiveness of our methods, we conduct experiments on both language models (e.g., RoBERTa, GPT, and T5) and vision models (e.g., Swin) for both pre-training and fine-tuning tasks. The results demonstrate that our method substantially outperforms its counterparts across different tasks with a speedup of 1.28-2.2$\times$.", "pdf": "https://openreview.net/pdf/e9cd30270a359f04756c54da343f427aa2ded9c4.pdf"} {"title": "Harmonizing Stochasticity and Determinism: Scene-responsive Diverse Human Motion Prediction", "url": "https://openreview.net/forum?id=NQCkNM6TES", "detail_url": "https://openreview.net/forum?id=NQCkNM6TES", "authors": "Zhenyu Lou,Qiongjie Cui,Tuo Wang,Zhenbo Song,Luoming Zhang,Cheng Cheng,Haofan Wang,Xu Tang,Huaxia Li,Hong Zhou", "tags": "NIPS 2024,Poster", "abstract": "Diverse human motion prediction (HMP) is a fundamental application in computer vision that has recently attracted considerable interest. Prior methods primarily focus on the stochastic nature of human motion, while neglecting the specific impact of the external environment, leading to pronounced artifacts in prediction when applied to real-world scenarios. To fill this gap, this work introduces a novel task: predicting diverse human motion within real-world 3D scenes. In contrast to prior works, it requires harmonizing the deterministic constraints imposed by the surrounding 3D scenes with the stochastic aspect of human motion.
For this purpose, we propose DiMoP3D, a diverse motion prediction framework with 3D scene awareness, which leverages the 3D point cloud and the observed sequence to generate diverse and high-fidelity predictions. DiMoP3D comprehends the 3D scene and determines the probable target objects and their desired interactive poses based on the historical motion. Then, it plans obstacle-free trajectories towards these objects of interest and generates diverse and physically-consistent future motions. On top of that, DiMoP3D identifies deterministic factors in the scene and integrates them into the stochastic modeling, turning diverse HMP in realistic scenes into a controllable stochastic generation process. On two real-captured benchmarks, DiMoP3D has demonstrated significant improvements over state-of-the-art methods, showcasing its effectiveness in generating diverse and physically-consistent motion predictions within real-world 3D environments.", "pdf": "https://openreview.net/pdf/505af28d14be82438700ac8ba80d19a9d2598bb1.pdf"} {"title": "Long-range Meta-path Search on Large-scale Heterogeneous Graphs", "url": "https://openreview.net/forum?id=hbOWLtJNMK", "detail_url": "https://openreview.net/forum?id=hbOWLtJNMK", "authors": "Chao Li,Zijie Guo,Qiuting He,Kun He", "tags": "NIPS 2024,Poster", "abstract": "Utilizing long-range dependency, a concept extensively studied in homogeneous graphs, remains underexplored in heterogeneous graphs, especially on large ones, posing two significant challenges: reducing computational costs while maximizing effective information utilization in the presence of heterogeneity, and overcoming the over-smoothing issue in graph neural networks. To address this gap, we investigate the importance of different meta-paths and introduce \nan automatic framework for utilizing long-range dependency on heterogeneous graphs, denoted as Long-range Meta-path Search through Progressive Sampling (LMSPS). Specifically, we develop a search space with all meta-paths related to the target node type. By employing a progressive sampling algorithm, LMSPS dynamically shrinks the search space with hop-independent time complexity. Through a sampling evaluation strategy, LMSPS conducts a specialized and effective meta-path selection, leading to retraining with only effective meta-paths, thus mitigating costs and over-smoothing. Extensive experiments across diverse heterogeneous datasets validate LMSPS's capability in discovering effective long-range meta-paths, surpassing state-of-the-art methods. Our code is available at https://github.com/JHL-HUST/LMSPS.", "pdf": "https://openreview.net/pdf/8668b12ac4cacbf3c4b557a17e5cb842c7f2a5d8.pdf"} {"title": "Towards Accurate and Fair Cognitive Diagnosis via Monotonic Data Augmentation", "url": "https://openreview.net/forum?id=B4k2TecKT2", "detail_url": "https://openreview.net/forum?id=B4k2TecKT2", "authors": "Zheng Zhang,Wei Song,Qi Liu,Qingyang Mao,Yiyan Wang,Weibo Gao,Zhenya Huang,Shijin Wang,Enhong Chen", "tags": "NIPS 2024,Poster", "abstract": "Intelligent education stands as a prominent application of machine learning. Within this domain, cognitive diagnosis (CD) is a key research focus that aims to diagnose students' proficiency levels in specific knowledge concepts. As a crucial task within the field of education, cognitive diagnosis encompasses two fundamental requirements: accuracy and fairness. Existing studies have achieved significant success by primarily utilizing observed historical logs of student-exercise interactions. 
However, real-world scenarios often present a challenge, where a substantial number of students engage with a limited number of exercises. This data sparsity issue can lead to both inaccurate and unfair diagnoses. To address this issue, we introduce a monotonic data augmentation framework, CMCD, to tackle the data sparsity issue and thereby achieve accurate and fair CD results. Specifically, CMCD integrates the monotonicity assumption, a fundamental educational principle in CD, to establish two constraints for data augmentation. These constraints are general and can be applied to the majority of CD backbones. Furthermore, we provide theoretical analysis to guarantee the accuracy and convergence speed of CMCD. Finally, extensive experiments on real-world datasets showcase the efficacy of our framework in addressing the data sparsity issue with accurate and fair CD results.", "pdf": "https://openreview.net/pdf/205c7282011ad79ec0f37379236033b6e18b7a4f.pdf"} {"title": "From Instance Training to Instruction Learning: Task Adapters Generation from Instructions", "url": "https://openreview.net/forum?id=CluvZBfrjj", "detail_url": "https://openreview.net/forum?id=CluvZBfrjj", "authors": "Huanxuan Liao,Shizhu He,Yao Xu,Yuanzhe Zhang,Yanchao Hao,Shengping Liu,Kang Liu,Jun Zhao", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have acquired the ability to solve general tasks by utilizing instruction finetuning (IFT). However, IFT still relies heavily on instance training of extensive task data, which greatly limits the adaptability of LLMs to real-world scenarios where labeled task instances are scarce and broader task generalization becomes paramount. Contrary to LLMs, humans acquire skills and complete tasks not merely through repeated practice but also by understanding and following instructional guidelines. This paper is dedicated to simulating human learning to address the shortcomings of instance training, focusing on instruction learning to enhance cross-task generalization. Within this context, we introduce Task Adapters Generation from Instructions (TAGI), which automatically constructs the task-specific model in a parameter generation manner based on the given task instructions without retraining for unseen tasks. Specifically, we utilize knowledge distillation to enhance the consistency between TAGI developed through Learning with Instruction and task-specific models developed through Training with Instance, by aligning the labels, output logits, and adapter parameters between them. TAGI is endowed with cross-task generalization capabilities through a two-stage training process that includes hypernetwork pretraining and finetuning. We evaluate TAGI on the Super-Natural Instructions and P3 datasets. The experimental results demonstrate that TAGI can match or even outperform traditional meta-trained models and other hypernetwork models, while significantly reducing computational requirements. 
Our code will be available at https://github.com/Xnhyacinth/TAGI.", "pdf": "https://openreview.net/pdf/659d3a7d23e295a63d2531d8b137ac06320b501a.pdf"} {"title": "Semi-supervised Knowledge Transfer Across Multi-omic Single-cell Data", "url": "https://openreview.net/forum?id=sKEhebkEdz", "detail_url": "https://openreview.net/forum?id=sKEhebkEdz", "authors": "Fan Zhang,Tianyu Liu,Zihao Chen,Xiaojiang Peng,Chong Chen,Xian-Sheng Hua,Xiao Luo,Hongyu Zhao", "tags": "NIPS 2024,Poster", "abstract": "Knowledge transfer between multi-omic single-cell data aims to effectively transfer cell types from scRNA-seq data to unannotated scATAC-seq data. Several approaches aim to reduce the heterogeneity of multi-omic data while maintaining the discriminability of cell types with extensive annotated data. However, in reality, collecting a large amount of both labeled scRNA-seq data and scATAC-seq data is expensive. Therefore, this paper explores a practical yet underexplored problem of knowledge transfer across multi-omic single-cell data under cell type scarcity. To address this problem, we propose a semi-supervised knowledge transfer framework named Dual label scArcity elimiNation with Cross-omic multi-samplE Mixup (DANCE). To overcome the label scarcity in scRNA-seq data, we generate pseudo-labels based on optimal transport and merge them into the labeled scRNA-seq data. Moreover, we adopt a divide-and-conquer strategy which divides the scATAC-seq data into source-like and target-specific data. For source-like samples, we employ consistency regularization with random perturbations, while for target-specific samples, we select a few candidate labels and progressively eliminate incorrect cell types from the label set for additional supervision. Next, we generate virtual scRNA-seq samples with multi-sample Mixup based on the class-wise similarity to reduce cell heterogeneity. Extensive experiments on many benchmark datasets suggest the superiority of our DANCE over a series of state-of-the-art methods.", "pdf": "https://openreview.net/pdf/de7161289e3fd90f8a1f92ddbf473bb198bfdf40.pdf"} {"title": "Genetic-guided GFlowNets for Sample Efficient Molecular Optimization", "url": "https://openreview.net/forum?id=B4q98aAZwt", "detail_url": "https://openreview.net/forum?id=B4q98aAZwt", "authors": "Hyeonah Kim,Minsu Kim,Sanghyeok Choi,Jinkyoo Park", "tags": "NIPS 2024,Poster", "abstract": "The challenge of discovering new molecules with desired properties is crucial in domains like drug discovery and material design. Recent advances in deep learning-based generative methods have shown promise but face the issue of sample efficiency due to the computational expense of evaluating the reward function. This paper proposes a novel algorithm for sample-efficient molecular optimization by distilling a powerful genetic algorithm into a deep generative policy using GFlowNets training, an off-policy method for amortized inference. This approach enables the deep generative policy to learn from domain knowledge, which has been explicitly integrated into the genetic algorithm. Our method achieves state-of-the-art performance in the official molecular optimization benchmark, significantly outperforming previous methods. 
It also demonstrates effectiveness in designing inhibitors against SARS-CoV-2 with substantially fewer reward calls.", "pdf": "https://openreview.net/pdf/db8161bc43b22264ad7b7c95370c837c9830ddd6.pdf"} {"title": "Octopus: A Multi-modal LLM with Parallel Recognition and Sequential Understanding", "url": "https://openreview.net/forum?id=ujE83r50tR", "detail_url": "https://openreview.net/forum?id=ujE83r50tR", "authors": "Chuyang Zhao,YuXin Song,Junru Chen,KANG RONG,Haocheng Feng,Gang Zhang,Shufan Ji,Jingdong Wang,Errui Ding,Yifan Sun", "tags": "NIPS 2024,Poster", "abstract": "Mainstream Multi-modal Large Language Models (MLLMs) have two essential functions, i.e., visual recognition (e.g., grounding) and understanding (e.g., visual question answering). Presently, all these MLLMs integrate visual recognition and understanding in the same sequential manner in the LLM head, i.e., generating the response token-by-token for both recognition and understanding. We think unifying them in the same sequential manner is not optimal for two reasons: 1) parallel recognition is more efficient than sequential recognition and is actually prevailing in deep visual recognition, and 2) the recognition results can be integrated to help high-level cognition (while the current manner does not). Thus motivated, this paper proposes a novel \u201cparallel recognition \u2192 sequential understanding\u201d framework for MLLMs. The bottom LLM layers are utilized for parallel recognition and the recognition results are relayed into the top LLM layers for sequential understanding. Specifically, parallel recognition in the bottom LLM layers is implemented via object queries, a popular mechanism in DEtection TRansformer, which we find to harmonize well with the LLM layers. Empirical studies show our MLLM named Octopus improves accuracy on popular MLLM tasks and is up to 5\u00d7 faster on visual grounding tasks.", "pdf": "https://openreview.net/pdf/2555221db7dc3b492d4ad067e94664955534d668.pdf"} {"title": "FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference", "url": "https://openreview.net/forum?id=I96GFYalFO", "detail_url": "https://openreview.net/forum?id=I96GFYalFO", "authors": "Zihan Tan,Guancheng Wan,Wenke Huang,Mang Ye", "tags": "NIPS 2024,Poster", "abstract": "Personalized Federated Graph Learning (pFGL) facilitates the decentralized training of Graph Neural Networks (GNNs) without compromising privacy while accommodating personalized requirements for non-IID participants. In cross-domain scenarios, structural heterogeneity poses significant challenges for pFGL. Nevertheless, previous pFGL methods incorrectly share non-generic knowledge globally and fail to tailor personalized solutions locally under domain structural shift. We innovatively reveal that the spectral nature of graphs can well reflect inherent domain structural shifts. Correspondingly, our method overcomes this by sharing generic spectral knowledge. Moreover, we identify the biased message-passing schemes for graph structures and propose the personalized preference module. Combining both strategies, we propose our pFGL framework $\\textbf{FedSSP}$ which $\\textbf{S}$hares generic $\\textbf{S}$pectral knowledge while satisfying graph $\\textbf{P}$references. Furthermore, we perform extensive experiments on cross-dataset and cross-domain settings to demonstrate the superiority of our framework. 
The code is available at https://github.com/OakleyTan/FedSSP.", "pdf": "https://openreview.net/pdf/18577d7628b39515c1c3b134e7df64dab4c50b81.pdf"} {"title": "Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques", "url": "https://openreview.net/forum?id=QvqLdeSLWA", "detail_url": "https://openreview.net/forum?id=QvqLdeSLWA", "authors": "Benyuan Meng,Qianqian Xu,Zitai Wang,Zhiyong Yang,Xiaochun Cao,Qingming Huang", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models are powerful generative models, and this capability can also be applied to discrimination. The inner activations of a pre-trained diffusion model can serve as features for discriminative tasks, namely, diffusion feature. We discover that diffusion feature has been hindered by a hidden yet universal phenomenon that we call content shift. To be specific, there are content differences between features and the input image, such as the exact shape of a certain object. We trace the cause of content shift to an inherent characteristic of diffusion models, which suggests the broad existence of this phenomenon in diffusion feature. Further empirical study also indicates that its negative impact is not negligible even when content shift is not visually perceivable. Hence, we propose to suppress content shift to enhance the overall quality of diffusion features. Specifically, content shift is related to the information drift during the process of recovering an image from the noisy input, pointing out the possibility of turning off-the-shelf generation techniques into tools for content shift suppression. We further propose a practical guideline named GATE to efficiently evaluate the potential benefit of a technique and provide an implementation of our methodology. Despite its simplicity, the proposed approach has achieved superior results on various tasks and datasets, validating its potential as a generic booster for diffusion features. Our code is available at https://github.com/Darkbblue/diffusion-content-shift.", "pdf": "https://openreview.net/pdf/594e3578a4c8d12a94ed91aabc8478089d8fdf10.pdf"} {"title": "What Matters in Graph Class Incremental Learning? An Information Preservation Perspective", "url": "https://openreview.net/forum?id=tJGX7tpGO8", "detail_url": "https://openreview.net/forum?id=tJGX7tpGO8", "authors": "Jialu Li,Yu Wang,Pengfei Zhu,Wanyu Lin,Qinghua Hu", "tags": "NIPS 2024,Poster", "abstract": "Graph class incremental learning (GCIL) requires the model to classify emerging nodes of new classes while remembering old classes. Existing methods are designed to preserve effective information of old models or graph data to alleviate forgetting, but there is no clear theoretical understanding of what matters in information preservation. In this paper, we find that present practice suffers from high semantic and structural shifts, as assessed by two devised shift metrics. We provide insights into information preservation in GCIL and find that maintaining graph information can, in theory, preserve the information of old models and calibrate node semantic and graph structure shifts. We decompose graph information into low-frequency local-global information and high-frequency information in the spatial domain. Based on this analysis, we propose a framework, Graph Spatial Information Preservation (GSIP). 
Specifically, for low-frequency information preservation, the old node representations obtained by inputting replayed nodes into the old model are aligned with the outputs of the node and its neighbors in the new model, and then old and new outputs are globally matched after pooling. For high-frequency information preservation, the new node representations are encouraged to imitate the near-neighbor pair similarity of old node representations. GSIP achieves a 10\\% increase in terms of the forgetting metric compared to prior methods on large-scale datasets. Our framework can also seamlessly integrate existing replay designs. The code is available through https://github.com/Jillian555/GSIP.", "pdf": "https://openreview.net/pdf/f16531b97e4d2d6593f7273b8e6bb7292070ff71.pdf"} {"title": "PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference", "url": "https://openreview.net/forum?id=fVRCsK4EoM", "detail_url": "https://openreview.net/forum?id=fVRCsK4EoM", "authors": "Kendong Liu,Zhiyu Zhu,Chuanhao Li,Hui LIU,Huanqiang Zeng,Junhui Hou", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we make the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework, significantly improving the quality and visual appeal of inpainted images. Specifically, instead of directly measuring the divergence with paired images, we train a reward model with the dataset we construct, consisting of nearly 51,000 images annotated with human preferences. Then, we adopt a reinforcement learning process to fine-tune the distribution of a pre-trained diffusion model for image inpainting in the direction of higher reward. Moreover, we theoretically deduce the upper bound on the error of the reward model, which illustrates the potential confidence of reward estimation throughout the reinforcement alignment process, thereby facilitating accurate regularization.\nExtensive experiments on inpainting comparison and downstream tasks, such as image extension and 3D reconstruction, demonstrate the effectiveness of our approach, showing significant improvements in the alignment of inpainted images with human preference compared with state-of-the-art methods. This research not only advances the field of image inpainting but also provides a framework for incorporating human preference into the iterative refinement of generative models based on modeling reward accuracy, with broad implications for the design of visually driven AI applications. Our code and dataset are publicly available at \\url{https://prefpaint.github.io}.", "pdf": "https://openreview.net/pdf/eed9b5fa2c7c313521b7c54afaf2466f5115309f.pdf"} {"title": "Most Influential Subset Selection: Challenges, Promises, and Beyond", "url": "https://openreview.net/forum?id=qWi33pPecC", "detail_url": "https://openreview.net/forum?id=qWi33pPecC", "authors": "Yuzheng Hu,Pingbang Hu,Han Zhao,Jiaqi Ma", "tags": "NIPS 2024,Poster", "abstract": "How can we attribute the behaviors of machine learning models to their training data? While the classic influence function sheds light on the impact of individual samples, it often fails to capture the more complex and pronounced collective influence of a set of samples. To tackle this challenge, we study the Most Influential Subset Selection (MISS) problem, which aims to identify a subset of training samples with the greatest collective influence. 
We conduct a comprehensive analysis of the prevailing approaches in MISS, elucidating their strengths and weaknesses. Our findings reveal that influence-based greedy heuristics, a dominant class of algorithms in MISS, can provably fail even in linear regression. We delineate the failure modes, including the errors of the influence function and the non-additive structure of the collective influence. Conversely, we demonstrate that an adaptive version of these heuristics, which applies them iteratively, can effectively capture the interactions among samples and thus partially address the issues. Experiments on real-world datasets corroborate these theoretical findings, and further demonstrate that the merit of adaptivity can extend to more complex scenarios such as classification tasks and non-linear neural networks. We conclude our analysis by emphasizing the inherent trade-off between performance and computational efficiency, questioning the use of additive metrics such as the linear datamodeling score, and offering a range of discussions.", "pdf": "https://openreview.net/pdf/51cd9620327c3e23453b59a431e678bf4d94adff.pdf"} {"title": "Globally Q-linear Gauss-Newton Method for Overparameterized Non-convex Matrix Sensing", "url": "https://openreview.net/forum?id=2QvCOFw058", "detail_url": "https://openreview.net/forum?id=2QvCOFw058", "authors": "Xixi Jia,Fangchen FENG,Deyu Meng,Defeng Sun", "tags": "NIPS 2024,Poster", "abstract": "This paper focuses on the optimization of overparameterized, non-convex low-rank matrix sensing (LRMS)\u2014an essential component in contemporary statistics and machine learning. Recent years have witnessed significant breakthroughs in first-order methods, such as gradient descent, for tackling this non-convex optimization problem. However, the presence of numerous saddle points often prolongs the time required for gradient descent to overcome these obstacles. Moreover, overparameterization can markedly decelerate gradient descent methods, transitioning their convergence rate from linear to sub-linear. In this paper, we introduce an approximated Gauss-Newton (AGN) method for tackling the non-convex LRMS problem. Notably, AGN incurs a computational cost comparable to gradient descent per iteration but converges much faster without being slowed down by saddle points. We prove that, despite the non-convexity of the objective function, AGN achieves Q-linear convergence from random initialization to the global optimal solution. The global Q-linear convergence of AGN represents a substantial enhancement over the convergence of the existing methods for the overparameterized non-convex LRMS. The code for this paper is available at \\url{https://github.com/hsijiaxidian/AGN}.", "pdf": "https://openreview.net/pdf/3884e95044a055e5159dcc80be724bd8b3e836f2.pdf"} {"title": "One for All: Multi-Domain Joint Training for Point Cloud Based 3D Object Detection", "url": "https://openreview.net/forum?id=ndoeHX1Acq", "detail_url": "https://openreview.net/forum?id=ndoeHX1Acq", "authors": "Zhenyu Wang,Ya-Li Li,Hengshuang Zhao,Shengjin Wang", "tags": "NIPS 2024,Poster", "abstract": "The current trend in computer vision is to utilize one universal model to address various tasks. Achieving such a universal model inevitably requires incorporating multi-domain data for joint training to learn across multiple problem scenarios. 
In point cloud based 3D object detection, however, such multi-domain joint training is highly challenging, because large domain gaps among point clouds from different datasets lead to the severe domain-interference problem. In this paper, we propose OneDet3D, a universal one-for-all model that addresses 3D detection across different domains, including diverse indoor and outdoor scenes, within the same framework and with only one set of parameters. We propose domain-aware partitioning in scatter and context, guided by a routing mechanism, to address the data interference issue, and further incorporate the text modality for language-guided classification to unify the multi-dataset label spaces and mitigate the category interference issue. The fully sparse structure and anchor-free head further accommodate point clouds with significant scale disparities. Extensive experiments demonstrate the strong universal ability of OneDet3D to utilize only one trained model for addressing almost all 3D object detection tasks (Fig. 1). We will open-source the code for future research and applications.", "pdf": "https://openreview.net/pdf/08fc8dc3f9dfe59b9f9cc69d4cdc28f677604b19.pdf"} {"title": "FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion", "url": "https://openreview.net/forum?id=jfE7XCE89y", "detail_url": "https://openreview.net/forum?id=jfE7XCE89y", "authors": "Xing Han,Huy Nguyen,Carl William Harris,Nhat Ho,Suchi Saria", "tags": "NIPS 2024,Poster", "abstract": "As machine learning models in critical fields increasingly grapple with multimodal data, they face the dual challenges of handling a wide array of modalities, often incomplete due to missing elements, and the temporal irregularity and sparsity of collected samples. Successfully leveraging this complex data, while overcoming the scarcity of high-quality training samples, is key to improving these models' predictive performance. We introduce ``FuseMoE'', a mixture-of-experts framework incorporating an innovative gating function. Designed to integrate a diverse set of modalities, FuseMoE is effective in managing scenarios with missing modalities and irregularly sampled data trajectories. Theoretically, our unique gating function contributes to enhanced convergence rates, leading to better performance in multiple downstream tasks. The practical utility of FuseMoE in the real world is validated by a diverse set of challenging prediction tasks.", "pdf": "https://openreview.net/pdf/5d277cf257b139773a1af7cd738795eb55583910.pdf"} {"title": "Voxel Proposal Network via Multi-Frame Knowledge Distillation for Semantic Scene Completion", "url": "https://openreview.net/forum?id=02HWT9c4Lp", "detail_url": "https://openreview.net/forum?id=02HWT9c4Lp", "authors": "Lubo Wang,Di Lin,Kairui Yang,Ruonan Liu,Qing Guo,Wuyuan Xie,Miaohui Wang,Lingyu Liang,Yi Wang,Ping Li", "tags": "NIPS 2024,Poster", "abstract": "Semantic scene completion is a difficult task that involves completing the geometry and semantics of a scene from point clouds in a large-scale environment. Many current methods use 3D/2D convolutions or attention mechanisms, but these have limitations in directly constructing geometry and accurately propagating features from related voxels; completion is likely to fail when features are propagated in a single pass without considering multiple potential pathways. Moreover, such methods are generally only suitable for static scenes and struggle to handle dynamic aspects. 
This paper introduces Voxel Proposal Network (VPNet) that completes scenes from 3D and Bird's-Eye-View (BEV) perspectives. It includes Confident Voxel Proposal based on voxel-wise coordinates to propose confident voxels with high reliability for completion. This method reconstructs the scene geometry and implicitly models the uncertainty of voxel-wise semantic labels by presenting multiple possibilities for voxels. VPNet employs Multi-Frame Knowledge Distillation based on the point clouds of multiple adjacent frames to accurately predict the voxel-wise labels by condensing various possibilities of voxel relationships. VPNet has shown superior performance and achieved state-of-the-art results on the SemanticKITTI and SemanticPOSS datasets.", "pdf": "https://openreview.net/pdf/3f85a5f95fbad0b4bf7c58bb45f23e0c242c2ba9.pdf"} {"title": "Motion Graph Unleashed: A Novel Approach to Video Prediction", "url": "https://openreview.net/forum?id=4ztP4PujOG", "detail_url": "https://openreview.net/forum?id=4ztP4PujOG", "authors": "Yiqi Zhong,Luming Liang,Bohan Tang,Ilya Zharkov,Ulrich Neumann", "tags": "NIPS 2024,Poster", "abstract": "We introduce motion graph, a novel approach to address the video prediction problem, i.e., predicting future video frames from limited past data. The motion graph transforms patches of video frames into interconnected graph nodes, to comprehensively describe the spatial-temporal relationships among them. This representation overcomes the limitations of existing motion representations such as image differences, optical flow, and motion matrix that either fall short in capturing complex motion patterns or suffer from excessive memory consumption. We further present a video prediction pipeline empowered by motion graph, exhibiting substantial performance improvements and cost reductions. Extensive experiments on various datasets, including UCF Sports, KITTI and Cityscapes, highlight the strong representative ability of motion graph. Especially on UCF Sports, our method matches and outperforms the SOTA methods with a significant reduction in model size by 78% and a substantial decrease in GPU memory utilization by 47%.", "pdf": "https://openreview.net/pdf/54389d09c724a3aab390a7618c1fe795ab97fb4b.pdf"} {"title": "DC-Gaussian: Improving 3D Gaussian Splatting for Reflective Dash Cam Videos", "url": "https://openreview.net/forum?id=ja20BpFAPa", "detail_url": "https://openreview.net/forum?id=ja20BpFAPa", "authors": "Linhan Wang,Kai Cheng,Shuo Lei,Shengkun Wang,Wei Yin,Chenyang Lei,Xiaoxiao Long,Chang-Tien Lu", "tags": "NIPS 2024,Poster", "abstract": "We present DC-Gaussian, a new method for generating novel views from in-vehicle dash cam videos. While neural rendering techniques have made significant strides in driving scenarios, existing methods are primarily designed for videos collected by autonomous vehicles. However, these videos are limited in both quantity and diversity compared to dash cam videos, which are more widely used across various types of vehicles and capture a broader range of scenarios. Dash cam videos often suffer from severe obstructions such as reflections and occlusions on the windshields, which significantly impede the application of neural rendering techniques. To address this challenge, we develop DC-Gaussian based on the recent real-time neural rendering technique 3D Gaussian Splatting (3DGS). Our approach includes an adaptive image decomposition module to model reflections and occlusions in a unified manner. 
Additionally, we introduce illumination-aware obstruction modeling to manage reflections and occlusions under varying lighting conditions. Lastly, we employ a geometry-guided Gaussian enhancement strategy to improve rendering details by incorporating additional geometry priors. Experiments on self-captured and public dash cam videos show that our method not only achieves state-of-the-art performance in novel view synthesis, but also accurately reconstructs the captured scenes while getting rid of obstructions.", "pdf": "https://openreview.net/pdf/921fa18c64b1af416c5f2722ac7f0b07227a3f37.pdf"} {"title": "Personalized Adapter for Large Meteorology Model on Devices: Towards Weather Foundation Models", "url": "https://openreview.net/forum?id=llTroju97T", "detail_url": "https://openreview.net/forum?id=llTroju97T", "authors": "Shengchao Chen,Guodong Long,Jing Jiang,Chengqi Zhang", "tags": "NIPS 2024,Poster", "abstract": "This paper demonstrates that pre-trained language models (PLMs) are strong foundation models for on-device meteorological variable modeling. We present LM-Weather, a generic approach to taming PLMs, which have learned massive sequential knowledge from the universe of natural language databases, to immediately obtain highly customized models for heterogeneous meteorological data on devices while keeping high efficiency. Concretely, we introduce a lightweight personalized adapter into PLMs and endow it with weather pattern awareness. During communication between clients and the server, low-rank-based transmission is performed to effectively fuse the global knowledge among devices while maintaining high communication efficiency and ensuring privacy. Experiments on real-world datasets show that LM-Weather outperforms the state-of-the-art results by a large margin across various tasks (e.g., forecasting and imputation at different scales). We provide extensive and in-depth analyses, which verify that LM-Weather can (1) indeed leverage sequential knowledge from natural language to accurately handle meteorological sequences, (2) allow each device to obtain highly customized models under significant heterogeneity, and (3) generalize under data-limited and out-of-distribution (OOD) scenarios.", "pdf": "https://openreview.net/pdf/f2a6e5afdf4428ea2b7277c8b93ae10ce241b3ca.pdf"} {"title": "Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation", "url": "https://openreview.net/forum?id=3ACXaFxjTy", "detail_url": "https://openreview.net/forum?id=3ACXaFxjTy", "authors": "Muzhi Zhu,Yang Liu,Zekai Luo,Chenchen Jing,Hao Chen,Guangkai Xu,Xinlong Wang,Chunhua Shen", "tags": "NIPS 2024,Poster", "abstract": "The Diffusion Model has not only garnered noteworthy achievements in the realm of image generation \nbut has also demonstrated its potential as an effective pretraining method utilizing unlabeled data. 
\nDrawing from the extensive potential unveiled by the Diffusion Model in both semantic correspondence and open vocabulary segmentation, our work initiates an investigation into employing the Latent Diffusion Model for Few-shot Semantic Segmentation.\nRecently, inspired by the in-context learning ability of large language models, Few-shot Semantic Segmentation has evolved into In-context Segmentation tasks, morphing into a crucial element in assessing generalist segmentation models.\nIn this context, we concentrate \non Few-shot Semantic Segmentation, \nestablishing a solid foundation for the future development of a Diffusion-based generalist model for segmentation. Our initial focus lies in understanding how to facilitate interaction between the query image and the support image, resulting in the proposal of a KV fusion method within the self-attention framework.\nSubsequently, we delve deeper into optimizing the infusion of information from the support mask and simultaneously re-evaluating how to provide reasonable supervision from the query mask.\nBased on our analysis, we establish a simple and effective framework named DiffewS, maximally retaining the original Latent Diffusion Model's generative framework and effectively utilizing the pre-training prior. Experimental results demonstrate that our method significantly outperforms the previous SOTA models in multiple settings.", "pdf": "https://openreview.net/pdf/b91dfae95ded39c07f0943d622728b605df53467.pdf"} {"title": "Boosting Transferability and Discriminability for Time Series Domain Adaptation", "url": "https://openreview.net/forum?id=cIBSsXowMr", "detail_url": "https://openreview.net/forum?id=cIBSsXowMr", "authors": "Mingyang Liu,Xinyang Chen,Yang Shu,Xiucheng Li,Weili Guan,Liqiang Nie", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised domain adaptation excels in transferring knowledge from a labeled source domain to an unlabeled target domain, playing a critical role in time series applications. Existing time series domain adaptation methods either ignore frequency features or treat temporal and frequency features equally, which makes it challenging to fully exploit the advantages of both types of features. In this paper, we delve into transferability and discriminability, two crucial properties in transferable representation learning. It's insightful to note that frequency features are more discriminative within a specific domain, while temporal features show better transferability across domains. Based on the findings, we propose **A**dversarial **CO**-learning **N**etworks (**ACON**), to enhance transferable representation learning through a collaborative learning manner in three aspects: (1) Considering the multi-periodicity in time series, multi-period frequency feature learning is proposed to enhance the discriminability of frequency features; (2) Temporal-frequency domain mutual learning is proposed to enhance the discriminability of temporal features in the source domain and improve the transferability of frequency features in the target domain; (3) Domain adversarial learning is conducted in the correlation subspaces of temporal-frequency features instead of original feature spaces to further enhance the transferability of both features. Extensive experiments conducted on a wide range of time series datasets and five common applications demonstrate the state-of-the-art performance of ACON. 
Code is available at .", "pdf": "https://openreview.net/pdf/806d81fe07c87e657f70624dc214316549750557.pdf"} {"title": "A Simple Image Segmentation Framework via In-Context Examples", "url": "https://openreview.net/forum?id=esDvZi2Cf3", "detail_url": "https://openreview.net/forum?id=esDvZi2Cf3", "authors": "Yang Liu,Chenchen Jing,Hengtao Li,Muzhi Zhu,Hao Chen,Xinlong Wang,Chunhua Shen", "tags": "NIPS 2024,Poster", "abstract": "Recently, there have been explorations of generalist segmentation models that can effectively tackle a variety of image segmentation tasks within a unified in-context learning framework. However, these methods still struggle with task ambiguity in in-context segmentation, as not all in-context examples can accurately convey the task information. In order to address this issue, we present SINE, a simple image $\\textbf{S}$egmentation framework utilizing $\\textbf{in}$-context $\\textbf{e}$xamples. Our approach leverages a Transformer encoder-decoder structure, where the encoder provides high-quality image representations, and the decoder is designed to yield multiple task-specific output masks to eliminate task ambiguity effectively. Specifically, we introduce an In-context Interaction module to complement in-context information and produce correlations between the target image and the in-context example, and a Matching Transformer that uses fixed matching and a Hungarian algorithm to eliminate differences between different tasks. In addition, we have further refined the current evaluation system for in-context image segmentation, aiming to facilitate a holistic appraisal of these models. Experiments on various segmentation tasks show the effectiveness of the proposed method.", "pdf": "https://openreview.net/pdf/7d77c867965f20ef319ca88e0205321eb9bc012c.pdf"} {"title": "Enhancing LLM\u2019s Cognition via Structurization", "url": "https://openreview.net/forum?id=q5CkneUn6K", "detail_url": "https://openreview.net/forum?id=q5CkneUn6K", "authors": "Kai Liu,Zhihang Fu,Chao Chen,Wei Zhang,Rongxin Jiang,Fan Zhou,Yaowu Chen,Yue Wu,Jieping Ye", "tags": "NIPS 2024,Poster", "abstract": "When reading long-form text, human cognition is complex and structurized. While large language models (LLMs) process input contexts through a causal and sequential perspective, this approach can potentially limit their ability to handle intricate and complex inputs effectively. To enhance LLM\u2019s cognition capability, this paper presents a novel concept of context structurization. Specifically, we transform the plain, unordered contextual sentences into well-ordered and hierarchically structurized elements. By doing so, LLMs can better grasp intricate and extended contexts through precise attention and information-seeking along the organized structures. Extensive evaluations are conducted across various model architectures and sizes (including a series of auto-regressive LLMs as well as BERT-like masking models) on a diverse set of NLP tasks (e.g., context-based question-answering, exhaustive hallucination evaluation, and passage-level dense retrieval). Empirical results show consistent and significant performance gains afforded by a single-round structurization. In particular, we boost the open-sourced LLaMA2-70B model to achieve comparable performance against GPT-3.5-Turbo as the hallucination evaluator. 
Besides, we show the feasibility of distilling advanced LLMs\u2019 language processing abilities into a smaller yet effective StruXGPT-7B to execute structurization, addressing the practicality of our approach. Code is available at https://github.com/alibaba/struxgpt.", "pdf": "https://openreview.net/pdf/a708447ad64073dc643e6107c373fa3a8e70dfad.pdf"} {"title": "Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling", "url": "https://openreview.net/forum?id=hD9TUV4xdz", "detail_url": "https://openreview.net/forum?id=hD9TUV4xdz", "authors": "Shuaipeng Li,Penghao Zhao,Hailin Zhang,Samm Sun,Hao Wu,Dian Jiao,Weiyan Wang,Chengjun Liu,Zheng Fang,Jinbao Xue,Yangyu Tao,Bin CUI,Di Wang", "tags": "NIPS 2024,Poster", "abstract": "In current deep learning tasks, Adam-style optimizers\u2014such as Adam, Adagrad, RMSprop, Adafactor, and Lion\u2014have been widely used as alternatives to SGD-style optimizers. These optimizers typically update model parameters using the sign of gradients, resulting in more stable convergence curves. \nThe learning rate and the batch size are the most critical hyperparameters for optimizers, which require careful tuning to enable effective convergence. Previous research has shown that the optimal learning rate increases linearly (or follows similar rules) with batch size for SGD-style optimizers. However, this conclusion is not applicable to Adam-style optimizers. \nIn this paper, we elucidate the connection between optimal learning rates and batch sizes for Adam-style optimizers through both theoretical analysis and extensive experiments. \nFirst, we establish the scaling law between batch sizes and optimal learning rates in the \u201csign of gradient\u201d case, in which we prove that the optimal learning rate first rises and then falls as the batch size increases. Moreover, the peak of the surge gradually moves toward larger batch sizes as training progresses.\nSecond, we conduct experiments on various CV and NLP tasks and verify the correctness of the scaling law.", "pdf": "https://openreview.net/pdf/8406347cd2abe4fe1415499a3de40e174d991c3b.pdf"} {"title": "Rethinking Out-of-Distribution Detection on Imbalanced Data Distribution", "url": "https://openreview.net/forum?id=EWxNEnFjKR", "detail_url": "https://openreview.net/forum?id=EWxNEnFjKR", "authors": "Kai Liu,Zhihang Fu,Sheng Jin,Chao Chen,Ze Chen,Rongxin Jiang,Fan Zhou,Yaowu Chen,Jieping Ye", "tags": "NIPS 2024,Poster", "abstract": "Detecting and rejecting unknown out-of-distribution (OOD) samples is critical for deployed neural networks to avoid unreliable predictions. In real-world scenarios, however, the efficacy of existing OOD detection methods is often impeded by the inherent imbalance of in-distribution (ID) data, which causes significant performance decline. Through statistical observations, we have identified two common challenges faced by different OOD detectors: misidentifying tail class ID samples as OOD, and erroneously predicting OOD samples as ID head classes. To explain this phenomenon, we introduce a generalized statistical framework, termed ImOOD, to formulate the OOD detection problem on imbalanced data distribution. Consequently, the theoretical analysis reveals that there exists a class-aware bias term between balanced and imbalanced OOD detection, which contributes to the performance gap. Building upon this finding, we present a unified training-time regularization technique to mitigate the bias and boost imbalanced OOD detectors across architecture designs. 
Our theoretically grounded method translates into consistent improvements on the representative CIFAR10-LT, CIFAR100-LT, and ImageNet-LT benchmarks against several state-of-the-art OOD detection approaches. Code is available at https://github.com/alibaba/imood.", "pdf": "https://openreview.net/pdf/0f5504939a2af37b795c46393672bdb91215f792.pdf"} {"title": "Grid4D: 4D Decomposed Hash Encoding for High-Fidelity Dynamic Gaussian Splatting", "url": "https://openreview.net/forum?id=eyfYC19gOd", "detail_url": "https://openreview.net/forum?id=eyfYC19gOd", "authors": "Jiawei Xu,Zexin Fan,Jian Yang,Jin Xie", "tags": "NIPS 2024,Poster", "abstract": "Recently, Gaussian splatting has received more and more attention in the field of static scene rendering. Due to the low computational overhead and inherent flexibility of explicit representations, plane-based explicit methods are popular ways to predict deformations for Gaussian-based dynamic scene rendering models. However, plane-based methods rely on the inappropriate low-rank assumption and excessively decompose the space-time 4D encoding, resulting in excessive feature overlap and unsatisfactory rendering quality. To tackle these problems, we propose Grid4D, a dynamic scene rendering model based on Gaussian splatting and employing a novel explicit encoding method for the 4D input through the hash encoding. Different from plane-based explicit representations, we decompose the 4D encoding into one spatial and three temporal 3D hash encodings without the low-rank assumption. Additionally, we design a novel attention module that generates the attention scores in a directional range to aggregate the spatial and temporal features. The directional attention enables Grid4D to more accurately fit the diverse deformations across distinct scene components based on the spatial encoded features. Moreover, to mitigate the inherent lack of smoothness in explicit representation methods, we introduce a smooth regularization term that keeps our model from chaotic deformation predictions. Our experiments demonstrate that Grid4D significantly outperforms the state-of-the-art models in visual quality and rendering speed.", "pdf": "https://openreview.net/pdf/997cb5e7d321dc183da55b3ac5e7360d9c3eeb68.pdf"} {"title": "Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases", "url": "https://openreview.net/forum?id=qPpVDzPhSL", "detail_url": "https://openreview.net/forum?id=qPpVDzPhSL", "authors": "Zian Su,Xiangzhe Xu,Ziyang Huang,Kaiyuan Zhang,Xiangyu Zhang", "tags": "NIPS 2024,Poster", "abstract": "Human-Oriented Binary Reverse Engineering (HOBRE) lies at the intersection of binary and source code, aiming to lift binary code to human-readable content relevant to source code, thereby bridging the binary-source semantic gap. Recent advancements in uni-modal code model pre-training, particularly in generative Source Code Foundation Models (SCFMs) and binary understanding models, have laid the groundwork for transfer learning applicable to HOBRE. However, existing approaches for HOBRE rely heavily on uni-modal models like SCFMs for supervised fine-tuning or general LLMs for prompting, resulting in sub-optimal performance. Inspired by recent progress in large multi-modal models, we propose that it is possible to harness the strengths of uni-modal code models from both sides to bridge the semantic gap effectively. 
In this paper, we introduce a novel probe-and-recover framework that incorporates a binary-source encoder-decoder model and black-box LLMs for binary analysis. Our approach leverages the pre-trained knowledge within SCFMs to synthesize relevant, symbol-rich code fragments as context. This additional context enables black-box LLMs to enhance recovery accuracy. We demonstrate significant improvements in zero-shot binary summarization and binary function name recovery, with a 10.3% relative gain in CHRF and a 16.7% relative gain in a GPT4-based metric for summarization, as well as a 6.7% and 7.4% absolute increase in token-level precision and recall for name recovery, respectively. These results highlight the effectiveness of our approach in automating and improving binary code analysis.", "pdf": "https://openreview.net/pdf/5cbff746a2a2d748695aeed7679dc157b97f5e6d.pdf"} {"title": "Flipping-based Policy for Chance-Constrained Markov Decision Processes", "url": "https://openreview.net/forum?id=7t9eDEY2GT", "detail_url": "https://openreview.net/forum?id=7t9eDEY2GT", "authors": "Xun Shen,Shuo Jiang,Akifumi Wachi,Kazumune Hashimoto,Sebastien Gros", "tags": "NIPS 2024,Poster", "abstract": "Safe reinforcement learning (RL) is a promising approach for many real-world decision-making problems where ensuring safety is a critical necessity. In safe RL research, while expected cumulative safety constraints (ECSCs) are typically the first choices, chance constraints are often more pragmatic for incorporating safety under uncertainties. This paper proposes a \\textit{flipping-based policy} for Chance-Constrained Markov Decision Processes (CCMDPs). The flipping-based policy selects the next action by tossing a potentially distorted coin between two action candidates. The probability of the flip and the two action candidates vary depending on the state. We establish a Bellman equation for CCMDPs and further prove the existence of a flipping-based policy within the optimal solution sets. Since solving the problem with joint chance constraints is challenging in practice, we then prove that joint chance constraints can be approximated into Expected Cumulative Safety Constraints (ECSCs) and that there exists a flipping-based policy in the optimal solution sets for constrained MDPs with ECSCs. As a specific instance of practical implementations, we present a framework for adapting constrained policy optimization to train a flipping-based policy. This framework can be applied to other safe RL algorithms. We demonstrate that the flipping-based policy can improve the performance of the existing safe RL algorithms under the same limits of safety constraints on Safety Gym benchmarks.", "pdf": "https://openreview.net/pdf/ce9faa63f83be7fccd370498d97e71e13a32ed1e.pdf"} {"title": "AverNet: All-in-one Video Restoration for Time-varying Unknown Degradations", "url": "https://openreview.net/forum?id=UcdaNf2PKL", "detail_url": "https://openreview.net/forum?id=UcdaNf2PKL", "authors": "Haiyu Zhao,Lei Tian,Xinyan Xiao,Peng Hu,Yuanbiao Gou,Xi Peng", "tags": "NIPS 2024,Poster", "abstract": "Traditional video restoration approaches were designed to recover clean videos from a specific type of degradation, making them ineffective in handling multiple unknown types of degradation. To address this issue, several studies have been conducted and have shown promising results. However, these studies overlook that the degradations in video usually change over time, dubbed time-varying unknown degradations (TUD). 
To tackle such a less-touched challenge, we propose an innovative method, termed All-in-one VidEo Restoration Network (AverNet), which comprises two core modules, i.e., the Prompt-Guided Alignment (PGA) module and the Prompt-Conditioned Enhancement (PCE) module. Specifically, PGA addresses the issue of pixel shifts caused by time-varying degradations by learning and utilizing prompts to align video frames at the pixel level. To handle multiple unknown degradations, PCE recasts the task as a conditional restoration problem by implicitly establishing a conditional map between degradations and ground truths. Thanks to the collaboration between the PGA and PCE modules, AverNet empirically demonstrates its effectiveness in recovering videos from TUD. Extensive experiments are carried out on two synthesized datasets featuring seven types of degradations with random corruption levels. The code is available at https://github.com/XLearning-SCU/2024-NeurIPS-AverNet.", "pdf": "https://openreview.net/pdf/cd985f5642f31d02e47d062bc783deb7c2d1fa8a.pdf"} {"title": "Can Large Language Model Agents Simulate Human Trust Behavior?", "url": "https://openreview.net/forum?id=CeOwahuQic", "detail_url": "https://openreview.net/forum?id=CeOwahuQic", "authors": "Chengxing Xie,Canyu Chen,Feiran Jia,Ziyu Ye,Shiyang Lai,Kai Shu,Jindong Gu,Adel Bibi,Ziniu Hu,David Jurgens,James Evans,Philip Torr,Bernard Ghanem,Guohao Li", "tags": "NIPS 2024,Poster", "abstract": "Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one critical and elemental behavior in human interactions, trust, and investigate whether LLM agents can simulate human trust behavior. We first find that LLM agents generally exhibit trust behavior, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the feasibility of simulating human trust behavior with LLM agents. In addition, we probe the biases of agent trust and differences in agent trust towards other LLM agents and humans. We also explore the intrinsic properties of agent trust under conditions including external manipulations and advanced reasoning strategies. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans beyond value alignment. We further illustrate broader implications of our discoveries for applications where trust is paramount.", "pdf": "https://openreview.net/pdf/83af3ec1753188a2e392e049fc2d839ffd53ab72.pdf"} {"title": "Fine-Grained Dynamic Framework for Bias-Variance Joint Optimization on Data Missing Not at Random", "url": "https://openreview.net/forum?id=gLoe70Tn8V", "detail_url": "https://openreview.net/forum?id=gLoe70Tn8V", "authors": "Mingming Ha,Taoxuewen,Wenfang Lin,QIONGXU MA,Wujiang Xu,Linxun Chen", "tags": "NIPS 2024,Poster", "abstract": "In most practical applications such as recommendation systems, display advertising, and so forth, the collected data often contains missing values and those missing values are generally missing-not-at-random, which deteriorates the prediction performance of models. Some existing estimators and regularizers attempt to achieve unbiased estimation to improve the predictive performance. 
However, the variances and generalization bounds of these methods are generally unbounded when the propensity scores tend to zero, compromising their stability and robustness. In this paper, we first theoretically reveal the limitations of regularization techniques. Besides, we further illustrate that, for more general estimators, unbiasedness will inevitably lead to unbounded variance. These general laws suggest that estimator design is not merely about eliminating bias, reducing variance, or simply achieving a bias-variance trade-off. Instead, it involves a quantitative joint optimization of bias and variance. Then, we develop a systematic fine-grained dynamic learning framework to jointly optimize bias and variance, which adaptively selects an appropriate estimator for each user-item pair according to the predefined objective function. With this operation, the generalization bounds and variances of models are reduced and bounded with theoretical guarantees. Extensive experiments are conducted to verify the theoretical results and the effectiveness of the proposed dynamic learning framework.", "pdf": "https://openreview.net/pdf/505e046e62d58252e698b07126ccf23dc3559dce.pdf"} {"title": "Beware of Road Markings: A New Adversarial Patch Attack to Monocular Depth Estimation", "url": "https://openreview.net/forum?id=satH8Evs2y", "detail_url": "https://openreview.net/forum?id=satH8Evs2y", "authors": "Hangcheng Liu,Zhenhu Wu,Hao Wang,XINGSHUO HAN,Shangwei Guo,Tao Xiang,Tianwei Zhang", "tags": "NIPS 2024,Poster", "abstract": "Monocular Depth Estimation (MDE) enables the prediction of scene depths from a single RGB image, having been widely integrated into production-grade autonomous driving systems, e.g., Tesla Autopilot. Current adversarial attacks on MDE models focus on attaching an optimized adversarial patch to a designated obstacle. Although effective, this approach presents two inherent limitations: its reliance on specific obstacles and its limited malicious impact. In contrast, we propose a pioneering attack on MDE models that \\textit{decouples obstacles from patches physically and deploys optimized patches on roads}, thereby extending the attack scope to arbitrary traffic participants. This approach is inspired by our groundbreaking discovery: \\textit{various MDE models with different architectures, trained for autonomous driving, heavily rely on road regions} when predicting depths for different obstacles. Based on this discovery, we design the Adversarial Road Marking (AdvRM) attack, which camouflages patches as ordinary road markings and deploys them on roads, thereby posing a continuous threat within the environment. Experimental results from both dataset simulations and real-world scenarios demonstrate that AdvRM is effective, stealthy, and robust against various MDE models, achieving a Mean Relative Shift Ratio (MRSR) of about 1.507 over 8 MDE models. The code is available at \\url{https://github.com/a-c-a-c/AdvRM.git}", "pdf": "https://openreview.net/pdf/a1ebdee7fb14adc46308d7fb5dd26e102ea198cd.pdf"} {"title": "Should We Really Edit Language Models? On the Evaluation of Edited Language Models", "url": "https://openreview.net/forum?id=m0DS4OOmSY", "detail_url": "https://openreview.net/forum?id=m0DS4OOmSY", "authors": "Qi Li,Xiang Liu,Zhenheng Tang,Peijie Dong,Zeyu Li,Xinglin Pan,Xiaowen Chu", "tags": "NIPS 2024,Poster", "abstract": "Model editing has become an increasingly popular alternative for efficiently updating knowledge within language models. 
\nCurrent methods mainly focus on reliability, generalization, and locality, with many methods excelling across these criteria. \nSome recent works disclose the pitfalls of these editing methods, such as knowledge distortion or conflict. However, the general abilities of post-edited language models remain unexplored. \nIn this paper, we perform a comprehensive evaluation of various editing methods and different language models, and have the following findings.\n(1) Existing editing methods lead to inevitable performance deterioration on general benchmarks, indicating that they maintain the general abilities of the model for only a few dozen edits.\nWhen the number of edits grows larger, the intrinsic knowledge structure of the model is disrupted or even completely damaged. \n(2) Instruction-tuned models are more robust to editing, showing less performance drop on general knowledge after editing. \n(3) Language models at larger scales are more resistant to editing than smaller ones.\n(4) The safety of the edited model is significantly weakened, even for safety-aligned models.\nOur findings indicate that current editing methods are only suitable for small-scale knowledge updates within language models, which motivates further research on more practical and reliable editing methods.", "pdf": "https://openreview.net/pdf/c301e68e1654036a0ce0fcafa25aff07b827879a.pdf"} {"title": "A Layer-Wise Natural Gradient Optimizer for Training Deep Neural Networks", "url": "https://openreview.net/forum?id=niG3Yyb6oA", "detail_url": "https://openreview.net/forum?id=niG3Yyb6oA", "authors": "Xiaolei Liu,Shaoshuai Li,Kaixin Gao,Binfeng Wang", "tags": "NIPS 2024,Poster", "abstract": "Second-order optimization algorithms, such as the Newton method and the natural gradient descent (NGD) method, exhibit excellent convergence properties for training deep neural networks, but their high computational cost limits practical application. In this paper, we focus on the NGD method and propose a novel layer-wise natural gradient descent (LNGD) method to further reduce computational costs and accelerate the training process. Specifically, based on the block diagonal approximation of the Fisher information matrix, we first propose the layer-wise sample method to compute each block matrix without performing a complete back-propagation. Then, each block matrix is approximated as a Kronecker product of two smaller matrices, one of which is a diagonal matrix, while keeping the traces equal before and after approximation. By these two steps, we provide a new approximation for the Fisher information matrix, which can effectively reduce the computational cost while preserving the main information of each block matrix. Moreover, we propose a new adaptive layer-wise learning rate to further accelerate training. Based on these new approaches, we propose the LNGD optimizer. The global convergence analysis of LNGD is established under some assumptions. 
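The trace-preserving Kronecker step in the LNGD abstract above can be illustrated with a small numpy sketch. The partial-trace construction below is an assumption chosen for illustration, not necessarily the paper's exact factorization; only the property stated in the abstract (one factor diagonal, traces equal before and after) is taken as given.

```python
import numpy as np

def kron_diag_approx(F, m, n):
    """Approximate an (m*n x m*n) PSD block F by A kron D with D diagonal,
    rescaled so that trace(A kron D) = trace(F).

    Illustrative construction via partial traces; LNGD's exact recipe may differ.
    """
    F4 = F.reshape(m, n, m, n)
    A = np.einsum('ikjk->ij', F4)   # partial trace over the second factor
    d = np.einsum('ikik->k', F4)    # diagonal partial trace over the first factor
    D = np.diag(d)
    approx = np.kron(A, D)
    # enforce equal traces before and after the approximation
    approx *= np.trace(F) / np.trace(approx)
    return approx

m, n = 4, 3
rng = np.random.default_rng(1)
G = rng.normal(size=(m * n, 64))
F = G @ G.T / 64                     # a synthetic Fisher-like PSD block
F_hat = kron_diag_approx(F, m, n)
print(np.trace(F), np.trace(F_hat))  # traces match by construction
```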
Experiments on image classification and machine translation tasks show that our method is highly competitive with state-of-the-art methods.", "pdf": "https://openreview.net/pdf/4e9a309d32add1feb680de166d83ef130bbc0ada.pdf"} {"title": "PLIP: Language-Image Pre-training for Person Representation Learning", "url": "https://openreview.net/forum?id=e49QqJxCwq", "detail_url": "https://openreview.net/forum?id=e49QqJxCwq", "authors": "Jialong Zuo,Jiahao Hong,Feng Zhang,Changqian Yu,Hanyu Zhou,Changxin Gao,Nong Sang,Jingdong Wang", "tags": "NIPS 2024,Poster", "abstract": "Language-image pre-training is an effective technique for learning powerful representations in general domains. However, when directly turning to person representation learning, these general pre-training methods suffer from unsatisfactory performance. The reason is that they neglect critical person-related characteristics, i.e., fine-grained attributes and identities. To address this issue, we propose a novel language-image pre-training framework for person representation learning, termed PLIP. Specifically, we elaborately design three pretext tasks: 1) Text-guided Image Colorization, which aims to establish the correspondence between the person-related image regions and the fine-grained color-part textual phrases; 2) Image-guided Attributes Prediction, which aims to mine fine-grained attribute information of the person body in the image; and 3) Identity-based Vision-Language Contrast, which aims to correlate the cross-modal representations at the identity level rather than the instance level. Moreover, to implement our pre-training framework, we construct a large-scale person dataset with image-text pairs named SYNTH-PEDES by automatically generating textual annotations. We pre-train PLIP on SYNTH-PEDES and evaluate our models across a range of downstream person-centric tasks. PLIP not only significantly improves existing methods on all these tasks, but also shows great ability in the zero-shot and domain generalization settings. The code, dataset and weights will be made publicly available.", "pdf": "https://openreview.net/pdf/9e7e1f6b05e0249cddf38f0408b075a905a38419.pdf"} {"title": "One-Step Effective Diffusion Network for Real-World Image Super-Resolution", "url": "https://openreview.net/forum?id=TPtXnpRvur", "detail_url": "https://openreview.net/forum?id=TPtXnpRvur", "authors": "Rongyuan Wu,Lingchen Sun,Zhiyuan Ma,Lei Zhang", "tags": "NIPS 2024,Poster", "abstract": "The pre-trained text-to-image diffusion models have been increasingly employed to tackle the real-world image super-resolution (Real-ISR) problem due to their powerful generative image priors. Most of the existing methods start from random noise to reconstruct the high-quality (HQ) image under the guidance of the given low-quality (LQ) image. While promising results have been achieved, such Real-ISR methods require multiple diffusion steps to reproduce the HQ image, increasing the computational cost. Meanwhile, the random noise introduces uncertainty in the output, which is unfriendly to image restoration tasks. To address these issues, we propose a one-step effective diffusion network, namely OSEDiff, for the Real-ISR problem. \nWe argue that the LQ image contains rich information to restore its HQ counterpart, and hence the given LQ image can be directly taken as the starting point for diffusion, eliminating the uncertainty introduced by random noise sampling. We finetune the pre-trained diffusion network with trainable layers to adapt it to complex image degradations. 
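A toy torch sketch of the one-step idea just described in the OSEDiff abstract: the LQ latent itself is the diffusion starting point, so restoration needs a single network evaluation and no random noise. The TinyDenoiser module and latent dimensions below are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a finetuned latent diffusion network (purely illustrative)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, z, t):
        t_embed = t.expand(z.shape[0], 1)
        return self.net(torch.cat([z, t_embed], dim=-1))  # predicts the clean latent

def one_step_restore(denoiser, z_lq, t_star=0.5):
    """OSEDiff-style inference sketch: start diffusion from the LQ latent and
    take a single denoiser step. No noise is sampled, so the output is
    deterministic for a given input."""
    t = torch.tensor([t_star])
    return denoiser(z_lq, t)

denoiser = TinyDenoiser()
z_lq = torch.randn(2, 16)   # latents of two LQ images (hypothetical encoder output)
z_hq = one_step_restore(denoiser, z_lq)
print(z_hq.shape)           # torch.Size([2, 16])
```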
To ensure that the one-step diffusion model can yield HQ Real-ISR output, we apply variational score distillation in the latent space to conduct KL-divergence regularization. As a result, our OSEDiff model can efficiently and effectively generate HQ images in just one diffusion step. \nOur experiments demonstrate that OSEDiff achieves comparable or even better Real-ISR results, in terms of both objective metrics and subjective evaluations, than previous diffusion model-based Real-ISR methods that require dozens or hundreds of steps. The source codes are released at https://github.com/cswry/OSEDiff.", "pdf": "https://openreview.net/pdf/4c1b54962cbd7e494adea4814a6a5108930a14cb.pdf"} {"title": "AdaNovo: Towards Robust \\emph{De Novo} Peptide Sequencing in Proteomics against Data Biases", "url": "https://openreview.net/forum?id=0zfUiSX5si", "detail_url": "https://openreview.net/forum?id=0zfUiSX5si", "authors": "Jun Xia,Shaorong Chen,Jingbo Zhou,Xiaojun Shan,Wenjie Du,Zhangyang Gao,Cheng Tan,Bozhen Hu,Jiangbin Zheng,Stan Z. Li", "tags": "NIPS 2024,Poster", "abstract": "Tandem mass spectrometry has played a pivotal role in advancing proteomics, enabling the high-throughput analysis of protein composition in biological tissues. Despite the development of several deep learning methods for predicting amino acid sequences (peptides) responsible for generating the observed mass spectra, training data biases hinder further advancements of \\emph{de novo} peptide sequencing. Firstly, prior methods struggle to identify amino acids with Post-Translational Modifications (PTMs) due to their lower frequency in training data compared to canonical amino acids, further resulting in unsatisfactory peptide sequencing performance. Secondly, various noise and missing peaks in mass spectra reduce the reliability of training data (Peptide-Spectrum Matches, PSMs). To address these challenges, we propose AdaNovo, a novel and domain knowledge-inspired framework that calculates Conditional Mutual Information (CMI) between the mass spectra and amino acids or peptides, using CMI for robust training against the above biases. Extensive experiments indicate that AdaNovo outperforms previous competitors on the widely-used 9-species benchmark, while yielding 3.6\\% - 9.4\\% improvements in PTM identification. The supplements contain the code.", "pdf": "https://openreview.net/pdf/7308639090747be10f61a16d97e074fc72d512c7.pdf"} {"title": "Plan-on-Graph: Self-Correcting Adaptive Planning of Large Language Model on Knowledge Graphs", "url": "https://openreview.net/forum?id=CwCUEr6wO5", "detail_url": "https://openreview.net/forum?id=CwCUEr6wO5", "authors": "Liyi Chen,Panrong Tong,Zhongming Jin,Ying Sun,Jieping Ye,Hui Xiong", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have shown remarkable reasoning capabilities on complex tasks, but they still suffer from out-of-date knowledge, hallucinations, and opaque decision-making. In contrast, Knowledge Graphs (KGs) can provide explicit and editable knowledge for LLMs to alleviate these issues. The existing paradigm of KG-augmented LLMs manually predefines the breadth of the exploration space and requires flawless navigation in KGs. However, this paradigm cannot adaptively explore reasoning paths in KGs based on the question semantics or self-correct erroneous reasoning paths, resulting in a bottleneck in efficiency and effectiveness. 
To address these limitations, we propose a novel self-correcting adaptive planning paradigm for KG-augmented LLMs named Plan-on-Graph (PoG), which first decomposes the question into several sub-objectives and then repeats the process of adaptively exploring reasoning paths, updating memory, and reflecting on the need to self-correct erroneous reasoning paths until arriving at the answer. Specifically, three important mechanisms of Guidance, Memory, and Reflection are designed to work together to guarantee the adaptive breadth of self-correcting planning for graph reasoning. Finally, extensive experiments on three real-world datasets demonstrate the effectiveness and efficiency of PoG.", "pdf": "https://openreview.net/pdf/8d8eb41237720f3000f638b3f7d88dde88ce6ff4.pdf"} {"title": "SAM-Guided Masked Token Prediction for 3D Scene Understanding", "url": "https://openreview.net/forum?id=F9i1avQTla", "detail_url": "https://openreview.net/forum?id=F9i1avQTla", "authors": "Zhimin Chen,Liang Yang,Yingwei Li,Longlong Jing,Bing Li", "tags": "NIPS 2024,Poster", "abstract": "Foundation models have significantly enhanced 2D task performance, and recent works like Bridge3D have successfully applied these models to improve 3D scene understanding through knowledge distillation, marking considerable advancements. Nonetheless, challenges such as the misalignment between 2D and 3D representations and the persistent long-tail distribution in 3D datasets still restrict the effectiveness of knowledge distillation from 2D to 3D using foundation models. To tackle these issues, we introduce a novel SAM-guided tokenization method that seamlessly aligns 3D transformer structures with region-level knowledge distillation, replacing the traditional KNN-based tokenization techniques. Additionally, we implement a group-balanced re-weighting strategy to effectively address the long-tail problem in knowledge distillation. Furthermore, inspired by the recent success of masked feature prediction, our framework incorporates a two-stage masked token prediction process in which the student model predicts both the global embeddings and token-wise local embeddings derived from the teacher models trained in the first stage. Our methodology has been validated across multiple datasets, including SUN RGB-D, ScanNet, and S3DIS, for tasks like 3D object detection and semantic segmentation. The results demonstrate significant improvements over current state-of-the-art self-supervised methods, establishing new benchmarks in this field.", "pdf": "https://openreview.net/pdf/94b2e7e6e3253c831e11f1650cfbe13886a2a3eb.pdf"} {"title": "IR-CM: The Fast and General-purpose Image Restoration Method Based on Consistency Model", "url": "https://openreview.net/forum?id=2bon4HLFkN", "detail_url": "https://openreview.net/forum?id=2bon4HLFkN", "authors": "Xiaoxuan Gong,Jie Ma", "tags": "NIPS 2024,Poster", "abstract": "This paper proposes a fast and general-purpose image restoration method. The key idea is to achieve few-step or even one-step inference by conducting consistency distillation or training on a specific mean-reverting stochastic differential equation. Furthermore, based on this, we propose a novel linear-nonlinear decoupling training strategy, significantly enhancing training effectiveness and surpassing consistency distillation in inference performance. This allows our method to be independent of any pre-trained checkpoint, enabling it to serve as an effective standalone image-to-image transformation model. 
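The Guidance/Memory/Reflection loop in the PoG abstract above might look roughly like the following Python skeleton. The llm and kg interfaces are hypothetical stand-ins, and the prompts and control flow are illustrative only, not the paper's implementation.

```python
def plan_on_graph(question, kg, llm, max_steps=10):
    """Skeleton of a PoG-style self-correcting exploration loop.

    `llm(prompt)` and `kg.neighbors(entity)` are hypothetical interfaces;
    the real system's prompts and KG access will differ.
    """
    sub_objectives = llm(f"Decompose into sub-objectives: {question}")  # Guidance
    memory = {"paths": [], "notes": []}                                 # Memory
    frontier = llm(f"Seed entities for: {question}")
    for _ in range(max_steps):
        # adaptively choose how many edges to expand, given the sub-objectives
        candidates = [edge for e in frontier for edge in kg.neighbors(e)]
        frontier = llm(f"Given {sub_objectives} and notes {memory['notes']}, "
                       f"pick edges to follow from: {candidates}")
        memory["paths"].append(frontier)
        verdict = llm(f"Reflect: do paths {memory['paths']} answer {question}? "
                      f"Reply 'done', 'continue', or 'backtrack'.")     # Reflection
        if verdict == "done":
            return llm(f"Answer {question} using {memory['paths']}")
        if verdict == "backtrack":                                      # self-correction
            memory["paths"].pop()
            frontier = memory["paths"][-1] if memory["paths"] else frontier
    return llm(f"Best-effort answer for {question} from {memory['paths']}")
```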
Finally, to avoid trivial solutions and stabilize model training, we introduce a simple origin-guided loss. To validate the effectiveness of our proposed method, we conducted experiments on tasks including image deraining, denoising, deblurring, and low-light image enhancement. The experiments show that our method achieves highly competitive results with only one-step inference. With just two-step inference, it achieves state-of-the-art performance in low-light image enhancement. Furthermore, a number of ablation experiments demonstrate the effectiveness of the proposed training strategy. Our code is available at https://github.com/XiaoxuanGong/IR-CM.", "pdf": "https://openreview.net/pdf/05c34ffd5d7f8dedee63503f55c6d7f983fcecbc.pdf"} {"title": "RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations for Universal Robustness", "url": "https://openreview.net/forum?id=u1Z3HWz4VJ", "detail_url": "https://openreview.net/forum?id=u1Z3HWz4VJ", "authors": "Enyi Jiang,Gagandeep Singh", "tags": "NIPS 2024,Poster", "abstract": "Most existing works focus on improving robustness against adversarial attacks bounded by a single $l_p$ norm using adversarial training (AT). However, these AT models' multiple-norm robustness (union accuracy) is still low, which is crucial since in the real world an adversary is not necessarily bounded by a single norm. The tradeoffs among robustness against multiple $l_p$ perturbations and accuracy/robustness make obtaining good union and clean accuracy challenging. We design a logit pairing loss to improve the union accuracy by analyzing the tradeoffs from the lens of distribution shifts. We connect natural training (NT) with AT via gradient projection, to incorporate useful information from NT into AT, where we empirically and theoretically show it moderates the accuracy/robustness tradeoff. We propose a novel training framework \\textbf{RAMP}, to boost the robustness against multiple $l_p$ perturbations. \\textbf{RAMP} can be easily adapted for robust fine-tuning and full AT. For robust fine-tuning, \\textbf{RAMP} obtains a union accuracy up to $53.3\\%$ on CIFAR-10, and $29.1\\%$ on ImageNet. For training from scratch, \\textbf{RAMP} achieves a union accuracy of $44.6\\%$ and good clean accuracy of $81.2\\%$ on ResNet-18 against AutoAttack on CIFAR-10. Beyond multi-norm robustness, \\textbf{RAMP}-trained models achieve superior \\textit{universal robustness}, effectively generalizing against a range of unseen adversaries and natural corruptions.", "pdf": "https://openreview.net/pdf/545fc3560c1ccd73e712583aba057a4b6cd52393.pdf"} {"title": "The High Line: Exact Risk and Learning Rate Curves of Stochastic Adaptive Learning Rate Algorithms", "url": "https://openreview.net/forum?id=4VWnC5unAV", "detail_url": "https://openreview.net/forum?id=4VWnC5unAV", "authors": "Elizabeth Collins-Woodfin,Inbar Seroussi,Bego\u00f1a Garc\u00eda Malaxechebarr\u00eda,Andrew Mackenzie,Elliot Paquette,Courtney Paquette", "tags": "NIPS 2024,Poster", "abstract": "We develop a framework for analyzing the training and learning rate dynamics on a large class of high-dimensional optimization problems, which we call the high line, trained using one-pass stochastic gradient descent (SGD) with adaptive learning rates. We give exact expressions for the risk and learning rate curves in terms of a deterministic solution to a system of ODEs. We then investigate in detail two adaptive learning rates -- an idealized exact line search and AdaGrad-Norm -- on the least squares problem. 
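A runnable numpy sketch of AdaGrad-Norm one-pass SGD on a synthetic least-squares problem, matching the setting named in the High Line abstract above; the covariance spectrum, step count, and base rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, eta = 50, 20_000, 1.0

# data covariance with eigenvalues on [0.5, 1.5]; noiseless linear targets
eigs = np.linspace(0.5, 1.5, d)
x_star = rng.normal(size=d)

w = np.zeros(d)
b2 = 1e-8                                   # accumulated squared gradient norms
for _ in range(steps):
    a = np.sqrt(eigs) * rng.normal(size=d)  # one-pass sample, covariance = diag(eigs)
    g = (a @ w - a @ x_star) * a            # stochastic least-squares gradient
    b2 += g @ g
    w -= eta / np.sqrt(b2) * g              # AdaGrad-Norm update

print("final AdaGrad-Norm stepsize:", eta / np.sqrt(b2))
print("risk:", 0.5 * np.sum(eigs * (w - x_star) ** 2))
```

Tracking the printed stepsize over training is one way to observe the deterministic limiting learning rate the abstract characterizes for noiseless targets.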
When the data covariance matrix has strictly positive eigenvalues, this idealized exact line search strategy can exhibit arbitrarily slower convergence when compared to the optimal fixed learning rate with SGD. Moreover we exactly characterize the limiting learning rate (as time goes to infinity) for line search in the setting where the data covariance has only two distinct eigenvalues. For noiseless targets, we further demonstrate that the AdaGrad-Norm learning rate converges to a deterministic constant inversely proportional to the average eigenvalue of the data covariance matrix, and identify a phase transition when the covariance density of eigenvalues follows a power law distribution. We provide\nour code for evaluation at https://github.com/amackenzie1/highline2024.", "pdf": "https://openreview.net/pdf/07b35fb1dfd73b1bf3e77793bfb98d5cc8c6fdde.pdf"} {"title": "Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models and Time-Dependent Layer Normalization", "url": "https://openreview.net/forum?id=3JwMwL8i5f", "detail_url": "https://openreview.net/forum?id=3JwMwL8i5f", "authors": "Qihao Liu,Zhanpeng Zeng,Ju He,Qihang Yu,Xiaohui Shen,Liang-Chieh Chen", "tags": "NIPS 2024,Poster", "abstract": "This paper presents innovative enhancements to diffusion models by integrating a novel multi-resolution network and time-dependent layer normalization.\nDiffusion models have gained prominence for their effectiveness in high-fidelity image generation.\nWhile conventional approaches rely on convolutional U-Net architectures, recent Transformer-based designs have demonstrated superior performance and scalability.\nHowever, Transformer architectures, which tokenize input data (via \"patchification\"), face a trade-off between visual fidelity and computational complexity due to the quadratic nature of self-attention operations concerning token length.\nWhile larger patch sizes enable attention computation efficiency, they struggle to capture fine-grained visual details, leading to image distortions.\nTo address this challenge, we propose augmenting the **Di**ffusion model with the **M**ulti-**R**esolution network (DiMR), a framework that refines features across multiple resolutions, progressively enhancing detail from low to high resolution.\nAdditionally, we introduce Time-Dependent Layer Normalization (TD-LN), a parameter-efficient approach that incorporates time-dependent parameters into layer normalization to inject time information and achieve superior performance.\nOur method's efficacy is demonstrated on the class-conditional ImageNet generation benchmark, where DiMR-XL variants surpass previous diffusion models, achieving FID scores of 1.70 on ImageNet $256 \\times 256$ and 2.89 on ImageNet $512 \\times 512$. Our best variant, DiMR-G, further establishes a state-of-the-art 1.63 FID on ImageNet $256 \\times 256$.", "pdf": "https://openreview.net/pdf/e7f4c2683e89372926dfe4ef56f61032d8f0dc35.pdf"} {"title": "ProtGO: Function-Guided Protein Modeling for Unified Representation Learning", "url": "https://openreview.net/forum?id=0oUutV92YF", "detail_url": "https://openreview.net/forum?id=0oUutV92YF", "authors": "Bozhen Hu,Cheng Tan,Yongjie Xu,Zhangyang Gao,Jun Xia,Lirong Wu,Stan Z. Li", "tags": "NIPS 2024,Poster", "abstract": "Protein representation learning is indispensable for various downstream applications of artificial intelligence for bio-medicine research, such as drug design and function prediction. 
However, achieving effective representation learning for proteins poses challenges due to the diversity of data modalities involved, including sequence, structure, and function annotations. Despite the impressive capabilities of large language models in biomedical text modelling, there remains a pressing need for a framework that seamlessly integrates these diverse modalities, particularly focusing on the three critical aspects of protein information: sequence, structure, and function. Moreover, addressing the inherent data scale differences among these modalities is essential. To tackle these challenges, we introduce ProtGO, a unified model that harnesses a teacher network equipped with a customized graph neural network (GNN) and a Gene Ontology (GO) encoder to learn hybrid embeddings. Notably, our approach eliminates the need for additional functions as input for the student network, which shares the same GNN module. Importantly, we utilize a domain adaptation method to facilitate distribution approximation for guiding the training of the teacher-student framework. This approach leverages distributions learned from latent representations to avoid the alignment of individual samples. Benchmark experiments highlight that ProtGO significantly outperforms state-of-the-art baselines, clearly demonstrating the advantages of the proposed unified framework.", "pdf": "https://openreview.net/pdf/4c612b62ad07fff120769db3671c70a742345427.pdf"} {"title": "Learning Complete Protein Representation by Dynamically Coupling of Sequence and Structure", "url": "https://openreview.net/forum?id=0e5uOaJxo1", "detail_url": "https://openreview.net/forum?id=0e5uOaJxo1", "authors": "Bozhen Hu,Cheng Tan,Jun Xia,Yue Liu,Lirong Wu,Jiangbin Zheng,Yongjie Xu,Yufei Huang,Stan Z. Li", "tags": "NIPS 2024,Poster", "abstract": "Learning effective representations is imperative for comprehending proteins and deciphering their biological functions. Recent strides in language models and graph neural networks have empowered protein models to harness primary or tertiary structure information for representation learning. Nevertheless, the absence of practical methodologies to appropriately model intricate inter-dependencies between protein sequences and structures has resulted in embeddings that exhibit low performance on tasks such as protein function prediction. In this study, we introduce CoupleNet, a novel framework designed to interlink protein sequences and structures to derive informative protein representations. CoupleNet integrates multiple levels and scales of features in proteins, encompassing residue identities and positions for sequences, as well as geometric representations for tertiary structures from both local and global perspectives. A two-type dynamic graph is constructed to capture adjacent and distant sequential features and structural geometries, achieving completeness at the amino acid and backbone levels. Additionally, convolutions are executed on nodes and edges simultaneously to generate comprehensive protein embeddings. 
Experimental results on benchmark datasets showcase that CoupleNet outperforms state-of-the-art methods, exhibiting particularly superior performance in low-sequence-similarity scenarios, adeptly identifying infrequently encountered functions and effectively capturing remote homology relationships in proteins.", "pdf": "https://openreview.net/pdf/9852aa5ac3399daafadffb9c16a38eabd5ab96d6.pdf"} {"title": "DiGRAF: Diffeomorphic Graph-Adaptive Activation Function", "url": "https://openreview.net/forum?id=ZZoW4Z3le4", "detail_url": "https://openreview.net/forum?id=ZZoW4Z3le4", "authors": "Krishna Sri Ipsit Mantri,Xinzhi Wang,Carola-Bibiane Sch\u00f6nlieb,Bruno Ribeiro,Beatrice Bevilacqua,Moshe Eliasof", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we propose a novel activation function tailored specifically for graph data in Graph Neural Networks (GNNs). Motivated by the need for graph-adaptive and flexible activation functions, we introduce DiGRAF, leveraging Continuous Piecewise-Affine Based (CPAB) transformations, which we augment with an additional GNN to learn a graph-adaptive diffeomorphic activation function in an end-to-end manner. In addition to its graph-adaptivity and flexibility, DiGRAF also possesses properties that are widely recognized as desirable for activation functions, such as differentiability, boundedness within the domain, and computational efficiency. \nWe conduct an extensive set of experiments across diverse datasets and tasks, demonstrating a consistent and superior performance of DiGRAF compared to traditional and graph-specific activation functions, highlighting its effectiveness as an activation function for GNNs. Our code is available at https://github.com/ipsitmantri/DiGRAF.", "pdf": "https://openreview.net/pdf/11ef9ae8285b6dddf7a09b3540e27c711f7a182f.pdf"} {"title": "Information-theoretic Limits of Online Classification with Noisy Labels", "url": "https://openreview.net/forum?id=Ke3MSP8Nr6", "detail_url": "https://openreview.net/forum?id=Ke3MSP8Nr6", "authors": "Changlong Wu,Ananth Grama,Wojciech Szpankowski", "tags": "NIPS 2024,Poster", "abstract": "We study online classification with general hypothesis classes where the true labels are determined by some function within the class, but are corrupted by *unknown* stochastic noise, and the features are generated adversarially. Predictions are made using observed *noisy* labels and noiseless features, while the performance is measured via minimax risk when comparing against *true* labels. The noisy mechanism is modeled via a general noisy kernel that specifies, for any individual data point, a set of distributions from which the actual noisy label distribution is chosen. We show that minimax risk is *tightly* characterized (up to a logarithmic factor of the hypothesis class size) by the *Hellinger gap* of the noisy label distributions induced by the kernel, *independent* of other properties such as the means and variances of the noise. Our main technique is based on a novel reduction to an online comparison scheme of two hypotheses, along with a new *conditional* version of Le Cam-Birg\u00e9 testing suitable for online settings. 
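The Hellinger gap named in the abstract above is built from Hellinger distances between the noisy label distributions that two hypotheses induce on the same feature; a minimal numpy sketch of that basic ingredient (with made-up flip probabilities) follows.

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 1.0 - np.sum(np.sqrt(p * q))

# Noisy label distributions induced by two hypotheses on one feature:
# hypothesis 1 predicts label 1 but the kernel flips it with prob 0.2,
# hypothesis 2 predicts label 0 with flip prob 0.3 (illustrative numbers).
p_h1 = [0.2, 0.8]   # (P[noisy label = 0], P[noisy label = 1])
p_h2 = [0.7, 0.3]

print("squared Hellinger distance:", hellinger_sq(p_h1, p_h2))
```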
Our work provides the first comprehensive characterization of noisy online classification with guarantees that apply to the *ground truth* while addressing *general* noisy observations.", "pdf": "https://openreview.net/pdf/65e82b580900e3c346c24f73f48b1651548ad990.pdf"} {"title": "The Power of Extrapolation in Federated Learning", "url": "https://openreview.net/forum?id=FuTfZK7PK3", "detail_url": "https://openreview.net/forum?id=FuTfZK7PK3", "authors": "Hanmin Li,Kirill Acharya,Peter Richt\u00e1rik", "tags": "NIPS 2024,Poster", "abstract": "We propose and study several server-extrapolation strategies for enhancing the theoretical and empirical convergence properties of the popular federated learning optimizer FedProx [Li et al., 2020]. While it has long been known that some form of extrapolation can help in the practice of FL, only a handful of works provide any theoretical guarantees. The phenomenon seems elusive, and our current theoretical understanding remains severely incomplete. In our work, we focus on smooth convex or strongly convex problems in the interpolation regime. In particular, we propose Extrapolated FedProx (FedExProx), and study three extrapolation strategies: a constant strategy (depending on various smoothness parameters and the number of participating devices), and two smoothness-adaptive strategies; one based on the notion of gradient diversity (FedExProx-GraDS), and the other one based on the stochastic Polyak stepsize (FedExProx-StoPS). Our theory is corroborated with carefully constructed numerical experiments.", "pdf": "https://openreview.net/pdf/81e6fbb8b08ec8a8caa9f09cb0d56bddb12a6334.pdf"} {"title": "First-Order Minimax Bilevel Optimization", "url": "https://openreview.net/forum?id=GZoAUVSkaw", "detail_url": "https://openreview.net/forum?id=GZoAUVSkaw", "authors": "Yifan Yang,Zhaofeng Si,Siwei Lyu,Kaiyi Ji", "tags": "NIPS 2024,Poster", "abstract": "Multi-block minimax bilevel optimization has been studied recently due to its great potential in multi-task learning, robust machine learning, and few-shot learning. However, due to the complex three-level optimization structure, existing algorithms often suffer from issues such as high computing costs due to the second-order model derivatives or high memory consumption in storing all blocks' parameters. In this paper, we tackle these challenges by proposing two novel fully first-order algorithms named FOSL and MemCS. FOSL features a fully single-loop structure by updating all three variables simultaneously, and MemCS is a memory-efficient double-loop algorithm with cold-start initialization. We provide a comprehensive convergence analysis for both algorithms under full and partial block participation, and show that their sample complexities match or outperform those of the same type of methods in standard bilevel optimization. We evaluate our methods in two applications: the recently proposed multi-task deep AUC maximization and a novel rank-based robust meta-learning. Our methods consistently improve over existing methods with better performance over various datasets.", "pdf": "https://openreview.net/pdf/50827d183e9d283c204e1c4fb7696727a4b887a5.pdf"} {"title": "A Functional Extension of Semi-Structured Networks", "url": "https://openreview.net/forum?id=WJAiaslhin", "detail_url": "https://openreview.net/forum?id=WJAiaslhin", "authors": "David R\u00fcgamer,Bernard X.W. 
Liew,Zainab Altai,Almond St\u00f6cker", "tags": "NIPS 2024,Poster", "abstract": "Semi-structured networks (SSNs) merge the structures familiar from additive models with deep neural networks, allowing the modeling of interpretable partial feature effects while capturing higher-order non-linearities at the same time. A significant challenge in this integration is maintaining the interpretability of the additive model component. Inspired by large-scale biomechanics datasets, this paper explores extending SSNs to functional data. Existing methods in functional data analysis are promising but often not expressive enough to account for all interactions and non-linearities and do not scale well to large datasets. Although the SSN approach presents a compelling potential solution, its adaptation to functional data remains complex. In this work, we propose a functional SSN method that retains the advantageous properties of classical functional regression approaches while also improving scalability. Our numerical experiments demonstrate that this approach accurately recovers underlying signals, enhances predictive performance, and performs favorably compared to competing methods.", "pdf": "https://openreview.net/pdf/1793929181991d2caa1689355ceaa7eae219dd1f.pdf"} {"title": "Gradient-based Discrete Sampling with Automatic Cyclical Scheduling", "url": "https://openreview.net/forum?id=4syq5cgwA2", "detail_url": "https://openreview.net/forum?id=4syq5cgwA2", "authors": "Patrick Pynadath,Riddhiman Bhattacharya,ARUN NARAYANAN HARIHARAN,Ruqi Zhang", "tags": "NIPS 2024,Poster", "abstract": "Discrete distributions, particularly in high-dimensional deep models, are often highly multimodal due to inherent discontinuities. While gradient-based discrete sampling has proven effective, it is susceptible to becoming trapped in local modes due to the gradient information. To tackle this challenge, we propose an automatic cyclical scheduling, designed for efficient and accurate sampling in multimodal discrete distributions. Our method contains three key components: (1) a cyclical step size schedule where large steps discover new modes and small steps exploit each mode; (2) a cyclical balancing schedule, ensuring \"balanced\" proposals for given step sizes and high efficiency of the Markov chain; and (3) an automatic tuning scheme for adjusting the hyperparameters in the cyclical schedules, allowing adaptability across diverse datasets with minimal tuning. We prove the non-asymptotic convergence and inference guarantee for our method in general discrete distributions. Extensive experiments demonstrate the superiority of our method in sampling complex multimodal discrete distributions.", "pdf": "https://openreview.net/pdf/edbaedc42e1e4a5e356e0f6f7339a2e5bb695489.pdf"} {"title": "Soft Superpixel Neighborhood Attention", "url": "https://openreview.net/forum?id=bxH6T1w1FW", "detail_url": "https://openreview.net/forum?id=bxH6T1w1FW", "authors": "Kent Gauen,Stanley H. Chan", "tags": "NIPS 2024,Poster", "abstract": "Images contain objects with deformable boundaries, such as the contours of a human face, yet attention operators act on square windows. This mixes features from perceptually unrelated regions, which can degrade the quality of a denoiser. One can exclude pixels using an estimate of perceptual groupings, such as superpixels, but the naive use of superpixels can be theoretically and empirically worse than standard attention. 
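One plausible concrete form of the cyclical step-size schedule described in the gradient-based discrete sampling abstract above is a cosine cycle; the shape and constants below are illustrative assumptions, not the paper's tuned schedule.

```python
import math

def cyclical_stepsize(k, cycle_len=100, s_max=2.0, s_min=0.05):
    """Cosine-shaped cyclical step-size schedule (an illustrative form).

    Each cycle starts at s_max (large steps discover new modes), decays
    to s_min (small steps exploit the current mode), then restarts.
    """
    t = (k % cycle_len) / cycle_len
    return s_min + 0.5 * (s_max - s_min) * (1 + math.cos(math.pi * t))

print([round(cyclical_stepsize(k), 3) for k in (0, 25, 50, 75, 99, 100)])
```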
Using superpixel probabilities rather than superpixel assignments, this paper proposes soft superpixel neighborhood attention (SNA), which interpolates between the existing neighborhood attention and the naive superpixel neighborhood attention. This paper presents theoretical results showing SNA is the optimal denoiser under a latent superpixel model. SNA outperforms alternative local attention modules on image denoising, and we compare the superpixels learned from denoising with those learned with supervision.", "pdf": "https://openreview.net/pdf/2f0825c56e915b0f754631070e6565418593f92e.pdf"} {"title": "Minimax Optimal and Computationally Efficient Algorithms for Distributionally Robust Offline Reinforcement Learning", "url": "https://openreview.net/forum?id=9SghPrjYU1", "detail_url": "https://openreview.net/forum?id=9SghPrjYU1", "authors": "Zhishuai Liu,Pan Xu", "tags": "NIPS 2024,Poster", "abstract": "Distributionally robust offline reinforcement learning (RL), which seeks robust policy training against environment perturbation by modeling dynamics uncertainty, calls for function approximations when facing large state-action spaces. However, the consideration of dynamics uncertainty introduces essential nonlinearity and computational burden, posing unique challenges for analyzing and practically employing function approximation. Focusing on a basic setting where the nominal model and perturbed models are linearly parameterized, we propose minimax optimal and computationally efficient algorithms realizing function approximation and initiate the study on instance-dependent suboptimality analysis in the context of robust offline RL. Our results uncover that function approximation in robust offline RL is essentially distinct from and probably harder than that in standard offline RL. Our algorithms and theoretical results crucially depend on a novel function approximation mechanism incorporating variance information, a new procedure of suboptimality and estimation uncertainty decomposition, a quantification of the robust value function shrinkage, and a meticulously designed family of hard instances, which might be of independent interest.", "pdf": "https://openreview.net/pdf/d55cf6b7009ad0d7d0065fe538b81195635d3204.pdf"} {"title": "Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy", "url": "https://openreview.net/forum?id=kamAXSJxGV", "detail_url": "https://openreview.net/forum?id=kamAXSJxGV", "authors": "Zeki Kazan,Jerome Reiter", "tags": "NIPS 2024,Poster", "abstract": "When releasing outputs from confidential data, agencies need to balance the analytical usefulness of the released data with the obligation to protect data subjects' confidentiality. For releases satisfying differential privacy, this balance is reflected by the privacy budget, $\\varepsilon$. We provide a framework for setting $\\varepsilon$ based on its relationship with Bayesian posterior probabilities of disclosure. The agency responsible for the data release decides how much posterior risk it is willing to accept at various levels of prior risk, which implies a unique $\\varepsilon$. 
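The prior-to-posterior relationship in the privacy-budget abstract above can be sketched with the standard epsilon-DP odds-ratio bound, posterior_odds <= exp(eps) * prior_odds, solved for the implied epsilon; the paper's framework is richer, so treat this as a simplified illustration with made-up risk levels.

```python
import math

def epsilon_from_risks(prior, posterior):
    """Illustrative epsilon implied by capping an attacker's posterior
    disclosure probability at a given prior, via the odds-ratio bound
    posterior_odds <= exp(eps) * prior_odds. The paper's exact mapping
    may differ in detail."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return math.log(posterior_odds / prior_odds)

# e.g. an agency tolerating at most 50% posterior risk at 10% prior risk
print(round(epsilon_from_risks(0.10, 0.50), 3))   # ~2.197
```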
Agencies can evaluate different risk profiles to determine one that leads to an acceptable trade-off in risk and utility.", "pdf": "https://openreview.net/pdf/ad3ee055c2553f84b6b18d6557012e1793d1c17a.pdf"} {"title": "Coherence-free Entrywise Estimation of Eigenvectors in Low-rank Signal-plus-noise Matrix Models", "url": "https://openreview.net/forum?id=CiuH7zOBCQ", "detail_url": "https://openreview.net/forum?id=CiuH7zOBCQ", "authors": "Hao Yan,Keith Levin", "tags": "NIPS 2024,Poster", "abstract": "Spectral methods are widely used to estimate eigenvectors of a low-rank signal matrix subject to noise. These methods use the leading eigenspace of an observed matrix to estimate this low-rank signal. Typically, the entrywise estimation error of these methods depends on the coherence of the low-rank signal matrix with respect to the standard basis. In this work, we present a novel method for eigenvector estimation that avoids this dependence on coherence. Assuming a rank-one signal matrix, under mild technical conditions, the entrywise estimation error of our method provably has no dependence on the coherence under Gaussian noise (i.e., in the spiked Wigner model), and achieves the optimal estimation rate up to logarithmic factors. Simulations demonstrate that our method performs well under non-Gaussian noise and that an extension of our method to the case of a rank-$r$ signal matrix has little to no dependence on the coherence. In addition, we derive new metric entropy bounds for rank-$r$ singular subspaces under $\\ell_{2,\\infty}$ distance, which may be of independent interest. We use these new bounds to improve the best known lower bound for rank-$r$ eigenspace estimation under $\\ell_{2,\\infty}$ distance.", "pdf": "https://openreview.net/pdf/54e14e75f0ce42bceffa36d12e24c19bb9a76219.pdf"} {"title": "Video Token Merging for Long Video Understanding", "url": "https://openreview.net/forum?id=wduRaBDRBS", "detail_url": "https://openreview.net/forum?id=wduRaBDRBS", "authors": "Seon-Ho Lee,Jue Wang,Zhikang Zhang,David Fan,Xinyu Li", "tags": "NIPS 2024,Poster", "abstract": "As the scale of data and models for video understanding rapidly expands, handling long-form video input in transformer-based models presents a practical challenge. Rather than resorting to input sampling or token dropping, which may result in information loss, token merging shows promising results when used in collaboration with transformers. However, the application of token merging for long-form video processing is not trivial. We begin with the premise that token merging should not rely solely on the similarity of video tokens; the saliency of tokens should also be considered. To address this, we explore various video token merging strategies for long-form video classification, starting with a simple extension of image token merging, moving to region-concentrated merging, and finally proposing a learnable video token merging (VTM) algorithm that dynamically merges tokens based on their saliency. Extensive experimental results show that we achieve better or comparable performance on the LVU, COIN, and Breakfast datasets. 
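A small torch sketch in the spirit of the merging strategies listed in the VTM abstract above (which continues after the code): bipartite matching on token similarity, discounted by a saliency score so salient tokens resist merging. The matching and averaging rules are illustrative assumptions; the paper learns saliency end-to-end.

```python
import torch

def merge_tokens(x, saliency, r):
    """Merge r token pairs: bipartite matching on cosine similarity,
    discounted by saliency so important tokens resist merging.

    x: (N, C) tokens, saliency: (N,) scores in [0, 1], r: pairs to merge.
    """
    a, b = x[0::2], x[1::2]                         # split tokens into two sets
    sim = torch.nn.functional.normalize(a, dim=-1) @ \
          torch.nn.functional.normalize(b, dim=-1).T
    sim = sim - saliency[0::2, None]                # salient source tokens merge last
    best_sim, best_dst = sim.max(dim=-1)            # best partner in b for each a-token
    src = best_sim.topk(r).indices                  # the r most mergeable a-tokens
    dst = best_dst[src]
    b = b.clone()
    b[dst] = (b[dst] + a[src]) / 2                  # average each merged pair
    keep = torch.ones(a.shape[0], dtype=torch.bool)
    keep[src] = False
    return torch.cat([a[keep], b], dim=0)

x = torch.randn(16, 8)
sal = torch.rand(16)
print(merge_tokens(x, sal, r=4).shape)              # torch.Size([12, 8])
```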
Moreover, our approach significantly reduces memory costs by 84% and boosts throughput by approximately 6.89 times compared to baseline algorithms.", "pdf": "https://openreview.net/pdf/02ca725ea4f0f64f7c82a9d5389359bf6b97e1bf.pdf"} {"title": "DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features", "url": "https://openreview.net/forum?id=7fScrgJ3An", "detail_url": "https://openreview.net/forum?id=7fScrgJ3An", "authors": "Letian Wang,Seung Wook Kim,Jiawei Yang,Cunjun Yu,Boris Ivanovic,Steven L. Waslander,Yue Wang,Sanja Fidler,Marco Pavone,Peter Karkus", "tags": "NIPS 2024,Poster", "abstract": "We propose DistillNeRF, a self-supervised learning framework addressing the challenge of understanding 3D environments from limited 2D observations in outdoor autonomous driving scenes. Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs with limited view overlap, and is trained self-supervised with differentiable rendering to reconstruct RGB, depth, or feature images. Our first insight is to exploit per-scene optimized Neural Radiance Fields (NeRFs) by generating dense depth and virtual camera targets from them, which helps our model to learn enhanced 3D geometry from sparse non-overlapping image inputs. Second, to learn a semantically rich 3D representation, we propose distilling features from pre-trained 2D foundation models, such as CLIP or DINOv2, thereby enabling various downstream tasks without the need for costly 3D human annotations. To leverage these two insights, we introduce a novel model architecture with a two-stage lift-splat-shoot encoder and a parameterized sparse hierarchical voxel representation. Experimental results on the NuScenes and Waymo NOTR datasets demonstrate that DistillNeRF significantly outperforms existing comparable state-of-the-art self-supervised methods for scene reconstruction, novel view synthesis, and depth estimation; and it allows for competitive zero-shot 3D semantic occupancy prediction, as well as open-world scene understanding through distilled foundation model features. Demos and code will be available at https://distillnerf.github.io/.", "pdf": "https://openreview.net/pdf/79b5a842ea672e6507dac0dcbe672d988d795a46.pdf"} {"title": "Navigating Chemical Space with Latent Flows", "url": "https://openreview.net/forum?id=aAaV4ZbQ9j", "detail_url": "https://openreview.net/forum?id=aAaV4ZbQ9j", "authors": "Guanghao Wei,Yining Huang,Chenru Duan,Yue Song,Yuanqi Du", "tags": "NIPS 2024,Poster", "abstract": "Recent progress of deep generative models in the vision and language domain has stimulated significant interest in more structured data generation such as molecules. However, beyond generating new random molecules, efficient exploration and a comprehensive understanding of the vast chemical space are of great importance to molecular science and applications in drug design and materials discovery.\nIn this paper, we propose a new framework, ChemFlow, to traverse chemical space by navigating, via flows, the latent space learned by molecule generative models. We introduce a dynamical system perspective that formulates the problem as learning a vector field that transports the mass of the molecular distribution to the region with desired molecular properties or structure diversity. 
\nUnder this framework, we unify previous approaches on molecule latent space traversal and optimization and propose alternative competing methods incorporating different physical priors. \nWe validate the efficacy of ChemFlow on molecule manipulation and single- and multi-objective molecule optimization tasks under both supervised and unsupervised molecular discovery settings.\nCodes and demos are publicly available on GitHub at \n[https://github.com/garywei944/ChemFlow](https://github.com/garywei944/ChemFlow).", "pdf": "https://openreview.net/pdf/61e516bc27676bd44f2fe394a5a9589f144a5b39.pdf"} {"title": "Offline Behavior Distillation", "url": "https://openreview.net/forum?id=89fSR2gpxp", "detail_url": "https://openreview.net/forum?id=89fSR2gpxp", "authors": "Shiye Lei,Sen Zhang,Dacheng Tao", "tags": "NIPS 2024,Poster", "abstract": "Massive reinforcement learning (RL) data are typically collected to train policies offline without the need for interactions, but the large data volume can cause training inefficiencies. To tackle this issue, we formulate offline behavior distillation (OBD), which synthesizes limited expert behavioral data from sub-optimal RL data, enabling rapid policy learning. We propose two naive OBD objectives, DBC and PBC, which measure distillation performance via the decision difference between policies trained on distilled data and either offline data or a near-expert policy. Due to intractable bi-level optimization, the OBD objective is difficult to minimize to small values, which deteriorates PBC by its distillation performance guarantee with quadratic discount complexity $\\mathcal{O}(1/(1-\\gamma)^2)$. We theoretically establish the equivalence between the policy performance and action-value weighted decision difference, and introduce action-value weighted PBC (Av-PBC) as a more effective OBD objective. By optimizing the weighted decision difference, Av-PBC achieves a superior distillation guarantee with linear discount complexity $\\mathcal{O}(1/(1-\\gamma))$. Extensive experiments on multiple D4RL datasets reveal that Av-PBC offers significant improvements in OBD performance, fast distillation convergence speed, and robust cross-architecture/optimizer generalization.", "pdf": "https://openreview.net/pdf/01ac06766484d1c8d8849a0b6ad390eccfb98ea0.pdf"} {"title": "MoMu-Diffusion: On Learning Long-Term Motion-Music Synchronization and Correspondence", "url": "https://openreview.net/forum?id=YscR3LBIi7", "detail_url": "https://openreview.net/forum?id=YscR3LBIi7", "authors": "Fuming You,Minghui Fang,Li Tang,Rongjie Huang,Yongqi Wang,Zhou Zhao", "tags": "NIPS 2024,Poster", "abstract": "Motion-to-music and music-to-motion have been studied separately, each attracting substantial research interest within their respective domains. The interaction between human motion and music is a reflection of advanced human intelligence, and establishing a unified relationship between them is particularly important. However, to date, there has been no work that considers them jointly to explore the modality alignment within. To bridge this gap, we propose a novel framework, termed MoMu-Diffusion, for long-term and synchronous motion-music generation. Firstly, to mitigate the huge computational costs raised by long sequences, we propose a novel Bidirectional Contrastive Rhythmic Variational Auto-Encoder (BiCoR-VAE) that extracts the modality-aligned latent representations for both motion and music inputs. 
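A numpy sketch of an action-value weighted decision difference in the spirit of the Av-PBC objective from the offline behavior distillation abstract above; the exact weighting in the paper may differ, and all data here are synthetic.

```python
import numpy as np

def av_pbc_loss(policy_actions, expert_actions, q_expert):
    """Action-value weighted decision difference (Av-PBC-style, sketched):
    mismatches on states where the expert action's value is high are
    penalized more. q_expert[i] is the value of the expert's action in
    state i."""
    mismatch = (policy_actions != expert_actions).astype(float)
    return np.mean(q_expert * mismatch)

rng = np.random.default_rng(0)
expert = rng.integers(0, 4, size=1000)            # expert actions on 1000 states
student = np.where(rng.random(1000) < 0.9, expert, rng.integers(0, 4, size=1000))
q = rng.uniform(0.0, 1.0, size=1000)              # synthetic expert action values
print(av_pbc_loss(student, expert, q))
```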
Subsequently, leveraging the aligned latent spaces, we introduce a multi-modal diffusion Transformer model and a cross-guidance sampling strategy to enable various generation tasks, including cross-modal, multi-modal, and variable-length generation. Extensive experiments demonstrate that MoMu-Diffusion surpasses recent state-of-the-art methods both qualitatively and quantitatively, and can synthesize realistic, diverse, long-term, and beat-matched music or motion sequences. The generated motion-music samples are available at https://momu-diffusion.github.io/.", "pdf": "https://openreview.net/pdf/232e907448f9be7c7100f146c00a49c5e729b256.pdf"} {"title": "Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes", "url": "https://openreview.net/forum?id=LKGuc2rY5v", "detail_url": "https://openreview.net/forum?id=LKGuc2rY5v", "authors": "Andrew Bennett,Nathan Kallus,Miruna Oprescu,Wen Sun,Kaiwen Wang", "tags": "NIPS 2024,Poster", "abstract": "We study the evaluation of a policy under best- and worst-case perturbations to a Markov decision process (MDP), using transition observations from the original MDP, whether they are generated under the same or a different policy. This is an important problem when there is the possibility of a shift between historical and future environments, \\emph{e.g.} due to unmeasured confounding, distributional shift, or an adversarial environment. We propose a perturbation model that allows changes in the transition kernel densities up to a given multiplicative factor or its reciprocal, extending the classic marginal sensitivity model (MSM) for single time-step decision-making to infinite-horizon RL. We characterize the sharp bounds on policy value under this model -- \\emph{i.e.}, the tightest possible bounds based on transition observations from the original MDP -- and we study the estimation of these bounds from such transition observations. We develop an estimator with several important guarantees: it is semiparametrically efficient, and remains so even when certain necessary nuisance functions, such as worst-case Q-functions, are estimated at slow, nonparametric rates. Our estimator is also asymptotically normal, enabling straightforward statistical inference using Wald confidence intervals. Moreover, when certain nuisances are estimated inconsistently, the estimator still provides valid, albeit possibly not sharp, bounds on the policy value. We validate these properties in numerical simulations. The combination of accounting for environment shifts from train to test (robustness), being insensitive to nuisance-function estimation (orthogonality), and addressing the challenge of learning from finite samples (inference) together leads to credible and reliable policy evaluation.", "pdf": "https://openreview.net/pdf/30a981bf867fb45243b0f531f82087f19eb06a66.pdf"} {"title": "Cherry on Top: Parameter Heterogeneity and Quantization in Large Language Models", "url": "https://openreview.net/forum?id=QAiKLaCrKj", "detail_url": "https://openreview.net/forum?id=QAiKLaCrKj", "authors": "Wanyun Cui,Qianle Wang", "tags": "NIPS 2024,Poster", "abstract": "This paper reveals the phenomenon of parameter heterogeneity in large language models (LLMs). We find that a small subset of ``cherry'' parameters exhibit a disproportionately large influence on model performance, while the vast majority of parameters have minimal impact. This heterogeneity is found to be prevalent across different model families, scales, and types. 
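Before the CherryQ description below, here is a minimal torch sketch of the mixed-precision idea that the parameter-heterogeneity observation above motivates: keep a small set of high-impact weights in full precision and uniformly quantize the rest. The impact score used here is a stand-in, not the paper's criterion.

```python
import torch

def cherry_quantize(w, impact, keep_frac=0.01, bits=3):
    """Mixed-precision sketch: keep the highest-impact ('cherry') weights in
    full precision and uniformly quantize the rest to `bits` bits."""
    k = max(1, int(keep_frac * w.numel()))
    cherry_idx = impact.flatten().topk(k).indices
    q = w.clone().flatten()
    mask = torch.ones_like(q, dtype=torch.bool)
    mask[cherry_idx] = False                      # cherries stay full precision
    levels = 2 ** bits - 1
    lo, hi = q[mask].min(), q[mask].max()
    scale = (hi - lo) / levels
    q[mask] = torch.round((q[mask] - lo) / scale) * scale + lo
    return q.view_as(w)

w = torch.randn(256, 256)
impact = w.abs() * torch.randn_like(w).abs()      # placeholder impact score
w_q = cherry_quantize(w, impact)
print((w - w_q).abs().max())                      # worst-case quantization error
```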
Motivated by this observation, we propose CherryQ, a novel quantization method that unifies the optimization of mixed-precision parameters. CherryQ identifies and preserves the critical cherry parameters in high precision while aggressively quantizing the remaining parameters to low precision. Extensive experiments demonstrate the effectiveness of CherryQ. CherryQ outperforms existing quantization approaches in terms of perplexity and downstream task performance. Notably, our 3-bit quantized Vicuna-1.5 exhibits competitive performance compared to their 16-bit counterparts. These findings highlight the potential of CherryQ for enabling efficient deployment of LLMs by taking advantage of parameter heterogeneity.", "pdf": "https://openreview.net/pdf/8f2807518f5c80e59305d5889a4883a6ce7cfb9d.pdf"} {"title": "e-COP : Episodic Constrained Optimization of Policies", "url": "https://openreview.net/forum?id=5IRtAcVbiC", "detail_url": "https://openreview.net/forum?id=5IRtAcVbiC", "authors": "Akhil Agnihotri,Rahul Jain,Deepak Ramachandran,Sahil Singla", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we present the e-COP algorithm, the first policy optimization algorithm for constrained Reinforcement Learning (RL) in episodic (finite horizon) settings. Such formulations are applicable when there are separate sets of optimization criteria and constraints on a system's behavior. We approach this problem by first establishing a policy difference lemma for the episodic setting, which provides the theoretical foundation for the algorithm. Then, we propose to combine a set of established and novel solution ideas to yield the e-COP algorithm that is easy to implement and numerically stable, and provide a theoretical guarantee on optimality under certain scaling assumptions. Through extensive empirical analysis using benchmarks in the Safety Gym suite, we show that our algorithm has similar or better performance than SoTA (non-episodic) algorithms adapted for the episodic setting. The scalability of the algorithm opens the door to its application in safety-constrained Reinforcement Learning from Human Feedback for Large Language or Diffusion Models.", "pdf": "https://openreview.net/pdf/e2087acb4d653ff52e076f87d71a30d1772bd69a.pdf"} {"title": "Perplexity-aware Correction for Robust Alignment with Noisy Preferences", "url": "https://openreview.net/forum?id=OUXnnPJzXJ", "detail_url": "https://openreview.net/forum?id=OUXnnPJzXJ", "authors": "Keyi Kong,Xilie Xu,Di Wang,Jingfeng Zhang,Mohan Kankanhalli", "tags": "NIPS 2024,Poster", "abstract": "Alignment techniques are critical in ensuring that large language models (LLMs) output helpful and harmless content by enforcing the LLM-generated content to align with human preferences. \nHowever, the existence of noisy preferences (NPs), where the responses are mistakenly labelled as chosen or rejected, could spoil the alignment, thus making the LLMs generate useless and even malicious content. \nExisting methods mitigate the issue of NPs from the loss perspective by adjusting the alignment loss based on a clean validation dataset.\nOrthogonal to these loss-oriented methods, we propose perplexity-aware correction (PerpCorrect) from the data perspective for robust alignment which detects and corrects NPs based on the differences between the perplexity of the chosen and rejected responses (dubbed as PPLDiff). 
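PPLDiff, as defined just above, reduces to a difference of perplexities computed from per-token log-probabilities under the surrogate model; here is a self-contained sketch with made-up log-probabilities standing in for the surrogate LLM's scores.

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities under a (surrogate) LLM."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def ppl_diff(chosen_logprobs, rejected_logprobs):
    """PPLDiff sketch: perplexity(chosen) - perplexity(rejected).
    A large value flags a likely noisy preference, since an aligned model
    should find the truly preferred response easier to generate."""
    return perplexity(chosen_logprobs) - perplexity(rejected_logprobs)

# hypothetical per-token log-probs scored by an aligned surrogate model
clean_pair = ppl_diff([-1.1, -0.8, -0.9], [-2.5, -3.0, -2.2])   # negative: looks clean
noisy_pair = ppl_diff([-2.8, -3.1, -2.6], [-1.0, -0.7, -1.2])   # positive: flag and correct
print(clean_pair, noisy_pair)
```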
\nIntuitively, a higher PPLDiff indicates a higher probability of an NP, because a rejected/chosen response that is mistakenly labelled as chosen/rejected is less likely to be generated by an aligned LLM, and thus has a higher/lower perplexity.\nPerpCorrect works in three steps: \n(1) PerpCorrect aligns a surrogate LLM using the clean validation data to make the PPLDiff able to distinguish clean preferences (CPs) and NPs. \n(2) PerpCorrect further aligns the surrogate LLM by incorporating the reliable clean training data whose PPLDiff is extremely small and the reliable noisy training data whose PPLDiff is extremely large (after correction) to boost the discriminatory power.\n(3) PerpCorrect detects and corrects NPs according to the PPLDiff obtained by the aligned surrogate LLM, yielding a denoised training dataset for robust alignment.\nComprehensive experiments validate that our proposed PerpCorrect can achieve state-of-the-art alignment performance under NPs.\nNotably, PerpCorrect demonstrates practical utility by requiring only a modest amount of validation data and being compatible with various alignment techniques. \nOur code is available at [PerpCorrect](https://github.com/luxinyayaya/PerpCorrect).", "pdf": "https://openreview.net/pdf/1db6e393841a73cee7789fdcdedd18d034cf07d1.pdf"} {"title": "FlexCap: Describe Anything in Images in Controllable Detail", "url": "https://openreview.net/forum?id=P5dEZeECGu", "detail_url": "https://openreview.net/forum?id=P5dEZeECGu", "authors": "Debidatta Dwibedi,Vidhi Jain,Jonathan Tompson,Andrew Zisserman,Yusuf Aytar", "tags": "NIPS 2024,Poster", "abstract": "We introduce FlexCap, a vision-language model that generates region-specific descriptions of varying lengths. FlexCap is trained to produce length-conditioned captions for input boxes, enabling control over information density, with descriptions ranging from concise object labels to detailed captions. To achieve this, we create large-scale training datasets of image region descriptions with varying lengths from captioned web images. We demonstrate FlexCap\u2019s effectiveness in several applications: first, it achieves strong performance in dense captioning tasks on the Visual Genome dataset. Second, we show how FlexCap\u2019s localized descriptions can serve as input to a large language model to create a visual question answering (VQA) system, achieving state-of-the-art zero-shot performance on multiple VQA benchmarks. Our experiments illustrate FlexCap\u2019s utility for tasks including image labeling, object attribute recognition, and visual dialog. Project webpage: https://flex-cap.github.io.", "pdf": "https://openreview.net/pdf/0a3db5141b1a3ddc1c53f6544197f7efa079f715.pdf"} {"title": "SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization", "url": "https://openreview.net/forum?id=xcF2VbyZts", "detail_url": "https://openreview.net/forum?id=xcF2VbyZts", "authors": "Wanhua Li,Zibin Meng,Jiawei Zhou,Donglai Wei,Chuang Gan,Hanspeter Pfister", "tags": "NIPS 2024,Poster", "abstract": "Social relation reasoning aims to identify relation categories such as friends, spouses, and colleagues from images. While current methods adopt the paradigm of training a dedicated network end-to-end using labeled image data, they are limited in terms of generalizability and interpretability. 
To address these issues, we first present a simple yet well-crafted framework named SocialGPT, which combines the perception capability of Vision Foundation Models (VFMs) and the reasoning capability of Large Language Models (LLMs) in a modular design, providing a strong baseline for social relation recognition. Specifically, we instruct VFMs to translate image content into a textual social story, and then utilize LLMs for text-based reasoning. SocialGPT introduces systematic design principles to adapt VFMs and LLMs separately and bridge their gaps. Without additional model training, it achieves competitive zero-shot results on two databases while offering interpretable answers, as LLMs can generate language-based explanations for the decisions. The manual prompt design process for LLMs at the reasoning phase is tedious, so an automated prompt optimization method is desirable. As we essentially convert a visual classification task into a generative task of LLMs, automatic prompt optimization encounters a unique long-prompt optimization issue. To address this issue, we further propose the Greedy Segment Prompt Optimization (GSPO), which performs a greedy search by utilizing gradient information at the segment level. Experimental results show that GSPO significantly improves performance, and our method also generalizes to different image styles. The code is available at https://github.com/Mengzibin/SocialGPT.", "pdf": "https://openreview.net/pdf/70105568d7ff06cd079c119532147dd737450a1c.pdf"} {"title": "Attack-Resilient Image Watermarking Using Stable Diffusion", "url": "https://openreview.net/forum?id=e6KrSouGHJ", "detail_url": "https://openreview.net/forum?id=e6KrSouGHJ", "authors": "Lijun Zhang,Xiao Liu,Antoni Viros i Martin,Cindy Xiong Bearfield,Yuriy Brun,Hui Guan", "tags": "NIPS 2024,Poster", "abstract": "Watermarking images is critical for tracking image provenance and proving ownership. With the advent of generative models, such as stable diffusion, that can create fake but realistic images, watermarking has become particularly important to make human-created images reliably identifiable. Unfortunately, the very same stable diffusion technology can remove watermarks injected using existing methods.\nTo address this problem, we present ZoDiac, which uses a pre-trained stable diffusion model to inject a watermark into the trainable latent space, resulting in watermarks that can be reliably detected in the latent vector even when attacked. We evaluate ZoDiac on three benchmarks, MS-COCO, DiffusionDB, and WikiArt, and find that ZoDiac is robust against state-of-the-art watermark attacks, with a watermark detection rate above 98% and a false positive rate below 6.4%, outperforming state-of-the-art watermarking methods. We hypothesize that the reciprocating denoising process in diffusion models may inherently enhance the robustness of the watermark when faced with strong attacks, and we validate this hypothesis. Our research demonstrates that stable diffusion is a promising approach to robust watermarking, able to withstand even stable-diffusion-based attack methods. 
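As a rough illustration of the latent-space watermark detection that ZoDiac-style methods rely on, the following numpy sketch embeds a secret direction into a latent vector and detects it by normalized correlation; the dimensionality, strength, and threshold are all assumptions, and this is not the paper's algorithm.

```python
# Generic correlation test for a latent-space watermark (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
key = rng.standard_normal(4096)
key /= np.linalg.norm(key)               # secret unit-norm watermark direction

def embed(latent: np.ndarray, strength: float = 8.0) -> np.ndarray:
    # Per-coordinate change is strength/sqrt(4096) ~ 0.125: a small nudge.
    return latent + strength * key

def detect(latent: np.ndarray, threshold: float = 4.0) -> bool:
    # For a clean unit-variance latent, latent @ key ~ N(0, 1); a watermarked
    # latent shifts the score by ~strength, far beyond the threshold.
    return float(latent @ key) > threshold

clean = rng.standard_normal(4096)
print(detect(clean), detect(embed(clean)))  # False True
```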
ZoDiac is open-sourced and available at https://github.com/zhanglijun95/ZoDiac.", "pdf": "https://openreview.net/pdf/704e1edc3d21024a833aeaa3b4323fc5ca9abe09.pdf"} {"title": "Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion", "url": "https://openreview.net/forum?id=38UFpdt3Tr", "detail_url": "https://openreview.net/forum?id=38UFpdt3Tr", "authors": "Filip Szatkowski,Bartosz W\u00f3jcik,Miko\u0142aj Pi\u00f3rczy\u0144ski,Simone Scardapane", "tags": "NIPS 2024,Poster", "abstract": "Transformer models can face practical limitations due to their high computational requirements. At the same time, such models exhibit significant activation sparsity, which can be leveraged to reduce the inference cost by converting parts of the network into equivalent Mixture-of-Experts (MoE) layers. Despite the crucial role played by activation sparsity, its impact on this process remains unexplored. We demonstrate that the efficiency of the conversion can be significantly enhanced by a proper regularization of the activation sparsity of the base model. Moreover, motivated by the high variance of the number of activated neurons for different inputs, we introduce a more effective dynamic-$k$ expert selection rule that adjusts the number of executed experts on a per-token basis. To achieve further savings, we extend this approach to multi-head attention projections. Finally, we develop an efficient implementation that translates these computational savings into actual wall-clock speedup. The proposed method, Dense to Dynamic-$k$ Mixture-of-Experts (D2DMoE), outperforms existing approaches on common NLP and vision tasks, reducing inference cost by up to 60\\% without significantly impacting performance.", "pdf": "https://openreview.net/pdf/214cac9fad144967266f67042945a7b58e468af6.pdf"} {"title": "Score Distillation via Reparametrized DDIM", "url": "https://openreview.net/forum?id=4DcpFagQ9e", "detail_url": "https://openreview.net/forum?id=4DcpFagQ9e", "authors": "Artem Lukoianov,Haitz S\u00e1ez de Oc\u00e1riz Borde,Kristjan Greenewald,Vitor Campagnolo Guizilini,Timur Bagautdinov,Vincent Sitzmann,Justin Solomon", "tags": "NIPS 2024,Poster", "abstract": "While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, we show that the image guidance used in Score Distillation can be understood as the velocity field of a 2D denoising generative process, up to the choice of a noise term. In particular, after a change of variables, SDS resembles a high-variance version of Denoising Diffusion Implicit Models (DDIM) with a differently-sampled noise term: SDS introduces noise i.i.d. randomly at each step, while DDIM infers it from the previous noise predictions. This excessive variance can lead to over-smoothing and unrealistic outputs. We show that a better noise approximation can be recovered by inverting DDIM in each SDS update step. This modification makes SDS's generative process for 2D images almost identical to DDIM. In 3D, it removes over-smoothing, preserves higher-frequency detail, and brings the generation quality closer to that of 2D samplers. 
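The per-token dynamic-$k$ expert selection described in the D2DMoE abstract above can be sketched in a few lines; the threshold value and the guarantee of at least one expert per token are assumptions of this sketch, not the paper's exact rule.

```python
# Hedged sketch of dynamic-k routing: each token executes every expert whose
# router probability clears a threshold, so k varies per token.
import torch

def dynamic_k_mask(router_logits: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """router_logits: (num_tokens, num_experts) -> boolean execution mask."""
    probs = router_logits.softmax(dim=-1)
    mask = probs >= tau
    # Always keep the top-scoring expert so no token is left unprocessed.
    mask[torch.arange(mask.shape[0]), probs.argmax(dim=-1)] = True
    return mask

torch.manual_seed(0)
mask = dynamic_k_mask(torch.randn(5, 8))
print(mask.sum(dim=-1))  # number of executed experts differs across tokens
```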
Experimentally, our method achieves better or similar 3D generation quality compared to other state-of-the-art Score Distillation methods, all without training additional neural networks or using multi-view supervision, while providing useful insights into the relationship between 2D and 3D asset generation with diffusion models.", "pdf": "https://openreview.net/pdf/05f67feb11af67d59223b99aca20b6e92ec5dc5d.pdf"} {"title": "Towards Combating Frequency Simplicity-biased Learning for Domain Generalization", "url": "https://openreview.net/forum?id=VMiLdBkCJM", "detail_url": "https://openreview.net/forum?id=VMiLdBkCJM", "authors": "Xilin He,Jingyu Hu,Qinliang Lin,Cheng Luo,Weicheng Xie,Siyang Song,Muhammad Haris Khan,Linlin Shen", "tags": "NIPS 2024,Poster", "abstract": "Domain generalization methods aim to learn transferable knowledge from source domains that can generalize well to unseen target domains. \nRecent studies show that neural networks frequently suffer from a simplicity-biased learning behavior which leads to over-reliance on specific frequency sets, known as frequency shortcuts, instead of semantic information, resulting in poor generalization performance. \nAlthough previous data augmentation techniques successfully enhance generalization performance, they tend to introduce additional frequency shortcuts, creating an illusion of generalization improvement.\nIn this paper, we aim to prevent this shortcut-learning behavior from a data-driven perspective. Given the theoretical justification of models' biased learning behavior on different spatial frequency components, which is based on the dataset frequency properties, we argue that the learning behavior on various frequency components could be manipulated by changing the dataset statistical structure in the Fourier domain. \nIntuitively, as frequency shortcuts are hidden in the dominant and highly dependent frequencies of dataset structure, dynamically perturbing the over-relied-upon frequency components could prevent the application of frequency shortcuts.\nTo this end, we propose two effective data augmentation modules designed to collaboratively and adaptively adjust the frequency characteristic of the dataset, aiming to dynamically influence the learning behavior of the model and ultimately serving as a strategy to mitigate shortcut learning. 
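The core operation behind the frequency-perturbation idea above, rescaling Fourier amplitudes while preserving phase, can be sketched as follows; the perturbation range is an assumption, not the paper's configuration.

```python
# Hedged numpy sketch: randomly rescale Fourier amplitudes, keep phases.
import numpy as np

def perturb_amplitudes(img: np.ndarray, scale: float = 0.2, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    spec = np.fft.fft2(img, axes=(0, 1))
    amp, phase = np.abs(spec), np.angle(spec)
    amp *= 1.0 + scale * rng.uniform(-1.0, 1.0, size=amp.shape)
    out = np.fft.ifft2(amp * np.exp(1j * phase), axes=(0, 1))
    return np.real(out)

img = np.random.rand(32, 32, 3)
print(perturb_amplitudes(img).shape)  # (32, 32, 3): same size, new spectrum
```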
Our code will be made publicly available.", "pdf": "https://openreview.net/pdf/1f0f5670f421586cd563925e2b6c31ce51929a7f.pdf"} {"title": "Learning Structure-Aware Representations of Dependent Types", "url": "https://openreview.net/forum?id=e397soEZh8", "detail_url": "https://openreview.net/forum?id=e397soEZh8", "authors": "Konstantinos Kogkalidis,Orestis Melkonian,Jean-Philippe Bernardy", "tags": "NIPS 2024,Poster", "abstract": "Agda is a dependently-typed programming language and a proof assistant, pivotal in proof formalization and programming language theory.\nThis paper extends the Agda ecosystem into machine learning territory, and, vice versa, makes Agda-related resources available to machine learning practitioners.\nWe introduce and release a novel dataset of Agda program-proofs that is elaborate and extensive enough to support various machine learning applications -- the first of its kind.\nLeveraging the dataset's ultra-high resolution, which details proof states at the sub-type level, we propose a novel neural architecture targeted at faithfully representing dependently-typed programs on the basis of structural rather than nominal principles.\nWe instantiate and evaluate our architecture in a premise selection setup, where it achieves promising initial results, surpassing strong baselines.", "pdf": "https://openreview.net/pdf/25eb0c54c99bd7e383c8dd241e8c8327b9e95ede.pdf"} {"title": "Natural Counterfactuals With Necessary Backtracking", "url": "https://openreview.net/forum?id=N6zJ8DclC2", "detail_url": "https://openreview.net/forum?id=N6zJ8DclC2", "authors": "Guang-Yuan Hao,Jiji Zhang,Biwei Huang,Hao Wang,Kun Zhang", "tags": "NIPS 2024,Poster", "abstract": "Counterfactual reasoning is pivotal in human cognition and especially important for providing explanations and making decisions. While Judea Pearl's influential approach is theoretically elegant, its generation of a counterfactual scenario often requires too much deviation from the observed scenarios to be feasible, as we show using simple examples. To mitigate this difficulty, we propose a framework of natural counterfactuals and a method for generating counterfactuals that are more feasible with respect to the actual data distribution. Our methodology incorporates a certain amount of backtracking when needed, allowing changes in causally preceding variables to minimize deviations from realistic scenarios. Specifically, we introduce a novel optimization framework that permits but also controls the extent of backtracking with a \"naturalness'' criterion. Empirical experiments demonstrate the effectiveness of our method. The code is available at https://github.com/GuangyuanHao/natural_counterfactuals.", "pdf": "https://openreview.net/pdf/f4d4f12c9592a87ff3baa57d7edc51b76fb01e52.pdf"} {"title": "An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem", "url": "https://openreview.net/forum?id=cuWsR25bbI", "detail_url": "https://openreview.net/forum?id=cuWsR25bbI", "authors": "Yoonsoo Nam,Nayara Fonseca,Seok Hyeong Lee,Chris Mingard,Ard A. Louis", "tags": "NIPS 2024,Poster", "abstract": "Deep learning models can exhibit what appears to be a sudden ability to solve a new problem as training time, training data, or model size increases, a phenomenon known as emergence. In this paper, we present a framework where each new ability (a skill) is represented as a basis function. 
We solve a simple multi-linear model in this skill-basis, finding analytic expressions for the emergence of new skills, as well as for scaling laws of the loss with training time, data size, model size, and optimal compute. We compare our detailed calculations to direct simulations of a two-layer neural network trained on multitask sparse parity, where the tasks in the dataset are distributed according to a power-law. Our simple model captures, using a single fit parameter, the sigmoidal emergence of multiple new skills as training time, data size or model size increases in the neural network.", "pdf": "https://openreview.net/pdf/2a31ca06eb9c08b2484b1182dca6ab2bc67a8230.pdf"} {"title": "UniAudio 1.5: Large Language Model-Driven Audio Codec is A Few-Shot Audio Task Learner", "url": "https://openreview.net/forum?id=NGrINZyZKk", "detail_url": "https://openreview.net/forum?id=NGrINZyZKk", "authors": "Dongchao Yang,Haohan Guo,Yuanyuan Wang,Rongjie Huang,Xiang Li,Xu Tan,Xixin Wu,Helen M. Meng", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have demonstrated supreme capabilities in textual understanding and generation, but cannot be directly applied to cross-modal tasks without fine-tuning. This paper proposes a cross-modal in-context learning approach, empowering the frozen LLMs to achieve multiple audio tasks in a few-shot style without any parameter update. \nSpecifically, we propose a novel LLM-driven audio codec model, LLM-Codec, which transfers the audio modality into textual space by representing audio tokens with words or sub-words from the LLM vocabulary, while maintaining high audio reconstruction quality.\nThe key idea is to reduce the modality heterogeneity between text and audio by compressing the audio modality into the well-trained textual space of LLMs. Thus, the audio representation can be viewed as a new \\textit{foreign language}, and LLMs can learn the new \\textit{foreign language} with several demonstrations. In experiments, we investigate the performance of the proposed approach across multiple audio understanding and generation tasks, \\textit{e.g.} speech emotion classification, audio classification, text-to-speech generation, speech enhancement, etc. Experimental results show that LLMs equipped with the LLM-Codec, named UniAudio 1.5, prompted by only a few examples, can perform effectively in simple scenarios, validating our cross-modal in-context learning approach.\nTo facilitate research on few-shot audio task learning and multi-modal LLMs, we have open-sourced the LLM-Codec model.", "pdf": "https://openreview.net/pdf/46c26e64b82d8e710b1ae946b4fd77f67ac6a94a.pdf"} {"title": "Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions", "url": "https://openreview.net/forum?id=2UJLv3KPGO", "detail_url": "https://openreview.net/forum?id=2UJLv3KPGO", "authors": "Tian Xie,Xueru Zhang", "tags": "NIPS 2024,Poster", "abstract": "As machine learning (ML) models are increasingly used in social domains to make consequential decisions about humans, they often have the power to reshape data distributions. Humans, as strategic agents, continuously adapt their behaviors in response to the learning system. As populations change dynamically, ML systems may need frequent updates to ensure high performance. However, acquiring high-quality *human-annotated* samples can be highly challenging and even infeasible in social domains. A common practice to address this issue is using the model itself to annotate unlabeled data samples. 
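For reference, the multitask sparse parity setup from the emergence abstract above can be generated in a few lines; the sizes and the power-law exponent here are toy assumptions.

```python
# Toy generator for multitask sparse parity: each task is the parity of k
# hidden bit positions, and tasks are sampled with power-law frequencies.
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_tasks, k, alpha = 20, 8, 3, 1.5
task_bits = [rng.choice(n_bits, size=k, replace=False) for _ in range(n_tasks)]
task_probs = np.arange(1, n_tasks + 1, dtype=float) ** -alpha
task_probs /= task_probs.sum()

def sample(n: int):
    tasks = rng.choice(n_tasks, size=n, p=task_probs)
    x = rng.integers(0, 2, size=(n, n_bits))
    y = np.array([x[i, task_bits[t]].sum() % 2 for i, t in enumerate(tasks)])
    return tasks, x, y  # task ids are usually appended to the input as one-hots

tasks, x, y = sample(4)
print(tasks, y)
```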
This paper investigates the long-term impacts when ML models are retrained with *model-annotated* samples that incorporate human strategic responses. We first formalize the interactions between strategic agents and the model and then analyze how they evolve under such dynamic interactions. We find that agents are increasingly likely to receive positive decisions as the model gets retrained, whereas the proportion of agents with positive labels may decrease over time. We thus propose a *refined retraining process* to stabilize the dynamics. Lastly, we examine how algorithmic fairness can be affected by these retraining processes and find that enforcing common fairness constraints at every round may not benefit the disadvantaged group in the long run. Experiments on (semi-)synthetic and real data validate the theoretical findings.", "pdf": "https://openreview.net/pdf/44841391152cc9c0c9a8aa7e562ff8698b75fdd0.pdf"} {"title": "Make-it-Real: Unleashing Large Multimodal Model for Painting 3D Objects with Realistic Materials", "url": "https://openreview.net/forum?id=88rbNOtAez", "detail_url": "https://openreview.net/forum?id=88rbNOtAez", "authors": "Ye Fang,Zeyi Sun,Tong Wu,Jiaqi Wang,Ziwei Liu,Gordon Wetzstein,Dahua Lin", "tags": "NIPS 2024,Poster", "abstract": "Physically realistic materials are pivotal in augmenting the realism of 3D assets across various applications and lighting conditions. However, existing 3D assets and generative models often lack authentic material properties. Manual assignment of materials using graphic software is a tedious and time-consuming task. In this paper, we exploit advancements in Multimodal Large Language Models (MLLMs), particularly GPT-4V, to present a novel approach, Make-it-Real: 1) We demonstrate that GPT-4V can effectively recognize and describe materials, allowing the construction of a detailed material library. 2) Utilizing a combination of visual cues and hierarchical text prompts, GPT-4V precisely identifies and aligns materials with the corresponding components of 3D objects. 3) The correctly matched materials are then meticulously applied as references for new SVBRDF material generation according to the original albedo map, significantly enhancing their visual authenticity. Make-it-Real offers a streamlined integration into the 3D content creation workflow, showcasing its utility as an essential tool for developers of 3D assets.", "pdf": "https://openreview.net/pdf/f24e92bf016abae57997ad0db6aac50eadaf45df.pdf"} {"title": "Representation Noising: A Defence Mechanism Against Harmful Finetuning", "url": "https://openreview.net/forum?id=eP9auEJqFg", "detail_url": "https://openreview.net/forum?id=eP9auEJqFg", "authors": "Domenic Rosati,Jan Wehner,Kai Williams,Lukasz Bartoszcze,Robie Gonzales,carsten maple,Subhabrata Majumdar,Hassan Sajjad,Frank Rudzicz", "tags": "NIPS 2024,Poster", "abstract": "Releasing open-source large language models (LLMs) presents a dual-use risk since bad actors can easily fine-tune these models for harmful purposes. Even without the open release of weights, weight stealing and fine-tuning APIs make closed models vulnerable to harmful fine-tuning attacks (HFAs). While safety measures like preventing jailbreaks and improving safety guardrails are important, such measures can easily be reversed through fine-tuning. In this work, we propose Representation Noising (\\textsf{\\small RepNoise}), a defence mechanism that operates even when attackers have access to the weights. 
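The retraining loop analyzed in the data-annotation abstract above can be caricatured with a toy simulation; the logistic classifier and Gaussian features are assumptions, and strategic adaptation by agents is omitted for brevity.

```python
# Toy model-annotation loop: each round, the model labels fresh samples and is
# refit on its own annotations; we track the fraction of positive decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(int)                  # initial human-annotated labels
model = LogisticRegression().fit(X, y)

for rnd in range(5):
    X_new = rng.normal(size=(200, 1))
    y_model = model.predict(X_new)             # model-annotated labels
    X, y = np.vstack([X, X_new]), np.concatenate([y, y_model])
    model = LogisticRegression().fit(X, y)
    print(rnd, model.predict(X_new).mean())    # positive-decision rate per round
```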
\\textsf{\\small RepNoise} works by removing information about harmful representations such that it is difficult to recover them during fine-tuning. Importantly, our defence is also able to generalize across different subsets of harm that have not been seen during the defence process as long as they are drawn from the same distribution as the attack set. Our method does not degrade the general capability of LLMs and retains the ability to train the model on harmless tasks. We provide empirical evidence that the efficacy of our defence lies in its ``depth'': the degree to which information about harmful representations is removed across {\\em all layers} of the LLM. We also find areas where \\textsf{\\small RepNoise} still remains ineffective and highlight how those limitations can inform future research.", "pdf": "https://openreview.net/pdf/d1c9a27d1f0015f12f9faf5af4bf3193768bc5d5.pdf"} {"title": "Compositional PAC-Bayes: Generalization of GNNs with persistence and beyond", "url": "https://openreview.net/forum?id=ZNcJtNN3e8", "detail_url": "https://openreview.net/forum?id=ZNcJtNN3e8", "authors": "Kirill Brilliantov,Amauri H Souza,Vikas Garg", "tags": "NIPS 2024,Poster", "abstract": "Heterogeneity, e.g., due to different types of layers or multiple sub-models, poses key challenges in analyzing the generalization behavior of several modern architectures. For instance, descriptors based on Persistent Homology (PH) are being increasingly integrated into Graph Neural Networks (GNNs) to augment them with rich topological features; however, the generalization of such PH schemes remains unexplored. We introduce a novel _compositional_ PAC-Bayes framework that provides a general recipe to analyze a broad spectrum of models including those with heterogeneous layers. Specifically, we provide the first data-dependent generalization bounds for a widely adopted PH vectorization scheme (that subsumes persistence landscapes, images, and silhouettes) as well as PH-augmented GNNs. Using our framework, we also obtain bounds for GNNs and neural nets with ease. Our bounds also inform the design of novel regularizers. Empirical evaluations on several standard real-world datasets demonstrate that our theoretical bounds highly correlate with empirical generalization performance, leading to improved classifier design via our regularizers. Overall, this work bridges a crucial gap in the theoretical understanding of PH methods and general heterogeneous models, paving the way for the design of better models for (graph) representation learning. \nOur code is available at https://github.com/Aalto-QuML/Compositional-PAC-Bayes.", "pdf": "https://openreview.net/pdf/419dd77f107cf4b829dfb623e0f790185e1da9b2.pdf"} {"title": "Unified Lexical Representation for Interpretable Visual-Language Alignment", "url": "https://openreview.net/forum?id=xoCFd1WKpf", "detail_url": "https://openreview.net/forum?id=xoCFd1WKpf", "authors": "Yifan Li,Yikai Wang,Yanwei Fu,Dongyu Ru,Zheng Zhang,Tong He", "tags": "NIPS 2024,Poster", "abstract": "Visual-Language Alignment (VLA) has gained a lot of attention since CLIP's groundbreaking work. \nAlthough CLIP performs well, the typical direct latent feature alignment lacks clarity in its representation and similarity scores. 
\nOn the other hand, a lexical representation, a vector whose elements represent the similarity between the sample and each word in the vocabulary, is naturally sparse and interpretable, providing exact matches for individual words.\nHowever, lexical representations are difficult to learn due to the lack of ground-truth supervision and to false-discovery issues, and thus require complex designs to train effectively.\nIn this paper, we introduce LexVLA, a more interpretable VLA framework that learns a unified lexical representation for both modalities without complex design. \nWe use DINOv2 as our visual model for its local-inclined features and Llama 2, a generative language model, to leverage its in-context lexical prediction ability.\nTo avoid false discovery, we propose an overuse penalty that prevents the lexical representation from frequently activating meaningless words.\nWe demonstrate that these two pre-trained uni-modal models can be well aligned by fine-tuning on a modest multi-modal dataset, avoiding intricate training configurations. \nOn cross-modal retrieval benchmarks, LexVLA, trained on the CC-12M multi-modal dataset, outperforms baselines fine-tuned on larger datasets (e.g., YFCC15M) and those trained from scratch on even bigger datasets (e.g., 1.1B data, including CC-12M).\nWe conduct extensive experiments to analyze LexVLA. \nCodes are available at https://github.com/Clementine24/LexVLA.", "pdf": "https://openreview.net/pdf/3206cbc8f56e0bf6c85cccc342384845c2232940.pdf"} {"title": "In-N-Out: Lifting 2D Diffusion Prior for 3D Object Removal via Tuning-Free Latents Alignment", "url": "https://openreview.net/forum?id=gffaYDu9mM", "detail_url": "https://openreview.net/forum?id=gffaYDu9mM", "authors": "Dongting Hu,Huan Fu,Jiaxian Guo,Liuhua Peng,Tingjin Chu,Feng Liu,Tongliang Liu,Mingming Gong", "tags": "NIPS 2024,Poster", "abstract": "Neural representations for 3D scenes have made substantial advancements recently, yet object removal remains a practical yet challenging issue, due to the absence of multi-view supervision over occluded areas. Diffusion Models (DMs), trained on extensive 2D images, show diverse and high-fidelity generative capabilities in the 2D domain. However, because they are not specifically trained on 3D data, their application to multi-view data often exacerbates inconsistency, hence impacting the overall quality of the 3D output. To address these issues, we introduce \"In-N-Out\", a novel approach that begins by inpainting a prior, i.e., the occluded area from a single view using DMs, followed by outstretching it to create multi-view inpaintings via latent alignments. Our analysis identifies that the variability in DMs' outputs mainly arises from initially sampled latents and intermediate latents predicted in the denoising process. We explicitly align the initial latents using a Neural Radiance Field (NeRF) to establish a consistent foundational structure in the inpainted area, complemented by an implicit alignment of intermediate latents through cross-view attention during the denoising phases, enhancing appearance consistency across views. To further enhance rendering results, we apply a patch-based hybrid loss to optimize NeRF. 
We demonstrate that our techniques effectively mitigate the challenges posed by inconsistencies in DMs and substantially improve the fidelity and coherence of inpainted 3D representations.", "pdf": "https://openreview.net/pdf/6e2e783e0eb3bbd0528f7fbc8af4ce316117e6ea.pdf"} {"title": "Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model", "url": "https://openreview.net/forum?id=NIcIdhyfQX", "detail_url": "https://openreview.net/forum?id=NIcIdhyfQX", "authors": "Jing Zhang,Linjiajie Fang,Kexin Shi,Wenjia Wang,Bingyi Jing", "tags": "NIPS 2024,Poster", "abstract": "``Distribution shift'' is the primary obstacle to the success of offline reinforcement learning. As a learning policy may take actions beyond the knowledge of the behavior policy (referred to as Out-of-Distribution (OOD) actions), the Q-values of these OOD actions can be easily overestimated. Consequently, the learning policy is optimized in a biased manner using the incorrectly recovered Q-value function. One commonly used idea to avoid the overestimation of Q-values is to make a pessimistic adjustment. Our key idea is to penalize the Q-values of OOD actions that correspond to high uncertainty. In this work, we propose Q-Distribution guided Q-learning (QDQ), which applies a pessimistic Q-value penalty in OOD regions based on uncertainty estimation. The uncertainty measure is based on the conditional Q-value distribution, which is learned via a high-fidelity and efficient consistency model. On the other hand, to avoid over-conservatism, we introduce an uncertainty-aware optimization objective to update the Q-value function. The proposed QDQ demonstrates solid theoretical guarantees for the accuracy of Q-value distribution learning and uncertainty measurement, as well as the performance of the learning policy. QDQ consistently exhibits strong performance in the D4RL benchmark and shows significant improvements for many tasks. Our code can be found at .", "pdf": "https://openreview.net/pdf/c5344fa64f307d985ee3f5f54986253446e5b50a.pdf"} {"title": "Amortized Bayesian Experimental Design for Decision-Making", "url": "https://openreview.net/forum?id=zBG7WogAvm", "detail_url": "https://openreview.net/forum?id=zBG7WogAvm", "authors": "Daolang Huang,Yujia Guo,Luigi Acerbi,Samuel Kaski", "tags": "NIPS 2024,Poster", "abstract": "Many critical decisions, such as personalized medical diagnoses and product pricing, are made based on insights gained from designing, observing, and analyzing a series of experiments. This highlights the crucial role of experimental design, which goes beyond merely collecting information on system parameters as in traditional Bayesian experimental design (BED), but also plays a key part in facilitating downstream decision-making. Most recent BED methods use an amortized policy network to rapidly design experiments. However, the information gathered through these methods is suboptimal for down-the-line decision-making, as the experiments are not inherently designed with downstream objectives in mind. In this paper, we present an amortized decision-aware BED framework that prioritizes maximizing downstream decision utility. We introduce a novel architecture, the Transformer Neural Decision Process (TNDP), capable of instantly proposing the next experimental design, whilst inferring the downstream decision, thus effectively amortizing both tasks within a unified workflow. 
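A schematic of the uncertainty-penalized Q-value at the heart of QDQ-style pessimism, assuming samples from a learned Q-distribution are available; the penalty coefficient and the standard-deviation proxy for uncertainty are assumptions of this sketch.

```python
# Hedged sketch: penalize Q-values in proportion to the spread of the learned
# Q-distribution, which tends to be large on out-of-distribution actions.
import torch

def pessimistic_q(q_samples: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """q_samples: (batch, n_samples) draws from the conditional Q-distribution."""
    q_mean = q_samples.mean(dim=-1)
    uncertainty = q_samples.std(dim=-1)   # proxy; high on OOD actions
    return q_mean - beta * uncertainty

torch.manual_seed(0)
q = torch.randn(4, 32) + 10.0
print(pessimistic_q(q))  # values pulled below the naive mean estimate
```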
We demonstrate the performance of our method across several tasks, showing that it can deliver informative designs and facilitate accurate decision-making.", "pdf": "https://openreview.net/pdf/67b2e48fbef5361774799536072d5907137d322c.pdf"} {"title": "Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning", "url": "https://openreview.net/forum?id=opaRhDvQRD", "detail_url": "https://openreview.net/forum?id=opaRhDvQRD", "authors": "Wang Xinrui,Chuanxing Geng,Wenhai Wan,Shao-Yuan Li,Songcan Chen", "tags": "NIPS 2024,Poster", "abstract": "Online continual learning (OCL) requires the models to learn from constant, endless streams of data. While significant efforts have been made in this field, most were focused on mitigating the \\textit{catastrophic forgetting} issue to achieve better classification ability, at the cost of a much heavier training workload. They overlooked that in real-world scenarios, e.g., in high-speed data stream environments, data do not pause to accommodate slow models. In this paper, we emphasize that \\textit{model throughput}-- defined as the maximum number of training samples that a model can process within a unit of time -- is equally important. It directly limits how much data a model can utilize and presents a challenging dilemma for current methods. With this understanding, we revisit key challenges in OCL from both empirical and theoretical perspectives, highlighting two critical issues beyond the well-documented catastrophic forgetting: (\\romannumeral1) Model's ignorance: the single-pass nature of OCL challenges models to learn effective features within constrained training time and storage capacity, leading to a trade-off between effective learning and model throughput; (\\romannumeral2) Model's myopia: the local learning nature of OCL on the current task leads the model to adopt overly simplified, task-specific features and \\textit{excessively sparse classifier}, resulting in the gap between the optimal solution for the current task and the global objective. To tackle these issues, we propose the Non-sparse Classifier Evolution framework (NsCE) to facilitate effective global discriminative feature learning with minimal time cost. NsCE integrates non-sparse maximum separation regularization and targeted experience replay techniques with the help of pre-trained models, enabling rapid acquisition of new globally discriminative features. Extensive experiments demonstrate the substantial improvements of our framework in performance, throughput and real-world practicality.", "pdf": "https://openreview.net/pdf/f234665a6b29bf4968da01a5adc0303e595efb5c.pdf"} {"title": "SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors", "url": "https://openreview.net/forum?id=YTHJ8O6SCB", "detail_url": "https://openreview.net/forum?id=YTHJ8O6SCB", "authors": "Chenyang Ma,Kai Lu,Ta-Ying Cheng,Niki Trigoni,Andrew Markham", "tags": "NIPS 2024,Poster", "abstract": "Current state-of-the-art spatial reasoning-enhanced VLMs are trained to excel at spatial visual question answering (VQA). However, we believe that higher-level 3D-aware tasks, such as articulating dynamic scene changes and motion planning, require a fundamental and explicit 3D understanding beyond current spatial VQA datasets. 
In this work, we present SpatialPIN, a framework designed to enhance the spatial reasoning capabilities of VLMs through prompting and interacting with priors from multiple 3D foundation models in a zero-shot, training-free manner. Extensive experiments demonstrate that our spatial reasoning-imbued VLM performs well on various forms of spatial VQA and can extend to help in various downstream robotics tasks such as pick and stack and trajectory planning.", "pdf": "https://openreview.net/pdf/867437d5e4ce3abc8790c6ec15f3bd74162253dc.pdf"} {"title": "Universal Online Convex Optimization with $1$ Projection per Round", "url": "https://openreview.net/forum?id=xNncVKbwwS", "detail_url": "https://openreview.net/forum?id=xNncVKbwwS", "authors": "Wenhao Yang,Yibo Wang,Peng Zhao,Lijun Zhang", "tags": "NIPS 2024,Poster", "abstract": "To address the uncertainty in function types, recent progress in online convex optimization (OCO) has spurred the development of universal algorithms that simultaneously attain minimax rates for multiple types of convex functions. However, for a $T$-round online problem, state-of-the-art methods typically conduct $O(\\log T)$ projections onto the domain in each round, a process potentially time-consuming with complicated feasible sets. In this paper, inspired by the black-box reduction of Cutkosky and Orabona [2018], we employ a surrogate loss defined over simpler domains to develop universal OCO algorithms that only require $1$ projection. Embracing the framework of prediction with expert advice, we maintain a set of experts for each type of functions and aggregate their predictions via a meta-algorithm. The crux of our approach lies in a uniquely designed expert-loss for strongly convex functions, stemming from an innovative decomposition of the regret into the meta-regret and the expert-regret. Our analysis sheds new light on the surrogate loss, facilitating a rigorous examination of the discrepancy between the regret of the original loss and that of the surrogate loss, and carefully controlling meta-regret under the strong convexity condition. With only $1$ projection per round, we establish optimal regret bounds for general convex, exponentially concave, and strongly convex functions simultaneously. Furthermore, we enhance the expert-loss to exploit the smoothness property, and demonstrate that our algorithm can attain small-loss regret for multiple types of convex and smooth functions.", "pdf": "https://openreview.net/pdf/7dc76a57e2e7e68167e764bbd0b24f559f8773a9.pdf"} {"title": "SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection", "url": "https://openreview.net/forum?id=ZsS0megTsh", "detail_url": "https://openreview.net/forum?id=ZsS0megTsh", "authors": "Yachao Liang,Min Yu,Gang Li,Jianguo Jiang,Boquan Li,Feng Yu,Ning Zhang,Xiang Meng,Weiqing Huang", "tags": "NIPS 2024,Poster", "abstract": "Detection of face forgery videos remains a formidable challenge in the field of digital forensics, especially the generalization to unseen datasets and common perturbations. In this paper, we tackle this issue by leveraging the synergy between audio and visual speech elements, embarking on a novel approach through audio-visual speech representation learning. Our work is motivated by the finding that audio signals, enriched with speech content, can provide precise information effectively reflecting facial movements. 
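To illustrate the "1 projection per round" budget targeted by the universal OCO abstract above, here is plain projected online gradient descent on an L2 ball; it shows the projection pattern only, not the paper's expert-aggregation meta-algorithm.

```python
# Projected OGD with exactly one projection per round (illustrative baseline).
import numpy as np

def project_ball(x: np.ndarray, radius: float = 1.0) -> np.ndarray:
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

rng = np.random.default_rng(0)
d, T = 5, 100
x = np.zeros(d)
for t in range(1, T + 1):
    g = rng.standard_normal(d)                 # stand-in loss gradient at x
    x = project_ball(x - g / np.sqrt(t))       # the single projection of round t
print(np.round(x, 3))
```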
To this end, we first learn precise audio-visual speech representations on real videos via a self-supervised masked prediction task, which encodes both local and global semantic information simultaneously. Then, the derived model is directly transferred to the forgery detection task. Extensive experiments demonstrate that our method outperforms the state-of-the-art methods in terms of cross-dataset generalization and robustness, without the participation of any fake video in model training.", "pdf": "https://openreview.net/pdf/2a2f26a592cbe5de920de47ed2ae5e4ee3cf043d.pdf"} {"title": "DiffuLT: Diffusion for Long-tail Recognition Without External Knowledge", "url": "https://openreview.net/forum?id=Kcsj9FGnKR", "detail_url": "https://openreview.net/forum?id=Kcsj9FGnKR", "authors": "Jie Shao,Ke Zhu,Hanxiao Zhang,Jianxin Wu", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces a novel pipeline for long-tail (LT) recognition that diverges from conventional strategies. Instead, it leverages the long-tailed dataset itself to generate a balanced proxy dataset without utilizing external data or models. We deploy a diffusion model trained from scratch on only the long-tailed dataset to create this proxy and verify the effectiveness of the data produced. Our analysis identifies approximately-in-distribution (AID) samples, which slightly deviate from the real data distribution and incorporate a blend of class information, as the crucial samples for enhancing the generative model's performance in long-tail classification. We promote the generation of AID samples during the training of a generative model by utilizing a feature extractor to guide the process and filter out detrimental samples during generation. Our approach, termed Diffusion model for Long-Tail recognition (DiffuLT), represents a pioneering application of generative models in long-tail recognition. DiffuLT achieves state-of-the-art results on CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, surpassing leading competitors by significant margins. Comprehensive ablations enhance the interpretability of our pipeline. Notably, the entire generative process is conducted without relying on external data or pre-trained model weights, which leads to its generalizability to real-world long-tailed scenarios.", "pdf": "https://openreview.net/pdf/0b7ba8adece9839b070c097763eba21357029fef.pdf"} {"title": "ImOV3D: Learning Open Vocabulary Point Clouds 3D Object Detection from Only 2D Images", "url": "https://openreview.net/forum?id=RCO9fRP8AJ", "detail_url": "https://openreview.net/forum?id=RCO9fRP8AJ", "authors": "Timing Yang,Yuanliang Ju,Li Yi", "tags": "NIPS 2024,Poster", "abstract": "Open-vocabulary 3D object detection (OV-3Det) aims to generalize beyond the limited number of base categories labeled during the training phase. The biggest bottleneck is the scarcity of annotated 3D data, whereas 2D image datasets are abundant and richly annotated. Consequently, it is intuitive to leverage the wealth of annotations in 2D images to alleviate the inherent data scarcity in OV-3Det. In this paper, we push the task setup to its limits by exploring the potential of using solely 2D images to learn OV-3Det. The major challenge for this setup is the modality gap between training images and testing point clouds, which prevents effective integration of 2D knowledge into OV-3Det. To address this challenge, we propose a novel framework ImOV3D to leverage pseudo multimodal representation containing both images and point clouds (PC) to close the modality gap. 
The key to ImOV3D lies in flexible modality conversion, where 2D images can be lifted into 3D using monocular depth estimation and can also be derived from 3D scenes through rendering. This allows unifying both training images and testing point clouds into a common image-PC representation, encompassing a wealth of 2D semantic information and also incorporating the depth and structural characteristics of 3D spatial data. We carefully conduct such conversion to minimize the domain gap between training and test cases. Extensive experiments on two benchmark datasets, SUNRGBD and ScanNet, show that ImOV3D significantly outperforms existing methods, even in the absence of ground truth 3D training data. With the inclusion of a minimal amount of real 3D data for fine-tuning, the performance also significantly surpasses the previous state-of-the-art. Code and pre-trained models are released at https://github.com/yangtiming/ImOV3D.", "pdf": "https://openreview.net/pdf/8bdc515c6aaa0f393f77cb92481d4453ce94dbf5.pdf"} {"title": "UQ-Guided Hyperparameter Optimization for Iterative Learners", "url": "https://openreview.net/forum?id=k9uZfaeerK", "detail_url": "https://openreview.net/forum?id=k9uZfaeerK", "authors": "Jiesong Liu,Feng Zhang,Jiawei Guan,Xipeng Shen", "tags": "NIPS 2024,Poster", "abstract": "Hyperparameter Optimization (HPO) plays a pivotal role in unleashing the potential of iterative machine learning models. This paper addresses a crucial aspect that has largely been overlooked in HPO: the impact of uncertainty in ML model training. The paper introduces the concept of uncertainty-aware HPO and presents a novel approach called the UQ-guided scheme for quantifying uncertainty. This scheme offers a principled and versatile method to empower HPO techniques in handling model uncertainty during their exploration of the candidate space.\nBy constructing a probabilistic model and implementing probability-driven candidate selection and budget allocation, this approach enhances the quality of the resulting model hyperparameters. It achieves a notable performance improvement of over 50\\% in terms of accuracy regret and exploration time.", "pdf": "https://openreview.net/pdf/c8864263288112957724326a0561151ae2fa6186.pdf"} {"title": "Bayesian Optimization of Functions over Node Subsets in Graphs", "url": "https://openreview.net/forum?id=KxjGi1krBi", "detail_url": "https://openreview.net/forum?id=KxjGi1krBi", "authors": "Huidong Liang,Xingchen Wan,Xiaowen Dong", "tags": "NIPS 2024,Poster", "abstract": "We address the problem of optimizing over functions defined on node subsets in a graph. The optimization of such functions is often a non-trivial task given their combinatorial, black-box and expensive-to-evaluate nature. Although various algorithms have been introduced in the literature, most are either task-specific or computationally inefficient and only utilize information about the graph structure without considering the characteristics of the function. To address these limitations, we utilize Bayesian Optimization (BO), a sample-efficient black-box solver, and propose a novel framework for combinatorial optimization on graphs. More specifically, we map each $k$-node subset in the original graph to a node in a new combinatorial graph and adopt a local modeling approach to efficiently traverse the latter graph by progressively sampling its subgraphs using a recursive algorithm. 
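The "lift 2D images into 3D via monocular depth estimation" step in ImOV3D-style pipelines boils down to a pinhole unprojection; the intrinsics below are made-up values for illustration.

```python
# Standard pinhole unprojection: depth map -> 3D point cloud.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.ones((4, 4))  # dummy depth map, one meter everywhere
print(depth_to_points(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0).shape)  # (16, 3)
```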
Extensive experiments under both synthetic and real-world setups demonstrate the effectiveness of the proposed BO framework on various types of graphs and optimization tasks, where its behavior is analyzed in detail with ablation studies.", "pdf": "https://openreview.net/pdf/4100c73ea0bf991eed715d01781fb5fd152ebb75.pdf"} {"title": "Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning", "url": "https://openreview.net/forum?id=XgwTH95kCl", "detail_url": "https://openreview.net/forum?id=XgwTH95kCl", "authors": "Mingcheng Li,Dingkang Yang,Yang Liu,Shunli Wang,Jiawei Chen,Shuaibing Wang,Jinjie Wei,Yue Jiang,Qingyao Xu,Xiaolu Hou,Mingyang Sun,Ziyun Qian,Dongliang Kou,Lihua Zhang", "tags": "NIPS 2024,Poster", "abstract": "Multimodal Sentiment Analysis (MSA) is an important research area that aims to understand and recognize human sentiment through multiple modalities. The complementary information provided by multimodal fusion promotes better sentiment analysis compared to utilizing only a single modality. Nevertheless, in real-world applications, many unavoidable factors may lead to situations with uncertain missing modalities, thus hindering the effectiveness of multimodal modeling and degrading the model\u2019s performance. To this end, we propose a Hierarchical Representation Learning Framework (HRLF) for the MSA task under uncertain missing modalities. Specifically, we propose a fine-grained representation factorization module that sufficiently extracts valuable sentiment information by factorizing each modality into sentiment-relevant and modality-specific representations through crossmodal translation and sentiment semantic reconstruction. Moreover, a hierarchical mutual information maximization mechanism is introduced to incrementally maximize the mutual information between multi-scale representations to align and reconstruct the high-level semantics in the representations. Ultimately, we propose a hierarchical adversarial learning mechanism that further aligns and adapts the latent distribution of sentiment-relevant representations to produce robust joint multimodal representations. Comprehensive experiments on three datasets demonstrate that HRLF significantly improves MSA performance under uncertain missing-modality cases.", "pdf": "https://openreview.net/pdf/2ea6e298e8c919cfc8fe8bdeecf9f3e0e4925a17.pdf"} {"title": "Can large language models explore in-context?", "url": "https://openreview.net/forum?id=OWPzhVqIux", "detail_url": "https://openreview.net/forum?id=OWPzhVqIux", "authors": "Akshay Krishnamurthy,Keegan Harris,Dylan J Foster,Cyril Zhang,Aleksandrs Slivkins", "tags": "NIPS 2024,Poster", "abstract": "We investigate the extent to which contemporary Large Language Models (LLMs) can engage in exploration, a core capability in reinforcement learning and decision making. We focus on native performance of existing LLMs, without training interventions. We deploy LLMs as agents in simple multi-armed bandit environments, specifying the environment description and interaction history entirely in-context, i.e., within the LLM prompt. 
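The subset-to-node mapping used by the BO-on-graphs abstract above is easy to sketch: every $k$-node subset becomes a node of a combinatorial graph whose edges connect subsets differing by one swap; the toy sizes are assumptions.

```python
# Build the combinatorial graph over k-node subsets of a small graph.
from itertools import combinations

def combinatorial_graph(n_nodes: int, k: int):
    subsets = [frozenset(c) for c in combinations(range(n_nodes), k)]
    edges = [(a, b) for i, a in enumerate(subsets) for b in subsets[i + 1:]
             if len(a & b) == k - 1]  # one-swap neighbors share k-1 elements
    return subsets, edges

subsets, edges = combinatorial_graph(n_nodes=5, k=2)
print(len(subsets), len(edges))  # 10 subset-nodes; local moves swap one element
```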
We experiment with GPT-3.5, GPT-4, and Llama2, using a variety of prompt designs, and find that the models do not robustly engage in exploration without substantial interventions: i) Only one configuration resulted in satisfactory exploratory behavior: GPT-4 with chain-of-thought reasoning and an externally summarized interaction history; ii) All other configurations did not result in robust exploratory behavior, including those with chain-of-thought reasoning but unsummarized history. While these findings can be interpreted positively, they suggest that external summarization\u2014which may not be possible in more complex settings\u2014is essential for desirable LLM behavior. We conclude that non-trivial algorithmic interventions, such as fine-tuning or dataset curation, may be required to empower LLM-based decision making agents in complex settings.", "pdf": "https://openreview.net/pdf/923ac7f98d88e534f33e4824eb39ac888ee96068.pdf"} {"title": "One-shot Federated Learning via Synthetic Distiller-Distillate Communication", "url": "https://openreview.net/forum?id=6292sp7HiE", "detail_url": "https://openreview.net/forum?id=6292sp7HiE", "authors": "Junyuan Zhang,Songhua Liu,Xinchao Wang", "tags": "NIPS 2024,Poster", "abstract": "One-shot Federated learning (FL) is a powerful technology facilitating collaborative training of machine learning models in a single round of communication. While its superiority lies in communication efficiency and privacy preservation compared to iterative FL, one-shot FL often compromises model performance. Prior research has primarily focused on employing data-free knowledge distillation to optimize data generators and ensemble models for better aggregating local knowledge into the server model. However, these methods typically struggle with data heterogeneity, where inconsistent local data distributions can cause teachers to provide misleading knowledge. Additionally, they may encounter scalability issues with complex datasets due to inherent two-step information loss: first, during local training (from data to model), and second, when transferring knowledge to the server model (from model to inversed data). In this paper, we propose FedSD2C, a novel and practical one-shot FL framework designed to address these challenges. FedSD2C introduces a distiller to synthesize informative distillates directly from local data to reduce information loss and proposes sharing synthetic distillates instead of inconsistent local models to tackle data heterogeneity. Our empirical results demonstrate that FedSD2C consistently outperforms other one-shot FL methods with more complex and real datasets, achieving up to 2.6 $\\times$ the performance of the best baseline. Code: https://github.com/Carkham/FedSD2C", "pdf": "https://openreview.net/pdf/e9fc9ea641481d8b3a60bc33d30efb2afc5cdb6f.pdf"} {"title": "Aligning Audio-Visual Joint Representations with an Agentic Workflow", "url": "https://openreview.net/forum?id=QMaLS4VeY3", "detail_url": "https://openreview.net/forum?id=QMaLS4VeY3", "authors": "Shentong Mo,Yibing Song", "tags": "NIPS 2024,Poster", "abstract": "Visual content and accompanying audio signals naturally form a joint representation to improve audio-visual (AV) related applications. While studies develop various AV representation learning frameworks, the importance of AV data alignment for achieving high-quality representations is usually underestimated. We observe that an audio signal may contain background noise interference. 
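The in-context bandit protocol described in the LLM-exploration abstract above, with an externally summarized history, might look like the following; `query_llm` is a hypothetical stub standing in for an actual LLM call, and the prompt wording is an assumption.

```python
# Sketch of an in-context bandit round with a summarized interaction history.
import numpy as np

def summarize(history: dict) -> str:
    return "\n".join(
        f"arm {arm}: pulled {len(r)}x, mean reward {np.mean(r):.2f}"
        for arm, r in history.items()
    )

def query_llm(prompt: str) -> int:
    # Hypothetical stub; a real experiment would call a chat model here.
    return int(np.random.default_rng(0).integers(0, 2))

history = {0: [1.0, 0.0], 1: [1.0]}
prompt = ("You face a 2-armed bandit and want to maximize total reward.\n"
          f"Summary of play so far:\n{summarize(history)}\n"
          "Reply with the arm to pull next: 0 or 1.")
print(prompt, "\nchosen arm:", query_llm(prompt))
```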
Also, audio and video streams may be out of sync. Such non-strict data alignment limits representation quality and degrades application performance. In this paper, we propose to improve AV joint representations from a data-centric perspective by aligning audio signals to visual data. Our alignment is conducted in an agentic workflow controlled by an LLM-based assistant named AVAgent. For each input AV data pair, our AVAgent uses a multi-modal LLM to convert audio and visual data into language descriptions separately (i.e., tool use). Then, AVAgent reasons whether this paired data is aligned well and plans to edit the audio signal if needed (i.e., planning). The audio editing is executed by predefined actions that filter noise or augment data. Moreover, we use a VLM to evaluate how modified audio signals match the visual content and provide feedback to AVAgent (i.e., reflection). The tool use, planning, and reflection steps operate cyclically to become an agentic workflow where audio signals are gradually aligned to visual content. As a result, existing methods can directly leverage the aligned AV data via our agentic workflow to improve AV joint representations. The experimental results comprehensively demonstrate the state-of-the-art performance of the proposed approach against previous baselines in diverse downstream tasks.", "pdf": "https://openreview.net/pdf/56396006acd9ab133fbb46575dda3fbf9cc38b24.pdf"} {"title": "Fast Rates in Stochastic Online Convex Optimization by Exploiting the Curvature of Feasible Sets", "url": "https://openreview.net/forum?id=Y58T1MQhh6", "detail_url": "https://openreview.net/forum?id=Y58T1MQhh6", "authors": "Taira Tsuchiya,Shinji Ito", "tags": "NIPS 2024,Poster", "abstract": "In this work, we explore online convex optimization (OCO) and introduce a new condition and analysis that provides fast rates by exploiting the curvature of feasible sets. In online linear optimization, it is known that if the average gradient of loss functions exceeds a certain threshold, the curvature of feasible sets can be exploited by the follow-the-leader (FTL) algorithm to achieve a logarithmic regret. This study reveals that algorithms adaptive to the curvature of loss functions can also leverage the curvature of feasible sets. In particular, we first prove that if an optimal decision is on the boundary of a feasible set and the gradient of an underlying loss function is non-zero, then the algorithm achieves a regret bound of $O(\\rho \\log T)$ in stochastic environments. Here, $\\rho > 0$ is the radius of the smallest sphere that includes the optimal decision and encloses the feasible set. Our approach, unlike existing ones, can work directly with convex loss functions, exploiting the curvature of loss functions simultaneously, and can achieve the logarithmic regret only with a local property of feasible sets. Additionally, the algorithm achieves an $O(\\sqrt{T})$ regret even in adversarial environments, in which FTL suffers an $\\Omega(T)$ regret, and achieves an $O(\\rho \\log T + \\sqrt{C \\rho \\log T})$ regret in corrupted stochastic environments with corruption level $C$. Furthermore, by extending our analysis, we establish a matching regret upper bound of $O\\Big(T^{\\frac{q-2}{2(q-1)}} (\\log T)^{\\frac{q}{2(q-1)}}\\Big)$ for $q$-uniformly convex feasible sets, where uniformly convex sets include strongly convex sets and $\\ell_p$-balls for $p \\in [2,\\infty)$. 
This bound bridges the gap between the $O(\\log T)$ bound for strongly convex sets~($q=2$) and the $O(\\sqrt{T})$ bound for non-curved sets~($q\\to\\infty$).", "pdf": "https://openreview.net/pdf/19fae4b5b2a7e08a3d922180e12f8a7af2894525.pdf"} {"title": "Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity", "url": "https://openreview.net/forum?id=pgUQFIJ6BE", "detail_url": "https://openreview.net/forum?id=pgUQFIJ6BE", "authors": "Qihao Zhou,Haishan Ye,Luo Luo", "tags": "NIPS 2024,Poster", "abstract": "This paper considers distributed convex-concave minimax optimization under the second-order similarity.\nWe propose the stochastic variance-reduced optimistic gradient sliding (SVOGS) method, which takes advantage of the finite-sum structure in the objective via mini-batch client sampling and variance reduction.\nWe prove SVOGS can achieve the $\\varepsilon$-duality gap within communication rounds of \n${\\mathcal O}(\\delta D^2/\\varepsilon)$, \ncommunication complexity of ${\\mathcal O}(n+\\sqrt{n}\\delta D^2/\\varepsilon)$,\nand local gradient calls of \n$\\tilde{\\mathcal O}(n+(\\sqrt{n}\\delta+L)D^2/\\varepsilon\\log(1/\\varepsilon))$, \nwhere $n$ is the number of nodes, $\\delta$ is the degree of the second-order similarity, $L$ is the smoothness parameter and $D$ is the diameter of the constraint set.\nAll of the above complexities (nearly) match the corresponding lower bounds.\nFor the specific $\\mu$-strongly-convex-$\\mu$-strongly-concave case, \nour algorithm has the upper bounds on communication rounds, communication complexity, and local gradient calls of $\\mathcal O(\\delta/\\mu\\log(1/\\varepsilon))$, ${\\mathcal O}((n+\\sqrt{n}\\delta/\\mu)\\log(1/\\varepsilon))$, and $\\tilde{\\mathcal O}((n+(\\sqrt{n}\\delta+L)/\\mu)\\log(1/\\varepsilon))$ respectively, which are also nearly tight.\nFurthermore, we conduct numerical experiments to show the empirical advantages of the proposed method.", "pdf": "https://openreview.net/pdf/b064e9a30204be0e7d128cbb02db397e56aba3b8.pdf"} {"title": "TAIA: Large Language Models are Out-of-Distribution Data Learners", "url": "https://openreview.net/forum?id=XxSME6GE1G", "detail_url": "https://openreview.net/forum?id=XxSME6GE1G", "authors": "Shuyang Jiang,Yusheng Liao,Ya Zhang,Yanfeng Wang,Yu Wang", "tags": "NIPS 2024,Poster", "abstract": "Fine-tuning on task-specific question-answer pairs is a predominant method for enhancing the performance of instruction-tuned large language models (LLMs) on downstream tasks. However, in certain specialized domains, such as healthcare or harmless content generation, it is nearly impossible to obtain a large volume of high-quality data that matches the downstream distribution. To improve the performance of LLMs in data-scarce domains with domain-mismatched data, we re-evaluated the Transformer architecture and discovered that not all parameter updates during fine-tuning contribute positively to downstream performance. Our analysis reveals that within the self-attention and feed-forward networks, only the fine-tuned attention parameters are particularly beneficial when the training set's distribution does not fully align with the test set. Based on this insight, we propose an effective inference-time intervention method: \\uline{T}raining \\uline{A}ll parameters but \\uline{I}nferring with only \\uline{A}ttention (TAIA). 
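The TAIA recipe, train all parameters but keep only the fine-tuned attention at inference, can be sketched with toy modules; the `attn` name filter is an assumption about how attention parameters are named, not the paper's exact implementation.

```python
# Hedged sketch of TAIA-style merging: tuned attention weights, base FFN weights.
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn_proj = nn.Linear(4, 4)   # stands in for attention parameters
        self.mlp = nn.Linear(4, 4)         # stands in for the feed-forward net

@torch.no_grad()
def merge_attention_only(base: nn.Module, tuned: nn.Module) -> nn.Module:
    state = base.state_dict()
    for name, param in tuned.state_dict().items():
        if "attn" in name:                 # adopt only fine-tuned attention
            state[name].copy_(param)
    return base                            # FFN weights stay at base values

base, tuned = TinyBlock(), TinyBlock()
merged = merge_attention_only(base, tuned)
print(torch.allclose(merged.attn_proj.weight, tuned.attn_proj.weight),  # True
      torch.allclose(merged.mlp.weight, tuned.mlp.weight))              # False
```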
We empirically validate TAIA using two general instruction-tuning datasets and evaluate it on seven downstream tasks involving math, reasoning, and knowledge understanding across LLMs of different parameter sizes and fine-tuning techniques. Our comprehensive experiments demonstrate that TAIA outperforms both the fully fine-tuned model and the base model in most scenarios, with significant performance gains. The high tolerance of TAIA to data mismatches makes it resistant to jailbreaking tuning and enhances specialized tasks using general data. Code is available at \\url{https://github.com/pixas/TAIA_LLM}.", "pdf": "https://openreview.net/pdf/e73ce6efb7b2652b077b7e393f0902b4622e151d.pdf"} {"title": "Improving Deep Learning Optimization through Constrained Parameter Regularization", "url": "https://openreview.net/forum?id=rCXTkIhkbF", "detail_url": "https://openreview.net/forum?id=rCXTkIhkbF", "authors": "J\u00f6rg K.H. Franke,Michael Hefenbrock,Gregor Koehler,Frank Hutter", "tags": "NIPS 2024,Poster", "abstract": "Regularization is a critical component in deep learning. The most commonly used approach, weight decay, applies a constant penalty coefficient uniformly across all parameters. This may be overly restrictive for some parameters, while insufficient for others. To address this, we present Constrained Parameter Regularization (CPR) as an alternative to traditional weight decay. Unlike the uniform application of a single penalty, CPR enforces an upper bound on a statistical measure, such as the L$_2$-norm, of individual parameter matrices. Consequently, learning becomes a constrained optimization problem, which we tackle using an adaptation of the augmented Lagrangian method. CPR introduces only a minor runtime overhead and only requires setting an upper bound. We propose simple yet efficient mechanisms for initializing this bound, so that CPR relies on at most one hyperparameter, akin to weight decay. Our empirical studies on computer vision and language modeling tasks demonstrate CPR's effectiveness. The results show that CPR can outperform traditional weight decay and increase performance in pre-training and fine-tuning.", "pdf": "https://openreview.net/pdf/550c636f174d72a8593d652148fdb97d6944ae3e.pdf"} {"title": "Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making", "url": "https://openreview.net/forum?id=aXYL24yhjN", "detail_url": "https://openreview.net/forum?id=aXYL24yhjN", "authors": "Drago Plecko,Elias Bareinboim", "tags": "NIPS 2024,Poster", "abstract": "As society increasingly relies on AI-based tools for decision-making in socially sensitive domains, investigating fairness and equity of such automated systems has become a critical field of inquiry. Most of the literature in fair machine learning focuses on defining and achieving fairness criteria in the context of prediction, while not explicitly focusing on how these predictions may be used later on in the pipeline. For instance, if commonly used criteria, such as independence or sufficiency, are satisfied for a prediction score $S$ used for binary classification, they need not be satisfied after an application of a simple thresholding operation on $S$ (as commonly used in practice). \nIn this paper, we take an important step to address this issue in numerous statistical and causal notions of fairness. 
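The augmented-Lagrangian update behind CPR-style constraints can be sketched on a single parameter matrix; the bound, learning rates, and stand-in loss below are assumptions of this sketch, not the paper's configuration.

```python
# Hedged sketch: enforce ||W||^2 <= kappa via a Lagrange multiplier that is
# increased when the constraint is violated and clipped at zero otherwise.
import torch

torch.manual_seed(0)
W = torch.randn(8, 8, requires_grad=True)
lam, kappa, mu, lr = 0.0, 10.0, 0.1, 0.05

for step in range(50):
    loss = (W.sum() - 3.0) ** 2                  # stand-in task loss
    penalty = lam * (W.pow(2).sum() - kappa)     # Lagrangian constraint term
    (loss + penalty).backward()
    with torch.no_grad():
        W -= lr * W.grad
        W.grad = None
        lam = max(0.0, lam + mu * float(W.pow(2).sum() - kappa))  # dual ascent

print(round(float(W.pow(2).sum()), 2), round(lam, 2))  # norm regulated near kappa
```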
We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation.\nWe then demonstrate that the marginal difference in the optimal 0/1 predictor $\\widehat Y$ between groups, written $P(\\hat y \\mid x_1) - P(\\hat y \\mid x_0)$, can be causally decomposed into the influences of $X$ on the $L_2$-optimal prediction score $S$ and the influences of $X$ on the margin complement $M$, along different causal pathways (direct, indirect, spurious). We then show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$. This yields a new decomposition of the disparity in the predictor $\\widehat Y$ that allows us to disentangle causal differences inherited from the true outcome $Y$ that exists in the real world vs. those coming from the optimization procedure itself. This observation highlights the need for more regulatory oversight due to the potential for bias amplification, and to address this issue we introduce new notions of weak and strong business necessity, together with an algorithm for assessing whether these notions are satisfied. We apply our method to three real-world datasets and derive new insights on bias amplification in prediction and decision-making.", "pdf": "https://openreview.net/pdf/168102b3afa2d4baaaf84041a7fb378b2db4e20f.pdf"} {"title": "Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics", "url": "https://openreview.net/forum?id=X34GKv8sYT", "detail_url": "https://openreview.net/forum?id=X34GKv8sYT", "authors": "Jonas Spinner,Victor Breso Pla,Pim De Haan,Tilman Plehn,Jesse Thaler,Johann Brehmer", "tags": "NIPS 2024,Poster", "abstract": "Extracting scientific understanding from particle-physics experiments requires solving diverse learning problems with high precision and good data efficiency. We propose the Lorentz Geometric Algebra Transformer (L-GATr), a new multi-purpose architecture for high-energy physics. L-GATr represents high-energy data in a geometric algebra over four-dimensional space-time and is equivariant under Lorentz transformations, the symmetry group of relativistic kinematics. At the same time, the architecture is a Transformer, which makes it versatile and scalable to large systems. L-GATr is first demonstrated on regression and classification tasks from particle physics. We then construct the first Lorentz-equivariant generative model: a continuous normalizing flow based on an L-GATr network, trained with Riemannian flow matching. Across our experiments, L-GATr is on par with or outperforms strong domain-specific baselines.", "pdf": "https://openreview.net/pdf/8ddc6739ac0f4a9405f3368dba8b88248133a915.pdf"} {"title": "Multi-Scale VMamba: Hierarchy in Hierarchy Visual State Space Model", "url": "https://openreview.net/forum?id=r70jUOpDCM", "detail_url": "https://openreview.net/forum?id=r70jUOpDCM", "authors": "Yuheng Shi,Minjing Dong,Chang Xu", "tags": "NIPS 2024,Poster", "abstract": "Despite the significant achievements of Vision Transformers (ViTs) in various vision tasks, they are constrained by the quadratic complexity. Recently, State Space Models (SSMs) have garnered widespread attention due to their global receptive field and linear complexity with respect to the input length, demonstrating substantial potential across fields including natural language processing and computer vision. 
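To make the margin complement from the "Mind the Gap" record above concrete: with $M = \hat{Y} - S$, the group disparity of the thresholded predictor splits exactly into a score term and a margin term. A toy numerical check on synthetic data (the 0.5 threshold and the score model are illustrative assumptions):

```python
# Toy illustration of the margin complement M = 1[S > 0.5] - S: the group
# disparity of the thresholded predictor decomposes exactly into a score
# term plus a margin-complement term. Synthetic data, for intuition only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=100_000)  # group indicator X
s = np.clip(0.4 + 0.2 * x + 0.1 * rng.standard_normal(x.size), 0, 1)  # score S
yhat = (s > 0.5).astype(float)  # thresholded 0/1 predictor
m = yhat - s                    # margin complement M

disparity = yhat[x == 1].mean() - yhat[x == 0].mean()
score_term = s[x == 1].mean() - s[x == 0].mean()
margin_term = m[x == 1].mean() - m[x == 0].mean()
assert np.isclose(disparity, score_term + margin_term)
```

The identity holds by construction since $\hat{Y} = S + M$ pointwise; the paper's contribution is decomposing each term further along causal pathways.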
To improve the performance of SSMs in vision tasks, a multi-scan strategy is widely adopted, which leads to significant redundancy of SSMs. For a better trade-off between efficiency and performance, we analyze the underlying reasons behind the success of the multi-scan strategy, where long-range dependency plays an important role. Based on the analysis, we introduce Multi-Scale Vision Mamba (MSVMamba) to preserve the superiority of SSMs in vision tasks with limited parameters. It employs a multi-scale 2D scanning technique on both original and downsampled feature maps, which not only benefits long-range dependency learning but also reduces computational costs. Additionally, we integrate a Convolutional Feed-Forward Network (ConvFFN) to address the lack of channel mixing. Our experiments demonstrate that MSVMamba is highly competitive, with the MSVMamba-Tiny model achieving 83.0% top-1 accuracy on ImageNet, 46.9% box mAP, and 42.5% instance mAP with the Mask R-CNN framework, 1x training schedule on COCO, and 47.9% mIoU with single-scale testing on ADE20K. Code is available at https://github.com/YuHengsss/MSVMamba.", "pdf": "https://openreview.net/pdf/38df95e2f7b9e4e65c57f789574f602bf2f62a6a.pdf"} {"title": "DiffCut: Catalyzing Zero-Shot Semantic Segmentation with Diffusion Features and Recursive Normalized Cut", "url": "https://openreview.net/forum?id=N0xNf9Qqmc", "detail_url": "https://openreview.net/forum?id=N0xNf9Qqmc", "authors": "Paul Couairon,Mustafa Shukor,Jean-Emmanuel HAUGEARD,Matthieu Cord,Nicolas THOME", "tags": "NIPS 2024,Poster", "abstract": "Foundation models have emerged as powerful tools across various domains including language, vision, and multimodal tasks. While prior works have addressed unsupervised semantic segmentation, they significantly lag behind supervised models. In this paper, we use a diffusion UNet encoder as a foundation vision encoder and introduce DiffCut, an unsupervised zero-shot segmentation method that solely harnesses the output features from the final self-attention block. Through extensive experimentation, we demonstrate that using these diffusion features in a graph based segmentation algorithm, significantly outperforms previous state-of-the-art methods on zero-shot segmentation. Specifically, we leverage a recursive Normalized Cut algorithm that regulates the granularity of detected objects and produces well-defined segmentation maps that precisely capture intricate image details. Our work highlights the remarkably accurate semantic knowledge embedded within diffusion UNet encoders that could then serve as foundation vision encoders for downstream tasks.", "pdf": "https://openreview.net/pdf/097d74b141112b34bb635c01a450683c24e5ea54.pdf"} {"title": "Multi-Agent Coordination via Multi-Level Communication", "url": "https://openreview.net/forum?id=3l2HnZXNou", "detail_url": "https://openreview.net/forum?id=3l2HnZXNou", "authors": "Ziluo Ding,Zeyuan Liu,Zhirui Fang,Kefan Su,Liwen Zhu,Zongqing Lu", "tags": "NIPS 2024,Poster", "abstract": "The partial observability and stochasticity in multi-agent settings can be mitigated by accessing more information about others via communication. However, the coordination problem still exists since agents cannot communicate actual actions with each other at the same time due to the circular dependencies. In this paper, we propose a novel multi-level communication scheme, Sequential Communication (SeqComm). 
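DiffCut's recursive Normalized Cut step (from the record above) can be sketched independently of the diffusion features that feed it: build an affinity graph over patch features, split along the Fiedler vector of the normalized Laplacian, and recurse. The cosine affinity, median split, and stopping rule below are illustrative assumptions.

```python
# Hedged sketch of recursive two-way normalized cut over feature vectors,
# in the spirit of DiffCut. Dense eigendecomposition keeps the sketch short;
# a real implementation would use sparse solvers.
import numpy as np
from scipy.linalg import eigh

def ncut_split(feats):
    a = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    W = np.clip(a @ a.T, 0, None)          # cosine affinity graph
    d = W.sum(1)
    L = np.diag(d) - W
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-8))
    _, vecs = eigh(D_inv_sqrt @ L @ D_inv_sqrt)
    fiedler = vecs[:, 1]                    # second-smallest eigenvector
    return fiedler >= np.median(fiedler)    # balanced 2-way split

def recursive_ncut(feats, idx=None, depth=3):
    idx = np.arange(len(feats)) if idx is None else idx
    if depth == 0 or len(idx) < 4:
        return [idx]
    mask = ncut_split(feats[idx])
    return (recursive_ncut(feats, idx[mask], depth - 1)
            + recursive_ncut(feats, idx[~mask], depth - 1))
```

The recursion depth plays the role of the granularity control mentioned in the abstract: deeper recursion yields finer segments.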
SeqComm treats agents asynchronously (the upper-level agents make decisions before the lower-level ones) and has two communication phases. In the negotiation phase, agents determine the priority of decision-making by communicating hidden states of observations and comparing the value of intention, which is obtained by modeling the environment dynamics. In the launching phase, the upper-level agents take the lead in making decisions and then communicate their actions with the lower-level agents. Theoretically, we prove the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we show that SeqComm outperforms existing methods in a variety of cooperative multi-agent tasks.", "pdf": "https://openreview.net/pdf/111bdb476557b41f6d7f48240d4c392f82fcbd34.pdf"} {"title": "GSDF: 3DGS Meets SDF for Improved Neural Rendering and Reconstruction", "url": "https://openreview.net/forum?id=r6V7EjANUK", "detail_url": "https://openreview.net/forum?id=r6V7EjANUK", "authors": "Mulin Yu,Tao Lu,Linning Xu,Lihan Jiang,Yuanbo Xiangli,Bo Dai", "tags": "NIPS 2024,Poster", "abstract": "Representing 3D scenes from multiview images remains a core challenge in computer vision and graphics, requiring both reliable rendering and reconstruction, which often conflicts due to the mismatched prioritization of image quality over precise underlying scene geometry. Although both neural implicit surfaces and explicit Gaussian primitives have advanced with neural rendering techniques, current methods impose strict constraints on density fields or primitive shapes, which enhances the affinity for geometric reconstruction at the sacrifice of rendering quality. To address this dilemma, we introduce GSDF, a dual-branch architecture combining 3D Gaussian Splatting (3DGS) and neural Signed Distance Fields (SDF). Our approach leverages mutual guidance and joint supervision during the training process to mutually enhance reconstruction and rendering. Specifically, our method guides the Gaussian primitives to locate near potential surfaces and accelerates the SDF convergence. This implicit mutual guidance ensures robustness and accuracy in both synthetic and real-world scenarios. Experimental results demonstrate that our method boosts the SDF optimization process to reconstruct more detailed geometry, while reducing floaters and blurry edge artifacts in rendering by aligning Gaussian primitives with the underlying geometry.", "pdf": "https://openreview.net/pdf/6249ed4cb62d4456ad444614a177383b1b92faf9.pdf"} {"title": "Amortizing intractable inference in diffusion models for vision, language, and control", "url": "https://openreview.net/forum?id=gVTkMsaaGI", "detail_url": "https://openreview.net/forum?id=gVTkMsaaGI", "authors": "Siddarth Venkatraman,Moksh Jain,Luca Scimeca,Minsu Kim,Marcin Sendera,Mohsin Hasan,Luke Rowe,Sarthak Mittal,Pablo Lemos,Emmanuel Bengio,Alexandre Adam,Jarrid Rector-Brooks,Yoshua Bengio,Glen Berseth,Nikolay Malkin", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have emerged as effective distribution estimators in vision, language, and reinforcement learning, but their use as priors in downstream tasks poses an intractable posterior inference problem. This paper studies *amortized* sampling of the posterior over data, $\\mathbf{x}\\sim p^{\\rm post}(\\mathbf{x})\\propto p(\\mathbf{x})r(\\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\\mathbf{x})$ and a black-box constraint or likelihood function $r(\\mathbf{x})$. 
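Before the relative trajectory balance objective described next, it helps to see the target posterior $p^{\rm post}(x) \propto p(x)r(x)$ in its simplest form. The sketch below is a naive self-normalized importance sampler over prior draws, shown only to make the problem concrete; it is not the paper's method, and it degrades badly when $r$ concentrates mass away from the prior.

```python
# Naive baseline for sampling x ~ p_post(x) ∝ p(x) r(x) given only prior
# samples and a black-box r: self-normalized importance sampling with
# resampling. Assumes prior_sample_fn returns a numpy array of n samples.
import numpy as np

def snis_posterior_samples(prior_sample_fn, r, n=10_000, k=100, seed=0):
    rng = np.random.default_rng(seed)
    xs = prior_sample_fn(n)                    # x_i ~ p(x)
    w = np.array([r(x) for x in xs], float)    # unnormalized weights r(x_i)
    w /= w.sum()
    idx = rng.choice(n, size=k, p=w)           # resample proportionally to r
    return xs[idx]
```

Amortized approaches like the one in the record below instead train a sampler so that no reweighting is needed at inference time.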
We state and prove the asymptotic correctness of a data-free learning objective, *relative trajectory balance*, for training a diffusion model that samples from this posterior, a problem that existing methods solve only approximately or in restricted cases. Relative trajectory balance arises from the generative flow network perspective on diffusion models, which allows the use of deep reinforcement learning techniques to improve mode coverage. Experiments illustrate the broad potential of unbiased inference of arbitrary posteriors under diffusion priors: in vision (classifier guidance), language (infilling under a discrete diffusion LLM), and multimodal data (text-to-image generation). Beyond generative modeling, we apply relative trajectory balance to the problem of continuous control with a score-based behavior prior, achieving state-of-the-art results on benchmarks in offline reinforcement learning. Code is available at [this link](https://github.com/GFNOrg/diffusion-finetuning).", "pdf": "https://openreview.net/pdf/84a151267033cbd847568171271da8d12bb4d656.pdf"} {"title": "FEEL-SNN: Robust Spiking Neural Networks with Frequency Encoding and Evolutionary Leak Factor", "url": "https://openreview.net/forum?id=TuCQdBo4NC", "detail_url": "https://openreview.net/forum?id=TuCQdBo4NC", "authors": "Mengting Xu,De Ma,Huajin Tang,Qian Zheng,Gang Pan", "tags": "NIPS 2024,Poster", "abstract": "Currently, researchers think that the inherent robustness of spiking neural networks (SNNs) stems from their biologically plausible spiking neurons, and are dedicated to developing more bio-inspired models to defend against attacks. However, most work relies solely on experimental analysis and lacks theoretical support, and the direct-encoding method and fixed membrane potential leak factor they used in spiking neurons are simplified simulations of those in the biological nervous system, which makes it difficult to ensure generalizability across all datasets and networks. Contrarily, the biological nervous system can stay reliable even in a highly complex noise environment; one reason is selective visual attention and non-fixed membrane potential leaks in biological neurons. This biological finding has inspired us to design a highly robust SNN model that closely mimics the biological nervous system. In our study, we first present a unified theoretical framework for SNN robustness constraint, which suggests that improving the encoding method and evolution of the membrane potential leak factor in spiking neurons can improve SNN robustness. Subsequently, we propose a robust SNN (FEEL-SNN) with Frequency Encoding (FE) and Evolutionary Leak factor (EL) to defend against different noises, mimicking the selective visual attention mechanism and non-fixed leak observed in biological systems. Experimental results confirm the efficacy of our FE, EL, and FEEL methods, either in isolation or in conjunction with established robust enhancement algorithms, for enhancing the robustness of SNNs.", "pdf": "https://openreview.net/pdf/ba1fe9863d1632c6c4af1ded4fb5b68228ab8df3.pdf"} {"title": "Pseudo-Siamese Blind-spot Transformers for Self-Supervised Real-World Denoising", "url": "https://openreview.net/forum?id=O3nPufVaee", "detail_url": "https://openreview.net/forum?id=O3nPufVaee", "authors": "Yuhui Quan,Tianxiang Zheng,Hui Ji", "tags": "NIPS 2024,Poster", "abstract": "Real-world image denoising remains a challenging task.
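A minimal leaky integrate-and-fire sketch clarifies what a non-fixed, here learnable per-timestep, leak factor means in the FEEL-SNN record above. The parameterization is an illustrative assumption, and the surrogate gradients needed to train through the spike threshold are omitted for brevity.

```python
# LIF-neuron sketch contrasting a fixed membrane-potential leak with a
# learnable ("evolutionary" in the paper's terminology) per-timestep leak.
# Surrogate gradients for the non-differentiable threshold are omitted.
import torch
import torch.nn as nn

class LIFWithLearnableLeak(nn.Module):
    def __init__(self, num_steps, threshold=1.0):
        super().__init__()
        self.leak = nn.Parameter(0.5 * torch.ones(num_steps))  # one leak per step
        self.threshold = threshold

    def forward(self, currents):  # currents: (T, batch, features)
        v, spikes = torch.zeros_like(currents[0]), []
        for t, i_t in enumerate(currents):
            v = torch.sigmoid(self.leak[t]) * v + i_t  # leaky integration
            s = (v >= self.threshold).float()          # spike emission
            v = v - s * self.threshold                 # soft reset
            spikes.append(s)
        return torch.stack(spikes)
```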
This paper studies self-supervised image denoising, requiring only noisy images captured in a single shot. We revamp the blind-spot technique by leveraging the transformer\u2019s capability for long-range pixel interactions, which is crucial for effectively removing noise dependence among related pixels\u2013a requirement for achieving strong performance with the blind-spot technique. The proposed method integrates these elements with two key innovations: a directional self-attention (DSA) module using a half-plane grid for self-attention, creating a sophisticated blind-spot structure, and a Siamese architecture with mutual learning to mitigate the performance impacts\nfrom the restricted attention grid in DSA. Experiments on benchmark datasets demonstrate that our method outperforms existing self-supervised and clean-image-free methods. This combination of blind-spot and transformer techniques provides a natural synergy for tackling real-world image denoising challenges.", "pdf": "https://openreview.net/pdf/8c9e292154114bd0f46f614aad97ac42d4c4be33.pdf"} {"title": "Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models", "url": "https://openreview.net/forum?id=tEEpVPDaRf", "detail_url": "https://openreview.net/forum?id=tEEpVPDaRf", "authors": "Sangwon Jang,Jaehyeong Jo,Kimin Lee,Sung Ju Hwang", "tags": "NIPS 2024,Poster", "abstract": "Text-to-image diffusion models have shown remarkable success in generating personalized subjects based on a few reference images. However, current methods often fail when generating multiple subjects simultaneously, resulting in mixed\nidentities with combined attributes from different subjects. In this work, we present MuDI, a novel framework that enables multi-subject personalization by effectively decoupling identities from multiple subjects. Our main idea is to utilize segmented subjects generated by a foundation model for segmentation (Segment Anything) for both training and inference, as a form of data augmentation for training and initialization for the generation process. Moreover, we further introduce a new metric to better evaluate the performance of our method on multi-subject personalization. Experimental results show that our MuDI can produce high-quality personalized images without identity mixing, even for highly similar subjects as shown in Figure 1. Specifically, in human evaluation, MuDI obtains twice the success rate for personalizing multiple subjects without identity mixing over existing baselines, and is preferred more than 70% of the time against the strongest baseline.", "pdf": "https://openreview.net/pdf/76a3c0edf9676a9a55092b29e47710fc0b9fcc5d.pdf"} {"title": "TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene", "url": "https://openreview.net/forum?id=UPxFYvHsyN", "detail_url": "https://openreview.net/forum?id=UPxFYvHsyN", "authors": "Sandika Biswas,Qianyi Wu,Biplab Banerjee,Hamid Rezatofighi", "tags": "NIPS 2024,Poster", "abstract": "Despite advancements in Neural Implicit models for 3D surface reconstruction, handling dynamic environments with interactions between arbitrary rigid, non-rigid, or deformable entities remains challenging. The generic reconstruction methods adaptable to such dynamic scenes often require additional inputs like depth or optical flow or rely on pre-trained image features for reasonable outcomes. These methods typically use latent codes to capture frame-by-frame deformations.
Another set of dynamic scene reconstruction methods is entity-specific, mostly focusing on humans, and relies on template models. In contrast, some template-free methods bypass these requirements and adopt traditional LBS (Linear Blend Skinning) weights for a detailed representation of deformable object motions,\nalthough they involve complex optimizations leading to lengthy training times. As a remedy, this paper introduces TFS-NeRF, a template-free 3D semantic NeRF for dynamic scenes captured from sparse or single-view RGB videos featuring interactions between two entities, which is more time-efficient than other LBS-based approaches. Our framework uses an Invertible Neural Network (INN) for LBS prediction, simplifying the training process. By disentangling the motions of interacting entities and optimizing per-entity skinning weights, our method efficiently generates accurate, semantically separable geometries. Extensive experiments demonstrate that our approach produces high-quality reconstructions of both deformable and non-deformable objects in complex interactions, with improved\ntraining efficiency compared to existing methods. The code and models will be available on our github page.", "pdf": "https://openreview.net/pdf/d57633f66c3a74c9513088d716ffdb90285b9d67.pdf"} {"title": "Language Models as Hierarchy Encoders", "url": "https://openreview.net/forum?id=GJMYvWzjE1", "detail_url": "https://openreview.net/forum?id=GJMYvWzjE1", "authors": "Yuan He,Moy Yuan,Jiaoyan Chen,Ian Horrocks", "tags": "NIPS 2024,Poster", "abstract": "Interpreting hierarchical structures latent in language is a key limitation of current language models (LMs). While previous research has implicitly leveraged these hierarchies to enhance LMs, approaches for their explicit encoding are yet to be explored. To address this, we introduce a novel approach to re-train transformer encoder-based LMs as Hierarchy Transformer encoders (HiTs), harnessing the expansive nature of hyperbolic space. Our method situates the output embedding space of pre-trained LMs within a Poincar\u00e9 ball with a curvature that adapts to the embedding dimension, followed by re-training on hyperbolic clustering and centripetal losses. These losses are designed to effectively cluster related entities (input as texts) and organise them hierarchically. We evaluate HiTs against pre-trained LMs, standard fine-tuned LMs, and several hyperbolic embedding baselines, focusing on their capabilities in simulating transitive inference, predicting subsumptions, and transferring knowledge across hierarchies. The results demonstrate that HiTs consistently outperform all baselines in these tasks, underscoring the effectiveness and transferability of our re-trained hierarchy encoders.", "pdf": "https://openreview.net/pdf/6f04e448e3391617623ed9cb94961aa485f78741.pdf"} {"title": "R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction", "url": "https://openreview.net/forum?id=fMWrTAe5Iy", "detail_url": "https://openreview.net/forum?id=fMWrTAe5Iy", "authors": "Ruyi Zha,Tao Jun Lin,Yuanhao Cai,Jiwen Cao,Yanhao Zhang,Hongdong Li", "tags": "NIPS 2024,Poster", "abstract": "3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction. However, its potential in volumetric reconstruction tasks, such as X-ray computed tomography, remains under-explored. This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
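The hyperbolic losses in the "Language Models as Hierarchy Encoders" record above have a compact form worth writing out: a Poincaré-ball distance, a clustering loss pulling child entities toward parents and away from negatives, and a centripetal loss keeping parents nearer the origin. The margins and loss composition below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of HiT-style hyperbolic training signals: Poincaré-ball
# distance, a triplet-style clustering loss, and a centripetal loss that
# places parents closer to the ball's origin than their children.
import torch

def poincare_dist(u, v, eps=1e-6):
    num = 2 * ((u - v) ** 2).sum(-1)
    denom = (1 - (u * u).sum(-1)).clamp_min(eps) * (1 - (v * v).sum(-1)).clamp_min(eps)
    return torch.acosh(1 + num / denom)

def hit_losses(child, parent, negative, margin=0.1):
    clustering = torch.relu(poincare_dist(child, parent)
                            - poincare_dist(child, negative) + margin).mean()
    centripetal = torch.relu(parent.norm(dim=-1) - child.norm(dim=-1) + margin).mean()
    return clustering + centripetal
```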
By carefully deriving X-ray rasterization functions, we discover a previously unknown \\emph{integration bias} in the standard 3DGS formulation, which hampers accurate volume retrieval. To address this issue, we propose a novel rectification technique via refactoring the projection from 3D to 2D Gaussians. Our new method presents three key innovations: (1) introducing tailored Gaussian kernels, (2) extending rasterization to X-ray imaging, and (3) developing a CUDA-based differentiable voxelizer. Experiments on synthetic and real-world datasets demonstrate that our method outperforms state-of-the-art approaches in accuracy and efficiency. Crucially, it delivers high-quality results in 4 minutes, which is 12$\\times$ faster than NeRF-based methods and on par with traditional algorithms.", "pdf": "https://openreview.net/pdf/e43faafc86ce94cf84611b5f4dec0417d18daa24.pdf"} {"title": "Referring Human Pose and Mask Estimation In the Wild", "url": "https://openreview.net/forum?id=fXEi3LVflp", "detail_url": "https://openreview.net/forum?id=fXEi3LVflp", "authors": "Bo Miao,Mingtao Feng,Zijie Wu,Mohammed Bennamoun,Yongsheng Gao,Ajmal Saeed Mian", "tags": "NIPS 2024,Poster", "abstract": "We introduce Referring Human Pose and Mask Estimation (R-HPM) in the wild, where either a text or positional prompt specifies the person of interest in an image. This new task holds significant potential for human-centric applications such as assistive robotics and sports analysis. In contrast to previous works, R-HPM (i) ensures high-quality, identity-aware results corresponding to the referred person, and (ii) simultaneously predicts human pose and mask for a comprehensive representation. To achieve this, we introduce a large-scale dataset named RefHuman, which substantially extends the MS COCO dataset with additional text and positional prompt annotations. RefHuman includes over 50,000 annotated instances in the wild, each equipped with keypoint, mask, and prompt annotations. To enable prompt-conditioned estimation, we propose the first end-to-end promptable approach named UniPHD for R-HPM. UniPHD extracts multimodal representations and employs a proposed pose-centric hierarchical decoder to process (text or positional) instance queries and keypoint queries, producing results specific to the referred person. Extensive experiments demonstrate that UniPHD produces quality results based on user-friendly prompts and achieves top-tier performance on RefHuman val and MS COCO val2017.", "pdf": "https://openreview.net/pdf/3a47c08a3759b5f79ef8599d6d6317ead8042340.pdf"} {"title": "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning", "url": "https://openreview.net/forum?id=Pwl9n4zlf5", "detail_url": "https://openreview.net/forum?id=Pwl9n4zlf5", "authors": "Minghao Chen,Yihang Li,Yanting Yang,Shiyu Yu,Binbin Lin,Xiaofei He", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLM) based agents have shown promise in autonomously completing tasks across various domains, e.g., robotics, games, and web navigation. However, these agents typically require elaborate design and expert prompts to solve tasks in specific domains, which limits their adaptability. We introduce AutoManual, a framework enabling LLM agents to autonomously build their understanding through interaction and adapt to new environments. 
AutoManual categorizes environmental knowledge into diverse rules and optimizes them in an online fashion by two agents: 1) The Planner codes actionable plans based on current rules for interacting with the environment. 2) The Builder updates the rules through a well-structured rule system that facilitates online rule management and essential detail retention. To mitigate hallucinations in managing rules, we introduce a *case-conditioned prompting* strategy for the Builder. Finally, the Formulator agent compiles these rules into a comprehensive manual. The self-generated manual can not only improve the adaptability but also guide the planning of smaller LLMs while being human-readable. Given only one simple demonstration, AutoManual significantly improves task success rates, achieving 97.4\\% with GPT-4-turbo and 86.2\\% with GPT-3.5-turbo on ALFWorld benchmark tasks. The code is available at https://github.com/minghchen/automanual.", "pdf": "https://openreview.net/pdf/fa7bc0e7a61d8579a98e5eadb278e05289e2611c.pdf"} {"title": "SeeClear: Semantic Distillation Enhances Pixel Condensation for Video Super-Resolution", "url": "https://openreview.net/forum?id=zeaBrGv7Ll", "detail_url": "https://openreview.net/forum?id=zeaBrGv7Ll", "authors": "Qi Tang,Yao Zhao,Meiqin Liu,Chao Yao", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based Video Super-Resolution (VSR) is renowned for generating perceptually realistic videos, yet it grapples with maintaining detail consistency across frames due to stochastic fluctuations. The traditional approach of pixel-level alignment is ineffective for diffusion-processed frames because of iterative disruptions. To overcome this, we introduce SeeClear--a novel VSR framework leveraging conditional video generation, orchestrated by instance-centric and channel-wise semantic controls. This framework integrates a Semantic Distiller and a Pixel Condenser, which synergize to extract and upscale semantic details from low-resolution frames. The Instance-Centric Alignment Module (InCAM) utilizes video-clip-wise tokens to dynamically relate pixels within and across frames, enhancing coherency. Additionally, the Channel-wise Texture Aggregation Memory (CaTeGory) infuses extrinsic knowledge, capitalizing on long-standing semantic textures. Our method also innovates the blurring diffusion process with the ResShift mechanism, finely balancing between sharpness and diffusion effects. Comprehensive experiments confirm our framework's advantage over state-of-the-art diffusion-based VSR techniques.", "pdf": "https://openreview.net/pdf/01c997cd807103ddbfa717a7445b4e1837ebeb53.pdf"} {"title": "Scalable DBSCAN with Random Projections", "url": "https://openreview.net/forum?id=dmhi2ydnXZ", "detail_url": "https://openreview.net/forum?id=dmhi2ydnXZ", "authors": "HaoChuan Xu,Ninh Pham", "tags": "NIPS 2024,Poster", "abstract": "We present sDBSCAN, a scalable density-based clustering algorithm in high dimensions with cosine distance. sDBSCAN leverages recent advancements in random projections given a significantly large number of random vectors to quickly identify core points and their neighborhoods, the primary hurdle of density-based clustering. Theoretically, sDBSCAN preserves the DBSCAN\u2019s clustering structure under mild conditions with high probability. To facilitate sDBSCAN, we present sOPTICS, a scalable visual tool to guide the parameter setting of sDBSCAN. We also extend sDBSCAN and sOPTICS to L2, L1, \u03c72, and Jensen-Shannon distances via random kernel features. 
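The random-projection idea behind sDBSCAN (record above) admits a compact sketch: with many random vectors, each point is keyed by the direction it aligns with most strongly, and the exact epsilon-neighborhood checks needed for core-point identification are restricted to colliding candidates. Bucketing by a single best direction, as below, is a simplification of the actual multi-vector scheme.

```python
# Hedged sketch of random-projection candidate generation for cosine-distance
# DBSCAN: points sharing the most-aligned random direction become neighbor
# candidates, avoiding an exhaustive all-pairs scan. Parameters illustrative.
import numpy as np
from collections import defaultdict

def candidate_neighbors(X, n_projections=256, seed=0):
    rng = np.random.default_rng(seed)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    R = rng.standard_normal((X.shape[1], n_projections))
    best = np.argmax(Xn @ R, axis=1)   # most-aligned random vector per point
    buckets = defaultdict(list)
    for i, b in enumerate(best):
        buckets[b].append(i)
    return buckets  # run exact eps-neighborhood checks only within buckets
```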
Empirically, sDBSCAN is significantly faster and provides higher accuracy than competitive DBSCAN variants on real-world million-point data sets. On these data sets, sDBSCAN and sOPTICS run in a few minutes, while the scikit-learn counterparts and other clustering competitors demand several hours or\ncannot run on our hardware due to memory constraints. Our code is available at https://github.com/NinhPham/sDbscan.", "pdf": "https://openreview.net/pdf/502f19430e0e9e02d665c47884efcd8195484aeb.pdf"} {"title": "Challenges of Generating Structurally Diverse Graphs", "url": "https://openreview.net/forum?id=bbGPoL1NLo", "detail_url": "https://openreview.net/forum?id=bbGPoL1NLo", "authors": "Fedor Velikonivtsev,Mikhail Mironov,Liudmila Prokhorenkova", "tags": "NIPS 2024,Poster", "abstract": "For many graph-related problems, it can be essential to have a set of structurally diverse graphs. For instance, such graphs can be used for testing graph algorithms or their neural approximations. However, to the best of our knowledge, the problem of generating structurally diverse graphs has not been explored in the literature. In this paper, we fill this gap. First, we discuss how to define diversity for a set of graphs, why this task is non-trivial, and how one can choose a proper diversity measure. Then, for a given diversity measure, we propose and compare several algorithms optimizing it: we consider approaches based on standard random graph models, local graph optimization, genetic algorithms, and neural generative models. We show that it is possible to significantly improve diversity over basic random graph generators. Additionally, our analysis of generated graphs allows us to better understand the properties of graph distances: depending on which diversity measure is used for optimization, the obtained graphs may possess very different structural properties which gives a better understanding of the graph distance underlying the diversity measure.", "pdf": "https://openreview.net/pdf/3f898eb088d9063f7672ca99a4b419a4d3892f9a.pdf"} {"title": "Calibrating Reasoning in Language Models with Internal Consistency", "url": "https://openreview.net/forum?id=udZKVMPf3S", "detail_url": "https://openreview.net/forum?id=udZKVMPf3S", "authors": "Zhihui Xie,Jizhou Guo,Tong Yu,Shuai Li", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have demonstrated impressive capabilities in various reasoning tasks, aided by techniques like chain-of-thought prompting that elicits verbalized reasoning. However, LLMs often generate text with obvious mistakes and contradictions, raising doubts about their ability to robustly process and utilize generated rationales. In this work, we investigate reasoning in LLMs through the lens of internal representations, focusing on how these representations are influenced by generated rationales. Our preliminary analysis reveals that while generated rationales improve answer accuracy, inconsistencies emerge between the model\u2019s internal representations in middle layers and those in final layers, potentially undermining the reliability of their reasoning processes. To address this, we propose internal consistency as a measure of the model\u2019s confidence by examining the agreement of latent predictions decoded from intermediate layers. Extensive empirical studies across different models and datasets demonstrate that internal consistency effectively distinguishes between correct and incorrect reasoning paths. 
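The internal-consistency measure just described can be sketched with a logit-lens style readout, which is an assumption here rather than the paper's stated implementation: decode a latent prediction from each middle layer at the answer position and measure agreement with the final layer's prediction.

```python
# Sketch of an internal-consistency score: agreement between intermediate-
# layer latent predictions (decoded through the unembedding matrix) and the
# final prediction at the answer position. Readout choice is an assumption.
import torch

@torch.no_grad()
def internal_consistency(hidden_states, unembed, final_logits):
    """hidden_states: list of (d,) tensors from middle layers at the answer
    position; unembed: (vocab, d) readout; returns agreement in [0, 1]."""
    final_pred = final_logits.argmax()
    votes = [(unembed @ h).argmax() == final_pred for h in hidden_states]
    return torch.stack([v.float() for v in votes]).mean().item()
```

Scores like this can then serve as per-path weights when aggregating multiple sampled reasoning paths, which is the calibration use described next.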
Motivated by this, we propose a new approach to calibrate reasoning by up-weighting reasoning paths with high internal consistency, resulting in a significant boost in reasoning performance. Further analysis uncovers distinct patterns in attention and feed-forward modules across layers, providing insights into the emergence of internal inconsistency. In summary, our results demonstrate the potential of using internal representations for self-evaluation of LLMs.", "pdf": "https://openreview.net/pdf/53780dc13be7167b8ffc365abb20097c74b52476.pdf"} {"title": "Multi-turn Reinforcement Learning with Preference Human Feedback", "url": "https://openreview.net/forum?id=rVSc3HIZS4", "detail_url": "https://openreview.net/forum?id=rVSc3HIZS4", "authors": "Lior Shani,Aviv Rosenberg,Asaf Cassel,Oran Lang,Daniele Calandriello,Avital Zipori,Hila Noga,Orgad Keller,Bilal Piot,Idan Szpektor,Avinatan Hassidim,Yossi Matias,Remi Munos", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning Large Language Models (LLMs) with human preferences, allowing LLMs to demonstrate remarkable abilities in various tasks. Existing methods work by emulating the human preference at the single decision (turn) level, limiting their capabilities in settings that require planning or multi-turn interactions to achieve a long-term goal. In this paper, we address this issue by developing novel methods for Reinforcement Learning (RL) from preference feedback between two full multi-turn conversations. In the tabular setting, we present a novel mirror-descent-based policy optimization algorithm for the general multi-turn preference-based RL problem, and prove its convergence to Nash equilibrium. To evaluate performance, we create a new environment, Education Dialogue, where a teacher agent guides a student in learning a random topic, and show that a deep RL variant of our algorithm outperforms RLHF baselines. Finally, we show that in an environment with explicit rewards, our algorithm recovers the same performance as a reward-based RL baseline, despite relying solely on a weaker preference signal.", "pdf": "https://openreview.net/pdf/4b797018cf3a0b6670839025a87efe1cbd4ef3e4.pdf"} {"title": "Can Graph Learning Improve Planning in LLM-based Agents?", "url": "https://openreview.net/forum?id=bmoS6Ggw4j", "detail_url": "https://openreview.net/forum?id=bmoS6Ggw4j", "authors": "Xixi Wu,Yifei Shen,Caihua Shan,Kaitao Song,Siwei Wang,Bohang Zhang,Jiarui Feng,Hong Cheng,Wei Chen,Yun Xiong,Dongsheng Li", "tags": "NIPS 2024,Poster", "abstract": "Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs). It aims to break down complex user requests in natural language into solvable sub-tasks, thereby fulfilling the original requests. In this context, the sub-tasks can be naturally viewed as a graph, where the nodes represent the sub-tasks, and the edges denote the dependencies among them. Consequently, task planning is a decision-making problem that involves selecting a connected path or subgraph within the corresponding graph and invoking it. In this paper, we explore graph learning-based methods for task planning, a direction that is orthogonal to the prevalent focus on prompt design. 
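As a toy illustration of the graph-learning direction just introduced, sub-tasks can be scored with a single round of mean-aggregation message passing over the task graph, with node features coming from, e.g., LLM embeddings of sub-task descriptions. Shapes and the scoring head below are assumptions for illustration.

```python
# Toy one-layer message-passing scorer over a task graph: the kind of GNN
# inductive bias the paper argues complements LLM-based planners.
import numpy as np

def gnn_scores(A, H, W_self, W_neigh):
    """A: (n, n) adjacency; H: (n, d) node features; W_self, W_neigh: (d, k)
    weight matrices. Returns one relevance score per sub-task node."""
    deg = A.sum(1, keepdims=True) + 1e-8
    msg = (A @ H) / deg                       # mean over neighbors
    Z = np.tanh(H @ W_self + msg @ W_neigh)   # one message-passing layer
    return Z.mean(1)                          # scalar score per node
```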
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs, which is adeptly addressed by graph neural networks (GNNs). This theoretical insight led us to integrate GNNs with LLMs to enhance overall performance. Extensive experiments demonstrate that GNN-based methods surpass existing solutions even without training, and minimal training can further enhance their performance. The performance gain increases with a larger task graph size.", "pdf": "https://openreview.net/pdf/3e5e4109af1b321e2c7be31711db90c52c8a140e.pdf"} {"title": "Understanding and Improving Training-free Loss-based Diffusion Guidance", "url": "https://openreview.net/forum?id=Eu80DGuOcs", "detail_url": "https://openreview.net/forum?id=Eu80DGuOcs", "authors": "Yifei Shen,XINYANG JIANG,Yifan Yang,Yezhen Wang,Dongqi Han,Dongsheng Li", "tags": "NIPS 2024,Poster", "abstract": "Adding additional guidance to pretrained diffusion models has become an increasingly popular research area, with extensive applications in computer vision, reinforcement learning, and AI for science. Recently, several studies have proposed training-free loss-based guidance by using off-the-shelf networks pretrained on clean images. This approach enables zero-shot conditional generation for universal control formats, which appears to offer a free lunch in diffusion guidance. In this paper, we aim to develop a deeper understanding of training-free guidance, as well as overcome its limitations. We offer a theoretical analysis that supports training-free guidance from the perspective of optimization, distinguishing it from classifier-based (or classifier-free) guidance. To elucidate their drawbacks, we theoretically demonstrate that training-free guidance is more susceptible to misaligned gradients and exhibits slower convergence rates compared to classifier guidance. We then introduce a collection of techniques designed to overcome the limitations, accompanied by theoretical rationale and empirical evidence. Our experiments in image and motion generation confirm the efficacy of these techniques.", "pdf": "https://openreview.net/pdf/d040380d64726b318a256320a09084e44fd63156.pdf"} {"title": "Stability and Generalizability in SDE Diffusion Models with Measure-Preserving Dynamics", "url": "https://openreview.net/forum?id=VTJvTa41D0", "detail_url": "https://openreview.net/forum?id=VTJvTa41D0", "authors": "Weitong Zhang,Chengqi Zang,Liu Li,Sarah Cechnicka,Cheng Ouyang,Bernhard Kainz", "tags": "NIPS 2024,Poster", "abstract": "Inverse problems describe the process of estimating the causal factors from a set of measurements or data. \nMapping of often incomplete or degraded data to parameters is ill-posed, thus data-driven iterative solutions are required, for example when reconstructing clean images from poor signals. \nDiffusion models have shown promise as potent generative tools for solving inverse problems due to their superior reconstruction quality and their compatibility with iterative solvers. However, most existing approaches are limited to linear inverse problems represented as Stochastic Differential Equations (SDEs). This simplification falls short of addressing the challenging nature of real-world problems, leading to amplified cumulative errors and biases. 
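The training-free loss-based guidance analyzed in the record above typically takes the following shape in code: estimate the clean sample from the current noisy one, differentiate an off-the-shelf loss through that estimate, and shift the predicted noise. This is a generic sketch of that family, assuming a differentiable noise-prediction model; the guidance scale schedule is an illustrative choice.

```python
# Generic sketch of training-free loss-based guidance for an eps-prediction
# diffusion model: steer denoising with the gradient of a loss evaluated on
# the Tweedie-style clean-sample estimate x0_hat.
import torch

def guided_eps(x_t, t, eps_model, loss_fn, alpha_bar_t: float, scale: float = 1.0):
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    # Estimate the clean sample x0 from the noisy x_t.
    x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    grad = torch.autograd.grad(loss_fn(x0_hat), x_t)[0]
    # Shift the predicted noise along the loss gradient.
    return eps + scale * (1 - alpha_bar_t) ** 0.5 * grad
```

The paper's analysis concerns exactly this kind of gradient signal: it is more easily misaligned than a classifier trained on noisy inputs, motivating the corrective techniques they introduce.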
\nWe provide an explanation for this gap through the lens of measure-preserving dynamics of Random Dynamical Systems (RDS) with which we analyse Temporal Distribution Discrepancy and thus introduce a theoretical framework based on RDS for SDE diffusion models. We uncover several strategies that inherently enhance the stability and generalizability of diffusion models for inverse problems and introduce a novel score-based diffusion framework, the Dynamics-aware SDE Diffusion Generative Model (D^3GM). The Measure-preserving property can return the degraded measurement to the original state despite complex degradation with the RDS concept of stability.\nOur extensive experimental results corroborate the effectiveness of D^3GM across multiple benchmarks including a prominent application for inverse problems, magnetic resonance imaging.", "pdf": "https://openreview.net/pdf/a60b38749d9b2f1a56604b520e07ac12fcc3072d.pdf"} {"title": "No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO", "url": "https://openreview.net/forum?id=Wy9UgrMwD0", "detail_url": "https://openreview.net/forum?id=Wy9UgrMwD0", "authors": "Skander Moalla,Andrea Miele,Daniil Pyatko,Razvan Pascanu,Caglar Gulcehre", "tags": "NIPS 2024,Poster", "abstract": "Reinforcement learning (RL) is inherently rife with non-stationarity since the states and rewards the agent observes during training depend on its changing policy.\nTherefore, networks in deep RL must be capable of adapting to new observations and fitting new targets.\nHowever, previous works have observed that networks trained under non-stationarity exhibit an inability to continue learning, termed loss of plasticity, and eventually a collapse in performance.\nFor off-policy deep value-based RL methods, this phenomenon has been correlated with a decrease in representation rank and the ability to fit random targets, termed capacity loss.\nAlthough this correlation has generally been attributed to neural network learning under non-stationarity, the connection to representation dynamics has not been carefully studied in on-policy policy optimization methods.\nIn this work, we empirically study representation dynamics in Proximal Policy Optimization (PPO) on the Atari and MuJoCo environments, revealing that PPO agents are also affected by feature rank deterioration and capacity loss.\nWe show that this is aggravated by stronger non-stationarity, ultimately driving the actor's performance to collapse, regardless of the performance of the critic.\nWe ask why the trust region, specific to methods like PPO, cannot alleviate or prevent the collapse and find a connection between representation collapse and the degradation of the trust region, one exacerbating the other.\nFinally, we present Proximal Feature Optimization (PFO), a novel auxiliary loss that, along with other interventions, shows that regularizing the representation dynamics mitigates the performance collapse of PPO agents.\nCode and run histories are available at https://github.com/CLAIRE-Labo/no-representation-no-trust.", "pdf": "https://openreview.net/pdf/575fc0fb8e8f423ef4cdbb2ea7b50c7ec78bb2f8.pdf"} {"title": "MetaCURL: Non-stationary Concave Utility Reinforcement Learning", "url": "https://openreview.net/forum?id=TS09IypR3r", "detail_url": "https://openreview.net/forum?id=TS09IypR3r", "authors": "Bianca Marin Moreno,Margaux Br\u00e9g\u00e8re,Pierre Gaillard,Nadia Oudjane", "tags": "NIPS 2024,Poster", "abstract": "We explore online learning in episodic loop-free Markov decision 
processes in non-stationary environments (changing losses and probability transitions). Our focus is on the Concave Utility Reinforcement Learning problem (CURL), an extension of classical RL for handling convex performance criteria in state-action distributions induced by agent policies. While various machine learning problems can be written as CURL, its non-linearity invalidates traditional Bellman equations. Despite recent solutions to classical CURL, none address non-stationary MDPs. This paper introduces MetaCURL, the first CURL algorithm for non-stationary MDPs. It employs a meta-algorithm running multiple black-box algorithm instances over different intervals, aggregating outputs via a sleeping expert framework. The key hurdle is partial information due to MDP uncertainty. Under partial information on the probability transitions (uncertainty and non-stationarity coming only from external noise, independent of agent state-action pairs), we achieve optimal dynamic regret without prior knowledge of MDP changes. Unlike approaches for RL, MetaCURL handles full adversarial losses, not just stochastic ones. We believe our approach for managing non-stationarity with experts can be of interest to the RL community.", "pdf": "https://openreview.net/pdf/8c752e71f91d1ccf0f49f0a56df997a1c951fad3.pdf"} {"title": "Language-Driven Interactive Traffic Trajectory Generation", "url": "https://openreview.net/forum?id=1u3qkG7BkQ", "detail_url": "https://openreview.net/forum?id=1u3qkG7BkQ", "authors": "Junkai XIA,Chenxin Xu,Qingyao Xu,Yanfeng Wang,Siheng Chen", "tags": "NIPS 2024,Poster", "abstract": "Realistic trajectory generation with natural language control is pivotal for advancing autonomous vehicle technology. However, previous methods focus on individual traffic participant trajectory generation, thus failing to account for the complexity of interactive traffic dynamics. In this work, we propose InteractTraj, the first language-driven traffic trajectory generator that can generate interactive traffic trajectories. InteractTraj interprets abstract trajectory descriptions into concrete formatted interaction-aware numerical codes and learns a mapping between these formatted codes and the final interactive trajectories. To interpret language descriptions, we propose a language-to-code encoder with a novel interaction-aware encoding strategy. To produce interactive traffic trajectories, we propose a code-to-trajectory decoder with interaction-aware feature aggregation that synergizes vehicle interactions with the environmental map and the vehicle moves. Extensive experiments show our method demonstrates superior performance over previous SoTA methods, offering a more realistic generation of interactive traffic trajectories with high controllability via diverse natural language commands.", "pdf": "https://openreview.net/pdf/be7f3545c0333bfb3a48e442ac5e0f533a848f90.pdf"} {"title": "LaKD: Length-agnostic Knowledge Distillation for Trajectory Prediction with Any Length Observations", "url": "https://openreview.net/forum?id=fC2SV2sQ8J", "detail_url": "https://openreview.net/forum?id=fC2SV2sQ8J", "authors": "Yuhang Li,Changsheng Li,Ruilin Lv,Rongqing Li,Ye Yuan,Guoren Wang", "tags": "NIPS 2024,Poster", "abstract": "Trajectory prediction is a crucial technology to help systems avoid traffic accidents, ensuring safe autonomous driving. Previous methods typically use a fixed-length and sufficiently long trajectory of an agent as observations to predict its future trajectory.
However, in real-world scenarios, we often lack the time to gather enough trajectory points before making predictions, e.g., when a car suddenly appears due to an obstruction, the system must make immediate predictions to prevent a collision. This poses a new challenge for trajectory prediction systems, requiring them to be capable of making accurate predictions based on observed trajectories of arbitrary lengths, leading to the failure of existing methods. In this paper, we propose a Length-agnostic Knowledge Distillation framework, named LaKD, which can make accurate trajectory predictions, regardless of the length of observed data. Specifically, considering the fact that long trajectories, containing richer temporal information but potentially additional interference, may perform better or worse than short trajectories, we devise a dynamic length-agnostic knowledge distillation mechanism for exchanging information among trajectories of arbitrary lengths, dynamically determining the transfer direction based on prediction performance. In contrast to traditional knowledge distillation, LaKD employs a unique model that simultaneously serves as both the teacher and the student, potentially causing knowledge collision during the distillation process. Therefore, we design a dynamic soft-masking mechanism, where we first calculate the importance of neuron units and then apply soft-masking to them, so as to safeguard critical units from disruption during the knowledge distillation process. In essence, LaKD is a general and principled framework that can be naturally compatible with existing trajectory prediction models of different architectures. Extensive experiments on three benchmark datasets, Argoverse 1, nuScenes and Argoverse 2, demonstrate the effectiveness of our approach.", "pdf": "https://openreview.net/pdf/7cf0348cc3747c46278bb98d27d152a16c5722d3.pdf"} {"title": "The Reliability of OKRidge Method in Solving Sparse Ridge Regression Problems", "url": "https://openreview.net/forum?id=R3ruv1gF8R", "detail_url": "https://openreview.net/forum?id=R3ruv1gF8R", "authors": "Xiyuan Li,Youjun Wang,Weiwei Liu", "tags": "NIPS 2024,Poster", "abstract": "Sparse ridge regression problems play a significant role across various domains. To solve sparse ridge regression, Liu et al. (2023) recently propose an advanced algorithm, Scalable Optimal $K$-Sparse Ridge Regression (OKRidge), which is both faster and more accurate than existing approaches. However, the absence of theoretical analysis on the error of OKRidge impedes its large-scale applications. In this paper, we reframe the estimation error of OKRidge as a Primary Optimization ($\\textbf{PO}$) problem and employ the Convex Gaussian min-max theorem (CGMT) to simplify the $\\textbf{PO}$ problem into an Auxiliary Optimization ($\\textbf{AO}$) problem. Subsequently, we provide a theoretical error analysis for OKRidge based on the $\\textbf{AO}$ problem. This error analysis improves the theoretical reliability of OKRidge. 
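The dynamic soft-masking component of LaKD (record above) can be sketched as gradient damping: score each neuron unit's importance, then scale its incoming gradient so the most critical units are protected during distillation. The importance proxy and sigmoid schedule below are illustrative assumptions, not the authors' exact rule.

```python
# Hedged sketch of importance-based soft masking during distillation:
# damp gradient updates on high-importance units so knowledge collision
# does not disrupt them. Importance scores are supplied by the caller.
import torch

def soft_mask_gradients(layer_weight, importance, temperature=5.0):
    """layer_weight: (out, in) parameter with .grad populated;
    importance: (out,) nonnegative per-unit scores. Returns the mask."""
    norm_imp = importance / (importance.max() + 1e-8)
    mask = torch.sigmoid(-temperature * (norm_imp - 0.5))  # in (0, 1)
    if layer_weight.grad is not None:
        layer_weight.grad.mul_(mask.unsqueeze(1))          # row-wise damping
    return mask
```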
We also conduct experiments to verify our theorems and the results are in excellent agreement with our theoretical findings.", "pdf": "https://openreview.net/pdf/3cf15befde18fcef083448ff80ac246f7bfe4d06.pdf"} {"title": "Iterative Methods via Locally Evolving Set Process", "url": "https://openreview.net/forum?id=wT2KhEb97a", "detail_url": "https://openreview.net/forum?id=wT2KhEb97a", "authors": "Baojian Zhou,Yifan Sun,Reza Babanezhad Harikandeh,Xingzhi Guo,Deqing Yang,Yanghua Xiao", "tags": "NIPS 2024,Poster", "abstract": "Given the damping factor $\\alpha$ and precision tolerance $\\epsilon$, \\citet{andersen2006local} introduced Approximate Personalized PageRank (APPR), the \\textit{de facto local method} for approximating the PPR vector, with runtime bounded by $\\Theta(1/(\\alpha\\epsilon))$ independent of the graph size. Recently, Fountoulakis \\& Yang asked whether faster local algorithms could be developed using $\\tilde{\\mathcal{O}}(1/(\\sqrt{\\alpha}\\epsilon))$ operations. By noticing that APPR is a local variant of Gauss-Seidel, this paper explores the question of *whether standard iterative solvers can be effectively localized*. We propose to use the *locally evolving set process*, a novel framework to characterize the algorithm locality, and demonstrate that many standard solvers can be effectively localized. Let $\\overline{\\operatorname{vol}}{ (\\mathcal S_t)}$ and $\\overline{\\gamma_t}$ be the running average of volume and the residual ratio of active nodes $\\textstyle \\mathcal{S_t}$ during the process. We show $\\overline{\\operatorname{vol}}{ (\\mathcal S_t)}/\\overline{\\gamma_t} \\leq 1/\\epsilon$ and prove APPR admits a new runtime bound $\\tilde{\\mathcal{O}}(\\overline{\\operatorname{vol}}(\\mathcal S_t)/(\\alpha\\overline{\\gamma_t}))$ mirroring the actual performance. Furthermore, when the geometric mean of residual reduction is $\\Theta(\\sqrt{\\alpha})$, then there exists $c \\in (0,2)$ such that the local Chebyshev method has runtime $\\tilde{\\mathcal{O}}(\\overline{\\operatorname{vol}}(\\mathcal{S_t})/(\\sqrt{\\alpha}(2-c)))$ without the monotonicity assumption. Numerical results confirm the efficiency of this novel framework and show up to a hundredfold speedup over corresponding standard solvers on real-world graphs.", "pdf": "https://openreview.net/pdf/018805bdb5e7dfb1133288f180c5012bb6b6e388.pdf"} {"title": "Continual Audio-Visual Sound Separation", "url": "https://openreview.net/forum?id=PZCiWtQjAw", "detail_url": "https://openreview.net/forum?id=PZCiWtQjAw", "authors": "Weiguo Pian,Yiyang Nan,Shijian Deng,Shentong Mo,Yunhui Guo,Yapeng Tian", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we introduce a novel continual audio-visual sound separation task, aiming to continuously separate sound sources for new classes while preserving performance on previously learned classes, with the aid of visual guidance. This problem is crucial for practical visually guided auditory perception as it can significantly enhance the adaptability and robustness of audio-visual sound separation models, making them more applicable for real-world scenarios where encountering new sound sources is commonplace. The task is inherently challenging as our models must not only effectively utilize information from both modalities in current tasks but also preserve their cross-modal association in old tasks to mitigate catastrophic forgetting during audio-visual continual learning. 
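For reference alongside the locally evolving set analysis above, the classic APPR push loop of Andersen et al. (2006) looks as follows: mass moves from the residual vector r into the estimate p only at nodes whose residual exceeds eps times their degree, which is what keeps the method local. This is the standard non-lazy variant, assuming a graph with no isolated nodes.

```python
# Standard APPR-style local push: p approximates the personalized PageRank
# vector for `seed`; work is confined to nodes with large residual.
from collections import defaultdict, deque

def appr(adj, seed, alpha=0.15, eps=1e-4):
    p, r = defaultdict(float), defaultdict(float)
    r[seed] = 1.0
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        deg = len(adj[u])
        if deg == 0 or r[u] < eps * deg:
            continue
        push = r[u]
        p[u] += alpha * push       # retain an alpha-fraction at u
        r[u] = 0.0
        for v in adj[u]:           # spread the rest to neighbors
            before = r[v]
            r[v] += (1 - alpha) * push / deg
            if before < eps * len(adj[v]) <= r[v]:
                queue.append(v)
    return p
```

The paper's framework generalizes exactly this kind of active-node bookkeeping (the queue above) to other iterative solvers such as Gauss-Seidel and Chebyshev iterations.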
To address these challenges, we propose a novel approach named ContAV-Sep ($\\textbf{Cont}$inual $\\textbf{A}$udio-$\\textbf{V}$isual Sound $\\textbf{Sep}$aration). ContAV-Sep presents a novel Cross-modal Similarity Distillation Constraint (CrossSDC) to uphold the cross-modal semantic similarity through incremental tasks and retain previously acquired knowledge of semantic similarity in old models, mitigating the risk of catastrophic forgetting. The CrossSDC can seamlessly integrate into the training process of different audio-visual sound separation frameworks. Experiments demonstrate that ContAV-Sep can effectively mitigate catastrophic forgetting and achieve significantly better performance compared to other continual learning baselines for audio-visual sound separation. Code is available at: https://github.com/weiguoPian/ContAV-Sep_NeurIPS2024.", "pdf": "https://openreview.net/pdf/fd25d9e8abc814ee3c5d1d374c127ffdda6c023a.pdf"} {"title": "ROBIN: Robust and Invisible Watermarks for Diffusion Models with Adversarial Optimization", "url": "https://openreview.net/forum?id=RvoxlFvnlX", "detail_url": "https://openreview.net/forum?id=RvoxlFvnlX", "authors": "Huayang Huang,Yu Wu,Qian Wang", "tags": "NIPS 2024,Poster", "abstract": "Watermarking generative content serves as a vital tool for authentication, ownership protection, and mitigation of potential misuse. Existing watermarking methods face the challenge of balancing robustness and concealment. They empirically inject a watermark that is both invisible and robust and passively achieve concealment by limiting the strength of the watermark, thus reducing the robustness. In this paper, we propose to explicitly introduce a watermark hiding process to actively achieve concealment, thus allowing the embedding of stronger watermarks. To be specific, we implant a robust watermark in an intermediate diffusion state and then guide the model to hide the watermark in the final generated image. We employ an adversarial optimization algorithm to produce the optimal hiding prompt guiding signal for each watermark. The prompt embedding is optimized to minimize artifacts in the generated image, while the watermark is optimized to achieve maximum strength. The watermark can be verified by reversing the generation process. Experiments on various diffusion models demonstrate the watermark remains verifiable even under significant image tampering and shows superior invisibility compared to other state-of-the-art robust watermarking methods.", "pdf": "https://openreview.net/pdf/f750a4304c500d4fedf1fc261a334f58548297d5.pdf"} {"title": "SILENCE: Protecting privacy in offloaded speech understanding on resource-constrained devices", "url": "https://openreview.net/forum?id=tKuLgnDWWN", "detail_url": "https://openreview.net/forum?id=tKuLgnDWWN", "authors": "DONGQI CAI,Shangguang Wang,Zeling Zhang,Felix Xiaozhu Lin,Mengwei Xu", "tags": "NIPS 2024,Poster", "abstract": "Speech serves as a ubiquitous input interface for embedded mobile devices. \nCloud-based solutions, while offering powerful speech understanding services, raise significant concerns regarding user privacy. \nTo address this, disentanglement-based encoders have been proposed to remove sensitive information from speech signals without compromising the speech understanding functionality. 
\nHowever, these encoders demand high memory usage and computational complexity, making them impractical for resource-constrained wimpy devices.\nOur solution is based on a key observation that speech understanding hinges on long-term dependency knowledge of the entire utterance, in contrast to privacy-sensitive elements that are short-term dependent. \nExploiting this observation, we propose SILENCE, a lightweight system that selectively obscures short-term details without damaging long-term-dependent speech understanding performance.\nThe crucial part of SILENCE is a differential mask generator derived from interpretable learning to \nautomatically configure the masking process.\nWe have implemented SILENCE on the STM32H7 microcontroller and evaluate its efficacy under different attacking scenarios. \nOur results demonstrate that SILENCE offers speech understanding performance and privacy protection capacity comparable to existing encoders, while achieving up to 53.3$\\times$ speedup and 134.1$\\times$ reduction in memory footprint.", "pdf": "https://openreview.net/pdf/f916b7a4e107eaa97efdcf615a87e1d4a63f8d66.pdf"} {"title": "Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation", "url": "https://openreview.net/forum?id=kW30LbNwdV", "detail_url": "https://openreview.net/forum?id=kW30LbNwdV", "authors": "Shiji Zhao,Ranjie Duan,xizhewang,Xingxing Wei", "tags": "NIPS 2024,Poster", "abstract": "Adversarial Training (AT) has been widely proven to be an effective method to improve the adversarial robustness against adversarial examples for Deep Neural Networks (DNNs). As a variant of AT, Adversarial Robustness Distillation (ARD) has demonstrated its superior performance in improving the robustness of small student models with the guidance of large teacher models. However, both AT and ARD encounter the robust fairness problem: these models exhibit strong robustness when facing some classes (easy classes), but weak robustness when facing others (hard classes). In this paper, we give an in-depth analysis of the potential factors and argue that the smoothness degree of samples' soft labels for different classes (i.e., hard class or easy class) will affect the robust fairness of DNNs from both empirical observation and theoretical analysis. Based on the above finding, we propose an Anti-Bias Soft Label Distillation (ABSLD) method to mitigate the adversarial robust fairness problem within the framework of Knowledge Distillation (KD). Specifically, ABSLD adaptively reduces the student's error risk gap between different classes to achieve fairness by adjusting the class-wise smoothness degree of samples' soft labels during the training process, and the smoothness degree of soft labels is controlled by assigning different temperatures in KD to different classes. Extensive experiments demonstrate that ABSLD outperforms state-of-the-art AT, ARD, and robust fairness methods in the comprehensive metric (Normalized Standard Deviation) of robustness and fairness.", "pdf": "https://openreview.net/pdf/4778b077085c3a1b78c8e8796dd0c9ad4b693d64.pdf"} {"title": "ALPINE: Unveiling The Planning Capability of Autoregressive Learning in Language Models", "url": "https://openreview.net/forum?id=WFbZusv14E", "detail_url": "https://openreview.net/forum?id=WFbZusv14E", "authors": "Siwei Wang,Yifei Shen,Shi Feng,Haoran Sun,Shang-Hua Teng,Wei Chen", "tags": "NIPS 2024,Poster", "abstract": "Planning is a crucial element of both human intelligence and contemporary large language models (LLMs).
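The ABSLD record above controls soft-label smoothness through class-wise distillation temperatures; a minimal sketch of that mechanism follows. The per-sample $T^2$ factor mirrors standard KD scaling, and the policy for assigning temperatures to hard versus easy classes is left as an input, since the adaptive rule is the paper's contribution.

```python
# Sketch of class-wise temperature scaling in knowledge distillation: each
# sample's temperature is looked up from its class, so hard classes can be
# given smoother (or sharper) teacher soft labels than easy ones.
import torch
import torch.nn.functional as F

def classwise_kd_loss(student_logits, teacher_logits, labels, class_temps):
    t = class_temps[labels].unsqueeze(1)                  # (batch, 1) temps
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    kl = (p_teacher * (p_teacher.clamp_min(1e-12).log() - log_p_student)).sum(1)
    return (kl * t.squeeze(1) ** 2).mean()                # standard T^2 factor
```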
In this paper, we initiate a theoretical investigation into the emergence of planning capabilities in Transformer-based LLMs via their next-word prediction mechanisms. We model planning as a network path-finding task, where the objective is to generate a valid path from a specified source node to a designated target node. Our mathematical characterization shows that Transformer architectures can execute path-finding by embedding the adjacency and reachability matrices within their weights. Furthermore, our theoretical analysis of gradient-based learning dynamics reveals that LLMs can learn both the adjacency matrix and a limited form of the reachability matrix. These theoretical insights are then validated through experiments, which demonstrate that Transformer architectures indeed learn the adjacency matrix and an incomplete reachability matrix, consistent with our theoretical predictions. When applying our methodology to the real-world planning benchmark Blocksworld, our observations remain consistent. Additionally, our analyses uncover a fundamental limitation of current Transformer architectures in path-finding: these architectures cannot identify reachability relationships through transitivity, which leads to failures in generating paths when path concatenation is required. These findings provide new insights into how the internal mechanisms of autoregressive learning facilitate intelligent planning and deepen our understanding of how future LLMs might achieve more advanced and general planning-and-reasoning capabilities across diverse applications.", "pdf": "https://openreview.net/pdf/b096a1f1e15c37117a1405fafca50616087626b1.pdf"} {"title": "Coherent 3D Scene Diffusion From a Single RGB Image", "url": "https://openreview.net/forum?id=lckAdnVzsT", "detail_url": "https://openreview.net/forum?id=lckAdnVzsT", "authors": "Manuel Dahnert,Angela Dai,Norman M\u00fcller,Matthias Nie\u00dfner", "tags": "NIPS 2024,Poster", "abstract": "We present a novel diffusion-based approach for coherent 3D scene reconstruction from a single RGB image. \nOur method utilizes an image-conditioned 3D scene diffusion model to simultaneously denoise the 3D poses and geometries of all objects within the scene.\n\nMotivated by the ill-posed nature of the task and to obtain consistent scene reconstruction results, we learn a generative scene prior by conditioning on all scene objects simultaneously to capture scene context and by allowing the model to learn inter-object relationships throughout the diffusion process.\n\nWe further propose an efficient surface alignment loss to facilitate training even in the absence of full ground-truth annotation, which is common in publicly available datasets. 
This loss leverages an expressive shape representation, which enables direct point sampling from intermediate shape predictions.\n\nBy framing the task of single RGB image 3D scene reconstruction as a conditional diffusion process, our approach surpasses current state-of-the-art methods, achieving a 12.04\\% improvement in AP3D on SUN RGB-D and a 13.43\\% increase in F-Score on Pix3D.", "pdf": "https://openreview.net/pdf/14518c4c229813583cef4952da32a8fbf3c7b5c4.pdf"} {"title": "Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning", "url": "https://openreview.net/forum?id=ucxQrked0d", "detail_url": "https://openreview.net/forum?id=ucxQrked0d", "authors": "Qi Wang,Junming Yang,Yunbo Wang,Xin Jin,Wenjun Zeng,Xiaokang Yang", "tags": "NIPS 2024,Poster", "abstract": "Training offline RL models using visual inputs poses two significant challenges, *i.e.*, the overfitting problem in representation learning and the overestimation bias for expected future rewards. Recent work has attempted to alleviate the overestimation bias by encouraging conservative behaviors. This paper, in contrast, tries to build more flexible constraints for value estimation without impeding the exploration of potential advantages. The key idea is to leverage off-the-shelf RL simulators, which can be easily interacted with in an online manner, as the \u201c*test bed*\u201d for offline policies. To enable effective online-to-offline knowledge transfer, we introduce CoWorld, a model-based RL approach that mitigates cross-domain discrepancies in state and reward spaces. Experimental results demonstrate the effectiveness of CoWorld, outperforming existing RL approaches by large margins.", "pdf": "https://openreview.net/pdf/5a411d1f7badc798a114ee857720735fdee8d9ac.pdf"} {"title": "Color-Oriented Redundancy Reduction in Dataset Distillation", "url": "https://openreview.net/forum?id=yfQwyxiSJ7", "detail_url": "https://openreview.net/forum?id=yfQwyxiSJ7", "authors": "Bowen Yuan,Zijian Wang,Mahsa Baktashmotlagh,Yadan Luo,Zi Huang", "tags": "NIPS 2024,Poster", "abstract": "Dataset Distillation (DD) is designed to generate condensed representations of extensive image datasets, enhancing training efficiency. Despite recent advances, there remains considerable potential for improvement, particularly in addressing the notable redundancy within the color space of distilled images. In this paper, we propose a two-fold optimization strategy to minimize color redundancy at the individual image and overall dataset levels, respectively. At the image level, we employ a palette network, a specialized neural network, to dynamically allocate colors from a reduced color space to each pixel. The palette network identifies essential areas in synthetic images for model training, and consequently assigns more unique colors to them. At the dataset level, we develop a color-guided initialization strategy to minimize redundancy among images. Representative images with the least replicated color patterns are selected based on the information gain. 
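The color-guided initialization described in the dataset-distillation entry above, selecting representative images with the least replicated color patterns by information gain, can be sketched with a simple greedy entropy-gain criterion. This is an illustrative proxy under assumed inputs (a list of HxWx3 uint8 arrays), not the paper's exact measure:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Coarse joint RGB histogram as a stand-in for an image's color pattern."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel()

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_representatives(images, k):
    """Greedily pick k images whose color patterns add the most new
    information (entropy gain) to the running pool."""
    pool = np.zeros(8 ** 3)
    chosen, remaining = [], list(range(len(images)))
    for _ in range(k):
        base = entropy(pool) if pool.sum() > 0 else 0.0
        gains = [entropy(pool + color_histogram(images[i])) - base
                 for i in remaining]
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        pool += color_histogram(images[best])
        remaining.remove(best)
    return chosen
```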
A comprehensive performance study involving various datasets and evaluation scenarios is conducted, demonstrating the superior performance of our proposed color-aware DD compared to existing DD methods.", "pdf": "https://openreview.net/pdf/20b534cf5fff43e4e9a8229eb66f4841e6dba9df.pdf"} {"title": "Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion", "url": "https://openreview.net/forum?id=RxkcroC8qP", "detail_url": "https://openreview.net/forum?id=RxkcroC8qP", "authors": "Dongyang Li,Chen Wei,Shiying Li,Jiachen Zou,Quanying Liu", "tags": "NIPS 2024,Poster", "abstract": "How to decode human vision through neural signals has attracted a long-standing interest in neuroscience and machine learning. Modern contrastive learning and generative models improved the performance of visual decoding and reconstruction based on functional Magnetic Resonance Imaging (fMRI). However, the high cost and low temporal resolution of fMRI limit its applications in brain-computer interfaces (BCIs), prompting a pressing need for visual decoding based on electroencephalography (EEG). In this study, we present an end-to-end EEG-based visual reconstruction zero-shot framework, consisting of a tailored brain encoder, called the Adaptive Thinking Mapper (ATM), which projects neural signals from different sources into a shared subspace as the CLIP embedding, and a two-stage multi-pipe EEG-to-image generation strategy. In stage one, the EEG is embedded to align with the high-level CLIP embedding, and then a prior diffusion model refines the EEG embedding into image priors. A blurry image is also decoded from the EEG to maintain low-level features. In stage two, we input the high-level CLIP embedding, the blurry image, and the caption decoded from the EEG latent into a pre-trained diffusion model. Furthermore, we analyzed the impacts of different time windows and brain regions on decoding and reconstruction. The versatility of our framework is demonstrated in the magnetoencephalogram (MEG) data modality. The experimental results indicate that our EEG-based visual zero-shot framework achieves SOTA performance in classification, retrieval and reconstruction, highlighting the portability, low cost, and high temporal resolution of EEG, enabling a wide range of BCI applications. Our code is available at https://github.com/ncclab-sustech/EEG_Image_decode.", "pdf": "https://openreview.net/pdf/e9ae7d71af62f1e7845a156a8ed90cbd2e77329f.pdf"} {"title": "Trajectory Diffusion for ObjectGoal Navigation", "url": "https://openreview.net/forum?id=1GpY0hsv2w", "detail_url": "https://openreview.net/forum?id=1GpY0hsv2w", "authors": "Xinyao Yu,Sixian Zhang,Xinhang Song,Xiaorong Qin,Shuqiang Jiang", "tags": "NIPS 2024,Poster", "abstract": "Object goal navigation requires an agent to navigate to a specified object in an unseen environment based on visual observations and user-specified goals. \nHuman decision-making in navigation is sequential, planning the most likely sequence of actions toward the goal. \nHowever, existing ObjectNav methods, both end-to-end learning methods and modular methods, rely on single-step planning. They output the next action based on the current model input, which easily overlooks temporal consistency and leads to myopic planning.\nTo this end, we aim to learn sequence planning for ObjectNav. Specifically, we propose trajectory diffusion to learn the distribution of trajectory sequences conditioned on the current observation and the goal. 
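The DDPM training the next lines describe can be sketched as a single conditional denoising step over waypoint sequences. A minimal sketch under assumed shapes and a hypothetical `model(noisy, t, obs, goal)` signature; not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def ddpm_trajectory_loss(model, traj, obs, goal, alphas_cumprod):
    """One DDPM training step for trajectory sequences conditioned on the
    current observation and the goal. traj: (B, T, 2) future waypoints."""
    B = traj.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (B,), device=traj.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    noise = torch.randn_like(traj)
    noisy = a_bar.sqrt() * traj + (1 - a_bar).sqrt() * noise   # forward process
    pred = model(noisy, t, obs, goal)                          # predict the injected noise
    return F.mse_loss(pred, noise)
```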
\nWe utilize DDPM and automatically collected optimal trajectory segments to train the trajectory diffusion.\nOnce the trajectory diffusion model is trained, it can generate a temporally coherent future trajectory for the agent based on its current observations.\nExperimental results on the Gibson and MP3D datasets demonstrate that the generated trajectories effectively guide the agent, resulting in more accurate and efficient navigation.", "pdf": "https://openreview.net/pdf/4a98ca3c5b5cc11841f2d3f230131fedb22a7c9a.pdf"} {"title": "Simplifying Latent Dynamics with Softly State-Invariant World Models", "url": "https://openreview.net/forum?id=CwNevJONgq", "detail_url": "https://openreview.net/forum?id=CwNevJONgq", "authors": "Tankred Saanum,Peter Dayan,Eric Schulz", "tags": "NIPS 2024,Poster", "abstract": "To solve control problems via model-based reasoning or planning, an agent needs to know how its actions affect the state of the world. The actions an agent has at its disposal often change the state of the environment in systematic ways. However, existing techniques for world modelling do not guarantee that the effects of actions are represented in such systematic ways. We introduce the Parsimonious Latent Space Model (PLSM), a world model that regularizes the latent dynamics to make the effect of the agent's actions more predictable. Our approach minimizes the mutual information between latent states and the change that an action produces in the agent's latent state, in turn minimizing the dependence the state has on the dynamics. This makes the world model softly state-invariant. We combine PLSM with different model classes used for i) future latent state prediction, ii) planning, and iii) model-free reinforcement learning. We find that our regularization improves accuracy, generalization, and performance in downstream tasks, highlighting the importance of systematic treatment of actions in world models.", "pdf": "https://openreview.net/pdf/6336f8da2745b9d3614b0afc9d90dc594f88dd74.pdf"} {"title": "SlimSAM: 0.1% Data Makes Segment Anything Slim", "url": "https://openreview.net/forum?id=ZG84y6a7ge", "detail_url": "https://openreview.net/forum?id=ZG84y6a7ge", "authors": "Zigeng Chen,Gongfan Fang,Xinyin Ma,Xinchao Wang", "tags": "NIPS 2024,Poster", "abstract": "Current approaches for compressing the Segment Anything Model (SAM) yield commendable results, yet necessitate extensive data to train a new network from scratch. Employing conventional pruning techniques can remarkably reduce data requirements but would suffer from a degradation in performance. To address this challenging trade-off, we introduce SlimSAM, a novel data-efficient SAM compression method that achieves superior performance with far less training data. The essence of SlimSAM is encapsulated in the alternate slimming framework which effectively enhances knowledge inheritance under severely limited training data availability and exceptional pruning ratios. Diverging from prior techniques, our framework progressively compresses the model by alternately pruning and distilling distinct, decoupled sub-structures. Disturbed Taylor pruning is also proposed to address the misalignment between the pruning objective and training target, thereby boosting the effectiveness of distillation after pruning. SlimSAM yields significant performance improvements while demanding over 10 times less training data than any other existing compression method. 
Even when compared to the original SAM, SlimSAM achieves comparable performance while reducing parameter counts to merely 1.4% (9.1M), MACs to 0.8% (23G), and requiring only 0.1% (10k) of the SAM training data.", "pdf": "https://openreview.net/pdf/7eb5dfbd62d9dc1c4c697b22e21b99500342830a.pdf"} {"title": "Recovering Complete Actions for Cross-dataset Skeleton Action Recognition", "url": "https://openreview.net/forum?id=oe7MfqFK1M", "detail_url": "https://openreview.net/forum?id=oe7MfqFK1M", "authors": "Hanchao Liu,Yujiang Li,Tai-Jiang Mu,Shi-min Hu", "tags": "NIPS 2024,Poster", "abstract": "Despite huge progress in skeleton-based action recognition, its generalizability to different domains remains a challenging issue. \nIn this paper, to solve the skeleton action generalization problem, we present a recover-and-resample augmentation framework based on a novel complete action prior. We observe that human daily actions are confronted with temporal mismatch across different datasets, as they are usually partial observations of their complete action sequences. By recovering complete actions and resampling from these full sequences, we can generate strong augmentations for unseen domains. At the same time, we discover the nature of general action completeness within large datasets, indicated by the per-frame diversity over time. This allows us to exploit two assets of transferable knowledge that can be shared across action samples and be helpful for action completion: boundary poses for determining the action start, and linear temporal transforms for capturing global action patterns. Therefore, we formulate the recovering stage as a two-step stochastic action completion with boundary pose-conditioned extrapolation followed by smooth linear transforms. Both the boundary poses and linear transforms can be efficiently learned from the whole dataset via clustering. We validate our approach in a cross-dataset setting with three skeleton action datasets, outperforming other domain generalization approaches by a considerable margin.", "pdf": "https://openreview.net/pdf/ed3d5688d70d03ec751dfe16c56d0722ef8c2528.pdf"} {"title": "General Articulated Objects Manipulation in Real Images via Part-Aware Diffusion Process", "url": "https://openreview.net/forum?id=WRd9LCbvxN", "detail_url": "https://openreview.net/forum?id=WRd9LCbvxN", "authors": "Zhou FANG,Yong-Lu Li,Lixin Yang,Cewu Lu", "tags": "NIPS 2024,Poster", "abstract": "Articulated object manipulation in real images is a fundamental step in computer and robotic vision tasks. Recently, several image editing methods based on diffusion models have been proposed to manipulate articulated objects according to text prompts. However, these methods often generate unnatural artifacts or even fail on real images. To this end, we introduce the Part-Aware Diffusion Model to approach the manipulation of articulated objects in real images. First, we develop Abstract 3D Models to represent and manipulate articulated objects efficiently and arbitrarily. Then we propose dynamic feature maps to transfer the appearance of objects from input images to edited ones, while reasonably generating novel views or novel-appearing parts. Extensive experiments are provided to illustrate the advanced manipulation capabilities of our method compared with state-of-the-art editing works. Additionally, we verify our method on 3D articulated object understanding for\nembodied robot scenarios and the promising results show that our method strongly supports this task. 
The project page is https://mvig-rhos.com/pa_diffusion.", "pdf": "https://openreview.net/pdf/5d947b154aeed8b77b371ef40f65f5374070e4cb.pdf"} {"title": "Aligning Diffusion Behaviors with Q-functions for Efficient Continuous Control", "url": "https://openreview.net/forum?id=Wd1DFLUp1M", "detail_url": "https://openreview.net/forum?id=Wd1DFLUp1M", "authors": "Huayu Chen,Kaiwen Zheng,Hang Su,Jun Zhu", "tags": "NIPS 2024,Poster", "abstract": "Drawing upon recent advances in language model alignment, we formulate offline Reinforcement Learning as a two-stage optimization problem: first pretraining expressive generative policies on reward-free behavior datasets, then finetuning these policies to align with task-specific annotations like Q-values. This strategy allows us to leverage abundant and diverse behavior data to enhance generalization and enable rapid adaptation to downstream tasks using minimal annotations. In particular, we introduce Efficient Diffusion Alignment (EDA) for solving continuous control problems. EDA utilizes diffusion models for behavior modeling. However, unlike previous approaches, we represent diffusion policies as the derivative of a scalar neural network with respect to action inputs. This representation is critical because it enables direct density calculation for diffusion models, making them compatible with existing LLM alignment theories. During policy fine-tuning, we extend preference-based alignment methods like Direct Preference Optimization (DPO) to align diffusion behaviors with continuous Q-functions. Our evaluation on the D4RL benchmark shows that EDA exceeds all baseline methods in overall performance. Notably, EDA maintains about 95\% of performance and still outperforms several baselines given only 1\% of Q-labelled data during fine-tuning.", "pdf": "https://openreview.net/pdf/36b802a6ae74a169344f720fea7757c283bcdf0d.pdf"} {"title": "MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-Object Demand-driven Navigation", "url": "https://openreview.net/forum?id=MzTdZhMjeC", "detail_url": "https://openreview.net/forum?id=MzTdZhMjeC", "authors": "Hongcheng Wang,Peiqi Liu,Wenzhe Cai,Mingdong Wu,Zhengyu Qian,Hao Dong", "tags": "NIPS 2024,Poster", "abstract": "The process of satisfying daily demands is a fundamental aspect of humans' daily lives. With the advancement of embodied AI, robots are increasingly capable of satisfying human demands. Demand-driven navigation (DDN) is a task in which an agent must locate an object to satisfy a specified demand instruction, such as \"I am thirsty.\" Previous studies typically assume that each demand instruction requires only one object to be fulfilled and do not consider individual preferences. However, realistic human demands may involve multiple objects. In this paper, we introduce the Multi-object Demand-driven Navigation (MO-DDN) benchmark, which addresses these nuanced aspects, including multi-object search and personal preferences, thus making the MO-DDN task more reflective of real-life scenarios compared to DDN. Building upon previous work, we employ the concept of ``attribute'' to tackle this new task. However, instead of solely relying on attribute features in an end-to-end manner like DDN, we propose a modular method that involves constructing a coarse-to-fine attribute-based exploration agent (C2FAgent). 
Our experimental results illustrate that this coarse-to-fine exploration strategy capitalizes on the advantages of attributes at various decision-making levels, resulting in superior performance compared to baseline methods. Code and video can be found at https://sites.google.com/view/moddn.", "pdf": "https://openreview.net/pdf/93905ccec59fb2772c8a3940329393bf93a88986.pdf"} {"title": "HENASY: Learning to Assemble Scene-Entities for Interpretable Egocentric Video-Language Model", "url": "https://openreview.net/forum?id=7uWzoGn4kv", "detail_url": "https://openreview.net/forum?id=7uWzoGn4kv", "authors": "Khoa Vo,Thinh Phan,Kashu Yamazaki,Minh Tran,Ngan Hoang Le", "tags": "NIPS 2024,Poster", "abstract": "Current video-language models (VLMs) rely extensively on instance-level alignment between video and language modalities, which presents two major limitations: (1) visual reasoning departs from the natural first-person perception of humans, leading to a lack of reasoning interpretability; and (2) learning is limited in capturing inherent fine-grained relationships between the two modalities.\n\nIn this paper, we take inspiration from human perception and explore a compositional approach for egocentric video representation. We introduce HENASY (Hierarchical ENtities ASsemblY), which includes a spatiotemporal token grouping mechanism to explicitly assemble dynamically evolving scene entities through time and model their relationship for video representation. By leveraging compositional structure understanding, HENASY possesses strong interpretability via visual grounding with free-form text queries. We further explore a suite of multi-grained contrastive losses to facilitate entity-centric understandings. This comprises three alignment types: video-narration, noun-entity, and verb-entity alignments.\n\nOur method demonstrates strong interpretability in both quantitative and qualitative experiments, while maintaining competitive performance on five downstream tasks via zero-shot transfer or as video/text representation, including video/text retrieval, action recognition, multi-choice query, natural language query, and moments query.\n\nProject page: https://uark-aicv.github.io/HENASY", "pdf": "https://openreview.net/pdf/260addeb3cae77d95d50cef43216e139cdea0e6a.pdf"} {"title": "Improving the Learning Capability of Small-size Image Restoration Network by Deep Fourier Shifting", "url": "https://openreview.net/forum?id=3gKsKFeuMA", "detail_url": "https://openreview.net/forum?id=3gKsKFeuMA", "authors": "Man Zhou", "tags": "NIPS 2024,Poster", "abstract": "State-of-the-art image restoration methods currently face challenges in terms of computational requirements and performance, making them impractical for deployment on edge devices such as phones and other resource-limited devices. As a result, there is a need to develop alternative solutions with efficient designs that can achieve comparable performance to transformer or large-kernel methods. This motivates our research to explore techniques for improving the capability of small-size image restoration networks, building on the success of large receptive fields.\n\nTargeting an expanded receptive field, the spatial-shift operator is tailored for efficient spatial communication and has achieved remarkable advances in high-level image classification tasks, such as $S^2$-MLP and ShiftViT. However, its potential has rarely been explored in low-level image restoration tasks. 
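For reference, a common formulation of the grouped spatial-shift operator mentioned above can be sketched in a few lines; the four-way grouping is a typical choice, not necessarily the exact variant those papers use:

```python
import torch

def spatial_shift(x):
    """Grouped spatial shift over an (B, C, H, W) feature map: each quarter of
    the channels is displaced by one pixel in one of four directions."""
    out = torch.zeros_like(x)
    g = x.shape[1] // 4
    out[:, 0*g:1*g, :, 1:] = x[:, 0*g:1*g, :, :-1]   # shift right
    out[:, 1*g:2*g, :, :-1] = x[:, 1*g:2*g, :, 1:]   # shift left
    out[:, 2*g:3*g, 1:, :] = x[:, 2*g:3*g, :-1, :]   # shift down
    out[:, 3*g:,   :-1, :] = x[:, 3*g:,   1:, :]     # shift up
    return out
```

Note how the zero-initialized borders discard information; this is precisely the region-aware information loss the next sentences attribute to naive shifting in restoration.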
The underlying reason is that image restoration is sensitive to spatial shifts, which incur severe region-aware information loss, a behavior different from that of high-level tasks. To address this challenge and unleash the potential of spatial shift for image restoration, we propose an information-lossless shifting operator, i.e., Deep Fourier Shifting, that is customized for image restoration. To develop our proposed operator, we first revisit the principle of the shift operator and apply it to the Fourier domain, where the shift operator can be modeled in an information-lossless Fourier cycling manner. Inspired by Fourier cycling, we design two variants of Deep Fourier Shifting, namely the amplitude-phase variant and the real-imaginary variant. These variants are generic operators that can be directly plugged into existing image restoration networks as a drop-in replacement for the standard convolution unit, consuming fewer parameters. Extensive experiments across multiple low-level tasks including image denoising, low-light image enhancement, guided image super-resolution, and image deblurring demonstrate consistent performance gains obtained by our Deep Fourier Shifting while reducing the computational burden. Additionally, ablation studies verify the robustness of the shift displacement with stable performance improvement.", "pdf": "https://openreview.net/pdf/2adbaa12c492cb3da56fb264c1f1af8baf11a4a4.pdf"} {"title": "Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models", "url": "https://openreview.net/forum?id=v5Un2QqnRf", "detail_url": "https://openreview.net/forum?id=v5Un2QqnRf", "authors": "Yang Jiao,Shaoxiang Chen,ZEQUN JIE,Jingjing Chen,Lin Ma,Yu-Gang Jiang", "tags": "NIPS 2024,Poster", "abstract": "The Large Multimodal Model (LMM) is a hot research topic in the computer vision area and has also demonstrated remarkable potential across multiple disciplinary fields. A recent trend is to further extend and enhance the perception capabilities of LMMs. The current methods follow the paradigm of adapting the visual task outputs to the format of the language model, which is the main component of an LMM. This adaptation leads to convenient development of such LMMs with minimal modifications; however, it overlooks the intrinsic characteristics of diverse visual tasks and hinders the learning of perception capabilities. To address this issue, we propose a novel LMM architecture named Lumen, a Large multimodal model with versatile vision-centric capability enhancement. We decouple the LMM's learning of perception capabilities into task-agnostic and task-specific stages. Lumen first promotes fine-grained vision-language concept alignment, which is the fundamental capability for various visual tasks. Thus the output of the task-agnostic stage is a shared representation for all the tasks we address in this paper. Then the task-specific decoding is carried out by flexibly routing the shared representation to lightweight task decoders with negligible training efforts. 
Comprehensive experimental results on a series of vision-centric and VQA benchmarks indicate that our Lumen model not only achieves or surpasses the performance of existing LMM-based approaches in a range of vision-centric tasks, but also maintains general visual understanding and instruction-following capabilities.", "pdf": "https://openreview.net/pdf/f93765d8f6468be09c47280c032485d031a61297.pdf"} {"title": "MoVA: Adapting Mixture of Vision Experts to Multimodal Context", "url": "https://openreview.net/forum?id=uHs6RJFDsg", "detail_url": "https://openreview.net/forum?id=uHs6RJFDsg", "authors": "Zhuofan Zong,Bingqi Ma,Dazhong Shen,Guanglu Song,Hao Shao,Dongzhi Jiang,Hongsheng Li,Yu Liu", "tags": "NIPS 2024,Poster", "abstract": "As the key component in multimodal large language models (MLLMs), the ability of the visual encoder greatly affects an MLLM's understanding of diverse image content. Although some large-scale pretrained vision encoders such as vision encoders in CLIP and DINOv2 have brought promising performance, we found that there is still no single vision encoder that dominates across diverse image content; e.g., the CLIP vision encoder leads to outstanding results on general image understanding but poor performance on document or chart content. To alleviate the bias of the CLIP vision encoder, we first delve into the inherent behavior of different pre-trained vision encoders and then propose the MoVA, a powerful and novel MLLM, adaptively routing and fusing task-specific vision experts with a coarse-to-fine mechanism. In the coarse-grained stage, we design a context-aware expert routing strategy to dynamically select the most suitable vision experts according to the user instruction, input image, and expertise of vision experts. This benefits from the powerful model function understanding ability of the large language model (LLM). In the fine-grained stage, we elaborately design the mixture-of-vision-expert adapter (MoV-Adapter) to extract and fuse task-specific knowledge from various experts. This coarse-to-fine paradigm effectively leverages representations from experts based on multimodal context and model expertise, further enhancing the generalization ability. We conduct extensive experiments to evaluate the effectiveness of the proposed approach. Without any bells and whistles, MoVA can achieve significant performance gains over current state-of-the-art methods in a wide range of challenging multimodal benchmarks.", "pdf": "https://openreview.net/pdf/85eac16c2d58fa7601c94c13ed540dcb3e97d7c3.pdf"} {"title": "UPS: Unified Projection Sharing for Lightweight Single-Image Super-resolution and Beyond", "url": "https://openreview.net/forum?id=tacb2bFZcm", "detail_url": "https://openreview.net/forum?id=tacb2bFZcm", "authors": "Kun Zhou,Xinyu Lin,Zhonghang LIU,Xiaoguang Han,Jiangbo Lu", "tags": "NIPS 2024,Poster", "abstract": "To date, transformer-based frameworks have demonstrated impressive results in single-image super-resolution (SISR). However, under practical lightweight scenarios, the complex interaction of deep image feature extraction and similarity modeling limits the performance of these methods, since they require simultaneous layer-specific optimization of both tasks. In this work, we introduce a novel Unified Projection Sharing algorithm (UPS) to decouple the feature extraction and similarity modeling, achieving notable performance. 
To do this, we establish a unified projection space defined by a learnable projection matrix for similarity calculation across all self-attention layers. As a result, deep image feature extraction remains a per-layer optimization, while similarity modeling is carried out by projecting these image features onto the shared projection space. Extensive experiments demonstrate that our proposed UPS achieves state-of-the-art performance relative to leading lightweight SISR methods, as verified by various popular benchmarks. Moreover, our unified optimized projection space exhibits encouraging robustness on unseen data (degraded and depth images). Finally, UPS also demonstrates promising results across various image restoration tasks, including real-world and classic SISR, image denoising, and image deblocking.", "pdf": "https://openreview.net/pdf/23b5ecf1afe998ecdd8d69c48a52d8ef2e144490.pdf"} {"title": "Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization", "url": "https://openreview.net/forum?id=8CguPoe3TP", "detail_url": "https://openreview.net/forum?id=8CguPoe3TP", "authors": "Nicola Bariletto,Nhat Ho", "tags": "NIPS 2024,Poster", "abstract": "Training machine learning and statistical models often involves optimizing a data-driven risk criterion. The risk is usually computed with respect to the empirical data distribution, but this may result in poor and unstable out-of-sample performance due to distributional uncertainty. In the spirit of distributionally robust optimization, we propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences. First, we highlight novel connections with standard regularized empirical risk minimization techniques, including Ridge and LASSO regression. Then, we theoretically demonstrate the existence of favorable finite-sample and asymptotic statistical guarantees on the performance of the robust optimization procedure. For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations. We also show that the smoothness of the criterion naturally leads to standard gradient-based numerical optimization. Finally, we provide insights into the workings of our method by applying it to a variety of tasks based on simulated and real datasets.", "pdf": "https://openreview.net/pdf/78de727e07ecc8455e43addfde64b87fb1a6355d.pdf"} {"title": "SSA-Seg: Semantic and Spatial Adaptive Pixel-level Classifier for Semantic Segmentation", "url": "https://openreview.net/forum?id=RZZo23pQFL", "detail_url": "https://openreview.net/forum?id=RZZo23pQFL", "authors": "Xiaowen Ma,Zhen-Liang Ni,Xinghao Chen", "tags": "NIPS 2024,Poster", "abstract": "Vanilla pixel-level classifiers for semantic segmentation are based on a fixed paradigm: the inner product between fixed prototypes obtained from the training set and pixel features in the test image. This approach, however, encounters significant limitations, i.e., feature deviation in the semantic domain and information loss in the spatial domain. The former struggles with large intra-class variance among pixel features from different images, while the latter fails to utilize the structured information of semantic objects effectively. This leads to blurred mask boundaries as well as a deficiency of fine-grained recognition capability. 
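The vanilla paradigm the SSA-Seg abstract critiques is compact enough to show directly; a minimal sketch of the fixed-prototype inner-product classifier (the baseline, not the proposed method):

```python
import torch

def vanilla_pixel_classifier(features, prototypes):
    """Per-pixel class scores as inner products between fixed class prototypes
    and pixel features. features: (B, D, H, W); prototypes: (K, D)."""
    return torch.einsum("bdhw,kd->bkhw", features, prototypes)
```

Because the prototypes stay fixed across test images, nothing in this computation adapts to image-specific semantic or spatial context, which is the gap the adaptive classifier below targets.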
In this paper, we propose a novel Semantic and Spatial Adaptive Classifier (SSA-Seg) to address the above challenges. Specifically, we employ the coarse masks obtained from the fixed prototypes as a guide to adjust the fixed prototypes towards the centers of the semantic and spatial domains in the test image. The adapted prototypes in semantic and spatial domains are then simultaneously considered to accomplish classification decisions. In addition, we propose an online multi-domain distillation learning strategy to improve the adaptation process. Experimental results on three publicly available benchmarks show that the proposed SSA-Seg significantly improves the segmentation performance of the baseline models with only a minimal increase in computational cost.", "pdf": "https://openreview.net/pdf/d4205e278e0ac3d0c37ca725c7feedf94929714c.pdf"} {"title": "Language Model as Visual Explainer", "url": "https://openreview.net/forum?id=Dsi8Ibxg9H", "detail_url": "https://openreview.net/forum?id=Dsi8Ibxg9H", "authors": "Xingyi Yang,Xinchao Wang", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we present Language Model as Visual Explainer (\\texttt{LVX}), a systematic approach for interpreting the internal workings of vision models using a tree-structured linguistic explanation, without the need for model training. Central to our strategy is the collaboration between vision models and an LLM to craft explanations. On one hand, the LLM is harnessed to delineate hierarchical visual attributes, while concurrently, a text-to-image API retrieves images that are most aligned with these textual concepts. By mapping the collected texts and images to the vision model's embedding space, we construct a hierarchy-structured visual embedding tree. This tree is dynamically pruned and grown by querying the LLM using language templates, tailoring the explanation to the model. Such a scheme allows us to seamlessly incorporate new attributes while eliminating undesired concepts based on the model's representations. When applied to testing samples, our method provides human-understandable explanations in the form of attribute-laden trees. Beyond explanation, we retrain the vision model by calibrating it on the generated concept hierarchy, allowing the model to incorporate the refined knowledge of visual attributes. To assess the effectiveness of our approach, we introduce new benchmarks and conduct rigorous evaluations, demonstrating its plausibility, faithfulness, and stability.", "pdf": "https://openreview.net/pdf/74362df179f20f43d867bd8fcfb34a4d06911c90.pdf"} {"title": "Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models", "url": "https://openreview.net/forum?id=CEJ1mYPgWw", "detail_url": "https://openreview.net/forum?id=CEJ1mYPgWw", "authors": "Wenshan Wu,Shaoguang Mao,Yadong Zhang,Yan Xia,Li Dong,Lei Cui,Furu Wei", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have exhibited impressive performance in language comprehension and various reasoning tasks. However, their abilities in spatial reasoning, a crucial aspect of human cognition, remain relatively unexplored. Humans possess a remarkable ability to create mental images of unseen objects and actions through a process known as the Mind's Eye, enabling the imagination of the unseen world. Inspired by this cognitive capacity, we propose Visualization-of-Thought (VoT) prompting. 
VoT aims to elicit spatial reasoning of LLMs by visualizing their reasoning traces, thereby guiding subsequent reasoning steps. We employed VoT for multi-hop spatial reasoning tasks, including natural language navigation, visual navigation, and visual tiling in 2D grid worlds. Experimental results demonstrated that VoT significantly enhances the spatial reasoning abilities of LLMs. Notably, VoT outperformed existing multimodal large language models (MLLMs) in these tasks. While VoT works surprisingly well on LLMs, the ability to generate mental images to facilitate spatial reasoning resembles the mind's eye process, suggesting its potential viability in MLLMs. Please find the dataset and codes in our [project page](https://microsoft.github.io/visualization-of-thought).", "pdf": "https://openreview.net/pdf/113616234bcddc1e7e4bbd4714048a448e75e10a.pdf"} {"title": "GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting", "url": "https://openreview.net/forum?id=wcxHbAY8B3", "detail_url": "https://openreview.net/forum?id=wcxHbAY8B3", "authors": "Xiufeng Huang,Ruiqi Li,Yiu-ming Cheung,Ka Chun Cheung,Simon See,Renjie Wan", "tags": "NIPS 2024,Poster", "abstract": "3D Gaussian Splatting (3DGS) has become a crucial method for acquiring 3D assets. To protect the copyright of these assets, digital watermarking techniques can be applied to embed ownership information discreetly within 3DGS models. However, existing watermarking methods for meshes, point clouds, and implicit radiance fields cannot be directly applied to 3DGS models, as 3DGS models use explicit 3D Gaussians with distinct structures and do not rely on neural networks. Naively embedding the watermark on a pre-trained 3DGS can cause obvious distortion in rendered images. In our work, we propose an uncertainty-based method that constrains the perturbation of model parameters to achieve invisible watermarking for 3DGS. At the message decoding stage, the copyright messages can be reliably extracted from both 3D Gaussians and 2D rendered images even under various forms of 3D and 2D distortions. 
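One plausible reading of the uncertainty-constrained perturbation above can be sketched as follows; this is an illustrative interpretation under assumed inputs (per-parameter uncertainty estimates and a message-embedding gradient), not the paper's algorithm:

```python
import torch

def watermark_perturbation(params, param_uncertainty, msg_grad, eps=1e-3):
    """Illustrative uncertainty-aware update: parameters with higher estimated
    uncertainty (where rendering changes little when they move) absorb a larger
    share of the watermark perturbation. All names here are assumptions."""
    budget = eps * param_uncertainty / (param_uncertainty.max() + 1e-8)
    return params + budget * msg_grad.sign()
```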
We conduct extensive experiments on the Blender, LLFF, and MipNeRF-360 datasets to validate the effectiveness of our proposed method, demonstrating state-of-the-art performance on both message decoding accuracy and view synthesis quality.", "pdf": "https://openreview.net/pdf/430c524014071593c5a9851aef0295f14993f2e5.pdf"} {"title": "Expressive Gaussian Human Avatars from Monocular RGB Video", "url": "https://openreview.net/forum?id=3CweLZFNyl", "detail_url": "https://openreview.net/forum?id=3CweLZFNyl", "authors": "Hezhen Hu,Zhiwen Fan,Tianhao Walter Wu,Yihan Xi,Seoyoung Lee,Georgios Pavlakos,Zhangyang Wang", "tags": "NIPS 2024,Poster", "abstract": "Nuanced expressiveness, especially through detailed hand and facial expressions, is pivotal for enhancing the realism and vitality of digital human representations.\nIn this work, we aim to learn expressive human avatars from a monocular RGB video, a setting that introduces new challenges in capturing and animating fine-grained details.\nTo this end, we introduce EVA, a drivable human model that can recover fine details based on 3D Gaussians and an expressive parametric human model, SMPL-X.\nFocused on enhancing expressiveness, our work makes three key contributions.\nFirst, we highlight the importance of aligning the SMPL-X model with the video frames for effective avatar learning.\nRecognizing the limitations of current methods for estimating SMPL-X parameters from in-the-wild videos, we introduce a reconstruction module that significantly improves the image-model alignment.\nSecond, we propose a context-aware adaptive density control strategy, which adaptively adjusts the gradient thresholds to accommodate the varied granularity across body parts.\nThird, we develop a feedback mechanism that predicts per-pixel confidence to better guide the optimization of 3D Gaussians.\nExtensive experiments on two benchmarks demonstrate the superiority of our approach both quantitatively and qualitatively, especially on the fine-grained hand and facial details. \nWe make our code available at the project website: https://evahuman.github.io.", "pdf": "https://openreview.net/pdf/d3eeb6925f27576c641205c98103704bbe8e32eb.pdf"} {"title": "DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion", "url": "https://openreview.net/forum?id=YIOvR40hSo", "detail_url": "https://openreview.net/forum?id=YIOvR40hSo", "authors": "Weicai Ye,Chenhao Ji,Zheng Chen,Junyao Gao,Xiaoshui Huang,Song-Hai Zhang,Wanli Ouyang,Tong He,Cairong Zhao,Guofeng Zhang", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based methods have achieved remarkable success in 2D image and 3D object generation; however, the generation of 3D scenes and even $360^{\\circ}$ images remains constrained, due to the limited number of scene datasets, the complexity of 3D scenes themselves, and the difficulty of generating consistent multi-view images. To address these issues, we first establish a large-scale panoramic video-text dataset containing millions of consecutive panoramic keyframes with corresponding panoramic depths, camera poses, and text descriptions. Then, we propose a novel text-driven panoramic generation framework, termed DiffPano, to achieve scalable, consistent, and diverse panoramic scene generation. Specifically, benefiting from the powerful generative capabilities of stable diffusion, we fine-tune a single-view text-to-panorama diffusion model with LoRA on the established panoramic video-text dataset. 
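The LoRA fine-tuning step mentioned above follows a standard recipe: freeze the pretrained weights and train a low-rank additive update. A minimal, self-contained sketch of the adapter (rank and scaling values are common defaults, not DiffPano's settings):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # keep pretrained weights frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```

Starting B at zero means the adapted model is exactly the pretrained one at step zero, so fine-tuning departs smoothly from the base checkpoint.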
We further design a spherical epipolar-aware multi-view diffusion model to ensure the multi-view consistency of the generated panoramic images. Extensive experiments demonstrate that DiffPano can generate scalable, consistent, and diverse panoramic images given unseen text descriptions and camera poses.", "pdf": "https://openreview.net/pdf/49edb1b3fd04008c5c8f72f72489b6a2bdab8e4c.pdf"} {"title": "Does Egalitarian Fairness Lead to Instability? The Fairness Bounds in Stable Federated Learning Under Altruistic Behaviors", "url": "https://openreview.net/forum?id=1kyc4TSOFZ", "detail_url": "https://openreview.net/forum?id=1kyc4TSOFZ", "authors": "Jiashi Gao,Ziwei Wang,Xiangyu Zhao,Xin Yao,Xuetao Wei", "tags": "NIPS 2024,Poster", "abstract": "Federated learning (FL) offers a machine learning paradigm that protects privacy, allowing multiple clients to collaboratively train a global model while only accessing their local data. Recent research in FL has increasingly focused on improving the uniformity of model performance across clients, a fairness principle known as egalitarian fairness. However, achieving egalitarian fairness in FL may sacrifice the model performance for data-rich clients to benefit those with less data. This trade-off raises concerns about the stability of FL, as data-rich clients may opt to leave the current coalition and join another that is more closely aligned with their expected high performance. In this context, our work rigorously addresses the critical concern: **Does egalitarian fairness lead to instability?** Drawing from game theory and social choice theory, we initially characterize fair FL systems as altruism coalition formation games (ACFGs) and reveal that the instability issues emerging from the pursuit of egalitarian fairness are significantly related to the clients\u2019 altruism within the coalition and the configuration of the friends-relationship networks among the clients. Then, we theoretically propose the optimal egalitarian fairness bounds that an FL coalition can achieve while maintaining core stability under various types of altruistic behaviors. The theoretical contributions clarify the quantitative relationships between achievable egalitarian fairness and the disparities in the sizes of local datasets, disproving the misconception that egalitarian fairness inevitably leads to instability. Finally, we conduct experiments to evaluate the consistency of our theoretically derived egalitarian fairness bounds with the empirically achieved egalitarian fairness in fair FL settings.", "pdf": "https://openreview.net/pdf/2e449843d62a52f82adfc61034d84c572027d5f9.pdf"} {"title": "VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions", "url": "https://openreview.net/forum?id=C3ZHiij9QE", "detail_url": "https://openreview.net/forum?id=C3ZHiij9QE", "authors": "Guangyan Chen,Meiling Wang,Te Cui,Yao Mu,Haoyang Lu,Tianxing Zhou,Zicai Peng,Mengxiao Hu,Haizhou Li,Li Yuan,Yi Yang,Yufeng Yue", "tags": "NIPS 2024,Poster", "abstract": "Visual imitation learning (VIL) provides an efficient and intuitive strategy for robotic systems to acquire novel skills. Recent advancements in Vision Language Models (VLMs) have demonstrated remarkable vision and language reasoning capabilities for VIL tasks. Despite the progress, current VIL methods naively employ VLMs to learn high-level plans from human videos, relying on pre-defined motion primitives for executing physical interactions, which remains a major bottleneck. 
In this work, we present VLMimic, a novel paradigm that harnesses VLMs to directly learn skills at even fine-grained action levels, given only a limited number of human videos. Specifically, VLMimic first grounds object-centric movements from human videos, and learns skills using hierarchical constraint representations, facilitating the derivation of skills with fine-grained action levels from limited human videos. These skills are refined and updated through an iterative comparison strategy, enabling efficient adaptation to unseen environments. Our extensive experiments show that VLMimic, using only 5 human videos, yields significant improvements of over 27% and 21% in RLBench and real-world manipulation tasks, and surpasses baselines by more than 37% in long-horizon tasks. Code and videos are available on our anonymous homepage.", "pdf": "https://openreview.net/pdf/e9b1a837e503d1861ece741d0a2b937f77eea435.pdf"} {"title": "AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation", "url": "https://openreview.net/forum?id=ekK26cW5TB", "detail_url": "https://openreview.net/forum?id=ekK26cW5TB", "authors": "Boyu Han,Qianqian Xu,Zhiyong Yang,Shilong Bao,Peisong Wen,Yangbangyan Jiang,Qingming Huang", "tags": "NIPS 2024,Poster", "abstract": "The Area Under the ROC Curve (AUC) is a well-known metric for evaluating instance-level long-tail learning problems. In the past two decades, many AUC optimization methods have been proposed to improve model performance under long-tail distributions. In this paper, we explore AUC optimization methods in the context of pixel-level long-tail semantic segmentation, a much more complicated scenario. This task introduces two major challenges for AUC optimization techniques. On one hand, AUC optimization in a pixel-level task involves complex coupling across loss terms, with structured inner-image and pairwise inter-image dependencies, complicating theoretical analysis. On the other hand, we find that mini-batch estimation of AUC loss in this case requires a larger batch size, resulting in unaffordable space complexity. To address these issues, we develop a pixel-level AUC loss function and conduct a dependency-graph-based theoretical analysis of the algorithm's generalization ability. Additionally, we design a Tail-Classes Memory Bank (T-Memory Bank) to manage the significant memory demand. Finally, comprehensive experiments across various benchmarks confirm the effectiveness of our proposed AUCSeg method. The code is available at https://github.com/boyuh/AUCSeg.", "pdf": "https://openreview.net/pdf/38fa941c480de3259c3508aaf0c968eed971b269.pdf"} {"title": "Scene Graph Generation with Role-Playing Large Language Models", "url": "https://openreview.net/forum?id=xpRUi8amtC", "detail_url": "https://openreview.net/forum?id=xpRUi8amtC", "authors": "Guikun Chen,Jin Li,Wenguan Wang", "tags": "NIPS 2024,Poster", "abstract": "Current approaches for open-vocabulary scene graph generation (OVSGG) use vision-language models such as CLIP and follow a standard zero-shot pipeline \u2013 computing similarity between the query image and the text embeddings for each category (i.e., text classifiers). In this work, we argue that the text classifiers adopted by existing OVSGG methods, i.e., category-/part-level prompts, are scene-agnostic as they remain unchanged across contexts. Such fixed text classifiers not only struggle to model visual relations with high variance, but also fall short in adapting to distinct contexts. 
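The scene-agnostic pipeline criticized above is the familiar CLIP-style zero-shot scorer; a minimal sketch over precomputed embeddings (the temperature value is a typical choice, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def zero_shot_scores(image_emb, text_embs, temperature=0.01):
    """Fixed text embeddings act as classifiers for every image:
    image_emb: (D,); text_embs: (K, D), one embedding per category prompt."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    return ((text_embs @ image_emb) / temperature).softmax(dim=-1)
```

Since `text_embs` never changes with the input image, the classifier cannot specialize to the scene at hand, which is the shortcoming the next entry addresses.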
To address these intrinsic shortcomings, we devise SDSGG, a scene-specific description-based OVSGG framework where the weights of text classifiers are adaptively adjusted according to the visual content. In particular, to generate comprehensive and diverse descriptions oriented to the scene, an LLM is asked to play different roles (e.g., biologist and engineer) to analyze and discuss the descriptive features of a given scene from different views. Unlike previous efforts simply treating the generated descriptions as mutually equivalent text classifiers, SDSGG is equipped with an advanced renormalization mechanism to adjust the influence of each text classifier based on its relevance to the presented scene (this is what the term \u201cspecific\u201d means). Furthermore, to capture the complicated interplay between subjects and objects, we propose a new lightweight module called the mutual visual adapter. It refines CLIP\u2019s ability to recognize relations by learning an interaction-aware semantic space. Extensive experiments on prevalent benchmarks show that SDSGG significantly outperforms top-leading methods.", "pdf": "https://openreview.net/pdf/a6ceccff861a11ddd07c003913a667e72407795a.pdf"} {"title": "pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning", "url": "https://openreview.net/forum?id=xW6ga9i4eA", "detail_url": "https://openreview.net/forum?id=xW6ga9i4eA", "authors": "Jiaqi Wang,Qi Li,Lingjuan Lyu,Fenglong Ma", "tags": "NIPS 2024,Poster", "abstract": "Federated learning, a pioneering paradigm, enables collaborative model training without exposing users\u2019 data to central servers. Most existing federated learning systems necessitate uniform model structures across all clients, restricting their practicality. Several methods have emerged to aggregate diverse client models; however, they either lack the ability to personalize, raise privacy and security concerns, need prior knowledge, or ignore the capability and functionality of personalized models. In this paper, we present an innovative approach, named pFedClub, which addresses these challenges. pFedClub introduces personalized federated learning through the substitution of controllable neural network blocks/layers. Initially, pFedClub dissects heterogeneous client models into blocks and organizes them into functional groups on the server. Utilizing the designed CMSR (Controllable Model Searching and Reproduction) algorithm, pFedClub generates a range of personalized candidate models for each client. A model matching technique is then applied to select the optimal personalized model, serving as a teacher model to guide each client\u2019s training process. We conducted extensive experiments across three datasets, examining both IID and non-IID settings. The results demonstrate that pFedClub outperforms baseline approaches, achieving state-of-the-art performance. 
Moreover, our model insight analysis reveals that pFedClub generates personalized models of reasonable size in a controllable manner, significantly reducing computational costs.", "pdf": "https://openreview.net/pdf/e4ee792dd28b3bc552b8f290198f312b9e344159.pdf"} {"title": "Zero-shot Image Editing with Reference Imitation", "url": "https://openreview.net/forum?id=LZV0U6UHb6", "detail_url": "https://openreview.net/forum?id=LZV0U6UHb6", "authors": "Xi Chen,Yutong Feng,Mengting Chen,Yiyang Wang,Shilong Zhang,Yu Liu,Yujun Shen,Hengshuang Zhao", "tags": "NIPS 2024,Poster", "abstract": "Image editing serves as a practical yet challenging task considering the diverse demands from users, where one of the hardest parts is to precisely describe how the edited image should look. In this work, we present a new form of editing, termed imitative editing, to help users exercise their creativity more conveniently. Concretely, to edit an image region of interest, users are free to directly draw inspiration from some in-the-wild references (e.g., related pictures encountered online), without having to cope with the fit between the reference and the source. Such a design requires the system to automatically figure out what to expect from the reference to perform the editing. For this purpose, we propose a generative training framework, dubbed MimicBrush, which randomly selects two frames from a video clip, masks some regions of one frame, and learns to recover the masked regions using the information from the other frame. That way, our model, developed from a diffusion prior, is able to capture the semantic correspondence between separate images in a self-supervised manner. We experimentally show the effectiveness of our method under various test cases as well as its superiority over existing alternatives. We also construct a benchmark to facilitate further research.", "pdf": "https://openreview.net/pdf/3180eab83b7ea4addb961064aebffa351b5a0e2c.pdf"} {"title": "Deep Correlated Prompting for Visual Recognition with Missing Modalities", "url": "https://openreview.net/forum?id=zO55ovdLJw", "detail_url": "https://openreview.net/forum?id=zO55ovdLJw", "authors": "Lianyu Hu,Tongkai Shi,Wei Feng,Fanhua Shang,Liang Wan", "tags": "NIPS 2024,Poster", "abstract": "Large-scale multimodal models have shown excellent performance over a series of tasks powered by the large corpus of paired multimodal training data. Generally, they are assumed to receive modality-complete inputs. However, this simple assumption may not always hold in the real world due to privacy constraints or collection difficulty, where models pretrained on modality-complete data easily demonstrate degraded performance on missing-modality cases. To handle this issue, we refer to prompt learning to adapt large pretrained multimodal models to handle missing-modality scenarios by regarding different missing cases as different types of input. Instead of only prepending independent prompts to the intermediate layers, we propose to leverage the correlations between prompts and input features and to exploit the relationships between different layers of prompts to carefully design the instructions. We also incorporate the complementary semantics of different modalities to guide the prompting design for each modality. Extensive experiments on three commonly-used datasets consistently demonstrate the superiority of our method compared to previous approaches under different missing scenarios. 
Extensive ablations further demonstrate the generalizability and reliability of our method under different modality-missing ratios and types.", "pdf": "https://openreview.net/pdf/e5eab82e91c827d97d0d74e6bfb40e12627a0fb3.pdf"} {"title": "Ordering-Based Causal Discovery for Linear and Nonlinear Relations", "url": "https://openreview.net/forum?id=OQUg2T4qJB", "detail_url": "https://openreview.net/forum?id=OQUg2T4qJB", "authors": "Zhuopeng Xu,Yujie Li,Cheng Liu,Ning Gui", "tags": "NIPS 2024,Poster", "abstract": "Identifying causal relations from purely observational data typically requires additional assumptions on relations and/or noise. Most current methods restrict their analysis to datasets that are assumed to have pure linear or nonlinear relations, which is often not reflective of real-world datasets that contain a combination of both. This paper presents CaPS, an ordering-based causal discovery algorithm that effectively handles linear and nonlinear relations. CaPS introduces a novel identification criterion for topological ordering and incorporates the concept of \"parent score\" during the post-processing optimization stage. These scores quantify the strength of the average causal effect, helping to accelerate the pruning process and correct inaccurate predictions in the pruning step. Experimental results demonstrate that our proposed solutions outperform state-of-the-art baselines on synthetic data with varying ratios of linear and nonlinear relations. The results obtained from real-world data also support the competitiveness of CaPS. Code and datasets are available at https://github.com/E2real/CaPS.", "pdf": "https://openreview.net/pdf/b1aebbce8b2e5fd3e0c0adc2bacb860b24dc9ac0.pdf"} {"title": "SpeAr: A Spectral Approach for Zero-Shot Node Classification", "url": "https://openreview.net/forum?id=eU87jJyEK5", "detail_url": "https://openreview.net/forum?id=eU87jJyEK5", "authors": "Ting Guo,Da Wang,Jiye Liang,Kaihan Zhang,Jianchao Zeng", "tags": "NIPS 2024,Poster", "abstract": "Zero-shot node classification is a vital task in the field of graph data processing, aiming to identify nodes of classes unseen during the training process. Prediction bias is one of the primary challenges in zero-shot node classification, referring to the model's propensity to misclassify nodes of unseen classes as seen classes. However, most methods introduce external knowledge to mitigate the bias, inadequately leveraging the inherent cluster information within the unlabeled nodes. To address this issue, we employ spectral analysis coupled with learnable class prototypes to discover the implicit cluster structures within the graph, providing a more comprehensive understanding of classes. In this paper, we propose a spectral approach for zero-shot node classification (SpeAr). Specifically, we establish an approximate relationship between minimizing the spectral contrastive loss and performing spectral decomposition on the graph, thereby enabling effective node characterization through loss minimization. Subsequently, the class prototypes, initialized with the semantic vectors, are iteratively refined based on the learned node representations. 
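The prototype refinement loop just described admits a simple k-means-style sketch; a hedged rendering under assumed shapes (node representations (N, D), prototypes (K, D) initialized from semantic vectors), not the paper's exact update:

```python
import torch
import torch.nn.functional as F

def refine_prototypes(node_repr, prototypes, n_iters=10):
    """Alternate nearest-prototype assignment with mean updates, starting from
    semantically initialized prototypes."""
    for _ in range(n_iters):
        sims = F.normalize(node_repr, dim=1) @ F.normalize(prototypes, dim=1).t()
        assign = sims.argmax(dim=1)                 # nearest-prototype assignment
        for k in range(prototypes.shape[0]):
            mask = assign == k
            if mask.any():                          # keep empty clusters unchanged
                prototypes[k] = node_repr[mask].mean(dim=0)
    return prototypes
```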
Finally, extensive experiments verify the effectiveness of SpeAr, which can further alleviate the bias problem.", "pdf": "https://openreview.net/pdf/2cf58069ba1d4a743d58ba0cc134db6d36640f1d.pdf"} {"title": "Boosting Semi-Supervised Scene Text Recognition via Viewing and Summarizing", "url": "https://openreview.net/forum?id=7NrYnCN2be", "detail_url": "https://openreview.net/forum?id=7NrYnCN2be", "authors": "Yadong Qu,Yuxin Wang,Bangbang Zhou,Zixiao Wang,Hongtao Xie,Yongdong Zhang", "tags": "NIPS 2024,Poster", "abstract": "Existing scene text recognition (STR) methods struggle to recognize challenging texts, especially for artistic and severely distorted characters. The limitation lies in the insufficient exploration of character morphologies, including the monotonousness of widely used synthetic training data and the sensitivity of the model to character morphologies. To address these issues, inspired by the human learning process of viewing and summarizing, we facilitate the contrastive learning-based STR framework in a self-motivated manner by leveraging synthetic and real unlabeled data without any human cost. In the viewing process, to compensate for the simplicity of synthetic data and enrich character morphology diversity, we propose an Online Generation Strategy to generate background-free samples with diverse character styles. By excluding background noise distractions, the model is encouraged to focus on character morphology and generalize the ability to recognize complex samples when trained with only simple synthetic data. To boost the summarizing process, we theoretically demonstrate the derivation error in the previous character contrastive loss, which mistakenly causes the sparsity in the intra-class distribution and exacerbates ambiguity on challenging samples. Therefore, a new Character Unidirectional Alignment Loss is proposed to correct this error and unify the representation of the same characters in all samples by aligning the character features in the student model with the reference features in the teacher model. Extensive experimental results show that our method achieves SOTA performance (94.7\% and 70.9\% average accuracy on common benchmarks and Union14M-Benchmark). Code will be available.", "pdf": "https://openreview.net/pdf/bf2fac25c1e0c1368a45293d71a7f4484f021e1c.pdf"} {"title": "United We Stand, Divided We Fall: Fingerprinting Deep Neural Networks via Adversarial Trajectories", "url": "https://openreview.net/forum?id=YwpL0BVxts", "detail_url": "https://openreview.net/forum?id=YwpL0BVxts", "authors": "Tianlong Xu,Chen Wang,Gaoyang Liu,Yang Yang,Kai Peng,Wei Liu", "tags": "NIPS 2024,Poster", "abstract": "In recent years, deep neural networks (DNNs) have witnessed extensive applications, and protecting their intellectual property (IP) is thus crucial. As a non-invasive way for model IP protection, model fingerprinting has become popular. However, existing single-point based fingerprinting methods are highly sensitive to the changes in the decision boundary, and may suffer from the misjudgment of the resemblance of sparse fingerprinting, yielding high false positives on innocent models. In this paper, we propose ADV-TRA, a more robust fingerprinting scheme that utilizes adversarial trajectories to verify the ownership of DNN models. Benefiting from its intrinsically progressive adversarial levels, the trajectory is capable of tolerating a greater degree of alteration in decision boundaries. 
We further design novel schemes to generate a surface trajectory that involves a series of fixed-length trajectories with dynamically adjusted step sizes. Such a design enables more unique and reliable fingerprinting with relatively low querying costs. Experiments on three datasets against four types of removal attacks show that ADV-TRA exhibits superior performance in distinguishing between infringing and innocent models, outperforming the state-of-the-art comparisons.", "pdf": "https://openreview.net/pdf/c4e326e4220480b60f51e63ae149d2009d4cccd5.pdf"} {"title": "Generate Universal Adversarial Perturbations for Few-Shot Learning", "url": "https://openreview.net/forum?id=QLRO8o4bol", "detail_url": "https://openreview.net/forum?id=QLRO8o4bol", "authors": "Yiman Hu,Yixiong Zou,Ruixuan Li,Yuhua Li", "tags": "NIPS 2024,Poster", "abstract": "Deep networks are known to be vulnerable to adversarial examples which are deliberately designed to mislead the trained model by introducing imperceptible perturbations to input samples. Compared to traditional perturbations crafted specifically for each data point, Universal Adversarial Perturbations (UAPs) are input-agnostic and shown to be more practical in the real world. However, UAPs are typically generated in a closed-set scenario that shares the same classification task during the training and testing phases. This paper demonstrates the ineffectiveness of traditional UAPs in open-set scenarios like Few-Shot Learning (FSL). Through analysis, we identify two primary challenges that hinder the attacking process: the task shift and the semantic shift. To enhance the transferability of UAPs in FSL, we propose a unifying attacking framework addressing these two shifts. The task shift is addressed by aligning proxy tasks to the downstream tasks, while the semantic shift is handled by leveraging the generalizability of pre-trained encoders. The proposed Few-Shot Attacking FrameWork, denoted as FSAFW, can effectively generate UAPs across various FSL training paradigms and different downstream tasks. Our approach not only sets a new standard for state-of-the-art works but also significantly enhances attack performance, exceeding the baseline method by over 16\%.", "pdf": "https://openreview.net/pdf/110a0e19d065da1b60cf8db53027ee85c36b89f6.pdf"} {"title": "Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly", "url": "https://openreview.net/forum?id=SoTK84ewb7", "detail_url": "https://openreview.net/forum?id=SoTK84ewb7", "authors": "Junsheng Zhou,Yu-Shen Liu,Zhizhong Han", "tags": "NIPS 2024,Poster", "abstract": "Large language and vision models have been leading a revolution in visual computing. By greatly scaling up sizes of data and model parameters, the large models learn deep priors which lead to remarkable performance in various tasks. In this work, we present deep prior assembly, a novel framework that assembles diverse deep priors from large models for scene reconstruction from single images in a zero-shot manner. We show that this challenging task can be done without extra knowledge, simply by generalizing one deep prior within each sub-task. To this end, we introduce novel methods related to poses, scales, and occlusion parsing which are key to enabling deep priors to work together in a robust way. Deep prior assembly does not require any 3D or 2D data-driven training in the task and demonstrates superior performance in generalizing priors to open-world scenes. 
We conduct evaluations on various datasets, and report analyses along with numerical and visual comparisons against the latest methods to show our superiority. Project page: https://junshengzhou.github.io/DeepPriorAssembly.", "pdf": "https://openreview.net/pdf/f7b0ad01064293c4412d2a2b19be3a6d4e74472c.pdf"} {"title": "High-dimensional (Group) Adversarial Training in Linear Regression", "url": "https://openreview.net/forum?id=Tsb4dVtCHx", "detail_url": "https://openreview.net/forum?id=Tsb4dVtCHx", "authors": "Yiling Xie,Xiaoming Huo", "tags": "NIPS 2024,Poster", "abstract": "Adversarial training can achieve robustness against adversarial perturbations and has been widely used in machine-learning models. This paper delivers a non-asymptotic consistency analysis of the adversarial training procedure under $\ell_\infty$-perturbation in high-dimensional linear regression. It will be shown that, under the restricted eigenvalue condition, the associated convergence rate of prediction error can achieve the minimax rate up to a logarithmic factor in the high-dimensional linear regression on the class of sparse parameters. Additionally, the group adversarial training procedure is analyzed. Compared with classic adversarial training, it will be proved that the group adversarial training procedure enjoys a better prediction error upper bound under certain group-sparsity patterns.", "pdf": "https://openreview.net/pdf/477761584d4124aa9d9da812aeb63e1a2bd25252.pdf"} {"title": "From Dictionary to Tensor: A Scalable Multi-View Subspace Clustering Framework with Triple Information Enhancement", "url": "https://openreview.net/forum?id=p4a1nSvwD7", "detail_url": "https://openreview.net/forum?id=p4a1nSvwD7", "authors": "Zhibin Gu,Songhe Feng", "tags": "NIPS 2024,Poster", "abstract": "While Tensor-based Multi-view Subspace Clustering (TMSC) has garnered significant attention for its capacity to effectively capture high-order correlations among multiple views, three notable limitations in current TMSC methods necessitate consideration: 1) high computational complexity and reliance on dictionary completeness resulting from using observed data as the dictionary, 2) inaccurate subspace representation stemming from the oversight of local geometric information, and 3) under-penalization of noise-related singular values within tensor data caused by treating all singular values equally. To address these limitations, this paper presents a \\textbf{S}calable TMSC framework with \\textbf{T}riple inf\\textbf{O}rmatio\\textbf{N} \\textbf{E}nhancement (\\textbf{STONE}). Notably, an enhanced anchor dictionary learning mechanism has been utilized to recover the low-rank anchor structure, resulting in reduced computational complexity and increased resilience, especially in scenarios with inadequate dictionaries. Additionally, we introduce an anchor hypergraph Laplacian regularizer to preserve the inherent geometry of the data within the subspace representation. Simultaneously, an improved hyperbolic tangent function has been employed as a precise approximation for tensor rank, effectively capturing the significant variations in singular values. 
Extensive experimentation on a variety of datasets demonstrates that our approach surpasses SOTA methods in both effectiveness and efficiency.", "pdf": "https://openreview.net/pdf/a9b52ea6298c33aa1b741a10daf60f7b9a8b2a08.pdf"} {"title": "DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment", "url": "https://openreview.net/forum?id=hKVTwQQu76", "detail_url": "https://openreview.net/forum?id=hKVTwQQu76", "authors": "Gongpei Zhao,Tao Wang,Congyan Lang,Yi Jin,Yidong Li,Haibin Ling", "tags": "NIPS 2024,Poster", "abstract": "Graph neural networks (GNNs) are recognized for their strong performance across various applications, with the backpropagation (BP) algorithm playing a central role in the development of most GNN models. However, despite its effectiveness, BP has limitations that challenge its biological plausibility and affect the efficiency, scalability and parallelism of training neural networks for graph-based tasks. While several non-backpropagation (non-BP) training algorithms, such as the direct feedback alignment (DFA), have been successfully applied to fully-connected and convolutional network components for handling Euclidean data, directly adapting these non-BP frameworks to manage non-Euclidean graph data in GNN models presents significant challenges. These challenges primarily arise from the violation of the independent and identically distributed (i.i.d.) assumption in graph data and the difficulty in accessing prediction errors for all samples (nodes) within the graph. To overcome these obstacles, in this paper we propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning. The proposed method breaks the limitations of BP by using a dedicated forward training mechanism. Specifically, DFA-GNN extends the principles of DFA to adapt to graph data and the unique architecture of GNNs, incorporating the information of graph topology into the feedback links to accommodate the non-Euclidean characteristics of graph data. Additionally, for semi-supervised graph learning tasks, we develop a pseudo error generator that spreads residual errors from training data to create a pseudo error for each unlabeled node. These pseudo errors are then utilized to train GNNs using DFA. Extensive experiments on 10 public benchmarks reveal that our learning framework outperforms not only previous non-BP methods but also the standard BP methods, and it exhibits excellent robustness against various types of noise and attacks.", "pdf": "https://openreview.net/pdf/647c2de3753f676ee4f3022d76f9b618c090fb25.pdf"} {"title": "Rethinking Transformer for Long Contextual Histopathology Whole Slide Image Analysis", "url": "https://openreview.net/forum?id=f3oHNyqd83", "detail_url": "https://openreview.net/forum?id=f3oHNyqd83", "authors": "Honglin Li,Yunlong Zhang,Pingyi Chen,Zhongyi Shui,Chenglu Zhu,Lin Yang", "tags": "NIPS 2024,Poster", "abstract": "Histopathology Whole Slide Image (WSI) analysis serves as the gold standard for clinical cancer diagnosis in the daily routines of doctors. To develop a computer-aided diagnosis model for histopathology WSIs, previous methods typically employ Multi-Instance Learning to enable slide-level prediction given only slide-level labels.\nAmong these models, vanilla attention mechanisms without pairwise interactions have traditionally been employed but are unable to model contextual information. More recently, self-attention models have been utilized to address this issue. 
To alleviate the computational complexity of long sequences in large WSIs, methods like HIPT use region-slicing, and TransMIL employs Nystr\\\"{o}mformer as an approximation of full self-attention. Both approaches suffer from suboptimal performance due to the loss of key information. Moreover, their use of absolute positional embedding struggles to effectively handle long contextual dependencies in shape-varying WSIs.\nIn this paper, we first analyze how the low-rank nature of the long-sequence attention matrix constrains the representation ability of WSI modelling. Then, we demonstrate that the rank of the attention matrix can be improved by focusing on local interactions via a local attention mask. Our analysis shows that the local mask aligns with the attention patterns in the lower layers of the Transformer. Furthermore, the local attention mask can be implemented during chunked attention calculation, reducing the quadratic computational complexity to linear with a small local bandwidth. Additionally, this locality helps the model generalize to unseen or under-fitted positions more easily.\nBuilding on this, we propose a local-global hybrid Transformer for both computational acceleration and local-global information interaction modelling. Our method, Long-contextual MIL (LongMIL), is evaluated through extensive experiments on various WSI tasks to validate its superiority in: 1) overall performance, 2) memory usage and speed, and 3) extrapolation ability compared to previous methods.", "pdf": "https://openreview.net/pdf/41837e7defc28fbad676346268cce7e1206d8357.pdf"} {"title": "Improving Gloss-free Sign Language Translation by Reducing Representation Density", "url": "https://openreview.net/forum?id=FtzLbGoHW2", "detail_url": "https://openreview.net/forum?id=FtzLbGoHW2", "authors": "Jinhui Ye,Xing Wang,Wenxiang Jiao,Junwei Liang,Hui Xiong", "tags": "NIPS 2024,Poster", "abstract": "Gloss-free sign language translation (SLT) aims to develop well-performing SLT systems with no requirement for costly gloss annotations, but currently still lags behind gloss-based approaches significantly. In this paper, we identify **a representation density problem** that could be a bottleneck in restricting the performance of gloss-free SLT. Specifically, the representation density problem describes that the visual representations of semantically distinct sign gestures tend to be closely packed together in feature space, which makes gloss-free methods struggle with distinguishing different sign gestures and suffer from a sharp performance drop. To address the representation density problem, we introduce a simple but effective contrastive learning strategy, namely SignCL, which encourages gloss-free models to learn more discriminative feature representation in a self-supervised manner. Our experiments demonstrate that the proposed SignCL can significantly reduce the representation density and improve performance across various translation frameworks. Specifically, SignCL achieves a significant improvement in BLEU score for the Sign Language Transformer and GFSLT-VLP on the CSL-Daily dataset by 39\% and 46\%, respectively, without any increase in model parameters. Compared to Sign2GPT, a state-of-the-art method based on large-scale pre-trained vision and language models, SignCL achieves better performance with only 35\% of its parameters. 
We will release our code and model to facilitate further research.", "pdf": "https://openreview.net/pdf/0631b014669e398b1951d82fa6284f992ebcdaae.pdf"} {"title": "Directional Smoothness and Gradient Methods: Convergence and Adaptivity", "url": "https://openreview.net/forum?id=m9WZrEXWl5", "detail_url": "https://openreview.net/forum?id=m9WZrEXWl5", "authors": "Aaron Mishkin,Ahmed Khaled,Yuanhao Wang,Aaron Defazio,Robert M. Gower", "tags": "NIPS 2024,Poster", "abstract": "We develop new sub-optimality bounds for gradient descent (GD) that depend on the conditioning of the objective along the path of optimization, rather than on global, worst-case constants. Key to our proofs is directional smoothness, a measure of gradient variation that we use to develop upper-bounds on the objective. Minimizing these upper-bounds requires solving implicit equations to obtain a sequence of strongly adapted step-sizes; we show that these equations are straightforward to solve for convex quadratics and lead to new guarantees for two classical step-sizes. For general functions, we prove that the Polyak step-size and normalized GD obtain fast, path-dependent rates despite using no knowledge of the directional smoothness. Experiments on logistic regression show our convergence guarantees are tighter than the classical theory based on $L$-smoothness.", "pdf": "https://openreview.net/pdf/f2703711fef33a3a992dd71b1727654c5941f548.pdf"} {"title": "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation", "url": "https://openreview.net/forum?id=e0SQ6wsHjv", "detail_url": "https://openreview.net/forum?id=e0SQ6wsHjv", "authors": "Wangbo Zhao,Jiasheng Tang,Yizeng Han,Yibing Song,Kai Wang,Gao Huang,Fan Wang,Yang You", "tags": "NIPS 2024,Poster", "abstract": "Existing parameter-efficient fine-tuning (PEFT) methods have achieved significant success on vision transformer (ViT) adaptation by improving parameter efficiency. However, enhancing inference efficiency during adaptation remains underexplored. This limits the broader application of pre-trained ViT models, especially when the model is computationally intensive. In this paper, we propose Dynamic Tuning (DyT), a novel approach to improve both parameter and inference efficiency for ViT adaptation. Specifically, besides using the lightweight adapter modules, we propose a token dispatcher to distinguish informative tokens from less important ones, allowing the latter to dynamically skip the original block, thereby reducing the redundant computation during inference. Additionally, we explore multiple design variants to find the best practice of DyT. Finally, inspired by the mixture-of-experts (MoE) mechanism, we introduce an enhanced adapter to further boost the adaptation performance. We validate DyT across various tasks, including image/video recognition and semantic segmentation. 
For instance, DyT achieves superior performance compared to existing PEFT methods while using only 71% of their FLOPs on the VTAB-1K benchmark.", "pdf": "https://openreview.net/pdf/dce2445eb474664115ba48794811ae31c0fee121.pdf"} {"title": "Key-Grid: Unsupervised 3D Keypoints Detection using Grid Heatmap Features", "url": "https://openreview.net/forum?id=4pCu9c8leX", "detail_url": "https://openreview.net/forum?id=4pCu9c8leX", "authors": "Chengkai Hou,Zhengrong Xue,Bingyang Zhou,Jinghan Ke,Lin Shao,Huazhe Xu", "tags": "NIPS 2024,Poster", "abstract": "Detecting 3D keypoints with semantic consistency is widely used in many scenarios such as pose estimation, shape registration and robotics. Currently, most unsupervised 3D keypoint detection methods focus on rigid-body objects. However, when faced with deformable objects, the keypoints they identify do not preserve semantic consistency well. In this paper, we introduce an innovative unsupervised keypoint detector Key-Grid for both rigid-body and deformable objects, which is an autoencoder framework. The encoder predicts keypoints and the decoder utilizes the generated keypoints to reconstruct the objects. Unlike previous work, we leverage the identified keypoint information to form a 3D grid feature heatmap called grid heatmap, which is used in the decoder section. Grid heatmap is a novel concept that represents the latent variables for grid points sampled uniformly in the 3D cubic space, where these variables are the shortest distance between the grid points and the \u201cskeleton\u201d connected by keypoint pairs. Meanwhile, we incorporate the information from each layer of the encoder into the decoder section. We conduct an extensive evaluation of Key-Grid on a list of benchmark datasets. Key-Grid achieves state-of-the-art performance in the semantic consistency and position accuracy of keypoints. Moreover, we demonstrate the robustness of Key-Grid to noise and downsampling. In addition, we achieve SE(3) invariance of keypoints through generalizing Key-Grid to an SE(3)-invariant backbone.", "pdf": "https://openreview.net/pdf/1cac2d0a643843b33dd76da7279a9dbbedad0142.pdf"} {"title": "Attention Temperature Matters in ViT-Based Cross-Domain Few-Shot Learning", "url": "https://openreview.net/forum?id=o8m4RM5mBk", "detail_url": "https://openreview.net/forum?id=o8m4RM5mBk", "authors": "Yixiong Zou,Ran Ma,Yuhua Li,Ruixuan Li", "tags": "NIPS 2024,Poster", "abstract": "Cross-domain few-shot learning (CDFSL) is proposed to transfer knowledge from large-scale source-domain datasets to downstream target-domain datasets with only a few training samples. However, Vision Transformer (ViT), a strong backbone network that achieves many top performances, is still under-explored in the CDFSL task in terms of its transferability against large domain gaps. In this paper, we find an interesting phenomenon of ViT in the CDFSL task: by simply multiplying the attention in ViT blocks by a temperature (even as small as 0), the target-domain performance consistently increases, even though the attention map is downgraded to a uniform map. We delve into this phenomenon for an interpretation. Through experiments, we interpret this phenomenon as a remedy for the ineffective target-domain attention caused by the query-key attention mechanism under large domain gaps. 
Based on this interpretation, we further propose a simple but effective method for the CDFSL task to boost ViT's transferability by resisting the learning of query-key parameters and encouraging that of non-query-key ones. Experiments on four CDFSL datasets validate the rationale of our interpretation and method, showing we can consistently outperform state-of-the-art methods. Our codes are available at https://github.com/Zoilsen/Attn_Temp_CDFSL.", "pdf": "https://openreview.net/pdf/e5e1a8116920c85d89d3cc454c2cbf92f98caa8c.pdf"} {"title": "Decoupled Kullback-Leibler Divergence Loss", "url": "https://openreview.net/forum?id=bnZZedw9CM", "detail_url": "https://openreview.net/forum?id=bnZZedw9CM", "authors": "Jiequan Cui,Zhuotao Tian,Zhisheng Zhong,XIAOJUAN QI,Bei Yu,Hanwang Zhang", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we delve deeper into the Kullback\u2013Leibler (KL) Divergence loss and mathematically prove that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss that consists of 1) a weighted Mean Square Error ($\\mathbf{w}$MSE) loss and 2) a Cross-Entropy loss incorporating soft labels. \nThanks to the decomposed formulation of the DKL loss, we have identified two areas for improvement. \nFirstly, we address the limitation of KL/DKL in scenarios like knowledge distillation by breaking its asymmetric optimization property. This modification ensures that the $\\mathbf{w}$MSE component is always effective during training, providing extra constructive cues.\nSecondly, we introduce class-wise global information into KL/DKL to mitigate bias from individual samples.\nWith these two enhancements, we derive the Improved Kullback\u2013Leibler (IKL) Divergence loss and evaluate its effectiveness by conducting experiments on CIFAR-10/100 and ImageNet datasets, focusing on adversarial training and knowledge distillation tasks. The proposed approach achieves new state-of-the-art adversarial robustness on the public leaderboard --- \\textit{RobustBench} and competitive performance on knowledge distillation, demonstrating its substantial practical merits. Our code is available at https://github.com/jiequancui/DKL.", "pdf": "https://openreview.net/pdf/cfcd980d9b010e1d311001cabb19110fd1f58261.pdf"} {"title": "Large Spatial Model: End-to-end Unposed Images to Semantic 3D", "url": "https://openreview.net/forum?id=ybHPzL7eYT", "detail_url": "https://openreview.net/forum?id=ybHPzL7eYT", "authors": "Zhiwen Fan,Jian Zhang,Wenyan Cong,Peihao Wang,Renjie Li,Kairun Wen,Shijie Zhou,Achuta Kadambi,Zhangyang Wang,Danfei Xu,Boris Ivanovic,Marco Pavone,Yue Wang", "tags": "NIPS 2024,Poster", "abstract": "Reconstructing and understanding 3D structures from a limited number of images is a classical problem in computer vision. Traditional approaches typically decompose this task into multiple subtasks, involving several stages of complex mappings between different data representations. For example, dense reconstruction using Structure-from-Motion (SfM) requires transforming images into key points, optimizing camera parameters, and estimating structures. Following this, accurate sparse reconstructions are necessary for further dense modeling, which is then input into task-specific neural networks. This multi-stage paradigm leads to significant processing times and engineering complexity.\n\nIn this work, we introduce the Large Spatial Model (LSM), which directly processes unposed RGB images into semantic radiance fields. 
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward pass and can synthesize versatile label maps through language interaction at novel views. Built on a general Transformer-based framework, LSM predicts global geometry via pixel-aligned point maps. To improve spatial attribute regression, we adopt local context aggregation with multi-scale fusion, enhancing the accuracy of fine local details. To address the scarcity of labeled 3D semantic data and enable natural language-driven scene manipulation, we incorporate a pre-trained 2D language-based segmentation model into a 3D-consistent semantic feature field. An efficient decoder parameterizes a set of semantic anisotropic Gaussians, allowing supervised end-to-end learning. Comprehensive experiments on various tasks demonstrate that LSM unifies multiple 3D vision tasks directly from unposed images, achieving real-time semantic 3D reconstruction for the first time.", "pdf": "https://openreview.net/pdf/ee1ddc8c08fa974be5694c1cffee72f12da261ad.pdf"} {"title": "Partial observation can induce mechanistic mismatches in data-constrained models of neural dynamics", "url": "https://openreview.net/forum?id=LCEgP7Ir6k", "detail_url": "https://openreview.net/forum?id=LCEgP7Ir6k", "authors": "William Qian,Jacob A Zavatone-Veth,Benjamin Samuel Ruben,Cengiz Pehlevan", "tags": "NIPS 2024,Poster", "abstract": "One of the central goals of neuroscience is to gain a mechanistic understanding of how the dynamics of neural circuits give rise to their observed function. A popular approach towards this end is to train recurrent neural networks (RNNs) to reproduce experimental recordings of neural activity. These trained RNNs are then treated as surrogate models of biological neural circuits, whose properties can be dissected via dynamical systems analysis. How reliable are the mechanistic insights derived from this procedure? While recent advances in population-level recording technologies have allowed simultaneous recording of up to tens of thousands of neurons, this represents only a tiny fraction of most cortical circuits. Here we show that observing only a subset of neurons in a circuit can create mechanistic mismatches between a simulated teacher network and a data-constrained student, even when the two networks have matching single-unit dynamics. In particular, we show that partial observation of models of low-dimensional cortical dynamics based on functionally feedforward or low-rank connectivity can lead to surrogate models with spurious attractor structure. In total, our results illustrate the challenges inherent in accurately uncovering neural mechanisms from single-trial data, and suggest the need for new methods of validating data-constrained models for neural dynamics.", "pdf": "https://openreview.net/pdf/b7593b903bc2d7a2a09cd152c0545ffae687c9e5.pdf"} {"title": "Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization", "url": "https://openreview.net/forum?id=EWcvxXtzNu", "detail_url": "https://openreview.net/forum?id=EWcvxXtzNu", "authors": "Siyi Gu,Minkai Xu,Alexander S Powers,Weili Nie,Tomas Geffner,Karsten Kreis,Jure Leskovec,Arash Vahdat,Stefano Ermon", "tags": "NIPS 2024,Poster", "abstract": "Generating ligand molecules for specific protein targets, known as structure-based drug design, is a fundamental problem in therapeutics development and biological discovery. 
Recently, target-aware generative models, especially diffusion models, have shown great promise in modeling protein-ligand interactions and generating candidate drugs. However, existing models primarily focus on learning the chemical distribution of all drug candidates, which lacks effective steerability on the chemical quality of model generations. In this paper, we propose AliDiff, a novel and general alignment framework that aligns pretrained target diffusion models with preferred functional properties. AliDiff shifts the target-conditioned chemical distribution towards regions with higher binding affinity and structural rationality, specified by user-defined reward functions, via the preference optimization approach. To avoid the overfitting problem in common preference optimization objectives, we further develop an improved Exact Energy Preference Optimization method to yield an exact and efficient alignment of the diffusion models, and provide the closed-form expression for the converged distribution. Empirical studies on the CrossDocked2020 benchmark show that AliDiff can generate molecules with state-of-the-art binding energies with up to -7.07 Avg. Vina Score, while maintaining strong molecular properties. Code is available at https://github.com/MinkaiXu/AliDiff.", "pdf": "https://openreview.net/pdf/9a2effadc466c9ca9a6cade03965aa021d233929.pdf"} {"title": "Stepwise Alignment for Constrained Language Model Policy Optimization", "url": "https://openreview.net/forum?id=VrVx83BkQX", "detail_url": "https://openreview.net/forum?id=VrVx83BkQX", "authors": "Akifumi Wachi,Thien Q. Tran,Rei Sato,Takumi Tanabe,Youhei Akimoto", "tags": "NIPS 2024,Poster", "abstract": "Safety and trustworthiness are indispensable requirements for real-world applications of AI systems using large language models (LLMs). This paper formulates human value alignment as an optimization problem of the language model policy to maximize reward under a safety constraint, and then proposes an algorithm, Stepwise Alignment for Constrained Policy Optimization (SACPO). One key idea behind SACPO, supported by theory, is that the optimal policy incorporating reward and safety can be directly obtained from a reward-aligned policy. Building on this key idea, SACPO aligns LLMs step-wise with each metric while leveraging simple yet powerful alignment algorithms such as direct preference optimization (DPO). SACPO offers several advantages, including simplicity, stability, computational efficiency, and flexibility of algorithms and datasets. Under mild assumptions, our theoretical analysis provides the upper bounds on optimality and safety constraint violation. Our experimental results show that SACPO can fine-tune Alpaca-7B better than the state-of-the-art method in terms of both helpfulness and harmlessness.", "pdf": "https://openreview.net/pdf/bd370f67382a3c77430b240964621b278eb1fed8.pdf"} {"title": "Towards Flexible Visual Relationship Segmentation", "url": "https://openreview.net/forum?id=kJkp2ECJT7", "detail_url": "https://openreview.net/forum?id=kJkp2ECJT7", "authors": "Fangrui Zhu,Jianwei Yang,Huaizu Jiang", "tags": "NIPS 2024,Poster", "abstract": "Visual relationship understanding has been studied separately in human-object interaction (HOI) detection, scene graph generation (SGG), and referring relationships (RR) tasks. 
\nGiven the complexity and interconnectedness of these tasks, it is crucial to have a flexible framework that can effectively address these tasks in a cohesive manner.\nIn this work, we propose FleVRS, a single model that seamlessly integrates the above three aspects in standard and promptable visual relationship segmentation, and further possesses the capability for open-vocabulary segmentation to adapt to novel scenarios. \nFleVRS leverages the synergy between text and image modalities \nto ground various types of relationships from images, and uses textual features from vision-language models to support visual conceptual understanding.\nEmpirical validation across various datasets demonstrates that our framework outperforms existing models in standard, promptable, and open-vocabulary tasks, e.g., +1.9 $mAP$ on HICO-DET, +11.4 $Acc$ on VRD, +4.7 $mAP$ on unseen HICO-DET.\nOur FleVRS represents a significant step towards a more intuitive, comprehensive, and scalable understanding of visual relationships.", "pdf": "https://openreview.net/pdf/b7afe6940b58cb99a3ef4a9c95a315a93cb6550f.pdf"} {"title": "Implicit Optimization Bias of Next-token Prediction in Linear Models", "url": "https://openreview.net/forum?id=xSziO6gQgG", "detail_url": "https://openreview.net/forum?id=xSziO6gQgG", "authors": "Christos Thrampoulidis", "tags": "NIPS 2024,Poster", "abstract": "We initiate an investigation into the optimization properties of next-token prediction (NTP), the dominant training paradigm for modern language models. Specifically, we study the structural properties of the solutions selected by gradient-based optimizers among the many possible minimizers of the NTP objective. By framing NTP as cross-entropy minimization across \emph{distinct} contexts, each tied with a \emph{sparse} conditional probability distribution across a finite vocabulary of tokens, we introduce ``NTP-separability conditions'' that enable reaching the data-entropy lower bound. With this setup, and focusing on linear models with fixed context embeddings, we characterize the optimization bias of gradient descent (GD): Within the data subspace defined by the sparsity patterns of distinct contexts, GD selects parameters that equate the logits' differences of in-support tokens to their log-odds. In the orthogonal subspace, the GD parameters diverge in norm and select the direction that maximizes a margin specific to NTP. These findings extend previous research on implicit bias in one-hot classification to the NTP setting, highlighting key differences and prompting further research into the optimization and generalization properties of NTP, irrespective of the specific architecture used to generate the context embeddings.", "pdf": "https://openreview.net/pdf/174e95ac9c3ffa8d220ddbe8561c2d8a3a48c25e.pdf"} {"title": "Proximal Causal Inference With Text Data", "url": "https://openreview.net/forum?id=L4RwA0qyUd", "detail_url": "https://openreview.net/forum?id=L4RwA0qyUd", "authors": "Jacob M. Chen,Rohit Bhattacharya,Katherine A. Keith", "tags": "NIPS 2024,Poster", "abstract": "Recent text-based causal methods attempt to mitigate confounding bias by estimating proxies of confounding variables that are partially or imperfectly measured from unstructured text data. These approaches, however, assume analysts have supervised labels of the confounders given text for a subset of instances, a constraint that is sometimes infeasible due to data privacy or annotation costs. 
In this work, we address settings in which an important confounding variable is completely unobserved. We propose a new causal inference method that uses two instances of pre-treatment text data, infers two proxies using two zero-shot models on the separate instances, and applies these proxies in the proximal g-formula. We prove, under certain assumptions about the instances of text and accuracy of the zero-shot predictions, that our method of inferring text-based proxies satisfies identification conditions of the proximal g-formula while other seemingly reasonable proposals do not. To address untestable assumptions associated with our method and the proximal g-formula, we further propose an odds ratio falsification heuristic that flags when to proceed with downstream effect estimation using the inferred proxies. We evaluate our method in synthetic and semi-synthetic settings---the latter with real-world clinical notes from MIMIC-III and open large language models for zero-shot prediction---and find that our method produces estimates with low bias. We believe that this text-based design of proxies allows for the use of proximal causal inference in a wider range of scenarios, particularly those for which obtaining suitable proxies from structured data is difficult.", "pdf": "https://openreview.net/pdf/0b66e866067ce02be0fd5a73f12ff0ac335a4cdf.pdf"} {"title": "DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering", "url": "https://openreview.net/forum?id=mY0ZnS2s9u", "detail_url": "https://openreview.net/forum?id=mY0ZnS2s9u", "authors": "Zhongpai Gao,Benjamin Planche,Meng Zheng,Xiao Chen,Terrence Chen,Ziyan Wu", "tags": "NIPS 2024,Poster", "abstract": "Digitally reconstructed radiographs (DRRs) are simulated 2D X-ray images generated from 3D CT volumes, widely used in preoperative settings but limited in intraoperative applications due to computational bottlenecks. Physics-based Monte Carlo simulations provide accurate representations but are extremely computationally intensive. Analytical DRR renderers are much more efficient, but at the price of ignoring anisotropic X-ray image formation phenomena such as Compton scattering. We propose a novel approach that balances realistic physics-inspired X-ray simulation with efficient, differentiable DRR generation using 3D Gaussian splatting (3DGS). Our direction-disentangled 3DGS (DDGS) method decomposes the radiosity contribution into isotropic and direction-dependent components, and is able to approximate complex anisotropic interactions without costly runtime simulations. Additionally, we adapt the 3DGS initialization to account for tomography data properties, enhancing accuracy and efficiency. Our method outperforms state-of-the-art techniques in image accuracy and inference speed, demonstrating its potential for intraoperative applications and inverse problems like pose registration.", "pdf": "https://openreview.net/pdf/0c564dbb6edaaa1d191947e75b2e6803793b4e5d.pdf"} {"title": "Nuclear Norm Regularization for Deep Learning", "url": "https://openreview.net/forum?id=eddHTvb5eM", "detail_url": "https://openreview.net/forum?id=eddHTvb5eM", "authors": "Christopher Scarvelis,Justin Solomon", "tags": "NIPS 2024,Poster", "abstract": "Penalizing the nuclear norm of a function's Jacobian encourages it to locally behave like a low-rank linear map. Such functions vary locally along only a handful of directions, making the Jacobian nuclear norm a natural regularizer for machine learning problems. 
However, this regularizer is intractable for high-dimensional problems, as it requires computing a large Jacobian matrix and taking its SVD. We show how to efficiently penalize the Jacobian nuclear norm using techniques tailor-made for deep learning. We prove that for functions parametrized as compositions $f = g \\circ h$, one may equivalently penalize the average squared Frobenius norm of $Jg$ and $Jh$. We then propose a denoising-style approximation that avoids the Jacobian computations altogether. Our method is simple, efficient, and accurate, enabling Jacobian nuclear norm regularization to scale to high-dimensional deep learning problems. We complement our theory with an empirical study of our regularizer's performance and investigate applications to denoising and representation learning.", "pdf": "https://openreview.net/pdf/00f8fe35f069869002a6276f610775ef9c1e8c5c.pdf"} {"title": "ECMamba: Consolidating Selective State Space Model with Retinex Guidance for Efficient Multiple Exposure Correction", "url": "https://openreview.net/forum?id=mZsvm58FPG", "detail_url": "https://openreview.net/forum?id=mZsvm58FPG", "authors": "Wei Dong,Han Zhou,Yulun Zhang,Xiaohong Liu,Jun Chen", "tags": "NIPS 2024,Poster", "abstract": "Exposure Correction (EC) aims to recover proper exposure conditions for images captured under over-exposure or under-exposure scenarios. While existing deep learning models have shown promising results, few have fully embedded Retinex theory into their architecture, highlighting a gap in current methodologies. Additionally, the balance between high performance and efficiency remains an under-explored problem for the exposure correction task. Inspired by Mamba, which demonstrates powerful and highly efficient sequence modeling, we introduce a novel framework based on \\textbf{Mamba} for \\textbf{E}xposure \\textbf{C}orrection (\\textbf{ECMamba}) with dual pathways, each dedicated to the restoration of reflectance and illumination map, respectively. Specifically, we first derive the Retinex theory and train a Retinex estimator capable of mapping inputs into two intermediary spaces, each approximating the target reflectance and illumination map, respectively. This setup facilitates the refined restoration process of the subsequent \\textbf{E}xposure \\textbf{C}orrection \\textbf{M}amba \\textbf{M}odule (\\textbf{ECMM}). Moreover, we develop a novel \\textbf{2D S}elective \\textbf{S}tate-space layer guided by \\textbf{Retinex} information (\\textbf{Retinex-SS2D}) as the core operator of \\textbf{ECMM}. This architecture incorporates an innovative 2D scanning strategy based on deformable feature aggregation, thereby enhancing both efficiency and effectiveness. Extensive experiment results and comprehensive ablation studies demonstrate the outstanding performance and the importance of each component of our proposed ECMamba. Code is available at \\url{https://github.com/LowlevelAI/ECMamba}.", "pdf": "https://openreview.net/pdf/00620fe70cf60fc001a8268234ac200d0f3daf61.pdf"} {"title": "Interpreting and Analysing CLIP's Zero-Shot Image Classification via Mutual Knowledge", "url": "https://openreview.net/forum?id=n01yLUy7Mj", "detail_url": "https://openreview.net/forum?id=n01yLUy7Mj", "authors": "Fawaz Sammani,Nikos Deligiannis", "tags": "NIPS 2024,Poster", "abstract": "Contrastive Language-Image Pretraining (CLIP) performs zero-shot image classification by mapping images and textual class representations into a shared embedding space, then retrieving the class closest to the image. 
This work provides a new approach for interpreting CLIP models for image classification through the lens of mutual knowledge between the two modalities. Specifically, we ask: what concepts do both vision and language CLIP encoders learn in common that influence the joint embedding space, causing points to be closer or further apart? We answer this question via an approach of textual concept-based explanations, showing their effectiveness, and perform an analysis encompassing a pool of 13 CLIP models varying in architecture, size and pretraining datasets. We explore those different aspects in relation to mutual knowledge, and analyze zero-shot predictions. Our approach demonstrates an effective and human-friendly way of understanding zero-shot classification decisions with CLIP.", "pdf": "https://openreview.net/pdf/83d78f63b19aed3bdfb9c16dfae7518b0cd4772c.pdf"} {"title": "Sm: enhanced localization in Multiple Instance Learning for medical imaging classification", "url": "https://openreview.net/forum?id=iNS3SC949v", "detail_url": "https://openreview.net/forum?id=iNS3SC949v", "authors": "Francisco M Castro-Mac\u00edas,Pablo Morales-Alvarez,Yunan Wu,Rafael Molina,Aggelos Katsaggelos", "tags": "NIPS 2024,Poster", "abstract": "Multiple Instance Learning (MIL) is widely used in medical imaging classification to reduce the labeling effort. \nWhile only bag labels are available for training, one typically seeks predictions at both bag and instance levels (classification and localization tasks, respectively). Early MIL methods treated the instances in a bag independently. Recent methods account for global and local dependencies among instances. Although they have yielded excellent results in classification, their performance in terms of localization is comparatively limited. We argue that these models have been designed to target the classification task, while implications at the instance level have not been deeply investigated. Motivated by a simple observation -- that neighboring instances are likely to have the same label -- we propose a novel, principled, and flexible mechanism to model local dependencies. It can be used alone or combined with any mechanism to model global dependencies (e.g., transformers). A thorough empirical validation shows that our module leads to state-of-the-art performance in localization while being competitive or superior in classification. Our code is at https://github.com/Franblueee/SmMIL.", "pdf": "https://openreview.net/pdf/af0974fa9e30a48f6f22f8f3a77de6f98f8000a6.pdf"} {"title": "Can neural operators always be continuously discretized?", "url": "https://openreview.net/forum?id=cyJxphdw3B", "detail_url": "https://openreview.net/forum?id=cyJxphdw3B", "authors": "Takashi Furuya,Michael Anthony Puthawala,Matti Lassas,Maarten V. de Hoop", "tags": "NIPS 2024,Poster", "abstract": "In this work we consider the problem of discretization of neural operators in a general setting. Using category theory, we give a no-go theorem that shows that diffeomorphisms between Hilbert spaces may not admit any continuous approximations by diffeomorphisms on finite spaces, even if the discretization is non-linear. This shows how infinite-dimensional Hilbert spaces and finite-dimensional vector spaces fundamentally differ. A key take-away is that to obtain discretization invariance, considerable effort is needed to ensure that finite-dimensional approximations of neural operator converge not only as sequences of functions, but that their representations converge in a suitable sense as well. 
With this perspective, we give several positive results. We first show that strongly monotone diffeomorphism operators always admit finite-dimensional strongly monotone diffeomorphisms. Next, we show that bilipschitz neural operators may always be written via the repeated alternating composition of strongly monotone neural operators and invertible linear maps. We also show that such operators may be inverted locally via iteration provided that such an inverse exists. Finally, we show how our framework may be used `out of the box' to prove quantitative approximation results for discretization of neural operators.", "pdf": "https://openreview.net/pdf/2630790a4cf440f8032004da1dbce0c9778ff0c7.pdf"} {"title": "G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training", "url": "https://openreview.net/forum?id=zsXbGJJ7Oo", "detail_url": "https://openreview.net/forum?id=zsXbGJJ7Oo", "authors": "Che Liu,Cheng Ouyang,Sibo Cheng,Anand Shah,Wenjia Bai,Rossella Arcucci", "tags": "NIPS 2024,Poster", "abstract": "Medical imaging tasks require an understanding of subtle and localized visual features due to the inherently detailed and area-specific nature of pathological patterns, which are crucial for clinical diagnosis. Although recent advances in medical vision-language pre-training (VLP) enable models to learn clinically relevant visual features by leveraging both medical images and their associated radiology reports, current medical VLP methods primarily focus on aligning images with entire reports. This focus hinders the learning of dense (pixel-level) visual features and is suboptimal for dense prediction tasks (e.g., medical image segmentation).\n\nTo address this challenge, we propose a novel medical VLP framework, named **Global to Dense level representation learning (G2D)**, which aims to learn global and dense visual features simultaneously using only image-text pairs without extra annotations. In particular, G2D designs a **Pseudo Segmentation (PS)** task, which enables the model to learn dense visual features during VLP. Notably, generating PS masks can be performed on the fly during VLP, which does not incur extra trainable parameters. With this simple yet effective idea, G2D achieves superior performance across 5 medical imaging tasks and 25 diseases. Particularly, in the segmentation task which requires dense visual features, **G2D surpasses existing models even with just 1% of the training data for finetuning, compared to 100% used by other models**. The code can be found in https://github.com/cheliu-computation/G2D-NeurIPS24/tree/main.", "pdf": "https://openreview.net/pdf/266314e449f23eb30c332e9f0688da33556f643c.pdf"} {"title": "BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning", "url": "https://openreview.net/forum?id=4i9xuPEu9w", "detail_url": "https://openreview.net/forum?id=4i9xuPEu9w", "authors": "Haohong Lin,Wenhao Ding,Jian Chen,Laixi Shi,Jiacheng Zhu,Bo Li,Ding Zhao", "tags": "NIPS 2024,Poster", "abstract": "Offline model-based reinforcement learning (MBRL) enhances data efficiency by utilizing pre-collected datasets to learn models and policies, especially in scenarios where exploration is costly or infeasible. Nevertheless, its performance often suffers from the objective mismatch between model and policy learning, resulting in inferior performance despite accurate model predictions. 
This paper first identifies that the primary source of this mismatch is the underlying confounders present in offline data for MBRL. Subsequently, we introduce **B**ilin**E**ar **CAUS**al r**E**presentation (BECAUSE), an algorithm to capture causal representation for both states and actions to reduce the influence of the distribution shift, thus mitigating the objective mismatch problem. Comprehensive evaluations on 18 tasks that vary in data quality and environment context demonstrate the superior performance of BECAUSE over existing offline RL algorithms. We show the generalizability and robustness of BECAUSE under fewer samples or larger numbers of confounders. Additionally, we offer theoretical analysis of BECAUSE to prove its error bound and sample efficiency when integrating causal representation into offline MBRL. See more details in our project page: [https://sites.google.com/view/be-cause](https://sites.google.com/view/be-cause).", "pdf": "https://openreview.net/pdf/8017d1f697426cf78f3bf09b430193873845fb76.pdf"} {"title": "Meta-Learning Universal Priors Using Non-Injective Change of Variables", "url": "https://openreview.net/forum?id=E8b4yOLGZ5", "detail_url": "https://openreview.net/forum?id=E8b4yOLGZ5", "authors": "Yilang Zhang,Alireza Sadeghi,Georgios B. Giannakis", "tags": "NIPS 2024,Poster", "abstract": "Meta-learning empowers data-hungry deep neural networks to rapidly learn from merely a few samples, which is especially appealing to tasks with small datasets. Critical in this context is the *prior knowledge* accumulated from related tasks. Existing meta-learning approaches typically rely on preselected priors, such as a Gaussian probability density function (pdf). The limited expressiveness of such priors, however, hinders the performance of the trained model when dealing with tasks having exceedingly scarce data. Targeting improved expressiveness, this contribution introduces a *data-driven* prior that optimally fits the provided tasks using a novel non-injective change-of-variable (NCoV) model. Unlike preselected prior pdfs with fixed shapes, the advocated NCoV model can effectively approximate a considerably wide range of pdfs. Moreover, compared to conventional change-of-variable models, the introduced NCoV exhibits augmented expressiveness for pdf modeling, especially in high-dimensional spaces. Theoretical analysis underscores the appealing universal approximation capacity of the NCoV model. Numerical experiments conducted on three few-shot learning datasets validate the superiority of data-driven priors over the prespecified ones, showcasing their pronounced effectiveness when dealing with extremely limited data resources.", "pdf": "https://openreview.net/pdf/1c54062ea9ae7e141df075c7e9d2154a2540ea32.pdf"} {"title": "Geometric Trajectory Diffusion Models", "url": "https://openreview.net/forum?id=OYmms5Mv9H", "detail_url": "https://openreview.net/forum?id=OYmms5Mv9H", "authors": "Jiaqi Han,Minkai Xu,Aaron Lou,Haotian Ye,Stefano Ermon", "tags": "NIPS 2024,Poster", "abstract": "Generative models have shown great promise in generating 3D geometric systems, which is a fundamental problem in many natural science domains such as molecule and protein design. However, existing approaches only operate on static structures, neglecting the fact that physical systems are always dynamic in nature. In this work, we propose geometric trajectory diffusion models (GeoTDM), the first diffusion model for modeling the temporal distribution of 3D geometric trajectories. 
Modeling such a distribution is challenging as it requires capturing both the complex spatial interactions with physical symmetries and temporal correspondence encapsulated in the dynamics. We theoretically justify that diffusion models with equivariant temporal kernels can lead to density with desired symmetry, and develop a novel transition kernel leveraging SE(3)-equivariant spatial convolution and temporal attention. Furthermore, to induce an expressive trajectory distribution for conditional generation, we introduce a generalized learnable geometric prior into the forward diffusion process to enhance temporal conditioning. We conduct extensive experiments on both unconditional and conditional generation in various scenarios, including physical simulation, molecular dynamics, and pedestrian motion. Empirical results on a wide suite of metrics demonstrate that GeoTDM can generate realistic geometric trajectories with significantly higher quality.", "pdf": "https://openreview.net/pdf/aeef0c906f7c5b4a7f45bc03873108514d0fd60b.pdf"} {"title": "Public-data Assisted Private Stochastic Optimization: Power and Limitations", "url": "https://openreview.net/forum?id=j14wStqZni", "detail_url": "https://openreview.net/forum?id=j14wStqZni", "authors": "Enayat Ullah,Michael Menart,Raef Bassily,Crist\u00f3bal A Guzm\u00e1n,Raman Arora", "tags": "NIPS 2024,Poster", "abstract": "We study the limits and capability of public-data assisted differentially private (PA-DP) algorithms. Specifically, we focus on the problem of stochastic convex optimization (SCO) with either labeled or unlabeled public data. For complete/labeled public data, we show that any $(\epsilon,\delta)$-PA-DP algorithm has excess risk $\tilde{\Omega}\big(\min(\frac{1}{\sqrt{n_{\text{pub}}}},\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\epsilon} ) \big)$, where $d$ is the dimension, ${n_{\text{pub}}}$ is the number of public samples, ${n_{\text{priv}}}$ is the number of private samples, and $n={n_{\text{pub}}}+{n_{\text{priv}}}$. These lower bounds are established via our new lower bounds for PA-DP mean estimation, which are of a similar form. Up to constant factors, these lower bounds show that the simple strategy of either treating all data as private or discarding the private data is optimal. We also study PA-DP supervised learning with \textit{unlabeled} public samples. In contrast to our previous result, we here show novel methods for leveraging public data in private supervised learning. For generalized linear models (GLM) with unlabeled public data, we show an efficient algorithm which, given $\tilde{O}({n_{\text{priv}}}\epsilon)$ unlabeled public samples, achieves the dimension-independent rate $\tilde{O}\big(\frac{1}{\sqrt{{n_{\text{priv}}}}} + \frac{1}{\sqrt{{n_{\text{priv}}}\epsilon}}\big)$. We develop new lower bounds for this setting which show that this rate cannot be improved with more public samples, and that any fewer public samples leads to a worse rate. 
Finally, we provide extensions of this result to general hypothesis classes with finite \\textit{fat-shattering dimension}, with applications to neural networks and non-Euclidean geometries.", "pdf": "https://openreview.net/pdf/23ab816f42b6ac8783e7611fc054892ea80e8a32.pdf"} {"title": "Large Scale Transfer Learning for Tabular Data via Language Modeling", "url": "https://openreview.net/forum?id=WH5blx5tZ1", "detail_url": "https://openreview.net/forum?id=WH5blx5tZ1", "authors": "Joshua P Gardner,Juan Carlos Perdomo,Ludwig Schmidt", "tags": "NIPS 2024,Poster", "abstract": "Tabular data \u2013 structured, heterogeneous, spreadsheet-style data with rows and columns \u2013 is widely used in practice across many domains. However, while recent foundation models have reduced the need for developing task-specific datasets and predictors in domains such as language modeling and computer vision, this transfer learning paradigm has not had a similar impact in the tabular domain. In this work, we seek to narrow this gap and present TABULA-8B, a language model for tabular prediction. We define a process for extracting a large, high-quality training dataset from the TabLib corpus, proposing methods for tabular data filtering and quality control. Using the resulting dataset, which comprises over 2.1B rows from 4.2M unique tables, we fine-tune a Llama 3-8B large language model (LLM) for tabular data prediction (classification and binned regression) using a novel packing and attention scheme for tabular prediction. Through evaluation across a test suite of 329 datasets, we find that TABULA-8B has zero-shot accuracy on unseen tables that is over 15 percentage points (pp) higher than random guessing, a feat that is not possible with existing state-of-the-art tabular prediction models (e.g. XGBoost, TabPFN). In the few-shot setting (1-32 shots), without any fine-tuning on the target datasets, TABULA-8B is 5-15 pp more accurate than XGBoost and TabPFN models that are explicitly trained on an equal amount of data, or even up to 16\u00d7 more. We release our model, code, and data along with the publication of this paper.", "pdf": "https://openreview.net/pdf/43927250681c0a0037ece3a72e51d375046e2152.pdf"} {"title": "Least Squares Regression Can Exhibit Under-Parameterized Double Descent", "url": "https://openreview.net/forum?id=gzh9nTUtsY", "detail_url": "https://openreview.net/forum?id=gzh9nTUtsY", "authors": "Xinyue Li,Rishi Sonthalia", "tags": "NIPS 2024,Poster", "abstract": "The relationship between the number of training data points, the number of parameters, and the generalization capabilities of models has been widely studied. Previous work has shown that double descent can occur in the over-parameterized regime and that the standard bias-variance trade-off holds in the under-parameterized regime. These works provide multiple reasons for the existence of the peak. We postulate that the location of the peak depends on the technical properties of both the spectrum and the eigenvectors of the sample covariance.
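A minimal sketch (synthetic data, not the paper's construction) of the risk curves at stake here: sweeping the feature count of minimum-norm least squares past the sample size traces the classical interpolation peak near d = n; the paper's claim is that spectral structure can also place peaks in the under-parameterized range d < n.

```python
# Test risk of minimum-norm least squares as the number of features d grows.
# Everything below is invented illustration data, not the paper's examples.
import numpy as np

rng = np.random.default_rng(0)
n, d_max, n_test = 50, 150, 2000
beta = rng.normal(size=d_max) / np.sqrt(d_max)        # ground-truth coefficients
X, Xt = rng.normal(size=(n, d_max)), rng.normal(size=(n_test, d_max))
y = X @ beta + 0.5 * rng.normal(size=n)
yt = Xt @ beta + 0.5 * rng.normal(size=n_test)

for d in range(5, d_max + 1, 5):
    # lstsq returns the minimum-norm solution when the system is underdetermined
    w, *_ = np.linalg.lstsq(X[:, :d], y, rcond=None)
    risk = np.mean((Xt[:, :d] @ w - yt) ** 2)
    print(f"d={d:4d}  test MSE={risk:8.3f}")
```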
We present two simple examples that provably exhibit double descent in the under-parameterized regime, where the peak does not seem to arise for the reasons provided in prior work.", "pdf": "https://openreview.net/pdf/3e1ebe9a5e28912092431afd0af17f13095982ac.pdf"} {"title": "Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection", "url": "https://openreview.net/forum?id=XErWgdxaFU", "detail_url": "https://openreview.net/forum?id=XErWgdxaFU", "authors": "Saehyung Lee,Jisoo Mok,Sangha Park,Yongho Shin,Dahuin Jung,Sungroh Yoon", "tags": "NIPS 2024,Poster", "abstract": "In our study, we explore methods for detecting unwanted content lurking in visual datasets. We provide a theoretical analysis demonstrating that a model capable of successfully partitioning visual data can be obtained using only textual data. Based on the analysis, we propose Hassle-Free Textual Training (HFTT), a streamlined method capable of acquiring detectors for unwanted visual content, using only textual data in conjunction with pre-trained vision-language models. HFTT features an innovative objective function that significantly reduces the necessity for human involvement in data annotation. Furthermore, HFTT employs a clever textual data synthesis method, effectively emulating the integration of unknown visual data distribution into the training process at no extra cost. The unique characteristics of HFTT extend its utility beyond traditional out-of-distribution detection, making it applicable to tasks that address more abstract concepts. We complement our analyses with experiments in hateful image detection and out-of-distribution detection. Our codes are available at https://github.com/HFTT-anonymous/HFTT.", "pdf": "https://openreview.net/pdf/7f2d755399ac446b553342aaa3457233169cc01b.pdf"} {"title": "Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning", "url": "https://openreview.net/forum?id=W0okTgsPvM", "detail_url": "https://openreview.net/forum?id=W0okTgsPvM", "authors": "Brandon Huang,Chancharik Mitra,Leonid Karlinsky,Assaf Arbelle,Trevor Darrell,Roei Herzig", "tags": "NIPS 2024,Poster", "abstract": "The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning suggests that in-context learning (ICL) with many examples can be promising for learning new tasks. However, this many-shot multimodal ICL setting has one crucial problem: it is fundamentally limited by the model's context length set at pretraining. The problem is especially prominent in the multimodal domain, which processes both text and images, requiring additional tokens. This motivates the need for a multimodal method to compress many shots into fewer tokens without finetuning. In this work, we enable LMMs to perform multimodal, many-shot in-context learning by leveraging Multimodal Task Vectors (MTV)---compact implicit representations of in-context examples compressed in the model's attention heads. Specifically, we first demonstrate the existence of such MTV in LMMs and then leverage these extracted MTV to enable many-shot in-context learning for various vision-and-language tasks. Our experiments suggest that MTV can scale in performance with the number of compressed shots and generalize to similar out-of-domain tasks without additional context length for inference.
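The abstract does not spell out how MTV is extracted; the toy below only illustrates the general task-vector idea it builds on, with an invented model and invented shapes: average a layer's hidden activations over many-shot prompts, then inject that mean on a short prompt via a forward hook instead of spending context length on the examples.

```python
# Toy task-vector sketch on a randomly initialized encoder (not MTV itself).
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=2)

many_shot = torch.randn(8, 128, 32)   # 8 long prompts full of in-context examples
short = torch.randn(1, 4, 32)         # a short query prompt with no examples

acts = []
h = model.layers[0].register_forward_hook(lambda m, i, o: acts.append(o))
model(many_shot)
h.remove()
task_vector = acts[0].detach().mean(dim=(0, 1))   # compact summary of the shots

# At inference, add the stored vector to the first layer's output.
h = model.layers[0].register_forward_hook(lambda m, i, o: o + task_vector)
out = model(short)
h.remove()
print(out.shape)  # torch.Size([1, 4, 32])
```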
Code: https://github.com/Brandon3964/MultiModal-Task-Vector", "pdf": "https://openreview.net/pdf/8d2de8f909b1d01bd9a36bd7655f9dfb3cf81686.pdf"} {"title": "Multi-Object Hallucination in Vision Language Models", "url": "https://openreview.net/forum?id=KNrwaFEi1u", "detail_url": "https://openreview.net/forum?id=KNrwaFEi1u", "authors": "Xuweiyi Chen,Ziqiao Ma,Xuejun Zhang,Sihan Xu,Shengyi Qian,Jianing Yang,David Fouhey,Joyce Chai", "tags": "NIPS 2024,Poster", "abstract": "Large vision language models (LVLMs) often suffer from object hallucination, producing objects not present in the given images. \nWhile current benchmarks for object hallucination primarily concentrate on the presence of a single object class rather than individual entities, this work systematically investigates multi-object hallucination, examining how models misperceive (e.g., invent nonexistent objects or become distracted) when tasked with focusing on multiple objects simultaneously.\nWe introduce Recognition-based Object Probing Evaluation (ROPE), an automated evaluation protocol that considers the distribution of object classes within a single image during testing and uses visual referring prompts to eliminate ambiguity. \nWith comprehensive empirical studies and analysis of potential factors leading to multi-object hallucination, we found that (1) LVLMs suffer more hallucinations when focusing on multiple objects compared to a single object. \n(2) The tested object class distribution affects hallucination behaviors, indicating that LVLMs may follow shortcuts and spurious correlations.\n(3) Hallucinatory behaviors are influenced by data-specific factors, salience and frequency, and model intrinsic behaviors.\nWe hope to enable LVLMs to recognize and reason about multiple objects that often occur in realistic visual scenes, provide insights, and quantify our progress towards mitigating the issues.", "pdf": "https://openreview.net/pdf/72066c2d52b85f4dcead570a97d065a2eba2509c.pdf"} {"title": "MeMo: Meaningful, Modular Controllers via Noise Injection", "url": "https://openreview.net/forum?id=5DJBBACqim", "detail_url": "https://openreview.net/forum?id=5DJBBACqim", "authors": "Megan Tjandrasuwita,Jie Xu,Armando Solar-Lezama,Wojciech Matusik", "tags": "NIPS 2024,Poster", "abstract": "Robots are often built from standardized assemblies (e.g., arms, legs, or fingers), but each robot must be trained from scratch to control all the actuators of all the parts together. In this paper, we demonstrate a new approach that takes a single robot and its controller as input and produces a set of modular controllers for each of these assemblies such that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers. We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers. Specifically, we propose a novel modularity objective to learn an appropriate division of labor among the modules. We demonstrate that this objective can be optimized simultaneously with standard behavior cloning loss via noise injection. We benchmark our framework in locomotion and grasping environments on simple to complex robot morphology transfer. We also show that the modules help in task transfer.
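A hedged toy of the noise-injection recipe sketched above (module shapes and the 0.1 noise scale are invented): behavior cloning through a module boundary, with noise injected at the interface so the downstream module learns a robust division of labor rather than relying on brittle details of the upstream signal.

```python
# Behavior cloning with noise injected at a module interface (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
boss = nn.Sequential(nn.Linear(16, 32), nn.Tanh())   # robot-level controller
module = nn.Sequential(nn.Linear(32, 8))             # reusable assembly module
opt = torch.optim.Adam([*boss.parameters(), *module.parameters()], lr=1e-3)

obs = torch.randn(256, 16)
expert_actions = torch.randn(256, 8)                 # stand-in expert targets

for step in range(200):
    signal = boss(obs)
    signal = signal + 0.1 * torch.randn_like(signal)  # noise injection at interface
    loss = nn.functional.mse_loss(module(signal), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```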
On both structure and task transfer, MeMo achieves improved training efficiency over graph neural network and Transformer baselines.", "pdf": "https://openreview.net/pdf/424fa677d237a97f364c1d23b7cb42da16a83ffc.pdf"} {"title": "HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting", "url": "https://openreview.net/forum?id=HkMCCFrYkT", "detail_url": "https://openreview.net/forum?id=HkMCCFrYkT", "authors": "Yuanhao Cai,Zihao Xiao,Yixun Liang,Minghan Qin,Yulun Zhang,Xiaokang Yang,Yaoyao Liu,Alan Yuille", "tags": "NIPS 2024,Poster", "abstract": "High dynamic range (HDR) novel view synthesis (NVS) aims to create photorealistic images from novel viewpoints using HDR imaging techniques. The rendered HDR images capture a wider range of brightness levels containing more details of the scene than normal low dynamic range (LDR) images. Existing HDR NVS methods are mainly based on NeRF. They suffer from long training time and slow inference speed. In this paper, we propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS), which can efficiently render novel HDR views and reconstruct LDR images with a user-input exposure time. Specifically, we design a Dual Dynamic Range (DDR) Gaussian point cloud model that uses spherical harmonics to fit HDR color and employs an MLP-based tone-mapper to render LDR color. The HDR and LDR colors are then fed into two Parallel Differentiable Rasterization (PDR) processes to reconstruct HDR and LDR views. To establish the data foundation for the research of 3D Gaussian splatting-based methods in HDR NVS, we recalibrate the camera parameters and compute the initial positions for Gaussian point clouds. Comprehensive experiments show that HDR-GS surpasses the state-of-the-art NeRF-based method by 3.84 and 1.91 dB on LDR and HDR NVS while enjoying 1000$\\times$ faster inference speed and costing only 6.3\\% of the training time. Code and data are released at https://github.com/caiyuanhao1998/HDR-GS", "pdf": "https://openreview.net/pdf/23ad397d822f473b9ba689e7bbf39fe5c14c336a.pdf"} {"title": "Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance", "url": "https://openreview.net/forum?id=ZulWEWQOp9", "detail_url": "https://openreview.net/forum?id=ZulWEWQOp9", "authors": "Kuan Heng Lin,Sicheng Mo,Ben Klingher,Fangzhou Mu,Bolei Zhou", "tags": "NIPS 2024,Poster", "abstract": "Recent controllable generation approaches such as FreeControl and Diffusion Self-Guidance bring fine-grained spatial and appearance control to text-to-image (T2I) diffusion models without training auxiliary modules. However, these methods optimize the latent embedding for each type of score function with longer diffusion steps, making the generation process time-consuming and limiting their flexibility and use. This work presents *Ctrl-X*, a simple framework for T2I diffusion controlling structure and appearance without additional training or guidance. Ctrl-X designs feed-forward structure control to enable structure alignment with a structure image and semantic-aware appearance transfer to facilitate the appearance transfer from a user-input image. Extensive qualitative and quantitative experiments illustrate the superior performance of Ctrl-X on various condition inputs and model checkpoints.
In particular, Ctrl-X supports novel structure and appearance control with arbitrary condition images of any modality, exhibits superior image quality and appearance transfer compared to existing works, and provides instant plug-and-play functionality to any T2I and text-to-video (T2V) diffusion model. See our project page for the code and an overview of the results: https://genforce.github.io/ctrl-x", "pdf": "https://openreview.net/pdf/0cb874edebea0ccc67122b39d7c4b2846ebbca38.pdf"} {"title": "Why Transformers Need Adam: A Hessian Perspective", "url": "https://openreview.net/forum?id=X6rqEpbnj3", "detail_url": "https://openreview.net/forum?id=X6rqEpbnj3", "authors": "Yushun Zhang,Congliang Chen,Tian Ding,Ziniu Li,Ruoyu Sun,Zhi-Quan Luo", "tags": "NIPS 2024,Poster", "abstract": "SGD performs worse than Adam by a significant margin on Transformers, but the reason remains unclear. In this work, we provide an explanation through the lens of Hessian: (i) Transformers are \"heterogeneous'': the Hessian spectrum across parameter blocks varies dramatically, a phenomenon we call \"block heterogeneity\"; (ii) Heterogeneity hampers SGD: SGD performs worse than Adam on problems with block heterogeneity. To validate (i) and (ii), we check various Transformers, CNNs, MLPs, and quadratic problems, and find that SGD can perform on par with Adam on problems without block heterogeneity, but performs worse than Adam when the heterogeneity exists. Our initial theoretical analysis indicates that SGD performs worse because it applies one single learning rate to all blocks, which cannot handle the heterogeneity among blocks. This limitation could be ameliorated if we use coordinate-wise learning rates, as designed in Adam.", "pdf": "https://openreview.net/pdf/32bac765124ce6649e14e942f45bee8e4007cc8b.pdf"} {"title": "Diffusion4D: Fast Spatial-temporal Consistent 4D generation via Video Diffusion Models", "url": "https://openreview.net/forum?id=grrefkWEES", "detail_url": "https://openreview.net/forum?id=grrefkWEES", "authors": "HANWEN LIANG,Yuyang Yin,Dejia Xu,hanxue liang,Zhangyang Wang,Konstantinos N Plataniotis,Yao Zhao,Yunchao Wei", "tags": "NIPS 2024,Poster", "abstract": "The availability of large-scale multimodal datasets and advancements in diffusion models have significantly accelerated progress in 4D content generation. Most prior approaches rely on multiple images or video diffusion models, utilizing score distillation sampling for optimization or generating pseudo novel views for direct supervision. However, these methods are hindered by slow optimization speeds and multi-view inconsistency issues. Spatial and temporal consistency in 4D geometry have been extensively explored in 3D-aware diffusion models and traditional monocular video diffusion models, respectively. Building on this foundation, we propose a strategy to migrate the temporal consistency in video diffusion models to the spatial-temporal consistency required for 4D generation. Specifically, we present a novel framework, \\textbf{Diffusion4D}, for efficient and scalable 4D content generation. Leveraging a meticulously curated dynamic 3D dataset, we develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets. To control the dynamic strength of these assets, we introduce a 3D-to-4D motion magnitude metric as guidance. Additionally, we propose a novel motion magnitude reconstruction loss and 3D-aware classifier-free guidance to refine the learning and generation of motion dynamics.
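Claim (ii) from "Why Transformers Need Adam" above can be reproduced in miniature on a quadratic whose two parameter blocks have very different curvature (all scales below are invented): a single global learning rate must stay stable for the stiff block and therefore barely moves the flat one, while Adam's coordinate-wise scaling handles both.

```python
# GD vs Adam on a block-heterogeneous quadratic 0.5 * sum_i h_i * w_i^2 (toy).
import torch

h = torch.cat([torch.full((10,), 100.0), torch.full((10,), 0.01)])  # stiff / flat blocks

def run(make_opt, steps=500):
    w = torch.ones(20, requires_grad=True)
    opt = make_opt([w])
    for _ in range(steps):
        loss = 0.5 * (h * w * w).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()

w_sgd = run(lambda p: torch.optim.SGD(p, lr=1.9 / 100))  # near the largest stable single lr
w_adam = run(lambda p: torch.optim.Adam(p, lr=0.01))
# SGD: stiff block converges, flat block barely moves from its init of 1.
print("SGD  max|w| stiff/flat:", w_sgd[:10].abs().max().item(), w_sgd[10:].abs().max().item())
print("Adam max|w| stiff/flat:", w_adam[:10].abs().max().item(), w_adam[10:].abs().max().item())
```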
After obtaining orbital views of the 4D asset, we perform explicit 4D construction with Gaussian splatting in a coarse-to-fine manner. Extensive experiments demonstrate that our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency across various prompt modalities.", "pdf": "https://openreview.net/pdf/f591e6d5467584bafafeecfc8bfe987eca34f416.pdf"} {"title": "Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization", "url": "https://openreview.net/forum?id=bIa03mAtxQ", "detail_url": "https://openreview.net/forum?id=bIa03mAtxQ", "authors": "James Oldfield,Markos Georgopoulos,Grigorios Chrysos,Christos Tzelepis,Yannis Panagakis,Mihalis Nicolaou,Jiankang Deng,Ioannis Patras", "tags": "NIPS 2024,Poster", "abstract": "The Mixture of Experts (MoE) paradigm provides a powerful way to decompose dense layers into smaller, modular computations often more amenable to human interpretation, debugging, and editability. However, a major challenge lies in the computational cost of scaling the number of experts high enough to achieve fine-grained specialization. In this paper, we propose the Multilinear Mixture of Experts (\u03bcMoE) layer to address this, focusing on vision models. \u03bcMoE layers enable scalable expert specialization by performing an implicit computation on prohibitively large weight tensors entirely in factorized form. Consequently, \u03bcMoEs (1) avoid the restrictively high inference-time costs of dense MoEs, yet (2) do not inherit the training issues of the popular sparse MoEs' discrete (non-differentiable) expert routing. We present both qualitative and quantitative evidence that scaling \u03bcMoE layers when fine-tuning foundation models for vision tasks leads to more specialized experts at the class level, further enabling manual bias correction in CelebA attribute classification. Finally, we show qualitative results demonstrating the expert specialism achieved when pre-training large GPT2 and MLP-Mixer models with parameter-matched \u03bcMoE blocks at every layer, maintaining comparable accuracy. Our code is available at: https://github.com/james-oldfield/muMoE.", "pdf": "https://openreview.net/pdf/7449775f369e803fca0522e3b4cfc90bfd1db269.pdf"} {"title": "Pandora's Box: Towards Building Universal Attackers against Real-World Large Vision-Language Models", "url": "https://openreview.net/forum?id=gDpWYpocE1", "detail_url": "https://openreview.net/forum?id=gDpWYpocE1", "authors": "Daizong Liu,Mingyu Yang,Xiaoye Qu,Pan Zhou,Xiang Fang,Keke Tang,Yao Wan,Lichao Sun", "tags": "NIPS 2024,Poster", "abstract": "Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities across a wide range of multimodal understanding tasks. Nevertheless, these models are susceptible to adversarial examples. In real-world applications, existing LVLM attackers generally rely on detailed prior knowledge of the model to generate effective perturbations. Moreover, these attacks are task-specific, leading to significant costs for designing perturbations. Motivated by the research gap and practical demands, in this paper, we make the first attempt to build a universal attacker against real-world LVLMs, focusing on two critical aspects: (i) restricting access to only the LVLM inputs and outputs; and (ii) devising a universal adversarial patch, which is task-agnostic and can deceive any LVLM-driven task when applied to various inputs.
Specifically, we start by initializing the location and the pattern of the adversarial patch through random sampling, guided by the semantic distance between their output and the target label. Subsequently, we maintain a consistent patch location while refining the pattern to enhance semantic resemblance to the target. In particular, our approach incorporates a diverse set of LVLM task inputs as query samples to approximate the patch gradient, capitalizing on the importance of distinct inputs. In this way, the optimized patch is universally adversarial against different tasks and prompts, leveraging solely gradient estimates queried from the model. Extensive experiments are conducted to verify the strong universal adversarial capabilities of our proposed attack with prevalent LVLMs including LLaVA, MiniGPT-4, Flamingo, and BLIP-2, spanning a spectrum of tasks, all achieved without delving into the details of the model structures.", "pdf": "https://openreview.net/pdf/82ff33e202a079b34dbf8dfa20e1a93a21b6f36a.pdf"} {"title": "S-SOS: Stochastic Sum-Of-Squares for Parametric Polynomial Optimization", "url": "https://openreview.net/forum?id=iChQIJtjHB", "detail_url": "https://openreview.net/forum?id=iChQIJtjHB", "authors": "Richard Licheng Zhu,Mathias Oster,Yuehaw Khoo", "tags": "NIPS 2024,Poster", "abstract": "Global polynomial optimization is an important tool across applied mathematics, with many applications in operations research, engineering, and the physical sciences. In various settings, the polynomials depend on external parameters that may be random. We discuss a stochastic sum-of-squares (S-SOS) algorithm based on the sum-of-squares hierarchy that constructs a series of semidefinite programs to jointly find strict lower bounds on the global minimum and extracts candidates for parameterized global minimizers. We prove quantitative convergence of the hierarchy as the degree increases and use it to solve unconstrained and constrained polynomial optimization problems parameterized by random variables. By employing n-body priors from condensed matter physics to induce sparsity, we can use S-SOS to produce solutions and uncertainty intervals for sensor network localization problems containing up to 40 variables and semidefinite matrix sizes surpassing 800 x 800.", "pdf": "https://openreview.net/pdf/ed5532056ce074f84ad30a4151b78b386f186f49.pdf"} {"title": "Image-aware Evaluation of Generated Medical Reports", "url": "https://openreview.net/forum?id=ecPIg6o84Z", "detail_url": "https://openreview.net/forum?id=ecPIg6o84Z", "authors": "Gefen Dawidowicz,Elad Hirsch,Ayellet Tal", "tags": "NIPS 2024,Poster", "abstract": "The paper proposes a novel evaluation metric for automatic medical report generation from X-ray images, VLScore. It aims to overcome the limitations of existing evaluation methods, which either focus solely on textual similarities, ignoring clinical aspects, or concentrate only on a single clinical aspect, the pathology, neglecting all other factors. The key idea of our metric is to measure the similarity between radiology reports while considering the corresponding image. We demonstrate the benefit of our metric through evaluation on a dataset where radiologists marked errors in pairs of reports, showing notable alignment with radiologists' judgments. In addition, we provide a new dataset for evaluating metrics. This dataset includes well-designed perturbations that distinguish between significant modifications (e.g., removal of a diagnosis) and insignificant ones. 
It highlights the weaknesses in current evaluation metrics and provides a clear framework for analysis.", "pdf": "https://openreview.net/pdf/0b0fac5775eaa03414fa5955dad7c7e264bc2a1f.pdf"} {"title": "Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective", "url": "https://openreview.net/forum?id=fEYHZzN7kX", "detail_url": "https://openreview.net/forum?id=fEYHZzN7kX", "authors": "Jiaxi Hu,Yuehong HU,Wei Chen,Ming Jin,Shirui Pan,Qingsong Wen,Yuxuan Liang", "tags": "NIPS 2024,Poster", "abstract": "In long-term time series forecasting (LTSF) tasks, an increasing number of works have acknowledged that discrete time series originate from continuous dynamic systems and have attempted to model their underlying dynamics. Recognizing the chaotic nature of real-world data, our model, Attraos, incorporates chaos theory into LTSF, perceiving real-world time series as low-dimensional observations from unknown high-dimensional chaotic dynamical systems. Under the concept of attractor invariance, Attraos utilizes non-parametric Phase Space Reconstruction embedding along with a novel multi-resolution dynamic memory unit to memorize historical dynamical structures, and evolves by a frequency-enhanced local evolution strategy. Detailed theoretical analysis and abundant empirical evidence consistently show that Attraos outperforms various LTSF methods on mainstream LTSF datasets and chaotic datasets with only one-twelfth of the parameters compared to PatchTST.", "pdf": "https://openreview.net/pdf/14101b06977aaa922728293d33761c0cf695ba53.pdf"} {"title": "Alleviate Anchor-Shift: Explore Blind Spots with Cross-View Reconstruction for Incomplete Multi-View Clustering", "url": "https://openreview.net/forum?id=4pIfc51fGK", "detail_url": "https://openreview.net/forum?id=4pIfc51fGK", "authors": "Suyuan Liu,Siwei Wang,KE LIANG,Junpu Zhang,Zhibin Dong,Tianrui Liu,En Zhu,Xinwang Liu,Kunlun He", "tags": "NIPS 2024,Poster", "abstract": "Incomplete multi-view clustering aims to learn complete correlations among samples by leveraging complementary information across multiple views for clustering. Anchor-based methods further establish sample-level similarities for representative anchor generation, effectively addressing scalability issues in large-scale scenarios. Despite efficiency improvements, existing methods overlook the misguidance in anchor learning induced by partially missing samples, i.e., the absence of samples results in a shift of the learned anchors, further leading to sub-optimal clustering performance. To conquer the challenges, our solution involves a cross-view reconstruction strategy that not only alleviates the anchor shift problem through a carefully designed cross-view learning process, but also reconstructs missing samples in a way that transcends the limitations imposed by convex combinations. By employing affine combinations, our method explores areas beyond the convex hull defined by anchors, thereby illuminating blind spots in the reconstruction of missing samples.
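The affine-versus-convex point above reduces to elementary geometry. In this toy (anchors and target invented), a target outside the anchors' convex hull is recovered exactly by an affine combination, whose weights sum to 1 but may be negative, which no convex combination can reach.

```python
# Affine reconstruction of a point beyond the anchors' convex hull (toy data).
import numpy as np

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]).T   # anchors as columns (2 x 3)
x = np.array([1.5, -0.25])                              # target outside the hull

# Affine weights: minimize ||Aw - x||^2 subject to sum(w) = 1, via the KKT system.
m = A.shape[1]
K = np.block([[2 * A.T @ A, np.ones((m, 1))],
              [np.ones((1, m)), np.zeros((1, 1))]])
rhs = np.concatenate([2 * A.T @ x, [1.0]])
w = np.linalg.solve(K, rhs)[:m]
print("affine weights:", w)            # note the negative weight
print("reconstruction:", A @ w)        # recovers x exactly
# A convex combination (nonnegative weights) could only reach the hull boundary,
# leaving a nonzero residual; that check is omitted here for brevity.
```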
Experimental results on four benchmark datasets and three large-scale datasets validate the effectiveness of our proposed method.", "pdf": "https://openreview.net/pdf/ce30e6eed0894fd2a61e434ee4e1be6db2351ecc.pdf"} {"title": "Where's Waldo: Diffusion Features For Personalized Segmentation and Retrieval", "url": "https://openreview.net/forum?id=LGXeIx75sc", "detail_url": "https://openreview.net/forum?id=LGXeIx75sc", "authors": "Dvir Samuel,Rami Ben-Ari,Matan Levy,Nir Darshan,Gal Chechik", "tags": "NIPS 2024,Poster", "abstract": "Personalized retrieval and segmentation aim to locate specific instances within a dataset based on an input image and a short description of the reference instance. While supervised methods are effective, they require extensive labeled data for training. Recently, self-supervised foundation models have been introduced to these tasks, showing comparable results to supervised methods. However, a significant flaw in these models is evident: they struggle to locate a desired instance when other instances within the same class are presented. In this paper, we explore text-to-image diffusion models for these tasks. Specifically, we propose a novel approach called PDM for Personalized Diffusion Features Matching, which leverages intermediate features of pre-trained text-to-image models for personalization tasks without any additional training. PDM demonstrates superior performance on popular retrieval and segmentation benchmarks, outperforming even supervised methods. We also highlight notable shortcomings in current instance and segmentation datasets and propose new benchmarks for these tasks.", "pdf": "https://openreview.net/pdf/8d1a6d8c9c315838c66a43a217144425a01331cd.pdf"} {"title": "Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis", "url": "https://openreview.net/forum?id=oTEttMIymz", "detail_url": "https://openreview.net/forum?id=oTEttMIymz", "authors": "Liang Han,Junsheng Zhou,Yu-Shen Liu,Zhizhong Han", "tags": "NIPS 2024,Poster", "abstract": "Novel view synthesis from sparse inputs is a vital yet challenging task in 3D computer vision. Previous methods explore 3D Gaussian Splatting with neural priors (e.g. depth priors) as additional supervision, demonstrating promising quality and efficiency compared to the NeRF-based methods. However, the neural priors from 2D pretrained models are often noisy and blurry, which struggle to precisely guide the learning of radiance fields. In this paper, we propose a novel method for synthesizing novel views from sparse views with Gaussian Splatting that does not require external priors as supervision. Our key idea lies in exploring the self-supervision inherent in the binocular stereo consistency between each pair of binocular images constructed with disparity-guided image warping. To this end, we additionally introduce a Gaussian opacity constraint which regularizes the Gaussian locations and avoids Gaussian redundancy for improving the robustness and efficiency of inferring 3D Gaussians from sparse views.
Extensive experiments on the LLFF, DTU, and Blender datasets demonstrate that our method significantly outperforms the state-of-the-art methods.", "pdf": "https://openreview.net/pdf/5fffa3fc11d94b1534454d017683ac8fe327445a.pdf"} {"title": "Mind the Graph When Balancing Data for Fairness or Robustness", "url": "https://openreview.net/forum?id=LQR22jM5l3", "detail_url": "https://openreview.net/forum?id=LQR22jM5l3", "authors": "Jessica Schrouff,Alexis Bellot,Amal Rannen-Triki,Alan Malek,Isabela Albuquerque,Arthur Gretton,Alexander Nicholas D'Amour,Silvia Chiappa", "tags": "NIPS 2024,Poster", "abstract": "Failures of fairness or robustness in machine learning predictive settings can be due to undesired dependencies between covariates, outcomes and auxiliary factors of variation. A common strategy to mitigate these failures is data balancing, which attempts to remove those undesired dependencies. In this work, we define conditions on the training distribution for data balancing to lead to fair or robust models. Our results show that in many cases, the balanced distribution does not correspond to selectively removing the undesired dependencies in a causal graph of the task, leading to multiple failure modes and even interference with other mitigation techniques such as regularization. Overall, our results highlight the importance of taking the causal graph into account before performing data balancing.", "pdf": "https://openreview.net/pdf/46d856b3d3dbb4ad20e539314a8e960d79f7d832.pdf"} {"title": "Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation", "url": "https://openreview.net/forum?id=NsqxN9iOJ7", "detail_url": "https://openreview.net/forum?id=NsqxN9iOJ7", "authors": "Yuanhao Zhai,Kevin Lin,Zhengyuan Yang,Linjie Li,Jianfeng Wang,Chung-Ching Lin,David Doermann,Junsong Yuan,Lijuan Wang", "tags": "NIPS 2024,Poster", "abstract": "Image diffusion distillation achieves high-fidelity generation with very few sampling steps. However, directly applying these techniques to video models results in unsatisfactory frame quality. This issue arises from the limited frame appearance quality in public video datasets, affecting the performance of both teacher and student video diffusion models. Our study aims to improve video diffusion distillation while also enabling the student model to improve frame appearance using abundant high-quality image data. To this end, we propose motion consistency models (MCM), a single-stage video diffusion distillation method that disentangles motion and appearance learning. Specifically, MCM involves a video consistency model that distills motion from the video teacher model, and an image discriminator that boosts frame appearance to match high-quality image data. However, directly combining these components leads to two significant challenges: a conflict in frame learning objectives, where video distillation learns from low-quality video frames while the image discriminator targets high-quality images, and training-inference discrepancies due to the differing quality of video samples used during training and inference. To address these challenges, we introduce disentangled motion distillation and mixed trajectory distillation. The former applies the distillation objective solely to the motion representation, while the latter mitigates training-inference discrepancies by mixing distillation trajectories from both the low- and high-quality video domains.
Extensive experiments show that our MCM achieves state-of-the-art video diffusion distillation performance. Additionally, our method can enhance frame quality in video diffusion models, producing frames with high aesthetic value or specific styles.", "pdf": "https://openreview.net/pdf/38b3b43429f503f270b45ad9f0531f83298246b6.pdf"} {"title": "Learning via Surrogate PAC-Bayes", "url": "https://openreview.net/forum?id=IEyXWuXAQT", "detail_url": "https://openreview.net/forum?id=IEyXWuXAQT", "authors": "Antoine Picard,Roman Moscoviz,Benjamin Guedj", "tags": "NIPS 2024,Poster", "abstract": "PAC-Bayes learning is a comprehensive setting for (i) studying the generalisation ability of learning algorithms and (ii) deriving new learning algorithms by optimising a generalisation bound. However, optimising generalisation bounds might not always be viable for tractability or computational reasons, or both. For example, iteratively querying the empirical risk might prove computationally expensive.\nIn response, we introduce a novel principled strategy for building an iterative learning algorithm via the optimisation of a sequence of surrogate training objectives, inherited from PAC-Bayes generalisation bounds. The key argument is to replace the empirical risk (seen as a function of hypotheses) in the generalisation bound by its projection onto a constructible low dimensional functional space: these projections can be queried much more efficiently than the initial risk. On top of providing that generic recipe for learning via surrogate PAC-Bayes bounds, we (i) contribute theoretical results establishing that iteratively optimising our surrogates implies the optimisation of the original generalisation bounds, (ii) instantiate this strategy to the framework of meta-learning, introducing a meta-objective offering a closed-form expression for the meta-gradient, (iii) illustrate our approach with numerical experiments inspired by an industrial biochemical problem.", "pdf": "https://openreview.net/pdf/385860db2b940d8aa58dfb1b020c40a73d918215.pdf"} {"title": "Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models", "url": "https://openreview.net/forum?id=nvn80cscVm", "detail_url": "https://openreview.net/forum?id=nvn80cscVm", "authors": "Lai Wei,Zhiquan Tan,Chenghai Li,Jindong Wang,Weiran Huang", "tags": "NIPS 2024,Poster", "abstract": "Large Language Models (LLMs) have transformed natural language processing and extended their powerful capabilities to multi-modal domains. As LLMs continue to advance, it is crucial to develop diverse and appropriate metrics for their evaluation. In this paper, we introduce a novel rank-based metric, Diff-eRank, grounded in information theory and geometry principles. Diff-eRank assesses LLMs by analyzing their hidden representations, providing a quantitative measure of how efficiently they eliminate redundant information during training. We demonstrate the applicability of Diff-eRank in both single-modal (e.g., language) and multi-modal settings. For language models, our results show that Diff-eRank increases with model size and correlates well with conventional metrics such as loss and accuracy. In the multi-modal context, we propose an alignment evaluation method based on the eRank, and verify that contemporary multi-modal LLMs exhibit strong alignment performance based on our method.
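Diff-eRank's exact normalization is in the paper; the sketch below only computes the standard effective rank (the exponential of the entropy of the normalized singular values) that such rank-based metrics build on, applied to fake hidden states.

```python
# Effective rank of a representation matrix, one common definition (illustrative).
import numpy as np

def erank(H):
    """Effective rank of H (rows = token hidden states)."""
    s = np.linalg.svd(H, compute_uv=False)
    p = s / s.sum()                     # normalized singular-value distribution
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

H = np.random.default_rng(0).normal(size=(512, 64))  # fake hidden states
print(erank(H))   # close to full rank for isotropic noise, lower for redundant features
```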
Our code is publicly available at https://github.com/waltonfuture/Diff-eRank.", "pdf": "https://openreview.net/pdf/794e4505432b75f10b25351e3f544c7b0ccda026.pdf"} {"title": "Private Geometric Median", "url": "https://openreview.net/forum?id=cPzjN7KABv", "detail_url": "https://openreview.net/forum?id=cPzjN7KABv", "authors": "Mahdi Haghifam,Thomas Steinke,Jonathan Ullman", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study differentially private (DP) algorithms for computing the geometric median (GM) of a dataset: Given $n$ points, $x_1,\\dots,x_n$ in $\\mathbb{R}^d$, the goal is to find a point $\\theta$ that minimizes the sum of the Euclidean distances to these points, i.e., $\\sum_{i=1}^{n} \\lVert\\theta - x_i\\rVert_2$. Off-the-shelf methods, such as DP-GD, require strong a priori knowledge that the data lie within a ball of radius $R$, and the excess risk of the algorithm depends linearly on $R$. In this paper, we ask: can we design an efficient and private algorithm with an excess error guarantee that scales with the (unknown) radius containing the majority of the datapoints? Our main contribution is a pair of polynomial-time DP algorithms for the task of private GM with an excess error guarantee that scales with the effective diameter of the datapoints. Additionally, we propose an inefficient algorithm based on the inverse smooth sensitivity mechanism, which satisfies the more restrictive notion of pure DP. We complement our results with a lower bound and demonstrate the optimality of our polynomial-time algorithms in terms of sample complexity.", "pdf": "https://openreview.net/pdf/e0ed97378dd13a7e75ab7f2fc3a66fd3bd11ff22.pdf"} {"title": "Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads", "url": "https://openreview.net/forum?id=NKGuLthW80", "detail_url": "https://openreview.net/forum?id=NKGuLthW80", "authors": "Avelina Asada Hadji-Kyriacou,Ognjen Arandjelovic", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained Language Models (LMs) exhibit strong zero-shot and in-context learning capabilities; however, their behaviors are often difficult to control. By utilizing Reinforcement Learning from Human Feedback (RLHF), it is possible to fine-tune unsupervised LMs to follow instructions and produce outputs that reflect human preferences. Despite its benefits, RLHF has been shown to potentially harm a language model's reasoning capabilities and introduce artifacts such as hallucinations where the model may fabricate facts. To address this issue, we introduce Direct Preference Heads (DPH), a fine-tuning framework that enables LMs to learn human preference signals through an auxiliary reward head without directly affecting the output distribution of the language modeling head. We perform a theoretical analysis of our objective function and find strong ties to Conservative Direct Preference Optimization (cDPO).
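A hedged sketch of the auxiliary-head idea in the DPH abstract above (architecture and loss details are invented stand-ins, with a GRU in place of the transformer): a scalar reward head reads pooled hidden states and is trained with a Bradley-Terry-style preference loss, while the language modeling head's output path is left untouched.

```python
# Auxiliary reward head on a frozen-output LM backbone (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = 64
backbone = nn.GRU(32, hidden, batch_first=True)   # stand-in for the transformer
lm_head = nn.Linear(hidden, 1000)                 # vocabulary logits, untouched here
reward_head = nn.Linear(hidden, 1)                # auxiliary preference head
opt = torch.optim.Adam(reward_head.parameters(), lr=1e-4)  # head-only update

def reward(tokens_embedded):
    h, _ = backbone(tokens_embedded)
    return reward_head(h[:, -1]).squeeze(-1)      # score from the final hidden state

chosen = torch.randn(4, 16, 32)                   # embedded preferred responses
rejected = torch.randn(4, 16, 32)                 # embedded dispreferred responses

# Logistic loss on the reward margin between preferred and dispreferred outputs.
loss = -nn.functional.logsigmoid(reward(chosen) - reward(rejected)).mean()
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```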
Finally, we evaluate our models on GLUE, RACE, and the GPT4All evaluation suite and demonstrate that our method produces models which achieve higher scores than those fine-tuned with Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO) alone.", "pdf": "https://openreview.net/pdf/de3b96c3793d66f5bc84fb520301c2b58a49d6b7.pdf"} {"title": "Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models", "url": "https://openreview.net/forum?id=YAEKMFZyJm", "detail_url": "https://openreview.net/forum?id=YAEKMFZyJm", "authors": "Dominik Hintersdorf,Lukas Struppek,Kristian Kersting,Adam Dziedzic,Franziska Boenisch", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models (DMs) produce very detailed and high-quality images. Their power results from extensive training on large amounts of data - usually scraped from the internet without proper attribution or consent from content creators. Unfortunately, this practice raises privacy and intellectual property concerns, as DMs can memorize and later reproduce their potentially sensitive or copyrighted training images at inference time. Prior efforts prevent this issue by either changing the input to the diffusion process, thereby preventing the DM from generating memorized samples during inference, or removing the memorized data from training altogether. While those are viable solutions when the DM is developed and deployed in a secure and constantly monitored environment, they hold the risk of adversaries circumventing the safeguards and are not effective when the DM itself is publicly released. To solve the problem, we introduce NeMo, the first method to localize memorization of individual data samples down to the level of neurons in DMs' cross-attention layers. Through our experiments, we make the intriguing finding that in many cases, single neurons are responsible for memorizing particular training samples. By deactivating these memorization neurons, we can avoid the replication of training data at inference time, increase the diversity in the generated outputs, and mitigate the leakage of private and copyrighted data. In this way, our NeMo contributes to a more responsible deployment of DMs.", "pdf": "https://openreview.net/pdf/0e60e9a8a51279fd45d501fd3735f724a5344a55.pdf"} {"title": "InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD", "url": "https://openreview.net/forum?id=nRp0XhTf61", "detail_url": "https://openreview.net/forum?id=nRp0XhTf61", "authors": "Xiaoyi Dong,Pan Zhang,Yuhang Zang,Yuhang Cao,Bin Wang,Linke Ouyang,Songyang Zhang,Haodong Duan,Wenwei Zhang,Yining Li,Hang Yan,Yang Gao,Zhe Chen,xinyue zhang,Wei Li,Li Jingwen,Wenhai Wang,Kai Chen,Conghui He,Xingcheng ZHANG,Jifeng Dai,Yu Qiao,Dahua Lin,Jiaqi Wang", "tags": "NIPS 2024,Poster", "abstract": "The Large Vision-Language Model (LVLM) field has seen significant advancements, yet its progression has been hindered by challenges in comprehending fine-grained visual content due to limited resolution. Recent efforts have aimed to enhance the high-resolution understanding capabilities of LVLMs, yet they remain capped at approximately 1500 $\\times$ 1500 pixels and constrained to a relatively narrow resolution range. This paper presents InternLM-XComposer2-4KHD, a groundbreaking exploration into elevating LVLM resolution capabilities up to 4K HD (3840 \u00d7 1600) and beyond.
Concurrently, considering that ultra-high resolution may not be necessary in all scenarios, it supports a wide range of diverse resolutions from 336 pixels to 4K standard, significantly broadening its scope of applicability. Specifically, this research advances the patch division paradigm by introducing a novel extension: dynamic resolution with automatic patch configuration. It maintains the training image aspect ratios while automatically varying patch counts and configuring layouts based on a pre-trained Vision Transformer (ViT) (336 $\\times$ 336), leading to dynamic training resolution from 336 pixels to 4K standard. Our research demonstrates that scaling training resolution up to 4K HD leads to consistent performance enhancements without hitting the ceiling of potential improvements. InternLM-XComposer2-4KHD shows superb capability that matches or even surpasses GPT-4V and Gemini Pro in 10 of the 16 benchmarks.", "pdf": "https://openreview.net/pdf/1155d659fc23c66face1a69fc64ca5781c59f825.pdf"} {"title": "Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis", "url": "https://openreview.net/forum?id=w6q46IslSR", "detail_url": "https://openreview.net/forum?id=w6q46IslSR", "authors": "Hongru Yang,Bhavya Kailkhura,Zhangyang Wang,Yingbin Liang", "tags": "NIPS 2024,Poster", "abstract": "Understanding the training dynamics of transformers is important to explain the impressive capabilities behind large language models. \nIn this work, we study the dynamics of training a shallow transformer on a task of recognizing co-occurrence of two designated words. In the literature of studying training dynamics of transformers, several simplifications are commonly adopted, such as weight reparameterization, attention linearization, special initialization, and lazy regime. In contrast, we analyze the gradient flow dynamics of simultaneously training three attention matrices and a linear MLP layer from random initialization, and provide a framework for analyzing such dynamics via a coupled dynamical system. We establish near minimum loss and characterize the attention model after training. We discover that gradient flow serves as an inherent mechanism that naturally divides the training process into two phases. In Phase 1, the linear MLP quickly aligns with the two target signals for correct classification, whereas the softmax attention remains almost unchanged. In Phase 2, the attention matrices and the MLP evolve jointly to enlarge the classification margin and reduce the loss to a near minimum value. Technically, we prove a novel property of the gradient flow, termed \\textit{automatic balancing of gradients}, which enables the loss values of different samples to decrease almost at the same rate and further facilitates the proof of near minimum training loss.
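In symbols, as we read the abstract (our notation, not the paper's exact statement), the object of study and the balancing property are:

```latex
% Gradient flow over all trainable parameters W, with per-sample losses \ell_i:
\[
  \dot{W}(t) \;=\; -\nabla_W \mathcal{L}\bigl(W(t)\bigr),
  \qquad
  \mathcal{L}(W) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell_i(W).
\]
% "Automatic balancing of gradients": per-sample losses decrease at almost the
% same rate along the flow,
\[
  \frac{d}{dt}\,\ell_i\bigl(W(t)\bigr) \;\approx\; \frac{d}{dt}\,\ell_j\bigl(W(t)\bigr)
  \qquad \text{for all samples } i, j,
\]
% which is what drives the total loss to a near-minimum value.
```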
We also conduct experiments to verify our theoretical results.", "pdf": "https://openreview.net/pdf/d62bb7ca05c4ddb2e68d06f57b06fae99492728a.pdf"} {"title": "Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers", "url": "https://openreview.net/forum?id=6uv9ViIoMj", "detail_url": "https://openreview.net/forum?id=6uv9ViIoMj", "authors": "Junhan Kim,Chungman Lee,Eulrang Cho,Kyungphil Park,Ho-young Kim,Joonyoung Kim,Yongkweon Jeon", "tags": "NIPS 2024,Poster", "abstract": "With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile phones and TVs.\nExisting PTQ schemes, however, consume considerable time and resources, which could be a bottleneck in real situations where frequent model updates and multiple hyperparameter tunings are required.\nAs a cost-effective alternative, learning-free PTQ schemes have been proposed. \nHowever, the performance is somewhat limited because they cannot consider the inter-layer dependency within the attention module, which is a significant feature of Transformers.\nIn this paper, we thus propose a novel PTQ algorithm that balances accuracy and efficiency.\nThe key idea of the proposed algorithm, called aespa, is to perform quantization layer-wise for efficiency while targeting attention-wise reconstruction to consider the cross-layer dependency.\nThrough extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models. The code will be available at https://github.com/SamsungLabs/aespa.", "pdf": "https://openreview.net/pdf/92b693f63079c7ab2c8fcfce715dcc49a20e6fd4.pdf"} {"title": "TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables", "url": "https://openreview.net/forum?id=INAeUQ04lT", "detail_url": "https://openreview.net/forum?id=INAeUQ04lT", "authors": "Yuxuan Wang,Haixu Wu,Jiaxiang Dong,Guo Qin,Haoran Zhang,Yong Liu,Yun-Zhong Qiu,Jianmin Wang,Mingsheng Long", "tags": "NIPS 2024,Poster", "abstract": "Deep models have demonstrated remarkable performance in time series forecasting. However, due to the partially-observed nature of real-world applications, solely focusing on the target of interest, the so-called endogenous variables, is usually insufficient to guarantee accurate forecasting. Notably, a system is often recorded into multiple variables, where the exogenous variables can provide valuable external information for endogenous variables. Thus, unlike well-established multivariate or univariate forecasting paradigms that either treat all the variables equally or ignore exogenous information, this paper focuses on a more practical setting: time series forecasting with exogenous variables. We propose a novel approach, TimeXer, to ingest external information to enhance the forecasting of endogenous variables. With deftly designed embedding layers, TimeXer empowers the canonical Transformer with the ability to reconcile endogenous and exogenous information, where patch-wise self-attention and variate-wise cross-attention are used simultaneously. Moreover, global endogenous tokens are learned to effectively bridge the causal information underlying exogenous series into endogenous temporal patches. Experimentally, TimeXer achieves consistent state-of-the-art performance on twelve real-world forecasting benchmarks and exhibits notable generality and scalability.
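A rough wiring sketch of the attention scheme described above (all dimensions invented, untrained modules): endogenous patches attend to each other alongside a learned global token, and that global token cross-attends to per-variate exogenous tokens.

```python
# Patch-wise self-attention plus variate-wise cross-attention (shape sketch only).
import torch
import torch.nn as nn

torch.manual_seed(0)
B, L, patch, d, n_exo = 8, 96, 24, 32, 5
endo = torch.randn(B, L)                  # endogenous series
exo = torch.randn(B, n_exo, L)            # exogenous series

patch_embed = nn.Linear(patch, d)         # one token per endogenous patch
variate_embed = nn.Linear(L, d)           # one token per exogenous variate
self_attn = nn.MultiheadAttention(d, 4, batch_first=True)
cross_attn = nn.MultiheadAttention(d, 4, batch_first=True)
glb = nn.Parameter(torch.zeros(1, 1, d))  # learned global endogenous token

tokens = patch_embed(endo.view(B, L // patch, patch))        # (B, 4, d)
tokens = torch.cat([tokens, glb.expand(B, -1, -1)], dim=1)   # append global token
tokens, _ = self_attn(tokens, tokens, tokens)                # patch-wise self-attention

exo_tok = variate_embed(exo)                                 # (B, n_exo, d)
bridged, _ = cross_attn(tokens[:, -1:], exo_tok, exo_tok)    # global token queries exogenous
print(tokens.shape, bridged.shape)
```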
Code is available at this repository: https://github.com/thuml/TimeXer.", "pdf": "https://openreview.net/pdf/b7ca8dfac678d3faedb563cabfbeec99c7ca137e.pdf"} {"title": "A2PO: Towards Effective Offline Reinforcement Learning from an Advantage-aware Perspective", "url": "https://openreview.net/forum?id=hYjRmGqq5e", "detail_url": "https://openreview.net/forum?id=hYjRmGqq5e", "authors": "Yunpeng Qing,Shunyu Liu,Jingyuan Cong,Kaixuan Chen,Yihe Zhou,Mingli Song", "tags": "NIPS 2024,Poster", "abstract": "Offline reinforcement learning endeavors to leverage offline datasets to craft an effective agent policy without online interaction, which imposes proper conservative constraints with the support of behavior policies to tackle the out-of-distribution problem. However, existing works often suffer from the constraint conflict issue when offline datasets are collected from multiple behavior policies, i.e., different behavior policies may exhibit inconsistent actions with distinct returns across the state space. To remedy this issue, recent advantage-weighted methods prioritize samples with high advantage values for agent training while inevitably ignoring the diversity of behavior policies. In this paper, we introduce a novel Advantage-Aware Policy Optimization (A2PO) method to explicitly construct advantage-aware policy constraints for offline learning under mixed-quality datasets. Specifically, A2PO employs a conditional variational auto-encoder to disentangle the action distributions of intertwined behavior policies by modeling the advantage values of all training data as conditional variables. Then the agent can follow such disentangled action distribution constraints to optimize the advantage-aware policy towards high advantage values. Extensive experiments conducted on both the single-quality and mixed-quality datasets of the D4RL benchmark demonstrate that A2PO yields results superior to its counterparts. Our code is available at https://github.com/Plankson/A2PO.", "pdf": "https://openreview.net/pdf/bc8f88d8715d12112423534e6ea66b34180f5f5b.pdf"} {"title": "einspace: Searching for Neural Architectures from Fundamental Operations", "url": "https://openreview.net/forum?id=qf1ncViBr5", "detail_url": "https://openreview.net/forum?id=qf1ncViBr5", "authors": "Linus Ericsson,Miguel Espinosa,Chenhongyi Yang,Antreas Antoniou,Amos Storkey,Shay B Cohen,Steven McDonagh,Elliot J. Crowley", "tags": "NIPS 2024,Poster", "abstract": "Neural architecture search (NAS) finds high-performing networks for a given task. Yet the results of NAS are fairly prosaic; they have not, e.g., created a shift from convolutional structures to transformers. This is not least because the search spaces in NAS often aren\u2019t diverse enough to include such transformations *a priori*. Instead, for NAS to provide greater potential for fundamental design shifts, we need a novel expressive search space design which is built from more fundamental operations. To this end, we introduce `einspace`, a search space based on a parameterised probabilistic context-free grammar. Our space is versatile, supporting architectures of various sizes and complexities, while also containing diverse network operations which allow it to model convolutions, attention components and more. It contains many existing competitive architectures, and provides flexibility for discovering new ones. Using this search space, we perform experiments to find novel architectures as well as improvements on existing ones on the diverse Unseen NAS datasets.
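A toy of sampling architectures from a probabilistic context-free grammar, in the spirit of `einspace` (the production rules below are invented and far smaller than the real search space): each nonterminal expands according to rule probabilities until only fundamental operations remain.

```python
# Sampling architecture strings from a tiny probabilistic context-free grammar.
import random

RULES = {
    "NET": [(0.6, ["NET", "->", "NET"]),   # sequential composition
            (0.4, ["OP"])],
    "OP":  [(0.4, ["conv3x3"]), (0.3, ["attention"]),
            (0.2, ["mlp"]), (0.1, ["NET"])],
}

def sample(symbol="NET", depth=0, max_depth=6):
    if symbol not in RULES or depth > max_depth:
        # terminals pass through; over-deep nonterminals fall back to identity
        return [symbol] if symbol not in RULES else ["identity"]
    probs, bodies = zip(*RULES[symbol])
    body = random.choices(bodies, weights=probs)[0]
    return [tok for s in body for tok in sample(s, depth + 1, max_depth)]

random.seed(3)
for _ in range(3):
    print(" ".join(sample()))
```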
We show that competitive architectures can be obtained by searching from scratch, and we consistently find large improvements when initialising the search with strong baselines. We believe that this work is an important advancement towards a transformative NAS paradigm where search space expressivity and strategic search initialisation play key roles.", "pdf": "https://openreview.net/pdf/efec03c6fc6f3cdf740e493587209da213f4a1f2.pdf"} {"title": "Frequency Adaptive Normalization For Non-stationary Time Series Forecasting", "url": "https://openreview.net/forum?id=T0axIflVDD", "detail_url": "https://openreview.net/forum?id=T0axIflVDD", "authors": "Weiwei Ye,Songgaojun Deng,Qiaosha Zou,Ning Gui", "tags": "NIPS 2024,Poster", "abstract": "Time series forecasting typically needs to address non-stationary data with evolving trend and seasonal patterns. To address the non-stationarity, reversible instance normalization has been recently proposed to alleviate impacts from the trend with certain statistical measures, e.g., mean and variance. Although they demonstrate improved predictive accuracy, they are limited to expressing basic trends and are incapable of handling seasonal patterns. To address this limitation, this paper proposes a new instance normalization solution, called frequency adaptive normalization (FAN), which extends instance normalization in handling both dynamic trend and seasonal patterns. Specifically, we employ the Fourier transform to identify instance-wise predominant frequency components that cover most non-stationary factors. \nFurthermore, the discrepancy of those frequency components between inputs and outputs is explicitly modeled as a prediction task with a simple MLP model. FAN is a model-agnostic method that can be applied to arbitrary predictive backbones. We instantiate FAN on four widely used forecasting models as the backbone and evaluate their prediction performance improvements on eight benchmark datasets. FAN demonstrates significant performance advancement, achieving 7.76\\%$\\sim$37.90\\% average improvements in MSE. Our code is publicly available at http://github.com/icannotnamemyself/FAN.", "pdf": "https://openreview.net/pdf/86aeb855802263e84f1d0a0ee4a42e932e56b41b.pdf"} {"title": "Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming", "url": "https://openreview.net/forum?id=jHh804fZ5l", "detail_url": "https://openreview.net/forum?id=jHh804fZ5l", "authors": "Shinsaku Sakaue,Taihei Oki", "tags": "NIPS 2024,Poster", "abstract": "How to solve high-dimensional linear programs (LPs) efficiently is a fundamental question.\nRecently, there has been a surge of interest in reducing LP sizes using *random projections*, which can accelerate solving LPs independently of improving LP solvers. \nThis paper explores a new direction of *data-driven projections*, which use projection matrices learned from data instead of random projection matrices.\nGiven training data of $n$-dimensional LPs, we learn an $n\\times k$ projection matrix with $n > k$. \nWhen addressing a future LP instance, we reduce its dimensionality from $n$ to $k$ via the learned projection matrix, solve the resulting LP to obtain a $k$-dimensional solution, and apply the learned matrix to it to recover an $n$-dimensional solution.\n\nOn the theoretical side, a natural question is: how much data is sufficient to ensure the quality of recovered solutions?
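The reduce-solve-recover pipeline described above is easy to state concretely. In the sketch below, a random Gaussian matrix stands in for the learned projection (learning it from training LPs is the paper's contribution and is not reproduced here); note that the recovered solution is feasible by construction, since A(Py) <= b is exactly the reduced constraint.

```python
# Reduce an LP via a projection matrix, solve it, and recover a full solution.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, m = 60, 8, 30
c = rng.normal(size=n)
A = np.vstack([rng.normal(size=(m, n)), np.eye(n), -np.eye(n)])  # box rows keep the LP bounded
b = np.concatenate([np.abs(rng.normal(size=m)) + 1.0, np.ones(2 * n)])

P = rng.normal(size=(n, k)) / np.sqrt(k)   # stand-in for a *learned* projection

# Reduced LP: substituting x = P y gives  min (P^T c)^T y  s.t.  (A P) y <= b.
red = linprog(P.T @ c, A_ub=A @ P, b_ub=b, bounds=[(None, None)] * k)
x_rec = P @ red.x                          # recovered n-dimensional solution
assert np.all(A @ x_rec <= b + 1e-6)       # recovery preserves feasibility

full = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
print(f"recovered obj {c @ x_rec:.3f}  vs  full obj {full.fun:.3f}")
```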
We address this question based on the framework of *data-driven algorithm design*, which connects the amount of data sufficient for establishing generalization bounds to the *pseudo-dimension* of performance metrics. We obtain an $\\tilde{\\mathrm{O}}(nk^2)$ upper bound on the pseudo-dimension, where $\\tilde{\\mathrm{O}}$ compresses logarithmic factors. We also provide an $\\Omega(nk)$ lower bound, implying our result is tight up to an $\\tilde{\\mathrm{O}}(k)$ factor. \n\nOn the practical side, we explore two simple methods for learning projection matrices: PCA- and gradient-based methods. While the former is relatively efficient, the latter can sometimes achieve better solution quality. Experiments demonstrate that learning projection matrices from data is indeed beneficial: it leads to significantly higher solution quality than the existing random projection while greatly reducing the time for solving LPs.", "pdf": "https://openreview.net/pdf/15fa41b681ea89d3fba0400543d7954b091a2576.pdf"} {"title": "Noisy Dual Mirror Descent: A Near Optimal Algorithm for Jointly-DP Convex Resource Allocation", "url": "https://openreview.net/forum?id=6ArNmbMpKF", "detail_url": "https://openreview.net/forum?id=6ArNmbMpKF", "authors": "Du Chen,Geoffrey A. Chua", "tags": "NIPS 2024,Poster", "abstract": "We study convex resource allocation problems with $m$ hard constraints under $(\\varepsilon,\\delta)$-joint differential privacy (Joint-DP or JDP) in an offline setting. To approximately solve the problem, we propose a generic algorithm called Noisy Dual Mirror Descent. The algorithm applies noisy Mirror Descent to a dual problem from relaxing the hard constraints for private shadow prices, and then uses the shadow prices to coordinate allocations in the primal problem. Leveraging weak duality theory, we show that the optimality gap is upper bounded by $\\mathcal{O}(\\frac{\\sqrt{m\\ln(1/\\delta)}}{\\varepsilon})$, and constraint violation is no more than $\\mathcal{O}(\\frac{\\sqrt{m\\ln(1/\\delta)}}{\\varepsilon})$ per constraint. When strong duality holds, both preceding results can be improved to $\\widetilde{\\mathcal{O}}(\\frac{\\sqrt{\\ln(1/\\delta)}}{\\varepsilon})$ by better utilizing the geometric structure of the dual space, which is neglected by existing works. To complement our results under strong duality, we derive a minimax lower bound $\\Omega(\\frac{m}{\\varepsilon})$ for any JDP algorithm outputting feasible allocations. The lower bound matches our upper bounds up to some logarithmic factors for $\\varepsilon\\geq \\max(1, 1/(n\\gamma))$, where $n\\gamma$ is the available resource level. Numerical studies further confirm the effectiveness of our algorithm.", "pdf": "https://openreview.net/pdf/93244a46c3816db2023adc86515cf94f65d0b4ea.pdf"} {"title": "Online Weighted Paging with Unknown Weights", "url": "https://openreview.net/forum?id=ctxtY3VGGq", "detail_url": "https://openreview.net/forum?id=ctxtY3VGGq", "authors": "Orin Levy,Noam Touitou,Aviv Rosenberg", "tags": "NIPS 2024,Poster", "abstract": "Online paging is a fundamental problem in the field of online algorithms, in which one maintains a cache of $k$ slots as requests for fetching pages arrive online. 
\nIn the weighted variant of this problem, each page has its own fetching cost; a substantial line of work on this problem culminated in an (optimal) $O(\\log k)$-competitive randomized algorithm, due to Bansal, Buchbinder and Naor (FOCS'07).\n\nExisting work for weighted paging assumes that page weights are known in advance, which is not always the case in practice.\nFor example, in multi-level caching architectures, the expected cost of fetching a memory block is a function of its probability of being in a mid-level cache rather than the main memory.\nThis complex property cannot be predicted in advance; over time, however, one may glean information about page weights through sampling their fetching cost multiple times.\n\nWe present the first algorithm for online weighted paging that does not know page weights in advance, but rather learns from weight samples.\nIn terms of techniques, this requires providing (integral) samples to a fractional solver, requiring a delicate interface between this solver and the randomized rounding scheme; we believe that our work can inspire online algorithms to other problems that involve cost sampling.", "pdf": "https://openreview.net/pdf/b8e68d1aa68c06fb6c2a89b058f372f08726dcda.pdf"} {"title": "MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting", "url": "https://openreview.net/forum?id=6FTlHaxCpR", "detail_url": "https://openreview.net/forum?id=6FTlHaxCpR", "authors": "Ruijie Zhu,Yanzhe Liang,Hanzhi Chang,Jiacheng Deng,Jiahao Lu,Wenfei Yang,Tianzhu Zhang,Yongdong Zhang", "tags": "NIPS 2024,Poster", "abstract": "Dynamic scene reconstruction is a long-term challenge in the field of 3D vision. Recently, the emergence of 3D Gaussian Splatting has provided new insights into this problem. Although subsequent efforts rapidly extend static 3D Gaussian to dynamic scenes, they often lack explicit constraints on object motion, leading to optimization difficulties and performance degradation. To address the above issues, we propose a novel deformable 3D Gaussian splatting framework called MotionGS, which explores explicit motion priors to guide the deformation of 3D Gaussians. Specifically, we first introduce an optical flow decoupling module that decouples optical flow into camera flow and motion flow, corresponding to camera movement and object motion respectively. Then the motion flow can effectively constrain the deformation of 3D Gaussians, thus simulating the motion of dynamic objects. Additionally, a camera pose refinement module is proposed to alternately optimize 3D Gaussians and camera poses, mitigating the impact of inaccurate camera poses. Extensive experiments in the monocular dynamic scenes validate that MotionGS surpasses state-of-the-art methods and exhibits significant superiority in both qualitative and quantitative results. Project page: https://ruijiezhu94.github.io/MotionGS_page.", "pdf": "https://openreview.net/pdf/a64d6292f1496bea977e6d1c0f543fdbd807a131.pdf"} {"title": "CRAYM: Neural Field Optimization via Camera RAY Matching", "url": "https://openreview.net/forum?id=wK0Z49myyi", "detail_url": "https://openreview.net/forum?id=wK0Z49myyi", "authors": "Liqiang Lin,Wenpeng Wu,Chi-Wing Fu,Hao Zhang,Hui Huang", "tags": "NIPS 2024,Poster", "abstract": "We introduce camera ray matching (CRAYM) into the joint optimization of camera poses and neural fields from multi-view images. 
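Because the weighted-paging abstract above assumes page weights are only observable through repeated fetch-cost samples, the natural building block is a running per-page weight estimate that an eviction rule can consult. A deliberately simplified sketch, using a greedy "evict the cheapest estimated page" heuristic rather than the paper's fractional-solver-plus-rounding pipeline:

```python
from collections import defaultdict

class SampledWeightCache:
    """Toy cache that learns page weights from sampled fetch costs."""
    def __init__(self, k):
        self.k, self.cache = k, set()
        self.cost_sum = defaultdict(float)   # sum of observed fetch costs per page
        self.n_obs = defaultdict(int)        # number of observations per page

    def weight_estimate(self, page):
        return self.cost_sum[page] / self.n_obs[page] if self.n_obs[page] else 0.0

    def request(self, page, sampled_cost):
        if page in self.cache:
            return 0.0                       # hit: no fetch, no new weight sample
        self.cost_sum[page] += sampled_cost  # miss: pay and record one more sample
        self.n_obs[page] += 1
        if len(self.cache) >= self.k:        # evict the cheapest-to-refetch page
            self.cache.remove(min(self.cache, key=self.weight_estimate))
        self.cache.add(page)
        return sampled_cost
```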
The optimized field, referred to as a feature volume, can be \u201cprobed\u201d by the camera rays for novel view synthesis (NVS) and 3D geometry reconstruction. One key reason for matching camera rays, instead of pixels as in prior works, is that the camera rays can be parameterized by the feature volume to carry both geometric and photometric information. Multi-view consistencies involving the camera rays and scene rendering can be naturally integrated into the joint optimization and network training, to impose physically meaningful constraints to improve the final quality of both the geometric reconstruction and photorealistic rendering. We formulate our per-ray optimization and matched ray coherence by focusing on camera rays passing through keypoints in the input images to elevate both the efficiency and accuracy of scene correspondences. Accumulated ray features along the feature volume provide a means to discount the coherence constraint amid erroneous ray matching. We demonstrate the effectiveness of CRAYM for both NVS and geometry reconstruction, over dense- or sparse-view settings, with qualitative and quantitative comparisons to state-of-the-art alternatives.", "pdf": "https://openreview.net/pdf/168f165b72a71321d07a27b12f06fd7ac0fa9bd3.pdf"} {"title": "HumanSplat: Generalizable Single-Image Human Gaussian Splatting with Structure Priors", "url": "https://openreview.net/forum?id=JBAUg7o8Yv", "detail_url": "https://openreview.net/forum?id=JBAUg7o8Yv", "authors": "Panwang Pan,Zhuo Su,Chenguo Lin,Zhen Fan,Yongjie zhang,Zeming Li,Tingting Shen,Yadong MU,Yebin Liu", "tags": "NIPS 2024,Poster", "abstract": "Despite recent advancements in high-fidelity human reconstruction techniques, the requirements for densely captured images or time-consuming per-instance optimization significantly hinder their applications in broader scenarios. To tackle these issues, we present **HumanSplat**, which predicts the 3D Gaussian Splatting properties of any human from a single input image in a generalizable manner.\nSpecifically, HumanSplat comprises a 2D multi-view diffusion model and a latent reconstruction Transformer with human structure priors that adeptly integrate geometric priors and semantic features within a unified framework. A hierarchical loss that incorporates human semantic information is devised to achieve high-fidelity texture modeling and impose stronger constraints on the estimated multiple views. Comprehensive experiments on standard benchmarks and in-the-wild images demonstrate that HumanSplat surpasses existing state-of-the-art methods in achieving photorealistic novel-view synthesis. Project page: https://humansplat.github.io.", "pdf": "https://openreview.net/pdf/92ce76c8d8154ec2cdd9b5f4679b939639e191ab.pdf"} {"title": "When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback", "url": "https://openreview.net/forum?id=XcbgkjWSJ7", "detail_url": "https://openreview.net/forum?id=XcbgkjWSJ7", "authors": "Leon Lang,Davis Foote,Stuart Russell,Anca Dragan,Erik Jenner,Scott Emmons", "tags": "NIPS 2024,Poster", "abstract": "Past analyses of reinforcement learning from human feedback (RLHF) assume that the human evaluators fully observe the environment. What happens when human feedback is based only on partial observations? We formally define two failure cases: deceptive inflation and overjustification. Modeling the human as Boltzmann-rational w.r.t. 
a belief over trajectories, we prove conditions under which RLHF is guaranteed to result in policies that deceptively inflate their performance, overjustify their behavior to make an impression, or both. Under the new assumption that the human's partial observability is known and accounted for, we then analyze how much information the feedback process provides about the return function. We show that sometimes, the human's feedback determines the return function uniquely up to an additive constant, but in other realistic cases, there is irreducible ambiguity. We propose exploratory research directions to help tackle these challenges and experimentally validate both the theoretical concerns and potential mitigations, and caution against blindly applying RLHF in partially observable settings.", "pdf": "https://openreview.net/pdf/6725942ed5948de7b61bb8b10956f4d9e4f1560b.pdf"} {"title": "Optimizing the coalition gain in Online Auctions with Greedy Structured Bandits", "url": "https://openreview.net/forum?id=B74mb0tEY6", "detail_url": "https://openreview.net/forum?id=B74mb0tEY6", "authors": "Dorian Baudry,Hugo Richard,Maria Cherifa,Vianney Perchet,Cl\u00e9ment Calauz\u00e8nes", "tags": "NIPS 2024,Poster", "abstract": "Motivated by online display advertising, this work considers repeated second-price auctions, where agents sample their value from an unknown distribution with cumulative distribution function $F$. In each auction $t$, a decision-maker bound by limited observations selects $n_t$ agents from a coalition of $N$ to compete for a prize with $p$ other agents, aiming to maximize the cumulative reward of the coalition across all auctions.\nThe problem is framed as an $N$-armed structured bandit, each number of players sent being an arm $n$, with expected reward $r(n)$ fully characterized by $F$ and $p+n$. \nWe present two algorithms, Local-Greedy (LG) and Greedy-Grid (GG), both achieving *constant* problem-dependent regret. This relies on three key ingredients: **1.** an estimator of $r(n)$ from feedback collected from any arm $k$, **2.** concentration bounds of these estimates for $k$ within an estimation neighborhood of $n$ and **3.** the unimodality property of $r$ under standard assumptions on $F$. Additionally, GG exhibits problem-independent guarantees on top of best problem-dependent guarantees. However, by avoiding reliance on confidence intervals, LG practically outperforms GG, as well as standard unimodal bandit algorithms such as OSUB or multi-armed bandit algorithms.", "pdf": "https://openreview.net/pdf/29c70087e7739d230e2a3ad8ce55d56cff2b4735.pdf"} {"title": "Improved off-policy training of diffusion samplers", "url": "https://openreview.net/forum?id=vieIamY2Gi", "detail_url": "https://openreview.net/forum?id=vieIamY2Gi", "authors": "Marcin Sendera,Minsu Kim,Sarthak Mittal,Pablo Lemos,Luca Scimeca,Jarrid Rector-Brooks,Alexandre Adam,Yoshua Bengio,Nikolay Malkin", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods (continuous generative flow networks). Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work. 
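The RLHF failure cases formalized above all stem from the evaluator rating what they believe happened rather than what happened. As a toy illustration of that Boltzmann-rational feedback model, the sketch below scores two observed segments by expected return under the human's belief; `belief` and `returns` are hypothetical stand-ins for the paper's belief distribution and return function, and the sigmoid is the standard two-option Boltzmann choice rule:

```python
import math

def boltzmann_preference(obs_a, obs_b, belief, returns, beta=1.0):
    """P(evaluator prefers segment A over B) from partial observations.

    belief(obs) -> {trajectory_id: probability}; returns[trajectory_id] -> float.
    The evaluator rates what they believe happened, not what happened."""
    def believed_return(obs):
        return sum(p * returns[t] for t, p in belief(obs).items())
    ga, gb = believed_return(obs_a), believed_return(obs_b)
    return 1.0 / (1.0 + math.exp(-beta * (ga - gb)))
```

Deceptive inflation, in these terms, is a policy change that raises the believed return entering this formula while lowering the true return.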
We also propose a novel exploration strategy for off-policy methods, based on local search in the target space with the use of a replay buffer, and show that it improves the quality of samples on a variety of target distributions. Our code for the sampling methods and benchmarks studied is made public at [this link](https://github.com/GFNOrg/gfn-diffusion) as a base for future work on diffusion models for amortized inference.", "pdf": "https://openreview.net/pdf/4b23e8d7e5e0b89e5b02a0559ea68f07079b1aaf.pdf"} {"title": "Richelieu: Self-Evolving LLM-Based Agents for AI Diplomacy", "url": "https://openreview.net/forum?id=7Jb4NJS8Yk", "detail_url": "https://openreview.net/forum?id=7Jb4NJS8Yk", "authors": "Zhenyu Guan,Xiangyu Kong,Fangwei Zhong,Yizhou Wang", "tags": "NIPS 2024,Poster", "abstract": "Diplomacy is one of the most sophisticated activities in human society, involving complex interactions among multiple parties that require skills in social reasoning, negotiation, and long-term strategic planning. Previous AI agents have demonstrated their ability to handle multi-step games and large action spaces in multi-agent tasks. However, diplomacy involves a staggering magnitude of decision spaces, especially considering the negotiation stage required. While recent agents based on large language models (LLMs) have shown potential in various applications, they still struggle with extended planning periods in complex multi-agent settings. Leveraging recent technologies for LLM-based agents, we aim to explore AI's potential to create a human-like agent capable of executing comprehensive multi-agent missions by integrating three fundamental capabilities: 1) strategic planning with memory and reflection; 2) goal-oriented negotiation with social reasoning; and 3) augmenting memory through self-play games for self-evolution without a human in the loop.", "pdf": "https://openreview.net/pdf/cb90ba2efaa50242cc6bff5c3b01638437ab4113.pdf"} {"title": "Artemis: Towards Referential Understanding in Complex Videos", "url": "https://openreview.net/forum?id=FaNhyXY6Y1", "detail_url": "https://openreview.net/forum?id=FaNhyXY6Y1", "authors": "Jihao Qiu,Yuan Zhang,Xi Tang,Lingxi Xie,Tianren Ma,Pengyu Yan,David Doermann,Qixiang Ye,Yunjie Tian", "tags": "NIPS 2024,Poster", "abstract": "Videos carry rich visual information including object description, action, interaction, etc., but the existing multimodal large language models (MLLMs) fall short in referential understanding scenarios such as video-based referring. In this paper, we present Artemis, an MLLM that pushes video-based referential understanding to a finer level. Given a video, Artemis receives a natural-language question with a bounding box in any video frame and describes the referred target in the entire video. The key to achieving this goal lies in extracting compact, target-specific video features, where we set a solid baseline by tracking and selecting spatiotemporal features from the video. We train Artemis on the newly established ViderRef45K dataset with 45K video-QA pairs and design a computationally efficient, three-stage training procedure. Results are promising both quantitatively and qualitatively. Additionally, we show that Artemis can be integrated with video grounding and text summarization tools to understand more complex scenarios. 
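The exploration strategy from the diffusion-sampler abstract above (local search in target space feeding a replay buffer) can be sketched independently of any particular sampler: perturb stored samples, keep moves that improve the target log-density, and draw off-policy training batches from the buffer. A minimal sketch assuming a generic `log_reward` function; step size and buffer capacity are illustrative:

```python
import numpy as np

def local_search_step(x, log_reward, step=0.1, rng=None):
    """One Metropolis-style move in target space: propose, accept if good enough."""
    rng = rng or np.random.default_rng()
    y = x + step * rng.standard_normal(x.shape)
    return y if log_reward(y) - log_reward(x) > np.log(rng.random()) else x

class ReplayBuffer:
    """FIFO buffer of discovered samples for off-policy training batches."""
    def __init__(self, capacity=10_000):
        self.data, self.capacity = [], capacity
    def add(self, x):
        self.data.append(x)
        self.data = self.data[-self.capacity:]
    def sample(self, n, rng=None):
        rng = rng or np.random.default_rng()
        return [self.data[i] for i in rng.integers(len(self.data), size=n)]
```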
Code and data are available at https://github.com/NeurIPS24Artemis/Artemis.", "pdf": "https://openreview.net/pdf/6adf08928f9e63c0985b07403580c3d66bc2a7c8.pdf"} {"title": "Out-of-Distribution Detection with a Single Unconditional Diffusion Model", "url": "https://openreview.net/forum?id=tTnFH7D1h4", "detail_url": "https://openreview.net/forum?id=tTnFH7D1h4", "authors": "Alvin Heng,Alexandre H. Thiery,Harold Soh", "tags": "NIPS 2024,Poster", "abstract": "Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples. Traditionally, unsupervised methods utilize a deep generative model for OOD detection. However, such approaches require a new model to be trained for each inlier dataset. This paper explores whether a single model can perform OOD detection across diverse tasks. To that end, we introduce Diffusion Paths (DiffPath), which uses a single diffusion model originally trained to perform unconditional generation for OOD detection. We introduce a novel technique of measuring the rate-of-change and curvature of the diffusion paths connecting samples to the standard normal. Extensive experiments show that with a single model, DiffPath is competitive with prior work using individual models on a variety of OOD tasks involving different distributions. Our code is publicly available at https://github.com/clear-nus/diffpath.", "pdf": "https://openreview.net/pdf/38fb26df19bbe7033e041b9e7356933d0c59aeb7.pdf"} {"title": "Improved learning rates in multi-unit uniform price auctions", "url": "https://openreview.net/forum?id=UN7nXLeh9D", "detail_url": "https://openreview.net/forum?id=UN7nXLeh9D", "authors": "Marius Potfer,Dorian Baudry,Hugo Richard,Vianney Perchet,Cheng Wan", "tags": "NIPS 2024,Poster", "abstract": "Motivated by the strategic participation of electricity producers in the electricity day-ahead market, we study the problem of online learning in repeated multi-unit uniform price auctions focusing on the adversarial opposing bid setting. The main contribution of this paper is the introduction of a new modeling of the bid space. Indeed, we prove that a learning algorithm leveraging the structure of this problem achieves a regret of $\tilde{O}(K^{4/3}T^{2/3})$ under bandit feedback, improving over the bound of $\tilde{O}(K^{7/4}T^{3/4})$ previously obtained in the literature. This improved regret rate is tight up to logarithmic terms.\nInspired by electricity reserve markets, we further introduce a different feedback model under which all winning bids are revealed. This feedback interpolates between the full-information and bandit scenarios depending on the auctions' results. We prove that, under this feedback, the algorithm that we propose achieves regret $\tilde{O}(K^{5/2}\sqrt{T})$.", "pdf": "https://openreview.net/pdf/fae30e5838e0e17f75527e204f35ff9b9664b4af.pdf"} {"title": "Robust Fine-tuning of Zero-shot Models via Variance Reduction", "url": "https://openreview.net/forum?id=ViTUlZvPDu", "detail_url": "https://openreview.net/forum?id=ViTUlZvPDu", "authors": "Beier Zhu,Jiequan Cui,Hanwang Zhang", "tags": "NIPS 2024,Poster", "abstract": "When fine-tuning zero-shot models like CLIP, our desideratum is for the fine-tuned model to excel in both in-distribution (ID) and out-of-distribution (OOD). 
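DiffPath's test statistic above is geometric: characterize the path the diffusion model's probability-flow ODE traces from a sample toward the standard normal by its rate of change and curvature. A schematic sketch computing both via finite differences along a discretized path, assuming `velocity(x, t)` wraps the model's probability-flow drift (a hypothetical interface, not the authors' API):

```python
import numpy as np

def diffpath_stats(x0, velocity, ts):
    """First- and second-derivative magnitudes along a probability-flow path.

    velocity(x, t) is assumed to return dx/dt of the probability-flow ODE."""
    xs, x = [x0], x0
    for t0, t1 in zip(ts[:-1], ts[1:]):     # simple Euler integration of the path
        x = x + (t1 - t0) * velocity(x, t0)
        xs.append(x)
    v = np.diff(np.stack(xs), axis=0)       # rate of change between steps
    a = np.diff(v, axis=0)                  # curvature proxy
    return np.linalg.norm(v, axis=-1).sum(), np.linalg.norm(a, axis=-1).sum()
```

In-distribution samples and outliers trace detectably different path statistics under the single shared model, which is what makes one model reusable across OOD tasks.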
Recently, ensemble-based models (ESM) have been shown to offer significant robustness improvement, while preserving high ID accuracy. However, our study finds that ESMs do not solve the ID-OOD trade-offs: they achieve peak performance for ID and OOD accuracy at different mixing coefficients. When optimized for OOD accuracy, the ensemble model exhibits a noticeable decline in ID accuracy, and vice versa. In contrast, we propose a sample-wise ensembling technique that can simultaneously attain the best ID and OOD accuracy without the trade-offs. Specifically, we construct a Zero-Shot Failure (ZSF) set containing training samples incorrectly predicted by the zero-shot model. For each test sample, we calculate its distance to the ZSF set and assign a higher weight to the fine-tuned model in the ensemble if the distance is small. We term our method Variance Reduction Fine-tuning (VRF), as it effectively reduces the variance in ensemble predictions, thereby decreasing residual error. On ImageNet and five derived distribution shifts, our VRF further improves the OOD accuracy by 1.5 - 2.0 pp over the ensemble baselines while maintaining or increasing ID accuracy. VRF achieves similarly large robustness gains (0.9 - 3.1 pp) on 19 other distribution shift benchmarks. Code is available at https://github.com/BeierZhu/VRF.", "pdf": "https://openreview.net/pdf/a5b94f6f1c4dae09259f68b869e4abbcc65ad8af.pdf"} {"title": "NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory", "url": "https://openreview.net/forum?id=pzJjlnMvk5", "detail_url": "https://openreview.net/forum?id=pzJjlnMvk5", "authors": "Navami Kairanda,Marc Habermann,Christian Theobalt,Vladislav Golyanik", "tags": "NIPS 2024,Poster", "abstract": "Despite existing 3D cloth simulators producing realistic results, they predominantly operate on discrete surface representations (e.g. points and meshes) with a fixed spatial resolution, which often leads to large memory consumption and resolution-dependent simulations. Moreover, back-propagating gradients through the existing solvers is difficult and they hence cannot be easily integrated into modern neural architectures. In response, this paper re-thinks physically plausible cloth simulation: We propose NeuralClothSim, i.e., a new quasistatic cloth simulator using thin shells, in which surface deformation is encoded in neural network weights in the form of a neural field. Our memory-efficient solver operates on a new continuous coordinate-based surface representation called neural deformation fields (NDFs); it supervises NDF equilibria with the laws of the non-linear Kirchhoff-Love shell theory with a non-linear anisotropic material model. NDFs are adaptive: They 1) allocate their capacity to the deformation details and 2) allow surface state queries at arbitrary spatial resolutions without re-training. We show how to train NeuralClothSim while imposing hard boundary conditions and demonstrate multiple applications, such as material interpolation and simulation editing. 
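The VRF rule above is easy to state concretely: weight the fine-tuned model more heavily when a test point lies close to the Zero-Shot Failure set. A minimal sketch, assuming vector features, a mean distance over the nearest ZSF neighbors, and an exponential distance-to-weight mapping with temperature `tau` (the exact mapping and `knn` are our illustrative choices):

```python
import numpy as np

def vrf_predict(feat, zsf_feats, p_zeroshot, p_finetuned, tau=1.0, knn=5):
    """Sample-wise ensemble of zero-shot and fine-tuned predictions.

    feat: (d,) test feature; zsf_feats: (m, d) Zero-Shot Failure set features."""
    d = np.sort(np.linalg.norm(zsf_feats - feat, axis=1))[:knn].mean()
    w = np.exp(-d / tau)            # close to ZSF -> trust the fine-tuned model
    return w * p_finetuned + (1.0 - w) * p_zeroshot
```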
The experimental results highlight the effectiveness of our continuous neural formulation.", "pdf": "https://openreview.net/pdf/3dec444f18c27dd63e0072ceec22ae0f3f5068a9.pdf"} {"title": "Strategic Multi-Armed Bandit Problems Under Debt-Free Reporting", "url": "https://openreview.net/forum?id=WqNfihAcu5", "detail_url": "https://openreview.net/forum?id=WqNfihAcu5", "authors": "Ahmed Ben Yahmed,Cl\u00e9ment Calauz\u00e8nes,Vianney Perchet", "tags": "NIPS 2024,Poster", "abstract": "We examine multi-armed bandit problems featuring strategic arms under debt-free reporting. In this context, each arm is characterized by a bounded support reward distribution and strategically aims to maximize its own utility by retaining a portion of the observed reward, potentially disclosing only a fraction of it to the player. This scenario unfolds as a game over $T$ rounds, leading to a competition of objectives between the player, aiming to minimize regret, and the arms, motivated by the desire to maximize their individual utilities. To address these dynamics, we propose an algorithm that establishes an equilibrium wherein each arm behaves truthfully and discloses as much of its rewards as possible. Utilizing this algorithm, the player can attain the second-highest average (true) reward among arms, with a cumulative regret bounded by $O(\log(T)/\Delta)$ (problem-dependent) or $O(\sqrt{T\log(T)})$ (worst-case).", "pdf": "https://openreview.net/pdf/f75b1863c05310137343a4f5163350bcb0f1c8e2.pdf"} {"title": "Homology Consistency Constrained Efficient Tuning for Vision-Language Models", "url": "https://openreview.net/forum?id=veMnGKXvTx", "detail_url": "https://openreview.net/forum?id=veMnGKXvTx", "authors": "Huatian Zhang,Lei Zhang,Yongdong Zhang,Zhendong Mao", "tags": "NIPS 2024,Poster", "abstract": "Efficient transfer learning has shown remarkable performance in tuning large-scale vision-language models (VLMs) toward downstream tasks with limited data resources. The key challenge of efficient transfer lies in adjusting image-text alignment to be task-specific while preserving pre-trained general knowledge. However, existing methods adjust image-text alignment merely on a set of observed samples, e.g., a data set or an external knowledge base, which cannot guarantee that the correspondence of general concepts between image and text latent manifolds is kept undisrupted, and thereby yields weak generalization of the adjusted alignment. In this work, we propose a Homology Consistency (HC) constraint for efficient transfer on VLMs, which explicitly constrains the correspondence of image and text latent manifolds through structural equivalence based on persistent homology in downstream tuning. Specifically, we build a simplicial complex on top of the data to mimic the topology of latent manifolds, then track the persistence of the homology classes of topological features across multiple scales, and guide the directions of persistence tracks in image and text manifolds to coincide with each other, with an additional deviating perturbation. For practical application, we tailor the implementation of our proposed HC constraint for two main paradigms of adapter tuning. 
Extensive experiments on few-shot learning over 11 datasets and domain generalization demonstrate the effectiveness and robustness of our method.", "pdf": "https://openreview.net/pdf/5cab7ed9375bde4b2a85485640533b73a623b658.pdf"} {"title": "DPIC: Decoupling Prompt and Intrinsic Characteristics for LLM Generated Text Detection", "url": "https://openreview.net/forum?id=BZh05P2EoN", "detail_url": "https://openreview.net/forum?id=BZh05P2EoN", "authors": "Xiao Yu,Yuang Qi,Kejiang Chen,Guoqiang Chen,Xi Yang,PENGYUAN ZHU,Xiuwei Shang,Weiming Zhang,Nenghai Yu", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have the potential to generate texts that pose risks of misuse, such as plagiarism, planting fake reviews on e-commerce platforms, or creating inflammatory false tweets. Consequently, detecting whether a text is generated by LLMs has become increasingly important. Existing high-quality detection methods usually require access to the interior of the model to extract the intrinsic characteristics. However, since we do not have access to the interior of the black-box model, we must resort to surrogate models, which impacts detection quality. In order to achieve high-quality detection of black-box models, we would like to extract deep intrinsic characteristics of texts generated by the black-box model. We view the generation process as a coupled process of prompt and intrinsic characteristics of the generative model. Based on this insight, we propose the decoupling prompt and intrinsic characteristics (DPIC) method for LLM-generated text detection. Specifically, given a candidate text, DPIC employs an auxiliary LLM to reconstruct the prompt corresponding to the candidate text, then uses the prompt to regenerate text by the auxiliary LLM, which makes the candidate text and the regenerated text align with their prompts, respectively. Then, the similarity between the candidate text and the regenerated text is used as a detection feature, thus eliminating the prompt in the detection process, which allows the detector to focus on the intrinsic characteristics of the generative model. Compared to the baselines, DPIC has achieved an average improvement of 6.76\% and 2.91\% in detecting texts from different domains generated by GPT4 and Claude3, respectively.", "pdf": "https://openreview.net/pdf/9a83f57578d2530d0ec2e5ef7cb874b3d92d68a9.pdf"} {"title": "4-bit Shampoo for Memory-Efficient Network Training", "url": "https://openreview.net/forum?id=ASqdVeifn7", "detail_url": "https://openreview.net/forum?id=ASqdVeifn7", "authors": "Sike Wang,Pan Zhou,Jia Li,Hua Huang", "tags": "NIPS 2024,Poster", "abstract": "Second-order optimizers, maintaining a matrix termed a preconditioner, are superior to first-order optimizers in both theory and practice.\nThe states forming the preconditioner and its inverse root restrict the maximum size of models trained by second-order optimizers. To address this, compressing 32-bit optimizer states to lower bitwidths has shown promise in reducing memory usage. However, current approaches only pertain to first-order optimizers. In this paper, we propose the first 4-bit second-order optimizers, exemplified by 4-bit Shampoo, maintaining performance similar to that of 32-bit ones. We show that quantizing the eigenvector matrix of the preconditioner in 4-bit Shampoo is remarkably better than quantizing the preconditioner itself both theoretically and experimentally. 
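The DPIC detection loop above reduces to three calls: reconstruct a prompt from the candidate text, regenerate text from that prompt, and score similarity. A schematic sketch with stubbed components (`invert_prompt`, `generate`, and `similarity` are hypothetical placeholders for the auxiliary LLM and an embedding-based scorer, not a published API):

```python
def dpic_score(candidate, invert_prompt, generate, similarity):
    """Higher similarity between candidate and regeneration suggests the
    candidate shares the generative model's intrinsic characteristics."""
    prompt = invert_prompt(candidate)    # auxiliary LLM reconstructs a prompt
    regenerated = generate(prompt)       # auxiliary LLM regenerates from it
    return similarity(candidate, regenerated)

def is_llm_generated(candidate, threshold, **fns):
    return dpic_score(candidate, **fns) >= threshold
```

Because both texts are conditioned on the same reconstructed prompt, the prompt's influence roughly cancels in the comparison, leaving the model-specific signal.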
By rectifying the orthogonality of the quantized eigenvector matrix, we enhance the approximation of the preconditioner's eigenvector matrix, which also benefits the computation of its inverse 4-th root. Besides, we find that linear square quantization slightly outperforms dynamic tree quantization when quantizing second-order optimizer states. Evaluation on various networks for image classification and natural language modeling demonstrates that our 4-bit Shampoo achieves comparable performance to its 32-bit counterpart while being more memory-efficient.", "pdf": "https://openreview.net/pdf/5ffe9d21278adcfc9480d4a3ad17d296600c88d3.pdf"} {"title": "Agent Planning with World Knowledge Model", "url": "https://openreview.net/forum?id=j6kJSS9O6I", "detail_url": "https://openreview.net/forum?id=j6kJSS9O6I", "authors": "Shuofei Qiao,Runnan Fang,Ningyu Zhang,Yuqi Zhu,Xiang Chen,Shumin Deng,Yong Jiang,Pengjun Xie,Fei Huang,Huajun Chen", "tags": "NIPS 2024,Poster", "abstract": "Recent endeavors towards directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, they still struggle with brainless trial-and-error in global planning and generating hallucinatory actions in local planning due to their poor understanding of the \"real\" physical world. Imitating humans' mental world knowledge model which provides global prior knowledge before the task and maintains local dynamic knowledge during the task, in this paper, we introduce parametric World Knowledge Model (WKM) to facilitate agent planning. Concretely, we steer the agent model to self-synthesize knowledge from both expert and sampled trajectories. Then we develop WKM, providing prior task knowledge to guide the global planning and dynamic state knowledge to assist the local planning. Experimental results on three real-world simulated datasets with Mistral-7B, Gemma-7B, and Llama-3-8B demonstrate that our method can achieve superior performance compared to various strong baselines. Besides, we analyze to illustrate that our WKM can effectively alleviate the blind trial-and-error and hallucinatory action issues, providing strong support for the agent's understanding of the world. Other interesting findings include: 1) our instance-level task knowledge can generalize better to unseen tasks, 2) weak WKM can guide strong agent model planning, and 3) unified WKM training has promising potential for further development.", "pdf": "https://openreview.net/pdf/c31f6cad990a43b74cdb8a1628859b3b17c73ccc.pdf"} {"title": "Diffusion Imitation from Observation", "url": "https://openreview.net/forum?id=6b6TfDBDOO", "detail_url": "https://openreview.net/forum?id=6b6TfDBDOO", "authors": "Bo-Ruei Huang,Chun-Kai Yang,Chun-Mao Lai,Dai-Jie Wu,Shao-Hua Sun", "tags": "NIPS 2024,Poster", "abstract": "Learning from Observation (LfO) aims to imitate experts by learning from state-only demonstrations without requiring action labels. \nExisting adversarial imitation learning approaches learn a generator agent policy to produce state transitions that are indistinguishable to a discriminator that learns to classify agent and expert state transitions. Despite its simplicity in formulation, these methods are often sensitive to hyperparameters and brittle to train. Motivated by the recent success of diffusion models in generative modeling, we propose to integrate a diffusion model into the adversarial imitation learning from observation framework. 
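Two steps of the 4-bit Shampoo recipe above are easy to show in isolation: uniform (linear) 4-bit quantization of the preconditioner's eigenvector matrix, and re-orthogonalizing the dequantized matrix before use. A PyTorch sketch, using a Björck-style iteration for the rectification (the rectification scheme and per-tensor scaling here are illustrative choices, not necessarily the authors' exact ones):

```python
import torch

def quant4(w):
    """Uniform 4-bit quantization per tensor: 16 signed levels over [-s, s]."""
    s = w.abs().max()
    q = torch.clamp(torch.round(w / s * 7), -8, 7)
    return q.to(torch.int8), s            # int8 container for the 4-bit codes

def dequant4(q, s):
    return q.float() / 7 * s

def rectify_orthogonality(w, iters=3):
    """Bjorck iteration pushes a near-orthogonal matrix back to orthogonality."""
    for _ in range(iters):
        w = 1.5 * w - 0.5 * w @ w.T @ w
    return w

U = torch.linalg.qr(torch.randn(64, 64)).Q   # stand-in eigenvector matrix
q, s = quant4(U)
U_hat = rectify_orthogonality(dequant4(q, s))
```

The rectified eigenvector matrix then feeds the inverse-root computation, which is where the abstract says the orthogonality repair pays off.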
Specifically, we employ a diffusion model to capture expert and agent transitions by generating the next state, given the current state. Then, we reformulate the learning objective to train the diffusion model as a binary classifier and use it to provide ``realness'' rewards for policy learning. Our proposed framework, Diffusion Imitation from Observation (DIFO), demonstrates superior performance in various continuous control domains, including navigation, locomotion, manipulation, and games.", "pdf": "https://openreview.net/pdf/47a8ac81d50bd6fbb33165af250615515ee96f6a.pdf"} {"title": "PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning", "url": "https://openreview.net/forum?id=LyAFfdx8YF", "detail_url": "https://openreview.net/forum?id=LyAFfdx8YF", "authors": "Chengyang Ying,Zhongkai Hao,Xinning Zhou,Xuezhou Xu,Hang Su,Xingxing Zhang,Jun Zhu", "tags": "NIPS 2024,Poster", "abstract": "Designing generalizable agents capable of adapting to diverse embodiments has achieved significant attention in Reinforcement Learning (RL), which is critical for deploying RL agents in various real-world applications. Previous Cross-Embodiment RL approaches have focused on transferring knowledge across embodiments within specific tasks. These methods often result in knowledge tightly coupled with those tasks and fail to adequately capture the distinct characteristics of different embodiments. To address this limitation, we introduce the notion of Cross-Embodiment Unsupervised RL (CEURL), which leverages unsupervised learning to enable agents to acquire embodiment-aware and task-agnostic knowledge through online interactions within reward-free environments. We formulate CEURL as a novel Controlled Embodiment Markov Decision Process (CE-MDP) and systematically analyze CEURL's pre-training objectives under CE-MDP. Based on these analyses, we develop a novel algorithm Pre-trained Embodiment-Aware Control (PEAC) for handling CEURL, incorporating an intrinsic reward function specifically designed for cross-embodiment pre-training. PEAC not only provides an intuitive optimization strategy for cross-embodiment pre-training but also can integrate flexibly with existing unsupervised RL methods, facilitating cross-embodiment exploration and skill discovery. Extensive experiments in both simulated (e.g., DMC and Robosuite) and real-world environments (e.g., legged locomotion) demonstrate that PEAC significantly improves adaptation performance and cross-embodiment generalization, demonstrating its effectiveness in overcoming the unique challenges of CEURL. The project page and code are in https://yingchengyang.github.io/ceurl.", "pdf": "https://openreview.net/pdf/b35ffd6cac571f63f92b782ebf5e83426674ff56.pdf"} {"title": "Adaptive Experimentation When You Can't Experiment", "url": "https://openreview.net/forum?id=2mqiTiJKrx", "detail_url": "https://openreview.net/forum?id=2mqiTiJKrx", "authors": "Yao Zhao,Kwang-Sung Jun,Tanner Fiez,Lalit K Jain", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem. \nAs a motivating example, often online services cannot directly assign users to specific control or treatment experiences either for business or practical reasons. In these settings, naively comparing treatment and control groups that may result from self-selection can lead to biased estimates of underlying treatment effects. 
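In the DIFO formulation above, the diffusion model doubles as a binary classifier over state transitions, and its output becomes a "realness" reward for the policy. A schematic sketch of that reward shaping, assuming `diffusion_logit(s, s_next)` returns the model's expert-vs-agent logit for a transition (a hypothetical interface):

```python
import math

def log_sigmoid(x):
    """Numerically stable log(sigmoid(x)) = -softplus(-x)."""
    return -(max(-x, 0.0) + math.log1p(math.exp(-abs(x))))

def difo_reward(s, s_next, diffusion_logit):
    """GAN-style realness reward for a state transition: log D(s, s')."""
    return log_sigmoid(diffusion_logit(s, s_next))
```

Transitions the diffusion classifier deems expert-like receive reward near zero, while agent-like transitions receive strongly negative reward, which is the usual adversarial-imitation signal.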
\nInstead, online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment. \nOur methodology provides online services with an adaptive experimental design approach for learning the best-performing treatment for such encouragement designs. \nWe consider a more general underlying model captured by a linear structural equation and formulate pure exploration linear bandits in this setting. Though pure exploration has been extensively studied in standard adaptive experimental design settings, we believe this is the first work considering a setting where noise is confounded. Elimination-style algorithms using experimental design methods in combination with a novel finite-time confidence interval on an instrumental variable style estimator are presented with sample complexity upper bounds nearly matching a minimax lower bound. Finally, experiments are conducted that demonstrate the efficacy of our approach.", "pdf": "https://openreview.net/pdf/899ca6f360759a2ba64ba2f899769156ccf4239a.pdf"} {"title": "Bias Amplification in Language Model Evolution: An Iterated Learning Perspective", "url": "https://openreview.net/forum?id=BSYn7ah4KX", "detail_url": "https://openreview.net/forum?id=BSYn7ah4KX", "authors": "Yi Ren,Shangmin Guo,Linlu Qiu,Bailin Wang,Danica J. Sutherland", "tags": "NIPS 2024,Poster", "abstract": "With the widespread adoption of Large Language Models (LLMs), the prevalence of iterative interactions among these models is anticipated to increase. Notably, recent advancements in multi-round on-policy self-improving methods allow LLMs to generate new examples for training subsequent models. At the same time, multi-agent LLM systems, involving automated interactions among agents, are also increasing in prominence. Thus, in both short and long terms, LLMs may actively engage in an evolutionary process. We draw parallels between the behavior of LLMs and the evolution of human culture, as the latter has been extensively studied by cognitive scientists for decades. Our approach involves leveraging Iterated Learning (IL), a Bayesian framework that elucidates how subtle biases are magnified during human cultural evolution, to explain some behaviors of LLMs. This paper outlines key characteristics of agents' behavior in the Bayesian-IL framework, including predictions that are supported by experimental verification with various LLMs. This theoretical framework could help to more effectively predict and guide the evolution of LLMs in desired directions.", "pdf": "https://openreview.net/pdf/a87b0454e169c4fa46c354c446f8b09fa0ea7da9.pdf"} {"title": "Towards Global Optimal Visual In-Context Learning Prompt Selection", "url": "https://openreview.net/forum?id=N2PwbxJ3o6", "detail_url": "https://openreview.net/forum?id=N2PwbxJ3o6", "authors": "Chengming Xu,Chen Liu,Yikai Wang,Yuan Yao,Yanwei Fu", "tags": "NIPS 2024,Poster", "abstract": "Visual In-Context Learning (VICL) is a prevailing way to transfer visual foundation models to new tasks by leveraging contextual information contained in in-context examples to enhance learning and prediction of query sample. The fundamental problem in VICL is how to select the best prompt to activate its power as much as possible, which is equivalent to the ranking problem to test the in-context behavior of each candidate in the alternative set and select the best one. 
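The CPET-LB setting above leans on an instrumental-variable-style estimator: the randomized encouragement is the instrument, and the self-selected treatment is the endogenous regressor. A compact two-stage least squares sketch in NumPy, shown as standard background for the abstract's "instrumental variable style estimator" rather than the paper's finite-time construction:

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    """2SLS: z instruments (n, p), x endogenous regressors (n, q), y outcomes (n,)."""
    x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]  # stage 1: project x onto z
    return np.linalg.lstsq(x_hat, y, rcond=None)[0]   # stage 2: regress y on x_hat
```

Because the encouragement is randomized, the stage-1 projection strips out the self-selection confounding that would bias a naive treatment-vs-control comparison.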
To utilize a more appropriate ranking metric and leverage more comprehensive information among the alternative set, we propose a novel in-context example selection framework to approximately identify the global optimal prompt, i.e. choosing the best performing in-context examples from all alternatives for each query sample. Our method, dubbed Partial2Global, adopts a transformer-based list-wise ranker to provide a more comprehensive comparison within several alternatives, and a consistency-aware ranking aggregator to generate globally consistent ranking. The effectiveness of Partial2Global is validated through experiments on foreground segmentation, single object detection and image colorization, demonstrating that Partial2Global selects consistently better in-context examples compared with other methods, and thus establishes a new state of the art.", "pdf": "https://openreview.net/pdf/a0857600ede5fd887190559242efdad745f8ee0e.pdf"} {"title": "Entropy testing and its application to testing Bayesian networks", "url": "https://openreview.net/forum?id=bMSXeAlCI4", "detail_url": "https://openreview.net/forum?id=bMSXeAlCI4", "authors": "Clement Louis Canonne,Qiping Yang", "tags": "NIPS 2024,Poster", "abstract": "This paper studies the problem of \emph{entropy identity testing}: given sample access to a distribution $p$ and a fully described distribution $q$ (both are discrete distributions over a support of size $k$), and the promise that either $p = q$ or $ | H(p) - H(q) | \geqslant \varepsilon$, where $H(\cdot)$ denotes the Shannon entropy, a tester needs to distinguish between the two cases with high probability. We establish a near-optimal sample complexity bound of $\tilde{\Theta}(\sqrt{k}/\varepsilon + {1}/{\varepsilon^2})$ for this problem, and show how to apply it to the problem of identity testing for in-degree-$d$ $n$-dimensional Bayesian networks, obtaining an upper bound of $\tilde{O}\left( {2^{d / 2} n}/{\varepsilon^2} + {n^2}/{\varepsilon^4} \right)$. This improves on the sample complexity bound of $\tilde{O}(2^{d/2}n^2/\varepsilon^4)$ from Canonne, Diakonikolas, Kane, and Stewart (2020), which required an additional assumption on the structure of the (unknown) Bayesian network.", "pdf": "https://openreview.net/pdf/bd15f158e3c5c9f4e6140e50a0ee6871b798ed6c.pdf"} {"title": "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers", "url": "https://openreview.net/forum?id=preo49P1VY", "detail_url": "https://openreview.net/forum?id=preo49P1VY", "authors": "Sukjun Hwang,Aakash Lahoti,Ratish Puduppully,Tri Dao,Albert Gu", "tags": "NIPS 2024,Poster", "abstract": "A wide array of sequence models are built on a framework modeled after Transformers, comprising alternating sequence mixer and channel mixer layers. This paper studies a unifying *matrix mixer* view of sequence mixers that can be conceptualized as a linear map on the input sequence. This framework encompasses a broad range of well-known sequence models, including the self-attention of Transformers as well as recent strong alternatives such as structured state space models (SSMs), and allows understanding downstream characteristics such as efficiency and expressivity through properties of their structured matrix class. We identify a key axis of matrix parameterizations termed *sequence alignment*, which increases the flexibility and performance of matrix mixers, providing insights into the strong performance of Transformers and recent SSMs such as Mamba. 
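The entropy identity tester above only needs an entropy estimate sharp enough to resolve an $\varepsilon$ gap. A toy plug-in version of the decision rule, using the naive empirical-entropy estimator (the paper's near-optimal tester is more careful; this sketch shows the promise-problem structure only):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_identity_test(samples, q, eps, k):
    """Decide 'p = q' vs '|H(p) - H(q)| >= eps' from samples over {0,...,k-1}."""
    counts = np.bincount(samples, minlength=k).astype(float)
    h_hat = entropy(counts / counts.sum())
    return "p = q" if abs(h_hat - entropy(q)) < eps / 2 else "|H(p)-H(q)| >= eps"

rng = np.random.default_rng(0)
q = np.full(8, 1 / 8)
print(entropy_identity_test(rng.integers(0, 8, size=5000), q, eps=0.2, k=8))
```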
Furthermore, the matrix mixer framework offers a systematic approach to developing sequence mixers with desired properties, allowing us to develop several new sub-quadratic sequence models. In particular, we propose a natural bidirectional extension of the Mamba model (**Hydra**), parameterized as a *quasiseparable matrix mixer*, which demonstrates superior performance over other sequence models including Transformers on non-causal tasks. As a drop-in replacement for attention layers, Hydra outperforms BERT by 0.8 points on the GLUE benchmark and ViT by 2% Top-1 accuracy on ImageNet.", "pdf": "https://openreview.net/pdf/975e174ed724f910ffb63bad51c2578e145b4c50.pdf"} {"title": "Style Adaptation and Uncertainty Estimation for Multi-Source Blended-Target Domain Adaptation", "url": "https://openreview.net/forum?id=KvAaIJhqhI", "detail_url": "https://openreview.net/forum?id=KvAaIJhqhI", "authors": "Yuwu Lu,Haoyu Huang,Xue Hu", "tags": "NIPS 2024,Poster", "abstract": "Blended-target domain adaptation (BTDA), which implicitly mixes multiple sub-target domains into a fine domain, has attracted more attention in recent years. Most previously developed BTDA approaches focus on utilizing a single source domain, which makes it difficult to obtain sufficient feature information for learning domain-invariant representations. Furthermore, different feature distributions derived from different domains may increase the uncertainty of models. To overcome these issues, we propose a style adaptation and uncertainty estimation (SAUE) approach for multi-source blended-target domain adaptation (MBDA). Specifically, we exploit the extra knowledge acquired from the blended-target domain, where a similarity factor is adopted to select more useful target style information for augmenting the source features. Then, to mitigate the negative impact of the domain-specific attributes, we devise a function to estimate and mitigate uncertainty in category prediction. Finally, we construct a simple and lightweight adversarial learning strategy for MBDA, effectively aligning multi-source and blended-target domains without the requirements of domain labels of the target domains. Extensive experiments conducted on several challenging DA benchmarks, including the ImageCLEF-DA, Office-Home, VisDA 2017, and DomainNet datasets, demonstrate the superiority of our method over the state-of-the-art (SOTA) approaches.", "pdf": "https://openreview.net/pdf/44fcdf30db2ab8af2916b00a507f878457f54234.pdf"} {"title": "Toward Real Ultra Image Segmentation: Leveraging Surrounding Context to Cultivate General Segmentation Model", "url": "https://openreview.net/forum?id=nU4lvlMwrt", "detail_url": "https://openreview.net/forum?id=nU4lvlMwrt", "authors": "Sai Wang,Yutian Lin,Yu Wu,Bo Du", "tags": "NIPS 2024,Poster", "abstract": "Existing ultra image segmentation methods suffer from two major challenges, namely the scalability issue (i.e. they lack the stability and generality of standard segmentation models, as they are tailored to specific datasets), and the architectural issue (i.e. they are incompatible with real-world ultra image scenes, as they compromise between image size and computing resources).\nTo tackle these issues, we revisit the classic sliding inference framework, upon which we propose a Surrounding Guided Segmentation framework (SGNet) for ultra image segmentation. 
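The matrix-mixer view in the Hydra abstract above makes the bidirectional extension mechanical: a causal SSM is a (semiseparable) lower-triangular map, and the quasiseparable mixer adds the upper-triangular reverse-direction part plus a diagonal. A toy dense illustration of that decomposition, assuming rank-1 off-diagonal parts for readability (real SSMs evaluate this with linear-time scans, never by materializing the matrix):

```python
import numpy as np

def quasiseparable_mix(x, lo_u, lo_v, hi_u, hi_v, diag):
    """y = (strictly-lower + strictly-upper + diagonal) @ x, rank-1 toy parts."""
    L = np.tril(np.outer(lo_u, lo_v), k=-1)   # forward (left-to-right) mixing
    U = np.triu(np.outer(hi_u, hi_v), k=1)    # backward (right-to-left) mixing
    return (L + U + np.diag(diag)) @ x

T = 8
y = quasiseparable_mix(np.ones(T), *(np.random.rand(T) for _ in range(5)))
```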
\nThe SGNet leverages a larger area around each image patch to refine the general segmentation results of local patches.\nSpecifically, we propose a surrounding context integration module to absorb surrounding context information and extract specific features that are beneficial to local patches. Note that, SGNet can be seamlessly integrated to any general segmentation model.\nExtensive experiments on five datasets demonstrate that SGNet achieves competitive performance and consistent improvements across a variety of general segmentation models, surpassing the traditional ultra image segmentation methods by a large margin.", "pdf": "https://openreview.net/pdf/0eceffa7f6da09e11f0e7396015fca9f4bbc9d09.pdf"} {"title": "Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels", "url": "https://openreview.net/forum?id=7eIaqYrpcs", "detail_url": "https://openreview.net/forum?id=7eIaqYrpcs", "authors": "Yikai Wang,Xinzhou Wang,Zilong Chen,Zhengyi Wang,Fuchun Sun,Jun Zhu", "tags": "NIPS 2024,Poster", "abstract": "Video generative models are receiving particular attention given their ability to generate realistic and imaginative frames. Besides, these models are also observed to exhibit strong 3D consistency, significantly enhancing their potential to act as world simulators. In this work, we present Vidu4D, a novel reconstruction model that excels in accurately reconstructing 4D (i.e., sequential 3D) representations from single generated videos, addressing challenges associated with non-rigidity and frame distortion. This capability is pivotal for creating high-fidelity virtual contents that maintain both spatial and temporal coherence. At the core of Vidu4D is our proposed Dynamic Gaussian Surfels (DGS) technique. DGS optimizes time-varying warping functions to transform Gaussian surfels (surface elements) from a static state to a dynamically warped state. This transformation enables a precise depiction of motion and deformation over time. To preserve the structural integrity of surface-aligned Gaussian surfels, we design the warped-state geometric regularization based on continuous warping fields for estimating normals. Additionally, we learn refinements on rotation and scaling parameters of Gaussian surfels, which greatly alleviates texture flickering during the warping process and enhances the capture of fine-grained appearance details. Vidu4D also contains a novel initialization state that provides a proper start for the warping fields in DGS. Equipping Vidu4D with an existing video generative model, the overall framework demonstrates high-fidelity text-to-4D generation in both appearance and geometry.", "pdf": "https://openreview.net/pdf/fb6f694b71a09764832e80ccdba761541caec4e3.pdf"} {"title": "Segment Anything without Supervision", "url": "https://openreview.net/forum?id=aGqldlOxxY", "detail_url": "https://openreview.net/forum?id=aGqldlOxxY", "authors": "Xudong Wang,Jingfeng Yang,Trevor Darrell", "tags": "NIPS 2024,Poster", "abstract": "The Segmentation Anything Model (SAM) requires labor-intensive data labeling. We present Unsupervised SAM (UnSAM) for promptable and automatic whole-image segmentation that does not require human annotations. UnSAM utilizes a divide-and-conquer strategy to \u201cdiscover\u201d the hierarchical structure of visual scenes. We first leverage top-down clustering methods to partition an unlabeled image into instance/semantic level segments. 
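The SGNet framework above keeps the classic sliding-window pipeline but pairs every local patch with a larger surrounding crop whose features guide the patch prediction. A schematic sketch of just the cropping geometry, assuming a `model(patch, context)` interface (hypothetical), NumPy images, and dimensions divisible by the patch size:

```python
import numpy as np

def sliding_segment(img, model, patch=512, ctx=1024, stride=512):
    """Sliding inference where each patch also sees a larger surrounding view."""
    h, w = img.shape[:2]
    out = np.zeros((h, w), dtype=np.int64)
    pad = (ctx - patch) // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            local = img[y:y + patch, x:x + patch]
            surround = padded[y:y + ctx, x:x + ctx]   # same center, larger view
            out[y:y + patch, x:x + patch] = model(local, surround)
    return out
```

Because the backbone only ever sees fixed-size crops, memory stays bounded regardless of how large the ultra image is.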
For all pixels within a segment, a bottom-up clustering method is employed to iteratively merge them into larger groups, thereby forming a hierarchical structure. These unsupervised multi-granular masks are then utilized to supervise model training. Evaluated across seven popular datasets, UnSAM achieves competitive results with the supervised counterpart SAM, and surpasses the previous state-of-the-art in unsupervised segmentation by 11% in terms of AR. Moreover, we show that supervised SAM can also benefit from our self-supervised labels. By integrating our unsupervised pseudo masks into SA-1B\u2019s ground-truth masks and training UnSAM with only 1% of SA-1B, a lightly semi-supervised UnSAM can often segment entities overlooked by supervised SAM, exceeding SAM\u2019s AR by over 6.7% and AP by 3.9% on SA-1B.", "pdf": "https://openreview.net/pdf/a4f807cce84b03d9ba605485c4c89f5521686bea.pdf"} {"title": "Dense Connector for MLLMs", "url": "https://openreview.net/forum?id=Ioabr42B44", "detail_url": "https://openreview.net/forum?id=Ioabr42B44", "authors": "Huanjin Yao,Wenhao Wu,Taojiannan Yang,YuXin Song,Mengxi Zhang,Haocheng Feng,Yifan Sun,Zhiheng Li,Wanli Ouyang,Jingdong Wang", "tags": "NIPS 2024,Poster", "abstract": "*Do we fully leverage the potential of visual encoder in Multimodal Large Language Models (MLLMs)?* The recent outstanding performance of MLLMs in multimodal understanding has garnered broad attention from both academia and industry. In the current MLLM rat race, the focus seems to be predominantly on the linguistic side. We witness the rise of larger and higher-quality instruction datasets, as well as the involvement of larger-sized LLMs. Yet, scant attention has been directed towards the visual signals utilized by MLLMs, often assumed to be the final high-level features extracted by a frozen visual encoder. In this paper, we introduce the **Dense Connector** - a simple, effective, and plug-and-play vision-language connector that significantly enhances existing MLLMs by leveraging multi-layer visual features, with minimal additional computational overhead. Building on this, we also propose the Efficient Dense Connector, which achieves performance comparable to LLaVA-v1.5 with only 25% of the visual tokens. Furthermore, our model, trained solely on images, showcases remarkable zero-shot capabilities in video understanding as well. Experimental results across various vision encoders, image resolutions, training dataset scales, varying sizes of LLMs (2.7B\u219270B), and diverse architectures of MLLMs (e.g., LLaVA-v1.5, LLaVA-NeXT and Mini-Gemini) validate the versatility and scalability of our approach, achieving state-of-the-art performance across 19 image and video benchmarks. We hope that this work will provide valuable experience and serve as a basic module for future MLLM development. Code is available at https://github.com/HJYao00/DenseConnector.", "pdf": "https://openreview.net/pdf/e2a610dbdf265e0043ef7b83516b5f095489aab4.pdf"} {"title": "Computerized Adaptive Testing via Collaborative Ranking", "url": "https://openreview.net/forum?id=5Fl4zgXbsW", "detail_url": "https://openreview.net/forum?id=5Fl4zgXbsW", "authors": "Zirui Liu,Yan Zhuang,Qi Liu,Jiatong Li,Yuren Zhang,Zhenya Huang,Jinze Wu,Shijin Wang", "tags": "NIPS 2024,Poster", "abstract": "As the deep integration of machine learning and intelligent education, Computerized Adaptive Testing (CAT) has received more and more research attention. 
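UnSAM's divide-and-conquer recipe above ends with a bottom-up merge of the top-down segments into a hierarchy, read off at multiple granularities. A schematic sketch of that merging stage with SciPy's agglomerative clustering, assuming `seg_feats` holds one mean feature vector per initial segment (the feature choice and cut heights are illustrative):

```python
from scipy.cluster.hierarchy import linkage, fcluster

def multi_granular_labels(seg_feats, cut_heights):
    """Bottom-up merge of initial segments into coarser groups.

    seg_feats: (num_segments, d) array, one mean feature per top-down segment.
    Returns {height: label_per_segment} at each requested granularity."""
    Z = linkage(seg_feats, method="average")   # agglomerative hierarchy
    return {h: fcluster(Z, t=h, criterion="distance") for h in cut_heights}
```

Each cut of the hierarchy yields one set of pseudo-masks, and the union over cuts supplies the multi-granular supervision described above.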
Compared to traditional paper-and-pencil tests, CAT can deliver both personalized and interactive assessments by automatically adjusting testing questions according to the performance of students during the test process. Therefore, CAT has been recognized as an efficient testing methodology capable of accurately estimating a student\u2019s ability with a minimal number of questions, leading to its widespread adoption in mainstream selective exams such as the GMAT and GRE. However, just improving the accuracy of ability estimation is far from satisfactory in the real-world scenarios, since an accurate ranking of students is usually more important (e.g., in high-stakes exams). Considering the shortage of existing CAT solutions in student ranking, this paper emphasizes the importance of aligning test outcomes (student ranks) with the true underlying abilities of students. Along this line, different from the conventional independent testing paradigm among students, we propose a novel collaborative framework, Collaborative Computerized Adaptive Testing (CCAT), that leverages inter-student information to enhance student ranking. By using collaborative students as anchors to assist in ranking test-takers, CCAT can give both theoretical guarantees and experimental validation for ensuring ranking consistency.", "pdf": "https://openreview.net/pdf/664958f1a9b5e09fe00047cbd37f837926af2435.pdf"} {"title": "TabPedia: Towards Comprehensive Visual Table Understanding with Concept Synergy", "url": "https://openreview.net/forum?id=aou5yrBqKy", "detail_url": "https://openreview.net/forum?id=aou5yrBqKy", "authors": "Weichao Zhao,Hao Feng,Qi Liu,Jingqun Tang,Binghong Wu,Lei Liao,Shu Wei,Yongjie Ye,Hao Liu,Wengang Zhou,Houqiang Li,Can Huang", "tags": "NIPS 2024,Poster", "abstract": "Tables contain factual and quantitative data accompanied by various structures and contents that pose challenges for machine comprehension. Previous methods generally design task-specific architectures and objectives for individual tasks, resulting in modal isolation and intricate workflows. In this paper, we present a novel large vision-language model, TabPedia, equipped with a concept synergy mechanism. In this mechanism, all the involved diverse visual table understanding (VTU) tasks and multi-source visual embeddings are abstracted as concepts. This unified framework allows TabPedia to seamlessly integrate VTU tasks, such as table detection, table structure recognition, table querying, and table question answering, by leveraging the capabilities of large language models (LLMs). Moreover, the concept synergy mechanism enables table perception-related and comprehension-related tasks to work in harmony, as they can effectively leverage the needed clues from the corresponding source perception embeddings. Furthermore, to better evaluate the VTU task in real-world scenarios, we establish a new and comprehensive table VQA benchmark, ComTQA, featuring approximately 9,000 QA pairs. Extensive quantitative and qualitative experiments on both table perception and comprehension tasks, conducted across various public benchmarks, validate the effectiveness of our TabPedia. The superior performance further confirms the feasibility of using LLMs for understanding visual tables when all concepts work in synergy. The benchmark ComTQA has been open-sourced at https://huggingface.co/datasets/ByteDance/ComTQA. 
The source code and model also have been released at https://github.com/zhaowc-ustc/TabPedia.", "pdf": "https://openreview.net/pdf/cbd7c23801a19d9fd3eac5ef9ee16a8a27e68549.pdf"} {"title": "AP-Adapter: Improving Generalization of Automatic Prompts on Unseen Text-to-Image Diffusion Models", "url": "https://openreview.net/forum?id=46V9axmOuU", "detail_url": "https://openreview.net/forum?id=46V9axmOuU", "authors": "Yuchen Fu,Zhiwei Jiang,Yuliang Liu,Cong Wang,Zexuan Deng,Zhaoling Chen,Qing Gu", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in Automatic Prompt Optimization (APO) for text-to-image generation have streamlined user input while ensuring high-quality image output. However, most APO methods are trained assuming a fixed text-to-image model, which is impractical given the emergence of new models. To address this, we propose a novel task, model-generalized automatic prompt optimization (MGAPO), which trains APO methods on a set of known models to enable generalization to unseen models during testing. MGAPO presents significant challenges. First, we experimentally confirm the suboptimal performance of existing APO methods on unseen models. We then introduce a two-stage prompt optimization method, AP-Adapter. In the first stage, a large language model is used to rewrite the prompts. In the second stage, we propose a novel method to construct an enhanced representation space by leveraging inter-model differences. This space captures the characteristics of multiple domain models, storing them as domain prototypes. These prototypes serve as anchors to adjust prompt representations, enabling generalization to unseen models. The optimized prompt representations are subsequently used to generate conditional representations for controllable image generation. We curate a multi-modal, multi-model dataset that includes multiple diffusion models and their corresponding text-image data, and conduct experiments under a model generalization setting. The experimental results demonstrate the AP-Adapter's ability to enable the automatic prompts to generalize well to previously unseen diffusion models, generating high-quality images.", "pdf": "https://openreview.net/pdf/4df440113e1bfd1181f3efaab7e59107ab6447b0.pdf"} {"title": "HOI-Swap: Swapping Objects in Videos with Hand-Object Interaction Awareness", "url": "https://openreview.net/forum?id=GkHXBasQwm", "detail_url": "https://openreview.net/forum?id=GkHXBasQwm", "authors": "Zihui Xue,Mi Luo,Changan Chen,Kristen Grauman", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of precisely swapping objects in videos, with a focus on those interacted with by hands, given one user-provided reference object image. Despite the great advancements that diffusion models have made in video editing recently, these models often fall short in handling the intricacies of hand-object interactions (HOI), failing to produce realistic edits---especially when object swapping results in object shape or functionality changes. To bridge this gap, we present HOI-Swap, a novel diffusion-based video editing framework trained in a self-supervised manner. Designed in two stages, the first stage focuses on object swapping in a single frame with HOI awareness; the model learns to adjust the interaction patterns, such as the hand grasp, based on changes in the object's properties. 
The second stage extends the single-frame edit across the entire sequence; we achieve controllable motion alignment with the original video by: (1) warping a new sequence from the stage-I edited frame based on sampled motion points and (2) conditioning video generation on the warped sequence. Comprehensive qualitative and quantitative evaluations demonstrate that HOI-Swap significantly outperforms existing methods, delivering high-quality video edits with realistic HOIs.", "pdf": "https://openreview.net/pdf/6b67b74d4bc3c6670207e065b9a583c4cf792cab.pdf"} {"title": "D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models", "url": "https://openreview.net/forum?id=UIOjGTKHQG", "detail_url": "https://openreview.net/forum?id=UIOjGTKHQG", "authors": "yikun jiang,Huanyu Wang,Lei Xie,Hanbin Zhao,Chao Zhang,Hui Qian,John C.S. Lui", "tags": "NIPS 2024,Poster", "abstract": "Large language models have shown an impressive societal impact owing to their excellent understanding and logical reasoning skills. However, such strong ability relies on a huge amount of computing resources, which makes it difficult to deploy LLMs on computing resource-constrained platforms. Currently, LLMs process each token equivalently, but we argue that not every word is equally important. Some words should not be allocated excessive computing resources, particularly for dispensable terms in simple questions. In this paper, we propose a novel dynamic inference paradigm for LLMs, namely D-LLMs, which adaptively allocate computing resources in token processing. We design a dynamic decision module for each transformer layer that decides whether a network unit should be executed or skipped. Moreover, we tackle the issue of adapting D-LLMs to real-world applications, specifically concerning the missing KV-cache when layers are skipped. To overcome this, we propose a simple yet effective eviction policy to exclude the skipped layers from subsequent attention calculations. The eviction policy not only enables D-LLMs to be compatible with prevalent applications but also reduces considerable storage resources. Experimentally, D-LLMs show superior performance in terms of computational cost and KV storage utilization, reducing both by up to 45\% on Q\&A, summarization, and math-solving tasks, and by up to 50\% on commonsense reasoning tasks.", "pdf": "https://openreview.net/pdf/22bc9c92fe5f55a971fdc0b7090cca2f2b5453c6.pdf"} {"title": "DisenGCD: A Meta Multigraph-assisted Disentangled Graph Learning Framework for Cognitive Diagnosis", "url": "https://openreview.net/forum?id=lJuQxkDbDo", "detail_url": "https://openreview.net/forum?id=lJuQxkDbDo", "authors": "Shangshang Yang,Mingyang Chen,Ziwen Wang,Xiaoshan Yu,Panpan Zhang,Haiping Ma,Xingyi Zhang", "tags": "NIPS 2024,Poster", "abstract": "Existing graph learning-based cognitive diagnosis (CD) methods have achieved relatively good results, but their student, exercise, and concept representations are learned and exchanged in an implicit unified graph, which causes the interaction-agnostic exercise and concept representations to be learned poorly and fails to provide robustness against noise in students' interactions. Moreover, lower-order exercise latent representations obtained in shallow layers are not well exploited when learning the student representation.
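A minimal sketch of the layer-skipping idea described in the D-LLM abstract above, on a toy decoder. The gate and the cache handling are illustrative stand-ins: the paper's decision module and eviction policy are learned and more involved, while here a skipped layer simply never contributes an entry to its per-layer cache:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_layers = 64, 6

class SkipGate(nn.Module):
    """Toy per-layer decision module: score the token's hidden state, run the layer iff score > 0."""
    def __init__(self, d):
        super().__init__()
        self.scorer = nn.Linear(d, 1)
    def forward(self, h):
        return self.scorer(h).item() > 0.0

layers = nn.ModuleList([nn.Sequential(nn.Linear(d_model, d_model), nn.GELU())
                        for _ in range(n_layers)])
gates = nn.ModuleList([SkipGate(d_model) for _ in range(n_layers)])
kv_cache = [[] for _ in range(n_layers)]  # per-layer cache; skipped layers get no entry (the "eviction")

def decode_one_token(h):
    for i in range(n_layers):
        if gates[i](h):
            kv_cache[i].append(h.detach())  # stand-in for this layer's key/value entry
            h = h + layers[i](h)            # residual update
        # else: layer skipped for this token, so it is excluded from later attention at layer i
    return h

h = torch.randn(1, d_model)
out = decode_one_token(h)
print(out.shape, [len(c) for c in kv_cache])
```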
\nTo tackle these issues, this paper proposes a meta multigraph-assisted disentangled graph learning framework for CD (DisenGCD), which learns three types of representations on three disentangled graphs: student-exercise-concept interaction, exercise-concept relation, and concept dependency graphs, respectively. \nSpecifically, the latter two graphs are first disentangled from the interaction graph. \nThen, the student representation is learned from the interaction graph by a devised meta multigraph learning module; multiple learnable propagation paths in this module enable the current student latent representation to access lower-order exercise latent representations,\nwhich leads to more effective and robust student representations; \nthe exercise and concept representations are learned on the relation and dependency graphs by graph attention modules. \nFinally, a novel diagnostic function is devised to handle the three disentangled representations for prediction. Experiments show that DisenGCD achieves better performance and robustness than state-of-the-art CD methods and demonstrate the effectiveness of the disentangled learning framework and meta multigraph module. The source code is available at https://github.com/BIMK/Intelligent-Education/tree/main/DisenGCD.", "pdf": "https://openreview.net/pdf/e51af5d4e5dbe15572e497a2ca8aadbb75729501.pdf"} {"title": "Evidential Stochastic Differential Equations for Time-Aware Sequential Recommendation", "url": "https://openreview.net/forum?id=1PmsSugB87", "detail_url": "https://openreview.net/forum?id=1PmsSugB87", "authors": "Krishna Prasad Neupane,Ervine Zheng,Qi Yu", "tags": "NIPS 2024,Poster", "abstract": "Sequential recommender systems are designed to capture users' evolving interests over time. Existing methods typically assume a uniform time interval between consecutive user interactions and may not capture users' continuously evolving behavior in the short and long term. In reality, the actual time intervals of user interactions vary dramatically. Consequently, as the time interval between interactions increases, so does the uncertainty in user behavior. Intuitively, it is beneficial to establish a correlation between the interaction time interval and the model uncertainty to provide effective recommendations. To this end, we formulate a novel Evidential Neural Stochastic Differential Equation (*E-NSDE*) to seamlessly integrate NSDE and evidential learning for effective time-aware sequential recommendations. The NSDE enables the model to learn users' fine-grained time-evolving behavior by capturing continuous user representation while evidential learning quantifies both aleatoric and epistemic uncertainties considering interaction time interval to provide model confidence during prediction. Furthermore, we derive a mathematical relationship between the interaction time interval and model uncertainty to guide the learning process.
Experiments on real-world data demonstrate the effectiveness of the proposed method compared to the SOTA methods.", "pdf": "https://openreview.net/pdf/2beec8cffa58544ee59efe7e2af69ee39a90bf01.pdf"} {"title": "Local and Adaptive Mirror Descents in Extensive-Form Games", "url": "https://openreview.net/forum?id=HU2uyDjAcy", "detail_url": "https://openreview.net/forum?id=HU2uyDjAcy", "authors": "C\u00f4me Fiegel,Pierre Menard,Tadashi Kozuno,Remi Munos,Vianney Perchet,Michal Valko", "tags": "NIPS 2024,Poster", "abstract": "We study how to learn $\\epsilon$-optimal strategies in zero-sum imperfect information games (IIG) with *trajectory feedback*. In this setting, players update their policies sequentially, based on their observations over a fixed number of episodes denoted by $T$. Most existing procedures suffer from high variance due to the use of importance sampling over sequences of actions. To reduce this variance, we consider a *fixed sampling* approach, where players still update their policies over time, but with observations obtained through a given fixed sampling policy. Our approach is based on an adaptive Online Mirror Descent (OMD) algorithm that applies OMD locally to each information set, using individually decreasing learning rates and a *regularized loss*. We show that this approach guarantees a convergence rate of $\\tilde{\\mathcal{O}}(T^{-1/2})$ with high probability and has a near-optimal dependence on the game parameters when applied with the best theoretical choices of learning rates and sampling policies. To achieve these results, we generalize the notion of OMD stabilization, allowing for time-varying regularization with convex increments.", "pdf": "https://openreview.net/pdf/d611ef23f6215c52bc55a606306c5aa1359e045c.pdf"} {"title": "Adaptive Important Region Selection with Reinforced Hierarchical Search for Dense Object Detection", "url": "https://openreview.net/forum?id=f8MrWxlnRz", "detail_url": "https://openreview.net/forum?id=f8MrWxlnRz", "authors": "Dingrong Wang,Hitesh Sapkota,Qi Yu", "tags": "NIPS 2024,Poster", "abstract": "Existing state-of-the-art dense object detection techniques tend to produce a large number of false positive detections on difficult images with complex scenes because they focus on ensuring a high recall. To improve the detection accuracy, we propose an Adaptive Important Region Selection (AIRS) framework guided by Evidential Q-learning coupled with a uniquely designed reward function. Inspired by human visual attention, our detection model conducts object search in a top-down, hierarchical fashion. It starts from the top of the hierarchy with the coarsest granularity and then identifies the potential patches likely to contain objects of interest. It then discards non-informative patches and progressively moves downward on the selected ones for a fine-grained search. The proposed evidential Q-learning systematically encodes epistemic uncertainty in its evidential-Q value to encourage the exploration of unknown patches, especially in the early phase of model training. In this way, the proposed model dynamically balances exploration-exploitation to cover both highly valuable and informative patches. 
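The local OMD update in the mirror-descent abstract above has a familiar closed form when the regularizer is the negative entropy. A minimal sketch at a single information set, with an importance-weighted loss estimate and individually decreasing learning rates; the paper's fixed sampling policy and regularized loss are omitted here, so this shows only the vanilla entropic-OMD core:

```python
import numpy as np

rng = np.random.default_rng(0)

def omd_entropic_step(policy, loss_hat, eta):
    # OMD with negative-entropy regularizer: argmin_x eta*<loss_hat, x> + KL(x || policy)
    # has the closed-form multiplicative-weights solution below.
    w = policy * np.exp(-eta * loss_hat)
    return w / w.sum()

true_losses = np.array([0.9, 0.4, 0.6])   # three actions at one information set
policy = np.ones(3) / 3
for t in range(1, 201):
    a = rng.choice(3, p=policy)            # play, observe only the chosen action's loss
    loss_hat = np.zeros(3)
    loss_hat[a] = true_losses[a] / policy[a]  # unbiased importance-sampling estimate
    policy = omd_entropic_step(policy, loss_hat, eta=1.0 / np.sqrt(t))
print(policy)                              # concentrates on the lowest-loss action
```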
Theoretical analysis and extensive experiments on multiple datasets demonstrate that our proposed framework outperforms the SOTA models.", "pdf": "https://openreview.net/pdf/53d742bd3904f4e7d7374a3cdcef414ae2931040.pdf"} {"title": "Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity", "url": "https://openreview.net/forum?id=c8cpMlPUbI", "detail_url": "https://openreview.net/forum?id=c8cpMlPUbI", "authors": "Vahid Balazadeh,Keertana Chidambaram,Viet Nguyen,Rahul Krishnan,Vasilis Syrgkanis", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of online sequential decision-making given auxiliary demonstrations from _experts_ who made their decisions based on unobserved contextual information. These demonstrations can be viewed as solving related but slightly different tasks than what the learner faces. This setting arises in many application domains, such as self-driving cars, healthcare, and finance, where expert demonstrations are made using contextual information, which is not recorded in the data available to the learning agent. We model the problem as a zero-shot meta-reinforcement learning setting with an unknown task distribution and a Bayesian regret minimization objective, where the unobserved tasks are encoded as parameters with an unknown prior. We propose the Experts-as-Priors algorithm (ExPerior), an empirical Bayes approach that utilizes expert data to establish an informative prior distribution over the learner's decision-making problem. This prior enables the application of any Bayesian approach for online decision-making, such as posterior sampling. We demonstrate that our strategy surpasses existing behaviour cloning and online algorithms, as well as online-offline baselines for multi-armed bandits, Markov decision processes (MDPs), and partially observable MDPs, showcasing the broad reach and utility of ExPerior in using expert demonstrations across different decision-making setups.", "pdf": "https://openreview.net/pdf/b7ab3e462b98489b467caa87b2bff712f12826dd.pdf"} {"title": "Parameter Disparities Dissection for Backdoor Defense in Heterogeneous Federated Learning", "url": "https://openreview.net/forum?id=g8wnC1E1OS", "detail_url": "https://openreview.net/forum?id=g8wnC1E1OS", "authors": "Wenke Huang,Mang Ye,Zekun Shi,Guancheng Wan,He Li,Bo Du", "tags": "NIPS 2024,Poster", "abstract": "Backdoor attacks pose a serious threat to federated systems, where malicious clients optimize on the triggered distribution to mislead the global model towards a predefined target. Existing backdoor defense methods typically require either homogeneous assumption, validation datasets, or client optimization conflicts. In our work, we observe that benign heterogeneous distributions and malicious triggered distributions exhibit distinct parameter importance degrees. We introduce the Fisher Discrepancy Cluster and Rescale (FDCR) method, which utilizes Fisher Information to calculate the degree of parameter importance for local distributions. This allows us to reweight client parameter updates and identify those with large discrepancies as backdoor attackers. Furthermore, we prioritize rescaling important parameters to expedite adaptation to the target distribution, encouraging significant elements to contribute more while diminishing the influence of trivial ones. This approach enables FDCR to handle backdoor attacks in heterogeneous federated learning environments. 
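A minimal sketch of the general shape of the ExPerior recipe from the abstract above, for Bernoulli bandits: turn expert choices (made with hidden context) into an informative prior, then run posterior (Thompson) sampling. The prior construction below is an illustrative assumption, not the paper's empirical-Bayes estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3
expert_actions = rng.choice(K, size=200, p=[0.7, 0.2, 0.1])  # we see choices, not the experts' context

# Empirical-Bayes flavor: tilt per-arm Beta priors toward arms the experts chose often.
freq = np.bincount(expert_actions, minlength=K) / len(expert_actions)
alpha, beta = 1 + 10 * freq, 1 + 10 * (1 - freq)

true_means = np.array([0.6, 0.5, 0.3])
for t in range(500):                      # Thompson sampling under the expert-informed prior
    theta = rng.beta(alpha, beta)
    a = int(np.argmax(theta))
    r = int(rng.random() < true_means[a])
    alpha[a] += r
    beta[a] += 1 - r
print(alpha + beta - 2)                    # pull counts, concentrated on the best arm
```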
Empirical results on various heterogeneous federated scenarios under backdoor attacks demonstrate the effectiveness of our method.", "pdf": "https://openreview.net/pdf/72add9e5ac2eb79808943a4a8a63653f60df33de.pdf"} {"title": "Provable and Efficient Dataset Distillation for Kernel Ridge Regression", "url": "https://openreview.net/forum?id=WI2VpcBdnd", "detail_url": "https://openreview.net/forum?id=WI2VpcBdnd", "authors": "Yilan Chen,Wei Huang,Tsui-Wei Weng", "tags": "NIPS 2024,Poster", "abstract": "Deep learning models are now trained on increasingly larger datasets, making it crucial to reduce computational costs and improve data quality. Dataset distillation aims to distill a large dataset into a small synthesized dataset such that models trained on it can achieve similar performance to those trained on the original dataset. While there have been many empirical efforts to improve dataset distillation algorithms, a thorough theoretical analysis and provable, efficient algorithms are still lacking. In this paper, by focusing on dataset distillation for kernel ridge regression (KRR), we show that one data point per class is already necessary and sufficient to recover the original model's performance in many settings. For linear ridge regression and KRR with surjective feature mappings, we provide necessary and sufficient conditions for the distilled dataset to recover the original model's parameters. For KRR with injective feature mappings of deep neural networks, we show that while one data point per class is not sufficient in general, $k+1$ data points can be sufficient for deep linear neural networks, where $k$ is the number of classes. Our theoretical results enable directly constructing analytical solutions for distilled datasets, resulting in a provable and efficient dataset distillation algorithm for KRR. We verify our theory experimentally and show that our algorithm outperforms previous work such as KIP while being significantly more efficient, e.g., 15840$\times$ faster on CIFAR-100. Our code is available at \href{https://github.com/Trustworthy-ML-Lab/provable-efficient-dataset-distill-KRR}{GitHub}.", "pdf": "https://openreview.net/pdf/5c86731ee6a56c2c8c033dcb75a73ea56d8b41ec.pdf"} {"title": "Exponential Quantum Communication Advantage in Distributed Inference and Learning", "url": "https://openreview.net/forum?id=gGR9dJbe3r", "detail_url": "https://openreview.net/forum?id=gGR9dJbe3r", "authors": "Dar Gilboa,Hagay Michaeli,Daniel Soudry,Jarrod Ryan McClean", "tags": "NIPS 2024,Poster", "abstract": "Training and inference with large machine learning models that far exceed the memory capacity of individual devices necessitates the design of distributed architectures, forcing one to contend with communication constraints. We present a framework for distributed computation over a quantum network in which data is encoded into specialized quantum states. We prove that for models within this framework, inference and training using gradient descent can be performed with exponentially less communication compared to their classical analogs, and with relatively modest overhead relative to standard gradient-based methods. We show that certain graph neural networks are particularly amenable to implementation within this framework, and moreover present empirical evidence that they perform well on standard benchmarks.\nTo our knowledge, this is the first example of exponential quantum advantage for a generic class of machine learning problems, an advantage that holds regardless of the data encoding cost.
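To illustrate the FDCR observation above (benign and triggered distributions induce different parameter-importance patterns), here is a toy sketch that scores clients by a diagonal Fisher approximation and flags large discrepancies. The threshold rule and the synthetic "triggered" client are illustrative; the paper's clustering and rescaling steps are not reproduced:

```python
import numpy as np

def fisher_diagonal(per_example_grads):
    # Diagonal Fisher information approximated by the mean squared per-example gradient
    return (per_example_grads ** 2).mean(axis=0)

rng = np.random.default_rng(2)
clients = [rng.normal(0, 1, size=(32, 100)) for _ in range(9)]  # benign gradient batches
# One malicious client: a few parameters get abnormally large importance (toy trigger)
clients.append(rng.normal(0, 1, size=(32, 100)) + 3 * (rng.random(100) < 0.05))

scores = np.stack([fisher_diagonal(g) for g in clients])
center = np.median(scores, axis=0)
discrepancy = np.linalg.norm(scores - center, axis=1)
threshold = np.median(discrepancy) + 3 * discrepancy.std()
suspects = np.where(discrepancy > threshold)[0]
print(suspects)  # the last client stands out
```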
\nMoreover, we show that models in this class can encode highly nonlinear features of their inputs, and their expressivity increases exponentially with model depth.\nWe also delineate the space of models for which exponential communication advantages hold by showing that they cannot hold for linear classification. \nBecause communicating quantum states potentially limits the amount of information that can be extracted from them about the data and model parameters, such communication may also lead to improved privacy guarantees for distributed computation. Taken as a whole, these findings form a promising foundation for distributed machine learning over quantum networks.", "pdf": "https://openreview.net/pdf/85976563d7b83ae435fb465d0d3d33b9aa955470.pdf"} {"title": "Neural Isometries: Taming Transformations for Equivariant ML", "url": "https://openreview.net/forum?id=kCabCEhQWv", "detail_url": "https://openreview.net/forum?id=kCabCEhQWv", "authors": "Thomas Mitchel,Michael Taylor,Vincent Sitzmann", "tags": "NIPS 2024,Poster", "abstract": "Real-world geometry and 3D vision tasks are replete with challenging symmetries that defy tractable analytical expression. In this paper, we introduce Neural Isometries, an autoencoder framework which learns to map the observation space to a general-purpose latent space wherein encodings are related by isometries whenever their corresponding observations are geometrically related in world space. Specifically, we regularize the latent space such that maps between encodings preserve a learned inner product and commute with a learned functional operator, in the same manner as rigid-body transformations commute with the Laplacian. This approach forms an effective backbone for self-supervised representation learning, and we demonstrate that a simple off-the-shelf equivariant network operating in the pre-trained latent space can achieve results on par with meticulously-engineered, handcrafted networks designed to handle complex, nonlinear symmetries. Furthermore, isometric maps capture information about the respective transformations in world space, and we show that this allows us to regress camera poses directly from the coefficients of the maps between encodings of adjacent views of a scene.", "pdf": "https://openreview.net/pdf/8c568b63a9a2879a066eefc227b8733dc6a6a1f2.pdf"} {"title": "CoFie: Learning Compact Neural Surface Representations with Coordinate Fields", "url": "https://openreview.net/forum?id=0KseSacluJ", "detail_url": "https://openreview.net/forum?id=0KseSacluJ", "authors": "Hanwen Jiang,Haitao Yang,Georgios Pavlakos,Qixing Huang", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces CoFie, a novel local geometry-aware neural surface representation. CoFie is motivated by the theoretical analysis of local SDFs with quadratic approximation. We find that local shapes are highly compressible in an aligned coordinate frame defined by the normal and tangent directions of local shapes. Accordingly, we introduce Coordinate Field, which is a composition of coordinate frames of all local shapes. The Coordinate Field is optimizable and is used to transform the local shapes from the world coordinate frame to the aligned shape coordinate frame. It largely reduces the complexity of local shapes and benefits the learning of MLP-based implicit representations. Moreover, we introduce quadratic layers into the MLP to enhance expressiveness concerning local shape geometry. CoFie is a generalizable surface representation.
It is trained on a curated set of 3D shapes and works on novel shape instances during testing. When using the same number of parameters as prior works, CoFie reduces the shape error by 48% and 56% on novel instances of both training and unseen shape categories. Moreover, CoFie demonstrates comparable performance to prior works when using even 70% fewer parameters. Code and model can be found here: https://hwjiang1510.github.io/CoFie/", "pdf": "https://openreview.net/pdf/5b9c41ca377cd2f19a3a5bacc748970c644a8700.pdf"} {"title": "FactorSim: Generative Simulation via Factorized Representation", "url": "https://openreview.net/forum?id=wBzvYh3PRA", "detail_url": "https://openreview.net/forum?id=wBzvYh3PRA", "authors": "Fan-Yun Sun,Harini S I,Angela Yi,Yihan Zhou,Alex Zook,Jonathan Tremblay,Logan Cross,Jiajun Wu,Nick Haber", "tags": "NIPS 2024,Poster", "abstract": "Generating simulations to train intelligent agents in game-playing and robotics from natural language input, user input, or task documentation remains an open-ended challenge. Existing approaches focus on parts of this challenge, such as generating reward functions or task hyperparameters. Unlike previous work, we introduce FACTORSIM, which generates full simulations in code from language input that can be used to train agents. Exploiting the structural modularity specific to coded simulations, we propose to use a factored partially observable Markov decision process representation that allows us to reduce context dependence during each step of the generation. For evaluation, we introduce a generative simulation benchmark that assesses the generated simulation code\u2019s accuracy and effectiveness in facilitating zero-shot transfers in reinforcement learning settings. We show that FACTORSIM outperforms existing methods in generating simulations in terms of prompt alignment (i.e., accuracy), zero-shot transfer abilities, and human evaluation. We also demonstrate its effectiveness in generating robotic tasks.", "pdf": "https://openreview.net/pdf/c3c4eed43ecec8fe574f69437c9137f8c41b7797.pdf"} {"title": "DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos", "url": "https://openreview.net/forum?id=YlIvhHFwQ2", "detail_url": "https://openreview.net/forum?id=YlIvhHFwQ2", "authors": "Wen-Hsuan Chu,Lei Ke,Katerina Fragkiadaki", "tags": "NIPS 2024,Poster", "abstract": "View-predictive generative models provide strong priors for lifting object-centric images and videos into 3D and 4D through rendering and score distillation objectives. A question then remains: what about lifting complete multi-object dynamic scenes? There are two challenges in this direction: First, rendering error gradients are often insufficient to recover fast object motion, and second, view predictive generative models work much better for objects than whole scenes, so score distillation objectives cannot currently be applied at the scene level directly. We present DreamScene4D, the first approach to generate 3D dynamic scenes of multiple objects from monocular videos via 360-degree novel view synthesis. Our key insight is a \"decompose-recompose\" approach that factorizes the video scene into the background and object tracks, while also factorizing object motion into three components: object-centric deformation, object-to-world-frame transformation, and camera motion.
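The three-factor motion decomposition in the DreamScene4D abstract above composes like ordinary rigid transforms. A toy sketch of the recomposition, where all transforms and the deformation are illustrative placeholders:

```python
import numpy as np

def hom(R=np.eye(3), t=np.zeros(3)):
    # Build a 4x4 homogeneous transform from rotation R and translation t
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def deform(x_obj, t):
    # Toy object-centric deformation: the point drifts upward over time
    return x_obj + 0.01 * t * np.array([0.0, 1.0, 0.0])

obj_to_world = hom(t=np.array([1.0, 0.0, 2.0]))   # object-to-world-frame transformation
world_to_cam = hom(t=np.array([0.0, 0.0, -4.0]))  # camera motion (one frame's pose)

x = np.array([0.1, 0.2, 0.3])                     # a point in the object's canonical frame
for t in range(3):
    xh = np.append(deform(x, t), 1.0)
    x_cam = world_to_cam @ obj_to_world @ xh      # recompose the three factors
    print(t, x_cam[:3])
```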
Such decomposition permits rendering error gradients and object view-predictive models to recover object 3D completions and deformations while bounding box tracks guide the large object movements in the scene. We show extensive results on challenging DAVIS, Kubric, and self-captured videos with quantitative comparisons and a user preference study. Besides 4D scene generation, DreamScene4D obtains accurate 2D persistent point tracks by projecting the inferred 3D trajectories to 2D. We will release our code and hope our work will stimulate more research on fine-grained 4D understanding from videos.", "pdf": "https://openreview.net/pdf/08b8a8513a583a208bffb426bcd585b85bb1c532.pdf"} {"title": "Learning-Augmented Priority Queues", "url": "https://openreview.net/forum?id=1ATLLgvURu", "detail_url": "https://openreview.net/forum?id=1ATLLgvURu", "authors": "Ziyad Benomar,Christian Coester", "tags": "NIPS 2024,Poster", "abstract": "Priority queues are one of the most fundamental and widely used data structures in computer science. Their primary objective is to efficiently support the insertion of new elements with assigned priorities and the extraction of the highest-priority element. \nIn this study, we investigate the design of priority queues within the learning-augmented framework, where algorithms use potentially inaccurate predictions to enhance their worst-case performance.\nWe examine three prediction models spanning different use cases, and we show how the predictions can be leveraged to enhance the performance of priority queue operations. Moreover, we demonstrate the optimality of our solution and discuss some possible applications.", "pdf": "https://openreview.net/pdf/662bef6cabfe3f02203ff281ee882b784fb6c968.pdf"} {"title": "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable", "url": "https://openreview.net/forum?id=TA5zPfH8iI", "detail_url": "https://openreview.net/forum?id=TA5zPfH8iI", "authors": "Shreyash Arya,Sukrut Rao,Moritz B\u00f6hle,Bernt Schiele", "tags": "NIPS 2024,Poster", "abstract": "B-cos Networks have been shown to be effective for obtaining highly human-interpretable explanations of model decisions by architecturally enforcing stronger alignment between inputs and weights. B-cos variants of convolutional networks (CNNs) and vision transformers (ViTs), which primarily replace linear layers with B-cos transformations, perform competitively with their respective standard variants while also yielding explanations that are faithful by design. However, it has so far been necessary to train these models from scratch, which is increasingly infeasible in the era of large, pre-trained foundation models. In this work, inspired by the architectural similarities between standard DNNs and B-cos networks, we propose \u2018B-cosification\u2019, a novel approach to transform existing pre-trained models to become inherently interpretable. We perform a thorough study of design choices to perform this conversion, both for convolutional neural networks and vision transformers. We find that B-cosification can yield models that are on par with B-cos models trained from scratch in terms of interpretability, while often outperforming them in terms of classification performance at a fraction of the training cost. Subsequently, we apply B-cosification to a pretrained CLIP model, and show that, even with limited data and compute cost, we obtain a B-cosified version that is highly interpretable and competitive in zero-shot performance across a variety of datasets.
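One simple way to see how predictions can help a priority queue, in the spirit of the learning-augmented abstract above: start each insertion from a predicted position and pay comparisons proportional to the prediction error. This toy structure is my illustration, not the paper's construction:

```python
class HintedSortedList:
    """Toy learning-augmented priority queue: insertion starts at a predicted index
    and walks locally, so the comparison cost degrades gracefully with prediction error."""
    def __init__(self):
        self.items = []  # kept sorted ascending (min at index 0)

    def insert(self, key, predicted_index):
        i = max(0, min(predicted_index, len(self.items)))
        while i > 0 and self.items[i - 1] > key:             # prediction too high: walk left
            i -= 1
        while i < len(self.items) and self.items[i] < key:   # prediction too low: walk right
            i += 1
        self.items.insert(i, key)

    def extract_min(self):
        return self.items.pop(0)

pq = HintedSortedList()
for key, hint in [(5, 0), (2, 0), (9, 2), (4, 1)]:
    pq.insert(key, hint)
assert pq.extract_min() == 2
```

With perfect predictions each insertion costs O(1) comparisons; with arbitrary predictions it falls back to a linear walk, which is the usual consistency/robustness trade-off of the learning-augmented framework.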
We release our\ncode and pre-trained model weights at https://github.com/shrebox/B-cosification.", "pdf": "https://openreview.net/pdf/fca30262051fe174e583f9532abbafc61dc2333d.pdf"} {"title": "Addressing Bias in Online Selection with Limited Budget of Comparisons", "url": "https://openreview.net/forum?id=BdGFgKrlHl", "detail_url": "https://openreview.net/forum?id=BdGFgKrlHl", "authors": "Ziyad Benomar,Evgenii Chzhen,Nicolas Schreuder,Vianney Perchet", "tags": "NIPS 2024,Poster", "abstract": "Consider a hiring process with candidates coming from different universities. It is easy to order candidates with the same background, yet it can be challenging to compare them otherwise. The latter case requires additional costly assessments, leading to a potentially high total cost for the hiring organization. Given an assigned budget, what would be an optimal strategy to select the most qualified candidate?\nWe model the above problem as a multicolor secretary problem, allowing comparisons between candidates from distinct groups at a fixed cost. Our study explores how the allocated budget enhances the success probability of online selection algorithms.", "pdf": "https://openreview.net/pdf/558eb7fba3308df59c80801c74ef07c7cb1c3d22.pdf"} {"title": "Lookback Prophet Inequalities", "url": "https://openreview.net/forum?id=cg1vwt5Xou", "detail_url": "https://openreview.net/forum?id=cg1vwt5Xou", "authors": "Ziyad Benomar,Dorian Baudry,Vianney Perchet", "tags": "NIPS 2024,Poster", "abstract": "Prophet inequalities are fundamental optimal stopping problems, where a decision-maker sequentially observes items with values sampled independently from known distributions, and must decide at each new observation to either stop and gain the current value or reject it irrevocably and move to the next step. This model is often too pessimistic and does not adequately represent real-world online selection processes. In practice, rejected items can potentially be revisited and a fraction of their value recovered. To analyze this problem, we consider general decay functions $D_1,D_2,\ldots$, quantifying the value to be recovered from a rejected item, depending on how far in the past it was observed. We analyze whether and how lookback improves the competitive ratio in prophet inequalities under different order models. \nWe show that, under mild monotonicity assumptions on the decay functions, the problem can be reduced to the case where all the decay functions are equal to the same function $x \mapsto \gamma x$, where $\gamma = \inf_{x>0} \inf_{j \geq 1} D_j(x)/x$. Consequently, we focus on this setting and refine the analyses of the competitive ratios, with upper and lower bounds expressed as increasing functions of $\gamma$.", "pdf": "https://openreview.net/pdf/c966e74b3e15f2424239887687733377d5894459.pdf"} {"title": "How to Solve Contextual Goal-Oriented Problems with Offline Datasets?", "url": "https://openreview.net/forum?id=Ku31aRq3sW", "detail_url": "https://openreview.net/forum?id=Ku31aRq3sW", "authors": "Ying Fan,Jingling Li,Adith Swaminathan,Aditya Modi,Ching-An Cheng", "tags": "NIPS 2024,Poster", "abstract": "We present a novel method, Contextual goal-Oriented Data Augmentation (CODA), which uses commonly available unlabeled trajectories and context-goal pairs to solve Contextual Goal-Oriented (CGO) problems. By carefully constructing an action-augmented MDP that is equivalent to the original MDP, CODA creates a fully labeled transition dataset under training contexts without additional approximation error.
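The B-cos transformation that B-cosification targets replaces a linear layer's output $w^\top x$ with $(\hat{w}^\top x)\,|\cos(x,\hat{w})|^{B-1}$, so outputs whose weight vector is poorly aligned with the input are suppressed. A minimal sketch; the initialization and module name are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """B-cos transform: out_j = (w_j_hat . x) * |cos(x, w_j_hat)|**(B-1)."""
    def __init__(self, d_in, d_out, B=2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
        self.B = B

    def forward(self, x):
        w_hat = F.normalize(self.weight, dim=1)          # unit-norm weight rows
        lin = x @ w_hat.t()                              # (batch, d_out)
        cos = lin / (x.norm(dim=1, keepdim=True) + 1e-6) # cos(x, w_hat), since w_hat is unit-norm
        return lin * cos.abs() ** (self.B - 1)

x = torch.randn(4, 16)
print(BcosLinear(16, 8)(x).shape)  # torch.Size([4, 8])
```

With B=1 this reduces to an ordinary (weight-normalized) linear layer, which is one reason converting pre-trained linear layers is plausible: B can be increased from the standard model's behavior toward the interpretable regime.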
We conduct a novel theoretical analysis to demonstrate CODA's capability to solve CGO problems in the offline data setup. Empirical results also showcase the effectiveness of CODA, which outperforms other baseline methods across various context-goal relationships of CGO problems. This approach offers a promising direction for solving CGO problems using offline datasets.", "pdf": "https://openreview.net/pdf/394329c9a08aabe39ad5f5198405d58e05051e76.pdf"} {"title": "Multiple Physics Pretraining for Spatiotemporal Surrogate Models", "url": "https://openreview.net/forum?id=DKSI3bULiZ", "detail_url": "https://openreview.net/forum?id=DKSI3bULiZ", "authors": "Michael McCabe,Bruno R\u00e9galdo-Saint Blancard,Liam Holden Parker,Ruben Ohana,Miles Cranmer,Alberto Bietti,Michael Eickenberg,Siavash Golkar,Geraud Krawezik,Francois Lanusse,Mariel Pettee,Tiberiu Tesileanu,Kyunghyun Cho,Shirley Ho", "tags": "NIPS 2024,Poster", "abstract": "We introduce multiple physics pretraining (MPP), an autoregressive task-agnostic pretraining approach for physical surrogate modeling of spatiotemporal systems with transformers. In MPP, rather than training one model on a specific physical system, we train a backbone model to predict the dynamics of multiple heterogeneous physical systems simultaneously in order to learn features that are broadly useful across systems and facilitate transfer. In order to learn effectively in this setting, we introduce a shared embedding and normalization strategy that projects the fields of multiple systems into a shared embedding space. We validate the efficacy of our approach on both pretraining and downstream tasks over a broad fluid mechanics-oriented benchmark. We show that a single MPP-pretrained transformer is able to match or outperform task-specific baselines on all pretraining sub-tasks without the need for finetuning. For downstream tasks, we demonstrate that finetuning MPP-trained models results in more accurate predictions across multiple time-steps on systems with previously unseen physical components or higher dimensional systems compared to training from scratch or finetuning pretrained video foundation models. We open-source our code and model weights trained at multiple scales for reproducibility.", "pdf": "https://openreview.net/pdf/f17e2dec513bf41ef3a4a0158ce2876724a18c38.pdf"} {"title": "DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation", "url": "https://openreview.net/forum?id=uCgFk8nP0Z", "detail_url": "https://openreview.net/forum?id=uCgFk8nP0Z", "authors": "Felipe Garrido,Benjamin Heymann,Maxime Vono,Patrick Loiseau,Vianney Perchet", "tags": "NIPS 2024,Poster", "abstract": "We consider the dataset valuation problem, that is, the problem of quantifying the incremental gain, to some relevant pre-defined utility of a machine learning task, of aggregating an individual dataset with others.\nThe Shapley value is a natural tool to perform dataset valuation due to its formal axiomatic justification, which can be combined with Monte Carlo integration to overcome the computational tractability challenges. Such generic approximation methods, however, remain expensive in some cases. In this paper, we exploit the knowledge about the structure of the dataset valuation problem to devise more efficient Shapley value estimators. We propose a novel approximation, referred to as discrete uniform Shapley, which is expressed as an expectation under a discrete uniform distribution with support of reasonable size.
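The discrete-uniform form in the DU-Shapley abstract above can be read against the classical identity that the Shapley value is an expectation over a coalition size k ~ Uniform{0, ..., n-1} and a uniformly random size-k coalition of the other players. A Monte Carlo sketch of that identity on a toy utility (DU-Shapley itself is a cheaper, structured refinement, not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
sizes = rng.integers(50, 500, size=n)       # per-dataset sample counts

def utility(coalition):
    # Toy utility with diminishing returns in the total pooled data
    return np.log1p(sum(sizes[j] for j in coalition))

def shapley_mc(i, n_samples=2000):
    # phi_i = E[u(S + {i}) - u(S)], with k ~ Uniform{0,...,n-1} and S a uniform
    # size-k subset of the remaining players -- an exact identity for the Shapley value
    others = [j for j in range(n) if j != i]
    total = 0.0
    for _ in range(n_samples):
        k = rng.integers(0, n)               # 0, ..., n-1
        S = list(rng.choice(others, size=k, replace=False))
        total += utility(S + [i]) - utility(S)
    return total / n_samples

print([round(shapley_mc(i), 4) for i in range(n)])
```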
We justify the relevance of the proposed framework via asymptotic and non-asymptotic theoretical guarantees and illustrate its benefits via an extensive set of numerical experiments.", "pdf": "https://openreview.net/pdf/d5040e60c0b03762c252d1fcdf5dbbeeba1f0efe.pdf"} {"title": "Reinforcement Learning with Lookahead Information", "url": "https://openreview.net/forum?id=wlqfOvlTQz", "detail_url": "https://openreview.net/forum?id=wlqfOvlTQz", "authors": "Nadav Merlis", "tags": "NIPS 2024,Poster", "abstract": "We study reinforcement learning (RL) problems in which agents observe the reward or transition realizations at their current state _before deciding which action to take_. Such observations are available in many applications, including transactions, navigation and more. When the environment is known, previous work shows that this lookahead information can drastically increase the collected reward. However, outside of specific applications, existing approaches for interacting with unknown environments are not well-adapted to these observations. In this work, we close this gap and design provably-efficient learning algorithms able to incorporate lookahead information. To achieve this, we perform planning using the empirical distribution of the reward and transition observations, in contrast to vanilla approaches that only rely on estimated expectations. We prove that our algorithms achieve tight regret versus a baseline that also has access to lookahead information -- linearly increasing the amount of collected reward compared to agents that cannot handle lookahead information.", "pdf": "https://openreview.net/pdf/ad7a5666a27d4242faa064f772f46ff2791265c1.pdf"} {"title": "Taming Heavy-Tailed Losses in Adversarial Bandits and the Best-of-Both-Worlds Setting", "url": "https://openreview.net/forum?id=4Yj7L9Kt7t", "detail_url": "https://openreview.net/forum?id=4Yj7L9Kt7t", "authors": "Duo Cheng,Xingyu Zhou,Bo Ji", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we study the multi-armed bandit problem in the best-of-both-worlds (BOBW) setting with heavy-tailed losses, where the losses can be negative and unbounded but have $(1+v)$-th raw moments bounded by $u^{1+v}$ for some known $u>0$ and $v\in(0,1]$. Specifically, we consider the BOBW setting where the underlying environment could be either (oblivious) adversarial (i.e., the loss distribution can change arbitrarily over time) or stochastic (i.e., the loss distribution is fixed over time) and is unknown to the decision-maker a priori, and propose an algorithm that achieves a $T^{\frac{1}{1+v}}$-type worst-case (pseudo-)regret in the adversarial regime and a $\log T$-type gap-dependent regret in the stochastic regime, where $T$ is the time horizon. Compared to the state-of-the-art results, our algorithm offers stronger \emph{high-probability} regret guarantees rather than expected regret guarantees, and more importantly, relaxes a strong technical assumption on the loss distribution. This assumption is needed even for the weaker expected regret obtained in the literature and is generally hard to verify in practice. As a byproduct, relaxing this assumption leads to the first near-optimal regret result for heavy-tailed bandits with Huber contamination in the adversarial regime, in contrast to all previous works, which focused on the (easier) stochastic regime.
Our result also implies a high-probability BOBW regret guarantee when the bounded true losses are protected with pure Local Differential Privacy (LDP), while the existing work ensures the (weaker) \\emph{approximate} LDP with the regret bounds in expectation only.", "pdf": "https://openreview.net/pdf/a8fdfdfb5545fe905e7b427d0a7b90394db22697.pdf"} {"title": "What matters when building vision-language models?", "url": "https://openreview.net/forum?id=dtvJF1Vy2i", "detail_url": "https://openreview.net/forum?id=dtvJF1Vy2i", "authors": "Hugo Lauren\u00e7on,Leo Tronchon,Matthieu Cord,Victor Sanh", "tags": "NIPS 2024,Poster", "abstract": "The growing interest in vision-language models (VLMs) has been driven by improvements in large language models and vision transformers. Despite the abundance of literature on this subject, we observe that critical decisions regarding the design of VLMs are often not justified. We argue that these unsupported decisions impede progress in the field by making it difficult to identify which choices improve model performance. To address this issue, we conduct extensive experiments around pre-trained models, architecture choice, data, and training methods. Our consolidation of findings includes the development of Idefics2, an efficient foundational VLM of 8 billion parameters. Idefics2 achieves state-of-the-art performance within its size category across various multimodal benchmarks, and is often on par with models four times its size. We release the model (base, instructed, and chat) along with the datasets created for its training.", "pdf": "https://openreview.net/pdf/bd8a566943e874320060b624f8993b71592e6e2c.pdf"} {"title": "Practical Bayesian Algorithm Execution via Posterior Sampling", "url": "https://openreview.net/forum?id=m4ZcDrVvid", "detail_url": "https://openreview.net/forum?id=m4ZcDrVvid", "authors": "Chu Xin Cheng,Raul Astudillo,Thomas Desautels,Yisong Yue", "tags": "NIPS 2024,Poster", "abstract": "We consider Bayesian algorithm execution (BAX), a framework for efficiently selecting evaluation points of an expensive function to infer a property of interest encoded as the output of a base algorithm. Since the base algorithm typically requires more evaluations than are feasible, it cannot be directly applied. Instead, BAX methods sequentially select evaluation points using a probabilistic numerical approach. Current BAX methods use expected information gain to guide this selection. However, this approach is computationally intensive. Observing that, in many tasks, the property of interest corresponds to a target set of points defined by the function, we introduce PS-BAX, a simple, effective, and scalable BAX method based on posterior sampling. PS-BAX is applicable to a wide range of problems, including many optimization variants and level set estimation. Experiments across diverse tasks demonstrate that PS-BAX performs competitively with existing baselines while being significantly faster, simpler to implement, and easily parallelizable, setting a strong baseline for future research. Additionally, we establish conditions under which PS-BAX is asymptotically convergent, offering new insights into posterior sampling as an algorithm design paradigm.", "pdf": "https://openreview.net/pdf/aa64728e33cc20d941afc5c78a957327d2a32a80.pdf"} {"title": "On the Efficiency of ERM in Feature Learning", "url": "https://openreview.net/forum?id=5kthqxbK7r", "detail_url": "https://openreview.net/forum?id=5kthqxbK7r", "authors": "Ayoub El Hanchi,Chris J. 
Maddison,Murat A Erdogdu", "tags": "NIPS 2024,Poster", "abstract": "Given a collection of feature maps indexed by a set $\mathcal{T}$, we study the performance of empirical risk minimization (ERM) on regression problems with square loss over the union of the linear classes induced by these feature maps. This setup aims at capturing the simplest instance of feature learning, where the model is expected to jointly learn from the data an appropriate feature map and a linear predictor. We start by studying the asymptotic quantiles of the excess risk of sequences of empirical risk minimizers. Remarkably, we show that when the set $\mathcal{T}$ is not too large and when there is a unique optimal feature map, these quantiles coincide, up to a factor of two, with those of the excess risk of the oracle procedure, which knows a priori this optimal feature map and deterministically outputs an empirical risk minimizer from the associated optimal linear class. We complement this asymptotic result with a non-asymptotic analysis that quantifies the decaying effect of the global complexity of the set $\mathcal{T}$ on the excess risk of ERM, and relates it to the size of the sublevel sets of the suboptimality of the feature maps. As an application of our results, we characterize the performance of the best subset selection procedure in sparse linear regression under general assumptions.", "pdf": "https://openreview.net/pdf/3252fec742b319b71053f755cd30d0ca4d97dc2a.pdf"} {"title": "Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models", "url": "https://openreview.net/forum?id=lmsCSDymEP", "detail_url": "https://openreview.net/forum?id=lmsCSDymEP", "authors": "Liulei Li,Wenguan Wang,Yi Yang", "tags": "NIPS 2024,Poster", "abstract": "Prevalent human-object interaction (HOI) detection approaches typically leverage large-scale visual-linguistic models to help recognize events involving humans and objects. Though promising, models trained via contrastive learning on text-image pairs often neglect mid/low-level visual cues and struggle at compositional reasoning. In response, we introduce DIFFUSIONHOI, a new HOI detector shedding light on text-to-image diffusion models. Unlike the aforementioned models, diffusion models excel in discerning mid/low-level visual concepts as generative models, and possess strong compositionality to handle novel concepts expressed in text inputs. Considering diffusion models usually emphasize instance objects, we first devise an inversion-based strategy to learn the expression of relation patterns between humans and objects in embedding space. These learned relation embeddings then serve as textual prompts, to steer diffusion models to generate images that depict specific interactions, and extract HOI-relevant cues from images without heavy finetuning. Benefiting from the above, DIFFUSIONHOI achieves SOTA performance on three datasets under both regular and zero-shot setups.", "pdf": "https://openreview.net/pdf/d968c578f9956e1c66a75ead0ee823a71f966e3f.pdf"} {"title": "Gradient-free Decoder Inversion in Latent Diffusion Models", "url": "https://openreview.net/forum?id=nbqvjkOs6S", "detail_url": "https://openreview.net/forum?id=nbqvjkOs6S", "authors": "Seongmin Hong,Suh Yoon Jeon,Kyeonghyun Lee,Ernest K. Ryu,Se Young Chun", "tags": "NIPS 2024,Poster", "abstract": "In latent diffusion models (LDMs), the denoising diffusion process efficiently takes place in a latent space whose dimension is lower than that of pixel space.
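A minimal sketch of the PS-BAX idea from the abstract above: draw a function from the posterior, run the base algorithm on the draw (here simply argmax, i.e., the property of interest is the maximizer), and evaluate the expensive function at the algorithm's output. The GP details and kernel choice are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(0, 1, 200)[:, None]          # candidate evaluation grid

def kernel(A, B, ls=0.1):
    return np.exp(-0.5 * ((A - B.T) / ls) ** 2)

def gp_posterior(X_train, y_train, noise=1e-4):
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks, Kss = kernel(X, X_train), kernel(X, X)
    A = np.linalg.solve(K, Ks.T)
    return A.T @ y_train, Kss - Ks @ A       # posterior mean and covariance on the grid

f = lambda x: np.sin(6 * x).ravel()          # stand-in for the expensive black-box function
X_tr, y_tr = X[[10]], f(X[[10]])
for _ in range(10):
    mu, cov = gp_posterior(X_tr, y_tr)
    sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(X)),
                                     check_valid="ignore")
    target = int(np.argmax(sample))          # base algorithm run on the posterior sample
    X_tr = np.vstack([X_tr, X[[target]]])    # evaluate f where the algorithm's output lands
    y_tr = np.concatenate([y_tr, f(X[[target]])])
```

Note there is no information-gain computation anywhere: a single posterior sample plus one run of the base algorithm decides the next query, which is where the speedup over expected-information-gain BAX comes from.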
A decoder is typically used to transform the representation in latent space to that in pixel space. While a decoder is assumed to have an encoder as an accurate inverse, an exact encoder-decoder pair rarely exists in practice, even though applications often require precise inversion of the decoder. In other words, the encoder is not the left-inverse but the right-inverse of the decoder; decoder inversion seeks the left-inverse. Prior works for decoder inversion in LDMs employed gradient descent inspired by inversions of generative adversarial networks. However, gradient-based methods require larger GPU memory and longer computation time for larger latent spaces. For example, recent video LDMs can generate more than 16 frames, but GPUs with 24 GB memory can only perform gradient-based decoder inversion for 4 frames. Here, we propose an efficient gradient-free decoder inversion for LDMs, which can be applied to diverse latent models. The theoretical convergence of our proposed inversion is investigated not only for the forward step method, but also for the inertial Krasnoselskii-Mann (KM) iterations, under a mild cocoercivity assumption that is satisfied by recent LDMs. Our proposed gradient-free method with Adam optimizer and learning rate scheduling significantly reduced computation time and memory usage over prior gradient-based methods and enabled efficient computation in applications such as noise-space watermarking and background-preserving image editing while achieving comparable error levels.", "pdf": "https://openreview.net/pdf/673b4bd9c2c5df584e9ce2632909dde3257b93a8.pdf"} {"title": "NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction", "url": "https://openreview.net/forum?id=hVGAGU4TKk", "detail_url": "https://openreview.net/forum?id=hVGAGU4TKk", "authors": "Yifan Wang,Di Huang,Weicai Ye,Guofeng Zhang,Wanli Ouyang,Tong He", "tags": "NIPS 2024,Poster", "abstract": "Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction. Although promising, SDF-based methods often fail to capture detailed geometric structures, resulting in visible defects. By comparing SDF-based volume rendering to density-based volume rendering, we identify two main factors within the SDF-based approach that degrade surface quality: SDF-to-density representation and geometric regularization. These factors introduce challenges that hinder the optimization of the SDF field. To address these issues, we introduce NeuRodin, a novel two-stage neural surface reconstruction framework that not only achieves high-fidelity surface reconstruction but also retains the flexible optimization characteristics of density-based methods. \n NeuRodin incorporates innovative strategies that facilitate transformation of arbitrary topologies and reduce artifacts associated with density bias.\n Extensive evaluations on the Tanks and Temples and ScanNet++ datasets demonstrate the superiority of NeuRodin, showing strong reconstruction capabilities for both indoor and outdoor environments using solely posed RGB captures.
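To make the inertial Krasnoselskii-Mann iteration mentioned in the gradient-free decoder inversion abstract concrete, here is a toy sketch. The operator below, built from an encoder used as an approximate inverse so that no backpropagation through the decoder is needed, is an illustrative assumption rather than the paper's exact choice:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(8, 4)) / 8 ** 0.5
D = lambda z: np.tanh(A @ z)                                 # toy "decoder"
E = lambda x: A.T @ np.arctanh(np.clip(x, -0.999, 0.999))    # approximate "encoder"

x_target = D(rng.normal(size=4))
T = lambda z: z - (E(D(z)) - E(x_target))    # fixed point of T  <=>  E(D(z)) = E(x_target)

z_prev = z = np.zeros(4)
for k in range(200):
    beta = 0.3                                # inertial (momentum) weight
    z_tilde = z + beta * (z - z_prev)
    # Krasnoselskii-Mann averaging step: convex combination of identity and T
    z_prev, z = z, 0.5 * z_tilde + 0.5 * T(z_tilde)
print(np.linalg.norm(D(z) - x_target))        # reconstruction error shrinks toward 0
```

The iteration uses only forward passes through D and E, which is the memory advantage over gradient-based inversion; convergence of such averaged iterations is exactly what cocoercivity-type assumptions are for.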
Project website:\nhttps://open3dvlab.github.io/NeuRodin/", "pdf": "https://openreview.net/pdf/d5aa69f2f01793f39ba1f7a0037b6447b118b10d.pdf"} {"title": "Why the Metric Backbone Preserves Community Structure", "url": "https://openreview.net/forum?id=Kx8I0rP7w2", "detail_url": "https://openreview.net/forum?id=Kx8I0rP7w2", "authors": "Maximilien Dreveton,Charbel Chucri,Matthias Grossglauser,Patrick Thiran", "tags": "NIPS 2024,Poster", "abstract": "The metric backbone of a weighted graph is the union of all-pairs shortest paths. It is obtained by removing all edges $(u,v)$ that are not the shortest path between $u$ and $v$. In networks with well-separated communities, the metric backbone tends to preserve many inter-community edges, because these edges serve as bridges connecting two communities, but tends to delete many intra-community edges because the communities are dense. This suggests that the metric backbone would dilute or destroy the community structure of the network. However, this is not borne out by prior empirical work, which instead showed that the metric backbone of real networks preserves the community structure of the original network well. In this work, we analyze the metric backbone of a broad class of weighted random graphs with communities, and we formally prove the robustness of the community structure with respect to the deletion of all the edges that are not in the metric backbone. An empirical comparison of several graph sparsification techniques confirms our theoretical finding and shows that the metric backbone is an efficient sparsifier in the presence of communities.", "pdf": "https://openreview.net/pdf/72e4e35ac9f7332d4537dbc4d056e4e5016dbcf0.pdf"} {"title": "Tackling Uncertain Correspondences for Multi-Modal Entity Alignment", "url": "https://openreview.net/forum?id=IAse6CAG26", "detail_url": "https://openreview.net/forum?id=IAse6CAG26", "authors": "Liyi Chen,Ying Sun,Shengzhe Zhang,Yuyang Ye,Wei Wu,Hui Xiong", "tags": "NIPS 2024,Poster", "abstract": "Recently, multi-modal entity alignment has emerged as a pivotal endeavor for the integration of Multi-Modal Knowledge Graphs (MMKGs) originating from diverse data sources. Existing works primarily focus on fully depicting entity features by designing various modality encoders or fusion approaches. However, uncertain correspondences between inter-modal or intra-modal cues, such as weak inter-modal associations, description diversity, and modality absence, still severely hinder the effective exploration of aligned entity similarities. To this end, in this paper, we propose a novel Tackling uncertain correspondences method for Multi-modal Entity Alignment (TMEA). Specifically, to handle diverse attribute knowledge descriptions, we design alignment-augmented abstract representation that incorporates the large language model and in-context learning into attribute alignment and filtering for generating and embedding the attribute abstract. In order to mitigate the influence of the modality absence, we propose to unify all modality features into a shared latent subspace and generate pseudo features via variational autoencoders according to existing modal features. Then, we develop an inter-modal commonality enhancement mechanism based on cross-attention with orthogonal constraints, to address weak semantic associations between modalities. 
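The definition in the metric backbone abstract above translates directly to code: keep an edge exactly when its weight already realizes the shortest-path distance between its endpoints. A small sketch with networkx:

```python
import networkx as nx

def metric_backbone(G, weight="weight"):
    """Keep an edge (u, v) iff its weight equals the shortest-path distance d(u, v)."""
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    B = nx.Graph()
    B.add_nodes_from(G.nodes)
    for u, v, d in G.edges(data=weight):
        if d <= dist[u][v]:              # the direct edge is itself a shortest path
            B.add_edge(u, v, **{weight: d})
    return B

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("a", "c", 5)])
print(sorted(metric_backbone(G).edges()))   # ("a", "c") is dropped: d(a, c) = 2 < 5
```

In a community-structured graph, the dropped edges are typically redundant intra-community shortcuts, which is why, as the abstract argues, the backbone can sparsify heavily while preserving the community structure.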
Extensive experiments on two real-world datasets validate the effectiveness of TMEA with a clear improvement over competitive baselines.", "pdf": "https://openreview.net/pdf/52289702a21f2ed53dae65a18d4e3a3ffa17b84c.pdf"} {"title": "Self-supervised Transformation Learning for Equivariant Representations", "url": "https://openreview.net/forum?id=87AXdbkRyd", "detail_url": "https://openreview.net/forum?id=87AXdbkRyd", "authors": "Jaemyung Yu,Jaehyun Choi,Dong-Jae Lee,HyeongGwon Hong,Junmo Kim", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised representation learning has significantly advanced various machine learning tasks. In the computer vision domain, state-of-the-art approaches utilize transformations like random crop and color jitter to achieve invariant representations, embedding semantically identical inputs despite transformations. However, this can degrade performance in tasks requiring precise features, such as localization or flower classification. To address this, recent research incorporates equivariant representation learning, which captures transformation-sensitive information. However, current methods depend on transformation labels and thus struggle with interdependency and complex transformations. We propose Self-supervised Transformation Learning (STL), replacing transformation labels with transformation representations derived from image pairs. The proposed method ensures that the transformation representation is image-invariant and learns the corresponding equivariant transformations, enhancing performance without increased batch complexity. We demonstrate the approach\u2019s effectiveness across diverse classification and detection tasks, outperforming existing methods in 7 out of 11 benchmarks and excelling in detection. By integrating complex transformations like AugMix, unusable by prior equivariant methods, this approach enhances performance across tasks, underscoring its adaptability and resilience. Additionally, its compatibility with various base models highlights its flexibility and broad applicability. The code is available at https://github.com/jaemyung-u/stl.", "pdf": "https://openreview.net/pdf/0ee6d2109f29ca0939d6319c78173cdc156359d0.pdf"} {"title": "Multi-Object 3D Grounding with Dynamic Modules and Language-Informed Spatial Attention", "url": "https://openreview.net/forum?id=jFWl9EWZ7z", "detail_url": "https://openreview.net/forum?id=jFWl9EWZ7z", "authors": "Haomeng Zhang,Chiao An Yang,Raymond A. Yeh", "tags": "NIPS 2024,Poster", "abstract": "Multi-object 3D grounding involves locating 3D boxes based on a given query phrase from a point cloud. It is a challenging and significant task with numerous applications in visual understanding, human-computer interaction, and robotics. To tackle this challenge, we introduce D-LISA, a two-stage approach that incorporates three innovations. First, a dynamic vision module enables a variable and learnable number of box proposals. Second, dynamic camera positioning extracts features for each proposal. Third, a language-informed spatial attention module reasons over the proposals to output the final prediction.
Experiments show that our method outperforms the state-of-the-art methods on multi-object 3D grounding by 12.8% (absolute) and is competitive in single-object 3D grounding.", "pdf": "https://openreview.net/pdf/a00e4f772e2813276a2eaadaf70c65a0f5e563ee.pdf"} {"title": "Credal Deep Ensembles for Uncertainty Quantification", "url": "https://openreview.net/forum?id=PCgnTiGC9K", "detail_url": "https://openreview.net/forum?id=PCgnTiGC9K", "authors": "Kaizheng Wang,Fabio Cuzzolin,Shireen Kudukkil Manchingal,Keivan Shariatmadar,David Moens,Hans Hallez", "tags": "NIPS 2024,Poster", "abstract": "This paper introduces an innovative approach to classification called Credal Deep Ensembles (CreDEs), namely, ensembles of novel Credal-Set Neural Networks (CreNets). CreNets are trained to predict a lower and an upper probability bound for each class, which, in turn, determine a convex set of probabilities (credal set) on the class set. The training employs a loss inspired by distributionally robust optimization, which simulates the potential divergence of the test distribution from the training distribution, in such a way that the width of the predicted probability interval reflects the epistemic uncertainty about the future data distribution. Ensembles can be constructed by training multiple CreNets, each associated with a different random seed, and averaging the outputted intervals. Extensive experiments are conducted on various out-of-distribution (OOD) detection benchmarks (CIFAR10/100 vs SVHN/Tiny-ImageNet, CIFAR10 vs CIFAR10-C, ImageNet vs ImageNet-O) and using different network architectures (ResNet50, VGG16, and ViT Base). Compared to Deep Ensemble baselines, CreDEs demonstrate higher test accuracy, lower expected calibration error, and significantly improved epistemic uncertainty estimation.", "pdf": "https://openreview.net/pdf/0000b4509531262631102b4be13dfcddc8400f01.pdf"} {"title": "UltraPixel: Advancing Ultra High-Resolution Image Synthesis to New Peaks", "url": "https://openreview.net/forum?id=voJCpdlw53", "detail_url": "https://openreview.net/forum?id=voJCpdlw53", "authors": "Jingjing Ren,Wenbo Li,Haoyu Chen,Renjing Pei,Bin Shao,Yong Guo,Long Peng,Fenglong Song,Lei Zhu", "tags": "NIPS 2024,Poster", "abstract": "Ultra-high-resolution image generation poses great challenges, such as increased semantic planning complexity and detail synthesis difficulties, alongside substantial training resource demands. We present UltraPixel, a novel architecture utilizing cascade diffusion models to generate high-quality images at multiple resolutions (\textit{e.g.}, 1K, 2K, and 4K) within a single model, while maintaining computational efficiency. UltraPixel leverages semantics-rich representations of lower-resolution images in a later denoising stage to guide the whole generation of highly detailed high-resolution images, significantly reducing complexity. Specifically, we introduce implicit neural representations for continuous upsampling and scale-aware normalization layers adaptable to various resolutions. Notably, both low- and high-resolution processes are performed in the most compact space, sharing the majority of parameters with less than 3$\%$ additional parameters for high-resolution outputs, largely enhancing training and inference efficiency.
Our model achieves fast training with reduced data requirements, producing photo-realistic high-resolution images and demonstrating state-of-the-art performance in extensive experiments.", "pdf": "https://openreview.net/pdf/17e36670338ced1f6346b71e16283ec8543c38f0.pdf"} {"title": "In-Trajectory Inverse Reinforcement Learning: Learn Incrementally Before an Ongoing Trajectory Terminates", "url": "https://openreview.net/forum?id=mJZH9w8qgu", "detail_url": "https://openreview.net/forum?id=mJZH9w8qgu", "authors": "Shicheng Liu,Minghui Zhu", "tags": "NIPS 2024,Poster", "abstract": "Inverse reinforcement learning (IRL) aims to learn a reward function and a corresponding policy that best fit the demonstrated trajectories of an expert. However, current IRL works cannot learn incrementally from an ongoing trajectory because they have to wait to collect at least one complete trajectory to learn. To bridge the gap, this paper considers the problem of learning a reward function and a corresponding policy while observing the initial state-action pair of an ongoing trajectory and continually updating the learned reward and policy when new state-action pairs of the ongoing trajectory are observed. We formulate this problem as an online bi-level optimization problem where the upper level dynamically adjusts the learned reward according to the newly observed state-action pairs with the help of a meta-regularization term, and the lower level learns the corresponding policy. We propose a novel algorithm to solve this problem and guarantee that the algorithm achieves sub-linear local regret $O(\\sqrt{T}+\\log T+\\sqrt{T}\\log T)$. If the reward function is linear, we prove that the proposed algorithm achieves sub-linear regret $O(\\log T)$. Experiments are used to validate the proposed algorithm.", "pdf": "https://openreview.net/pdf/3b427badc2149d9bf43f46d2128e4cef13cd5a7b.pdf"} {"title": "RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-Identification", "url": "https://openreview.net/forum?id=Ok6jSSxzfj", "detail_url": "https://openreview.net/forum?id=Ok6jSSxzfj", "authors": "Lei Tan,Yukang Zhang,Keke Han,Pingyang Dai,Yan Zhang,YONGJIAN WU,Rongrong Ji", "tags": "NIPS 2024,Poster", "abstract": "This paper takes a step towards modeling the modality discrepancy in the cross-spectral re-identification task. Based on the Lambertian model, we observe that the non-linear modality discrepancy mainly comes from diverse linear transformations acting on the surface of different materials. From this view, we unify all data augmentation strategies for cross-spectral re-identification as mimicking such local linear transformations and categorize them into moderate transformation and radical transformation. By extending the observation, we propose a Random Linear Enhancement (RLE) strategy which includes Moderate Random Linear Enhancement (MRLE) and Radical Random Linear Enhancement (RRLE) to push the boundaries of both types of transformation. Moderate Random Linear Enhancement is designed to provide diverse image transformations that satisfy the original linear correlations under constrained conditions, whereas Radical Random Linear Enhancement seeks to generate local linear transformations directly without relying on external information.
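The moderate/radical split described above can be illustrated with a toy augmentation; the patch sizes and coefficient ranges below are assumptions, not the paper's settings.

```python
# Hedged illustration of random local linear enhancement: apply a*x + b to
# random image patches, mimicking material-specific linear responses.
import numpy as np

def random_linear_enhancement(img, num_patches=4, radical=False, rng=None):
    """img: float array in [0, 1], shape (H, W, C)."""
    rng = rng if rng is not None else np.random.default_rng()
    out = img.copy()
    H, W, _ = img.shape
    for _ in range(num_patches):
        ph, pw = rng.integers(H // 8, H // 2), rng.integers(W // 8, W // 2)
        y, x = rng.integers(0, H - ph), rng.integers(0, W - pw)
        if radical:   # RRLE-style: aggressive local linear transform (assumed range)
            a, b = rng.uniform(0.2, 1.8), rng.uniform(-0.2, 0.2)
        else:         # MRLE-style: moderate, stays close to identity (assumed range)
            a, b = rng.uniform(0.8, 1.2), rng.uniform(-0.05, 0.05)
        out[y:y + ph, x:x + pw] = np.clip(a * out[y:y + ph, x:x + pw] + b, 0.0, 1.0)
    return out
```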
The experimental results not only demonstrate the superiority and effectiveness of RLE but also confirm its great potential as a general-purpose data augmentation for cross-spectral re-identification.", "pdf": "https://openreview.net/pdf/fa2b171e803d1e28d7155ac5456d3dc55f9c7131.pdf"} {"title": "A Cat Is A Cat (Not A Dog!): Unraveling Information Mix-ups in Text-to-Image Encoders through Causal Analysis and Embedding Optimization", "url": "https://openreview.net/forum?id=YNRYWZHmKY", "detail_url": "https://openreview.net/forum?id=YNRYWZHmKY", "authors": "Chieh-Yun Chen,Chiang Tseng,Li-Wu Tsao,Hong-Han Shuai", "tags": "NIPS 2024,Poster", "abstract": "This paper analyzes the impact of the causal manner in the text encoder of text-to-image (T2I) diffusion models, which can lead to information bias and loss. Previous works have focused on addressing the issues through the denoising process. However, there is no research discussing how text embedding contributes to T2I models, especially when generating more than one object. In this paper, we share a comprehensive analysis of text embedding: i) how text embedding contributes to the generated images and ii) why information gets lost and biases towards the first-mentioned object. Accordingly, we propose a simple but effective text embedding balance optimization method, which is training-free, with an improvement of 125.42\\% on information balance in Stable Diffusion. Furthermore, we propose a new automatic evaluation metric that quantifies information loss more accurately than existing methods, achieving 81\\% concordance with human assessments. This metric effectively measures the presence and accuracy of objects, addressing the limitations of current distribution scores like CLIP's text-image similarities.", "pdf": "https://openreview.net/pdf/00007767f4833d81931c46695abefcd9c584a586.pdf"} {"title": "Bridging the Divide: Reconsidering Softmax and Linear Attention", "url": "https://openreview.net/forum?id=RSiGFzQapl", "detail_url": "https://openreview.net/forum?id=RSiGFzQapl", "authors": "Dongchen Han,Yifan Pu,Zhuofan Xia,Yizeng Han,Xuran Pan,Xiu Li,Jiwen Lu,Shiji Song,Gao Huang", "tags": "NIPS 2024,Poster", "abstract": "Widely adopted in modern Vision Transformer designs, Softmax attention can effectively capture long-range visual information; however, it incurs excessive computational cost when dealing with high-resolution inputs. In contrast, linear attention naturally enjoys linear complexity and has great potential to scale up to higher-resolution images. Nonetheless, the unsatisfactory performance of linear attention greatly limits its practical application in various scenarios. In this paper, we take a step forward to close the gap between linear and Softmax attention with novel theoretical analyses, which demystify the core factors behind the performance deviations. Specifically, we present two key perspectives to understand and alleviate the limitations of linear attention: the injective property and the local modeling ability. Firstly, we prove that linear attention is not injective, which is prone to assigning identical attention weights to different query vectors, thus causing severe semantic confusion since different queries correspond to the same outputs. Secondly, we confirm that effective local modeling is essential for the success of Softmax attention, in which linear attention falls short.
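The injectivity point above is easy to verify numerically: with a simple linear kernel, two queries lying on the same ray receive identical normalized attention weights, while Softmax attention separates them. A minimal check, with the kernel choice being our assumption:

```python
# Numerical check of non-injectivity: q and 3q get identical linear-attention
# weights after normalization, but different Softmax attention weights.
import torch

torch.manual_seed(0)
d, n = 16, 8
k = torch.randn(n, d).abs()   # keep features non-negative, as in ReLU-style kernels
q1 = torch.rand(d)
q2 = 3.0 * q1                  # a different query on the same ray

def linear_attn_weights(q, k):
    s = q @ k.T                # non-negative similarity scores
    return s / s.sum()         # linear attention's normalization

def softmax_attn_weights(q, k):
    return torch.softmax(q @ k.T / d ** 0.5, dim=-1)

print(torch.allclose(linear_attn_weights(q1, k), linear_attn_weights(q2, k)))   # True
print(torch.allclose(softmax_attn_weights(q1, k), softmax_attn_weights(q2, k))) # False
```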
The aforementioned two fundamental differences significantly contribute to the disparities between these two attention paradigms, as demonstrated by our substantial empirical validation in the paper. In addition, further experimental results indicate that linear attention, once endowed with these two properties, can outperform Softmax attention across various tasks while maintaining lower computational complexity. Code is available at https://github.com/LeapLabTHU/InLine.", "pdf": "https://openreview.net/pdf/31bad536df26e6977656df403fe28b991643b271.pdf"} {"title": "GoMatching: A Simple Baseline for Video Text Spotting via Long and Short Term Matching", "url": "https://openreview.net/forum?id=ASv9lQcHCc", "detail_url": "https://openreview.net/forum?id=ASv9lQcHCc", "authors": "Haibin He,Maoyuan Ye,Jing Zhang,Juhua Liu,Bo Du,Dacheng Tao", "tags": "NIPS 2024,Poster", "abstract": "Beyond the text detection and recognition tasks in image text spotting, video text spotting presents an augmented challenge with the inclusion of tracking. While advanced end-to-end trainable methods have shown commendable performance, the pursuit of multi-task optimization may pose the risk of producing sub-optimal outcomes for individual tasks. In this paper, we identify a main bottleneck in the state-of-the-art video text spotter: the limited recognition capability. In response to this issue, we propose to efficiently turn an off-the-shelf query-based image text spotter into a specialist on video and present a simple baseline termed GoMatching, which focuses the training efforts on tracking while maintaining strong recognition performance. To adapt the image text spotter to video datasets, we add a rescoring head to rescore each detected instance's confidence via efficient tuning, leading to a better tracking candidate pool. \nAdditionally, we design a long-short term matching module, termed LST-Matcher, to enhance the spotter's tracking capability by integrating both long- and short-term matching results via Transformer. Based on the above simple designs, GoMatching delivers new records on ICDAR15-video, DSText, BOVText, and our proposed novel test set with arbitrary-shaped text termed ArTVideo, which demonstrates GoMatching's capability to accommodate general, dense, small, arbitrary-shaped, Chinese and English text scenarios while saving considerable training budgets. The code will be released.", "pdf": "https://openreview.net/pdf/83c8a86469d2ccfb10b616932ba9eb573a1af1a4.pdf"} {"title": "Expanding Sparse Tuning for Low Memory Usage", "url": "https://openreview.net/forum?id=AbZyNGWfpN", "detail_url": "https://openreview.net/forum?id=AbZyNGWfpN", "authors": "Shufan Shen,Junshu Sun,Xiangyang Ji,Qingming Huang,Shuhui Wang", "tags": "NIPS 2024,Poster", "abstract": "Parameter-efficient fine-tuning (PEFT) is an effective method for adapting pre-trained vision models to downstream tasks by tuning a small subset of parameters. Among PEFT methods, sparse tuning achieves superior performance by only adjusting the weights most relevant to downstream tasks, rather than densely tuning the whole weight matrix. However, this performance improvement has been accompanied by increases in memory usage, which stems from two factors, i.e., the storage of the whole weight matrix as learnable parameters in the optimizer and the additional storage of tunable weight indexes. In this paper, we propose a method named SNELL (Sparse tuning with kerNELized LoRA) for sparse tuning with low memory usage.
To achieve low memory usage, SNELL decomposes the tunable matrix for sparsification into two learnable low-rank matrices, avoiding the costly storage of the whole original matrix. A competition-based sparsification mechanism is further proposed to avoid the storage of tunable weight indexes. To maintain the effectiveness of sparse tuning with low-rank matrices, we extend the low-rank decomposition by applying nonlinear kernel functions to the whole-matrix merging. Consequently, we gain an increase in the rank of the merged matrix, enhancing the ability of SNELL to adapt pre-trained models to downstream tasks. Extensive experiments on multiple downstream tasks show that SNELL achieves state-of-the-art performance with low memory usage, extending sparse tuning to large-scale models within the PEFT paradigm. Codes are available at https://github.com/ssfgunner/SNELL.", "pdf": "https://openreview.net/pdf/9f9ba39d91df45971e3cde402d0940a8f439ff49.pdf"} {"title": "Active preference learning for ordering items in- and out-of-sample", "url": "https://openreview.net/forum?id=PSLH5q7PFo", "detail_url": "https://openreview.net/forum?id=PSLH5q7PFo", "authors": "Herman Bergstr\u00f6m,Emil Carlsson,Devdatt Dubhashi,Fredrik D. Johansson", "tags": "NIPS 2024,Poster", "abstract": "Learning an ordering of items based on pairwise comparisons is useful when items are difficult to rate consistently on an absolute scale, for example, when annotators have to make subjective assessments. When exhaustive comparison is infeasible, actively sampling item pairs can reduce the number of annotations necessary for learning an accurate ordering. However, many algorithms ignore shared structure between items, limiting their sample efficiency and precluding generalization to new items. It is also common to disregard how noise in comparisons varies between item pairs, despite it being informative of item similarity. In this work, we study active preference learning for ordering items with contextual attributes, both in- and out-of-sample. We give an upper bound on the expected ordering error of a logistic preference model as a function of which items have been compared. Next, we propose an active learning strategy that samples items to minimize this bound by accounting for aleatoric and epistemic uncertainty in comparisons. We evaluate the resulting algorithm, and a variant aimed at reducing model misspecification, in multiple realistic ordering tasks with comparisons made by human annotators. Our results demonstrate superior sample efficiency and generalization compared to non-contextual ranking approaches and active preference learning baselines.", "pdf": "https://openreview.net/pdf/9d2a2347866b8b0435d2ec96c66f31d142d660c3.pdf"} {"title": "OPUS: Occupancy Prediction Using a Sparse Set", "url": "https://openreview.net/forum?id=ZyR0sRQrDd", "detail_url": "https://openreview.net/forum?id=ZyR0sRQrDd", "authors": "JiaBao Wang,Zhaojiang Liu,Qiang Meng,Liujiang Yan,Ke Wang,JIE YANG,Wei Liu,Qibin Hou,Ming-Ming Cheng", "tags": "NIPS 2024,Poster", "abstract": "Occupancy prediction, aiming at predicting the occupancy status within a voxelized 3D environment, is quickly gaining momentum within the autonomous driving community. Mainstream occupancy prediction works first discretize the 3D environment into voxels, then perform classification on such dense grids. However, inspection of sample data reveals that the vast majority of voxels are unoccupied.
Performing classification on these empty voxels leads to suboptimal allocation of computational resources, and reducing such empty voxels necessitates complex algorithm designs. To this end, we present a novel perspective on the occupancy prediction task: formulating it as a streamlined set prediction paradigm without the need for explicit space modeling or complex sparsification procedures. Our proposed framework, called OPUS, utilizes a transformer encoder-decoder architecture to simultaneously predict occupied locations and classes using a set of learnable queries. Firstly, we employ the Chamfer distance loss to scale the set-to-set comparison problem to unprecedented magnitudes, making end-to-end training of such a model a reality. Subsequently, semantic classes are adaptively assigned using nearest neighbor search based on the learned locations. In addition, OPUS incorporates a suite of non-trivial strategies to enhance model performance, including coarse-to-fine learning, consistent point sampling, and adaptive re-weighting. Finally, compared with current state-of-the-art methods, our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at nearly 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.", "pdf": "https://openreview.net/pdf/9bc95b7b8108f0b381e0120ce85262d9ef7570a2.pdf"} {"title": "Multi-scale Consistency for Robust 3D Registration via Hierarchical Sinkhorn Tree", "url": "https://openreview.net/forum?id=sfPxUqzdPI", "detail_url": "https://openreview.net/forum?id=sfPxUqzdPI", "authors": "Chengwei Ren,Yifan Feng,Weixiang Zhang,Xiao-Ping Zhang,Yue Gao", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of retrieving accurate correspondence through multi-scale consistency (MSC) for robust point cloud registration. Existing coarse-to-fine works either suffer from severely noisy correspondences caused by unreliable coarse matching or struggle to form outlier-free coarse-level correspondence sets. To tackle this, we present Hierarchical Sinkhorn Tree (HST), a pruned tree structure designed to hierarchically measure the local consistency of each coarse correspondence across multiple feature scales, thereby filtering out the locally dissimilar ones. In this way, we convert the modeling of MSC for each correspondence into a BFS traversal with pruning of a K-ary tree rooted at the superpoint, with its K nearest neighbors in the feature pyramid serving as child nodes. To achieve efficient pruning and accurate vicinity characterization, we further propose a novel overlap-aware Sinkhorn Distance, which retains only the most likely overlapping points for local measurement and next-level exploration. The modeling process essentially involves traversing a pair of HSTs synchronously and aggregating the consistency measures of corresponding tree nodes. Extensive experiments demonstrate HST consistently outperforms the state-of-the-art methods on both indoor and outdoor benchmarks.", "pdf": "https://openreview.net/pdf/ee535dc856d1d4bea09ea0ea17affe9d466274a5.pdf"} {"title": "Interactive Deep Clustering via Value Mining", "url": "https://openreview.net/forum?id=Y7HPB7pL1f", "detail_url": "https://openreview.net/forum?id=Y7HPB7pL1f", "authors": "Honglin Liu,Peng Hu,Changqing Zhang,Yunfan Li,Xi Peng", "tags": "NIPS 2024,Poster", "abstract": "In the absence of class priors, recent deep clustering methods resort to data augmentation and pseudo-labeling strategies to generate supervision signals.
Though they have achieved remarkable success, existing works struggle to discriminate hard samples at cluster boundaries, which are particularly challenging to mine due to their unreliable cluster assignments. To break such a performance bottleneck, we propose incorporating user interaction to facilitate clustering instead of exhaustively mining semantics from the data itself. To be exact, we present Interactive Deep Clustering (IDC), a plug-and-play method designed to boost the performance of pre-trained clustering models with minimal interaction overhead. More specifically, IDC first quantitatively evaluates sample values based on hardness, representativeness, and diversity, where the representativeness avoids selecting outliers and the diversity prevents the selected samples from collapsing into a small number of clusters. IDC then queries the cluster affiliations of high-value samples in a user-friendly manner. Finally, it utilizes the user feedback to finetune the pre-trained clustering model. Extensive experiments demonstrate that IDC could remarkably improve the performance of various pre-trained clustering models, with low user interaction costs. The code could be accessed at pengxi.me.", "pdf": "https://openreview.net/pdf/c5dc0df3d59683c7edb37a0587adeb92ce7d5a65.pdf"} {"title": "Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack", "url": "https://openreview.net/forum?id=RPChapuXlC", "detail_url": "https://openreview.net/forum?id=RPChapuXlC", "authors": "Tiansheng Huang,Sihao Hu,Fatih Ilhan,Selim Furkan Tekin,Ling Liu", "tags": "NIPS 2024,Poster", "abstract": "Recent studies show that Large Language Models (LLMs) with safety alignment can be jail-broken by fine-tuning on a dataset mixed with harmful data. For the first time in the literature, we show that the jail-break effect can be mitigated by separating two states in the fine-tuning stage to respectively optimize over the alignment and user datasets. Unfortunately, our subsequent study shows that this simple Bi-State Optimization (BSO) solution experiences convergence instability when the number of steps invested in its alignment state is too small, leading to downgraded alignment performance. By statistical analysis, we show that the \\textit{excess drift} towards the switching iterates of the two states could be a probable reason for the instability. To remedy this issue, we propose \\textbf{L}azy(\\textbf{i}) \\textbf{s}afety \\textbf{a}lignment (\\textbf{Lisa}), which introduces a proximal term to constrain the drift of each state. Theoretically, the benefit of the proximal term is supported by the convergence analysis, wherein we show that a sufficiently large proximal factor is necessary to guarantee Lisa's convergence. Empirically, our results on four downstream fine-tuning tasks show that Lisa with a proximal term can significantly increase alignment performance while maintaining the LLM's accuracy on the user tasks.
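The proximal idea above admits a compact schematic; this is not the released Lisa code, and the snapshotting scheme, function names, and hyperparameters are assumptions made for illustration.

```python
# Schematic bi-state step with a proximal term: in each state, the loss is
# augmented by 0.5 * rho * ||theta - theta_at_last_switch||^2 to limit drift.
import torch

def snapshot(model):
    """Frozen copy of parameters, taken whenever the optimizer switches states."""
    return [p.detach().clone() for p in model.parameters()]

def proximal_step(model, batch, loss_fn, switch_params, rho, optimizer):
    loss = loss_fn(model, batch)  # alignment-data loss or user-data loss, per state
    prox = sum(((p - q) ** 2).sum() for p, q in zip(model.parameters(), switch_params))
    optimizer.zero_grad()
    (loss + 0.5 * rho * prox).backward()
    optimizer.step()

# Outline: run k steps on alignment data, re-snapshot, run k steps on user
# data, re-snapshot, and repeat; rho trades off drift against task fit.
```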
Code is available at https://github.com/git-disl/Lisa.", "pdf": "https://openreview.net/pdf/fa55c75e7e028109e0278fcd68fa3c20db53aa8d.pdf"} {"title": "Vaccine: Perturbation-aware Alignment for Large Language Models against Harmful Fine-tuning Attack", "url": "https://openreview.net/forum?id=lpXDZKiAnt", "detail_url": "https://openreview.net/forum?id=lpXDZKiAnt", "authors": "Tiansheng Huang,Sihao Hu,Ling Liu", "tags": "NIPS 2024,Poster", "abstract": "The new paradigm of fine-tuning-as-a-service introduces a new attack surface for Large Language Models (LLMs): a small amount of harmful data uploaded by users can easily trick the fine-tuning into producing an alignment-broken model. We conduct an empirical analysis and uncover\na \\textit{harmful embedding drift} phenomenon, showing a probable \ncause of the alignment-broken effect. Inspired by our findings, we propose Vaccine, a perturbation-aware alignment technique to mitigate the security risk of user fine-tuning. The core idea of Vaccine is to produce invariant hidden embeddings by progressively adding crafted perturbation to them in the alignment phase. This enables the embeddings to withstand harmful perturbation from un-sanitized user data in the fine-tuning phase. Our results on open source mainstream LLMs (e.g., Llama2, Opt, Vicuna) demonstrate that Vaccine can boost the robustness of alignment against harmful-prompt-induced embedding drift while preserving reasoning ability on benign prompts. Our code is available at https://github.com/git-disl/Vaccine.", "pdf": "https://openreview.net/pdf/69c7003f7279c297f9e4e46a7c73c9cf3d0f0c5b.pdf"} {"title": "Cloud Object Detector Adaptation by Integrating Different Source Knowledge", "url": "https://openreview.net/forum?id=S8SEjerTTg", "detail_url": "https://openreview.net/forum?id=S8SEjerTTg", "authors": "Shuaifeng Li,Mao Ye,Lihua Zhou,Nianxin Li,Siying Xiao,Song Tang,Xiatian Zhu", "tags": "NIPS 2024,Poster", "abstract": "We propose to explore an interesting and promising problem, Cloud Object Detector Adaptation (CODA), where the target domain leverages detections provided by a large cloud model to build a target detector. Despite its powerful generalization capability, the cloud model still cannot achieve error-free detection in a specific target domain. In this work, we present a novel Cloud Object detector adaptation method by Integrating different source kNowledge (COIN). The key idea is to incorporate a public vision-language model (CLIP) to distill positive knowledge while refining negative knowledge for adaptation by self-promotion gradient direction alignment. To that end, knowledge dissemination, separation, and distillation are carried out successively. Knowledge dissemination combines knowledge from the cloud detector and the CLIP model to initialize a target detector and a CLIP detector in the target domain. By matching the CLIP detector with the cloud detector, knowledge separation categorizes detections into three parts: consistent, inconsistent, and private detections, such that a divide-and-conquer strategy can be used for knowledge distillation. Consistent and private detections are directly used to train the target detector, while inconsistent detections are fused based on a consistent knowledge generation network, which is trained by aligning the gradient direction of inconsistent detections to that of consistent detections, because it provides a direction toward an optimal target detector.
Experimental results demonstrate that the proposed COIN method achieves state-of-the-art performance.", "pdf": "https://openreview.net/pdf/35ef1f407462b300fcba7e8d806b4ebad63eac69.pdf"} {"title": "Learning 3D Equivariant Implicit Function with Patch-Level Pose-Invariant Representation", "url": "https://openreview.net/forum?id=aXS1pwMa8I", "detail_url": "https://openreview.net/forum?id=aXS1pwMa8I", "authors": "Xin Hu,Xiaole Tang,Ruixuan Yu,Jian Sun", "tags": "NIPS 2024,Poster", "abstract": "Implicit neural representation has gained popularity in modeling the continuous 3D surface for 3D representation and reconstruction. In this work, we are motivated by the fact that the local 3D patches repeatedly appear on 3D shapes/surfaces if the factor of poses is removed. Based on this observation, we propose the 3D patch-level equivariant implicit function (PEIF) based on the 3D patch-level pose-invariant representation, allowing us to reconstruct 3D surfaces by estimating equivariant displacement vector fields for query points. Specifically, our model is based on the pose-normalized query/patch pairs and enhanced by the proposed intrinsic patch geometry representation, modeling the intrinsic 3D patch geometry feature by learnable multi-head memory banks. Extensive experiments show that our model achieves state-of-the-art performance on multiple surface reconstruction datasets, and also exhibits better generalization to cross-dataset shapes and robustness to arbitrary rotations. Our code will be available at https://github.com/mathXin112/PEIF.git.", "pdf": "https://openreview.net/pdf/43343c161e1c011706fba5e6d3f362ddd8e2167c.pdf"} {"title": "VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks", "url": "https://openreview.net/forum?id=nvYDPF4LJK", "detail_url": "https://openreview.net/forum?id=nvYDPF4LJK", "authors": "Jiannan Wu,Muyan Zhong,Sen Xing,Zeqiang Lai,Zhaoyang Liu,Zhe Chen,Wenhai Wang,Xizhou Zhu,Lewei Lu,Tong Lu,Ping Luo,Yu Qiao,Jifeng Dai", "tags": "NIPS 2024,Poster", "abstract": "We present VisionLLM v2, an end-to-end generalist multimodal large language model (MLLM) that unifies visual perception, understanding, and generation within a single framework. Unlike traditional MLLMs limited to text output, VisionLLM v2 significantly broadens its application scope. It excels not only in conventional visual question answering (VQA) but also in open-ended, cross-domain vision tasks such as object localization, pose estimation, and image generation and editing. To this end, we propose a new information transmission mechanism termed ``super link'', as a medium to connect MLLM with task-specific decoders. It not only allows flexible transmission of task information and gradient feedback between the MLLM and multiple downstream decoders but also effectively resolves training conflicts in multi-tasking scenarios. In addition, to support the diverse range of tasks, we carefully collected and curated training data from hundreds of public vision and vision-language tasks. In this way, our model can be jointly trained end-to-end on hundreds of vision-language tasks and generalize to these tasks using a set of shared parameters through different user prompts, achieving performance comparable to task-specific models.
We believe VisionLLM v2 will offer a new perspective on the generalization of MLLMs.", "pdf": "https://openreview.net/pdf/88c6ba80e503fc8179139756925d2386d233098b.pdf"} {"title": "On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization", "url": "https://openreview.net/forum?id=IXRa8adMHX", "detail_url": "https://openreview.net/forum?id=IXRa8adMHX", "authors": "Alexander Tyurin,Peter Richt\u00e1rik", "tags": "NIPS 2024,Poster", "abstract": "We consider the decentralized stochastic asynchronous optimization setup, where many workers asynchronously calculate stochastic gradients and asynchronously communicate with each other using edges in a multigraph. For both homogeneous and heterogeneous setups, we prove new time complexity lower bounds under the assumption that computation and communication speeds are bounded by constants. After that, we develop a new nearly optimal method, Fragile SGD, and a new optimal method, Amelie SGD, that converge with arbitrary heterogeneous computation and communication speeds and match our lower bounds (up to a logarithmic factor in the homogeneous setting). Our time complexities are new, nearly optimal, and provably improve all previous asynchronous/synchronous stochastic methods in the decentralized setup.", "pdf": "https://openreview.net/pdf/c32a73243002c5447e1227445ed34f23994b8903.pdf"} {"title": "OT4P: Unlocking Effective Orthogonal Group Path for Permutation Relaxation", "url": "https://openreview.net/forum?id=pMJFaBzoG3", "detail_url": "https://openreview.net/forum?id=pMJFaBzoG3", "authors": "Yaming Guo,Chen Zhu,Hengshu Zhu,Tieru Wu", "tags": "NIPS 2024,Poster", "abstract": "Optimization over permutations is typically an NP-hard problem that arises extensively in ranking, matching, tracking, etc. Birkhoff polytope-based relaxation methods have made significant advancements, particularly in penalty-free optimization and probabilistic inference. Relaxation onto the orthogonal group offers unique potential advantages such as a lower representation dimension and preservation of inner products; however, equally effective approaches remain unexplored. To bridge the gap, we present a temperature-controlled differentiable transformation that maps unconstrained vector space to the orthogonal group, where the temperature, in the limit, concentrates orthogonal matrices near permutation matrices. This transformation naturally implements a parameterization for the relaxation of permutation matrices, allowing for gradient-based optimization of problems involving permutations. Additionally, by deriving a re-parameterized gradient estimator, this transformation also provides efficient stochastic optimization over the latent permutations. Extensive experiments involving the optimization over permutation matrices validate the effectiveness of the proposed method.", "pdf": "https://openreview.net/pdf/c3b6f09a1ec2460ff63325a7189dd7873a26b49b.pdf"} {"title": "Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations", "url": "https://openreview.net/forum?id=AUeTkSymOq", "detail_url": "https://openreview.net/forum?id=AUeTkSymOq", "authors": "Alexander Tyurin,Kaja Gruntkowska,Peter Richt\u00e1rik", "tags": "NIPS 2024,Poster", "abstract": "In practical distributed systems, workers are typically not homogeneous, and due to differences in hardware configurations and network conditions, can have highly varying processing times.
We consider smooth nonconvex finite-sum (empirical risk minimization) problems in this setup and introduce a new parallel method, Freya PAGE, designed to handle arbitrarily heterogeneous and asynchronous computations. By being robust to "stragglers" and adaptively ignoring slow computations, Freya PAGE offers significantly improved time complexity guarantees compared to all previous methods, including Asynchronous SGD, Rennala SGD, SPIDER, and PAGE, while requiring weaker assumptions. The algorithm relies on novel generic stochastic gradient collection strategies with theoretical guarantees that can be of interest on their own, and may be used in the design of future optimization methods. Furthermore, we establish a lower bound for smooth nonconvex finite-sum problems in the asynchronous setup, providing a fundamental time complexity limit. This lower bound is tight and demonstrates the optimality of Freya PAGE in the large-scale regime, i.e., when $\\sqrt{m} \\geq n,$ where $n$ is \\# of workers, and $m$ is \\# of data samples.", "pdf": "https://openreview.net/pdf/fd9c96c0aafddc1f06a74e91c66533e525fe2422.pdf"} {"title": "Are Self-Attentions Effective for Time Series Forecasting?", "url": "https://openreview.net/forum?id=iN43sJoib7", "detail_url": "https://openreview.net/forum?id=iN43sJoib7", "authors": "Dongbin Kim,Jinseong Park,Jaewook Lee,Hoki Kim", "tags": "NIPS 2024,Poster", "abstract": "Time series forecasting is crucial for applications across multiple domains and various scenarios. Although Transformers have dramatically advanced the landscape of forecasting, their effectiveness remains debated. Recent findings have indicated that simpler linear models might outperform complex Transformer-based approaches, highlighting the potential for more streamlined architectures. In this paper, we shift the focus from evaluating the overall Transformer architecture to specifically examining the effectiveness of self-attention for time series forecasting. To this end, we introduce a new architecture, Cross-Attention-only Time Series transformer (CATS), that rethinks the traditional transformer framework by eliminating self-attention and leveraging cross-attention mechanisms instead. \nBy establishing future horizon-dependent parameters as queries and enhanced parameter sharing, our model not only improves long-term forecasting accuracy but also reduces the number of parameters and memory usage. Extensive experiments across various datasets demonstrate that our model achieves superior performance with the lowest mean squared error and uses fewer parameters compared to existing models.\nThe implementation of our model is available at: https://github.com/dongbeank/CATS.", "pdf": "https://openreview.net/pdf/ef44e2de71717de8e0acde8f37eaa2c5f5b5500c.pdf"} {"title": "On Learning Multi-Modal Forgery Representation for Diffusion Generated Video Detection", "url": "https://openreview.net/forum?id=4bJufOS6No", "detail_url": "https://openreview.net/forum?id=4bJufOS6No", "authors": "Xiufeng Song,Xiao Guo,Jiache Zhang,Qirui Li,LEI BAI,Xiaoming Liu,Guangtao Zhai,Xiaohong Liu", "tags": "NIPS 2024,Poster", "abstract": "Large numbers of synthesized videos from diffusion models pose threats to information security and authenticity, leading to an increasing demand for generated content detection. However, existing video-level detection algorithms primarily focus on detecting facial forgeries and often fail to identify diffusion-generated content with a diverse range of semantics.
To advance the field of video forensics, we propose an innovative algorithm named Multi-Modal Detection (MM-Det) for detecting diffusion-generated videos. MM-Det utilizes the profound perceptual and comprehension abilities of Large Multi-modal Models (LMMs) by generating a Multi-Modal Forgery Representation (MMFR) from LMM's multi-modal space, enhancing its ability to detect unseen forgery content. Besides, MM-Det leverages an In-and-Across Frame Attention (IAFA) mechanism for feature augmentation in the spatio-temporal domain. A dynamic fusion strategy helps refine forgery representations during fusion. Moreover, we construct a comprehensive diffusion video dataset, called Diffusion Video Forensics (DVF), across a wide range of forgery videos. MM-Det achieves state-of-the-art performance in DVF, demonstrating the effectiveness of our algorithm. Both source code and DVF are available at https://github.com/SparkleXFantasy/MM-Det.", "pdf": "https://openreview.net/pdf/eca64a59d6062b7b412c7d0c3c4b25f9a2c5d86c.pdf"} {"title": "Soft ascent-descent as a stable and flexible alternative to flooding", "url": "https://openreview.net/forum?id=Y1ZsLONDI2", "detail_url": "https://openreview.net/forum?id=Y1ZsLONDI2", "authors": "Matthew J. Holland,Kosuke Nakatani", "tags": "NIPS 2024,Poster", "abstract": "As a heuristic for improving test accuracy in classification, the \"flooding\" method proposed by Ishida et al. (2020) sets a threshold for the average surrogate loss at training time; above the threshold, gradient descent is run as usual, but below the threshold, a switch to gradient *ascent* is made. While setting the threshold is non-trivial and is usually done with validation data, this simple technique has proved remarkably effective in terms of accuracy. On the other hand, what if we are also interested in other metrics such as model complexity or average surrogate loss at test time? As an attempt to achieve better overall performance with less fine-tuning, we propose a softened, pointwise mechanism called SoftAD (soft ascent-descent) that downweights points on the borderline, limits the effects of outliers, and retains the ascent-descent effect of flooding, with no additional computational overhead. We contrast formal stationarity guarantees with those for flooding, and empirically demonstrate how SoftAD can realize classification accuracy competitive with flooding (and the more expensive alternative SAM) while enjoying a much smaller loss generalization gap and model norm.", "pdf": "https://openreview.net/pdf/951f259f1b33272b545627df94924cce29f1e376.pdf"} {"title": "From Chaos to Clarity: 3DGS in the Dark", "url": "https://openreview.net/forum?id=lWHe7pmk7C", "detail_url": "https://openreview.net/forum?id=lWHe7pmk7C", "authors": "Zhihao Li,Yufei Wang,Alex Kot,Bihan Wen", "tags": "NIPS 2024,Poster", "abstract": "Novel view synthesis from raw images provides superior high dynamic range (HDR) information compared to reconstructions from low dynamic range RGB images. However, the inherent noise in unprocessed raw images compromises the accuracy of 3D scene representation. Our study reveals that 3D Gaussian Splatting (3DGS) is particularly susceptible to this noise, leading to numerous elongated Gaussian shapes that overfit the noise, thereby significantly degrading reconstruction quality and reducing inference speed, especially in scenarios with limited views.
To address these issues, we introduce a novel self-supervised learning framework designed to reconstruct HDR 3DGS from a limited number of noisy raw images. This framework enhances 3DGS by integrating a noise extractor and employing a noise-robust reconstruction loss that leverages a noise distribution prior. Experimental results show that our method outperforms LDR/HDR 3DGS and previous state-of-the-art (SOTA) self-supervised and supervised pre-trained models in both reconstruction quality and inference speed on the RawNeRF dataset across a broad range of training views. We will release the code upon paper acceptance.", "pdf": "https://openreview.net/pdf/4f75f8a55048a791aa6665df8a0ee829c5890bb3.pdf"} {"title": "An Analysis of Elo Rating Systems via Markov Chains", "url": "https://openreview.net/forum?id=kLiWXUdCEw", "detail_url": "https://openreview.net/forum?id=kLiWXUdCEw", "authors": "Sam Olesker-Taylor,Luca Zanetti", "tags": "NIPS 2024,Poster", "abstract": "We present a theoretical analysis of the Elo rating system, a popular method for ranking skills of players in an online setting. In particular, we study Elo under the Bradley-Terry-Luce model and, using techniques from Markov chain theory, show that Elo learns the model parameters at a rate competitive with the state-of-the-art. We apply our results to the problem of efficient tournament design and discuss a connection with the fastest-mixing Markov chain problem.", "pdf": "https://openreview.net/pdf/42165f1041ca152ea8a885495b76275febac1292.pdf"} {"title": "MemoryFormer : Minimize Transformer Computation by Removing Fully-Connected Layers", "url": "https://openreview.net/forum?id=04EC4ZnZJj", "detail_url": "https://openreview.net/forum?id=04EC4ZnZJj", "authors": "Ning Ding,Yehui Tang,Haochen Qin,Zhenli Zhou,Chao Xu,Lin Li,Kai Han,Liao Heng,Yunhe Wang", "tags": "NIPS 2024,Poster", "abstract": "In order to reduce the computational complexity of large language models, great efforts have been made to improve the efficiency of transformer models such as linear attention and flash-attention. However, the model size and corresponding computational complexity are constantly scaled up in pursuit of higher performance. In this work, we present MemoryFormer, a novel transformer architecture which significantly reduces the computational complexity (FLOPs) from a new perspective. We eliminate nearly all the computations of the transformer model except for the necessary computation required by the multi-head attention operation. This is made possible by utilizing an alternative method for feature transformation to replace the linear projection of fully-connected layers. Specifically, we first construct a group of in-memory lookup tables that store a large number of discrete vectors to replace the weight matrix used in linear projection. We then use a hash algorithm to retrieve a correlated subset of vectors dynamically based on the input embedding. The retrieved vectors are combined to form the output embedding, which provides an estimation of the result of the matrix multiplication operation in a fully-connected layer. Compared to conducting matrix multiplication, retrieving data blocks from memory is a much cheaper operation which requires little computation.
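A toy version of the lookup-table substitution described above; the sign-hash, chunking, and all sizes are our assumptions, and MemoryFormer's actual training procedure is not reproduced here (the hash below is treated as fixed and is not differentiable).

```python
# Hedged sketch: replace a linear layer by hashing input chunks into bucket
# indices and summing the retrieved learnable vectors (no matrix multiply
# against a full weight matrix at inference).
import torch
import torch.nn as nn

class HashLookupLinear(nn.Module):
    def __init__(self, dim_in, dim_out, num_chunks=8, bits=8):
        super().__init__()
        assert dim_in % num_chunks == 0
        self.num_chunks = num_chunks
        chunk = dim_in // num_chunks
        self.proj = nn.Parameter(torch.randn(num_chunks, chunk, bits))      # hashing hyperplanes
        self.tables = nn.Parameter(torch.randn(num_chunks, 2 ** bits, dim_out) * 0.02)
        self.register_buffer("pow2", 2 ** torch.arange(bits))

    def forward(self, x):                                    # x: (..., dim_in)
        xs = x.unflatten(-1, (self.num_chunks, -1))          # (..., C, chunk)
        codes = torch.einsum("...ci,cib->...cb", xs, self.proj) > 0
        idx = (codes.long() * self.pow2).sum(-1)             # (..., C) bucket ids
        out = 0.0
        for c in range(self.num_chunks):                     # gather and sum per chunk
            out = out + self.tables[c][idx[..., c]]
        return out                                            # (..., dim_out)
```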
We train MemoryFormer from scratch and conduct extensive experiments on various benchmarks to demonstrate the effectiveness of the proposed model.", "pdf": "https://openreview.net/pdf/3182fbee67403ec98f7f114ce0d5398a511c94cf.pdf"} {"title": "Rethinking Imbalance in Image Super-Resolution for Efficient Inference", "url": "https://openreview.net/forum?id=fyYrZbWtNz", "detail_url": "https://openreview.net/forum?id=fyYrZbWtNz", "authors": "Wei Yu,Bowen Yang,Qinglin Liu,Jianing Li,Shengping Zhang,Xiangyang Ji", "tags": "NIPS 2024,Poster", "abstract": "Existing super-resolution (SR) methods optimize all model weights equally using $\\mathcal{L}_1$ or $\\mathcal{L}_2$ losses by uniformly sampling image patches without considering dataset imbalances or parameter redundancy, which limits their performance. To address this, we formulate the image SR task as an imbalanced distribution transfer learning problem from a statistical probability perspective, proposing a plug-and-play Weight-Balancing framework (WBSR) to achieve balanced model learning without changing the original model structure and training data. Specifically, we develop a Hierarchical Equalization Sampling (HES) strategy to address data distribution imbalances, enabling better feature representation from texture-rich samples. To tackle model optimization imbalances, we propose a Balanced Diversity Loss (BDLoss) function, focusing on learning texture regions while disregarding redundant computations in smooth regions. After joint training of HES and BDLoss to rectify these imbalances, we present a gradient projection dynamic inference strategy to facilitate accurate and efficient inference. Extensive experiments across various models, datasets, and scale factors demonstrate that our method achieves comparable or superior performance to existing approaches with about 34\\% reduction in computational cost.", "pdf": "https://openreview.net/pdf/c2d8d46acf8b21841d395663e4394c0292149fc0.pdf"} {"title": "Can We Leave Deepfake Data Behind in Training Deepfake Detector?", "url": "https://openreview.net/forum?id=vh9yEPLeyD", "detail_url": "https://openreview.net/forum?id=vh9yEPLeyD", "authors": "Jikang Cheng,Zhiyuan Yan,Ying Zhang,Yuhao Luo,Zhongyuan Wang,Chen Li", "tags": "NIPS 2024,Poster", "abstract": "The generalization ability of deepfake detectors is vital for their applications in real-world scenarios. One effective solution to enhance this ability is to train the models with manually-blended data, which we termed ''blendfake'', encouraging models to learn generic forgery artifacts like blending boundary. Interestingly, current SoTA methods utilize blendfake $\\textit{without}$ incorporating any deepfake data in their training process. This is likely because previous empirical observations suggest that vanilla hybrid training (VHT), which combines deepfake and blendfake data, results in inferior performance to methods using only blendfake data (so-called \"1+1<2\"). Therefore, a critical question arises: Can we leave deepfake behind and rely solely on blendfake data to train an effective deepfake detector? Intuitively, as deepfakes also contain additional informative forgery clues ($\\textit{e.g.,}$ deep generative artifacts), excluding all deepfake data in training deepfake detectors seems counter-intuitive. In this paper, we rethink the role of blendfake in detecting deepfakes and formulate the process from \"real to blendfake to deepfake\" to be a $\\textit{progressive transition}$. 
Specifically, blendfake and deepfake can be explicitly delineated as the oriented pivot anchors between "real-to-fake" transitions. The accumulation of forgery information should be oriented and progressively increasing during this transition process. To this end, we propose an $\\underline{O}$riented $\\underline{P}$rogressive $\\underline{R}$egularizor (OPR) to establish the constraints that compel the distribution of anchors to be discretely arranged. Furthermore, we introduce feature bridging to facilitate the smooth transition between adjacent anchors. Extensive experiments confirm that our design allows leveraging forgery information from both blendfake and deepfake effectively and comprehensively. Code is available at https://github.com/beautyremain/ProDet.", "pdf": "https://openreview.net/pdf/c5f94fbd9d60ce8c8dc283b8970f02e3f631bd22.pdf"} {"title": "SPO: Sequential Monte Carlo Policy Optimisation", "url": "https://openreview.net/forum?id=XKvYcPPH5G", "detail_url": "https://openreview.net/forum?id=XKvYcPPH5G", "authors": "Matthew Macfarlane,Edan Toledo,Donal John Byrne,Paul Duckworth,Alexandre Laterre", "tags": "NIPS 2024,Poster", "abstract": "Leveraging planning during learning and decision-making is central to the long-term development of intelligent agents. Recent works have successfully combined tree-based search methods and self-play learning mechanisms to this end. However, these methods typically face scaling challenges due to the sequential nature of their search. While practical engineering solutions can partly overcome this, they often result in a negative impact on performance. In this paper, we introduce SPO: Sequential Monte Carlo Policy Optimisation, a model-based reinforcement learning algorithm grounded within the Expectation Maximisation (EM) framework. We show that SPO provides robust policy improvement and efficient scaling properties. The sample-based search makes it directly applicable to both discrete and continuous action spaces without modifications. We demonstrate statistically significant improvements in performance relative to model-free and model-based baselines across both continuous and discrete environments. Furthermore, the parallel nature of SPO\u2019s search enables effective utilisation of hardware accelerators, yielding favourable scaling laws.", "pdf": "https://openreview.net/pdf/62ca7358130cf69057f00f903b75cf5f8d15a86b.pdf"} {"title": "Mitigating Object Hallucination via Concentric Causal Attention", "url": "https://openreview.net/forum?id=CIRPE1bSmV", "detail_url": "https://openreview.net/forum?id=CIRPE1bSmV", "authors": "Yun Xing,Yiheng Li,Ivan Laptev,Shijian Lu", "tags": "NIPS 2024,Poster", "abstract": "Recent Large Vision Language Models (LVLMs) present remarkable zero-shot conversational and reasoning capabilities given multimodal queries. Nevertheless, they suffer from object hallucination, a phenomenon where LVLMs are prone to generating textual responses not factually aligned with image inputs. Our pilot study reveals that object hallucination is closely tied with Rotary Position Encoding (RoPE), a widely adopted positional dependency modeling design in existing LVLMs. Due to the long-term decay in RoPE, LVLMs tend to hallucinate more when relevant visual cues are distant from instruction tokens in the multimodal input sequence. Additionally, we observe a similar effect when reversing the sequential order of visual tokens during multimodal alignment.
Our tests indicate that long-term decay in RoPE poses challenges to LVLMs in capturing visual-instruction interactions across long distances. We propose Concentric Causal Attention (CCA), a simple yet effective positional alignment strategy that mitigates the impact of RoPE long-term decay in LVLMs by naturally reducing relative distance between visual and instruction tokens. With CCA, visual tokens can better interact with instruction tokens, thereby enhancing the model's perception capability and alleviating object hallucination. Without bells and whistles, our positional alignment method surpasses existing hallucination mitigation strategies by large margins on multiple object hallucination benchmarks.", "pdf": "https://openreview.net/pdf/0fdbe640f2d0df7aa1ac021c542f4cb301f131d3.pdf"} {"title": "Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity", "url": "https://openreview.net/forum?id=O8yHsRLwPl", "detail_url": "https://openreview.net/forum?id=O8yHsRLwPl", "authors": "Alexander Tyurin,Marta Pozzi,Ivan Ilin,Peter Richt\u00e1rik", "tags": "NIPS 2024,Poster", "abstract": "We consider nonconvex stochastic optimization problems in the asynchronous centralized distributed setup where the communication times from workers to a server cannot be ignored, and the computation and communication times are potentially different for all workers. Using an unbiased compression technique, we develop a new method\u2014Shadowheart SGD\u2014that provably improves the time complexities of all previous centralized methods. Moreover, we show that the time complexity of Shadowheart SGD is optimal in the family of centralized methods with compressed communication. We also consider the bidirectional setup, where broadcasting from the server to the workers is non-negligible, and develop a corresponding method.", "pdf": "https://openreview.net/pdf/f17bca98b113dae10833f48c690422ffbabb84dc.pdf"} {"title": "A Consistency-Aware Spot-Guided Transformer for Versatile and Hierarchical Point Cloud Registration", "url": "https://openreview.net/forum?id=btLLWaOrFs", "detail_url": "https://openreview.net/forum?id=btLLWaOrFs", "authors": "Renlang Huang,Yufan Tang,Jiming Chen,Liang Li", "tags": "NIPS 2024,Poster", "abstract": "Deep learning-based feature matching has shown great superiority for point cloud registration in the absence of pose priors. Although coarse-to-fine matching approaches are prevalent, the coarse matching of existing methods is typically sparse and loose without consideration of geometric consistency, which makes the subsequent fine matching rely on ineffective optimal transport and hypothesis-and-selection methods for consistency. Therefore, these methods are neither efficient nor scalable for real-time applications such as odometry in robotics. To address these issues, we design a consistency-aware spot-guided Transformer (CAST), which incorporates a spot-guided cross-attention module to avoid interfering with irrelevant areas, and a consistency-aware self-attention module to enhance matching capabilities with geometrically consistent correspondences. Furthermore, a lightweight fine matching module for both sparse keypoints and dense features can estimate the transformation accurately.
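The spot-guided restriction described above can be sketched as follows; restricting attention to the k nearest target points around each source point's current best match is our assumed simplification of the "spot" mechanism.

```python
# Hedged sketch of spot-guided cross-attention: each source superpoint attends
# only within a local "spot" of the target cloud around its best match.
import torch

def spot_guided_cross_attention(src_feat, tgt_feat, tgt_xyz, k=16):
    # src_feat: (N, d); tgt_feat: (M, d); tgt_xyz: (M, 3); requires M >= k
    sim = src_feat @ tgt_feat.T                                  # (N, M) feature similarity
    best = sim.argmax(dim=1)                                     # current best match per source point
    d2 = ((tgt_xyz[best][:, None, :] - tgt_xyz[None, :, :]) ** 2).sum(-1)  # (N, M)
    spot = d2.topk(k, largest=False).indices                     # (N, k) neighbors of the match
    logits = torch.gather(sim, 1, spot) / src_feat.shape[1] ** 0.5
    attn = torch.softmax(logits, dim=1)                          # attention inside the spot only
    return (attn[..., None] * tgt_feat[spot]).sum(dim=1)         # (N, d) aggregated message
```

The design intent, as we read the abstract, is that ignoring far-away target regions both cuts cost and avoids attending to geometrically irrelevant areas.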
Extensive experiments on both outdoor LiDAR point cloud datasets and indoor RGBD point cloud datasets demonstrate that our method achieves state-of-the-art accuracy, efficiency, and robustness.", "pdf": "https://openreview.net/pdf/0ebedb57564b241db75971993a3c60eeffde03dd.pdf"} {"title": "Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models", "url": "https://openreview.net/forum?id=jsgYYXaSiS", "detail_url": "https://openreview.net/forum?id=jsgYYXaSiS", "authors": "Ce Zhang,Simon Stepputtis,Katia P. Sycara,Yaqi Xie", "tags": "NIPS 2024,Poster", "abstract": "Test-time adaptation, which enables models to generalize to diverse data with unlabeled test samples, holds significant value in real-world scenarios. Recently, researchers have applied this setting to advanced pre-trained vision-language models (VLMs), developing approaches such as test-time prompt tuning to further extend their practical applicability. However, these methods typically focus solely on adapting VLMs from a single modality and fail to accumulate task-specific knowledge as more samples are processed. To address this, we introduce Dual Prototype Evolving (DPE), a novel test-time adaptation approach for VLMs that effectively accumulates task-specific knowledge from multi-modalities. Specifically, we create and evolve two sets of prototypes\u2014textual and visual\u2014to progressively capture more accurate multi-modal representations for target classes during test time. Moreover, to promote consistent multi-modal representations, we introduce and optimize learnable residuals for each test sample to align the prototypes from both modalities. Extensive experimental results on 15 benchmark datasets demonstrate that our proposed DPE consistently outperforms previous state-of-the-art methods while also exhibiting competitive computational efficiency.", "pdf": "https://openreview.net/pdf/2e765cd5aec6d33cb021ceae32ccd052f0f7823e.pdf"} {"title": "DEL: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering", "url": "https://openreview.net/forum?id=2nvkD0sPOk", "detail_url": "https://openreview.net/forum?id=2nvkD0sPOk", "authors": "Jiaxu Wang,Jingkai SUN,Ziyi Zhang,Junhao He,Qiang Zhang,Mingyuan Sun,Renjing Xu", "tags": "NIPS 2024,Poster", "abstract": "Learning-based simulators show great potential for simulating particle dynamics when 3D groundtruth is available, but per-particle correspondences are not always accessible. The development of neural rendering presents a new solution to this field to learn 3D dynamics from 2D images by inverse rendering. \nHowever, existing approaches still suffer from an ill-posed nature resulting from 2D-to-3D uncertainty; for example, a specific 2D image can correspond to various 3D particle distributions. To mitigate such uncertainty, we consider a conventional, mechanically interpretable framework as the physical priors and extend it to a learning-based version. In brief, we incorporate the learnable graph kernels into the classic Discrete Element Analysis (DEA) framework to implement a novel mechanics-informed network architecture. In this case, the graph networks are only used for approximating some specific mechanical operators in the DEA framework rather than the whole dynamics mapping. By integrating the strong physics priors, our method can effectively learn the dynamics of various materials from the partial 2D observations in a unified manner.
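A cartoon of the mechanics-informed layout described above: a learnable graph kernel replaces only the inter-particle force operator, while time integration remains explicit physics. The force parameterization and the symplectic Euler integrator are our assumptions, not DEL's exact operators.

```python
# Hedged sketch: graph kernel approximates pairwise contact forces; the rest
# of the update is a conventional explicit integration step.
import torch
import torch.nn as nn

class LearnableForceKernel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # input: relative position (3) + relative velocity (3) + distance (1)
        self.mlp = nn.Sequential(nn.Linear(7, hidden), nn.SiLU(), nn.Linear(hidden, 3))

    def forward(self, pos, vel, edges):
        i, j = edges                                     # LongTensors of interacting pairs
        rel = torch.cat([pos[j] - pos[i], vel[j] - vel[i],
                         (pos[j] - pos[i]).norm(dim=-1, keepdim=True)], dim=-1)
        f = self.mlp(rel)                                # learned pairwise force
        return torch.zeros_like(pos).index_add(0, i, f)  # sum of forces per particle

def step(pos, vel, edges, kernel, mass=1.0, dt=1e-3, g=(0.0, -9.8, 0.0)):
    acc = kernel(pos, vel, edges) / mass + torch.tensor(g)
    vel = vel + dt * acc                                 # explicit (symplectic Euler) update
    return pos + dt * vel, vel
```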
Experiments show that our approach outperforms other learned simulators by a large margin in this context and is robust to different renderers, fewer training samples, and fewer camera views.", "pdf": "https://openreview.net/pdf/d6fa7c33d9c5cdb816a1f37511c85430dcca32d3.pdf"} {"title": "UniMTS: Unified Pre-training for Motion Time Series", "url": "https://openreview.net/forum?id=DpByqSbdhI", "detail_url": "https://openreview.net/forum?id=DpByqSbdhI", "authors": "Xiyuan Zhang,Diyan Teng,Ranak Roy Chowdhury,Shuheng Li,Dezhi Hong,Rajesh K. Gupta,Jingbo Shang", "tags": "NIPS 2024,Poster", "abstract": "Motion time series collected from low-power, always-on mobile and wearable devices such as smartphones and smartwatches offer significant insights into human behavioral patterns, with wide applications in healthcare, automation, IoT, and AR/XR. However, given security and privacy concerns, building large-scale motion time series datasets remains difficult, hindering the development of pre-trained models for human activity analysis. Typically, existing models are trained and tested on the same dataset, leading to poor generalizability across variations in device location, device mounting orientation, and human activity type. In this paper, we introduce UniMTS, the first unified pre-training procedure for motion time series that generalizes across diverse device latent factors and activities. Specifically, we employ a contrastive learning framework that aligns motion time series with text descriptions enriched by large language models. This helps the model learn the semantics of time series to generalize across activities. Given the absence of large-scale motion time series data, we derive and synthesize time series from existing motion skeleton data with all-joint coverage. We use spatio-temporal graph networks to capture the relationships across joints for generalization across different device locations. We further design rotation-invariant augmentation to make the model agnostic to changes in device mounting orientations. Our model shows exceptional generalizability across 18 motion time series classification benchmark datasets, outperforming the best baselines by 340% in the zero-shot setting, 16.3% in the few-shot setting, and 9.2% in the full-shot setting.", "pdf": "https://openreview.net/pdf/084e8cefc22387e71d119326451e7ade819c6e8f.pdf"} {"title": "DOFEN: Deep Oblivious Forest ENsemble", "url": "https://openreview.net/forum?id=umukvCdGI6", "detail_url": "https://openreview.net/forum?id=umukvCdGI6", "authors": "Kuan-Yu Chen,Ping-Han Chiang,Hsin-Rung Chou,Chih-Sheng Chen,Tien-Hao Chang", "tags": "NIPS 2024,Poster", "abstract": "Deep Neural Networks (DNNs) have revolutionized artificial intelligence, achieving impressive results on diverse data types, including images, videos, and texts. However, DNNs still lag behind Gradient Boosting Decision Trees (GBDT) on tabular data, a format extensively utilized across various domains. This paper introduces DOFEN, which stands for Deep Oblivious Forest ENsemble. DOFEN is a novel DNN architecture inspired by oblivious decision trees and achieves on-off sparse selection of columns. DOFEN surpasses other DNNs on tabular data, achieving state-of-the-art performance on the well-recognized benchmark: Tabular Benchmark, which includes 73 total datasets spanning a wide array of domains. 
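For intuition about the oblivious-tree inspiration mentioned above, here is a generic soft oblivious tree, not the DOFEN architecture itself; the soft column selection and sigmoid splits are our assumed stand-ins.

```python
# Hedged sketch: every level of an oblivious tree applies one shared soft
# split, so a depth-d tree routes each sample into 2^d leaves.
import torch
import torch.nn as nn

class SoftObliviousTree(nn.Module):
    def __init__(self, num_features, depth=4, num_out=1):
        super().__init__()
        self.feat_logits = nn.Parameter(torch.randn(depth, num_features))  # soft column choice
        self.thresholds = nn.Parameter(torch.zeros(depth))
        self.leaves = nn.Parameter(torch.randn(2 ** depth, num_out) * 0.1)
        self.depth = depth

    def forward(self, x):                                # x: (B, num_features)
        sel = torch.softmax(self.feat_logits, dim=-1)    # one shared split per level
        right = torch.sigmoid(x @ sel.T - self.thresholds)  # (B, depth) soft decisions
        probs = torch.ones(x.shape[0], 1, device=x.device)
        for lvl in range(self.depth):                    # leaf prob = product of decisions
            p = right[:, lvl:lvl + 1]
            probs = torch.cat([probs * (1 - p), probs * p], dim=1)
        return probs @ self.leaves                       # (B, num_out)

# An ensemble would average many such trees; DOFEN's on-off sparse column
# selection replaces the soft softmax choice used here.
```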
The code of DOFEN is available at: https://github.com/Sinopac-Digital-Technology-Division/DOFEN", "pdf": "https://openreview.net/pdf/b367f18c85c92e5b35b60e25154c9840ac8be956.pdf"} {"title": "BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping", "url": "https://openreview.net/forum?id=8tOYl6WsGY", "detail_url": "https://openreview.net/forum?id=8tOYl6WsGY", "authors": "Taolin Zhang,Jinpeng Wang,Hang Guo,Tao Dai,Bin Chen,Shu-Tao Xia", "tags": "NIPS 2024,Poster", "abstract": "Adaptation of \npretrained vision-language models such as CLIP to various downstream tasks has raised great interest in recent research. \nPrevious works have proposed a variety of test-time adaptation (TTA) methods to achieve strong generalization without any knowledge of the target domain. \nHowever, existing training-required TTA approaches like TPT necessitate entropy minimization that involves large computational overhead, while training-free methods like TDA overlook the potential for information mining from the test samples themselves.\nIn this paper, we break down the design of existing popular training-required and training-free TTA methods and bridge the gap between them within our framework.\nSpecifically, we maintain a light-weight key-value memory for feature retrieval from instance-agnostic historical samples and instance-aware boosting samples. \nThe historical samples are filtered from the testing data stream and serve to extract useful information from the target distribution, while the boosting samples are drawn from regional bootstrapping and capture the knowledge of the test sample itself.\nWe theoretically justify the rationality behind our method and empirically verify its effectiveness on both the out-of-distribution and the cross-domain datasets, showcasing its applicability in real-world situations.", "pdf": "https://openreview.net/pdf/4ec41b4d4363dbb9074ceb1f5720665d584780d2.pdf"} {"title": "Reinforced Cross-Domain Knowledge Distillation on Time Series Data", "url": "https://openreview.net/forum?id=tUHABDZP0Q", "detail_url": "https://openreview.net/forum?id=tUHABDZP0Q", "authors": "QING XU,Min Wu,Xiaoli Li,Kezhi Mao,Zhenghua Chen", "tags": "NIPS 2024,Poster", "abstract": "Unsupervised domain adaptation methods have demonstrated superior capabilities in handling the domain shift issue which widely exists in various time series tasks. However, their prominent adaptation performances heavily rely on complex model architectures, posing an unprecedented challenge in deploying them on resource-limited devices for real-time monitoring. Existing approaches, which integrate knowledge distillation into domain adaptation frameworks to simultaneously address domain shift and model complexity, often neglect the network capacity gap between teacher and student and just coarsely align their outputs over all source and target samples, resulting in poor distillation efficiency. Thus, in this paper, we propose an innovative framework named Reinforced Cross-Domain Knowledge Distillation (RCD-KD) which can effectively adapt to the student's network capacity by dynamically selecting suitable target-domain samples for knowledge transfer. Particularly, a reinforcement learning-based module with a novel reward function is proposed to learn the optimal target sample selection policy based on the student's capacity. Meanwhile, a domain discriminator is designed to transfer the domain invariant knowledge. 
Empirical results and analyses on four public time series datasets demonstrate the effectiveness of our proposed method over other state-of-the-art benchmarks.", "pdf": "https://openreview.net/pdf/61019d031c791930dc4224fd05e54124ae9c4697.pdf"} {"title": "CausalStock: Deep End-to-end Causal Discovery for News-driven Multi-stock Movement Prediction", "url": "https://openreview.net/forum?id=5BXXoJh0Vr", "detail_url": "https://openreview.net/forum?id=5BXXoJh0Vr", "authors": "Shuqi Li,Yuebo Sun,Yuxin Lin,Xin Gao,Shuo Shang,Rui Yan", "tags": "NIPS 2024,Poster", "abstract": "There are two issues in news-driven multi-stock movement prediction tasks that are not well addressed in existing works. On the one hand, \"relation discovery\" is a pivotal part when leveraging the price information of other stocks to achieve accurate stock movement prediction. Given that stock relations are often unidirectional, such as the \"supplier-consumer\" relationship, causal relations are more appropriate to capture the impact between stocks. On the other hand, substantial noise in the news data makes it difficult to extract effective information. With these two issues in mind, we propose a novel framework called CausalStock for news-driven multi-stock movement prediction, which discovers the temporal causal relations between stocks. We design a lag-dependent temporal causal discovery mechanism to model the temporal causal graph distribution. Then a Functional Causal Model is employed to encapsulate the discovered causal relations and predict the stock movements. Additionally, we propose a Denoised News Encoder by taking advantage of the excellent text evaluation ability of large language models (LLMs) to extract useful information from massive news data. The experiment results show that CausalStock outperforms the strong baselines for both news-driven multi-stock movement prediction and multi-stock movement prediction tasks on six real-world datasets collected from the US, China, Japan, and UK markets. Moreover, benefiting from the discovered causal relations, CausalStock offers a clear prediction mechanism with good explainability.", "pdf": "https://openreview.net/pdf/d34a4ac7746e859a357a63867c2999b52656a91e.pdf"} {"title": "Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis", "url": "https://openreview.net/forum?id=O5XbOoi0x3", "detail_url": "https://openreview.net/forum?id=O5XbOoi0x3", "authors": "Yuxi Ren,Xin Xia,Yanzuo Lu,Jiacheng Zhang,Jie Wu,Pan Xie,XING WANG,Xuefeng Xiao", "tags": "NIPS 2024,Poster", "abstract": "Recently, a series of diffusion-aware distillation algorithms have emerged to alleviate the computational overhead associated with the multi-step inference process of Diffusion Models (DMs). Current distillation techniques often fall into one of two distinct categories: i) ODE Trajectory Preservation; and ii) ODE Trajectory Reformulation. However, these approaches suffer from severe performance degradation or domain shifts. To address these limitations, we propose Hyper-SD, a novel framework that synergistically amalgamates the advantages of ODE Trajectory Preservation and Reformulation, while maintaining near-lossless performance during step compression. Firstly, we introduce Trajectory Segmented Consistency Distillation to progressively perform consistent distillation within pre-defined time-step segments, which facilitates the preservation of the original ODE trajectory from a higher-order perspective. 
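To make the segment idea concrete, here is a schematic of consistency distillation restricted to time-step segments, reflecting our reading of the abstract rather than the authors' implementation; `f_theta`, `f_ema`, and `teacher_ode_step` are hypothetical stand-ins.

```python
import torch

def tscd_loss(f_theta, f_ema, teacher_ode_step, x_t, t, segment_edges):
    """Schematic of trajectory-segmented consistency distillation (a sketch,
    not the paper's code). Vanilla consistency training maps every noisy x_t
    all the way to t=0; here each x_t is only mapped to the start of its own
    time segment, preserving the teacher ODE trajectory piecewise."""
    # locate the lower edge of the segment containing each t
    seg = segment_edges[torch.searchsorted(segment_edges, t, right=True) - 1]
    # one teacher ODE solver step from (x_t, t) toward the segment edge
    x_prev, t_prev = teacher_ode_step(x_t, t, seg)
    # self-consistency within the segment, with a stop-gradient EMA target
    return torch.mean((f_theta(x_t, t) - f_ema(x_prev, t_prev).detach()) ** 2)
```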
Secondly, we incorporate human feedback learning to boost the performance of the model in a low-step regime and mitigate the performance loss incurred by the distillation process. Thirdly, we integrate score distillation to further improve the low-step generation capability of the model and offer the first attempt to leverage a unified LoRA to support the inference process at all steps. Extensive experiments and user studies demonstrate that Hyper-SD achieves SOTA performance from 1 to 8 inference steps for both SDXL and SD1.5. For example, Hyper-SDXL surpasses SDXL-Lightning by +0.68 in CLIP Score and +0.51 in Aes Score in the 1-step inference.", "pdf": "https://openreview.net/pdf/19490734899d96e4bd026dfa37e4f50c32430a03.pdf"} {"title": "Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning", "url": "https://openreview.net/forum?id=Cw7Agrr8GJ", "detail_url": "https://openreview.net/forum?id=Cw7Agrr8GJ", "authors": "Jiapu Wang,Kai Sun,LINHAO LUO,Wei Wei,Yongli Hu,Alan Wee-Chung Liew,Shirui Pan,Baocai Yin", "tags": "NIPS 2024,Poster", "abstract": "Temporal Knowledge Graph Reasoning (TKGR) is the process of utilizing temporal information to capture complex relations within a Temporal Knowledge Graph (TKG) to infer new knowledge. Conventional methods in TKGR typically depend on deep learning algorithms or temporal logical rules. However, deep learning-based TKGRs often lack interpretability, whereas rule-based TKGRs struggle to effectively learn temporal rules that capture temporal patterns. Recently, Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning. Consequently, the employment of LLMs for Temporal Knowledge Graph Reasoning (TKGR) has sparked increasing interest among researchers. Nonetheless, LLMs are known to function as black boxes, making it challenging to comprehend their reasoning process. Additionally, due to the resource-intensive nature of fine-tuning, promptly updating LLMs to integrate evolving knowledge within TKGs for reasoning is impractical. To address these challenges, in this paper, we propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on TKGs. Specifically, LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules. These rules unveil temporal patterns and facilitate interpretable reasoning. To account for the evolving nature of TKGs, a dynamic adaptation strategy is proposed to update the LLM-generated rules with the latest events. This ensures that the extracted rules always incorporate the most recent knowledge and better generalize to the predictions on future events. Experimental results show that without the need of fine-tuning, LLM-DA significantly improves the accuracy of reasoning over several common datasets, providing a robust framework for TKGR tasks.", "pdf": "https://openreview.net/pdf/fe070085910162a656ee4755cd6d8df70fa8d0bc.pdf"} {"title": "TAPTRv2: Attention-based Position Update Improves Tracking Any Point", "url": "https://openreview.net/forum?id=Cx2O6Xz03H", "detail_url": "https://openreview.net/forum?id=Cx2O6Xz03H", "authors": "Hongyang Li,Hao Zhang,Shilong Liu,Zhaoyang Zeng,Feng Li,Bohan Li,Tianhe Ren,Lei Zhang", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we present TAPTRv2, a Transformer-based approach built upon TAPTR for solving the Tracking Any Point (TAP) task. 
TAPTR borrows designs from DEtection TRansformer (DETR) and formulates each tracking point as a point query, making it possible to leverage well-studied operations in DETR-like algorithms. TAPTRv2 improves TAPTR by addressing a critical issue regarding its reliance on cost-volume, which contaminates the point query\u2019s content feature and negatively impacts both visibility prediction and cost-volume computation. In TAPTRv2, we propose a novel attention-based position update (APU) operation and use key-aware deformable attention to realize it. For each query, this operation uses key-aware attention weights to combine its corresponding deformable sampling positions to predict a new query position. This design is based on the observation that local attention is essentially the same as cost-volume, both of which are computed by dot products between a query and its surrounding features. By introducing this new operation, TAPTRv2 not only removes the extra burden of cost-volume computation, but also leads to a substantial performance improvement. TAPTRv2 surpasses TAPTR and achieves state-of-the-art performance on many challenging datasets, demonstrating the effectiveness of our approach.", "pdf": "https://openreview.net/pdf/3ecc6c6d7700bc1318396853854712d909d571c0.pdf"} {"title": "CV-VAE: A Compatible Video VAE for Latent Generative Video Models", "url": "https://openreview.net/forum?id=8z4isrqbcf", "detail_url": "https://openreview.net/forum?id=8z4isrqbcf", "authors": "Sijie Zhao,Yong Zhang,Xiaodong Cun,Shaoshu Yang,Muyao Niu,Xiaoyu Li,Wenbo Hu,Ying Shan", "tags": "NIPS 2024,Poster", "abstract": "Spatio-temporal compression of videos, utilizing networks such as Variational Autoencoders (VAE), plays a crucial role in OpenAI's SORA and numerous other video generative models. For instance, many LLM-like video models learn the distribution of discrete tokens derived from 3D VAEs within the VQVAE framework, while most diffusion-based video models capture the distribution of continuous latent extracted by 2D VAEs without quantization. The temporal compression is simply realized by uniform frame sampling, which results in non-smooth motion between consecutive frames. Currently, the research community lacks a commonly used continuous video (3D) VAE for latent diffusion-based video models. Moreover, since current diffusion-based approaches are often implemented using pre-trained text-to-image (T2I) models, directly training a video VAE without considering the compatibility with existing T2I models will result in a latent space gap between them, and bridging this gap requires huge computational resources for training, even with the T2I models as initialization. To address this issue, we propose a method for training a video VAE of latent video models, namely CV-VAE, whose latent space is compatible with that of a given image VAE, e.g., image VAE of Stable Diffusion (SD). The compatibility is achieved by the proposed novel latent space regularization, which involves formulating a regularization loss using the image VAE. Benefiting from the latent space compatibility, video models can be trained seamlessly from pre-trained T2I or video models in a truly spatio-temporally compressed latent space, rather than simply sampling video frames at equal intervals. To improve the training efficiency, we also design a novel architecture for the video VAE. With our CV-VAE, existing video models can generate four times more frames with minimal finetuning. 
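One plausible shape for the latent-space regularization CV-VAE describes, sketched under our own assumptions (the encoder interfaces are hypothetical, channel dimensions are assumed to match, and the paper's exact loss may differ): a frozen image VAE provides per-frame reference latents that the video VAE's latents are pulled toward.

```python
import torch
import torch.nn.functional as F

def latent_compat_loss(video_vae, image_vae, video):
    """Sketch of a latent-space regularization in the spirit of CV-VAE:
    keep the 3D video VAE's latents close to what the frozen 2D image VAE
    produces frame by frame, so the video latent space stays compatible
    with pre-trained T2I models. video: (B, C, T, H, W)."""
    z_video = video_vae.encode(video)              # (B, C', T', H', W')
    b, c, t, h, w = video.shape
    frames = video.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    with torch.no_grad():
        z_img = image_vae.encode(frames)           # frozen reference latents
    z_img = z_img.reshape(b, t, *z_img.shape[1:]).permute(0, 2, 1, 3, 4)
    # align temporal/spatial resolutions (the video VAE also compresses time)
    z_img = F.interpolate(z_img, size=z_video.shape[2:], mode="trilinear")
    return F.mse_loss(z_video, z_img)
```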
Extensive experiments are conducted to demonstrate the effectiveness of the proposed video VAE.", "pdf": "https://openreview.net/pdf/881b9319244168ea714d5bf0655edfd38e65a249.pdf"} {"title": "A Closer Look at the CLS Token for Cross-Domain Few-Shot Learning", "url": "https://openreview.net/forum?id=qIkYlfDZaI", "detail_url": "https://openreview.net/forum?id=qIkYlfDZaI", "authors": "Yixiong Zou,Shuai Yi,Yuhua Li,Ruixuan Li", "tags": "NIPS 2024,Poster", "abstract": "Vision Transformer (ViT) has shown great power in learning from large-scale datasets. However, collecting sufficient data for expert knowledge is always difficult. To handle this problem, Cross-Domain Few-Shot Learning (CDFSL) has been proposed to transfer the source-domain knowledge learned from sufficient data to target domains where only scarce data is available. In this paper, we find an intriguing phenomenon neglected by previous works for the CDFSL task based on ViT: leaving the CLS token to random initialization, instead of loading source-domain trained parameters, could consistently improve target-domain performance. We then delve into this phenomenon for an interpretation. We find **the CLS token naturally absorbs domain information** due to the inherent structure of the ViT, which is represented as the low-frequency component in the Fourier frequency space of images. Based on this phenomenon and interpretation, we further propose a method for the CDFSL task to decouple the domain information in the CLS token during the source-domain training, and adapt the CLS token on the target domain for efficient few-shot learning. Extensive experiments on four benchmarks validate our rationale and state-of-the-art performance. Our codes are available at https://github.com/Zoilsen/CLS_Token_CDFSL.", "pdf": "https://openreview.net/pdf/094a7e1a785ccc56a91ff11df3f3815a786a77a0.pdf"} {"title": "PuLID: Pure and Lightning ID Customization via Contrastive Alignment", "url": "https://openreview.net/forum?id=E6ZodZu0HQ", "detail_url": "https://openreview.net/forum?id=E6ZodZu0HQ", "authors": "Zinan Guo,Yanze Wu,Zhuowei Chen,Lang chen,Peng Zhang,Qian HE", "tags": "NIPS 2024,Poster", "abstract": "We propose Pure and Lightning ID customization (PuLID), a novel tuning-free ID customization method for text-to-image generation. By incorporating a Lightning T2I branch with a standard diffusion one, PuLID introduces both contrastive alignment loss and accurate ID loss, minimizing disruption to the original model and ensuring high ID fidelity. Experiments show that PuLID achieves superior performance in both ID fidelity and editability. Another attractive property of PuLID is that the image elements (e.g., background, lighting, composition, and style) before and after the ID insertion are kept as consistent as possible. 
Codes and models are available at https://github.com/ToTheBeginning/PuLID", "pdf": "https://openreview.net/pdf/59d2bf9aadf5a14b4deb04536479cefd8920d4a1.pdf"} {"title": "BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models", "url": "https://openreview.net/forum?id=ccQ4fmwLDb", "detail_url": "https://openreview.net/forum?id=ccQ4fmwLDb", "authors": "Fangyikang Wang,Hubery Yin,Yue-Jiang Dong,Huminhao Zhu,Chao Zhang,Hanbin Zhao,Hui Qian,Chen Li", "tags": "NIPS 2024,Poster", "abstract": "The inversion of diffusion model sampling, which aims to find the corresponding initial noise of a sample, plays a critical role in various tasks.\nRecently, several heuristic exact inversion samplers have been proposed to address the inexact inversion issue in a training-free manner. \nHowever, the theoretical properties of these heuristic samplers remain unknown and they often exhibit mediocre sampling quality.\nIn this paper, we introduce a generic formulation, \emph{Bidirectional Explicit Linear Multi-step} (BELM) samplers, of the exact inversion samplers, which includes all previously proposed heuristic exact inversion samplers as special cases.\nThe BELM formulation is derived from the variable-stepsize-variable-formula linear multi-step method via integrating a bidirectional explicit constraint. We highlight this bidirectional explicit constraint is the key to mathematically exact inversion.\nWe systematically investigate the Local Truncation Error (LTE) within the BELM framework and show that the existing heuristic designs of exact inversion samplers yield sub-optimal LTE.\nConsequently, we propose the Optimal BELM (O-BELM) sampler through the LTE minimization approach.\nWe conduct additional analysis to substantiate the theoretical stability and global convergence property of the proposed optimal sampler.\nComprehensive experiments demonstrate our O-BELM sampler establishes the exact inversion property while achieving high-quality sampling.\nAdditional experiments in image editing and image interpolation highlight the extensive potential of applying O-BELM in varying applications.", "pdf": "https://openreview.net/pdf/7f3c70feb6ff46ecefa67a1d30b459467fd6fdd3.pdf"} {"title": "Towards Dynamic Message Passing on Graphs", "url": "https://openreview.net/forum?id=4BWlUJF0E9", "detail_url": "https://openreview.net/forum?id=4BWlUJF0E9", "authors": "Junshu Sun,Chenxue Yang,Xiangyang Ji,Qingming Huang,Shuhui Wang", "tags": "NIPS 2024,Poster", "abstract": "Message passing plays a vital role in graph neural networks (GNNs) for effective feature learning. However, the over-reliance on input topology diminishes the efficacy of message passing and restricts the ability of GNNs. Despite efforts to mitigate this reliance, existing studies encounter message-passing bottlenecks or high computational expense, which motivates the demand for flexible message passing with low complexity. In this paper, we propose a novel dynamic message-passing mechanism for GNNs. It projects graph nodes and learnable pseudo nodes into a common space with measurable spatial relations between them. With nodes moving in the space, their evolving relations facilitate flexible pathway construction for a dynamic message-passing process. By associating pseudo nodes with input graphs through their measured relations, graph nodes can communicate with each other through pseudo nodes as intermediaries, with linear complexity. 
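A rough sketch of the pseudo-node communication pattern just described, loosely following the abstract (the class and weighting scheme here are our own illustration, not the paper's model): every graph node exchanges messages with a small set of learnable pseudo nodes, so one round costs O(N*M) rather than O(N^2).

```python
import torch
import torch.nn as nn

class PseudoNodeRound(nn.Module):
    """Sketch of linear-complexity message passing through pseudo nodes:
    nodes send to M pseudo nodes weighted by distance in a shared space,
    and the pseudo nodes relay the aggregated messages back."""
    def __init__(self, dim, num_pseudo=8):
        super().__init__()
        self.pseudo = nn.Parameter(torch.randn(num_pseudo, dim))

    def forward(self, h):                     # h: (N, dim) node features
        w = torch.softmax(-torch.cdist(h, self.pseudo), dim=-1)  # (N, M)
        pseudo_msg = w.t() @ h                # pseudo nodes gather from nodes
        return h + w @ pseudo_msg             # nodes read back via pseudo nodes
```

Because the pseudo nodes are learnable and the weights depend on current features, the effective pathways change as representations evolve, which is the "dynamic" aspect the abstract emphasizes.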
We further develop a GNN model named $\\mathtt{N^2}$ based on our dynamic message-passing mechanism. $\\mathtt{N^2}$ employs a single recurrent layer to recursively generate the displacements of nodes and construct optimal dynamic pathways. Evaluation on eighteen benchmarks demonstrates the superior performance of $\\mathtt{N^2}$ over popular GNNs. $\\mathtt{N^2}$ successfully scales to large-scale benchmarks and requires significantly fewer parameters for graph classification with the shared recurrent layer.", "pdf": "https://openreview.net/pdf/19aca458edc9cc08bde630e2cbc9f498f190759c.pdf"} {"title": "Stability and Generalization of Asynchronous SGD: Sharper Bounds Beyond Lipschitz and Smoothness", "url": "https://openreview.net/forum?id=bHP9hX4SvI", "detail_url": "https://openreview.net/forum?id=bHP9hX4SvI", "authors": "Xiaoge Deng,Tao Sun,Shengwei Li,Dongsheng Li,Xicheng Lu", "tags": "NIPS 2024,Poster", "abstract": "Asynchronous stochastic gradient descent (ASGD) has evolved into an indispensable optimization algorithm for training modern large-scale distributed machine learning tasks. Therefore, it is imperative to explore the generalization performance of the ASGD algorithm. However, the existing results are either pessimistic and vacuous or restricted by strict assumptions that fail to reveal the intrinsic impact of asynchronous training on generalization. In this study, we establish sharper stability and generalization bounds for ASGD under much weaker assumptions. Firstly, this paper studies the on-average model stability of ASGD and provides a non-vacuous upper bound on the generalization error, without relying on the Lipschitz assumption. Furthermore, we investigate the excess generalization error of the ASGD algorithm, revealing the effects of asynchronous delay, model initialization, number of training samples and iterations on generalization performance. Secondly, for the first time, this study explores the generalization performance of ASGD in the non-smooth case. We replace smoothness with the much weaker H\u00f6lder continuous assumption and achieve similar generalization results as in the smooth case. Finally, we validate our theoretical findings by training numerous machine learning models, including convex problems and non-convex tasks in computer vision and natural language processing.", "pdf": "https://openreview.net/pdf/7d9f31bcb43ed8dab001e3034770959b84d53427.pdf"} {"title": "Federated Graph Learning for Cross-Domain Recommendation", "url": "https://openreview.net/forum?id=UBpPOqrBKE", "detail_url": "https://openreview.net/forum?id=UBpPOqrBKE", "authors": "Ziqi Yang,Zhaopeng Peng,Zihui Wang,Jianzhong Qi,Chaochao Chen,Weike Pan,Chenglu Wen,Cheng Wang,Xiaoliang Fan", "tags": "NIPS 2024,Poster", "abstract": "Cross-domain recommendation (CDR) offers a promising solution to the data sparsity problem by enabling knowledge transfer across source and target domains. However, many recent CDR models overlook crucial issues such as privacy as well as the risk of negative transfer (which negatively impact model performance), especially in multi-domain settings. To address these challenges, we propose FedGCDR, a novel federated graph learning framework that securely and effectively leverages positive knowledge from multiple source domains. First, we design a positive knowledge transfer module that ensures privacy during inter-domain knowledge transmission. 
This module employs differential privacy-based knowledge extraction combined with a feature mapping mechanism, transforming source domain embeddings from federated graph attention networks into reliable domain knowledge. Second, we design a knowledge activation module to filter out potentially harmful or conflicting knowledge from source domains, addressing the issue of negative transfer. This module enhances target domain training by expanding the graph of the target domain to generate reliable domain attentions and fine-tunes the target model for improved negative knowledge filtering and more accurate predictions. We conduct extensive experiments on 16 popular domains of the Amazon dataset, demonstrating that FedGCDR significantly outperforms state-of-the-art methods.", "pdf": "https://openreview.net/pdf/7ce6c1b6d401de6dd5465d044cf4cec4a5fc6a52.pdf"} {"title": "ReGS: Reference-based Controllable Scene Stylization with Gaussian Splatting", "url": "https://openreview.net/forum?id=ynJr0RW6FR", "detail_url": "https://openreview.net/forum?id=ynJr0RW6FR", "authors": "Yiqun Mei,Jiacong Xu,Vishal M. Patel", "tags": "NIPS 2024,Poster", "abstract": "Reference-based scene stylization that edits the appearance based on a content-aligned reference image is an emerging research area. Starting with a pretrained neural radiance field (NeRF), existing methods typically learn a novel appearance that matches the given style. Despite their effectiveness, they inherently suffer from time-consuming volume rendering, and thus are impractical for many real-time applications. In this work, we propose ReGS, which adapts 3D Gaussian Splatting (3DGS) for reference-based stylization to enable real-time stylized view synthesis. Editing the appearance of a pretrained 3DGS is challenging as it uses discrete Gaussians as 3D representation, which tightly bind appearance with geometry. Simply optimizing the appearance as prior methods do is often insufficient for modeling continuous textures in the given reference image. To address this challenge, we propose a novel texture-guided control mechanism that adaptively adjusts the responsible local Gaussians to a new geometric arrangement that serves the desired texture details. The proposed process is guided by texture clues for effective appearance editing, and regularized by scene depth for preserving original geometric structure. With these novel designs, we show ReGS can produce state-of-the-art stylization results that respect the reference texture while embracing real-time rendering speed for free-view navigation.", "pdf": "https://openreview.net/pdf/ff2c2a64aeb6fea451908d363d55da1992fca363.pdf"} {"title": "ReFIR: Grounding Large Restoration Models with Retrieval Augmentation", "url": "https://openreview.net/forum?id=iFKmFUxQDh", "detail_url": "https://openreview.net/forum?id=iFKmFUxQDh", "authors": "Hang Guo,Tao Dai,Zhihao Ouyang,Taolin Zhang,Yaohua Zha,Bin Chen,Shu-Tao Xia", "tags": "NIPS 2024,Poster", "abstract": "Recent advances in diffusion-based Large Restoration Models (LRMs) have significantly improved photo-realistic image restoration by leveraging the internal knowledge embedded within model weights. However, existing LRMs often suffer from the hallucination dilemma, i.e., producing incorrect contents or textures when dealing with severe degradations, due to their heavy reliance on limited internal knowledge. 
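The differential-privacy-based knowledge extraction in FedGCDR's transfer module might look roughly like the following (our illustration under the standard Gaussian mechanism; the clipping norm, noise scale, and `feature_map` are hypothetical, and the paper's exact construction may differ): embeddings are clipped and noised before leaving the source domain, then mapped into the target space.

```python
import torch
import torch.nn as nn

def dp_knowledge_extract(src_embed, clip_norm=1.0, sigma=0.5):
    """Sketch of DP-style knowledge extraction: clip each source-domain
    embedding to bound its sensitivity, then add calibrated Gaussian noise
    before transmission (standard Gaussian-mechanism recipe)."""
    norms = src_embed.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    clipped = src_embed * (clip_norm / norms).clamp(max=1.0)
    return clipped + sigma * clip_norm * torch.randn_like(clipped)

# a feature mapping then aligns the noised source embeddings to the target space
feature_map = nn.Linear(64, 64)               # hypothetical mapping module
target_ready = feature_map(dp_knowledge_extract(torch.randn(32, 64)))
```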
In this paper, we propose an orthogonal solution called the Retrieval-augmented Framework for Image Restoration (ReFIR), which incorporates retrieved images as external knowledge to extend the knowledge boundary of existing LRMs in generating details faithful to the original scene. Specifically, we first introduce the nearest neighbor lookup to retrieve content-relevant high-quality images as reference, after which we propose the cross-image injection to modify existing LRMs to utilize high-quality textures from retrieved images. Thanks to the additional external knowledge, our ReFIR can effectively handle the hallucination challenge and produce faithful results. Extensive experiments demonstrate that ReFIR can achieve not only high-fidelity but also realistic restoration results. Importantly, our ReFIR requires no training and is adaptable to various LRMs.", "pdf": "https://openreview.net/pdf/4b4f37247817eaee1dde1da0101212a63588a707.pdf"} {"title": "BAKU: An Efficient Transformer for Multi-Task Policy Learning", "url": "https://openreview.net/forum?id=uFXGsiYkkX", "detail_url": "https://openreview.net/forum?id=uFXGsiYkkX", "authors": "Siddhant Haldar,Zhuoran Peng,Lerrel Pinto", "tags": "NIPS 2024,Poster", "abstract": "Training generalist agents capable of solving diverse tasks is challenging, often requiring large datasets of expert demonstrations. This is particularly problematic in robotics, where each data point requires physical execution of actions in the real world. Thus, there is a pressing need for architectures that can effectively leverage the available training data. In this work, we present BAKU, a simple transformer architecture that enables efficient learning of multi-task robot policies. BAKU builds upon recent advancements in offline imitation learning and meticulously combines observation trunks, action chunking, multi-sensory observations, and action heads to substantially improve upon prior work. Our experiments on 129 simulated tasks across LIBERO, Meta-World suite, and the Deepmind Control suite exhibit an overall 18% absolute improvement over RT-1 and MT-ACT, with a 36% improvement on the harder LIBERO benchmark. On 30 real-world manipulation tasks, given an average of just 17 demonstrations per task, BAKU achieves a 91% success rate. Videos of the robot are best viewed at baku-robot.github.io.", "pdf": "https://openreview.net/pdf/71b48662ea04f445f5d15dd51227f43a7ca9b49c.pdf"} {"title": "Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity", "url": "https://openreview.net/forum?id=4TlUE0ufiz", "detail_url": "https://openreview.net/forum?id=4TlUE0ufiz", "authors": "Kaiqu Liang,Zixu Zhang,Jaime Fern\u00e1ndez Fisac", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) exhibit advanced reasoning skills, enabling robots to comprehend natural language instructions and strategically plan high-level actions through proper grounding. However, LLM hallucination may result in robots confidently executing plans that are misaligned with user goals or even unsafe in critical scenarios. Additionally, inherent ambiguity in natural language instructions can introduce uncertainty into the LLM's reasoning and planning. We propose introspective planning, a systematic approach that guides LLMs to refine their own uncertainty in alignment with inherent task ambiguity. 
Our approach constructs a knowledge base containing introspective reasoning examples as post-hoc rationalizations of human-selected safe and compliant plans, which are retrieved during deployment. Evaluations on three tasks, including a new safe mobile manipulation benchmark, indicate that introspection substantially improves both compliance and safety over state-of-the-art LLM-based planning methods. Additionally, we empirically show that introspective planning, in combination with conformal prediction, achieves tighter confidence bounds, maintaining statistical success guarantees while minimizing unnecessary user clarification requests.", "pdf": "https://openreview.net/pdf/b301698e312372060c84b11b95c5ab3f1e2674ec.pdf"} {"title": "Efficient Large Multi-modal Models via Visual Context Compression", "url": "https://openreview.net/forum?id=5ujp72CiYB", "detail_url": "https://openreview.net/forum?id=5ujp72CiYB", "authors": "Jieneng Chen,Luoxin Ye,Ju He,Zhao-Yang Wang,Daniel Khashabi,Alan Yuille", "tags": "NIPS 2024,Poster", "abstract": "While significant advancements have been made in compressed representations for text embeddings in large language models (LLMs), the compression of visual tokens in multi-modal LLMs (MLLMs) has remained a largely overlooked area. In this work, we present a study analyzing the redundancy of visual tokens and efficient training within these models. Our initial experiments\nshow that eliminating up to 70% of visual tokens at the testing stage by simple average pooling leads to only a minimal 3% reduction in visual question answering accuracy on the GQA benchmark, indicating significant redundancy in visual context. Addressing this, we introduce Visual Context Compressor, which reduces the number of visual tokens to enhance training and inference efficiency without sacrificing performance. To minimize information loss caused by the compression on visual tokens while maintaining training efficiency, we develop LLaVolta as a lightweight and staged training scheme that incorporates stage-wise visual context compression to progressively compress the visual tokens from heavy to light compression during training, yielding no loss of information at test time. Extensive experiments demonstrate that our approach enhances the performance of MLLMs in both image-language and video-language understanding, while also significantly cutting training costs and improving inference efficiency.", "pdf": "https://openreview.net/pdf/8464f557e0eafcebf6b3307cb864bf60aee57ec6.pdf"} {"title": "Approaching Human-Level Forecasting with Language Models", "url": "https://openreview.net/forum?id=FlcdW7NPRY", "detail_url": "https://openreview.net/forum?id=FlcdW7NPRY", "authors": "Danny Halawi,Fred Zhang,Chen Yueh-Han,Jacob Steinhardt", "tags": "NIPS 2024,Poster", "abstract": "Forecasting future events is important for policy and decision making. In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters. Towards this goal, we develop a retrieval-augmented LM system designed to automatically search for relevant information, generate forecasts, and aggregate predictions. To facilitate our study, we collect a large dataset of questions from competitive forecasting platforms. Under a test set published after the knowledge cut-offs of our LMs, we evaluate the end-to-end performance of our system against the aggregates of human forecasts. 
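The aggregation stage of such a forecasting pipeline can be illustrated with a simple sketch (a trimmed mean is one common choice for pooling an ensemble of probabilities; the paper's exact aggregation may differ, and the numbers below are made up):

```python
import numpy as np

def aggregate_forecasts(probs, trim=0.1):
    """Pool an ensemble of LM probability forecasts for a binary question
    with a trimmed mean, discarding the most extreme predictions on each
    side. probs: iterable of probabilities in [0, 1]."""
    p = np.sort(np.asarray(probs, dtype=float))
    k = int(len(p) * trim)
    return float(p[k:len(p) - k].mean()) if len(p) > 2 * k else float(p.mean())

def brier_score(p, outcome):
    """Standard squared-error scoring rule used to evaluate forecasts."""
    return (p - outcome) ** 2

p_hat = aggregate_forecasts([0.55, 0.62, 0.40, 0.70, 0.58])
print(p_hat, brier_score(p_hat, outcome=1))
```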
On average, the system nears the crowd aggregate of competitive forecasters and, in a certain relaxed setting, surpasses it. Our work suggests that using LMs to forecast the future could provide accurate predictions at scale and help to inform institutional decision making.", "pdf": "https://openreview.net/pdf/5bef6f0321cee7c8a6604e0fe1f95848988f96ce.pdf"} {"title": "Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes", "url": "https://openreview.net/forum?id=1c9XHlHTs7", "detail_url": "https://openreview.net/forum?id=1c9XHlHTs7", "authors": "Asaf Cassel,Aviv Rosenberg", "tags": "NIPS 2024,Poster", "abstract": "Policy Optimization (PO) methods are among the most popular Reinforcement Learning (RL) algorithms in practice. Recently, Sherman et al. [2023a] proposed a PO-based algorithm with rate-optimal regret guarantees under the linear Markov Decision Process (MDP) model. However, their algorithm relies on a costly pure exploration warm-up phase that is hard to implement in practice. This paper eliminates this undesired warm-up phase, replacing it with a simple and efficient contraction mechanism. Our PO algorithm achieves rate-optimal regret with improved dependence on the other parameters of the problem (horizon and function approximation dimension) in two fundamental settings: adversarial losses with full-information feedback and stochastic losses with bandit feedback.", "pdf": "https://openreview.net/pdf/306830e11a9e3b0046b7c85d29e7cb963f283c26.pdf"} {"title": "Harnessing Multiple Correlated Networks for Exact Community Recovery", "url": "https://openreview.net/forum?id=7Fzx3Akdt5", "detail_url": "https://openreview.net/forum?id=7Fzx3Akdt5", "authors": "Miklos Z. Racz,Jifan Zhang", "tags": "NIPS 2024,Poster", "abstract": "We study the problem of learning latent community structure from multiple correlated networks, focusing on edge-correlated stochastic block models with two balanced communities. Recent work of Gaudio, R\u00e1cz, and Sridhar (COLT 2022) determined the precise information-theoretic threshold for exact community recovery using two correlated graphs; in particular, this showcased the subtle interplay between community recovery and graph matching. Here we study the natural setting of more than two graphs. The main challenge lies in understanding how to aggregate information across several graphs when none of the pairwise latent vertex correspondences can be exactly recovered. Our main result derives the precise information-theoretic threshold for exact community recovery using any constant number of correlated graphs, answering a question of Gaudio, R\u00e1cz, and Sridhar (COLT 2022). In particular, for every $K \geq 3$ we uncover and characterize a region of the parameter space where exact community recovery is possible using $K$ correlated graphs, even though (1) this is information-theoretically impossible using any $K-1$ of them and (2) none of the latent matchings can be exactly recovered.", "pdf": "https://openreview.net/pdf/2b20a75043a5c2f203fb00af2ffbc09dafc4f11b.pdf"} {"title": "Exploring Low-Dimensional Subspace in Diffusion Models for Controllable Image Editing", "url": "https://openreview.net/forum?id=50aOEfb2km", "detail_url": "https://openreview.net/forum?id=50aOEfb2km", "authors": "Siyi Chen,Huijie Zhang,Minzhe Guo,Yifu Lu,Peng Wang,Qing Qu", "tags": "NIPS 2024,Poster", "abstract": "Recently, diffusion models have emerged as a powerful class of generative models. 
\nDespite their success, there is still limited understanding of their semantic spaces. This makes it challenging to achieve precise and disentangled image generation without additional training, especially in an unsupervised way. \nIn this work, we improve the understanding of their semantic spaces from intriguing observations: among a certain range of noise levels, (1) the learned posterior mean predictor (PMP) in the diffusion model is locally linear, and (2) the singular vectors of its Jacobian lie in low-dimensional semantic subspaces. We provide a solid theoretical basis to justify the linearity and low-rankness in the PMP. These insights allow us to propose an unsupervised, single-step, training-free **LO**w-rank **CO**ntrollable image editing (LOCO Edit) method for precise local editing in diffusion models. LOCO Edit identifies editing directions with desirable properties: homogeneity, transferability, composability, and linearity. These properties of LOCO Edit benefit greatly from the low-dimensional semantic subspace.\nOur method can further be extended to unsupervised or text-supervised editing in various text-to-image diffusion models (T-LOCO Edit). Finally, extensive empirical experiments demonstrate the effectiveness and efficiency of LOCO Edit. The code and the arXiv version can be found on the [project website](https://chicychen.github.io/LOCO).", "pdf": "https://openreview.net/pdf/2ecc789f39a91123bffb6022a9d0889986ab90f2.pdf"} {"title": "Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing", "url": "https://openreview.net/forum?id=YOBGdVaYTS", "detail_url": "https://openreview.net/forum?id=YOBGdVaYTS", "authors": "Zhongwang Zhang,Pengxiao Lin,Zhiwei Wang,Yaoyu Zhang,Zhi-Qin John Xu", "tags": "NIPS 2024,Poster", "abstract": "Transformers have shown impressive capabilities across various tasks, but their performance on compositional problems remains a topic of debate. In this work, we investigate the mechanisms of how transformers behave on unseen compositional tasks. We discover that the parameter initialization scale plays a critical role in determining whether the model learns inferential (reasoning-based) solutions, which capture the underlying compositional primitives, or symmetric (memory-based) solutions, which simply memorize mappings without understanding the compositional structure. By analyzing the information flow and vector representations within the model, we reveal the distinct mechanisms underlying these solution types. We further find that inferential (reasoning-based) solutions exhibit low complexity bias, which we hypothesize is a key factor enabling them to learn individual mappings for single anchors. We validate our conclusions on various real-world datasets. Our findings provide valuable insights into the role of initialization scale in tuning the reasoning and memorizing ability and we propose the initialization rate $\gamma$ to be a convenient tunable hyper-parameter in common deep learning frameworks, where $1/d_{\mathrm{in}}^\gamma$ is the standard deviation of parameters of the layer with $d_{\mathrm{in}}$ input neurons.", "pdf": "https://openreview.net/pdf/f41a9dd1ef6a32e7ce7e86b85b35a7670867ef7a.pdf"} {"title": "Efficient Graph Matching for Correlated Stochastic Block Models", "url": "https://openreview.net/forum?id=nBhfIcDnRP", "detail_url": "https://openreview.net/forum?id=nBhfIcDnRP", "authors": "Shuwen Chai,Miklos Z. 
Racz", "tags": "NIPS 2024,Poster", "abstract": "We study learning problems on correlated stochastic block models with two balanced communities. Our main result gives the first efficient algorithm for graph matching in this setting. In the most interesting regime where the average degree is logarithmic in the number of vertices, this algorithm correctly matches all but a vanishing fraction of vertices with high probability, whenever the edge correlation parameter $s$ satisfies $s^2 > \\alpha \\approx 0.338$, where $\\alpha$ is Otter's tree-counting constant. Moreover, we extend this to an efficient algorithm for exact graph matching whenever this is information-theoretically possible, positively resolving an open problem of R\u00e1cz and Sridhar (NeurIPS 2021). Our algorithm generalizes the recent breakthrough work of Mao, Wu, Xu, and Yu (STOC 2023), which is based on centered subgraph counts of a large family of trees termed chandeliers. A major technical challenge that we overcome is dealing with the additional estimation errors that are necessarily present due to the fact that, in relevant parameter regimes, the latent community partition cannot be exactly recovered from a single graph. As an application of our results, we give an efficient algorithm for exact community recovery using multiple correlated graphs in parameter regimes where it is information-theoretically impossible to do so using just a single graph.", "pdf": "https://openreview.net/pdf/39c7a60cd5b8e1b8375829b801a4fd6f348ebab1.pdf"} {"title": "Learning Truncated Causal History Model for Video Restoration", "url": "https://openreview.net/forum?id=cUGf2HaNcs", "detail_url": "https://openreview.net/forum?id=cUGf2HaNcs", "authors": "Amirhosein Ghasemabadi,Muhammad Kamran Janjua,Mohammad Salameh,Di Niu", "tags": "NIPS 2024,Poster", "abstract": "One key challenge to video restoration is to model the transition dynamics of video frames governed by motion. In this work, we propose Turtle to learn the truncated causal history model for efficient and high-performing video restoration. Unlike traditional methods that process a range of contextual frames in parallel, Turtle enhances efficiency by storing and summarizing a truncated history of the input frame latent representation into an evolving historical state. This is achieved through a sophisticated similarity-based retrieval mechanism that implicitly accounts for inter-frame motion and alignment. The causal design in Turtle enables recurrence in inference through state-memorized historical features while allowing parallel training by sampling truncated video clips. We report new state-of-the-art results on a multitude of video restoration benchmark tasks, including video desnowing, nighttime video deraining, video raindrops and rain streak removal, video super-resolution, real-world and synthetic video deblurring, and blind video denoising while reducing the computational cost compared to existing best contextual methods on all these tasks.", "pdf": "https://openreview.net/pdf/d837b4f7b539a1cfa6a5d2f3ba64db90b24d0e8f.pdf"} {"title": "MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encoding", "url": "https://openreview.net/forum?id=X3ydKRcQr6", "detail_url": "https://openreview.net/forum?id=X3ydKRcQr6", "authors": "Rajesh Jayaram,Laxman Dhulipala,Majid Hadian,Jason Lee,Vahab Mirrokni", "tags": "NIPS 2024,Poster", "abstract": "Neural embedding models have become a fundamental component of modern information retrieval (IR) pipelines. 
These models produce a single embedding $x \in \mathbb{R}^d$ per data-point, allowing for fast retrieval via highly optimized maximum inner product search (MIPS) algorithms. Recently, beginning with the landmark ColBERT paper, multi-vector models, which produce a set of embeddings per data point, have achieved markedly superior performance for IR tasks. Unfortunately, using these models for IR is computationally expensive due to the increased complexity of multi-vector retrieval and scoring. \n\nIn this paper, we introduce MUVERA (MUlti-VEctor Retrieval Algorithm), a retrieval mechanism which reduces multi-vector similarity search to single-vector similarity search. This enables the use of off-the-shelf MIPS solvers for multi-vector retrieval. \nMUVERA asymmetrically generates Fixed Dimensional Encodings (FDEs) of queries and documents, which are vectors whose inner product approximates multi-vector similarity. We prove that FDEs give high-quality $\epsilon$-approximations, thus providing the first single-vector proxy for multi-vector similarity with theoretical guarantees. Empirically, we find that FDEs achieve the same recall as prior state-of-the-art heuristics while retrieving 2-5$\times$ fewer candidates. Compared to prior state-of-the-art implementations, MUVERA achieves consistently good end-to-end recall and latency across a diverse set of the BEIR retrieval datasets, achieving an average of 10% improved recall with 90% lower latency.", "pdf": "https://openreview.net/pdf/3afbc85392452987bf48005868cf4249f54ca5d3.pdf"} {"title": "IPO: Interpretable Prompt Optimization for Vision-Language Models", "url": "https://openreview.net/forum?id=WPPC7FHtaM", "detail_url": "https://openreview.net/forum?id=WPPC7FHtaM", "authors": "Yingjun Du,Wenfang Sun,Cees G. M. Snoek", "tags": "NIPS 2024,Poster", "abstract": "Pre-trained vision-language models like CLIP have remarkably adapted to various downstream tasks. Nonetheless, their performance heavily depends on the specificity of the input text prompts, which requires skillful prompt template engineering. Instead, current approaches to prompt optimization learn the prompts through gradient descent, where the prompts are treated as adjustable parameters. However, these methods tend to lead to overfitting of the base classes seen during training and produce prompts that are no longer understandable by humans. This paper introduces a simple but interpretable prompt optimizer (IPO), that utilizes large language models (LLMs) to generate textual prompts dynamically. We introduce a Prompt Optimization Prompt that not only guides LLMs in creating effective prompts but also stores past prompts with their performance metrics, providing rich in-context information. Additionally, we incorporate a large multimodal model (LMM) to condition on visual content by generating image descriptions, which enhance the interaction between textual and visual modalities. This allows for the creation of dataset-specific prompts that improve generalization performance, while maintaining human comprehension. Extensive testing across 11 datasets reveals that IPO not only improves the accuracy of existing gradient-descent-based prompt learning methods but also considerably enhances the interpretability of the generated prompts. 
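Returning to MUVERA's fixed dimensional encodings for a moment, the flavor of such an encoding can be sketched as follows (a rough illustration assuming a SimHash-style space partition with query-sum and document-average pooling; the paper adds repetitions and other refinements we omit, so treat this as a sketch rather than the algorithm):

```python
import numpy as np

def fde(vectors, planes, is_query):
    """Sketch of a MUVERA-style Fixed Dimensional Encoding: random
    hyperplanes hash each token vector into one of 2**k buckets; queries
    SUM their vectors per bucket while documents AVERAGE them, so a single
    inner product <fde_q, fde_d> accumulates per-bucket similarities."""
    k, d = planes.shape[0], vectors.shape[1]
    buckets = ((vectors @ planes.T) > 0) @ (1 << np.arange(k))  # bucket ids
    out = np.zeros((2 ** k, d))
    counts = np.zeros(2 ** k)
    np.add.at(out, buckets, vectors)
    np.add.at(counts, buckets, 1)
    if not is_query:                          # documents store centroids
        nonempty = counts > 0
        out[nonempty] /= counts[nonempty][:, None]
    return out.ravel()                        # one flat vector for MIPS

rng = np.random.default_rng(0)
planes = rng.standard_normal((3, 16))         # 3 hyperplanes -> 8 buckets
q, doc = rng.standard_normal((4, 16)), rng.standard_normal((20, 16))
print(fde(q, planes, True) @ fde(doc, planes, False))  # single-vector score
```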
By leveraging the strengths of LLMs, our approach ensures that the prompts remain human-understandable, thereby facilitating better transparency and oversight for vision-language models.", "pdf": "https://openreview.net/pdf/bef27cf04fbd0a4e004415c5464b2536eb28a9b8.pdf"} {"title": "PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics", "url": "https://openreview.net/forum?id=ZeihWodDVh", "detail_url": "https://openreview.net/forum?id=ZeihWodDVh", "authors": "Omead Pooladzandi,Sunay Gajanan Bhat,Jeffrey Jiang,Alexander Branch,Gregory Pottie", "tags": "NIPS 2024,Poster", "abstract": "Train-time data poisoning attacks threaten machine learning models by introducing adversarial examples during training, leading to misclassification. Current defense methods often reduce generalization performance, are attack-specific, and impose significant training overhead. To address this, we introduce a set of universal data purification methods using a stochastic transform, $\Psi(x)$, realized via iterative Langevin dynamics of Energy-Based Models (EBMs), Denoising Diffusion Probabilistic Models (DDPMs), or both. These approaches purify poisoned data with minimal impact on classifier generalization. Our specially trained EBMs and DDPMs provide state-of-the-art defense against various attacks (including Narcissus, Bullseye Polytope, Gradient Matching) on CIFAR-10, Tiny-ImageNet, and CINIC-10, without needing attack or classifier-specific information. We discuss performance trade-offs and show that our methods remain highly effective even with poisoned or distributionally shifted generative model training data.", "pdf": "https://openreview.net/pdf/ada9c339fc7d61e0ab6a11d1231c8c264828b865.pdf"} {"title": "Idiographic Personality Gaussian Process for Psychological Assessment", "url": "https://openreview.net/forum?id=Twqa0GFMGX", "detail_url": "https://openreview.net/forum?id=Twqa0GFMGX", "authors": "Yehu Chen,Muchen Xi,Joshua J. Jackson,Jacob Montgomery,Roman Garnett", "tags": "NIPS 2024,Poster", "abstract": "We develop a novel measurement framework based on a Gaussian process coregionalization model to address a long-standing debate in psychometrics: whether psychological features like personality share a common structure across the population or vary uniquely for individuals. We propose idiographic personality Gaussian process (IPGP), an intermediate model that accommodates both shared trait structure across individuals and \"idiographic\" deviations. IPGP leverages the Gaussian process coregionalization model to model responses to grouped survey batteries, adjusted for non-Gaussian ordinal data, and exploits stochastic variational inference for latent factor estimation. Using both synthetic data and a novel survey, we show that IPGP improves both prediction of actual responses and estimation of intrapersonal response patterns compared to existing benchmarks. 
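The coregionalization structure IPGP builds on is a textbook construction, sketched below under standard assumptions (this is the generic intrinsic coregionalization model, not the paper's full ordinal-likelihood model): the covariance between trait i at input x and trait j at x' factorizes into a trait covariance times an input kernel.

```python
import numpy as np

def icm_cov(X, W, v, lengthscale=1.0):
    """Intrinsic coregionalization model: Cov(f_i(x), f_j(x')) =
    B[i, j] * k(x, x'), with B = W W^T + diag(v) sharing low-rank
    structure across traits and an RBF kernel over inputs."""
    B = W @ W.T + np.diag(v)                          # trait covariance
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / lengthscale ** 2)          # RBF over inputs
    return np.kron(B, K)                              # joint covariance

X = np.linspace(0.0, 1.0, 5)[:, None]                 # e.g., 5 survey waves
W, v = np.random.randn(3, 2), np.ones(3)              # 3 traits, rank-2 sharing
print(icm_cov(X, W, v).shape)                         # (15, 15)
```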
In the survey study, IPGP also identifies unique clusters of personality taxonomies, displaying great potential in advancing individualized approaches to psychological diagnosis.", "pdf": "https://openreview.net/pdf/3804612064e4761ae7d673a4cf4fa64a0e9b34ef.pdf"} {"title": "Transformer Doctor: Diagnosing and Treating Vision Transformers", "url": "https://openreview.net/forum?id=chnJT8Nj8X", "detail_url": "https://openreview.net/forum?id=chnJT8Nj8X", "authors": "Jiacong Hu,Hao Chen,Kejia Chen,Yang Gao,Jingwen Ye,Xingen Wang,Mingli Song,Zunlei Feng", "tags": "NIPS 2024,Poster", "abstract": "Due to their powerful representational capabilities, Transformers have gradually become the mainstream model in the field of machine vision. However, the vast and complex parameters of Transformers impede researchers from gaining a deep understanding of their internal mechanisms, especially error mechanisms. Existing methods for interpreting Transformers mainly focus on understanding them from the perspectives of the importance of input tokens or internal modules, as well as the formation and meaning of features. In contrast, inspired by research on information integration mechanisms and conjunctive errors in the biological visual system, this paper conducts an in-depth exploration of the internal error mechanisms of Transformers. We first propose an information integration hypothesis for Transformers in the machine vision domain and provide substantial experimental evidence to support this hypothesis. This includes the dynamic integration of information among tokens and the static integration of information within tokens in Transformers, as well as the presence of conjunctive errors therein. Addressing these errors, we further propose heuristic dynamic integration constraint methods and rule-based static integration constraint methods to rectify errors and ultimately improve model performance. The entire methodology framework is termed as Transformer Doctor, designed for diagnosing and treating internal errors within transformers. Through a plethora of quantitative and qualitative experiments, it has been demonstrated that Transformer Doctor can effectively address internal errors in transformers, thereby enhancing model performance.", "pdf": "https://openreview.net/pdf/69180423f087baebc1c090283f3af25bb9e933d2.pdf"} {"title": "Learning to Decouple the Lights for 3D Face Texture Modeling", "url": "https://openreview.net/forum?id=3lic0JgPRZ", "detail_url": "https://openreview.net/forum?id=3lic0JgPRZ", "authors": "Tianxin Huang,Zhenyu Zhang,Ying Tai,Gim Hee Lee", "tags": "NIPS 2024,Poster", "abstract": "Existing research has made impressive strides in reconstructing human facial shapes and textures from images with well-illuminated faces and minimal external occlusions. \nNevertheless, it remains challenging to recover accurate facial textures from scenarios with complicated illumination affected by external occlusions, e.g., a face that is partially obscured by items such as a hat. \nExisting works based on the assumption of single and uniform illumination cannot correctly process these data.\nIn this work, we introduce a novel approach to model 3D facial textures under such unnatural illumination. 
Instead of assuming single illumination, our framework learns to imitate the unnatural illumination as a composition of multiple separate light conditions combined with learned neural representations, named Light Decoupling.\nAccording to experiments on both single images and video sequences, we demonstrate the effectiveness of our approach in modeling facial textures under challenging illumination affected by occlusions.", "pdf": "https://openreview.net/pdf/4f54688248285bee7928fb56ffb59e190da8ab0f.pdf"} {"title": "Bayesian Optimisation with Unknown Hyperparameters: Regret Bounds Logarithmically Closer to Optimal", "url": "https://openreview.net/forum?id=eygv0JRvTL", "detail_url": "https://openreview.net/forum?id=eygv0JRvTL", "authors": "Juliusz Ziomek,Masaki Adachi,Michael A Osborne", "tags": "NIPS 2024,Poster", "abstract": "Bayesian Optimization (BO) is widely used for optimising black-box functions but requires us to specify the length scale hyperparameter, which defines the smoothness of the functions the optimizer will consider. Most current BO algorithms choose this hyperparameter by maximizing the marginal likelihood of the observed data, albeit risking misspecification if the objective function is less smooth in regions we have not yet explored. The only prior solution addressing this problem with theoretical guarantees was A-GP-UCB, proposed by Berkenkamp et al. (2019). This algorithm progressively decreases the length scale, expanding the class of functions considered by the optimizer. However, A-GP-UCB lacks a stopping mechanism, leading to over-exploration and slow convergence. To overcome this, we introduce Length scale Balancing (LB) - a novel approach, aggregating multiple base surrogate models with varying length scales. LB intermittently adds smaller length scale candidate values while retaining longer scales, balancing exploration and exploitation. We formally derive a cumulative regret bound of LB and compare it with the regret of an oracle BO algorithm using the optimal length scale. Denoting by $g(T)$ the factor by which the regret bound of A-GP-UCB exceeds that of the oracle, we show that LB is only $\log g(T)$ away from the oracle regret. We also empirically evaluate our algorithm on synthetic and real-world benchmarks and show it outperforms A-GP-UCB and maximum likelihood estimation.", "pdf": "https://openreview.net/pdf/98b8e22bec3af0a3935f4484cf0af92bb5a4e8f7.pdf"} {"title": "Unchosen Experts Can Contribute Too: Unleashing MoE Models\u2019 Power by Self-Contrast", "url": "https://openreview.net/forum?id=C1d3VVfdVG", "detail_url": "https://openreview.net/forum?id=C1d3VVfdVG", "authors": "Chufan Shi,Cheng Yang,Xinyu Zhu,Jiahao Wang,Taiqiang Wu,Siheng Li,Deng Cai,Yujiu Yang,Yu Meng", "tags": "NIPS 2024,Poster", "abstract": "Mixture-of-Experts (MoE) has emerged as a prominent architecture for scaling model size while maintaining computational efficiency. In MoE, each token in the input sequence activates a different subset of experts determined by a routing mechanism. However, the unchosen experts in MoE models do not contribute to the output, potentially leading to underutilization of the model's capacity.\nIn this work, we first conduct exploratory studies to demonstrate that increasing the number of activated experts does not necessarily improve and can even degrade the output quality. Then, we show that output distributions from an MoE model using different routing strategies substantially differ, indicating that different experts do not always act synergistically. 
\nMotivated by these findings, we propose **S**elf-**C**ontrast **M**ixture-**o**f-**E**xperts (SCMoE), a training-free strategy that utilizes unchosen experts in a self-contrast manner during inference. \nIn SCMoE, the next-token probabilities are determined by contrasting the outputs from strong and weak activation using the same MoE model.\nOur method is conceptually simple and computationally lightweight, as it incurs minimal latency compared to greedy decoding. \nExperiments on several benchmarks (GSM8K, StrategyQA, MBPP and HumanEval) demonstrate that SCMoE can consistently enhance Mixtral 8x7B\u2019s reasoning capability across various domains. For example, it improves the accuracy on GSM8K from 61.79 to 66.94. \nMoreover, combining SCMoE with self-consistency yields additional gains, increasing major@20 accuracy from 75.59 to 78.31.", "pdf": "https://openreview.net/pdf/067fc4537caa9562aed142113b799e22fd5a39a7.pdf"} {"title": "A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health", "url": "https://openreview.net/forum?id=UiQkFXLfbu", "detail_url": "https://openreview.net/forum?id=UiQkFXLfbu", "authors": "Nikhil Behari,Edwin Zhang,YUNFAN ZHAO,Aparna Taneja,Dheeraj Mysore Nagaraj,Milind Tambe", "tags": "NIPS 2024,Poster", "abstract": "Restless multi-armed bandits (RMAB) have demonstrated success in optimizing resource allocation for large beneficiary populations in public health settings. Unfortunately, RMAB models lack flexibility to adapt to evolving public health policy priorities. Concurrently, Large Language Models (LLMs) have emerged as adept automated planners across domains of robotic control and navigation. In this paper, we propose a Decision Language Model (DLM) for RMABs, enabling dynamic fine-tuning of RMAB policies in public health settings using human-language commands. We propose using LLMs as automated planners to (1) interpret human policy preference prompts, (2) propose reward functions as code for a multi-agent RMAB environment, and (3) iterate on the generated reward functions using feedback from grounded RMAB simulations. We illustrate the application of DLM in collaboration with ARMMAN, an India-based non-profit promoting preventative care for pregnant mothers, that currently relies on RMAB policies to optimally allocate health worker calls to low-resource populations. We conduct a technology demonstration in simulation using the Gemini Pro model, showing DLM can dynamically shape policy outcomes using only human prompts as input.", "pdf": "https://openreview.net/pdf/022671b21acf1eef6ce2d8aef3660c7c6285d9a4.pdf"} {"title": "Knowledge Circuits in Pretrained Transformers", "url": "https://openreview.net/forum?id=YVXzZNxcag", "detail_url": "https://openreview.net/forum?id=YVXzZNxcag", "authors": "Yunzhi Yao,Ningyu Zhang,Zekun Xi,Mengru Wang,Ziwen Xu,Shumin Deng,Huajun Chen", "tags": "NIPS 2024,Poster", "abstract": "The remarkable capabilities of modern large language models are rooted in their vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. The inner workings of how these models store knowledge have long been a subject of intense interest and investigation among researchers. To date, most studies have concentrated on isolated components within these models, such as Multilayer Perceptrons and attention heads. 
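The SCMoE abstract above specifies that next-token probabilities come from contrasting strong and weak routing of the same MoE model. Below is a minimal decoding sketch of that idea, not the paper's exact rule: the contrast strength `beta` and the plausibility cutoff `alpha` (borrowed from standard contrastive-decoding practice) are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def self_contrast_next_token(logits_strong: torch.Tensor,
                             logits_weak: torch.Tensor,
                             beta: float = 0.5,
                             alpha: float = 0.1) -> torch.Tensor:
    """Pick the next token by contrasting strong vs. weak routing outputs."""
    logp_strong = F.log_softmax(logits_strong, dim=-1)
    logp_weak = F.log_softmax(logits_weak, dim=-1)
    # Restrict to tokens the strong path already finds plausible.
    threshold = logp_strong.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))
    plausible = logp_strong >= threshold
    # Reward tokens whose probability rises under strong (vs. weak) activation.
    scores = logp_strong + beta * (logp_strong - logp_weak)
    scores = scores.masked_fill(~plausible, float("-inf"))
    return scores.argmax(dim=-1)
```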
In this paper, we delve into the computation graph of the language model to uncover the knowledge circuits that are instrumental in articulating specific knowledge. The experiments, conducted with GPT2 and TinyLLAMA, have allowed us to observe how certain information heads, relation heads, and Multilayer Perceptrons collaboratively encode knowledge within the model. Moreover, we evaluate the impact of current knowledge editing techniques on these knowledge circuits, providing deeper insights into the functioning and constraints of these editing methodologies. Finally, we utilize knowledge circuits to analyze and interpret language model behaviors such as hallucinations and in-context learning. We believe the knowledge circuit holds potential for advancing our understanding of Transformers and guiding the improved design of knowledge editing.", "pdf": "https://openreview.net/pdf/5b5d856a093dae508bce507bc74e93c8e5b0fcf8.pdf"} {"title": "Phased Consistency Models", "url": "https://openreview.net/forum?id=mtBmKqyqGS", "detail_url": "https://openreview.net/forum?id=mtBmKqyqGS", "authors": "Fu-Yun Wang,Zhaoyang Huang,Alexander William Bergman,Dazhong Shen,Peng Gao,Michael Lingelbach,Keqiang Sun,Weikang Bian,Guanglu Song,Yu Liu,Xiaogang Wang,Hongsheng Li", "tags": "NIPS 2024,Poster", "abstract": "Consistency Models (CMs) have made significant progress in accelerating the generation of diffusion models. However, their application to high-resolution, text-conditioned image generation in the latent space remains unsatisfactory. In this paper, we identify three key flaws in the current design of Latent Consistency Models~(LCMs). We investigate the reasons behind these limitations and propose Phased Consistency Models (PCMs), which generalize the design space and address the identified limitations. Our evaluations demonstrate that PCMs outperform LCMs across 1--16 step generation settings. While PCMs are specifically designed for multi-step refinement, they achieve 1-step generation results comparable to previous state-of-the-art methods specifically designed for 1-step generation. Furthermore, we show the methodology of PCMs is versatile and applicable to video generation, enabling us to train the state-of-the-art few-step text-to-video generator. Our code is available at https://github.com/G-U-N/Phased-Consistency-Model.", "pdf": "https://openreview.net/pdf/d6b0d049925a3753a9c86b198b7ec86f4a50b681.pdf"} {"title": "Doubly Mild Generalization for Offline Reinforcement Learning", "url": "https://openreview.net/forum?id=7QG9R8urVy", "detail_url": "https://openreview.net/forum?id=7QG9R8urVy", "authors": "Yixiu Mao,Cheems Wang,Yun Qu,Yuhang Jiang,Xiangyang Ji", "tags": "NIPS 2024,Poster", "abstract": "Offline Reinforcement Learning (RL) suffers from extrapolation error and value overestimation. From a generalization perspective, this issue can be attributed to the over-generalization of value functions or policies towards out-of-distribution (OOD) actions. Significant efforts have been devoted to mitigating such generalization, and recent in-sample learning approaches have further succeeded in entirely eschewing it. Nevertheless, we show that mild generalization beyond the dataset can be trusted and leveraged to improve performance under certain conditions. To appropriately exploit generalization in offline RL, we propose Doubly Mild Generalization (DMG), comprising (i) mild action generalization and (ii) mild generalization propagation. 
The former refers to selecting actions in a close neighborhood of the dataset to maximize the Q values. Even so, potentially erroneous generalization can still be propagated, accumulated, and exacerbated by bootstrapping. In light of this, the latter concept is introduced to mitigate the generalization propagation without impeding the propagation of RL learning signals. Theoretically, DMG guarantees better performance than the in-sample optimal policy in the oracle generalization scenario. Even under worst-case generalization, DMG can still control value overestimation at a certain level and lower bound the performance. Empirically, DMG achieves state-of-the-art performance across Gym-MuJoCo locomotion tasks and challenging AntMaze tasks. Moreover, benefiting from its flexibility in both generalization aspects, DMG enjoys a seamless transition from offline to online learning and attains strong online fine-tuning performance.", "pdf": "https://openreview.net/pdf/4db35aef66dffb6630e99f2fbb36c0e5c72c91ac.pdf"} {"title": "Unveiling the Tapestry of Consistency in Large Vision-Language Models", "url": "https://openreview.net/forum?id=tu1oC7zHGW", "detail_url": "https://openreview.net/forum?id=tu1oC7zHGW", "authors": "Yuan Zhang,Fei xiao,Tao Huang,Chun-Kai Fan,Hongyuan Dong,Jiawen Li,Jiacong Wang,Kuan Cheng,Shanghang Zhang,Haoyuan Guo", "tags": "NIPS 2024,Poster", "abstract": "Large vision-language models (LVLMs) have recently achieved rapid progress, exhibiting great perception and reasoning abilities concerning visual information. However, when faced with prompts in different sizes of solution spaces, LVLMs do not always give consistent answers regarding the same knowledge point. This inconsistency of answers between different solution spaces is prevalent in LVLMs and erodes trust. To this end, we provide a multi-modal benchmark, ConBench, to intuitively analyze how LVLMs perform when the solution space of a prompt revolves around a knowledge point. Based on the ConBench tool, we are the first to reveal the tapestry and obtain the following findings: (1) In the discriminative realm, the larger the solution space of the prompt, the lower the accuracy of the answers. \n(2) Establishing the relationship between the discriminative and generative realms: the accuracy of the discriminative question type exhibits a strong positive correlation with its Consistency with the caption. (3) Compared to open-source models, closed-source models exhibit a pronounced bias advantage in terms of Consistency. Eventually, we ameliorate the consistency of LVLMs by trigger-based diagnostic refinement, indirectly improving the quality of their captions. We hope this paper will accelerate the research community in better evaluating their models and encourage future advancements in the consistency domain.", "pdf": "https://openreview.net/pdf/83dfd150b1948e538576ef1e057ad1a668b1ca3e.pdf"} {"title": "Visual Perception by Large Language Model\u2019s Weights", "url": "https://openreview.net/forum?id=JPtobPtxKT", "detail_url": "https://openreview.net/forum?id=JPtobPtxKT", "authors": "Feipeng Ma,Hongwei Xue,Yizhou Zhou,Guangting Wang,Fengyun Rao,Shilin Yan,Yueyi Zhang,Siying Wu,Mike Zheng Shou,Xiaoyan Sun", "tags": "NIPS 2024,Poster", "abstract": "Existing Multimodal Large Language Models (MLLMs) follow the paradigm that perceives visual information by aligning visual features with the input space of Large Language Models (LLMs) and concatenating visual tokens with text tokens to form a unified sequence input for LLMs. 
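The DMG abstract above names two ingredients: maximizing Q over actions in a close neighborhood of the dataset, and damping how that generalization propagates through bootstrapping. The following is a hypothetical TD-target sketch of both; the Gaussian neighborhood, the mixing coefficient `lam`, and all tensor shapes are illustrative assumptions rather than the paper's construction.

```python
import torch

def dmg_style_target(q_net, rewards, next_states, dataset_next_actions,
                     gamma=0.99, lam=0.5, noise_scale=0.1, n_candidates=10):
    # (i) Mild action generalization: evaluate Q on actions sampled in a
    # small Gaussian neighborhood of the dataset action, then take the max.
    B, A = dataset_next_actions.shape
    noise = noise_scale * torch.randn(B, n_candidates, A)
    candidates = (dataset_next_actions.unsqueeze(1) + noise).clamp(-1.0, 1.0)
    states = next_states.unsqueeze(1).expand(-1, n_candidates, -1)
    q = q_net(states.reshape(B * n_candidates, -1),
              candidates.reshape(B * n_candidates, A))
    q_max = q.reshape(B, n_candidates).max(dim=1).values
    # (ii) Mild generalization propagation: blend with the plain in-sample
    # bootstrap so erroneous generalization is not fully propagated.
    q_in_sample = q_net(next_states, dataset_next_actions).reshape(B)
    return rewards + gamma * (lam * q_max + (1.0 - lam) * q_in_sample)
```

Here `q_net` is assumed to map a batch of (state, action) pairs to scalar Q-values.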
These methods demonstrate promising results on various vision-language tasks but are limited by the high computational effort due to the extended input sequence resulting from the involvement of visual tokens. In this paper, instead of input space alignment, we propose a novel parameter space alignment paradigm that represents visual information as model weights. For each input image, we use a vision encoder to extract visual features, convert features into perceptual weights, and merge the perceptual weights with LLM's weights. In this way, the input of LLM does not require visual tokens, which reduces the length of the input sequence and greatly improves efficiency. Following this paradigm, we propose VLoRA with the perceptual weights generator. The perceptual weights generator is designed to convert visual features to perceptual weights with low-rank property, exhibiting a form similar to LoRA. The experimental results show that our VLoRA achieves comparable performance on various benchmarks for MLLMs, while significantly reducing the computational costs for both training and inference. Code and models are released at \url{https://github.com/FeipengMa6/VLoRA}.", "pdf": "https://openreview.net/pdf/aff7b95d0125b4686a093d84c06428c6a04b7fd2.pdf"} {"title": "Hybrid Mamba for Few-Shot Segmentation", "url": "https://openreview.net/forum?id=Qe2BKeCEBC", "detail_url": "https://openreview.net/forum?id=Qe2BKeCEBC", "authors": "Qianxiong Xu,Xuanyi Liu,Lanyun Zhu,Guosheng Lin,Cheng Long,Ziyue Li,Rui Zhao", "tags": "NIPS 2024,Poster", "abstract": "Many few-shot segmentation (FSS) methods use cross attention to fuse support foreground (FG) into query features, despite its quadratic complexity. Mamba, a recent advance, can also capture intra-sequence dependencies well, yet with only linear complexity. Hence, we aim to devise a cross (attention-like) Mamba to capture inter-sequence dependencies for FSS. A simple idea is to scan on support features to selectively compress them into the hidden state, which is then used as the initial hidden state to sequentially scan query features. Nevertheless, it suffers from (1) support forgetting issue: query features will also gradually be compressed when scanning on them, so the support features in the hidden state keep diminishing, and many query pixels cannot fuse sufficient support features; (2) intra-class gap issue: query FG is essentially more similar to itself than to support FG, i.e., query pixels may prefer to fuse their own features from the hidden state rather than support features, yet the success of FSS relies on the effective use of support information. To tackle them, we design a hybrid Mamba network (HMNet), including (1) a support recapped Mamba to periodically recap the support features when scanning query, so the hidden state can always contain rich support information; (2) a query intercepted Mamba to forbid the mutual interactions among query pixels, and encourage them to fuse more support features from the hidden state. Consequently, the support information is better utilized, leading to better performance. Extensive experiments have been conducted on two public benchmarks, showing the superiority of HMNet. 
The code is available at https://github.com/Sam1224/HMNet.", "pdf": "https://openreview.net/pdf/7fca1561230feeb691e2dbccd7f1ef8f79dfa5ec.pdf"} {"title": "Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification", "url": "https://openreview.net/forum?id=AF32GbuupC", "detail_url": "https://openreview.net/forum?id=AF32GbuupC", "authors": "Yihong Luo,Yuhan Chen,Siya Qiu,Yiwei Wang,Chen Zhang,Yan Zhou,Xiaochun Cao,Jing Tang", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs) have shown superior performance in node classification. However, GNNs perform poorly in the Few-Shot Node Classification (FSNC) task that requires robust generalization to make accurate predictions for unseen classes with limited labels. To tackle the challenge, we propose the integration of Sharpness-Aware Minimization (SAM)--a technique designed to enhance model generalization by finding a flat minimum of the loss landscape--into GNN training. The standard SAM approach, however, consists of two forward-backward steps in each training iteration, doubling the computational cost compared to the base optimizer (e.g., Adam). To mitigate this drawback, we introduce a novel algorithm, Fast Graph Sharpness-Aware Minimization (FGSAM), that integrates the rapid training of Multi-Layer Perceptrons (MLPs) with the superior performance of GNNs. Specifically, we utilize GNNs for parameter perturbation while employing MLPs to minimize the perturbed loss so that we can find a flat minimum with good generalization more efficiently. Moreover, our method reutilizes the gradient from the perturbation phase to incorporate graph topology into the minimization process at almost zero additional cost. To further enhance training efficiency, we develop FGSAM+ that executes exact perturbations periodically. Extensive experiments demonstrate that our proposed algorithm outperforms the standard SAM with lower computational costs in FSNC tasks. In particular, our FGSAM+, as a SAM variant, offers faster optimization than the base optimizer in most cases. In addition to FSNC, our proposed methods also demonstrate competitive performance in the standard node classification task for heterophilic graphs, highlighting their broad applicability.", "pdf": "https://openreview.net/pdf/5a54d46eeef20ef32811f6bca1369685f8431781.pdf"} {"title": "Are We on the Right Way for Evaluating Large Vision-Language Models?", "url": "https://openreview.net/forum?id=evP9mxNNxJ", "detail_url": "https://openreview.net/forum?id=evP9mxNNxJ", "authors": "Lin Chen,Jinsong Li,Xiaoyi Dong,Pan Zhang,Yuhang Zang,Zehui Chen,Haodong Duan,Jiaqi Wang,Yu Qiao,Dahua Lin,Feng Zhao", "tags": "NIPS 2024,Poster", "abstract": "Large vision-language models (LVLMs) have recently achieved rapid progress, sparking numerous studies to evaluate their multi-modal capabilities. However, we dig into current evaluation works and identify two primary issues: 1) Visual content is unnecessary for many samples. The answers can be directly inferred from the questions and options, or the world knowledge embedded in LLMs. This phenomenon is prevalent across current benchmarks. For instance, GeminiPro achieves 42.7% on the MMMU benchmark without any visual input, and outperforms the random-choice baseline by nearly 24% on average across six benchmarks. 2) Unintentional data leakage exists in LLM and LVLM training. 
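For context on the FGSAM abstract above: the base SAM update it accelerates is a two-step ascent-then-descent procedure. A generic sketch follows; FGSAM's GNN-for-perturbation / MLP-for-minimization split and its gradient reuse are not reproduced here.

```python
import torch

def sam_step(model, loss_fn, data, optimizer, rho=0.05):
    """One generic Sharpness-Aware Minimization step."""
    optimizer.zero_grad()
    # Step 1: ascend to the worst-case weights within an L2 ball of radius rho.
    loss = loss_fn(model(data))
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append((p, e))
    optimizer.zero_grad()
    # Step 2: descend using the gradient taken at the perturbed weights.
    loss_fn(model(data)).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)  # restore original weights before the update
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```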
LLMs and LVLMs can still answer some vision-dependent questions without visual content, indicating that these samples have been memorized from large-scale training data. For example, Sphinx-X-MoE gets 43.6% on MMMU without accessing images, surpassing its LLM backbone by 17.9%. Both problems lead to misjudgments of actual multi-modal gains and potentially misguide the study of LVLMs. To this end, we present MMStar, an elite vision-indispensable multi-modal benchmark comprising 1,500 samples meticulously selected by humans. MMStar benchmarks 6 core capabilities and 18 detailed axes, aiming to evaluate LVLMs' multi-modal capacities with carefully balanced and purified samples. These samples are first roughly selected from current benchmarks with an automated pipeline; human review is then involved to ensure each curated sample exhibits visual dependency, minimal data leakage, and requires advanced multi-modal capabilities. Moreover, two metrics are developed to measure data leakage and actual performance gain in multi-modal training. We evaluate 16 leading LVLMs on MMStar to assess their multi-modal capabilities, and on 7 benchmarks with the proposed metrics to investigate their data leakage and actual multi-modal gain.", "pdf": "https://openreview.net/pdf/97ee2414d16a670f5c352ebe33c5f855a53a7dea.pdf"} {"title": "Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees", "url": "https://openreview.net/forum?id=H5z0XqEX57", "detail_url": "https://openreview.net/forum?id=H5z0XqEX57", "authors": "Taiki Miyagawa,Takeru Yokota", "tags": "NIPS 2024,Poster", "abstract": "We propose the first learning scheme for functional differential equations (FDEs).\nFDEs play a fundamental role in physics, mathematics, and optimal control.\nHowever, the numerical analysis of FDEs has faced challenges due to prohibitive computational costs and has been a long-standing problem for decades.\nThus, numerical approximations of FDEs have been developed, but they often oversimplify the solutions. \nTo tackle these two issues, we propose a hybrid approach combining physics-informed neural networks (PINNs) with the *cylindrical approximation*. \nThe cylindrical approximation expands functions and functional derivatives with an orthonormal basis and transforms FDEs into high-dimensional PDEs. 
\nTo validate the reliability of the cylindrical approximation for FDE applications, we prove convergence theorems for the approximated functional derivatives and solutions.\nThen, the derived high-dimensional PDEs are numerically solved with PINNs.\nThrough the capabilities of PINNs, our approach can handle a broader class of functional derivatives more efficiently than conventional discretization-based methods, improving the scalability of the cylindrical approximation.\nAs a proof of concept, we conduct experiments on two FDEs and demonstrate that our model can successfully achieve typical $L^1$ relative error orders of PINNs $\sim 10^{-3}$.\nOverall, our work provides a strong backbone for physicists, mathematicians, and machine learning experts to analyze previously challenging FDEs, thereby democratizing their numerical analysis, which has received limited attention.", "pdf": "https://openreview.net/pdf/af8cd25af20af0733160dca64cc8125087b4e653.pdf"} {"title": "Optimal and Approximate Adaptive Stochastic Quantization", "url": "https://openreview.net/forum?id=8ZLL6mu2qC", "detail_url": "https://openreview.net/forum?id=8ZLL6mu2qC", "authors": "Ran Ben-Basat,Yaniv Ben-Itzhak,Michael Mitzenmacher,shay vargaftik", "tags": "NIPS 2024,Poster", "abstract": "Quantization is a fundamental optimization for many machine learning (ML) use cases, including compressing gradients, model weights and activations, and datasets. The most accurate form of quantization is adaptive, where the error is minimized with respect to a given input rather than optimizing for the worst case. However, optimal adaptive quantization methods are considered infeasible in terms of both their runtime and memory requirements.\n\nWe revisit the Adaptive Stochastic Quantization (ASQ) problem and present algorithms that find optimal solutions with asymptotically improved time and space complexities. Our experiments indicate that our algorithms may open the door to using ASQ more extensively in a variety of ML applications. We also present an even faster approximation algorithm for quantizing large inputs on the fly.", "pdf": "https://openreview.net/pdf/35d60c34258ab73bfbcd44317112fca129f7dc09.pdf"} {"title": "The Poisson Midpoint Method for Langevin Dynamics: Provably Efficient Discretization for Diffusion Models", "url": "https://openreview.net/forum?id=Ylvviju6MD", "detail_url": "https://openreview.net/forum?id=Ylvviju6MD", "authors": "Saravanan Kandasamy,Dheeraj Mysore Nagaraj", "tags": "NIPS 2024,Poster", "abstract": "Langevin Dynamics is a Stochastic Differential Equation (SDE) central to sampling and generative modeling and is implemented via time discretization. Langevin Monte Carlo (LMC), based on the Euler-Maruyama discretization, is the simplest and most studied algorithm. LMC can suffer from slow convergence - requiring a large number of steps of small step-size to obtain good quality samples. This becomes stark in the case of diffusion models where a large number of steps gives the best samples, but the quality degrades rapidly with a smaller number of steps. The Randomized Midpoint Method has recently been proposed as a better discretization of Langevin dynamics for sampling from strongly log-concave distributions. However, important applications such as diffusion models involve non-log concave densities and contain time-varying drift. We propose its variant, the Poisson Midpoint Method, which approximates a small step-size LMC with large step-sizes. 
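As background for the Poisson Midpoint abstract above, the Euler-Maruyama discretization it improves upon is a one-line update. A minimal sketch follows; the paper's midpoint variant is deliberately not reproduced here.

```python
import numpy as np

def langevin_monte_carlo(grad_log_p, x0, step_size, n_steps, rng=None):
    """Plain Euler-Maruyama discretization of Langevin dynamics (LMC).

    grad_log_p is the score of the target density:
        x_{k+1} = x_k + eta * grad_log_p(x_k) + sqrt(2 * eta) * xi_k.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step_size * grad_log_p(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Example: sample from a standard Gaussian, where grad log p(x) = -x.
sample = langevin_monte_carlo(lambda x: -x, x0=np.zeros(2),
                              step_size=1e-2, n_steps=5_000)
```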
We prove that this can obtain a quadratic speed-up over LMC under very weak assumptions. We apply our method to diffusion models for image generation and show that with just 50-80 neural network calls it matches the quality of DDPM with 1000 calls, and outperforms ODE-based methods with similar compute.", "pdf": "https://openreview.net/pdf/3af5c647f6167fef7353b18afe07eb2503d97baf.pdf"} {"title": "Near-Optimal Streaming Heavy-Tailed Statistical Estimation with Clipped SGD", "url": "https://openreview.net/forum?id=8JauriwDeH", "detail_url": "https://openreview.net/forum?id=8JauriwDeH", "authors": "Aniket Das,Dheeraj Mysore Nagaraj,Soumyabrata Pal,Arun Suggala,Prateek Varshney", "tags": "NIPS 2024,Poster", "abstract": "$\newcommand{\Tr}{\mathsf{Tr}}$\nWe consider the problem of high-dimensional heavy-tailed statistical estimation in the streaming setting, which is much harder than the traditional batch setting due to memory constraints. We cast this problem as stochastic convex optimization with heavy-tailed stochastic gradients, and prove that the widely used Clipped-SGD algorithm attains near-optimal sub-Gaussian statistical rates whenever the second moment of the stochastic gradient noise is finite. More precisely, with $T$ samples, we show that Clipped-SGD, for smooth and strongly convex objectives, achieves an error of $\sqrt{\frac{\Tr(\Sigma)+\sqrt{\Tr(\Sigma)\\|\Sigma\\|_2}\ln(\tfrac{\ln(T)}{\delta})}{T}}$ with probability $1-\delta$, where $\Sigma$ is the covariance of the clipped gradient. Note that the fluctuations (depending on $\tfrac{1}{\delta}$) are of lower order than the term $\Tr(\Sigma)$.\nThis improves upon the current best rate of\n$\sqrt{\frac{\Tr(\Sigma)\ln(\tfrac{1}{\delta})}{T}}$ for Clipped-SGD, known \emph{only} for smooth and strongly convex objectives. Our results also extend to smooth convex and Lipschitz convex objectives. Key to our result is a novel iterative refinement strategy for martingale concentration, improving upon the PAC-Bayes approach of \citet{catoni2018dimension}.", "pdf": "https://openreview.net/pdf/492b971c4dfd9caecfce61206d2bccbf105635fc.pdf"} {"title": "Transformers on Markov data: Constant depth suffices", "url": "https://openreview.net/forum?id=5uG9tp3v2q", "detail_url": "https://openreview.net/forum?id=5uG9tp3v2q", "authors": "Nived Rajaraman,Marco Bondaschi,Ashok Vardhan Makkuva,Kannan Ramchandran,Michael Gastpar", "tags": "NIPS 2024,Poster", "abstract": "Attention-based transformers have been remarkably successful at modeling generative processes across various domains and modalities. In this paper, we study the behavior of transformers on data drawn from $k^{\text{th}}$-order Markov processes, where the conditional distribution of the next symbol in a sequence depends on the previous $k$ symbols observed. We empirically observe a surprising phenomenon that contradicts previous findings: when trained for sufficiently long, a transformer with a fixed depth and $1$ head per layer is able to achieve low test loss on sequences drawn from $k^{\text{th}}$-order Markov sources, even as $k$ grows. Furthermore, this low test loss is achieved by the transformer\u2019s ability to represent and learn the in-context conditional empirical distribution. On the theoretical side, we prove that a transformer with $O(\log_2(k))$ layers can represent the in-context conditional empirical distribution by composing induction heads to track the previous $k$ symbols in the sequence. 
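The Clipped-SGD algorithm analyzed in the streaming heavy-tailed estimation abstract above is simple to state: clip each stochastic gradient to an L2 ball, then take a gradient step. A sketch with the clipping radius and step-size schedule left as inputs; their theoretically optimal choices are given in the paper, not here.

```python
import numpy as np

def clipped_sgd(grad_oracle, x0, step_sizes, clip_radius, n_steps, rng=None):
    """Clipped-SGD: clip each stochastic gradient before the step."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for t in range(n_steps):
        g = grad_oracle(x, rng)            # heavy-tailed stochastic gradient
        norm = np.linalg.norm(g)
        if norm > clip_radius:             # project onto the clipping ball
            g = g * (clip_radius / norm)
        x = x - step_sizes(t) * g
    return x
```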
Surprisingly, with the addition of layer normalization, we show that a transformer with a constant number of layers can represent the in-context conditional empirical distribution, concurring with our empirical observations. This result provides more insight into the benefit of soft-attention and non-linearities in the transformer architecture.", "pdf": "https://openreview.net/pdf/3f922424a4972b109b5e294928fdb3fc38a075fd.pdf"} {"title": "A Siamese Transformer with Hierarchical Refinement for Lane Detection", "url": "https://openreview.net/forum?id=E3HDagVPNG", "detail_url": "https://openreview.net/forum?id=E3HDagVPNG", "authors": "Zinan Lv,Dong Han,Wenzhe Wang,Danny Chen", "tags": "NIPS 2024,Poster", "abstract": "Lane detection is an important yet challenging task in autonomous driving systems. Existing lane detection methods mainly rely on finer-scale information to identify key points of lane lines. Since local information in realistic road environments is frequently obscured by other vehicles or affected by poor outdoor lighting conditions, these methods struggle with the regression of such key points. In this paper, we propose a novel Siamese Transformer with hierarchical refinement for lane detection to improve the detection accuracy in complex road environments. Specifically, we propose a high-to-low hierarchical refinement Transformer structure, called LAne TRansformer (LATR), to refine the key points of lane lines, which integrates global semantic information and finer-scale features. Moreover, exploiting the thin and long characteristics of lane lines, we propose a novel Curve-IoU loss to supervise the fit of lane lines. Extensive experiments on three benchmark datasets of lane detection demonstrate that our proposed method achieves state-of-the-art results with high accuracy and efficiency. Specifically, our method achieves improved F1 scores on the OpenLane dataset, surpassing the current best-performing method by 5.0 points.", "pdf": "https://openreview.net/pdf/b1a921745df62f58f050a0eb88efb08b853d8422.pdf"} {"title": "YOLOv10: Real-Time End-to-End Object Detection", "url": "https://openreview.net/forum?id=tz83Nyb71l", "detail_url": "https://openreview.net/forum?id=tz83Nyb71l", "authors": "Ao Wang,Hui Chen,Lihao Liu,Kai CHEN,Zijia Lin,Jungong Han,Guiguang Ding", "tags": "NIPS 2024,Poster", "abstract": "Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and others for YOLOs, achieving notable progress. However, the reliance on the non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs and adversely impacts the inference latency. Besides, the design of various components in YOLOs lacks comprehensive and thorough inspection, resulting in noticeable computational redundancy and limiting the model's capability. This results in suboptimal efficiency, along with considerable potential for performance improvements. In this work, we aim to further advance the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture. To this end, we first present the consistent dual assignments for NMS-free training of YOLOs, which brings competitive performance and low inference latency simultaneously. 
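The transformers-on-Markov-data abstract above says trained models represent the in-context conditional empirical distribution over the previous $k$ symbols. That estimator itself is easy to state; a sketch follows, with add-one smoothing as an illustrative choice rather than the paper's definition.

```python
from collections import Counter, defaultdict

def in_context_conditional(seq, k, vocab):
    """Empirical distribution of the next symbol given the last k symbols,
    estimated from the context sequence itself."""
    counts = defaultdict(Counter)
    for i in range(len(seq) - k):
        counts[tuple(seq[i:i + k])][seq[i + k]] += 1
    ctx = tuple(seq[-k:])
    total = sum(counts[ctx].values()) + len(vocab)   # add-one smoothing
    return {v: (counts[ctx][v] + 1) / total for v in vocab}

# e.g. a binary sequence with k = 2: the last context is (0, 1),
# which was followed by 1 twice, so 1 gets the larger probability.
print(in_context_conditional([0, 1, 1, 0, 1, 1, 0, 1], k=2, vocab=[0, 1]))
```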
Moreover, we introduce a holistic efficiency-accuracy-driven model design strategy for YOLOs. We comprehensively optimize various components of YOLOs from both the efficiency and accuracy perspectives, which greatly reduces the computational overhead and enhances the capability. The outcome of our effort is a new generation of YOLO series for real-time end-to-end object detection, dubbed YOLOv10. Extensive experiments show that YOLOv10 achieves state-of-the-art performance and efficiency across various model scales. For example, our YOLOv10-S is 1.8$\times$ faster than RT-DETR-R18 under similar AP on COCO, while using 2.8$\times$ fewer parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46\% less latency and 25\% fewer parameters for the same performance. Code and models are available at https://github.com/THU-MIG/yolov10.", "pdf": "https://openreview.net/pdf/6fb5f9fb30b35e9848efa2418cd263f561d2e3a5.pdf"} {"title": "Harmonizing Visual Text Comprehension and Generation", "url": "https://openreview.net/forum?id=fqjeKsHOVR", "detail_url": "https://openreview.net/forum?id=fqjeKsHOVR", "authors": "Zhen Zhao,Jingqun Tang,Binghong Wu,Chunhui Lin,Shu Wei,Hao Liu,Xin Tan,zhizhong zhang,Can Huang,Yuan Xie", "tags": "NIPS 2024,Poster", "abstract": "In this work, we present TextHarmony, a unified and versatile multimodal generative model proficient in comprehending and generating visual text. Simultaneously generating images and texts typically results in performance degradation due to the inherent inconsistency between vision and language modalities. To overcome this challenge, existing approaches resort to modality-specific data for supervised fine-tuning, necessitating distinct model instances. We propose Slide-LoRA, which dynamically aggregates modality-specific and modality-agnostic LoRA experts, partially decoupling the multimodal generation space. Slide-LoRA harmonizes the generation of vision and language within a singular model instance, thereby facilitating a more unified generative process. Additionally, we develop a high-quality image caption dataset, DetailedTextCaps-100K, synthesized with a sophisticated closed-source MLLM to enhance visual text generation capabilities further. Comprehensive experiments across various benchmarks demonstrate the effectiveness of the proposed approach. Empowered by Slide-LoRA, TextHarmony achieves comparable performance to modality-specific fine-tuning results with only a 2% increase in parameters and shows an average improvement of 2.5% in visual text comprehension tasks and 4.0% in visual text generation tasks. Our work delineates the viability of an integrated approach to multimodal generation within the visual text domain, setting a foundation for subsequent inquiries. Code is available at https://github.com/bytedance/TextHarmony.", "pdf": "https://openreview.net/pdf/ab52b4b6e417851e6307a8ff59755d172b1a46af.pdf"} {"title": "Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting", "url": "https://openreview.net/forum?id=qDfPSWXSLt", "detail_url": "https://openreview.net/forum?id=qDfPSWXSLt", "authors": "Ziyi Yang,Xinyu Gao,Yang-Tian Sun,Yi-Hua Huang,Xiaoyang Lyu,Wen Zhou,Shaohui Jiao,XIAOJUAN QI,Xiaogang Jin", "tags": "NIPS 2024,Poster", "abstract": "The recent advancements in 3D Gaussian splatting (3D-GS) have not only facilitated real-time rendering through modern GPU rasterization pipelines but have also attained state-of-the-art rendering quality. 
Nevertheless, despite its exceptional rendering quality and performance on standard datasets, 3D-GS frequently encounters difficulties in accurately modeling specular and anisotropic components. This issue stems from the limited ability of spherical harmonics (SH) to represent high-frequency information. To overcome this challenge, we introduce Spec-Gaussian, an approach that utilizes an anisotropic spherical Gaussian (ASG) appearance field instead of SH for modeling the view-dependent appearance of each 3D Gaussian. Additionally, we have developed a coarse-to-fine training strategy to improve learning efficiency and eliminate floaters caused by overfitting in real-world scenes. Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality. Thanks to ASG, we have significantly improved the ability of 3D-GS to model scenes with specular and anisotropic components without increasing the number of 3D Gaussians. This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.", "pdf": "https://openreview.net/pdf/318102c1ccdb1f36d39b709bb4becf3633743e6f.pdf"} {"title": "Discovering Creative Behaviors through DUPLEX: Diverse Universal Features for Policy Exploration", "url": "https://openreview.net/forum?id=bHgkT0sUy6", "detail_url": "https://openreview.net/forum?id=bHgkT0sUy6", "authors": "Borja G. Le\u00f3n,Francesco Riccio,Kaushik Subramanian,Peter R. Wurman,Peter Stone", "tags": "NIPS 2024,Poster", "abstract": "The ability to approach the same problem from different angles is a cornerstone of human intelligence that leads to robust solutions and effective adaptation to problem variations. In contrast, current RL methodologies tend to lead to policies that settle on a single solution to a given problem, making them brittle to problem variations. Replicating human flexibility in reinforcement learning agents is the challenge that we explore in this work. We tackle this challenge by extending state-of-the-art approaches to introduce DUPLEX, a method that explicitly defines a diversity objective with constraints and makes robust estimates of policies\u2019 expected behavior through successor features. The trained agents can (i) learn a diverse set of near-optimal policies in complex highly-dynamic environments and (ii) exhibit competitive and diverse skills in out-of-distribution (OOD) contexts. Empirical results indicate that DUPLEX improves over previous methods and successfully learns competitive driving styles in a hyper-realistic simulator (i.e., GranTurismo \u2122 7) as well as diverse and effective policies in several multi-context robotics MuJoCo simulations with OOD gravity forces and height limits. To the best of our knowledge, our method is the first to achieve diverse solutions in complex driving simulators and OOD robotic contexts. 
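The Spec-Gaussian abstract above replaces spherical harmonics with an anisotropic spherical Gaussian (ASG) appearance field but does not spell out the lobe it uses. For reference, a common ASG parameterization from the graphics literature is given below; treating it as Spec-Gaussian's exact form is an assumption.

```latex
% A standard anisotropic spherical Gaussian (ASG) lobe:
%   view direction \nu on the unit sphere; orthonormal frame (x, y, z);
%   bandwidths \lambda, \mu > 0; amplitude a.
\[
\mathrm{ASG}\big(\nu \mid \{x, y, z\}, \{\lambda, \mu\}, a\big)
  = a \cdot \max(\nu \cdot z, 0) \cdot
    \exp\!\left(-\lambda (\nu \cdot x)^2 - \mu (\nu \cdot y)^2\right)
\]
```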
DUPLEX agents demonstrating diverse behaviors can be found at https://ai.sony/publications/Discovering-Creative-Behaviors-through-DUPLEX-Diverse-Universal-Features-for-Policy-Exploration/.", "pdf": "https://openreview.net/pdf/797c73757517c0a099940cf56bbd136b9ef349b8.pdf"} {"title": "Conjugate Bayesian Two-step Change Point Detection for Hawkes Process", "url": "https://openreview.net/forum?id=WoKtFJf9VG", "detail_url": "https://openreview.net/forum?id=WoKtFJf9VG", "authors": "Zeyue Zhang,Xiaoling LU,Feng Zhou", "tags": "NIPS 2024,Poster", "abstract": "The Bayesian two-step change point detection method is popular for the Hawkes process due to its simplicity and intuitiveness. However, the non-conjugacy between the point process likelihood and the prior requires most existing Bayesian two-step change point detection methods to rely on non-conjugate inference methods. These methods lack analytical expressions, leading to low computational efficiency and impeding timely change point detection. To address this issue, this work employs data augmentation to propose a conjugate Bayesian two-step change point detection method for the Hawkes process, which proves to be more accurate and efficient. Extensive experiments on both synthetic and real data demonstrate the superior effectiveness and efficiency of our method compared to baseline methods. Additionally, we conduct ablation studies to explore the robustness of our method concerning various hyperparameters.", "pdf": "https://openreview.net/pdf/43f391e2419a823a5db13b99a25f36a7e1f98f5b.pdf"} {"title": "MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing", "url": "https://openreview.net/forum?id=XIScpCMUse", "detail_url": "https://openreview.net/forum?id=XIScpCMUse", "authors": "Chenjie Cao,Chaohui Yu,Fan Wang,Xiangyang Xue,Yanwei Fu", "tags": "NIPS 2024,Poster", "abstract": "Novel View Synthesis (NVS) and 3D generation have recently achieved prominent improvements. However, these works mainly focus on confined categories or synthetic 3D assets, which struggle to generalize to challenging in-the-wild scenes and cannot be directly employed with 2D synthesis. Moreover, these methods heavily depend on camera poses, limiting their real-world applications. \nTo overcome these issues, we propose MVInpainter, re-formulating 3D editing as a multi-view 2D inpainting task. Specifically, MVInpainter partially inpaints multi-view images with the reference guidance rather than intractably generating an entirely novel view from scratch, which largely simplifies the difficulty of in-the-wild NVS and leverages unmasked clues instead of explicit pose conditions. To ensure cross-view consistency, MVInpainter is enhanced by video priors from motion components and appearance guidance from concatenated reference key\\&value attention. Furthermore, MVInpainter incorporates slot attention to aggregate high-level optical flow features from unmasked regions to control the camera movement with pose-free training and inference. Extensive scene-level experiments on both object-centric and forward-facing datasets verify the effectiveness of MVInpainter, including diverse tasks, such as multi-view object removal, synthesis, insertion, and replacement. 
The project page is https://ewrfcas.github.io/MVInpainter/.", "pdf": "https://openreview.net/pdf/fb2df576fdedb7707a719fe7fe47b61279006c85.pdf"} {"title": "Local to Global: Learning Dynamics and Effect of Initialization for Transformers", "url": "https://openreview.net/forum?id=OX4yll3X53", "detail_url": "https://openreview.net/forum?id=OX4yll3X53", "authors": "Ashok Vardhan Makkuva,Marco Bondaschi,Adway Girish,Alliot Nagle,Hyeji Kim,Michael Gastpar,Chanakya Ekbote", "tags": "NIPS 2024,Poster", "abstract": "In recent years, transformer-based models have revolutionized deep learning, particularly in sequence modeling. To better understand this phenomenon, there is a growing interest in using Markov input processes to study transformers. However, our current understanding in this regard remains limited with many fundamental questions about how transformers learn Markov chains still unanswered. In this paper, we address this by focusing on first-order Markov chains and single-layer transformers, providing a comprehensive characterization of the learning dynamics in this context. Specifically, we prove that transformer parameters trained on next-token prediction loss can either converge to global or local minima, contingent on the initialization and the Markovian data properties, and we characterize the precise conditions under which this occurs. To the best of our knowledge, this is the first result of its kind highlighting the role of initialization. We further demonstrate that our theoretical findings are corroborated by empirical evidence. Based on these insights, we provide guidelines for the initialization of single-layer transformers and demonstrate their effectiveness. Finally, we outline several open problems in this arena. Code is available at: \url{https://github.com/Bond1995/Markov}.", "pdf": "https://openreview.net/pdf/24ce24804175ff6c2a9e0cc0619d233c402e9ed9.pdf"} {"title": "AV-Cloud: Spatial Audio Rendering Through Audio-Visual Cloud Splatting", "url": "https://openreview.net/forum?id=yxOrSmS5wR", "detail_url": "https://openreview.net/forum?id=yxOrSmS5wR", "authors": "Mingfei Chen,Eli Shlizerman", "tags": "NIPS 2024,Poster", "abstract": "We propose a novel approach for rendering high-quality spatial audio for 3D scenes that is in synchrony with the visual stream but does not rely on, nor is explicitly conditioned on, the visual rendering. We demonstrate that such an approach enables the experience of immersive virtual tourism - performing real-time dynamic navigation within the scene, experiencing both audio and visual content. Current audio-visual rendering approaches typically rely on visual cues, such as images, and thus visual artifacts could cause inconsistency in the audio quality. Furthermore, when such approaches are incorporated with visual rendering, audio generation at each viewpoint occurs after the rendering of that viewpoint's image and thus could lead to audio lag that affects the integration of audio and visual streams. Our proposed approach, AV-Cloud, overcomes these challenges by learning the representation of the audio-visual scene based on a set of sparse AV anchor points that constitute the Audio-Visual Cloud and are derived from the camera calibration. The Audio-Visual Cloud serves as an audio-visual representation from which spatial audio for an arbitrary listener location can be generated. 
In particular, we propose a novel module, Audio-Visual Cloud Splatting, which decodes AV anchor points into a spatial audio transfer function for an arbitrary viewpoint of the target listener. This function, applied through the Spatial Audio Render Head module, transforms monaural input into viewpoint-specific spatial audio. As a result, AV-Cloud efficiently renders the spatial audio aligned with any visual viewpoint and eliminates the need for pre-rendered images. We show that AV-Cloud surpasses current state-of-the-art accuracy on audio reconstruction, perceptual quality, and acoustic effects on two real-world datasets. AV-Cloud also outperforms previous methods when tested on scenes \"in the wild\".", "pdf": "https://openreview.net/pdf/ddf8493e6466b57d0cba1ab7e5d9b9b4ab8c6d6d.pdf"} {"title": "Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation", "url": "https://openreview.net/forum?id=VzoyBrqJ4O", "detail_url": "https://openreview.net/forum?id=VzoyBrqJ4O", "authors": "Ning-Hsu Wang,Yu-Lun Liu", "tags": "NIPS 2024,Poster", "abstract": "Accurately estimating depth in 360-degree imagery is crucial for virtual reality, autonomous navigation, and immersive media applications. Existing depth estimation methods designed for perspective-view imagery fail when applied to 360-degree images due to different camera projections and distortions. We propose a new depth estimation framework that uses unlabeled 360-degree data effectively. Our approach uses state-of-the-art perspective depth estimation models as teacher models to generate pseudo labels through a six-face cube projection technique, enabling efficient labeling of depth in 360-degree images. This method leverages the increasing availability of large datasets. It includes two main stages: offline mask generation for invalid regions and an online semi-supervised joint training regime. We tested our approach on benchmark datasets such as Matterport3D and Stanford2D3D, showing significant improvements in depth estimation accuracy, particularly in zero-shot scenarios. Our proposed training pipeline can enhance any 360 monocular depth estimator and demonstrate effective knowledge transfer across different camera projections and data types.", "pdf": "https://openreview.net/pdf/547bde5cad09ac8ace727529e07642553636fc19.pdf"} {"title": "Q-VLM: Post-training Quantization for Large Vision-Language Models", "url": "https://openreview.net/forum?id=gxMfNArldP", "detail_url": "https://openreview.net/forum?id=gxMfNArldP", "authors": "Changyuan Wang,Ziwei Wang,Xiuwei Xu,Yansong Tang,Jie Zhou,Jiwen Lu", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we propose a post-training quantization framework for large vision-language models (LVLMs) for efficient multi-modal inference. Conventional quantization methods sequentially search the layer-wise rounding functions by minimizing activation discretization errors, which fails to acquire an optimal quantization strategy without considering cross-layer dependency. In contrast, we mine the cross-layer dependency that significantly influences discretization errors of the entire vision-language model, and embed this dependency into optimal quantization strategy searching with low search cost. Specifically, we observe the strong correlation between the activation entropy and the cross-layer dependency concerning output discretization errors. 
Therefore, we employ entropy as a proxy to partition blocks optimally, aiming to achieve satisfactory trade-offs between discretization errors and the search cost. Moreover, we optimize the visual encoder to disentangle the cross-layer dependency for fine-grained decomposition of search space, so that the search cost is further reduced without harming the quantization accuracy. Experimental results demonstrate that our method compresses memory by 2.78x and increases generation speed by 1.44x for the 13B LLaVA model without performance degradation on diverse multi-modal reasoning tasks.", "pdf": "https://openreview.net/pdf/122716785c243571ac10f1d02f7891436880afcc.pdf"} {"title": "Visual Prompt Tuning in Null Space for Continual Learning", "url": "https://openreview.net/forum?id=8pRemr5kEi", "detail_url": "https://openreview.net/forum?id=8pRemr5kEi", "authors": "Yue Lu,Shizhou Zhang,De Cheng,Yinghui Xing,Nannan Wang,PENG WANG,Yanning Zhang", "tags": "NIPS 2024,Poster", "abstract": "Existing prompt-tuning methods have demonstrated impressive performances in continual learning (CL), by selecting and updating relevant prompts in the vision-transformer models. On the contrary, this paper aims to learn each task by tuning the prompts in the direction orthogonal to the subspace spanned by previous tasks' features, so as to ensure no interference with previously learned tasks, thereby overcoming catastrophic forgetting in CL. However, different from the orthogonal projection in the traditional CNN architecture, the prompt gradient orthogonal projection in the ViT architecture poses completely different and greater challenges, i.e., 1) the high-order and non-linear self-attention operation; 2) the drift of prompt distribution brought by the LayerNorm in the transformer block. Theoretically, we deduce two consistency conditions to achieve the prompt gradient orthogonal projection, which provide a theoretical guarantee of eliminating interference on previously learned knowledge via the self-attention mechanism in visual prompt tuning. In practice, an effective null-space-based approximation solution has been proposed to implement the prompt gradient orthogonal projection. Extensive experimental results demonstrate the effectiveness of anti-forgetting on four class-incremental benchmarks with diverse pre-trained baseline models, and our approach achieves superior performances to state-of-the-art methods. Our code is available at https://github.com/zugexiaodui/VPTinNSforCL", "pdf": "https://openreview.net/pdf/bdbd2120af155b8c2cb5b5408d55ce8714e2d9eb.pdf"} {"title": "LG-VQ: Language-Guided Codebook Learning", "url": "https://openreview.net/forum?id=vA4s3kN4QE", "detail_url": "https://openreview.net/forum?id=vA4s3kN4QE", "authors": "Liang Guotao,Baoquan Zhang,Yaowei Wang,Yunming Ye,Xutao Li,Wanghuaibin,Luo Chuyao,kolaye,luolinfeng", "tags": "NIPS 2024,Poster", "abstract": "Vector quantization (VQ) is a key technique in high-resolution and high-fidelity image synthesis, which aims to learn a codebook to encode an image with a sequence of discrete codes and then generate an image in an auto-regression manner. 
\n Although existing methods have shown superior performance, most methods prefer to learn a single-modal codebook (\\emph{e.g.}, image), resulting in suboptimal performance when the codebook is applied to multi-modal downstream tasks (\\emph{e.g.}, text-to-image, image captioning) due to the existence of modal gaps.\n In this paper, we propose a novel language-guided codebook learning framework, called LG-VQ, which aims to learn a codebook that can be aligned with the text to improve the performance of multi-modal downstream tasks. Specifically, we first introduce pre-trained text semantics as prior knowledge, then design two novel alignment modules (\\emph{i.e.}, Semantic Alignment Module, and Relationship Alignment Module) to transfer such prior knowledge into codes for achieving codebook text alignment. \n In particular, our LG-VQ method is model-agnostic, which can be easily integrated into existing VQ models. Experimental results show that our method achieves superior performance on reconstruction and various multi-modal downstream tasks.", "pdf": "https://openreview.net/pdf/96a358e0e07a0284d43a8bd709729b6145adcd2a.pdf"} {"title": "Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning", "url": "https://openreview.net/forum?id=nBjmMF2IZU", "detail_url": "https://openreview.net/forum?id=nBjmMF2IZU", "authors": "Yuexiang Zhai,Hao Bai,Zipeng Lin,Jiayi Pan,Shengbang Tong,Yifei Zhou,Alane Suhr,Saining Xie,Yann LeCun,Yi Ma,Sergey Levine", "tags": "NIPS 2024,Poster", "abstract": "Large vision-language models (VLMs) fine-tuned on specialized visual instruction-following data have exhibited impressive language reasoning capabilities across various scenarios. However, this fine-tuning paradigm may not be able to efficiently learn optimal decision-making agents in multi-step goal-directed tasks from interactive environments. To address this challenge, we propose an algorithmic framework that fine-tunes VLMs with reinforcement learning (RL). Specifically, our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning, enabling the VLM to efficiently explore intermediate reasoning steps that lead to the final text-based action. Next, the open-ended text output is parsed into an executable action to interact with the environment to obtain goal-directed task rewards. Finally, our framework uses these task rewards to fine-tune the entire VLM with RL. Empirically, we demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks, enabling 7b models to outperform commercial models such as GPT4-V or Gemini. Furthermore, we find that CoT reasoning is a crucial component for performance improvement, as removing the CoT reasoning results in a significant decrease in the overall performance of our method.", "pdf": "https://openreview.net/pdf/84c596658e84793b911234c08d43b5038aa98129.pdf"} {"title": "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment", "url": "https://openreview.net/forum?id=NsxthTVpqA", "detail_url": "https://openreview.net/forum?id=NsxthTVpqA", "authors": "Xin Xiao,Bohong Wu,Jiacong Wang,Chunyuan Li,zhou Xun,Haoyuan Guo", "tags": "NIPS 2024,Poster", "abstract": "Existing image-text modality alignment in Vision Language Models (VLMs) treats each text token equally in an autoregressive manner. 
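The LG-VQ abstract above builds on the standard VQ encode step: map each continuous feature to its nearest codebook entry and pass gradients straight through. A minimal sketch of that base step follows; LG-VQ's semantic and relationship alignment modules are not sketched here.

```python
import torch

def vector_quantize(z, codebook):
    """Nearest-code lookup with a straight-through gradient estimator.

    z: (N, D) continuous features; codebook: (K, D) learnable code vectors.
    """
    # Pairwise squared distances between features and codes.
    d = (z.pow(2).sum(1, keepdim=True)
         - 2 * z @ codebook.t()
         + codebook.pow(2).sum(1))
    indices = d.argmin(dim=1)                  # discrete code ids
    z_q = codebook[indices]
    # Straight-through: forward uses z_q, backward passes gradients to z.
    z_q = z + (z_q - z).detach()
    return z_q, indices
```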
Despite being simple and effective, this method results in sub-optimal cross-modal alignment by over-emphasizing the text tokens that are less correlated with, or even contradictory to, the input images. In this paper, we advocate for distinct contributions for each text token based on its visual correlation. Specifically, we show that by contrasting image inputs, the difference in prediction logits on each text token provides strong guidance on visual correlation. We therefore introduce Contrastive Alignment (CAL), a simple yet effective re-weighting strategy that prioritizes training visually correlated tokens. Our experimental results demonstrate that CAL consistently improves different types of VLMs across different resolutions and model sizes on various benchmark datasets. Importantly, our method incurs minimal additional computational overhead, rendering it highly efficient compared to alternative data scaling strategies.", "pdf": "https://openreview.net/pdf/17bb2dfb15592564079e52c0c9bb07a6ef695e7e.pdf"} {"title": "Coupled Mamba: Enhanced Multimodal Fusion with Coupled State Space Model", "url": "https://openreview.net/forum?id=UXEo3uNNIX", "detail_url": "https://openreview.net/forum?id=UXEo3uNNIX", "authors": "Wenbing Li,Hang Zhou,Junqing Yu,Zikai Song,Wei Yang", "tags": "NIPS 2024,Poster", "abstract": "The essence of multi-modal fusion lies in exploiting the complementary information inherent in diverse modalities. However, most prevalent fusion methods rely on traditional neural architectures and are inadequately equipped to capture the dynamics of interactions across modalities, particularly in the presence of complex intra- and inter-modality correlations. Recent advancements in State Space Models (SSMs), notably exemplified by the Mamba model, have emerged as promising contenders. In particular, its state-evolution process implies a stronger modality-fusion paradigm, making multi-modal fusion on SSMs an appealing direction. However, fusing multiple modalities is challenging for SSMs due to its hardware-aware parallelism designs. To this end, this paper proposes the Coupled SSM model for coupling state chains of multiple modalities while maintaining independence of intra-modality state processes. Specifically, in our coupled scheme, we devise an inter-modal hidden states transition scheme, in which the current state is dependent on the states of its own chain and those of the neighbouring chains at the previous time-step. To fully comply with the hardware-aware parallelism, we obtain the global convolution kernel by deriving the state equation while introducing the historical state. Extensive experiments on CMU-MOSEI, CH-SIMS, CH-SIMSV2 through multi-domain input verify the effectiveness of our model compared to current state-of-the-art methods, improving F1-Score by 0.4%, 0.9%, and 2.3% on the three datasets respectively, with 49% faster inference and 83.7% GPU memory savings. 
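A sketch of the re-weighting idea in the CAL abstract above: run the language model with and without the image, read off the per-token shift in the label's log-probability, and up-weight tokens whose prediction the image actually helps. The softmax normalization of the weights is an assumed choice, not the paper's stated one.

```python
import torch
import torch.nn.functional as F

def cal_token_weights(logits_with_image, logits_without_image, labels):
    """Per-token weights from the logit shift induced by the image input.

    logits_*: (batch, seq_len, vocab); labels: (batch, seq_len).
    """
    lp_img = F.log_softmax(logits_with_image, dim=-1)
    lp_txt = F.log_softmax(logits_without_image, dim=-1)
    gather = lambda lp: lp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    delta = gather(lp_img) - gather(lp_txt)    # visual-correlation signal
    return F.softmax(delta, dim=-1)            # weights over the sequence

def cal_loss(logits_with_image, logits_without_image, labels):
    weights = cal_token_weights(logits_with_image, logits_without_image, labels)
    ce = F.cross_entropy(logits_with_image.transpose(1, 2), labels,
                         reduction="none")     # (batch, seq_len)
    return (weights.detach() * ce).sum(dim=-1).mean()
```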
The results demonstrate that the Coupled Mamba model is capable of enhanced multi-modal fusion.", "pdf": "https://openreview.net/pdf/c0967468999c3d38c161a8dd93682b7d1707ad5e.pdf"} {"title": "Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging", "url": "https://openreview.net/forum?id=zxSWIdyW3A", "detail_url": "https://openreview.net/forum?id=zxSWIdyW3A", "authors": "Jiamian Wang,Zongliang Wu,Yulun Zhang,Xin Yuan,Tao Lin,ZHIQIANG TAO", "tags": "NIPS 2024,Poster", "abstract": "Existing reconstruction models in snapshot compressive imaging systems (SCI) are trained with a single well-calibrated hardware instance, making their performance vulnerable to hardware shifts and limited in adapting to multiple hardware configurations. To facilitate cross-hardware learning, previous efforts attempt to directly collect multi-hardware data and perform centralized training, which is impractical due to severe user data privacy concerns and hardware heterogeneity across different platforms/institutions. In this study, we explicitly consider data privacy and heterogeneity in cooperatively optimizing SCI systems by proposing a Federated Hardware-Prompt learning (FedHP) framework. Rather than mitigating the client drift by rectifying the gradients, which only takes effect on the learning manifold but fails to solve the heterogeneity rooted in the input data space, FedHP learns a hardware-conditioned prompter to align inconsistent data distribution across clients, serving as an indicator of the data inconsistency among different hardware (e.g., coded apertures). Extensive experimental results demonstrate that the proposed FedHP adapts the pre-trained model to multiple hardware configurations, outperforming prevalent FL frameworks by 0.35dB under challenging heterogeneous settings. Moreover, a Snapshot Spectral Heterogeneous Dataset has been built upon multiple practical SCI systems. Data and code are available at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging.git", "pdf": "https://openreview.net/pdf/9784818faf1b61e993e8c55556f64ad6c612ecad.pdf"} {"title": "End-to-end Learnable Clustering for Intent Learning in Recommendation", "url": "https://openreview.net/forum?id=As91fJvY9E", "detail_url": "https://openreview.net/forum?id=As91fJvY9E", "authors": "Yue Liu,Shihao Zhu,Jun Xia,YINGWEI MA,Jian Ma,Xinwang Liu,Shengju Yu,Kejun Zhang,Wenliang Zhong", "tags": "NIPS 2024,Poster", "abstract": "Intent learning, which aims to learn users' intents for user understanding and item recommendation, has become a hot research topic in recent years. However, existing methods suffer from complex and cumbersome alternating optimization, limiting performance and scalability. To this end, we propose a novel intent learning method termed \underline{ELCRec}, by unifying behavior representation learning into an \underline{E}nd-to-end \underline{L}earnable \underline{C}lustering framework, for effective and efficient \underline{Rec}ommendation. Concretely, we encode user behavior sequences and initialize the cluster centers (latent intents) as learnable neurons. Then, we design a novel learnable clustering module to separate different cluster centers, thus decoupling users' complex intents. Meanwhile, it guides the network to learn intents from behaviors by forcing behavior embeddings close to cluster centers. This allows simultaneous optimization of recommendation and clustering via mini-batch data. 
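One illustrative reading of the inter-modal transition in the Coupled Mamba abstract above, written as an explicit recurrence: each modality's state depends on its own chain and on the neighbouring chains at the previous step. The dense per-modality matrices and this exact coupling pattern are assumptions, and the paper's hardware-aware global-convolution derivation is not reproduced.

```python
import torch

def coupled_state_step(h_prev, x_t, A, C, B):
    """One recurrence step of a coupled SSM across M modalities.

    h_prev: (M, D) states; x_t: (M, E) inputs; A: (M, D, D) self-transitions;
    C: (M, M, D, D) cross-modal couplings; B: (M, D, E) input maps.
    """
    M, _ = h_prev.shape
    h_new = torch.zeros_like(h_prev)
    for m in range(M):
        h = A[m] @ h_prev[m] + B[m] @ x_t[m]   # own chain + current input
        for n in range(M):
            if n != m:                          # neighbouring chains' states
                h = h + C[m, n] @ h_prev[n]
        h_new[m] = h
    return h_new
```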
Moreover, we propose intent-assisted contrastive learning by using cluster centers as self-supervision signals, further enhancing the mutual promotion of clustering and recommendation. Both experimental results and theoretical analyses demonstrate the superiority of ELCRec from six perspectives. Compared to the runner-up, ELCRec improves NDCG@5 by 8.9\% and reduces computational costs by 22.5\% on the Beauty dataset. Furthermore, owing to its scalability and universal applicability, we deploy this method on an industrial recommendation system with 130 million page views and achieve promising results. The codes are available on GitHub\footnote{https://github.com/yueliu1999/ELCRec}. A collection (papers, codes, datasets) of deep group recommendation/intent learning methods is available on GitHub\footnote{https://github.com/yueliu1999/Awesome-Deep-Group-Recommendation}.", "pdf": "https://openreview.net/pdf/7e5f450b655c29d94c691badac39d70e4c288032.pdf"} {"title": "MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views", "url": "https://openreview.net/forum?id=B0OWOkMwhz", "detail_url": "https://openreview.net/forum?id=B0OWOkMwhz", "authors": "Yuedong Chen,Chuanxia Zheng,Haofei Xu,Bohan Zhuang,Andrea Vedaldi,Tat-Jen Cham,Jianfei Cai", "tags": "NIPS 2024,Poster", "abstract": "We introduce MVSplat360, a feed-forward approach for 360\u00b0 novel view synthesis (NVS) of diverse real-world scenes, using only sparse observations. This setting is inherently ill-posed due to minimal overlap among input views and the insufficient visual information provided, making it challenging for conventional methods to achieve high-quality results. Our MVSplat360 addresses this by effectively combining geometry-aware 3D reconstruction with temporally consistent video generation. Specifically, it refactors a feed-forward 3D Gaussian Splatting (3DGS) model to render features directly into the latent space of a pre-trained Stable Video Diffusion (SVD) model, where these features then act as pose and visual cues to guide the denoising process and produce photorealistic 3D-consistent views. Our model is end-to-end trainable and supports rendering arbitrary views with as few as 5 sparse input views. To evaluate MVSplat360's performance, we introduce a new benchmark using the challenging DL3DV-10K dataset, where MVSplat360 achieves superior visual quality compared to state-of-the-art methods on wide-sweeping or even 360\u00b0 NVS tasks. Experiments on the existing benchmark RealEstate10K also confirm the effectiveness of our model. Readers are encouraged to view the video results at [donydchen.github.io/mvsplat360](https://donydchen.github.io/mvsplat360).", "pdf": "https://openreview.net/pdf/8eab4c80c2d13fe008de8ba51c5e01546d382d2f.pdf"} {"title": "Rethinking No-reference Image Exposure Assessment from Holism to Pixel: Models, Datasets and Benchmarks", "url": "https://openreview.net/forum?id=zVrQeoPIoQ", "detail_url": "https://openreview.net/forum?id=zVrQeoPIoQ", "authors": "Shuai He,Shuntian Zheng,Anlong Ming,Banyu Wu,Huadong Ma", "tags": "NIPS 2024,Poster", "abstract": "The past decade has witnessed an increasing demand for enhancing image quality through exposure, and as a crucial prerequisite in this endeavor, Image Exposure Assessment (IEA) is now being accorded serious attention. 
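A minimal sketch of the end-to-end learnable clustering idea in the ELCRec entry above, assuming trainable centers with a pull term (embeddings toward the nearest intent center) and a push term (separating centers); the names and loss forms here are ours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableClustering(nn.Module):
    """Sketch of end-to-end learnable intent clustering (assumed form):
    cluster centers are trainable parameters; one term pulls behavior
    embeddings toward their nearest center, another pushes centers apart."""
    def __init__(self, num_intents: int, dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_intents, dim))

    def forward(self, behavior_emb: torch.Tensor) -> torch.Tensor:
        z = F.normalize(behavior_emb, dim=-1)
        c = F.normalize(self.centers, dim=-1)
        # Pull each behavior embedding toward its closest intent center.
        sim = z @ c.t()                               # (batch, num_intents)
        pull = (1.0 - sim.max(dim=-1).values).mean()
        # Push distinct centers apart to decouple intents.
        cc = c @ c.t()
        push = (cc - torch.eye(len(c), device=cc.device)).clamp(min=0).mean()
        return pull + push
```

Because both terms are differentiable, this module can be optimized jointly with the recommendation loss on mini-batches, which is what removes the alternating optimization the entry criticizes.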
However, IEA encounters two persistent challenges that have remained unresolved over the long term: first, the accuracy and generalizability of no-reference IEA are inadequate for practical applications; second, the scope of IEA is confined to qualitative and quantitative analysis of the entire image or subimage, such as providing only a score to evaluate the exposure level, thereby lacking intuitive and precise fine-grained evaluation for complex exposure conditions. The objective of this paper is to address these persistent bottleneck challenges from three perspectives: model, dataset, and benchmark. 1) Model-level: we propose a Pixel-level IEA Network (P-IEANet) that utilizes the Haar discrete wavelet transform (DWT) to analyze, decompose, and assess exposure from both lightness and structural perspectives, capable of generating pixel-level assessment results under no-reference scenarios. 2) Dataset-level: we carefully build an exposure-oriented dataset, IEA40K, containing 40K images, covering 17 typical lighting scenarios, 27 devices, and 50+ scenes, with each image densely annotated by more than 10 experts with pixel-level labels. 3) Benchmark-level: we develop a comprehensive benchmark of 19 methods based on IEA40K. Our P-IEANet not only achieves state-of-the-art (SOTA) performance on all metrics but also seamlessly integrates with existing exposure correction and lighting enhancement methods. To our knowledge, this is the first work that explicitly emphasizes assessing complex image exposure problems at a pixel level, providing a significant boost to the IEA and exposure-related community. The code and dataset are available \href{https://github.com/mRobotit/Pixel-level-No-reference-Image-Exposure-Assessment}{\textcolor{red}{here}}.", "pdf": "https://openreview.net/pdf/9d159e1b2a972b461ac4b69cc1ad301734642641.pdf"} {"title": "An Adaptive Approach for Infinitely Many-armed Bandits under Generalized Rotting Constraints", "url": "https://openreview.net/forum?id=1cXdndzkxU", "detail_url": "https://openreview.net/forum?id=1cXdndzkxU", "authors": "Jung-hun Kim,Milan Vojnovic,Se-Young Yun", "tags": "NIPS 2024,Poster", "abstract": "In this study, we consider infinitely many-armed bandit problems in a rested rotting setting, where the mean reward of an arm may decrease with each pull and otherwise remains unchanged. We explore two scenarios regarding the rotting of rewards: one in which the cumulative amount of rotting is bounded by $V_T$, referred to as the slow-rotting case, and the other in which the cumulative number of rotting instances is bounded by $S_T$, referred to as the abrupt-rotting case. To address the challenge posed by rotting rewards, we introduce an algorithm that utilizes UCB with an adaptive sliding window, designed to manage the bias-variance trade-off arising from rotting rewards. Our proposed algorithm achieves tight regret bounds for both slow and abrupt rotting scenarios. 
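The lightness/structure split via Haar DWT in the P-IEANet entry above can be prototyped with PyWavelets; the approximation-vs-detail decomposition is the standard single-level DWT, while treating the bands as lightness and structure proxies is our reading of the entry:

```python
import numpy as np
import pywt  # PyWavelets

def haar_decompose(gray: np.ndarray):
    """Single-level Haar DWT: the approximation band carries low-frequency
    lightness content, the detail bands carry structure (a rough split in
    the spirit of the lightness/structure analysis described above)."""
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float64), "haar")
    lightness = cA / 2.0                          # orthonormal Haar: cA/2 = local mean
    structure = np.sqrt(cH**2 + cV**2 + cD**2)    # local detail magnitude
    return lightness, structure

img = np.random.rand(64, 64)                      # stand-in for a grayscale photo
L, S = haar_decompose(img)
print(L.shape, S.shape)                           # (32, 32) (32, 32)
```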
Lastly, we demonstrate the performance of our algorithm using numerical experiments.", "pdf": "https://openreview.net/pdf/bf227815f265f9c36930b05a6a80bbdce1c8cca5.pdf"} {"title": "Off-Policy Selection for Initiating Human-Centric Experimental Design", "url": "https://openreview.net/forum?id=swp3lPDmZe", "detail_url": "https://openreview.net/forum?id=swp3lPDmZe", "authors": "Ge Gao,Xi Yang,Qitong Gao,Song Ju,Miroslav Pajic,Min Chi", "tags": "NIPS 2024,Poster", "abstract": "In human-centric applications like healthcare and education, the \textit{heterogeneity} among patients and students necessitates personalized treatments and instructional interventions. While reinforcement learning (RL) has been utilized in those tasks, off-policy selection (OPS) is pivotal to close the loop by evaluating and selecting policies offline without online interactions, yet current OPS methods often overlook the heterogeneity among participants. Our work is centered on resolving a \textit{pivotal challenge} in human-centric systems (HCSs): \textbf{\textit{how to select a policy to deploy when a new participant joins the cohort, without access to any prior offline data collected from that participant?}} We introduce First-Glance Off-Policy Selection (FPS), a novel approach that systematically addresses participant heterogeneity through sub-group segmentation and OPS criteria tailored to each sub-group. By grouping individuals with similar traits, FPS facilitates personalized policy selection aligned with the unique characteristics of each participant or group of participants. FPS is evaluated on two important but challenging applications: intelligent tutoring systems and a healthcare application for sepsis treatment and intervention. FPS delivers significant advances in enhancing students' learning outcomes and in-hospital care outcomes.", "pdf": "https://openreview.net/pdf/8d345a8973e3f20dedb3ffedeb4680bd783a2b3e.pdf"} {"title": "$\textit{Read-ME}$: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design", "url": "https://openreview.net/forum?id=i8JaxY7tDI", "detail_url": "https://openreview.net/forum?id=i8JaxY7tDI", "authors": "Ruisi Cai,Yeonju Ro,Geon-Woo Kim,Peihao Wang,Babak Ehteshami Bejnordi,Aditya Akella,Zhangyang Wang", "tags": "NIPS 2024,Poster", "abstract": "The proliferation of large language models (LLMs) has led to the adoption of Mixture-of-Experts (MoE) architectures that dynamically leverage specialized subnetworks for improved efficiency and performance. Despite their benefits, MoE models face significant challenges during inference, including inefficient memory management and suboptimal batching, due to misaligned design choices between the model architecture and the system policies. Furthermore, the conventional approach of training MoEs from scratch is increasingly prohibitive in terms of cost. \nIn this paper, we propose a novel framework $\textit{Read-ME}$ that transforms pre-trained dense LLMs into smaller MoE models (in contrast to ``upcycling" generalist MoEs), avoiding the high costs of ground-up training. Our approach employs activation sparsity to extract experts. 
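For the rotting-bandit entry above, a bare-bones sliding-window UCB may help fix ideas; the window is fixed here for simplicity, whereas the paper's algorithm adapts it to the rotting budget, and the constants are illustrative:

```python
import math
from collections import deque

class SlidingWindowUCB:
    """Sketch of UCB over a sliding window of recent rewards, so estimates
    forget stale (pre-rotting) observations at the cost of higher variance."""
    def __init__(self, n_arms: int, window: int, sigma: float = 1.0):
        self.history = [deque(maxlen=window) for _ in range(n_arms)]
        self.sigma, self.t = sigma, 0

    def select(self) -> int:
        self.t += 1
        best, best_idx = -float("inf"), 0
        for i, h in enumerate(self.history):
            if not h:
                return i  # play each arm once first
            mean = sum(h) / len(h)
            bonus = self.sigma * math.sqrt(2 * math.log(self.t) / len(h))
            if mean + bonus > best:
                best, best_idx = mean + bonus, i
        return best_idx

    def update(self, arm: int, reward: float) -> None:
        self.history[arm].append(reward)
```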
\nTo compose experts, we examine the widely adopted layer-wise router design, show its redundancy, and thus introduce a pre-gating router decoupled from the MoE backbone that facilitates system-friendly pre-computing and lookahead scheduling, enhancing expert-aware batching and caching.\nOur co-design therefore addresses critical gaps on both the algorithmic and system fronts, establishing a scalable and efficient alternative for LLM inference in resource-constrained settings.\n$\textit{Read-ME}$ outperforms other popular open-source dense models of similar scales, achieving improvements of up to 10.1\% on MMLU and reducing mean end-to-end latency by up to 6.1\%. \nCodes are available at: \url{https://github.com/VITA-Group/READ-ME}.", "pdf": "https://openreview.net/pdf/45fc4a2f5ffb5fea7118544b185a36593c608ece.pdf"} {"title": "SimGen: Simulator-conditioned Driving Scene Generation", "url": "https://openreview.net/forum?id=JCyBN5syv3", "detail_url": "https://openreview.net/forum?id=JCyBN5syv3", "authors": "Yunsong Zhou,Michael Simon,Zhenghao Peng,Sicheng Mo,Hongzi Zhu,Minyi Guo,Bolei Zhou", "tags": "NIPS 2024,Poster", "abstract": "Controllable synthetic data generation can substantially lower the annotation cost of training data. Prior works use diffusion models to generate driving images conditioned on the 3D object layout. However, those models are trained on small-scale datasets like nuScenes, which lack appearance and layout diversity. Moreover, overfitting often occurs, where the trained models can only generate images based on layout data from the validation set of the same dataset. In this work, we introduce a simulator-conditioned scene generation framework called SimGen that can learn to generate diverse driving scenes by mixing data from the simulator and the real world. It uses a novel cascade diffusion pipeline to address challenging sim-to-real gaps and multi-condition conflicts. A driving video dataset, DIVA, is collected to enhance the generative diversity of SimGen; it contains over 147.5 hours of real-world driving videos from 73 locations worldwide and simulated driving data from the MetaDrive simulator. SimGen achieves superior generation quality and diversity while preserving controllability based on the text prompt and the layout pulled from a simulator. We further demonstrate the improvements brought by SimGen for synthetic data augmentation on the BEV detection and segmentation task and showcase its capability in safety-critical data generation.", "pdf": "https://openreview.net/pdf/767991e0d8bf535e0c7ec95c3f3222ed05c30813.pdf"} {"title": "The Fine-Grained Complexity of Gradient Computation for Training Large Language Models", "url": "https://openreview.net/forum?id=up4tWnwRol", "detail_url": "https://openreview.net/forum?id=up4tWnwRol", "authors": "Josh Alman,Zhao Song", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) have made fundamental contributions over the last few years. To train an LLM, one needs to alternately run `forward' and `backward' computations. The forward computation can be viewed as attention function evaluation, and the backward computation can be viewed as a gradient computation. In previous work by [Alman and Song, NeurIPS 2023], it was proved that the forward step can be performed in almost-linear time in certain parameter regimes, but that there is no truly sub-quadratic time algorithm in the remaining parameter regimes unless the popular hypothesis $\mathsf{SETH}$ is false. 
In this work, we show nearly identical results for the harder-seeming problem of computing the gradient of the loss function of a one-layer attention network, and thus for the entire process of LLM training. This completely characterizes the fine-grained complexity of every step of LLM training.", "pdf": "https://openreview.net/pdf/491fb9060e09eb1a90b9cd2cb253e594bf0f7c83.pdf"} {"title": "Learning to be Smooth: An End-to-End Differentiable Particle Smoother", "url": "https://openreview.net/forum?id=WdMhbqCoqW", "detail_url": "https://openreview.net/forum?id=WdMhbqCoqW", "authors": "Ali Younis,Erik B. Sudderth", "tags": "NIPS 2024,Poster", "abstract": "For challenging state estimation problems arising in domains like vision and robotics, particle-based representations attractively enable temporal reasoning about multiple posterior modes. Particle smoothers offer the potential for more accurate offline data analysis by propagating information both forward and backward in time, but have classically required human-engineered dynamics and observation models. Extending recent advances in discriminative training of particle filters, we develop a framework for low-variance propagation of gradients across long time sequences when training particle smoothers. Our "two-filter" smoother integrates particle streams that are propagated forward and backward in time, while incorporating stratification and importance weights in the resampling step to provide low-variance gradient estimates for neural network dynamics and observation models. The resulting mixture density particle smoother is substantially more accurate than state-of-the-art particle filters, as well as search-based baselines, for city-scale global vehicle localization from real-world videos and maps.", "pdf": "https://openreview.net/pdf/436d21646700dafd6966a41a37cb937e415cb353.pdf"} {"title": "Intervention and Conditioning in Causal Bayesian Networks", "url": "https://openreview.net/forum?id=DC28Fpk76s", "detail_url": "https://openreview.net/forum?id=DC28Fpk76s", "authors": "sainyam galhotra,Joseph Halpern", "tags": "NIPS 2024,Poster", "abstract": "Causal models are crucial for understanding complex systems and\nidentifying causal relationships among variables. Even though causal\nmodels are extremely popular, computing conditional probabilities of\nformulas involving interventions poses significant challenges.\nIn the case of Causal Bayesian Networks (CBNs), Pearl assumes autonomy \nof the mechanisms that determine interventions to calculate a range of\nprobabilities. We show that by making simple yet\noften realistic independence assumptions, it is possible \nto uniquely estimate the probability of an interventional formula (including\nthe well-studied notions of probability of sufficiency and necessity). 
\nWe discuss when these assumptions are appropriate.\nImportantly, in many cases of interest, these probability estimates can be evaluated using\nobservational data, which carries immense significance in scenarios\nwhere conducting experiments is impractical or infeasible.", "pdf": "https://openreview.net/pdf/6828363d1f91d3c1cdbdb241f79007e3ad290513.pdf"} {"title": "$\textit{Trans-LoRA}$: towards data-free Transferable Parameter Efficient Finetuning", "url": "https://openreview.net/forum?id=c3Pakdyi3t", "detail_url": "https://openreview.net/forum?id=c3Pakdyi3t", "authors": "Runqian Wang,Soumya Ghosh,David Daniel Cox,Diego Antognini,Aude Oliva,Rogerio Feris,Leonid Karlinsky", "tags": "NIPS 2024,Poster", "abstract": "Low-rank adapters (LoRA) and their variants are popular parameter-efficient fine-tuning (PEFT) techniques that closely match full model fine-tune performance while requiring only a small number of additional parameters. These additional LoRA parameters are specific to the base model being adapted. When the base model needs to be deprecated and replaced with a new one, all the associated LoRA modules need to be re-trained. Such re-training requires access to the data used to train the LoRA for the original base model. This is especially problematic for commercial cloud applications where the LoRA modules and the base models are hosted by service providers who may not be allowed to host proprietary client task data. To address this challenge, we propose $\textit{Trans-LoRA}$ --- a novel method for lossless, nearly data-free transfer of LoRAs across base models. Our approach relies on synthetic data to transfer LoRA modules. Using large language models, we design a synthetic data generator to approximate the data-generating process of the $\textit{observed}$ task data subset. Training on the resulting synthetic dataset transfers LoRA modules to new models. We show the effectiveness of our approach using both LLama and Gemma model families. Our approach achieves lossless (mostly improved) LoRA transfer between models within and across different base model families, and even between different PEFT methods, on a wide variety of tasks.", "pdf": "https://openreview.net/pdf/33225dd1a42b912e79c3bc10fd2ec971206701e5.pdf"} {"title": "Referencing Where to Focus: Improving Visual Grounding with Referential Query", "url": "https://openreview.net/forum?id=oPvBnPTbQv", "detail_url": "https://openreview.net/forum?id=oPvBnPTbQv", "authors": "Yabing Wang,Zhuotao Tian,Qingpei Guo,Zheng Qin,Sanping Zhou,Ming Yang,Le Wang", "tags": "NIPS 2024,Poster", "abstract": "Visual Grounding aims to localize the referring object in an image given a natural language expression. Recent advancements in DETR-based visual grounding methods have attracted considerable attention, as they directly predict the coordinates of the target object without relying on additional efforts, such as pre-generated proposal candidates or pre-defined anchor boxes. However, existing research primarily focuses on designing stronger multi-modal decoders, which typically generate learnable queries by random initialization or by using linguistic embeddings. This vanilla query generation approach inevitably increases the learning difficulty for the model, as it does not involve any target-related information at the beginning of decoding. Furthermore, they only use the deepest image feature during the query learning process, overlooking the importance of features from other levels. 
To address these issues, we propose a novel approach called RefFormer. It consists of a query adaptation module, which can be seamlessly integrated into CLIP to generate a referential query that provides prior context for the decoder, along with a task-specific decoder. By incorporating the referential query into the decoder, we can effectively mitigate the learning difficulty of the decoder and accurately concentrate on the target object. Additionally, our proposed query adaptation module can also act as an adapter, preserving the rich knowledge within CLIP without the need to tune the parameters of the backbone network. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method, outperforming state-of-the-art approaches on five visual grounding benchmarks.", "pdf": "https://openreview.net/pdf/4ee331051d6635715c830b6da249a25520489f44.pdf"} {"title": "Bridging OOD Detection and Generalization: A Graph-Theoretic View", "url": "https://openreview.net/forum?id=qzwAG8qxI1", "detail_url": "https://openreview.net/forum?id=qzwAG8qxI1", "authors": "Han Wang,Yixuan Li", "tags": "NIPS 2024,Poster", "abstract": "In the context of modern machine learning, models deployed in real-world scenarios often encounter diverse data shifts like covariate and semantic shifts, leading to challenges in both out-of-distribution (OOD) generalization and detection. Despite considerable attention to these issues separately, a unified framework for theoretical understanding and practical usage is lacking. To bridge the gap, we introduce a graph-theoretic framework to jointly tackle both OOD generalization and detection problems. By leveraging the graph formulation, data representations are obtained through the factorization of the graph's adjacency matrix, enabling us to derive provable error bounds quantifying OOD generalization and detection performance. Empirical results showcase competitive performance in comparison to existing methods, thereby validating our theoretical underpinnings.", "pdf": "https://openreview.net/pdf/c9c294f6dd6249359b43866f023196ca44bf1f4b.pdf"} {"title": "Group Robust Preference Optimization in Reward-free RLHF", "url": "https://openreview.net/forum?id=PRAsjrmXXK", "detail_url": "https://openreview.net/forum?id=PRAsjrmXXK", "authors": "Shyam Sundhar Ramesh,Yifan Hu,Iason Chaimalas,Viraj Mehta,Pier Giuseppe Sessa,Haitham Bou Ammar,Ilija Bogunovic", "tags": "NIPS 2024,Poster", "abstract": "Adapting large language models (LLMs) for specific tasks usually involves fine-tuning through reinforcement learning with human feedback (RLHF) on preference data. While these data often come from diverse groups of labelers (e.g., different demographics, ethnicities, company teams, etc.), traditional RLHF pipelines adopt a "one-size-fits-all" approach, i.e., they indiscriminately assume and optimize a single preference model, thus not being robust to the unique characteristics and needs of the various groups. To address this limitation, we propose a novel Group Robust Preference Optimization (GRPO) method to align LLMs to individual groups' preferences robustly. Our approach builds upon reward-free direct preference optimization methods, but unlike previous approaches, it seeks a robust policy which maximizes the worst-case group performance. To achieve this, GRPO adaptively and sequentially weights the importance of different groups, prioritizing groups with worse cumulative loss. We theoretically study the feasibility of GRPO and analyze its convergence for the log-linear policy class. 
By fine-tuning LLMs with GRPO using diverse group-based global opinion data, we significantly improved performance for the worst-performing groups, reduced loss imbalances across groups, and improved probability accuracies compared to non-robust baselines.", "pdf": "https://openreview.net/pdf/ac7003d56c2ecb8160aabdba77bec397e41c0164.pdf"} {"title": "Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning", "url": "https://openreview.net/forum?id=L3RYBqzRmF", "detail_url": "https://openreview.net/forum?id=L3RYBqzRmF", "authors": "Chia Hsiang Kao,Bharath Hariharan", "tags": "NIPS 2024,Poster", "abstract": "Despite its widespread use in neural networks, error backpropagation has faced criticism for its lack of biological plausibility, suffering from issues such as the backward locking problem and the weight transport problem. \nThese limitations have motivated researchers to explore more biologically plausible learning algorithms that could potentially shed light on how biological neural systems adapt and learn. \nInspired by the counter-current exchange mechanisms observed in biological systems, we propose counter-current learning (CCL), a biologically plausible framework for credit assignment in deep learning. \nThis framework employs a feedforward network to process input data and a feedback network to process targets, with each network enhancing the other through anti-parallel signal propagation. \nBy leveraging the more informative signals from the bottom layer of the feedback network to guide the updates of the top layer of the feedforward network and vice versa, CCL enables the simultaneous transformation of source inputs to target outputs and the dynamic mutual influence of these transformations.\nExperimental results on MNIST, FashionMNIST, CIFAR10, CIFAR100, and STL-10 datasets using multi-layer perceptrons and convolutional neural networks demonstrate that CCL achieves comparable performance to other biologically plausible algorithms while offering a more biologically realistic learning mechanism. \nFurthermore, we showcase the applicability of our approach to an autoencoder task, underscoring its potential for unsupervised representation learning.\nOur work presents a promising direction for biologically inspired and plausible learning algorithms, offering insights into the mechanisms of learning and adaptation in neural networks.", "pdf": "https://openreview.net/pdf/e71d0043528684976cad88931141229e9b62c783.pdf"} {"title": "Policy Mirror Descent with Lookahead", "url": "https://openreview.net/forum?id=om2Aa0gUha", "detail_url": "https://openreview.net/forum?id=om2Aa0gUha", "authors": "Kimon Protopapas,Anas Barakat", "tags": "NIPS 2024,Poster", "abstract": "Policy Mirror Descent (PMD) stands as a versatile algorithmic framework encompassing several seminal policy gradient algorithms such as natural policy gradient, with connections with state-of-the-art reinforcement learning (RL) algorithms such as TRPO and PPO. PMD can be seen as a soft Policy Iteration algorithm implementing regularized 1-step greedy policy improvement. However, 1-step greedy policies might not be the best choice, and recent remarkable empirical successes in RL, such as AlphaGo and AlphaZero, have demonstrated that greedy approaches with respect to multiple steps outperform their 1-step counterparts. In this work, we propose a new class of PMD algorithms called $h$-PMD which incorporates multi-step greedy policy improvement with lookahead depth $h$ into the PMD update rule. 
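A toy rendition of the adaptive group weighting described in the GRPO entry above; this is our simplification (a multiplicative-weights/mirror-ascent step over cumulative group losses), while the actual method operates on group-wise reward-free preference objectives:

```python
import torch

def grpo_step(group_losses: torch.Tensor, cum_losses: torch.Tensor,
              eta: float = 0.1):
    """Sketch of adaptive group weighting in the spirit of GRPO: groups with
    larger cumulative loss get larger weight, so the policy update
    prioritizes the worst-case group."""
    cum_losses = cum_losses + group_losses.detach()
    weights = torch.softmax(eta * cum_losses, dim=0)   # mirror-ascent step
    robust_loss = (weights * group_losses).sum()
    return robust_loss, cum_losses

# usage: per-group preference losses for one batch (hypothetical values)
losses = torch.tensor([0.7, 1.2, 0.4], requires_grad=True)
cum = torch.zeros(3)
loss, cum = grpo_step(losses, cum)
loss.backward()   # gradient concentrates on the worst-performing group
```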
To solve discounted infinite horizon Markov Decision Processes with discount factor $\gamma$, we show that $h$-PMD, which generalizes standard PMD, enjoys a faster dimension-free $\gamma^h$-linear convergence rate, contingent on the computation of multi-step greedy policies. We propose an inexact version of $h$-PMD where lookahead action values are estimated. Under a generative model, we establish a sample complexity bound for $h$-PMD that improves over prior work. Finally, we extend our result to linear function approximation to scale to large state spaces. Under suitable assumptions, our sample complexity only involves dependence on the dimension of the feature map space instead of the state space size.", "pdf": "https://openreview.net/pdf/3e67678b563dbcb1a4c125114a24df6e808c3c36.pdf"} {"title": "Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning", "url": "https://openreview.net/forum?id=McrzOo0hwr", "detail_url": "https://openreview.net/forum?id=McrzOo0hwr", "authors": "Yiqin Lv,Cheems Wang,Dong Liang,Zheng Xie", "tags": "NIPS 2024,Poster", "abstract": "Meta learning is a promising paradigm in the era of large models, and \ntask distributional robustness has become an indispensable consideration in real-world scenarios.\nRecent advances have examined the effectiveness of tail task risk minimization in improving fast-adaptation robustness \citep{wang2023simple}.\nThis work contributes further theoretical investigations and practical enhancements to the field.\nSpecifically, we reduce the distributionally robust strategy to a max-min optimization problem, take the Stackelberg equilibrium as the solution concept, and estimate the convergence rate.\nIn the presence of tail risk, we further derive the generalization bound, establish connections with estimated quantiles, and practically improve the studied strategy.\nAccordingly, extensive evaluations demonstrate the significance of our proposal in boosting robustness.", "pdf": "https://openreview.net/pdf/8f42b9167be474e2b6dfef0144964acf3823bb00.pdf"} {"title": "Goal Conditioned Reinforcement Learning for Photo Finishing Tuning", "url": "https://openreview.net/forum?id=4kVHI2uXRE", "detail_url": "https://openreview.net/forum?id=4kVHI2uXRE", "authors": "Jiarui Wu,Yujin Wang,Lingen Li,Zhang Fan,Tianfan Xue", "tags": "NIPS 2024,Poster", "abstract": "Photo finishing tuning aims to automate the manual tuning process of the photo finishing pipeline, like Adobe Lightroom or Darktable. Previous works either use zeroth-order optimization, which is slow as the number of parameters increases, or rely on a differentiable proxy of the target finishing pipeline, which is hard to train.\nTo overcome these challenges, we propose a novel goal-conditioned reinforcement learning framework for efficiently tuning parameters using a goal image as a condition. Unlike previous approaches, our tuning framework does not rely on any proxy and treats the photo finishing pipeline as a black box. Utilizing a trained reinforcement learning policy, it can efficiently find the desired set of parameters within just 10 queries, while optimization-based approaches normally take 200 queries. Furthermore, our architecture utilizes a goal image to guide the iterative tuning of pipeline parameters, allowing for flexible conditioning on pixel-aligned target images, style images, or any other visually representable goals. 
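A worked statement of the lookahead update in the $h$-PMD entry above, in our notation; this is a sketch of the mechanism behind the $\gamma^h$ rate, not the paper's exact theorem:

```latex
% h-step greedy improvement (notation ours). With the Bellman optimality
% operator (T V)(s) = \max_a [ r(s,a) + \gamma \, \mathbb{E}_{s'} V(s') ],
% the lookahead policy acts greedily w.r.t. T^{h-1} applied to V^{\pi_k}:
\pi_{k+1}(\cdot \mid s) \in \arg\max_{\pi} \;
  \mathbb{E}_{a \sim \pi(\cdot \mid s)} \Big[ r(s,a) + \gamma \,
  \mathbb{E}_{s'} \big[ (T^{h-1} V^{\pi_k})(s') \big] \Big].
% Each sweep then contracts toward V^* at the faster rate \gamma^h,
% in the spirit of the \gamma^h-linear convergence claimed above:
\| V^{\pi_{k+1}} - V^* \|_\infty \;\lesssim\; \gamma^h \, \| V^{\pi_k} - V^* \|_\infty .
```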
We conduct detailed experiments on photo finishing tuning and photo stylization tuning tasks, demonstrating the advantages of our method.", "pdf": "https://openreview.net/pdf/3f8d40c2dcf5fa09bbe755497ee149774572ee7d.pdf"} {"title": "Towards Multi-dimensional Explanation Alignment for Medical Classification", "url": "https://openreview.net/forum?id=3A5VgiH5Pw", "detail_url": "https://openreview.net/forum?id=3A5VgiH5Pw", "authors": "Lijie Hu,Songning Lai,Wenshuo Chen,Hongru Xiao,Hongbin Lin,Lu Yu,Jingfeng Zhang,Di Wang", "tags": "NIPS 2024,Poster", "abstract": "The lack of interpretability in the field of medical image analysis has significant ethical and legal implications. Existing interpretable methods in this domain encounter several challenges, including dependency on specific models, difficulties in understanding and visualization, and issues related to efficiency. To address these limitations, we propose a novel framework called Med-MICN (Medical Multi-dimensional Interpretable Concept Network). Med-MICN provides interpretability alignment from various angles, including neural symbolic reasoning, concept semantics, and saliency maps, and is superior to current interpretable methods. Its advantages include high prediction accuracy, interpretability across multiple dimensions, and automation through an end-to-end concept labeling process that reduces the need for extensive human training effort when working with new datasets. To demonstrate the effectiveness and interpretability of Med-MICN, we apply it to four benchmark datasets and compare it with baselines. The results clearly demonstrate the superior performance and interpretability of our Med-MICN.", "pdf": "https://openreview.net/pdf/e2bc722a923c9dbaf0468dbd8b8855e3ebe40fa3.pdf"} {"title": "CAT: Coordinating Anatomical-Textual Prompts for Multi-Organ and Tumor Segmentation", "url": "https://openreview.net/forum?id=pnmUiVAGnv", "detail_url": "https://openreview.net/forum?id=pnmUiVAGnv", "authors": "Zhongzhen Huang,Yankai Jiang,Rongzhao Zhang,Shaoting Zhang,Xiaofan Zhang", "tags": "NIPS 2024,Poster", "abstract": "Existing promptable segmentation methods in the medical imaging field primarily consider either textual or visual prompts to segment relevant objects, yet they often fall short when addressing anomalies in medical images, like tumors, which may vary greatly in shape, size, and appearance. Recognizing the complexity of medical scenarios and the limitations of textual or visual prompts, we propose a novel dual-prompt schema that leverages the complementary strengths of visual and textual prompts for segmenting various organs and tumors. Specifically, we introduce $\textbf{\textit{CAT}}$, an innovative model that $\textbf{C}$oordinates $\textbf{A}$natomical prompts derived from 3D cropped images with $\textbf{T}$extual prompts enriched by medical domain knowledge. The model architecture adopts a general query-based design, where prompt queries facilitate segmentation queries for mask prediction. To synergize two types of prompts within a unified framework, we implement a ShareRefiner, which refines both segmentation and prompt queries while disentangling the two types of prompts. Trained on a consortium of 10 public CT datasets, $\textbf{\textit{CAT}}$ demonstrates superior performance in multiple segmentation tasks. Further validation on a specialized in-house dataset reveals the remarkable capacity to segment tumors across multiple cancer stages. 
This approach confirms that coordinating multimodal prompts is a promising avenue for addressing complex scenarios in the medical domain.", "pdf": "https://openreview.net/pdf/0dcf67da2efd207182eda04b8b667b49326c2479.pdf"} {"title": "Nimbus: Secure and Efficient Two-Party Inference for Transformers", "url": "https://openreview.net/forum?id=G7QS68ICPJ", "detail_url": "https://openreview.net/forum?id=G7QS68ICPJ", "authors": "Zhengyi Li,Kang Yang,Jin Tan,Wen-jie Lu,Haoqi Wu,Xiao Wang,Yu Yu,Derun Zhao,Yancheng Zheng,Minyi Guo,Jingwen Leng", "tags": "NIPS 2024,Poster", "abstract": "Transformer models have gained significant attention due to their power in machine learning tasks. Their extensive deployment has raised concerns about the potential leakage of sensitive information during inference. However, when being applied to Transformers, existing approaches based on secure two-party computation (2PC) suffer from efficiency limitations on two fronts: (1) resource-intensive matrix multiplications in linear layers, and (2) complex non-linear activation functions like $\mathsf{GELU}$ and $\mathsf{Softmax}$. This work presents a new two-party inference framework $\mathsf{Nimbus}$ for Transformer models. Specifically, we propose a new 2PC paradigm to securely compute matrix multiplications based on an outer-product insight, which achieves $2.9\times \sim 12.5\times$ performance improvements compared to the state-of-the-art (SOTA) protocol. Furthermore, based on a new observation about the input distribution, we propose a low-degree polynomial approximation for $\mathsf{GELU}$ and $\mathsf{Softmax}$, which improves the performance of the SOTA polynomial approximation by $2.9\times \sim 4.0\times$, with an average accuracy loss of 0.08\% compared to non-2PC inference without privacy. Compared with the SOTA two-party inference, $\mathsf{Nimbus}$ improves the end-to-end performance of $BERT_{base}$ inference by $2.7\times \sim 4.7\times$ across different network settings.", "pdf": "https://openreview.net/pdf/8e46783bd7cf3fc3ca9bf4e5d9b04fa64e098fe6.pdf"} {"title": "One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos", "url": "https://openreview.net/forum?id=bQMevGCYVM", "detail_url": "https://openreview.net/forum?id=bQMevGCYVM", "authors": "Zechen Bai,Tong He,Haiyang Mei,Pichao WANG,Ziteng Gao,Joya Chen,liulei,Zheng Zhang,Mike Zheng Shou", "tags": "NIPS 2024,Poster", "abstract": "We introduce VideoLISA, a video-based multimodal large language model designed to tackle the problem of language-instructed reasoning segmentation in videos. Leveraging the reasoning capabilities and world knowledge of large language models, and augmented by the Segment Anything Model, VideoLISA generates temporally consistent segmentation masks in videos based on language instructions. Existing image-based methods, such as LISA, struggle with video tasks due to the additional temporal dimension, which requires temporal dynamic understanding and consistent segmentation across frames. VideoLISA addresses these challenges by integrating a Sparse Dense Sampling strategy into the video-LLM, which balances temporal context and spatial detail within computational constraints. Additionally, we propose a One-Token-Seg-All approach using a specially designed token, enabling the model to segment and track objects across multiple frames. 
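The low-degree polynomial approximation in the Nimbus entry above is easy to prototype with a least-squares fit on a bounded range; the range and degree below are illustrative, since Nimbus chooses them from the observed activation distribution, which is what makes low degrees viable:

```python
import numpy as np
from scipy.special import erf

def gelu(x):
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

# Fit a degree-4 polynomial on a bounded input range and check the error.
xs = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(xs, gelu(xs), deg=4)
approx = np.polyval(coeffs, xs)
print("max abs error on [-4, 4]:", np.abs(approx - gelu(xs)).max())
```

Evaluating a fixed polynomial in 2PC only needs secure additions and a handful of secure multiplications, which is why replacing the exact non-linearity pays off.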
Extensive evaluations on diverse benchmarks, including our newly introduced ReasonVOS benchmark, demonstrate VideoLISA's superior performance in video object segmentation tasks involving complex reasoning, temporal understanding, and object tracking. While optimized for videos, VideoLISA also shows promising generalization to image segmentation, revealing its potential as a unified foundation model for language-instructed object segmentation. Code and model will be available at: https://github.com/showlab/VideoLISA.", "pdf": "https://openreview.net/pdf/7f185c3dfd575b1dd5d18b4a650f5ea18468b3a4.pdf"} {"title": "Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts", "url": "https://openreview.net/forum?id=h0rbjHyWoa", "detail_url": "https://openreview.net/forum?id=h0rbjHyWoa", "authors": "Zhitong Gao,Bingnan Li,Mathieu Salzmann,Xuming He", "tags": "NIPS 2024,Poster", "abstract": "In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety and generalize to new domains. However, existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts, leading to poor OOD detection or domain generalization performance. In this work, we aim to equip the model to generalize effectively to covariate-shift regions while precisely identifying semantic-shift regions. To achieve this, we design a novel generative augmentation method to produce coherent images that incorporate both anomaly (or novel) objects and various covariate shifts at both image and object levels. Furthermore, we introduce a training strategy that recalibrates uncertainty specifically for semantic shifts and enhances the feature extractor to align features associated with domain shifts. We validate the effectiveness of our method across benchmarks featuring both semantic and domain shifts. Our method achieves state-of-the-art performance across all benchmarks for both OOD detection and domain generalization. Code is available at https://github.com/gaozhitong/MultiShiftSeg.", "pdf": "https://openreview.net/pdf/dc83386e61184b2c08f60da6297a230eee4a5d53.pdf"} {"title": "Distribution-Aware Data Expansion with Diffusion Models", "url": "https://openreview.net/forum?id=UGUkPYSdg4", "detail_url": "https://openreview.net/forum?id=UGUkPYSdg4", "authors": "haoweiz,Ling Yang,Jun-Hai Yong,Hongzhi Yin,Jiawei Jiang,Meng Xiao,Wentao Zhang,Bin Wang", "tags": "NIPS 2024,Poster", "abstract": "The scale and quality of a dataset significantly impact the performance of deep models. However, acquiring large-scale annotated datasets is both a costly and time-consuming endeavor. To address this challenge, dataset expansion technologies aim to automatically augment datasets, unlocking the full potential of deep models. Current data expansion techniques include image transformation and image synthesis methods. Transformation-based methods introduce only local variations, leading to limited diversity. In contrast, synthesis-based methods generate entirely new content, greatly enhancing informativeness. However, existing synthesis methods carry the risk of distribution deviations, potentially degrading model performance with out-of-distribution samples. In this paper, we propose DistDiff, a training-free data expansion framework based on the distribution-aware diffusion model. 
DistDiff constructs hierarchical prototypes to approximate the real data distribution, optimizing latent data points within diffusion models with hierarchical energy guidance. We demonstrate its capability to generate distribution-consistent samples, significantly improving data expansion tasks. DistDiff consistently enhances accuracy across a diverse range of datasets compared to models trained solely on original data. Furthermore, our approach consistently outperforms existing synthesis-based techniques and demonstrates compatibility with widely adopted transformation-based augmentation methods. Additionally, the expanded dataset exhibits robustness across various architectural frameworks.", "pdf": "https://openreview.net/pdf/aebcee87f52d358e621ac06cb728c66d23f7299b.pdf"} {"title": "BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation", "url": "https://openreview.net/forum?id=35WwZhkush", "detail_url": "https://openreview.net/forum?id=35WwZhkush", "authors": "Xiang Zhang,Bingxin Ke,Hayko Riemenschneider,Nando Metzger,Anton Obukhov,Markus Gross,Konrad Schindler,Christopher Schroers", "tags": "NIPS 2024,Poster", "abstract": "By training over large-scale datasets, zero-shot monocular depth estimation (MDE) methods show robust performance in the wild but often suffer from insufficient detail. Although recent diffusion-based MDE approaches exhibit a superior ability to extract details, they struggle in geometrically complex scenes that challenge their geometry prior, trained on less diverse 3D data. To leverage the complementary merits of both worlds, we propose BetterDepth to achieve geometrically correct affine-invariant MDE while capturing fine details. Specifically, BetterDepth is a conditional diffusion-based refiner that takes the prediction from pre-trained MDE models as depth conditioning, in which the global depth layout is well-captured, and iteratively refines details based on the input image. For the training of such a refiner, we propose global pre-alignment and local patch masking methods to ensure BetterDepth remains faithful to the depth conditioning while learning to add fine-grained scene details. With efficient training on small-scale synthetic datasets, BetterDepth achieves state-of-the-art zero-shot MDE performance on diverse public datasets and on in-the-wild scenes. Moreover, BetterDepth can improve the performance of other MDE models in a plug-and-play manner without further re-training.", "pdf": "https://openreview.net/pdf/0fba6477df5fed26b30216cbabd2d5bfabf65133.pdf"} {"title": "Depth Anything V2", "url": "https://openreview.net/forum?id=cFTi3gLJ1X", "detail_url": "https://openreview.net/forum?id=cFTi3gLJ1X", "authors": "Lihe Yang,Bingyi Kang,Zilong Huang,Zhen Zhao,Xiaogang Xu,Jiashi Feng,Hengshuang Zhao", "tags": "NIPS 2024,Poster", "abstract": "This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. 
We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with sparse depth annotations to facilitate future research. Models are available at https://github.com/DepthAnything/Depth-Anything-V2.", "pdf": "https://openreview.net/pdf/fc5361e39997a3b9bb75d48ab2cadb293cc7b7fd.pdf"} {"title": "$\text{Di}^2\text{Pose}$: Discrete Diffusion Model for Occluded 3D Human Pose Estimation", "url": "https://openreview.net/forum?id=p2PO2PUPFY", "detail_url": "https://openreview.net/forum?id=p2PO2PUPFY", "authors": "Weiquan Wang,Jun Xiao,Chunping Wang,Wei Liu,Zhao Wang,Long Chen", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have demonstrated their effectiveness in addressing the inherent uncertainty and indeterminacy in monocular 3D human pose estimation (HPE). \nDespite their strengths, the need for large search spaces and the corresponding demand for substantial training data make these models prone to generating biomechanically unrealistic poses. \nThis challenge is particularly noticeable in occlusion scenarios, where the complexity of inferring 3D structures from 2D images intensifies. \nIn response to these limitations, we introduce the **Di**screte **Di**ffusion **Pose** (**$\text{Di}^2\text{Pose}$**), a novel framework designed for occluded 3D HPE that capitalizes on the benefits of a discrete diffusion model. \nSpecifically, **$\text{Di}^2\text{Pose}$** employs a two-stage process: it first converts 3D poses into a discrete representation through a pose quantization step, which is subsequently modeled in latent space through a discrete diffusion process. \nThis methodological innovation restricts the search space to physically viable configurations and enhances the model\u2019s capability to comprehend how occlusions affect human pose within the latent space. \nExtensive evaluations conducted on various benchmarks (e.g., Human3.6M, 3DPW, and 3DPW-Occ) have demonstrated its effectiveness.", "pdf": "https://openreview.net/pdf/2f464387bc4d5c53f929c419cd4f83dc55f1e5a3.pdf"} {"title": "MambaAD: Exploring State Space Models for Multi-class Unsupervised Anomaly Detection", "url": "https://openreview.net/forum?id=8VKxTlnejE", "detail_url": "https://openreview.net/forum?id=8VKxTlnejE", "authors": "Haoyang He,Yuhu Bai,Jiangning Zhang,Qingdong He,Hongxu Chen,Zhenye Gan,Chengjie Wang,Xiangtai Li,Guanzhong Tian,Lei Xie", "tags": "NIPS 2024,Poster", "abstract": "Recent advancements in anomaly detection have seen the efficacy of CNN- and transformer-based approaches. However, CNNs struggle with long-range dependencies, while transformers are burdened by quadratic computational complexity. Mamba-based models, with their superior long-range modeling and linear efficiency, have garnered substantial attention. This study pioneers the application of Mamba to multi-class unsupervised anomaly detection, presenting MambaAD, which consists of a pre-trained encoder and a Mamba decoder featuring Locality-Enhanced State Space (LSS) modules at multiple scales. The proposed LSS module, integrating parallel cascaded Hybrid State Space (HSS) blocks and multi-kernel convolution operations, effectively captures both long-range and local information. 
The HSS block, utilizing Hybrid Scanning (HS) encoders, encodes feature maps using five scanning methods and eight directions, thereby strengthening global connections through the State Space Model (SSM). The use of Hilbert scanning and eight directions significantly improves feature sequence modeling. Comprehensive experiments on six diverse anomaly detection datasets and seven metrics demonstrate state-of-the-art performance, substantiating the method's effectiveness. The code and models are available at https://lewandofskee.github.io/projects/MambaAD.", "pdf": "https://openreview.net/pdf/b33454c402a9428ad357b8c1e894b852ce56a599.pdf"} {"title": "MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models", "url": "https://openreview.net/forum?id=Xskl7Da34U", "detail_url": "https://openreview.net/forum?id=Xskl7Da34U", "authors": "Leyang Shen,Gongwei Chen,Rui Shao,Weili Guan,Liqiang Nie", "tags": "NIPS 2024,Poster", "abstract": "Multimodal large language models (MLLMs) have demonstrated impressive capabilities across various vision-language tasks. However, a generalist MLLM typically underperforms compared with a specialist MLLM on most VL tasks, which can be attributed to task interference. In this paper, we propose a mixture of multimodal experts (MoME) to mitigate task interference and obtain a generalist MLLM. Our MoME is composed of two key components, a mixture of vision experts (MoVE) and a mixture of language experts (MoLE). MoVE can adaptively modulate the features transformed from various vision encoders, and has strong compatibility across transformation architectures. MoLE incorporates sparsely gated experts into LLMs to achieve painless improvements with roughly unchanged inference costs. In response to task interference, our MoME specializes in both the vision and language modalities to adapt to task discrepancies. Extensive experiments show that MoME significantly improves the performance of generalist MLLMs across various VL tasks.", "pdf": "https://openreview.net/pdf/b7ca0b7ca564643e99c5888c17416163e834e51f.pdf"} {"title": "Image Understanding Makes for A Good Tokenizer for Image Generation", "url": "https://openreview.net/forum?id=RMmgu49lwn", "detail_url": "https://openreview.net/forum?id=RMmgu49lwn", "authors": "Luting Wang,Yang Zhao,Zijian Zhang,Jiashi Feng,Si Liu,Bingyi Kang", "tags": "NIPS 2024,Poster", "abstract": "Modern image generation (IG) models have been shown to capture rich semantics valuable for image understanding (IU) tasks. However, the potential of IU models to improve IG performance remains uncharted. We address this issue using a token-based IG framework, which relies on effective tokenizers to project images into token sequences. Currently, **pixel reconstruction** (e.g., VQGAN) dominates the training objective for image tokenizers. In contrast, our approach adopts the **feature reconstruction** objective, where tokenizers are trained by distilling knowledge from pretrained IU encoders. Comprehensive comparisons indicate that tokenizers with strong IU capabilities achieve superior IG performance across a variety of metrics, datasets, tasks, and proposal networks. Notably, VQ-KD CLIP achieves $4.10$ FID on ImageNet-1k (IN-1k). Visualization suggests that the superiority of VQ-KD can be partly attributed to the rich semantics within the VQ-KD codebook. We further introduce a straightforward pipeline to directly transform IU encoders into tokenizers, demonstrating exceptional effectiveness for IG tasks. 
These discoveries may energize further exploration into image tokenizer research and inspire the community to reassess the relationship between IU and IG. The code is released at https://github.com/magic-research/vector_quantization.", "pdf": "https://openreview.net/pdf/3a2ffb8b1b496a554ed5211e9fa55057e68767ad.pdf"} {"title": "Differentiable Structure Learning with Partial Orders", "url": "https://openreview.net/forum?id=B2cTLakrhV", "detail_url": "https://openreview.net/forum?id=B2cTLakrhV", "authors": "Taiyu Ban,Lyuzhou Chen,Xiangyu Wang,Xin Wang,Derui Lyu,Huanhuan Chen", "tags": "NIPS 2024,Poster", "abstract": "Differentiable structure learning is a novel line of causal discovery research that transforms the combinatorial optimization of structural models into a continuous optimization problem. However, the field has lacked feasible methods to integrate partial order constraints, a critical form of prior information typically used in real-world scenarios, into the differentiable structure learning framework. The main difficulty lies in adapting these constraints, typically suited for the space of total orderings, to the continuous optimization context of structure learning in the graph space. To bridge this gap, this paper formalizes a set of equivalent constraints that map partial orders onto graph spaces and introduces a plug-and-play module for their efficient application. This module preserves the equivalent effect of partial order constraints in the graph space, backed by theoretical validations of correctness and completeness. It significantly enhances the quality of recovered structures while maintaining good efficiency, learning better structures with 90\% fewer samples than the data-based method on a real-world dataset. This result, together with a comprehensive evaluation on synthetic cases, demonstrates our method's ability to effectively improve differentiable structure learning with partial orders.", "pdf": "https://openreview.net/pdf/fd75d3592f5e7a120079d3917c23b0683fbb55e7.pdf"} {"title": "GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling", "url": "https://openreview.net/forum?id=DG2f1rVEM5", "detail_url": "https://openreview.net/forum?id=DG2f1rVEM5", "authors": "Bowen Zhang,Yiji Cheng,Jiaolong Yang,Chunyu Wang,Feng Zhao,Yansong Tang,Dong Chen,Baining Guo", "tags": "NIPS 2024,Poster", "abstract": "We introduce a radiance representation that is both structured and fully explicit and thus greatly facilitates 3D generative modeling. Existing radiance representations either require an implicit feature decoder, which significantly degrades the modeling power of the representation, or are spatially unstructured, making them difficult to integrate with mainstream 3D diffusion methods. We derive GaussianCube by first using a novel densification-constrained Gaussian fitting algorithm, which yields high-accuracy fitting using a fixed number of free Gaussians, and then rearranging these Gaussians into a predefined voxel grid via Optimal Transport. Since GaussianCube is a structured grid representation, it allows us to use a standard 3D U-Net as our backbone in diffusion modeling without elaborate designs. More importantly, the high-accuracy fitting of the Gaussians allows us to achieve a high-quality representation with one to two orders of magnitude fewer parameters than previous structured representations of comparable quality. 
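A compact sketch of the feature-reconstruction objective in the tokenizer entry above (VQ-KD-style); the architecture, nearest-code lookup, and cosine loss are our simplifications, and the commitment term is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureReconTokenizer(nn.Module):
    """Sketch of a feature-reconstruction tokenizer: instead of
    reconstructing pixels, the decoder is trained to match features
    from a frozen image-understanding (IU) encoder such as CLIP."""
    def __init__(self, dim: int = 256, vocab: int = 8192, feat_dim: int = 512):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=8, stride=8)
        self.codebook = nn.Embedding(vocab, dim)
        self.decoder = nn.Conv2d(dim, feat_dim, kernel_size=1)

    def forward(self, images, teacher_feats):
        # teacher_feats: frozen IU features at the same spatial grid.
        z = self.encoder(images)                          # (B, dim, h, w)
        flat = z.permute(0, 2, 3, 1).reshape(-1, z.shape[1])
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=-1)
        q = self.codebook(idx).view(z.shape[0], z.shape[2], z.shape[3], -1)
        q = q.permute(0, 3, 1, 2)
        q = z + (q - z).detach()                          # straight-through
        pred = self.decoder(q)
        # Distillation target: cosine similarity to the frozen IU features.
        loss = 1 - F.cosine_similarity(pred, teacher_feats, dim=1).mean()
        return loss, idx
```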
The compactness of GaussianCube greatly eases the difficulty of 3D generative modeling. Extensive experiments conducted on unconditional and class-conditioned object generation, digital avatar creation, and text-to-3D synthesis all show that our model achieves state-of-the-art generation results both qualitatively and quantitatively, underscoring the potential of GaussianCube as a highly accurate and versatile radiance representation for 3D generative modeling.", "pdf": "https://openreview.net/pdf/31846ee9e75a1424f980048568944a1577ba41e8.pdf"} {"title": "ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users", "url": "https://openreview.net/forum?id=H2ATO32ilj", "detail_url": "https://openreview.net/forum?id=H2ATO32ilj", "authors": "Guanlin Li,Kangjie Chen,Shudong Zhang,Jie Zhang,Tianwei Zhang", "tags": "NIPS 2024,Poster", "abstract": "Large-scale pre-trained generative models are taking the world by storm, owing to their ability to generate creative content. Meanwhile, safeguards for these generative models have been developed to protect users' rights and safety, most of which are designed for large language models. Existing methods primarily focus on jailbreak and adversarial attacks, which mainly evaluate the model's safety under malicious prompts. Recent work found that manually crafted safe prompts can unintentionally trigger unsafe generations. To further systematically evaluate the safety risks of text-to-image models, we propose a novel Automatic Red-Teaming framework, ART. Our method leverages both a vision language model and a large language model to establish a connection between unsafe generations and their prompts, thereby more efficiently identifying the model's vulnerabilities. With our comprehensive experiments, we reveal the toxicity of popular open-source text-to-image models. The experiments also validate the effectiveness, adaptability, and great diversity of ART. Additionally, we introduce three large-scale red-teaming datasets for studying the safety risks associated with text-to-image models. Datasets and models can be found in https://github.com/GuanlinLee/ART.", "pdf": "https://openreview.net/pdf/842392c21de2119ca2b928631330e9ce03e111db.pdf"} {"title": "Make Continual Learning Stronger via C-Flat", "url": "https://openreview.net/forum?id=Dokew2u49m", "detail_url": "https://openreview.net/forum?id=Dokew2u49m", "authors": "Ang Bian,Wei Li,Hangjie Yuan,yu chengrong,Mang Wang,Zixiang Zhao,Aojun Lu,Pengliang Ji,Tao Feng", "tags": "NIPS 2024,Poster", "abstract": "Balancing the learning \u2019sensitivity-stability\u2019 trade-off between new-task training and memory preservation is critical in CL to resolve catastrophic forgetting. Improving model generalization within each learning phase is one solution to help CL overcome the gap in the joint knowledge space. Zeroth-order loss landscape sharpness-aware minimization is a strong training regime that improves model generalization in transfer learning compared with optimizers like SGD. It has also been introduced into CL to improve memory representation or learning efficiency. However, zeroth-order sharpness alone could favor sharper over flatter minima in certain scenarios, leading to sensitive minima rather than a global optimum. To further enhance learning stability, we propose a Continual Flatness (C-Flat) method featuring a flatter loss landscape tailored for CL. C-Flat can be invoked with only one line of code and is plug-and-play with any CL method. 
This paper presents a general framework for applying C-Flat to all CL categories, along with a thorough comparison against loss-minima optimizers and flat-minima-based CL approaches, showing that our method can boost CL performance in almost all cases. Code is available at https://github.com/WanNaa/C-Flat.", "pdf": "https://openreview.net/pdf/be179393fb5b55da27facef791300b7cea7f22b0.pdf"} {"title": "MVGamba: Unify 3D Content Generation as State Space Sequence Modeling", "url": "https://openreview.net/forum?id=AprsVxrwXT", "detail_url": "https://openreview.net/forum?id=AprsVxrwXT", "authors": "Xuanyu Yi,Zike Wu,Qiuhong Shen,Qingshan Xu,Pan Zhou,Joo Hwee Lim,Shuicheng YAN,Xinchao Wang,Hanwang Zhang", "tags": "NIPS 2024,Poster", "abstract": "Recent 3D large reconstruction models (LRMs) can generate high-quality 3D content in sub-seconds by integrating multi-view diffusion models with scalable multi-view reconstructors. Current works further leverage 3D Gaussian Splatting as the 3D representation for improved visual quality and rendering efficiency. However, we observe that existing Gaussian reconstruction models often suffer from multi-view inconsistency and blurred textures. We attribute this to the compromise of multi-view information propagation in favor of adopting powerful yet computationally intensive architectures (\\eg, Transformers). \nTo address this issue, we introduce MVGamba, a general and lightweight Gaussian reconstruction model featuring a multi-view Gaussian reconstructor based on the RNN-like State Space Model (SSM). Our Gaussian reconstructor propagates causal context containing multi-view information for cross-view self-refinement while generating a long sequence of Gaussians for fine-detail modeling with linear complexity.\nWith off-the-shelf multi-view diffusion models integrated, MVGamba unifies 3D generation tasks from a single image, sparse images, or text prompts. Extensive experiments demonstrate that MVGamba outperforms state-of-the-art baselines in all 3D content generation scenarios with only approximately $0.1\times$ the model size. The codes are available at \url{https://github.com/SkyworkAI/MVGamba}.", "pdf": "https://openreview.net/pdf/d1260f2186da4119a6e9c5f99613609eaa771e3e.pdf"} {"title": "HOPE: Shape Matching Via Aligning Different K-hop Neighbourhoods", "url": "https://openreview.net/forum?id=1ziIqFo4Tj", "detail_url": "https://openreview.net/forum?id=1ziIqFo4Tj", "authors": "Barakeel Fanseu Kamhoua,Huamin Qu", "tags": "NIPS 2024,Poster", "abstract": "Accurate and smooth shape matching is very hard to achieve. This is because, for accuracy, one needs unique descriptors (signatures) on shapes that distinguish different vertices on a mesh accurately while at the same time being invariant to deformations. However, most existing unique shape descriptors are generally not smooth on the shape and are not noise-robust, thus leading to non-smooth matches. On the other hand, for smoothness, one needs descriptors that are smooth and continuous on the shape. However, existing smooth descriptors are generally not unique and as such lose accuracy as they match neighborhoods (for smoothness) rather than exact vertices (for accuracy). In this work, we propose to use different k-hop neighborhoods of vertices as pairwise descriptors for shape matching. We use these descriptors in conjunction with local map distortion (LMD) to refine an initialized map for shape matching.
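The k-hop neighborhoods above are straightforward to compute by breadth-first search over the mesh's vertex adjacency. A minimal sketch on a toy graph follows; the full pipeline additionally aligns these neighborhoods across shapes and refines the map with LMD.

```python
from collections import deque

def k_hop_neighborhoods(adj, k):
    """adj: dict mapping each vertex to its adjacent vertices (mesh edges).
    Returns, for every vertex, the set of vertices within k hops."""
    hoods = {}
    for src in adj:
        seen = {src}
        frontier = deque([(src, 0)])
        while frontier:
            v, d = frontier.popleft()
            if d == k:
                continue                      # do not expand past k hops
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    frontier.append((u, d + 1))
        hoods[src] = seen
    return hoods

# Toy example: a path graph 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(k_hop_neighborhoods(adj, 2)[0])         # {0, 1, 2}
```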
We validate the effectiveness of our pipeline on benchmark datasets such as SCAPE, TOSCA, TOPKIDS, and others.", "pdf": "https://openreview.net/pdf/d6e3094eee690239f5afb1e3d6ec580d0b05e262.pdf"} {"title": "Continuous Heatmap Regression for Pose Estimation via Implicit Neural Representation", "url": "https://openreview.net/forum?id=GgIJeoSLjQ", "detail_url": "https://openreview.net/forum?id=GgIJeoSLjQ", "authors": "Shengxiang Hu,Huaijiang Sun,Dong Wei,Xiaoning Sun,Jin Wang", "tags": "NIPS 2024,Poster", "abstract": "Heatmap regression has dominated human pose estimation due to its superior performance and strong generalization. To meet the output-form requirements of traditional explicit neural networks, existing heatmap-based methods discretize the originally continuous heatmap representation into 2D pixel arrays, which leads to performance degradation due to the introduced quantization errors. This problem is significantly exacerbated as the size of the input image decreases, which makes heatmap-based methods not much better than coordinate regression on low-resolution images. In this paper, we propose a novel neural representation for human pose estimation called NerPE to achieve continuous heatmap regression. Given any position within the image range, NerPE regresses the corresponding confidence scores for body joints according to the surrounding image features, which guarantees continuity in space and confidence during training. Thanks to the decoupling from spatial resolution, NerPE can output the predicted heatmaps at arbitrary resolution during inference without retraining, which easily achieves sub-pixel localization precision. To reduce the computational cost, we design progressive coordinate decoding to cooperate with continuous heatmap regression, in which localization no longer requires the complete generation of high-resolution heatmaps. The code is available at https://github.com/hushengxiang/NerPE.", "pdf": "https://openreview.net/pdf/c9092cefe0cd18feb7f29c8917260b69e7632a4a.pdf"} {"title": "DiffGS: Functional Gaussian Splatting Diffusion", "url": "https://openreview.net/forum?id=6zROYoHlcp", "detail_url": "https://openreview.net/forum?id=6zROYoHlcp", "authors": "Junsheng Zhou,Weiqi Zhang,Yu-Shen Liu", "tags": "NIPS 2024,Poster", "abstract": "3D Gaussian Splatting (3DGS) has shown convincing performance in rendering speed and fidelity, yet the generation of Gaussian Splatting remains a challenge due to its discreteness and unstructured nature. In this work, we propose DiffGS, a general Gaussian generator based on latent diffusion models. DiffGS is a powerful and efficient 3D generative model that is capable of generating Gaussian primitives in arbitrary numbers for high-fidelity rendering with rasterization. The key insight is to represent Gaussian Splatting in a disentangled manner via three novel functions to model Gaussian probabilities, colors, and transforms. Through the novel disentanglement of 3DGS, we represent the discrete and unstructured 3DGS with continuous Gaussian Splatting functions, where we then train a latent diffusion model with the target of generating these Gaussian Splatting functions both unconditionally and conditionally. Meanwhile, we introduce a discretization algorithm to extract Gaussians in arbitrary numbers from the generated functions via octree-guided sampling and optimization.
We explore DiffGS for various tasks, including unconditional generation, conditional generation from text, image, and partial 3DGS, as well as Point-to-Gaussian generation. We believe that DiffGS provides a new direction for flexibly modeling and generating Gaussian Splatting. Project page: https://junshengzhou.github.io/DiffGS.", "pdf": "https://openreview.net/pdf/62d8544795aabee7235bcdcd4b52a7d69e840f77.pdf"} {"title": "Inferring Neural Signed Distance Functions by Overfitting on Single Noisy Point Clouds through Finetuning Data-Driven based Priors", "url": "https://openreview.net/forum?id=Hgqs1b4ECy", "detail_url": "https://openreview.net/forum?id=Hgqs1b4ECy", "authors": "Chao Chen,Yu-Shen Liu,Zhizhong Han", "tags": "NIPS 2024,Poster", "abstract": "It is important to estimate an accurate signed distance function (SDF) from a point cloud in many computer vision applications. The latest methods learn neural SDFs using either a data-driven or an overfitting-based strategy. However, these two kinds of methods suffer from either poor generalization or slow convergence, which limits their capability in challenging scenarios like highly noisy point clouds. To resolve this issue, we propose a method that combines the strengths of both data-driven and overfitting-based methods for better generalization, faster inference, and higher accuracy in learning neural SDFs. We introduce a novel statistical reasoning algorithm in local regions that is able to finetune data-driven priors without signed distance supervision, clean point clouds, or point normals. This helps our method start with a good initialization and converge to a minimum much faster. Our numerical and visual comparisons with the state-of-the-art methods show our superiority over these methods in surface reconstruction and point cloud denoising on widely used shape and scene benchmarks. The code is available at https://github.com/chenchao15/LocalN2NM.", "pdf": "https://openreview.net/pdf/9a920fec20ef7cd144d753d76d5daa575994b112.pdf"} {"title": "SubgDiff: A Subgraph Diffusion Model to Improve Molecular Representation Learning", "url": "https://openreview.net/forum?id=iSMTo0toDO", "detail_url": "https://openreview.net/forum?id=iSMTo0toDO", "authors": "Jiying Zhang,Zijing Liu,Yu Wang,Bin Feng,Yu Li", "tags": "NIPS 2024,Poster", "abstract": "Molecular representation learning has shown great success in advancing AI-based drug discovery. A key insight of many recent works is that the 3D geometric structure of molecules provides essential information about their physicochemical properties. Recently, denoising diffusion probabilistic models have achieved impressive performance in molecular 3D conformation generation. However, most existing molecular diffusion models treat each atom as an independent entity, overlooking the dependency among atoms within substructures. This paper introduces a novel approach that enhances molecular representation learning by incorporating substructural information in the diffusion model framework. We propose a novel diffusion model termed SubgDiff that involves molecular subgraph information in the diffusion process. Specifically, SubgDiff adopts three vital techniques: i) subgraph prediction, ii) expectation state, and iii) k-step same subgraph diffusion, to enhance the perception of molecular substructure in the denoising network.
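Of the three techniques, the k-step same-subgraph diffusion is the easiest to sketch: the forward process keeps noising the same subset of atoms for k consecutive steps before resampling it. The snippet below illustrates that reading of the idea; the Bernoulli per-atom mask stands in for a proper connected-subgraph sampler, and all names are illustrative.

```python
import torch

def same_subgraph_noising(coords, betas, k, p_select=0.5):
    """Forward-noise atom coordinates, resampling the noised subgraph only
    every k steps (a sketch of 'k-step same subgraph diffusion')."""
    x = coords.clone()
    mask = None
    for t in range(len(betas)):
        if t % k == 0:   # hold the selected subgraph fixed for the next k steps
            mask = (torch.rand(x.size(0), 1) < p_select).float()
        noised = torch.sqrt(1 - betas[t]) * x + torch.sqrt(betas[t]) * torch.randn_like(x)
        x = mask * noised + (1 - mask) * x    # only subgraph atoms are diffused
    return x

coords = torch.randn(12, 3)                   # stand-in molecular conformation
betas = torch.linspace(1e-4, 2e-2, 50)        # standard variance schedule
print(same_subgraph_noising(coords, betas, k=5).shape)  # torch.Size([12, 3])
```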
Experiments on extensive downstream tasks, especially molecular force prediction, demonstrate the superior performance of our approach.", "pdf": "https://openreview.net/pdf/e1d9b348b60f0ec8fdfe0d8e8d1d73e8b9a48e13.pdf"} {"title": "Transductive Active Learning: Theory and Applications", "url": "https://openreview.net/forum?id=tZtepJBtHg", "detail_url": "https://openreview.net/forum?id=tZtepJBtHg", "authors": "Jonas Hübotter,Bhavya Sukhija,Lenart Treven,Yarden As,Andreas Krause", "tags": "NIPS 2024,Poster", "abstract": "We study a generalization of classical active learning to real-world settings with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region.\nWe analyze a family of decision rules that sample adaptively to minimize uncertainty about prediction targets.\nWe are the first to show, under general regularity assumptions, that such decision rules converge uniformly to the smallest possible uncertainty obtainable from the accessible data.\nWe demonstrate their strong sample efficiency in two key applications: active fine-tuning of large neural networks and safe Bayesian optimization, where they achieve state-of-the-art performance.", "pdf": "https://openreview.net/pdf/a4d2fe62468dee7d8a39e169ecd8c2cfd078a896.pdf"} {"title": "Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set", "url": "https://openreview.net/forum?id=r6tnDXIkNS", "detail_url": "https://openreview.net/forum?id=r6tnDXIkNS", "authors": "Wenyuan Zhang,Yu-Shen Liu,Zhizhong Han", "tags": "NIPS 2024,Poster", "abstract": "It is vital to infer a signed distance function (SDF) for multi-view based surface reconstruction. 3D Gaussian splatting (3DGS) provides a novel perspective for volume rendering, and shows advantages in rendering efficiency and quality. Although 3DGS provides a promising neural rendering option, it is still hard to infer SDFs for surface reconstruction with 3DGS due to the discreteness, the sparseness, and the off-surface drift of 3D Gaussians. To resolve these issues, we propose a method that seamlessly merges 3DGS with the learning of neural SDFs. Our key idea is to more effectively constrain the SDF inference with the multi-view consistency. To this end, we dynamically align 3D Gaussians on the zero-level set of the neural SDF, and then render the aligned 3D Gaussians through the differentiable rasterization. Meanwhile, we update the neural SDF by pulling neighboring space to the pulled 3D Gaussians, which progressively refines the signed distance field near the surface. With both differentiable pulling and splatting, we jointly optimize 3D Gaussians and the neural SDF with both RGB and geometry constraints, which recovers more accurate, smooth, and complete surfaces with more geometry details.
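The "pulling" operation referenced above moves a query point onto the zero-level set along the SDF gradient, in the spirit of this group's earlier pulling-based work: a point x is mapped to x - f(x) * grad f(x) / ||grad f(x)||. A minimal autograd sketch, with a toy sphere SDF standing in for the learned network:

```python
import torch

def pull_to_surface(sdf, points):
    """Project query points onto the SDF zero-level set:
    x' = x - f(x) * grad f(x) / ||grad f(x)||."""
    points = points.clone().requires_grad_(True)
    d = sdf(points)                                      # (N, 1) signed distances
    (grad,) = torch.autograd.grad(d.sum(), points, create_graph=True)
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    return points - d * direction

# Toy SDF of the unit sphere; every query point is pulled onto its surface.
sphere = lambda x: x.norm(dim=-1, keepdim=True) - 1.0
pulled = pull_to_surface(sphere, torch.randn(8, 3))
print(pulled.norm(dim=-1))                               # all ~1.0
```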
Our numerical and visual comparisons show our superiority over state-of-the-art methods on widely used benchmarks.", "pdf": "https://openreview.net/pdf/840219aff71fd45610db672eca004b4ce1d0ff91.pdf"} {"title": "Graph Learning for Numeric Planning", "url": "https://openreview.net/forum?id=Wxc6KvQgLq", "detail_url": "https://openreview.net/forum?id=Wxc6KvQgLq", "authors": "Dillon Ze Chen,Sylvie Thiebaux", "tags": "NIPS 2024,Poster", "abstract": "Graph learning is naturally well suited for use in symbolic, object-centric planning due to its ability to exploit relational structures exhibited in planning domains and to take as input planning instances with an arbitrary number of objects. Numeric planning is an extension of symbolic planning in which states may now also exhibit numeric variables. In this work, we propose data-efficient and interpretable machine learning models for learning to solve numeric planning tasks. This involves constructing a new graph kernel for graphs with both continuous and categorical attributes, as well as new optimisation methods for learning heuristic functions for numeric planning. Experiments show that our graph kernels are vastly more efficient and generalise better than graph neural networks for numeric planning, and also yield competitive coverage performance over domain-independent numeric planners.", "pdf": "https://openreview.net/pdf/38b28d849cc794dbc382490478f1f6a631edeca8.pdf"} {"title": "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting", "url": "https://openreview.net/forum?id=lT3oc04mDp", "detail_url": "https://openreview.net/forum?id=lT3oc04mDp", "authors": "Fangcheng Liu,Yehui Tang,Zhenhua Liu,Yunsheng Ni,Duyu Tang,Kai Han,Yunhe Wang", "tags": "NIPS 2024,Poster", "abstract": "Speculative decoding has demonstrated its effectiveness in accelerating the inference of large language models (LLMs) while maintaining an identical sampling distribution. However, the conventional approach of training a separate draft model to achieve a satisfactory token acceptance rate can be costly and impractical. In this paper, we propose a novel self-speculative decoding framework \emph{Kangaroo} with a \emph{double} early-exiting strategy, which leverages the shallow sub-network and the \texttt{LM Head} of the well-trained target LLM to construct a self-drafting model. Then, the self-verification stage only requires computing the remaining layers over the \emph{early-exited} hidden states in parallel. To bridge the representation gap between the sub-network and the full model, we train a lightweight and efficient adapter module on top of the sub-network. One significant challenge that comes with the proposed method is that the inference latency of the self-draft model may no longer be negligible compared to the big model. To boost the token acceptance rate while minimizing the latency of the self-drafting model, we introduce an additional \emph{early exiting} mechanism for both single-sequence and tree decoding scenarios. Specifically, we dynamically halt the small model's subsequent prediction during the drafting phase once the confidence level for the current step falls below a certain threshold. This approach reduces unnecessary computations and improves overall efficiency. Extensive experiments on multiple benchmarks demonstrate the effectiveness of our method: Kangaroo achieves wall-time speedups of up to 2.04$\times$, outperforming Medusa-1 with 88.7\% fewer additional parameters.
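The dynamic halting rule during drafting is simple to sketch: draft autoregressively, but stop as soon as the self-draft model's top-1 probability falls below a threshold and hand the sequence to verification. All names below (the draft-logits function in particular) are hypothetical stand-ins, not Kangaroo's actual interfaces.

```python
import torch

def draft_with_confidence_exit(draft_logits_fn, tokens, max_draft=8, tau=0.3):
    """Draft tokens until the draft model's confidence drops below tau."""
    drafted = []
    for _ in range(max_draft):
        probs = torch.softmax(draft_logits_fn(tokens), dim=-1)
        conf, tok = probs.max(dim=-1)
        if conf.item() < tau:          # not confident: stop drafting, verify
            break
        drafted.append(tok.item())
        tokens = torch.cat([tokens, tok.view(1)])
    return drafted, tokens

# Dummy draft model over a tiny 5-token vocabulary for illustration.
dummy = lambda toks: torch.randn(5)
drafted, tokens = draft_with_confidence_exit(dummy, torch.tensor([1, 2, 3]))
print(len(drafted), tokens)
```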
The code for Kangaroo is available at https://github.com/Equationliu/Kangaroo.", "pdf": "https://openreview.net/pdf/b8852e1ddfa243d2aa74ab89a7bf9adfcfd6cbf2.pdf"} {"title": "Dual Encoder GAN Inversion for High-Fidelity 3D Head Reconstruction from Single Images", "url": "https://openreview.net/forum?id=SlDx451MjC", "detail_url": "https://openreview.net/forum?id=SlDx451MjC", "authors": "Bahri Batuhan Bilecen,Ahmet Berke Gökmen,Aysegul Dundar", "tags": "NIPS 2024,Poster", "abstract": "3D GAN inversion aims to project a single image into the latent space of a 3D Generative Adversarial Network (GAN), thereby achieving 3D geometry reconstruction. While there exist encoders that achieve good results in 3D GAN inversion, they are predominantly built on EG3D, which specializes in synthesizing near-frontal views and is limited in synthesizing comprehensive 3D scenes from diverse viewpoints. In contrast to existing approaches, we propose a novel framework built on PanoHead, which excels in synthesizing images from a 360-degree perspective. To achieve realistic 3D modeling of the input image, we introduce a dual encoder system tailored for high-fidelity reconstruction and realistic generation from different viewpoints. Accompanying this, we propose a stitching framework on the triplane domain to get the best predictions from both. To achieve seamless stitching, both encoders must output consistent results despite being specialized for different tasks. For this reason, we carefully train these encoders using specialized losses, including an adversarial loss based on our novel occlusion-aware triplane discriminator. Experiments reveal that our approach surpasses the existing encoder training methods qualitatively and quantitatively.", "pdf": "https://openreview.net/pdf/a4ff6839f9fffb0d401e0df40d1be4d7dbeb759b.pdf"} {"title": "FAST: A Dual-tier Few-Shot Learning Paradigm for Whole Slide Image Classification", "url": "https://openreview.net/forum?id=9vcqleAHPl", "detail_url": "https://openreview.net/forum?id=9vcqleAHPl", "authors": "Kexue Fu,xiaoyuan Luo,Linhao Qu,Shuo Wang,Ying Xiong,Ilias Maglogiannis,Longxiang Gao,Manning Wang", "tags": "NIPS 2024,Poster", "abstract": "Expensive fine-grained annotation and data scarcity have become the primary obstacles to the widespread adoption of deep learning-based Whole Slide Image (WSI) classification algorithms in clinical practice. Unlike few-shot learning methods in natural images that can leverage the labels of each image, existing few-shot WSI classification methods only utilize a small number of fine-grained labels or weakly supervised slide labels for training in order to avoid expensive fine-grained annotation. They fail to sufficiently mine the available WSIs, severely limiting WSI classification performance. To address the above issues, we propose a novel and efficient dual-tier few-shot learning paradigm for WSI classification, named FAST. FAST consists of a dual-level annotation strategy and a dual-branch classification framework. Firstly, to avoid expensive fine-grained annotation, we collect a very small number of WSIs at the slide level, and annotate an extremely small number of patches. Then, to fully mine the available WSIs, we use all the patches and available patch labels to build a cache branch, which utilizes the labeled patches to learn the labels of unlabeled patches and performs patch classification through knowledge retrieval.
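The cache branch lends itself to a retrieval sketch: store labeled patch features as keys with one-hot labels as values, and classify a query patch by similarity-weighted retrieval. The snippet below is a Tip-Adapter-style illustration under assumed shapes, not FAST's full branch (which also propagates labels to unlabeled patches).

```python
import torch
import torch.nn.functional as F

def cache_classify(query_feats, cache_keys, cache_labels, num_classes, beta=5.0):
    """Classify patches by similarity-weighted retrieval from a feature cache."""
    q = F.normalize(query_feats, dim=-1)
    k = F.normalize(cache_keys, dim=-1)
    affinity = torch.exp(-beta * (1 - q @ k.T))          # (Q, N) retrieval weights
    one_hot = F.one_hot(cache_labels, num_classes).float()
    return affinity @ one_hot                            # (Q, C) class scores

scores = cache_classify(torch.randn(4, 32), torch.randn(10, 32),
                        torch.randint(0, 2, (10,)), num_classes=2)
print(scores.argmax(dim=-1))                             # predicted patch labels
```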
In addition to the cache branch, we also construct a prior branch that includes learnable prompt vectors, using the text encoder of visual-language models for patch classification. Finally, we integrate the results from both branches to achieve WSI classification. Extensive experiments on binary and multi-class datasets demonstrate that our proposed method significantly surpasses existing few-shot classification methods and approaches the accuracy of fully supervised methods with only 0.22% of the annotation cost. All codes and models will be publicly available at https://github.com/fukexue/FAST.", "pdf": "https://openreview.net/pdf/529e424cec522c1bd383518b695a49d8bfc14a1a.pdf"} {"title": "Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs", "url": "https://openreview.net/forum?id=u7okTt4ZyE", "detail_url": "https://openreview.net/forum?id=u7okTt4ZyE", "authors": "Qinpeng Cui,Yi'xuan Liu,Xinyi Zhang,Qiqi Bao,Qingmin Liao,liwang Amd,Lu Tian,Zicheng Liu,Zhongdao Wang,Emad Barsoum", "tags": "NIPS 2024,Poster", "abstract": "Diffusion-based image super-resolution (SR) models have attracted substantial interest due to their powerful image restoration capabilities. However, prevailing diffusion models often struggle to strike an optimal balance between efficiency and performance. Typically, they either neglect to exploit the potential of existing extensive pretrained models, limiting their generative capacity, or they necessitate dozens of forward passes starting from random noise, compromising inference efficiency. In this paper, we present DoSSR, a $\textbf{Do}$main $\textbf{S}$hift diffusion-based SR model that capitalizes on the generative powers of pretrained diffusion models while significantly enhancing efficiency by initiating the diffusion process with low-resolution (LR) images. At the core of our approach is a domain shift equation that integrates seamlessly with existing diffusion models. This integration not only improves the use of the diffusion prior but also boosts inference efficiency. Moreover, we advance our method by transitioning the discrete shift process to a continuous formulation, termed DoS-SDEs. This advancement leads to fast and customized solvers that further enhance sampling efficiency. Empirical results demonstrate that our proposed method achieves state-of-the-art performance on synthetic and real-world datasets, while notably requiring $\textbf{\emph{only 5 sampling steps}}$. Compared to previous diffusion prior based methods, our approach achieves a remarkable speedup of 5-7 times, demonstrating its superior efficiency.", "pdf": "https://openreview.net/pdf/fe140549f7d235b4d8486b6f7227fafebbdde760.pdf"} {"title": "Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies", "url": "https://openreview.net/forum?id=sKCKPr8cRL", "detail_url": "https://openreview.net/forum?id=sKCKPr8cRL", "authors": "Chaofan Tao,Qian Liu,Longxu Dou,Niklas Muennighoff,Zhongwei Wan,Ping Luo,Min Lin,Ngai Wong", "tags": "NIPS 2024,Poster", "abstract": "Research on scaling large language models (LLMs) has primarily focused on model parameters and training data size, overlooking the role of vocabulary size. We investigate how vocabulary size impacts LLM scaling laws by training models ranging from 33M to 3B parameters on up to 500B characters with various vocabulary configurations.
We propose three complementary approaches for predicting the compute-optimal vocabulary size: IsoFLOPs analysis, derivative estimation, and parametric fit of the loss function. Our approaches converge on the conclusion that the optimal vocabulary size depends on the compute budget, with larger models requiring larger vocabularies. Most LLMs, however, use insufficient vocabulary sizes. For example, we predict that the optimal vocabulary size of Llama2-70B should have been at least 216K, 7 times larger than its vocabulary of 32K. We validate our predictions empirically by training models with 3B parameters across different FLOPs budgets. Adopting our predicted optimal vocabulary size consistently improves downstream performance over commonly used vocabulary sizes. By increasing the vocabulary size from the conventional 32K to 43K, we improve performance on ARC-Challenge from 29.1 to 32.0 with the same 2.3e21 FLOPs. Our work highlights the importance of jointly considering tokenization and model scaling for efficient pre-training. The code and demo are available at https://github.com/sail-sg/scaling-with-vocab and https://hf.co/spaces/sail/scaling-with-vocab-demo.", "pdf": "https://openreview.net/pdf/134f55cd9235bc28c2d2cba435869c22f9d830f4.pdf"} {"title": "DomainGallery: Few-shot Domain-driven Image Generation by Attribute-centric Finetuning", "url": "https://openreview.net/forum?id=ZMmJ1z8vee", "detail_url": "https://openreview.net/forum?id=ZMmJ1z8vee", "authors": "Yuxuan Duan,Yan Hong,Bo Zhang,jun lan,Huijia Zhu,Weiqiang Wang,Jianfu Zhang,Li Niu,Liqing Zhang", "tags": "NIPS 2024,Poster", "abstract": "The recent progress in text-to-image models pretrained on large-scale datasets has enabled us to generate various images as long as we provide a text prompt describing what we want. Nevertheless, the availability of these models is still limited when we expect to generate images that fall into a specific domain that is either hard to describe or simply unseen by the models. In this work, we propose DomainGallery, a few-shot domain-driven image generation method which aims at finetuning pretrained Stable Diffusion on few-shot target datasets in an attribute-centric manner. Specifically, DomainGallery features prior attribute erasure, attribute disentanglement, regularization and enhancement. These techniques are tailored to few-shot domain-driven generation in order to solve key issues that previous works have failed to resolve. Extensive experiments are presented to validate the superior performance of DomainGallery on a variety of domain-driven generation scenarios.", "pdf": "https://openreview.net/pdf/4420bc032c629beaa3ba3ef4ac5fedfc964bac7d.pdf"} {"title": "CLIPAway: Harmonizing focused embeddings for removing objects via diffusion models", "url": "https://openreview.net/forum?id=76CZrhbMoo", "detail_url": "https://openreview.net/forum?id=76CZrhbMoo", "authors": "Yiğit Ekin,Ahmet Burak Yildirim,Erdem Eren Caglar,Aykut Erdem,Erkut Erdem,Aysegul Dundar", "tags": "NIPS 2024,Poster", "abstract": "Advanced image editing techniques, particularly inpainting, are essential for seamlessly removing unwanted elements while preserving visual integrity.
Traditional GAN-based methods have achieved notable success, but recent advancements in diffusion models have produced superior results due to their training on large-scale datasets, enabling the generation of remarkably realistic inpainted images.\nDespite their strengths, diffusion models often struggle with object removal tasks without explicit guidance, leading to unintended hallucinations of the removed object. To address this issue, we introduce CLIPAway, a novel approach leveraging CLIP embeddings to focus on background regions while excluding foreground elements. CLIPAway enhances inpainting accuracy and quality by identifying embeddings that prioritize the background, thus achieving seamless object removal. Unlike other methods that rely on specialized training datasets or costly manual annotations, CLIPAway provides a flexible, plug-and-play solution compatible with various diffusion-based inpainting techniques.", "pdf": "https://openreview.net/pdf/78e342fb4d7c8e504d002eeb051accc4ea383588.pdf"} {"title": "HAWK: Learning to Understand Open-World Video Anomalies", "url": "https://openreview.net/forum?id=vBKoEZ1PG3", "detail_url": "https://openreview.net/forum?id=vBKoEZ1PG3", "authors": "Jiaqi Tang,Hao LU,RUIZHENG WU,Xiaogang Xu,Ke Ma,Cheng Fang,Bin Guo,Jiangbo Lu,Qifeng Chen,Ying-Cong Chen", "tags": "NIPS 2024,Poster", "abstract": "Video Anomaly Detection (VAD) systems can autonomously monitor and identify disturbances, reducing the need for manual labor and associated costs. However, current VAD systems are often limited by their superficial semantic understanding of scenes and minimal user interaction. Additionally, the prevalent data scarcity in existing datasets restricts their applicability in open-world scenarios.\nIn this paper, we introduce HAWK, a novel framework that leverages interactive large Visual Language Models (VLM) to interpret video anomalies precisely. Recognizing the difference in motion information between abnormal and normal videos, HAWK explicitly integrates motion modality to enhance anomaly identification. To reinforce motion attention, we construct an auxiliary consistency loss within the motion and video space, guiding the video branch to focus on the motion modality. Moreover, to improve the interpretation of motion-to-language, we establish a clear supervisory relationship between motion and its linguistic representation. Furthermore, we have annotated over 8,000 anomaly videos with language descriptions, enabling effective training across diverse open-world scenarios, and also created 8,000 question-answering pairs for users' open-world questions. The final results demonstrate that HAWK achieves SOTA performance, surpassing existing baselines in both video description generation and question-answering. Our codes/dataset/demo will be released at https://github.com/jqtangust/hawk.", "pdf": "https://openreview.net/pdf/72f5b57d7e449d765338f2a60df13977e0717488.pdf"} {"title": "Understanding Bias in Large-Scale Visual Datasets", "url": "https://openreview.net/forum?id=NGIIHlAEBt", "detail_url": "https://openreview.net/forum?id=NGIIHlAEBt", "authors": "Boya Zeng,Yida Yin,Zhuang Liu", "tags": "NIPS 2024,Poster", "abstract": "A recent study has shown that large-scale visual datasets are very biased: they can be easily classified by modern neural networks. However, the concrete forms of bias among these datasets remain unclear. In this study, we propose a framework to identify the unique visual attributes distinguishing these datasets. 
Our approach applies various transformations to extract semantic, structural, boundary, color, and frequency information from datasets, and assesses how much each type of information reflects their bias. We further decompose their semantic bias with object-level analysis, and leverage natural language methods to generate detailed, open-ended descriptions of each dataset's characteristics. Our work aims to help researchers understand the bias in existing large-scale pre-training datasets, and build more diverse and representative ones in the future. Our project page and code are available at boyazeng.github.io/understand_bias.", "pdf": "https://openreview.net/pdf/5da0816d2d2e1df0b789560eea636e627d0a021b.pdf"} {"title": "Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus", "url": "https://openreview.net/forum?id=mljDUaQpln", "detail_url": "https://openreview.net/forum?id=mljDUaQpln", "authors": "Terufumi Morishita,Gaku Morio,Atsuki Yamaguchi,Yasuhiro Sogawa", "tags": "NIPS 2024,Poster", "abstract": "Large language models (LLMs) are capable of solving a wide range of tasks, yet they have struggled with reasoning.\nTo address this, we propose $\textbf{Additional Logic Training (ALT)}$, which aims to enhance LLMs' reasoning capabilities with program-generated logical reasoning samples.\nWe first establish principles for designing high-quality samples by integrating symbolic logic theory and previous empirical insights.\nThen, based on these principles, we construct a synthetic corpus named $\textbf{Formal} \ \textbf{Logic} \ \textbf{\textit{D}eduction} \ \textbf{\textit{D}iverse}$ (FLD$ _{\times2}$), comprising numerous samples of multi-step deduction with unknown facts, diverse reasoning rules, diverse linguistic expressions, and challenging distractors.\nFinally, we empirically show that ALT on FLD$ _{\times2}$ substantially enhances the reasoning capabilities of state-of-the-art LLMs, including LLaMA-3.1-70B.\nImprovements include gains of up to 30 points on logical reasoning benchmarks, up to 10 points on math and coding benchmarks, and 5 points on the benchmark suite BBH.", "pdf": "https://openreview.net/pdf/246a03e416453fbc32408c9fdfd22b45f6bbfff0.pdf"} {"title": "DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus", "url": "https://openreview.net/forum?id=HAocQ9dSAX", "detail_url": "https://openreview.net/forum?id=HAocQ9dSAX", "authors": "Yu Chen,Gim Hee Lee", "tags": "NIPS 2024,Poster", "abstract": "The recent advances in 3D Gaussian Splatting (3DGS) show promising results on the novel view synthesis (NVS) task. With its superior rendering performance and high-fidelity quality, 3DGS outperforms its NeRF predecessors. Recent 3DGS methods focus either on improving rendering stability and efficiency or on reducing model size. On the other hand, the training efficiency of 3DGS on large-scale scenes has not gained much attention. In this work, we propose DoGaussian, a method that trains 3DGS in a distributed manner. Our method first decomposes a scene into $K$ blocks and then introduces the Alternating Direction Method of Multipliers (ADMM) into the training procedure of 3DGS. During training, our DoGaussian maintains one global 3DGS model on the master node and $K$ local 3DGS models on the slave nodes. The $K$ local 3DGS models are dropped after training and we only query the global 3DGS model during inference.
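The master/worker arrangement above follows the usual consensus-ADMM template: each block updates its local copy against its own loss plus a penalty toward the shared variable, the master averages, and dual variables absorb the disagreement. A generic sketch (gradient steps stand in for exact local solves; this is textbook consensus ADMM, not the paper's 3DGS-specific update):

```python
import numpy as np

def consensus_admm(local_grads, x_init, rho=1.0, steps=100, lr=0.1):
    """Consensus ADMM: K local copies x_k agree on a global z.
    local_grads: list of K gradient functions of the local objectives."""
    K = len(local_grads)
    x = [x_init.copy() for _ in range(K)]
    u = [np.zeros_like(x_init) for _ in range(K)]
    z = x_init.copy()
    for _ in range(steps):
        for k in range(K):             # local update: one penalized gradient step
            g = local_grads[k](x[k]) + rho * (x[k] - z + u[k])
            x[k] = x[k] - lr * g
        z = np.mean([x[k] + u[k] for k in range(K)], axis=0)   # global consensus
        for k in range(K):             # dual ascent on the disagreement
            u[k] = u[k] + x[k] - z
    return z

# Two quadratic 'blocks' with minima at +1 and -1; consensus lands in between.
grads = [lambda w: 2 * (w - 1.0), lambda w: 2 * (w + 1.0)]
print(consensus_admm(grads, np.array([0.0])))              # ~[0.]
```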
The training time is reduced by scene decomposition, and the training convergence and stability are guaranteed through the consensus on the shared 3D Gaussians. Our method accelerates the training of 3DGS by $6+$ times when evaluated on large-scale scenes while concurrently achieving state-of-the-art rendering quality. Our code is publicly available at [https://github.com/AIBluefisher/DOGS](https://github.com/AIBluefisher/DOGS).", "pdf": "https://openreview.net/pdf/964cd99e77e60db83528d7a5c3548299f7ecb9b5.pdf"} {"title": "Revealing Distribution Discrepancy by Sampling Transfer in Unlabeled Data", "url": "https://openreview.net/forum?id=bnzeOG0yey", "detail_url": "https://openreview.net/forum?id=bnzeOG0yey", "authors": "Zhilin Zhao,Longbing Cao,Xuhui Fan,Wei-Shi Zheng", "tags": "NIPS 2024,Poster", "abstract": "There are increasing cases where the class labels of test samples are unavailable, creating a significant need and challenge in measuring the discrepancy between training and test distributions. This distribution discrepancy complicates the assessment of whether the hypothesis selected by an algorithm on training samples remains applicable to test samples. We present a novel approach called Importance Divergence (I-Div) to address the challenge of test label unavailability, enabling distribution discrepancy evaluation using only training samples. I-Div transfers the sampling patterns from the test distribution to the training distribution by estimating density and likelihood ratios. Specifically, the density ratio, informed by the selected hypothesis, is obtained by minimizing the Kullback-Leibler divergence between the actual and estimated input distributions. Simultaneously, the likelihood ratio is adjusted according to the density ratio by reducing the generalization error of the distribution discrepancy as transformed through the two ratios. Experimentally, I-Div accurately quantifies the distribution discrepancy, as evidenced by a wide range of complex data scenarios and tasks.", "pdf": "https://openreview.net/pdf/70599d5b43abd370f98f8216bee2e61b4f188a71.pdf"} {"title": "SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models", "url": "https://openreview.net/forum?id=EeXcOYf3Lg", "detail_url": "https://openreview.net/forum?id=EeXcOYf3Lg", "authors": "Zhaoyang Sun,Shengwu Xiong,Yaxiong Chen,Fei Du,Weihua Chen,Fan Wang,Yi Rong", "tags": "NIPS 2024,Poster", "abstract": "This paper studies the challenging task of makeup transfer, which aims to apply diverse makeup styles precisely and naturally to a given facial image. Due to the absence of paired data, current methods typically synthesize sub-optimal pseudo ground truths to guide the model training, resulting in low makeup fidelity. Additionally, different makeup styles generally have varying effects on a person's face, but existing methods struggle to deal with this diversity. To address these issues, we propose a novel Self-supervised Hierarchical Makeup Transfer (SHMT) method via latent diffusion models. Following a "decoupling-and-reconstruction" paradigm, SHMT works in a self-supervised manner, freeing itself from the misguidance of imprecise pseudo-paired data. Furthermore, to accommodate a variety of makeup styles, hierarchical texture details are decomposed via a Laplacian pyramid and selectively introduced to the content representation.
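The Laplacian pyramid used for this hierarchical texture decomposition is easy to reproduce: repeatedly downsample, upsample back, and keep the difference as a band-pass detail level. A minimal OpenCV sketch (grayscale random image as a stand-in; upsampling and summing the levels back reconstructs the input):

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Decompose an image into band-pass (Laplacian) levels plus a
    low-frequency residual; the fine levels carry the texture detail."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)           # band-pass detail at this scale
        cur = down
    pyr.append(cur)                    # coarse residual
    return pyr

img = np.random.rand(256, 256).astype(np.float32)
for level in laplacian_pyramid(img):
    print(level.shape)                 # (256,256), (128,128), (64,64), (32,32)
```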
Finally, we design a novel Iterative Dual Alignment (IDA) module that dynamically adjusts the injection condition of the diffusion model, allowing the alignment errors caused by the domain gap between content and makeup representations to be corrected. Extensive quantitative and qualitative analyses demonstrate the effectiveness of our method. Our code is available at https://github.com/Snowfallingplum/SHMT.", "pdf": "https://openreview.net/pdf/146b7cbc513aa23e9aba7cf669668f542193d2e8.pdf"} {"title": "Full-Distance Evasion of Pedestrian Detectors in the Physical World", "url": "https://openreview.net/forum?id=lWYwZklSvg", "detail_url": "https://openreview.net/forum?id=lWYwZklSvg", "authors": "Zhi Cheng,Zhanhao Hu,Yuqiu Liu,Jianmin Li,Hang Su,Xiaolin Hu", "tags": "NIPS 2024,Poster", "abstract": "Many studies have proposed attack methods to generate adversarial patterns for evading pedestrian detection, alerting the computer vision community to the need for more attention to detector robustness. However, adversarial patterns optimized by these methods commonly have limited performance at medium to long distances in the physical world. To overcome this limitation, we identify two main challenges. First, in existing methods, there is commonly an appearance gap between simulated distant adversarial patterns and their physical world counterparts, leading to incorrect optimization. Second, there exists a conflict between adversarial losses at different distances, which causes difficulties in optimization. To overcome these challenges, we introduce a Full Distance Attack (FDA) method. Our physical world experiments demonstrate the effectiveness of our FDA patterns across various detection models like YOLOv5, Deformable-DETR, and Mask RCNN. Code is available at https://github.com/zhicheng2T0/Full-Distance-Attack.git", "pdf": "https://openreview.net/pdf/36174efaee6f05139e43a80929070e81990026b0.pdf"} {"title": "AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario", "url": "https://openreview.net/forum?id=cARFM6KKlE", "detail_url": "https://openreview.net/forum?id=cARFM6KKlE", "authors": "Yuhan Li,Hao Zhou,Wenxiang Shang,Ran Lin,Xuanhong Chen,Bingbing Ni", "tags": "NIPS 2024,Poster", "abstract": "While image-based virtual try-on has made significant strides, emerging approaches still fall short of delivering high-fidelity and robust fitting images across various scenarios, as their models suffer from issues of ill-fitted garment styles and quality degradation during the training process, not to mention the lack of support for various combinations of attire. Therefore, we first propose a lightweight, scalable operator known as Hydra Block for attire combinations. This is achieved through a parallel attention mechanism that facilitates the feature injection of multiple garments from conditionally encoded branches into the main network. Secondly, to significantly enhance the model's robustness and expressiveness in real-world scenarios, we evolve its potential across diverse settings by synthesizing the residuals of multiple models, as well as implementing a mask region boost strategy to overcome the instability caused by information leakage in existing models. \nEquipped with the above design, AnyFit surpasses all baselines on high-resolution benchmarks and real-world data by a large margin, excelling in producing well-fitting garments replete with photorealistic and rich details.
Furthermore, AnyFit's impressive performance on high-fidelity virtual try-ons in any scenario from any image paves a new path for future research within the fashion community.", "pdf": "https://openreview.net/pdf/66fbddc3cdfe7ac52409f9b0dc1eae33ebea8919.pdf"} {"title": "DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization", "url": "https://openreview.net/forum?id=sbsaRj475E", "detail_url": "https://openreview.net/forum?id=sbsaRj475E", "authors": "haoweiz,Dehua Tang,Ji Liu,Mingjie Lu,Jintu Zheng,Jinzhang Peng,Dong Li,Yu Wang,Fan Jiang,Lu Tian,Spandan Tiwari,Ashish Sirasao,Jun-Hai Yong,Bin Wang,Emad Barsoum", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have achieved remarkable progress in the field of image generation due to their outstanding capabilities. However, these models require substantial computing resources because of the multi-step denoising process during inference. While traditional pruning methods have been employed to optimize these models, the retraining process necessitates large-scale training datasets and extensive computational costs to maintain generalization ability, making it neither convenient nor efficient. Recent studies attempt to utilize the similarity of features across adjacent denoising stages to reduce computational costs through simple and static strategies. However, these strategies cannot fully harness the potential of the similar feature patterns across adjacent timesteps. In this work, we propose a novel pruning method that derives an efficient diffusion model via a more intelligent and differentiable pruner. At the core of our approach is casting the model pruning process into a SubNet search process. Specifically, we first introduce a SuperNet based on standard diffusion via adding some backup connections built upon the similar features. We then construct a plugin pruner network and design optimization losses to identify redundant computation. Finally, our method can identify an optimal SubNet through few-step gradient optimization and a simple post-processing procedure. We conduct extensive experiments on various diffusion models including Stable Diffusion series and DiTs. Our DiP-GO approach achieves a 4.4x speedup for SD-1.5 without any loss of accuracy, significantly outperforming the previous state-of-the-art methods.", "pdf": "https://openreview.net/pdf/5a3bc5d77fe2b0c491b53d94bc1da70c0b57b3af.pdf"} {"title": "StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences", "url": "https://openreview.net/forum?id=FYLcH4HAZr", "detail_url": "https://openreview.net/forum?id=FYLcH4HAZr", "authors": "Shangkun Sun,Jiaming Liu,Huaxia Li,Guoqing Liu,Thomas H. Li,Wei Gao", "tags": "NIPS 2024,Poster", "abstract": "Prior multi-frame optical flow methods typically estimate flow repeatedly in a pair-wise manner, leading to significant computational redundancy. To mitigate this, we implement a Streamlined In-batch Multi-frame (SIM) pipeline, specifically tailored to video inputs to minimize redundant calculations. It enables the simultaneous prediction of successive unidirectional flows in a single forward pass, boosting processing speed by 44.43% and reaching efficiencies on par with two-frame networks. Moreover, we investigate various spatiotemporal modeling methods for optical flow estimation within this pipeline.
Notably, we propose a simple yet highly effective parameter-efficient Integrative spatiotemporal Coherence (ISC) modeling method, alongside a lightweight Global Temporal Regressor (GTR) to harness temporal cues. The proposed ISC and GTR bring powerful spatiotemporal modeling capabilities and significantly enhance accuracy, including in occluded areas, while adding modest computations to the SIM pipeline. Compared to the baseline, our approach, StreamFlow, achieves performance enhancements of 15.45% and 11.37% on the Sintel clean and final test sets respectively, with gains of 15.53% and 10.77% on occluded regions and only a 1.11% rise in latency. Furthermore, StreamFlow exhibits state-of-the-art cross-dataset testing results on Sintel and KITTI, demonstrating its robust cross-domain generalization capabilities. The code is available [here](https://github.com/littlespray/StreamFlow).", "pdf": "https://openreview.net/pdf/7ad517ca20a299f0ae196fa0e62715ebb9409fdb.pdf"} {"title": "PromptFix: You Prompt and We Fix the Photo", "url": "https://openreview.net/forum?id=p1LpXNPmIa", "detail_url": "https://openreview.net/forum?id=p1LpXNPmIa", "authors": "Yongsheng Yu,Ziyun Zeng,Hang Hua,Jianlong Fu,Jiebo Luo", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models equipped with language models demonstrate excellent controllability in image generation tasks, allowing image processing to adhere to human instructions. However, the lack of diverse instruction-following data hampers the development of models that effectively recognize and execute user-customized instructions, particularly in low-level tasks. Moreover, the stochastic nature of the diffusion process leads to deficiencies in image generation or editing tasks that require the detailed preservation of the generated images. To address these limitations, we propose PromptFix, a comprehensive framework that enables diffusion models to follow human instructions to perform a wide variety of image-processing tasks. First, we construct a large-scale instruction-following dataset that covers comprehensive image-processing tasks, including low-level tasks, image editing, and object creation. Next, we propose a high-frequency guidance sampling method to explicitly control the denoising process and preserve high-frequency details in unprocessed areas. Finally, we design an auxiliary prompting adapter, utilizing Vision-Language Models (VLMs) to enhance text prompts and improve the model's task generalization. Experimental results show that PromptFix outperforms previous methods in various image-processing tasks. Our proposed model also achieves comparable inference efficiency with these baseline models and exhibits superior zero-shot capabilities in blind restoration and combination tasks.", "pdf": "https://openreview.net/pdf/f858bc374c4a90f1e788a8dcdafcda856fa9915a.pdf"} {"title": "ACFun: Abstract-Concrete Fusion Facial Stylization", "url": "https://openreview.net/forum?id=D2VK206HaJ", "detail_url": "https://openreview.net/forum?id=D2VK206HaJ", "authors": "Jiapeng Ji,Kun Wei,Ziqi Zhang,Cheng Deng", "tags": "NIPS 2024,Poster", "abstract": "Owing to advancements in image synthesis techniques, stylization methods for large models have achieved remarkable results. However, when it comes to processing facial images, the outcomes frequently fall short of expectations. Facial stylization is predominantly challenged by two significant hurdles. Firstly, obtaining a large dataset of high-quality stylized images is difficult.
The scarcity and diversity of artistic styles make it impractical to compile comprehensive datasets for each style. Secondly, while many methods can transfer colors and strokes from style images, these elements alone cannot fully capture a specific style, which encompasses both concrete and abstract visual elements. Additionally, facial stylization often alters the visual features of the face, making it challenging to balance these changes with the need to retain facial information. To address these issues, we propose a novel method called ACFun, which uses only one style image and one facial image for facial stylization. ACFun comprises an Abstract Fusion Module (AFun) and a Concrete Fusion Module (CFun), which separately learn the abstract and concrete features of the style and face. We also design a Face and Style Imagery Alignment Loss to align the style image with the face image in the latent space. Finally, we generate stylized facial images directly from noise to complete the facial stylization task. Experiments show that our method outperforms others in facial stylization, producing highly artistic and visually pleasing results.", "pdf": "https://openreview.net/pdf/5674ab58e62c438fcc0cef2aa343429fa3f879eb.pdf"} {"title": "Improved Bayes Regret Bounds for Multi-Task Hierarchical Bayesian Bandit Algorithms", "url": "https://openreview.net/forum?id=joNPMCzVIi", "detail_url": "https://openreview.net/forum?id=joNPMCzVIi", "authors": "Jiechao Guan,Hui Xiong", "tags": "NIPS 2024,Poster", "abstract": "Hierarchical Bayesian bandit refers to the multi-task bandit problem in which bandit tasks are assumed to be drawn from the same distribution. In this work, we provide improved Bayes regret bounds for hierarchical Bayesian bandit algorithms in the multi-task linear bandit and semi-bandit settings. For the multi-task linear bandit, we first analyze the preexisting hierarchical Thompson sampling (HierTS) algorithm, and improve its gap-independent Bayes regret bound from $O(m\sqrt{n\log{n}\log{(mn)}})$ to $O(m\sqrt{n\log{n}})$ in the case of infinite action set, with $m$ being the number of tasks and $n$ the number of iterations per task. In the case of finite action set, we propose a novel hierarchical Bayesian bandit algorithm, named hierarchical BayesUCB (HierBayesUCB), that achieves the logarithmic but gap-dependent regret bound $O(m\log{(mn)}\log{n})$ under mild assumptions. All of the above regret bounds hold in many variants of the hierarchical Bayesian linear bandit problem, including when the tasks are solved sequentially or concurrently. Furthermore, we extend the aforementioned HierTS and HierBayesUCB algorithms to the multi-task combinatorial semi-bandit setting. Concretely, our combinatorial HierTS algorithm attains a Bayes regret bound of $O(m\sqrt{n}\log{n})$, comparable to the latest results. Moreover, our combinatorial HierBayesUCB yields a sharper Bayes regret bound $O(m\log{(mn)}\log{n})$.
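The two-level structure of hierarchical Thompson sampling is easy to convey in a toy Gaussian K-armed setting: sample a hyper-parameter from pooled statistics, then a task-level parameter around it, then act greedily on the sample. The snippet below is a deliberately simplified illustration with approximate posterior updates, not the paper's linear/semi-bandit algorithms or their exact posteriors.

```python
import numpy as np

rng = np.random.default_rng(0)
m, K, T = 5, 3, 500                             # tasks, arms, rounds per task
mu_star = rng.normal(0, 1, K)                   # shared hyper-mean per arm
theta = mu_star + rng.normal(0, 0.3, (m, K))    # task-level arm means

counts = np.ones((m, K))                        # one pseudo-observation as prior
sums = np.zeros((m, K))
for t in range(T):
    for task in range(m):
        # Level 1: sample a hyper-mean per arm from the pooled statistics.
        pooled = sums.sum(0) / counts.sum(0)
        mu_sample = pooled + rng.normal(0, 1 / np.sqrt(counts.sum(0)))
        # Level 2: sample task-level means, treating the hyper-sample as a prior.
        post_mean = (sums[task] + mu_sample) / counts[task]
        arm_sample = post_mean + rng.normal(0, 1 / np.sqrt(counts[task]))
        a = arm_sample.argmax()                 # act on the sampled model
        sums[task, a] += theta[task, a] + rng.normal()
        counts[task, a] += 1

print("recovered best arm per task:", (sums / counts).argmax(1) == theta.argmax(1))
```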
Experiments are conducted to validate the soundness of our theoretical results for multi-task bandit algorithms.", "pdf": "https://openreview.net/pdf/35c624f4c1c81fe9bad7cac2cf5867fdf06ae4ad.pdf"} {"title": "Cryptographic Hardness of Score Estimation", "url": "https://openreview.net/forum?id=URQXbwM0Md", "detail_url": "https://openreview.net/forum?id=URQXbwM0Md", "authors": "Min Jae Song", "tags": "NIPS 2024,Poster", "abstract": "We show that L2-accurate score estimation, in the absence of strong assumptions on the data distribution, is computationally hard even when sample complexity is polynomial in the relevant problem parameters. Our reduction builds on the result of Chen et al. (ICLR 2023), who showed that the problem of generating samples from an unknown data distribution reduces to L2-accurate score estimation. Our hard-to-estimate distributions are the "Gaussian pancakes" distributions, originally due to Diakonikolas et al. (FOCS 2017), which have been shown to be computationally indistinguishable from the standard Gaussian under widely believed hardness assumptions from lattice-based cryptography (Bruna et al., STOC 2021; Gupte et al., FOCS 2022).", "pdf": "https://openreview.net/pdf/c0bdf78561ddb13eb64794c3c03c9e594f9b611d.pdf"} {"title": "Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness", "url": "https://openreview.net/forum?id=G522UpazH3", "detail_url": "https://openreview.net/forum?id=G522UpazH3", "authors": "Mingyuan Fan,Xiaodan Li,Cen Chen,Wenmeng Zhou,Yaliang Li", "tags": "NIPS 2024,Poster", "abstract": "A prevailing belief in the attack and defense community is that higher flatness of adversarial examples enables better cross-model transferability, leading to a growing interest in employing sharpness-aware minimization and its variants. However, the theoretical relationship between the transferability of adversarial examples and their flatness has not been well established, making the belief questionable. To bridge this gap, we embark on a theoretical investigation and, for the first time, derive a theoretical bound for the transferability of adversarial examples with few practical assumptions. Our analysis challenges this belief by demonstrating that the increased flatness of adversarial examples does not necessarily guarantee improved transferability. Moreover, building upon the theoretical analysis, we propose TPA, a Theoretically Provable Attack that optimizes a surrogate of the derived bound to craft adversarial examples. Extensive experiments across widely used benchmark datasets and various real-world applications show that TPA can craft more transferable adversarial examples compared to state-of-the-art baselines. We hope that these results can recalibrate preconceived impressions within the community and facilitate the development of stronger adversarial attack and defense mechanisms.", "pdf": "https://openreview.net/pdf/ee7383e672b9b47d03f5bd3f8b9d3d485b5b507c.pdf"} {"title": "A hierarchical decomposition for explaining ML performance discrepancies", "url": "https://openreview.net/forum?id=nXXwYsARXB", "detail_url": "https://openreview.net/forum?id=nXXwYsARXB", "authors": "Harvineet Singh,Fan Xia,Adarsh Subbaswamy,Alexej Gossmann,Jean Feng", "tags": "NIPS 2024,Poster", "abstract": "Machine learning (ML) algorithms can often differ in performance across domains.
Understanding why their performance differs is crucial for determining what types of interventions (e.g., algorithmic or operational) are most effective at closing the performance gaps. Aggregate decompositions express the total performance gap as the gap due to a shift in the feature distribution $p(X)$ plus the gap due to a shift in the outcome's conditional distribution $p(Y|X)$. While this coarse explanation is helpful for guiding root cause analyses, it provides limited details and can only suggest coarse fixes involving all variables in an ML system. Detailed decompositions quantify the importance of each variable to each term in the aggregate decomposition, which can provide a deeper understanding and suggest more targeted interventions. Although parametric methods exist for conducting a full hierarchical decomposition of an algorithm's performance gap at the aggregate and detailed levels, current nonparametric methods only cover parts of the hierarchy; many also require knowledge of the entire causal graph. We introduce a nonparametric hierarchical framework for explaining why the performance of an ML algorithm differs across domains, without requiring causal knowledge. Furthermore, we derive debiased, computationally-efficient estimators and statistical inference procedures to construct confidence intervals for the explanations.", "pdf": "https://openreview.net/pdf/189c9c99946e6807779fec397a2e58a7b52303f0.pdf"} {"title": "LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language", "url": "https://openreview.net/forum?id=HShs7q1Njh", "detail_url": "https://openreview.net/forum?id=HShs7q1Njh", "authors": "James Requeima,John F Bronskill,Dami Choi,Richard E. Turner,David Duvenaud", "tags": "NIPS 2024,Poster", "abstract": "Machine learning practitioners often face significant challenges in formally integrating their prior knowledge and beliefs into predictive models, limiting the potential for nuanced and context-aware analyses. Moreover, the expertise needed to integrate this prior knowledge into probabilistic modeling typically limits the application of these models to specialists. Our goal is to build a regression model that can process numerical data and make probabilistic predictions at arbitrary locations, guided by natural language text which describes a user's prior knowledge. Large Language Models (LLMs) provide a useful starting point for designing such a tool since they 1) provide an interface where users can incorporate expert insights in natural language and 2) provide an opportunity for leveraging latent problem-relevant knowledge encoded in LLMs that users may not have themselves. We start by exploring strategies for eliciting explicit, coherent numerical predictive distributions from LLMs. We examine these joint predictive distributions, which we call LLM Processes, over arbitrarily-many quantities in settings such as forecasting, multi-dimensional regression, black-box optimization, and image modeling. We investigate the practical details of prompting to elicit coherent predictive distributions, and demonstrate their effectiveness at regression. Finally, we demonstrate the ability to usefully incorporate text into numerical predictions, improving predictive performance and giving quantitative structure that reflects qualitative descriptions. 
This lets us begin to explore the rich, grounded hypothesis space that LLMs implicitly encode.", "pdf": "https://openreview.net/pdf/3f94351fd86e5666d78418be95f586b4c85714b3.pdf"} {"title": "Categorical Flow Matching on Statistical Manifolds", "url": "https://openreview.net/forum?id=5fybcQZ0g4", "detail_url": "https://openreview.net/forum?id=5fybcQZ0g4", "authors": "Chaoran Cheng,Jiahan Li,Jian Peng,Ge Liu", "tags": "NIPS 2024,Poster", "abstract": "We introduce Statistical Flow Matching (SFM), a novel and mathematically rigorous flow-matching framework on the manifold of parameterized probability measures inspired by results from information geometry. We demonstrate the effectiveness of our method on the discrete generation problem by instantiating SFM on the manifold of categorical distributions whose geometric properties remain unexplored in previous discrete generative models. Utilizing the Fisher information metric, we equip the manifold with a Riemannian structure whose intrinsic geometries are effectively leveraged by following the shortest paths of geodesics. We develop an efficient training and sampling algorithm that overcomes numerical stability issues with a diffeomorphism between manifolds. Our distinctive geometric perspective of statistical manifolds allows us to apply optimal transport during training and interpret SFM as following the steepest direction of the natural gradient. Unlike previous models that rely on variational bounds for likelihood estimation, SFM enjoys the exact likelihood calculation for arbitrary probability measures. We demonstrate that SFM can learn more complex patterns on the statistical manifold where existing models often fail due to strong prior assumptions. Comprehensive experiments on real-world generative tasks ranging from image and text to biological domains further demonstrate that SFM achieves higher sampling quality and likelihood than other discrete diffusion or flow-based models.", "pdf": "https://openreview.net/pdf/12903ac826613fb780753f8be5476e351d2d74ca.pdf"} {"title": "Neural Gaffer: Relighting Any Object via Diffusion", "url": "https://openreview.net/forum?id=zV2GDsZb5a", "detail_url": "https://openreview.net/forum?id=zV2GDsZb5a", "authors": "Haian Jin,Yuan Li,Fujun Luan,Yuanbo Xiangli,Sai Bi,Kai Zhang,Zexiang Xu,Jin Sun,Noah Snavely", "tags": "NIPS 2024,Poster", "abstract": "Single-image relighting is a challenging task that involves reasoning about the complex interplay between geometry, materials, and lighting. Many prior methods either support only specific categories of images, such as portraits, or require special capture conditions, like using a flashlight. Alternatively, some methods explicitly decompose a scene into intrinsic components, such as normals and BRDFs, which can be inaccurate or under-expressive. In this work, we propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer, that takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel environmental lighting condition, simply by conditioning an image generator on a target environment map, without an explicit scene decomposition. Our method builds on a pre-trained diffusion model, and fine-tunes it on a synthetic relighting dataset, revealing and harnessing the inherent understanding of lighting present in the diffusion model. We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy. 
Moreover, when combined with other generative methods, our model enables many downstream 2D tasks, such as text-based relighting and object insertion. Our model can also operate as a strong relighting prior for 3D tasks, such as relighting a radiance field.", "pdf": "https://openreview.net/pdf/71a526cee2abe808ec8027770fd2ee1ce6e3a7fc.pdf"} {"title": "Capturing the denoising effect of PCA via compression ratio", "url": "https://openreview.net/forum?id=a4J7nDLXEM", "detail_url": "https://openreview.net/forum?id=a4J7nDLXEM", "authors": "Chandra Sekhar Mukherjee,Nikhil Deorkar,Jiapeng Zhang", "tags": "NIPS 2024,Poster", "abstract": "Principal component analysis (PCA) is one of the most fundamental tools in machine learning with broad use as a dimensionality reduction and denoising tool. In the latter setting, while PCA is known to be effective at subspace recovery and is proven to aid clustering algorithms in some specific settings, its improvement of noisy data is still not well quantified in general. \n\nIn this paper, we propose a novel metric called *compression ratio* to capture the effect of PCA on high-dimensional noisy data.\nWe show that, for data with *underlying community structure*, PCA significantly reduces the distance of data points belonging to the same community while reducing inter-community distance relatively mildly. We explain this phenomenon through both theoretical proofs and experiments on real-world data. \n\nBuilding on this new metric, we design a straightforward algorithm that could be used to detect outliers. Roughly speaking, we argue that points that have a *lower variance of compression ratio* do not share a *common signal* with others (hence could be considered outliers).\n\nWe provide theoretical justification for this simple outlier detection algorithm and use simulations to demonstrate that our method is competitive with popular outlier detection tools. Finally, we run experiments on real-world high-dimensional noisy data (single-cell RNA-seq) to show that removing points from these datasets via our outlier detection method improves the accuracy of clustering algorithms. Our method is very competitive with popular outlier detection tools in this task.", "pdf": "https://openreview.net/pdf/5b12c9baaaa6c1042fd69715b5920453044b5f7a.pdf"} {"title": "GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation", "url": "https://openreview.net/forum?id=SXbyy0a3rY", "detail_url": "https://openreview.net/forum?id=SXbyy0a3rY", "authors": "Yuseung Lee,TaeHoon Yoon,Minhyuk Sung", "tags": "NIPS 2024,Poster", "abstract": "We introduce GrounDiT, a novel training-free spatial grounding technique for text-to-image generation using Diffusion Transformers (DiT). Spatial grounding with bounding boxes has gained attention for its simplicity and versatility, allowing for enhanced user control in image generation. However, prior training-free approaches often rely on updating the noisy image during the reverse diffusion process via backpropagation from custom loss functions, which frequently struggle to provide precise control over individual bounding boxes. In this work, we leverage the flexibility of the Transformer architecture, demonstrating that DiT can generate noisy patches corresponding to each bounding box, fully encoding the target object and allowing for fine-grained control over each region. Our approach builds on an intriguing property of DiT, which we refer to as semantic sharing. 
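A toy numpy illustration of the effect described in "Capturing the denoising effect of PCA via compression ratio" above, under simplified assumptions (two spherical Gaussian communities; the before/after distance ratio used here is an illustrative proxy for the paper's compression ratio, not its exact definition):

```python
# Sketch: PCA shrinks within-community distances far more than
# between-community distances for noisy data with community structure.
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 100
centers = rng.normal(0, 1, (2, d))
labels = np.repeat([0, 1], n // 2)
X = centers[labels] + rng.normal(0, 2.0, (n, d))   # high-dimensional noise

Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                                   # rank-2 PCA projection

def mean_dist(M, mask):
    D = np.linalg.norm(M[:, None] - M[None, :], axis=-1)
    return D[mask].mean()

same = labels[:, None] == labels[None, :]
diff = ~same                        # pairs from different communities
np.fill_diagonal(same, False)       # drop self-pairs

print(mean_dist(X, same) / mean_dist(Z, same))  # strong within-community compression
print(mean_dist(X, diff) / mean_dist(Z, diff))  # milder across communities
```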
Due to semantic sharing, when a smaller patch is jointly denoised alongside a generatable-size image, the two become semantic clones. Each patch is denoised in its own branch of the generation process and then transplanted into the corresponding region of the original noisy image at each timestep, resulting in robust spatial grounding for each bounding box. In our experiments on the HRS and DrawBench benchmarks, we achieve state-of-the-art performance compared to previous training-free approaches. Project Page: https://groundit-diffusion.github.io/.", "pdf": "https://openreview.net/pdf/5f41a28a6da94a036c8b0e1287ce2e32ed965cce.pdf"} {"title": "Pure Message Passing Can Estimate Common Neighbor for Link Prediction", "url": "https://openreview.net/forum?id=Xa3dVaolKo", "detail_url": "https://openreview.net/forum?id=Xa3dVaolKo", "authors": "Kaiwen Dong,Zhichun Guo,Nitesh V Chawla", "tags": "NIPS 2024,Poster", "abstract": "Message Passing Neural Networks (MPNNs) have emerged as the {\\em de facto} standard in graph representation learning. However, when it comes to link prediction, they are not always superior to simple heuristics such as Common Neighbor (CN). This discrepancy stems from a fundamental limitation: while MPNNs excel in node-level representation, they stumble with encoding the joint structural features essential to link prediction, like CN. To bridge this gap, we posit that, by harnessing the orthogonality of input vectors, pure message-passing can indeed capture joint structural features. Specifically, we study the proficiency of MPNNs in approximating CN heuristics. Based on our findings, we introduce the Message Passing Link Predictor (MPLP), a novel link prediction model. MPLP taps into quasi-orthogonal vectors to estimate link-level structural features, all while preserving the node-level complexities. We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods, establishing new state-of-the-art results.", "pdf": "https://openreview.net/pdf/c93a96deeeda73eedf77f0d343e57f23caed9586.pdf"} {"title": "Decomposable Transformer Point Processes", "url": "https://openreview.net/forum?id=OesteJF0ls", "detail_url": "https://openreview.net/forum?id=OesteJF0ls", "authors": "Aristeidis Panos", "tags": "NIPS 2024,Poster", "abstract": "The standard paradigm of modeling marked point processes is by parameterizing the intensity function using an attention-based (Transformer-style) architecture. Despite the flexibility of these methods, their inference is based on the computationally intensive thinning algorithm. In this work, we propose a framework where the advantages of the attention-based architecture are maintained and the limitation of the thinning algorithm is circumvented. The framework depends on modeling the conditional distribution of inter-event times with a mixture of log-normals satisfying a Markov property and the conditional probability mass function for the marks with a Transformer-based architecture. The proposed method attains state-of-the-art performance in predicting the next event of a sequence given its history. The experiments also reveal the efficacy of the methods that do not rely on the thinning algorithm during inference over the ones that do. 
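The core trick in "Pure Message Passing Can Estimate Common Neighbor" above can be checked in a few lines: with quasi-orthogonal random node signatures, one propagation step makes inner products approximate common-neighbor counts. A hedged numpy sketch (graph size, signature dimension, and edge density are arbitrary choices, not the paper's setup):

```python
# Sketch: estimating Common Neighbor (CN) counts with one round of message
# passing over quasi-orthogonal random node vectors.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 100, 2048
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric adjacency, no self-loops

# Quasi-orthogonal signatures: <x_u, x_v> ~ 0 for u != v, <x_u, x_u> ~ 1.
X = rng.normal(0, 1 / np.sqrt(dim), (n, dim))
H = A @ X                                    # h_u = sum of neighbors' signatures

u, v = 3, 7
cn_estimate = H[u] @ H[v]                    # ~ |N(u) ∩ N(v)|
cn_exact = int(A[u] @ A[v])
print(cn_estimate, cn_exact)
```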
Finally, we test our method on the challenging long-horizon prediction task and find that it outperforms a baseline developed specifically for tackling this task; importantly, inference requires just a fraction of the time compared to the thinning-based baseline.", "pdf": "https://openreview.net/pdf/a10120e98acfc0885e95d40eea84bd888fc79c6a.pdf"} {"title": "DCDepth: Progressive Monocular Depth Estimation in Discrete Cosine Domain", "url": "https://openreview.net/forum?id=463TE4N8VJ", "detail_url": "https://openreview.net/forum?id=463TE4N8VJ", "authors": "Kun Wang,Zhiqiang Yan,Junkai Fan,Wanlu Zhu,Xiang Li,Jun Li,Jian Yang", "tags": "NIPS 2024,Poster", "abstract": "In this paper, we introduce DCDepth, a novel framework for the long-standing monocular depth estimation task. Moving beyond conventional pixel-wise depth estimation in the spatial domain, our approach estimates the frequency coefficients of depth patches after transforming them into the discrete cosine domain. This unique formulation allows for the modeling of local depth correlations within each patch. Crucially, the frequency transformation segregates the depth information into various frequency components, with low-frequency components encapsulating the core scene structure and high-frequency components detailing the finer aspects. This decomposition forms the basis of our progressive strategy, which begins with the prediction of low-frequency components to establish a global scene context, followed by successive refinement of local details through the prediction of higher-frequency components. We conduct comprehensive experiments on NYU-Depth-V2, TOFDC, and KITTI datasets, and demonstrate the state-of-the-art performance of DCDepth. Code is available at https://github.com/w2kun/DCDepth.", "pdf": "https://openreview.net/pdf/acb75887f8b4431082b8caed4577820da731c7bd.pdf"} {"title": "Regret Minimization in Stackelberg Games with Side Information", "url": "https://openreview.net/forum?id=rPKCrzdqJx", "detail_url": "https://openreview.net/forum?id=rPKCrzdqJx", "authors": "Keegan Harris,Steven Wu,Maria Florina Balcan", "tags": "NIPS 2024,Poster", "abstract": "Algorithms for playing in Stackelberg games have been deployed in real-world domains including airport security, anti-poaching efforts, and cyber-crime prevention. However, these algorithms often fail to take into consideration the additional information available to each player (e.g. traffic patterns, weather conditions, network congestion), a salient feature of reality which may significantly affect both players' optimal strategies. We formalize such settings as Stackelberg games with side information, in which both players observe an external context before playing. The leader commits to a (context-dependent) strategy, and the follower best-responds to both the leader's strategy and the context. We focus on the online setting in which a sequence of followers arrive over time, and the context may change from round-to-round. In sharp contrast to the non-contextual version, we show that it is impossible for the leader to achieve good performance (measured by regret) in the full adversarial setting. 
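The frequency-domain intuition behind DCDepth above is easy to reproduce: the low-frequency DCT coefficients of a depth patch carry the coarse structure, while higher frequencies carry the fine details. A small scipy sketch (the 8x8 patch and the 2x2 cutoff are illustrative assumptions, not the paper's configuration):

```python
# Sketch: 2D DCT of a depth patch; keeping only low frequencies recovers the
# coarse structure, with details left to higher-frequency coefficients.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
patch = np.add.outer(np.linspace(1, 3, 8), np.linspace(0, 1, 8))  # smooth "depth"
patch += 0.05 * rng.normal(size=(8, 8))                            # fine detail

coeffs = dctn(patch, norm="ortho")
low = np.zeros_like(coeffs)
low[:2, :2] = coeffs[:2, :2]             # low-frequency components only
coarse = idctn(low, norm="ortho")        # coarse global structure

print(np.abs(patch - coarse).mean())     # residual carried by higher frequencies
```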
Motivated by our impossibility result, we show that no-regret learning is possible in two natural relaxations: the setting in which the sequence of followers is chosen stochastically and the sequence of contexts is adversarial, and the setting in which the sequence of contexts is stochastic and the sequence of followers is chosen by an adversary.", "pdf": "https://openreview.net/pdf/e13ba07995ed5c87b819586a30e5c574bc86beb6.pdf"} {"title": "Offline Reinforcement Learning with OOD State Correction and OOD Action Suppression", "url": "https://openreview.net/forum?id=anyZgGLQ6n", "detail_url": "https://openreview.net/forum?id=anyZgGLQ6n", "authors": "Yixiu Mao,Cheems Wang,Chen Chen,Yun Qu,Xiangyang Ji", "tags": "NIPS 2024,Poster", "abstract": "In offline reinforcement learning (RL), addressing the out-of-distribution (OOD) action issue has been a focus, but we argue that there exists an OOD state issue that also impairs performance yet has been underexplored. Such an issue describes the scenario when the agent encounters states out of the offline dataset during the test phase, leading to uncontrolled behavior and performance degradation. To this end, we propose SCAS, a simple yet effective approach that unifies OOD state correction and OOD action suppression in offline RL. Technically, SCAS achieves value-aware OOD state correction, capable of correcting the agent from OOD states to high-value in-distribution states. Theoretical and empirical results show that SCAS also exhibits the effect of suppressing OOD actions. On standard offline RL benchmarks, SCAS achieves excellent performance without additional hyperparameter tuning. Moreover, benefiting from its OOD state correction feature, SCAS demonstrates enhanced robustness against environmental perturbations.", "pdf": "https://openreview.net/pdf/01018e198743495525b06ad074fafebc37158db8.pdf"} {"title": "Looks Too Good To Be True: An Information-Theoretic Analysis of Hallucinations in Generative Restoration Models", "url": "https://openreview.net/forum?id=85tu7K06i3", "detail_url": "https://openreview.net/forum?id=85tu7K06i3", "authors": "Regev Cohen,Idan Kligvasser,Ehud Rivlin,Daniel Freedman", "tags": "NIPS 2024,Poster", "abstract": "The pursuit of high perceptual quality in image restoration has driven the development of revolutionary generative models, capable of producing results often visually indistinguishable from real data.\nHowever, as their perceptual quality continues to improve, these models also exhibit a growing tendency to generate hallucinations \u2013 realistic-looking details that do not exist in the ground truth images.\nHallucinations in these models create uncertainty about their reliability, raising major concerns about their practical application.\nThis paper investigates this phenomenon through the lens of information theory, revealing a fundamental tradeoff between uncertainty and perception. We rigorously analyze the relationship between these two factors, proving that the global minimal uncertainty in generative models grows in tandem with perception. \nIn particular, we define the inherent uncertainty of the restoration problem and show that attaining perfect perceptual quality entails at least twice this uncertainty. 
Additionally, we establish a relation between distortion, uncertainty and perception, through which we prove the aforementioned uncertainty-perception tradeoff induces the well-known perception-distortion tradeoff.\nWe demonstrate our theoretical findings through experiments with super-resolution and inpainting algorithms.\nThis work uncovers fundamental limitations of generative models in achieving both high perceptual quality and reliable predictions for image restoration. \nThus, we aim to raise awareness among practitioners about this inherent tradeoff, empowering them to make informed decisions and potentially prioritize safety over perceptual performance.", "pdf": "https://openreview.net/pdf/a446424c485c6a1dd2da97c353327bbbd016b0e5.pdf"} {"title": "Dissect Black Box: Interpreting for Rule-Based Explanations in Unsupervised Anomaly Detection", "url": "https://openreview.net/forum?id=h6o6qXLmHZ", "detail_url": "https://openreview.net/forum?id=h6o6qXLmHZ", "authors": "Yu Zhang,Ruoyu Li,Nengwu Wu,Qing Li,Xinhan Lin,Yang Hu,Tao Li,Yong Jiang", "tags": "NIPS 2024,Poster", "abstract": "In high-stakes sectors such as network security and IoT security, accurately distinguishing between normal and anomalous data is critical due to the significant implications for operational success and safety in decision-making. The complexity is exacerbated by the presence of unlabeled data and the opaque nature of black-box anomaly detection models, which obscure the rationale behind their predictions. In this paper, we present a novel method to interpret the decision-making processes of these models, which are essential for detecting malicious activities without labeled attack data. We put forward the Segmentation Clustering Decision Tree (SCD-Tree), designed to dissect and understand the structure of normal data distributions. The SCD-Tree integrates predictions from the anomaly detection model into its splitting criteria, enhancing the clustering process with the model's insights into anomalies. To further refine these segments, the Gaussian Boundary Delineation (GBD) algorithm is employed to define boundaries within each segmented distribution, effectively delineating normal from anomalous data points. In this way, this approach addresses the curse of dimensionality by segmenting high-dimensional data and ensures resilience to data drift and perturbations through flexible boundary fitting. We transform the intricate operations of anomaly detection into an interpretable rule format, constructing a comprehensive set of rules for understanding. 
Our method's evaluation on diverse datasets and models demonstrates superior explanation accuracy, fidelity, and robustness over existing methods, proving its efficacy in environments where interpretability is paramount.", "pdf": "https://openreview.net/pdf/f79d37e9ef3c413f8c7bc9295d7d22d622136749.pdf"} {"title": "Virtual Scanning: Unsupervised Non-line-of-sight Imaging from Irregularly Undersampled Transients", "url": "https://openreview.net/forum?id=R4IBZrSF5d", "detail_url": "https://openreview.net/forum?id=R4IBZrSF5d", "authors": "Xingyu Cui,Huanjing Yue,Song Li,Xiangjun Yin,Yusen Hou,Yun Meng,Kai Zou,Xiaolong Hu,Jingyu Yang", "tags": "NIPS 2024,Poster", "abstract": "Non-line-of-sight (NLOS) imaging allows for seeing hidden scenes around corners through active sensing.\nMost previous algorithms for NLOS reconstruction require dense transients acquired through regular scans over a large relay surface, which limits their applicability in realistic scenarios with irregular relay surfaces.\nIn this paper, we propose an unsupervised learning-based framework for NLOS imaging from irregularly undersampled transients~(IUT).\nOur method learns implicit priors from noisy irregularly undersampled transients without requiring paired data, which is difficult and expensive to acquire and align. \nTo overcome the ambiguity of the measurement consistency constraint in inferring the albedo volume, we design a virtual scanning process that enables the network to learn within both range and null spaces for high-quality reconstruction.\nWe devise a physics-guided SURE-based denoiser to enhance robustness to ubiquitous noise in low-photon imaging conditions. \nExtensive experiments on both simulated and real-world data validate the performance and generalization of our method.\nCompared with the state-of-the-art (SOTA) method, our method achieves higher fidelity, greater robustness, and remarkably faster inference times by orders of magnitude.\nThe code and model are available at https://github.com/XingyuCuii/Virtual-Scanning-NLOS.", "pdf": "https://openreview.net/pdf/b72b54105c6a7b18badf1987cf8f7e3c30e18845.pdf"} {"title": "FUG: Feature-Universal Graph Contrastive Pre-training for Graphs with Diverse Node Features", "url": "https://openreview.net/forum?id=VUuOsBrqaw", "detail_url": "https://openreview.net/forum?id=VUuOsBrqaw", "authors": "Jitao Zhao,Di Jin,Meng Ge,Lianze Shan,Xin Wang,Dongxiao He,Zhiyong Feng", "tags": "NIPS 2024,Poster", "abstract": "Graph Neural Networks (GNNs), known for their effective graph encoding, are extensively used across various fields. Graph self-supervised pre-training, which trains GNN encoders without manual labels to generate high-quality graph representations, has garnered widespread attention. However, due to the inherent complex characteristics in graphs, GNN encoders pre-trained on one dataset struggle to directly adapt to others that have different node feature shapes. This typically necessitates either model rebuilding or data alignment. The former results in non-transferability as each dataset requires rebuilding a new model, while the latter brings serious knowledge loss since it forces features into a uniform shape by preprocessing such as Principal Component Analysis (PCA). To address this challenge, we propose a new Feature-Universal Graph contrastive pre-training strategy (FUG) that naturally avoids the need for model rebuilding and data reshaping. 
Specifically, inspired by discussions in existing work on the relationship between contrastive learning and PCA, we conducted a theoretical analysis and discovered that PCA's optimization objective is a special case of that in contrastive learning. We designed an encoder with contrastive constraints to emulate PCA's generation of the basis transformation matrix, which is utilized to losslessly adapt features in different datasets. Furthermore, we introduced a global uniformity constraint to replace negative sampling, reducing the time complexity from $O(n^2)$ to $O(n)$, and by explicitly defining positive samples, FUG avoids the substantial memory requirements of data augmentation. In cross-domain experiments, FUG achieves performance close to that of newly re-trained models. The source code is available at: https://github.com/hedongxiao-tju/FUG.", "pdf": "https://openreview.net/pdf/256252442a818a4c82c27adacf5fc76f16338e18.pdf"} {"title": "Promoting Fairness Among Dynamic Agents in Online-Matching Markets under Known Stationary Arrival Distributions", "url": "https://openreview.net/forum?id=0C3bLHwjsY", "detail_url": "https://openreview.net/forum?id=0C3bLHwjsY", "authors": "Will Ma,Pan Xu", "tags": "NIPS 2024,Poster", "abstract": "Online (bipartite) matching under known stationary arrivals is a fundamental model that has been studied extensively under the objective of maximizing the total number of customers served. We instead study the objective of *maximizing the minimum matching rate across all online types*, which is referred to as long-run (individual) fairness. For Online Matching under long-run Fairness (OM-LF) with a single offline agent, we show that the first-come-first-serve (FCFS) policy is $1$-competitive, i.e., matching any optimal clairvoyant. For the general case of OM-LF: We present a sampling algorithm (SAMP) and show that (1) SAMP achieves a competitiveness of at least $1-1/e$ and (2) it is asymptotically optimal, with competitiveness approaching one in different regimes, when either all offline agents have a sufficiently large matching capacity, or all online types have a sufficiently large arrival rate, or there is a high imbalance between the total offline matching capacity and the number of online arrivals. To complement the competitive results, we show the following hardness results for OM-LF: (1) Any non-rejecting policy (matching every arriving online agent if possible) is no more than $1/2$-competitive; (2) Any (randomized) policy is no more than $(\sqrt{3}-1)$-competitive; (3) SAMP can be no more than $(1-1/e)$-competitive, suggesting the tightness of competitive analysis for SAMP. We stress that all hardness results mentioned here are independent of any benchmarks. We also consider a few extensions of OM-LF by proposing a few variants of fairness metrics, including long-run group-level fairness and short-run fairness, and we devise related algorithms with provable competitive performance.", "pdf": "https://openreview.net/pdf/61ecba74434d4fc5596cb759256dd94610bf7414.pdf"} {"title": "Persistent Homology for High-dimensional Data Based on Spectral Methods", "url": "https://openreview.net/forum?id=ARV1gJSOzV", "detail_url": "https://openreview.net/forum?id=ARV1gJSOzV", "authors": "Sebastian Damrich,Philipp Berens,Dmitry Kobak", "tags": "NIPS 2024,Poster", "abstract": "Persistent homology is a popular computational tool for analyzing the topology of point clouds, such as the presence of loops or voids. 
However, many real-world datasets with low intrinsic dimensionality reside in an ambient space of much higher dimensionality. We show that in this case traditional persistent homology becomes very sensitive to noise and fails to detect the correct topology. The same holds true for existing refinements of persistent homology. As a remedy, we find that spectral distances on the k-nearest-neighbor graph of the data, such as diffusion distance and effective resistance, make it possible to detect the correct topology even in the presence of high-dimensional noise. Moreover, we derive a novel closed-form formula for effective resistance, and describe its relation to diffusion distances. Finally, we apply these methods to high-dimensional single-cell RNA-sequencing data and show that spectral distances allow robust detection of cell cycle loops.", "pdf": "https://openreview.net/pdf/0304aa2903e7824763c56dab22035bb48cc24a65.pdf"} {"title": "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?", "url": "https://openreview.net/forum?id=FbuODM02ra", "detail_url": "https://openreview.net/forum?id=FbuODM02ra", "authors": "Zhanke Zhou,Rong Tao,Jianing Zhu,Yiwen Luo,Zengmao Wang,Bo Han", "tags": "NIPS 2024,Poster", "abstract": "This paper investigates an under-explored challenge in large language models (LLMs): chain-of-thought prompting with noisy rationales, which include irrelevant or inaccurate reasoning thoughts within examples used for in-context learning. We construct the NoRa dataset that is tailored to evaluate the robustness of reasoning in the presence of noisy rationales. Our findings on the NoRa dataset reveal a prevalent vulnerability to such noise among current LLMs, with existing robust methods like self-correction and self-consistency showing limited efficacy. Notably, compared to prompting with clean rationales, the base LLM drops by 1.4%-19.8% in accuracy with irrelevant thoughts and more drastically by 2.2%-40.4% with inaccurate thoughts.\n\nAddressing this challenge necessitates external supervision that should be accessible in practice. Here, we propose the method of contrastive denoising with noisy chain-of-thought (CD-CoT). It enhances LLMs' denoising-reasoning capabilities by contrasting noisy rationales with only one clean rationale, which can be the minimal requirement for denoising-purpose prompting. This method follows a principle of exploration and exploitation: (1) rephrasing and selecting rationales in the input space to achieve explicit denoising and (2) exploring diverse reasoning paths and voting on answers in the output space. Empirically, CD-CoT demonstrates an average improvement of 17.8% in accuracy over the base model and shows significantly stronger denoising capabilities than baseline methods. The source code is publicly available at: https://github.com/tmlr-group/NoisyRationales.", "pdf": "https://openreview.net/pdf/94f7cc3be391186d5d069155d197bf808120d802.pdf"} {"title": "ENAT: Rethinking Spatial-temporal Interactions in Token-based Image Synthesis", "url": "https://openreview.net/forum?id=PhsYFyTeHr", "detail_url": "https://openreview.net/forum?id=PhsYFyTeHr", "authors": "Zanlin Ni,Yulin Wang,Renping Zhou,Yizeng Han,Jiayi Guo,Zhiyuan Liu,Yuan Yao,Gao Huang", "tags": "NIPS 2024,Poster", "abstract": "Recently, token-based generation approaches have demonstrated their effectiveness in synthesizing visual content. As a representative example, non-autoregressive Transformers (NATs) can generate decent-quality images in just a few steps. 
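For the spectral distances discussed in "Persistent Homology for High-dimensional Data" above, effective resistance has a standard closed form via the Laplacian pseudoinverse, R_uv = (e_u - e_v)^T L^+ (e_u - e_v); the paper's own closed-form formula is not reproduced here. A minimal sketch on a toy graph:

```python
# Sketch: effective resistance between graph nodes from the pseudoinverse of
# the graph Laplacian.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy undirected graph
L = np.diag(A.sum(1)) - A                    # graph Laplacian
Lp = np.linalg.pinv(L)                       # Moore-Penrose pseudoinverse

def effective_resistance(u, v):
    e = np.zeros(len(A)); e[u], e[v] = 1.0, -1.0
    return e @ Lp @ e

print(effective_resistance(0, 3))            # peripheral pair: larger resistance
print(effective_resistance(0, 1))            # well-connected pair: smaller
```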
NATs perform generation in a progressive manner, where the latent tokens of a resulting image are incrementally revealed step-by-step. At each step, the unrevealed image regions are padded with [MASK] tokens and inferred by NAT, with the most reliable predictions preserved as newly revealed, visible tokens. In this paper, we delve into understanding the mechanisms behind the effectiveness of NATs and uncover two important interaction patterns that naturally emerge from NAT’s paradigm: Spatially (within a step), although [MASK] and visible tokens are processed uniformly by NATs, the interactions between them are highly asymmetric. Specifically, [MASK] tokens mainly gather information for decoding. On the contrary, visible tokens tend to primarily provide information, and their deep representations can be built only upon themselves. Temporally (across steps), the interactions between adjacent generation steps mostly concentrate on updating the representations of a few critical tokens, while the computation for the majority of tokens is generally repetitive. Driven by these findings, we propose EfficientNAT (ENAT), a NAT model that explicitly encourages these critical interactions inherent in NATs. At the spatial level, we disentangle the computations of visible and [MASK] tokens by encoding visible tokens independently, while decoding [MASK] tokens conditioned on the fully encoded visible tokens. At the temporal level, we prioritize the computation of the critical tokens at each step, while maximally reusing previously computed token representations to supplement necessary information. ENAT improves the performance of NATs notably with significantly reduced computational cost. Experiments on ImageNet-$256^2$ & $512^2$ and MS-COCO validate the effectiveness of ENAT. Code and pre-trained models will be released at https://github.com/LeapLabTHU/ENAT.", "pdf": "https://openreview.net/pdf/78a7b4a305548083b1b20912d1ab565c09cfe1d8.pdf"} {"title": "Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba", "url": "https://openreview.net/forum?id=pCJ0l1JVUX", "detail_url": "https://openreview.net/forum?id=pCJ0l1JVUX", "authors": "Haoye Dong,Aviral Chharia,Wenbo Gou,Francisco Vicente Carrasco,Fernando De la Torre", "tags": "NIPS 2024,Poster", "abstract": "3D Hand reconstruction from a single RGB image is challenging due to the articulated motion, self-occlusion, and interaction with objects. Existing SOTA methods employ attention-based transformers to learn the 3D hand pose and shape, yet they do not fully achieve robust and accurate performance, primarily due to inefficiently modeling spatial relations between joints. To address this problem, we propose a novel graph-guided Mamba framework, named Hamba, which bridges graph learning and state space modeling. Our core idea is to reformulate Mamba's scanning into graph-guided bidirectional scanning for 3D reconstruction using a few effective tokens. This enables us to efficiently learn the spatial relationships between joints for improving reconstruction performance. Specifically, we design a Graph-guided State Space (GSS) block that learns the graph-structured relations and spatial sequences of joints and uses 88.5\\% fewer tokens than attention-based methods. Additionally, we integrate the state space features and the global features using a fusion module. 
By utilizing the GSS block and the fusion module, Hamba effectively leverages the graph-guided state space features and jointly considers global and local features to improve performance. Experiments on several benchmarks and in-the-wild tests demonstrate that Hamba significantly outperforms existing SOTAs, achieving a PA-MPVPE of 5.3mm and an F@15mm of 0.992 on FreiHAND. At the time of this paper's acceptance, Hamba holds the top position, Rank 1, in two competition leaderboards on 3D hand reconstruction.", "pdf": "https://openreview.net/pdf/52f78a66bec9fab9154b68eb2413119039daa6f5.pdf"} {"title": "How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization?", "url": "https://openreview.net/forum?id=O4RCFjVUBJ", "detail_url": "https://openreview.net/forum?id=O4RCFjVUBJ", "authors": "Jiahua Dong,Wenqi Liang,Hongliu Li,Duzhen Zhang,Meng Cao,Henghui Ding,Salman Khan,Fahad Khan", "tags": "NIPS 2024,Poster", "abstract": "Custom diffusion models (CDMs) have attracted widespread attention due to their astonishing generative ability for personalized concepts. However, most existing CDMs unreasonably assume that personalized concepts are fixed and cannot change over time. Moreover, they heavily suffer from catastrophic forgetting and concept neglect on old personalized concepts when continually learning a series of new concepts. To address these challenges, we propose a novel Concept-Incremental text-to-image Diffusion Model (CIDM), which can resolve catastrophic forgetting and concept neglect to learn new customization tasks in a concept-incremental manner. Specifically, to surmount the catastrophic forgetting of old concepts, we develop a concept consolidation loss and an elastic weight aggregation module. They can explore task-specific and task-shared knowledge during training, and aggregate all low-rank weights of old concepts based on their contributions during inference. Moreover, in order to address concept neglect, we devise a context-controllable synthesis strategy that leverages expressive region features and noise estimation to control the contexts of generated images according to user conditions. Experiments validate that our CIDM surpasses existing custom diffusion models. The source codes are available at https://github.com/JiahuaDong/CIFC.", "pdf": "https://openreview.net/pdf/67131e63dd56830e063a384efe77e0eebf961221.pdf"} {"title": "Tell What You Hear From What You See - Video to Audio Generation Through Text", "url": "https://openreview.net/forum?id=kr7eN85mIT", "detail_url": "https://openreview.net/forum?id=kr7eN85mIT", "authors": "Xiulong Liu,Kun Su,Eli Shlizerman", "tags": "NIPS 2024,Poster", "abstract": "The content of visual and audio scenes is multi-faceted such that a video stream can\nbe paired with various audio streams and vice-versa. Therefore, in the video-to-audio\ngeneration task, it is imperative to introduce steering approaches for controlling the\ngenerated audio. While Video-to-Audio generation is a well-established generative\ntask, existing methods lack such controllability. In this work, we propose VATT, a\nmulti-modal generative framework that takes a video and an optional text prompt\nas input, and generates audio and an optional textual description (caption) of the\naudio. Such a framework has two unique advantages: i) the Video-to-Audio generation\nprocess can be refined and controlled via text which complements the context\nof the visual information, and ii) the model can suggest what audio to generate\nfor the video by generating audio captions. 
VATT consists of two key modules:\nVATT Converter, which is an LLM that has been fine-tuned for instructions and\nincludes a projection layer that maps video features to the LLM vector space, and\nVATT Audio, a bi-directional transformer that generates audio tokens from visual\nframes and from an optional text prompt using iterative parallel decoding. The audio\ntokens and the text prompt are used by a pretrained neural codec to convert them\ninto a waveform. Our experiments show that when VATT is compared to existing\nvideo-to-audio generation methods on objective metrics over the VGGSound audiovisual dataset, it achieves competitive performance when the audio caption is\nnot provided. When the audio caption is provided as a prompt, VATT achieves\neven more refined performance (with the lowest KLD score of 1.41). Furthermore,\nsubjective studies asking participants to choose the most compatible generated\naudio for a given silent video show that VATT Audio has on average been chosen\nas preferred over the audio generated by existing methods. VATT\nenables controllable video-to-audio generation through text as well as suggesting\ntext prompts for videos through audio captions, unlocking novel applications such\nas text-guided video-to-audio generation and video-to-audio captioning.", "pdf": "https://openreview.net/pdf/7f048a863a36075771abbaa28445e76c2233bd97.pdf"} {"title": "CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching", "url": "https://openreview.net/forum?id=OW1ldvMNJ6", "detail_url": "https://openreview.net/forum?id=OW1ldvMNJ6", "authors": "Dongzhi Jiang,Guanglu Song,Xiaoshi Wu,Renrui Zhang,Dazhong Shen,Zhuofan Zong,Yu Liu,Hongsheng Li", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models have demonstrated great success in the field of text-to-image generation. However, alleviating the misalignment between the text prompts and images is still challenging. We break down the problem into two causes: concept ignorance and concept mismapping. To tackle the two challenges, we propose CoMat, an end-to-end diffusion model fine-tuning strategy with the image-to-text concept matching mechanism. Firstly, we introduce a novel image-to-text concept activation module to guide the diffusion model in revisiting ignored concepts. Additionally, an attribute concentration module is proposed to map the text conditions of each entity to its corresponding image area correctly. Extensive experimental evaluations, conducted across three distinct text-to-image alignment benchmarks, demonstrate the superior efficacy of our proposed method, CoMat-SDXL, over the baseline model, SDXL~\cite{podell2023sdxl}. We also show that our method enhances general condition utilization capability and generalizes to long and complex prompts despite not being specifically trained on them.", "pdf": "https://openreview.net/pdf/0c98a79059497a48f919b190dfc4dd3bbfd6a8de.pdf"} {"title": "Mining and Transferring Feature-Geometry Coherence for Unsupervised Point Cloud Registration", "url": "https://openreview.net/forum?id=OCcfKzXded", "detail_url": "https://openreview.net/forum?id=OCcfKzXded", "authors": "KeZheng Xiong,Haoen Xiang,Qingshan Xu,Chenglu Wen,Siqi Shen,Jonathan Li,Cheng Wang", "tags": "NIPS 2024,Poster", "abstract": "Point cloud registration, a fundamental task in 3D vision, has achieved remarkable success with learning-based methods in outdoor environments. 
Unsupervised outdoor point cloud registration methods have recently emerged to circumvent the need for costly pose annotations. However, they fail to establish reliable optimization objectives for unsupervised training, either relying on overly strong geometric assumptions, or suffering from poor-quality pseudo-labels due to inadequate integration of low-level geometric and high-level contextual information. We have observed that in the feature space, latent new inlier correspondences tend to cluster\naround respective positive anchors that summarize features of existing inliers. Motivated by this observation, we propose a novel unsupervised registration method termed INTEGER to incorporate high-level contextual information for reliable pseudo-label mining. Specifically, we propose the Feature-Geometry Coherence Mining module to dynamically adapt the teacher for each mini-batch of data during training and discover reliable pseudo-labels by considering both high-level feature representations and low-level geometric cues. Furthermore, we propose Anchor-Based Contrastive Learning to facilitate contrastive learning with anchors for a robust feature space. Lastly, we introduce a Mixed-Density Student to learn density-invariant features, addressing challenges related to density variation and low overlap in the outdoor scenario. Extensive experiments on KITTI and nuScenes datasets demonstrate that our INTEGER achieves competitive performance in terms of accuracy and generalizability.", "pdf": "https://openreview.net/pdf/2d3d9d652848108ab51adfd4eaabb3eb114e4aaa.pdf"} {"title": "Decentralized Noncooperative Games with Coupled Decision-Dependent Distributions", "url": "https://openreview.net/forum?id=KqgSzXbufw", "detail_url": "https://openreview.net/forum?id=KqgSzXbufw", "authors": "Wenjing Yan,Xuanyu Cao", "tags": "NIPS 2024,Poster", "abstract": "Distribution variations in machine learning, driven by the dynamic nature of deployment environments, significantly impact the performance of learning models. This paper explores endogenous distribution shifts in learning systems, where deployed models influence environments and subsequently alter data distributions. This phenomenon is formulated by a decision-dependent distribution mapping within the recently proposed framework of performative prediction (PP) (Perdomo et al., 2020). We investigate the performative effect in a decentralized noncooperative game, where players aim to minimize private cost functions while simultaneously managing coupled inequality constraints. Under performativity, we examine two equilibrium concepts for the studied game: performative stable equilibrium (PSE) and Nash equilibrium (NE), and establish sufficient conditions for their existence and uniqueness. Notably, we provide the first upper bound on the distance between the PSE and NE in the literature, which is challenging to evaluate due to the absence of strong convexity on the joint cost function. Furthermore, we develop a decentralized stochastic primal-dual algorithm for efficiently computing the PSE point. By carefully bounding the performative effect in theoretical analysis, we prove that the proposed algorithm achieves sublinear convergence rates for both performative regrets and constraint violation and maintains the same order of convergence rate as the case without performativity. 
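As background for the performative-prediction setting in "Decentralized Noncooperative Games with Coupled Decision-Dependent Distributions" above, the single-player toy below (an assumption-laden simplification, not the paper's decentralized game) shows repeated retraining converging to a performatively stable point rather than the static optimum:

```python
# Toy illustration of performativity: the deployed parameter theta shifts the
# data distribution, and repeated risk minimization reaches a fixed point.
import numpy as np

eps = 0.4      # strength of the decision-dependent shift (contraction if < 1)
mu0 = 1.0      # base mean of the outcome distribution

def best_response(theta):
    # argmin_t E_{y ~ N(mu0 + eps*theta, 1)} (t - y)^2  =  mu0 + eps*theta
    return mu0 + eps * theta

theta = 0.0
for _ in range(20):
    theta = best_response(theta)   # retrain on data induced by current theta

# Performatively stable point solves theta = mu0 + eps*theta.
print(theta, mu0 / (1 - eps))
```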
Numerical experiments validate the effectiveness of our algorithm and theoretical results.", "pdf": "https://openreview.net/pdf/33d736c9ba7af1de9cf4122ca82d02c113d06c5f.pdf"} {"title": "Persistent Test-time Adaptation in Recurring Testing Scenarios", "url": "https://openreview.net/forum?id=ffeUBoTcdS", "detail_url": "https://openreview.net/forum?id=ffeUBoTcdS", "authors": "Trung-Hieu Hoang,MinhDuc Vo,Minh N. Do", "tags": "NIPS 2024,Poster", "abstract": "Current test-time adaptation (TTA) approaches aim to adapt a machine learning model to environments that change continuously. Yet, it is unclear whether TTA methods can maintain their adaptability over prolonged periods. To answer this question, we introduce a diagnostic setting - **recurring TTA** where environments not only change but also recur over time, creating an extensive data stream. This setting allows us to examine the error accumulation of TTA models, in the most basic scenario, when they are regularly exposed to previous testing environments. Furthermore, we simulate a TTA process on a simple yet representative $\epsilon$-**perturbed Gaussian Mixture Model Classifier**, deriving theoretical insights into the dataset- and algorithm-dependent factors contributing to gradual performance degradation. Our investigation leads us to propose **persistent TTA (PeTTA)**, which senses when the model is diverging towards collapse and adjusts the adaptation strategy, striking a balance between the dual objectives of adaptation and model collapse prevention. The superior stability of PeTTA over existing approaches, in the face of lifelong TTA scenarios, has been demonstrated through comprehensive experiments on various benchmarks. Our project page is available at [https://hthieu166.github.io/petta](https://hthieu166.github.io/petta).", "pdf": "https://openreview.net/pdf/d879bd7ac3eb99ca83ba4c72c4d99ede7ae47703.pdf"} {"title": "SuperVLAD: Compact and Robust Image Descriptors for Visual Place Recognition", "url": "https://openreview.net/forum?id=bZpZMdY1sj", "detail_url": "https://openreview.net/forum?id=bZpZMdY1sj", "authors": "Feng Lu,Xinyao Zhang,Canming Ye,Shuting Dong,Lijun Zhang,Xiangyuan Lan,Chun Yuan", "tags": "NIPS 2024,Poster", "abstract": "Visual place recognition (VPR) is an essential task for multiple applications such as augmented reality and robot localization. Over the past decade, the mainstream approach in the VPR area has been to use feature representations based on global aggregation, as exemplified by NetVLAD. These features are suitable for large-scale VPR and robust against viewpoint changes. However, the VLAD-based aggregation methods usually learn a large number of (e.g., 64) clusters and their corresponding cluster centers, which directly leads to a high dimension of the yielded global features. More importantly, when there is a domain gap between the data in training and inference, the cluster centers determined on the training set are usually improper for inference, resulting in a performance drop. To this end, we first attempt to improve NetVLAD by removing the cluster centers and using only a small number of (e.g., only 4) clusters. The proposed method not only simplifies NetVLAD but also enhances the generalizability across different domains. We name this method SuperVLAD. In addition, by introducing ghost clusters that will not be retained in the final output, we further propose a very low-dimensional 1-Cluster VLAD descriptor, which has the same dimension as the output of GeM pooling but performs notably better. 
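For context on the SuperVLAD abstract above, a plain-VLAD aggregation sketch shows where the cluster centers and the K x D output dimension come from; SuperVLAD's center-free, few-cluster variant is not reproduced here, and all sizes are illustrative:

```python
# Sketch: standard VLAD aggregation of local descriptors as residuals to
# cluster centers, yielding a K*D-dimensional global descriptor.
import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 128))   # local features of one image
centers = rng.normal(size=(64, 128))        # K=64 cluster centers (NetVLAD-style)

# Hard-assign each descriptor to its nearest center.
assign = np.argmin(
    ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)

vlad = np.zeros_like(centers)
for k in range(len(centers)):
    members = descriptors[assign == k]
    if len(members):
        vlad[k] = (members - centers[k]).sum(0)   # residual aggregation

vlad = vlad.ravel()
vlad /= np.linalg.norm(vlad) + 1e-12
print(vlad.shape)                                  # (8192,) = 64 * 128
```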
Experimental results suggest that, when paired with a transformer-based backbone, our SuperVLAD shows better domain generalization performance than NetVLAD with significantly fewer parameters. The proposed method also surpasses state-of-the-art methods with lower feature dimensions on several benchmark datasets. The code is available at https://github.com/lu-feng/SuperVLAD.", "pdf": "https://openreview.net/pdf/b2ca1159dc18061891ff4dc5f402dc312e18ad9b.pdf"} {"title": "Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk", "url": "https://openreview.net/forum?id=XKrSB5a79F", "detail_url": "https://openreview.net/forum?id=XKrSB5a79F", "authors": "Yuzhou Gu,Nikki Lijing Kuang,Yian Ma,Zhao Song,Lichen Zhang", "tags": "NIPS 2024,Poster", "abstract": "We consider the problem of sampling from a $d$-dimensional log-concave distribution $\pi(\theta) \propto \exp(-f(\theta))$ for $L$-Lipschitz $f$, constrained to a convex body (described by $n$ hyperplanes) equipped with a barrier function, contained in a ball of radius $R$ with a $w$-warm start. \n\nWe propose a \\emph{robust} sampling framework that computes spectral approximations to the Hessian of the barrier functions in each iteration. We prove that for the polytope constraints, sampling with the Lee-Sidford barrier function mixes within $\widetilde O((d^2+dL^2R^2)\log(w/\delta))$ steps with a per step cost of $\widetilde O(nd^{\omega-1})$, where $\omega\approx 2.37$ is the fast matrix multiplication exponent. Compared to the prior work of Mangoubi and Vishnoi, our approach gives a faster mixing time as we are able to design a generalized soft-threshold Dikin walk beyond log-barrier.\n\nWe further extend our result to show how to sample from a $d$-dimensional spectrahedron, the constrained set of a semidefinite program, specified by the set $\{x\in \mathbb{R}^d: \sum_{i=1}^d x_i A_i \succeq C \}$ where $A_1,\ldots,A_d, C$ are $n\times n$ real symmetric matrices. We design a walk that mixes in $\widetilde O((nd+dL^2R^2)\log(w/\delta))$ steps with a per iteration cost of $\widetilde O(n^\omega+n^2d^{3\omega-5})$. We improve the mixing time bound of the prior best Dikin walk due to Narayanan and Rakhlin that mixes in $\widetilde O((n^2d^3+n^2dL^2R^2)\log(w/\delta))$ steps.", "pdf": "https://openreview.net/pdf/3006a03bb05e9cf8552f144aa6d4dad0635ab146.pdf"} {"title": "MultiPull: Detailing Signed Distance Functions by Pulling Multi-Level Queries at Multi-Step", "url": "https://openreview.net/forum?id=XxE8mL1bCO", "detail_url": "https://openreview.net/forum?id=XxE8mL1bCO", "authors": "Takeshi Noda,Chao Chen,Weiqi Zhang,Xinhai Liu,Yu-Shen Liu,Zhizhong Han", "tags": "NIPS 2024,Poster", "abstract": "Reconstructing a continuous surface from a raw 3D point cloud is a challenging task. The latest methods employ supervised learning or pretrained priors to learn a signed distance function (SDF). However, neural networks tend to smooth local details due to the lack of ground truth signed distances or normals, which limits the performance of learning-based methods in reconstruction tasks. To resolve this issue, we propose a novel method, named MultiPull, to learn multi-scale implicit fields from raw point clouds to optimize accurate SDFs from coarse to fine. We achieve this by mapping 3D query points into a set of frequency features, which makes it possible to leverage multi-level features during optimization. 
Meanwhile, we introduce optimization constraints from the perspective of spatial distance and normal consistency, which play a key role in point cloud reconstruction based on multi-scale optimization strategies. Our experiments on widely used object and scene benchmarks demonstrate that our method outperforms the state-of-the-art methods in surface reconstruction.", "pdf": "https://openreview.net/pdf/9a52a3f25ec1794f481de9877614163c9bbb4a2c.pdf"} {"title": "Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation", "url": "https://openreview.net/forum?id=1qfdCAXn6K", "detail_url": "https://openreview.net/forum?id=1qfdCAXn6K", "authors": "Jiaming Lv,Haoyuan Yang,Peihua Li", "tags": "NIPS 2024,Poster", "abstract": "Since the pioneering work of Hinton et al., knowledge distillation based on Kullback-Leibler Divergence (KL-Div) has been predominant, and recently its variants have achieved compelling performance. However, KL-Div only compares probabilities of the corresponding category between the teacher and student while lacking a mechanism for cross-category comparison. Besides, KL-Div is problematic when applied to intermediate layers, as it cannot handle non-overlapping distributions and is unaware of the geometry of the underlying manifold. To address these downsides, we propose a methodology of Wasserstein Distance (WD) based knowledge distillation. Specifically, we propose a logit distillation method called WKD-L based on discrete WD, which performs cross-category comparison of probabilities and thus can explicitly leverage rich interrelations among categories. Moreover, we introduce a feature distillation method called WKD-F, which uses a parametric method for modeling feature distributions and adopts continuous WD for transferring knowledge from intermediate layers. Comprehensive evaluations on image classification and object detection have shown (1) for logit distillation WKD-L outperforms very strong KL-Div variants; (2) for feature distillation WKD-F is superior to the KL-Div counterparts and state-of-the-art competitors.", "pdf": "https://openreview.net/pdf/3f60614b48791f8d973bd890fc769f6d16bb2fb2.pdf"} {"title": "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models", "url": "https://openreview.net/forum?id=dkpmfIydrF", "detail_url": "https://openreview.net/forum?id=dkpmfIydrF", "authors": "Yimeng Zhang,Xin Chen,Jinghan Jia,Yihua Zhang,Chongyu Fan,Jiancheng Liu,Mingyi Hong,Ke Ding,Sijia Liu", "tags": "NIPS 2024,Poster", "abstract": "Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but they also pose safety risks, such as the potential generation of harmful content and copyright violations. The techniques of machine unlearning, also known as concept erasing, have been developed to address these risks. However, these techniques remain vulnerable to adversarial prompt attacks, which can prompt DMs post-unlearning to regenerate undesired images containing concepts (such as nudity) meant to be erased. This work aims to enhance the robustness of concept erasing by integrating the principle of adversarial training (AT) into machine unlearning, resulting in the robust unlearning framework referred to as AdvUnlearn. However, achieving this effectively and efficiently is highly nontrivial. First, we find that a straightforward implementation of AT compromises DMs’ image generation quality post-unlearning. 
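The cross-category point in the WKD abstract above can be made concrete: KL-Div compares probabilities class-by-class, while a discrete Wasserstein distance also accounts for how far probability mass moves across classes. A minimal scipy sketch (the 1D class-index ground metric is an illustrative assumption, not the paper's interrelation-based metric):

```python
# Sketch: KL divergence vs. a discrete Wasserstein distance between teacher
# and student class probabilities.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.special import softmax, rel_entr

classes = np.arange(5)
teacher = softmax(np.array([2.0, 1.0, 0.2, -1.0, -1.5]))
student = softmax(np.array([1.5, 1.2, 0.1, -0.8, -1.7]))

kl = rel_entr(teacher, student).sum()          # category-wise comparison only
wd = wasserstein_distance(classes, classes,    # also sees cross-category mass
                          teacher, student)    # transport over class indices
print(kl, wd)
```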
To address this, we develop a utility-retaining regularization on an additional retain set, optimizing the trade-off between concept erasure robustness and model utility in AdvUnlearn. Moreover, we identify the text encoder as a more suitable module for robustification compared to UNet, ensuring unlearning effectiveness; the acquired text encoder can then serve as a plug-and-play robust unlearner for various DM types. Empirically, we perform extensive experiments to demonstrate the robustness advantage of AdvUnlearn across various DM unlearning scenarios, including the erasure of nudity, objects, and style concepts. In addition to robustness, AdvUnlearn also achieves a balanced tradeoff with model utility. To our knowledge, this is the first work to systematically explore robust DM unlearning through AT, setting it apart from existing methods that overlook robustness in concept erasing. Codes are available at https://github.com/OPTML-Group/AdvUnlearn.\n\nWarning: This paper contains model outputs that may be offensive in nature.", "pdf": "https://openreview.net/pdf/e0287c1548c1151a6ee44ee07a60f1c2af5d5684.pdf"} {"title": "Learnability of high-dimensional targets by two-parameter models and gradient flow", "url": "https://openreview.net/forum?id=8XoWofmZkI", "detail_url": "https://openreview.net/forum?id=8XoWofmZkI", "authors": "Dmitry Yarotsky", "tags": "NIPS 2024,Poster", "abstract": "We explore the theoretical possibility of learning $d$-dimensional targets with $W$-parameter models by gradient flow (GF) when $W