title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Quantifying and Optimizing Global Faithfulness in Persona-driven Role-playing | https://openreview.net/forum?id=bzPmjmiaz8 | https://openreview.net/forum?id=bzPmjmiaz8 | Letian Peng,Jingbo Shang | NIPS 2024,Poster | Persona-driven role-playing (PRP) aims to build AI characters that can respond to user queries by faithfully sticking with \emph{all} (factual) statements in persona documents.
Unfortunately, existing faithfulness criteria for PRP are limited to coarse-grained LLM-based scoring without a clear definition or formulation.
This paper presents a pioneering exploration to quantify PRP faithfulness evaluation as a fine-grained and explainable criterion, which also serves as a reliable reference for faithfulness optimization.
Our criterion first discriminates persona statements into \emph{active} and \emph{passive} constraints by identifying the query-statement relevance.
Then, we incorporate all constraints following the principle that the AI character's response should be (a) entailed by active constraints and (b) not contradicted by passive constraints.
We translate this principle mathematically into a novel Active-Passive-Constraint (APC) score, a constraint-wise sum of statement-to-response natural language inference (NLI) scores weighted by constraint-query relevance scores.
In practice, we build the APC scoring system by symbolically distilling small NLI and relevance discriminators (300M parameters) from GPT-4 for efficiency, and both show high consistency with GPT-4's discrimination.
We validate the quality of the APC score against human evaluation based on example personas with tens of statements, and the results show a high correlation.
As the APC score could faithfully reflect the PRP quality, we further leverage it as a reward system in direct preference optimization (DPO) for better AI characters.
Our experiments offer a fine-grained and explainable comparison between existing PRP techniques, revealing their advantages and limitations.
We further find APC-based DPO to be one of the most competitive techniques for sticking with all constraints, and that it can be readily combined with other techniques.
We then extend the scale of the experiments to real persons with hundreds of statements and reach a consistent conclusion.
Finally, we provide comprehensive analyses and case studies to support the effectiveness of APC and APC-based DPO. | https://openreview.net/pdf/7320b1176a2e393847ded67e352ffa677c207b0a.pdf |
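The APC score above admits a compact summary in code. The following is only an illustrative sketch of the weighting principle, assuming hypothetical `relevance` and `nli` callables in place of the distilled discriminators; it is not the authors' implementation, and the exact normalization used in the paper may differ.

```python
# Hypothetical sketch of an Active-Passive-Constraint (APC)-style score.
# `relevance` and `nli` stand in for the small discriminators distilled from
# GPT-4 in the paper; their names and signatures here are illustrative only.

def apc_score(statements, query, response, relevance, nli):
    """Constraint-wise sum of NLI scores weighted by query-statement relevance.

    relevance(s, q)  -> probability in [0, 1] that statement s is an *active*
                        constraint for query q
    nli(s, r)        -> dict of probabilities {"entailed": p_e, "contradicted": p_c}
                        describing how response r relates to statement s
    """
    total = 0.0
    for s in statements:
        w = relevance(s, query)
        probs = nli(s, response)
        # Active constraints should entail the response; passive constraints
        # should merely not be contradicted by it.
        total += w * probs["entailed"] + (1.0 - w) * (1.0 - probs["contradicted"])
    return total
```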
Continuous Temporal Domain Generalization | https://openreview.net/forum?id=G24fOpC3JE | https://openreview.net/forum?id=G24fOpC3JE | Zekun Cai,Guangji Bai,Renhe Jiang,Xuan Song,Liang Zhao | NIPS 2024,Poster | Temporal Domain Generalization (TDG) addresses the challenge of training predictive models under temporally varying data distributions. Traditional TDG approaches typically focus on domain data collected at fixed, discrete time intervals, which limits their capability to capture the inherent dynamics within continuously evolving and irregularly observed temporal domains. To overcome this limitation, this work formalizes the concept of Continuous Temporal Domain Generalization (CTDG), where domain data are derived from continuous times and are collected at arbitrary times. CTDG tackles critical challenges including: 1) Characterizing the continuous dynamics of both data and models, 2) Learning complex high-dimensional nonlinear dynamics, and 3) Optimizing and controlling the generalization across continuous temporal domains. To address them, we propose a Koopman operator-driven continuous temporal domain generalization (Koodos) framework. We formulate the problem within a continuous dynamic system and leverage Koopman theory to learn the underlying dynamics; the framework is further enhanced with a comprehensive optimization strategy equipped with analysis and control driven by prior knowledge of the dynamics patterns. Extensive experiments demonstrate the effectiveness and efficiency of our approach. The code can be found at: https://github.com/Zekun-Cai/Koodos. | https://openreview.net/pdf/8880caa1d1873d351ec09d9941495d9944b4dcc1.pdf |
Vision-Language Navigation with Energy-Based Policy | https://openreview.net/forum?id=v3jHuoxMw8 | https://openreview.net/forum?id=v3jHuoxMw8 | Rui Liu,Wenguan Wang,Yi Yang | NIPS 2024,Poster | Vision-language navigation (VLN) requires an agent to execute actions following human instructions. Existing VLN models are optimized through expert demonstrations by supervised behavioural cloning or incorporating manual reward engineering. While straightforward, these efforts overlook the accumulation of errors in the Markov decision process, and struggle to match the distribution of the expert policy. Going beyond this, we propose an Energy-based Navigation Policy (ENP) to model the joint state-action distribution using an energy-based model. At each step, low energy values correspond to the state-action pairs that the expert is most likely to perform, and vice versa. Theoretically, the optimization objective is equivalent to minimizing the forward divergence between the occupancy measure of the expert and ours. Consequently, ENP learns to globally align with the expert policy by maximizing the likelihood of the actions and modeling the dynamics of the navigation states in a collaborative manner. With a variety of VLN architectures, ENP achieves promising performances on R2R, REVERIE, RxR, and R2R-CE, unleashing the power of existing VLN models. | https://openreview.net/pdf/3086459a83bf86ddd7c3c2a039c2f2248ea6d01f.pdf |
Queueing Matching Bandits with Preference Feedback | https://openreview.net/forum?id=0TUMAAb3of | https://openreview.net/forum?id=0TUMAAb3of | Jung-hun Kim,Min-hwan Oh | NIPS 2024,Poster | In this study, we consider multi-class multi-server asymmetric queueing systems consisting of $N$ queues on one side and $K$ servers on the other side, where jobs randomly arrive in queues at each time. The service rate of each job-server assignment is unknown and modeled by a feature-based Multinomial Logit (MNL) function. At each time, a scheduler assigns jobs to servers, and each server stochastically serves at most one job based on its preferences over the assigned jobs. The primary goal of the algorithm is to stabilize the queues in the system while learning the service rates of servers. To achieve this goal, we propose algorithms based on UCB and Thompson Sampling, which achieve system stability with an average queue length bound of $O(\min\{N,K\}/\epsilon)$ for a large time horizon $T$, where $\epsilon$ is the traffic slackness of the system. Furthermore, the algorithms achieve sublinear regret bounds of $\tilde{O}(\min\{\sqrt{T}Q_{\max},T^{3/4}\})$, where $Q_{\max}$ represents the maximum queue length over agents and times.
Lastly, we provide experimental results to demonstrate the performance of our algorithms. | https://openreview.net/pdf/86d730d17255ddad3b2b4d57e58dbd5c3594c0b0.pdf |
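For context on the feature-based MNL service model mentioned in this abstract, the sketch below shows how multinomial-logit choice probabilities with an outside option are typically computed; the function name and feature parameterization are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def mnl_service_probs(job_features: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Multinomial-logit probabilities for a server offered a set of jobs.

    job_features: (n_jobs, d) feature matrix for the assigned jobs.
    theta:        (d,) server preference vector.
    Returns an array of length n_jobs + 1; the last entry is the probability
    of serving no job (the outside option, with utility 0).
    """
    logits = np.append(job_features @ theta, 0.0)
    logits -= logits.max()            # numerical stability before exponentiation
    expl = np.exp(logits)
    return expl / expl.sum()
```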
QUEEN: QUantized Efficient ENcoding for Streaming Free-viewpoint Videos | https://openreview.net/forum?id=7xhwE7VH4S | https://openreview.net/forum?id=7xhwE7VH4S | Sharath Girish,Tianye Li,Amrita Mazumdar,Abhinav Shrivastava,david luebke,Shalini De Mello | NIPS 2024,Poster | Online free-viewpoint video (FVV) streaming is a challenging problem, which is relatively under-explored. It requires incremental on-the-fly updates to a volumetric representation, fast training and rendering to satisfy real-time constraints, and a small memory footprint for efficient transmission. If achieved, it can enhance user experience by enabling novel applications, e.g., 3D video conferencing and live volumetric video broadcast, among others. In this work, we propose a novel framework for QUantized and Efficient ENcoding (QUEEN) for streaming FVV using 3D Gaussian Splatting (3D-GS). QUEEN directly learns Gaussian attribute residuals between consecutive frames at each time-step without imposing any structural constraints on them, allowing for high quality reconstruction and generalizability. To efficiently store the residuals, we further propose a quantization-sparsity framework, which contains a learned latent-decoder for effectively quantizing attribute residuals other than Gaussian positions and a learned gating module to sparsify position residuals. We propose to use the Gaussian viewspace gradient difference vector as a signal to separate the static and dynamic content of the scene. It acts as a guide for effective sparsity learning and speeds up training. On diverse FVV benchmarks, QUEEN outperforms the state-of-the-art online FVV methods on all metrics. Notably, for several highly dynamic scenes, it reduces the model size to just 0.7 MB per frame while training in under 5 sec and rendering at ~350 FPS. | https://openreview.net/pdf/aee4627dfef148422614cbd8bdb87e521cd96f07.pdf |
An Image is Worth 32 Tokens for Reconstruction and Generation | https://openreview.net/forum?id=tOXoQPRzPL | https://openreview.net/forum?id=tOXoQPRzPL | Qihang Yu,Mark Weber,Xueqing Deng,Xiaohui Shen,Daniel Cremers,Liang-Chieh Chen | NIPS 2024,Poster | Recent advancements in generative models have highlighted the crucial role of image tokenization in the efficient synthesis of high-resolution images. Tokenization, which transforms images into latent representations, reduces computational demands compared to directly processing pixels and enhances the effectiveness and efficiency of the generation process. Prior methods, such as VQGAN, typically utilize 2D latent grids with fixed downsampling factors. However, these 2D tokenizations face challenges in managing the inherent redundancies present in images, where adjacent regions frequently display similarities. To overcome this issue, we introduce **T**ransformer-based 1-D**i**mensional **Tok**enizer (TiTok), an innovative approach that tokenizes images into 1D latent sequences. TiTok provides a more compact latent representation, yielding substantially more efficient and effective representations than conventional techniques. For example, a 256 × 256 × 3 image can be reduced to just **32** discrete tokens, a significant reduction from the 256 or 1024 tokens obtained by prior methods. Despite its compact nature, TiTok achieves performance competitive with state-of-the-art approaches. Specifically, using the same generator framework, TiTok attains **1.97** gFID, outperforming the MaskGIT baseline significantly by 4.21 on the ImageNet 256 × 256 benchmark. The advantages of TiTok become even more significant when it comes to higher resolution. On the ImageNet 512 × 512 benchmark, TiTok not only outperforms the state-of-the-art diffusion model DiT-XL/2 (gFID 2.74 vs. 3.04), but also reduces the image tokens by 64×, leading to a **410× faster** generation process. Our best-performing variant can significantly surpass DiT-XL/2 (gFID **2.13** vs. 3.04) while still generating high-quality samples **74× faster**. Codes and models are available at https://github.com/bytedance/1d-tokenizer | https://openreview.net/pdf/8d26fff4931350ad8c13726dc061ecd5349c8fd6.pdf |
Non-Euclidean Mixture Model for Social Network Embedding | https://openreview.net/forum?id=nuZv2iTlvn | https://openreview.net/forum?id=nuZv2iTlvn | Roshni Iyer,YEWEN WANG,Wei Wang,Yizhou Sun | NIPS 2024,Poster | It is largely agreed that social network links are formed due to either homophily or social influence. Inspired by this, we aim at understanding the generation of links via providing a novel embedding-based graph formation model. Different from existing graph representation learning, where link generation probabilities are defined as a simple function of the corresponding node embeddings, we model the link generation as a mixture model of the two factors. In addition, we model the homophily factor in spherical space and the influence factor in hyperbolic space to accommodate the fact that (1) homophily results in cycles and (2) influence results in hierarchies in networks. We also design a special projection to align these two spaces. We call this model Non-Euclidean Mixture Model, i.e., NMM. We further integrate NMM with our non-Euclidean graph variational autoencoder (VAE) framework, NMM-GNN. NMM-GNN learns embeddings through a unified framework which uses non-Euclidean GNN encoders, non-Euclidean Gaussian priors, a non-Euclidean decoder, and a novel space unification loss component to unify distinct non-Euclidean geometric spaces. Experiments on public datasets show NMM-GNN significantly outperforms state-of-the-art baselines on social network generation and classification tasks, demonstrating its ability to better explain how the social network is formed. | https://openreview.net/pdf/d5d69c7e63ec70791e10b1f795151b2e7b0934c1.pdf |
SpaceByte: Towards Deleting Tokenization from Large Language Modeling | https://openreview.net/forum?id=KEe4IUp20I | https://openreview.net/forum?id=KEe4IUp20I | Kevin Slagle | NIPS 2024,Poster | Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, but with extra larger transformer blocks inserted in the middle of the layers. We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures. | https://openreview.net/pdf/ff60ffa81a9d27ebcb787af7c4a54387d56d9d2d.pdf |
LCM: Locally Constrained Compact Point Cloud Model for Masked Point Modeling | https://openreview.net/forum?id=H1NklRKPYi | https://openreview.net/forum?id=H1NklRKPYi | Yaohua Zha,Naiqi Li,Yanzi Wang,Tao Dai,Hang Guo,Bin Chen,Zhi Wang,Zhihao Ouyang,Shu-Tao Xia | NIPS 2024,Poster | Pre-trained point cloud models based on Masked Point Modeling (MPM) have exhibited substantial improvements across various tasks. However, these models heavily rely on the Transformer, leading to quadratic complexity and a limited decoder, hindering their practical application. To address this limitation, we first conduct a comprehensive analysis of existing Transformer-based MPM, emphasizing the idea that redundancy reduction is crucial for point cloud analysis. To this end, we propose a Locally constrained Compact point cloud Model (LCM) consisting of a locally constrained compact encoder and a locally constrained Mamba-based decoder. Our encoder replaces self-attention with our local aggregation layers to achieve an elegant balance between performance and efficiency. Considering the varying information density between masked and unmasked patches in the decoder inputs of MPM, we introduce a locally constrained Mamba-based decoder. This decoder ensures linear complexity while maximizing the perception of point cloud geometry information from unmasked patches with higher information density. Extensive experimental results show that our compact model significantly surpasses existing Transformer-based models in both performance and efficiency. In particular, our LCM-based Point-MAE model improves on its Transformer-based counterpart by 1.84%, 0.67%, and 0.60% on the three variants of ScanObjectNN while reducing parameters by 88% and computation by 73%. The code is available at https://github.com/zyh16143998882/LCM. | https://openreview.net/pdf/c807e5928cc2ab7bf815e12ffffd39d909a5ebb9.pdf |
Frozen-DETR: Enhancing DETR with Image Understanding from Frozen Foundation Models | https://openreview.net/forum?id=erQDc72vyi | https://openreview.net/forum?id=erQDc72vyi | Shenghao Fu,Junkai Yan,Qize Yang,Xihan Wei,Xiaohua Xie,Wei-Shi Zheng | NIPS 2024,Poster | Recent vision foundation models can extract universal representations and show impressive abilities in various tasks. However, their application on object detection is largely overlooked, especially without fine-tuning them. In this work, we show that frozen foundation models can be a versatile feature enhancer, even though they are not pre-trained for object detection. Specifically, we explore directly transferring the high-level image understanding of foundation models to detectors in the following two ways. First, the class token in foundation models provides an in-depth understanding of the complex scene, which facilitates decoding object queries in the detector's decoder by providing a compact context. Additionally, the patch tokens in foundation models can enrich the features in the detector's encoder by providing semantic details. Utilizing frozen foundation models as plug-and-play modules rather than the commonly used backbone can significantly enhance the detector's performance while preventing the problems caused by the architecture discrepancy between the detector's backbone and the foundation model. With such a novel paradigm, we boost the SOTA query-based detector DINO from 49.0% AP to 51.9% AP (+2.9% AP) and further to 53.8% AP (+4.8% AP) by integrating one or two foundation models respectively, on the COCO validation set after training for 12 epochs with R50 as the detector's backbone. Code will be available. | https://openreview.net/pdf/2e645aa0142daf54b0d33633b87ad690893165a0.pdf |
Improving Temporal Link Prediction via Temporal Walk Matrix Projection | https://openreview.net/forum?id=Ti3ciyqlS3 | https://openreview.net/forum?id=Ti3ciyqlS3 | Xiaodong Lu,Leilei Sun,Tongyu Zhu,Weifeng Lv | NIPS 2024,Poster | Temporal link prediction, aiming at predicting future interactions among entities based on historical interactions, is crucial for a series of real-world applications. Although previous methods have demonstrated the importance of relative encodings for effective temporal link prediction, computational efficiency remains a major concern in constructing these encodings. Moreover, existing relative encodings are usually constructed based on structural connectivity, where temporal information is seldom considered. To address the aforementioned issues, we first analyze existing relative encodings and unify them as a function of temporal walk matrices. This unification establishes a connection between relative encodings and temporal walk matrices, providing a more principled way for analyzing and designing relative encodings. Based on this analysis, we propose a new temporal graph neural network called TPNet, which introduces a temporal walk matrix that incorporates the time decay effect to simultaneously consider both temporal and structural information. Moreover, TPNet designs a random feature propagation mechanism with theoretical guarantees to implicitly maintain the temporal walk matrices, which improves the computation and storage efficiency. Experimental results on 13 benchmark datasets verify the effectiveness and efficiency of TPNet, where TPNet outperforms other baselines on most datasets and achieves a maximum speedup of $33.3 \times$ compared to the SOTA baseline. | https://openreview.net/pdf/33a29baa0c450c1f070795146a5fadf3ba733120.pdf |
Efficient Sketches for Training Data Attribution and Studying the Loss Landscape | https://openreview.net/forum?id=8jyCRGXOr5 | https://openreview.net/forum?id=8jyCRGXOr5 | Andrea Schioppa | NIPS 2024,Poster | The study of modern machine learning models often necessitates storing vast quantities of gradients or Hessian vector products (HVPs). Traditional sketching methods struggle to scale under these memory constraints. We present a novel framework for scalable gradient and HVP sketching, tailored for modern hardware. We provide theoretical guarantees and demonstrate the power of our methods in applications like training data attribution, Hessian spectrum analysis, and intrinsic dimension computation for pre-trained language models. Our work sheds new light on the behavior of pre-trained language models, challenging assumptions about their intrinsic dimensionality and Hessian properties. | https://openreview.net/pdf/6ab18f0e78acdf0ddf9571ee043d35b7278b1fb7.pdf |
Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction | https://openreview.net/forum?id=37CyA1K0vV | https://openreview.net/forum?id=37CyA1K0vV | Yixuan Even Xu,Hanrui Zhang,Yu Cheng,Vincent Conitzer | NIPS 2024,Poster | Quantitative Relative Judgment Aggregation (QRJA) is a new research topic in (computational) social choice. In the QRJA model, agents provide judgments on the relative quality of different candidates, and the goal is to aggregate these judgments across all agents. In this work, our main conceptual contribution is to explore the interplay between QRJA in a social choice context and its application to ranking prediction. We observe that in QRJA, judges do not have to be people with subjective opinions; for example, a race can be viewed as a ``judgment'' on the contestants' relative abilities. This allows us to aggregate results from multiple races to evaluate the contestants' true qualities. At a technical level, we introduce new aggregation rules for QRJA and study their structural and computational properties. We evaluate the proposed methods on data from various real races and show that QRJA-based methods offer effective and interpretable ranking predictions. | https://openreview.net/pdf/13539df2d5217a0d9c3f41049a3e3e45ddccf56c.pdf |
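One natural aggregation rule in the spirit of QRJA, though not necessarily one of the specific rules proposed in the paper, treats each judgment as a noisy observation of a quality difference and fits candidate qualities by least squares; the data below are made up for illustration.

```python
import numpy as np

# Each judgment (i, j, m): candidate i finished ahead of candidate j by margin m.
judgments = [(0, 1, 3.0), (1, 2, 1.5), (0, 2, 5.0)]   # hypothetical race results
n_candidates = 3

A = np.zeros((len(judgments) + 1, n_candidates))
b = np.zeros(len(judgments) + 1)
for row, (i, j, margin) in enumerate(judgments):
    A[row, i], A[row, j] = 1.0, -1.0                   # margin is modeled as q[i] - q[j]
    b[row] = margin
A[-1, :] = 1.0                                         # anchor: qualities sum to zero

q, *_ = np.linalg.lstsq(A, b, rcond=None)
ranking = np.argsort(-q)                               # predicted ranking, best first
print(ranking, q.round(2))
```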
Bidirectional Recurrence for Cardiac Motion Tracking with Gaussian Process Latent Coding | https://openreview.net/forum?id=CTIFk7b9jU | https://openreview.net/forum?id=CTIFk7b9jU | Jiewen Yang,Yiqun Lin,Bin Pu,Xiaomeng Li | NIPS 2024,Poster | Quantitative analysis of cardiac motion is crucial for assessing cardiac function. This analysis typically uses imaging modalities such as MRI and echocardiograms that capture detailed image sequences throughout the heartbeat cycle. Previous methods predominantly focused on the analysis of image pairs, lacking consideration of motion dynamics and spatial variability. Consequently, these methods often overlook the long-term relationships and regional motion characteristics of the heart. To overcome these limitations, we introduce GPTrack, a novel unsupervised framework crafted to fully explore the temporal and spatial dynamics of cardiac motion. GPTrack enhances motion tracking by employing a sequential Gaussian process in the latent space and encoding statistics with spatial information at each time stamp, which robustly promotes temporal consistency and spatial variability of cardiac dynamics. Also, we innovatively aggregate sequential information in a bidirectional recursive manner, mimicking the behavior of diffeomorphic registration to better capture consistent long-term relationships of motions across cardiac regions such as the ventricles and atria. Our GPTrack significantly improves the precision of motion tracking in both 3D and 4D medical images while maintaining computational efficiency. The code is available at: https://github.com/xmed-lab/GPTrack. | https://openreview.net/pdf/d33b6e478a00543e152ce3dcc9b508569dca1d63.pdf |
ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model | https://openreview.net/forum?id=HzANl2unCB | https://openreview.net/forum?id=HzANl2unCB | Yiming Sun,Fan Yu,Shaoxiang Chen,Yu Zhang,Junwei Huang,Yang Li,Chenhui Li,Changbo Wang | NIPS 2024,Poster | Visual object tracking aims to locate a targeted object in a video sequence based on an initial bounding box. Recently, Vision-Language (VL) trackers have been proposed that utilize additional natural language descriptions to enhance versatility in various applications. However, VL trackers are still inferior to State-of-The-Art (SoTA) visual trackers in terms of tracking performance. We found that this inferiority primarily results from their heavy reliance on manual textual annotations, which frequently include ambiguous language descriptions. In this paper, we propose ChatTracker to leverage the wealth of world knowledge in the Multimodal Large Language Model (MLLM) to generate high-quality language descriptions and enhance tracking performance. To this end, we propose a novel reflection-based prompt optimization module to iteratively refine the ambiguous and inaccurate descriptions of the target with tracking feedback. To further utilize semantic information produced by the MLLM, a simple yet effective VL tracking framework is proposed and can be easily integrated as a plug-and-play module to boost the performance of both VL and visual trackers. Experimental results show that our proposed ChatTracker achieves performance comparable to existing methods. | https://openreview.net/pdf/07030af559b369e65c612c46a1f85ea0051706dd.pdf |
Neuro-Vision to Language: Enhancing Brain Recording-based Visual Reconstruction and Language Interaction | https://openreview.net/forum?id=ohi00YhT3T | https://openreview.net/forum?id=ohi00YhT3T | Guobin Shen,Dongcheng Zhao,Xiang He,Linghao Feng,Yiting Dong,Jihang Wang,Qian Zhang,Yi Zeng | NIPS 2024,Poster | Decoding non-invasive brain recordings is pivotal for advancing our understanding of human cognition but faces challenges due to individual differences and complex neural signal representations. Traditional methods often require customized models and extensive trials, lacking interpretability in visual reconstruction tasks. Our framework integrates 3D brain structures with visual semantics using a *Vision Transformer 3D*. This unified feature extractor efficiently aligns fMRI features with multiple levels of visual embeddings, eliminating the need for subject-specific models and allowing extraction from single-trial data. The extractor consolidates multi-level visual features into one network, simplifying integration with Large Language Models (LLMs). Additionally, we have enhanced the fMRI dataset with diverse fMRI-image-related textual data to support multimodal large model development. Integrating with LLMs enhances decoding capabilities, enabling tasks such as brain captioning, complex reasoning, concept localization, and visual reconstruction. Our approach demonstrates superior performance across these tasks, precisely identifying language-based concepts within brain signals, enhancing interpretability, and providing deeper insights into neural processes. These advances significantly broaden the applicability of non-invasive brain decoding in neuroscience and human-computer interaction, setting the stage for advanced brain-computer interfaces and cognitive models. | https://openreview.net/pdf/ba7b93f7d624a68efb8b3859bb55e3ab19511123.pdf |
Diffusion-based Layer-wise Semantic Reconstruction for Unsupervised Out-of-Distribution Detection | https://openreview.net/forum?id=3m5ndUNQYt | https://openreview.net/forum?id=3m5ndUNQYt | Ying Yang,De Cheng,Chaowei Fang,Yubiao Wang,Changzhe Jiao,Lechao Cheng,Nannan Wang,Xinbo Gao | NIPS 2024,Poster | Unsupervised out-of-distribution (OOD) detection aims to identify out-of-domain data by learning only from unlabeled In-Distribution (ID) training samples, which is crucial for developing a safe real-world machine learning system. Current reconstruction-based method provides a good alternative approach, by measuring the reconstruction error between the input and its corresponding generative counterpart in the pixel/feature space. However, such generative methods face the key dilemma, $i.e.$, improving the reconstruction power of the generative model, while keeping compact representation of the ID data. To address this issue, we propose the diffusion-based layer-wise semantic reconstruction approach for unsupervised OOD detection. The innovation of our approach is that we leverage the diffusion model's intrinsic data reconstruction ability to distinguish ID samples from OOD samples in the latent feature space. Moreover, to set up a comprehensive and discriminative feature representation, we devise a multi-layer semantic feature extraction strategy. Through distorting the extracted features with Gaussian noises and applying the diffusion model for feature reconstruction, the separation of ID and OOD samples is implemented according to the reconstruction errors. Extensive experimental results on multiple benchmarks built upon various datasets demonstrate that our method achieves state-of-the-art performance in terms of detection accuracy and speed. | https://openreview.net/pdf/e3690bf0a374cd5f217dff8e209d8b3744a9127c.pdf |
Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images | https://openreview.net/forum?id=UTrIEHobXI | https://openreview.net/forum?id=UTrIEHobXI | Qi Song,Ziyuan Luo,Ka Chun Cheung,Simon See,Renjie Wan | NIPS 2024,Poster | Single-view 3D reconstruction methods like Triplane Gaussian Splatting (TGS) have enabled high-quality 3D model generation from just a single image input within seconds. However, this capability raises concerns about potential misuse, where malicious users could exploit TGS to create unauthorized 3D models from copyrighted images. To prevent such infringement, we propose a novel image protection approach that embeds invisible geometry perturbations, termed ``geometry cloaks'', into images before supplying them to TGS. These carefully crafted perturbations encode a customized message that is revealed when TGS attempts 3D reconstructions of the cloaked image. Unlike conventional adversarial attacks that simply degrade output quality, our method forces TGS to fail the 3D reconstruction in a specific way - by generating an identifiable customized pattern that acts as a watermark. This watermark allows copyright holders to assert ownership over any attempted 3D reconstructions made from their protected images. Extensive experiments have verified the effectiveness of our geometry cloak. | https://openreview.net/pdf/4d4e1141f3f59243a9b340232bf8f8a2fd1ecc62.pdf |
iVideoGPT: Interactive VideoGPTs are Scalable World Models | https://openreview.net/forum?id=4TENzBftZR | https://openreview.net/forum?id=4TENzBftZR | Jialong Wu,Shaofeng Yin,Ningya Feng,Xu He,Dong Li,Jianye HAO,Mingsheng Long | NIPS 2024,Poster | World models empower model-based agents to interactively explore, reason, and plan within imagined environments for real-world decision-making. However, the high demand for interactivity poses challenges in harnessing recent advancements in video generative models for developing world models at scale. This work introduces Interactive VideoGPT (iVideoGPT), a scalable autoregressive transformer framework that integrates multimodal signals—visual observations, actions, and rewards—into a sequence of tokens, facilitating an interactive experience of agents via next-token prediction. iVideoGPT features a novel compressive tokenization technique that efficiently discretizes high-dimensional visual observations. Leveraging its scalable architecture, we are able to pre-train iVideoGPT on millions of human and robotic manipulation trajectories, establishing a versatile foundation that is adaptable to serve as interactive world models for a wide range of downstream tasks. These include action-conditioned video prediction, visual planning, and model-based reinforcement learning, where iVideoGPT achieves competitive performance compared with state-of-the-art methods. Our work advances the development of interactive general world models, bridging the gap between generative video models and practical model-based reinforcement learning applications. Code and pre-trained models are available at https://thuml.github.io/iVideoGPT. | https://openreview.net/pdf/db1a05f0f6944e4bfbf4a2dfd68d34b67e5e1bd2.pdf |
Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection? | https://openreview.net/forum?id=PWzB2V2b6R | https://openreview.net/forum?id=PWzB2V2b6R | Qingsong Zhao,Yi Wang,Jilan Xu,Yinan He,Zifan Song,Limin Wang,Yu Qiao,Cairong Zhao | NIPS 2024,Poster | Video understanding relies on accurate action detection for temporal analysis. However, existing mainstream methods have limitations in real-world applications due to their offline and closed-set evaluation approaches, as well as their dependence on manual annotations. To address these challenges and enable real-time action understanding in open-world scenarios, we propose OV-OAD, a zero-shot online action detector that leverages vision-language models and learns solely from text supervision. By introducing an object-centered decoder unit into a Transformer-based model, we aggregate frames with similar semantics using video-text correspondence. Extensive experiments on four action detection benchmarks demonstrate that OV-OAD outperforms other advanced zero-shot methods. Specifically, it achieves 37.5\% mean average precision on THUMOS’14 and 73.8\% calibrated average precision on TVSeries. This research establishes a robust baseline for zero-shot transfer in online action detection, enabling scalable solutions for open-world temporal understanding. The code will be available for download at \url{https://github.com/OpenGVLab/OV-OAD}. | https://openreview.net/pdf/106675eab768f7a48d71880b719cdc704697bf63.pdf |
Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning | https://openreview.net/forum?id=rL7OtNsD9a | https://openreview.net/forum?id=rL7OtNsD9a | Dongsu Lee,Minhae Kwon | NIPS 2024,Poster | Understanding cognitive processes in multi-agent interactions is a primary goal in cognitive science. It can guide the direction of artificial intelligence (AI) research toward social decision-making in multi-agent systems, which includes uncertainty from character heterogeneity. In this paper, we introduce *episodic future thinking (EFT) mechanism* for a reinforcement learning (RL) agent, inspired by the cognitive processes observed in animals. To enable future thinking functionality, we first develop a *multi-character policy* that captures diverse characters with an ensemble of heterogeneous policies. The *character* of an agent is defined as a different weight combination on reward components, representing distinct behavioral preferences. The future thinking agent collects observation-action trajectories of the target agents and leverages the pre-trained multi-character policy to infer their characters. Once the character is inferred, the agent predicts the upcoming actions of target agents and simulates the potential future scenario. This capability allows the agent to adaptively select the optimal action, considering the predicted future scenario in multi-agent scenarios. To evaluate the proposed mechanism, we consider the multi-agent autonomous driving scenario in which autonomous vehicles with different driving traits are on the road. Simulation results demonstrate that the EFT mechanism with accurate character inference leads to a higher reward than existing multi-agent solutions. We also confirm that the effect of reward improvement remains valid across societies with different levels of character diversity. | https://openreview.net/pdf/60e8c7e9988c5c4ebbf3be916b263f3f55247926.pdf |
Meta-Diffu$B$: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration | https://openreview.net/forum?id=NTWXVvIXJM | https://openreview.net/forum?id=NTWXVvIXJM | Yunyen Chuang,Hung-Min Hsu,Kevin Lin,Chen-Sheng Gu,Ling Zhen Li,Ray-I Chang,Hung-yi Lee | NIPS 2024,Poster | The diffusion model, a new generative modeling paradigm, has achieved significant success in generating images, audio, video, and text. It has been adapted for sequence-to-sequence text generation (Seq2Seq) through DiffuSeq, termed the S2S-Diffusion model. Existing S2S-Diffusion models predominantly rely on fixed or hand-crafted rules to schedule noise during the diffusion and denoising processes. However, these models are limited by non-contextualized noise, which fails to fully consider the characteristics of Seq2Seq tasks. In this paper, we propose the Meta-Diffu$B$ framework—a novel scheduler-exploiter S2S-Diffusion paradigm designed to overcome the limitations of existing S2S-Diffusion models. We employ Meta-Exploration to train an additional scheduler model dedicated to scheduling contextualized noise for each sentence. Our exploiter model, an S2S-Diffusion model, leverages the noise scheduled by our scheduler model for updating and generation. Meta-Diffu$B$ achieves state-of-the-art performance compared to previous S2S-Diffusion models and fine-tuned pre-trained language models (PLMs) across four Seq2Seq benchmark datasets. We further investigate and visualize the impact of Meta-Diffu$B$'s noise scheduling on the generation of sentences with varying difficulties. Additionally, our scheduler model can function as a "plug-and-play" model to enhance DiffuSeq without the need for fine-tuning during the inference stage. | https://openreview.net/pdf/19e8f13c8a44c597c74f914a2b74aa68f2b5e906.pdf |
Polyhedral Complex Derivation from Piecewise Trilinear Networks | https://openreview.net/forum?id=XZ4XSUTGRb | https://openreview.net/forum?id=XZ4XSUTGRb | Jin-Hwa Kim | NIPS 2024,Poster | Recent advancements in visualizing deep neural networks provide insights into their structures and mesh extraction from Continuous Piecewise Affine (CPWA) functions. Meanwhile, developments in neural surface representation learning incorporate non-linear positional encoding, addressing issues like spectral bias; however, this poses challenges in applying mesh extraction techniques based on CPWA functions. Focusing on trilinear interpolating methods as positional encoding, we present theoretical insights and an analytical mesh extraction, showing the transformation of hypersurfaces to flat planes within the trilinear region under the eikonal constraint. Moreover, we introduce a method for approximating intersecting points among three hypersurfaces contributing to broader applications. We empirically validate correctness and parsimony through chamfer distance and efficiency, and angular distance, while examining the correlation between the eikonal loss and the planarity of the hypersurfaces. | https://openreview.net/pdf/289c3462ff417069a1c01dec53972175bba77057.pdf |
RobIR: Robust Inverse Rendering for High-Illumination Scenes | https://openreview.net/forum?id=y7oxY5pq4j | https://openreview.net/forum?id=y7oxY5pq4j | Ziyi Yang,Chenyanzhen,Xinyu Gao,YazhenYuan,Wu Yu,Xiaowei Zhou,Xiaogang Jin | NIPS 2024,Poster | Implicit representation has opened up new possibilities for inverse rendering. However, existing implicit neural inverse rendering methods struggle to handle strongly illuminated scenes with significant shadows and slight reflections. The existence of shadows and reflections can lead to an inaccurate understanding of the scene, making precise factorization difficult. To this end, we present RobIR, an implicit inverse rendering approach that uses ACES tone mapping and regularized visibility estimation to reconstruct accurate BRDF of the object. By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to accurately decouple environment lighting and the object's PBR materials without imposing strict constraints on the scene. Even in high-illumination scenes with shadows and specular reflections, our method can recover high-quality albedo and roughness with no shadow interference. RobIR outperforms existing methods in both quantitative and qualitative evaluations. | https://openreview.net/pdf/92d9baadccb6c0d8ed17f5cd5f1fc5980a06e590.pdf |
On the cohesion and separability of average-link for hierarchical agglomerative clustering | https://openreview.net/forum?id=2LuSHTFWzK | https://openreview.net/forum?id=2LuSHTFWzK | Eduardo Sany Laber,Miguel A. Batista | NIPS 2024,Poster | Average-link is widely recognized as one of the most popular and effective methods for building hierarchical agglomerative clustering. The available theoretical analyses show that this method has a much better approximation than other popular heuristics, such as single-linkage and complete-linkage, regarding variants of Dasgupta's cost function [STOC 2016]. However, these analyses do not separate average-link from a random hierarchy, and they are not appealing for metric spaces since every hierarchical clustering has a $1/2$ approximation with regard to the variant of Dasgupta's function that is employed for dissimilarity measures [Moseley and Yang 2020]. In this paper, we present a comprehensive study of the performance of average-link in metric spaces, regarding several natural criteria that capture separability and cohesion, and are more interpretable than Dasgupta's cost function and its variants. We also present experimental results with real datasets that, together with our theoretical analyses, suggest that average-link is a better choice than other related methods when both cohesion and separability are important goals. | https://openreview.net/pdf/e31307cced91acc175497387c4c65a398704f6ce.pdf |
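For readers who want to try average-link on a toy metric dataset, the standard SciPy implementation (not the authors' code) suffices:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))             # toy points; pairwise Euclidean distances form a metric

Z = linkage(pdist(X), method="average")  # average-link (UPGMA) agglomerative clustering
# Each row of Z records which two clusters were merged and the average
# dissimilarity between them at that merge.
```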
Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models | https://openreview.net/forum?id=jL0EsbfbAV | https://openreview.net/forum?id=jL0EsbfbAV | Yule Wang,Chengrui Li,Weihan Li,Anqi Wu | NIPS 2024,Poster | Understanding the neural basis of behavior is a fundamental goal in neuroscience. Current research in large-scale neuro-behavioral data analysis often relies on decoding models, which quantify behavioral information in neural data but lack details on behavior encoding. This raises an intriguing scientific question: "how can we enable in-depth exploration of neural representations in behavioral tasks, revealing interpretable neural dynamics associated with behaviors". However, addressing this issue is challenging due to the varied behavioral encoding across different brain regions and mixed selectivity at the population level. To tackle this limitation, our approach, named BeNeDiff, first identifies a fine-grained and disentangled neural subspace using a behavior-informed latent variable model. It then employs state-of-the-art generative diffusion models to synthesize behavior videos that interpret the neural dynamics of each latent factor. We validate the method on multi-session datasets containing widefield calcium imaging recordings across the dorsal cortex. Through guiding the diffusion model to activate individual latent factors, we verify that the neural dynamics of latent factors in the disentangled neural subspace provide interpretable quantifications of the behaviors of interest. At the same time, the neural subspace in BeNeDiff demonstrates high disentanglement and neural reconstruction quality. | https://openreview.net/pdf/7883f0ecd9c14eccf8e95c8600c9326826de89d6.pdf |
Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching | https://openreview.net/forum?id=1H2e7USI09 | https://openreview.net/forum?id=1H2e7USI09 | Yasi Zhang,Peiyu Yu,Yaxuan Zhu,Yingshan Chang,Feng Gao,Ying Nian Wu,Oscar Leong | NIPS 2024,Poster | Generative models based on flow matching have attracted significant attention for their simplicity and superior performance in high-resolution image synthesis. By leveraging the instantaneous change-of-variables formula, one can directly compute image likelihoods from a learned flow, making them enticing candidates as priors for downstream tasks such as inverse problems. In particular, a natural approach would be to incorporate such image probabilities in a maximum-a-posteriori (MAP) estimation problem. A major obstacle, however, lies in the slow computation of the log-likelihood, as it requires backpropagating through an ODE solver, which can be prohibitively slow for high-dimensional problems. In this work, we propose an iterative algorithm to approximate the MAP estimator efficiently to solve a variety of linear inverse problems. Our algorithm is mathematically justified by the observation that the MAP objective can be approximated by a sum of $N$ ``local MAP'' objectives, where $N$ is the number of function evaluations. By leveraging Tweedie's formula, we show that we can perform gradient steps to sequentially optimize these objectives. We validate our approach for various linear inverse problems, such as super-resolution, deblurring, inpainting, and compressed sensing, and demonstrate that we can outperform other methods based on flow matching. Code is available at \url{https://github.com/YasminZhang/ICTM}. | https://openreview.net/pdf/41117473ada2fc4333e42064eee99da4bcb3dbed.pdf |
UniTS: A Unified Multi-Task Time Series Model | https://openreview.net/forum?id=nBOdYBptWW | https://openreview.net/forum?id=nBOdYBptWW | Shanghua Gao,Teddy Koker,Owen Queen,Thomas Hartvigsen,Theodoros Tsiligkaridis,Marinka Zitnik | NIPS 2024,Poster | Although pre-trained transformers and reprogrammed text-based LLMs have shown strong performance on time series tasks, the best-performing architectures vary widely across tasks, with most models narrowly focused on specific areas, such as time series forecasting. Unifying predictive and generative time series tasks within a single model remains challenging. We introduce UniTS, a unified multi-task time series model that utilizes task tokenization to integrate predictive and generative tasks into a single framework. UniTS employs a modified transformer block to capture universal time series representations, enabling transferability from a heterogeneous, multi-domain pre-training dataset—characterized by diverse dynamic patterns, sampling rates, and temporal scales—to a wide range of downstream datasets with varied task specifications and data domains. Tested on 38 datasets across human activity sensors, healthcare, engineering, and finance, UniTS achieves superior performance compared to 12 forecasting models, 20 classification models, 18 anomaly detection models, and 16 imputation models, including adapted text-based LLMs. UniTS also demonstrates strong few-shot and prompt capabilities when applied to new domains and tasks. In single-task settings, UniTS outperforms competitive task-specialized time series models. Code and datasets are available at https://github.com/mims-harvard/UniTS. | https://openreview.net/pdf/76cc318f954f6e20e4694bd3cfc093467379d42d.pdf |
Yo'LLaVA: Your Personalized Language and Vision Assistant | https://openreview.net/forum?id=mjGy8g3pgi | https://openreview.net/forum?id=mjGy8g3pgi | Thao Nguyen,Haotian Liu,Yuheng Li,Mu Cai,Utkarsh Ojha,Yong Jae Lee | NIPS 2024,Poster | Large Multimodal Models (LMMs) have shown remarkable capabilities across a variety of tasks (e.g., image captioning, visual question answering).
While broad, their knowledge remains generic (e.g., recognizing a dog), and they are unable to handle personalized subjects (e.g., recognizing a user's pet dog).
Human reasoning, in contrast, typically operates within the context of specific subjects in our surroundings. For example, one might ask, "What should I buy for *my dog*'s birthday?"; as opposed to a generic inquiry about "What should I buy for *a dog*'s birthday?".
Similarly, when looking at a friend's image, the interest lies in seeing their activities (e.g., "*my friend* is holding a cat"), rather than merely observing generic human actions (e.g., "*a man* is holding a cat").
In this paper, we introduce the novel task of personalizing LMMs, so that they can have conversations about a specific subject. We propose Yo'LLaVA, which learns to embed a personalized subject into a set of latent tokens given a handful of example images of the subject. Our qualitative and quantitative analyses reveal that Yo'LLaVA can learn the concept more efficiently using fewer tokens and more effectively encode the visual attributes compared to strong prompting baselines (e.g., LLaVA). | https://openreview.net/pdf/27db4874c16bb351aed45cbd5c641e028cc244b5.pdf |
FouRA: Fourier Low-Rank Adaptation | https://openreview.net/forum?id=qCJ1dq5M7N | https://openreview.net/forum?id=qCJ1dq5M7N | Shubhankar Borse,Shreya Kadambi,Nilesh Prasad Pandey,Kartikeya Bhardwaj,Viswanath Ganapathy,Sweta Priyadarshi,Risheek Garrepalli,Rafael Esteves,Munawar Hayat,Fatih Porikli | NIPS 2024,Poster | While Low-Rank Adaptation (LoRA) has proven beneficial for efficiently fine-tuning large models, LoRA fine-tuned text-to-image diffusion models lack diversity in the generated images, as the model tends to copy data from the observed training samples. This effect becomes more pronounced at higher values of adapter strength and for adapters with higher ranks which are fine-tuned on smaller datasets. To address these challenges, we present FouRA, a novel low-rank method that learns projections in the Fourier domain along with learning a flexible input-dependent adapter rank selection strategy. Through extensive experiments and analysis, we show that FouRA successfully solves the problems related to data copying and distribution collapse while significantly improving the generated image quality. We demonstrate that FouRA enhances the generalization of fine-tuned models thanks to its adaptive rank selection. We further show that the learned projections in the frequency domain are decorrelated and prove effective when merging multiple adapters. While FouRA is motivated for vision tasks, we also demonstrate its merits for language tasks on commonsense reasoning and GLUE benchmarks. | https://openreview.net/pdf/2fc90e37f14bb97913686628fb9da31aa3b72204.pdf |
Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization | https://openreview.net/forum?id=C4SInFLvuB | https://openreview.net/forum?id=C4SInFLvuB | Thomas Nagler,Lennart Schneider,Bernd Bischl,Matthias Feurer | NIPS 2024,Poster | Hyperparameter optimization is crucial for obtaining peak performance of machine learning models. The standard protocol evaluates various hyperparameter configurations using a resampling estimate of the generalization error to guide optimization and select a final hyperparameter configuration. Without much evidence, paired resampling splits, i.e., either a fixed train-validation split or a fixed cross-validation scheme, are often recommended. We show that, surprisingly, reshuffling the splits for every configuration often improves the final model's generalization performance on unseen data. Our theoretical analysis explains how reshuffling affects the asymptotic behavior of the validation loss surface and provides a bound on the expected regret in the limiting regime. This bound connects the potential benefits of reshuffling to the signal and noise characteristics of the underlying optimization problem. We confirm our theoretical results in a controlled simulation study and demonstrate the practical usefulness of reshuffling in a large-scale, realistic hyperparameter optimization experiment. While reshuffling leads to test performances that are competitive with using fixed splits, it drastically improves results for a single train-validation holdout protocol and can often make holdout become competitive with standard CV while being computationally cheaper. | https://openreview.net/pdf/0b9a9aacca5cf5afd9829212a78f4f96d2a8736f.pdf |
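To make the reshuffling protocol concrete, the toy sketch below, with a hypothetical model and hyperparameter grid, draws a fresh train-validation split for every configuration instead of reusing one fixed holdout split.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)

alphas = [0.01, 0.1, 1.0, 10.0]
val_errors = []
for k, alpha in enumerate(alphas):
    # Reshuffling: a different random split per configuration (random_state=k),
    # rather than one fixed train-validation split shared by all configurations.
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=k)
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    val_errors.append(mean_squared_error(y_val, model.predict(X_val)))

best_alpha = alphas[int(np.argmin(val_errors))]
print(best_alpha)
```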
Everyday Object Meets Vision-and-Language Navigation Agent via Backdoor | https://openreview.net/forum?id=rXGxbDJadh | https://openreview.net/forum?id=rXGxbDJadh | Keji He,Kehan Chen,Jiawang Bai,Yan Huang,Qi Wu,Shu-Tao Xia,Liang Wang | NIPS 2024,Poster | Vision-and-Language Navigation (VLN) requires an agent to dynamically explore environments following natural language.
The VLN agent, closely integrated into daily life, poses a substantial threat to privacy and property should malicious behavior occur.
However, this serious issue has long been overlooked.
In this paper, we pioneer the exploration of an object-aware backdoored VLN, achieved by implanting object-aware backdoors during the training phase.
Tailored to the unique VLN nature of cross-modality and continuous decision-making, we propose a novel backdoored VLN paradigm: IPR Backdoor.
This enables the agent to behave abnormally once it encounters the object triggers during language-guided navigation in unseen environments, thereby executing an attack on the target scene.
Experiments demonstrate the effectiveness of our method in both physical and digital spaces across different VLN agents, as well as its robustness to various visual and textual variations. Additionally, our method preserves navigation performance in normal scenarios while remaining remarkably stealthy. | https://openreview.net/pdf/947b9c499117e1094c174a0a32f3ea8987265e42.pdf |
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain | https://openreview.net/forum?id=TeQvz5AlI8 | https://openreview.net/forum?id=TeQvz5AlI8 | Fengpeng Li,Kemou Li,Haiwei Wu,Jinyu Tian,Jiantao Zhou | NIPS 2024,Poster | To protect deep neural networks (DNNs) from adversarial attacks, adversarial training (AT) is developed by incorporating adversarial examples (AEs) into model training. Recent studies show that adversarial attacks disproportionately impact the patterns within the phase of the sample's frequency spectrum---typically containing crucial semantic information---more than those in the amplitude, resulting in the model's erroneous categorization of AEs. We find that, by mixing the amplitude of training samples' frequency spectrum with those of distractor images for AT, the model can be guided to focus on phase patterns unaffected by adversarial perturbations. As a result, the model's robustness can be improved. Unfortunately, it is still challenging to select appropriate distractor images, which should mix the amplitude without affecting the phase patterns. To this end, in this paper, we propose an optimized **Adversarial Amplitude Generator (AAG)** to achieve a better tradeoff between improving the model's robustness and retaining phase patterns. Based on this generator, together with an efficient AE production procedure, we design a new **Dual Adversarial Training (DAT)** strategy. Experiments on various datasets show that our proposed DAT leads to significantly improved robustness against diverse adversarial attacks. The source code is available at https://github.com/Feng-peng-Li/DAT. | https://openreview.net/pdf/cc80f91e0e413e77795f1bce672e6ee70ba4df72.pdf |
StepbaQ: Stepping backward as Correction for Quantized Diffusion Models | https://openreview.net/forum?id=cEtExbAKYV | https://openreview.net/forum?id=cEtExbAKYV | Yi-Chung Chen,Zhi-Kai Huang,Jing-Ren Chen | NIPS 2024,Poster | Quantization of diffusion models has attracted considerable attention due to its potential to enable various applications on resource-constrained mobile devices. However, given the cumulative nature of quantization errors in quantized diffusion models, overall performance may still decline even with efforts to minimize quantization error at each sampling step.
Recent studies have proposed several methods to address accumulated quantization error, yet these solutions often suffer from limited applicability due to their underlying assumptions or only partially resolve the issue due to an incomplete understanding.
In this work, we introduce a novel perspective by conceptualizing quantization error as a "stepback" in the denoising process. We investigate how the accumulation of quantization error can distort the sampling trajectory, resulting in a notable decrease in model performance. To address this challenge, we introduce StepbaQ, a method that calibrates the sampling trajectory and counteracts the adverse effects of accumulated quantization error through a sampling step correction mechanism. Notably, StepbaQ relies solely on statistics of quantization error derived from a small calibration dataset, highlighting its strong applicability.
Our experimental results demonstrate that StepbaQ can serve as a plug-and-play technique to enhance the performance of diffusion models quantized by off-the-shelf tools without modifying the quantization settings. For example, StepbaQ significantly improves the performance of the quantized SD v1.5 model by 7.30 in terms of FID on SDprompts dataset under the common W8A8 setting, and it enhances the performance of the quantized SDXL-Turbo model by 17.31 in terms of FID on SDprompts dataset under the challenging W4A8 setting. | https://openreview.net/pdf/4403f21211ca02e4ff44f6ca8cb3fe1d6169c499.pdf |
Cross-Device Collaborative Test-Time Adaptation | https://openreview.net/forum?id=YyMiO0DWmI | https://openreview.net/forum?id=YyMiO0DWmI | Guohao Chen,Shuaicheng Niu,Deyu Chen,Shuhai Zhang,Changsheng Li,Yuanqing Li,Mingkui Tan | NIPS 2024,Poster | In this paper, we propose test-time Collaborative Lifelong Adaptation (CoLA), which is a general paradigm that can be incorporated with existing advanced TTA methods to boost the adaptation performance and efficiency in a multi-device collaborative manner. Specifically, we maintain and store a set of device-shared _domain knowledge vectors_, which accumulates the knowledge learned from all devices during their lifelong adaptation process. Based on this, CoLA conducts two collaboration strategies for devices with different computational resources and latency demands. 1) Knowledge reprogramming learning strategy jointly learns new domain-specific model parameters and a reweighting term to reprogram existing shared domain knowledge vectors, termed adaptation on _principal agents_. 2) Similarity-based knowledge aggregation strategy solely aggregates the knowledge stored in shared domain vectors according to domain similarities in an optimization-free manner, termed adaptation on _follower agents_. Experiments verify that CoLA is simple but effective, which boosts the efficiency of TTA and demonstrates remarkable superiority in collaborative, lifelong, and single-domain TTA scenarios, e.g., on follower agents, we enhance accuracy by over 30\% on ImageNet-C while maintaining nearly the same efficiency as standard inference. The source code is available at https://github.com/Cascol-Chen/COLA. | https://openreview.net/pdf/e5f1dd7f3a46aab1474f351d9ee7262b34056347.pdf |
GO4Align: Group Optimization for Multi-Task Alignment | https://openreview.net/forum?id=8vCs5U9Hbt | https://openreview.net/forum?id=8vCs5U9Hbt | Jiayi Shen,Cheems Wang,Zehao Xiao,Nanne Van Noord,Marcel Worring | NIPS 2024,Poster | This paper proposes **GO4Align**, a multi-task optimization approach that tackles task imbalance by explicitly aligning the optimization across tasks. To achieve this, we design an adaptive group risk minimization strategy, comprising two techniques in implementation: (i) dynamical group assignment, which clusters similar tasks based on task interactions; (ii) risk-guided group indicators, which exploit consistent task correlations with risk information from previous iterations. Comprehensive experimental results on diverse benchmarks demonstrate our method's performance superiority with even lower computational costs. | https://openreview.net/pdf/a83820a6003131e132759f6ef929f4a5524f9e43.pdf |
Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives | https://openreview.net/forum?id=Jf40H5pRW0 | https://openreview.net/forum?id=Jf40H5pRW0 | Vincent Hanke,Tom Blanchard,Franziska Boenisch,Iyiola Emmanuel Olatunji,Michael Backes,Adam Dziedzic | NIPS 2024,Poster | While open Large Language Models (LLMs) have made significant progress, they still fall short of matching the performance of their closed, proprietary counterparts, making the latter attractive even for use on highly *private* data.
Recently, various new methods have been proposed to adapt closed LLMs to private data without leaking private information to third parties and/or the LLM provider.
In this work, we analyze the privacy protection and performance of the four most recent methods for private adaptation of closed LLMs.
By examining their threat models and thoroughly comparing their performance under different privacy levels according to differential privacy (DP), various LLM architectures, and multiple datasets for classification and generation tasks, we find that: (1) all the methods leak query data, i.e., the (potentially sensitive) user data that is queried at inference time, to the LLM provider, (2) three out of four methods also leak large fractions of private training data to the LLM provider while the method that protects private data requires a local open LLM, (3) all the methods exhibit lower performance compared to three private gradient-based adaptation methods for *local open LLMs*, and (4) the private adaptation methods for closed LLMs incur higher monetary training and query costs than running the alternative methods on local open LLMs.
This yields the conclusion that, to achieve truly *privacy-preserving LLM adaptations* that yield high performance and more privacy at lower costs, taking into account current methods and models, one should use open LLMs. | https://openreview.net/pdf/5113c6167358d02b4b1a103c072e8b3cb7dbc2b2.pdf |
UniDSeg: Unified Cross-Domain 3D Semantic Segmentation via Visual Foundation Models Prior | https://openreview.net/forum?id=dDDc3iNZA7 | https://openreview.net/forum?id=dDDc3iNZA7 | Yao Wu,Mingwei Xing,Yachao Zhang,Xiaotong Luo,Yuan Xie,Yanyun Qu | NIPS 2024,Poster | 3D semantic segmentation with an adapted model trained on a source domain, with or without access to unlabeled target-domain data, is a fundamental task in computer vision, encompassing both domain adaptation and domain generalization.
The essence of simultaneously solving cross-domain tasks is to enhance the generalizability of the encoder.
In light of this, we propose a groundbreaking universal method with the help of off-the-shelf Visual Foundation Models (VFMs) to boost the adaptability and generalizability of cross-domain 3D semantic segmentation, dubbed $\textbf{UniDSeg}$.
Our method explores the VFMs prior and how to harness them, aiming to inherit the recognition ability of VFMs.
Specifically, this method introduces layer-wise learnable blocks into the VFMs and hinges on alternately learning two representations during training: (i) learning a visual prompt, where the 3D-to-2D transitional prior and task-shared knowledge are captured from the prompt space; and then (ii) learning a deep query, where spatial tunability is built into the representation of distinct instances driven by prompts in the query space.
Integrating these representations into a cross-modal learning framework, UniDSeg efficiently mitigates the domain gap between 2D and 3D modalities, achieving unified cross-domain 3D semantic segmentation.
Extensive experiments demonstrate the effectiveness of our method across widely recognized tasks and datasets, all achieving superior performance over state-of-the-art methods. Remarkably, UniDSeg achieves 57.5\%/54.4\% mIoU on ``A2D2/sKITTI'' for domain adaptive/generalized tasks. Code is available at https://github.com/Barcaaaa/UniDSeg. | https://openreview.net/pdf/2a44ca78c6cb797dc9a91aebf697ca399484e8e1.pdf |
Relationship Prompt Learning is Enough for Open-Vocabulary Semantic Segmentation | https://openreview.net/forum?id=PKcCHncbzg | https://openreview.net/forum?id=PKcCHncbzg | Jiahaoli,Yang Lu,Yuan Xie,Yanyun Qu | NIPS 2024,Poster | Open-vocabulary semantic segmentation (OVSS) aims to segment unseen classes without corresponding labels. Existing Vision-Language Model (VLM)-based methods leverage VLM's rich knowledge to enhance additional explicit segmentation-specific networks, yielding competitive results, but at the cost of extensive training cost. To reduce the cost, we attempt to enable VLM to directly produce the segmentation results without any segmentation-specific networks. Prompt learning offers a direct and parameter-efficient approach, yet it falls short in guiding VLM for pixel-level visual classification. Therefore, we propose the ${\bf R}$elationship ${\bf P}$rompt ${\bf M}$odule (${\bf RPM}$), which generates the relationship prompt that directs VLM to extract pixel-level semantic embeddings suitable for OVSS. Moreover, RPM integrates with VLM to construct the ${\bf R}$elationship ${\bf P}$rompt ${\bf N}$etwork (${\bf RPN}$), achieving OVSS without any segmentation-specific networks. RPN attains state-of-the-art performance with merely about ${\bf 3M}$ trainable parameters (2\% of total parameters). | https://openreview.net/pdf/c7b8a1c36becb32f34baad4bf10d15a76317da22.pdf |
PhyloGen: Language Model-Enhanced Phylogenetic Inference via Graph Structure Generation | https://openreview.net/forum?id=GxvDsFArxY | https://openreview.net/forum?id=GxvDsFArxY | ChenRui Duan,Zelin Zang,Siyuan Li,Yongjie Xu,Stan Z. Li | NIPS 2024,Poster | Phylogenetic trees elucidate evolutionary relationships among species, but phylogenetic inference remains challenging due to the complexity of combining continuous (branch lengths) and discrete parameters (tree topology).
Traditional Markov Chain Monte Carlo methods face slow convergence and computational burdens. Existing Variational Inference methods, which require pre-generated topologies and typically treat tree structures and branch lengths independently, may overlook critical sequence features, limiting their accuracy and flexibility.
We propose PhyloGen, a novel method leveraging a pre-trained genomic language model to generate and optimize phylogenetic trees without dependence on evolutionary models or aligned sequence constraints. PhyloGen views phylogenetic inference as a conditionally constrained tree structure generation problem, jointly optimizing tree topology and branch lengths through three core modules: (i) Feature Extraction, (ii) PhyloTree Construction, and (iii) PhyloTree Structure Modeling.
Meanwhile, we introduce a Scoring Function to guide the model towards a more stable gradient descent.
We demonstrate the effectiveness and robustness of PhyloGen on eight real-world benchmark datasets. Visualization results confirm PhyloGen provides deeper insights into phylogenetic relationships. | https://openreview.net/pdf/c61a857b067e8bfa2ba60f7fd16ed133fb9332d7.pdf |
EZ-HOI: VLM Adaptation via Guided Prompt Learning for Zero-Shot HOI Detection | https://openreview.net/forum?id=R1Rrb2d5BH | https://openreview.net/forum?id=R1Rrb2d5BH | Qinqian Lei,Bo Wang,Robby T. Tan | NIPS 2024,Poster | Detecting Human-Object Interactions (HOI) in zero-shot settings, where models must handle unseen classes, poses significant challenges. Existing methods that rely on aligning visual encoders with large Vision-Language Models (VLMs) to tap into the extensive knowledge of VLMs, require large, computationally expensive models and encounter training difficulties. Adapting VLMs with prompt learning offers an alternative to direct alignment. However, fine-tuning on task-specific datasets often leads to overfitting to seen classes and suboptimal performance on unseen classes, due to the absence of unseen class labels. To address these challenges, we introduce a novel prompt learning-based framework for Efficient Zero-Shot HOI detection (EZ-HOI). First, we introduce Large Language Model (LLM) and VLM guidance for learnable prompts, integrating detailed HOI descriptions and visual semantics to adapt VLMs to HOI tasks. However, because training datasets contain seen-class labels alone, fine-tuning VLMs on such datasets tends to optimize learnable prompts for seen classes instead of unseen ones. Therefore, we design prompt learning for unseen classes using information from related seen classes, with LLMs utilized to highlight the differences between unseen and related seen classes. Quantitative evaluations on benchmark datasets demonstrate that our EZ-HOI achieves state-of-the-art performance across various zero-shot settings with only 10.35\% to 33.95\% of the trainable parameters compared to existing methods. Code is available at https://github.com/ChelsieLei/EZ-HOI. | https://openreview.net/pdf/02907b5e4ee03887e94480f5e33062b6722a2766.pdf |
Data Attribution for Text-to-Image Models by Unlearning Synthesized Images | https://openreview.net/forum?id=kVr3L73pNH | https://openreview.net/forum?id=kVr3L73pNH | Sheng-Yu Wang,Aaron Hertzmann,Alexei A Efros,Jun-Yan Zhu,Richard Zhang | NIPS 2024,Poster | The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image. Influence is defined such that, for a given output, if a model is retrained from scratch without the most influential images, the model would fail to reproduce the same output. Unfortunately, directly searching for these influential images is computationally infeasible, since it would require repeatedly retraining models from scratch. In our work, we propose an efficient data attribution method by simulating unlearning the synthesized image. We achieve this by increasing the training loss on the output image, without catastrophic forgetting of other, unrelated concepts. We then identify training images with significant loss deviations after the unlearning process and label these as influential. We evaluate our method with a computationally intensive but "gold-standard" retraining from scratch and demonstrate our method's advantages over previous methods. | https://openreview.net/pdf/2e32e842f544c74dfff23ed357c51ab664e326cb.pdf |
Mini-Sequence Transformers: Optimizing Intermediate Memory for Long Sequences Training | https://openreview.net/forum?id=2KuZHYykkq | https://openreview.net/forum?id=2KuZHYykkq | Cheng Luo,Jiawei Zhao,Zhuoming Chen,Beidi Chen,Anima Anandkumar | NIPS 2024,Poster | We introduce Mini-Sequence Transformer (MsT), a simple and effective methodology for highly efficient and accurate LLM training with extremely long sequences. MsT partitions input sequences and iteratively processes mini-sequences to reduce intermediate memory usage. Integrated with activation recomputation, it enables significant memory savings in both forward and backward passes. In experiments with the Llama3-8B model, with MsT, we measure no degradation in throughput or convergence even with 12x longer sequences than standard implementations. MsT is fully general, implementation-agnostic, and requires minimal code changes to integrate with existing LLM training frameworks. Integrated with the huggingface library, MsT successfully extends the maximum context length of Qwen, Mistral, and Gemma-2 by 12-24x. | https://openreview.net/pdf/d291323c2636eacc38c4c3399f3ac1d69c920a5e.pdf |
OnlineTAS: An Online Baseline for Temporal Action Segmentation | https://openreview.net/forum?id=bkLetzd97M | https://openreview.net/forum?id=bkLetzd97M | Qing Zhong,Guodong Ding,Angela Yao | NIPS 2024,Poster | Temporal context plays a significant role in temporal action segmentation. In an offline setting, the context is typically captured by the segmentation network after observing the entire sequence. However, capturing and using such context information in an online setting remains an under-explored problem. This work presents the first online framework for temporal action segmentation. At the core of the framework is an adaptive memory designed to accommodate dynamic changes in context over time, alongside a feature augmentation module that enhances the frames with the memory. In addition, we propose a post-processing approach to mitigate the severe over-segmentation in the online setting. On three common segmentation benchmarks, our approach achieves state-of-the-art performance. | https://openreview.net/pdf/b7b20d7ead281b3abe5e9eceafbe1fe02f7f2267.pdf |
DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution | https://openreview.net/forum?id=QKp3nhPU41 | https://openreview.net/forum?id=QKp3nhPU41 | Yang Yue,Yulin Wang,Bingyi Kang,Yizeng Han,Shenzhi Wang,Shiji Song,Jiashi Feng,Gao Huang | NIPS 2024,Poster | Multimodal Large Language Models (MLLMs) have demonstrated remarkable comprehension and reasoning capabilities with complex language and visual data.
These advances have spurred the vision of establishing a generalist robotic MLLM proficient in understanding complex human instructions and accomplishing various embodied tasks, whose feasibility has been recently verified~\cite{rt-2,rt-x}.
However, developing MLLMs for real-world robots is challenging due to the typically limited computation and memory capacities available on robotic platforms.
In contrast, MLLM inference typically involves storing billions of parameters and performing tremendous computation, imposing significant hardware demands.
In our paper, we seek to address this challenge by leveraging an intriguing observation: relatively easier situations make up the bulk of the procedure of controlling robots to fulfill diverse tasks, and they generally require far smaller models to obtain the correct robotic actions.
Motivated by this observation, we propose a \emph{Dynamic Early-Exit for Robotic MLLM} (DeeR) framework that automatically adjusts the size of the activated MLLM based on each situation at hand.
The approach leverages a multi-exit architecture in MLLMs, which allows the model to cease processing once a proper size of the model has been activated for a specific situation, thus avoiding further redundant computation.
Additionally, we develop novel algorithms that establish early-termination criteria for DeeR, conditioned on predefined demands such as average computational cost (\emph{i.e.}, power consumption), as well as peak computational consumption (\emph{i.e.}, latency) and GPU memory usage. These enhancements ensure that DeeR operates efficiently under varying resource constraints while maintaining competitive performance.
Moreover, we design a tailored training method for integrating temporal information on top of such multi-exit architectures to predict actions reasonably.
On the CALVIN robot manipulation benchmark, DeeR demonstrates significant reductions in computational costs by 5.2-6.5x and GPU memory by 2x without compromising performance.
Code and checkpoints are available at https://github.com/yueyang130/DeeR-VLA. | https://openreview.net/pdf/b35ead1ac4b6efee2d45d2cb2fcd36c575bded67.pdf |
Self-Distilled Depth Refinement with Noisy Poisson Fusion | https://openreview.net/forum?id=nEqU0iCa0s | https://openreview.net/forum?id=nEqU0iCa0s | Jiaqi Li,Yiran Wang,Jinghong Zheng,Zihao Huang,Ke Xian,Zhiguo Cao,Jianming Zhang | NIPS 2024,Poster | Depth refinement aims to infer high-resolution depth with fine-grained edges and details, refining low-resolution results of depth estimation models. Prevailing methods adopt a tile-based manner, merging numerous patches, which lacks efficiency and produces inconsistency. In addition, prior works suffer from fuzzy depth boundaries and limited generalizability. Analyzing the fundamental reasons for these limitations, we model depth refinement as a noisy Poisson fusion problem with local inconsistency and edge deformation noises. We propose the Self-distilled Depth Refinement (SDDR) framework to enforce robustness against the noises, which mainly consists of depth edge representation and edge-based guidance. With noisy depth predictions as input, SDDR generates low-noise depth edge representations as pseudo-labels by coarse-to-fine self-distillation. Edge-based guidance with edge-guided gradient loss and edge-based fusion loss serves as the optimization objective equivalent to Poisson fusion. When depth maps are better refined, the labels also become more noise-free. Our model can acquire strong robustness to the noises, achieving significant improvements in accuracy, edge quality, efficiency, and generalizability on five different benchmarks. Moreover, directly training another model with edge labels produced by SDDR brings improvements, suggesting that our method could help with training robust refinement models in future works. | https://openreview.net/pdf/652424a5ba1133321596e1672a29b0226cb640f8.pdf |
Federated Model Heterogeneous Matryoshka Representation Learning | https://openreview.net/forum?id=5yboFMpvHf | https://openreview.net/forum?id=5yboFMpvHf | Liping Yi,Han Yu,Chao Ren,Gang Wang,xiaoguang Liu,Xiaoxiao Li | NIPS 2024,Poster | Model heterogeneous federated learning (MHeteroFL) enables FL clients to collaboratively train models with heterogeneous structures in a distributed fashion. However, existing MHeteroFL methods rely on training loss to transfer knowledge between the client model and the server model, resulting in limited knowledge exchange. To address this limitation, we propose the **Fed**erated model heterogeneous **M**atryoshka **R**epresentation **L**earning (**FedMRL**) approach for supervised learning tasks. It adds an auxiliary small homogeneous model shared by clients with heterogeneous local models. (1) The generalized and personalized representations extracted by the two models' feature extractors are fused by a personalized lightweight representation projector. This step enables representation fusion to adapt to local data distribution. (2) The fused representation is then used to construct Matryoshka representations with multi-dimensional and multi-granular embedded representations learned by the global homogeneous model header and the local heterogeneous model header. This step facilitates multi-perspective representation learning and improves model learning capability. Theoretical analysis shows that FedMRL achieves a $O(1/T)$ non-convex convergence rate. Extensive experiments on benchmark datasets demonstrate its superior model accuracy with low communication and computational costs compared to seven state-of-the-art baselines. It achieves up to 8.48% and 24.94% accuracy improvement compared with the state-of-the-art and the best same-category baseline, respectively. | https://openreview.net/pdf/9020f50a004891ad24f91a9a643f6781f83f01d2.pdf |
Learning Commonality, Divergence and Variety for Unsupervised Visible-Infrared Person Re-identification | https://openreview.net/forum?id=QQSGwpmDfU | https://openreview.net/forum?id=QQSGwpmDfU | Jiangming Shi,Xiangbo Yin,Yachao Zhang,zhizhong zhang,Yuan Xie,Yanyun Qu | NIPS 2024,Poster | Unsupervised visible-infrared person re-identification (USVI-ReID) aims to match specified persons in infrared images to visible images without annotations, and vice versa. USVI-ReID is a challenging yet underexplored task. Most existing methods address the USVI-ReID through cluster-based contrastive learning, which simply employs the cluster center to represent an individual. However, the cluster center primarily focuses on commonality, overlooking divergence and variety. To address the problem, we propose a Progressive Contrastive Learning with Hard and Dynamic Prototypes for USVI-ReID. In brief, we generate the hard prototype by selecting the sample with the maximum distance from the cluster center. We reveal that the inclusion of the hard prototype in contrastive loss helps to emphasize divergence. Additionally, instead of rigidly aligning query images to a specific prototype, we generate the dynamic prototype by randomly picking samples within a cluster. The dynamic prototype is used to encourage variety. Finally, we introduce a progressive learning strategy to gradually shift the model's attention towards divergence and variety, avoiding cluster deterioration. Extensive experiments conducted on the publicly available SYSU-MM01 and RegDB datasets validate the effectiveness of the proposed method. | https://openreview.net/pdf/f0bb8754ea9e12ff97b0fedfb688edc57362a88f.pdf |
EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models | https://openreview.net/forum?id=28bFUt6rUY | https://openreview.net/forum?id=28bFUt6rUY | Rui Zhao,Hangjie Yuan,Yujie Wei,Shiwei Zhang,Yuchao Gu,Lingmin Ran,Xiang Wang,Jay Zhangjie Wu,David Junhao Zhang,Yingya Zhang,Mike Zheng Shou | NIPS 2024,Poster | Recent advancements in generation models have showcased remarkable capabilities in generating fantastic content. However, most of them are trained on proprietary high-quality data, and some models withhold their parameters and only provide accessible application programming interfaces (APIs), limiting their benefits for downstream tasks. To explore the feasibility of training a text-to-image generation model comparable to advanced models using publicly available resources, we introduce EvolveDirector. This framework interacts with advanced models through their public APIs to obtain text-image data pairs to train a base model. Our experiments with extensive data indicate that the model trained on data generated by the advanced model can approximate its generation capability. However, it requires large-scale samples of 10 million or more. This incurs significant expenses in time, computational resources, and especially the costs associated with calling fee-based APIs. To address this problem, we leverage pre-trained large vision-language models (VLMs) to guide the evolution of the base model. The VLM continuously evaluates the base model during training and dynamically updates and refines the training dataset through discrimination, expansion, deletion, and mutation operations. Experimental results show that this paradigm significantly reduces the required data volume. Furthermore, when approaching multiple advanced models, EvolveDirector can select the best samples generated by them to learn powerful and balanced abilities. The final trained model Edgen is demonstrated to outperform these advanced models. The framework EvolveDirector and the trained model Edgen will be fully open-sourced to benefit downstream tasks. | https://openreview.net/pdf/4b7a7ebc5c661f0d9e09777a8ae71165160b13db.pdf |
DiffusionFake: Enhancing Generalization in Deepfake Detection via Guided Stable Diffusion | https://openreview.net/forum?id=FNzpVTpNbN | https://openreview.net/forum?id=FNzpVTpNbN | Ke Sun,Shen Chen,Taiping Yao,Hong Liu,Xiaoshuai Sun,Shouhong Ding,Rongrong Ji | NIPS 2024,Poster | The rapid progress of Deepfake technology has made face swapping highly realistic, raising concerns about the malicious use of fabricated facial content. Existing methods often struggle to generalize to unseen domains due to the diverse nature of facial manipulations. In this paper, we revisit the generation process and identify a universal principle: Deepfake images inherently contain information from both source and target identities, while genuine faces maintain a consistent identity. Building upon this insight, we introduce DiffusionFake, a novel plug-and-play framework that reverses the generative process of face forgeries to enhance the generalization of detection models. DiffusionFake achieves this by injecting the features extracted by the detection model into a frozen pre-trained Stable Diffusion model, compelling it to reconstruct the corresponding target and source images. This guided reconstruction process constrains the detection network to capture source- and target-related features to facilitate the reconstruction, thereby learning rich and disentangled representations that are more resilient to unseen forgeries. Extensive experiments demonstrate that DiffusionFake significantly improves cross-domain generalization of various detector architectures without introducing additional parameters during inference. The code is available at https://github.com/skJack/DiffusionFake.git. | https://openreview.net/pdf/de3d7bfc4ad38cae0ae8bc83a64c953f5ed81fdc.pdf |
Visual Fourier Prompt Tuning | https://openreview.net/forum?id=nkHEl4n0JU | https://openreview.net/forum?id=nkHEl4n0JU | Runjia Zeng,Cheng Han,Qifan Wang,Chunshu Wu,Tong Geng,Lifu Huang,Ying Nian Wu,Dongfang Liu | NIPS 2024,Poster | With the scale of vision Transformer-based models continuing to grow, finetuning these large-scale pretrained models for new tasks has become increasingly parameter-intensive. Visual prompt tuning has been introduced as a parameter-efficient finetuning (PEFT) method in response to this trend. Despite its successes, a notable research challenge persists within almost all PEFT approaches: significant performance degradation is observed when there is a substantial disparity between the datasets applied in the pretraining and finetuning phases. To address this challenge, we draw inspiration from human visual cognition, and propose the Visual Fourier Prompt Tuning (VFPT) method as a general and effective solution for adapting large-scale transformer-based models. Our approach innovatively incorporates the Fast Fourier Transform into prompt embeddings and harmoniously considers both spatial and frequency domain information. Apart from its inherent simplicity and intuitiveness, VFPT exhibits superior performance across all datasets, offering a general solution to dataset challenges, irrespective of data disparities. Empirical results demonstrate that our approach outperforms current state-of-the-art baselines on two benchmarks, with low parameter usage (e.g., 0.57% of model parameters on VTAB-1k) and notable performance enhancements (e.g., 73.20% of mean accuracy on VTAB-1k). Our code is available at https://github.com/runtsang/VFPT. | https://openreview.net/pdf/00106f74e4d176acca97b7ed9f9ffaf51be98d3e.pdf |
Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary | https://openreview.net/forum?id=7XkwzaPMvX | https://openreview.net/forum?id=7XkwzaPMvX | Zhuoyan Li,Ming Yin | NIPS 2024,Poster | Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process. To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions, and utilized these models to improve human-AI team performance. Meanwhile, due to the ``black-box'' nature of AI models, providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice. In this paper, we explore whether we can quantitatively model how humans integrate both AI recommendations and explanations into their decision process, and whether this quantitative understanding of human behavior from the learned model can be utilized to manipulate AI explanations, thereby nudging individuals towards making targeted decisions. Our extensive human experiments across various tasks demonstrate that human behavior can be easily influenced by these manipulated explanations towards targeted outcomes, regardless of the intent being adversarial or benign. Furthermore, individuals often fail to detect any anomalies in these explanations, despite their decisions being affected by them. | https://openreview.net/pdf/f84b41fb9e184e05becd6bf152eaa9cae4b28225.pdf |
Assembly Fuzzy Representation on Hypergraph for Open-Set 3D Object Retrieval | https://openreview.net/forum?id=xOCAURlVM9 | https://openreview.net/forum?id=xOCAURlVM9 | Yang Xu,Yifan Feng,Jun Zhang,Jun-Hai Yong,Yue Gao | NIPS 2024,Poster | The lack of object-level labels presents a significant challenge for 3D object retrieval in the open-set environment. However, part-level shapes of objects often share commonalities across categories but remain underexploited in existing retrieval methods. In this paper, we introduce the Hypergraph-Based Assembly Fuzzy Representation (HARF) framework, which navigates the intricacies of open-set 3D object retrieval through a bottom-up lens of Part Assembly. To tackle the challenge of assembly isomorphism and unification, we propose the Hypergraph Isomorphism Convolution (HIConv) for smoothing and adopt the Isomorphic Assembly Embedding (IAE) module to generate assembly embeddings with geometric-semantic consistency. To address the challenge of open-set category generalization, our method employs high-order correlations and fuzzy representation to mitigate distribution skew through the Structure Fuzzy Reconstruction (SFR) module, by constructing a leveraged hypergraph based on local certainty and global uncertainty correlations. We construct three open-set retrieval datasets for 3D objects with part-level annotations: OP-SHNP, OP-INTRA, and OP-COSEG. Extensive experiments and ablation studies on these three benchmarks show our method outperforms current state-of-the-art methods. | https://openreview.net/pdf/8985b94f6c2cb16afe1fb713d3e65acc97d532b2.pdf |
CoSW: Conditional Sample Weighting for Smoke Segmentation with Label Noise | https://openreview.net/forum?id=RRRyQMn6dv | https://openreview.net/forum?id=RRRyQMn6dv | Lujian Yao,Haitao Zhao,Zhongze Wang,Kaijie Zhao,Jingchao Peng | NIPS 2024,Poster | Smoke segmentation is of great importance in precisely identifying the smoke location, enabling timely fire rescue and gas leak detection. However, due to the visual diversity and blurry edges of the non-grid smoke, noisy labels are almost inevitable in large-scale pixel-level smoke datasets. Noisy labels significantly impact the robustness of the model and may lead to serious accidents. Nevertheless, currently, there are no specific methods for addressing noisy labels in smoke segmentation. Smoke differs from regular objects as its transparency varies, causing inconsistent features in the noisy labels. In this paper, we propose a conditional sample weighting (CoSW). CoSW utilizes a multi-prototype framework, where prototypes serve as prior information to apply different weighting criteria to the different feature clusters. A novel regularized within-prototype entropy (RWE) is introduced to achieve CoSW and stable prototype update. The experiments show that our approach achieves SOTA performance on both real-world and synthetic noisy smoke segmentation datasets. | https://openreview.net/pdf/d4e62b7bcd956132110df1b4dbe4670633d16159.pdf |
DeepITE: Designing Variational Graph Autoencoders for Intervention Target Estimation | https://openreview.net/forum?id=GMsi9966DR | https://openreview.net/forum?id=GMsi9966DR | Hongyuan Tao,Hang Yu,Jianguo Li | NIPS 2024,Poster | Intervention Target Estimation (ITE) is vital for both understanding and decision-making in complex systems, yet it remains underexplored. Current ITE methods are hampered by their inability to learn from distinct intervention instances collaboratively and to incorporate rich insights from labeled data, which leads to inefficiencies such as the need for re-estimation of intervention targets with minor data changes or alterations in causal graphs. In this paper, we propose DeepITE, an innovative deep learning framework designed around a variational graph autoencoder. DeepITE can concurrently learn from both unlabeled and labeled data with different intervention targets and causal graphs, harnessing correlated information in a self or semi-supervised manner. The model's inference capabilities allow for the immediate identification of intervention targets on unseen samples and novel causal graphs, circumventing the need for retraining. Our extensive testing confirms that DeepITE not only surpasses 13 baseline methods in the Recall@k metric but also demonstrates expeditious inference times, particularly on large graphs. Moreover, incorporating a modest fraction of labeled data (5-10\%) substantially enhances DeepITE's performance, further solidifying its practical applicability. Our source code is available at https://github.com/alipay/DeepITE. | https://openreview.net/pdf/dee919b5f177acc6d48b46c66ca7724dddfdcda3.pdf |
Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective | https://openreview.net/forum?id=vwgWbCxeAQ | https://openreview.net/forum?id=vwgWbCxeAQ | Yanan Zhang,Jiangmeng Li,Lixiang Liu,Wenwen Qiang | NIPS 2024,Poster | Foundational Vision-Language models such as CLIP have exhibited impressive generalization in downstream tasks. However, CLIP suffers from a two-level misalignment issue, i.e., task misalignment and data misalignment, when adapting to specific tasks. Soft prompt tuning has mitigated the task misalignment, yet the data misalignment remains a challenge. To analyze the impacts of the data misalignment, we revisit the pre-training and adaptation processes of CLIP and develop a structural causal model. We discover that while we expect to capture task-relevant information for downstream tasks accurately, the task-irrelevant knowledge impacts the prediction results and hampers the modeling of the true relationships between the images and the predicted classes. As task-irrelevant knowledge is unobservable, we leverage the front-door adjustment and propose Causality-Guided Semantic Decoupling and Classification (CDC) to mitigate the interference of task-irrelevant knowledge. Specifically, we decouple semantics contained in the data of downstream tasks and perform classification based on each semantic. Furthermore, we employ the Dempster-Shafer evidence theory to evaluate the uncertainty of each prediction generated by diverse semantics. Experiments conducted in multiple different settings have consistently demonstrated the effectiveness of CDC. | https://openreview.net/pdf/b0f849743c20730b56ef48ad02e259f767f16cf5.pdf |
BiDM: Pushing the Limit of Quantization for Diffusion Models | https://openreview.net/forum?id=oWAItGB8LJ | https://openreview.net/forum?id=oWAItGB8LJ | Xingyu Zheng,Xianglong Liu,Yichen Bian,Xudong Ma,Yulun Zhang,Jiakai Wang,Jinyang Guo,Haotong Qin | NIPS 2024,Poster | Diffusion models (DMs) have been significantly developed and widely used in various applications due to their excellent generative qualities. However, the expensive computation and massive parameters of DMs hinder their practical use in resource-constrained scenarios. As one of the effective compression approaches, quantization allows DMs to achieve storage saving and inference acceleration by reducing bit-width while maintaining generation performance. However, as the most extreme quantization form, 1-bit binarization causes the generation performance of DMs to face severe degradation or even collapse. This paper proposes a novel method, namely BiDM, for fully binarizing weights and activations of DMs, pushing quantization to the 1-bit limit. From a temporal perspective, we introduce the Timestep-friendly Binary Structure (TBS), which uses learnable activation binarizers and cross-timestep feature connections to address the highly timestep-correlated activation features of DMs. From a spatial perspective, we propose Space Patched Distillation (SPD) to address the difficulty of matching binary features during distillation, focusing on the spatial locality of image generation tasks and noise estimation networks. As the first work to fully binarize DMs, the W1A1 BiDM on the LDM-4 model for LSUN-Bedrooms 256$\times$256 achieves a remarkable FID of 22.74, significantly outperforming current state-of-the-art general binarization methods, which yield an FID of 59.44 and invalid generative samples, and achieves excellent savings of up to 28.0 times in storage and 52.7 times in OPs. | https://openreview.net/pdf/6d17445f162ca58b230060f58daa438dbe160260.pdf |
The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection | https://openreview.net/forum?id=B9FPPdNmyk | https://openreview.net/forum?id=B9FPPdNmyk | Qingyang Zhang,Qiuxuan Feng,Joey Tianyi Zhou,Yatao Bian,Qinghua Hu,Changqing Zhang | NIPS 2024,Poster | Out-of-distribution (OOD) detection is essential for model trustworthiness: it aims to sensitively identify semantic OOD samples and robustly generalize to covariate-shifted OOD samples. However, we discover that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability. The classification accuracy frequently collapses catastrophically when even slight noise is encountered. Such a phenomenon violates the motivation of trustworthiness and significantly limits the model's deployment in the real world. What is the hidden reason behind such a limitation? In this work, we theoretically demystify the "\textit{sensitive-robust}" dilemma that lies in previous OOD detection methods. Consequently, a theory-inspired algorithm is induced to overcome such a dilemma. By decoupling the uncertainty learning objective from a Bayesian perspective, the conflict between OOD detection and OOD generalization is naturally harmonized and a dual-optimized performance could be expected. Empirical studies show that our method achieves superior performance on commonly used benchmarks. To the best of our knowledge, this work is the first principled OOD detection method that achieves state-of-the-art OOD detection performance without sacrificing OOD generalization ability. Our code is available at https://github.com/QingyangZhang/DUL. | https://openreview.net/pdf/ff0e17b68593c47759dba98bf4f523ef9e1be76f.pdf |
Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation | https://openreview.net/forum?id=9GhSOp1LYH | https://openreview.net/forum?id=9GhSOp1LYH | Jian Hu,Jiayi Lin,Junchi Yan,Shaogang Gong | NIPS 2024,Poster | Promptable segmentation typically requires instance-specific manual prompts to guide the segmentation of each desired object. To minimize such a need, task-generic promptable segmentation has been introduced, which employs a single task-generic prompt to segment various images of different objects in the same task. Current methods use Multimodal Large Language Models (MLLMs) to reason detailed instance-specific prompts from a task-generic prompt for improving segmentation accuracy. The effectiveness of this segmentation heavily depends on the precision of these derived prompts. However, MLLMs often suffer hallucinations during reasoning, resulting in inaccurate prompting. While existing methods focus on eliminating hallucinations to improve a model, we argue that MLLM hallucinations can reveal valuable contextual insights when leveraged correctly, as they represent pre-trained large-scale knowledge beyond individual images. In this paper, we first utilize hallucinations to mine task-related information from images and verify its accuracy to enhance precision of the generated prompts. Specifically, we introduce an iterative \textbf{Pro}mpt-\textbf{Ma}sk \textbf{C}ycle generation framework (ProMaC) with a prompt generator and a mask generator. The prompt generator uses a multi-scale chain of thought prompting, initially leveraging hallucinations to extract extended contextual prompts on a test image. These hallucinations are then minimized to formulate precise instance-specific prompts, directing the mask generator to produce masks that are consistent with task semantics by mask semantic alignment. Iteratively the generated masks induce the prompt generator to focus more on task-relevant image areas and reduce irrelevant hallucinations, resulting jointly in better prompts and masks. Experiments on 5 benchmarks demonstrate the effectiveness of ProMaC. Code is in https://lwpyh.github.io/ProMaC/. | https://openreview.net/pdf/2525192e4b07bcdd1c2a88415e828acd8f948e32.pdf |
Adapting Diffusion Models for Improved Prompt Compliance and Controllable Image Synthesis | https://openreview.net/forum?id=sntv8Ac3U2 | https://openreview.net/forum?id=sntv8Ac3U2 | Deepak Sridhar,Abhishek Peri,Rohith Reddy Rachala,Nuno Vasconcelos | NIPS 2024,Poster | Recent advances in generative modeling with diffusion processes (DPs) have enabled breakthroughs in image synthesis. Despite impressive image quality, these models have various prompt compliance problems, including low recall in generating multiple objects, difficulty in generating text in images, and difficulty in meeting constraints such as object locations and pose. For fine-grained editing and manipulation, they also require fine-grained semantic or instance maps that are tedious to produce manually. While prompt compliance can be enhanced by the addition of loss functions at inference, this is time-consuming and does not scale to complex scenes.
To overcome these limitations, this work introduces a new family of $\textit{Factor Graph Diffusion Models}$ (FG-DMs) that models the joint distribution of images and conditioning variables, such as semantic, sketch, depth or normal maps via a factor graph decomposition. This joint structure has several advantages, including support for efficient sampling based prompt compliance schemes, which produce images of high object recall, semi-automated fine-grained editing, explainability at intermediate levels, ability to produce labeled datasets for the training of downstream models such as segmentation or depth, training with missing data, and continual learning where new conditioning variables can be added with minimal or no modifications to the existing structure. We propose an implementation of FG-DMs by adapting a pre-trained Stable Diffusion (SD) model to implement all FG-DM factors, using only COCO dataset, and show that it is effective in generating images with 15\% higher recall than SD while retaining its generalization ability. We introduce an attention distillation loss that encourages consistency among the attention maps of all factors, improving the fidelity of the generated conditions and image. We also show that training FG-DMs from scratch on MM-CelebA-HQ, Cityscapes, ADE20K, and COCO produce images of high quality (FID) and diversity (LPIPS). | https://openreview.net/pdf/8f21ac42e948eed5568e8c93b72ba0d79d8038dd.pdf |
Du-IN: Discrete units-guided mask modeling for decoding speech from Intracranial Neural signals | https://openreview.net/forum?id=uyLtEFnpQP | https://openreview.net/forum?id=uyLtEFnpQP | Hui Zheng,Haiteng Wang,Weibang Jiang,Zhongtao Chen,Li He,Peiyang Lin,Penghu Wei,Guoguang Zhao,Yunzhe Liu | NIPS 2024,Poster | Invasive brain-computer interfaces with Electrocorticography (ECoG) have shown promise for high-performance speech decoding in medical applications, but less damaging methods like intracranial stereo-electroencephalography (sEEG) remain underexplored. With rapid advances in representation learning, leveraging abundant recordings to enhance speech decoding is increasingly attractive. However, popular methods often pre-train temporal models based on brain-level tokens, overlooking that brain activities in different regions are highly desynchronized during tasks. Alternatively, they pre-train spatial-temporal models based on channel-level tokens but fail to evaluate them on challenging tasks like speech decoding, which requires intricate processing in specific language-related areas. To address this issue, we collected a well-annotated Chinese word-reading sEEG dataset targeting language-related brain networks from 12 subjects. Using this benchmark, we developed the Du-IN model, which extracts contextual embeddings based on region-level tokens through discrete codex-guided mask modeling. Our model achieves state-of-the-art performance on the 61-word classification task, surpassing all baselines. Model comparisons and ablation studies reveal that our design choices, including (i) temporal modeling based on region-level tokens by utilizing 1D depthwise convolution to fuse channels in the ventral sensorimotor cortex (vSMC) and superior temporal gyrus (STG) and (ii) self-supervision through discrete codex-guided mask modeling, significantly contribute to this performance. Overall, our approach -- inspired by neuroscience findings and capitalizing on region-level representations from specific brain regions -- is suitable for invasive brain modeling and represents a promising neuro-inspired AI approach in brain-computer interfaces. Code and dataset are available at https://github.com/liulab-repository/Du-IN. | https://openreview.net/pdf/ab4aacca59beba81baaa0ab0761739c5e4d46f7a.pdf |
PointMamba: A Simple State Space Model for Point Cloud Analysis | https://openreview.net/forum?id=Kc37srXvan | https://openreview.net/forum?id=Kc37srXvan | Dingkang Liang,Xin Zhou,Wei Xu,Xingkui Zhu,Zhikang Zou,Xiaoqing Ye,Xiao Tan,Xiang Bai | NIPS 2024,Poster | Transformers have become one of the foundational architectures in point cloud analysis tasks due to their excellent global modeling ability. However, the attention mechanism has quadratic complexity, making the design of a linear complexity method with global modeling appealing. In this paper, we propose PointMamba, transferring the success of Mamba, a recent representative state space model (SSM), from NLP to point cloud analysis tasks. Unlike traditional Transformers, PointMamba employs a linear complexity algorithm, presenting global modeling capacity while significantly reducing computational costs. Specifically, our method leverages space-filling curves for effective point tokenization and adopts an extremely simple, non-hierarchical Mamba encoder as the backbone. Comprehensive evaluations demonstrate that PointMamba achieves superior performance across multiple datasets while significantly reducing GPU memory usage and FLOPs. This work underscores the potential of SSMs in 3D vision-related tasks and presents a simple yet effective Mamba-based baseline for future research. The code is available at https://github.com/LMD0311/PointMamba. | https://openreview.net/pdf/6c43071be046c8aeb21e7669ad6466fc929de3f9.pdf |
Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? | https://openreview.net/forum?id=12A1RT1L87 | https://openreview.net/forum?id=12A1RT1L87 | Lingao Xiao,Yang He | NIPS 2024,Poster | In ImageNet-condensation, the storage for auxiliary soft labels exceeds that of the condensed dataset by over 30 times.
However, are large-scale soft labels necessary for large-scale dataset distillation?
In this paper, we first discover that the high within-class similarity in condensed datasets necessitates the use of large-scale soft labels.
This high within-class similarity can be attributed to the fact that previous methods use samples from different classes to construct a single batch for batch normalization (BN) matching.
To reduce the within-class similarity, we introduce class-wise supervision during the image synthesizing process by batching the samples within classes, instead of across classes.
As a result, we can increase within-class diversity and reduce the size of required soft labels.
A key benefit of improved image diversity is that soft label compression can be achieved through simple random pruning, eliminating the need for complex rule-based strategies. Experiments validate our discoveries.
For example, when condensing ImageNet-1K to 200 images per class, our approach compresses the required soft labels from 113 GB to 2.8 GB (40$\times$ compression) with a 2.6\% performance gain.
Code is available at: https://github.com/he-y/soft-label-pruning-for-dataset-distillation | https://openreview.net/pdf/468e1b0cf32e2ed29a29f0b20ae5414d0a8576e3.pdf |
Rethinking the Power of Timestamps for Robust Time Series Forecasting: A Global-Local Fusion Perspective | https://openreview.net/forum?id=EY2agT920S | https://openreview.net/forum?id=EY2agT920S | Chengsen Wang,Qi Qi,Jingyu Wang,Haifeng Sun,Zirui Zhuang,Jinming Wu,Jianxin Liao | NIPS 2024,Poster | Time series forecasting has played a pivotal role across various industries, including finance, transportation, energy, healthcare, and climate. Due to the abundant seasonal information they contain, timestamps possess the potential to offer robust global guidance for forecasting techniques. However, existing works primarily focus on local observations, with timestamps being treated merely as an optional supplement that remains underutilized. When data gathered from the real world is polluted, the absence of global information will damage the robust prediction capability of these algorithms. To address these problems, we propose a novel framework named GLAFF. Within this framework, the timestamps are modeled individually to capture the global dependencies. Working as a plugin, GLAFF adaptively adjusts the combined weights for global and local information, enabling seamless collaboration with any time series forecasting backbone. Extensive experiments conducted on nine real-world datasets demonstrate that GLAFF significantly enhances the average performance of widely used mainstream forecasting models by 12.5\%, surpassing the previous state-of-the-art method by 5.5\%. | https://openreview.net/pdf/41b5b204714945da8e650b9a49331196fb015b70.pdf |
How Control Information Influences Multilingual Text Image Generation and Editing? | https://openreview.net/forum?id=r3c0WGCXgt | https://openreview.net/forum?id=r3c0WGCXgt | Boqiang Zhang,Zuan Gao,Yadong Qu,Hongtao Xie | NIPS 2024,Poster | Visual text generation has significantly advanced through diffusion models aimed at producing images with readable and realistic text. Recent works primarily use a ControlNet-based framework, employing standard font text images to control diffusion models. Recognizing the critical role of control information in generating high-quality text, we investigate its influence from three perspectives: input encoding, role at different stages, and output features. Our findings reveal that: 1) Input control information has unique characteristics compared to conventional inputs like Canny edges and depth maps. 2) Control information plays distinct roles at different stages of the denoising process. 3) Output control features significantly differ from the base and skip features of the U-Net decoder in the frequency domain. Based on these insights, we propose TextGen, a novel framework designed to enhance generation quality by optimizing control information. We improve input and output features using Fourier analysis to emphasize relevant information and reduce noise. Additionally, we employ a two-stage generation framework to align the different roles of control information at different stages. Furthermore, we introduce an effective and lightweight dataset for training. Our method achieves state-of-the-art performance in both Chinese and English text generation. The code and dataset are available at https://github.com/CyrilSterling/TextGen. | https://openreview.net/pdf/7a553d8fd3667ac8a79ed09b379e81b341dcb53d.pdf |
Unleashing Multispectral Video's Potential in Semantic Segmentation: A Semi-supervised Viewpoint and New UAV-View Benchmark | https://openreview.net/forum?id=pLoX8Og3bH | https://openreview.net/forum?id=pLoX8Og3bH | Wei Ji,Jingjing Li,Wenbo Li,Yilin Shen,Li cheng,Hongxia Jin | NIPS 2024,Poster | Thanks to the rapid progress in RGB & thermal imaging, also known as multispectral imaging, the task of multispectral video semantic segmentation, or MVSS in short, has recently drawn significant attention. Notably, it offers new opportunities in improving segmentation performance under unfavorable visual conditions such as poor light or overexposure. Unfortunately, there are currently very few datasets available, including, for example, the MVSeg dataset, which focuses purely on the eye-level view and features sparse annotations due to the intensive demands of the labeling process. To address these key challenges of the MVSS task, this paper presents two major contributions: the introduction of MVUAV, a new MVSS benchmark dataset, and the development of a dedicated semi-supervised MVSS baseline, SemiMV. Our MVUAV dataset is captured via Unmanned Aerial Vehicles (UAV), which offers a unique oblique bird’s-eye view complementary to the existing MVSS datasets; it also encompasses a broad range of day/night lighting conditions and over 30 semantic categories. In the meantime, to better leverage the sparse annotations and extra unlabeled RGB-Thermal videos, a semi-supervised learning baseline, SemiMV, is proposed to enforce consistency regularization through a dedicated Cross-collaborative Consistency Learning (C3L) module and a denoised temporal aggregation strategy. Comprehensive empirical evaluations on both MVSeg and MVUAV benchmark datasets have showcased the efficacy of our SemiMV baseline. | https://openreview.net/pdf/9c9d58797ce5526a1f0dff317e669cf54103606d.pdf |
GS-Hider: Hiding Messages into 3D Gaussian Splatting | https://openreview.net/forum?id=3XLQp2Xx3J | https://openreview.net/forum?id=3XLQp2Xx3J | Xuanyu Zhang,Jiarui Meng,Runyi Li,Zhipei Xu,Yongbing Zhang,Jian Zhang | NIPS 2024,Poster | 3D Gaussian Splatting (3DGS) has already become the emerging research focus in the fields of 3D scene reconstruction and novel view synthesis. Given that training a 3DGS requires a significant amount of time and computational cost, it is crucial to protect the copyright, integrity, and privacy of such 3D assets. Steganography, as a crucial technique for encrypted transmission and copyright protection, has been extensively studied. However, it still lacks profound exploration targeted at 3DGS. Unlike its predecessor NeRF, 3DGS possesses two distinct features: 1) explicit 3D representation; and 2) real-time rendering speeds. These characteristics result in the 3DGS point cloud files being public and transparent, with each Gaussian point having a clear physical significance. Therefore, ensuring the security and fidelity of the original 3D scene while embedding information into the 3DGS point cloud files is an extremely challenging task. To solve the above-mentioned issue, we first propose a steganography framework for 3DGS, dubbed GS-Hider, which can embed 3D scenes and images into original GS point clouds in an invisible manner and accurately extract the hidden messages. Specifically, we design a coupled secured feature attribute to replace the original 3DGS's spherical harmonics coefficients and then use a scene decoder and a message decoder to disentangle the original RGB scene and the hidden message. Extensive experiments demonstrated that the proposed GS-Hider can effectively conceal multimodal messages without compromising rendering quality and possesses exceptional security, robustness, capacity, and flexibility. Our project is available at: https://xuanyuzhang21.github.io/project/gshider. | https://openreview.net/pdf/c5a044975ea8f69343dcaee260dec23386a096ed.pdf |
LCGen: Mining in Low-Certainty Generation for View-consistent Text-to-3D | https://openreview.net/forum?id=4wgzkAyi2D | https://openreview.net/forum?id=4wgzkAyi2D | Zeng Tao,Tong Yang,Junxiong Lin,Xinji Mai,Haoran Wang,Beining Wang,Enyu Zhou,Yan Wang,Wenqiang Zhang | NIPS 2024,Poster | The Janus Problem is a common issue in SDS-based text-to-3D methods. Due to view encoding approach and 2D diffusion prior guidance, the 3D representation model tends to learn content with higher certainty from each perspective, leading to view inconsistency. In this work, we first model and analyze the problem, visualizing the specific causes of the Janus Problem, which are associated with discrete view encoding and shared priors in 2D lifting. Based on this, we further propose the LCGen method, which guides text-to-3D to obtain different priors with different certainty from various viewpoints, aiding in view-consistent generation. Experiments have proven that our LCGen method can be directly applied to different SDS-based text-to-3D methods, alleviating the Janus Problem without introducing additional information, increasing excessive training burden, or compromising the generation effect. | https://openreview.net/pdf/7967e7cfad66c1e4836710a5938e1ba56055d56d.pdf |
DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States | https://openreview.net/forum?id=rbtnRsiXSN | https://openreview.net/forum?id=rbtnRsiXSN | Bozhou Zhang,Nan Song,Li Zhang | NIPS 2024,Poster | Accurate motion forecasting for traffic agents is crucial for ensuring the safety and efficiency of autonomous driving systems in dynamically changing environments. Mainstream methods adopt a one-query-one-trajectory paradigm, where each query corresponds to a unique trajectory for predicting multi-modal trajectories. While straightforward and effective, the absence of detailed representation of future trajectories may yield suboptimal outcomes, given that the agent states dynamically evolve over time. To address this problem, we introduce DeMo, a framework that decouples multi-modal trajectory queries into two types: mode queries capturing distinct directional intentions and state queries tracking the agent's dynamic states over time. By leveraging this format, we separately optimize the multi-modality and dynamic evolutionary properties of trajectories. Subsequently, the mode and state queries are integrated to obtain a comprehensive and detailed representation of the trajectories. To achieve these operations, we additionally introduce combined Attention and Mamba techniques for global information aggregation and state sequence modeling, leveraging their respective strengths. Extensive experiments on both the Argoverse 2 and nuScenes benchmarks demonstrate that our DeMo achieves state-of-the-art performance in motion forecasting. In addition, we will make our code and models publicly available. | https://openreview.net/pdf/3d4994978d6ca281a3b560f62d73cecc4d2310f7.pdf |
Boosting Weakly Supervised Referring Image Segmentation via Progressive Comprehension | https://openreview.net/forum?id=MxdyGXoK9h | https://openreview.net/forum?id=MxdyGXoK9h | Zaiquan Yang,Yuhao LIU,Jiaying Lin,Gerhard Petrus Hancke,Rynson W. H. Lau | NIPS 2024,Poster | This paper explores the weakly-supervised referring image segmentation (WRIS) problem, and focuses on a challenging setup where target localization is learned directly from image-text pairs.
We note that the input text description typically already contains detailed information on how to localize the target object, and we also observe that humans often follow a step-by-step comprehension process (\ie, progressively utilizing target-related attributes and relations as cues) to identify the target object.
Hence, we propose a novel Progressive Comprehension Network (PCNet) to leverage target-related textual cues from the input description for progressively localizing the target object.
Specifically, we first use a Large Language Model (LLM) to decompose the input text description into short phrases. These short phrases are taken as target-related cues and fed into a Conditional Referring Module (CRM) in multiple stages, to allow updating the referring text embedding and enhance the response map for target localization in a multi-stage manner.
Based on the CRM, we then propose a Region-aware Shrinking (RaS) loss to constrain the visual localization to be conducted progressively in a coarse-to-fine manner across different stages.
Finally, we introduce an Instance-aware Disambiguation (IaD) loss to suppress instance localization ambiguity by differentiating overlapping response maps generated by different referring texts on the same image.
Extensive experiments show that our method outperforms SOTA methods on three common benchmarks. | https://openreview.net/pdf/53044cb3e201c88c1c34493cdae408cf6290cdb5.pdf |
Vivid-ZOO: Multi-View Video Generation with Diffusion Model | https://openreview.net/forum?id=bPOaHf8OcX | https://openreview.net/forum?id=bPOaHf8OcX | Bing Li,Cheng Zheng,Wenxuan Zhu,Jinjie Mai,Biao Zhang,Peter Wonka,Bernard Ghanem | NIPS 2024,Poster | While diffusion models have shown impressive performance in 2D image/video generation, diffusion-based Text-to-Multi-view-Video (T2MVid) generation remains underexplored. The new challenges posed by T2MVid generation lie in the lack of massive captioned multi-view videos and the complexity of modeling such multi-dimensional distribution. To this end, we propose a novel diffusion-based pipeline that generates high-quality multi-view videos centered around a dynamic 3D object from text. Specifically, we factor the T2MVid problem into viewpoint-space and time components. Such factorization allows us to combine and reuse layers of advanced pre-trained multi-view image and 2D video diffusion models to ensure multi-view consistency as well as temporal coherence for the generated multi-view videos, largely reducing the training cost. We further introduce alignment modules to align the latent spaces of layers from the pre-trained multi-view and the 2D video diffusion models, addressing the reused layers' incompatibility that arises from the domain gap between 2D and multi-view data. In support of this and future research, we further contribute a captioned multi-view video dataset. Experimental results demonstrate that our method generates high-quality multi-view videos, exhibiting vivid motions, temporal coherence, and multi-view consistency, given a variety of text prompts. | https://openreview.net/pdf/949ed66b8316d0874caae8ba304706ae56eb511b.pdf |
Zero-shot Generalizable Incremental Learning for Vision-Language Object Detection | https://openreview.net/forum?id=ZNqHm0a35E | https://openreview.net/forum?id=ZNqHm0a35E | Jieren Deng,Haojian Zhang,Kun Ding,Jianhua Hu,Xingxuan Zhang,Yunkuan Wang | NIPS 2024,Poster | This paper presents Incremental Vision-Language Object Detection (IVLOD), a novel learning task designed to incrementally adapt pre-trained Vision-Language Object Detection Models (VLODMs) to various specialized domains, while simultaneously preserving their zero-shot generalization capabilities for the generalized domain. To address this new challenge, we present the Zero-interference Reparameterizable Adaptation (ZiRa), a novel method that introduces Zero-interference Loss and reparameterization techniques to tackle IVLOD without incurring a significant increase in memory usage. Comprehensive experiments on COCO and ODinW-13 datasets demonstrate that ZiRa effectively safeguards the zero-shot generalization ability of VLODMs while continuously adapting to new tasks. Specifically, after training on ODinW-13 datasets, ZiRa exhibits superior performance compared to CL-DETR and iDETR, boosting zero-shot generalizability by substantial $\textbf{13.91}$ and $\textbf{8.74}$ AP, respectively. Our code is available at https://github.com/JarintotionDin/ZiRaGroundingDINO. | https://openreview.net/pdf/89d80970877f4567bd432a47e2e7b01828df5b13.pdf |
LION: Linear Group RNN for 3D Object Detection in Point Clouds | https://openreview.net/forum?id=5tGkAcY7uV | https://openreview.net/forum?id=5tGkAcY7uV | Zhe Liu,Jinghua Hou,Xinyu Wang,Xiaoqing Ye,Jingdong Wang,Hengshuang Zhao,Xiang Bai | NIPS 2024,Poster | The benefit of transformers in large-scale 3D point cloud perception tasks, such as 3D object detection, is limited by their quadratic computation cost when modeling long-range relationships. In contrast, linear RNNs have low computational complexity and are suitable for long-range modeling. Toward this goal, we propose a simple and effective window-based framework built on Linear group RNN (i.e., perform linear RNN for grouped features) for accurate 3D object detection, called LION. The key property is to allow sufficient feature interaction in a much larger group than transformer-based methods. However, effectively applying linear group RNN to 3D object detection in highly sparse point clouds is not trivial due to its limitation in handling spatial modeling. To tackle this problem, we simply introduce a 3D spatial feature descriptor and integrate it into the linear group RNN operators to enhance their spatial features rather than blindly increasing the number of scanning orders for voxel features. To further address the challenge in highly sparse point clouds, we propose a 3D voxel generation strategy to densify foreground features thanks to linear group RNN as a natural property of auto-regressive models.
Extensive experiments verify the effectiveness of the proposed components and the generalization of our LION on different linear group RNN operators including Mamba, RWKV, and RetNet. Furthermore, it is worth mentioning that our LION-Mamba achieves state-of-the-art performance on the Waymo, nuScenes, Argoverse V2, and ONCE datasets. Last but not least, our method supports various advanced linear RNN operators (e.g., RetNet, RWKV, Mamba, xLSTM and TTT) on the small but popular KITTI dataset for a quick experience with our linear RNN-based framework. | https://openreview.net/pdf/4328f471f60186bd07dea76178ce8929385cb6e4.pdf |
DEPrune: Depth-wise Separable Convolution Pruning for Maximizing GPU Parallelism | https://openreview.net/forum?id=MYI443zCvv | https://openreview.net/forum?id=MYI443zCvv | Cheonjun Park,Mincheol Park,Hyunchan Moon,Myung Kuk Yoon,Seokjin Go,Suhyun Kim,Won Woo Ro | NIPS 2024,Poster | Depth-wise Separable Convolution (DSConv) has a powerful representation even with fewer parameters and computation, leading to its adoption by almost all of the state-of-the-art CNN models.
DSConv models are already compact, which makes pruning difficult to apply, and few previous pruning techniques target depth-wise convolution (DW-conv).
In this paper, we present Depth-wise Separable Convolution Pruning (DEPrune), a novel pruning method applied to both point-wise and depth-wise convolutions.
DEPrune is optimized by analyzing the computation of DSConv on GPUs.
DEPrune employs a fine-grained pruning approach, yet it achieves the structured sparsity typically absent in fine-grained pruning, enabling practical hardware acceleration.
Moreover, this method maintains a high pruning ratio without causing any accuracy drop.
We additionally present two techniques that further enhance DEPrune's performance: 1) balanced workload tuning (BWT), and 2) hardware-aware sparsity recalibration (HSR).
Experiment results show that DEPrune achieves up to $3.74\times$ practical speedup in DSConv inference on GPUs while maintaining the accuracy of EfficientNet-B0 on ImageNet. | https://openreview.net/pdf/13de79efe94138da96e04b11a8f6613a18c8052b.pdf |
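For readers unfamiliar with the structure DEPrune prunes, the following is a minimal PyTorch sketch of a depth-wise separable convolution block (a depth-wise convolution followed by a point-wise convolution); the channel sizes and layer composition are illustrative and not taken from the paper.

```python
# Minimal sketch of a depth-wise separable convolution (DSConv) block in
# PyTorch -- the structure DEPrune targets. Channel counts are illustrative.
import torch
import torch.nn as nn

class DSConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depth-wise conv: one 3x3 filter per input channel (groups=in_ch).
        self.dw = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                            padding=1, groups=in_ch, bias=False)
        # Point-wise conv: 1x1 filters mixing information across channels.
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pw(self.dw(x))))

x = torch.randn(1, 32, 56, 56)
print(DSConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```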
Faster Neighborhood Attention: Reducing the O(n^2) Cost of Self Attention at the Threadblock Level | https://openreview.net/forum?id=8Ofbg2KYMu | https://openreview.net/forum?id=8Ofbg2KYMu | Ali Hassani,Wen-mei Hwu,Humphrey Shi | NIPS 2024,Poster | Neighborhood attention reduces the cost of self attention by restricting each token’s attention span to its nearest neighbors. This restriction, parameterized by a window size and dilation factor, draws a spectrum of possible attention patterns between linear projection and self attention. Neighborhood attention, and more generally sliding window attention patterns, have long been bounded by infrastructure, particularly in higher-rank spaces (2-D and 3-D), calling for the development of custom kernels, which have been limited in either functionality, or performance, if not both. In this work, we aim to massively improve upon existing infrastructure by providing two new methods for implementing neighborhood attention. We first show that neighborhood attention can be represented as a batched GEMM problem, similar to standard attention, and implement it for 1-D and 2-D neighborhood attention. These kernels on average provide 895% and 272% improvement in full precision runtime compared to existing naive CUDA kernels for 1-D and 2-D neighborhood attention respectively. We find that aside from being heavily bound by memory bandwidth, certain inherent inefficiencies exist in all unfused implementations of neighborhood attention, which in most cases undo their theoretical efficiency gain. Motivated by the progress made into fused dot-product attention kernels, we developed fused neighborhood attention; an adaptation of fused dot-product attention kernels that allow fine-grained control over attention across different spatial axes. Known for reducing the quadratic time complexity of self attention to a linear complexity, neighborhood attention can now enjoy a reduced and constant memory footprint, and record-breaking half precision runtime. We observe that our fused implementation successfully circumvents some of the unavoidable inefficiencies in unfused implementations. While our unfused GEMM-based kernels only improve half precision performance compared to naive kernels by an average of 548% and 193% in 1-D and 2-D problems respectively, our fused kernels improve naive kernels by an average of 1759% and 958% in 1-D and 2-D problems respectively. These improvements translate into up to 104% improvement in inference and 39% improvement in training existing models based on neighborhood attention, and additionally extend its applicability to image and video perception, as well as other modalities. Our work is open-sourced at https://github.com/SHI-Labs/NATTEN/. | https://openreview.net/pdf/c776c48ab28b80d61ecdc8e1892789844332ad6c.pdf |
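As a concrete reference for the attention pattern described in the Faster Neighborhood Attention abstract above, here is a naive single-head 1-D neighborhood attention loop in PyTorch. It reflects the general windowed-attention idea (with border clamping) rather than the paper's batched-GEMM or fused CUDA kernels, and the shapes and window size are illustrative.

```python
# Naive 1-D neighborhood attention for a single head: each token attends only
# to a window of `window` tokens centered on it (clamped at the borders).
# This is a plain reference loop, not an optimized or fused kernel.
import torch

def neighborhood_attention_1d(q, k, v, window: int = 7):
    n, d = q.shape
    half = window // 2
    out = torch.empty_like(v)
    for i in range(n):
        # Clamp the window near the borders so each token still sees
        # (up to) `window` neighbors.
        start = min(max(i - half, 0), max(n - window, 0))
        end = min(start + window, n)
        scores = (q[i] @ k[start:end].T) / d ** 0.5
        out[i] = torch.softmax(scores, dim=-1) @ v[start:end]
    return out

q = k = v = torch.randn(16, 32)
print(neighborhood_attention_1d(q, k, v).shape)  # torch.Size([16, 32])
```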
NeuralFluid: Neural Fluidic System Design and Control with Differentiable Simulation | https://openreview.net/forum?id=LLsOmvJbBm | https://openreview.net/forum?id=LLsOmvJbBm | Yifei Li,Yuchen Sun,Pingchuan Ma,Eftychios Sifakis,Tao Du,Bo Zhu,Wojciech Matusik | NIPS 2024,Poster | We present NeuralFluid, a novel framework to explore neural control and design of complex fluidic systems with dynamic solid boundaries. Our system features a fast differentiable Navier-Stokes solver with solid-fluid interface handling, a low-dimensional differentiable parametric geometry representation, a control-shape co-design algorithm, and gym-like simulation environments to facilitate various fluidic control design applications. Additionally, we present a benchmark of design, control, and learning tasks on high-fidelity, high-resolution dynamic fluid environments that pose challenges for existing differentiable fluid simulators. These tasks include designing the control of artificial hearts, identifying robotic end-effector shapes, and controlling a fluid gate. By seamlessly incorporating our differentiable fluid simulator into a learning framework, we demonstrate successful design, control, and learning results that surpass gradient-free solutions in these benchmark tasks. | https://openreview.net/pdf/6be38b86468a5d1764ac6fa70b4d9ff250e0f78e.pdf |
Why are Visually-Grounded Language Models Bad at Image Classification? | https://openreview.net/forum?id=MwmmBg1VYg | https://openreview.net/forum?id=MwmmBg1VYg | Yuhui Zhang,Alyssa Unell,Xiaohan Wang,Dhruba Ghosh,Yuchang Su,Ludwig Schmidt,Serena Yeung-Levy | NIPS 2024,Poster | Image classification is one of the most fundamental capabilities of machine vision intelligence. In this work, we revisit the image classification task using visually-grounded language models (VLMs) such as GPT-4V and LLaVA. We find that existing proprietary and public VLMs, despite often using CLIP as a vision encoder and having many more parameters, significantly underperform CLIP on standard image classification benchmarks like ImageNet. To understand the reason, we explore several hypotheses concerning the inference algorithms, training objectives, and data processing in VLMs. Our analysis reveals that the primary cause is data-related: critical information for image classification is encoded in the VLM's latent space but can only be effectively decoded with enough training data. Specifically, there is a strong correlation between the frequency of class exposure during VLM training and instruction-tuning and the VLM's performance in those classes; when trained with sufficient data, VLMs can match the accuracy of state-of-the-art classification models. Based on these findings, we enhance a VLM by integrating classification-focused datasets into its training, and demonstrate that the enhanced classification performance of the VLM transfers to its general capabilities, resulting in an improvement of 11.8% on the newly collected ImageWikiQA dataset. | https://openreview.net/pdf/ee10974e360c467b026d1c2bea32baa06d183b17.pdf |
Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models | https://openreview.net/forum?id=7b2DrIBGZz | https://openreview.net/forum?id=7b2DrIBGZz | Bingqi Ma,Zhuofan Zong,Guanglu Song,Hongsheng Li,Yu Liu | NIPS 2024,Poster | Large language models based on decoder-only transformers have demonstrated superior text understanding capabilities compared to CLIP and T5-series models.
However, the paradigm for utilizing current advanced LLMs in text-to-image diffusion models remains to be explored.
We observed an unusual phenomenon: directly using a large language model as the prompt encoder significantly degrades the prompt-following ability in image generation.
We identified two main obstacles behind this issue.
One is the misalignment between the next token prediction training in LLM and the requirement for discriminative prompt features in diffusion models.
The other is the intrinsic positional bias introduced by the decoder-only architecture.
To deal with this issue, we propose a novel framework to fully harness the capabilities of LLMs.
Through the carefully designed usage guidance, we effectively enhance the text representation capability of the LLM for prompt encoding and eliminate its inherent positional bias.
This allows us to flexibly integrate state-of-the-art LLMs into the text-to-image generation model.
Furthermore, we also provide an effective manner to fuse multiple LLMs into our framework.
Considering the excellent performance and scaling capabilities demonstrated by the transformer architecture, we further design an LLM-Infused Diffusion Transformer (LI-DIT) based on the framework.
We conduct extensive experiments to validate LI-DIT across model size and data size.
Benefiting from the inherent ability of the LLMs and our innovative designs, the prompt understanding performance of LI-DIT easily surpasses state-of-the-art open-source models as well as mainstream closed-source commercial models including Stable Diffusion 3, DALL-E 3, and Midjourney V6. | https://openreview.net/pdf/9374523072d5327e80d14acc0c98068a0a4269ed.pdf |
Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM | https://openreview.net/forum?id=vJSNsSFO95 | https://openreview.net/forum?id=vJSNsSFO95 | Chenxin Li,Yuzhihuang,Wuyang Li,Hengyu Liu,Xinyu Liu,Qing Xu,Zhen Chen,Yue Huang,Yixuan Yuan | NIPS 2024,Poster | As the vision foundation models like the Segment Anything Model (SAM) demonstrate potent universality, they also present challenges in giving ambiguous and uncertain predictions. Significant variations in the model output and granularity can occur with simply subtle changes in the prompt, contradicting the consensus requirement for the robustness of a model. While some established works have been dedicated to stabilizing and fortifying the prediction of SAM, this paper takes a unique path to explore how this flaw can be inverted into an advantage when modeling inherently ambiguous data distributions. We introduce an optimization framework based on a conditional variational autoencoder, which jointly models the prompt and the granularity of the object with a latent probability distribution. This approach enables the model to adaptively perceive and represent the real ambiguous label distribution, taming SAM to produce a series of diverse, convincing, and reasonable segmentation outputs controllably. Extensive experiments on several practical deployment scenarios involving ambiguity demonstrates the exceptional performance of our framework. Project page: \url{https://a-sa-m.github.io/}. | https://openreview.net/pdf/d9d0ed08e91694b0c1b594f2e8d5bece62aa7179.pdf |
Learning Frequency-Adapted Vision Foundation Model for Domain Generalized Semantic Segmentation | https://openreview.net/forum?id=b7hmPlOqr8 | https://openreview.net/forum?id=b7hmPlOqr8 | Qi Bi,Jingjun Yi,Hao Zheng,Haolan Zhan,Yawen Huang,Wei Ji,Yuexiang Li,Yefeng Zheng | NIPS 2024,Poster | The emerging vision foundation model (VFM) has inherited the ability to generalize to unseen images.
Nevertheless, the key challenge of domain-generalized semantic segmentation (DGSS) lies in the domain gap attributed to the cross-domain styles, i.e., the variance of urban landscape and environment dependencies.
Hence, maintaining the style-invariant property with varying domain styles becomes the key bottleneck in harnessing VFM for DGSS.
The frequency space after Haar wavelet transformation provides a feasible way to decouple the style information from the domain-invariant content, since the content and style information are retained in the low- and high-frequency components of the space, respectively.
To this end, we propose a novel Frequency-Adapted (FADA) learning scheme to advance the frontier.
Its overall idea is to separately tackle the content and style information by frequency tokens throughout the learning process.
Particularly, the proposed FADA consists of two branches, i.e., low- and high-frequency branches. The former stabilizes the scene content, while the latter learns the scene styles and eliminates their impact on DGSS.
Experiments conducted on various DGSS settings show the state-of-the-art performance of our FADA and its versatility to a variety of VFMs.
Source code is available at \url{https://github.com/BiQiWHU/FADA}. | https://openreview.net/pdf/0c0c79a1d0918a562578a6842280b3da2fdf8e3f.pdf |
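The frequency decomposition FADA builds on can be illustrated with a single-level 2-D Haar transform, which splits a feature map into one low-frequency approximation and three high-frequency detail components. The sketch below is a generic Haar decomposition (sign and normalization conventions vary) and is not the paper's implementation.

```python
# Minimal single-level 2-D Haar decomposition: splits an image-like feature
# map into a low-frequency (content) component and high-frequency
# (style/detail) components. Shapes are illustrative.
import torch

def haar_dwt2(x: torch.Tensor):
    """x: (B, C, H, W) with even H and W. Returns (LL, LH, HL, HH)."""
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2   # low-frequency approximation (content)
    lh = (a - b + c - d) / 2   # horizontal detail
    hl = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

x = torch.randn(2, 3, 64, 64)
ll, lh, hl, hh = haar_dwt2(x)
print(ll.shape)  # torch.Size([2, 3, 32, 32])
```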
Diffusion Models are Certifiably Robust Classifiers | https://openreview.net/forum?id=wGP1tBCP1E | https://openreview.net/forum?id=wGP1tBCP1E | Huanran Chen,Yinpeng Dong,Shitong Shao,Zhongkai Hao,Xiao Yang,Hang Su,Jun Zhu | NIPS 2024,Poster | Generative learning, recognized for its effective modeling of data distributions, offers inherent advantages in handling out-of-distribution instances, especially for enhancing robustness to adversarial attacks. Among these, diffusion classifiers, utilizing powerful diffusion models, have demonstrated superior empirical robustness. However, a comprehensive theoretical understanding of their robustness is still lacking, raising concerns about their vulnerability to stronger future attacks. In this study, we prove that diffusion classifiers possess $O(1)$ Lipschitzness, and establish their certified robustness, demonstrating their inherent resilience. To achieve non-constant Lipschitzness, thereby obtaining much tighter certified robustness, we generalize diffusion classifiers to classify Gaussian-corrupted data. This involves deriving the evidence lower bounds (ELBOs) for these distributions, approximating the likelihood using the ELBO, and calculating classification probabilities via Bayes' theorem. Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs). Notably, we achieve over 80\% and 70\% certified robustness on CIFAR-10 under adversarial perturbations with \(\ell_2\) norms less than 0.25 and 0.5, respectively, using a single off-the-shelf diffusion model without any additional data. | https://openreview.net/pdf/62eb54961c7645deb0bc1355cc5f8a275a3c9c1e.pdf |
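The diffusion-classifier decision rule analyzed in the abstract above can be sketched as follows: approximate the class-conditional log-likelihood log p(x | y) with the diffusion ELBO and apply Bayes' theorem. The snippet below is only a schematic with a dummy stand-in for the ELBO, not the paper's noised diffusion classifier.

```python
# Sketch of a diffusion classifier's decision rule: approximate the class-
# conditional log-likelihood log p(x | y) with the diffusion ELBO, then apply
# Bayes' theorem. `elbo_fn` stands in for a real class-conditional diffusion
# model; the dummy below is only for illustration.
import math

def diffusion_classify(x, elbo_fn, num_classes: int, log_prior=None):
    if log_prior is None:  # assume a uniform prior over classes
        log_prior = [math.log(1.0 / num_classes)] * num_classes
    # log p(y | x) ∝ log p(x | y) + log p(y), with log p(x | y) ≈ ELBO(x, y)
    scores = [elbo_fn(x, y) + log_prior[y] for y in range(num_classes)]
    return max(range(num_classes), key=lambda y: scores[y])

# Dummy ELBO for demonstration: favors the class index closest to the input.
dummy_elbo = lambda x, y: -abs(x - y)
print(diffusion_classify(0.9, dummy_elbo, num_classes=3))  # 1
```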
HiCoM: Hierarchical Coherent Motion for Dynamic Streamable Scenes with 3D Gaussian Splatting | https://openreview.net/forum?id=De4VWE4rbz | https://openreview.net/forum?id=De4VWE4rbz | Qiankun Gao,Jiarui Meng,Chengxiang Wen,Jie Chen,Jian Zhang | NIPS 2024,Poster | The online reconstruction of dynamic scenes from multi-view streaming videos faces significant challenges in training, rendering and storage efficiency. Harnessing superior learning speed and real-time rendering capabilities, 3D Gaussian Splatting (3DGS) has recently demonstrated considerable potential in this field. However, 3DGS can be inefficient in terms of storage and prone to overfitting by excessively growing Gaussians, particularly with limited views. This paper proposes an efficient framework, dubbed HiCoM, with three key components. First, we construct a compact and robust initial 3DGS representation using a perturbation smoothing strategy. Next, we introduce a Hierarchical Coherent Motion mechanism that leverages the inherent non-uniform distribution and local consistency of 3D Gaussians to swiftly and accurately learn motions across frames. Finally, we continually refine the 3DGS with additional Gaussians, which are later merged into the initial 3DGS to maintain consistency with the evolving scene. To preserve a compact representation, an equivalent number of low-opacity Gaussians that minimally impact the representation are removed before processing subsequent frames. Extensive experiments conducted on two widely used datasets show that our framework improves learning efficiency of the state-of-the-art methods by about 20% and reduces the data storage by 85%, achieving competitive free-viewpoint video synthesis quality but with higher robustness and stability. Moreover, by parallel learning multiple frames simultaneously, our HiCoM decreases the average training wall time to <2 seconds per frame with negligible performance degradation, substantially boosting real-world applicability and responsiveness. | https://openreview.net/pdf/17de5e34c47af4609af59ab7a10213ac03599699.pdf |
SlowFocus: Enhancing Fine-grained Temporal Understanding in Video LLM | https://openreview.net/forum?id=FOkKndty5B | https://openreview.net/forum?id=FOkKndty5B | Ming Nie,Dan Ding,Chunwei Wang,Yuanfan Guo,Jianhua Han,Hang Xu,Li Zhang | NIPS 2024,Poster | Large language models (LLMs) have demonstrated exceptional capabilities in text understanding, which has paved the way for their expansion into video LLMs (Vid-LLMs) to analyze video data. However, current Vid-LLMs struggle to simultaneously retain high-quality frame-level semantic information (i.e., a sufficient number of tokens per frame) and comprehensive video-level temporal information (i.e., an adequate number of sampled frames per video). This limitation hinders the advancement of Vid-LLMs towards fine-grained video understanding. To address this issue, we introduce the SlowFocus mechanism, which significantly enhances the equivalent sampling frequency without compromising the quality of frame-level visual tokens. SlowFocus begins by identifying the query-related temporal segment based on the posed question, then performs dense sampling on this segment to extract local high-frequency features. A multi-frequency mixing attention module is further leveraged to aggregate these local high-frequency details with global low-frequency contexts for enhanced temporal comprehension. Additionally, to tailor Vid-LLMs to this innovative mechanism, we introduce a set of training strategies aimed at bolstering both temporal grounding and detailed temporal reasoning capabilities. Furthermore, we establish FineAction-CGR, a benchmark specifically devised to assess the ability of Vid-LLMs to process fine-grained temporal understanding tasks. Comprehensive experiments demonstrate the superiority of our mechanism across both existing public video understanding benchmarks and our proposed FineAction-CGR. | https://openreview.net/pdf/b162bd6fe6ae72d43cefca3afa10350af7f2aaff.pdf |
Evaluate then Cooperate: Shapley-based View Cooperation Enhancement for Multi-view Clustering | https://openreview.net/forum?id=xoc4QOvbDs | https://openreview.net/forum?id=xoc4QOvbDs | Fangdi Wang,Jiaqi Jin,Jingtao Hu,Suyuan Liu,Xihong Yang,Siwei Wang,Xinwang Liu,En Zhu | NIPS 2024,Poster | The fundamental goal of deep multi-view clustering is to achieve preferable task performance through inter-view cooperation. Although numerous DMVC approaches have been proposed, the collaboration role of individual views has not been well investigated in the existing literature. Moreover, how to further enhance view cooperation for better fusion still needs to be explored. In this paper, we first consider DMVC as an unsupervised cooperative game where each view can be regarded as a participant. Then, we introduce the Shapley value and propose a novel MVC framework termed Shapley-based Cooperation Enhancing Multi-view Clustering (SCE-MVC), which evaluates view cooperation with game theory. Specifically, we employ the optimal transport distance between the fused cluster distributions and each single-view component as the utility function for computing Shapley values. Afterwards, we apply Shapley values to assess the contribution of each view and utilize these contributions to promote view cooperation. Comprehensive experimental results well support the effectiveness of our framework when applied to existing DMVC frameworks, demonstrating the importance and necessity of enhancing the cooperation among views. | https://openreview.net/pdf/40a1d357eea0e19182fb452e305504eaa3502b19.pdf |
D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup | https://openreview.net/forum?id=3og0FT85B2 | https://openreview.net/forum?id=3og0FT85B2 | Joanna Waczynska,Piotr Borycki,Joanna Kaleta,Slawomir Tadeja,Przemysław Spurek | NIPS 2024,Poster | Over the past years, we have observed an abundance of approaches for modeling dynamic 3D scenes using Gaussian Splatting (GS). These solutions use GS to represent the scene's structure and a neural network to model its dynamics. Such approaches allow fast rendering and extracting each element of such a dynamic scene. However, modifying such objects over time is challenging. SC-GS (Sparse Controlled Gaussian Splatting) enhanced with Deformed Control Points partially solves this issue. However, this approach necessitates selecting elements that need to be kept fixed, as well as centroids that should be adjusted throughout editing. Moreover, this task poses additional difficulties regarding the reproducibility of such editing. To address this, we propose Dynamic Multi-Gaussian Soup (D-MiSo), which allows us to model the mesh-inspired representation of dynamic GS. Additionally, we propose a strategy of linking parameterized Gaussian splats, forming a Triangle Soup with the estimated mesh. Consequently, we can separately construct new trajectories for the 3D objects composing the scene. Thus, we can make the scene's dynamics editable over time or while maintaining partial dynamics. | https://openreview.net/pdf/268a2295cec059662d542fb82e512f448c51ae61.pdf |
ParallelEdits: Efficient Multi-Aspect Text-Driven Image Editing with Attention Grouping | https://openreview.net/forum?id=cCL92OPlDz | https://openreview.net/forum?id=cCL92OPlDz | Mingzhen Huang,Jialing Cai,Shan Jia,Vishnu Suresh Lokhande,Siwei Lyu | NIPS 2024,Poster | Text-driven image synthesis has made significant advancements with the development of diffusion models, transforming how visual content is generated from text prompts. Despite these advances, text-driven image editing, a key area in computer graphics, faces unique challenges. A major challenge is making simultaneous edits across multiple objects or attributes. Applying these methods sequentially for multi-attribute edits increases computational demands and efficiency losses.
In this paper, we address these challenges with significant contributions. Our main contribution is the development of ParallelEdits, a method that seamlessly manages simultaneous edits across multiple attributes. In contrast to previous approaches, ParallelEdits not only preserves the quality of single-attribute edits but also significantly improves the performance of multitasking edits. This is achieved through an innovative attention distribution mechanism and a multi-branch design that operates across several processing heads.
Additionally, we introduce the PIE-Bench++ dataset, an expansion of the original PIE-Bench dataset, to better support evaluating image-editing tasks involving multiple objects and attributes simultaneously. This dataset is a benchmark for evaluating text-driven image editing methods in multifaceted scenarios. | https://openreview.net/pdf/6644a38b625fbcd0487242f99ba9a60f7611d01e.pdf |
LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Control and Rendering | https://openreview.net/forum?id=Jkt42QYyEH | https://openreview.net/forum?id=Jkt42QYyEH | Delin Qu,Qizhi Chen,Pingrui Zhang,Xianqiang Gao,Bin Zhao,Zhigang Wang,Dong Wang,Xuelong Li | NIPS 2024,Poster | This paper scales object-level reconstruction to complex scenes, advancing interactive scene reconstruction. We introduce two datasets, OmniSim and InterReal, featuring 28 scenes with multiple interactive objects. To tackle the challenge of inaccurate interactive motion recovery in complex scenes, we propose LiveScene, a scene-level language-embedded interactive radiance field that efficiently reconstructs and controls multiple objects. By decomposing the interactive scene into local deformable fields, LiveScene enables separate reconstruction of individual object motions, reducing memory consumption. Additionally, our interaction-aware language embedding localizes individual interactive objects, allowing for arbitrary control using natural language. Our approach demonstrates significant superiority in novel view synthesis, interactive scene control, and language grounding performance through extensive experiments. Project page: https://livescenes.github.io. | https://openreview.net/pdf/db46ca38beed8e31670315500fdc6d0bf0bf5757.pdf |
GuardT2I: Defending Text-to-Image Models from Adversarial Prompts | https://openreview.net/forum?id=FMrNus3d0n | https://openreview.net/forum?id=FMrNus3d0n | Yijun Yang,Ruiyuan Gao,Xiao Yang,Jianyuan Zhong,Qiang Xu | NIPS 2024,Poster | Recent advancements in Text-to-Image models have raised significant safety concerns about their potential misuse for generating inappropriate or Not-Safe-For-Work contents, despite existing countermeasures such as Not-Safe-For-Work classifiers or model fine-tuning for inappropriate concept removal. Addressing this challenge, our study unveils GuardT2I a novel moderation framework that adopts a generative approach to enhance Text-to-Image models’ robustness against adversarial prompts. Instead of making a binary classification, GuardT2I utilizes a large language model to conditionally transform text guidance embeddings within the Text-to-Image models into natural language for effective adversarial prompt detection, without compromising the models’ inherent performance. Our extensive experiments reveal that GuardT2I outperforms leading commercial solutions like OpenAI-Moderation and Microsoft Azure Moderator by a significant margin across diverse adversarial scenarios. Our framework is available at https://github.com/cure-lab/GuardT2I. | https://openreview.net/pdf/a5df915bb3f84deba44b7362dd79edb743d577f7.pdf |
Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation | https://openreview.net/forum?id=1ptdkwZbMG | https://openreview.net/forum?id=1ptdkwZbMG | Qingwen Bu,Jia Zeng,Li Chen,Yanchao Yang,Guyue Zhou,Junchi Yan,Ping Luo,Heming Cui,Yi Ma,Hongyang Li | NIPS 2024,Poster | Despite significant progress in robotics and embodied AI in recent years, deploying robots for long-horizon tasks remains a great challenge. Majority of prior arts adhere to an open-loop philosophy and lack real-time feedback, leading to error accumulation and undesirable robustness. A handful of approaches have endeavored to establish feedback mechanisms leveraging pixel-level differences or pre-trained visual representations, yet their efficacy and adaptability have been found to be constrained. Inspired by classic closed-loop control systems, we propose CLOVER, a closed-loop visuomotor control framework that incorporates feedback mechanisms to improve adaptive robotic control. CLOVER consists of a text-conditioned video diffusion model for generating visual plans as reference inputs, a measurable embedding space for accurate error quantification, and a feedback-driven controller that refines actions from feedback and initiates replans as needed. Our framework exhibits notable advancement in real-world robotic tasks and achieves state-of-the-art on CALVIN benchmark, improving by 8% over previous open-loop counterparts. Code and checkpoints are maintained at https://github.com/OpenDriveLab/CLOVER. | https://openreview.net/pdf/6d661ae5f68b379172437a497e57e1f2a5dd4f7e.pdf |
How to Use Diffusion Priors under Sparse Views? | https://openreview.net/forum?id=i6BBclCymR | https://openreview.net/forum?id=i6BBclCymR | Qisen Wang,Yifan Zhao,Jiawei Ma,Jia Li | NIPS 2024,Poster | Novel view synthesis under sparse views has been a long-term important challenge in 3D reconstruction. Existing works mainly rely on introducing external semantic or depth priors to supervise the optimization of 3D representations. However, the diffusion model, as an external prior that can directly provide visual supervision, has always underperformed in sparse-view 3D reconstruction using Score Distillation Sampling (SDS) due to the low information entropy of sparse views compared to text, leading to optimization challenges caused by mode deviation. To this end, we present a thorough analysis of SDS from the mode-seeking perspective and propose Inline Prior Guided Score Matching (IPSM), which leverages visual inline priors provided by pose relationships between viewpoints to rectify the rendered image distribution and decomposes the original optimization objective of SDS, thereby offering effective diffusion visual guidance without any fine-tuning or pre-training. Furthermore, we propose the IPSM-Gaussian pipeline, which adopts 3D Gaussian Splatting as the backbone and supplements depth and geometry consistency regularization based on IPSM to further improve inline priors and rectified distribution. Experimental results on different public datasets show that our method achieves state-of-the-art reconstruction quality. The code is released at https://github.com/iCVTEAM/IPSM. | https://openreview.net/pdf/ff80e75c3b11f6fe3af3852cb89243a6754e0ba1.pdf |
Diffusion Tuning: Transferring Diffusion Models via Chain of Forgetting | https://openreview.net/forum?id=S98OzJD3jn | https://openreview.net/forum?id=S98OzJD3jn | Jincheng Zhong,Xingzhuo Guo,Jiaxiang Dong,Mingsheng Long | NIPS 2024,Poster | Diffusion models have significantly advanced the field of generative modeling. However, training a diffusion model is computationally expensive, creating a pressing need to adapt off-the-shelf diffusion models for downstream generation tasks. Current fine-tuning methods focus on parameter-efficient transfer learning but overlook the fundamental transfer characteristics of diffusion models.
In this paper, we investigate the transferability of diffusion models and observe a monotonic chain of forgetting trend in transferability along the reverse process. Based on this observation and novel theoretical insights, we present Diff-Tuning, a frustratingly simple transfer approach that leverages the chain of forgetting tendency. Diff-Tuning encourages the fine-tuned model to retain the pre-trained knowledge at the end of the denoising chain, close to the generated data, while discarding the noise side of the chain.
We conduct comprehensive experiments to evaluate Diff-Tuning, including the transfer of pre-trained Diffusion Transformer models to eight downstream generations and the adaptation of Stable Diffusion to five control conditions with ControlNet.
Diff-Tuning achieves a 24.6% improvement over standard fine-tuning and enhances the convergence speed of ControlNet by 24%. Notably, parameter-efficient transfer learning techniques for diffusion models can also benefit from Diff-Tuning. Code
is available at this repository: https://github.com/thuml/Diffusion-Tuning. | https://openreview.net/pdf/1fb637f8f9f99fabf1579f73981f2014cc5ff585.pdf |
TrajCLIP: Pedestrian trajectory prediction method using contrastive learning and idempotent networks | https://openreview.net/forum?id=fUBFy8tb3z | https://openreview.net/forum?id=fUBFy8tb3z | Pengfei Yao,Yinglong Zhu,Huikun Bi,Tianlu Mao,Zhaoqi Wang | NIPS 2024,Poster | The distribution of pedestrian trajectories is highly complex and influenced by the scene, nearby pedestrians, and subjective intentions. This complexity presents challenges for modeling and generalizing trajectory prediction. Previous methods modeled the feature space of future trajectories based on the high-dimensional feature space of historical trajectories, but this approach is suboptimal because it overlooks the similarity between historical and future trajectories. Our proposed method, TrajCLIP, utilizes contrastive learning and idempotent generative networks to address this issue. By pairing historical and future trajectories and applying contrastive learning on the encoded feature space, we enforce same-space consistency constraints. To manage complex distributions, we use idempotent loss and tightness loss to control over-expansion in the latent space. Additionally, we have developed a trajectory interpolation algorithm and synthetic trajectory data to enhance model capacity and improve generalization. Experimental results on public datasets demonstrate that TrajCLIP achieves state-of-the-art performance and excels in scene-to-scene transfer, few-shot transfer, and online learning tasks. | https://openreview.net/pdf/4a7d843ef5132b33c713a622055fd3bf52621e15.pdf |
Adaptive Layer Sparsity for Large Language Models via Activation Correlation Assessment | https://openreview.net/forum?id=Jup0qZxH7U | https://openreview.net/forum?id=Jup0qZxH7U | Wei Li,Lujun Li,Mark G. Lee,Shengjie Sun | NIPS 2024,Poster | Large Language Models (LLMs) have revolutionized the field of natural language processing with their impressive capabilities. However, their enormous size presents challenges for deploying them in real-world applications. Traditional compression techniques, like pruning, often lead to suboptimal performance due to their uniform pruning ratios and lack of consideration for the varying importance of features across different layers. To address these limitations, we present a novel Adaptive Layer Sparsity (ALS) approach to optimize LLMs. Our approach consists of two key steps. Firstly, we estimate the correlation matrix between intermediate layers by leveraging the concept of information orthogonality. This novel perspective allows for a precise measurement of the importance of each layer across the model. Secondly, we employ a linear optimization algorithm to develop an adaptive sparse allocation strategy based on evaluating the correlation matrix. This strategy enables us to selectively prune features in intermediate layers, achieving fine-grained optimization of the LLM model. Considering the varying importance across different layers, we can significantly reduce the model size without sacrificing performance. We conduct extensive experiments on publicly available language processing datasets, including the LLaMA-V1|V2|V3 family and OPT, covering various benchmarks. Our experimental results validate the effectiveness of our ALS method, showcasing its superiority over previous approaches. The performance gains demonstrate its potential for enhancing LLMs' efficiency and resource utilization. Notably, our approach surpasses the state-of-the-art models Wanda and SparseGPT, showcasing its ability to excel even under high sparsity levels. Codes at: https://github.com/lliai/ALS. | https://openreview.net/pdf/aa1a3296dd3e918f169106355de0705b98aaae12.pdf |
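To make the idea of non-uniform, importance-aware sparsity concrete, the toy sketch below spreads a global sparsity target across layers so that more important layers are pruned less. It uses a simple proportional rule with made-up importance scores and does not reproduce ALS's correlation-matrix estimation or its linear-programming allocation.

```python
# Toy adaptive per-layer sparsity allocation: assign higher sparsity to less
# important layers while keeping the average at a global target. This is a
# simplified proportional scheme; importance values are hypothetical.
import numpy as np

def allocate_sparsity(importance, target: float, max_shift: float = 0.2):
    imp = np.asarray(importance, dtype=float)
    # Center importance so more-important layers get pruned less than average.
    centered = (imp - imp.mean()) / (np.abs(imp - imp.mean()).max() + 1e-8)
    ratios = np.clip(target - max_shift * centered, 0.0, 1.0)
    # Re-center so the mean sparsity still matches the global target.
    ratios += target - ratios.mean()
    return np.clip(ratios, 0.0, 1.0)

importance = [0.9, 0.4, 0.7, 0.2]          # hypothetical per-layer scores
print(allocate_sparsity(importance, 0.5))  # mean ≈ 0.5; important layers lower
```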
ChatCam: Empowering Camera Control through Conversational AI | https://openreview.net/forum?id=IxazPgGF8h | https://openreview.net/forum?id=IxazPgGF8h | Xinhang Liu,Yu-Wing Tai,Chi-Keung Tang | NIPS 2024,Poster | Cinematographers adeptly capture the essence of the world, crafting compelling visual narratives through intricate camera movements. Witnessing the strides made by large language models in perceiving and interacting with the 3D world, this study explores their capability to control cameras with human language guidance. We introduce ChatCam, a system that navigates camera movements through conversations with users, mimicking a professional cinematographer's workflow. To achieve this, we propose CineGPT, a GPT-based autoregressive model for text-conditioned camera trajectory generation. We also develop an Anchor Determinator to ensure precise camera trajectory placement. ChatCam understands user requests and employs our proposed tools to generate trajectories, which can be used to render high-quality video footage on radiance field representations. Our experiments, including comparisons to state-of-the-art approaches and user studies, demonstrate our approach's ability to interpret and execute complex instructions for camera operation, showing promising applications in real-world production settings. Project page: https://xinhangliu.com/chatcam. | https://openreview.net/pdf/c3de0c5e1ef391957274ddbe6d9b8ccd92b4743f.pdf |
Elucidating the Design Space of Dataset Condensation | https://openreview.net/forum?id=az1SLLsmdR | https://openreview.net/forum?id=az1SLLsmdR | Shitong Shao,Zikai Zhou,Huanran Chen,Zhiqiang Shen | NIPS 2024,Poster | Dataset condensation, a concept within data-centric learning, efficiently transfers critical attributes from an original dataset to a synthetic version, maintaining both diversity and realism. This approach significantly improves model training efficiency and is adaptable across multiple application areas. Previous methods in dataset condensation have faced challenges: some incur high computational costs which limit scalability to larger datasets (e.g., MTT, DREAM, and TESLA), while others are restricted to less optimal design spaces, which could hinder potential improvements, especially in smaller datasets (e.g., SRe$^2$L, G-VBSM, and RDED). To address these limitations, we propose a comprehensive design framework that includes specific, effective strategies like implementing soft category-aware matching and adjusting the learning rate schedule. These strategies are grounded in empirical evidence and theoretical backing. Our resulting approach, Elucidate Dataset Condensation (EDC), establishes a benchmark for both small and large-scale dataset condensation. In our testing, EDC achieves state-of-the-art accuracy, reaching 48.6% on ImageNet-1k with a ResNet-18 model at an IPC of 10, which corresponds to a compression ratio of 0.78%. This performance exceeds those of SRe$^2$L, G-VBSM, and RDED by margins of 27.3%, 17.2%, and 6.6%, respectively. | https://openreview.net/pdf/68730dbf61fbb80c0e5e0a447a79deb3f7e705d9.pdf |
CondTSF: One-line Plugin of Dataset Condensation for Time Series Forecasting | https://openreview.net/forum?id=L1jajNWON5 | https://openreview.net/forum?id=L1jajNWON5 | Jianrong Ding,Zhanyu Liu,Guanjie Zheng,Haiming Jin,Linghe Kong | NIPS 2024,Poster | \textit{Dataset condensation} is a newborn technique that generates a small dataset that can be used in training deep neural networks (DNNs) to lower storage and training costs. The objective of dataset condensation is to ensure that the model trained with the synthetic dataset can perform comparably to the model trained with full datasets. However, existing methods predominantly concentrate on classification tasks, posing challenges in their adaptation to time series forecasting (TS-forecasting). This challenge arises from disparities in the evaluation of synthetic data. In classification, the synthetic data is considered well-distilled if the model trained with the full dataset and the model trained with the synthetic dataset yield identical labels for the same input, regardless of variations in output logits distribution. Conversely, in TS-forecasting, the effectiveness of synthetic data distillation is determined by the distance between predictions of the two models. The synthetic data is deemed well-distilled only when all data points within the predictions are similar. Consequently, TS-forecasting has a more rigorous evaluation methodology compared to classification. To mitigate this gap, we theoretically analyze the optimization objective of dataset condensation for TS-forecasting and propose a new one-line plugin of dataset condensation for TS-forecasting designated as Dataset \textbf{Cond}ensation for \textbf{T}ime \textbf{S}eries \textbf{F}orecasting (CondTSF) based on our analysis. Plugging CondTSF into previous dataset condensation methods facilitates a reduction in the distance between the predictions of the model trained with the full dataset and the model trained with the synthetic dataset, thereby enhancing performance. We conduct extensive experiments on eight commonly used time series datasets. CondTSF consistently improves the performance of all previous dataset condensation methods across all datasets, particularly at low condensing ratios. | https://openreview.net/pdf/28cc2b016db0f17c24001943fa709b13d11a3c95.pdf |
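The evaluation gap motivating CondTSF can be illustrated with a toy example: two models can agree on a classification label even when their logits differ, whereas in forecasting the distance between the full prediction vectors is what gets scored. All numbers below are made up.

```python
# Toy illustration of the evaluation gap described in the CondTSF abstract:
# classification only needs the argmax labels to match, while time-series
# forecasting penalizes any distance between the full prediction vectors.
import numpy as np

logits_full  = np.array([2.0, 0.5, 0.1])   # model trained on the full dataset
logits_synth = np.array([1.1, 0.9, 0.2])   # model trained on synthetic data
print(np.argmax(logits_full) == np.argmax(logits_synth))  # True: "well-distilled"

pred_full  = np.array([0.8, 1.1, 1.4, 1.6])   # forecast from full-data model
pred_synth = np.array([0.8, 1.3, 1.0, 2.1])   # forecast from synthetic-data model
print(np.mean((pred_full - pred_synth) ** 2))  # MSE > 0: still penalized
```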
Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection | https://openreview.net/forum?id=2Inwtjvyx8 | https://openreview.net/forum?id=2Inwtjvyx8 | Hui Wei,Zhixiang Wang,Kewei Zhang,Jiaqi Hou,Yuanwei Liu,Hao Tang,Zheng Wang | NIPS 2024,Poster | Physical adversarial attacks can deceive deep neural networks (DNNs), leading to erroneous predictions in real-world scenarios. To uncover potential security risks, attacking the safety-critical task of person detection has garnered significant attention. However, we observe that existing attack methods overlook the pivotal role of the camera, involving capturing real-world scenes and converting them into digital images, in the physical adversarial attack workflow. This oversight leads to instability and challenges in reproducing these attacks. In this work, we revisit patch-based attacks against person detectors and introduce a camera-agnostic physical adversarial attack to mitigate this limitation. Specifically, we construct a differentiable camera Image Signal Processing (ISP) proxy network to compensate for the physical-to-digital transition gap. Furthermore, the camera ISP proxy network serves as a defense module, forming an adversarial optimization framework with the attack module. The attack module optimizes adversarial patches to maximize effectiveness, while the defense module optimizes the conditional parameters of the camera ISP proxy network to minimize attack effectiveness. These modules engage in an adversarial game, enhancing cross-camera stability. Experimental results demonstrate that our proposed Camera-Agnostic Patch (CAP) attack effectively conceals persons from detectors across various imaging hardware, including two distinct cameras and four smartphones. | https://openreview.net/pdf/b51e857258f15640c07e6ad424e8d0656d7c2fdf.pdf |
Performative Control for Linear Dynamical Systems | https://openreview.net/forum?id=7qT72IGkr4 | https://openreview.net/forum?id=7qT72IGkr4 | Songfu Cai,Fei Han,Xuanyu Cao | NIPS 2024,Poster | We introduce the framework of performative control, where the policy chosen by the controller affects the underlying dynamics of the control system. This results in a sequence of policy-dependent system state data with policy-dependent temporal correlations. Following the recent literature on performative prediction \cite{perdomo2020performative}, we introduce the concept of a performatively stable control (PSC) solution. We first propose a sufficient condition for the performative control problem to admit a unique PSC solution with a problem-specific structure of distributional sensitivity propagation and aggregation. We further analyze the impacts of system stability on the existence of the PSC solution. Specifically, for almost surely stable policy-dependent dynamics, the PSC solution exists if the sum of the distributional sensitivities is small enough. However, for almost surely unstable policy-dependent dynamics, the existence of the PSC solution will necessitate a temporally backward decaying of the distributional sensitivities. We finally provide a repeated stochastic gradient descent scheme that converges to the PSC solution and analyze its non-asymptotic convergence rate. Numerical results validate our theoretical analysis. | https://openreview.net/pdf/c8594e4db0c08822719f7db68661c94dce0d365b.pdf |
Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning | https://openreview.net/forum?id=hYMxyeyEc5 | https://openreview.net/forum?id=hYMxyeyEc5 | Yiming Wang,Pei Zhang,Baosong Yang,Derek F. Wong,Zhuosheng Zhang,Rui Wang | NIPS 2024,Poster | Real-world data deviating from the independent and identically distributed (\textit{i.i.d.}) assumption of in-distribution training data poses security threats to deep networks, thus advancing out-of-distribution (OOD) detection algorithms. Detection methods in generative language models (GLMs) mainly focus on uncertainty estimation and embedding distance measurement, with the latter proven to be most effective in traditional linguistic tasks like summarization and translation. However, another complex generative scenario, mathematical reasoning, poses significant challenges to embedding-based methods due to the high-density nature of its output space, but this feature causes larger discrepancies in the embedding shift trajectory between different samples in latent spaces. Hence, we propose a trajectory-based method, TV score, which uses trajectory volatility for OOD detection in mathematical reasoning. Experiments show that our method outperforms all traditional algorithms on GLMs under mathematical reasoning scenarios and can be extended to more applications with high-density features in output spaces, such as multiple-choice questions. | https://openreview.net/pdf/ad87ad9e99c156ad5c72623e35594aa5e2971d1d.pdf |
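As a rough illustration of trajectory volatility, the sketch below treats a sample's per-layer hidden states as a trajectory through embedding space and measures the spread of consecutive layer-to-layer shift magnitudes. This is a generic proxy statistic, not necessarily the exact TV score defined in the paper, and the random embeddings are for demonstration only.

```python
# Illustrative "trajectory volatility" statistic: the standard deviation of
# consecutive layer-to-layer shift magnitudes of a sample's hidden states.
import numpy as np

def trajectory_volatility(layer_embeddings: np.ndarray) -> float:
    """layer_embeddings: (num_layers, hidden_dim) for one sample."""
    shifts = np.diff(layer_embeddings, axis=0)       # per-layer movement
    magnitudes = np.linalg.norm(shifts, axis=1)      # size of each shift
    return float(np.std(magnitudes))                 # volatility of movement

rng = np.random.default_rng(0)
emb = rng.normal(size=(24, 768))   # e.g., 24 layers, 768-dim hidden states
print(trajectory_volatility(emb))
```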
XMask3D: Cross-modal Mask Reasoning for Open Vocabulary 3D Semantic Segmentation | https://openreview.net/forum?id=z1GwaNoGnr | https://openreview.net/forum?id=z1GwaNoGnr | Ziyi Wang,Yanbo Wang,Xumin Yu,Jie Zhou,Jiwen Lu | NIPS 2024,Poster | Existing methodologies in open vocabulary 3D semantic segmentation primarily concentrate on establishing a unified feature space encompassing 3D, 2D, and textual modalities. Nevertheless, traditional techniques such as global feature alignment or vision-language model distillation tend to impose only approximate correspondence, struggling notably with delineating fine-grained segmentation boundaries. To address this gap, we propose a more meticulous mask-level alignment between 3D features and the 2D-text embedding space through a cross-modal mask reasoning framework, XMask3D. In our approach, we developed a mask generator based on the denoising UNet from a pre-trained diffusion model, leveraging its capability for precise textual control over dense pixel representations and enhancing the open-world adaptability of the generated masks. We further integrate 3D global features as implicit conditions into the pre-trained 2D denoising UNet, enabling the generation of segmentation masks with additional 3D geometry awareness. Subsequently, the generated 2D masks are employed to align mask-level 3D representations with the vision-language feature space, thereby augmenting the open vocabulary capability of 3D geometry embeddings. Finally, we fuse complementary 2D and 3D mask features, resulting in competitive performance across multiple benchmarks for 3D open vocabulary semantic segmentation. Code is available at https://github.com/wangzy22/XMask3D. | https://openreview.net/pdf/a8f19e2b718d05d788970bd70a67a13e857d6c3f.pdf |