title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Quality-Improved and Property-Preserved Polarimetric Imaging via Complementarily Fusing | https://openreview.net/forum?id=mOK4yD8JFd | https://openreview.net/forum?id=mOK4yD8JFd | Chu Zhou,Yixing Liu,Chao Xu,Boxin Shi | NIPS 2024,Poster | Polarimetric imaging is a challenging problem in the field of polarization-based vision: setting a short exposure time reduces the signal-to-noise ratio, severely degrading the degree of polarization (DoP) and the angle of polarization (AoP), while setting a relatively long exposure time tends to over-smooth the DoP and AoP due to frequently occurring motion blur. This work proposes a polarimetric imaging framework that can produce clean and clear polarized snapshots by complementarily fusing a degraded pair of noisy and blurry ones. By adopting a neural network-based three-phase fusing scheme with specially-designed modules tailored to each phase, our framework can not only improve the image quality but also preserve the polarization properties. Experimental results show that our framework achieves state-of-the-art performance. | https://openreview.net/pdf/ad7ff9defdfad11680f8cbd2354bff708adb0441.pdf |
Long-form factuality in large language models | https://openreview.net/forum?id=4M9f8VMt2C | https://openreview.net/forum?id=4M9f8VMt2C | Jerry Wei,Chengrun Yang,Xinying Song,Yifeng Lu,Nathan Zixia Hu,Jie Huang,Dustin Tran,Daiyi Peng,Ruibo Liu,Da Huang,Cosmo Du,Quoc V Le | NIPS 2024,Poster | Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model’s long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be used as automated evaluators for long-form factuality through a method which we call Search-Augmented Factuality Evaluator (SAFE). SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results. Furthermore, we propose extending F1 score as an aggregated metric for long-form factuality. To do so, we balance the percentage of supported facts in a response (precision) with the percentage of provided facts relative to a hyperparameter representing a user’s preferred response length (recall). Empirically, we demonstrate that LLM agents can outperform crowdsourced human annotators—on a set of ~16k individual facts, SAFE agrees with crowdsourced human annotators 72% of the time, and on a random subset of 100 disagreement cases, SAFE wins 76% of the time. At the same time, SAFE is more than 20 times cheaper than human annotators. We also benchmark thirteen language models on LongFact across four model families (Gemini, GPT, Claude, and PaLM-2), finding that larger language models generally achieve better long-form factuality. LongFact, SAFE, and all experimental code are available at https://github.com/google-deepmind/long-form-factuality. | https://openreview.net/pdf/a2aa82687764bc3707d25f7ccc532b2a4cd7de2e.pdf |
Sparse maximal update parameterization: A holistic approach to sparse training dynamics | https://openreview.net/forum?id=OWmu3QOa0O | https://openreview.net/forum?id=OWmu3QOa0O | Nolan Simran Dey,Shane Bergsma,Joel Hestness | NIPS 2024,Poster | Several challenges make it difficult for sparse neural networks to compete with dense models. First, setting a large fraction of weights to zero impairs forward and gradient signal propagation. Second, sparse studies often need to test multiple sparsity levels, while also introducing new hyperparameters (HPs), leading to prohibitive tuning costs. Indeed, the standard practice is to re-use the learning HPs originally crafted for dense models. Unfortunately, we show sparse and dense networks do not share the same optimal HPs. Without stable dynamics and effective training recipes, it is costly to test sparsity at scale, which is key to surpassing dense networks and making the business case for sparsity acceleration in hardware. A holistic approach is needed to tackle these challenges and we propose S$\textmu$Par as one such approach. For random unstructured static sparsity, S$\textmu$Par ensures activations, gradients, and weight updates all scale independently of sparsity level. Further, by reparameterizing the HPs, S$\textmu$Par enables the same HP values to be optimal as we vary both sparsity level and model width. HPs can be tuned on small dense networks and transferred to large sparse models, greatly reducing tuning costs. On large-scale language modeling, S$\textmu$Par shows increasing improvements over standard parameterization as sparsity increases, leading up to 11.9\% relative loss improvement at 99.2\% sparsity. A minimal implementation of S$\textmu$Par is available at https://github.com/EleutherAI/nanoGPT-mup/tree/supar. | https://openreview.net/pdf/0906d16e1447d3d0df540c03e4c6cfdd6a8e399f.pdf |
Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models | https://openreview.net/forum?id=7W0f7lifDk | https://openreview.net/forum?id=7W0f7lifDk | Yuxuan Xue,Xianghui Xie,Riccardo Marin,Gerard Pons-Moll | NIPS 2024,Poster | Creating realistic avatars from a single RGB image is an attractive yet challenging problem. To deal with challenging loose clothing or occlusion by interaction objects, we leverage a powerful shape prior from 2D diffusion models pretrained on large datasets. Although 2D diffusion models demonstrate strong generalization capability, they cannot provide multi-view shape priors with guaranteed 3D consistency. We propose Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion. Our key insight is that 2D multi-view diffusion and 3D reconstruction models provide complementary information for each other. By coupling them in a tight manner, we can fully leverage the potential of both models. We introduce a novel image-conditioned generative 3D Gaussian Splats reconstruction model that leverages the prior from 2D multi-view diffusion models, and provides an explicit 3D representation, which further guides the 2D reverse sampling process to have better 3D consistency. Experiments show that our proposed framework outperforms state-of-the-art methods and enables the creation of realistic avatars from a single RGB image, achieving high fidelity in both geometry and appearance. Extensive ablations also validate the efficacy of our design: (1) multi-view 2D prior conditioning in generative 3D reconstruction and (2) consistency refinement of the sampling trajectory via the explicit 3D representation. Our code and models are released at https://yuxuan-xue.com/human-3diffusion/. | https://openreview.net/pdf/75f7a0f999b597254f6d5c8883618b41c682659f.pdf |
EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models | https://openreview.net/forum?id=s1MoH2pACa | https://openreview.net/forum?id=s1MoH2pACa | Shangquan Sun,Wenqi Ren,Zikun Liu,Hyunhee Park,Rui Wang,Xiaochun Cao | NIPS 2024,Poster | Image restoration has experienced significant advancements due to the development of deep learning. Nevertheless, it encounters challenges related to ill-posed problems, resulting in deviations between single model predictions and ground-truths. Ensemble learning, as a powerful machine learning technique, aims to address these deviations by combining the predictions of multiple base models. Most existing works adopt ensemble learning during the design of restoration models, while only limited research focuses on the inference-stage ensemble of pre-trained restoration models. Regression-based methods fail to enable efficient inference, leading researchers in academia and industry to prefer averaging as their choice for post-training ensemble. To address this, we reformulate the ensemble problem of image restoration into Gaussian mixture models (GMMs) and employ an expectation maximization (EM)-based algorithm to estimate ensemble weights for aggregating prediction candidates. We estimate the range-wise ensemble weights on a reference set and store them in a lookup table (LUT) for efficient ensemble inference on the test set. Our algorithm is model-agnostic and training-free, allowing seamless integration and enhancement of various pre-trained image restoration models. It consistently outperforms regression-based methods and averaging ensemble approaches on 14 benchmarks across 3 image restoration tasks, including super-resolution, deblurring and deraining. The codes and all estimated weights have been released in Github. | https://openreview.net/pdf/676c72ba568d7c50e683e52999b0ac04f2db0966.pdf |
Wild-GS: Real-Time Novel View Synthesis from Unconstrained Photo Collections | https://openreview.net/forum?id=Ss7l98DVvD | https://openreview.net/forum?id=Ss7l98DVvD | Jiacong Xu,Yiqun Mei,Vishal M. Patel | NIPS 2024,Poster | Photographs captured in unstructured tourist environments frequently exhibit variable appearances and transient occlusions, challenging accurate scene reconstruction and inducing artifacts in novel view synthesis. Although prior approaches have integrated the Neural Radiance Field (NeRF) with additional learnable modules to handle the dynamic appearances and eliminate transient objects, their extensive training demands and slow rendering speeds limit practical deployments. Recently, 3D Gaussian Splatting (3DGS) has emerged as a promising alternative to NeRF, offering superior training and inference efficiency along with better rendering quality. This paper presents \textit{Wild-GS}, an innovative adaptation of 3DGS optimized for unconstrained photo collections while preserving its efficiency benefits. \textit{Wild-GS} determines the appearance of each 3D Gaussian by their inherent material attributes, global illumination and camera properties per image, and point-level local variance of reflectance. Unlike previous methods that model reference features in image space, \textit{Wild-GS} explicitly aligns the pixel appearance features to the corresponding local Gaussians by sampling the triplane extracted from the reference image. This novel design effectively transfers the high-frequency detailed appearance of the reference view to 3D space and significantly expedites the training process. Furthermore, 2D visibility maps and depth regularization are leveraged to mitigate the transient effects and constrain the geometry, respectively. Extensive experiments demonstrate that \textit{Wild-GS} achieves state-of-the-art rendering performance and the highest efficiency in both training and inference among all the existing techniques. The code can be accessed via: https://github.com/XuJiacong/Wild-GS | https://openreview.net/pdf/0aef2c7ba708df29630528c5e2792d90218ee5dc.pdf |
What Variables Affect Out-of-Distribution Generalization in Pretrained Models? | https://openreview.net/forum?id=pOXgdFEB7q | https://openreview.net/forum?id=pOXgdFEB7q | Md Yousuf Harun,Kyungbok Lee,Jhair Gallardo,Giri Prashanth,Christopher Kanan | NIPS 2024,Poster | Embeddings produced by pre-trained deep neural networks (DNNs) are widely used; however, their efficacy for downstream tasks can vary widely. We study the factors influencing transferability and out-of-distribution (OOD) generalization of pre-trained DNN embeddings through the lens of the tunnel effect hypothesis, which is closely related to intermediate neural collapse. This hypothesis suggests that deeper DNN layers compress representations and hinder OOD generalization. Contrary to earlier work, our experiments show this is not a universal phenomenon. We comprehensively investigate the impact of DNN architecture, training data, image resolution, and augmentations on transferability. We identify that training with high-resolution datasets containing many classes greatly reduces representation compression and improves transferability. Our results emphasize the danger of generalizing findings from toy datasets to broader contexts. | https://openreview.net/pdf/72595a83a1c7b4280a80b08364ed2a1ddae7cae1.pdf |
2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution | https://openreview.net/forum?id=ADJASE9uQ2 | https://openreview.net/forum?id=ADJASE9uQ2 | Kai Liu,Haotong Qin,Yong Guo,Xin Yuan,Linghe Kong,Guihai Chen,Yulun Zhang | NIPS 2024,Poster | Low-bit quantization has become widespread for compressing image super-resolution (SR) models for edge deployment, which allows advanced SR models to enjoy compact low-bit parameters and efficient integer/bitwise constructions for storage compression and inference acceleration, respectively. However, it is notorious that low-bit quantization degrades the accuracy of SR models compared to their full-precision (FP) counterparts. Despite several efforts to alleviate the degradation, the transformer-based SR model still suffers severe degradation due to its distinctive activation distribution. In this work, we present a dual-stage low-bit post-training quantization (PTQ) method for image super-resolution, namely 2DQuant, which achieves efficient and accurate SR under low-bit quantization. The proposed method first investigates the weights and activations and finds that their distributions are characterized by coexisting symmetry and asymmetry as well as long tails. Specifically, we propose Distribution-Oriented Bound Initialization (DOBI), using different searching strategies to search a coarse bound for quantizers. To obtain refined quantizer parameters, we further propose Distillation Quantization Calibration (DQC), which employs a distillation approach to make the quantized model learn from its FP counterpart. Through extensive experiments on different bits and scaling factors, the performance of DOBI can reach the state-of-the-art (SOTA), while, after stage two, our method surpasses existing PTQ methods in both metrics and visual effects. 2DQuant gains an increase in PSNR as high as 4.52dB on Set5 (x2) compared with SOTA when quantized to 2-bit and enjoys a 3.60x compression ratio and 5.08x speedup ratio. The code and models are available at https://github.com/Kai-Liu001/2DQuant. | https://openreview.net/pdf/fe78af340c1536afd1f8a338d581a93187f1e7b3.pdf |
Reasoning Multi-Agent Behavioral Topology for Interactive Autonomous Driving | https://openreview.net/forum?id=FSgwgQXTxo | https://openreview.net/forum?id=FSgwgQXTxo | Haochen Liu,Li Chen,Yu Qiao,Chen Lv,Hongyang Li | NIPS 2024,Poster | Autonomous driving systems aim for safe and socially consistent driving through behavioral integration among interactive agents. However, challenges remain due to multi-agent scene uncertainty and heterogeneous interaction. Current dense and sparse behavioral representations struggle with inefficiency and inconsistency in multi-agent modeling, leading to instability of collective behavioral patterns when integrating prediction and planning (IPP). To address this, we initiate a topological formation that serves as a compliant behavioral foreground to guide downstream trajectory generations. Specifically, we introduce Behavioral Topology (BeTop), a pivotal topological formulation that explicitly represents the consensual behavioral pattern among multi-agent futures. BeTop is derived from braid theory to distill compliant interactive topology from multi-agent future trajectories. A synergistic learning framework (BeTopNet) supervised by BeTop facilitates the consistency of behavior prediction and planning within the predicted topology priors. Through imitative contingency learning, BeTop also effectively manages behavioral uncertainty for prediction and planning. Extensive verification on large-scale real-world datasets, including nuPlan and WOMD, demonstrates that BeTop achieves state-of-the-art performance in both prediction and planning tasks. Further validations on the proposed interactive scenario benchmark showcase planning compliance in interactive cases. Code and model are available at https://github.com/OpenDriveLab/BeTop. | https://openreview.net/pdf/9e8beb8037d5b6dd0ca253feefa7fad799da0544.pdf |
ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models | https://openreview.net/forum?id=LjnDqVcrE9 | https://openreview.net/forum?id=LjnDqVcrE9 | Mingrui Wu,Xinyue Cai,Jiayi Ji,Jiale Li,Oucheng Huang,Gen Luo,Hao Fei,GUANNAN JIANG,Xiaoshuai Sun,Rongrong Ji | NIPS 2024,Poster | In this work, we propose a training-free method to inject visual prompts into Multimodal Large Language Models (MLLMs) through learnable latent variable optimization. We observe that attention, as the core module of MLLMs, connects text prompt tokens and visual tokens, ultimately determining the final results. Our approach involves adjusting visual tokens from the MLP output during inference, controlling the attention response to ensure text prompt tokens attend to visual tokens in referring regions. We optimize a learnable latent variable based on an energy function, enhancing the strength of referring regions in the attention map. This enables detailed region description and reasoning without the need for substantial training costs or model retraining. Our method offers a promising direction for integrating referring abilities into MLLMs, and supports referring with box, mask, scribble and point. The results demonstrate that our method exhibits out-of-domain generalization and interpretability. | https://openreview.net/pdf/0aed2671ad0387960c360d81f53d57a851c5c323.pdf |
Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs | https://openreview.net/forum?id=9622QfVSAb | https://openreview.net/forum?id=9622QfVSAb | Mustafa Shukor,Matthieu Cord | NIPS 2024,Poster | Large Language Models (LLMs) have demonstrated impressive performance on multimodal tasks, without any multimodal finetuning. They are the de facto building block for Large Multimodal Models (LMMs), yet, we still lack a proper understanding of their success. In this work, we expose frozen LLMs to image, video, audio and text inputs and analyse their internal representation with the attempt to understand their generalization beyond textual inputs. Our work provides the following **findings.** (1) Perceptual tokens are easily distinguishable from textual ones inside LLMs, with significantly different representations (e.g. they live in different narrow cones), and a complete translation to textual tokens does not exist. Yet, (2) both perceptual and textual tokens activate similar LLM weights. Despite their differences, (3) perceptual tokens are implicitly aligned to textual tokens inside LLMs; we call this the implicit multimodal alignment effect (IMA), and argue that this is linked to architectural design, helping LLMs to generalize. This provides more evidence that the generalization of LLMs to multimodal inputs is mainly due to their architecture. These findings lead to several **implications.** (1) We find a positive correlation between the implicit alignment score and the task performance, suggesting that this could act as a proxy metric for model evaluation and selection. (2) A negative correlation exists regarding hallucinations (e.g. describing non-existing objects in images), revealing that this problem is mainly due to misalignment between the internal perceptual and textual representations. (3) Perceptual tokens change slightly throughout the model; thus, we propose different approaches to skip computations (e.g. in FFN layers) and significantly reduce the inference cost. (4) Due to the slowly changing embeddings across layers, and the high overlap between textual and multimodal activated weights, we compress LLMs by keeping only 1 subnetwork (called alpha-SubNet) that works well across a wide range of multimodal tasks. The code is available here: https://github.com/mshukor/ima-lmms. | https://openreview.net/pdf/1c63bbb0cdac3e041f2098f24d0e97ec8468bb2b.pdf |
Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability | https://openreview.net/forum?id=Tw9nfNyOMy | https://openreview.net/forum?id=Tw9nfNyOMy | Shenyuan Gao,Jiazhi Yang,Li Chen,Kashyap Chitta,Yihang Qiu,Andreas Geiger,Jun Zhang,Hongyang Li | NIPS 2024,Poster | World models can foresee the outcomes of different actions, which is of paramount importance for autonomous driving. Nevertheless, existing driving world models still have limitations in generalization to unseen environments, prediction fidelity of critical details, and action controllability for flexible application. In this paper, we present Vista, a generalizable driving world model with high fidelity and versatile controllability. Based on a systematic diagnosis of existing methods, we introduce several key ingredients to address these limitations. To accurately predict real-world dynamics at high resolution, we propose two novel losses to promote the learning of moving instances and structural information. We also devise an effective latent replacement approach to inject historical frames as priors for coherent long-horizon rollouts. For action controllability, we incorporate a versatile set of controls from high-level intentions (command, goal point) to low-level maneuvers (trajectory, angle, and speed) through an efficient learning strategy. After large-scale training, the capabilities of Vista can seamlessly generalize to different scenarios. Extensive experiments on multiple datasets show that Vista outperforms the most advanced general-purpose video generator in over 70% of comparisons and surpasses the best-performing driving world model by 55% in FID and 27% in FVD. Moreover, for the first time, we utilize the capacity of Vista itself to establish a generalizable reward for real-world action evaluation without accessing the ground truth actions. | https://openreview.net/pdf/692ed830a7f68a7b50cedc94e3ddc18cff8dd692.pdf |
Slack-Free Spiking Neural Network Formulation for Hypergraph Minimum Vertex Cover | https://openreview.net/forum?id=4A5IQEjG8c | https://openreview.net/forum?id=4A5IQEjG8c | Tam Ngoc-Bang Nguyen,Anh-Dzung Doan,zhipeng cai,Tat-Jun Chin | NIPS 2024,Poster | Neuromorphic computers open up the potential of energy-efficient computation using spiking neural networks (SNN), which consist of neurons that exchange spike-based information asynchronously. In particular, SNNs have shown promise in solving combinatorial optimization. Underpinning the SNN methods is the concept of energy minimization of an Ising model, which is closely related to quadratic unconstrained binary optimization (QUBO). Thus, the starting point for many SNN methods is reformulating the target problem as QUBO, then executing an SNN-based QUBO solver. For many combinatorial problems, the reformulation entails introducing penalty terms, potentially with slack variables, that implement feasibility constraints in the QUBO objective. For more complex problems such as hypergraph minimum vertex cover (HMVC), numerous slack variables are introduced which drastically increase the search domain and reduce the effectiveness of the SNN solver. In this paper, we propose a novel SNN formulation for HMVC. Rather than using penalty terms with slack variables, our SNN architecture introduces additional spiking neurons with a constraint checking and correction mechanism that encourages convergence to feasible solutions. In effect, our method obviates the need for reformulating HMVC as QUBO. Experiments on neuromorphic hardware show that our method consistently yielded high quality solutions for HMVC on real and synthetic instances where the SNN-based QUBO solver often failed, while consuming measurably less energy than global solvers on CPU. | https://openreview.net/pdf/c307c40636b15433c8728bdac1daa2f9b80b8318.pdf |
SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation | https://openreview.net/forum?id=clAOSSzT6v | https://openreview.net/forum?id=clAOSSzT6v | Jesus Zarzar,Bernard Ghanem | NIPS 2024,Poster | We present a novel approach for digitizing real-world objects by estimating their geometry, material properties, and environmental lighting from a set of posed images with fixed lighting. Our method incorporates into Neural Radiance Field (NeRF) pipelines the split sum approximation used with image-based lighting for real-time physically based rendering. We propose modeling the scene's lighting with a single scene-specific MLP representing pre-integrated image-based lighting at arbitrary resolutions. We accurately model pre-integrated lighting by exploiting a novel regularizer based on efficient Monte Carlo sampling. Additionally, we propose a new method of supervising self-occlusion predictions by exploiting a similar regularizer based on Monte Carlo sampling. Experimental results demonstrate the efficiency and effectiveness of our approach in estimating scene geometry, material properties, and lighting. Our method attains state-of-the-art relighting quality after only ${\sim}1$ hour of training in a single NVIDIA A100 GPU. | https://openreview.net/pdf/137e8310cddced8001956a18c816e7c84538ff45.pdf |
On the Comparison between Multi-modal and Single-modal Contrastive Learning | https://openreview.net/forum?id=O2UwxfhY1P | https://openreview.net/forum?id=O2UwxfhY1P | Wei Huang,Andi Han,Yongqiang Chen,Yuan Cao,zhiqiang xu,Taiji Suzuki | NIPS 2024,Poster | Multi-modal contrastive learning with language supervision has presented a paradigm shift in modern machine learning. By pre-training on a web-scale dataset, multi-modal contrastive learning can learn high-quality representations that exhibit impressive robustness and transferability. Despite its empirical success, the theoretical understanding is still in its infancy, especially regarding its comparison with single-modal contrastive learning. In this work, we introduce a feature learning theory framework that provides a theoretical foundation for understanding the differences between multi-modal and single-modal contrastive learning. Based on a data generation model consisting of signal and noise, our analysis is performed on a ReLU network trained with the InfoMax objective function. Through a trajectory-based optimization analysis and generalization characterization on downstream tasks, we identify the critical factor, which is the signal-to-noise ratio (SNR), that impacts the generalizability in downstream tasks of both multi-modal and single-modal contrastive learning. Through the cooperation between the two modalities, multi-modal learning can achieve better feature learning, leading to improvements in performance in downstream tasks compared to single-modal learning. Our analysis provides a unified framework that can characterize the optimization and generalization of both single-modal and multi-modal contrastive learning. Empirical experiments on both synthetic and real-world datasets further consolidate our theoretical findings. | https://openreview.net/pdf/e4742cbd1eae79f69e7fcce7a3d012f2be6b6b49.pdf |
Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data | https://openreview.net/forum?id=2RS0fL7Eet | https://openreview.net/forum?id=2RS0fL7Eet | Xuxing Chen,Abhishek Roy,Yifan Hu,Krishna Balasubramanian | NIPS 2024,Poster | We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional stochastic optimization problem. In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches, thereby providing a fully online approach for performing instrumental variable regression with streaming data. When the true model is linear, we derive rates of convergence in expectation that are of order $\mathcal{O}(\log T/T)$ and $\mathcal{O}(1/T^{1-\epsilon})$ for any $\epsilon>0$, under the availability of two-sample and one-sample oracles, respectively. Importantly, under the availability of the two-sample oracle, the aforementioned rate is actually agnostic to the relationship between the confounder and the instrumental variable, demonstrating the flexibility of the proposed approach in alleviating the need for explicit model assumptions required in recent works based on reformulating the problem as min-max optimization problems. Experimental validation is provided to demonstrate the advantages of the proposed algorithms over classical approaches like the 2SLS method. | https://openreview.net/pdf/43a926a61c18c89e80089a783c8d885ad119771d.pdf |
GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation | https://openreview.net/forum?id=eM5d7ZmekA | https://openreview.net/forum?id=eM5d7ZmekA | Chubin Zhang,Hongliang Song,Yi Wei,Chen Yu,Jiwen Lu,Yansong Tang | NIPS 2024,Poster | In this work, we introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach which can predict high-quality assets with 512k Gaussians and 21 input images in only 11 GB GPU memory. Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images. This limits these methods to a low-resolution representation and makes it difficult to scale up to the dense views for better quality. GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms to effectively integrate image features into 3D representations. We implement this solution through a two-stage pipeline: initially, a lightweight proposal network generates a sparse set of 3D anchor points from the posed image inputs; subsequently, a specialized reconstruction transformer refines the geometry and retrieves textural details. Extensive experimental results demonstrate that GeoLRM significantly outperforms existing models, especially for dense view inputs. We also demonstrate the practical applicability of our model with 3D generation tasks, showcasing its versatility and potential for broader adoption in real-world applications. The project page: https://linshan-bin.github.io/GeoLRM/. | https://openreview.net/pdf/a8147f41f49f4ccf2a25bedb4829dba1ad6ddb47.pdf |
Classification Done Right for Vision-Language Pre-Training | https://openreview.net/forum?id=Hd2EOwKItm | https://openreview.net/forum?id=Hd2EOwKItm | Zilong Huang,Qinghao Ye,Bingyi Kang,Jiashi Feng,Haoqi Fan | NIPS 2024,Poster | We introduce SuperClass, a super simple classification method for vision-language pre-training on image-text data. Unlike its contrastive counterpart CLIP, which contrasts against a text encoder, SuperClass directly utilizes tokenized raw text as supervised classification labels, without the need for additional text filtering or selection. Due to the absence of text encoding as a contrastive target, SuperClass does not require a text encoder and does not need to maintain a large batch size as CLIP does. SuperClass demonstrates superior performance on various downstream tasks, including classic computer vision benchmarks and vision-language downstream tasks. We further explore the scaling behavior of SuperClass with respect to model size, training length, and data size, and report encouraging results and comparisons to CLIP. https://github.com/x-cls/superclass | https://openreview.net/pdf/3da2324059e3262d31e1de61432cfde6f265149b.pdf |
Interpretable Mesomorphic Networks for Tabular Data | https://openreview.net/forum?id=PmLty7tODm | https://openreview.net/forum?id=PmLty7tODm | Arlind Kadra,Sebastian Pineda Arango,Josif Grabocka | NIPS 2024,Poster | Even though neural networks have long been deployed in applications involving tabular data, existing neural architectures are still not explainable by design. In this paper, we propose a new class of interpretable neural networks for tabular data that are both deep and linear at the same time (i.e. mesomorphic). We optimize deep hypernetworks to generate explainable linear models on a per-instance basis. As a result, our models retain the accuracy of black-box deep networks while offering free-lunch explainability for tabular data by design. Through extensive experiments, we demonstrate that our explainable deep networks have comparable performance to state-of-the-art classifiers on tabular data and outperform current existing methods that are explainable by design. | https://openreview.net/pdf/23c073e6afd4152e00535639570028d904f4e0d0.pdf |
FasterDiT: Towards Faster Diffusion Transformers Training without Architecture Modification | https://openreview.net/forum?id=cqRgoDFaGN | https://openreview.net/forum?id=cqRgoDFaGN | Jingfeng Yao,Cheng Wang,Wenyu Liu,Xinggang Wang | NIPS 2024,Poster | Diffusion Transformers (DiT) have attracted significant attention in research. However, they suffer from a slow convergence rate. In this paper, we aim to accelerate DiT training without any architectural modification. We identify the following issues in the training process: firstly, certain training strategies do not consistently perform well across different data. Secondly, the effectiveness of supervision at specific timesteps is limited. In response, we propose the following contributions: (1) We introduce a new perspective for interpreting the failure of the strategies. Specifically, we slightly extend the definition of Signal-to-Noise Ratio (SNR) and suggest observing the Probability Density Function (PDF) of SNR to understand the essence of the data robustness of the strategy. (2) We conduct numerous experiments and report over one hundred experimental results to empirically summarize a unified accelerating strategy from the perspective of PDF. (3) We develop a new supervision method that further accelerates the training process of DiT. Based on them, we propose FasterDiT, an exceedingly simple and practicable design strategy. With few lines of code modifications, it achieves 2.30 FID on ImageNet at 256x256 resolution with 1000 iterations, which is comparable to DiT (2.27 FID) but 7 times faster in training. | https://openreview.net/pdf/94ce28a4d652148b0b278970906ae6841d4ab774.pdf |
Prototypical Hash Encoding for On-the-Fly Fine-Grained Category Discovery | https://openreview.net/forum?id=seYXqfGT0q | https://openreview.net/forum?id=seYXqfGT0q | Haiyang Zheng,Nan Pu,Wenjing Li,Nicu Sebe,Zhun Zhong | NIPS 2024,Poster | In this paper, we study a practical yet challenging task, On-the-fly Category Discovery (OCD), which aims to discover online the newly-coming stream data that belong to both known and unknown classes, by leveraging only known category knowledge contained in labeled data. Previous OCD methods employ the hash-based technique to represent old/new categories by hash codes for instance-wise inference. However, directly mapping features into a low-dimensional hash space not only inevitably damages the ability to distinguish classes but also causes a ``high sensitivity'' issue, especially for fine-grained classes, leading to inferior performance. To address these drawbacks, we propose a novel Prototypical Hash Encoding (PHE) framework consisting of Category-aware Prototype Generation (CPG) and Discriminative Category Encoding (DCE) to mitigate the sensitivity of hash codes while preserving the rich discriminative information contained in the high-dimensional feature space, in a two-stage projection fashion. CPG enables the model to fully capture the intra-category diversity by representing each category with multiple prototypes. DCE boosts the discrimination ability of hash codes with the guidance of the generated category prototypes and the constraint of minimum separation distance. By jointly optimizing CPG and DCE, we demonstrate that these two components are mutually beneficial towards an effective OCD. Extensive experiments show the significant superiority of our PHE over previous methods, e.g. obtaining an improvement of +5.3% in ALL ACC averaged over all datasets. Moreover, due to the nature of the interpretable prototypes, we visually analyze the underlying mechanism of how PHE helps group certain samples into either known or unknown categories. Code is available at https://github.com/HaiyangZheng/PHE. | https://openreview.net/pdf/6c9715b8dcae81abdb9d730649c9a5bf577c074f.pdf |
IMAGPose: A Unified Conditional Framework for Pose-Guided Person Generation | https://openreview.net/forum?id=6IyYa4gETN | https://openreview.net/forum?id=6IyYa4gETN | Fei Shen,Jinhui Tang | NIPS 2024,Poster | Diffusion models represent a promising avenue for image generation, having demonstrated competitive performance in pose-guided person image generation. However, existing methods are limited to generating target images from a source image and a target pose, overlooking two critical user scenarios: generating multiple target images with different poses simultaneously and generating target images from multi-view source images. To overcome these limitations, we propose IMAGPose, a unified conditional framework for pose-guided image generation, which incorporates three pivotal modules: a feature-level conditioning (FLC) module, an image-level conditioning (ILC) module, and a cross-view attention (CVA) module. Firstly, the FLC module combines the low-level texture feature from the VAE encoder with the high-level semantic feature from the image encoder, addressing the issue of missing detail information due to the absence of a dedicated person image feature extractor. Then, the ILC module achieves an alignment of images and poses to adapt to flexible and diverse user scenarios by injecting a variable number of source image conditions and introducing a masking strategy. Finally, the CVA module introduces decomposed global and local cross-attention, ensuring local fidelity and global consistency of the person image when multiple source image prompts are provided. The three modules of IMAGPose work together to unify the task of person image generation under various user scenarios. Extensive experimental results demonstrate the consistency and photorealism of our proposed IMAGPose under challenging user scenarios. The code and model will be available at https://github.com/muzishen/IMAGPose. | https://openreview.net/pdf/b3d40c0e6b07e67f29b1d3f1893f4598c5f13631.pdf |
TopoFR: A Closer Look at Topology Alignment on Face Recognition | https://openreview.net/forum?id=KVAx5tys2p | https://openreview.net/forum?id=KVAx5tys2p | Jun Dan,Yang Liu,Jiankang Deng,Haoyu Xie,Siyuan Li,Baigui Sun,Shan Luo | NIPS 2024,Poster | The field of face recognition (FR) has undergone significant advancements with the rise of deep learning. Recently, the success of unsupervised learning and graph neural networks has demonstrated the effectiveness of data structure information. Considering that the FR task can leverage large-scale training data, which intrinsically contains significant structure information, we aim to investigate how to encode such critical structure information into the latent space. As revealed from our observations, directly aligning the structure information between the input and latent spaces inevitably suffers from an overfitting problem, leading to a structure collapse phenomenon in the latent space. To address this problem, we propose TopoFR, a novel FR model that leverages a topological structure alignment strategy called PTSA and a hard sample mining strategy named SDE. Concretely, PTSA uses persistent homology to align the topological structures of the input and latent spaces, effectively preserving the structure information and improving the generalization performance of FR model. To mitigate the impact of hard samples on the latent space structure, SDE accurately identifies hard samples by automatically computing structure damage score (SDS) for each sample, and directs the model to prioritize optimizing these samples. Experimental results on popular face benchmarks demonstrate the superiority of our TopoFR over the state-of-the-art methods. Code and models are available at: https://github.com/modelscope/facechain/tree/main/face_module/TopoFR. | https://openreview.net/pdf/8272e55ab3fe57c9c48e0f682548a4c88d010242.pdf |
TFGDA: Exploring Topology and Feature Alignment in Semi-supervised Graph Domain Adaptation through Robust Clustering | https://openreview.net/forum?id=26BdXIY3ik | https://openreview.net/forum?id=26BdXIY3ik | Jun Dan,Weiming Liu,Chunfeng Xie,Hua Yu,Shunjie Dong,Yanchao Tan | NIPS 2024,Poster | Semi-supervised graph domain adaptation, as a branch of graph transfer learning, aims to annotate unlabeled target graph nodes by utilizing transferable knowledge learned from a label-scarce source graph. However, most existing studies primarily concentrate on aligning feature distributions directly to extract domain-invariant features, while ignoring the utilization of the intrinsic structure information in graphs. Inspired by the significance of data structure information in enhancing models' generalization performance, this paper aims to investigate how to leverage the structure information to assist graph transfer learning. To this end, we propose an innovative framework called TFGDA. Specifically, TFGDA employs a structure alignment strategy named STSA to encode graphs' topological structure information into the latent space, greatly facilitating the learning of transferable features. To achieve a stable alignment of feature distributions, we also introduce an SDA strategy to mitigate domain discrepancy on the sphere. Moreover, to address the overfitting issue caused by label scarcity, a simple but effective RNC strategy is devised to guide the discriminative clustering of unlabeled nodes. Experiments on various benchmarks demonstrate the superiority of TFGDA over SOTA methods. | https://openreview.net/pdf/94c41b2cc103995791b2e73d632b3e321f19288f.pdf |
KALM: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts | https://openreview.net/forum?id=tb1MlJCY5g | https://openreview.net/forum?id=tb1MlJCY5g | Jing-Cheng Pang,Si-Hang Yang,Kaiyuan Li,Jiaji Zhang,Xiong-Hui Chen,Nan Tang,Yang Yu | NIPS 2024,Poster | Reinforcement learning (RL) traditionally trains agents using interaction data, which limits their capabilities to the scope of the training data. To create more knowledgeable agents, leveraging knowledge from large language models (LLMs) has shown a promising way. Despite various attempts to combine LLMs with RL, there is commonly a semantic gap between action signals and LLM tokens, which hinders their integration. This paper introduces a novel approach, KALM (Knowledgeable Agents from Language Model Rollouts), to learn knowledgeable agents by bridging this gap. KALM extracts knowledge from LLMs in the form of imaginary rollouts, which agents can learn through offline RL. To overcome the limitation that LLMs are inherently text-based and may be incompatible with numerical environmental data, KALM fine-tunes the LLM to perform bidirectional translation between textual goals and rollouts. This process enables the LLM to understand the environment better, facilitating the generation of meaningful rollouts. Experiments on robotic manipulation tasks demonstrate that KALM allows agents to rephrase complex goals and tackle novel tasks requiring new optimal behaviors. KALM achieves a 46% success rate in completing 1400 various novel goals, significantly outperforming the 26% success rate of baseline methods. Project homepage: https://kalmneurips2024.github.io. | https://openreview.net/pdf/9dafc9653894e809e0f0beedfbf14e92f90376ed.pdf |
PACE: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices | https://openreview.net/forum?id=uXJlgkWdcI | https://openreview.net/forum?id=uXJlgkWdcI | Hanqing Zhu,Wenyan Cong,Guojin Chen,Shupeng Ning,Ray Chen,Jiaqi Gu,David Z. Pan | NIPS 2024,Poster | Electromagnetic field simulation is central to designing, optimizing, and validating photonic devices and circuits. However, the costly computation associated with numerical simulation poses a significant bottleneck, hindering scalability and turnaround time in the photonic circuit design process. Neural operators offer a promising alternative, but existing SOTA approaches such as Neurolight struggle to predict high-fidelity fields for real-world complicated photonic devices, with a best reported normalized mean absolute error of 0.38. The interplay of highly complex light-matter interaction (e.g., scattering and resonance), sensitivity to local structure details, non-uniform learning complexity for full-domain simulation, and rich frequency information contributes to the failure of existing neural PDE solvers. In this work, we boost the prediction fidelity to an unprecedented level for simulating complex photonic devices with a novel operator design driven by the above challenges. We propose a novel cross-axis factorized PACE operator with a strong long-distance modeling capacity to connect the full-domain complex field pattern with local device structures. Inspired by human learning, we further divide and conquer the simulation task for extremely hard cases into two progressively easier tasks, with a first-stage model learning an initial solution that is refined by a second model. On various complicated photonic device benchmarks, we demonstrate that a single PACE model is capable of achieving 73% lower error with 50% fewer parameters compared with various recent ML-based PDE solvers. The two-stage setup further advances high-fidelity simulation for even more intricate cases. In terms of runtime, PACE demonstrates a 154-577x and 11.8-12x simulation speedup over a numerical solver using scipy or the highly-optimized pardiso solver, respectively. We open-sourced the code and *complicated* optical device dataset at [PACE-Light](https://github.com/zhuhanqing/PACE-Light). | https://openreview.net/pdf/10b54377d0cf5b57481e75058fec3c1d6858aede.pdf |
Instructor-inspired Machine Learning for Robust Molecular Property Prediction | https://openreview.net/forum?id=j7sw0nXLjZ | https://openreview.net/forum?id=j7sw0nXLjZ | Fang Wu,Shuting Jin,Siyuan Li,Stan Z. Li | NIPS 2024,Poster | Machine learning catalyzes a revolution in chemical and biological science. However, its efficacy is heavily dependent on the availability of labeled data, and annotating biochemical data is extremely laborious. To surmount this data sparsity challenge, we present an instructive learning algorithm named InstructMol to measure pseudo-labels' reliability and help the target model leverage large-scale unlabeled data. InstructMol does not require transferring knowledge between multiple domains, which avoids the potential gap between the pretraining and fine-tuning stages. We demonstrated the high accuracy of InstructMol on several real-world molecular datasets and out-of-distribution (OOD) benchmarks. | https://openreview.net/pdf/3af40b767a8a0ebe4cc2100fd32571ac42642e08.pdf |
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis | https://openreview.net/forum?id=6W3LbkKriL | https://openreview.net/forum?id=6W3LbkKriL | Xin Jin,Pengyi Jiao,Zheng-Peng Duan,Xingchao Yang,Chongyi Li,Chun-Le Guo,Bo Ren | NIPS 2024,Poster | Volumetric rendering-based methods, like NeRF, excel in HDR view synthesis from RAW images, especially for nighttime scenes. They suffer from long training times and cannot perform real-time rendering due to dense sampling requirements. The advent of 3D Gaussian Splatting (3DGS) enables real-time rendering and faster training. However, implementing RAW image-based view synthesis directly using 3DGS is challenging due to its inherent drawbacks: 1) in nighttime scenes, extremely low SNR leads to poor structure-from-motion (SfM) estimation in distant views; 2) the limited representation capacity of the spherical harmonics (SH) function is unsuitable for RAW linear color space; and 3) inaccurate scene structure hampers downstream tasks such as refocusing. To address these issues, we propose LE3D (Lighting Every darkness with 3DGS). Our method proposes Cone Scatter Initialization to enrich the estimation of SfM and replaces SH with a Color MLP to represent the RAW linear color space. Additionally, we introduce depth distortion and near-far regularizations to improve the accuracy of scene structure for downstream tasks. These designs enable LE3D to perform real-time novel view synthesis, HDR rendering, refocusing, and tone-mapping changes. Compared to previous volumetric rendering-based methods, LE3D reduces training time to 1% and improves rendering speed by up to 4,000 times for 2K resolution images in terms of FPS. Code and viewer can be found in https://srameo.github.io/projects/le3d. | https://openreview.net/pdf/ecefd914db2374c1053fcff09f2647cd094248b9.pdf |
Generalized Fast Exact Conformalization | https://openreview.net/forum?id=KNZYJ5zQsG | https://openreview.net/forum?id=KNZYJ5zQsG | Diyang Li | NIPS 2024,Poster | Conformal prediction converts nearly any point estimator into a prediction interval under standard assumptions while ensuring valid coverage. However, the extensive computational demands of full conformal prediction are daunting in practice, as it necessitates a comprehensive number of trainings across the entire latent label space. Unfortunately, existing efforts to expedite conformalization often carry strong assumptions and are developed specifically for certain models, or they only offer approximate solution sets. To address this gap, we develop a method for fast exact conformalization of generalized statistical estimation. Our analysis reveals that the structure of the solution path is inherently piecewise smooth, and indicates that utilizing second-order information of difference equations suffices to approximate the entire solution spectrum arbitrarily. We provide a unified view that not only encompasses existing work but also attempts to offer geometric insights. Practically, our framework integrates seamlessly with well-studied numerical solvers. The significant speedups of our algorithm as compared to the existing standard methods are demonstrated across numerous benchmarks. | https://openreview.net/pdf/9d6357acb244f1d67ef9a2cb0af7bdec221c94fa.pdf |
Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability | https://openreview.net/forum?id=DLNOBJa7TM | https://openreview.net/forum?id=DLNOBJa7TM | Ming Xiang,Stratis Ioannidis,Edmund Yeh,Carlee Joe-Wong,Lili Su | NIPS 2024,Poster | Addressing intermittent client availability is critical for the real-world deployment of federated learning algorithms. Most prior work either overlooks the potential non-stationarity in the dynamics of client unavailability or requires substantial memory/computation overhead. We study federated learning in the presence of heterogeneous and non-stationary client availability, which may occur when the deployment environments are uncertain, or the clients are mobile. The impacts of heterogeneity and non-stationarity on client unavailability can be significant, as we illustrate using FedAvg, the most widely adopted federated learning algorithm. We propose FedAWE, which includes novel algorithmic structures that (i) compensate for missed computations due to unavailability with only $O(1)$ additional memory and computation with respect to standard FedAvg, and (ii) evenly diffuse local updates within the federated learning system through implicit gossiping, despite being agnostic to non-stationary dynamics. We show that FedAWE converges to a stationary point of even non-convex objectives while achieving the desired linear speedup property. We corroborate our analysis with numerical experiments over diversified client unavailability dynamics on real-world data sets. | https://openreview.net/pdf/1c5c80086cb6366e13516baada3e2832e752e766.pdf |
Identifying General Mechanism Shifts in Linear Causal Representations | https://openreview.net/forum?id=jWaXhCYTV1 | https://openreview.net/forum?id=jWaXhCYTV1 | Tianyu Chen,Kevin Bello,Francesco Locatello,Bryon Aragam,Pradeep Kumar Ravikumar | NIPS 2024,Poster | We consider the linear causal representation learning setting where we observe a linear mixing of $d$ unknown latent factors, which follow a linear structural causal model. Recent work has shown that it is possible to recover the latent factors as well as the underlying structural causal model over them, up to permutation and scaling, provided that we have at least $d$ environments, each of which corresponds to perfect interventions on a single latent node (factor). After this powerful result, a key open problem faced by the community has been to relax these conditions: allow for coarser than perfect single-node interventions, and allow for fewer than $d$ of them, since the number of latent factors $d$ could be very large. In this work, we consider precisely such a setting, where we allow fewer than $d$ environments, and also allow for very coarse interventions that can very coarsely \textit{change the entire causal graph over the latent factors}. On the flip side, we relax what we wish to extract to simply the \textit{list of nodes that have shifted between one or more environments}. We provide a surprising identifiability result that it is indeed possible, under some very mild standard assumptions, to identify the set of shifted nodes. Our identifiability proof moreover is constructive: we explicitly provide necessary and sufficient conditions for a node to be a shifted node, and show that we can check these conditions given observed data. Our algorithm lends itself very naturally to the sample setting where, instead of just interventional distributions, we are provided datasets of samples from each of these distributions. We corroborate our results on both synthetic experiments as well as an interesting psychometric dataset. The code can be found at https://github.com/TianyuCodings/iLCS. | https://openreview.net/pdf/548aa0f1888d7e062e04466cb9a70a0f79f34ae3.pdf |
Markov Equivalence and Consistency in Differentiable Structure Learning | https://openreview.net/forum?id=TMlGQw7EbC | https://openreview.net/forum?id=TMlGQw7EbC | Chang Deng,Kevin Bello,Pradeep Kumar Ravikumar,Bryon Aragam | NIPS 2024,Poster | Existing approaches to differentiable structure learning of directed acyclic graphs (DAGs) rely on strong identifiability assumptions in order to guarantee that global minimizers of the acyclicity-constrained optimization problem identify the true DAG. Moreover, it has been observed empirically that the optimizer may exploit undesirable artifacts in the loss function. We explain and remedy these issues by studying the behavior of differentiable acyclicity-constrained programs under general likelihoods with multiple global minimizers. By carefully regularizing the likelihood, it is possible to identify the sparsest model in the Markov equivalence class, even in the absence of an identifiable parametrization. We first study the Gaussian case in detail, showing how proper regularization of the likelihood defines a score that identifies the sparsest model. Assuming faithfulness, it also recovers the Markov equivalence class. These results are then generalized to general models and likelihoods, where the same claims hold. These theoretical results are validated empirically, showing how this can be done using standard gradient-based optimizers, thus paving the way for differentiable structure learning under general models and losses. | https://openreview.net/pdf/c07b05f02c7ca6130550850e2d7539c003b37590.pdf |
WildGaussians: 3D Gaussian Splatting In the Wild | https://openreview.net/forum?id=NU3tE3lIqf | https://openreview.net/forum?id=NU3tE3lIqf | Jonas Kulhanek,Songyou Peng,Zuzana Kukelova,Marc Pollefeys,Torsten Sattler | NIPS 2024,Poster | While the field of 3D scene reconstruction is dominated by NeRFs due to their photorealistic quality, 3D Gaussian Splatting (3DGS) has recently emerged, offering similar quality with real-time rendering speeds. However, both methods primarily excel with well-controlled 3D scenes, while in-the-wild data - characterized by occlusions, dynamic objects, and varying illumination - remains challenging. NeRFs can adapt to such conditions easily through per-image embedding vectors, but 3DGS struggles due to its explicit representation and lack of shared parameters. To address this, we introduce WildGaussians, a novel approach to handle occlusions and appearance changes with 3DGS. By leveraging robust DINO features and integrating an appearance modeling module within 3DGS, our method achieves state-of-the-art results. We demonstrate that WildGaussians matches the real-time rendering speed of 3DGS while surpassing both 3DGS and NeRF baselines in handling in-the-wild data, all within a simple architectural framework. | https://openreview.net/pdf/e68ccb32a1c9b66d901691ae7a5da16162cbeec0.pdf |
OpenDlign: Open-World Point Cloud Understanding with Depth-Aligned Images | https://openreview.net/forum?id=IGCaTQ4n1R | https://openreview.net/forum?id=IGCaTQ4n1R | Ye Mao,Junpeng Jing,Krystian Mikolajczyk | NIPS 2024,Poster | Recent open-world 3D representation learning methods using Vision-Language Models (VLMs) to align 3D point clouds with image-text information have shown superior 3D zero-shot performance. However, CAD-rendered images for this alignment often lack realism and texture variation, compromising alignment robustness. Moreover, the volume discrepancy between 3D and 2D pretraining datasets highlights the need for effective strategies to transfer the representational abilities of VLMs to 3D learning. In this paper, we present OpenDlign, a novel open-world 3D model using depth-aligned images generated from a diffusion model for robust multimodal alignment. These images exhibit greater texture diversity than CAD renderings due to the stochastic nature of the diffusion model. By refining the depth map projection pipeline and designing depth-specific prompts, OpenDlign leverages rich knowledge in pre-trained VLM for 3D representation learning with streamlined fine-tuning. Our experiments show that OpenDlign achieves high zero-shot and few-shot performance on diverse 3D tasks, despite only fine-tuning 6 million parameters on a limited ShapeNet dataset. In zero-shot classification, OpenDlign surpasses previous models by 8.0\% on ModelNet40 and 16.4\% on OmniObject3D. Additionally, using depth-aligned images for multimodal alignment consistently enhances the performance of other state-of-the-art models. | https://openreview.net/pdf/9d58c15d0d6023481a51b9cdefddf3ee04ce2c15.pdf |
RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions | https://openreview.net/forum?id=vH7GcaDhAo | https://openreview.net/forum?id=vH7GcaDhAo | Ziyao Zeng,Yangchao Wu,Hyoungseob Park,Daniel Wang,Fengyu Yang,Stefano Soatto,Dong Lao,Byung-Woo Hong,Alex Wong | NIPS 2024,Poster | We propose a method for metric-scale monocular depth estimation. Inferring depth from a single image is an ill-posed problem due to the loss of scale from perspective projection during the image formation process. Any scale chosen is a bias, typically stemming from training on a dataset; hence, existing works have instead opted to use relative (normalized, inverse) depth. Our goal is to recover metric-scaled depth maps through a linear transformation. The crux of our method lies in the observation that certain objects (e.g., cars, trees, street signs) are typically found or associated with certain types of scenes (e.g., outdoor). We explore whether language descriptions can be used to transform relative depth predictions to those in metric scale. Our method, RSA , takes as input a text caption describing objects present in an image and outputs the parameters of a linear transformation which can be applied globally to a relative depth map to yield metric-scaled depth predictions. We demonstrate our method on recent general-purpose monocular depth models on indoors (NYUv2, VOID) and outdoors (KITTI). When trained on multiple datasets, RSA can serve as a general alignment module in zero-shot settings. Our method improves over common practices in aligning relative to metric depth and results in predictions that are comparable to an upper bound of fitting relative depth to ground truth via a linear transformation. Code is available at: https://github.com/Adonis-galaxy/RSA. | https://openreview.net/pdf/2ada30851f1515e3885ffaa16845efacb06685cd.pdf |
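As the RSA record above describes, the metric-scale recovery itself is a single global linear map applied to a relative depth prediction. A minimal sketch of that final step, where `text_to_scale_shift` is a hypothetical placeholder for the module that regresses the transformation parameters from a caption:

```python
import torch

def apply_rsa_alignment(relative_depth: torch.Tensor, scale: torch.Tensor, shift: torch.Tensor) -> torch.Tensor:
    """Map a relative (normalized/inverse) depth map to metric scale with one
    global linear transformation, as described in the RSA abstract above."""
    return scale * relative_depth + shift

# Hypothetical usage; `text_to_scale_shift` stands in for the RSA predictor.
# scale, shift = text_to_scale_shift("a kitchen with a wooden table and two chairs")
# metric_depth = apply_rsa_alignment(relative_depth, scale, shift)
```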
RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar | https://openreview.net/forum?id=2oZea6pKhl | https://openreview.net/forum?id=2oZea6pKhl | Fangqiang Ding,Xiangyu Wen,Yunzhou Zhu,Yiming Li,Chris Xiaoxuan Lu | NIPS 2024,Poster | 3D occupancy-based perception pipeline has significantly advanced autonomous driving by capturing detailed scene descriptions and demonstrating strong generalizability across various object categories and shapes. Current methods predominantly rely on LiDAR or camera inputs for 3D occupancy prediction. These methods are susceptible to adverse weather conditions, limiting the all-weather deployment of self-driving cars. To improve perception robustness, we leverage the recent advances in automotive radars and introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction. Our method, RadarOcc, circumvents the limitations of sparse radar point clouds by directly processing the 4D radar tensor, thus preserving essential scene details. RadarOcc innovatively addresses the challenges associated with the voluminous and noisy 4D radar data by employing Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms. To minimize the interpolation errors associated with direct coordinate transformations, we also devise a spherical-based feature encoding followed by spherical-to-Cartesian feature aggregation. We benchmark various baseline methods based on distinct modalities on the public K-Radar dataset. The results demonstrate RadarOcc's state-of-the-art performance in radar-based 3D occupancy prediction and promising results even when compared with LiDAR- or camera-based methods. Additionally, we present qualitative evidence of the superior performance of 4D radar in adverse weather conditions and explore the impact of key pipeline components through ablation studies. | https://openreview.net/pdf/a7c6663d15027d21973269a119bc1ad67116b73f.pdf |
Binarized Diffusion Model for Image Super-Resolution | https://openreview.net/forum?id=yXpfrLMIr2 | https://openreview.net/forum?id=yXpfrLMIr2 | Zheng Chen,Haotong Qin,Yong Guo,Xiongfei Su,Xin Yuan,Linghe Kong,Yulun Zhang | NIPS 2024,Poster | Advanced diffusion models (DMs) perform impressively in image super-resolution (SR), but the high memory and computational costs hinder their deployment. Binarization, an ultra-compression algorithm, offers the potential for effectively accelerating DMs. Nonetheless, due to the model structure and the multi-step iterative attribute of DMs, existing binarization methods result in significant performance degradation. In this paper, we introduce a novel binarized diffusion model, BI-DiffSR, for image SR. First, for the model structure, we design a UNet architecture optimized for binarization. We propose the consistent-pixel-downsample (CP-Down) and consistent-pixel-upsample (CP-Up) to maintain dimensional consistency and facilitate the full-precision information transfer. Meanwhile, we design the channel-shuffle-fusion (CS-Fusion) to enhance feature fusion in the skip connection. Second, for the activation difference across timesteps, we design the timestep-aware redistribution (TaR) and activation function (TaA). The TaR and TaA dynamically adjust the distribution of activations based on different timesteps, improving the flexibility and representation ability of the binarized module. Comprehensive experiments demonstrate that our BI-DiffSR outperforms existing binarization methods. Code is released at: https://github.com/zhengchen1999/BI-DiffSR. | https://openreview.net/pdf/bdcbf03ed2b041a6f0730d46d010316bdcad8da7.pdf
PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications | https://openreview.net/forum?id=WvoKwq12x5 | https://openreview.net/forum?id=WvoKwq12x5 | Dingkang Yang,Jinjie Wei,Dongling Xiao,Shunli Wang,Tong Wu,Gang Li,Mingcheng Li,Shuaibing Wang,Jiawei Chen,Yue Jiang,Qingyao Xu,Ke Li,Peng Zhai,Lihua Zhang | NIPS 2024,Poster | Developing intelligent pediatric consultation systems offers promising prospects for improving diagnostic efficiency, especially in China, where healthcare resources are scarce. Despite recent advances in Large Language Models (LLMs) for Chinese medicine, their performance is sub-optimal in pediatric applications due to inadequate instruction data and vulnerable training procedures.
To address the above issues, this paper builds PedCorpus, a high-quality dataset of over 300,000 multi-task instructions from pediatric textbooks, guidelines, and knowledge graph resources to fulfil diverse diagnostic demands. Building on the well-designed PedCorpus, we propose PediatricsGPT, the first Chinese pediatric LLM assistant built on a systematic and robust training pipeline.
In the continuous pre-training phase, we introduce a hybrid instruction pre-training mechanism to mitigate the internal-injected knowledge inconsistency of LLMs for medical domain adaptation. Subsequently, the full-parameter Supervised Fine-Tuning (SFT) is utilized to incorporate the general medical knowledge schema into the models. After that, we devise a direct following preference optimization to enhance the generation of pediatrician-like humanistic responses. In the parameter-efficient secondary SFT phase,
a mixture of universal-specific experts strategy is presented to resolve the competency conflict between medical generalist and pediatric expertise mastery. Extensive results based on the metrics, GPT-4, and doctor evaluations on distinct downstream tasks show that PediatricsGPT consistently outperforms previous Chinese medical LLMs. The project and data will be released at https://github.com/ydk122024/PediatricsGPT. | https://openreview.net/pdf/0383e37508b906bd1e4ecbd8a68ab9773ca50dfc.pdf |
SimVG: A Simple Framework for Visual Grounding with Decoupled Multi-modal Fusion | https://openreview.net/forum?id=fOLNl52Q5U | https://openreview.net/forum?id=fOLNl52Q5U | dai ming,Lingfeng Yang,Yihao Xu,Zhenhua Feng,Wankou Yang | NIPS 2024,Poster | Visual grounding is a common vision task that involves grounding descriptive sentences to the corresponding regions of an image. Most existing methods use independent image-text encoding and apply complex hand-crafted modules or encoder-decoder architectures for modal interaction and query reasoning. However, their performance significantly drops when dealing with complex textual expressions. This is because the former paradigm only utilizes limited downstream data to fit the multi-modal feature fusion. Therefore, it is only effective when the textual expressions are relatively simple. In contrast, given the wide diversity of textual expressions and the uniqueness of downstream training data, the existing fusion module, which extracts multimodal content from a visual-linguistic context, has not been fully investigated. In this paper, we present a simple yet robust transformer-based framework, SimVG, for visual grounding. Specifically, we decouple visual-linguistic feature fusion from downstream tasks by leveraging existing multimodal pre-trained models and incorporating additional object tokens to facilitate deep integration of downstream and pre-training tasks. Furthermore, we design a dynamic weight-balance distillation method in the multi-branch synchronous learning process to enhance the representation capability of the simpler branch. This branch only consists of a lightweight MLP, which simplifies the structure and improves reasoning speed. Experiments on six widely used VG datasets, i.e., RefCOCO/+/g, ReferIt, Flickr30K, and GRefCOCO, demonstrate the superiority of SimVG. Finally, the proposed method not only achieves improvements in efficiency and convergence speed but also attains new state-of-the-art performance on these benchmarks. Codes and models are available at https://github.com/Dmmm1997/SimVG. | https://openreview.net/pdf/4ea4c7f85226e3bce71f70db5a96302bd9198071.pdf |
Boundary Matters: A Bi-Level Active Finetuning Method | https://openreview.net/forum?id=444LAH3MhG | https://openreview.net/forum?id=444LAH3MhG | Han Lu,Yichen Xie,Xiaokang Yang,Junchi Yan | NIPS 2024,Poster | The pretraining-finetuning paradigm has gained widespread adoption in vision tasks and other fields. However, the finetuning phase still requires high-quality annotated samples. To overcome this challenge, the concept of active finetuning has emerged, aiming to select the most appropriate samples for model finetuning within a limited budget. Existing active learning methods struggle in this scenario due to their inherent bias in batch selection. Meanwhile, the recent active finetuning approach focuses solely on global distribution alignment but neglects the contributions of samples to local boundaries. Therefore, we propose a Bi-Level Active Finetuning framework (BiLAF) to select the samples for annotation in one shot, encompassing two stages: core sample selection for global diversity and boundary sample selection for local decision uncertainty. Without the need of ground-truth labels, our method can successfully identify pseudo-class centers, apply a novel denoising technique, and iteratively select boundary samples with designed evaluation metric. Extensive experiments provide qualitative and quantitative evidence of our method's superior efficacy, consistently outperforming the existing baselines. | https://openreview.net/pdf/f9edcbfade57966451fd0f6bb56e2d7e3c895b35.pdf |
Zero-to-Hero: Enhancing Zero-Shot Novel View Synthesis via Attention Map Filtering | https://openreview.net/forum?id=3uQtNWNTwz | https://openreview.net/forum?id=3uQtNWNTwz | Ido Sobol,Chenfeng Xu,Or Litany | NIPS 2024,Poster | Generating realistic images from arbitrary views based on a single source image remains a significant challenge in computer vision, with broad applications ranging from e-commerce to immersive virtual experiences. Recent advancements in diffusion models, particularly the Zero-1-to-3 model, have been widely adopted for generating plausible views, videos, and 3D models. However, these models still struggle with inconsistencies and implausibility in new views generation, especially for challenging changes in viewpoint. In this work, we propose Zero-to-Hero, a novel test-time approach that enhances view synthesis by manipulating attention maps during the denoising process of Zero-1-to-3. By drawing an analogy between the denoising process and stochastic gradient descent (SGD), we implement a filtering mechanism that aggregates attention maps, enhancing generation reliability and authenticity. This process improves geometric consistency without requiring retraining or significant computational resources. Additionally, we modify the self-attention mechanism to integrate information from the source view, reducing shape distortions. These processes are further supported by a specialized sampling schedule. Experimental results demonstrate substantial improvements in fidelity and consistency, validated on a diverse set of out-of-distribution objects. Additionally, we demonstrate the general applicability and effectiveness of Zero-to-Hero in multi-view, and image generation conditioned on semantic maps and pose. | https://openreview.net/pdf/2b060ee47100b88216ae60b75e7ca3abafc26b1f.pdf |
Optimal-state Dynamics Estimation for Physics-based Human Motion Capture from Videos | https://openreview.net/forum?id=RkOT8rAmRR | https://openreview.net/forum?id=RkOT8rAmRR | Cuong Le,Viktor Johansson,Manon Kok,Bastian Wandt | NIPS 2024,Poster | Human motion capture from monocular videos has made significant progress in recent years. However, modern approaches often produce temporal artifacts, e.g. in form of jittery motion and struggle to achieve smooth and physically plausible motions. Explicitly integrating physics, in form of internal forces and exterior torques, helps alleviating these artifacts. Current state-of-the-art approaches make use of an automatic PD controller to predict torques and reaction forces in order to re-simulate the input kinematics, i.e. the joint angles of a predefined skeleton. However, due to imperfect physical models, these methods often require simplifying assumptions and extensive preprocessing of the input kinematics to achieve good performance. To this end, we propose a novel method to selectively incorporate the physics models with the kinematics observations in an online setting, inspired by a neural Kalman-filtering approach. We develop a control loop as a meta-PD controller to predict internal joint torques and external reaction forces, followed by a physics-based motion simulation. A recurrent neural network is introduced to realize a Kalman filter that attentively balances the kinematics input and simulated motion, resulting in an optimal-state dynamics prediction. We show that this filtering step is crucial to provide an online supervision that helps balancing the shortcoming of the respective input motions, thus being important for not only capturing accurate global motion trajectories but also producing physically plausible human poses. The proposed approach excels in the physics-based human pose estimation task and demonstrates the physical plausibility of the predictive dynamics, compared to state of the art. The code is available on https://github.com/cuongle1206/OSDCap. | https://openreview.net/pdf/325366bc6a69db293281709cbf852252b3527c07.pdf |
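The two ingredients named in the OSDCap abstract above can be written schematically as a PD control law for joint torques plus a Kalman-style blend of simulated and observed states (the gains $K_p$, $K_d$ and the gain matrix $\mathbf{K}_t$ are generic placeholders, not the paper's learned quantities):

$$\tau_t = K_p\left(q_t^{\mathrm{ref}} - q_t\right) - K_d\,\dot{q}_t, \qquad \hat{x}_t = x_t^{\mathrm{sim}} + \mathbf{K}_t\left(x_t^{\mathrm{obs}} - x_t^{\mathrm{sim}}\right),$$

where the recurrent network plays the role of predicting $\mathbf{K}_t$, weighing the kinematics observation $x_t^{\mathrm{obs}}$ against the physics-simulated state $x_t^{\mathrm{sim}}$.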
Neural Pose Representation Learning for Generating and Transferring Non-Rigid Object Poses | https://openreview.net/forum?id=NU54MoKWlA | https://openreview.net/forum?id=NU54MoKWlA | Seungwoo Yoo,Juil Koo,Kyeongmin Yeo,Minhyuk Sung | NIPS 2024,Poster | We propose a novel method for learning representations of poses for 3D deformable objects, which specializes in 1) disentangling pose information from the object's identity, 2) facilitating the learning of pose variations, and 3) transferring pose information to other object identities. Based on these properties, our method enables the generation of 3D deformable objects with diversity in both identities and poses, using variations of a single object. It does not require explicit shape parameterization such as skeletons or joints, point-level or shape-level correspondence supervision, or variations of the target object for pose transfer.
To achieve pose disentanglement, compactness for generative models, and transferability, we first design the pose extractor to represent the pose as a keypoint-based hybrid representation and the pose applier to learn an implicit deformation field. To better distill pose information from the object's geometry, we propose the implicit pose applier to output an intrinsic mesh property, the face Jacobian. Once the extracted pose information is transferred to the target object, the pose applier is fine-tuned in a self-supervised manner to better describe the target object's shapes with pose variations. The extracted poses are also used to train a cascaded diffusion model to enable the generation of novel poses.
Our experiments with the DeformThings4D and Human datasets demonstrate state-of-the-art performance in pose transfer and the ability to generate diverse deformed shapes with various objects and poses. | https://openreview.net/pdf/ebb06e65079fdcecced9d3442b211f6a9ba6e5e4.pdf |
SyncTweedies: A General Generative Framework Based on Synchronized Diffusions | https://openreview.net/forum?id=06Vt6f2js7 | https://openreview.net/forum?id=06Vt6f2js7 | Jaihoon Kim,Juil Koo,Kyeongmin Yeo,Minhyuk Sung | NIPS 2024,Poster | We introduce a general diffusion synchronization framework for generating diverse visual content, including ambiguous images, panorama images, 3D mesh textures, and 3D Gaussian splats textures, using a pretrained image diffusion model. We first present an analysis of various scenarios for synchronizing multiple diffusion processes through a canonical space. Based on the analysis, we introduce a synchronized diffusion method, SyncTweedies, which averages the outputs of Tweedie’s formula while conducting denoising in multiple instance spaces. Compared to previous work that achieves synchronization through finetuning, SyncTweedies is a zero-shot method that does not require any finetuning, preserving the rich prior of diffusion models trained on Internet-scale image datasets without overfitting to specific domains. We verify that SyncTweedies offers the broadest applicability to diverse applications and superior performance compared to the previous state-of-the-art for each application. Our project page is at https://synctweedies.github.io. | https://openreview.net/pdf/359a870b4264337a01a5b8dd7b79979f90e67323.pdf |
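For reference, Tweedie's formula in the standard DDPM parameterization gives the posterior-mean estimate of the clean sample that the SyncTweedies abstract above averages; a schematic view of synchronization through a canonical space (notation follows the usual DDPM convention, not necessarily the paper's):

$$\hat{x}_0^{(i)} = \frac{x_t^{(i)} - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta\!\left(x_t^{(i)}, t\right)}{\sqrt{\bar{\alpha}_t}}, \qquad \hat{z}_0 = \frac{1}{N}\sum_{i=1}^{N} g_i^{-1}\!\left(\hat{x}_0^{(i)}\right),$$

where $g_i$ projects the canonical variable into the $i$-th instance space; the averaged canonical estimate $\hat{z}_0$ is mapped back to each instance before the next denoising step.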
Image Copy Detection for Diffusion Models | https://openreview.net/forum?id=gvlOQC6oP1 | https://openreview.net/forum?id=gvlOQC6oP1 | Wenhao Wang,Yifan Sun,Zhentao Tan,Yi Yang | NIPS 2024,Poster | Images produced by diffusion models are increasingly popular in digital artwork and visual marketing. However, such generated images might replicate content from existing ones and pose the challenge of content originality. Existing Image Copy Detection (ICD) models, though accurate in detecting hand-crafted replicas, overlook the challenge from diffusion models. This motivates us to introduce ICDiff, the first ICD specialized for diffusion models. To this end, we construct a Diffusion-Replication (D-Rep) dataset and correspondingly propose a novel deep embedding method. D-Rep uses a state-of-the-art diffusion model (Stable Diffusion V1.5) to generate 40, 000 image-replica pairs, which are manually annotated into 6 replication levels ranging from 0 (no replication) to 5 (total replication). Our method, PDF-Embedding, transforms the replication level of each image-replica pair into a probability density function (PDF) as the supervision signal. The intuition is that the probability of neighboring replication levels should be continuous and smooth. Experimental results show that PDF-Embedding surpasses protocol-driven methods and non-PDF choices on the D-Rep test set. Moreover, by utilizing PDF-Embedding, we find that the replication ratios of well-known diffusion models against an open-source gallery range from 10% to 20%. The project is publicly available at https://icdiff.github.io/. | https://openreview.net/pdf/d33680959e90956d425acc64ab0bb0cd0ff308e8.pdf |
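The PDF-Embedding supervision described above can be illustrated by smoothing a discrete replication level into a probability vector over the six levels; the Gaussian width below is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def level_to_pdf(level: int, num_levels: int = 6, sigma: float = 0.75) -> np.ndarray:
    """Turn an annotated replication level (0-5) into a smooth probability vector,
    so that neighboring levels receive continuous, decaying supervision mass."""
    grid = np.arange(num_levels)
    weights = np.exp(-0.5 * ((grid - level) / sigma) ** 2)
    return weights / weights.sum()

print(level_to_pdf(3).round(3))  # peaks at level 3 and decays smoothly toward 0 and 5
```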
EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views | https://openreview.net/forum?id=ea4oxkiMP7 | https://openreview.net/forum?id=ea4oxkiMP7 | Yuhang Yang,Wei Zhai,Chengfeng Wang,Chengjun Yu,Yang Cao,Zheng-Jun Zha | NIPS 2024,Poster | Understanding egocentric human-object interaction (HOI) is a fundamental aspect of human-centric perception, facilitating applications like AR/VR and embodied AI. For the egocentric HOI, in addition to perceiving semantics e.g., ''what'' interaction is occurring, capturing ''where'' the interaction specifically manifests in 3D space is also crucial, which links the perception and operation. Existing methods primarily leverage observations of HOI to capture interaction regions from an exocentric view. However, incomplete observations of interacting parties in the egocentric view introduce ambiguity between visual observations and interaction contents, impairing their efficacy. From the egocentric view, humans integrate the visual cortex, cerebellum, and brain to internalize their intentions and interaction concepts of objects, allowing for the pre-formulation of interactions and making behaviors even when interaction regions are out of sight. In light of this, we propose harmonizing the visual appearance, head motion, and 3D object to excavate the object interaction concept and subject intention, jointly inferring 3D human contact and object affordance from egocentric videos. To achieve this, we present EgoChoir, which links object structures with interaction contexts inherent in appearance and head motion to reveal object affordance, further utilizing it to model human contact. Additionally, a gradient modulation is employed to adopt appropriate clues for capturing interaction regions across various egocentric scenarios. Moreover, 3D contact and affordance are annotated for egocentric videos collected from Ego-Exo4D and GIMO to support the task. Extensive experiments on them demonstrate the effectiveness and superiority of EgoChoir. | https://openreview.net/pdf/0a89640b3a0d8315ce020bcde82aff280fa8e7f0.pdf |
Black-Box Forgetting | https://openreview.net/forum?id=lpFDhC91Oj | https://openreview.net/forum?id=lpFDhC91Oj | Yusuke Kuwana,Yuta Goto,Takashi Shibata,Go Irie | NIPS 2024,Poster | Large-scale pre-trained models (PTMs) provide remarkable zero-shot classification capability covering a wide variety of object classes. However, practical applications do not always require the classification of all kinds of objects, and leaving the model capable of recognizing unnecessary classes not only degrades overall accuracy but also leads to operational disadvantages. To mitigate this issue, we explore the selective forgetting problem for PTMs, where the task is to make the model unable to recognize only the specified classes, while maintaining accuracy for the rest. All the existing methods assume ''white-box'' settings, where model information such as architectures, parameters, and gradients is available for training. However, PTMs are often ''black-box,'' where information on such models is unavailable for commercial reasons or social responsibilities. In this paper, we address a novel problem of selective forgetting for black-box models, named Black-Box Forgetting, and propose an approach to the problem. Given that information on the model is unavailable, we optimize the input prompt to decrease the accuracy of specified classes through derivative-free optimization. To avoid difficult high-dimensional optimization while ensuring high forgetting performance, we propose Latent Context Sharing, which introduces common low-dimensional latent components among multiple tokens for the prompt. Experiments on four standard benchmark datasets demonstrate the superiority of our method with reasonable baselines. The code is available at https://github.com/yusukekwn/Black-Box-Forgetting. | https://openreview.net/pdf/44c580114e459439b5c85268f8c524a7df6cb64d.pdf |
Subsurface Scattering for Gaussian Splatting | https://openreview.net/forum?id=2vMvh5XP0P | https://openreview.net/forum?id=2vMvh5XP0P | Jan-Niklas Dihlmann,Arjun Majumdar,Andreas Engelhardt,Raphael Braun,Hendrik Lensch | NIPS 2024,Poster | 3D reconstruction and relighting of objects made from scattering materials present a significant challenge due to the complex light transport beneath the surface. 3D Gaussian Splatting introduced high-quality novel view synthesis at real-time speeds. While 3D Gaussians efficiently approximate an object's surface, they fail to capture the volumetric properties of subsurface scattering. We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data. Our method decomposes the scene into an explicit surface represented as 3D Gaussians, with a spatially varying BRDF, and an implicit volumetric representation of the scattering component. A learned incident light field accounts for shadowing. We optimize all parameters jointly via ray-traced differentiable rendering. Our approach enables material editing, relighting, and novel view synthesis at interactive rates. We show successful application on synthetic data and contribute a newly acquired multi-view multi-light dataset of objects in a light-stage setup. Compared to previous work we achieve comparable or better results at a fraction of optimization and rendering time while enabling detailed control over material attributes. | https://openreview.net/pdf/e47f0c8da42e87fc6881bc7f968014b6ab73998d.pdf |
FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Attention | https://openreview.net/forum?id=X9Fga52OOv | https://openreview.net/forum?id=X9Fga52OOv | Yu Lu,Yuanzhi Liang,Linchao Zhu,Yi Yang | NIPS 2024,Poster | Video diffusion models have made substantial progress in various video generation applications. However, training models for long video generation tasks require significant computational and data resources, posing a challenge to developing long video diffusion models.
This paper investigates a straightforward and training-free approach to extend an existing short video diffusion model (e.g. pre-trained on 16-frame videos) for consistent long video generation (e.g. 128 frames). Our preliminary observation has found that directly applying the short video diffusion model to generate long videos can lead to severe video quality degradation. Further investigation reveals that this degradation is primarily due to the distortion of high-frequency components in long videos, characterized by a decrease in spatial high-frequency components and an increase in temporal high-frequency components. Motivated by this, we propose a novel solution named FreeLong to balance the frequency distribution of long video features during the denoising process. FreeLong blends the low-frequency components of global video features, which encapsulate the entire video sequence, with the high-frequency components of local video features that focus on shorter subsequences of frames. This approach maintains global consistency while incorporating diverse and high-quality spatiotemporal details from local videos, enhancing both the consistency and fidelity of long video generation. We evaluated FreeLong on multiple base video diffusion models and observed significant improvements. Additionally, our method supports coherent multi-prompt generation, ensuring both visual coherence and seamless transitions between scenes. Our project page is at: https://yulu.net.cn/freelong. | https://openreview.net/pdf/cccdd4d59c6dce06c7ef16201b6b0c7b1338d389.pdf |
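A minimal sketch of the SpectralBlend idea described above: keep the low-frequency band of the global video feature and the high-frequency band of the local one. The cutoff radius and the axes being filtered are assumptions for illustration, not FreeLong's exact configuration:

```python
import torch

def spectral_blend(global_feat: torch.Tensor, local_feat: torch.Tensor, cutoff: float = 0.25) -> torch.Tensor:
    """Blend low frequencies of a global (T, H, W, C) feature with high frequencies
    of a local feature of the same shape, via a 3D FFT over time and space."""
    t, h, w, _ = global_feat.shape
    fg = torch.fft.fftn(global_feat, dim=(0, 1, 2))
    fl = torch.fft.fftn(local_feat, dim=(0, 1, 2))
    freqs = torch.meshgrid(torch.fft.fftfreq(t), torch.fft.fftfreq(h), torch.fft.fftfreq(w), indexing="ij")
    radius = torch.sqrt(sum(f ** 2 for f in freqs))
    low_pass = (radius <= cutoff).to(global_feat.dtype)[..., None]   # (T, H, W, 1) mask
    blended = fg * low_pass + fl * (1.0 - low_pass)
    return torch.fft.ifftn(blended, dim=(0, 1, 2)).real
```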
Efficient Combinatorial Optimization via Heat Diffusion | https://openreview.net/forum?id=psDrko9v1D | https://openreview.net/forum?id=psDrko9v1D | Hengyuan Ma,Wenlian Lu,Jianfeng Feng | NIPS 2024,Poster | Combinatorial optimization problems are widespread but inherently challenging due to their discrete nature. The primary limitation of existing methods is that they can only access a small fraction of the solution space at each iteration, resulting in limited efficiency in searching for the global optimum. To overcome this challenge, diverging from conventional efforts of expanding the solver's search scope, we focus on enabling information to actively propagate to the solver through heat diffusion. By transforming the target function while preserving its optima, heat diffusion facilitates information flow from distant regions to the solver, providing more efficient navigation. Utilizing heat diffusion, we propose a framework for solving general combinatorial optimization problems. The proposed methodology demonstrates superior performance across a range of the most challenging and widely encountered combinatorial optimizations. Echoing recent advancements in harnessing thermodynamics for generative artificial intelligence, our study further reveals its significant potential in advancing combinatorial optimization. | https://openreview.net/pdf/61bd38e29a750a8f1021582b9a9180767853efdf.pdf
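A toy illustration of the mechanism described above: the heat-diffused objective is a Gaussian smoothing of the original target (solving the heat equation for time $\tau$ is exactly convolution with a Gaussian of variance $2\tau$), so gradients of the smoothed function carry information from distant regions. This is a generic relaxation sketch, not the paper's solver:

```python
import torch

def heat_smoothed(f, x: torch.Tensor, tau: float, n_samples: int = 64) -> torch.Tensor:
    """Monte Carlo estimate of u(x, tau) = E_z[ f(x + sqrt(2*tau) * z) ], z ~ N(0, I),
    i.e. the solution of the heat equation at time tau with initial condition f."""
    noise = torch.randn(n_samples, *x.shape)
    return f(x.unsqueeze(0) + (2.0 * tau) ** 0.5 * noise).mean()

# Descend on the smoothed landscape while annealing tau toward the original problem.
f = lambda v: ((v ** 2 - 1.0) ** 2).sum(dim=-1)   # toy non-convex surrogate objective
x = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for tau in torch.linspace(1.0, 1e-3, 200):
    opt.zero_grad()
    heat_smoothed(f, x, float(tau)).backward()
    opt.step()
```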
Exploring Structured Semantic Priors Underlying Diffusion Score for Test-time Adaptation | https://openreview.net/forum?id=c7m1HahBNf | https://openreview.net/forum?id=c7m1HahBNf | Mingjia Li,Shuang Li,Tongrui Su,Longhui Yuan,Jian Liang,Wei Li | NIPS 2024,Poster | Capitalizing on the complementary advantages of generative and discriminative models has always been a compelling vision in machine learning, backed by a growing body of research. This work discloses the hidden semantic structure within score-based generative models, unveiling their potential as effective discriminative priors. Inspired by our theoretical findings, we propose DUSA to exploit the structured semantic priors underlying diffusion score to facilitate the test-time adaptation of image classifiers or dense predictors. Notably, DUSA extracts knowledge from a single timestep of denoising diffusion, lifting the curse of Monte Carlo-based likelihood estimation over timesteps. We demonstrate the efficacy of our DUSA in adapting a wide variety of competitive pre-trained discriminative models on diverse test-time scenarios. Additionally, a thorough ablation study is conducted to dissect the pivotal elements in DUSA. Code is publicly available at https://github.com/BIT-DA/DUSA. | https://openreview.net/pdf/ea01c53f9c1ee51675430f4b33127b1575121b67.pdf |
LinNet: Linear Network for Efficient Point Cloud Representation Learning | https://openreview.net/forum?id=ehfCxpDsrw | https://openreview.net/forum?id=ehfCxpDsrw | Hao Deng,Kunlei Jing,Shengmei Chen,Cheng Liu,Jiawei Ru,Bo Jiang,Lin Wang | NIPS 2024,Poster | Point-based methods have made significant progress, but improving their scalability in large-scale 3D scenes is still a challenging problem. In this paper, we delve into the point-based method and develop a simpler, faster, stronger variant model, dubbed as LinNet. In particular, we first propose the disassembled set abstraction (DSA) module, which is more effective than the previous version of set abstraction. It achieves more efficient local aggregation by leveraging spatial anisotropy and channel anisotropy separately. Additionally, by mapping 3D point clouds onto 1D space-filling curves, we enable parallelization of downsampling and neighborhood queries on GPUs with linear complexity.
LinNet, as a purely point-based method, outperforms most previous methods in both indoor and outdoor scenes without any extra attention or sparse convolution, relying merely on a simple MLP. It achieves mIoUs of 73.7\%, 81.4\%, and 69.1\% on the S3DIS Area5, NuScenes, and SemanticKITTI validation benchmarks, respectively, while speeding up almost 10x over PointNeXt. Our work further reveals both the efficacy and efficiency potential of vanilla point-based models in large-scale representation learning. Our code will be available upon publication. | https://openreview.net/pdf/28ce1d30f53f72e4a911b689d6440fffd5413571.pdf
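The serialization step mentioned above, mapping 3D points onto a 1D space-filling curve so that downsampling and neighborhood queries reduce to operations on a sorted array, can be sketched with Morton (Z-order) codes; the bit depth and the specific curve are illustrative assumptions:

```python
import numpy as np

def morton_codes(points: np.ndarray, bits: int = 10) -> np.ndarray:
    """Quantize (N, 3) points onto a [0, 2**bits) grid and interleave the bits of
    x, y, z into one Morton (Z-order) code per point."""
    mins, maxs = points.min(0), points.max(0)
    grid = ((points - mins) / (maxs - mins + 1e-9) * (2 ** bits - 1)).astype(np.int64)
    codes = np.zeros(len(points), dtype=np.int64)
    for b in range(bits):
        for axis in range(3):
            codes |= ((grid[:, axis] >> b) & 1) << (3 * b + axis)
    return codes

points = np.random.rand(1000, 3).astype(np.float32)
order = np.argsort(morton_codes(points))   # spatially nearby points tend to be adjacent
serialized = points[order]
```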
DeTeCtive: Detecting AI-generated Text via Multi-Level Contrastive Learning | https://openreview.net/forum?id=cdTTTJfJe3 | https://openreview.net/forum?id=cdTTTJfJe3 | Xun Guo,Yongxin He,Shan Zhang,Ting Zhang,Wanquan Feng,Haibin Huang,Chongyang Ma | NIPS 2024,Poster | Current techniques for detecting AI-generated text are largely confined to manual feature crafting and supervised binary classification paradigms. These methodologies typically lead to performance bottlenecks and unsatisfactory generalizability. Consequently, these methods are often inapplicable for out-of-distribution (OOD) data and newly emerged large language models (LLMs). In this paper, we revisit the task of AI-generated text detection. We argue that the key to accomplishing this task lies in distinguishing writing styles of different authors, rather than simply classifying the text into human-written or AI-generated text. To this end, we propose DeTeCtive, a multi-task auxiliary, multi-level contrastive learning framework. DeTeCtive is designed to facilitate the learning of distinct writing styles, combined with a dense information retrieval pipeline for AI-generated text detection. Our method is compatible with a range of text encoders. Extensive experiments demonstrate that our method enhances the ability of various text encoders in detecting AI-generated text across multiple benchmarks and achieves state-of-the-art results. Notably, in OOD zero-shot evaluation, our method outperforms existing approaches by a large margin. Moreover, we find our method boasts a Training-Free Incremental Adaptation (TFIA) capability towards OOD data, further enhancing its efficacy in OOD detection scenarios. We will open-source our code and models in hopes that our work will spark new thoughts in the field of AI-generated text detection, ensuring safe application of LLMs and enhancing compliance. | https://openreview.net/pdf/5b8e3eed0b160a34f3f8eac5164b410651264d7c.pdf |
Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | https://openreview.net/forum?id=lhlIUxD5eE | https://openreview.net/forum?id=lhlIUxD5eE | Chen-Hao Chao,Chien Feng,Wei-Fang Sun,Cheng-Kuang Lee,Simon See,Chun-Yi Lee | NIPS 2024,Poster | Existing Maximum-Entropy (MaxEnt) Reinforcement Learning (RL) methods for continuous action spaces are typically formulated based on actor-critic frameworks and optimized through alternating steps of policy evaluation and policy improvement. In the policy evaluation steps, the critic is updated to capture the soft Q-function. In the policy improvement steps, the actor is adjusted in accordance with the updated soft Q-function. In this paper, we introduce a new MaxEnt RL framework modeled using Energy-Based Normalizing Flows (EBFlow). This framework integrates the policy evaluation steps and the policy improvement steps, resulting in a single objective training process. Our method enables the calculation of the soft value function used in the policy evaluation target without Monte Carlo approximation. Moreover, this design supports the modeling of multi-modal action distributions while facilitating efficient action sampling. To evaluate the performance of our method, we conducted experiments on the MuJoCo benchmark suite and a number of high-dimensional robotic tasks simulated by Omniverse Isaac Gym. The evaluation results demonstrate that our method achieves superior performance compared to widely-adopted representative baselines. | https://openreview.net/pdf/ec522f54f9fae11a7e5da3c9da4418013805b5c3.pdf |
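For context on the quantity the abstract above says can be computed without Monte Carlo approximation, the soft value function in MaxEnt RL is the log-partition of the soft Q-function, and the corresponding optimal policy is its Boltzmann distribution (standard definitions with temperature $\alpha$):

$$V_{\mathrm{soft}}(s) = \alpha \log \int_{\mathcal{A}} \exp\!\left(\frac{Q_{\mathrm{soft}}(s, a)}{\alpha}\right) \mathrm{d}a, \qquad \pi^{*}(a \mid s) = \exp\!\left(\frac{Q_{\mathrm{soft}}(s, a) - V_{\mathrm{soft}}(s)}{\alpha}\right).$$

The integral over actions is the term that, per the abstract, the EBFlow parameterization evaluates without Monte Carlo approximation.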
HumanVLA: Towards Vision-Language Directed Object Rearrangement by Physical Humanoid | https://openreview.net/forum?id=pjD08dtAh0 | https://openreview.net/forum?id=pjD08dtAh0 | Xinyu Xu,Yizheng Zhang,Yong-Lu Li,Lei Han,Cewu Lu | NIPS 2024,Poster | Physical Human-Scene Interaction (HSI) plays a crucial role in numerous applications.
However, existing HSI techniques are limited to specific object dynamics and privileged information, which prevents the development of more comprehensive applications.
To address this limitation, we introduce HumanVLA for general object rearrangement directed by practical vision and language.
A teacher-student framework is utilized to develop HumanVLA.
A state-based teacher policy is trained first using goal-conditioned reinforcement learning and an adversarial motion prior.
Then, it is distilled into a vision-language-action model via behavior cloning.
We propose several key insights to facilitate the large-scale learning process.
To support general object rearrangement by physical humanoid, we introduce a novel Human-in-the-Room dataset encompassing various rearrangement tasks.
Through extensive experiments and analysis, we demonstrate the effectiveness of our approach. | https://openreview.net/pdf/ad32b2e9331156429744c3a06a443f8e2b0be44a.pdf |
Speaking Your Language: Spatial Relationships in Interpretable Emergent Communication | https://openreview.net/forum?id=vIP8IWmZlN | https://openreview.net/forum?id=vIP8IWmZlN | Olaf Lipinski,Adam Sobey,Federico Cerutti,Timothy J. Norman | NIPS 2024,Poster | Effective communication requires the ability to refer to specific parts of an observation in relation to others. While emergent communication literature shows success in developing various language properties, no research has shown the emergence of such positional references. This paper demonstrates how agents can communicate about spatial relationships within their observations. The results indicate that agents can develop a language capable of expressing the relationships between parts of their observation, achieving over 90% accuracy when trained in a referential game which requires such communication. Using a collocation measure, we demonstrate how the agents create such references. This analysis suggests that agents use a mixture of non-compositional and compositional messages to convey spatial relationships. We also show that the emergent language is interpretable by humans. The translation accuracy is tested by communicating with the receiver agent, where the receiver achieves over 78% accuracy using parts of this lexicon, confirming that the interpretation of the emergent language was successful. | https://openreview.net/pdf/542546bc3b321700b242332d3fe1d91c56e85f07.pdf |
Samba: Severity-aware Recurrent Modeling for Cross-domain Medical Image Grading | https://openreview.net/forum?id=aIeXn5103e | https://openreview.net/forum?id=aIeXn5103e | Qi Bi,Jingjun Yi,Hao Zheng,Wei Ji,Haolan Zhan,Yawen Huang,Yuexiang Li,Yefeng Zheng | NIPS 2024,Poster | Disease grading is a crucial task in medical image analysis. Due to the continuous progression of diseases, i.e., the variability within the same level and the similarity between adjacent stages, accurate grading is highly challenging.
Furthermore, in real-world scenarios, models trained on limited source domain datasets should also be capable of handling data from unseen target domains.
Due to the cross-domain variants, the feature distribution between source and unseen target domains can be dramatically different, leading to a substantial decrease in model performance.
To address these challenges in cross-domain disease grading, we propose a Severity-aware Recurrent Modeling (Samba) method in this paper.
As the core objective of most staging tasks is to identify the most severe lesions, which may only occupy a small portion of the image, we propose to encode image patches in a sequential and recurrent manner.
Specifically, a state space model is tailored to store and transport the severity information by hidden states.
Moreover, to mitigate the impact of cross-domain variants, an Expectation-Maximization (EM) based state recalibration mechanism is designed to map the patch embeddings into a more compact space.
We model the feature distributions of different lesions through the Gaussian Mixture Model (GMM) and reconstruct the intermediate features based on learnable severity bases.
Extensive experiments show the proposed Samba outperforms the VMamba baseline by an average accuracy of 23.5\%, 5.6\% and 4.1\% on the cross-domain grading of fatigue fracture, breast cancer and diabetic retinopathy, respectively.
Source code is available at https://github.com/BiQiWHU/Samba. | https://openreview.net/pdf/5e3f12184937ecf5056c6b042db3ed337bfcceeb.pdf
QT-ViT: Improving Linear Attention in ViT with Quadratic Taylor Expansion | https://openreview.net/forum?id=V2e0A2XIPF | https://openreview.net/forum?id=V2e0A2XIPF | Yixing Xu,Chao Li,Dong Li,Xiao Sheng,Fan Jiang,Lu Tian,Emad Barsoum | NIPS 2024,Poster | Vision transformer model (ViT) is widely used and performs well in vision tasks due to its ability to capture long-range dependencies. However, the time complexity and memory consumption increase quadratically with the number of input patches which limits the usage of ViT in real-world applications. Previous methods have employed linear attention to mitigate the complexity of the original self-attention mechanism at the expense of effectiveness. In this paper, we propose QT-ViT models that improve the previous linear self-attention using quadratic Taylor expansion. Specifically, we substitute the softmax-based attention with second-order Taylor expansion, and then accelerate the quadratic expansion by reducing the time complexity with a fast approximation algorithm. The proposed method capitalizes on the property of quadratic expansion to achieve superior performance while employing linear approximation for fast inference. Compared to previous studies of linear attention, our approach does not necessitate knowledge distillation or high-order attention residuals to facilitate the training process. Extensive experiments demonstrate the efficiency and effectiveness of the proposed QT-ViTs, showcasing the state-of-the-art results. Particularly, the proposed QT-ViTs consistently surpass the previous SOTA EfficientViTs under different model sizes, and achieve a new Pareto-front in terms of accuracy and speed. | https://openreview.net/pdf/4ac991a45662655a31b18c39d382096b531809a4.pdf |
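The substitution described in the QT-ViT abstract above replaces the softmax kernel with its second-order Taylor expansion, whose quadratic term factorizes into query and key feature maps (schematically, omitting scaling and the paper's fast approximation):

$$\exp\!\left(q^{\top} k\right) \approx 1 + q^{\top} k + \tfrac{1}{2}\left(q^{\top} k\right)^{2} = 1 + q^{\top} k + \tfrac{1}{2}\,\phi(q)^{\top}\phi(k), \qquad \phi(x) = \mathrm{vec}\!\left(x\,x^{\top}\right),$$

so key-value statistics can be accumulated once and shared across all queries, avoiding the quadratic cost in the number of tokens.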
PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation | https://openreview.net/forum?id=zw2K6LfFI9 | https://openreview.net/forum?id=zw2K6LfFI9 | Fei Ni,Jianye HAO,Shiguang Wu,Longxin Kou,Yifu Yuan,Zibin Dong,Jinyi Liu,MingZhi Li,Yuzheng Zhuang,YAN ZHENG | NIPS 2024,Poster | Long-horizon manipulation tasks with general instructions often implicitly encapsulate multiple sub-tasks, posing significant challenges in instruction following.
While language planning is a common approach to decompose general instructions into stepwise sub-instructions, text-only guidance may lack expressiveness and lead to potential ambiguity. Considering that humans often imagine and visualize sub-instructions by reasoning them out before acting, the imagined subgoal images can provide more intuitive guidance and enhance the reliability of decomposition. Inspired by this, we propose **PERIA** (**PE**rceive, **R**eason, **I**magine, **A**ct), a novel framework that integrates holistic language planning and vision planning for long-horizon manipulation tasks with complex instructions, leveraging both logical and intuitive aspects of task decomposition.
Specifically, we first perform a lightweight multimodal alignment on the encoding side to empower the MLLM to perceive visual details and language instructions.
The MLLM is then jointly instruction-tuned with a pretrained image-editing model to unlock the capability to simultaneously reason over language instructions and generate imagined subgoals. Furthermore, we introduce a consistency alignment loss to encourage coherent subgoal images that align with their corresponding instructions, mitigating potential hallucinations and semantic conflicts between the two planning manners.
Comprehensive evaluations across three task domains demonstrate that PERIA, benefiting from holistic language and vision planning, significantly outperforms competitive baselines in both instruction following accuracy and task success rate on complex manipulation tasks. | https://openreview.net/pdf/2a39fcbdd8617cd0a7fbe9312a20b9b51ea8ab74.pdf |
ContextGS : Compact 3D Gaussian Splatting with Anchor Level Context Model | https://openreview.net/forum?id=W2qGSMl2Uu | https://openreview.net/forum?id=W2qGSMl2Uu | Yufei Wang,Zhihao Li,Lanqing Guo,Wenhan Yang,Alex Kot,Bihan Wen | NIPS 2024,Poster | Recently, 3D Gaussian Splatting (3DGS) has become a promising framework for novel view synthesis, offering fast rendering speeds and high fidelity. However, the large number of Gaussians and their associated attributes require effective compression techniques.
Existing methods primarily compress neural Gaussians individually and independently, i.e., coding all the neural Gaussians at the same time, with little design for their interactions and spatial dependence. Inspired by the effectiveness of the context model in image compression, we propose the first autoregressive model at the anchor level for 3DGS compression in this work. We divide anchors into different levels and the anchors that are not coded yet can be predicted based on the already coded ones in all the coarser levels, leading to more accurate modeling and higher coding efficiency. To further improve the efficiency of entropy coding, e.g., to code the coarsest level with no already coded anchors, we propose to introduce a low-dimensional quantized feature as the hyperprior for each anchor, which can be effectively compressed. Our work pioneers the context model in the anchor level for 3DGS representation, yielding an impressive size reduction of over 100 times compared to vanilla 3DGS and 15 times compared to the most recent state-of-the-art work Scaffold-GS, while achieving comparable or even higher rendering quality. | https://openreview.net/pdf/ced7ef805dc0ec18637a322fa1740e31f2d4f203.pdf |
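The coding order described above amounts to an autoregressive factorization over anchor levels, with a quantized hyperprior covering the coarsest level that has no decoded context yet (written schematically; the notation is ours, not the paper's):

$$p\!\left(\mathcal{A}^{(1)}, \ldots, \mathcal{A}^{(L)}\right) = p\!\left(\mathcal{A}^{(1)} \mid \mathbf{h}\right) \prod_{\ell=2}^{L} p\!\left(\mathcal{A}^{(\ell)} \mid \mathcal{A}^{(1)}, \ldots, \mathcal{A}^{(\ell-1)}\right),$$

where $\mathcal{A}^{(\ell)}$ denotes the anchors at level $\ell$ (coarsest first) and $\mathbf{h}$ is the low-dimensional quantized hyperprior.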
Meta-DT: Offline Meta-RL as Conditional Sequence Modeling with World Model Disentanglement | https://openreview.net/forum?id=U9MzoDOKZu | https://openreview.net/forum?id=U9MzoDOKZu | Zhi Wang,Li Zhang,Wenhao Wu,Yuanheng Zhu,Dongbin Zhao,Chunlin Chen | NIPS 2024,Poster | A longstanding goal of artificial general intelligence is highly capable generalists that can learn from diverse experiences and generalize to unseen tasks. The language and vision communities have seen remarkable progress toward this trend by scaling up transformer-based models trained on massive datasets, while reinforcement learning (RL) agents still suffer from poor generalization capacity under such paradigms. To tackle this challenge, we propose Meta Decision Transformer (Meta-DT), which leverages the sequential modeling ability of the transformer architecture and robust task representation learning via world model disentanglement to achieve efficient generalization in offline meta-RL. We pretrain a context-aware world model to learn a compact task representation, and inject it as a contextual condition to the causal transformer to guide task-oriented sequence generation. Then, we subtly utilize history trajectories generated by the meta-policy as a self-guided prompt to exploit the architectural inductive bias. We select the trajectory segment that yields the largest prediction error on the pretrained world model to construct the prompt, aiming to encode task-specific information complementary to the world model maximally. Notably, the proposed framework eliminates the requirement of any expert demonstration or domain knowledge at test time. Experimental results on MuJoCo and Meta-World benchmarks across various dataset types show that Meta-DT exhibits superior few and zero-shot generalization capacity compared to strong baselines while being more practical with fewer prerequisites. Our code is available at https://github.com/NJU-RL/Meta-DT. | https://openreview.net/pdf/cc9d77765bb93053c4d1798a94ec9c8a23501afd.pdf |
AutoPSV: Automated Process-Supervised Verifier | https://openreview.net/forum?id=eOAPWWOGs9 | https://openreview.net/forum?id=eOAPWWOGs9 | Jianqiao Lu,Zhiyang Dou,Hongru WANG,Zeyu Cao,Jianbo Dai,Yunlong Feng,Zhijiang Guo | NIPS 2024,Poster | In this work, we propose a novel method named **Auto**mated **P**rocess-**S**upervised **V**erifier (**AutoPSV**) to enhance the reasoning capabilities of large language models (LLMs) by automatically annotating the reasoning steps.
AutoPSV begins by training a verification model on the correctness of final answers, enabling it to generate automatic process annotations.
This verification model assigns a confidence score to each reasoning step, indicating the probability of arriving at the correct final answer from that point onward.
We detect relative changes in the verification's confidence scores across reasoning steps to automatically annotate the reasoning process, enabling error detection even in scenarios where ground truth answers are unavailable.
This alleviates the need for numerous manual annotations or the high computational costs associated with model-induced annotation approaches.
We experimentally validate that the step-level confidence changes learned by the verification model trained on the final answer correctness can effectively identify errors in the reasoning steps.
We demonstrate that the verification model, when trained on process annotations generated by AutoPSV, exhibits improved performance in selecting correct answers from multiple LLM-generated outputs.
Notably, we achieve substantial improvements across five datasets in mathematics and commonsense reasoning. The source code of AutoPSV is available at https://github.com/rookie-joe/AutoPSV. | https://openreview.net/pdf/4988de950b8e4b0dbf9ad310f40f2c8b4d3c11e2.pdf
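A minimal sketch of the confidence-change rule implied by the description above: mark the first reasoning step at which the verifier's step-wise confidence drops sharply, along with everything after it. The threshold and exact rule are assumptions, not the released configuration:

```python
def annotate_steps(confidences: list[float], drop_threshold: float = 0.2) -> list[int]:
    """Given the verifier's confidence after each reasoning step (probability of still
    reaching a correct final answer), label steps 1 (keep) or 0 (likely erroneous)
    based on the drop from the previous step."""
    labels, erroneous = [], False
    for prev, curr in zip([confidences[0]] + confidences[:-1], confidences):
        if not erroneous and prev - curr > drop_threshold:
            erroneous = True           # first large confidence drop marks the faulty step
        labels.append(0 if erroneous else 1)
    return labels

print(annotate_steps([0.82, 0.80, 0.45, 0.40]))  # -> [1, 1, 0, 0]
```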
ReVideo: Remake a Video with Motion and Content Control | https://openreview.net/forum?id=xUjBZR6b1T | https://openreview.net/forum?id=xUjBZR6b1T | Chong Mou,Mingdeng Cao,Xintao Wang,Zhaoyang Zhang,Ying Shan,Jian Zhang | NIPS 2024,Poster | Despite significant advancements in video generation and editing using diffusion models, achieving accurate and localized video editing remains a substantial challenge. Additionally, most existing video editing methods primarily focus on altering visual content, with limited research dedicated to motion editing. In this paper, we present a novel attempt to Remake a Video (ReVideo) which stands out from existing methods by allowing precise video editing in specific areas through the specification of both content and motion. Content editing is facilitated by modifying the first frame, while the trajectory-based motion control offers an intuitive user interaction experience. ReVideo addresses a new task involving the coupling and training imbalance between content and motion control. To tackle this, we develop a three-stage training strategy that progressively decouples these two aspects from coarse to fine. Furthermore, we propose a spatiotemporal adaptive fusion module to integrate content and motion control across various sampling steps and spatial locations. Extensive experiments demonstrate that our ReVideo has promising performance on several accurate video editing applications, i.e., (1) locally changing video content while keeping the motion constant, (2) keeping content unchanged and customizing new motion trajectories, (3) modifying both content and motion trajectories. Our method can also seamlessly extend these applications to multi-area editing without specific training, demonstrating its flexibility and robustness. | https://openreview.net/pdf/bb0cf0788a982c6b491da99b791d82fa60d2e219.pdf |
Transferable Adversarial Attacks on SAM and Its Downstream Models | https://openreview.net/forum?id=yDjojeIWO9 | https://openreview.net/forum?id=yDjojeIWO9 | Song Xia,Wenhan Yang,Yi Yu,Xun Lin,Henghui Ding,LINGYU DUAN,Xudong Jiang | NIPS 2024,Poster | The utilization of large foundational models has a dilemma: while fine-tuning downstream tasks from them holds promise for making use of the well-generalized knowledge in practical applications, their open accessibility also poses threats of adverse usage.
This paper, for the first time, explores the feasibility of adversarial attacking various downstream models fine-tuned from the segment anything model (SAM), by solely utilizing the information from the open-sourced SAM.
In contrast to prevailing transfer-based adversarial attacks, we demonstrate the existence of adversarial dangers even without accessing the downstream task and dataset to train a similar surrogate model.
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm to extract the intrinsic vulnerability inherent in the foundation model, which is then utilized as the prior knowledge to guide the generation of adversarial perturbations.
Moreover, by formulating the gradient difference in the attacking process between the open-sourced SAM and its fine-tuned downstream models, we theoretically demonstrate that a deviation occurs in the adversarial update direction by directly maximizing the distance of encoded feature embeddings in the open-sourced SAM.
Consequently, we propose a gradient robust loss that simulates the associated uncertainty with gradient-based noise augmentation to enhance the robustness of generated adversarial examples (AEs) towards this deviation, thus improving the transferability.
Extensive experiments demonstrate the effectiveness of the proposed universal meta-initialized and gradient robust adversarial attack (UMI-GRAT) toward SAMs and their downstream models.
Code is available at https://github.com/xiasong0501/GRAT. | https://openreview.net/pdf/57573eaf34a55e1f4cc6ab0db0b428f8ee35133a.pdf |
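The deviation discussed in the abstract above concerns the naive strategy of directly maximizing the distance between clean and adversarial feature embeddings of the open-sourced encoder; that baseline feature-space attack, which the proposed UMI and gradient robust loss are designed to improve upon, is schematically a standard L_inf PGD loop — the encoder call and hyperparameters below are placeholders:

```python
import torch

def feature_space_pgd(encoder, images, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft adversarial examples by maximizing the distance between encoder features
    of clean and perturbed images (an L_inf PGD attack against a frozen encoder)."""
    clean_feat = encoder(images).detach()
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(encoder(adv), clean_feat)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                    # ascend: push features apart
            adv = images + (adv - images).clamp(-eps, eps)     # project back into the eps-ball
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```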
DRACO: A Denoising-Reconstruction Autoencoder for Cryo-EM | https://openreview.net/forum?id=u1mNGLYN74 | https://openreview.net/forum?id=u1mNGLYN74 | YingJun Shen,Haizhao Dai,Qihe Chen,Yan Zeng,Jiakai Zhang,Yuan Pei,Jingyi Yu | NIPS 2024,Poster | Foundation models in computer vision have demonstrated exceptional performance in zero-shot and few-shot tasks by extracting multi-purpose features from large-scale datasets through self-supervised pre-training methods. However, these models often overlook the severe corruption in cryogenic electron microscopy (cryo-EM) images by high-level noises. We introduce DRACO, a Denoising-Reconstruction Autoencoder for CryO-EM, inspired by the Noise2Noise (N2N) approach. By processing cryo-EM movies into odd and even images and treating them as independent noisy observations, we apply a denoising-reconstruction hybrid training scheme. We mask both images to create denoising and reconstruction tasks. For DRACO's pre-training, the quality of the dataset is essential, we hence build a high-quality, diverse dataset from an uncurated public database, including over 270,000 movies or micrographs. After pre-training, DRACO naturally serves as a generalizable cryo-EM image denoiser and a foundation model for various cryo-EM downstream tasks. DRACO demonstrates the best performance in denoising, micrograph curation, and particle picking tasks compared to state-of-the-art baselines. | https://openreview.net/pdf/40929f3ca7d47d1d178027cd1953fdb7039fb933.pdf |
Automated Multi-level Preference for MLLMs | https://openreview.net/forum?id=woENr7FJaI | https://openreview.net/forum?id=woENr7FJaI | Mengxi Zhang,Wenhao Wu,Yu Lu,YuXin Song,KANG RONG,Huanjin Yao,Jianbo Zhao,Fanglong Liu,Haocheng Feng,Jingdong Wang,Yifan Sun | NIPS 2024,Poster | Current multimodal Large Language Models (MLLMs) suffer from ''hallucination'', occasionally generating responses that are not grounded in the input images. To tackle this challenge, one promising path is to utilize reinforcement learning from human feedback (RLHF), which steers MLLMs towards learning superior responses while avoiding inferior ones. We rethink the common practice of using binary preferences (*i.e.*, superior, inferior), and find that adopting multi-level preferences (*e.g.*, superior, medium, inferior) is better for two benefits: 1) It narrows the gap between adjacent levels, thereby encouraging MLLMs to discern subtle differences. 2) It further integrates cross-level comparisons (beyond adjacent-level comparisons), thus providing a broader range of comparisons with hallucination examples. To verify our viewpoint, we present the Automated Multi-level Preference (**AMP**) framework for MLLMs. To facilitate this framework, we first develop an automated dataset generation pipeline that provides high-quality multi-level preference datasets without any human annotators. Furthermore, we design the Multi-level Direct Preference Optimization (MDPO) algorithm to robustly conduct complex multi-level preference learning. Additionally, we propose a new hallucination benchmark, MRHal-Bench. Extensive experiments across public hallucination and general benchmarks, as well as our MRHal-Bench, demonstrate the effectiveness of our proposed method. Code is available at https://github.com/takomc/amp. | https://openreview.net/pdf/a5533caccb0d2513850f2e35a5cf67613481d4b0.pdf |
Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy | https://openreview.net/forum?id=cS63YtJ49A | https://openreview.net/forum?id=cS63YtJ49A | Hancheng Ye,Jiakang Yuan,Renqiu Xia,Xiangchao Yan,Tao Chen,Junchi Yan,Botian Shi,Bo Zhang | NIPS 2024,Poster | Diffusion models have recently achieved great success in the synthesis of high-quality images and videos. However, the existing denoising techniques in diffusion models are commonly based on step-by-step noise predictions, which suffer from high computational cost, resulting in prohibitive latency for interactive applications. In this paper, we propose AdaptiveDiffusion to relieve this bottleneck by adaptively reducing the noise prediction steps during the denoising process. Our method considers the potential of skipping as many noise prediction steps as possible while keeping the final denoised results identical to the original full-step ones. Specifically, the skipping strategy is guided by the third-order latent difference that indicates the stability between timesteps during the denoising process, which benefits the reuse of previous noise prediction results. Extensive experiments on image and video diffusion models demonstrate that our method can significantly speed up the denoising process while generating identical results to the original process, achieving up to an average 2-5x speedup without quality degradation. The code is available at https://github.com/UniModal4Reasoning/AdaptiveDiffusion | https://openreview.net/pdf/9e7a5fae2084f95deda1fd4826e01e33d3efb7df.pdf
DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans | https://openreview.net/forum?id=792txRlKit | https://openreview.net/forum?id=792txRlKit | Yuan Gan,Jiaxu Miao,Yi Yang | NIPS 2024,Poster | Federated Learning (FL) is commonly used to collaboratively train models with privacy preservation. In this paper, we find that popular diffusion models introduce a new vulnerability to FL, which brings serious privacy threats. Despite stringent data management measures, attackers can steal massive amounts of private data from local clients through multiple Trojans, which control generative behaviors with multiple triggers. We refer to this new task as ${\bf\textit{DataStealing}}$ and demonstrate that an attacker can achieve this goal based on our proposed Combinatorial Triggers (ComboTs) in a vanilla FL system. However, advanced distance-based FL defenses are still effective in filtering out malicious updates according to the distances between local updates. Hence, we propose an Adaptive Scale Critical Parameters (AdaSCP) attack to circumvent the defenses and seamlessly incorporate malicious updates into the global model. Specifically, AdaSCP evaluates the importance of parameters with the gradients in dominant timesteps of the diffusion model. Subsequently, it adaptively seeks the optimal scale factor and magnifies critical parameter updates before uploading them to the server. As a result, the malicious update becomes similar to the benign update, making it difficult for distance-based defenses to identify. Extensive experiments reveal the risk of leaking thousands of images in training diffusion models with FL. Moreover, these experiments demonstrate the effectiveness of AdaSCP in defeating advanced distance-based defenses. We hope this work will attract more attention from the FL community to the critical privacy security issues of Diffusion Models. Code: https://github.com/yuangan/DataStealing. | https://openreview.net/pdf/1204a82fd0e49368bee4c870ffd0aad8b7d17554.pdf
Expert-level protocol translation for self-driving labs | https://openreview.net/forum?id=qXidsICaja | https://openreview.net/forum?id=qXidsICaja | Yu-Zhe Shi,Fanxu Meng,Haofei Hou,Zhangqian Bi,Qiao Xu,Lecheng Ruan,Qining Wang | NIPS 2024,Poster | Recent developments in Artificial Intelligence (AI) models have propelled their application in scientific discovery, but the validation and exploration of these discoveries require subsequent empirical experimentation. The concept of self-driving laboratories promises to automate and thus boost the experimental process following AI-driven discoveries. However, the transition of experimental protocols, originally crafted for human comprehension, into formats interpretable by machines presents significant challenges, which, within the context of a specific expert domain, encompass the necessity for structured as opposed to natural language, the imperative for explicit rather than tacit knowledge, and the preservation of causality and consistency throughout protocol steps. Presently, the task of protocol translation predominantly requires the manual and labor-intensive involvement of domain experts and information technology specialists, rendering the process time-intensive. To address these issues, we propose a framework that automates the protocol translation process through a three-stage workflow, which incrementally constructs Protocol Dependence Graphs (PDGs) that become structured at the syntax level, completed at the semantics level, and linked at the execution level. Quantitative and qualitative evaluations demonstrate its performance on par with that of human experts, underscoring its potential to significantly expedite and democratize the process of scientific discovery by elevating the automation capabilities within self-driving laboratories. | https://openreview.net/pdf/32c1b38c663bb9937862966e4eb3988a23de5cf9.pdf
Continual Learning in the Frequency Domain | https://openreview.net/forum?id=XgAzCLsJAq | https://openreview.net/forum?id=XgAzCLsJAq | RuiQi Liu,Boyu Diao,Libo Huang,Zijia An,Zhulin An,Yongjun Xu | NIPS 2024,Poster | Continual learning (CL) is designed to learn new tasks while preserving existing knowledge. Replaying samples from earlier tasks has proven to be an effective method to mitigate the forgetting of previously acquired knowledge. However, the current research on the training efficiency of rehearsal-based methods is insufficient, which limits the practical application of CL systems in resource-limited scenarios. The human visual system (HVS) exhibits varying sensitivities to different frequency components, enabling the efficient elimination of visually redundant information. Inspired by HVS, we propose a novel framework called Continual Learning in the Frequency Domain (CLFD). To our knowledge, this is the first study to utilize frequency domain features to enhance the performance and efficiency of CL training on edge devices. For the input features of the feature extractor, CLFD employs wavelet transform to map the original input image into the frequency domain, thereby effectively reducing the size of input feature maps. Regarding the output features of the feature extractor, CLFD selectively utilizes output features for distinct classes for classification, thereby balancing the reusability and interference of output features based on the frequency domain similarity of the classes across various tasks. Optimizing only the input and output features of the feature extractor allows for seamless integration of CLFD with various rehearsal-based methods. Extensive experiments conducted in both cloud and edge environments demonstrate that CLFD consistently improves the performance of state-of-the-art (SOTA) methods in both precision and training efficiency. Specifically, CLFD can increase the accuracy of the SOTA CL method by up to 6.83% and reduce the training time by 2.6×. | https://openreview.net/pdf/f3f2ce5384cdb499ef0aebc4960c16bb56ecacc7.pdf |
Reconstruction of Manipulated Garment with Guided Deformation Prior | https://openreview.net/forum?id=a2ccaXTb4I | https://openreview.net/forum?id=a2ccaXTb4I | Ren Li,Corentin Dumery,Zhantao Deng,Pascal Fua | NIPS 2024,Poster | Modeling the shape of garments has received much attention, but most existing approaches assume the garments to be worn by someone, which constrains the range of shapes they can assume. In this work, we address shape recovery when garments are being manipulated instead of worn, which gives rise to an even larger range of possible shapes. To this end, we leverage the implicit sewing patterns (ISP) model for garment modeling and extend it by adding a diffusion-based deformation prior to represent these shapes. To recover 3D garment shapes from incomplete 3D point clouds acquired when the garment is folded, we map the points to UV space, in which our priors are learned, to produce partial UV maps, and then fit the priors to recover complete UV maps and 2D to 3D mappings. Experimental results demonstrate the superior reconstruction accuracy of our method compared to previous ones, especially when dealing with large non-rigid deformations arising from the manipulations. | https://openreview.net/pdf/465505c3b984943682d03d8d99c6f111462fb4db.pdf |
ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field | https://openreview.net/forum?id=K5PA3SK2jB | https://openreview.net/forum?id=K5PA3SK2jB | Kiyohiro Nakayama,Mikaela Angelina Uy,Yang You,Ke Li,Leonidas Guibas | NIPS 2024,Poster | Neural radiance fields (NeRFs) have gained popularity with multiple works showing promising results across various applications. However, to the best of our knowledge, existing works do not explicitly model the distribution of training camera poses, or consequently the triangulation quality, a key factor affecting reconstruction quality dating back to classical vision literature. We close this gap with ProvNeRF, an approach that models the provenance for each point -- i.e., the locations where it is likely visible -- of NeRFs as a stochastic field. We achieve this by extending implicit maximum likelihood estimation (IMLE) to functional space with an optimizable objective. We show that modeling per-point provenance during the NeRF optimization enriches the model with information on triangulation leading to improvements in novel view synthesis and uncertainty estimation under the challenging sparse, unconstrained view setting against competitive baselines. The code will be available at https://github.com/georgeNakayama/ProvNeRF. | https://openreview.net/pdf/765ea306973ddd95d38b0db59d63489e1dc6f01b.pdf |
ANT: Adaptive Noise Schedule for Time Series Diffusion Models | https://openreview.net/forum?id=1ojAkTylz4 | https://openreview.net/forum?id=1ojAkTylz4 | Seunghan Lee,Kibok Lee,Taeyoung Park | NIPS 2024,Poster | Advances in diffusion models for generative artificial intelligence have recently propagated to the time series (TS) domain, demonstrating state-of-the-art performance on various tasks. However, prior works on TS diffusion models often borrow the framework of existing works proposed in other domains without considering the characteristics of TS data, leading to suboptimal performance. In this work, we
propose Adaptive Noise schedule for Time series diffusion models (ANT), which automatically predetermines proper noise schedules for given TS datasets based on their statistics representing non-stationarity. Our intuition is that an optimal noise schedule should satisfy the following desiderata: 1) It linearly reduces the non-stationarity of TS data so that all diffusion steps are equally meaningful, 2) the data is corrupted to the random noise at the final step, and 3) the number of steps is sufficiently large. The proposed method is practical for use in that it eliminates the necessity of finding the optimal noise schedule with a small additional cost to compute the statistics for given datasets, which can be done offline before training. We validate the effectiveness of our method across various tasks, including TS forecasting, refinement, and generation, on datasets from diverse domains. Code is available at this repository: https://github.com/seunghan96/ANT. | https://openreview.net/pdf/405d10773e550a76a57da34bdac831967b36cacf.pdf |
TPR: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning | https://openreview.net/forum?id=zkfCa4oESF | https://openreview.net/forum?id=zkfCa4oESF | Hui Chen,Yanbin Liu,Yongqiang Ma,Nanning Zheng,Xin Yu | NIPS 2024,Poster | Pre-trained vision-language models (VLMs) such as CLIP have shown excellent performance for zero-shot classification. Based on CLIP, recent methods design various learnable prompts to evaluate the zero-shot generalization capability on a base-to-novel setting. This setting assumes test samples are already divided into either base or novel classes, limiting its application to realistic scenarios. In this paper, we focus on a more challenging and practical setting: generalized zero-shot learning (GZSL), i.e., testing with no information about the base/novel division. To address this challenging zero-shot problem, we introduce two unique designs that enable us to classify an image without the need of knowing whether it comes from seen or unseen classes. Firstly, most existing methods only adopt a single latent space to align visual and linguistic features, which has a limited ability to represent complex visual-linguistic patterns, especially for fine-grained tasks. Instead, we propose a dual-space feature alignment module that effectively augments the latent space with a novel attribute space induced by a well-devised attribute reservoir. In particular, the attribute reservoir consists of a static vocabulary and learnable tokens complementing each other for flexible control over feature granularity. Secondly, finetuning CLIP models (e.g., prompt learning) on seen base classes usually sacrifices the model's original generalization capability on unseen novel classes. To mitigate this issue, we present a new topology-preserving objective that can enforce feature topology structures of the combined base and novel classes to resemble the topology of CLIP. In this manner, our model will inherit the generalization ability of CLIP through maintaining the pairwise class angles in the attribute space. Extensive experiments on twelve object recognition datasets demonstrate that our model, termed Topology-Preserving Reservoir (TPR), outperforms strong baselines including both prompt learning and conventional generative-based zero-shot methods. | https://openreview.net/pdf/e9ab97ad78449ecd4bb7169860020d90e331f252.pdf |
Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication | https://openreview.net/forum?id=5x69CL2w3F | https://openreview.net/forum?id=5x69CL2w3F | Yunuo Chen,Tianyi Xie,Zeshun Zong,Xuan Li,Feng Gao,Yin Yang,Ying Nian Wu,Chenfanfu Jiang | NIPS 2024,Poster | Existing diffusion-based text-to-3D generation methods primarily focus on producing visually realistic shapes and appearances, often neglecting the physical constraints necessary for downstream tasks. Generated models frequently fail to maintain balance when placed in physics-based simulations or 3D printed. This balance is crucial for satisfying user design intentions in interactive gaming, embodied AI, and robotics, where stable models are needed for reliable interaction. Additionally, stable models ensure that 3D-printed objects, such as figurines for home decoration, can stand on their own without requiring additional supports. To fill this gap, we introduce Atlas3D, an automatic and easy-to-implement method that enhances existing Score Distillation Sampling (SDS)-based text-to-3D tools. Atlas3D ensures the generation of self-supporting 3D models that adhere to physical laws of stability under gravity, contact, and friction. Our approach combines a novel differentiable simulation-based loss function with physically inspired regularization, serving as either a refinement or a post-processing module for existing frameworks. We verify Atlas3D's efficacy through extensive generation tasks and validate the resulting 3D models in both simulated and real-world environments. | https://openreview.net/pdf/73f39a34aec6d1a695fa2762599bb14ea814a8e3.pdf |
Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering | https://openreview.net/forum?id=twpPD9UMUN | https://openreview.net/forum?id=twpPD9UMUN | Jie Ma,Min Hu,Pinghui Wang,Wangchun Sun,Lingyun Song,Hongbin Pei,Jun Liu,Youtian Du | NIPS 2024,Poster | Audio-Visual Question Answering (AVQA) is a complex multi-modal reasoning task, demanding intelligent systems to accurately respond to natural language queries based on audio-video input pairs. Nevertheless, prevalent AVQA approaches are prone to overlearning dataset biases, resulting in poor robustness. Furthermore, current datasets may not provide a precise diagnostic for these methods. To tackle these challenges, firstly, we propose a novel dataset, *MUSIC-AVQA-R*, crafted in two steps: rephrasing questions within the test split of a public dataset (*MUSIC-AVQA*) and subsequently introducing distribution shifts to split questions. The former leads to a large, diverse test space, while the latter results in a comprehensive robustness evaluation on rare, frequent, and overall questions. Secondly, we propose a robust architecture that utilizes a multifaceted cycle collaborative debiasing strategy to overcome bias learning. Experimental results show that this architecture achieves state-of-the-art performance on MUSIC-AVQA-R, notably obtaining a significant improvement of 9.32\%. Extensive ablation experiments are conducted on the two datasets mentioned to analyze the component effectiveness within the debiasing strategy. Additionally, we highlight the limited robustness of existing multi-modal QA methods through the evaluation on our dataset. We also conduct experiments combining various baselines with our proposed strategy on two datasets to verify its plug-and-play capability. Our dataset and code are available at <https://github.com/reml-group/MUSIC-AVQA-R>. | https://openreview.net/pdf/151a0f8e93ae1520f0c0a25929cbc80eab40a4fb.pdf |
Omnigrasp: Grasping Diverse Objects with Simulated Humanoids | https://openreview.net/forum?id=Glt37xoU7e | https://openreview.net/forum?id=Glt37xoU7e | Zhengyi Luo,Jinkun Cao,Sammy Christen,Alexander Winkler,Kris M. Kitani,Weipeng Xu | NIPS 2024,Poster | We present a method for controlling a simulated humanoid to grasp an object and move it to follow an object's trajectory. Due to the challenges in controlling a humanoid with dexterous hands, prior methods often use a disembodied hand and only consider vertical lifts or short trajectories. This limited scope hampers their applicability for object manipulation required for animation and simulation. To close this gap, we learn a controller that can pick up a large number (>1200) of objects and carry them to follow randomly generated trajectories. Our key insight is to leverage a humanoid motion representation that provides human-like motor skills and significantly speeds up training. Using only simplistic reward, state, and object representations, our method shows favorable scalability on diverse objects and trajectories. For training, we do not need a dataset of paired full-body motion and object trajectories. At test time, we only require the object mesh and desired trajectories for grasping and transporting. To demonstrate the capabilities of our method, we show state-of-the-art success rates in following object trajectories and generalizing to unseen objects. Code and models will be released. | https://openreview.net/pdf/08974bf790f4c1e69d0de5e867da7b9a5b9e0e44.pdf |
Adjust Pearson's $r$ to Measure Arbitrary Monotone Dependence | https://openreview.net/forum?id=8Dkz60yGfj | https://openreview.net/forum?id=8Dkz60yGfj | Xinbo Ai | NIPS 2024,Poster | Pearson's $r$, the most widely-used correlation coefficient, is traditionally regarded as exclusively capturing linear dependence, leading to its discouragement in contexts involving nonlinear relationships. However, recent research challenges this notion, suggesting that Pearson's $r$ should not be ruled out a priori for measuring nonlinear monotone relationships. Pearson's $r$ is essentially a scaled covariance, rooted in the renowned Cauchy-Schwarz Inequality. Our findings reveal that different scaling bounds yield coefficients with different capture ranges, and interestingly, tighter bounds actually expand these ranges. We derive a tighter inequality than Cauchy-Schwarz Inequality, leverage it to refine Pearson's $r$, and propose a new correlation coefficient, i.e., rearrangement correlation. This coefficient is able to capture arbitrary monotone relationships, both linear and nonlinear ones. It reverts to Pearson's $r$ in linear scenarios. Simulation experiments and real-life investigations show that the rearrangement correlation is more accurate in measuring nonlinear monotone dependence than the three classical correlation coefficients, and other recently proposed dependence measures. | https://openreview.net/pdf/4a0197d98d2298f8fd70ef4d1bc161625f1ad48f.pdf |
FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation | https://openreview.net/forum?id=3MW44iNdrD | https://openreview.net/forum?id=3MW44iNdrD | Christopher T.H Teo,Milad Abdollahzadeh,Xinda Ma,Ngai-man Cheung | NIPS 2024,Poster | Recently, prompt learning has emerged as the state-of-the-art (SOTA) for fair text-to-image (T2I) generation. Specifically, this approach leverages readily available reference images to learn inclusive prompts for each target Sensitive Attribute (tSA), allowing for fair image generation. In this work, we first reveal that this prompt learning-based approach results in degraded sample quality. Our analysis shows that the approach's training objective--which aims to align the embedding differences of learned prompts and reference images-- could be sub-optimal, resulting in distortion of the learned prompts and degraded generated images. To further substantiate this claim, **as our major contribution**, we deep dive into the denoising subnetwork of the T2I model to track down the effect of these learned prompts by analyzing the cross-attention maps. In our analysis, we propose a novel prompt switching analysis: I2H and H2I. Furthermore, we propose new quantitative characterization of cross-attention maps. Our analysis reveals abnormalities in the early denoising steps, perpetuating improper global structure that results in degradation in the generated samples. Building on insights from our analysis, we propose two ideas: (i) *Prompt Queuing* and (ii) *Attention Amplification* to address the quality issue. Extensive experimental results on a wide range of tSAs show that our proposed method outperforms SOTA approach's image generation quality, while achieving competitive fairness. More resources at FairQueue Project site: https://sutd-visual-computing-group.github.io/FairQueue | https://openreview.net/pdf/93674189727317f44879b1d81503695d023ff639.pdf |
MC-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models | https://openreview.net/forum?id=y9sHKrdnRt | https://openreview.net/forum?id=y9sHKrdnRt | Guanghao Zheng,Yuchen Liu,Wenrui Dai,Chenglin Li,Junni Zou,Hongkai Xiong | NIPS 2024,Poster | Diffusion Transformer (DiT) is emerging as a cutting-edge trend in the landscape of generative diffusion models for image generation. Recently, masked-reconstruction strategies have been considered to improve the efficiency and semantic consistency in training DiT but suffer from deficiency in contextual information extraction. In this paper, we provide a new insight to reveal that noisy-to-noisy masked-reconstruction harms sufficient utilization of contextual information. We further demonstrate the insight with theoretical analysis and empirical study on the mutual information between unmasked and masked patches. Guided by such insight, we propose a novel training paradigm named MC-DiT for fully learning contextual information via diffusion denoising at different noise variances with clean-to-clean mask-reconstruction. Moreover, to avoid model collapse, we design two complementary branches of DiT decoders for enhancing the use of noisy patches and mitigating excessive reliance on clean patches in reconstruction. Extensive experimental results on 256$\times$256 and 512$\times$512 image generation on the ImageNet dataset demonstrate that the proposed MC-DiT achieves state-of-the-art performance in unconditional and conditional image generation with enhanced convergence speed. | https://openreview.net/pdf/44798d431529adc7582ec95a03e0b069dec11d02.pdf |
COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing | https://openreview.net/forum?id=474M9aeI4U | https://openreview.net/forum?id=474M9aeI4U | Jiangshan Wang,Yue Ma,Jiayi Guo,Yicheng Xiao,Gao Huang,Xiu Li | NIPS 2024,Poster | Video editing is an emerging task, in which most current methods adopt the pre-trained text-to-image (T2I) diffusion model to edit the source video in a zero-shot manner. Despite extensive efforts, maintaining the temporal consistency of edited videos remains challenging due to the lack of temporal constraints in the regular T2I diffusion model. To address this issue, we propose COrrespondence-guided Video Editing (COVE), leveraging the inherent diffusion feature correspondence to achieve high-quality and consistent video editing. Specifically, we propose an efficient sliding-window-based strategy to calculate the similarity among tokens in the diffusion features of source videos, identifying the tokens with high correspondence across frames. During the inversion and denoising process, we sample the tokens in the noisy latent based on the correspondence and then perform self-attention within them. To reduce GPU memory usage and accelerate the editing process, we further introduce the temporal-dimensional token merging strategy, which can effectively reduce redundancy. COVE can be seamlessly integrated into the pre-trained T2I diffusion model without the need for extra training or optimization. Extensive experimental results demonstrate that COVE achieves state-of-the-art performance in various video editing scenarios, outperforming existing methods both quantitatively and qualitatively. The source code will be released. | https://openreview.net/pdf/834986e167cbde8aff6e34cbfd0cb33bd122d237.pdf
LM-HT SNN: Enhancing the Performance of SNN to ANN Counterpart through Learnable Multi-hierarchical Threshold Model | https://openreview.net/forum?id=IlIDNMvwmX | https://openreview.net/forum?id=IlIDNMvwmX | Zecheng Hao,Xinyu Shi,Yujia Liu,Zhaofei Yu,Tiejun Huang | NIPS 2024,Poster | Compared to traditional Artificial Neural Network (ANN), Spiking Neural Network (SNN) has garnered widespread academic interest for its intrinsic ability to transmit information in a more energy-efficient manner. However, despite previous efforts to optimize the learning algorithm of SNNs through various methods, SNNs still lag behind ANNs in terms of performance. The recently proposed multi-threshold model provides more possibilities for further enhancing the learning capability of SNNs. In this paper, we rigorously analyze the relationship among the multi-threshold model, vanilla spiking model and quantized ANNs from a mathematical perspective, then propose a novel LM-HT model, which is an equidistant multi-threshold model that can dynamically regulate the global input current and membrane potential leakage on the time dimension. The LM-HT model can also be transformed into a vanilla single threshold model through reparameterization, thereby achieving more flexible hardware deployment. In addition, we note that the LM-HT model can seamlessly integrate with ANN-SNN Conversion framework under special initialization. This novel hybrid learning framework can effectively improve the relatively poor performance of converted SNNs under low time latency. Extensive experimental results have demonstrated that our model can outperform previous state-of-the-art works on various types of datasets, which promote SNNs to achieve a brand-new level of performance comparable to quantized ANNs. Code is available at https://github.com/hzc1208/LMHT_SNN. | https://openreview.net/pdf/44cc450c54c4e22a84f43eeb382c31316668ecaf.pdf |
I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing | https://openreview.net/forum?id=1dpmeH6IHa | https://openreview.net/forum?id=1dpmeH6IHa | Yiwei Ma,Jiayi Ji,Ke Ye,Weihuang Lin,zhibin wang,Yonghan Zheng,Qiang Zhou,Xiaoshuai Sun,Rongrong Ji | NIPS 2024,Poster | Significant progress has been made in the field of Instruction-based Image Editing (IIE). However, evaluating these models poses a significant challenge. A crucial requirement in this field is the establishment of a comprehensive evaluation benchmark for accurately assessing editing results and providing valuable insights for its further development. In response to this need, we propose I2EBench, a comprehensive benchmark designed to automatically evaluate the quality of edited images produced by IIE models from multiple dimensions. I2EBench consists of 2,000+ images for editing, along with 4,000+ corresponding original and diverse instructions. It offers three distinctive characteristics: 1) Comprehensive Evaluation Dimensions: I2EBench comprises 16 evaluation dimensions that cover both high-level and low-level aspects, providing a comprehensive assessment of each IIE model. 2) Human Perception Alignment: To ensure the alignment of our benchmark with human perception, we conducted an extensive user study for each evaluation dimension. 3) Valuable Research Insights: By analyzing the advantages and disadvantages of existing IIE models across the 16 dimensions, we offer valuable research insights to guide future development in the field. We will open-source I2EBench, including all instructions, input images, human annotations, edited images from all evaluated methods, and a simple script for evaluating the results from new IIE models. The code, dataset, and generated images from all IIE models are provided in GitHub: https://github.com/cocoshe/I2EBench. | https://openreview.net/pdf/73d473d8a8ce913526c065f9af10bc5a9f8e4fcd.pdf |
3DET-Mamba: Causal Sequence Modelling for End-to-End 3D Object Detection | https://openreview.net/forum?id=iOleSlC80F | https://openreview.net/forum?id=iOleSlC80F | Mingsheng Li,Jiakang Yuan,Sijin Chen,Lin Zhang,Anyu Zhu,Xin Chen,Tao Chen | NIPS 2024,Poster | Transformer-based architectures have been proven successful in detecting 3D objects from point clouds. However, the quadratic complexity of the attention mechanism makes it difficult to encode rich information as point cloud resolution increases. Recently, state space models (SSM) such as Mamba have gained great attention due to their linear complexity and long sequence modeling ability for language understanding. To exploit the potential of Mamba on 3D scene-level perception, for the first time, we propose 3DET-Mamba, which is a novel SSM-based model designed for indoor 3D object detection. Specifically, we divide the point cloud into different patches and use a lightweight yet effective Inner Mamba to capture local geometric information. To observe the scene from a global perspective, we introduce a novel Dual Mamba module that models the point cloud in terms of spatial distribution and continuity. Additionally, we design a Query-aware Mamba module that decodes context features into object sets under the guidance of learnable queries. Extensive experiments demonstrate that 3DET-Mamba surpasses the previous 3DETR on indoor 3D detection benchmarks such as ScanNet, improving AP25/AP50 from 65.0\%/47.0\% to 70.4\%/54.4\%, respectively. | https://openreview.net/pdf/e562d505611831135151fdb8395d9283e0291da6.pdf
Online Consistency of the Nearest Neighbor Rule | https://openreview.net/forum?id=eOx0SMRUv7 | https://openreview.net/forum?id=eOx0SMRUv7 | Geelon So,Sanjoy Dasgupta | NIPS 2024,Poster | In the realizable online setting, a learner is tasked with making predictions for a stream of instances, where the correct answer is revealed after each prediction. A learning rule is online consistent if its mistake rate eventually vanishes. The nearest neighbor rule is a fundamental prediction strategy, but it is only known to be consistent under strong statistical or geometric assumptions: the instances come i.i.d. or the label classes are well-separated. We prove online consistency for all measurable functions in doubling metric spaces under the mild assumption that instances are generated by a process that is uniformly absolutely continuous with respect to an underlying finite, upper doubling measure. | https://openreview.net/pdf/e4ece709bb02defc38ee0ee2062a75e296fcb2f4.pdf
No-Regret M${}^{\natural}$-Concave Function Maximization: Stochastic Bandit Algorithms and NP-Hardness of Adversarial Full-Information Setting | https://openreview.net/forum?id=NnoAj91HZX | https://openreview.net/forum?id=NnoAj91HZX | Taihei Oki,Shinsaku Sakaue | NIPS 2024,Poster | M${}^{\natural}$-concave functions, a.k.a. gross substitute valuation functions, play a fundamental role in many fields, including discrete mathematics and economics. In practice, perfect knowledge of M${}^{\natural}$-concave functions is often unavailable a priori, and we can optimize them only interactively based on some feedback. Motivated by such situations, we study online M${}^{\natural}$-concave function maximization problems, which are interactive versions of the problem studied by Murota and Shioura (1999). For the stochastic bandit setting, we present $O(T^{-1/2})$-simple regret and $O(T^{2/3})$-regret algorithms under $T$ times access to unbiased noisy value oracles of M${}^{\natural}$-concave functions. A key to proving these results is the robustness of the greedy algorithm to local errors in M${}^{\natural}$-concave function maximization, which is one of our main technical results. While we obtain those positive results for the stochastic setting, another main result of our work is an impossibility in the adversarial setting. We prove that, even with full-information feedback, no algorithms that run in polynomial time per round can achieve $O(T^{1-c})$ regret for any constant $c > 0$ unless $\mathsf{P} = \mathsf{NP}$. Our proof is based on a reduction from the matroid intersection problem for three matroids, which would be a novel idea in the context of online learning. | https://openreview.net/pdf/d8f3b90ac5cfd7e5d2355a6b6f7046e7298a0b52.pdf |
MeshXL: Neural Coordinate Field for Generative 3D Foundation Models | https://openreview.net/forum?id=Gcks157FI3 | https://openreview.net/forum?id=Gcks157FI3 | Sijin Chen,Xin Chen,Anqi Pang,Xianfang Zeng,Wei Cheng,Yijun Fu,Fukun Yin,Zhibin Wang,Jingyi Yu,Gang Yu,BIN FU,Tao Chen | NIPS 2024,Poster | The polygon mesh representation of 3D data exhibits great flexibility, fast rendering speed, and storage efficiency, which is widely preferred in various applications. However, given its unstructured graph representation, the direct generation of high-fidelity 3D meshes is challenging. Fortunately, with a pre-defined ordering strategy, 3D meshes can be represented as sequences, and the generation process can be seamlessly treated as an auto-regressive problem. In this paper, we validate Neural Coordinate Field (NeurCF), an explicit coordinate representation with implicit neural embeddings, is a simple-yet-effective representation for large-scale sequential mesh modeling. After that, we present MeshXL, a family of generative pre-trained auto-regressive models that addresses 3D mesh generation with modern large language model approaches. Extensive experiments show that MeshXL is able to generate high-quality 3D meshes, and can also serve as foundation models for various down-stream applications. | https://openreview.net/pdf/e685fd63866710b5dc7a565325419b4d69c34882.pdf |
Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise | https://openreview.net/forum?id=vYUx8j5KK2 | https://openreview.net/forum?id=vYUx8j5KK2 | Yeonguk Yu,Minhwan Ko,Sungho Shin,Kangmin Kim,Kyoobin Lee | NIPS 2024,Poster | Deep neural networks have demonstrated remarkable performance in various vision tasks, but their success heavily depends on the quality of the training data. Noisy labels are a critical issue in medical datasets and can significantly degrade model performance. Previous clean sample selection methods have not utilized the well pre-trained features of vision foundation models (VFMs) and assumed that training begins from scratch. In this paper, we propose CUFIT, a curriculum fine-tuning paradigm of VFMs for medical image classification under label noise. Our method is motivated by the fact that linear probing of VFMs is relatively unaffected by noisy samples, as it does not update the feature extractor of the VFM, thus robustly classifying the training samples. Subsequently, curriculum fine-tuning of two adapters is conducted, starting with clean sample selection from the linear probing phase. Our experimental results demonstrate that CUFIT outperforms previous methods across various medical image benchmarks. Specifically, our method surpasses previous baselines by 5.0\%, 2.1\%, 4.6\%, and 5.8\% at a 40\% noise rate on the HAM10000, APTOS-2019, BloodMnist, and OrgancMnist datasets, respectively. Furthermore, we provide extensive analyses to demonstrate the impact of our method on noisy label detection. For instance, our method shows higher label precision and recall compared to previous approaches. Our work highlights the potential of leveraging VFMs in medical image classification under challenging conditions of noisy labels. | https://openreview.net/pdf/d5aa08cc841e6f20d9d81a173a38781d31ff4224.pdf |
Randomized Exploration for Reinforcement Learning with Multinomial Logistic Function Approximation | https://openreview.net/forum?id=7tRtH0AoBl | https://openreview.net/forum?id=7tRtH0AoBl | Wooseong Cho,Taehyun Hwang,Joongkyu Lee,Min-hwan Oh | NIPS 2024,Poster | We study reinforcement learning with _multinomial logistic_ (MNL) function approximation where the underlying transition probability kernel of the _Markov decision processes_ (MDPs) is parametrized by an unknown transition core with features of state and action. For the finite horizon episodic setting with inhomogeneous state transitions, we propose provably efficient algorithms with randomized exploration having frequentist regret guarantees. For our first algorithm, $\texttt{RRL-MNL}$, we adapt optimistic sampling to ensure the optimism of the estimated value function with sufficient frequency and establish that $\texttt{RRL-MNL}$ is both _statistically_ and _computationally_ efficient, achieving a $\tilde{\mathcal{O}}(\kappa^{-1} d^{\frac{3}{2}} H^{\frac{3}{2}} \sqrt{T})$ frequentist regret bound with constant-time computational cost per episode. Here, $d$ is the dimension of the transition core, $H$ is the horizon length, $T$ is the total number of steps, and $\kappa$ is a problem-dependent constant. Despite the simplicity and practicality of $\texttt{RRL-MNL}$, its regret bound scales with $\kappa^{-1}$, which is potentially large in the worst case. To improve the dependence on $\kappa^{-1}$, we propose $\texttt{ORRL-MNL}$, which estimates the value function using local gradient information of the MNL transition model. We show that its frequentist regret bound is $\tilde{\mathcal{O}}(d^{\frac{3}{2}} H^{\frac{3}{2}} \sqrt{T} + \kappa^{-1} d^2 H^2)$. To the best of our knowledge, these are the first randomized RL algorithms for the MNL transition model that achieve both computational and statistical efficiency. Numerical experiments demonstrate the superior performance of the proposed algorithms. | https://openreview.net/pdf/a79c66b20df158162b782eaf4c9306f0b358e0ff.pdf |
Large Language Models Must Be Taught to Know What They Don’t Know | https://openreview.net/forum?id=QzvWyggrYB | https://openreview.net/forum?id=QzvWyggrYB | Sanyam Kapoor,Nate Gruver,Manley Roberts,Katherine M. Collins,Arka Pal,Umang Bhatt,Adrian Weller,Samuel Dooley,Micah Goldblum,Andrew Gordon Wilson | NIPS 2024,Poster | When using large language models (LLMs) in high-stakes applications, we need to know when we can trust their predictions. Some works argue that prompting high-performance LLMs is sufficient to produce calibrated uncertainties, while others introduce sampling methods that can be prohibitively expensive. In this work, we first argue that prompting on its own is insufficient to achieve good calibration and then show that fine-tuning on a small dataset of correct and incorrect answers can create an uncertainty estimate with good generalization and small computational overhead. We show that a thousand graded examples are sufficient to outperform baseline methods and that training through the features of a model is necessary for good performance and tractable for large open-source models when using LoRA. We also investigate the mechanisms that enable reliable LLM uncertainty estimation, finding that many models can be used as general-purpose uncertainty estimators, applicable not just to their own uncertainties but also the uncertainty of other models. Lastly, we show that uncertainty estimates inform human use of LLMs in human-AI collaborative settings through a user study. | https://openreview.net/pdf/466dc35468981596c539e33881e69e17f6a7facb.pdf |
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs | https://openreview.net/forum?id=wSpIdUXZYX | https://openreview.net/forum?id=wSpIdUXZYX | Md Ashiqur Rahman,Robert Joseph George,Mogab Elleithy,Daniel Leibovici,Zongyi Li,Boris Bonev,Colin White,Julius Berner,Raymond A. Yeh,Jean Kossaifi,Kamyar Azizzadenesheli,Anima Anandkumar | NIPS 2024,Poster | Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of high-resolution training data.
To address these issues, we propose *Codomain Attention Neural Operator* (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems.
Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations, fluid-structure interactions, and Rayleigh-Bénard convection, we found CoDA-NO to outperform existing methods by over 36%. | https://openreview.net/pdf/8f9b484442e75171a47f932c6ba656a806dac2e1.pdf |
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks | https://openreview.net/forum?id=FOfU3qhcIG | https://openreview.net/forum?id=FOfU3qhcIG | Benjamin Feuer,Robin Tibor Schirrmeister,Valeriia Cherepanova,Chinmay Hegde,Frank Hutter,Micah Goldblum,Niv Cohen,Colin White | NIPS 2024,Poster | While tabular classification has traditionally relied on from-scratch training, a recent breakthrough called prior-data fitted networks (PFNs) challenges this approach. Similar to large language models, PFNs make use of pretraining and in-context learning to achieve strong performance on new tasks in a single forward pass. However, current PFNs have limitations that prohibit their widespread adoption. Notably, TabPFN achieves very strong performance on small tabular datasets but is not designed to make predictions for datasets of size larger than 1000. In this work, we overcome these limitations and substantially improve the performance of PFNs via context optimization. We introduce TuneTables, a parameter-efficient fine-tuning strategy for PFNs that compresses large datasets into a smaller learned context. We conduct extensive experiments on nineteen algorithms over 98 datasets and find that TuneTables achieves the best performance on average, outperforming boosted trees such as CatBoost, while optimizing fewer than 5\% of TabPFN's parameters. Furthermore, we show that TuneTables can be used as an interpretability tool and can even be used to mitigate biases by optimizing a fairness objective. | https://openreview.net/pdf/1e9d282b0239910446fc3e5b3da56e3bf3ae92a3.pdf |
Estimating Ego-Body Pose from Doubly Sparse Egocentric Video Data | https://openreview.net/forum?id=MHCnLo2QeA | https://openreview.net/forum?id=MHCnLo2QeA | Seunggeun Chi,Pin-Hao Huang,Enna Sachdeva,Hengbo Ma,Karthik Ramani,Kwonjoon Lee | NIPS 2024,Poster | We study the problem of estimating the body movements of a camera wearer from egocentric videos. Current methods for ego-body pose estimation rely on temporally dense sensor data, such as IMU measurements from spatially sparse body parts like the head and hands. However, we propose that even temporally sparse observations, such as hand poses captured intermittently from egocentric videos during natural or periodic hand movements, can effectively constrain overall body motion. Naively applying diffusion models to generate full-body pose from head pose and sparse hand pose leads to suboptimal results. To overcome this, we develop a two-stage approach that decomposes the problem into temporal completion and spatial completion. First, our method employs masked autoencoders to impute hand trajectories by leveraging the spatiotemporal correlations between the head pose sequence and intermittent hand poses, providing uncertainty estimates. Subsequently, we employ conditional diffusion models to generate plausible full-body motions based on these temporally dense trajectories of the head and hands, guided by the uncertainty estimates from the imputation. The effectiveness of our method was rigorously tested and validated through comprehensive experiments conducted on various HMD setups with the AMASS and Ego-Exo4D datasets. | https://openreview.net/pdf/567752917750c9da072764a3e1153c52f4092540.pdf
Diffusion-Inspired Truncated Sampler for Text-Video Retrieval | https://openreview.net/forum?id=SrQua0ATRZ | https://openreview.net/forum?id=SrQua0ATRZ | Jiamian Wang,Pichao WANG,Dongfang Liu,Qiang Guan,Sohail Dianat,MAJID RABBANI,Raghuveer Rao,ZHIQIANG TAO | NIPS 2024,Poster | Prevalent text-to-video retrieval methods represent multimodal text-video data in a joint embedding space, aiming at bridging the relevant text-video pairs and pulling away irrelevant ones. One main challenge in state-of-the-art retrieval methods lies in the modality gap, which stems from the substantial disparities between text and video and can persist in the joint space. In this work, we leverage the potential of Diffusion models to address the text-video modality gap by progressively aligning text and video embeddings in a unified space. However, we identify two key limitations of existing Diffusion models in retrieval tasks: The L2 loss does not fit the ranking problem inherent in text-video retrieval, and the generation quality heavily depends on the varied initial point drawn from the isotropic Gaussian, causing inaccurate retrieval. To this end, we introduce a new Diffusion-Inspired Truncated Sampler (DITS) that jointly performs progressive alignment and modality gap modeling in the joint embedding space. The key innovation of DITS is to leverage the inherent proximity of text and video embeddings, defining a truncated diffusion flow from the fixed text embedding to the video embedding, enhancing controllability compared to adopting the isotropic Gaussian. Moreover, DITS adopts the contrastive loss to jointly consider the relevant and irrelevant pairs, not only facilitating alignment but also yielding a discriminatively structured embedding. Experiments on five benchmark datasets suggest the state-of-the-art performance of DITS. We empirically find that DITS can also improve the structure of the CLIP embedding space. Code is available at https://github.com/Jiamian-Wang/DITS-text-video-retrieval | https://openreview.net/pdf/7c7acf1335b4f24831d0925133663ff3c63f4e23.pdf
Boosting Text-to-Video Generative Model with MLLMs Feedback | https://openreview.net/forum?id=3ivnixHy16 | https://openreview.net/forum?id=3ivnixHy16 | Xun Wu,Shaohan Huang,Guolong Wang,Jing Xiong,Furu Wei | NIPS 2024,Poster | Recent advancements in text-to-video generative models, such as Sora, have showcased impressive capabilities. These models have attracted significant interest for their potential applications. However, they often rely on extensive datasets of variable quality, which can result in generated videos that lack aesthetic appeal and do not accurately reflect the input text prompts. A promising approach to mitigate these issues is to leverage Reinforcement Learning from Human Feedback (RLHF), which aims to align the outputs of text-to-video generative models with human preferences. However, the considerable costs associated with manual annotation have led to a scarcity of comprehensive preference datasets. In response to this challenge, our study begins by investigating the efficacy of annotations generated by Multimodal Large Language Models (MLLMs) in capturing video preferences, discovering a high degree of concordance with human judgments. Building upon this finding, we utilize MLLMs to perform fine-grained video preference annotations across two dimensions, resulting in the creation of VideoPrefer, which includes 135,000 preference annotations. Utilizing this dataset, we introduce VideoRM, the first general-purpose reward model tailored for video preference in the text-to-video domain. Our comprehensive experiments confirm the effectiveness of both VideoPrefer and VideoRM, representing a significant step forward in the field. | https://openreview.net/pdf/4c9eebaad669788792e0a010be4031be5bdc426e.pdf
Unified Generative and Discriminative Training for Multi-modal Large Language Models | https://openreview.net/forum?id=w67vRHZF13 | https://openreview.net/forum?id=w67vRHZF13 | Wei Chow,Juncheng Li,Qifan Yu,Kaihang Pan,Hao Fei,Zhiqi Ge,Shuai Yang,Siliang Tang,Hanwang Zhang,Qianru Sun | NIPS 2024,Poster | In recent times, Vision-Language Models (VLMs) have been trained under two predominant paradigms. Generative training has enabled Multimodal Large Language Models (MLLMs) to tackle various complex tasks, yet issues such as hallucinations and weak object discrimination persist. Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval, yet struggles with complex scenarios requiring fine-grained semantic differentiation. This paper addresses these challenges by proposing a unified approach that integrates the strengths of both paradigms. Considering interleaved image-text sequences as the general format of input samples, we introduce a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM’s hidden state. This approach enhances the MLLM’s ability to capture global semantics and distinguish fine-grained semantics. By leveraging dynamic sequence alignment within the Dynamic Time Warping framework and integrating a novel kernel for fine-grained semantic differentiation, our method effectively balances generative and discriminative tasks. Extensive experiments demonstrate the effectiveness of our approach, achieving state-of-the-art results in multiple generative tasks, especially those requiring cognitive and discrimination abilities. Additionally, our method surpasses discriminative benchmarks in interleaved and fine-grained retrieval tasks. By employing a retrieval-augmented generation strategy, our approach further enhances performance in some generative tasks within one model, offering a promising direction for future research in vision-language modeling. | https://openreview.net/pdf/92d9a4d22bb9998d8f043e2b98b85d4d012ff3c7.pdf |
Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models | https://openreview.net/forum?id=rgtrYVC9n4 | https://openreview.net/forum?id=rgtrYVC9n4 | Lujun Li,Peijie Dong,Zhenheng Tang,Xiang Liu,Qiang Wang,Wenhan Luo,Wei Xue,Qifeng Liu,Xiaowen Chu,Yike Guo | NIPS 2024,Poster | In this paper, we present DSA, the first automated framework for discovering sparsity allocation schemes for layer-wise pruning in Large Language Models (LLMs). LLMs have become increasingly powerful, but their large parameter counts make them computationally expensive. Existing pruning methods for compressing LLMs primarily focus on evaluating redundancies and removing element-wise weights. However, these methods fail to allocate adaptive layer-wise sparsities, leading to performance degradation in challenging tasks. We observe that per-layer importance statistics can serve as allocation indications, but their effectiveness depends on the allocation function between layers. To address this issue, we develop an expression discovery framework to explore potential allocation strategies. Our allocation functions involve two steps: reducing element-wise metrics to per-layer importance scores, and modelling layer importance to sparsity ratios. To search for the most effective allocation function, we construct a search space consisting of pre-process, reduction, transform, and post-process operations. We leverage an evolutionary algorithm to perform crossover and mutation on superior candidates within the population, guided by performance evaluation. Finally, we seamlessly integrate our discovered functions into various uniform methods, resulting in significant performance improvements. We conduct extensive experiments on multiple challenging tasks such as arithmetic, knowledge reasoning, and multimodal benchmarks spanning GSM8K, MMLU, SQA, and VQA, demonstrating that our DSA method achieves significant performance gains on the LLaMA-1|2|3, Mistral, and OPT models. Notably, the LLaMA-1|2|3 model pruned by our DSA reaches 4.73\%|6.18\%|10.65\% gain over the state-of-the-art techniques (e.g., Wanda and SparseGPT). | https://openreview.net/pdf/c742f770723557fe9f03c7f7eb1944b07bd68423.pdf |
PCoTTA: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding | https://openreview.net/forum?id=739jAzUXk7 | https://openreview.net/forum?id=739jAzUXk7 | Jincen Jiang,Qianyu Zhou,Yuhang Li,Xinkui Zhao,Meili Wang,Lizhuang Ma,Jian Chang,Jian Jun Zhang,Xuequan Lu | NIPS 2024,Poster | In this paper, we present PCoTTA, an innovative, pioneering framework for Continual Test-Time Adaptation (CoTTA) in multi-task point cloud understanding, enhancing the model's transferability towards the continually changing target domain. We introduce a multi-task setting for PCoTTA, which is practical and realistic, handling multiple tasks within one unified model during the continual adaptation. Our PCoTTA involves three key components: automatic prototype mixture (APM), Gaussian Splatted feature shifting (GSFS), and contrastive prototype repulsion (CPR). Firstly, APM is designed to automatically mix the source prototypes with the learnable prototypes with a similarity balancing factor, avoiding catastrophic forgetting. Then, GSFS dynamically shifts the testing sample toward the source domain, mitigating error accumulation in an online manner. In addition, CPR is proposed to pull the nearest learnable prototype close to the testing feature and push it away from other prototypes, making each prototype distinguishable during the adaptation. Experimental comparisons lead to a new benchmark, demonstrating PCoTTA's superiority in boosting the model's transferability towards the continually changing target domain. Our source code is available at: https://github.com/Jinec98/PCoTTA. | https://openreview.net/pdf/10d19bfe9a28e15074dbb79450ad0e5bc9dde6e4.pdf |
TrAct: Making First-layer Pre-Activations Trainable | https://openreview.net/forum?id=gCCMzedgbo | https://openreview.net/forum?id=gCCMzedgbo | Felix Petersen,Christian Borgelt,Stefano Ermon | NIPS 2024,Poster | We consider the training of the first layer of vision models and notice the clear relationship between pixel values and gradient update magnitudes: the gradients arriving at the weights of a first layer are by definition directly proportional to (normalized) input pixel values. Thus, an image with low contrast has a smaller impact on learning than an image with higher contrast, and a very bright or very dark image has a stronger impact on the weights than an image with moderate brightness. In this work, we propose performing gradient descent on the embeddings produced by the first layer of the model. However, switching to discrete inputs with an embedding layer is not a reasonable option for vision models. Thus, we propose the conceptual procedure of (i) a gradient descent step on first layer activations to construct an activation proposal, and (ii) finding the optimal weights of the first layer, i.e., those weights which minimize the squared distance to the activation proposal. We provide a closed form solution of the procedure and adjust it for robust stochastic training while computing everything efficiently. Empirically, we find that TrAct (Training Activations) speeds up training by factors between 1.25x and 4x while requiring only a small computational overhead. We demonstrate the utility of TrAct with different optimizers for a range of different vision models including convolutional and transformer architectures. | https://openreview.net/pdf/36ebc96c6cc3bb9d241bfbe90714af100fc0bb94.pdf |
LRM-Zero: Training Large Reconstruction Models with Synthesized Data | https://openreview.net/forum?id=MtRvzJBsBA | https://openreview.net/forum?id=MtRvzJBsBA | Desai Xie,Sai Bi,Zhixin Shu,Kai Zhang,Zexiang Xu,Yi Zhou,Soren Pirk,Arie Kaufman,Xin Sun,Hao Tan | NIPS 2024,Poster | We present LRM-Zero, a Large Reconstruction Model (LRM) trained entirely on synthesized 3D data, achieving high-quality sparse-view 3D reconstruction. The core of LRM-Zero is our procedural 3D dataset, Zeroverse, which is automatically synthesized from simple primitive shapes with random texturing and augmentations (e.g., height fields, boolean differences, and wireframes). Unlike previous 3D datasets (e.g., Objaverse) which are often captured or crafted by humans to approximate real 3D data, Zeroverse completely ignores realistic global semantics but is rich in complex geometric and texture details that are locally similar to or even more intricate than real objects. We demonstrate that our LRM-Zero, trained with our fully synthesized Zeroverse, can achieve high visual quality in the reconstruction of real-world objects, competitive with models trained on Objaverse. We also analyze several critical design choices of Zeroverse that contribute to LRM-Zero's capability and training stability. Our work demonstrates that 3D reconstruction, one of the core tasks in 3D vision, can potentially be addressed without the semantics of real-world objects. The Zeroverse's procedural synthesis code and interactive visualization are available at: https://desaixie.github.io/lrm-zero/. | https://openreview.net/pdf/e1be4c6318db6da389fa5e7cc8d7250e92650ba6.pdf |