Dataset schema: title (string, 15-138 chars); url (string, 42 chars); detail_url (string, 42 chars); authors (string, 7-526 chars); tags (string, 3 distinct values); abstract (string, 480-3.09k chars); pdf (string, 71 chars)
BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity
https://openreview.net/forum?id=mQYHXUUTkU
https://openreview.net/forum?id=mQYHXUUTkU
Andrew Luo,Margaret Marie Henderson,Michael J. Tarr,Leila Wehbe
ICLR 2024,Poster
Understanding the functional organization of higher visual cortex is a central focus in neuroscience. Past studies have primarily mapped the visual and semantic selectivity of neural populations using hand-selected stimuli, which may bias results towards pre-existing hypotheses of visual cortex functionality. Moving beyond conventional approaches, we introduce a data-driven method that generates natural language descriptions for images predicted to maximally activate individual voxels of interest. Our method -- Semantic Captioning Using Brain Alignments ("BrainSCUBA") -- builds upon the rich embedding space learned by a contrastive vision-language model and utilizes a pre-trained large language model to generate interpretable captions. We validate our method through fine-grained voxel-level captioning across higher-order visual regions. We further perform text-conditioned image synthesis with the captions, and show that our images are semantically coherent and yield high predicted activations. Finally, to demonstrate how our method enables scientific discovery, we perform exploratory investigations on the distribution of "person" representations in the brain, and discover fine-grained semantic selectivity in body-selective areas. Unlike earlier studies that decode text, our method derives *voxel-wise captions of semantic selectivity*. Our results show that BrainSCUBA is a promising means for understanding functional preferences in the brain, and provides motivation for further hypothesis-driven investigation of visual cortex.
https://openreview.net/pdf/a709ed572ed6c4b00439d924d8b85931fc309202.pdf
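A minimal numpy sketch of the encoder-plus-projection step the BrainSCUBA abstract describes: fit voxel-wise linear encoders over vision-language embeddings, then project each voxel's weight vector back onto the natural-image embedding manifold before handing it to a caption decoder. The ridge solver, the softmax projection, and the temperature are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch in the spirit of BrainSCUBA (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
d, n_imgs, n_vox = 512, 10_000, 3           # embedding dim, image pool, voxels
X = rng.standard_normal((n_imgs, d))         # stand-in CLIP image embeddings
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = X @ rng.standard_normal((d, n_vox))      # stand-in voxel responses

# Ridge-regression encoder: one weight vector per voxel.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # (d, n_vox)

def project_voxel(w, pool, temp=1 / 150):
    """Softmax-weighted projection of a voxel weight onto the image pool,
    keeping the decoded vector on the natural-image embedding manifold."""
    sims = pool @ (w / np.linalg.norm(w))
    a = np.exp(sims / temp - np.max(sims / temp))
    a /= a.sum()
    e = a @ pool
    return e / np.linalg.norm(e)

e_vox = project_voxel(W[:, 0], X)
# `e_vox` would then condition a CLIP-prefix language model to produce the caption.
```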
GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion
https://openreview.net/forum?id=FvK2noilxT
https://openreview.net/forum?id=FvK2noilxT
Xueyi Liu,Li Yi
ICLR 2024,Poster
In this work, we tackle the challenging problem of denoising hand-object interactions (HOI). Given an erroneous interaction sequence, the objective is to refine the incorrect hand trajectory to remove interaction artifacts and obtain a perceptually realistic sequence. This challenge involves intricate interaction noise, including unnatural hand poses and incorrect hand-object relations, alongside the necessity for robust generalization to new interactions and diverse noise patterns. We address these challenges through a novel approach, GeneOH Diffusion, incorporating two key designs: an innovative contact-centric HOI representation named GeneOH and a new domain-generalizable denoising scheme. The contact-centric representation GeneOH informatively parameterizes the HOI process, facilitating enhanced generalization across various HOI scenarios. The new denoising scheme consists of a canonical denoising model trained to project noisy data samples from a whitened noise space to a clean data manifold, and a "denoising via diffusion" strategy that handles input trajectories with various noise patterns by first diffusing them to align with the whitened noise space and then cleaning them via the canonical denoiser. Extensive experiments on four benchmarks with significant domain variations demonstrate the superior effectiveness of our method. GeneOH Diffusion also shows promise for various downstream applications. We include [a website](https://meowuu7.github.io/GeneOH-Diffusion/) introducing the work.
https://openreview.net/pdf/758b44508c97b8e9709281ba88fb4e1cc4c92077.pdf
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models
https://openreview.net/forum?id=zpVPhvVKXk
https://openreview.net/forum?id=zpVPhvVKXk
Senmao Li,Joost van de Weijer,taihang Hu,Fahad Khan,Qibin Hou,Yaxing Wang,jian Yang
ICLR 2024,Poster
The success of recent text-to-image diffusion models is largely due to their capacity to be guided by a complex text prompt, which enables users to precisely describe the desired content. However, these models struggle to effectively suppress the generation of undesired content that the prompt explicitly requests be omitted from the generated image. In this paper, we analyze how to manipulate the text embeddings and remove unwanted content from them. We introduce two contributions, which we refer to as soft-weighted regularization and inference-time text embedding optimization. The first regularizes the text embedding matrix and effectively suppresses the undesired content. The second further suppresses the generation of unwanted content from the prompt and encourages the generation of desired content. We evaluate our method quantitatively and qualitatively through extensive experiments, validating its effectiveness. Furthermore, our method generalizes to both pixel-space diffusion models (i.e., DeepFloyd-IF) and latent-space diffusion models (i.e., Stable Diffusion).
https://openreview.net/pdf/9aee144b4a2835b09f6d4e543e1b219cbbd1ebc6.pdf
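A hedged sketch of what the abstract's soft-weighted regularization could look like: decompose the text-embedding matrix spanning the unwanted content with an SVD and shrink its singular values. The exponential weighting and the strength parameter below are assumptions for illustration, not the authors' formula.

```python
# Illustrative singular-value shrinkage on a (toy) text-embedding matrix.
import numpy as np

def soft_weighted_regularize(E, strength=1.0):
    """E: (n_tokens, d) embeddings spanning the unwanted content."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    s_shrunk = s * np.exp(-strength * s / s.max())   # assumed soft weighting
    return (U * s_shrunk) @ Vt

E = np.random.default_rng(0).standard_normal((8, 768))  # toy token embeddings
E_clean = soft_weighted_regularize(E)
```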
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
https://openreview.net/forum?id=OZitfSXpdT
https://openreview.net/forum?id=OZitfSXpdT
Chengming Hu,Haolun Wu,Xuan Li,Chen Ma,Xi Chen,Boyu Wang,Jun Yan,Xue Liu
ICLR 2024,Poster
Knowledge distillation aims to train a compact student network using soft supervision from a larger teacher network and hard supervision from ground truths. However, determining an optimal knowledge fusion ratio that balances these supervisory signals remains challenging. Prior methods generally resort to a constant or heuristic-based fusion ratio, which often falls short of a proper balance. In this study, we introduce a novel adaptive method for learning a sample-wise knowledge fusion ratio, exploiting the correctness of the teacher and student, as well as how well the student mimics the teacher on each sample. Our method naturally leads to the \textit{intra-sample} trilateral geometric relations among the student prediction ($\mathcal{S}$), teacher prediction ($\mathcal{T}$), and ground truth ($\mathcal{G}$). To counterbalance the impact of outliers, we further extend to the \textit{inter-sample} relations, incorporating the teacher's global average prediction ($\mathcal{\bar{T}}$) for samples within the same class. A simple neural network then learns the implicit mapping from the intra- and inter-sample relations to an adaptive, sample-wise knowledge fusion ratio in a bilevel-optimization manner. Our approach provides a simple, practical, and adaptable solution for knowledge distillation that can be employed across various architectures and model sizes. Extensive experiments demonstrate consistent improvements over other loss re-weighting methods on image classification, attack detection, and click-through rate prediction.
https://openreview.net/pdf/1ff49fb0c880655936d929769c3a595b46f08d38.pdf
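A minimal PyTorch sketch of a sample-wise fusion ratio driven by the trilateral geometry among student ($\mathcal{S}$), teacher ($\mathcal{T}$), and ground truth ($\mathcal{G}$), as the abstract describes. The three-distance feature set and the tiny ratio network are illustrative assumptions; the inter-sample extension is omitted.

```python
import torch
import torch.nn.functional as F

ratio_net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(),
                                torch.nn.Linear(16, 1), torch.nn.Sigmoid())

def fused_loss(s_logits, t_logits, targets):
    s, t = F.softmax(s_logits, -1), F.softmax(t_logits, -1)
    g = F.one_hot(targets, s.shape[-1]).float()
    # Pairwise "edge lengths" of the S-T-G triangle, per sample.
    feats = torch.stack([(s - t).norm(dim=-1),
                         (s - g).norm(dim=-1),
                         (t - g).norm(dim=-1)], dim=-1)
    alpha = ratio_net(feats).squeeze(-1)                 # fusion ratio per sample
    kd = F.kl_div(F.log_softmax(s_logits, -1), t, reduction="none").sum(-1)
    ce = F.cross_entropy(s_logits, targets, reduction="none")
    return (alpha * kd + (1 - alpha) * ce).mean()

loss = fused_loss(torch.randn(4, 10), torch.randn(4, 10), torch.randint(0, 10, (4,)))
```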
Unraveling the Enigma of Double Descent: An In-depth Analysis through the Lens of Learned Feature Space
https://openreview.net/forum?id=CEkIyshNbC
https://openreview.net/forum?id=CEkIyshNbC
Yufei Gu,Xiaoqing Zheng,Tomaso Aste
ICLR 2024,Poster
Double descent is a counter-intuitive phenomenon in machine learning, and researchers have observed its manifestation in various models and tasks. While some theoretical explanations have been proposed for this phenomenon in specific contexts, an accepted theory of its underlying mechanism in deep learning has yet to be established. In this study, we revisit the phenomenon of double descent and demonstrate that the presence of noisy data strongly influences its occurrence. By comprehensively analysing the feature space of learned representations, we unveil that double descent arises in imperfect models trained with noisy data. We argue that while small and intermediate models before the interpolation threshold follow the traditional bias-variance trade-off, over-parameterized models interpolate noisy samples among robust data, thus acquiring the capability to separate the information from the noise. The source code is available at \url{https://github.com/Yufei-Gu-451/double_descent_inference.git}.
https://openreview.net/pdf/01d16ee2c2efbbd0ca37c6d9b32b20eee49faf96.pdf
Meta-Evolve: Continuous Robot Evolution for One-to-many Policy Transfer
https://openreview.net/forum?id=RthOl4jHw5
https://openreview.net/forum?id=RthOl4jHw5
Xingyu Liu,Deepak Pathak,Ding Zhao
ICLR 2024,Poster
We investigate the problem of transferring an expert policy from a source robot to multiple different robots. To solve this problem, we propose a method named *Meta-Evolve* that uses continuous robot evolution to efficiently transfer the policy to each target robot through a set of tree-structured evolutionary robot sequences. The robot evolution tree allows the robot evolution paths to be shared, so our approach can significantly outperform naive one-to-one policy transfer. We present a heuristic approach to determine an optimized robot evolution tree. Experiments have shown that our method is able to improve the efficiency of one-to-three transfer of manipulation policy by up to 3.2$\times$ and one-to-six transfer of agile locomotion policy by 2.4$\times$ in terms of simulation cost over the baseline of launching multiple independent one-to-one policy transfers. Supplementary videos available at the project website: https://sites.google.com/view/meta-evolve.
https://openreview.net/pdf/5ef4fdbe7aea08a39e2674c53de2348273e64350.pdf
DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation
https://openreview.net/forum?id=h1sFUGlI09
https://openreview.net/forum?id=h1sFUGlI09
Bowen Yin,Xuying Zhang,Zhong-Yu Li,Li Liu,Ming-Ming Cheng,Qibin Hou
ICLR 2024,Poster
We present DFormer, a novel RGB-D pretraining framework to learn transferable representations for RGB-D segmentation tasks. DFormer has two key innovations: 1) Unlike previous works that encode RGB-D information with an RGB-pretrained backbone, we pretrain the backbone using image-depth pairs from ImageNet-1K, so that DFormer is endowed with the capacity to encode RGB-D representations; 2) DFormer comprises a sequence of RGB-D blocks, which are tailored for encoding both RGB and depth information through a novel building block design. DFormer avoids the mismatched encoding of 3D geometric relationships in depth maps by RGB-pretrained backbones, a problem that is widespread in existing methods but has remained unresolved. We finetune the pretrained DFormer on two popular RGB-D tasks, i.e., RGB-D semantic segmentation and RGB-D salient object detection, with a lightweight decoder head. Experimental results show that our DFormer achieves new state-of-the-art performance on these two tasks with less than half of the computational cost of the current best methods, evaluated on two RGB-D semantic segmentation datasets and five RGB-D salient object detection datasets. Code will be made publicly available.
https://openreview.net/pdf/04998d857bc9fdde1d3e08fcb47334b0e43a6d15.pdf
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving
https://openreview.net/forum?id=Ep0TtjVoap
https://openreview.net/forum?id=Ep0TtjVoap
Zhibin Gou,Zhihong Shao,Yeyun Gong,yelong shen,Yujiu Yang,Minlie Huang,Nan Duan,Weizhu Chen
ICLR 2024,Poster
Large language models have made significant progress in various language tasks, yet they still struggle with complex mathematics. In this paper, we propose ToRA, a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the utilization of external tools (e.g., computation libraries and symbolic solvers), thereby amalgamating the analytical prowess of language and the computational efficiency of tools. To train ToRA, we curate interactive tool-use trajectories on mathematical datasets, apply imitation learning on the annotations, and propose output space shaping to further refine models' reasoning behavior. As a result, ToRA models significantly outperform open-source models on 10 mathematical reasoning datasets across all scales, with 13%-19% absolute improvements on average. Notably, ToRA-7B reaches 44.6% on the competition-level dataset MATH, surpassing the best open-source model WizardMath-70B by 22% absolute. ToRA-34B is also the first open-source model to achieve an accuracy exceeding 50% on MATH; it significantly outperforms GPT-4's CoT result and is competitive with GPT-4 when solving problems with programs. Additionally, we conduct a comprehensive analysis of the benefits and remaining challenges of tool interaction for mathematical reasoning, providing valuable insights for future research.
https://openreview.net/pdf/2b0b45b11d0f61912efb1a932fb494d36f7b88e6.pdf
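An illustrative tool-integrated reasoning loop in the spirit of ToRA. The `llm` function is a hypothetical stand-in for a real model call; only the interleaving pattern (rationale, program, executed output, continue) follows the abstract.

```python
import contextlib
import io
import re

def llm(prompt: str) -> str:            # placeholder for a real model call
    return "```python\nprint(2 + 2)\n```\nFinal answer: 4"

def run_block(code: str) -> str:
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})                   # sandboxing omitted for brevity
    return buf.getvalue().strip()

def solve(question: str, max_rounds: int = 3) -> str:
    trace = question
    for _ in range(max_rounds):
        step = llm(trace)
        m = re.search(r"```python\n(.*?)```", step, re.S)
        if m:                            # execute the emitted program and
            trace += step + f"\nOutput: {run_block(m.group(1))}\n"
        if "Final answer:" in step:
            return step.split("Final answer:")[-1].strip()
    return trace

print(solve("What is 2 + 2?"))
```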
Bayesian Bi-clustering of Neural Spiking Activity with Latent Structures
https://openreview.net/forum?id=ZYm1Ql6udy
https://openreview.net/forum?id=ZYm1Ql6udy
Ganchao Wei
ICLR 2024,Poster
Modern neural recording techniques allow neuroscientists to obtain spiking activity of multiple neurons from different brain regions over long time periods, which requires new statistical methods for understanding the structure of such large-scale data. In this paper, we develop a bi-clustering method to cluster neural spiking activity spatially and temporally, according to its low-dimensional latent structures. The spatial (neuron) clusters are defined by the latent trajectories within each neural population, while the temporal (state) clusters are defined by (populationally) synchronous local linear dynamics shared across different periods. To flexibly extract the bi-clustering structure, we build the model non-parametrically and develop an efficient Markov chain Monte Carlo (MCMC) algorithm to sample the posterior distributions of model parameters. Validating our proposed MCMC algorithm through simulations, we find that the method successfully recovers unknown parameters and true bi-clustering structures. We then apply the proposed bi-clustering method to multi-regional neural recordings under different experimental settings, where we find that simultaneously considering latent trajectories and spatial-temporal clustering structures provides more accurate and interpretable results. Overall, the proposed method provides scientific insights for large-scale (counting) time series with elongated recording periods, and it can potentially have applications beyond neuroscience.
https://openreview.net/pdf/9c2a7f237abf74c83846f7f31f5e9f10de0e5c99.pdf
GRANDE: Gradient-Based Decision Tree Ensembles for Tabular Data
https://openreview.net/forum?id=XEFWBxi075
https://openreview.net/forum?id=XEFWBxi075
Sascha Marton,Stefan Lüdtke,Christian Bartelt,Heiner Stuckenschmidt
ICLR 2024,Poster
Despite the success of deep learning for text and image data, tree-based ensemble models are still state-of-the-art for machine learning with heterogeneous tabular data. However, there remains a significant need for tabular-specific gradient-based methods, owing to their high flexibility. In this paper, we propose $\text{GRANDE}$, $\text{GRA}$die$\text{N}$t-Based $\text{D}$ecision Tree $\text{E}$nsembles, a novel approach for learning hard, axis-aligned decision tree ensembles using end-to-end gradient descent. GRANDE is based on a dense representation of tree ensembles, which permits the use of backpropagation with a straight-through operator to jointly optimize all model parameters. Our method combines axis-aligned splits, which are a useful inductive bias for tabular data, with the flexibility of gradient-based optimization. Furthermore, we introduce an advanced instance-wise weighting that facilitates learning representations for both simple and complex relations within a single model. We conduct an extensive evaluation on a predefined benchmark with 19 classification datasets and demonstrate that our method outperforms existing gradient-boosting and deep learning frameworks on most datasets. The method is available at: https://github.com/s-marton/GRANDE
https://openreview.net/pdf/1bbd6ced063094ce68202e826b9d061a03785cea.pdf
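A sketch of a hard, axis-aligned split trained end-to-end with a straight-through estimator, the core trick the GRANDE abstract names. The single-split scope and shapes are simplifications of the full dense tree-ensemble representation.

```python
import torch
import torch.nn.functional as F

def hard_split(x, feature_logits, threshold):
    """x: (batch, n_features); routes each sample through one hard split."""
    f = F.softmax(feature_logits, -1)                    # soft feature choice
    f_hard = F.one_hot(f.argmax(-1), f.shape[-1]).float()
    f_st = f_hard + f - f.detach()                       # straight-through selection
    v = (x * f_st).sum(-1)                               # value of the chosen feature
    s = torch.sigmoid(v - threshold)                     # soft routing probability
    return (v > threshold).float() + s - s.detach()      # hard routing, soft gradients

x = torch.randn(8, 5)
logits = torch.zeros(5, requires_grad=True)
thr = torch.zeros((), requires_grad=True)
hard_split(x, logits, thr).sum().backward()              # gradients reach both parameters
```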
GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers
https://openreview.net/forum?id=uJVHygNeSZ
https://openreview.net/forum?id=uJVHygNeSZ
Takeru Miyato,Bernhard Jaeger,Max Welling,Andreas Geiger
ICLR 2024,Poster
As transformers are equivariant to the permutation of input tokens, encoding the positional information of tokens is necessary for many tasks. However, since existing positional encoding schemes were initially designed for NLP tasks, their suitability for vision tasks, which typically exhibit different structural properties in their data, is questionable. We argue that existing positional encoding schemes are suboptimal for 3D vision tasks, as they do not respect the underlying 3D geometric structure. Based on this hypothesis, we propose a geometry-aware attention mechanism that encodes the geometric structure of tokens as relative transformations determined by the geometric relationship between queries and key-value pairs. By evaluating on multiple novel view synthesis (NVS) datasets in the sparse wide-baseline multi-view setting, we show that our attention, called Geometric Transform Attention (GTA), improves the learning efficiency and performance of state-of-the-art transformer-based NVS models without any additional learned parameters and with only minor computational overhead.
https://openreview.net/pdf/8b4150f7731750cfa47b7cf419c6929fad3abbc0.pdf
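A toy sketch of geometry-aware attention: queries, keys, and values are transformed by per-token rotations so that attention scores depend only on the relative geometry between tokens, echoing GTA's design. The real method uses richer SE(3)/projective representations; 2D rotations here are a stand-in.

```python
import torch

def rot(theta):                          # (n,) angles -> (n, 2, 2) rotation matrices
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s], -1),
                        torch.stack([s, c], -1)], -2)

n, d = 6, 2
theta = torch.rand(n) * 6.28             # per-token pose angles
q, k, v = torch.randn(n, d), torch.randn(n, d), torch.randn(n, d)
R = rot(theta)

# score_ij = q_i^T R_i^T R_j k_j: only the relative transform R_i^{-1} R_j matters.
qg = torch.einsum("nij,nj->ni", R.transpose(-1, -2), q)
kg = torch.einsum("nij,nj->ni", R, k)
vg = torch.einsum("nij,nj->ni", R, v)
attn = torch.softmax(qg @ kg.T / d ** 0.5, -1)
out = torch.einsum("nij,nj->ni", R.transpose(-1, -2), attn @ vg)  # back to local frames
```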
VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models
https://openreview.net/forum?id=ygxTuVz9eU
https://openreview.net/forum?id=ygxTuVz9eU
Zihao Zhu,Mingda Zhang,Shaokui Wei,Bingzhe Wu,Baoyuan Wu
ICLR 2024,Poster
The role of data in building AI systems has recently been emphasized by the emerging concept of data-centric AI. Unfortunately, in the real world, datasets may contain dirty samples, such as poisoned samples from backdoor attacks, noisy labels in crowdsourcing, and even hybrids of them. The presence of such dirty samples makes DNNs vulnerable and unreliable. Hence, it is critical to detect dirty samples to improve the quality and reliability of a dataset. Existing detectors focus only on detecting poisoned samples or noisy labels, and are often prone to weak generalization when dealing with dirty samples from other fields. In this paper, we find that a commonality among various dirty samples is visual-linguistic inconsistency between images and associated labels. To capture the semantic inconsistency between modalities, we propose the versatile data cleanser (VDC), leveraging the superior capabilities of multimodal large language models (MLLMs) in cross-modal alignment and reasoning. It consists of three consecutive modules: the visual question generation module, which generates insightful questions about the image; the visual question answering module, which acquires the semantics of the visual content by answering the questions with the MLLM; followed by the visual answer evaluation module, which evaluates the inconsistency. Extensive experiments demonstrate its superior performance and generalization to various categories and types of dirty samples. The code is available at [https://github.com/zihao-ai/vdc](https://github.com/zihao-ai/vdc).
https://openreview.net/pdf/66009612880b659116956be01719a60fbf3fdbca.pdf
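A pipeline-level sketch of VDC's three modules. `ask_llm` and `ask_mllm` are hypothetical stand-ins for the LLM and MLLM calls; only the module structure (question generation, visual QA, answer evaluation) follows the abstract.

```python
def ask_llm(prompt: str) -> str:
    return "yes"                                    # placeholder model call

def ask_mllm(image, question: str) -> str:
    return "yes"                                    # placeholder model call

def is_dirty(image, label: str, n_questions: int = 5) -> bool:
    questions = [ask_llm(f"Question {i + 1} about an image labeled '{label}':")
                 for i in range(n_questions)]       # visual question generation
    answers = [ask_mllm(image, q) for q in questions]   # visual question answering
    verdicts = [ask_llm(f"Q: {q} A: {a}. Consistent with '{label}'? yes/no:")
                .strip().lower().startswith("yes")
                for q, a in zip(questions, answers)]    # visual answer evaluation
    return sum(verdicts) < n_questions / 2          # majority inconsistency => dirty

print(is_dirty(object(), "dog"))
```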
Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
https://openreview.net/forum?id=rTBL8OhdhH
https://openreview.net/forum?id=rTBL8OhdhH
Ziyao Guo,Kai Wang,George Cazenavette,HUI LI,Kaipeng Zhang,Yang You
ICLR 2024,Poster
The ultimate goal of Dataset Distillation is to synthesize a small synthetic dataset such that a model trained on this synthetic set performs as well as a model trained on the full, real dataset. Until now, no method of Dataset Distillation has reached this completely lossless goal, in part because previous methods only remain effective when the total number of synthetic samples is extremely small. Since only so much information can be contained in such a small number of samples, it seems that to achieve truly lossless dataset distillation, we must develop a distillation method that remains effective as the size of the synthetic dataset grows. In this work, we present such an algorithm and elucidate why existing methods fail to generate larger, high-quality synthetic sets. Current state-of-the-art methods rely on trajectory-matching, i.e., optimizing the synthetic data to induce similar long-term training dynamics as the real data. We empirically find that the training stage of the trajectories we choose to match (i.e., early or late) greatly affects the effectiveness of the distilled dataset. Specifically, early trajectories (where the teacher network learns easy patterns) work well for a low-cardinality synthetic set, since there are fewer examples across which to distribute the necessary information. Conversely, late trajectories (where the teacher network learns hard patterns) provide better signals for larger synthetic sets, since there are now enough samples to represent the necessary complex patterns. Based on our findings, we propose to align the difficulty of the generated patterns with the size of the synthetic dataset. In doing so, we successfully scale trajectory-matching-based methods to larger synthetic datasets, achieving lossless dataset distillation for the very first time. Code and distilled datasets will be released.
https://openreview.net/pdf/76b4ca911ace4176947d021053d07d288e44f1a2.pdf
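A sketch of the difficulty-alignment idea: larger synthetic sets are matched against later (harder) segments of the expert trajectory, smaller sets against earlier ones. The linear schedule below is an illustrative assumption, not the paper's exact rule.

```python
def matching_window(n_synthetic, n_max, n_epochs_expert, window=10):
    """Pick which expert-trajectory epochs to match for a given synthetic set size."""
    frac = min(n_synthetic / n_max, 1.0)             # 0 = tiny set, 1 = large set
    start = int(frac * (n_epochs_expert - window))   # early -> easy, late -> hard
    return range(start, start + window)

print(list(matching_window(500, 50_000, 100)))       # small set: early epochs
print(list(matching_window(25_000, 50_000, 100)))    # larger set: later epochs
```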
SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning
https://openreview.net/forum?id=vLJcd43U7a
https://openreview.net/forum?id=vLJcd43U7a
Jiacheng Chen,Zeyuan Ma,Hongshu Guo,Yining Ma,Jie Zhang,Yue-Jiao Gong
ICLR 2024,Poster
Recent Meta-learning for Black-Box Optimization (MetaBBO) methods harness neural networks to meta-learn configurations of traditional black-box optimizers. Despite their success, they are inevitably restricted by the limitations of predefined hand-crafted optimizers. In this paper, we present SYMBOL, a novel framework that promotes the automated discovery of black-box optimizers through symbolic equation learning. Specifically, we propose a Symbolic Equation Generator (SEG) that allows closed-form optimization rules to be dynamically generated for specific tasks and optimization steps. Within SYMBOL, we then develop three distinct strategies based on reinforcement learning, so as to meta-learn the SEG efficiently. Extensive experiments reveal that the optimizers generated by SYMBOL not only surpass the state-of-the-art BBO and MetaBBO baselines, but also exhibit exceptional zero-shot generalization abilities across entirely unseen tasks with different problem dimensions, population sizes, and optimization horizons. Furthermore, we conduct in-depth analyses of our SYMBOL framework and the optimization rules that it generates, underscoring its desirable flexibility and interpretability.
https://openreview.net/pdf/97e4a97ace4b045a200769d9c4b982fa976fb93d.pdf
SEA: Sparse Linear Attention with Estimated Attention Mask
https://openreview.net/forum?id=JbcwfmYrob
https://openreview.net/forum?id=JbcwfmYrob
Heejun Lee,Jina Kim,Jeffrey Willette,Sung Ju Hwang
ICLR 2024,Poster
The transformer architecture has driven breakthroughs in recent years on tasks which require modeling pairwise relationships between sequential elements, as is the case in natural language understanding. However, long sequences pose a problem due to the quadratic complexity of the attention operation. Previous research has aimed to lower the complexity by sparsifying or linearly approximating the attention matrix. Yet, these approaches cannot straightforwardly distill knowledge from a teacher's attention matrix, and often require complete retraining from scratch. Furthermore, previous sparse and linear approaches lose interpretability if they cannot produce full attention matrices. To address these challenges, we propose SEA: Sparse linear attention with an Estimated Attention mask. SEA estimates the attention matrix with linear complexity via kernel-based linear attention, then subsequently creates a sparse attention matrix with a top-k̂ selection to perform a sparse attention operation. For language modeling tasks (Wikitext2), previous linear and sparse attention methods show roughly two-fold worse perplexity scores over the quadratic OPT-1.3B baseline, while SEA achieves better perplexity than OPT-1.3B, using roughly half the memory of OPT-1.3B. Moreover, SEA maintains an interpretable attention matrix and can utilize knowledge distillation to lower the complexity of existing pretrained transformers. We believe that our work will have a large practical impact, as it opens the possibility of running large transformers on resource-limited devices with less memory. Code: https://github.com/gmlwns2000/sea-attention
https://openreview.net/pdf/abc9f142fade538154ad1407071b12115c85b0af.pdf
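A minimal sketch of SEA's two stages: a kernel-based estimate of the attention matrix, followed by top-k sparsification. The ELU+1 feature map is a common linear-attention choice assumed here, and the dense materialization below forgoes the linear complexity of the real pipeline for clarity.

```python
import torch

def sea_attention(q, k, v, topk=4):
    phi = lambda x: torch.nn.functional.elu(x) + 1           # kernel feature map
    est = phi(q) @ phi(k).T                                  # estimated attention
    est = est / est.sum(-1, keepdim=True)
    mask = torch.zeros_like(est)
    mask.scatter_(-1, est.topk(topk, dim=-1).indices, 1.0)   # keep top-k entries
    scores = (q @ k.T / q.shape[-1] ** 0.5).masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, -1) @ v                     # sparse attention op

n, d = 16, 32
out = sea_attention(torch.randn(n, d), torch.randn(n, d), torch.randn(n, d))
```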
Zero-Mean Regularized Spectral Contrastive Learning: Implicitly Mitigating Wrong Connections in Positive-Pair Graphs
https://openreview.net/forum?id=RZBy8oHTz4
https://openreview.net/forum?id=RZBy8oHTz4
Xiong Zhou,Xianming Liu,Feilong Zhang,Gang Wu,Deming Zhai,Junjun Jiang,Xiangyang Ji
ICLR 2024,Poster
Contrastive learning has emerged as a popular paradigm of self-supervised learning that learns representations by encouraging representations of positive pairs to be similar while pushing representations of negative pairs far apart. The spectral contrastive loss, in synergy with the notion of positive-pair graphs, offers valuable theoretical insights into the empirical successes of contrastive learning. In this paper, we propose incorporating an additive factor into the negative-pair term of the spectral contrastive loss. This simple modification can be equivalently viewed as introducing a regularization term that enforces the mean of representations to be zero, and is thus referred to as *zero-mean regularization*. It intuitively relaxes the orthogonality of representations between negative pairs and implicitly alleviates the adverse effect of wrong connections in the positive-pair graph, leading to better performance and robustness. To clarify this, we thoroughly investigate the role of the zero-mean regularized spectral contrastive loss in both unsupervised and supervised scenarios, with respect to both theoretical analysis and quantitative evaluation. These results highlight the potential of zero-mean regularized spectral contrastive learning to be a promising approach in various tasks.
https://openreview.net/pdf/9cb499c6b04df9a78615ec3114dda464d2df3738.pdf
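A toy sketch of the modification the abstract describes: the standard spectral contrastive loss with an additive factor c inside the negative-pair term. The batch-wise estimator and the value of c are illustrative assumptions.

```python
import torch

def zm_spectral_loss(z1, z2, c=1.0):
    """z1, z2: (batch, d) representations of two views of the same samples."""
    pos = -2 * (z1 * z2).sum(-1).mean()                # pull positive pairs together
    gram = z1 @ z2.T
    off = gram + c                                     # additive factor on all pairs
    neg = (off ** 2).sum() - (gram.diag() + c).pow(2).sum()  # off-diagonal = negatives
    neg = neg / (z1.shape[0] * (z1.shape[0] - 1))
    return pos + neg

loss = zm_spectral_loss(torch.randn(8, 16), torch.randn(8, 16))
```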
Variance-enlarged Poisson Learning for Graph-based Semi-Supervised Learning with Extremely Sparse Labeled Data
https://openreview.net/forum?id=yeeVBMDAwy
https://openreview.net/forum?id=yeeVBMDAwy
Xiong Zhou,Xianming Liu,Hao Yu,Jialiang Wang,Zeke Xie,Junjun Jiang,Xiangyang Ji
ICLR 2024,Poster
Graph-based semi-supervised learning, particularly in the context of extremely sparse labeled data, often suffers from degenerate solutions where label functions tend to be nearly constant across unlabeled data. In this paper, we introduce Variance-enlarged Poisson Learning (VPL), a simple yet powerful framework tailored to alleviate the issues arising from the presence of degenerate solutions. VPL incorporates a variance-enlarged regularization term, which induces a Poisson equation specifically for unlabeled data. This intuitive approach increases the dispersion of labels from their mean, effectively reducing the likelihood of degenerate solutions characterized by nearly constant label functions. We subsequently introduce two streamlined algorithms, V-Laplace and V-Poisson, each designed to enhance Laplace and Poisson learning, respectively. Furthermore, we broaden the scope of VPL to encompass graph neural networks, introducing Variance-enlarged Graph Poisson Networks (V-GPN) to facilitate improved label propagation. To achieve a deeper understanding of VPL's behavior, we conduct a comprehensive theoretical exploration in both the discrete and variational cases. Our findings elucidate that VPL inherently amplifies the importance of connections within the same class while concurrently tempering those between different classes. We support our claims with extensive experiments, demonstrating the effectiveness of VPL and showcasing its superiority over existing methods. The code is available at https://github.com/hitcszx/VPL.
https://openreview.net/pdf/df906b36fa0d2fb0102380e2b7e72da2e53d32c8.pdf
Enhancing Contrastive Learning for Ordinal Regression via Ordinal Content Preserved Data Augmentation
https://openreview.net/forum?id=kx2XZlmgB1
https://openreview.net/forum?id=kx2XZlmgB1
Jiyang Zheng,Yu Yao,Bo Han,Dadong Wang,Tongliang Liu
ICLR 2024,Poster
Contrastive learning, while highly effective for many tasks, shows limited improvement in ordinal regression. We find that the limitation comes from the predefined strong data augmentations employed in contrastive learning. Intuitively, for ordinal regression datasets, the discriminative information (ordinal content information) contained in instances is subtle. Strong augmentations can easily overshadow or diminish this ordinal content information. As a result, when contrastive learning is used to extract common features between weakly and strongly augmented images, the derived features often lack this essential ordinal content, rendering them less useful for training models for ordinal regression. To improve contrastive learning's utility for ordinal regression, we propose a novel augmentation method to replace the predefined strong augmentation, based on the principle of minimal change. Our method is designed in a generative manner and can effectively generate images with different styles that still contain the desired ordinal content information. Extensive experiments validate the effectiveness of our proposed method, which serves as a plug-and-play solution and consistently improves the performance of existing state-of-the-art methods in ordinal regression tasks.
https://openreview.net/pdf/b2b7b12bdc0cd7d0f495fdc94d0d534fe7af6548.pdf
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
https://openreview.net/forum?id=pTHfApDakA
https://openreview.net/forum?id=pTHfApDakA
Ning Miao,Yee Whye Teh,Tom Rainforth
ICLR 2024,Poster
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs are able to recognize errors in their own step-by-step reasoning, without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification schema for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on math- and logic-based datasets and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
https://openreview.net/pdf/4abf17ed9d1ca7b68a9c5ee39c9748a16cbab8f7.pdf
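A sketch of SelfCheck's integration step: each sampled solution receives a confidence from per-step verification, and final answers are chosen by confidence-weighted voting. `check_step` is a hypothetical stand-in for the LLM verification call, and the product-of-scores aggregation is an illustrative assumption.

```python
from collections import defaultdict

def check_step(question, prior_steps, step) -> float:
    return 1.0                              # stand-in for an LLM verdict in [0, 1]

def selfcheck_vote(question, solutions):
    """solutions: list of (steps, final_answer) pairs sampled from an LLM."""
    votes = defaultdict(float)
    for steps, answer in solutions:
        conf = 1.0
        for i, s in enumerate(steps):       # zero-shot check of each step in context
            conf *= check_step(question, steps[:i], s)
        votes[answer] += conf               # confidence-weighted vote
    return max(votes, key=votes.get)

print(selfcheck_vote("What is 7 * 8?",
                     [(["7 * 8 = 56"], "56"), (["7 * 8 = 54"], "54")]))
```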
OmniControl: Control Any Joint at Any Time for Human Motion Generation
https://openreview.net/forum?id=gd0lAEtWso
https://openreview.net/forum?id=gd0lAEtWso
Yiming Xie,Varun Jampani,Lei Zhong,Deqing Sun,Huaizu Jiang
ICLR 2024,Poster
We present a novel approach named OmniControl for incorporating flexible spatial control signals into a text-conditioned human motion generation model based on the diffusion process. Unlike previous methods that can only control the pelvis trajectory, OmniControl can incorporate flexible spatial control signals over different joints at different times with only one model. Specifically, we propose analytic spatial guidance that ensures the generated motion can tightly conform to the input control signals. At the same time, realism guidance is introduced to refine all the joints to generate more coherent motion. Both the spatial and realism guidance are essential and they are highly complementary for balancing control accuracy and motion realism. By combining them, OmniControl generates motions that are realistic, coherent, and consistent with the spatial constraints. Experiments on HumanML3D and KIT-ML datasets show that OmniControl not only achieves significant improvement over state-of-the-art methods on pelvis control but also shows promising results when incorporating the constraints over other joints. Project page: https://neu-vi.github.io/omnicontrol/.
https://openreview.net/pdf/ccde3adf1de96ef348db1adc995af579e407bada.pdf
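A minimal sketch of analytic spatial guidance as the abstract describes it: at each diffusion step, nudge the noisy motion toward the spatial constraints via the gradient of a control loss on the predicted clean motion. The single gradient step, the scale, and the stand-in `denoise` model are simplifications; realism guidance is omitted.

```python
import torch

def guided_step(x_t, t, denoise, joint_idx, target_xyz, scale=0.1):
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoise(x_t, t)                              # predicted clean motion
    ctrl = ((x0_hat[..., joint_idx, :] - target_xyz) ** 2).sum()
    grad, = torch.autograd.grad(ctrl, x_t)
    return (x_t - scale * grad).detach()                  # spatially guided update

denoise = lambda x, t: 0.9 * x                            # stand-in motion model
x = torch.randn(1, 60, 22, 3)                             # (batch, frames, joints, xyz)
x = guided_step(x, t=10, denoise=denoise, joint_idx=0, target_xyz=torch.zeros(3))
```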
Guaranteed Approximation Bounds for Mixed-Precision Neural Operators
https://openreview.net/forum?id=QJGj07PD9C
https://openreview.net/forum?id=QJGj07PD9C
Renbo Tu,Colin White,Jean Kossaifi,Boris Bonev,Gennady Pekhimenko,Kamyar Azizzadenesheli,Anima Anandkumar
ICLR 2024,Poster
Neural operators, such as Fourier Neural Operators (FNO), form a principled approach for learning solution operators for partial differential equations (PDE) and other mappings between function spaces. However, many real-world problems require high-resolution training data, and the training time and limited GPU memory pose big barriers. One solution is to train neural operators in mixed precision to reduce the memory requirement and increase training speed. However, existing mixed-precision training techniques are designed for standard neural networks, and we find that their direct application to FNO leads to numerical overflow and poor memory efficiency. Further, at first glance, it may appear that mixed precision in FNO will lead to drastic accuracy degradation since reducing the precision of the Fourier transform yields poor results in classical numerical solvers. We show that this is not the case; in fact, we prove that reducing the precision in FNO still guarantees a good approximation bound, when done in a targeted manner. Specifically, we build on the intuition that neural operator learning inherently induces an approximation error, arising from discretizing the infinite-dimensional ground-truth input function, implying that training in full precision is not needed. We formalize this intuition by rigorously characterizing the approximation and precision errors of FNO and bounding these errors for general input functions. We prove that the precision error is asymptotically comparable to the approximation error. Based on this, we design a simple method to optimize the memory-intensive half-precision tensor contractions by greedily finding the optimal contraction order. Through extensive experiments on different state-of-the-art neural operators, datasets, and GPUs, we demonstrate that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
https://openreview.net/pdf/6c0041c80aa708b16a4dd909b572745445905c4c.pdf
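A sketch of targeted mixed precision for an FNO-style layer: the memory-heavy spectral tensor contraction runs in reduced precision while the FFT itself stays in full precision, reflecting the "targeted" reduction the abstract describes. The bfloat16 cast stands in for the paper's half-precision contraction, and the greedy contraction-order search is omitted.

```python
import torch

def spectral_conv(x, w, modes):
    """x: (batch, in_ch, n) float32; w: (in_ch, out_ch, modes) complex64."""
    x_ft = torch.fft.rfft(x)                      # FFT kept in full precision
    xm = x_ft[..., :modes]
    def contract(a, b):                           # reduced-precision contraction
        return torch.einsum("bim,iom->bom", a.bfloat16(), b.bfloat16()).float()
    real = contract(xm.real, w.real) - contract(xm.imag, w.imag)
    imag = contract(xm.real, w.imag) + contract(xm.imag, w.real)
    out_ft = torch.zeros(x.shape[0], w.shape[1], x_ft.shape[-1], dtype=torch.complex64)
    out_ft[..., :modes] = torch.complex(real, imag)
    return torch.fft.irfft(out_ft, n=x.shape[-1])

y = spectral_conv(torch.randn(2, 4, 64),
                  torch.randn(4, 4, 16, dtype=torch.complex64), modes=16)
```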
Geometry-Aware Projective Mapping for Unbounded Neural Radiance Fields
https://openreview.net/forum?id=w7BwaDHppp
https://openreview.net/forum?id=w7BwaDHppp
Junoh Lee,Hyunjun Jung,Jin-Hwi Park,Inhwan Bae,Hae-Gon Jeon
ICLR 2024,Poster
Estimating neural radiance fields (NeRFs) makes it possible to generate novel views of a scene from known imagery. Recent approaches have afforded dramatic progress on small bounded regions of the scene. For an unbounded scene where cameras point in any direction and contents exist at any distance, certain mapping functions are used to represent it within a bounded space, yet they either work in object-centric scenes or focus on objects close to the camera. The goal of this paper is to understand how to design a proper mapping function that considers per-scene optimization, which remains unexplored. We first present a geometric understanding of existing mapping functions that express the relation between the bounded and unbounded scenes. Here, we exploit a stereographic projection method to explain failures of the mapping functions, where input ray samples are too sparse to account for scene geometry in unbounded regions. To overcome the failures, we propose a novel mapping function based on a $p$-norm distance, allowing rays to be adaptively sampled by adjusting the $p$-value according to scene geometry, even in unbounded regions. To take advantage of our mapping function, we also introduce a new ray parameterization to properly allocate ray samples in the geometry of unbounded regions. By incorporating both the novel mapping function and the ray parameterization within existing NeRF frameworks, our method achieves state-of-the-art novel view synthesis results on a variety of challenging datasets.
https://openreview.net/pdf/9a380db5c8eddb8e8366c87500c05613bc30bcc7.pdf
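A hedged sketch of a $p$-norm scene contraction for unbounded NeRFs: points inside the unit $p$-ball are kept, points outside are smoothly mapped into a bounded shell, with $p$ controlling how samples spread along each axis. The formula is the standard "contract" map generalized to a $p$-norm, which is an assumption about the paper's construction, not its exact definition.

```python
import numpy as np

def contract_p(x, p=2.0):
    """Map unbounded 3D points into the radius-2 p-ball."""
    r = np.linalg.norm(x, ord=p, axis=-1, keepdims=True)
    return np.where(r <= 1.0, x, (2.0 - 1.0 / r) * x / r)

pts = np.array([[0.3, 0.1, 0.2], [10.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
print(contract_p(pts, p=1.5))   # all outputs lie inside the bounded shell
```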
REValueD: Regularised Ensemble Value-Decomposition for Factorisable Markov Decision Processes
https://openreview.net/forum?id=Gf15GsnfTy
https://openreview.net/forum?id=Gf15GsnfTy
David Ireland,Giovanni Montana
ICLR 2024,Poster
Discrete-action reinforcement learning algorithms often falter in tasks with high-dimensional discrete action spaces due to the vast number of possible actions. A recent advancement leverages value-decomposition, a concept from multi-agent reinforcement learning, to tackle this challenge. This study delves deep into the effects of this value-decomposition, revealing that whilst it curtails the over-estimation bias inherent to Q-learning algorithms, it amplifies target variance. To counteract this, we present an ensemble of critics to mitigate target variance. Moreover, we introduce a regularisation loss that helps to mitigate the effects that exploratory actions in one dimension can have on the value of optimal actions in other dimensions. Our novel algorithm, REValueD, tested on discretised versions of the DeepMind Control Suite tasks, showcases superior performance, especially in the challenging humanoid and dog tasks. We further dissect the factors influencing REValueD's performance, evaluating the significance of the regularisation loss and the scalability of REValueD with increasing sub-actions per dimension.
https://openreview.net/pdf/7c65f83a959b080c4e7067cfb42e34fe41ed7631.pdf
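A sketch of value decomposition for factorisable action spaces with an ensemble of critics, the two ingredients the REValueD abstract combines. Sizes, the mean combiner, and the omission of the regularisation loss are simplifications.

```python
import torch

class DecomposedQ(torch.nn.Module):
    """One utility head per action dimension, as in value-decomposition."""
    def __init__(self, obs_dim, n_dims, n_sub_actions, hidden=64):
        super().__init__()
        self.heads = torch.nn.ModuleList(
            torch.nn.Sequential(torch.nn.Linear(obs_dim, hidden), torch.nn.ReLU(),
                                torch.nn.Linear(hidden, n_sub_actions))
            for _ in range(n_dims))

    def forward(self, obs):                   # -> (batch, n_dims, n_sub_actions)
        return torch.stack([h(obs) for h in self.heads], dim=1)

critics = [DecomposedQ(8, n_dims=4, n_sub_actions=3) for _ in range(5)]  # ensemble
obs = torch.randn(2, 8)
q = torch.stack([c(obs) for c in critics]).mean(0)   # averaging shrinks target variance
action = q.argmax(-1)                                # greedy sub-action per dimension
```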
Path Choice Matters for Clear Attributions in Path Methods
https://openreview.net/forum?id=gzYgsZgwXa
https://openreview.net/forum?id=gzYgsZgwXa
Borui Zhang,Wenzhao Zheng,Jie Zhou,Jiwen Lu
ICLR 2024,Poster
Rigorousness and clarity are both essential for interpretations of DNNs to engender human trust. Path methods are commonly employed to generate rigorous attributions that satisfy three axioms. However, the meaning of attributions remains ambiguous due to distinct path choices. To address the ambiguity, we introduce the Concentration Principle, which centrally allocates high attributions to indispensable features, thereby endowing attributions with aesthetics and sparsity. We then present SAMP, a model-agnostic interpreter, which efficiently searches for the near-optimal path from a pre-defined set of manipulation paths. Moreover, we propose the infinitesimal constraint (IC) and momentum strategy (MS) to improve rigorousness and optimality. Visualizations show that SAMP can precisely reveal DNNs by pinpointing salient image pixels. We also perform quantitative experiments and observe that our method significantly outperforms the counterparts.
https://openreview.net/pdf/fdfec76299aea6f4172a06958754d19d20b2be55.pdf
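For context, a baseline path-method sketch (integrated gradients along a straight-line path). SAMP's contribution is to instead search a near-optimal path that concentrates attributions; this toy only shows the path-attribution machinery the paper builds on.

```python
import torch

def path_attribution(model, x, baseline, steps=50):
    """Integrate gradients along a straight path from baseline to x."""
    alphas = torch.linspace(0, 1, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)      # straight-line path
    path.requires_grad_(True)
    model(path).sum().backward()
    avg_grad = path.grad.mean(0)
    return (x - baseline) * avg_grad               # satisfies the completeness axiom

model = torch.nn.Sequential(torch.nn.Linear(4, 1))
attr = path_attribution(model, torch.randn(4), torch.zeros(4))
```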
Exploring Target Representations for Masked Autoencoders
https://openreview.net/forum?id=xmQMz9OPF5
https://openreview.net/forum?id=xmQMz9OPF5
xingbin liu,Jinghao Zhou,Tao Kong,Xianming Lin,Rongrong Ji
ICLR 2024,Poster
Masked autoencoders have become popular training paradigms for self-supervised visual representation learning. These models randomly mask a portion of the input and reconstruct the masked portion according to assigned target representations. In this paper, we show that a careful choice of the target representation is unnecessary for learning good visual representations, since different targets tend to derive similarly behaved models. Driven by this observation, we propose a multi-stage masked distillation pipeline that uses a randomly initialized model as the teacher, enabling us to effectively train high-capacity models without any effort to carefully design the target representation. On various downstream tasks, the proposed method, which performs masked knowledge distillation with bootstrapped teachers (dbot), outperforms previous self-supervised methods by nontrivial margins. We hope our findings, as well as the proposed method, can motivate people to rethink the roles of target representations in pre-training masked autoencoders.
https://openreview.net/pdf/d05fab1c9b21d690b4a92e177803416cdf36d678.pdf
Koopman-based generalization bound: New aspect for full-rank weights
https://openreview.net/forum?id=JN7TcCm9LF
https://openreview.net/forum?id=JN7TcCm9LF
Yuka Hashimoto,Sho Sonoda,Isao Ishikawa,Atsushi Nitanda,Taiji Suzuki
ICLR 2024,Poster
We propose a new bound for the generalization of neural networks using Koopman operators. Whereas most existing works focus on low-rank weight matrices, we focus on full-rank weight matrices. Our bound is tighter than existing norm-based bounds when the condition numbers of the weight matrices are small. In particular, it is completely independent of the width of the network if the weight matrices are orthogonal. Our bound does not contradict existing bounds but rather complements them. As supported by several existing empirical results, low-rankness is not the only reason for generalization. Furthermore, our bound can be combined with existing bounds to obtain a tighter bound. Our result sheds new light on understanding the generalization of neural networks with full-rank weight matrices, and it provides a connection between operator-theoretic analysis and the generalization of neural networks.
https://openreview.net/pdf/39b09431cd59c08f98c5e21c1cc1d7783a995df4.pdf
Gaining Wisdom from Setbacks: Aligning Large Language Models via Mistake Analysis
https://openreview.net/forum?id=aA33A70IO6
https://openreview.net/forum?id=aA33A70IO6
Kai Chen,Chunwei Wang,Kuo Yang,Jianhua Han,Lanqing HONG,Fei Mi,Hang Xu,Zhengying Liu,Wenyong Huang,Zhenguo Li,Dit-Yan Yeung,Lifeng Shang
ICLR 2024,Poster
The rapid development of large language models (LLMs) has not only provided numerous opportunities but also presented significant challenges. This becomes particularly evident when LLMs generate harmful or toxic content, whether unintentionally or through deliberate inducement. Existing alignment methods usually direct LLMs toward favorable outcomes by utilizing human-annotated, flawless instruction-response pairs. Conversely, this study proposes a novel alignment technique based on mistake analysis, which deliberately exposes LLMs to erroneous content so they learn the reasons for mistakes and how to avoid them. In this way, mistakes are repurposed into valuable data for alignment, effectively helping to avoid the production of erroneous responses. Without external models or human annotations, our method leverages a model's intrinsic ability to discern undesirable mistakes and improves the safety of its generated responses. Experimental results reveal that our method outperforms existing alignment approaches in enhancing model safety while maintaining overall utility.
https://openreview.net/pdf/d3ae49610289200e85da374558dbfbd71bad4ae5.pdf
MagicDrive: Street View Generation with Diverse 3D Geometry Control
https://openreview.net/forum?id=sBQwvucduK
https://openreview.net/forum?id=sBQwvucduK
Ruiyuan Gao,Kai Chen,Enze Xie,Lanqing HONG,Zhenguo Li,Dit-Yan Yeung,Qiang Xu
ICLR 2024,Poster
Recent advancements in diffusion models have significantly enhanced data synthesis with 2D control. Yet, precise 3D control in street view generation, crucial for 3D perception tasks, remains elusive. Specifically, utilizing Bird's-Eye View (BEV) as the primary condition often leads to challenges in geometry control (e.g., height), affecting the representation of object shapes, occlusion patterns, and road surface elevations, all of which are essential to perception data synthesis, especially for 3D object detection tasks. In this paper, we introduce MagicDrive, a novel street view generation framework offering diverse 3D geometry controls, including camera poses, road maps, and 3D bounding boxes, together with textual descriptions, achieved through tailored encoding strategies. In addition, our design incorporates a cross-view attention module, ensuring consistency across multiple camera views. With MagicDrive, we achieve high-fidelity street-view image & video synthesis that captures nuanced 3D geometry and various scene descriptions, enhancing tasks like BEV segmentation and 3D object detection. Project Website: https://flymin.github.io/magicdrive
https://openreview.net/pdf/24ee27e06af8a9a4d217bf99e7d46340c5b078b0.pdf
MogaNet: Multi-order Gated Aggregation Network
https://openreview.net/forum?id=XhYWgjqCrV
https://openreview.net/forum?id=XhYWgjqCrV
Siyuan Li,Zedong Wang,Zicheng Liu,Cheng Tan,Haitao Lin,Di Wu,Zhiyuan Chen,Jiangbin Zheng,Stan Z. Li
ICLR 2024,Poster
By making kernel context as global as possible, modern ConvNets have shown great potential in computer vision tasks. However, recent progress on \textit{multi-order game-theoretic interaction} within deep neural networks (DNNs) reveals the representation bottleneck of modern ConvNets, where the expressive interactions have not been effectively encoded with the increased kernel size. To tackle this challenge, we propose a new family of modern ConvNets, dubbed MogaNet, for discriminative visual representation learning in pure ConvNet-based models with favorable complexity-performance trade-offs. MogaNet encapsulates conceptually simple yet effective convolutions and gated aggregation into a compact module, where discriminative features are efficiently gathered and contextualized adaptively. MogaNet exhibits great scalability, impressive parameter efficiency, and competitive performance compared to state-of-the-art ViTs and ConvNets on ImageNet and various downstream vision benchmarks, including COCO object detection, ADE20K semantic segmentation, 2D\&3D human pose estimation, and video prediction. Notably, MogaNet hits 80.0\% and 87.8\% accuracy with 5.2M and 181M parameters on ImageNet-1K, outperforming ParC-Net and ConvNeXt-L while saving 59\% FLOPs and 17M parameters, respectively. The source code is available at https://github.com/Westlake-AI/MogaNet.
https://openreview.net/pdf/c97ca9c77b004d29ce9d4a0ee49d8c4af7c66111.pdf
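A toy sketch of a gated-aggregation block in the spirit of MogaNet: multi-order depthwise convolutions gather context at several ranges, and a gating branch modulates the aggregated features. The channel layout, kernel sizes, and sigmoid gate are guesses for illustration, not the paper's exact module.

```python
import torch

class GatedAggregation(torch.nn.Module):
    def __init__(self, c):
        super().__init__()
        self.dw5 = torch.nn.Conv2d(c, c, 5, padding=2, groups=c)          # short range
        self.dw7 = torch.nn.Conv2d(c, c, 7, padding=9, dilation=3, groups=c)  # long range
        self.gate = torch.nn.Conv2d(c, c, 1)
        self.proj = torch.nn.Conv2d(c, c, 1)

    def forward(self, x):
        ctx = self.dw5(x) + self.dw7(x)                         # multi-order context
        return self.proj(torch.sigmoid(self.gate(x)) * ctx)    # gated aggregation

y = GatedAggregation(16)(torch.randn(1, 16, 32, 32))
```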
GeoDiffusion: Text-Prompted Geometric Control for Object Detection Data Generation
https://openreview.net/forum?id=xBfQZWeDRH
https://openreview.net/forum?id=xBfQZWeDRH
Kai Chen,Enze Xie,Zhe Chen,Yibo Wang,Lanqing HONG,Zhenguo Li,Dit-Yan Yeung
ICLR 2024,Poster
Diffusion models have attracted significant attention due to their remarkable ability to create content and generate data for tasks like image classification. However, the usage of diffusion models to generate high-quality object detection data remains underexplored, where not only image-level perceptual quality but also geometric conditions such as bounding boxes and camera views are essential. Previous studies have utilized either copy-paste synthesis or layout-to-image (L2I) generation with specifically designed modules to encode the semantic layouts. In this paper, we propose GeoDiffusion, a simple framework that can flexibly translate various geometric conditions into text prompts and empower pre-trained text-to-image (T2I) diffusion models for high-quality detection data generation. Unlike previous L2I methods, our GeoDiffusion is able to encode not only the bounding boxes but also extra geometric conditions, such as camera views in self-driving scenes. Extensive experiments demonstrate that GeoDiffusion outperforms previous L2I methods while training 4x faster. To the best of our knowledge, this is the first work to adopt diffusion models for layout-to-image generation with geometric conditions and to demonstrate that L2I-generated images can be beneficial for improving the performance of object detectors.
https://openreview.net/pdf/4120084299533337d90cfa998fd0b8592f8587ac.pdf
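An illustrative serialization of geometric conditions into a text prompt, the core translation step the GeoDiffusion abstract describes. The discretized location-bin token format is a plausible assumption, not the paper's exact grammar.

```python
def boxes_to_prompt(caption, boxes, bins=200):
    """boxes: list of (class_name, x0, y0, x1, y1) with coordinates in [0, 1]."""
    parts = [caption]
    for name, *xyxy in boxes:
        toks = " ".join(f"<bin_{int(v * (bins - 1))}>" for v in xyxy)
        parts.append(f"{name} {toks}")
    return "; ".join(parts)

print(boxes_to_prompt("a street scene, front camera",
                      [("car", 0.1, 0.5, 0.4, 0.9),
                       ("pedestrian", 0.6, 0.4, 0.7, 0.9)]))
```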
Un-Mixing Test-Time Normalization Statistics: Combatting Label Temporal Correlation
https://openreview.net/forum?id=xyxU99Nutg
https://openreview.net/forum?id=xyxU99Nutg
Devavrat Tomar,Guillaume Vray,Jean-Philippe Thiran,Behzad Bozorgtabar
ICLR 2024,Poster
Recent test-time adaptation methods heavily rely on nuanced adjustments of batch normalization (BN) parameters. However, one critical assumption often goes overlooked: that of independently and identically distributed (i.i.d.) test batches with respect to unknown labels. This oversight leads to skewed BN statistics and undermines the reliability of the model under non-i.i.d. scenarios. To tackle this challenge, this paper presents a novel method termed '$\textbf{Un-Mix}$ing $\textbf{T}$est-Time $\textbf{N}$ormalization $\textbf{S}$tatistics' (UnMix-TNS). Our method re-calibrates the statistics for each instance within a test batch by $\textit{mixing}$ it with multiple distinct statistics components, thus inherently simulating the i.i.d. scenario. The core of this method hinges on a distinctive online $\textit{unmixing}$ procedure that continuously updates these statistics components by incorporating the most similar instances from new test batches. Remarkably generic in its design, UnMix-TNS seamlessly integrates with a wide range of leading test-time adaptation methods and pre-trained architectures equipped with BN layers. Empirical evaluations corroborate the robustness of UnMix-TNS under varied scenarios—ranging from single to continual and mixed domain shifts, particularly excelling with temporally correlated test data and corrupted non-i.i.d. real-world streams. This adaptability is maintained even with very small batch sizes or single instances. Our results highlight UnMix-TNS's capacity to markedly enhance stability and performance across various benchmarks. Our code is publicly available at https://github.com/devavratTomar/unmixtns.
https://openreview.net/pdf/60cb9a745b90bacd6f6b78a18bb6e64215a62c5d.pdf
Constraint-Free Structure Learning with Smooth Acyclic Orientations
https://openreview.net/forum?id=KWO8LSUC5W
https://openreview.net/forum?id=KWO8LSUC5W
Riccardo Massidda,Francesco Landolfi,Martina Cinquini,Davide Bacciu
ICLR 2024,Poster
The structure learning problem consists of fitting data generated by a Directed Acyclic Graph (DAG) to correctly reconstruct its arcs. In this context, differentiable approaches constrain or regularize an optimization problem with a continuous relaxation of the acyclicity property. The computational cost of evaluating graph acyclicity is cubic in the number of nodes and significantly affects scalability. In this paper, we introduce COSMO, a constraint-free continuous optimization scheme for acyclic structure learning. At the core of our method lies a novel differentiable approximation of an orientation matrix parameterized by a single priority vector. Unlike previous works, our parameterization fits a smooth orientation matrix and the resulting acyclic adjacency matrix without evaluating acyclicity at any step. Despite this absence, we prove that COSMO always converges to an acyclic solution. In addition to being asymptotically faster, our empirical analysis highlights how COSMO's performance on graph reconstruction compares favorably with competing structure learning methods.
https://openreview.net/pdf/59eb1510b8ba77c7b7be42e8ab1b96dd793c2bea.pdf
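A sketch of a smooth acyclic orientation from a single priority vector, the construction the COSMO abstract describes: edge (i, j) is retained only to the extent that node j's priority exceeds node i's, so any hard thresholding of the result is acyclic. The sigmoid form and temperature are illustrative choices.

```python
import numpy as np

def smooth_orientation(W, p, temp=0.1):
    """W: (n, n) nonnegative skeleton weights; p: (n,) priority vector."""
    diff = p[None, :] - p[:, None]                  # p_j - p_i for edge i -> j
    orient = 1.0 / (1.0 + np.exp(-diff / temp))     # smooth orientation matrix
    np.fill_diagonal(orient, 0.0)
    return W * orient                               # smoothly oriented adjacency

A = smooth_orientation(np.abs(np.random.default_rng(0).normal(size=(4, 4))),
                       p=np.array([0.0, 1.0, 2.0, 3.0]))
```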
Pareto Deep Long-Tailed Recognition: A Conflict-Averse Solution
https://openreview.net/forum?id=b66P1u0k15
https://openreview.net/forum?id=b66P1u0k15
Zhipeng Zhou,Liu Liu,Peilin Zhao,Wei Gong
ICLR 2024,Poster
Deep long-tailed recognition (DLTR) has attracted much attention due to its close connection to realistic scenarios. Recent advances have focused on re-balancing across various aspects, e.g., sampling strategy, loss re-weighting, logit adjustment, and input/parameter perturbation, to name a few. However, few studies have considered dynamic re-balancing to address intrinsic optimization conflicts. In this paper, we first empirically argue that the optimization of mainstream DLTR methods is still dominated by some categories (e.g., the majority classes) due to a fixed re-balancing strategy. Thus, they fail to deal with gradient conflicts among categories, which naturally motivates the pursuit of Pareto-optimal solutions. Unfortunately, a naive integration of multi-objective optimization (MOO) with DLTR methods is not applicable, due to the gap between multi-task learning (MTL) and DLTR, and can in turn lead to class-specific feature degradation. Thus, we provide effective alternatives by decoupling MOO-based MTL from the temporal rather than the structural perspective, and enhancing it by optimizing a variability collapse loss motivated by the derived MOO-based DLTR generalization bound. Moreover, we resort to anticipating worst-case optimization with theoretical insights to further ensure convergence. We build a Pareto deep long-tailed recognition method termed PLOT upon the proposed MOO framework. Extensive evaluations demonstrate that our method not only generally improves mainstream pipelines, but also achieves an augmented version that realizes state-of-the-art performance across multiple benchmarks.
https://openreview.net/pdf/010f844268393128404f69bc7fb505b83bea6aa6.pdf
MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection
https://openreview.net/forum?id=Q1vkAhdI6j
https://openreview.net/forum?id=Q1vkAhdI6j
Yuxue Yang,Lue Fan,Zhaoxiang Zhang
ICLR 2024,Poster
Label-efficient LiDAR-based 3D object detection is currently dominated by weakly/semi-supervised methods. Instead of exclusively following one of them, we propose MixSup, a more practical paradigm that simultaneously utilizes massive cheap coarse labels and a limited number of accurate labels for Mixed-grained Supervision. We start by observing that point clouds are usually textureless, making it hard to learn semantics. However, point clouds are geometrically rich and scale-invariant to the distances from sensors, making it relatively easy to learn the geometry of objects, such as poses and shapes. Thus, MixSup leverages massive coarse cluster-level labels to learn semantics and a few expensive box-level labels to learn accurate poses and shapes. We redesign the label assignment in mainstream detectors, allowing them to be seamlessly integrated into MixSup, enabling practicality and universality. We validate its effectiveness on nuScenes, the Waymo Open Dataset, and KITTI, employing various detectors. MixSup achieves up to 97.31% of fully supervised performance, using cheap cluster annotations and only 10% box annotations. Furthermore, we propose PointSAM, based on the Segment Anything Model, for automated coarse labeling, further reducing the annotation burden. The code is available at https://github.com/BraveGroup/PointSAM-for-MixSup.
https://openreview.net/pdf/8ff478f64c38c91591fa7296da064a4fc05b28a9.pdf
Boosting Vanilla Lightweight Vision Transformers via Re-parameterization
https://openreview.net/forum?id=3rmpixOjPS
https://openreview.net/forum?id=3rmpixOjPS
Zhentao Tan,Xiaodan Li,Yue Wu,Qi Chu,Le Lu,Nenghai Yu,Jieping Ye
ICLR 2024,Poster
Large-scale Vision Transformers have achieved promising performance on downstream tasks through feature pre-training. However, the performance of vanilla lightweight Vision Transformers (ViTs) is still far from satisfactory compared to that of recent lightweight CNNs or hybrid networks. In this paper, we aim to unlock the potential of vanilla lightweight ViTs by exploring the adaptation of the widely-used re-parameterization technology to ViTs, improving learning ability during training without increasing the inference cost. The main challenge is that re-parameterization suits CNNs perfectly, pairing naturally with convolution and batch normalization, while vanilla Transformer architectures are mainly comprised of linear and layer normalization layers. We propose to incorporate a nonlinear ensemble into linear layers by expanding the depth of the linear layers with batch normalization and fusing multiple linear features with hierarchical representation ability through a pyramid structure. We also discover and solve a new transformer-specific distribution rectification problem caused by multi-branch re-parameterization. Finally, we propose our Two-Dimensional Re-parameterized Linear module (TDRL) for ViTs. Under the popular self-supervised pre-training and supervised fine-tuning strategy, our TDRL can be used in these two stages to enhance both generic and task-specific representations. Experiments demonstrate that our proposed method not only boosts the performance of vanilla ViT-Tiny on various vision tasks to a new state of the art (SOTA) but also shows promising generalization to other networks. Code will be available.
https://openreview.net/pdf/f669c82db24753f05848f03ca1491e10580fb946.pdf
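The re-parameterization the abstract relies on ultimately depends on one algebraic fact: a Linear layer followed by an inference-mode BatchNorm folds into a single Linear layer. A self-contained sketch of that fusion (generic PyTorch, not the full TDRL module):

```python
import torch
import torch.nn as nn

def fuse_linear_bn(linear: nn.Linear, bn: nn.BatchNorm1d) -> nn.Linear:
    """Fold an inference-mode BatchNorm1d into the preceding Linear layer."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # per-channel scale
    fused = nn.Linear(linear.in_features, linear.out_features)
    with torch.no_grad():
        fused.weight.copy_(linear.weight * scale[:, None])
        fused.bias.copy_((linear.bias - bn.running_mean) * scale + bn.bias)
    return fused

# Sanity check with non-trivial BN statistics.
lin, bn = nn.Linear(8, 16), nn.BatchNorm1d(16)
bn.running_mean, bn.running_var = torch.randn(16), torch.rand(16) + 0.5
bn.eval()
x = torch.randn(4, 8)
assert torch.allclose(fuse_linear_bn(lin, bn)(x), bn(lin(x)), atol=1e-5)
```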
Robust Angular Synchronization via Directed Graph Neural Networks
https://openreview.net/forum?id=5sjxMwWmk8
https://openreview.net/forum?id=5sjxMwWmk8
Yixuan He,Gesine Reinert,David Wipf,Mihai Cucuringu
ICLR 2024,Poster
The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles $\theta_1, \dots, \theta_n\in[0, 2\pi)$ from $m$ noisy measurements of their offsets $\theta_i-\theta_j$ mod $2\pi.$ Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. An extension of the problem to the heterogeneous setting (dubbed $k$-synchronization) is to estimate $k$ groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically-grounded end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.
https://openreview.net/pdf/492b774f3c98c936bb4b4dd64aca6ce4f392fdbc.pdf
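For readers unfamiliar with the problem, the classical spectral estimator below makes the setup concrete; it is the textbook baseline, not GNNSync itself. Each offset measurement is mapped to the unit circle, and the top eigenvector of the resulting Hermitian matrix recovers the angles up to a global additive shift.

```python
import numpy as np

def spectral_sync(offsets):
    """offsets: (n, n) noisy measurements of theta_i - theta_j (fully observed)."""
    H = np.exp(1j * offsets)        # noiseless case: H_ij = z_i * conj(z_j), rank one
    H = (H + H.conj().T) / 2        # enforce Hermitian structure
    _, eigvecs = np.linalg.eigh(H)
    return np.angle(eigvecs[:, -1]) % (2 * np.pi)  # top eigenvector -> angle estimates

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=50)
offsets = theta[:, None] - theta[None, :] + 0.1 * rng.normal(size=(50, 50))
est = spectral_sync(offsets)        # matches theta up to a constant additive phase
```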
Multi-Scale Representations by Varying Window Attention for Semantic Segmentation
https://openreview.net/forum?id=lAhWGOkpSR
https://openreview.net/forum?id=lAhWGOkpSR
Haotian Yan,Ming Wu,Chuang Zhang
ICLR 2024,Poster
Multi-scale learning is central to semantic segmentation. We visualize the effective receptive field (ERF) of canonical multi-scale representations and point out two risks in learning them: \textit{scale inadequacy} and \textit{field inactivation}. A novel multi-scale learner, \textbf{varying window attention} (VWA), is presented to address these issues. VWA builds on local window attention (LWA) and disentangles LWA into the query window and context window, allowing the context's scale to vary so that the query can learn representations at multiple scales. However, enlarging the context window (by ratio $R$) can significantly increase the memory footprint and computation cost ($R^2$ times larger than LWA). We propose a simple yet effective re-scaling strategy that eliminates this extra cost without compromising performance. Consequently, VWA uses the same cost as LWA to overcome the receptive limitation of the local window. Furthermore, building on VWA and employing various MLPs, we introduce a multi-scale decoder (MSD), \textbf{VWFormer}, to improve multi-scale representations for semantic segmentation. VWFormer achieves efficiency competitive with the most compute-friendly MSDs, such as FPN and the MLP decoder, while performing much better than any existing MSD. For instance, using nearly half of UPerNet's computation, VWFormer outperforms it by $1.0\%-2.5\%$ mIoU on ADE20K. At little extra overhead, $\sim 10$G FLOPs, Mask2Former armed with VWFormer improves by $1.0\%-1.3\%$ mIoU.
https://openreview.net/pdf/5759da5cdff163afbc7e96513ae3bb41d52ce451.pdf
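A stripped-down sketch of the query/context-window split behind VWA: a small query window cross-attends to a larger context window with enlarging ratio R. Assumptions: single head, no learned projections, top-left-aligned windows, and the paper's cost-zeroing re-scaling strategy is omitted.

```python
import torch
import torch.nn.functional as F

def varying_window_attention(x, cy, cx, w=8, R=2):
    """x: (C, H, W) feature map; (cy, cx): shared top-left corner of both windows."""
    C = x.shape[0]
    q = x[:, cy:cy + w, cx:cx + w].reshape(C, -1).T            # (w*w, C) queries
    ctx = x[:, cy:cy + R * w, cx:cx + R * w].reshape(C, -1).T  # (R*R*w*w, C) context
    attn = F.softmax(q @ ctx.T / C ** 0.5, dim=-1)             # queries attend to context
    return (attn @ ctx).T.reshape(C, w, w)                     # attended query window

x = torch.randn(64, 32, 32)
out = varying_window_attention(x, 0, 0, w=8, R=2)  # 8x8 queries over a 16x16 context
```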
FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity
https://openreview.net/forum?id=hbHwZYqk9T
https://openreview.net/forum?id=hbHwZYqk9T
Kai Yi,Nidham Gazagnadou,Peter Richtárik,Lingjuan Lyu
ICLR 2024,Poster
The interest in federated learning (FL) has surged in recent research due to its unique ability to train a global model using privacy-protected information held locally on each client. This paper pays particular attention to the issue of client-side model heterogeneity, a pervasive challenge in the practical implementation of FL that escalates its complexity. Assuming a scenario where each client possesses varying memory storage, processing capabilities, and network bandwidth -- a phenomenon referred to as system heterogeneity -- there is a pressing need to customize a unique model for each client. In response, we present an effective and adaptable federated framework, FedP3, representing Federated Personalized and Privacy-friendly network Pruning, tailored for model-heterogeneity scenarios. Our proposed methodology can incorporate and adapt well-established techniques as specific instances. We offer a theoretical interpretation of FedP3 and its locally differentially private variant, DP-FedP3, and theoretically validate their efficiency.
https://openreview.net/pdf/f764a286e9019ed1fa4d66fad2d0df88386d6454.pdf
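As one hypothetical instantiation of capacity-aware personalization in this setting, the server could magnitude-prune the global network to each client's budget. The sketch below is illustrative only and omits FedP3's actual aggregation, personalization, and privacy mechanisms.

```python
import torch

def prune_for_client(state_dict, keep_ratio):
    """Zero out all but the top keep_ratio fraction of weights by magnitude."""
    pruned = {}
    for name, w in state_dict.items():
        if w.dim() < 2:                      # keep biases / norm params dense
            pruned[name] = w.clone()
            continue
        k = max(1, int(keep_ratio * w.numel()))
        thresh = w.abs().flatten().kthvalue(w.numel() - k + 1).values
        pruned[name] = w * (w.abs() >= thresh)
    return pruned

model = torch.nn.Linear(128, 10)
client_budgets = [0.25, 0.5, 1.0]            # heterogeneous client capacities
subnets = [prune_for_client(model.state_dict(), r) for r in client_budgets]
```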
Compressed Context Memory for Online Language Model Interaction
https://openreview.net/forum?id=64kSvC4iPg
https://openreview.net/forum?id=64kSvC4iPg
Jang-Hyun Kim,Junyoung Yeom,Sangdoo Yun,Hyun Oh Song
ICLR 2024,Poster
This paper presents a context key/value compression method for Transformer language models in online scenarios, where the context continually expands. As the context lengthens, the attention process demands increasing memory and computation, which in turn reduces the throughput of the language model. To address this challenge, we propose a compressed context memory system that continually compresses the accumulating attention key/value pairs into a compact memory space, enabling language model inference within the limited memory of typical computing environments. Our compression process integrates a lightweight conditional LoRA into the language model's forward pass during inference, without fine-tuning the model's entire set of weights. We achieve efficient training by modeling the recursive compression process as a single parallelized forward computation. Through evaluations on conversation, personalization, and multi-task learning, we demonstrate that our approach achieves the performance level of a full-context model with a $5\times$ smaller context memory size. We further demonstrate the applicability of our approach in a streaming setting with unlimited context length, outperforming the sliding-window approach. Code is available at https://github.com/snu-mllab/context-memory.
https://openreview.net/pdf/b1ec67610e0db6c622b1745257004d5f79b63f38.pdf
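To make the "compress accumulating key/value pairs into a fixed memory" idea tangible, here is a toy slot-averaging memory. It is emphatically not the paper's conditional-LoRA compressor, just an illustration of keeping KV state constant-size while the context grows.

```python
import torch

class CompressedKVMemory:
    """Toy fixed-size KV store: incoming pairs are averaged into nearest slots."""
    def __init__(self, dim, n_slots=16):
        self.keys = torch.randn(n_slots, dim) * 0.01   # small random slot init
        self.values = torch.zeros(n_slots, dim)
        self.counts = torch.zeros(n_slots)

    def update(self, new_keys, new_values):
        slot = torch.cdist(new_keys, self.keys).argmin(dim=1)  # nearest slot per pair
        for s, k, v in zip(slot, new_keys, new_values):
            c = self.counts[s]
            self.keys[s] = (self.keys[s] * c + k) / (c + 1)    # running average
            self.values[s] = (self.values[s] * c + v) / (c + 1)
            self.counts[s] += 1

mem = CompressedKVMemory(dim=64)
for _ in range(10):                       # context keeps growing; memory does not
    mem.update(torch.randn(128, 64), torch.randn(128, 64))
```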
TUVF: Learning Generalizable Texture UV Radiance Fields
https://openreview.net/forum?id=dN4vpVTvWX
https://openreview.net/forum?id=dN4vpVTvWX
An-Chieh Cheng,Xueting Li,Sifei Liu,Xiaolong Wang
ICLR 2024,Poster
Textures are a vital aspect of creating visually appealing and realistic 3D models. In this paper, we study the problem of generating high-fidelity textures given the shapes of 3D assets, which has been relatively under-explored compared with generic 3D shape modeling. Our goal is to facilitate a controllable texture generation process, such that one texture code can correspond to a particular appearance style independent of any input shape from a category. We introduce Texture UV Radiance Fields (TUVF), which generate textures in a learnable UV sphere space rather than directly on the 3D shape. This allows the texture to be disentangled from the underlying shape and transferred to other shapes that share the same UV space, i.e., from the same category. We integrate the UV sphere space with the radiance field, which provides a more efficient and accurate representation of textures than traditional texture maps. We perform experiments on synthetic and real-world object datasets, achieving not only realistic synthesis but also substantial improvements over the state of the art in texture control and editing.
https://openreview.net/pdf/e3ed9ed07f700f15cfa92ddec1c528a47e174a1a.pdf
Neural Processing of Tri-Plane Hybrid Neural Fields
https://openreview.net/forum?id=zRkM6UcA22
https://openreview.net/forum?id=zRkM6UcA22
Adriano Cardace,Pierluigi Zama Ramirez,Francesco Ballerini,Allan Zhou,Samuele Salti,Luigi di Stefano
ICLR 2024,Poster
Driven by the appealing properties of neural fields for storing and communicating 3D data, the problem of directly processing them to address tasks such as classification and part segmentation has emerged and has been investigated in recent works. Early approaches employ neural fields parameterized by shared networks trained on the whole dataset, achieving good task performance but sacrificing reconstruction quality. To improve the latter, later methods focus on individual neural fields parameterized as large Multi-Layer Perceptrons (MLPs), which are, however, challenging to process due to the high dimensionality of the weight space, intrinsic weight space symmetries, and sensitivity to random initialization. Hence, results turn out significantly inferior to those achieved by processing explicit representations, e.g., point clouds or meshes. In the meantime, hybrid representations, in particular based on tri-planes, have emerged as a more effective and efficient alternative to realize neural fields, but their direct processing has not been investigated yet. In this paper, we show that the tri-plane discrete data structure encodes rich information, which can be effectively processed by standard deep-learning machinery. We define an extensive benchmark covering a diverse set of fields such as occupancy, signed/unsigned distance, and, for the first time, radiance fields. While processing a field with the same reconstruction quality, we achieve task performance far superior to frameworks that process large MLPs and, for the first time, almost on par with architectures handling explicit representations.
https://openreview.net/pdf/93609429dd87387881b65722dd6a3c89e5c92e2e.pdf
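The tri-plane data structure the paper processes admits a compact feature lookup: project a 3D point onto the three axis-aligned planes, bilinearly sample each, and combine. A standard sketch follows (summation is one common choice; axis and sign conventions vary across codebases):

```python
import torch
import torch.nn.functional as F

def triplane_features(planes, pts):
    """planes: (3, C, H, W) for the xy/xz/yz planes; pts: (N, 3) in [-1, 1]^3."""
    coords = [pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]]  # three projections
    feats = 0
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                       # grid_sample layout
        sampled = F.grid_sample(plane[None], grid, align_corners=True)
        feats = feats + sampled[0, :, :, 0].T             # accumulate (N, C)
    return feats

planes = torch.randn(3, 32, 64, 64)
feats = triplane_features(planes, torch.rand(1000, 3) * 2 - 1)  # (1000, 32)
```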
Large-Vocabulary 3D Diffusion Model with Transformer
https://openreview.net/forum?id=q57JLSE2j5
https://openreview.net/forum?id=q57JLSE2j5
Ziang Cao,Fangzhou Hong,Tong Wu,Liang Pan,Ziwei Liu
ICLR 2024,Poster
Creating diverse and high-quality 3D assets with an automatic generative model is highly desirable. Despite extensive efforts on 3D generation, most existing works focus on the generation of a single category or a few categories. In this paper, we introduce a diffusion-based feed-forward framework for synthesizing massive categories of real-world 3D objects \textit{with a single generative model}. Notably, there are three major challenges for this large-vocabulary 3D generation: \textbf{a}) the need for expressive yet efficient 3D representation; \textbf{b}) large diversity in geometry and texture across categories; \textbf{c}) complexity in the appearances of real-world objects. To this end, we propose a novel triplane-based 3D-aware \textbf{Diff}usion model with \textbf{T}rans\textbf{F}ormer, \textbf{DiffTF}, which handles these challenges from three aspects. \textbf{1}) Considering efficiency and robustness, we adopt a revised triplane representation and improve the fitting speed and accuracy. \textbf{2}) To handle the drastic variations in geometry and texture, we regard the features of all 3D objects as a combination of generalized 3D knowledge and specialized 3D features. To extract generalized 3D knowledge from diverse categories, we propose a novel 3D-aware transformer with shared cross-plane attention, which learns the relations across different planes and aggregates the generalized 3D knowledge with specialized 3D features. \textbf{3}) In addition, we devise a 3D-aware encoder/decoder to enhance the generalized 3D knowledge in the encoded triplanes for handling categories with complex appearances. Extensive experiments on ShapeNet and OmniObject3D (over 200 diverse real-world categories) convincingly demonstrate that a single DiffTF model achieves state-of-the-art large-vocabulary 3D object generation performance with large diversity, rich semantics, and high quality. More results are available at https://difftf.github.io/.
https://openreview.net/pdf/bc3224f45f98c7433120bdba86c0cec3c95a10be.pdf
SAS: Structured Activation Sparsification
https://openreview.net/forum?id=vZfi5to2Xl
https://openreview.net/forum?id=vZfi5to2Xl
Yusuke Sekikawa,Shingo Yashima
ICLR 2024,Poster
Wide networks usually yield better accuracy than their narrower counterparts, at the expense of massive $\texttt{mult}$ cost. To break this tradeoff, we advocate a novel concept of $\textit{Structured Activation Sparsification}$, dubbed SAS, which boosts accuracy without increasing computation by utilizing projected sparsity in activation maps with a specific structure. Concretely, the projected sparse activation is allowed to have N nonzero values among M consecutive activations. Owing to the local structure in the sparsity, the wide $\texttt{matmul}$ between a dense weight and the sparse activation is executed as an equivalent narrow $\texttt{matmul}$ between a dense weight and a dense activation, which is compatible with NVIDIA's $\textit{SparseTensorCore}$, developed for N:M structured sparse weights. In extensive experiments, we demonstrate that increasing sparsity monotonically improves accuracy (up to 7% on CIFAR10) without increasing the $\texttt{mult}$ count. Furthermore, we show that structured sparsification of $\textit{activations}$ scales better than that of $\textit{weights}$ given the same computational budget.
https://openreview.net/pdf/46b950fc8d399323be1b9f3146b99dfc92260653.pdf
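The N:M projection at the heart of SAS is easy to state in code: keep the N largest-magnitude entries in every group of M consecutive activations. A minimal dense emulation (actual speedups require N:M sparse kernels such as SparseTensorCore):

```python
import torch

def nm_sparsify(x, n=2, m=4):
    """Keep the n largest-magnitude values per group of m consecutive channels.
    x: (..., C) with C divisible by m."""
    shape = x.shape
    groups = x.reshape(-1, m)
    idx = groups.abs().topk(n, dim=1).indices
    mask = torch.zeros_like(groups).scatter_(1, idx, 1.0)
    return (groups * mask).reshape(shape)

x = torch.randn(8, 16)
xs = nm_sparsify(x, n=2, m=4)   # exactly 2 nonzeros per 4 consecutive activations
```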
A Progressive Training Framework for Spiking Neural Networks with Learnable Multi-hierarchical Model
https://openreview.net/forum?id=g52tgL8jy6
https://openreview.net/forum?id=g52tgL8jy6
Zecheng Hao,Xinyu Shi,Zihan Huang,Tong Bu,Zhaofei Yu,Tiejun Huang
ICLR 2024,Poster
Spiking Neural Networks (SNNs) have garnered considerable attention due to their energy efficiency and unique biological characteristics. However, the widely adopted Leaky Integrate-and-Fire (LIF) model, as the mainstream neuron model in current SNN research, has been revealed to exhibit significant deficiencies in deep-layer gradient calculation and capturing global information on the time dimension. In this paper, we propose the Learnable Multi-hierarchical (LM-H) model to address these issues by dynamically regulating its membrane-related factors. We point out that the LM-H model fully encompasses the information representation range of the LIF model while offering the flexibility to adjust the extraction ratio between historical and current information. Additionally, we theoretically demonstrate the effectiveness of the LM-H model and the functionality of its internal parameters, and propose a progressive training algorithm tailored specifically for the LM-H model. Furthermore, we devise an efficient training framework for our novel advanced model, encompassing hybrid training and time-slicing online training. Through extensive experiments on various datasets, we validate the remarkable superiority of our model and training algorithm compared to previous state-of-the-art approaches. Code is available at [https://github.com/hzc1208/STBP_LMH](https://github.com/hzc1208/STBP_LMH).
https://openreview.net/pdf/d72a759c89415e0f33d04dd3245a137044d9fcfe.pdf
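For context, the sketch below is the plain LIF update that LM-H generalizes; LM-H's learnable, multi-hierarchical membrane factors are not reproduced here.

```python
import torch

def lif_step(x, v, tau=2.0, v_th=1.0):
    """One LIF step. x: input current; v: membrane potential. Returns (spike, v)."""
    v = v + (x - v) / tau            # leaky integration toward the input
    spike = (v >= v_th).float()      # fire where the threshold is crossed
    v = v * (1.0 - spike)            # hard reset for neurons that fired
    return spike, v

v = torch.zeros(10)
for t in range(5):                   # unroll over the time dimension
    spikes, v = lif_step(torch.rand(10), v)
```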
Mega-TTS 2: Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis
https://openreview.net/forum?id=mvMI3N4AvD
https://openreview.net/forum?id=mvMI3N4AvD
Ziyue Jiang,Jinglin Liu,Yi Ren,Jinzheng He,Zhenhui Ye,Shengpeng Ji,Qian Yang,Chen Zhang,Pengfei Wei,Chunfeng Wang,Xiang Yin,Zejun MA,Zhou Zhao
ICLR 2024,Poster
Zero-shot text-to-speech (TTS) aims to synthesize voices with unseen speech prompts, which significantly reduces the data and computation requirements for voice cloning by skipping the fine-tuning process. However, the prompting mechanisms of zero-shot TTS still face challenges in the following aspects: 1) previous works of zero-shot TTS are typically trained with single-sentence prompts, which significantly restricts their performance when the data is relatively sufficient during the inference stage; 2) the prosodic information in prompts is highly entangled with timbre, so neither can be transferred independently of the other. This paper introduces Mega-TTS 2, a generic prompting mechanism for zero-shot TTS, to tackle these challenges. Specifically, we design a powerful acoustic autoencoder that separately encodes the prosody and timbre information into a compressed latent space while providing high-quality reconstructions. Then, we propose a multi-reference timbre encoder and a prosody latent language model (P-LLM) to extract useful information from multi-sentence prompts. We further leverage the probabilities derived from multiple P-LLM outputs to produce transferable and controllable prosody. Experimental results demonstrate that Mega-TTS 2 not only synthesizes identity-preserving speech with a short prompt of an unseen speaker from arbitrary sources, but also consistently outperforms the fine-tuning method when the volume of data ranges from 10 seconds to 5 minutes. Furthermore, our method enables transferring various speaking styles to the target timbre in a fine-grained and controlled manner. Audio samples can be found at https://boostprompt.github.io/boostprompt/.
https://openreview.net/pdf/9cd6af4b3063c11b7dba3aa572d8ab74e7274f8e.pdf
A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors
https://openreview.net/forum?id=FOSBQuXgAq
https://openreview.net/forum?id=FOSBQuXgAq
Olivier Laurent,Emanuel Aldea,Gianni Franchi
ICLR 2024,Poster
The weight distribution of modern deep neural networks (DNNs) -- crucial for uncertainty quantification and robustness -- is an eminently complex object due to its extremely high dimensionality. This paper presents one of the first large-scale explorations of the posterior distribution of deep Bayesian Neural Networks (BNNs), expanding its study to real-world vision tasks and architectures. Specifically, we investigate the optimal approach for approximating the posterior, analyze the connection between posterior quality and uncertainty quantification, delve into the impact of modes on the posterior, and explore methods for visualizing the posterior. Moreover, we uncover weight-space symmetries as a critical aspect for understanding the posterior. To this end, we develop an in-depth assessment of the impact of both permutation and scaling symmetries that tend to obfuscate the Bayesian posterior. While the first type of transformation is known for duplicating modes, we explore the relationship between the latter and L2 regularization, challenging previous misconceptions. Finally, to help the community improve our understanding of the Bayesian posterior, we release the first large-scale checkpoint dataset, including thousands of real-world models, along with our code.
https://openreview.net/pdf/68b6225f7dd97251741d9909405fe064b3f02a65.pdf
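The permutation symmetry the paper identifies is easy to verify directly: permuting hidden units, together with the matching downstream columns, yields a different point in weight space that computes exactly the same function, and hence a duplicated posterior mode.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.randn(16, 4)
y = net(x)

perm = torch.randperm(8)
with torch.no_grad():
    net[0].weight.copy_(net[0].weight[perm])      # permute hidden units...
    net[0].bias.copy_(net[0].bias[perm])
    net[2].weight.copy_(net[2].weight[:, perm])   # ...and undo it downstream

assert torch.allclose(net(x), y, atol=1e-6)       # same function, new weight-space mode
```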
Threaten Spiking Neural Networks through Combining Rate and Temporal Information
https://openreview.net/forum?id=xv8iGxENyI
https://openreview.net/forum?id=xv8iGxENyI
Zecheng Hao,Tong Bu,Xinyu Shi,Zihan Huang,Zhaofei Yu,Tiejun Huang
ICLR 2024,Poster
Spiking Neural Networks (SNNs) have received widespread attention in academic communities due to their superior spatio-temporal processing capabilities and energy-efficient characteristics. As SNNs are applied more deeply across various fields, their vulnerability under adversarial attack has become a focus of concern. In this paper, we draw inspiration from two mainstream learning algorithms of SNNs and observe that SNN models reserve both rate and temporal information. To better understand the capabilities of these two types of information, we conduct a quantitative analysis of each separately. In addition, we note that the retention degree of temporal information is related to the parameters and input settings of spiking neurons. Building on these insights, we propose a hybrid adversarial attack based on rate and temporal information (HART), which allows for dynamic adjustment of the rate and temporal attributes. Experimental results demonstrate that, compared to previous works, the HART attack achieves significant superiority under different attack scenarios, data types, network architectures, time-steps, and model hyper-parameters. These findings call for further exploration into how both types of information can be effectively utilized to enhance the reliability of SNNs. Code is available at [https://github.com/hzc1208/HART_Attack](https://github.com/hzc1208/HART_Attack).
https://openreview.net/pdf/6f08ceb0b9c1fe0346b48c7b0a316ede64f412f3.pdf
QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models
https://openreview.net/forum?id=FIplmUWdm3
https://openreview.net/forum?id=FIplmUWdm3
Jing Liu,Ruihao Gong,Xiuying Wei,Zhiwei Dong,Jianfei Cai,Bohan Zhuang
ICLR 2024,Poster
Large Language Models (LLMs) have demonstrated unparalleled efficacy in natural language processing. However, their high computational demands and memory overheads hinder their broad deployment. To address this, two quantization strategies have emerged: Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ). For LLMs, the billions of parameters make QAT impractical due to the prohibitive training cost, so PTQ has become more prevalent. In existing studies, activation outliers in particular channels are identified as the biggest challenge to PTQ accuracy. These studies propose transferring the outlier magnitudes from activations to weights, which, however, offers limited alleviation or suffers from unstable gradients, resulting in a severe performance drop at low bitwidths. In this paper, we propose QLLM, an accurate and efficient low-bitwidth PTQ method designed for LLMs. QLLM introduces an adaptive channel reassembly technique that reallocates the magnitude of outliers to other channels, thereby mitigating their impact on the quantization range. This is achieved by channel disassembly and channel assembly: the outlier channels are first broken down into several sub-channels to ensure a more balanced distribution of activation magnitudes, and similar channels are then merged to maintain the original channel number for efficiency. Additionally, an adaptive strategy is designed to autonomously determine the optimal number of sub-channels for channel disassembly. To further compensate for the performance loss caused by quantization, we propose an efficient tuning method that only learns a small number of low-rank weights while freezing the pre-trained quantized model. After training, these low-rank parameters can be fused into the frozen weights without affecting inference. Extensive experiments on LLaMA-1 and LLaMA-2 show that QLLM obtains accurate quantized models efficiently. For example, QLLM quantizes the 4-bit LLaMA-2-70B within 10 hours on a single A100-80G GPU, outperforming the previous state-of-the-art method by 7.89% in average accuracy across five zero-shot tasks. Code is available at [ZIP Lab](https://github.com/ziplab/QLLM) and [ModelTC](https://github.com/ModelTC/QLLM).
https://openreview.net/pdf/8b8aeb5a8b38f435b1b5d6ed2806ae8e391276b1.pdf
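The channel-disassembly step rests on a simple preservation identity: splitting an outlier input channel into k sub-channels of 1/k magnitude, while replicating the matching weight column, leaves the layer output unchanged but shrinks the activation range that quantization must cover. A numerical check (channel reassembly and the adaptive sub-channel choice are omitted):

```python
import torch

x = torch.randn(4, 8)
x[:, 0] *= 50.0                                  # channel 0 is an activation outlier
W = torch.randn(16, 8)
y = x @ W.T

k = 4                                            # split the outlier into k sub-channels
x_split = torch.cat([x[:, :1].repeat(1, k) / k, x[:, 1:]], dim=1)
W_split = torch.cat([W[:, :1].repeat(1, k), W[:, 1:]], dim=1)

assert torch.allclose(x_split @ W_split.T, y, atol=1e-4)  # output preserved
print(x.abs().max(), x_split.abs().max())        # max magnitude drops ~k-fold
```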
3D-Aware Hypothesis & Verification for Generalizable Relative Object Pose Estimation
https://openreview.net/forum?id=U6hEOZlDf5
https://openreview.net/forum?id=U6hEOZlDf5
Chen Zhao,Tong Zhang,Mathieu Salzmann
ICLR 2024,Poster
Prior methods that tackle the problem of generalizable object pose estimation rely heavily on having dense views of the unseen object. By contrast, we address the scenario where only a single reference view of the object is available. Our goal then is to estimate the relative object pose between this reference view and a query image that depicts the object in a different pose. In this scenario, robust generalization is imperative due to the presence of unseen objects during testing and the large-scale object pose variation between the reference and the query. To this end, we present a new hypothesis-and-verification framework, in which we generate and evaluate multiple pose hypotheses, ultimately selecting the most reliable one as the relative object pose. To measure reliability, we introduce a 3D-aware verification that explicitly applies 3D transformations to the 3D object representations learned from the two input images. Our comprehensive experiments on the Objaverse, LINEMOD, and CO3D datasets evidence the superior accuracy of our approach in relative pose estimation and its robustness to large-scale pose variations when dealing with unseen objects.
https://openreview.net/pdf/c7f0109d51e7535da0efad4d433011e24c46a0f5.pdf
Language Model Self-improvement by Reinforcement Learning Contemplation
https://openreview.net/forum?id=38E4yUbrgr
https://openreview.net/forum?id=38E4yUbrgr
Jing-Cheng Pang,Pengyuan Wang,Kaiyuan Li,Xiong-Hui Chen,Jiacheng Xu,Zongzhang Zhang,Yang Yu
ICLR 2024,Poster
Language model self-improvement (LMSI) techniques have recently gained significant attention as they improve language models without requiring external supervision. A common approach is reinforcement learning from AI feedback (RLAIF), which trains a reward model based on AI preference data and employs a reinforcement learning algorithm to train the language model. However, RLAIF relies on the heuristic assumption that an AI model can provide effective feedback and correct wrong answers, which requires a strong capability of the language model. This paper presents a novel LMSI method, Reinforcement Learning Contemplation (RLC). We show that it is simpler for language models to evaluate a sentence than to generate it, even for small language models. Leveraging this gap between evaluation and generation, RLC evaluates generated answers and updates language model parameters using reinforcement learning to maximize evaluation scores. Through testing on various challenging reasoning tasks and a text summarization task, our experiments show that RLC effectively improves language model performance without external supervision, resulting in an answering accuracy increase (from 31.23% to 37.09%) on BigBench-hard reasoning tasks, and a rise in BERTScore on CNN/Daily Mail summarization tasks. Furthermore, RLC can be applied to models of different sizes, showcasing its broad applicability.
https://openreview.net/pdf/fbf1dbb2ce060d40c6445d59f06ab77e43d99c31.pdf
Divide and not forget: Ensemble of selectively trained experts in Continual Learning
https://openreview.net/forum?id=sSyytcewxe
https://openreview.net/forum?id=sSyytcewxe
Grzegorz Rypeść,Sebastian Cygert,Valeriya Khan,Tomasz Trzcinski,Bartosz Michał Zieliński,Bartłomiej Twardowski
ICLR 2024,Poster
Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-experts technique, where different models work together to solve the task. However, the experts are usually trained all at once using whole task data, which makes them all prone to forgetting and increases the computational burden. To address this limitation, we introduce a novel approach named SEED. SEED selects only one expert -- the most suitable one for a considered task -- and uses data from this task to fine-tune only this expert. For this purpose, each expert represents each class with a Gaussian distribution, and the optimal expert is selected based on the similarity of those distributions. Consequently, SEED increases diversity and heterogeneity within the experts while maintaining the high stability of this ensemble method. Extensive experiments demonstrate that SEED achieves state-of-the-art performance in exemplar-free settings across various scenarios, showing the potential of expert diversification through data in continual learning.
https://openreview.net/pdf/ca11ed77ce19ec235f48c2e3055087722f6cff3c.pdf
Towards Offline Opponent Modeling with In-context Learning
https://openreview.net/forum?id=2SwHngthig
https://openreview.net/forum?id=2SwHngthig
Yuheng Jing,Kai Li,Bingyun Liu,Yifan Zang,Haobo Fu,QIANG FU,Junliang Xing,Jian Cheng
ICLR 2024,Poster
Opponent modeling aims at learning the opponent's behaviors, goals, or beliefs to reduce the uncertainty of the competitive environment and assist decision-making. Existing work has mostly focused on learning opponent models online, which is impractical and inefficient in practical scenarios. To this end, we formalize an Offline Opponent Modeling (OOM) problem with the objective of utilizing pre-collected offline datasets to learn opponent models that characterize the opponent from the viewpoint of the controlled agent, which aids in adapting to the unknown fixed policies of the opponent. Drawing on the promises of the Transformers for decision-making, we introduce a general approach, Transformer Against Opponent (TAO), for OOM. Essentially, TAO tackles the problem by harnessing the full potential of the supervised pre-trained Transformers' in-context learning capabilities. The foundation of TAO lies in three stages: an innovative offline policy embedding learning stage, an offline opponent-aware response policy training stage, and a deployment stage for opponent adaptation with in-context learning. Theoretical analysis establishes TAO's equivalence to Bayesian posterior sampling in opponent modeling and guarantees TAO's convergence in opponent policy recognition. Extensive experiments and ablation studies on competitive environments with sparse and dense rewards demonstrate the impressive performance of TAO. Our approach manifests remarkable prowess for fast adaptation, especially in the face of unseen opponent policies, confirming its in-context learning potency.
https://openreview.net/pdf/ebdbd616b536556e85afb869974a43d60b721e11.pdf
Early Stopping Against Label Noise Without Validation Data
https://openreview.net/forum?id=CMzF2aOfqp
https://openreview.net/forum?id=CMzF2aOfqp
Suqin Yuan,Lei Feng,Tongliang Liu
ICLR 2024,Poster
Early stopping methods in deep learning face the challenge of balancing the volume of training and validation data, especially in the presence of label noise. Concretely, sparing more data for validation from training data would limit the performance of the learned model, yet insufficient validation data could result in a sub-optimal selection of the desired model. In this paper, we propose a novel early stopping method called Label Wave, which does not require validation data for selecting the desired model in the presence of label noise. It works by tracking the changes in the model's predictions on the training set during the training process, aiming to halt training before the model unduly fits mislabeled data. This method is empirically supported by our observation that minimum fluctuations in predictions typically occur at the training epoch before the model excessively fits mislabeled data. Through extensive experiments, we show both the effectiveness of the Label Wave method across various settings and its capability to enhance the performance of existing methods for learning with noisy labels.
https://openreview.net/pdf/b11f54e4a5242e8130e556b3848fdcd44bae997f.pdf
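A sketch of how one might track the Label Wave signal: count prediction flips on the training set between consecutive epochs and stop once the flip count has passed its minimum. This is a toy loop; the model, optimizer, and a fixed-order (non-shuffled) loader are assumed, and the paper's exact stopping criterion may differ.

```python
import torch
import torch.nn.functional as F

def train_with_label_wave(model, loader, opt, max_epochs=100, patience=5):
    """loader must iterate the training set in a fixed order across epochs."""
    prev_preds, best_flips, best_epoch, rising = None, float("inf"), 0, 0
    for epoch in range(max_epochs):
        preds = []
        for xb, yb in loader:
            out = model(xb)
            F.cross_entropy(out, yb).backward()
            opt.step()
            opt.zero_grad()
            preds.append(out.argmax(dim=1))
        preds = torch.cat(preds)
        if prev_preds is not None:
            flips = (preds != prev_preds).sum().item()  # prediction changes this epoch
            if flips < best_flips:
                best_flips, best_epoch, rising = flips, epoch, 0
            else:
                rising += 1
                if rising >= patience:   # the fluctuation minimum has passed
                    return best_epoch    # stop; the model at this epoch is selected
        prev_preds = preds
    return best_epoch
```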
Recursive Generalization Transformer for Image Super-Resolution
https://openreview.net/forum?id=owziuM1nsR
https://openreview.net/forum?id=owziuM1nsR
Zheng Chen,Yulun Zhang,Jinjin Gu,Linghe Kong,Xiaokang Yang
ICLR 2024,Poster
Transformer architectures have exhibited remarkable performance in image super-resolution (SR). Owing to the quadratic computational complexity of self-attention (SA) in Transformers, existing methods tend to adopt SA in a local region to reduce overhead. However, the local design restricts the exploitation of global context, which is crucial for accurate image reconstruction. In this work, we propose the Recursive Generalization Transformer (RGT) for image SR, which can capture global spatial information and is suitable for high-resolution images. Specifically, we propose recursive-generalization self-attention (RG-SA). It recursively aggregates input features into representative feature maps, and then utilizes cross-attention to extract global information. Meanwhile, the channel dimensions of the attention matrices ($query$, $key$, and $value$) are further scaled to mitigate redundancy in the channel domain. Furthermore, we combine RG-SA with local self-attention to enhance the exploitation of global context, and propose hybrid adaptive integration (HAI) for module integration. HAI allows direct and effective fusion between features at different levels (local or global). Extensive experiments demonstrate that our RGT outperforms recent state-of-the-art methods quantitatively and qualitatively. Code and pre-trained models are available at https://github.com/zhengchen1999/RGT.
https://openreview.net/pdf/cedcaa8b38ce2730b25e4b03d432a016574ef3bd.pdf
Rethinking Model Ensemble in Transfer-based Adversarial Attacks
https://openreview.net/forum?id=AcJrSoArlh
https://openreview.net/forum?id=AcJrSoArlh
Huanran Chen,Yichi Zhang,Yinpeng Dong,Xiao Yang,Hang Su,Jun Zhu
ICLR 2024,Poster
It is widely recognized that deep learning models lack robustness to adversarial examples. An intriguing property of adversarial examples is that they can transfer across different models, which enables black-box attacks without any knowledge of the victim model. An effective strategy to improve the transferability is attacking an ensemble of models. However, previous works simply average the outputs of different models, lacking an in-depth analysis on how and why model ensemble methods can strongly improve the transferability. In this paper, we rethink the ensemble in adversarial attacks and define the common weakness of model ensemble with two properties: 1) the flatness of loss landscape; and 2) the closeness to the local optimum of each model. We empirically and theoretically show that both properties are strongly correlated with the transferability and propose a Common Weakness Attack (CWA) to generate more transferable adversarial examples by promoting these two properties. Experimental results on both image classification and object detection tasks validate the effectiveness of our approach to improving the adversarial transferability, especially when attacking adversarially trained models. We also successfully apply our method to attack a black-box large vision-language model -- Google's Bard, showing the practical effectiveness. Code is available at \url{https://github.com/huanranchen/AdversarialAttacks}.
https://openreview.net/pdf/20c70b93e487bdad50b8ad236e2f42ce1e19ec4a.pdf
Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited
https://openreview.net/forum?id=hOxgrGM63n
https://openreview.net/forum?id=hOxgrGM63n
Lu Yu,Avetik Karagulyan,Arnak S. Dalalyan
ICLR 2024,Poster
We revisit the problem of sampling from a target distribution that has a smooth, strongly log-concave density everywhere in $\mathbb{R}^p$. In this context, if no additional density information is available, the randomized midpoint discretization for the kinetic Langevin diffusion is known to be the most scalable method in high dimensions with large condition numbers. Our main result is a nonasymptotic and easy-to-compute upper bound on the $W_2$-error of this method. To provide a more thorough explanation of our method for establishing the computable upper bound, we conduct an analysis of the midpoint discretization for the vanilla Langevin process. This analysis helps to clarify the underlying principles and provides valuable insights that we use to establish an improved upper bound for the kinetic Langevin process with the midpoint discretization. Furthermore, by applying these techniques we establish new guarantees for the kinetic Langevin process with Euler discretization, which have a better dependence on the condition number than existing upper bounds.
https://openreview.net/pdf/8be6e385e53c2c9a2af47b0d45a4e85bee6910bd.pdf
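For concreteness, here is one step of the randomized midpoint discretization for the vanilla Langevin process analyzed in the paper (the kinetic version adds a velocity variable, omitted here). The key details are the uniformly random intermediate time and the correlated Brownian increments shared between midpoint and endpoint.

```python
import numpy as np

def rm_langevin_step(x, grad_f, h, rng):
    """One randomized-midpoint step for dX = -grad f(X) dt + sqrt(2) dW."""
    a = rng.uniform()                                 # random midpoint time in (0, 1)
    xi1 = rng.normal(size=x.shape) * np.sqrt(a * h)   # Brownian increment to a*h
    xi2 = rng.normal(size=x.shape) * np.sqrt((1 - a) * h)  # remaining increment
    x_mid = x - a * h * grad_f(x) + np.sqrt(2.0) * xi1
    return x - h * grad_f(x_mid) + np.sqrt(2.0) * (xi1 + xi2)

rng = np.random.default_rng(0)
grad_f = lambda x: x                                  # standard Gaussian target
x = np.zeros(10)
for _ in range(1000):
    x = rm_langevin_step(x, grad_f, 0.1, rng)         # samples approach N(0, I)
```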
MuSc: Zero-Shot Industrial Anomaly Classification and Segmentation with Mutual Scoring of the Unlabeled Images
https://openreview.net/forum?id=AHgc5SMdtd
https://openreview.net/forum?id=AHgc5SMdtd
Xurui Li,Ziming Huang,Feng Xue,Yu Zhou
ICLR 2024,Poster
This paper studies zero-shot anomaly classification (AC) and segmentation (AS) in industrial vision. We reveal that the abundant normal and abnormal cues implicit in unlabeled test images can be exploited for anomaly determination, which is ignored by prior methods. Our key observation is that for the industrial product images, the normal image patches could find a relatively large number of similar patches in other unlabeled images, while the abnormal ones only have a few similar patches. We leverage such a discriminative characteristic to design a novel zero-shot AC/AS method by Mutual Scoring (MuSc) of the unlabeled images, which does not need any training or prompts. Specifically, we perform Local Neighborhood Aggregation with Multiple Degrees (LNAMD) to obtain the patch features that are capable of representing anomalies in varying sizes. Then we propose the Mutual Scoring Mechanism (MSM) to leverage the unlabeled test images to assign the anomaly score to each other. Furthermore, we present an optimization approach named Re-scoring with Constrained Image-level Neighborhood (RsCIN) for image-level anomaly classification to suppress the false positives caused by noises in normal images. The superior performance on the challenging MVTec AD and VisA datasets demonstrates the effectiveness of our approach. Compared with the state-of-the-art zero-shot approaches, MuSc achieves a $\textbf{21.1}$% PRO absolute gain (from 72.7\% to 93.8\%) on MVTec AD, a $\textbf{19.4}$% pixel-AP gain and a $\textbf{14.7}$% pixel-AUROC gain on VisA. In addition, our zero-shot approach outperforms most of the few-shot approaches and is comparable to some one-class methods. Code is available at https://github.com/xrli-U/MuSc.
https://openreview.net/pdf/dd0c9086175785e7480d6c5302f62df0f492be98.pdf
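Mutual scoring in miniature: each patch of a test image is scored by its distance to the most similar patches in the other unlabeled images, so normal patches (many close matches) receive low scores and anomalous ones high. A simplified sketch, without the LNAMD aggregation or RsCIN re-scoring:

```python
import torch

def mutual_scores(patch_feats, k=5):
    """patch_feats: (n_images, n_patches, d). Returns (n_images, n_patches) scores."""
    n = patch_feats.shape[0]
    scores = torch.zeros(patch_feats.shape[:2])
    for i in range(n):
        others = torch.cat([patch_feats[j] for j in range(n) if j != i])
        d = torch.cdist(patch_feats[i], others)          # distances to all other patches
        scores[i] = d.topk(k, largest=False).values.mean(dim=1)  # mean of k nearest
    return scores                                        # higher = more anomalous

feats = torch.randn(8, 196, 64)                          # e.g. ViT patch tokens
anomaly_scores = mutual_scores(feats)
```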
To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination
https://openreview.net/forum?id=m2NVG4Htxs
https://openreview.net/forum?id=m2NVG4Htxs
Manley Roberts,Himanshu Thakur,Christine Herlihy,Colin White,Samuel Dooley
ICLR 2024,Poster
Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks. Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data. Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation on training data, canary strings, or embedding similarities. In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to examine benchmarks released over time. Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends in LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination. By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on web-scale data.
https://openreview.net/pdf/1e44de0b013ebf5d819d5fe1e140585af153cda3.pdf
I-PHYRE: Interactive Physical Reasoning
https://openreview.net/forum?id=1bbPQShCT2
https://openreview.net/forum?id=1bbPQShCT2
Shiqian Li,Kewen Wu,Chi Zhang,Yixin Zhu
ICLR 2024,Poster
Current evaluation protocols predominantly assess physical reasoning in stationary scenes, creating a gap in evaluating agents' abilities to interact with dynamic events. While contemporary methods allow agents to modify initial scene configurations and observe consequences, they lack the capability to interact with events in real time. To address this, we introduce I-PHYRE, a framework that challenges agents to simultaneously exhibit intuitive physical reasoning, multi-step planning, and in-situ intervention. Here, intuitive physical reasoning refers to a quick, approximate understanding of physics to address complex problems; multi-step denotes the need for extensive sequence planning in I-PHYRE, considering each intervention can significantly alter subsequent choices; and in-situ implies the necessity for timely object manipulation within a scene, where minor timing deviations can result in task failure. We formulate four game splits to scrutinize agents' learning and generalization of essential principles of interactive physical reasoning, fostering learning through interaction with representative scenarios. Our exploration involves three planning strategies and examines several supervised and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The outcomes highlight a notable gap between existing learning algorithms and human performance, emphasizing the imperative for more research in enhancing agents with interactive physical reasoning capabilities. The environment and baselines will be made publicly available.
https://openreview.net/pdf/fad4695ed5caf629961f820cfffbd439e4662aa5.pdf
Exposing Text-Image Inconsistency Using Diffusion Models
https://openreview.net/forum?id=Ny150AblPu
https://openreview.net/forum?id=Ny150AblPu
Mingzhen Huang,Shan Jia,Zhou Zhou,Yan Ju,Jialing Cai,Siwei Lyu
ICLR 2024,Poster
In the battle against widespread online misinformation, a growing problem is text-image inconsistency, where images are misleadingly paired with texts of different intent or meaning. Existing classification-based methods for text-image inconsistency can identify contextual inconsistencies but fail to provide explainable justifications for their decisions that humans can understand. Although more nuanced, human evaluation is impractical at scale and susceptible to errors. To address these limitations, this study introduces D-TIIL (Diffusion-based Text-Image Inconsistency Localization), which employs text-to-image diffusion models to localize semantic inconsistencies in text and image pairs. These models, trained on large-scale datasets, act as ``omniscient" agents that filter out irrelevant information and incorporate background knowledge to identify inconsistencies. In addition, D-TIIL uses text embeddings and modified image regions to visualize these inconsistencies. To evaluate D-TIIL's efficacy, we introduce a new TIIL dataset containing 14K consistent and inconsistent text-image pairs. Unlike existing datasets, TIIL enables assessment at the level of individual words and image regions and is carefully designed to represent various inconsistencies. D-TIIL offers a scalable and evidence-based approach to identifying and localizing text-image inconsistency, providing a robust framework for future research combating misinformation.
https://openreview.net/pdf/5ef14ecda408daf8c2e2a2063612332fe824cf04.pdf