Fields per record: title, url, detail_url, authors, tags, abstract, pdf.
PBADet: A One-Stage Anchor-Free Approach for Part-Body Association
https://openreview.net/forum?id=pPh9p8anUi
https://openreview.net/forum?id=pPh9p8anUi
Zhongpai Gao,Huayi Zhou,Abhishek Sharma,Meng Zheng,Benjamin Planche,Terrence Chen,Ziyan Wu
ICLR 2024,Poster
The detection of human parts (e.g., hands, face) and their correct association with individuals is an essential task, e.g., for ubiquitous human-machine interfaces and action recognition. Traditional methods often employ multi-stage processes, rely on cumbersome anchor-based systems, or do not scale well to larger part sets. This paper presents PBADet, a novel one-stage, anchor-free approach for part-body association detection. Building upon the anchor-free object representation across multi-scale feature maps, we introduce a singular part-to-body center offset that effectively encapsulates the relationship between parts and their parent bodies. Our design is inherently versatile and capable of managing multiple parts-to-body associations without compromising on detection accuracy or robustness. Comprehensive experiments on various datasets underscore the efficacy of our approach, which not only outperforms existing state-of-the-art techniques but also offers a more streamlined and efficient solution to the part-body association challenge.
https://openreview.net/pdf/c3401455d59b0c97a7d43a53a8d7aad479c5178b.pdf
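A minimal sketch of the association step described above, assuming decoded part centers, per-part body-center offsets, and body detections are already available (all arrays below are hypothetical placeholders, not the paper's implementation):

```python
import numpy as np

# Hypothetical decoded detections: each part predicts its own center plus a
# single offset pointing to its parent body's center (the PBADet idea above).
part_centers = np.array([[10., 12.], [48., 50.], [30., 8.]])   # e.g. hands/faces
part_offsets = np.array([[ 5.,  3.], [-6., -4.], [ 2., 20.]])  # part -> body center
body_centers = np.array([[15., 15.], [42., 46.]])              # detected bodies

# Associate each part with the body whose center is nearest to the
# offset-predicted body center.
predicted = part_centers + part_offsets                         # (P, 2)
d = np.linalg.norm(predicted[:, None, :] - body_centers[None, :, :], axis=-1)
assignment = d.argmin(axis=1)                                   # part index -> body index
print(assignment)  # e.g. [0 1 0]
```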
Performance Gaps in Multi-view Clustering under the Nested Matrix-Tensor Model
https://openreview.net/forum?id=ILqA09Oeq2
https://openreview.net/forum?id=ILqA09Oeq2
Hugo Lebeau,Mohamed El Amine Seddik,José Henrique De Morais Goulart
ICLR 2024,Poster
We study the estimation of a planted signal hidden in a recently introduced nested matrix-tensor model, which is an extension of the classical spiked rank-one tensor model, motivated by multi-view clustering. Prior work has theoretically examined the performance of a tensor-based approach, which relies on finding a best rank-one approximation, a problem known to be computationally hard. A tractable alternative approach consists in computing instead the best rank-one (matrix) approximation of an unfolding of the observed tensor data, but its performance was hitherto unknown. We quantify here the performance gap between these two approaches, in particular by deriving the precise algorithmic threshold of the unfolding approach and demonstrating that it exhibits a BBP-type transition behavior. This work is therefore in line with recent contributions which deepen our understanding of why tensor-based methods surpass matrix-based methods in handling structured tensor data.
https://openreview.net/pdf/ad7dd0582861436a6cd70c962468ab363adc35aa.pdf
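To illustrate the tractable unfolding approach discussed above, here is a toy sketch: a rank-one signal plus noise (a simplification, not the paper's exact nested matrix-tensor model) is unfolded into a matrix, and its best rank-one matrix approximation is read off the top singular pair:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy rank-one signal plus noise (illustrative only).
x, y, z = rng.standard_normal(10), rng.standard_normal(12), rng.standard_normal(15)
T = np.einsum('i,j,k->ijk', x, y, z) + 0.5 * rng.standard_normal((10, 12, 15))

# Unfolding approach: reshape the tensor into a matrix and take its best
# rank-one (matrix) approximation via the top singular pair -- tractable,
# unlike the best rank-one *tensor* approximation.
M = T.reshape(10, 12 * 15)             # mode-1 unfolding
U, s, Vt = np.linalg.svd(M, full_matrices=False)
x_hat = U[:, 0]                        # estimate of the mode-1 component

# Alignment with the planted component (up to sign); near 1 above threshold.
print(abs(x_hat @ x) / np.linalg.norm(x))
```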
Deep Reinforcement Learning Guided Improvement Heuristic for Job Shop Scheduling
https://openreview.net/forum?id=jsWCmrsHHs
https://openreview.net/forum?id=jsWCmrsHHs
Cong Zhang,Zhiguang Cao,Wen Song,Yaoxin Wu,Jie Zhang
ICLR 2024,Poster
Recent studies on using deep reinforcement learning (DRL) to solve job-shop scheduling problems (JSSP) focus on construction heuristics. However, their performance is still far from optimal, mainly because the underlying graph representation scheme is unsuitable for modelling partial solutions at each construction step. This paper proposes a novel DRL-guided improvement heuristic for solving JSSP, where graph representation is employed to encode complete solutions. We design a Graph-Neural-Network-based representation scheme consisting of two modules to effectively capture the information of dynamic topology and different types of nodes in graphs encountered during the improvement process. To speed up solution evaluation during improvement, we present a novel message-passing mechanism that can evaluate multiple solutions simultaneously. We prove that the computational complexity of our method scales linearly with problem size. Experiments on classic benchmarks show that the improvement policy learned by our method outperforms state-of-the-art DRL-based methods by a large margin.
https://openreview.net/pdf/3d634c21596810cff12e8d113733c26fdc6a3246.pdf
Communication-Efficient Gradient Descent-Ascent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates
https://openreview.net/forum?id=hORCalGn3Z
https://openreview.net/forum?id=hORCalGn3Z
Siqi Zhang,Sayantan Choudhury,Sebastian U Stich,Nicolas Loizou
ICLR 2024,Poster
Distributed and federated learning algorithms and techniques are associated primarily with minimization problems. However, with the increase of minimax optimization and variational inequality problems in machine learning, the necessity of designing efficient distributed/federated learning approaches for these problems is becoming more apparent. In this paper, we provide a unified convergence analysis of communication-efficient local training methods for distributed variational inequality problems (VIPs). Our approach is based on a general key assumption on the stochastic estimates that allows us to propose and analyze several novel local training algorithms under a single framework for solving a class of structured non-monotone VIPs. We present the first local gradient descent-ascent algorithms with provably improved communication complexity for solving distributed variational inequalities on heterogeneous data. The general algorithmic framework recovers state-of-the-art algorithms and their sharp convergence guarantees when the setting is specialized to minimization or minimax optimization problems. Finally, we demonstrate the strong performance of the proposed algorithms compared to state-of-the-art methods when solving federated minimax optimization problems.
https://openreview.net/pdf/21d897e117820c5c54783e95b397cd5074e5ca47.pdf
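A toy sketch of local gradient descent-ascent with periodic server averaging, the algorithmic pattern analyzed above; the quadratic saddle objectives, client count, and step sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Per-client saddle objectives f_m(x, y) = 0.5 x^2 + x y - 0.5 y^2 + a_m x - b_m y
# (strongly-convex-strongly-concave; a toy stand-in for the VIPs above).
a, b = rng.standard_normal(4), rng.standard_normal(4)

def grads(m, x, y):
    gx = x + y + a[m]        # descent direction for x
    gy = x - y - b[m]        # ascent direction for y
    return gx, gy

x = y = 0.0
lr, local_steps = 0.1, 5
for rnd in range(200):                 # communication rounds
    xs, ys = [], []
    for m in range(4):                 # each client runs local descent-ascent
        xm, ym = x, y
        for _ in range(local_steps):
            gx, gy = grads(m, xm, ym)
            xm, ym = xm - lr * gx, ym + lr * gy
        xs.append(xm); ys.append(ym)
    x, y = np.mean(xs), np.mean(ys)    # server averages the local iterates
print(x, y)  # approaches the saddle point of the average objective
```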
Batch normalization is sufficient for universal function approximation in CNNs
https://openreview.net/forum?id=wOSYMHfENq
https://openreview.net/forum?id=wOSYMHfENq
Rebekka Burkholz
ICLR 2024,Poster
Normalization techniques, of which Batch Normalization (BN) is a popular choice, are an integral part of many deep learning architectures and contribute significantly to learning success. We provide a partial explanation for this phenomenon by proving that training normalization parameters alone is already sufficient for universal function approximation if the number of available, potentially random features matches or exceeds the number of weight parameters of the target networks to be expressed. Our bound on the number of required features not only improves on a recent result for fully-connected feed-forward architectures but also applies to CNNs with and without residual connections and almost arbitrary activation functions (including ReLUs). Our explicit construction of a given target network solves a depth-width trade-off that is driven by architectural constraints, and can explain why switching off entire neurons can have representational benefits, as has been observed empirically. To validate our theory, we explicitly match target networks that outperform experimentally obtained networks with trained BN parameters by utilizing a sufficient number of random features.
https://openreview.net/pdf/06c3bfb62a5c2c6bcf6ec9cb0bbb1e4bf340fbc3.pdf
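A minimal PyTorch-style sketch of the training regime studied above: freeze all (potentially random) weights and train only the BatchNorm affine parameters. The architecture and hyperparameters are placeholders:

```python
import torch
import torch.nn as nn

# A small CNN; we freeze all weights and train only the BatchNorm affine
# parameters (gamma, beta), mirroring the setting analyzed above.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)

for p in model.parameters():
    p.requires_grad = False            # freeze everything (random features)
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):  # ...except the BN scale and shift
        m.weight.requires_grad = True
        m.bias.requires_grad = True

opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```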
Predicting Emergent Abilities with Infinite Resolution Evaluation
https://openreview.net/forum?id=lDbjooxLkD
https://openreview.net/forum?id=lDbjooxLkD
Shengding Hu,Xin Liu,Xu Han,Xinrong Zhang,Chaoqun He,Weilin Zhao,Yankai Lin,Ning Ding,Zebin Ou,Guoyang Zeng,Zhiyuan Liu,Maosong Sun
ICLR 2024,Poster
The scientific scale-up of large language models (LLMs) necessitates a comprehensive understanding of their scaling properties. However, the existing literature on scaling properties yields only an incomplete answer: optimization loss decreases predictably as model size increases, in line with established scaling laws; yet no scaling law for task performance has been established, and task performance is far from predictable during scaling. Task performance typically shows minor gains on small models until it improves dramatically once models exceed a size threshold, exemplifying "emergent abilities". In this study, we discover that small models, although they exhibit minor performance, demonstrate critical and consistent task performance improvements that are not captured by conventional evaluation strategies due to insufficient measurement resolution. To measure such improvements, we introduce PassUntil, an evaluation strategy with theoretically infinite resolution, through massive sampling in the decoding phase. With PassUntil, we conduct a quantitative investigation into the scaling law of task performance. The investigation contains two parts. First, a strict task scaling law that is not conventionally known to exist is identified, enhancing the predictability of task performance. Remarkably, we are able to predict the performance of the 2.4B model on code generation with merely 0.05\% deviation before training starts, the first systematic attempt to verify the predictable scaling proposed by GPT-4's report. Second, underpinned by PassUntil, we study emergent abilities quantitatively. We identify a kind of accelerated emergence whose scaling curve cannot be fitted by the standard scaling-law function and which has an increasing growth speed. We then examine two hypotheses and suggest that the "multiple circuits hypothesis" might be responsible for the accelerated emergence.
https://openreview.net/pdf/f89b90f62a7b34940ee8160c0ac8cd2d8f9bfa3c.pdf
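A simplified reading of the PassUntil idea above: keep decoding until a task first passes, so that success probabilities far below conventional evaluation resolution remain measurable. The mock "model" below is an assumption for illustration only:

```python
import random

def pass_until(solves, max_samples=10**6, seed=0):
    """Estimate a tiny per-sample success probability by sampling until the
    first pass; returns 1 / (number of samples used)."""
    rng = random.Random(seed)
    for n in range(1, max_samples + 1):
        if solves(rng):               # one decode + correctness check
            return 1.0 / n
    return 0.0                        # resolution floor reached

# Mock "model": passes a task with probability 1e-3 per decoded sample.
est = pass_until(lambda rng: rng.random() < 1e-3)
print(est)  # a single-trial estimate; average over tasks/seeds in practice
```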
Graph-Constrained Diffusion for End-to-End Path Planning
https://openreview.net/forum?id=vuK8MhVtuu
https://openreview.net/forum?id=vuK8MhVtuu
Dingyuan Shi,Yongxin Tong,Zimu Zhou,Ke Xu,Zheng Wang,Jieping Ye
ICLR 2024,Poster
Path planning underpins various applications such as transportation, logistics, and robotics. Conventionally, path planning is formulated with explicit optimization objectives such as distance or time. However, real-world data reveals that user intentions are hard to model, suggesting a need for data-driven path planning that implicitly incorporates complex user intentions. In this paper, we propose GDP, a diffusion-based model for end-to-end data-driven path planning. It effectively learns path patterns via a novel diffusion process that incorporates constraints from road networks, and plans paths as conditional path generation given the origin and destination as prior evidence. GDP is the first solution that bypasses the traditional search-based frameworks, a long-standing performance bottleneck in path planning. We validate the efficacy of GDP on two real-world datasets. GDP beats strong baselines by 14.2%–43.5% and achieves state-of-the-art performance.
https://openreview.net/pdf/9e4c9fa690770bd2184573bbec75c797f7c9648f.pdf
Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control
https://openreview.net/forum?id=Pc8AU1aF5e
https://openreview.net/forum?id=Pc8AU1aF5e
Longtao Zheng,Rundong Wang,Xinrun Wang,Bo An
ICLR 2024,Poster
Building agents with large language models (LLMs) for computer control is a burgeoning research area, where the agent receives computer states and performs actions to complete complex tasks. Previous computer agents have demonstrated the benefits of in-context learning (ICL); however, their performance is hindered by several issues. First, the limited context length of LLMs and complex computer states restrict the number of exemplars, as a single webpage can consume the entire context. Second, the exemplars in current methods, such as high-level plans and multi-choice questions, cannot represent complete trajectories, leading to suboptimal performance in long-horizon tasks. Third, existing computer agents rely on task-specific exemplars and overlook the similarity among tasks, resulting in poor generalization to novel tasks. To address these challenges, we introduce Synapse, a computer agent featuring three key components: i) state abstraction, which filters out task-irrelevant information from raw states, allowing more exemplars within the limited context, ii) trajectory-as-exemplar prompting, which prompts the LLM with complete trajectories of the abstracted states and actions to improve multi-step decision-making, and iii) exemplar memory, which stores the embeddings of exemplars and retrieves them via similarity search for generalization to novel tasks. We evaluate Synapse on MiniWoB++, a standard task suite, and Mind2Web, a real-world website benchmark. In MiniWoB++, Synapse achieves a 99.2% average success rate (a 10% relative improvement) across 64 tasks using demonstrations from only 48 tasks. Notably, Synapse is the first ICL method to solve the book-flight task in MiniWoB++. Synapse also exhibits a 56% relative improvement in average step success rate over the previous state-of-the-art prompting scheme in Mind2Web.
https://openreview.net/pdf/90203c60566501d9a43d3c69c8f26d50c4e151bc.pdf
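A small sketch of the exemplar-memory component (iii) described above, assuming exemplar embeddings are stored as unit vectors and retrieved by cosine similarity; the keys and payloads below are synthetic placeholders:

```python
import numpy as np

# Hypothetical exemplar memory: embeddings of stored task queries mapped to
# trajectory exemplars, retrieved by similarity search.
memory_keys = np.random.default_rng(2).standard_normal((48, 128))
memory_keys /= np.linalg.norm(memory_keys, axis=1, keepdims=True)
exemplars = [f"trajectory_{i}" for i in range(48)]   # placeholder payloads

def retrieve(query_emb, k=3):
    """Return the k trajectory exemplars most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    scores = memory_keys @ q
    top = np.argsort(-scores)[:k]
    return [exemplars[i] for i in top]

print(retrieve(np.random.default_rng(3).standard_normal(128)))
```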
Elastic Feature Consolidation For Cold Start Exemplar-Free Incremental Learning
https://openreview.net/forum?id=7D9X2cFnt1
https://openreview.net/forum?id=7D9X2cFnt1
Simone Magistri,Tomaso Trinci,Albin Soutif,Joost van de Weijer,Andrew D. Bagdanov
ICLR 2024,Poster
Exemplar-Free Class Incremental Learning (EFCIL) aims to learn from a sequence of tasks without access to previous task data. In this paper, we consider the challenging Cold Start scenario in which insufficient data is available in the first task to learn a high-quality backbone. This is especially challenging for EFCIL since it requires high plasticity, resulting in feature drift that is difficult to compensate for in the exemplar-free setting. To address this problem, we propose a simple and effective approach that consolidates feature representations by regularizing drift in directions highly relevant to previous tasks and employs prototypes to reduce task-recency bias. Our method, called Elastic Feature Consolidation (EFC), exploits a tractable second-order approximation of feature drift based on an Empirical Feature Matrix (EFM). The EFM induces a pseudo-metric in feature space which we use to regularize feature drift in important directions and to update Gaussian prototypes used in a novel asymmetric cross-entropy loss that effectively balances prototype rehearsal with data from new tasks. Experimental results on CIFAR-100, Tiny-ImageNet, ImageNet-Subset and ImageNet-1K demonstrate that Elastic Feature Consolidation is better able to learn new tasks by maintaining model plasticity and significantly outperforms the state-of-the-art.
https://openreview.net/pdf/2c3f5a622811b3ae9aac02633b78832202cb8de5.pdf
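An illustrative sketch of the EFM-induced drift penalty described above: a positive semi-definite matrix weights feature drift anisotropically, penalizing movement along directions important to previous tasks. The Jacobian stand-in below is an assumption, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in per-sample feature sensitivities from a previous task.
J = rng.standard_normal((100, 32))
E = J.T @ J / len(J)                      # PSD matrix: important directions

f_old = rng.standard_normal(32)           # features before the new task
f_new = f_old + 0.1 * rng.standard_normal(32)
drift = f_new - f_old

# Anisotropic, EFM-weighted drift loss: large where E is large, near zero
# along directions the previous tasks never used.
penalty = drift @ E @ drift
print(penalty)
```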
Adversarial Causal Bayesian Optimization
https://openreview.net/forum?id=YcW8i9VCf5
https://openreview.net/forum?id=YcW8i9VCf5
Scott Sussex,Pier Giuseppe Sessa,Anastasia Makarova,Andreas Krause
ICLR 2024,Poster
In Causal Bayesian Optimization (CBO), an agent intervenes on a structural causal model with known graph but unknown mechanisms to maximize a downstream reward variable. In this paper, we consider the generalization where other agents or external events also intervene on the system, which is key for enabling adaptiveness to non-stationarities such as weather changes, market forces, or adversaries. We formalize this generalization of CBO as Adversarial Causal Bayesian Optimization (ACBO) and introduce the first algorithm for ACBO with bounded regret: Causal Bayesian Optimization with Multiplicative Weights (CBO-MW). Our approach combines a classical online learning strategy with causal modeling of the rewards. To achieve this, it computes optimistic counterfactual reward estimates by propagating uncertainty through the causal graph. We derive regret bounds for CBO-MW that naturally depend on graph-related quantities. We further propose a scalable implementation for the case of combinatorial interventions and submodular rewards. Empirically, CBO-MW outperforms non-causal and non-adversarial Bayesian optimization methods on synthetic environments and environments based on real-world data. Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system and reposition vehicles in strategic areas.
https://openreview.net/pdf/70ae95d2b3d8a042800cb337082f488983922e18.pdf
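A bare-bones multiplicative-weights update, the classical online-learning core that CBO-MW builds on; the uniform random rewards below stand in for the optimistic counterfactual reward estimates propagated through the causal graph:

```python
import numpy as np

eta = 0.5
weights = np.ones(5)                  # one weight per candidate intervention
for t in range(100):
    p = weights / weights.sum()       # play this distribution over actions
    # Stand-in for estimated rewards in [0, 1] for each action at round t.
    rewards = np.random.default_rng(t).uniform(size=5)
    weights *= np.exp(eta * rewards)  # exponential re-weighting
print(weights / weights.sum())        # concentrates on high-reward actions
```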
Text-to-3D with Classifier Score Distillation
https://openreview.net/forum?id=ktG8Tun1Cy
https://openreview.net/forum?id=ktG8Tun1Cy
Xin Yu,Yuan-Chen Guo,Yangguang Li,Ding Liang,Song-Hai Zhang,XIAOJUAN QI
ICLR 2024,Poster
Text-to-3D generation has made remarkable progress recently, particularly with methods based on Score Distillation Sampling (SDS) that leverage pre-trained 2D diffusion models. While the use of classifier-free guidance is well acknowledged to be crucial for successful optimization, it is considered an auxiliary trick rather than the most essential component. In this paper, we re-evaluate the role of classifier-free guidance in score distillation and discover a surprising finding: the guidance alone is enough for effective text-to-3D generation. We name this method Classifier Score Distillation (CSD), which can be interpreted as using an implicit classification model for generation. This new perspective reveals new insights for understanding existing techniques. We validate the effectiveness of CSD across a variety of text-to-3D tasks, including shape generation, texture synthesis, and shape editing, achieving results superior to those of state-of-the-art methods. Our project page is https://xinyu-andy.github.io/Classifier-Score-Distillation
https://openreview.net/pdf/df7c41c77b83e4c02875769a41c6ae52888cd037.pdf
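A schematic sketch of the CSD observation above: where classifier-free-guided SDS combines conditional and unconditional noise predictions with a guidance weight, CSD uses the guidance term alone. The denoiser below is a dummy stand-in for a pre-trained diffusion model, not a real API:

```python
import numpy as np

def denoiser(x, cond):
    """Dummy stand-in for a pre-trained diffusion model's noise prediction."""
    rng = np.random.default_rng(abs(hash(cond)) % 2**32)
    return 0.1 * x + 0.01 * rng.standard_normal(x.shape)

x = np.zeros(4)                            # stand-in for rendered pixels
eps_text = denoiser(x, "a photo of a corgi")
eps_null = denoiser(x, "")                 # unconditional prediction

# Classifier-free-guided SDS uses roughly eps_null + w * (eps_text - eps_null);
# CSD's finding above: the guidance term alone already drives the update.
csd_grad = eps_text - eps_null
x = x - 0.01 * csd_grad                    # one update step on the 3D params
```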
Accurate Forgetting for Heterogeneous Federated Continual Learning
https://openreview.net/forum?id=ShQrnAsbPI
https://openreview.net/forum?id=ShQrnAsbPI
Abudukelimu Wuerkaixi,Sen Cui,Jingfeng Zhang,Kunda Yan,Bo Han,Gang Niu,Lei Fang,Changshui Zhang,Masashi Sugiyama
ICLR 2024,Poster
Recent years have witnessed a burgeoning interest in federated learning (FL). However, the contexts in which clients engage in sequential learning remain underexplored. Bridging FL and continual learning (CL) gives rise to a challenging practical problem: federated continual learning (FCL). Existing research in FCL primarily focuses on mitigating the catastrophic forgetting issue of continual learning while collaborating with other clients. We argue that forgetting phenomena are not invariably detrimental. In this paper, we consider a more practical and challenging FCL setting characterized by potentially unrelated or even antagonistic data/tasks across different clients. In the FL scenario, statistical heterogeneity and data noise among clients may exhibit spurious correlations which result in biased feature learning. While existing CL strategies focus on the complete utilization of previous knowledge, we found in our study that forgetting biased information is beneficial. Therefore, we propose a new concept, accurate forgetting (AF), and develop a novel generative-replay method, AF-FCL, that selectively utilizes previous knowledge in federated networks. We employ a probabilistic framework based on a normalizing flow model to quantify the credibility of previous knowledge. Comprehensive experiments affirm the superiority of our method over baselines.
https://openreview.net/pdf/14a26f9a25e9e7dcddaa0838c85d6631ac65ff95.pdf
GNeRP: Gaussian-guided Neural Reconstruction of Reflective Objects with Noisy Polarization Priors
https://openreview.net/forum?id=pTN8dV2pL8
https://openreview.net/forum?id=pTN8dV2pL8
LI Yang,RUIZHENG WU,Jiyong Li,Ying-Cong Chen
ICLR 2024,Poster
Learning surfaces from neural radiance fields (NeRF) has become a rising topic in Multi-View Stereo (MVS). Recent Signed Distance Function (SDF)-based methods have demonstrated their ability to reconstruct exact 3D shapes of Lambertian scenes. However, their results on reflective scenes are unsatisfactory due to the entanglement of specular radiance and complicated geometry. To address these challenges, we propose a Gaussian-based representation of normals in SDF fields. Supervised by polarization priors, this representation guides the learning of geometry behind the specular reflection and captures more details than existing methods. Moreover, we propose a reweighting strategy in the optimization process to alleviate the noise issue of polarization priors. To validate the effectiveness of our design, we capture polarimetric information and ground-truth meshes in additional reflective scenes with various geometry. We also evaluate our framework on the PANDORA dataset. Both qualitative and quantitative comparisons show that our method outperforms existing neural 3D reconstruction methods in reflective scenes by a large margin.
https://openreview.net/pdf/2683693a971ea2559723db606a4afa7209d505b1.pdf
PoRF: Pose Residual Field for Accurate Neural Surface Reconstruction
https://openreview.net/forum?id=eBeECjacpw
https://openreview.net/forum?id=eBeECjacpw
Jia-Wang Bian,Wenjing Bian,Victor Adrian Prisacariu,Philip Torr
ICLR 2024,Poster
Neural surface reconstruction is sensitive to camera pose noise, even when state-of-the-art pose estimators like COLMAP or ARKit are used. Existing Pose-NeRF joint optimisation methods have struggled to improve pose accuracy in challenging real-world scenarios. To overcome these challenges, we introduce the pose residual field (PoRF), a novel implicit representation that uses an MLP to regress pose updates. Compared with conventional per-frame pose parameter optimisation, this new representation is more robust due to parameter sharing that leverages global information over the entire sequence. Furthermore, we propose an epipolar geometry loss that enhances supervision by leveraging the correspondences exported from COLMAP results, without extra computational overhead. Our method yields promising results. On the DTU dataset, we reduce the rotation error of COLMAP poses by 78\%, reducing the reconstruction Chamfer distance from 3.48mm to 0.85mm. On the MobileBrick dataset, which contains casually captured unbounded 360-degree videos, our method refines ARKit poses and improves the reconstruction F1 score from 69.18 to 75.67, outperforming the result obtained with the provided ground-truth poses (75.14). These achievements demonstrate the efficacy of our approach in refining camera poses and improving the accuracy of neural surface reconstruction in real-world scenarios.
https://openreview.net/pdf/c8d221580d9817be6ed858491fc79a21578b7f4d.pdf
Modelling complex vector drawings with stroke-clouds
https://openreview.net/forum?id=O2jyuo89CK
https://openreview.net/forum?id=O2jyuo89CK
Alexander Ashcroft,Ayan Das,Yulia Gryaditskaya,Zhiyu Qu,Yi-Zhe Song
ICLR 2024,Poster
Vector drawings are innately interactive as they preserve creational cues. Despite this desirable property, they remain relatively underexplored due to the difficulties of modeling complex vector drawings. This is in part due to the primarily _sequential and auto-regressive nature_ of existing approaches, which fail to scale beyond simple drawings. In this paper, we define generative models over _highly complex_ vector drawings by first representing them as “stroke-clouds” – _sets_ of arbitrary cardinality comprised of semantically meaningful strokes. The dimensionality of the strokes is a design choice that allows the model to adapt to a range of complexities. We learn to encode these _sets of strokes_ into compact latent codes by a probabilistic reconstruction procedure backed by _de Finetti’s theorem of exchangeability_. The parametric generative model is then defined over the latent vectors of the encoded stroke-clouds. The resulting “Latent stroke-cloud generator (LSG)” thus captures the distribution of complex vector drawings on an implicit _set space_. We demonstrate the efficacy of our model on complex drawings (a newly created Anime line-art dataset) through a range of generative tasks.
https://openreview.net/pdf/2b24d6e6c960b31b2ea2498b3c81ed0403b2524a.pdf
Spurious Feature Diversification Improves Out-of-distribution Generalization
https://openreview.net/forum?id=d6H4RBi7RH
https://openreview.net/forum?id=d6H4RBi7RH
LIN Yong,Lu Tan,Yifan HAO,Ho Nam Wong,Hanze Dong,WEIZHONG ZHANG,Yujiu Yang,Tong Zhang
ICLR 2024,Poster
Generalization to out-of-distribution (OOD) data is a critical challenge in machine learning. Ensemble-based methods, like weight space ensembles that interpolate model parameters, have been shown to achieve superior OOD performance. However, the underlying mechanism for their effectiveness remains unclear. In this study, we closely examine WiSE-FT, a popular weight space ensemble method that interpolates between a pre-trained and a fine-tuned model. We observe an unexpected "FalseFalseTrue" phenomenon, in which WiSE-FT successfully corrects many cases where each individual model makes incorrect predictions, which contributes significantly to its OOD effectiveness. To gain further insights, we conduct theoretical analysis in a multi-class setting with a large number of spurious features. Our analysis predicts the above phenomenon and further shows that ensemble-based models reduce prediction errors in OOD settings by utilizing a more diverse set of spurious features. Contrary to the conventional wisdom that focuses on learning invariant features for better OOD performance, our findings suggest that incorporating a large number of diverse spurious features weakens their individual contributions, leading to improved overall OOD generalization performance. Additionally, our findings provide the first explanation for the mysterious phenomenon of weight space ensembles outperforming output space ensembles in OOD settings. Empirically, we demonstrate the effectiveness of utilizing diverse spurious features on a MultiColorMNIST dataset, and our experimental results are consistent with the theoretical analysis. Building upon the new theoretical insights into the efficacy of ensemble methods, we further identify an issue of WiSE-FT caused by the overconfidence of fine-tuned models in OOD situations. This overconfidence magnifies the fine-tuned model's incorrect predictions, leading to deteriorated OOD ensemble performance. To remedy this problem, we propose a novel method called BAlaNced averaGing (BANG) to mitigate the overconfidence problem, which significantly enhances the OOD performance of WiSE-FT.
https://openreview.net/pdf/e29eeaf00eacdc1ca3d6d7168d2a3cd9507c9566.pdf
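A minimal sketch of the WiSE-FT-style weight-space ensembling examined above: the pre-trained and fine-tuned parameters are linearly interpolated instead of averaging model outputs. All tensors below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)
# Stand-ins for the pre-trained and fine-tuned parameter dictionaries.
theta_pre = {"w": rng.standard_normal((4, 4)), "b": np.zeros(4)}
theta_ft = {k: v + 0.3 * rng.standard_normal(v.shape) for k, v in theta_pre.items()}

# Weight-space ensemble: interpolate parameters key by key. BANG (above)
# would additionally calibrate the fine-tuned model before averaging.
alpha = 0.5
theta_wise = {k: (1 - alpha) * theta_pre[k] + alpha * theta_ft[k]
              for k in theta_pre}
print(theta_wise["w"].shape)
```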
Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models
https://openreview.net/forum?id=28L2FCtMWq
https://openreview.net/forum?id=28L2FCtMWq
Hyeonho Jeong,Jong Chul Ye
ICLR 2024,Poster
This paper introduces a novel grounding-guided video-to-video translation framework called Ground-A-Video for multi-attribute video editing. Recent endeavors in video editing have showcased promising results in single-attribute editing or style transfer tasks, either by training T2V models on text-video data or by adopting training-free methods. However, when confronted with the complexities of multi-attribute editing scenarios, they exhibit shortcomings such as omitting or overlooking intended attribute changes, modifying the wrong elements of the input video, and failing to preserve regions of the input video that should remain intact. Ground-A-Video attains temporally consistent multi-attribute editing of input videos in a training-free manner without the aforementioned shortcomings. Central to our method is the introduction of cross-frame gated attention, which incorporates grounding information into the latent representations in a temporally consistent fashion, along with Modulated Cross-Attention and optical-flow-guided inverted latents smoothing. Extensive experiments and applications demonstrate that Ground-A-Video's zero-shot capacity outperforms other baseline methods in terms of edit accuracy and frame consistency. Further results and code are available at our project page ( http://ground-a-video.github.io )
https://openreview.net/pdf/e60d863be261d829667f079d32ff13bd44447fa4.pdf
Scalable Language Model with Generalized Continual Learning
https://openreview.net/forum?id=mz8owj4DXu
https://openreview.net/forum?id=mz8owj4DXu
Bohao PENG,Zhuotao Tian,Shu Liu,Ming-Chang Yang,Jiaya Jia
ICLR 2024,Poster
Continual learning has gained increasing importance as it facilitates the acquisition and refinement of scalable knowledge and skills in language models. However, existing methods typically encounter strict limitations and challenges in real-world scenarios, such as reliance on experience replay, optimization constraints, and task-ID awareness at inference. In this study, we introduce the Scalable Language Model (SLM) to overcome these limitations within a more challenging and generalized setting, representing a significant advancement toward practical applications of continual learning. Specifically, we propose Joint Adaptive Re-Parameterization (JARe), integrated with Dynamic Task-related Knowledge Retrieval (DTKR), to enable adaptive adjustment of language models based on specific downstream tasks. This approach leverages the task distribution within the vector space, aiming to achieve a smooth and effortless continual learning process. Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting. Moreover, while prior research has primarily focused on a single task type such as classification, our study goes beyond this, using a large language model, i.e., LLaMA-2, to explore the effects across diverse domains and task types, such that a single language model can be scaled to broader applications. The code and models will be released to the public.
https://openreview.net/pdf/cf1e6ebe071241059492aade16a3e0e9c5b1743c.pdf
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
https://openreview.net/forum?id=vRyp2dhEQp
https://openreview.net/forum?id=vRyp2dhEQp
Ziqiang Li,Hong Sun,Pengfei Xia,Heng Li,Beihao Xia,Yi Wu,Bin Li
ICLR 2024,Poster
Recent deep neural networks (DNNs) have come to rely on vast amounts of training data, providing an opportunity for malicious attackers to exploit and contaminate the data to carry out backdoor attacks. However, existing backdoor attack methods make unrealistic assumptions, assuming that all training data comes from a single source and that attackers have full access to the training data. In this paper, we introduce a more realistic attack scenario where victims collect data from multiple sources, and attackers cannot access the complete training data. We refer to this scenario as $\textbf{data-constrained backdoor attacks}$. In such cases, previous attack methods suffer from severe efficiency degradation due to the $\textbf{entanglement}$ between benign and poisoning features during the backdoor injection process. To tackle this problem, we introduce three CLIP-based technologies from two distinct streams: $\textit{Clean Feature Suppression}$ and $\textit{Poisoning Feature Augmentation}$. The results demonstrate remarkable improvements, with some settings achieving over $\textbf{100}$% improvement compared to existing attacks in data-constrained scenarios.
https://openreview.net/pdf/5f34f9ae213bd62fdd09e90d6bb0201d59750f2f.pdf
Symmetric Basis Convolutions for Learning Lagrangian Fluid Mechanics
https://openreview.net/forum?id=HKgRwNhI9R
https://openreview.net/forum?id=HKgRwNhI9R
Rene Winchenbach,Nils Thuerey
ICLR 2024,Poster
Learning physical simulations has been an essential and central aspect of many recent research efforts in machine learning, particularly for Navier-Stokes-based fluid mechanics. Classic numerical solvers have traditionally been computationally expensive and challenging to use in inverse problems, whereas neural solvers aim to address both concerns through machine learning. We propose a general formulation for continuous convolutions using separable basis functions as a superset of existing methods and evaluate a large set of basis functions in the context of (a) a compressible 1D SPH simulation, (b) a weakly compressible 2D SPH simulation, and (c) an incompressible 2D SPH simulation. We demonstrate that even and odd symmetries included in the basis functions are key aspects of stability and accuracy. Our broad evaluation shows that Fourier-based continuous convolutions outperform all other architectures regarding accuracy and generalization. Finally, using these Fourier-based networks, we show that prior inductive biases, such as window functions, are no longer necessary. An implementation of our approach, as well as complete datasets and solver implementations, is available at https://github.com/orgs/tum-pbs/SFBC.
https://openreview.net/pdf/1b8842aa3a4f09628f2348891163d3d48740a341.pdf
G$^2$N$^2$ : Weisfeiler and Lehman go grammatical
https://openreview.net/forum?id=eZneJ55mRO
https://openreview.net/forum?id=eZneJ55mRO
Jason Piquenot,Aldo Moscatelli,Maxime Berar,Pierre Héroux,Romain Raveaux,Jean-Yves RAMEL,Sébastien Adam
ICLR 2024,Poster
This paper introduces a framework for formally establishing a connection between a portion of an algebraic language and a Graph Neural Network (GNN). The framework leverages Context-Free Grammars (CFG) to organize algebraic operations into generative rules that can be translated into a GNN layer model. As CFGs derived directly from a language tend to contain redundancies in their rules and variables, we present a grammar reduction scheme. By applying this strategy, we define a CFG that conforms to the third-order Weisfeiler-Lehman (3-WL) test using the matricial language MATLANG. From this 3-WL CFG, we derive a GNN model, named G$^2$N$^2$, which is provably 3-WL compliant. Through various experiments, we demonstrate the superior efficiency of G$^2$N$^2$ compared to other 3-WL GNNs across numerous downstream tasks. Specifically, one experiment highlights the benefits of grammar reduction within our framework.
https://openreview.net/pdf/69456b57a8257c76ee87242315e09b10eb30a33e.pdf
VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks
https://openreview.net/forum?id=glwwbaeKm2
https://openreview.net/forum?id=glwwbaeKm2
Zhaomin Wu,Junyi Hou,Bingsheng He
ICLR 2024,Poster
Vertical Federated Learning (VFL) is a crucial paradigm for training machine learning models on feature-partitioned, distributed data. However, due to privacy restrictions, few public real-world VFL datasets exist for algorithm evaluation, and these represent a limited array of feature distributions. Existing benchmarks often resort to synthetic datasets, derived from arbitrary feature splits from a global set, which only capture a subset of feature distributions, leading to inadequate algorithm performance assessment. This paper addresses these shortcomings by introducing two key factors affecting VFL performance - feature importance and feature correlation - and proposing associated evaluation metrics and dataset splitting methods. Additionally, we introduce a real VFL dataset to address the deficit in image-image VFL scenarios. Our comprehensive evaluation of cutting-edge VFL algorithms provides valuable insights for future research in the field.
https://openreview.net/pdf/a8cf2e01c93c35a2ae7c336331cf9de773a11782.pdf
Multimodal Web Navigation with Instruction-Finetuned Foundation Models
https://openreview.net/forum?id=efFmBWioSc
https://openreview.net/forum?id=efFmBWioSc
Hiroki Furuta,Kuang-Huei Lee,Ofir Nachum,Yutaka Matsuo,Aleksandra Faust,Shixiang Shane Gu,Izzeddin Gur
ICLR 2024,Poster
The progress of autonomous web navigation has been hindered by the dependence on billions of exploratory interactions via online reinforcement learning, and by domain-specific model designs that make it difficult to leverage generalization from rich out-of-domain data. In this work, we study data-driven offline training for web agents with vision-language foundation models. We propose an instruction-following multimodal agent, WebGUM, that observes both webpage screenshots and HTML pages and outputs web navigation actions, such as click and type. WebGUM is trained by jointly finetuning an instruction-finetuned language model and a vision encoder with temporal and local perception on a large corpus of demonstrations. We empirically demonstrate that this recipe improves the agent's abilities in grounded multimodal perception, HTML comprehension, and multi-step reasoning, outperforming prior works by a significant margin. On MiniWoB, we improve over the previous best offline methods by more than 45.8%, even outperforming online-finetuned SoTA, humans, and a GPT-4-based agent. On the WebShop benchmark, our 3-billion-parameter model achieves superior performance to the existing SoTA, PaLM-540B. Furthermore, WebGUM exhibits strong positive transfer to real-world planning tasks on Mind2Web. We also collect 347K high-quality demonstrations using our trained models, 38 times larger than prior work, and make them available to promote future research in this direction.
https://openreview.net/pdf/fabfa32c6fc766d096c8a789e2bd35887e182190.pdf
Real-Fake: Effective Training Data Synthesis Through Distribution Matching
https://openreview.net/forum?id=svIdLLZpsA
https://openreview.net/forum?id=svIdLLZpsA
Jianhao Yuan,Jie Zhang,Shuyang Sun,Philip Torr,Bo Zhao
ICLR 2024,Poster
Synthetic training data has gained prominence in numerous learning tasks and scenarios, offering advantages such as dataset augmentation, generalization evaluation, and privacy preservation. Despite these benefits, the effectiveness of synthetic data generated by current methodologies remains inferior when training advanced deep models exclusively, limiting its practical utility. To address this challenge, we analyze the principles underlying training data synthesis for supervised learning and elucidate a principled theoretical framework from the distribution-matching perspective that explicates the mechanisms governing synthesis efficacy. Through extensive experiments, we demonstrate the effectiveness of our synthetic data across diverse image classification tasks, both as a replacement for and an augmentation to real datasets, while also providing benefits such as out-of-distribution generalization, privacy preservation, and scalability. Specifically, we achieve 70.9% top-1 classification accuracy on ImageNet1K when training solely with synthetic data equivalent to 1× the original real data size, which increases to 76.0% when scaling up to 10× synthetic data.
https://openreview.net/pdf/22c2e29e22c1cc145ff0d253589743fdd0e72267.pdf
Learning Conditional Invariances through Non-Commutativity
https://openreview.net/forum?id=tUVG9nGzgE
https://openreview.net/forum?id=tUVG9nGzgE
Abhra Chaudhuri,Serban Georgescu,Anjan Dutta
ICLR 2024,Poster
Invariance learning algorithms that conditionally filter out domain-specific random variables as distractors, do so based only on the data semantics, and not the target domain under evaluation. We show that a provably optimal and sample-efficient way of learning conditional invariances is by relaxing the invariance criterion to be non-commutatively directed towards the target domain. Under domain asymmetry, i.e., when the target domain contains semantically relevant information absent in the source, the risk of the encoder $\varphi^*$ that is optimal on average across domains is strictly lower-bounded by the risk of the target-specific optimal encoder $\Phi^*_\tau$. We prove that non-commutativity steers the optimization towards $\Phi^*_\tau$ instead of $\varphi^*$, bringing the $\mathcal{H}$-divergence between domains down to zero, leading to a stricter bound on the target risk. Both our theory and experiments demonstrate that non-commutative invariance (NCI) can leverage source domain samples to meet the sample complexity needs of learning $\Phi^*_\tau$, surpassing SOTA invariance learning algorithms for domain adaptation, at times by over 2\%, approaching the performance of an oracle. Implementation is available at https://github.com/abhrac/nci.
https://openreview.net/pdf/097ce135c5619a7cc846346c13f8f9ab6686ebf2.pdf
GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher
https://openreview.net/forum?id=MbfAK4s61A
https://openreview.net/forum?id=MbfAK4s61A
Youliang Yuan,Wenxiang Jiao,Wenxuan Wang,Jen-tse Huang,Pinjia He,Shuming Shi,Zhaopeng Tu
ICLR 2024,Poster
Safety lies at the core of the development of Large Language Models (LLMs). There is ample work on aligning LLMs with human ethics and preferences, including data filtering in pretraining, supervised fine-tuning, reinforcement learning from human feedback, red teaming, etc. In this study, we discover that chat in cipher can bypass the safety alignment techniques of LLMs, which are mainly conducted in natural languages. We propose a novel framework, CipherChat, to systematically examine the generalizability of safety alignment to non-natural languages -- ciphers. CipherChat enables humans to chat with LLMs through cipher prompts topped with system role descriptions and few-shot enciphered demonstrations. We use CipherChat to assess state-of-the-art LLMs, including ChatGPT and GPT-4, with different representative human ciphers across 11 safety domains in both English and Chinese. Experimental results show that certain ciphers succeed almost 100% of the time in bypassing the safety alignment of GPT-4 in several safety domains, demonstrating the necessity of developing safety alignment for non-natural languages. Notably, we identify that LLMs seem to have a "secret cipher", and propose a novel SelfCipher that uses only role play and several unsafe demonstrations in natural language to evoke this capability. SelfCipher surprisingly outperforms existing human ciphers in almost all cases.
https://openreview.net/pdf/0aa92cda622bd9e644859fb23d3f880aa272063e.pdf
Towards the Fundamental Limits of Knowledge Transfer over Finite Domains
https://openreview.net/forum?id=Zh2iqiOtMt
https://openreview.net/forum?id=Zh2iqiOtMt
Qingyue Zhao,Banghua Zhu
ICLR 2024,Poster
We characterize the statistical efficiency of knowledge transfer through $n$ samples from a teacher to a probabilistic student classifier with input space $\mathcal{S}$ over labels $\mathcal{A}$. We show that privileged information at three progressive levels accelerates the transfer. At the first level, only samples with hard labels are known, via which the maximum likelihood estimator attains the minimax rate $\sqrt{{|\mathcal{S}||\mathcal{A}|}/{n}}$. The second level additionally has the teacher probabilities of sampled labels available, which turns out to boost the convergence rate lower bound to ${{|\mathcal{S}||\mathcal{A}|}/{n}}$. However, under this second data acquisition protocol, minimizing a naive adaptation of the cross-entropy loss results in an asymptotically biased student. We overcome this limitation and achieve the fundamental limit by using a novel empirical variant of the squared error logit loss. The third level further equips the student with the soft labels (complete logits) on $\mathcal{A}$ given every sampled input, thereby provably enabling the student to enjoy a rate ${|\mathcal{S}|}/{n}$ free of $|\mathcal{A}|$. We find any Kullback-Leibler divergence minimizer to be optimal in the last case. Numerical simulations distinguish the four learners and corroborate our theory.
https://openreview.net/pdf/91a46741e4bab3b0db520fd407c05afbd81f33b1.pdf
Self-Supervised Speech Quality Estimation and Enhancement Using Only Clean Speech
https://openreview.net/forum?id=ale56Ya59q
https://openreview.net/forum?id=ale56Ya59q
Szu-Wei Fu,Kuo-Hsuan Hung,Yu Tsao,Yu-Chiang Frank Wang
ICLR 2024,Poster
Speech quality estimation has recently undergone a paradigm shift from human-hearing expert designs to machine-learning models. However, current models rely mainly on supervised learning, which is time-consuming and expensive for label collection. To solve this problem, we propose VQScore, a self-supervised metric for evaluating speech based on the quantization error of a vector-quantized variational autoencoder (VQ-VAE). The training of the VQ-VAE relies on clean speech; hence, large quantization errors can be expected when the speech is distorted. To further improve correlation with real quality scores, domain knowledge of speech processing is incorporated into the model design. We found that the vector quantization mechanism can also be used for self-supervised speech enhancement (SE) model training. To improve the robustness of the encoder for SE, a novel self-distillation mechanism combined with adversarial training is introduced. In summary, the proposed speech quality estimation method and enhancement models require only clean speech for training, without any label requirements. Experimental results show that the proposed VQScore and enhancement model are competitive with supervised baselines. The code and pre-trained models will be released.
https://openreview.net/pdf/a84c30a39e43e373896b024b83a70414f6acaacb.pdf
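An illustrative sketch of the VQScore intuition above: with a codebook learned on clean speech, quantization error is small for clean-like encoder outputs and grows with distortion. The codebook and features below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
codebook = rng.standard_normal((512, 64))        # stand-in: learned on clean speech

def vq_score(z_e):
    """Negative mean distance from encoder outputs to their nearest codes;
    higher means closer to the clean-speech manifold."""
    d = np.linalg.norm(z_e[:, None, :] - codebook[None, :, :], axis=-1)
    return -d.min(axis=1).mean()

z_clean = codebook[rng.integers(0, 512, 50)] + 0.05 * rng.standard_normal((50, 64))
z_noisy = z_clean + 0.5 * rng.standard_normal((50, 64))
print(vq_score(z_clean), vq_score(z_noisy))      # clean scores higher
```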
Internal Cross-layer Gradients for Extending Homogeneity to Heterogeneity in Federated Learning
https://openreview.net/forum?id=Cc0qk6r4Nd
https://openreview.net/forum?id=Cc0qk6r4Nd
Yun-Hin Chan,Rui Zhou,Running Zhao,Zhihan JIANG,Edith C. H. Ngai
ICLR 2024,Poster
Federated learning (FL) inevitably confronts the challenge of system heterogeneity in practical scenarios. To enhance the capabilities of most model-homogeneous FL methods in handling system heterogeneity, we propose a training scheme that extends their capabilities to cope with this challenge. In this paper, we commence our study with a detailed exploration of homogeneous and heterogeneous FL settings and discover three key observations: (1) a positive correlation between client performance and layer similarities, (2) higher similarities in the shallow layers than in the deep layers, and (3) smoother gradient distributions indicating higher layer similarities. Building upon these observations, we propose InCo Aggregation, which leverages internal cross-layer gradients, a mixture of gradients from shallow and deep layers within a server model, to augment the similarity in the deep layers without requiring additional communication between clients. Furthermore, our methods can be tailored to accommodate model-homogeneous FL methods such as FedAvg, FedProx, FedNova, Scaffold, and MOON, expanding their capabilities to handle system heterogeneity. Copious experimental results validate the effectiveness of InCo Aggregation, spotlighting internal cross-layer gradients as a promising avenue for enhancing performance in heterogeneous FL.
https://openreview.net/pdf/6d4693c776d7edb7d372e746136af676a5d52c04.pdf
Contrastive Learning is Spectral Clustering on Similarity Graph
https://openreview.net/forum?id=hLZQTFGToA
https://openreview.net/forum?id=hLZQTFGToA
Zhiquan Tan,Yifan Zhang,Jingqin Yang,Yang Yuan
ICLR 2024,Poster
Contrastive learning is a powerful self-supervised learning method, but we have a limited theoretical understanding of how it works and why it works. In this paper, we prove that contrastive learning with the standard InfoNCE loss is equivalent to spectral clustering on the similarity graph. Using this equivalence as the building block, we extend our analysis to the CLIP model and rigorously characterize how similar multi-modal objects are embedded together. Motivated by our theoretical insights, we introduce the Kernel-InfoNCE loss, incorporating mixtures of kernel functions that outperform the standard Gaussian kernel on several vision datasets.
https://openreview.net/pdf/c2cad0470a643088313c949c261f8df1c7c269f0.pdf
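For reference, a small numpy version of the standard InfoNCE loss that the equivalence above concerns; positive pairs sit on the diagonal of the similarity matrix, and the batch and temperature below are illustrative:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE loss over a batch of positive pairs (z1[i], z2[i]);
    the paper above shows optimizing it performs spectral clustering on the
    augmentation similarity graph."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                    # pairwise cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))         # positives on the diagonal

rng = np.random.default_rng(7)
z = rng.standard_normal((8, 16))
print(info_nce(z, z + 0.1 * rng.standard_normal((8, 16))))
```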
On the Generalization and Approximation Capacities of Neural Controlled Differential Equations
https://openreview.net/forum?id=kILAd8RdzA
https://openreview.net/forum?id=kILAd8RdzA
Linus Bleistein,Agathe Guilloux
ICLR 2024,Poster
Neural Controlled Differential Equations (NCDEs) are a state-of-the-art tool for supervised learning with irregularly sampled time series (Kidger 2020). However, no theoretical analysis of their performance has been provided yet, and it remains unclear in particular how the roughness of the sampling affects their predictions. By merging the rich theory of controlled differential equations (CDEs) and Lipschitz-based measures of the complexity of deep neural nets, we take a first step towards the theoretical understanding of NCDEs. Our first result is a sampling-dependent generalization bound for this class of predictors. Second, we leverage the continuity of the flow of CDEs to provide a detailed analysis of both the sampling-induced bias and the approximation bias. Regarding this last result, we show how classical approximation results on neural nets may transfer to NCDEs. Our theoretical results are validated through a series of experiments.
https://openreview.net/pdf/8dcf437df5c1190e5c595dbb8121b3aa0dc4f275.pdf
Epitopological learning and Cannistraci-Hebb network shape intelligence brain-inspired theory for ultra-sparse advantage in deep learning
https://openreview.net/forum?id=iayEcORsGd
https://openreview.net/forum?id=iayEcORsGd
Yingtao Zhang,Jialin Zhao,Wenjing Wu,Alessandro Muscoloni,Carlo Vittorio Cannistraci
ICLR 2024,Poster
Sparse training (ST) aims to ameliorate deep learning by replacing fully connected artificial neural networks (ANNs) with sparse or ultra-sparse ones, as brain networks are; it might therefore be beneficial to borrow brain-inspired learning paradigms from complex network intelligence theory. Here, we launch the ultra-sparse advantage challenge, whose goal is to offer evidence on the extent to which ultra-sparse (around 1\% of connections retained) topologies can achieve any learning advantage over fully connected ones. Epitopological learning is a field of network science and complex network intelligence that studies how to implement learning on complex networks by changing the shape of their connectivity structure (epitopological plasticity). One way to implement epitopological (epi- means new) learning is via link prediction: predicting the likelihood that non-observed links appear in the network. The Cannistraci-Hebb learning theory inspired the CH3-L3 network automata rule, which is effective for general-purpose link prediction. Here, starting from CH3-L3, we propose Epitopological Sparse Meta-deep Learning (ESML) to apply epitopological learning to sparse training. In empirical experiments, we find that ESML learns ANNs with an ultra-sparse hyperbolic (epi-)topology in which a community layer organization emerges that is meta-deep (meaning that each layer also has an internal depth due to power-law node hierarchy). Furthermore, we discover that ESML can in many cases automatically sparsify neurons during training (leaving as few as 30\% of neurons in hidden layers); this process of dynamic node removal is called percolation. Building on this network science evidence, we design Cannistraci-Hebb training (CHT), a 4-step training methodology that puts ESML at its heart. We conduct experiments on 7 datasets and 5 network structures, comparing CHT to dynamic sparse training SOTA algorithms and fully connected counterparts. The results indicate that, with a mere 1\% of links retained during training, CHT surpasses fully connected networks on VGG16, GoogLeNet, ResNet50, and ResNet152. This key finding is evidence for the ultra-sparse advantage and marks a milestone in deep learning. CHT acts akin to a gradient-free oracle that adopts CH3-L3-based epitopological learning to guide the placement of new links in the ultra-sparse network topology to facilitate sparse-weight gradient learning, which in turn reduces the convergence time of ultra-sparse training. Finally, CHT offers the first examples of parsimonious dynamic sparse training because, on many datasets, it can retain network performance while percolating and significantly reducing the node network size. Our code is available at: https://github.com/biomedical-cybernetics/Cannistraci-Hebb-training
https://openreview.net/pdf/412cf94f4a0ac5183d635f2cd88565c0004ca7d6.pdf
Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models
https://openreview.net/forum?id=Tr0lPx9woF
https://openreview.net/forum?id=Tr0lPx9woF
Yingtao Zhang,Haoli Bai,Haokun Lin,Jialin Zhao,Lu Hou,Carlo Vittorio Cannistraci
ICLR 2024,Poster
With the rapid growth of large language models (LLMs), there is increasing demand for memory and computation in LLMs. Recent efforts on post-training pruning of LLMs aim to reduce the model size and computation requirements, yet the performance is still sub-optimal. In this paper, we present a plug-and-play solution for post-training pruning of LLMs. The proposed solution has two innovative components: 1) **Relative Importance and Activations (RIA)**, a new pruning metric that jointly considers the weights and activations efficiently on LLMs, and 2) **Channel Permutation**, a new approach to maximally preserve important weights under N:M sparsity. The two proposed components can be readily combined to further enhance the N:M semi-structured pruning of LLMs. Our empirical experiments show that RIA alone can already surpass all existing post-training pruning methods on prevalent LLMs, e.g., LLaMA ranging from 7B to 65B. Furthermore, N:M semi-structured pruning with channel permutation can even outperform the original LLaMA2-70B on zero-shot tasks, together with practical speed-ups on specific hardware. Our code is available at: https://github.com/biomedical-cybernetics/Relative-importance-and-activation-pruning
https://openreview.net/pdf/b87692f5460a1e63c5b810c540ddead2a4f73570.pdf
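A plausible reading of the RIA metric sketched above (the exact normalizations and exponents follow the paper and its code): weight magnitudes are normalized by their row and column sums and scaled by input activation norms before thresholding. All tensors below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)
W = rng.standard_normal((16, 32))                 # stand-in layer weights
# Per-input-channel activation norms over a calibration batch (assumed).
x_norm = np.linalg.norm(rng.standard_normal((100, 32)), axis=0)

A = np.abs(W)
# Relative importance: each weight relative to its row and column totals.
ri = A / A.sum(axis=1, keepdims=True) + A / A.sum(axis=0, keepdims=True)
ria = ri * np.sqrt(x_norm)[None, :]               # activation-aware importance

# Unstructured 50% pruning: drop the lowest-scoring half of the weights.
mask = ria >= np.median(ria)
W_pruned = W * mask
print(mask.mean())                                 # ~0.5 retained
```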
Universal Jailbreak Backdoors from Poisoned Human Feedback
https://openreview.net/forum?id=GxCGsxiAaK
https://openreview.net/forum?id=GxCGsxiAaK
Javier Rando,Florian Tramèr
ICLR 2024,Poster
Reinforcement Learning from Human Feedback (RLHF) is used to align large language models to produce helpful and harmless responses. Yet, these models can be jailbroken by finding adversarial prompts that revert the model to its unaligned behavior. In this paper, we consider a new threat where an attacker poisons the RLHF data to embed a jailbreak trigger into the model as a backdoor. The trigger then acts like a universal sudo command, enabling arbitrary harmful responses without the need to search for an adversarial prompt. Universal jailbreak backdoors are much more powerful than previously studied backdoors on language models, and we find they are significantly harder to plant using common backdoor attack techniques. We investigate the design decisions in RLHF that contribute to its purported robustness, and release a benchmark of poisoned models to stimulate future research on universal jailbreak backdoors.
https://openreview.net/pdf/c0bc2ef77ed2fd6433e448da1152449296db831c.pdf
Neural Field Classifiers via Target Encoding and Classification Loss
https://openreview.net/forum?id=9NqC72m31m
https://openreview.net/forum?id=9NqC72m31m
Xindi Yang,Zeke Xie,Xiong Zhou,Boyu Liu,Buhua Liu,Yi Liu,Haoran Wang,YUNFENG CAI,Mingming Sun
ICLR 2024,Poster
Neural field methods have seen great progress in various long-standing tasks in computer vision and computer graphics, including novel view synthesis and geometry reconstruction. As existing neural field methods try to predict some coordinate-based continuous target values, such as RGB for Neural Radiance Fields (NeRF), all of these methods are regression models optimized by some regression loss. However, are regression models really better than classification models for neural field methods? In this work, we revisit this very fundamental but overlooked question for neural fields from a machine learning perspective. We propose a novel Neural Field Classifier (NFC) framework that formulates existing neural field methods as classification tasks rather than regression tasks. The proposed NFC can easily transform an arbitrary Neural Field Regressor (NFR) into its classification variant by employing a novel Target Encoding module and optimizing a classification loss. By encoding a continuous regression target into a high-dimensional discrete encoding, we naturally formulate a multi-label classification task. Extensive experiments demonstrate the impressive effectiveness of NFC at nearly zero extra computational cost. Moreover, NFC also shows robustness to sparse inputs, corrupted images, and dynamic scenes.
https://openreview.net/pdf/e18dd15b02cc89d63788c9f664ee2f834c306a26.pdf
Fusion Is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection
https://openreview.net/forum?id=3VD4PNEt5q
https://openreview.net/forum?id=3VD4PNEt5q
Zhiyuan Cheng,Hongjun Choi,Shiwei Feng,James Chenhao Liang,Guanhong Tao,Dongfang Liu,Michael Zuzak,Xiangyu Zhang
ICLR 2024,Poster
Multi-sensor fusion (MSF) is widely used in autonomous vehicles (AVs) for perception, particularly for 3D object detection with camera and LiDAR sensors. The purpose of fusion is to capitalize on the advantages of each modality while minimizing its weaknesses. Advanced deep neural network (DNN)-based fusion techniques have demonstrated exceptional, industry-leading performance. Due to the redundant information across modalities, MSF is also recognized as a general defense strategy against adversarial attacks. In this paper, we attack fusion models from the camera modality, which is considered less important in fusion but is more affordable for attackers. We argue that the weakest link of fusion models depends on their most vulnerable modality, and propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks. Our approach employs a two-stage optimization-based strategy that first thoroughly evaluates vulnerable image areas under adversarial attacks, and then applies dedicated attack strategies to different fusion models to generate deployable patches. Evaluations with six advanced camera-LiDAR fusion models and one camera-only model indicate that our attacks successfully compromise all of them. Our approach can either decrease the mean average precision (mAP) of detection performance from 0.824 to 0.353 or degrade the detection score of a target object from 0.728 to 0.156, demonstrating the efficacy of our proposed attack framework. Code is available.
https://openreview.net/pdf/8982787a4eb9db2c27eb34b60ae0b93d4442453e.pdf
LipVoicer: Generating Speech from Silent Videos Guided by Lip Reading
https://openreview.net/forum?id=ZZCPSC5OgD
https://openreview.net/forum?id=ZZCPSC5OgD
Yochai Yemini,Aviv Shamsian,Lior Bracha,Sharon Gannot,Ethan Fetaya
ICLR 2024,Poster
Lip-to-speech involves generating natural-sounding speech synchronized with a soundless video of a person talking. Despite recent advances, current methods still cannot produce high-quality speech with high levels of intelligibility for challenging and realistic datasets such as LRS3. In this work, we present LipVoicer, a novel method that generates high-quality speech, even for in-the-wild and rich datasets, by incorporating the text modality. Given a silent video, we first predict the spoken text using a pre-trained lip-reading network. We then condition a diffusion model on the video and use the extracted text through a classifier-guidance mechanism, where a pre-trained automatic speech recognition (ASR) model serves as the classifier. LipVoicer outperforms multiple lip-to-speech baselines on LRS2 and LRS3, which are in-the-wild datasets with hundreds of unique speakers in their test sets and an unrestricted vocabulary. Moreover, our experiments show that the inclusion of the text modality plays a major role in the intelligibility of the produced speech, is readily perceptible while listening, and is empirically reflected in a substantial reduction of the word error rate (WER) metric. We demonstrate the effectiveness of LipVoicer through human evaluation, which shows that it produces more natural and synchronized speech signals than competing methods. Finally, we created a demo showcasing LipVoicer’s superiority in producing natural, synchronized, and intelligible speech, providing additional evidence of its effectiveness. Project page and code: https://github.com/yochaiye/LipVoicer
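The ASR-based guidance follows the standard classifier-guidance recipe; in score form it looks roughly as follows, where $x$ is the generated speech representation, $t$ the predicted transcription, and $s$ a guidance scale (a sketch of the general mechanism, not LipVoicer's exact conditioning):

```latex
\nabla_x \log p(x \mid t) \;\approx\; \nabla_x \log p(x) \;+\; s \, \nabla_x \log p_{\mathrm{ASR}}(t \mid x)
```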
https://openreview.net/pdf/7380272c13a2c1f98c23701c4692e8c32c21a054.pdf
Window Attention is Bugged: How not to Interpolate Position Embeddings
https://openreview.net/forum?id=IPhm01y9a9
https://openreview.net/forum?id=IPhm01y9a9
Daniel Bolya,Chaitanya Ryali,Judy Hoffman,Christoph Feichtenhofer
ICLR 2024,Poster
Window attention, position embeddings, and high resolution finetuning are core concepts in the modern transformer era of computer vision. However, we find that naively combining these near ubiquitous components can have a detrimental effect on performance. The issue is simple: interpolating position embeddings while using window attention is wrong. We study two state-of-the-art methods that have these three components, namely Hiera and ViTDet, and find that both do indeed suffer from this bug. To fix it, we introduce a simple absolute window position embedding strategy, which solves the bug outright in Hiera and allows us to increase both speed and performance of the model in ViTDet. We finally combine the two to obtain HieraDet, which achieves 61.7 box mAP on COCO, making it state-of-the-art for models that only use ImageNet-1k pretraining. This all stems from what is essentially a 3 line bug fix, which we name "absolute win".
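A hedged sketch of the described strategy: keep a window-sized embedding that is tiled across the feature map (and therefore never interpolated), and interpolate only a separate global component. The function and tensor layout below are illustrative assumptions, not the released code.

```python
import torch
import torch.nn.functional as F

def absolute_win_pos_embed(win_embed: torch.Tensor, global_embed: torch.Tensor,
                           h: int, w: int) -> torch.Tensor:
    """win_embed: (wh, ww, c) window embedding; global_embed: (gh, gw, c).
    Returns an (h, w, c) position embedding for the new resolution (h, w),
    assuming h and w are multiples of the window size."""
    wh, ww, _ = win_embed.shape
    tiled = win_embed.repeat(h // wh, w // ww, 1)  # tiled, never interpolated
    glob = F.interpolate(global_embed.permute(2, 0, 1)[None], size=(h, w),
                         mode="bicubic", align_corners=False)[0].permute(1, 2, 0)
    return tiled + glob
```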
https://openreview.net/pdf/777da186eb4e6a60ec7bc9f95947a07cad533f5b.pdf
Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches
https://openreview.net/forum?id=uXjfOmTiDt
https://openreview.net/forum?id=uXjfOmTiDt
Lingxuan Wu,Xiao Yang,Yinpeng Dong,Liuwei XIE,Hang Su,Jun Zhu
ICLR 2024,Poster
The vulnerability of deep neural networks to adversarial patches has motivated numerous defense strategies for boosting model robustness. However, the prevailing defenses depend on a single observation or pre-established adversary information to counter adversarial patches, often failing against unseen or adaptive adversarial attacks and exhibiting unsatisfactory performance in dynamic 3D environments. Inspired by active human perception and recurrent feedback mechanisms, we develop Embodied Active Defense (EAD), a proactive defensive strategy that actively contextualizes environmental information to address misaligned adversarial patches in 3D real-world settings. To achieve this, EAD develops two central recurrent sub-modules, i.e., a perception module and a policy module, to implement two critical functions of active vision. These modules recurrently process a series of beliefs and observations, facilitating progressive refinement of their comprehension of the target object and enabling the development of strategic actions to counter adversarial patches in 3D environments. To optimize learning efficiency, we incorporate a differentiable approximation of environmental dynamics and deploy patches that are agnostic to the adversary’s strategies. Extensive experiments demonstrate that EAD substantially enhances robustness against a variety of patches within just a few steps through its action policy in safety-critical tasks (e.g., face recognition and object detection), without compromising standard accuracy. Furthermore, owing to its attack-agnostic design, EAD generalizes well to unseen attacks, diminishing the average attack success rate by 95% across a range of unseen adversarial attacks.
https://openreview.net/pdf/d08faf22abcf35007f6cbba6dedae6968afc0302.pdf
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
https://openreview.net/forum?id=1SbkubNdbW
https://openreview.net/forum?id=1SbkubNdbW
Lukas Struppek,Dominik Hintersdorf,Kristian Kersting
ICLR 2024,Poster
Label smoothing – using softened labels instead of hard ones – is a widely adopted regularization method for deep learning, showing diverse benefits such as enhanced generalization and calibration. Its implications for preserving model privacy, however, have remained unexplored. To fill this gap, we investigate the impact of label smoothing on model inversion attacks (MIAs), which aim to generate class-representative samples by exploiting the knowledge encoded in a classifier, thereby inferring sensitive information about its training data. Through extensive analyses, we uncover that traditional label smoothing fosters MIAs, thereby increasing a model's privacy leakage. Moreover, we reveal that smoothing with negative factors counters this trend, impeding the extraction of class-related information and leading to privacy preservation, beating state-of-the-art defenses. This establishes a practical and powerful novel way to enhance model resilience against MIAs.
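For reference, the smoothing operation itself is one line; setting `alpha` below zero gives the negative-factor smoothing the abstract studies (this sketch assumes the conventional smoothing formula).

```python
import torch

def smooth_labels(targets: torch.Tensor, n_classes: int, alpha: float) -> torch.Tensor:
    """Standard label smoothing; alpha < 0 sharpens rather than softens the targets."""
    one_hot = torch.nn.functional.one_hot(targets, n_classes).float()
    return (1.0 - alpha) * one_hot + alpha / n_classes
```

With a negative `alpha`, the target class receives weight above one and the remaining classes receive small negative weights, which is the regime reported to impede model inversion.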
https://openreview.net/pdf/4f5b4c982dbf157a729ec2be95eb6562ec9deed1.pdf
Continuous Invariance Learning
https://openreview.net/forum?id=70IgE3tRbu
https://openreview.net/forum?id=70IgE3tRbu
LIN Yong,Fan Zhou,Lu Tan,Lintao Ma,Jianmeng Liu,Yansu HE,Yuan Yuan,Yu Liu,James Y. Zhang,Yujiu Yang,Hao Wang
ICLR 2024,Poster
Invariance learning methods aim to learn invariant features in the hope that they generalize under distributional shift. Although many tasks are naturally characterized by continuous domains, current invariance learning techniques generally assume categorically indexed domains. For example, auto-scaling in cloud computing often needs a CPU utilization prediction model that generalizes across different times (e.g., time of a day and date of a year), where `time' is a continuous domain index. In this paper, we start by theoretically showing that existing invariance learning methods can fail for continuous domain problems. Specifically, the naive solution of splitting continuous domains into discrete ones ignores the underlying relationship among domains, and therefore potentially leads to suboptimal performance. To address this challenge, we then propose Continuous Invariance Learning (CIL), which extracts invariant features across continuously indexed domains. CIL is a novel adversarial procedure which measures and controls the conditional independence between the labels and continuous domain indices given the extracted features. Our theoretical analysis demonstrates that CIL learns features that satisfy the invariant constraint with infinite samples. Empirical results on both synthetic and real-world datasets (including data collected from production systems) show that CIL consistently outperforms strong baselines among all the tasks.
https://openreview.net/pdf/a60576de00dc34e15ed15f5b0bb1ebce91045d11.pdf
ZipIt! Merging Models from Different Tasks without Training
https://openreview.net/forum?id=LEYUkvdUhq
https://openreview.net/forum?id=LEYUkvdUhq
George Stoica,Daniel Bolya,Jakob Brandt Bjorner,Pratik Ramesh,Taylor Hearn,Judy Hoffman
ICLR 2024,Poster
Typical deep visual recognition models are capable of performing the one task they were trained on. In this paper, we tackle the extremely difficult problem of combining distinct models with different initializations, each solving a separate task, into one multi-task model without any additional training. Prior work in model merging permutes one model into the space of the other and then averages them together. While this works for models trained on the same task, we find that it fails to account for the differences in models trained on disjoint tasks. Thus, we introduce "ZipIt!", a general method for merging two arbitrary models of the same architecture that incorporates two simple strategies. First, in order to account for features that aren't shared between models, we expand the model merging problem to allow for merging features within each model by defining a general "zip" operation. Second, we add support for partially zipping the models up until a specified layer, naturally creating a multi-head model. We find that these two changes combined account for a 20-60% improvement over prior work, making it more feasible to merge models trained on disjoint tasks without retraining.
https://openreview.net/pdf/3c38571bb433cfd18849cfb475d86e2ee6f6d8b5.pdf
Hybrid Distillation: Connecting Masked Autoencoders with Contrastive Learners
https://openreview.net/forum?id=jUWktnsplU
https://openreview.net/forum?id=jUWktnsplU
Bowen Shi,XIAOPENG ZHANG,Yaoming Wang,Jin Li,Wenrui Dai,Junni Zou,Hongkai Xiong,Qi Tian
ICLR 2024,Poster
As two prominent strategies for representation learning, Contrastive Learning (CL) and Masked Image Modeling (MIM) have witnessed significant progress. Previous studies have demonstrated the advantages of each approach in specific scenarios. CL, resembling supervised pre-training, excels at capturing longer-range global patterns and enhancing feature discrimination, while MIM is adept at introducing local and diverse attention across transformer layers. Considering the respective strengths, previous studies utilize feature distillation to inherit both discrimination and diversity. In this paper, we thoroughly examine previous feature distillation methods and observe that the increase in diversity mainly stems from asymmetric designs, which may in turn compromise the discrimination ability. To strike a balance between the two properties, we propose a simple yet effective strategy termed Hybrid Distill, which leverages both the CL and MIM teachers to jointly guide the student model. Hybrid Distill emulates the token relations of the MIM teacher at intermediate layers for diversity, while simultaneously distilling the final features of the CL teacher to enhance discrimination. A progressive redundant token masking strategy is employed to reduce the expenses associated with distillation and aid in preventing the model from converging to local optima. Experimental results demonstrate that Hybrid Distill achieves superior performance on various benchmark datasets.
https://openreview.net/pdf/344a2f6d3bfb57ff4a94ee9491cb6050ee6fed3a.pdf
Facing the Elephant in the Room: Visual Prompt Tuning or Full finetuning?
https://openreview.net/forum?id=bJx4iOIOxn
https://openreview.net/forum?id=bJx4iOIOxn
Cheng Han,Qifan Wang,Yiming Cui,Wenguan Wang,Lifu Huang,Siyuan Qi,Dongfang Liu
ICLR 2024,Poster
As the scale of vision models continues to grow, the emergence of Visual Prompt Tuning (VPT) as a parameter-efficient transfer learning technique has gained attention due to its superior performance compared to traditional full finetuning. However, the conditions favoring VPT (the "when") and the underlying rationale (the "why") remain unclear. In this paper, we conduct a comprehensive analysis across 19 distinct datasets and tasks. To understand the "when" aspect, we identify the scenarios where VPT proves favorable along two dimensions: task objectives and data distributions. We find that VPT is preferable when there is 1) a substantial disparity between the original and the downstream task objectives ($e.g.$, transitioning from classification to counting), or 2) a notable similarity in data distributions between the two tasks ($e.g.$, both involve natural images). In exploring the "why" dimension, our results indicate that VPT's success cannot be attributed solely to overfitting and optimization considerations. The unique way VPT preserves original features and adds parameters appears to be a pivotal factor. Our study provides insights into VPT's mechanisms, and offers guidance for its optimal utilization.
https://openreview.net/pdf/dd2e08f0b045e84ce6870f20ec7f43ac3bc30b04.pdf
Grounding Multimodal Large Language Models to the World
https://openreview.net/forum?id=lLmqxkfSIw
https://openreview.net/forum?id=lLmqxkfSIw
Zhiliang Peng,Wenhui Wang,Li Dong,Yaru Hao,Shaohan Huang,Shuming Ma,Qixiang Ye,Furu Wei
ICLR 2024,Poster
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent text spans (i.e., referring expressions and noun phrases) as links in Markdown, i.e., [text span](bounding boxes), where object descriptions are sequences of location tokens. To train the model, we construct a large-scale dataset of grounded image-text pairs (GrIT) together with multimodal corpora. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. Kosmos-2 is evaluated on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This study sheds light on the big convergence of language, multimodal perception, and world modeling, which is a key step toward artificial general intelligence. Code can be found in [https://aka.ms/kosmos-2](https://aka.ms/kosmos-2).
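As an illustration of this link notation, a grounded caption in the training data might look like the following, where the `<loc_*>` token names are assumptions for illustration (the real tokenization is defined in the paper and GrIT):

```text
[a snowman](<loc_44><loc_863>) warming himself by [a campfire](<loc_5><loc_911>)
```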
https://openreview.net/pdf/0ea36b222b82ac76c018c9aa7a47f9f978c705b2.pdf
VFLAIR: A Research Library and Benchmark for Vertical Federated Learning
https://openreview.net/forum?id=sqRgz88TM3
https://openreview.net/forum?id=sqRgz88TM3
Tianyuan Zou,Zixuan GU,Yu He,Hideaki Takahashi,Yang Liu,Ya-Qin Zhang
ICLR 2024,Poster
Vertical Federated Learning (VFL) has emerged as a collaborative training paradigm that allows participants with different features of the same group of users to accomplish cooperative training without exposing their raw data or model parameters. VFL has gained significant attention for its research potential and real-world applications in recent years, but still faces substantial challenges, such as defending against various kinds of data inference and backdoor attacks. Moreover, most existing VFL projects are industry-facing and not easily used for keeping track of current research progress. To address this need, we present an extensible and lightweight VFL framework, VFLAIR (available at https://github.com/FLAIR-THU/VFLAIR), which supports VFL training with a variety of models, datasets and protocols, along with standardized modules for comprehensive evaluations of attacks and defense strategies. We also benchmark $11$ attacks and $8$ defenses under different communication and model partition settings, and draw concrete insights and recommendations on the choice of defense strategies for different practical VFL deployment scenarios.
https://openreview.net/pdf/014bacf6d092b8274bf80806af67041f1d4a4005.pdf
IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks
https://openreview.net/forum?id=jFa5KESW65
https://openreview.net/forum?id=jFa5KESW65
Yue Cao,Tianlin Li,Xiaofeng Cao,Ivor Tsang,Yang Liu,Qing Guo
ICLR 2024,Poster
We introduce a novel approach to counter adversarial attacks, namely, image resampling. Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation. The underlying rationale behind our idea is that image resampling can alleviate the influence of adversarial perturbations while preserving essential semantic information, thereby conferring an inherent advantage in defending against adversarial attacks. To validate this concept, we present a comprehensive study on leveraging image resampling to defend against adversarial attacks. We develop basic resampling methods that employ different interpolation strategies and coordinate-shifting magnitudes. Our analysis reveals that these basic methods can partially mitigate adversarial attacks. However, they come with apparent limitations: the accuracy on clean images noticeably decreases, while the improvement in accuracy on adversarial examples is not substantial. We propose implicit representation-driven image resampling (IRAD) to overcome these limitations. First, we construct an implicit continuous representation that enables us to represent any input image within a continuous coordinate space. Second, we introduce SampleNet, which automatically generates pixel-wise shifts for resampling in response to different inputs. Furthermore, we extend our approach to the state-of-the-art diffusion-based method, accelerating it with fewer time steps while preserving its defense capability. Extensive experiments demonstrate that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
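A minimal sketch of the resampling operation itself is given below: the image is re-interpolated under pixel-wise coordinate shifts. In IRAD those shifts would come from SampleNet and the interpolation from the implicit representation; here a plain bilinear `grid_sample` stands in, and all names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def resample(image: torch.Tensor, shifts: torch.Tensor) -> torch.Tensor:
    """image: (B, C, H, W); shifts: (B, H, W, 2) offsets in normalized coordinates."""
    b, _, h, w = image.shape
    ys = torch.linspace(-1, 1, h, device=image.device)
    xs = torch.linspace(-1, 1, w, device=image.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((gx, gy), dim=-1)[None].expand(b, -1, -1, -1)  # identity grid
    return F.grid_sample(image, grid + shifts, mode="bilinear", align_corners=True)
```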
https://openreview.net/pdf/58ef38bc40f4cdfbe398a4cdcf049f43e401c583.pdf
Stylized Offline Reinforcement Learning: Extracting Diverse High-Quality Behaviors from Heterogeneous Datasets
https://openreview.net/forum?id=rnHNDihrIT
https://openreview.net/forum?id=rnHNDihrIT
Yihuan Mao,Chengjie Wu,Xi Chen,Hao Hu,Ji Jiang,Tianze Zhou,Tangjie Lv,Changjie Fan,Zhipeng Hu,Yi Wu,Yujing Hu,Chongjie Zhang
ICLR 2024,Poster
Previous literature on policy diversity in reinforcement learning (RL) either focuses on the online setting or ignores the policy performance. In contrast, offline RL, which aims to learn high-quality policies from batched data, has yet to fully leverage the intrinsic diversity of the offline dataset. Addressing this dichotomy and aiming to balance quality and diversity poses a significant challenge to extant methodologies. This paper introduces a novel approach, termed Stylized Offline RL (SORL), which is designed to extract high-performing, stylistically diverse policies from a dataset characterized by distinct behavioral patterns. Drawing inspiration from the venerable Expectation-Maximization (EM) algorithm, SORL innovatively alternates between policy learning and trajectory clustering, a mechanism that promotes policy diversification. To further augment policy performance, we introduce advantage-weighted style learning into the SORL framework. Experimental evaluations across multiple environments demonstrate the significant superiority of SORL over previous methods in extracting high-quality policies with diverse behaviors. A case in point is that SORL successfully learns strong policies with markedly distinct playing patterns from a real-world human dataset of a popular basketball video game "Dunk City Dynasty."
https://openreview.net/pdf/2ca57d1d09d2c77b2d390bac2cb3c761f94d4358.pdf
Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight
https://openreview.net/forum?id=1hsVvgW0rU
https://openreview.net/forum?id=1hsVvgW0rU
Jiacheng Guo,Minshuo Chen,Huan Wang,Caiming Xiong,Mengdi Wang,Yu Bai
ICLR 2024,Poster
This paper studies the sample-efficiency of learning in Partially Observable Markov Decision Processes (POMDPs), a challenging problem in reinforcement learning that is known to be exponentially hard in the worst-case. Motivated by real-world settings such as loading in game playing, we propose an enhanced feedback model called ``multiple observations in hindsight'', where after each episode of interaction with the POMDP, the learner may collect multiple additional observations emitted from the encountered latent states, but may not observe the latent states themselves. We show that sample-efficient learning under this feedback model is possible for two new subclasses of POMDPs: \emph{multi-observation revealing POMDPs} and \emph{distinguishable POMDPs}. Both subclasses generalize and substantially relax \emph{revealing POMDPs}---a widely studied subclass for which sample-efficient learning is possible under standard trajectory feedback. Notably, distinguishable POMDPs only require the emission distributions from different latent states to be \emph{different} instead of \emph{linearly independent} as required in revealing POMDPs.
https://openreview.net/pdf/5f2fca010d39884681afbf62727150266e47e06f.pdf
Pre-training with Synthetic Data Helps Offline Reinforcement Learning
https://openreview.net/forum?id=PcxQgtHGj2
https://openreview.net/forum?id=PcxQgtHGj2
Zecheng Wang,Che Wang,Zixuan Dong,Keith W. Ross
ICLR 2024,Poster
Recently, it has been shown that for offline deep reinforcement learning (DRL), pre-training Decision Transformer with a large language corpus can improve downstream performance (Reid et al., 2022). A natural question to ask is whether this performance gain can only be achieved with language pre-training, or can be achieved with simpler pre-training schemes which do not involve language. In this paper, we first show that language is not essential for improved performance, and indeed pre-training with synthetic IID data for a small number of updates can match the performance gains from pre-training with a large language corpus; moreover, pre-training with data generated by a one-step Markov chain can further improve the performance. Inspired by these experimental results, we then consider pre-training Conservative Q-Learning (CQL), a popular offline DRL algorithm, which is Q-learning-based and typically employs a Multi-Layer Perceptron (MLP) backbone. Surprisingly, pre-training with simple synthetic data for a small number of updates can also improve CQL, providing consistent performance improvement on D4RL Gym locomotion datasets. The results of this paper not only illustrate the importance of pre-training for offline DRL but also show that the pre-training data can be synthetic and generated with remarkably simple mechanisms.
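A sketch of how simple such synthetic pre-training data can be: token sequences drawn from a random one-step Markov chain (the vocabulary size, sequence length, and uniformly random transition matrix are illustrative assumptions).

```python
import numpy as np

def synthetic_markov_data(vocab_size: int, seq_len: int, n_seqs: int, seed: int = 0):
    """Sample token sequences from a random one-step Markov chain."""
    rng = np.random.default_rng(seed)
    trans = rng.random((vocab_size, vocab_size))
    trans /= trans.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    seqs = np.empty((n_seqs, seq_len), dtype=np.int64)
    seqs[:, 0] = rng.integers(vocab_size, size=n_seqs)
    for t in range(1, seq_len):
        for i in range(n_seqs):
            seqs[i, t] = rng.choice(vocab_size, p=trans[seqs[i, t - 1]])
    return seqs
```

Setting the transition matrix to uniform rows would recover the IID case the abstract also considers.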
https://openreview.net/pdf/318129413e1d04dac4e9ef949bff927064665bee.pdf
AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors
https://openreview.net/forum?id=EHg5GDnyq1
https://openreview.net/forum?id=EHg5GDnyq1
Weize Chen,Yusheng Su,Jingwei Zuo,Cheng Yang,Chenfei Yuan,Chi-Min Chan,Heyang Yu,Yaxi Lu,Yi-Hsin Hung,Chen Qian,Yujia Qin,Xin Cong,Ruobing Xie,Zhiyuan Liu,Maosong Sun,Jie Zhou
ICLR 2024,Poster
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks. However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can effectively orchestrate a collaborative group of expert agents as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can proficiently deploy multi-agent groups that outperform a single agent. Extensive experiments on text understanding, reasoning, coding, tool utilization, and embodied AI confirm the effectiveness of AgentVerse. Moreover, our analysis of agent interactions within AgentVerse reveals the emergence of specific collaborative behaviors, contributing to heightened group efficiency. We will release our codebase, AgentVerse, to further facilitate multi-agent research.
https://openreview.net/pdf/7c9bd9a841a2ba0c11ea97abb5e982430c2fc95e.pdf
IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs
https://openreview.net/forum?id=6RR3wU4mSZ
https://openreview.net/forum?id=6RR3wU4mSZ
Yuzhen Mao,Martin Ester,Ke Li
ICLR 2024,Poster
One limitation of existing Transformer-based models is that they cannot handle very long sequences as input since their self-attention operations exhibit quadratic time and space complexity. This problem becomes especially acute when Transformers are deployed on hardware platforms equipped only with CPUs. To address this issue, we propose a novel method for accelerating self-attention at inference time that works with pretrained Transformer models out-of-the-box without requiring retraining. We use our method to accelerate various long-sequence Transformers, including a leading LLaMA 2-based LLM, on various benchmarks and demonstrate speedups of $2.73\times$ - $7.63\times$ while retaining $98.6$% - $99.6$% of the accuracy of the original pretrained models. The code is available on our project website at https://yuzhenmao.github.io/IceFormer/.
https://openreview.net/pdf/747efc25fe5472aefe3220312fbe7ed5d468e4fc.pdf
Efficient Planning with Latent Diffusion
https://openreview.net/forum?id=btpgDo4u4j
https://openreview.net/forum?id=btpgDo4u4j
Wenhao Li
ICLR 2024,Poster
Temporal abstraction and efficient planning pose significant challenges in offline reinforcement learning, especially when dealing with domains that involve temporally extended tasks and delayed sparse rewards. Existing methods typically plan in the raw action space and can be inefficient and inflexible. Latent action spaces offer a more flexible approach, capturing only possible actions within the behavior policy support and decoupling the temporal structure between planning and modeling. However, current latent-action-based methods are limited to discrete spaces and require expensive planning steps. This paper presents a unified framework for continuous latent action space representation learning and planning by leveraging latent, score-based diffusion models. We establish the theoretical equivalence between planning in the latent action space and energy-guided sampling with a pretrained diffusion model, and introduce a novel sequence-level exact sampling method. Our proposed method, $\texttt{LatentDiffuser}$, demonstrates competitive performance on low-dimensional locomotion control tasks and surpasses existing methods in higher-dimensional tasks.
https://openreview.net/pdf/167456ea22dc65b008a48dd93008131198a73cd3.pdf
Conformal Prediction via Regression-as-Classification
https://openreview.net/forum?id=rulxyXjf46
https://openreview.net/forum?id=rulxyXjf46
Etash Kumar Guha,Shlok Natarajan,Thomas Möllenhoff,Mohammad Emtiyaz Khan,Eugene Ndiaye
ICLR 2024,Poster
Conformal prediction (CP) for regression can be challenging, especially when the output distribution is heteroscedastic, multimodal, or skewed. Some of the issues can be addressed by estimating a distribution over the output, but in reality, such approaches can be sensitive to estimation error and yield unstable intervals. Here, we circumvent the challenges by converting regression to a classification problem and then use CP for classification to obtain CP sets for regression. To preserve the ordering of the continuous-output space, we design a new loss function and present necessary modifications to the CP classification techniques. Empirical results on many benchmarks show that this simple approach gives surprisingly good results on many practical problems.
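A minimal split-conformal sketch on top of such a regression-as-classification model, using the simple $1 - \hat{p}$ nonconformity score; the paper's ordering-aware loss and exact score are not reproduced here.

```python
import numpy as np

def conformal_set(probs_cal, y_cal_bin, probs_test, bin_values, alpha=0.1):
    """probs_cal: (n, K) predicted bin probabilities on calibration data;
    y_cal_bin: (n,) index of the bin containing each calibration label;
    probs_test: (K,) predicted bin probabilities at one test point;
    bin_values: (K,) representative values of the discretized output space."""
    n = len(y_cal_bin)
    scores = 1.0 - probs_cal[np.arange(n), y_cal_bin]  # nonconformity scores
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return bin_values[1.0 - probs_test <= q]  # prediction set over output values
```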
https://openreview.net/pdf/15477e23f5ceefa60baf806821120f22bfcab91d.pdf
Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model
https://openreview.net/forum?id=ezscMer8L0
https://openreview.net/forum?id=ezscMer8L0
Zihan Zhong,Zhiqiang Tang,Tong He,Haoyang Fang,Chun Yuan
ICLR 2024,Poster
The Segment-Anything Model (SAM) stands as a foundational framework for image segmentation. While it exhibits remarkable zero-shot generalization in typical scenarios, its advantage diminishes when applied to specialized domains like medical imagery and remote sensing. To address this limitation, this paper introduces Conv-LoRA, a simple yet effective parameter-efficient fine-tuning approach. By integrating ultra-lightweight convolutional parameters into Low-Rank Adaptation (LoRA), Conv-LoRA can inject image-related inductive biases into the plain ViT encoder, further reinforcing SAM’s local prior assumption. Notably, Conv-LoRA not only preserves SAM’s extensive segmentation knowledge but also revives its capacity of learning high-level image semantics, which is constrained by SAM’s foreground-background segmentation pretraining. Comprehensive experimentation across diverse benchmarks spanning multiple domains underscores Conv-LoRA’s superiority in adapting SAM to real-world semantic segmentation tasks.
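A hedged sketch of the architectural idea: a LoRA branch whose low-rank bottleneck passes through a lightweight convolution, giving the adapter a local inductive bias. The module name, rank, and kernel size are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ConvLoRA(nn.Module):
    """LoRA down-projection -> small conv -> up-projection (a sketch)."""
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.conv = nn.Conv2d(rank, rank, kernel_size=3, padding=1)  # local prior
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # branch starts as a no-op, as in LoRA

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, h*w, dim) ViT tokens; run the conv on the 2D token grid
        z = self.down(x).transpose(1, 2).reshape(x.shape[0], -1, h, w)
        z = self.conv(z).flatten(2).transpose(1, 2)
        return self.up(z)  # added to the frozen layer's output
```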
https://openreview.net/pdf/86ddb5f461f78211754778d45bea061cc74a2ca3.pdf
Tag2Text: Guiding Vision-Language Model via Image Tagging
https://openreview.net/forum?id=x6u2BQ7xcq
https://openreview.net/forum?id=x6u2BQ7xcq
Xinyu Huang,Youcai Zhang,Jinyu Ma,Weiwei Tian,Rui Feng,Yuejie Zhang,Yaqian Li,Yandong Guo,Lei Zhang
ICLR 2024,Poster
This paper presents Tag2Text, a vision-language pre-training (VLP) framework, which introduces image tagging into vision-language models to guide the learning of visual-linguistic features. In contrast to prior works which utilize object tags either manually labeled or automatically detected with a limited detector, our approach utilizes tags parsed from the paired text to learn an image tagger while providing guidance to vision-language models. As a result, Tag2Text can utilize large-scale annotation-free image tags in accordance with image-text pairs, and provides more diverse tag categories beyond objects. Strikingly, Tag2Text showcases the ability of a foundational image tagging model, with superior zero-shot performance even comparable to fully supervised models. Moreover, by leveraging tagging guidance, Tag2Text effectively enhances the performance of vision-language models on both generation-based and alignment-based tasks. Across a wide range of downstream benchmarks, Tag2Text achieves state-of-the-art results with similar model sizes and data scales, demonstrating the efficacy of the proposed tagging guidance.
https://openreview.net/pdf/6f598a30fa94b52faf7b0f47d914c7ca974e11b6.pdf
Class Incremental Learning via Likelihood Ratio Based Task Prediction
https://openreview.net/forum?id=8QfK9Dq4q0
https://openreview.net/forum?id=8QfK9Dq4q0
Haowei Lin,Yijia Shao,Weinan Qian,Ningxin Pan,Yiduo Guo,Bing Liu
ICLR 2024,Poster
Class incremental learning (CIL) is a challenging setting of continual learning, which learns a series of tasks sequentially. Each task consists of a set of unique classes. The key feature of CIL is that no task identifier (or task-id) is provided at test time. Predicting the task-id for each test sample is a challenging problem. An emerging theory-guided approach (called TIL+OOD) is to train a task-specific model for each task in a shared network for all tasks based on a task-incremental learning (TIL) method to deal with catastrophic forgetting. The model for each task is an out-of-distribution (OOD) detector rather than a conventional classifier. The OOD detector can perform both within-task (in-distribution (IND)) class prediction and OOD detection. The OOD detection capability is the key to task-id prediction during inference. However, this paper argues that using a traditional OOD detector for task-id prediction is sub-optimal because additional information (e.g., the replay data and the learned tasks) available in CIL can be exploited to design a better and principled method for task-id prediction. We call the new method TPL (Task-id Prediction based on Likelihood Ratio). TPL markedly outperforms strong CIL baselines and has negligible catastrophic forgetting. The code of TPL is publicly available at https://github.com/linhaowei1/TPL.
https://openreview.net/pdf/e63428085bffa2087502ff4f3070311d17fc324a.pdf
Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold
https://openreview.net/forum?id=PQbFUMKLFp
https://openreview.net/forum?id=PQbFUMKLFp
Jun Chen,Haishan Ye,Mengmeng Wang,Tianxin Huang,Guang Dai,Ivor Tsang,Yong Liu
ICLR 2024,Poster
The conjugate gradient method is a crucial first-order optimization method that generally converges faster than the steepest descent method, and its computational cost is much lower than that of second-order methods. However, while various types of conjugate gradient methods have been studied in Euclidean spaces and on Riemannian manifolds, there has been little study of them in distributed scenarios. This paper proposes a decentralized Riemannian conjugate gradient descent (DRCGD) method that aims at minimizing a global function over the Stiefel manifold. The optimization problem is distributed among a network of agents, where each agent is associated with a local function, and communication between agents occurs over an undirected connected graph. Since the Stiefel manifold is a non-convex set, the global function is represented as a finite sum of possibly non-convex (but smooth) local functions. The proposed method is free from expensive Riemannian geometric operations such as retractions, exponential maps, and vector transports, thereby reducing the computational complexity required by each agent. To the best of our knowledge, DRCGD is the first decentralized Riemannian conjugate gradient algorithm to achieve global convergence over the Stiefel manifold.
https://openreview.net/pdf/c18ec32748c75a7da4da4513857c73685fe7366a.pdf
Improved Regret Bounds for Non-Convex Online-Within-Online Meta Learning
https://openreview.net/forum?id=pA8Q5WiEMg
https://openreview.net/forum?id=pA8Q5WiEMg
Jiechao Guan,Hui Xiong
ICLR 2024,Poster
Online-Within-Online (OWO) meta learning stands for the online multi-task learning paradigm in which both tasks and data within each task become available in a sequential order. In this work, we study the OWO meta learning of the initialization and step size of within-task online algorithms in the non-convex setting, and provide improved regret bounds under mild assumptions of loss functions. Previous work analyzing this scenario has obtained for bounded and piecewise Lipschitz functions an averaged regret bound $O((\frac{\sqrt{m}}{T^{1/4}}+\frac{(\log{m})\log{T}}{\sqrt{T}}+V)\sqrt{m})$ across $T$ tasks, with $m$ iterations per task and $V$ the task similarity. Our first contribution is to modify the existing non-convex OWO meta learning algorithm and improve the regret bound to $O((\frac{1}{T^{1/2-\alpha}}+\frac{(\log{T})^{9/2}}{T}+V)\sqrt{m})$, for any $\alpha \in (0,1/2)$. The derived bound has a faster convergence rate with respect to $T$, and guarantees a vanishing task-averaged regret with respect to $m$ (for any fixed $T$). Then, we propose a new algorithm of regret $O((\frac{\log{T}}{T}+V)\sqrt{m})$ for non-convex OWO meta learning. This regret bound exhibits a better asymptotic performance than previous ones, and holds for any bounded (not necessarily Lipschitz) loss functions. Besides the improved regret bounds, our contributions include investigating how to attain generalization bounds for statistical meta learning via regret analysis. Specifically, by online-to-batch arguments, we achieve a transfer risk bound for batch meta learning that assumes all tasks are drawn from a distribution. Moreover, by connecting multi-task generalization error with task-averaged regret, we develop for statistical multi-task learning a novel PAC-Bayes generalization error bound that involves our regret bound for OWO meta learning.
https://openreview.net/pdf/496b365e3f90de38c22e9b1cdfc795c2f5c62731.pdf
Momentum Benefits Non-iid Federated Learning Simply and Provably
https://openreview.net/forum?id=TdhkAcXkRi
https://openreview.net/forum?id=TdhkAcXkRi
Ziheng Cheng,Xinmeng Huang,Pengfei Wu,Kun Yuan
ICLR 2024,Poster
Federated learning is a powerful paradigm for large-scale machine learning, but it faces significant challenges due to unreliable network connections, slow communication, and substantial data heterogeneity across clients. FedAvg and SCAFFOLD are two prominent algorithms to address these challenges. In particular, FedAvg employs multiple local updates before communicating with a central server, while SCAFFOLD maintains a control variable on each client to compensate for “client drift” in its local updates. Various methods have been proposed to enhance the convergence of these two algorithms, but they either make impractical adjustments to the algorithmic structure or rely on the assumption of bounded data heterogeneity. This paper explores the utilization of momentum to enhance the performance of FedAvg and SCAFFOLD. When all clients participate in the training process, we demonstrate that incorporating momentum allows FedAvg to converge without relying on the assumption of bounded data heterogeneity, even using a constant local learning rate. This is novel and fairly surprising, as existing analyses for FedAvg require bounded data heterogeneity even with diminishing local learning rates. Under partial client participation, we show that momentum enables SCAFFOLD to converge provably faster without imposing any additional assumptions. Furthermore, we use momentum to develop new variance-reduced extensions of FedAvg and SCAFFOLD, which exhibit state-of-the-art convergence rates. Our experimental results support all theoretical findings.
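To fix ideas, here is one common way momentum enters such methods: applied on the server to the averaged pseudo-gradient. Whether the paper places momentum on the server, the clients, or both is specified there, so treat this purely as a sketch.

```python
def server_update(global_w, client_ws, momentum, beta=0.9, lr=1.0):
    """One server round: average client models, form the pseudo-gradient,
    and apply a momentum step (all states are dicts of tensors/arrays)."""
    avg_w = {k: sum(w[k] for w in client_ws) / len(client_ws) for k in global_w}
    pseudo_grad = {k: global_w[k] - avg_w[k] for k in global_w}
    momentum = {k: beta * momentum[k] + pseudo_grad[k] for k in global_w}
    new_w = {k: global_w[k] - lr * momentum[k] for k in global_w}
    return new_w, momentum
```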
https://openreview.net/pdf/54a3a6078ec1b585b63168739d0b860e5868dcdc.pdf
Detecting Machine-Generated Texts by Multi-Population Aware Optimization for Maximum Mean Discrepancy
https://openreview.net/forum?id=3fEKavFsnv
https://openreview.net/forum?id=3fEKavFsnv
Shuhai Zhang,Yiliao Song,Jiahao Yang,Yuanqing Li,Bo Han,Mingkui Tan
ICLR 2024,Poster
Large language models (LLMs) such as ChatGPT have exhibited remarkable performance in generating human-like texts. However, machine-generated texts (MGTs) may carry critical risks, such as plagiarism and hallucinated information. Therefore, it is urgent and important to detect MGTs in many situations. Unfortunately, it is challenging to distinguish MGTs from human-written texts because the distributional discrepancy between them is often very subtle due to the remarkable performance of LLMs. In this paper, we seek to exploit \textit{maximum mean discrepancy} (MMD) to address this issue, in the sense that MMD can well identify distributional discrepancies. However, directly training a detector with MMD using diverse MGTs will incur a significantly increased variance of MMD, since MGTs may contain \textit{multiple text populations} due to various LLMs. This severely impairs MMD's ability to measure the difference between two samples. To tackle this, we propose a novel \textit{multi-population} aware optimization method for MMD called MMD-MP, which can \textit{avoid variance increases} and thus improve the stability of measuring the distributional discrepancy. Relying on MMD-MP, we develop two methods for paragraph-based and sentence-based detection, respectively. Extensive experiments on various LLMs, e.g., GPT2 and ChatGPT, show superior detection performance of our MMD-MP.
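For reference, the standard unbiased estimator of squared MMD that such detectors build on, for samples $X = \{x_i\}_{i=1}^n$, $Y = \{y_j\}_{j=1}^m$ and kernel $k$ (MMD-MP modifies how the kernel is optimized over multiple populations, not this basic form):

```latex
\widehat{\mathrm{MMD}}^2(X, Y) = \frac{1}{n(n-1)} \sum_{i \neq j} k(x_i, x_j)
  + \frac{1}{m(m-1)} \sum_{i \neq j} k(y_i, y_j)
  - \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} k(x_i, y_j)
```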
https://openreview.net/pdf/006491b72f0ab715b4c84cc467ed1a22f8d02943.pdf
Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling
https://openreview.net/forum?id=p8ujRTjEf3
https://openreview.net/forum?id=p8ujRTjEf3
Aadirupa Saha,Branislav Kveton
ICLR 2024,Poster
Most bandit algorithms assume that the reward variances or their upper bounds are known, and that they are the same for all arms. This naturally leads to suboptimal performance and higher regret due to variance overestimation. On the other hand, underestimated reward variances may lead to linear regret due to committing early to a suboptimal arm. This motivated prior works on variance-adaptive frequentist algorithms, which have strong instance-dependent regret bounds but cannot incorporate prior knowledge on reward variances. We lay foundations for the Bayesian setting, which incorporates prior knowledge. This results in lower regret in practice, since the prior is used in the algorithm design, and also improved regret guarantees. Specifically, we study Gaussian bandits with \emph{unknown heterogeneous reward variances} and develop a Thompson sampling algorithm with prior-dependent Bayes regret bounds. We achieve lower regret with lower reward variances and more informative priors on them, which is precisely why we pay only for what is uncertain. This is the first such result in the bandit literature. Finally, we corroborate our theory with experiments, which demonstrate the benefit of our variance-adaptive Bayesian algorithm over prior frequentist works. We also show that our approach is robust to model misspecification and can be applied with estimated priors.
https://openreview.net/pdf/aff563245ed8cb5fe19128cfccb5a242f0dc8642.pdf
PnP Inversion: Boosting Diffusion-based Editing with 3 Lines of Code
https://openreview.net/forum?id=FoMZ4ljhVw
https://openreview.net/forum?id=FoMZ4ljhVw
Xuan Ju,Ailing Zeng,Yuxuan Bian,Shaoteng Liu,Qiang Xu
ICLR 2024,Poster
Text-guided diffusion models have revolutionized image generation and editing, offering exceptional realism and diversity. Specifically, in the context of diffusion-based editing, where a source image is edited according to a target prompt, the process commences by acquiring a noisy latent vector corresponding to the source image via the diffusion model. This vector is subsequently fed into separate source and target diffusion branches for editing. The accuracy of this inversion process significantly impacts the final editing outcome, influencing both the essential content preservation of the source image and edit fidelity according to the target prompt. Prior inversion techniques aimed at finding a unified solution in both the source and target diffusion branches. However, our theoretical and empirical analyses reveal that disentangling these branches leads to a distinct separation of responsibilities for preserving essential content and ensuring edit fidelity. Building on this insight, we introduce “PnP Inversion,” a novel technique achieving optimal performance of both branches with just three lines of code. To assess image editing performance, we present PIE-Bench, an editing benchmark with 700 images showcasing diverse scenes and editing types, accompanied by versatile annotations and comprehensive evaluation metrics. Compared to state-of-the-art optimization-based inversion techniques, our solution not only yields superior performance across 8 editing methods but also achieves nearly an order of magnitude speed-up.
https://openreview.net/pdf/7a08897fea2fe55b08fa202685f17c84e0337fe4.pdf
A Benchmark Study on Calibration
https://openreview.net/forum?id=GzNhzX9kVa
https://openreview.net/forum?id=GzNhzX9kVa
Linwei Tao,Younan Zhu,Haolan Guo,Minjing Dong,Chang Xu
ICLR 2024,Poster
Deep neural networks are increasingly utilized in various machine learning tasks. However, as these models grow in complexity, they often face calibration issues, despite enhanced prediction accuracy. Many studies have endeavored to improve calibration performance through the use of specific loss functions, data preprocessing and training frameworks. Yet, investigations into calibration properties have been somewhat overlooked. Our study leverages the Neural Architecture Search (NAS) search space, offering an exhaustive model architecture space for thorough calibration properties exploration. We specifically create a model calibration dataset. This dataset evaluates 90 bin-based and 12 additional calibration measurements across 117,702 unique neural networks within the widely employed NATS-Bench search space. Our analysis aims to answer several longstanding questions in the field, using our proposed dataset: (i) Can model calibration be generalized across different datasets? (ii) Can robustness be used as a calibration measurement? (iii) How reliable are calibration metrics? (iv) Does a post-hoc calibration method affect all models uniformly? (v) How does calibration interact with accuracy? (vi) What is the impact of bin size on calibration measurement? (vii) Which architectural designs are beneficial for calibration? Additionally, our study bridges an existing gap by exploring calibration within NAS. By providing this dataset, we enable further research into NAS calibration. As far as we are aware, our research represents the first large-scale investigation into calibration properties and the premier study of calibration issues within NAS.
https://openreview.net/pdf/8a5ca1f52aaff25e60f0001c7a00426c79b1885f.pdf
Enhancing Human-AI Collaboration Through Logic-Guided Reasoning
https://openreview.net/forum?id=TWC4gLoAxY
https://openreview.net/forum?id=TWC4gLoAxY
Chengzhi Cao,Yinghao Fu,Sheng Xu,Ruimao Zhang,Shuang Li
ICLR 2024,Poster
We present a systematic framework designed to enhance human-robot perception and collaboration through the integration of logical rules and Theory of Mind (ToM). Logical rules provide interpretable predictions and generalize well across diverse tasks, making them valuable for learning and decision-making. Leveraging the ToM for understanding others' mental states, our approach facilitates effective collaboration. In this paper, we employ logic rules derived from observational data to infer human goals and guide human-like agents. These rules are treated as latent variables, and a rule encoder is trained alongside a multi-agent system in the robot's mind. We assess the posterior distribution of latent rules using learned embeddings, representing entities and relations. Confidence scores for each rule indicate their consistency with observed data. Then, we employ a hierarchical reinforcement learning model with ToM to plan robot actions for assisting humans. Extensive experiments validate each component of our framework, and results on multiple benchmarks demonstrate that our model outperforms the majority of existing approaches.
https://openreview.net/pdf/7b8552a57028c2e6eef4786a49e15a94ad70758b.pdf
Learning with Mixture of Prototypes for Out-of-Distribution Detection
https://openreview.net/forum?id=uNkKaD3MCs
https://openreview.net/forum?id=uNkKaD3MCs
Haodong Lu,Dong Gong,Shuo Wang,Jason Xue,Lina Yao,Kristen Moore
ICLR 2024,Poster
Out-of-distribution (OOD) detection aims to detect testing samples far away from the in-distribution (ID) training data, which is crucial for the safe deployment of machine learning models in the real world. Distance-based OOD detection methods have emerged with enhanced deep representation learning. They identify unseen OOD samples by measuring their distances from ID class centroids or prototypes. However, existing approaches learn the representation relying on oversimplified data assumptions, e.g., modeling the ID data of each class with one centroid class prototype, or using loss functions not designed for OOD detection, which overlook the natural diversities within the data. Naively enforcing data samples of each class to be compact around only one prototype leads to inadequate modeling of realistic data and limited performance. To tackle these issues, we propose PrototypicAl Learning with a Mixture of prototypes (PALM), which models each class with multiple prototypes to capture sample diversities, learning more faithful and compact sample embeddings to enhance OOD detection. Our method automatically identifies and dynamically updates prototypes, assigning each sample to a subset of prototypes via reciprocal neighbor soft assignment weights. To learn embeddings with multiple prototypes, PALM optimizes a maximum likelihood estimation (MLE) loss to encourage the sample embeddings to be compact around the associated prototypes, as well as a contrastive loss on all prototypes to enhance intra-class compactness and inter-class discrimination at the prototype level. Compared to previous methods with prototypes, the proposed mixture prototype modeling of PALM promotes the representations of each ID class to be more compact and separable from others and from unseen OOD samples, resulting in more reliable OOD detection. Moreover, the automatic estimation of prototypes enables our approach to be extended to the challenging OOD detection task with unlabelled ID data. Extensive experiments demonstrate the superiority of PALM over previous methods, achieving state-of-the-art average AUROC performance of 93.82 on the challenging CIFAR-100 benchmark.
https://openreview.net/pdf/61b5cb8c14eb6ed40ffd7915eb9fd6edcd8bf2c3.pdf
PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks
https://openreview.net/forum?id=DO2WFXU1Be
https://openreview.net/forum?id=DO2WFXU1Be
Zhiyuan Zhao,Xueying Ding,B. Aditya Prakash
ICLR 2024,Poster
Physics-Informed Neural Networks (PINNs) have emerged as a promising deep learning framework for approximating numerical solutions to partial differential equations (PDEs). However, conventional PINNs, relying on multilayer perceptrons (MLP), neglect the crucial temporal dependencies inherent in practical physics systems and thus fail to propagate the initial condition constraints globally and accurately capture the true solutions under various scenarios. In this paper, we introduce a novel Transformer-based framework, termed PINNsFormer, designed to address this limitation. PINNsFormer can accurately approximate PDE solutions by utilizing multi-head attention mechanisms to capture temporal dependencies. PINNsFormer transforms point-wise inputs into pseudo sequences and replaces point-wise PINNs loss with a sequential loss. Additionally, it incorporates a novel activation function, \texttt{Wavelet}, which anticipates Fourier decomposition through deep neural networks. Empirical results demonstrate that PINNsFormer achieves superior generalization ability and accuracy across various scenarios, including PINNs failure modes and high-dimensional PDEs. Moreover, PINNsFormer offers flexibility in integrating existing learning schemes for PINNs, further enhancing its performance.
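A sketch of a learnable periodic activation in the spirit of the described \texttt{Wavelet} unit: a first-order Fourier-style mix of sine and cosine responses with trainable weights. The exact parameterization in the paper may differ.

```python
import torch
import torch.nn as nn

class Wavelet(nn.Module):
    """Learnable sin/cos mixture activation (a sketch)."""
    def __init__(self):
        super().__init__()
        self.w1 = nn.Parameter(torch.ones(1))
        self.w2 = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w1 * torch.sin(x) + self.w2 * torch.cos(x)
```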
https://openreview.net/pdf/f06f4c86b32adde5a980de318a16e6874852f4d9.pdf
Attention-Guided Contrastive Role Representations for Multi-agent Reinforcement Learning
https://openreview.net/forum?id=LWmuPfEYhH
https://openreview.net/forum?id=LWmuPfEYhH
Zican Hu,Zongzhang Zhang,Huaxiong Li,Chunlin Chen,Hongyu Ding,Zhi Wang
ICLR 2024,Poster
Real-world multi-agent tasks usually involve dynamic team composition with the emergence of roles, which should also be a key to efficient cooperation in multi-agent reinforcement learning (MARL). Drawing inspiration from the correlation between roles and agent's behavior patterns, we propose a novel framework of **A**ttention-guided **CO**ntrastive **R**ole representation learning for **M**ARL (**ACORM**) to promote behavior heterogeneity, knowledge transfer, and skillful coordination across agents. First, we introduce mutual information maximization to formalize role representation learning, derive a contrastive learning objective, and concisely approximate the distribution of negative pairs. Second, we leverage an attention mechanism to prompt the global state to attend to learned role representations in value decomposition, implicitly guiding agent coordination in a skillful role space to yield more expressive credit assignment. Experiments on challenging StarCraft II micromanagement and Google research football tasks demonstrate the state-of-the-art performance of our method and its advantages over existing approaches. Our code is available at [https://github.com/NJU-RL/ACORM](https://github.com/NJU-RL/ACORM).
https://openreview.net/pdf/49724378efbbff7628c17117b3678dbca86ca748.pdf
A Statistical Analysis of Wasserstein Autoencoders for Intrinsically Low-dimensional Data
https://openreview.net/forum?id=WjRPZsfeBO
https://openreview.net/forum?id=WjRPZsfeBO
Saptarshi Chakraborty,Peter Bartlett
ICLR 2024,Poster
Variational Autoencoders (VAEs) have gained significant popularity among researchers as a powerful tool for understanding unknown distributions based on limited samples. This popularity stems partly from their impressive performance and partly from their ability to provide meaningful feature representations in the latent space. Wasserstein Autoencoders (WAEs), a variant of VAEs, aim to improve not only model efficiency but also interpretability. However, there has been limited focus on analyzing their statistical guarantees. The matter is further complicated by the fact that the data distributions to which WAEs are applied - such as natural images - are often presumed to possess an underlying low-dimensional structure within a high-dimensional feature space, which current theory does not adequately account for, rendering known bounds inefficient. To bridge the gap between the theory and practice of WAEs, in this paper, we show that WAEs can learn the data distributions when the network architectures are properly chosen. We show that the convergence rates of the expected excess risk in the number of samples for WAEs are independent of the high feature dimension, instead relying only on the intrinsic dimension of the data distribution.
https://openreview.net/pdf/cbe5676e2b5bf7da6241d7ba6c43d8b4b31dffc7.pdf
Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
https://openreview.net/forum?id=gfFVATffPd
https://openreview.net/forum?id=gfFVATffPd
Mert Yuksekgonul,Varun Chandrasekaran,Erik Jones,Suriya Gunasekar,Ranjita Naik,Hamid Palangi,Ece Kamar,Besmira Nushi
ICLR 2024,Poster
We investigate the internal behavior of Transformer-based Large Language Models (LLMs) when they generate factually incorrect text. We propose modeling factual queries as constraint satisfaction problems and use this framework to investigate how the LLM interacts internally with factual constraints. We find a strong positive relationship between the LLM's attention to constraint tokens and the factual accuracy of generations. We curate a suite of 10 datasets containing over 40,000 prompts to study the task of predicting factual errors with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe, a method probing attention patterns, that can predict factual errors and fine-grained constraint satisfaction, and allow early error identification. The approach and findings take another step towards using the mechanistic understanding of LLMs to enhance their reliability.
https://openreview.net/pdf/ea21a0f591dc0f19915f19c30b24ab3be8a05b7d.pdf
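A rough sketch of the probing recipe described above: summarize how much attention flows to the constraint tokens, then fit a simple classifier on those features to predict factual errors. The feature construction, toy data, and probe choice are assumptions for illustration only, not the released SAT Probe implementation.

```python
# Hedged sketch: attention-to-constraint features plus a logistic probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

def constraint_attention_features(attn: np.ndarray,
                                  constraint_idx: list) -> np.ndarray:
    """attn: (layers, heads, seq, seq). Returns the per-layer/head attention
    mass that the final position sends to the constraint tokens."""
    mass = attn[:, :, -1, constraint_idx].sum(axis=-1)   # (layers, heads)
    return mass.reshape(-1)

rng = np.random.default_rng(0)
# Toy corpus: one feature vector per prompt, plus a correctness label.
# Dirichlet rows sum to 1, mimicking attention distributions.
X = np.stack([constraint_attention_features(
        rng.dirichlet(np.ones(16), size=(4, 4, 16)), [2, 3])
        for _ in range(200)])
y = rng.integers(0, 2, 200)
probe = LogisticRegression(max_iter=1000).fit(X, y)   # the error predictor
```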
Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning
https://openreview.net/forum?id=lgaFMvZHSJ
https://openreview.net/forum?id=lgaFMvZHSJ
Sharut Gupta,Joshua Robinson,Derek Lim,Soledad Villar,Stefanie Jegelka
ICLR 2024,Poster
Self-supervised learning converts raw perceptual data such as images to a compact space where simple Euclidean distances measure meaningful variations in data. In this paper, we extend this formulation by adding further geometric structure to the embedding space by enforcing transformations of input space to correspond to simple (i.e., linear) transformations of embedding space. Specifically, in the contrastive learning setting, we introduce an equivariance objective and theoretically prove that its minima force augmentations on input space to correspond to rotations on the spherical embedding space. We show that merely combining our equivariant loss with a non-collapse term results in non-trivial representations, without requiring invariance to data augmentations. Optimal performance is achieved by also encouraging approximate invariance, where input augmentations correspond to small rotations. Our method, CARE: Contrastive Augmentation-induced Rotational Equivariance, leads to improved performance on downstream tasks and ensures sensitivity in embedding space to important variations in data (e.g., color) that standard contrastive methods do not achieve. Code is available at https://github.com/Sharut/CARE.
https://openreview.net/pdf/67a2cbf3b6903bdf9529edb9a18eae00000ddb6d.pdf
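As a sketch of the equivariance idea in the abstract above, the snippet below penalizes the gap between a rotated first-view embedding and the second-view embedding, with the rotation produced from augmentation parameters via the matrix exponential of a skew-symmetric matrix (which is always a rotation). The predictor architecture and all shapes are assumptions, not CARE's exact design.

```python
# Hedged sketch: a rotational-equivariance penalty on spherical embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotationPredictor(nn.Module):
    def __init__(self, aug_dim: int, emb_dim: int):
        super().__init__()
        self.emb_dim = emb_dim
        self.net = nn.Linear(aug_dim, emb_dim * emb_dim)

    def forward(self, aug_params: torch.Tensor) -> torch.Tensor:
        A = self.net(aug_params).view(-1, self.emb_dim, self.emb_dim)
        skew = A - A.transpose(-1, -2)   # exp of skew-symmetric => rotation
        return torch.matrix_exp(skew)

def equivariance_loss(z1, z2, R):
    """z1, z2: (B, d) embeddings of two augmented views of the same input."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    rotated = torch.bmm(R, z1.unsqueeze(-1)).squeeze(-1)
    return (rotated - z2).pow(2).sum(-1).mean()

# Usage: rotations conditioned on 4 augmentation parameters.
pred = RotationPredictor(aug_dim=4, emb_dim=16)
R = pred(torch.randn(8, 4))
loss = equivariance_loss(torch.randn(8, 16), torch.randn(8, 16), R)
```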
Decodable and Sample Invariant Continuous Object Encoder
https://openreview.net/forum?id=QLoepRnoue
https://openreview.net/forum?id=QLoepRnoue
Dehao Yuan,Furong Huang,Cornelia Fermuller,Yiannis Aloimonos
ICLR 2024,Poster
We propose Hyper-Dimensional Function Encoding (HDFE). Given samples of a continuous object (e.g. a function), HDFE produces an explicit vector representation of the given object, invariant to the sample distribution and density. Sample distribution and density invariance enables HDFE to consistently encode continuous objects regardless of their sampling, and therefore allows neural networks to receive continuous objects as inputs for machine learning tasks, such as classification and regression. Moreover, HDFE does not require any training and provably maps the object into an organized embedding space, which facilitates the training of downstream tasks. In addition, the encoding is decodable, which enables neural networks to regress continuous objects by regressing their encodings. Therefore, HDFE serves as an interface for processing continuous objects. We apply HDFE to function-to-function mapping, where vanilla HDFE achieves competitive performance with the state-of-the-art algorithm. We apply HDFE to point cloud surface normal estimation, where a simple replacement from PointNet to HDFE leads to 12\% and 15\% error reductions in two benchmarks. In addition, by integrating HDFE into the PointNet-based SOTA network, we improve the SOTA baseline by 2.5\% and 1.7\% on the same benchmarks.
https://openreview.net/pdf/0e2f0547d018ad2b49f03860971166c65f9b537c.pdf
Neural Snowflakes: Universal Latent Graph Inference via Trainable Latent Geometries
https://openreview.net/forum?id=djM3WzpOmK
https://openreview.net/forum?id=djM3WzpOmK
Haitz Sáez de Ocáriz Borde,Anastasis Kratsios
ICLR 2024,Poster
The inductive bias of a graph neural network (GNN) is largely encoded in its specified graph. Latent graph inference relies on latent geometric representations to dynamically rewire or infer a GNN's graph to maximize the GNN's predictive downstream performance, but it lacks solid theoretical foundations in terms of embedding-based representation guarantees. This paper addresses this issue by introducing a trainable deep learning architecture, coined \textit{neural snowflake}, that can adaptively implement fractal-like metrics on $\mathbb{R}^d$. We prove that any given finite weighted graph can be isometrically embedded by a standard MLP encoder. Furthermore, when the latent graph can be represented in the feature space of a sufficiently regular kernel, we show that the combined neural snowflake and MLP encoder do not succumb to the curse of dimensionality by using only a low-degree polynomial number of parameters in the number of nodes. This implementation enables a low-dimensional isometric embedding of the latent graph. We conduct synthetic experiments to demonstrate the superior metric learning capabilities of neural snowflakes when compared to more familiar spaces like Euclidean space. Additionally, we carry out latent graph inference experiments on graph benchmarks. Consistently, the neural snowflake model achieves predictive performance that either matches or surpasses that of the state-of-the-art latent graph inference models. Importantly, this performance improvement is achieved without requiring random search for optimal latent geometry. Instead, the neural snowflake model achieves this enhancement in a differentiable manner.
https://openreview.net/pdf/60e6db71b5bef3a528d29bef36ec6ca6e6fbbcff.pdf
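For intuition only: a classical "snowflake" of a metric raises it to a power $p \in (0, 1]$, which preserves the metric axioms; the toy module below makes that exponent learnable. The paper's neural snowflake is a far more expressive trainable family, so treat this purely as a minimal illustration of the term.

```python
# Toy sketch: a trainable snowflake-style metric on Euclidean distances.
import torch
import torch.nn as nn

class SnowflakeMetric(nn.Module):
    def __init__(self):
        super().__init__()
        self.raw_p = nn.Parameter(torch.tensor(0.0))   # exponent, unconstrained

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        p = torch.sigmoid(self.raw_p)                  # keep p in (0, 1)
        return (x - y).norm(dim=-1).pow(p)             # snowflaked distance

metric = SnowflakeMetric()
d = metric(torch.randn(5, 3), torch.randn(5, 3))       # (5,) pairwise distances
```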
Context is Environment
https://openreview.net/forum?id=8VPWfqtQMX
https://openreview.net/forum?id=8VPWfqtQMX
Sharut Gupta,Stefanie Jegelka,David Lopez-Paz,Kartik Ahuja
ICLR 2024,Poster
Two lines of work are taking center stage in AI research. On the one hand, the community is making increasing efforts to build models that discard spurious correlations and generalize better in novel test environments. Unfortunately, the hard lesson so far is that no proposal convincingly outperforms a simple empirical risk minimization baseline. On the other hand, large language models (LLMs) have erupted as algorithms able to learn in-context, generalizing on-the-fly to eclectic contextual circumstances that users enforce by means of prompting. In this paper, we argue that context is environment, and posit that in-context learning holds the key to better domain generalization. Via extensive theory and experiments, we show that paying attention to context, that is, unlabeled examples as they arrive, allows our proposed In-Context Risk Minimization (ICRM) algorithm to zoom in on the test environment risk minimizer, leading to significant out-of-distribution performance improvements. Furthermore, training with context helps the model learn a better featurizer. From all of this, two messages are worth taking home. Researchers in domain generalization should consider environment as context, and harness the adaptive power of in-context learning. Researchers in LLMs should consider context as environment, to better structure data towards generalization. Code is available at https://github.com/facebookresearch/ICRM.
https://openreview.net/pdf/7cfe1cc605119bc9518235e5d9655a1a4da8b204.pdf
Generalized Neural Sorting Networks with Error-Free Differentiable Swap Functions
https://openreview.net/forum?id=RLSWbk9kPw
https://openreview.net/forum?id=RLSWbk9kPw
Jungtaek Kim,Jeongbeen Yoon,Minsu Cho
ICLR 2024,Poster
Sorting is a fundamental operation of all computer systems and has long been a significant research topic. Beyond the problem formulation of traditional sorting algorithms, we consider sorting problems for more abstract yet expressive inputs, e.g., multi-digit images and image fragments, through a neural sorting network. To learn a mapping from a high-dimensional input to an ordinal variable, the differentiability of sorting networks needs to be guaranteed. In this paper we define the softening error incurred by a differentiable swap function, and develop an error-free swap function that satisfies both a non-decreasing condition and differentiability. Furthermore, a permutation-equivariant Transformer network with multi-head attention is adopted to capture dependency between given inputs and also leverage its model capacity with self-attention. Experiments on diverse sorting benchmarks show that our methods perform better than or comparably to baseline methods.
https://openreview.net/pdf/6bf5bea0e265b05f037edb6ae0bc659bc28d0c72.pdf
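To make the swap-function discussion concrete, here is a generic *soft*, and therefore error-prone, differentiable swap of the kind the paper improves upon. The error-free construction in the paper differs; the temperature below is an arbitrary choice.

```python
# Generic soft swap: routes (a, b) into a (soft-min, soft-max) pair. The gap
# between these outputs and the exact min/max is the "softening error" that
# the paper's error-free swap function eliminates.
import torch

def soft_swap(a: torch.Tensor, b: torch.Tensor, tau: float = 0.1):
    w = torch.sigmoid((b - a) / tau)     # ~1 when already ordered (a <= b)
    lo = w * a + (1 - w) * b             # soft minimum
    hi = w * b + (1 - w) * a             # soft maximum; note lo + hi = a + b
    return lo, hi

a, b = torch.tensor(3.0, requires_grad=True), torch.tensor(1.0)
lo, hi = soft_swap(a, b)
print(lo.item(), hi.item())              # ~1, ~3 up to the softening error
```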
IMPUS: Image Morphing with Perceptually-Uniform Sampling Using Diffusion Models
https://openreview.net/forum?id=gG38EBe2S8
https://openreview.net/forum?id=gG38EBe2S8
Zhaoyuan Yang,Zhengyang Yu,Zhiwei Xu,Jaskirat Singh,Jing Zhang,Dylan Campbell,Peter Tu,Richard Hartley
ICLR 2024,Poster
We present a diffusion-based image morphing approach with perceptually-uniform sampling (IMPUS) that produces smooth, direct and realistic interpolations given an image pair. The embeddings of two images may lie on distinct conditioned distributions of a latent diffusion model, especially when they have significant semantic difference. To bridge this gap, we interpolate in the locally linear and continuous text embedding space and Gaussian latent space. We first optimize the endpoint text embeddings and then map the images to the latent space using a probability flow ODE. Unlike existing work that takes an indirect morphing path, we show that the model adaptation yields a direct path and suppresses ghosting artifacts in the interpolated images. To achieve this, we propose a heuristic bottleneck constraint based on a novel relative perceptual path diversity score that automatically controls the bottleneck size and balances the diversity along the path with its directness. We also propose a perceptually-uniform sampling technique that enables visually smooth changes between the interpolated images. Extensive experiments validate that our IMPUS can achieve smooth, direct, and realistic image morphing and is adaptable to several other generative tasks.
https://openreview.net/pdf/fa558127bb88da51d55ad42974de32f41ac026f5.pdf
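A minimal sketch of the two interpolation paths mentioned in the IMPUS abstract above: spherical interpolation for Gaussian diffusion latents and linear interpolation for the optimized text embeddings. Real endpoints would come from inverting the image pair with a probability-flow ODE; random tensors stand in for them here, and all shapes are assumptions.

```python
# Hedged sketch: slerp in latent space, lerp in text-embedding space.
import torch

def slerp(z0: torch.Tensor, z1: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical linear interpolation (assumes the endpoints are not
    parallel, so theta > 0)."""
    z0f, z1f = z0.flatten(), z1.flatten()
    cos = torch.clamp(torch.dot(z0f, z1f) / (z0f.norm() * z1f.norm()), -1, 1)
    theta = torch.acos(cos)
    return (torch.sin((1 - t) * theta) * z0
            + torch.sin(t * theta) * z1) / torch.sin(theta)

z0, z1 = torch.randn(4, 64, 64), torch.randn(4, 64, 64)   # inverted latents
e0, e1 = torch.randn(77, 768), torch.randn(77, 768)       # text embeddings
for t in (0.25, 0.5, 0.75):
    z_t = slerp(z0, z1, t)          # latent path between the two images
    e_t = (1 - t) * e0 + t * e1     # text-embedding path; both feed the sampler
```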
Adaptive Instrument Design for Indirect Experiments
https://openreview.net/forum?id=4Zz5UELkIt
https://openreview.net/forum?id=4Zz5UELkIt
Yash Chandak,Shiv Shankar,Vasilis Syrgkanis,Emma Brunskill
ICLR 2024,Poster
Indirect experiments provide a valuable framework for estimating treatment effects in situations where conducting randomized control trials (RCTs) is impractical or unethical. Unlike RCTs, indirect experiments estimate treatment effects by leveraging (conditional) instrumental variables, enabling estimation through encouragement and recommendation rather than strict treatment assignment. However, the sample efficiency of such estimators depends not only on the inherent variability in outcomes but also on the varying compliance levels of users with the instrumental variables and the choice of estimator being used, especially when dealing with numerous instrumental variables. While adaptive experiment design has a rich literature for \textit{direct} experiments, in this paper we take the initial steps towards enhancing sample efficiency for \textit{indirect} experiments by adaptively designing a data collection policy over instrumental variables. Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy, minimizing the mean-squared error of the desired (non-linear) estimator. Through experiments conducted in various domains inspired by real-world applications, we showcase how our method can significantly improve the sample efficiency of indirect experiments.
https://openreview.net/pdf/841d3f3401e57b98c2c18da54ad64762ad042108.pdf
Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning
https://openreview.net/forum?id=HiYMiZYwkw
https://openreview.net/forum?id=HiYMiZYwkw
Johnathan Wenjia Xie,Yoonho Lee,Annie S Chen,Chelsea Finn
ICLR 2024,Poster
Self-supervised learning excels in learning representations from large amounts of unlabeled data, demonstrating success across multiple data modalities. Yet, extending self-supervised learning to new modalities is non-trivial because the specifics of existing methods are tailored to each domain, such as domain-specific augmentations which reflect the invariances in the target task. While masked modeling is promising as a domain-agnostic framework for self-supervised learning because it does not rely on input augmentations, its mask sampling procedure remains domain-specific. We present Self-guided Masked Autoencoders (SMA), a fully domain-agnostic masked modeling method. SMA trains an attention-based model using a masked modeling objective, by learning masks to sample without any domain-specific assumptions. We evaluate SMA on three self-supervised learning benchmarks in protein biology, chemical property prediction, and particle physics. We find SMA is capable of learning representations without domain-specific knowledge and achieves state-of-the-art performance on these three benchmarks.
https://openreview.net/pdf/943220f4c11a3cd3ce2fd053c1546ed10efc9b49.pdf
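Below is one illustrative reading of "learning masks to sample": pick the tokens that receive the most aggregate attention and mask those. The attention source, pooling, and mask ratio are assumptions rather than SMA's exact procedure.

```python
# Hedged sketch: attention-guided mask sampling, with no domain heuristics.
import torch

def sample_mask_from_attention(attn: torch.Tensor, mask_ratio: float = 0.5):
    """attn: (B, heads, T, T) attention weights; returns a (B, T) bool mask."""
    score = attn.mean(dim=1).sum(dim=1)         # attention received per token
    k = int(mask_ratio * score.size(1))
    idx = score.topk(k, dim=1).indices          # the most-attended tokens
    mask = torch.zeros_like(score, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask

attn = torch.softmax(torch.randn(2, 8, 16, 16), dim=-1)
mask = sample_mask_from_attention(attn)
print(mask.sum(dim=1))    # half of the 16 tokens masked in each sequence
```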
Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL
https://openreview.net/forum?id=N6o0ZtPzTg
https://openreview.net/forum?id=N6o0ZtPzTg
Hao Sun,Alihan Hüyük,Mihaela van der Schaar
ICLR 2024,Poster
In this study, we aim to enhance the arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization. We identify a previously overlooked objective of query dependency in such optimization and elucidate two ensuing challenges that impede the successful and economical design of prompt optimization techniques. One primary issue is the absence of an effective method to evaluate prompts during inference when the golden answer is unavailable. Concurrently, learning via interactions with the LLMs to navigate the expansive natural language prompting space proves to be resource-intensive. To address this, we introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data. Such data exists as by-products when diverse prompts are benchmarked on open-accessible datasets. With Prompt-OIRL, the query-dependent prompt optimization objective is achieved by first learning an offline reward model. This model can evaluate any query-prompt pairs without accessing LLMs. Subsequently, a best-of-N strategy is deployed to recommend the optimal prompt. Our experimental evaluations across various LLM scales and arithmetic reasoning datasets underscore both the efficacy and economic viability of the proposed approach.
https://openreview.net/pdf/b8d5927e43a62e4845e65add9598c62476a17174.pdf
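The inference-time recipe in the Prompt-OIRL abstract above reduces to scoring query-prompt pairs with the learned offline reward model and taking the best of N candidates, roughly as sketched below. The stand-in MLP reward model and embedding shapes are assumptions; in practice the reward model is trained on offline prompting demonstration data.

```python
# Hedged sketch: best-of-N prompt selection with an offline reward model,
# requiring no LLM calls at selection time.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 1))

def best_of_n(query_emb: torch.Tensor, prompt_embs: torch.Tensor) -> int:
    """query_emb: (d,); prompt_embs: (N, d). Returns index of the best prompt."""
    q = query_emb.expand(prompt_embs.size(0), -1)
    scores = reward_model(torch.cat([q, prompt_embs], dim=-1)).squeeze(-1)
    return int(scores.argmax())

query = torch.randn(128)
candidates = torch.randn(5, 128)      # e.g., five zero-shot CoT prompts
print(best_of_n(query, candidates))
```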
BioBridge: Bridging Biomedical Foundation Models via Knowledge Graphs
https://openreview.net/forum?id=jJCeMiwHdH
https://openreview.net/forum?id=jJCeMiwHdH
Zifeng Wang,Zichen Wang,Balasubramaniam Srinivasan,Vassilis N. Ioannidis,Huzefa Rangwala,RISHITA ANUBHAI
ICLR 2024,Poster
Foundation models (FMs) learn from large volumes of unlabeled data to demonstrate superior performance across a wide range of tasks. However, FMs developed for biomedical domains have largely remained unimodal, i.e., independently trained and used for tasks on protein sequences alone, small molecule structures alone, or clinical data alone. To overcome this limitation, we present BioBridge, a parameter-efficient learning framework, to bridge independently trained unimodal FMs to establish multimodal behavior. BioBridge achieves this by utilizing Knowledge Graphs (KG) to learn transformations between one unimodal FM and another without fine-tuning any underlying unimodal FMs. Our results demonstrate that BioBridge can beat the best baseline KG embedding methods (on average by ~ 76.3%) in cross-modal retrieval tasks. We also find that BioBridge demonstrates out-of-domain generalization ability by extrapolating to unseen modalities or relations. Additionally, we show that BioBridge presents itself as a general-purpose retriever that can aid biomedical multimodal question answering as well as enhance the guided generation of novel drugs. Code is at https://github.com/RyanWangZf/BioBridge.
https://openreview.net/pdf/3187e49606ac2d2e325448d20bf29a709ce4a87d.pdf
Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning
https://openreview.net/forum?id=1jbh2e0b2K
https://openreview.net/forum?id=1jbh2e0b2K
Zhuoyan Xu,Zhenmei Shi,Junyi Wei,Fangzhou Mu,Yin Li,Yingyu Liang
ICLR 2024,Poster
Foundation models have emerged as a powerful tool for many AI problems. Despite the tremendous success of foundation models, effective adaptation to new tasks, particularly those with limited labels, remains an open question and lacks theoretical understanding. An emerging solution with recent success in vision and NLP involves finetuning a foundation model on a selection of relevant tasks, before its adaptation to a target task with limited labeled samples. In this paper, we study the theoretical justification of this multitask finetuning approach. Our theoretical analysis reveals that with a diverse set of related tasks, this multitask finetuning leads to reduced error in the target task, in comparison to directly adapting the same pretrained model. We quantify the relationship between finetuning tasks and target tasks by diversity and consistency metrics, and further propose a practical task selection algorithm. We substantiate our theoretical claims with extensive empirical evidence. Further, we present results affirming that our task selection algorithm adeptly chooses related finetuning tasks, providing advantages to the model performance on target tasks. We believe our study sheds new light on the effective adaptation of foundation models to new tasks that lack abundant labels. Our code is available at https://github.com/OliverXUZY/Foudation-Model_Multitask.
https://openreview.net/pdf/dbbf6585c52585c378083016f1a868cb2570b809.pdf
ReTaSA: A Nonparametric Functional Estimation Approach for Addressing Continuous Target Shift
https://openreview.net/forum?id=KdVvOA00Or
https://openreview.net/forum?id=KdVvOA00Or
Hwanwoo Kim,Xin Zhang,Jiwei Zhao,Qinglong Tian
ICLR 2024,Poster
The presence of distribution shifts poses a significant challenge for deploying modern machine learning models in real-world applications. This work focuses on the target shift problem in a regression setting (Zhang et al., 2013; Nguyen et al., 2016). More specifically, the target variable $y$ (also known as the response variable), which is continuous, has different marginal distributions in the training (source) and testing (target) domains, while the conditional distribution of features $\boldsymbol{x}$ given $y$ remains the same. While most literature focuses on classification tasks with finite target space, the regression problem has an *infinite dimensional* target space, which makes many of the existing methods inapplicable. In this work, we show that the continuous target shift problem can be addressed by estimating the importance weight function from an ill-posed integral equation. We propose a nonparametric regularized approach named *ReTaSA* to solve the ill-posed integral equation and provide theoretical justification for the estimated importance weight function. The effectiveness of the proposed method has been demonstrated with extensive numerical studies on synthetic and real-world datasets.
https://openreview.net/pdf/d1da110b25cfb405463528a072d97b9b1bf77c6c.pdf
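Written out under the stated assumptions (a shared conditional $p(\boldsymbol{x} \mid y)$ across domains, with shifted marginals of $y$), the equation the importance weight must solve is the following. This is the standard formulation implied by the abstract; the regularized estimator itself is the paper's contribution and is not reproduced here.

```latex
% With importance weight w(y) = p_t(y)/p_s(y) and shared conditional p(x|y):
\[
  p_t(\boldsymbol{x})
  = \int p(\boldsymbol{x} \mid y)\, p_t(y)\, \mathrm{d}y
  = \int p(\boldsymbol{x} \mid y)\, w(y)\, p_s(y)\, \mathrm{d}y ,
\]
% a Fredholm integral equation of the first kind in w, which is ill-posed,
% hence the need for regularization when estimating w from finite samples.
```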
GPAvatar: Generalizable and Precise Head Avatar from Image(s)
https://openreview.net/forum?id=hgehGq2bDv
https://openreview.net/forum?id=hgehGq2bDv
Xuangeng Chu,Yu Li,Ailing Zeng,Tianyu Yang,Lijian Lin,Yunfei Liu,Tatsuya Harada
ICLR 2024,Poster
Head avatar reconstruction, crucial for applications in virtual reality, online meetings, gaming, and film industries, has garnered substantial attention within the computer vision community. The fundamental objective of this field is to faithfully recreate the head avatar and precisely control expressions and postures. Existing methods, categorized into 2D-based warping, mesh-based, and neural rendering approaches, present challenges in maintaining multi-view consistency, incorporating non-facial information, and generalizing to new identities. In this paper, we propose a framework named GPAvatar that reconstructs 3D head avatars from one or several images in a single forward pass. The key idea of this work is to introduce a dynamic point-based expression field driven by a point cloud to precisely and effectively capture expressions. Furthermore, we use a Multi Tri-planes Attention (MTA) fusion module in the tri-planes canonical field to leverage information from multiple input images. The proposed method achieves faithful identity reconstruction, precise expression control, and multi-view consistency, demonstrating promising results for free-viewpoint rendering and novel view synthesis.
https://openreview.net/pdf/16ad6e98e148f655f20332da58fed2c789eeb33a.pdf
Mind Your Augmentation: The Key to Decoupling Dense Self-Supervised Learning
https://openreview.net/forum?id=WQYHbr36Fo
https://openreview.net/forum?id=WQYHbr36Fo
Congpei Qiu,Tong Zhang,Yanhao Wu,Wei Ke,Mathieu Salzmann,Sabine Süsstrunk
ICLR 2024,Poster
Dense Self-Supervised Learning (SSL) creates positive pairs by building positive paired regions or points, thereby aiming to preserve local features, for example those of individual objects. However, existing approaches tend to couple objects by leaking information from the neighboring contextual regions when the pairs have a limited overlap. In this paper, we first quantitatively identify and confirm the existence of such a coupling phenomenon. We then address it by developing a remarkably simple yet highly effective solution comprising a novel augmentation method, Region Collaborative Cutout (RCC), and a corresponding decoupling branch. Importantly, our design is versatile and can be seamlessly integrated into existing SSL frameworks, whether based on Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs). We conduct extensive experiments, incorporating our solution into two CNN-based and two ViT-based methods, with results confirming the effectiveness of our approach. Moreover, we provide empirical evidence that our method significantly contributes to the disentanglement of feature representations among objects, both in quantitative and qualitative terms.
https://openreview.net/pdf/e503fce9e1563e9cf78c2c12962fa1921f07b293.pdf
Entropy-MCMC: Sampling from Flat Basins with Ease
https://openreview.net/forum?id=oGNdBvymod
https://openreview.net/forum?id=oGNdBvymod
Bolian Li,Ruqi Zhang
ICLR 2024,Poster
Bayesian deep learning relies on the quality of posterior distribution estimation. However, the posterior of deep neural networks is highly multi-modal in nature, with local modes exhibiting varying generalization performance. Given a practical budget, targeting the original posterior can lead to suboptimal performance, as some samples may become trapped in "bad" modes and suffer from overfitting. Leveraging the observation that "good" modes with low generalization error often reside in flat basins of the energy landscape, we propose to bias sampling on the posterior toward these flat regions. Specifically, we introduce an auxiliary guiding variable, the stationary distribution of which resembles a smoothed posterior free from sharp modes, to lead the MCMC sampler to flat basins. By integrating this guiding variable with the model parameter, we create a simple joint distribution that enables efficient sampling with minimal computational overhead. We prove the convergence of our method and further show that it converges faster than several existing flatness-aware methods in the strongly convex setting. Empirical results demonstrate that our method can successfully sample from flat basins of the posterior, and outperforms all compared baselines on multiple benchmarks including classification, calibration, and out-of-distribution detection.
https://openreview.net/pdf/9273ef589b4ef6eac79c1aae06fe6c4e629c9932.pdf
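A toy sketch of the coupling described above: join the parameter with a guiding variable through a Gaussian term, so the guiding variable's marginal resembles a smoothed posterior, and run Langevin dynamics on the joint. The energy function, coupling width, and step size are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: Langevin sampling of a joint (theta, theta_a) distribution
# whose Gaussian coupling smooths the guiding variable's marginal.
import torch

def U(theta):                        # toy multi-modal negative log posterior
    return (theta.pow(2) - 1).pow(2).sum()

def joint_energy(theta, theta_a, eta=0.1):
    return U(theta) + (theta - theta_a).pow(2).sum() / (2 * eta)

theta = torch.randn(2, requires_grad=True)
theta_a = torch.randn(2, requires_grad=True)
step = 1e-3
for _ in range(1000):                # SGLD-style updates on the joint
    E = joint_energy(theta, theta_a)
    g_t, g_a = torch.autograd.grad(E, (theta, theta_a))
    with torch.no_grad():
        theta += -step * g_t + (2 * step) ** 0.5 * torch.randn_like(theta)
        theta_a += -step * g_a + (2 * step) ** 0.5 * torch.randn_like(theta_a)
```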
Xformer: Hybrid X-Shaped Transformer for Image Denoising
https://openreview.net/forum?id=vXrIQLzIKY
https://openreview.net/forum?id=vXrIQLzIKY
Jiale Zhang,Yulun Zhang,Jinjin Gu,Jiahua Dong,Linghe Kong,Xiaokang Yang
ICLR 2024,Poster
In this paper, we present a hybrid X-shaped vision Transformer, named Xformer, which performs notably well on image denoising tasks. We explore strengthening the global representation of tokens from different scopes. In detail, we adopt two types of Transformer blocks. The spatial-wise Transformer block performs fine-grained local patch interactions across tokens defined along the spatial dimension. The channel-wise Transformer block performs direct global context interactions across tokens defined along the channel dimension. Based on the concurrent network structure, we design two branches to conduct these two interaction fashions. Within each branch, we employ an encoder-decoder architecture to capture multi-scale features. Besides, we propose the Bidirectional Connection Unit (BCU) to couple the learned representations from these two branches while providing enhanced information fusion. The joint design makes our Xformer powerful in conducting global information modeling in both spatial and channel dimensions. Extensive experiments show that Xformer, under comparable model complexity, achieves state-of-the-art performance on synthetic and real-world image denoising tasks. We also provide code and models at https://github.com/gladzhang/Xformer.
https://openreview.net/pdf/d4e56a2cae8e9a23ca086434c998bef898c630e2.pdf
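The two token definitions in the abstract can be illustrated with single-head attention: treating spatial positions as tokens versus treating channels as tokens ("transposed" attention). Real blocks add projections, multi-head structure, and normalization; this is only schematic.

```python
# Schematic: spatial-wise vs. channel-wise self-attention on the same input.
import torch
import torch.nn.functional as F

def spatial_attention(x):
    """x: (B, N, C) with N spatial tokens; attend over the N positions."""
    attn = F.softmax(x @ x.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
    return attn @ x                                  # (B, N, C)

def channel_attention(x):
    """Same input, but the C channels act as tokens; attend over channels."""
    xt = x.transpose(1, 2)                           # (B, C, N)
    attn = F.softmax(xt @ xt.transpose(1, 2) / xt.size(-1) ** 0.5, dim=-1)
    return (attn @ xt).transpose(1, 2)               # back to (B, N, C)

x = torch.randn(1, 64, 32)        # 64 spatial tokens, 32 channels
print(spatial_attention(x).shape, channel_attention(x).shape)
```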
Learning to Embed Time Series Patches Independently
https://openreview.net/forum?id=WS7GuBDFa2
https://openreview.net/forum?id=WS7GuBDFa2
Seunghan Lee,Taeyoung Park,Kibok Lee
ICLR 2024,Poster
Masked time series modeling has recently gained much attention as a self-supervised representation learning strategy for time series. Inspired by masked image modeling in computer vision, recent works first patchify and partially mask out time series, and then train Transformers to capture the dependencies between patches by predicting masked patches from unmasked patches. However, we argue that capturing such patch dependencies might not be an optimal strategy for time series representation learning; rather, learning to embed patches independently results in better time series representations. Specifically, we propose to use 1) a simple patch reconstruction task, which autoencodes each patch without looking at other patches, and 2) a simple patch-wise MLP that embeds each patch independently. In addition, we introduce complementary contrastive learning to hierarchically capture adjacent time series information efficiently. Our proposed method improves time series forecasting and classification performance compared to state-of-the-art Transformer-based models, while it is more efficient in terms of the number of parameters and training time. Code is available at this repository: https://github.com/seunghan96/pits.
https://openreview.net/pdf/df0ac89a8662595a960d38ead045df8b5814eeac.pdf
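The core recipe above is compact enough to sketch directly: patchify a series and autoencode each patch with a shared MLP that never sees other patches. Dimensions and layer sizes are illustrative assumptions.

```python
# Minimal sketch: patch-wise MLP autoencoding with no cross-patch mixing.
import torch
import torch.nn as nn

patch_len, d_model = 16, 64
encoder = nn.Sequential(nn.Linear(patch_len, d_model), nn.ReLU(),
                        nn.Linear(d_model, d_model))
decoder = nn.Linear(d_model, patch_len)

series = torch.randn(8, 320)                        # (batch, time)
patches = series.unfold(1, patch_len, patch_len)    # (batch, n_patches, patch_len)
z = encoder(patches)                                # each patch embedded alone
recon = decoder(z)
loss = (recon - patches).pow(2).mean()              # patch reconstruction task
```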
Understanding Convergence and Generalization in Federated Learning through Feature Learning Theory
https://openreview.net/forum?id=EcetCr4trp
https://openreview.net/forum?id=EcetCr4trp
Wei Huang,Ye Shi,Zhongyi Cai,Taiji Suzuki
ICLR 2024,Poster
Federated Learning (FL) has attracted significant attention as an efficient privacy-preserving approach to distributed learning across multiple clients. Despite extensive empirical research and practical applications, a systematic way to theoretically understand the convergence and generalization properties in FL remains limited. This work aims to establish a unified theoretical foundation for understanding FL through feature learning theory. We focus on a scenario where each client employs a two-layer convolutional neural network (CNN) for local training on their own data. Many existing works analyze the convergence of Federated Averaging (FedAvg) under lazy training with linearizing assumptions in weight space. In contrast, our approach tracks the trajectory of signal learning and noise memorization in FL, eliminating the need for these assumptions. We further show that FedAvg can achieve near-zero test error by effectively increasing signal-to-noise ratio (SNR) in feature learning, while local training without communication achieves a large constant test error. This finding highlights the benefits of communication for generalization in FL. Moreover, our theoretical results suggest that a weighted FedAvg method, based on the similarity of input features across clients, can effectively tackle data heterogeneity issues in FL. Experimental results on both synthetic and real-world datasets verify our theoretical conclusions and emphasize the effectiveness of the weighted FedAvg approach.
https://openreview.net/pdf/06cdb541958881f994ad89f992d7f9bae0d0cca7.pdf
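As one concrete (assumed) instantiation of the weighted FedAvg idea suggested by the theory above, the sketch below weights clients by the average cosine similarity of their mean input features; the paper's exact weighting rule may differ.

```python
# Hedged sketch: feature-similarity-weighted federated averaging.
import torch

def weighted_fedavg(client_params: list, client_feats: list) -> torch.Tensor:
    """client_params: flattened model vectors; client_feats: (n_i, d) inputs."""
    means = torch.stack([f.mean(dim=0) for f in client_feats])   # (K, d)
    means = means / means.norm(dim=1, keepdim=True)
    sim = (means @ means.t()).mean(dim=1)        # similarity to other clients
    w = torch.softmax(sim, dim=0)                # (K,) aggregation weights
    return sum(wi * p for wi, p in zip(w, torch.stack(client_params)))

params = [torch.randn(10) for _ in range(4)]     # flattened client models
feats = [torch.randn(32, 5) for _ in range(4)]   # per-client input features
global_params = weighted_fedavg(params, feats)
```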
Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification
https://openreview.net/forum?id=pz2E1Q9Wni
https://openreview.net/forum?id=pz2E1Q9Wni
Joar Max Viktor Skalse,Alessandro Abate
ICLR 2024,Poster
Inverse reinforcement learning (IRL) aims to infer an agent's *preferences* (represented as a reward function $R$) from their *behaviour* (represented as a policy $\pi$). To do this, we need a *behavioural model* of how $\pi$ relates to $R$. In the current literature, the most common behavioural models are *optimality*, *Boltzmann-rationality*, and *causal entropy maximisation*. However, the true relationship between a human's preferences and their behaviour is much more complex than any of these behavioural models. This means that the behavioural models are *misspecified*, which raises the concern that they may lead to systematic errors if applied to real data. In this paper, we analyse how sensitive the IRL problem is to misspecification of the behavioural model. Specifically, we provide necessary and sufficient conditions that completely characterise how the observed data may differ from the assumed behavioural model without incurring an error above a given threshold. In addition to this, we also characterise the conditions under which a behavioural model is robust to small perturbations of the observed policy, and we analyse how robust many behavioural models are to misspecification of their parameter values (such as the discount rate). Our analysis suggests that the IRL problem is highly sensitive to misspecification, in the sense that very mild misspecification can lead to very large errors in the inferred reward function.
https://openreview.net/pdf/6974d53f2ea3ce7ceff5c7af2dcdbeb94a1ce755.pdf
Rethinking CNN’s Generalization to Backdoor Attack from Frequency Domain
https://openreview.net/forum?id=mYhH0CDFFa
https://openreview.net/forum?id=mYhH0CDFFa
Quanrui Rao,Lin Wang,Wuying Liu
ICLR 2024,Poster
Convolutional neural networks (CNNs) are easily affected by backdoor injections, where compromised models perform normally on clean samples but produce specific outputs on poisoned ones. Most existing studies have focused on the effect of trigger feature changes of poisoned samples on model generalization in the spatial domain. We focus instead on the mechanism by which CNNs memorize poisoned samples in the frequency domain, and find that CNNs generalize to poisoned samples by memorizing the frequency-domain distribution of trigger changes. We also explore the influence of trigger perturbations in different frequency-domain components on the generalization of poisoned models for both visible and invisible backdoor attacks, and prove that high-frequency components are more susceptible to perturbations than low-frequency components. Based on these findings, we propose a universal invisible strategy for visible triggers, which can achieve trigger invisibility while maintaining the raw attack performance. We also design a novel frequency-domain backdoor attack method based on low-frequency semantic information, which can achieve 100\% attack accuracy on multiple models and multiple datasets, and can bypass multiple defenses.
https://openreview.net/pdf/064c6af562c3d5fede72417e015f125083346fde.pdf
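To illustrate what a frequency-domain trigger looks like mechanically, the sketch below perturbs low-frequency FFT coefficients of an image and inverts the transform. The specific perturbation pattern and magnitude are assumptions for demonstration, not the paper's attack.

```python
# Illustrative sketch: plant a subtle perturbation near the DC component.
import numpy as np

def low_freq_trigger(img: np.ndarray, radius: int = 4, eps: float = 8.0):
    """img: (H, W) grayscale array. Perturb coefficients near the center
    (low frequencies) of the shifted 2D FFT, then invert."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    f[cy - radius:cy + radius, cx - radius:cx + radius] += eps  # low-freq bump
    poisoned = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    return np.clip(poisoned, 0, 255)

clean = np.random.randint(0, 256, (32, 32)).astype(float)
poisoned = low_freq_trigger(clean)
print(np.abs(poisoned - clean).mean())   # small, visually subtle change
```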
LLCP: Learning Latent Causal Processes for Reasoning-based Video Question Answer
https://openreview.net/forum?id=Cu5wJa5LGO
https://openreview.net/forum?id=Cu5wJa5LGO
Guangyi Chen,Yuke Li,Xiao Liu,Zijian Li,Eman Al Suradi,Donglai Wei,Kun Zhang
ICLR 2024,Poster
Current approaches to Video Question Answering (VideoQA) primarily focus on cross-modality matching, which is limited by the requirement for extensive data annotations and the insufficient capacity for causal reasoning (e.g. attributing accidents). To address these challenges, we introduce a causal framework for video reasoning, termed Learning Latent Causal Processes (LLCP). At the heart of LLCP lies a multivariate generative model designed to analyze the spatial-temporal dynamics of objects within events. Leveraging the inherent modularity of causal mechanisms, we train the model through self-supervised local auto-regression, eliminating the need for annotated question-answer pairs. During inference, the model is applied to answer two types of reasoning questions: accident attribution, which infers the cause from observed effects, and counterfactual prediction, which predicts the effects of counterfactual conditions given the factual evidence. In the first scenario, we identify variables that deviate from the established distribution by the learned model, signifying the root cause of accidents. In the second scenario, we replace embeddings of previous variables with counterfactual ones, enabling us to forecast potential developments. Once we have identified these cause/effect variables, natural language answers are derived through a combination of grammatical parsing and a pre-trained vision-language model. We assess the efficacy of LLCP on both synthetic and real-world data, demonstrating comparable performance to supervised methods despite our framework using no paired textual annotations.
https://openreview.net/pdf/e4886678d1fb61b3be5fd3c9001c2465abbc8e48.pdf
PromptTTS 2: Describing and Generating Voices with Text Prompt
https://openreview.net/forum?id=NsCXDyv2Bn
https://openreview.net/forum?id=NsCXDyv2Bn
Yichong Leng,Zhifang Guo,Kai Shen,Zeqian Ju,Xu Tan,Eric Liu,Yufei Liu,Dongchao Yang,leying zhang,Kaitao Song,Lei He,Xiangyang Li,sheng zhao,Tao Qin,Jiang Bian
ICLR 2024,Poster
Speech conveys more information than text, as the same word can be uttered in various voices to convey diverse information. Compared to traditional text-to-speech (TTS) methods relying on speech prompts (reference speech) for voice variability, using text prompts (descriptions) is more user-friendly since speech prompts can be hard to find or may not exist at all. TTS approaches based on the text prompt face two main challenges: 1) the one-to-many problem, where not all details about voice variability can be described in the text prompt, and 2) the limited availability of text prompt datasets, since writing text prompts for speech requires vendors and incurs a large data-labeling cost. In this work, we introduce PromptTTS 2 to address these challenges with a variation network to provide variability information of voice not captured by text prompts, and a prompt generation pipeline to utilize large language models (LLMs) to compose high-quality text prompts. Specifically, the variation network predicts the representation extracted from the reference speech (which contains full information about voice variability) based on the text prompt representation. For the prompt generation pipeline, it generates text prompts for speech with a speech language understanding model to recognize voice attributes (e.g., gender, speed) from speech and a large language model to formulate text prompts based on the recognition results. Experiments on a large-scale (44K hours) speech dataset demonstrate that compared to the previous works, PromptTTS 2 generates voices more consistent with text prompts and supports the sampling of diverse voice variability, thereby offering users more choices on voice generation. Additionally, the prompt generation pipeline produces high-quality text prompts, eliminating the large labeling cost. The demo page of PromptTTS 2 is available (https://speechresearch.github.io/prompttts2).
https://openreview.net/pdf/efd8de37796cbf776003f8893be7b73e6d4a7253.pdf
RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation
https://openreview.net/forum?id=PEuDO2EiDr
https://openreview.net/forum?id=PEuDO2EiDr
Samuel Pegg,Kai Li,Xiaolin Hu
ICLR 2024,Poster
Audio-visual speech separation methods aim to integrate different modalities to generate high-quality separated speech, thereby enhancing the performance of downstream tasks such as speech recognition. Most existing state-of-the-art (SOTA) models operate in the time domain. However, their overly simplistic approach to modeling acoustic features often necessitates larger and more computationally intensive models in order to achieve SOTA performance. In this paper, we present a novel time-frequency domain audio-visual speech separation method: Recurrent Time-Frequency Separation Network (RTFS-Net), which applies its algorithms on the complex time-frequency bins yielded by the Short-Time Fourier Transform. We model and capture the time and frequency dimensions of the audio independently using a multi-layered RNN along each dimension. Furthermore, we introduce a unique attention-based fusion technique for the efficient integration of audio and visual information, and a new mask separation approach that takes advantage of the intrinsic spectral nature of the acoustic features for a clearer separation. RTFS-Net outperforms the prior SOTA method in both inference speed and separation quality while reducing the number of parameters by 90% and MACs by 83%. This is the first time-frequency domain audio-visual speech separation method to outperform all contemporary time-domain counterparts.
https://openreview.net/pdf/f6052900d23a154787826855283aad483cc3cd81.pdf
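A schematic of the independent time/frequency modelling described in the RTFS-Net abstract above: run one RNN along the time axis per frequency bin, then another along the frequency axis per frame, over STFT-derived features. Layer sizes are illustrative, and the audio-visual fusion is omitted.

```python
# Schematic: dual-axis RNN modelling over time-frequency features.
import torch
import torch.nn as nn

B, T, Fb, C = 2, 50, 129, 32         # batch, frames, frequency bins, channels
x = torch.randn(B, T, Fb, C)         # projected complex-STFT features

time_rnn = nn.GRU(C, C, batch_first=True)
freq_rnn = nn.GRU(C, C, batch_first=True)

xt = x.permute(0, 2, 1, 3).reshape(B * Fb, T, C)   # sequences along time
xt, _ = time_rnn(xt)
x = xt.reshape(B, Fb, T, C).permute(0, 2, 1, 3)

xf = x.reshape(B * T, Fb, C)                       # sequences along frequency
xf, _ = freq_rnn(xf)
x = xf.reshape(B, T, Fb, C)
print(x.shape)                                     # (2, 50, 129, 32)
```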
Consistent Video-to-Video Transfer Using Synthetic Dataset
https://openreview.net/forum?id=IoKRezZMxF
https://openreview.net/forum?id=IoKRezZMxF
Jiaxin Cheng,Tianjun Xiao,Tong He
ICLR 2024,Poster
We introduce a novel and efficient approach for text-based video-to-video editing that eliminates the need for resource-intensive per-video-per-model finetuning. At the core of our approach is a synthetic paired video dataset tailored for video-to-video transfer tasks. Inspired by Instruct Pix2Pix's image transfer via editing instruction, we adapt this paradigm to the video domain. Extending the Prompt-to-Prompt to videos, we efficiently generate paired samples, each with an input video and its edited counterpart. Alongside this, we introduce the Long Video Sampling Correction during sampling, ensuring consistent long videos across batches. Our method surpasses current methods like Tune-A-Video, heralding substantial progress in text-based video-to-video editing and suggesting exciting avenues for further exploration and deployment.
https://openreview.net/pdf/e91267e5675cf150b685ada8cd0303644c45a25f.pdf
Byzantine Robust Cooperative Multi-Agent Reinforcement Learning as a Bayesian Game
https://openreview.net/forum?id=z6KS9D1dxt
https://openreview.net/forum?id=z6KS9D1dxt
Simin Li,Jun Guo,Jingqiao Xiu,Ruixiao Xu,Xin Yu,Jiakai Wang,Aishan Liu,Yaodong Yang,Xianglong Liu
ICLR 2024,Poster
In this study, we explore the robustness of cooperative multi-agent reinforcement learning (c-MARL) against Byzantine failures, where any agent can enact arbitrary, worst-case actions due to malfunction or adversarial attack. To address the uncertainty that any agent can be adversarial, we propose a Bayesian Adversarial Robust Dec-POMDP (BARDec-POMDP) framework, which views Byzantine adversaries as nature-dictated types, represented by a separate transition. This allows agents to learn policies grounded on their posterior beliefs about the type of other agents, fostering collaboration with identified allies and minimizing vulnerability to adversarial manipulation. We define the optimal solution to the BARDec-POMDP as an ex interim robust Markov perfect Bayesian equilibrium, which we prove to exist, and whose corresponding policy weakly dominates previous approaches as time goes to infinity. To realize this equilibrium, we put forward a two-timescale actor-critic algorithm with almost sure convergence under specific conditions. Experiments on matrix games, Level-based Foraging and StarCraft II indicate that our method successfully acquires intricate micromanagement skills and adaptively aligns with allies under worst-case perturbations, showing resilience against non-oblivious adversaries, random allies, observation-based attacks, and transfer-based attacks.
https://openreview.net/pdf/ae82041a50b5244c3cc55d71579b529584e36982.pdf
Scale-Adaptive Diffusion Model for Complex Sketch Synthesis
https://openreview.net/forum?id=5xadJmgwix
https://openreview.net/forum?id=5xadJmgwix
Jijin Hu,Ke Li,Yonggang Qi,Yi-Zhe Song
ICLR 2024,Poster
While diffusion models have revolutionized generative AI, their application to human sketch generation, especially in the creation of complex yet concise and recognizable sketches, remains largely unexplored. Existing efforts have primarily focused on vector-based sketches, limiting their ability to handle intricate sketch data. This paper introduces an innovative extension of diffusion models to pixel-level sketch generation, addressing the challenge of dynamically optimizing the guidance scale for classifier-guided diffusion. Our approach achieves a delicate balance between recognizability and complexity in generated sketches through scale-adaptive classifier-guided diffusion models, a scaling indicator, and the concept of a residual sketch. We also propose a three-phase sampling strategy to enhance sketch diversity and quality. Experiments on the QuickDraw dataset showcase the potential of diffusion models to push the boundaries of sketch generation, particularly in complex scenarios unattainable by vector-based methods.
https://openreview.net/pdf/5858671a13703c6e1bab5f1eb7a17e2c4c16909c.pdf
Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature
https://openreview.net/forum?id=Bpcgcr8E8Z
https://openreview.net/forum?id=Bpcgcr8E8Z
Guangsheng Bao,Yanbin Zhao,Zhiyang Teng,Linyi Yang,Yue Zhang
ICLR 2024,Poster
Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks. To build trustworthy AI systems, it is imperative to distinguish between machine-generated and human-authored content. The leading zero-shot detector, DetectGPT, showcases commendable performance but is marred by its intensive computational costs. In this paper, we introduce the concept of **conditional probability curvature** to elucidate discrepancies in word choices between LLMs and humans within a given context. Utilizing this curvature as a foundational metric, we present **Fast-DetectGPT**, an optimized zero-shot detector, which substitutes DetectGPT's perturbation step with a more efficient sampling step. Our evaluations on various datasets, source models, and test conditions indicate that Fast-DetectGPT not only surpasses DetectGPT by around 75\% in relative terms in both the white-box and black-box settings, but also accelerates the detection process by a factor of 340, as detailed in Table 1.
https://openreview.net/pdf/649dda0dade8733a7d8d0446a622b80031695383.pdf
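The curvature statistic can be sketched analytically from a scoring model's logits: compare the passage log-likelihood against the mean and variance of token log-probabilities under the model's own conditional distributions. This follows the abstract's description; details such as normalization may differ from the released implementation.

```python
# Hedged sketch: conditional probability curvature from next-token logits.
import torch

def conditional_curvature(logits: torch.Tensor, tokens: torch.Tensor) -> float:
    """logits: (T, V) next-token logits aligned with the observed tokens: (T,).
    Higher curvature suggests machine-generated text."""
    logp = torch.log_softmax(logits, dim=-1)                   # (T, V)
    probs = logp.exp()
    ll = logp.gather(1, tokens.unsqueeze(1)).squeeze(1).sum()  # log p(x)
    mean = (probs * logp).sum(dim=-1)                          # E[log p] per step
    var = (probs * logp.pow(2)).sum(dim=-1) - mean.pow(2)
    return float((ll - mean.sum()) / var.sum().sqrt())

logits, tokens = torch.randn(20, 100), torch.randint(0, 100, (20,))
print(conditional_curvature(logits, tokens))
```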
Defining and extracting generalizable interaction primitives from DNNs
https://openreview.net/forum?id=OCqyFVFNeF
https://openreview.net/forum?id=OCqyFVFNeF
Lu Chen,Siyu Lou,Benhao Huang,Quanshi Zhang
ICLR 2024,Poster
Faithfully summarizing the knowledge encoded by a deep neural network (DNN) into a few symbolic primitive patterns without losing much information represents a core challenge in explainable AI. To this end, Ren et al. (2024) have derived a series of theorems to prove that the inference score of a DNN can be explained as a small set of interactions between input variables. However, the lack of generalization power makes it still hard to consider such interactions as faithful primitive patterns encoded by the DNN. Therefore, given different DNNs trained for the same task, we develop a new method to extract interactions that are shared by these DNNs. Experiments show that the extracted interactions can better reflect common knowledge shared by different DNNs.
https://openreview.net/pdf/eddfa8774eedc73536f84d190ef2dc53f714d119.pdf
Revisiting Deep Audio-Text Retrieval Through the Lens of Transportation
https://openreview.net/forum?id=l60EM8md3t
https://openreview.net/forum?id=l60EM8md3t
Manh Luong,Khai Nguyen,Nhat Ho,Reza Haf,Dinh Phung,Lizhen Qu
ICLR 2024,Poster
The Learning-to-match (LTM) framework proves to be an effective inverse optimal transport approach for learning the underlying ground metric between two sources of data, facilitating subsequent matching. However, the conventional LTM framework faces scalability challenges, necessitating the use of the entire dataset each time the parameters of the ground metric are updated. In adapting LTM to the deep learning context, we introduce the mini-batch Learning-to-match (m-LTM) framework for audio-text retrieval problems. This framework leverages mini-batch subsampling and a Mahalanobis-enhanced family of ground metrics. Moreover, to cope with misaligned training data in practice, we propose a variant using partial optimal transport to mitigate the harm of misaligned data pairs in training data. We conduct extensive experiments on audio-text matching problems using three datasets: AudioCaps, Clotho, and ESC-50. Results demonstrate that our proposed method is capable of learning a rich and expressive joint embedding space, which achieves SOTA performance. Beyond this, the proposed m-LTM framework is able to close the modality gap between audio and text embeddings, which surpasses both triplet and contrastive loss in the zero-shot sound event detection task on the ESC-50 dataset. Notably, our strategy of employing partial optimal transport with m-LTM demonstrates greater noise tolerance than contrastive loss, especially under varying noise ratios in training data on the AudioCaps dataset. Our code is available at https://github.com/v-manhlt3/m-LTM-Audio-Text-Retrieval
https://openreview.net/pdf/9de14a40b6e020fc27cb5f55349c2cefefbfb408.pdf
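A toy sketch of two ingredients named above: a Mahalanobis-parameterized ground cost and entropic OT on a mini-batch via Sinkhorn iterations. The factorization $M = LL^\top$, the cost normalization, and the iteration counts are assumptions chosen for a stable demo, not the paper's configuration.

```python
# Hedged sketch: Mahalanobis ground cost + mini-batch entropic OT (Sinkhorn).
import torch

def mahalanobis_cost(a, t, L):
    """a: (n, d) audio; t: (n, d) text; L: (d, d) learnable factor of M."""
    diff = a.unsqueeze(1) - t.unsqueeze(0)       # (n, n, d) pairwise gaps
    proj = diff @ L                              # Mahalanobis via M = L L^T
    return proj.pow(2).sum(-1)                   # (n, n) cost matrix

def sinkhorn(C, eps=0.1, iters=50):
    n = C.size(0)
    K = torch.exp(-C / eps)
    u = torch.ones(n) / n
    for _ in range(iters):                       # alternate marginal scalings
        v = (torch.ones(n) / n) / (K.t() @ u)
        u = (torch.ones(n) / n) / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)   # transport plan

a, t, L = torch.randn(8, 32), torch.randn(8, 32), torch.randn(32, 32)
C = mahalanobis_cost(a, t, L)
plan = sinkhorn(C / C.max())                     # normalize cost for stability
print(plan.sum())                                # ~1: a valid mini-batch coupling
```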
Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching
https://openreview.net/forum?id=yzRXdhk2he
https://openreview.net/forum?id=yzRXdhk2he
Yang Liu,Muzhi Zhu,Hengtao Li,Hao Chen,Xinlong Wang,Chunhua Shen
ICLR 2024,Poster
Powered by large-scale pre-training, vision foundation models exhibit significant potential in open-world image understanding. However, unlike large language models that excel at directly tackling various language tasks, vision foundation models require a task-specific model structure followed by fine-tuning on specific tasks. In this work, we present $\textbf{Matcher}$, a novel perception paradigm that utilizes off-the-shelf vision foundation models to address various perception tasks. Matcher can segment anything by using an in-context example without training. Additionally, we design three effective components within the Matcher framework to collaborate with these foundation models and unleash their full potential in diverse perception tasks. Matcher demonstrates impressive generalization performance across various segmentation tasks, all without training. For example, it achieves 52.7% mIoU on COCO-20$^i$ with one example, surpassing the state-of-the-art specialist model by 1.6%. In addition, Matcher achieves 33.0% mIoU on the proposed LVIS-92$^i$ for one-shot semantic segmentation, outperforming the state-of-the-art generalist model by 14.4%. Our visualization results further showcase the open-world generality and flexibility of Matcher when applied to images in the wild.
https://openreview.net/pdf/3a761359308f7470da5a00aa71f2e5457f7c6282.pdf