title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Efficient Multi-task Reinforcement Learning with Cross-Task Policy Guidance | https://openreview.net/forum?id=3qUks3wrnH | https://openreview.net/forum?id=3qUks3wrnH | Jinmin He,Kai Li,Yifan Zang,Haobo Fu,QIANG FU,Junliang Xing,Jian Cheng | NIPS 2024,Poster | Multi-task reinforcement learning endeavors to efficiently leverage shared information across various tasks, facilitating the simultaneous learning of multiple tasks. Existing approaches primarily focus on parameter sharing with carefully designed network structures or tailored optimization procedures. However, they overlook a direct and complementary way to exploit cross-task similarities: the control policies of tasks already proficient in some skills can provide explicit guidance for unmastered tasks to accelerate skills acquisition. To this end, we present a novel framework called Cross-Task Policy Guidance (CTPG), which trains a guide policy for each task to select the behavior policy interacting with the environment from all tasks' control policies, generating better training trajectories. In addition, we propose two gating mechanisms to improve the learning efficiency of CTPG: one gate filters out control policies that are not beneficial for guidance, while the other gate blocks tasks that do not necessitate guidance. CTPG is a general framework adaptable to existing parameter sharing approaches. Empirical evaluations demonstrate that incorporating CTPG with these approaches significantly enhances performance in manipulation and locomotion benchmarks. | https://openreview.net/pdf/718f8a0162937d8a72dd87918f6855d8654402fe.pdf |
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts | https://openreview.net/forum?id=hwuUBsMlBf | https://openreview.net/forum?id=hwuUBsMlBf | Jiachen Li,Xinyao Wang,Sijie Zhu,Chia-Wen Kuo,Lu XU,Fan Chen,Jitesh Jain,Humphrey Shi,Longyin Wen | NIPS 2024,Poster | Recent advancements in Multimodal Large Language Models (LLMs) have focused primarily on scaling by increasing text-image pair data and enhancing LLMs to improve performance on multimodal tasks. However, these scaling approaches are computationally expensive and overlook the significance of efficiently improving model capabilities from the vision side. Inspired by the successful application of Mixture-of-Experts (MoE) in LLMs, which improves model scalability during training while keeping inference costs similar to those of smaller models, we propose CuMo, which incorporates co-upcycled Top-K sparsely-gated Mixture-of-Experts blocks into both the vision encoder and the MLP connector, thereby enhancing multimodal LLMs with negligible additional activated parameters during inference. CuMo first pre-trains the MLP blocks and then initializes each expert in the MoE block from the pre-trained MLP block during the visual instruction tuning stage, with auxiliary losses to ensure balanced loading of experts. CuMo outperforms state-of-the-art multimodal LLMs across various VQA and visual-instruction-following benchmarks within each model size group, all while training exclusively on open-sourced datasets. | https://openreview.net/pdf/f9d3434ee7bc78c3c09f5488b04345aff7f59570.pdf |
Learning to Predict Structural Vibrations | https://openreview.net/forum?id=i4jZ6fCDdy | https://openreview.net/forum?id=i4jZ6fCDdy | Jan van Delden,Julius Schultz,Christopher Blech,Sabine C. Langer,Timo Lüddecke | NIPS 2024,Poster | In mechanical structures like airplanes, cars and houses, noise is generated and transmitted through vibrations. To take measures to reduce this noise, vibrations need to be simulated with expensive numerical computations. Deep learning surrogate models present a promising alternative to classical numerical simulations, as they can be evaluated orders of magnitude faster while trading off some accuracy. To quantify such trade-offs systematically and foster the development of methods, we present a benchmark on the task of predicting the vibration of harmonically excited plates. The benchmark features a total of 12,000 plate geometries with varying forms of beadings, materials, boundary conditions, load positions and sizes, with associated numerical solutions. To address the benchmark task, we propose a new network architecture, named \modelname, which predicts vibration patterns of plate geometries given a specific excitation frequency. Applying principles from operator learning and implicit models for shape encoding, our approach effectively addresses the prediction of the highly variable frequency response functions occurring in dynamic systems. To quantify the prediction quality, we introduce a set of evaluation metrics and evaluate the method on our vibrating-plates benchmark. Our method outperforms DeepONets, Fourier Neural Operators and more traditional neural network architectures, and can be used for design optimization. Code, dataset and visualizations: https://github.com/ecker-lab/Learning_Vibrating_Plates | https://openreview.net/pdf/758c40c301f0ebe352c423ba89f3bba34a76814d.pdf |
Conditional Controllable Image Fusion | https://openreview.net/forum?id=RSs4o7CSqe | https://openreview.net/forum?id=RSs4o7CSqe | Bing Cao,Xingxin Xu,Pengfei Zhu,Qilong Wang,Qinghua Hu | NIPS 2024,Poster | Image fusion aims to integrate complementary information from multiple input images acquired through various sources to synthesize a new fused image. Existing methods usually employ distinct constraint designs tailored to specific scenes, forming fixed fusion paradigms. However, this data-driven fusion approach is challenging to deploy in varying scenarios, especially in rapidly changing environments. To address this issue, we propose a conditional controllable fusion (CCF) framework for general image fusion tasks without specific training. Because samples differ dynamically, CCF employs sample-specific fusion constraints in practice. Given the powerful generative capabilities of the denoising diffusion model, we first inject the specific constraints into the pre-trained DDPM as adaptive fusion conditions. The appropriate conditions are dynamically selected so that the fusion process remains responsive to the specific requirements at each reverse diffusion stage. Thus, CCF conditionally calibrates the fused images step by step. Extensive experiments across diverse scenarios validate the effectiveness of our approach in general fusion tasks against competing methods, without additional training. The code is publicly available. | https://openreview.net/pdf/f1f982d9ee8421eb673f1c411176e80d82131bac.pdf |
Test-Time Dynamic Image Fusion | https://openreview.net/forum?id=NkXuAOygXN | https://openreview.net/forum?id=NkXuAOygXN | Bing Cao,Yinan Xia,Yi Ding,Changqing Zhang,Qinghua Hu | NIPS 2024,Poster | The inherent challenge of image fusion lies in capturing the correlation of multi-source images and comprehensively integrating effective information from different sources. Most existing techniques fail to perform dynamic image fusion and notably lack theoretical guarantees, leading to potential deployment risks in this field. Is it possible to conduct dynamic image fusion with a clear theoretical justification? In this paper, we give our solution from a generalization perspective. We reveal the generalized form of image fusion and derive a new test-time dynamic image fusion paradigm, which provably reduces the upper bound of the generalization error. Specifically, we decompose the fused image into multiple components corresponding to its source data. The decomposed components represent the effective information from the source data; thus, the gap between them reflects the *Relative Dominability* (RD) of the uni-source data in constructing the fused image. Theoretically, we prove that the key to reducing generalization error hinges on the negative correlation between the RD-based fusion weight and the uni-source reconstruction loss. Intuitively, RD dynamically highlights the dominant regions of each source and can be naturally converted to the corresponding fusion weight, achieving robust results. Extensive experiments and discussions with in-depth analysis on multiple benchmarks confirm our findings and superiority. Our code is available at https://github.com/Yinan-Xia/TTD. | https://openreview.net/pdf/1d2963bc0e4c52c2e78d03718e913213141f5730.pdf |
Images that Sound: Composing Images and Sounds on a Single Canvas | https://openreview.net/forum?id=aAR0ejrYw1 | https://openreview.net/forum?id=aAR0ejrYw1 | Ziyang Chen,Daniel Geng,Andrew Owens | NIPS 2024,Poster | Spectrograms are 2D representations of sound that look very different from the images found in our visual world. And natural images, when played as spectrograms, make unnatural sounds. In this paper, we show that it is possible to synthesize spectrograms that simultaneously look like natural images and sound like natural audio. We call these visual spectrograms *images that sound*. Our approach is simple and zero-shot, and it leverages pre-trained text-to-image and text-to-spectrogram diffusion models that operate in a shared latent space. During the reverse process, we denoise noisy latents with both the audio and image diffusion models in parallel, resulting in a sample that is likely under both models. Through quantitative evaluations and perceptual studies, we find that our method successfully generates spectrograms that align with a desired audio prompt while also taking on the visual appearance of a desired image prompt. | https://openreview.net/pdf/a0fcc31632a19b9cd61e2362374981c47cdfc196.pdf |
Acceleration Exists! Optimization Problems When Oracle Can Only Compare Objective Function Values | https://openreview.net/forum?id=kxBsNEWB42 | https://openreview.net/forum?id=kxBsNEWB42 | Aleksandr Lobanov,Alexander Gasnikov,Andrey Krasnov | NIPS 2024,Poster | The burgeoning field of black-box optimization frequently encounters challenges due to a limited understanding of the mechanisms of the objective function. To address such problems, in this work we focus on the deterministic concept of an Order Oracle, which only utilizes order access between function values (possibly with some bounded noise), without assuming access to the values themselves. As our theoretical results, we propose a new approach to creating non-accelerated optimization algorithms (obtained by integrating the Order Oracle into existing optimization “tools”) in non-convex, convex, and strongly convex settings that match, up to a logarithmic factor, both SOTA coordinate algorithms with a first-order oracle and SOTA algorithms with an Order Oracle. Moreover, using the proposed approach, _we provide the first accelerated optimization algorithm using the Order Oracle_. In addition, using a different approach, we establish the asymptotic convergence of _the first algorithm with the stochastic Order Oracle concept_. Finally, numerical experiments demonstrate the effectiveness of the proposed algorithms and support our theoretical results. | https://openreview.net/pdf/799c77c17baccd8920d1b8e54eecfe9fe4e2ea10.pdf |
Task-recency bias strikes back: Adapting covariances in Exemplar-Free Class Incremental Learning | https://openreview.net/forum?id=5H4l37IsZ8 | https://openreview.net/forum?id=5H4l37IsZ8 | Grzegorz Rypeść,Sebastian Cygert,Tomasz Trzcinski,Bartłomiej Twardowski | NIPS 2024,Poster | Exemplar-Free Class Incremental Learning (EFCIL) tackles the problem of training a model on a sequence of tasks without access to past data. Existing state-of-the-art methods represent classes as Gaussian distributions in the feature extractor's latent space, enabling Bayes classification or training the classifier by replaying pseudo features. However, we identify two critical issues that compromise their efficacy when the feature extractor is updated on incremental tasks. First, they do not consider that classes' covariance matrices change and must be adapted after each task. Second, they are susceptible to a task-recency bias caused by dimensionality collapse occurring during training. In this work, we propose AdaGauss - a novel method that adapts covariance matrices from task to task and mitigates the task-recency bias owing to the additional anti-collapse loss function. AdaGauss yields state-of-the-art results on popular EFCIL benchmarks and datasets when training from scratch or starting from a pre-trained backbone. | https://openreview.net/pdf/f7b2676a40cc58a65cb73fa91bfabc13a9560c6d.pdf |
Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models | https://openreview.net/forum?id=p50Dyqk0GX | https://openreview.net/forum?id=p50Dyqk0GX | Kaican Li,Weiyan Xie,Yongxiang Huang,Didan Deng,Lanqing HONG,Zhenguo Li,Ricardo Silva,Nevin L. Zhang | NIPS 2024,Poster | Fine-tuning foundation models often compromises their robustness to distribution shifts. To remedy this, most robust fine-tuning methods aim to preserve the pre-trained features. However, not all pre-trained features are robust and those methods are largely indifferent to which ones to preserve. We propose dual risk minimization (DRM), which combines empirical risk minimization with worst-case risk minimization, to better preserve the core features of downstream tasks. In particular, we utilize core-feature descriptions generated by LLMs to induce core-based zero-shot predictions which then serve as proxies to estimate the worst-case risk. DRM balances two crucial aspects of model robustness: expected performance and worst-case performance, establishing a new state of the art on various real-world benchmarks. DRM significantly improves the out-of-distribution performance of CLIP ViT-L/14@336 on ImageNet (75.9$\to$77.1), WILDS-iWildCam (47.1$\to$51.8), and WILDS-FMoW (50.7$\to$53.1); opening up new avenues for robust fine-tuning. Our code is available at https://github.com/vaynexie/DRM. | https://openreview.net/pdf/37aa594599d17d50f06646d1223c501116d57857.pdf |
SpatialRGPT: Grounded Spatial Reasoning in Vision-Language Models | https://openreview.net/forum?id=JKEIYQUSUc | https://openreview.net/forum?id=JKEIYQUSUc | An-Chieh Cheng,Hongxu Yin,Yang Fu,Qiushan Guo,Ruihan Yang,Jan Kautz,Xiaolong Wang,Sifei Liu | NIPS 2024,Poster | Vision Language Models (VLMs) have demonstrated remarkable performance in 2D vision and language tasks. However, their ability to reason about spatial arrangements remains limited. In this work, we introduce Spatial Region GPT (SpatialRGPT) to enhance VLMs’ spatial perception and reasoning capabilities. SpatialRGPT advances VLMs’ spatial understanding through two key innovations: (i) a data curation pipeline that enables effective learning of regional representation from 3D scene graphs, and (ii) a flexible "plugin" module for integrating depth information into the visual encoder of existing VLMs. During inference, when provided with user-specified region proposals, SpatialRGPT can accurately perceive their relative directions and distances. Additionally, we propose SpatialRGPT-Bench, a benchmark with ground-truth 3D annotations encompassing indoor, outdoor, and simulated environments, for evaluating 3D spatial cognition in VLMs. Our results demonstrate that SpatialRGPT significantly enhances performance in spatial reasoning tasks, both with and without local region prompts. The model also exhibits strong generalization capabilities, effectively reasoning about complex spatial relations and functioning as a region-aware dense reward annotator for robotic tasks. Code, dataset, and benchmark are released at https://www.anjiecheng.me/SpatialRGPT. | https://openreview.net/pdf/72621ba2893cdc746a92fa241286edca2ca9aab0.pdf |
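
The CuMo row above describes "co-upcycling": every expert in a sparse Top-K MoE block is initialized from a pre-trained MLP, and an auxiliary loss keeps the expert load balanced. Below is a minimal PyTorch sketch of that initialization and routing; the class names, dimensions, and dense routing loop are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of "co-upcycling": every expert in a sparse Top-K MoE
# block starts as a copy of one pre-trained MLP. Names and dimensions are
# assumptions for illustration, not the CuMo implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)))

class CoUpcycledMoE(nn.Module):
    def __init__(self, pretrained_mlp: MLP, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        # Each expert is initialized as a copy of the pre-trained MLP.
        self.experts = nn.ModuleList(
            copy.deepcopy(pretrained_mlp) for _ in range(num_experts)
        )
        self.gate = nn.Linear(pretrained_mlp.fc1.in_features, num_experts)
        self.top_k = top_k

    def forward(self, x):                            # x: (tokens, dim)
        logits = self.gate(x)                        # (tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                  # simple dense loop for clarity
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        # An auxiliary load-balancing loss (not shown) would encourage the gate
        # to spread tokens evenly across experts during instruction tuning.
        return out

mlp = MLP(dim=1024, hidden=4096)                     # stands in for the pre-trained MLP connector
moe = CoUpcycledMoE(mlp, num_experts=4, top_k=2)
tokens = torch.randn(8, 1024)
print(moe(tokens).shape)                             # torch.Size([8, 1024])
```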
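
The Test-Time Dynamic Image Fusion row ties each source's fusion weight to its Relative Dominability, which should correlate negatively with the uni-source reconstruction loss. The sketch below turns per-source reconstruction-error maps into pixel-wise weights via a softmax over negated errors; the error maps, the temperature, and the averaging-based "reconstruction" in the toy usage are assumptions for illustration, not the TTD code.

```python
# Illustrative sketch: convert per-source reconstruction errors into pixel-wise
# fusion weights that are negatively correlated with the error (higher
# Relative Dominability -> lower reconstruction loss -> larger weight).
import numpy as np

def fusion_weights(errors, temperature: float = 1.0) -> np.ndarray:
    """errors: list of per-source reconstruction-error maps, each of shape (H, W)."""
    stacked = np.stack(errors, axis=0)                # (S, H, W)
    logits = -stacked / temperature                   # low error -> high logit
    logits -= logits.max(axis=0, keepdims=True)       # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=0, keepdims=True)           # weights sum to 1 per pixel

# Toy usage: fuse an "infrared" and a "visible" image with the derived weights.
ir, vis = np.random.rand(64, 64), np.random.rand(64, 64)
naive_fused = (ir + vis) / 2
errs = [np.abs(ir - naive_fused), np.abs(vis - naive_fused)]
w = fusion_weights(errs)
fused = w[0] * ir + w[1] * vis
```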
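
The "Images that Sound" row describes denoising a single latent with a text-to-image model and a text-to-spectrogram model in parallel, so the sample stays likely under both. A hedged sketch of such a combined reverse process is below; the deterministic DDIM-style update, the placeholder noise predictors, and the fixed mixing weight are assumptions, not the released method.

```python
# Hedged sketch of parallel denoising with two diffusion models that share a
# latent space. The two predictors and the noise schedule are placeholders.
import torch

def combined_reverse_process(eps_image, eps_audio, latent, alphas_cumprod, image_weight=0.5):
    """eps_image / eps_audio: callables (latent, t) -> predicted noise."""
    for t in reversed(range(len(alphas_cumprod))):
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        # Average the two noise estimates so the sample remains plausible to both models.
        eps = image_weight * eps_image(latent, t) + (1 - image_weight) * eps_audio(latent, t)
        x0 = (latent - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean latent
        latent = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # deterministic DDIM step
    return latent

# Toy usage with dummy "models" standing in for text-to-image / text-to-spectrogram denoisers.
fake_eps = lambda z, t: torch.zeros_like(z)
schedule = torch.linspace(0.9999, 0.01, steps=50)
z = torch.randn(1, 4, 64, 64)
result = combined_reverse_process(fake_eps, fake_eps, z, schedule)
```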
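
The Dual Risk Minimization row combines empirical risk with a worst-case risk estimated through core-based zero-shot predictions. The schematic loss below adds a consistency term against those zero-shot logits to a standard cross-entropy; the KL-based proxy and the trade-off weight `lam` are illustrative assumptions, not the published DRM objective.

```python
# Schematic dual-risk objective: empirical risk plus a worst-case term that is
# proxied by agreement with core-feature zero-shot predictions. The proxy form
# and the trade-off weight are assumptions for illustration only.
import torch
import torch.nn.functional as F

def dual_risk_loss(logits, labels, core_zero_shot_logits, lam: float = 1.0):
    erm = F.cross_entropy(logits, labels)            # expected (empirical) performance
    # Worst-case proxy: keep the fine-tuned model consistent with predictions
    # induced by LLM-generated core-feature descriptions.
    worst_case = F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(core_zero_shot_logits, dim=-1),
        reduction="batchmean",
    )
    return erm + lam * worst_case

# Toy usage with random logits standing in for model and zero-shot outputs.
logits = torch.randn(4, 10, requires_grad=True)
core_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
dual_risk_loss(logits, labels, core_logits).backward()
```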