Video Object Segmentation-Aware Audio Generation
Abstract
SAGANet, a multimodal generative model, enhances audio generation by using object-level segmentation maps, improving control and fidelity in professional Foley workflows.
Existing multimodal audio generation models often lack precise user control, which limits their applicability in professional Foley workflows. In particular, these models attend to the entire video and offer no precise way to prioritize a specific object within a scene, so they often generate unnecessary background sounds or focus on the wrong objects. To address this gap, we introduce the novel task of video object segmentation-aware audio generation, which explicitly conditions sound synthesis on object-level segmentation maps. We present SAGANet, a new multimodal generative model that enables controllable audio generation by leveraging visual segmentation masks along with video and textual cues. Our model gives users fine-grained, visually localized control over audio generation. To support this task and further research on segmentation-aware Foley, we introduce Segmented Music Solos, a benchmark dataset of musical instrument performance videos with segmentation information. Our method demonstrates substantial improvements over current state-of-the-art methods and sets a new standard for controllable, high-fidelity Foley synthesis. Code, samples, and Segmented Music Solos are available at https://saganet.notion.site
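The abstract describes conditioning sound synthesis on object-level segmentation masks alongside video and text. As a rough illustration of what such conditioning could look like (this is a hypothetical sketch, not the authors' implementation; all names and shapes below are assumptions), one option is to pool per-frame visual features under the segmentation mask and pass the pooled tokens to the audio generator via cross-attention:

```python
# Hypothetical sketch: mask-pooled visual conditioning for audio generation.
# Not SAGANet's actual code; encoder, text branch, and generator are omitted.
import torch
import torch.nn as nn

class MaskedVisualConditioner(nn.Module):
    """Pools patch features inside a segmentation mask into per-frame condition tokens."""
    def __init__(self, vis_dim: int, cond_dim: int):
        super().__init__()
        self.proj = nn.Linear(vis_dim, cond_dim)

    def forward(self, vis_feats: torch.Tensor, seg_mask: torch.Tensor) -> torch.Tensor:
        # vis_feats: (B, T, P, D) patch features per frame (e.g., from a video encoder)
        # seg_mask:  (B, T, P) binary mask over patches for the target object
        mask = seg_mask.unsqueeze(-1)                                   # (B, T, P, 1)
        pooled = (vis_feats * mask).sum(dim=2) / mask.sum(dim=2).clamp(min=1e-6)
        return self.proj(pooled)                                        # (B, T, cond_dim)

# Usage with random stand-ins for encoder features and a mask:
B, T, P, D, C = 2, 8, 196, 768, 512
conditioner = MaskedVisualConditioner(D, C)
vis = torch.randn(B, T, P, D)
mask = (torch.rand(B, T, P) > 0.5).float()
cond_tokens = conditioner(vis, mask)   # per-frame tokens for cross-attention
print(cond_tokens.shape)               # torch.Size([2, 8, 512])
```

The masked pooling restricts the visual evidence to the selected object, which is one plausible way to obtain the localized control the paper describes.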
Community
Project page: https://saganet.notion.site
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- StereoFoley: Object-Aware Stereo Audio Generation from Video (2025)
- LD-LAudio-V1: Video-to-Long-Form-Audio Generation Extension with Dual Lightweight Adapters (2025)
- Efficient Video-to-Audio Generation via Multiple Foundation Models Mapper (2025)
- VAInpaint: Zero-Shot Video-Audio inpainting framework with LLMs-driven Module (2025)
- UniVerse-1: Unified Audio-Video Generation via Stitching of Experts (2025)
- SSG-Dit: A Spatial Signal Guided Framework for Controllable Video Generation (2025)
- HunyuanVideo-Foley: Multimodal Diffusion with Representation Alignment for High-Fidelity Foley Audio Generation (2025)