
AVATAR: What’s Making That Sound Right Now? Video-centric Audio-Visual Localization

AVATAR stands for Audio-Visual localizAtion benchmark for a spatio-TemporAl peRspective in video.

AVATAR is a benchmark dataset designed to evaluate video-centric audio-visual localization (AVL) in complex and dynamic real-world scenarios.
Unlike previous benchmarks that rely on static image-level annotations and assume simplified conditions, AVATAR offers high-resolution temporal annotations over entire videos. It supports four challenging evaluation settings:
Single-sound, Mixed-sound, Multi-entity, and Off-screen.

📄 Paper (ICCV 2025)
🌐 Project Website
📁 Code & Data Viewer


📦 Dataset Structure

The dataset consists of the following files:

| File | Description |
|------|-------------|
| `video.zip` | ~3.8 GB of `.mp4` video clips |
| `metadata.zip` | ~1.6 GB of annotations (bounding boxes, segmentation masks, scenario tags) |
| `vggsound_10k.txt` | List of 10,000 training video IDs from VGGSound |
| `code/` | AVATAR benchmark evaluation code |
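As an illustration, the archives can be unpacked and the training ID list read with a few lines of Python. This is a minimal sketch assuming the files sit in the current directory; adjust the paths to wherever you downloaded them.

```python
import zipfile
from pathlib import Path

# Assumed local paths; change to wherever the archives were downloaded.
out_dir = Path("avatar_data")
out_dir.mkdir(exist_ok=True)

for archive in ("video.zip", "metadata.zip"):
    if Path(archive).exists():  # skip archives not yet downloaded
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out_dir / Path(archive).stem)

# vggsound_10k.txt lists one VGGSound training video ID per line.
ids_file = Path("vggsound_10k.txt")
train_ids = ids_file.read_text().splitlines() if ids_file.exists() else []
```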

Each annotated frame includes:

  • Visual bounding boxes and segmentation masks for sound-emitting objects
  • Audio-visual category labels aligned to the active sound source at each timestamp
  • Instance-level scenario labels (e.g., Off-screen, Mixed-sound)

📊 Dataset Statistics

The tables below summarize the scale of AVATAR and its per-scenario composition.

| Type | Count |
|------|-------|
| Videos | 5,000 |
| Frames | 24,266 |
| Off-screen | 670 |

| Scenario Type | Instances |
|---------------|-----------|
| Total | 28,516 |
| Single-sound | 15,372 |
| Multi-entity | 9,322 |
| Mixed-sound | 3,822 |
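As a quick sanity check, the per-scenario counts above sum to the reported total:

```python
# Per-scenario instance counts from the table above.
counts = {"Single-sound": 15_372, "Multi-entity": 9_322, "Mixed-sound": 3_822}
total = sum(counts.values())
print(total)  # 28516, matching the reported total instance count
```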

🧪 Scenarios and Tasks

AVATAR supports fine-grained scenario-wise evaluation of AVL models:

  1. Single-sound: One sound-emitting instance per frame
  2. Mixed-sound: Multiple overlapping sound sources (same or different categories)
  3. Multi-entity: One sounding instance among multiple visually similar ones
  4. Off-screen: No visible sound source within the frame

πŸ” You can evaluate your model using:

  • Consensus IoU (CIoU)
  • AUC
  • Pixel-level TN% (for Off-screen)
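The following is a minimal sketch of how these metrics are commonly computed in audio-visual localization work. The definitions here (success ratio at IoU ≥ 0.5 for CIoU, area under the success-ratio curve for AUC, and fraction of correctly negative pixels for Off-screen TN%) follow standard practice; the exact protocol is defined by the evaluation code in `code/`.

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def ciou_and_auc(ious, success_thresh=0.5):
    """CIoU: fraction of samples with IoU >= success_thresh.
    AUC: area under the success-ratio curve as the IoU threshold
    sweeps from 0 to 1 (trapezoidal rule)."""
    ious = np.asarray(ious, dtype=float)
    thresholds = np.linspace(0.0, 1.0, 21)
    success = np.array([(ious >= t).mean() for t in thresholds])
    ciou = float((ious >= success_thresh).mean())
    auc = float(((success[:-1] + success[1:]) / 2 * np.diff(thresholds)).sum())
    return ciou, auc

def offscreen_tn_rate(pred: np.ndarray) -> float:
    """Pixel-level TN% for Off-screen frames: every pixel should be
    predicted non-sounding, so TN% is the fraction of negative pixels."""
    return float(1.0 - np.asarray(pred, dtype=float).mean())
```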

🧩 Audio-Visual Category Diversity

AVATAR spans 80 audio-visual categories covering a wide range of everyday domains, including:

  • Human activities (e.g., talking, singing)
  • Music performances (e.g., violin, drum, piano)
  • Animal sounds (e.g., dog barking, bird chirping)
  • Vehicles (e.g., car engine, helicopter)
  • Tools and machines (e.g., chainsaw, blender)

Such diversity enables a comprehensive evaluation of model generalizability across varied audio-visual contexts.


πŸ“ Example Metadata Format

{
  "video_id": str,
  "frame_number": int,
  "annotations": [
    { // instance 1 (e.g., man)
      "segmentation": [ // (x, y) annotated RLE format
        [float, float],
        ...
      ],
      "bbox": [float, float, float, float], // (l, t, w, h)
      "scenario": str, // "Single-sound", "Mixed-sound", "Multi-entity", "Off-screen"
      "audio_visual_category": str
    },
    { // instance 2 (e.g., piano)
      ...
    },
    ...
  ]
}
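A minimal sketch of loading one such record with Python's `json` module. The record below is a made-up illustration of the schema (the video ID, coordinates, and category value are invented for the example; real records come from `metadata.zip`):

```python
import json

# Toy record following the schema above; all values are illustrative.
record = json.loads("""
{
  "video_id": "example_video",
  "frame_number": 42,
  "annotations": [
    {
      "segmentation": [[10.0, 20.0], [30.0, 20.0], [30.0, 40.0]],
      "bbox": [10.0, 20.0, 20.0, 20.0],
      "scenario": "Single-sound",
      "audio_visual_category": "playing piano"
    }
  ]
}
""")

for ann in record["annotations"]:
    l, t, w, h = ann["bbox"]
    # Convert (left, top, width, height) to corner coordinates.
    x1, y1, x2, y2 = l, t, l + w, t + h
    print(ann["scenario"], ann["audio_visual_category"], (x1, y1, x2, y2))
    # prints: Single-sound playing piano (10.0, 20.0, 30.0, 40.0)
```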