Dataset Card for DAVE πŸ‘¨πŸΏβ€πŸ”¬: Diagnostic benchmark for Audio Visual Evaluation

DAVE is a diagnostic benchmark for evaluating audio-visual models, ensuring both modalities are required and providing fine-grained error analysis to reveal specific failures.

Dataset Details

Dataset Description

DAVE (Diagnostic Audio-Visual Evaluation) is a benchmark dataset designed to systematically evaluate audio-visual models by addressing key limitations in existing datasets. Unlike prior benchmarks that often allow correct predictions using visual data alone, DAVE ensures that both audio and visual modalities are necessary for successful inference. It also provides fine-grained evaluation categories, allowing researchers to diagnose whether model errors stem from visual perception, audio interpretation, or audio-visual misalignment. DAVE is built to uncover specific issues in multimodal models and promote more targeted and robust improvements in audio-visual understanding.

Overview of DAVE

  • Curated by: Gorjan Radevski and Teodora Popordanoska
  • Language(s) (NLP): English
  • License: MIT

Dataset Sources

  • Paper: arXiv:2503.09321

Uses

The DAVE dataset is intended as a diagnostic benchmark for evaluating multimodal models that process both audio and visual inputs and produce text output. It is specifically designed to:

  • Assess model performance where both audio and visual information are required, avoiding the visual bias present in many existing benchmarks.
  • Disentangle model errors across four core capabilities: action recognition, temporal understanding, audio classification, and audio-visual alignment.
  • Guide model improvement by evaluating across different tasks (multimodal synchronization, sound absence detection, sound discrimination) and providing granular, per-task results.

Researchers can use DAVE to test and compare audio-visual models, refine multimodal architectures, or develop new methods for audio-visual alignment. It is not intended for training large-scale models but for targeted evaluation and analysis. You can load and use the dataset as follows:

from datasets import load_dataset
import random

ego4d_dataset = load_dataset("gorjanradevski/dave", split="ego4d", keep_in_memory=True, trust_remote_code=True)
# or
epic_dataset = load_dataset("gorjanradevski/dave", split="epic", keep_in_memory=True, trust_remote_code=True)

# Perform inference with an audio-visual model as follows:
sample = random.choice(epic_dataset)
# Obtain the audio/sound class that is overlaid on the video
sound_effect = sample["audio_class"]
# Obtain the narrated actions for the events that take place in the video
options = sample["raw_choices_multimodal"]
# Get the path to the video in which the selected event is overlaid with the audio
video_path = sample["video_with_overlayed_audio_path"]
# Obtain the ground truth index
ground_truth_index = sample["overlayed_event_index"]
# Obtain the ground truth option
ground_truth = options[ground_truth_index]

# Construct the prompt
prompt = f"""What is the person in the video doing when {sound_effect} is heard? Answer using one of the following options:

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}

Answer only with the letter corresponding to the choice."""

# Load the video and perform inference with any model that can process audio and video input
# ...
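
To score a full split, a minimal evaluation loop might look like the sketch below: it rebuilds the prompt for each sample, queries a model, and parses the returned option letter. The function query_model is a hypothetical placeholder for whatever audio-visual model you evaluate (it is not part of this dataset or the datasets library); the letter parsing and accuracy bookkeeping are likewise only one possible choice.

import re

LETTERS = ["A", "B", "C", "D"]

def parse_letter(reply: str):
    # Extract the first standalone option letter from the model's reply.
    match = re.search(r"\b([ABCD])\b", reply.upper())
    return match.group(1) if match else None

correct = 0
for sample in epic_dataset:
    options = sample["raw_choices_multimodal"]
    prompt = (
        f"What is the person in the video doing when {sample['audio_class']} is heard? "
        "Answer using one of the following options:\n\n"
        + "\n".join(f"({letter}) {option}" for letter, option in zip(LETTERS, options))
        + "\n\nAnswer only with the letter corresponding to the choice."
    )
    # query_model is hypothetical: replace it with your own audio-visual model call.
    reply = query_model(sample["video_with_overlayed_audio_path"], prompt)
    if parse_letter(reply) == LETTERS[sample["overlayed_event_index"]]:
        correct += 1

print(f"Multimodal accuracy: {correct / len(epic_dataset):.3f}")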

Dataset Structure

The DAVE dataset consists of two main splits, ego4d and epic, corresponding to curated samples from the Ego4D and EPIC-KITCHENS datasets, respectively. Every example is structured to facilitate diagnostic evaluation of audio-visual models across multiple axes: visual, audio, temporal, and multimodal reasoning.

Data Fields

Each example contains the following fields (a short inspection snippet follows the list):

  • compressed_video_path: Path to a compressed version of the raw video, i.e., the unedited video containing four events with the original audio.

  • overlayed_event_index: Index of the event that is overlaid with an unrelated audio sound.

  • events: Dictionary containing metadata about the events in the video:

    • start, end, duration: Timestamps and durations.
    • narration: Natural language descriptions.
    • action: Structured action annotations.
  • event_video_path: Path to the clip extracted for the overlaid event.

  • audio_class: The audio class overlaid in this instance (e.g., "crow", "dog bark", "door knock").

  • video_with_overlayed_audio_path: Path to the video with the audio overlaid on the specified event.

  • silent_video_path: Path to the video without any audio.

  • overlayed_audio_path: Path to the standalone audio clip extracted from the video with the overlaid audio.

  • video_id: Identifier for the video.

  • participant_id: Identifier for the participant (present for EPIC-KITCHENS samples; None for Ego4D).

  • type: Video type or category (e.g., "regular", "none_of_the_above_incorrect_audio", "none_of_the_above_no_sound"), indicating the task type the sample belongs to.

  • raw_choices_*: Fields corresponding to multiple-choice options across various diagnostic sub-tasks:

    • raw_choices_simple_audio_classification
    • raw_choices_overlayed_full_audio_classification
    • raw_choices_video_segment
    • raw_choices_temporal_video
    • raw_choices_multimodal
    • raw_choices_silent_video
    • raw_choices_audio
    • raw_choices_text_only
    • raw_choices_pipeline_event_classification
  • correct_temporal_order: Ground-truth ordering of events (for temporal evaluation tasks).
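
To make the structure concrete, the snippet below prints a handful of these fields for a single example. It is a small sketch that relies only on the field names listed above; the exact nesting inside events follows the metadata description given earlier.

from datasets import load_dataset

ds = load_dataset("gorjanradevski/dave", split="epic", trust_remote_code=True)
sample = ds[0]

# Event metadata: start/end timestamps, durations, narrations, and structured actions
print(sample["events"])
# Which event carries the overlaid audio, and what that audio class is
print(sample["overlayed_event_index"], sample["audio_class"])
# Multiple-choice options for two of the diagnostic sub-tasks
print(sample["raw_choices_multimodal"])
print(sample["raw_choices_temporal_video"])
# Ground-truth event ordering used by the temporal tasks
print(sample["correct_temporal_order"])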

Splits

  • epic: Samples sourced and annotated from EPIC-KITCHENS.
  • ego4d: Samples sourced and annotated from Ego4D.

Each split is structured identically in terms of fields, allowing for consistent benchmarking across domains.
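
Because the two splits share the same schema, a quick sanity check along the lines below (a sketch, not part of the dataset tooling) confirms that the same evaluation code can run on either split.

from datasets import load_dataset

ego4d = load_dataset("gorjanradevski/dave", split="ego4d", trust_remote_code=True)
epic = load_dataset("gorjanradevski/dave", split="epic", trust_remote_code=True)

# Both splits expose the same columns, so any per-sample evaluation loop is reusable.
assert set(ego4d.features) == set(epic.features)
print(sorted(ego4d.features))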

Bias, Risks, and Limitations

Since our dataset is built on top of the EPIC-KITCHENS and Ego4D datasets, we inherit all risks associated with these two datasets.

Citation

@article{radevski2025dave,
  title={DAVE: Diagnostic benchmark for Audio Visual Evaluation},
  author={Radevski, Gorjan and Popordanoska, Teodora and Blaschko, Matthew B and Tuytelaars, Tinne},
  journal={arXiv preprint arXiv:2503.09321},
  year={2025}
}

Dataset Card Contact

Reach out to either of the authors at: [email protected]
