---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- Audio
- Video
- Text
- audio-visual
- audio-visual reasoning
size_categories:
- 1K<n<10K
task_categories:
- question-answering
configs:
- config_name: default
  data_files: savvy_bench.jsonl
---
# SAVVY-Bench

This repository contains **SAVVY-Bench**, the first benchmark for dynamic 3D spatial reasoning in audio-visual environments, introduced in [SAVVY: Spatial Awareness via Audio-Visual LLMs through Seeing and Hearing](https://arxiv.org/abs/2506.05414).
## SAVVY-Bench Dataset

The benchmark dataset is also available on Hugging Face:

```python
from datasets import load_dataset

dataset = load_dataset("ZijunCui/SAVVY-Bench")
```
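As a quick sanity check after loading, you can print the splits and peek at a record. This is a minimal sketch; the split and field names come from `savvy_bench.jsonl`, so inspect the printed schema rather than relying on any particular keys:

```python
# List the available splits and their sizes
print(dataset)

# Peek at one record; the exact fields depend on savvy_bench.jsonl's schema
first_split = next(iter(dataset))
print(dataset[first_split][0])
```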
This repository provides both the benchmark data and tools to process the underlying Aria Everyday Activities videos.
## Setup Environment

### Step 1: Create Conda Environment

```bash
# Create and activate conda environment (Python 3.10 recommended)
conda create -n savvy-bench python=3.10 -y
conda activate savvy-bench

# Install minimal dependencies for AEA processing
pip install requests tqdm numpy opencv-python imageio open3d matplotlib tyro pillow soundfile natsort

# Install Project Aria Tools (ESSENTIAL for VRS file processing) and the VRS CLI
pip install 'projectaria-tools[all]'
conda install -c conda-forge vrs

# Install PyTorch (required by the EgoLifter rectification script)
# CPU version (sufficient for most users):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# OR GPU version if you have an NVIDIA GPU (optional):
# conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
```
### Step 2: Clone Repository with Submodules

```bash
# Clone the SAVVY-Bench repository
git clone --recursive https://github.com/ZijunCui/SAVVY-Bench.git
cd SAVVY-Bench

# The --recursive flag automatically initializes the EgoLifter submodule.
# If you need to update submodules later, run:
# git submodule update --init --recursive
```

If you already cloned without submodules:

```bash
git submodule update --init --recursive
```
### Step 3: Verify Installation

```bash
# Test essential dependencies
python -c "from projectaria_tools.core import mps; print('✓ ProjectAria tools')"
which vrs >/dev/null && echo "✓ VRS command available" || echo "✗ VRS command not found"
python -c "import torch; print('✓ PyTorch:', torch.__version__)"
python -c "import cv2, numpy, open3d; print('✓ OpenCV, NumPy, Open3D')"
python -c "import requests; print('✓ Download tools ready')"
```
## Download and Process Aria Everyday Activities Videos

### Step 1: Access the Dataset

- Visit the [Aria Everyday Activities Dataset](https://www.projectaria.com/datasets/aea/) page
- Follow the instructions there to request access to the dataset
- Download the `Aria Everyday Activities Dataset.json` file and place it in the repository root
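Once the file is in place, you can quickly confirm it parses. This sketch makes no assumption about the file's schema and only reports its top-level structure:

```python
import json

# The access file presumably holds per-sequence download links; its exact
# schema is not assumed here, so we only report the top-level structure
with open("Aria Everyday Activities Dataset.json") as f:
    meta = json.load(f)

if isinstance(meta, dict):
    print(f"dict with {len(meta)} top-level keys, e.g. {list(meta)[:3]}")
else:
    print(f"list with {len(meta)} entries")
```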
### Step 2: Automatic Download and Undistortion

```bash
# Activate environment
conda activate savvy-bench

# Run the automated processing script
chmod +x aea.sh
./aea.sh
```
This script will:

- Download 52 AEA video sequences (`.vrs` format) and SLAM data, with resume capability
- Extract RGB images and camera poses from the VRS files using `egolifter/scripts/process_project_aria_3dgs.py`
- Remove fisheye distortion and rectify images using `egolifter/scripts/rectify_aria.py`
- Extract audio from the VRS files
- Convert the undistorted frames to MP4 videos (a minimal sketch of this step follows this list)
- Save all processed data in `aea/aea_processed/`
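For reference, the frames-to-MP4 step is straightforward to reproduce on a single scene. This is a minimal sketch using OpenCV and natsort (both installed above), not the actual implementation in `aea.sh`; the frame extension and the 30 fps rate are assumptions:

```python
from pathlib import Path

import cv2
from natsort import natsorted

def frames_to_mp4(image_dir: str, out_path: str, fps: float = 30.0) -> None:
    """Encode a directory of undistorted frames into an MP4 video."""
    frames = natsorted(Path(image_dir).glob("*.png"))  # frame extension is an assumption
    if not frames:
        raise FileNotFoundError(f"no frames found in {image_dir}")
    height, width = cv2.imread(str(frames[0])).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for frame in frames:
        writer.write(cv2.imread(str(frame)))
    writer.release()

# Hypothetical usage on one processed scene:
# frames_to_mp4("aea/aea_processed/loc1_script2_seq1_rec1/images",
#               "aea/aea_processed/loc1_script2_seq1_rec1/video/loc1_script2_seq1_rec1.mp4")
```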
### Step 3: Verify Processing

After completion, you should have:

- Raw data in `aea/aea_data/` (52 scenes)
- Processed data in `aea/aea_processed/` with the following structure:

```
aea/aea_processed/
├── loc1_script2_seq1_rec1/
│   ├── audio/
│   │   └── loc1_script2_seq1_rec1.wav   # Audio extracted from the VRS file
│   ├── video/
│   │   └── loc1_script2_seq1_rec1.mp4   # From undistorted frames
│   ├── images/                          # Undistorted frames
│   └── transforms.json                  # Camera poses for 3D reconstruction
├── loc1_script2_seq1_rec2/
│   └── ...
└── ...
```
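To verify a processed scene programmatically, a short sketch like the following loads the audio track and counts camera poses. The scene name is hypothetical, and the `frames` key assumes a common NeRF-style `transforms.json` layout; check your generated file for the actual keys:

```python
import json
from pathlib import Path

import soundfile as sf

scene = Path("aea/aea_processed/loc1_script2_seq1_rec1")  # hypothetical scene

# Report the duration and sample rate of the extracted audio
audio, sr = sf.read(str(scene / "audio" / f"{scene.name}.wav"))
print(f"audio: {len(audio) / sr:.1f} s at {sr} Hz")

# Count camera poses; "frames" assumes a NeRF-style transforms.json
with open(scene / "transforms.json") as f:
    transforms = json.load(f)
print(f"poses: {len(transforms.get('frames', []))}")
```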
## Citation

```bibtex
@article{chen2025savvy,
  title={SAVVY: Spatial Awareness via Audio-Visual LLMs through Seeing and Hearing},
  author={Mingfei Chen and Zijun Cui and Xiulong Liu and Jinlin Xiang and Caleb Zheng and Jingyuan Li and Eli Shlizerman},
  year={2025},
  eprint={2506.05414},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```