
AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis

Susan Liang, Chao Huang, Yapeng Tian, Anurag Kumar, Chenliang Xu

RWAVS Dataset

We provide the Real-World Audio-Visual Scene (RWAVS) Dataset.

  1. The dataset can be downloaded from this Hugging Face repository.

  2. After you download the dataset, decompress RWAVS_Release.zip:

    unzip RWAVS_Release.zip
    cd release/
    
  3. The data is organized in the following directory structure:

    ./release/
    ├── 1
    │   ├── binaural_syn_re.wav
    │   ├── feats_train.pkl
    │   ├── feats_val.pkl
    │   ├── frames
    │   │   ├── 00001.png
    │   │   ├── ...
    │   │   └── 00616.png
    │   ├── source_syn_re.wav
    │   ├── transforms_scale_train.json
    │   ├── transforms_scale_val.json
    │   ├── transforms_train.json
    │   └── transforms_val.json
    ├── ...
    ├── 13
    └── position.json
    

    The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide:

    • transforms_train.json: camera poses for training.
    • transforms_val.json: camera poses for evaluation. We split the data into train and val subsets, with 80% of the data for training and the rest for evaluation.
    • transforms_scale_train.json: normalized camera poses for training. We scale the 3D coordinates to $[-1, 1]^3$.
    • transforms_scale_val.json: normalized camera poses for evaluation.
    • frames: corresponding video frames for each camera pose.
    • source_syn_re.wav: single-channel audio emitted by the sound source.
    • binaural_syn_re.wav: two-channel audio captured by the binaural microphone. We synchronize source_syn_re.wav and binaural_syn_re.wav and resample them to $22050$ Hz.
    • feats_train.pkl: extracted vision and depth features at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose and then use a pre-trained encoder to extract features from the rendered images.
    • feats_val.pkl: extracted vision and depth features at each camera pose for inference.
    • position.json: normalized 3D coordinates of the sound source.

    Please note that some frames may not have corresponding camera poses because COLMAP fails to estimate camera parameters for them.
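Below is a minimal Python sketch of how one scene might be loaded after extraction, based only on the file descriptions above. The internal layout of the transforms_*.json and feats_*.pkl files is not documented in this card, so the snippet simply loads and inspects them; the scene index, variable names, and the assumption of 16-bit PCM WAV files are illustrative, not part of the dataset specification.

    import json
    import pickle
    import wave

    import numpy as np

    scene_dir = "release/1"  # any scene index from 1 to 13

    # Camera poses normalized to [-1, 1]^3 (transforms_scale_*.json described above).
    with open(f"{scene_dir}/transforms_scale_train.json") as f:
        poses_train = json.load(f)
    with open(f"{scene_dir}/transforms_scale_val.json") as f:
        poses_val = json.load(f)

    # Pre-extracted vision and depth features; inspect the pickle before use,
    # since its exact structure is not described in this card.
    with open(f"{scene_dir}/feats_train.pkl", "rb") as f:
        feats_train = pickle.load(f)
    print(type(feats_train))

    # Normalized 3D coordinates of the sound source for every scene.
    with open("release/position.json") as f:
        positions = json.load(f)

    # Source (1-channel) and binaural (2-channel) audio, both resampled to 22050 Hz.
    def read_wav(path):
        with wave.open(path, "rb") as w:
            assert w.getframerate() == 22050   # stated sample rate
            assert w.getsampwidth() == 2       # assuming 16-bit PCM
            raw = w.readframes(w.getnframes())
            audio = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
            return audio.reshape(-1, w.getnchannels()).T  # (channels, samples)

    source = read_wav(f"{scene_dir}/source_syn_re.wav")      # shape (1, N)
    binaural = read_wav(f"{scene_dir}/binaural_syn_re.wav")  # shape (2, N)
    print(source.shape, binaural.shape)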

Citation

@inproceedings{liang23avnerf,
 author = {Liang, Susan and Huang, Chao and Tian, Yapeng and Kumar, Anurag and Xu, Chenliang},
 booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
 title = {AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis},
 year = {2023}
}

Contact

If you have any comments or questions, feel free to contact Susan Liang and Chao Huang.
