# 🎬 MOVi-MC-AC

*An Open Dataset for Multi-Object Video with Multiple Cameras and Amodal Content*
## Dataset Details

### What is MOVi-MC-AC?
- **MOVi** → Multi-Object Video
- **MC** → Multiple Cameras
- **AC** → Amodal Content
MOVi-MC-AC is the first dataset to include ground-truth amodal content annotations for occluded objects. With ~5.8 million instances, it is the largest dataset in the synthetic amodal literature to date.
This dataset contains simulated video scenes of generic 3D objects thrown together, colliding with and bouncing off one another.
- All data is generated with the open-source dataset generator Kubric.
- Specifically, we modified the MOVi pipeline to render each scene from six cameras and to provide unoccluded views of every object (a rough sketch follows below).
- The 3D objects used are the default assets from Google's Kubric/MOVi engine.
- They come from the publicly available Google Scanned Objects dataset.
*MOVi-MC-AC sample*
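For a rough idea of how a multi-camera setup can be scripted in Kubric, here is a minimal sketch modeled on Kubric's public hello-world example. This is not the actual MOVi-MC-AC generation script; the camera positions, camera motion, and the omitted physics simulation are all simplified assumptions:

```python
import kubric as kb
from kubric.renderer.blender import Blender as KubricRenderer

# One simulated scene, rendered from several viewpoints (24 frames at 12 fps).
scene = kb.Scene(resolution=(256, 256), frame_start=1, frame_end=24, frame_rate=12)
scene += kb.Cube(name="floor", scale=(10, 10, 0.1), position=(0, 0, -0.1))
scene += kb.Sphere(name="ball", scale=1, position=(0, 0, 1.0))
scene += kb.DirectionalLight(name="sun", position=(-1, -0.5, 3),
                             look_at=(0, 0, 0), intensity=1.5)

renderer = KubricRenderer(scene)

# Hypothetical six-camera rig; MOVi-MC-AC additionally randomizes each
# camera's motion (static, linear motion, or linear motion with panning).
positions = [(3, -1, 4), (-3, -1, 4), (0, 3, 4), (0, -3, 4), (4, 0, 2), (-4, 0, 2)]
for i, position in enumerate(positions):
    scene.camera = kb.PerspectiveCamera(name=f"camera_{i}", position=position,
                                        look_at=(0, 0, 1))
    frames = renderer.render()  # per-frame rgba/segmentation/depth arrays
    kb.write_image_dict(frames, f"output/camera_{i}")
```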
## Abstract, Introduction, and Enabled Tasks
### Abstract
Multi-Object Video with Multiple Cameras and Amodal Content (MOVi-MC-AC) is a dataset built using the open-source dataset generator Kubric. It simulates cluttered video scenes of generic household objects and makes two new contributions to the object detection, tracking, and segmentation literature. First, multiple-camera (MC) settings, in which objects can be identified and tracked across distinct camera angles, are rare in both synthetic and real-world video; we introduce a new complexity to synthetic video by providing consistent object IDs for detections and segmentations across frames and across multiple cameras, each with unique parameters and motion patterns, within a single scene. Second, amodal content completion is a reconstructive task in which models predict the appearance of target objects through occlusions. While some datasets in the amodal segmentation literature provide amodal detection, tracking, and segmentation labels, no dataset has previously provided ground-truth amodal content annotations. We provide ~5.8 million amodal segmentation masks alongside ground-truth amodal content, which until now had to be generated with slow cut-and-paste data schemes.
### Introduction
The ability to conceive of whole objects from glimpses of their parts is studied in gestalt psychology (cite gestalt psychology). Object detections and segmentations in video can change rapidly as objects move or become occluded over time. Tracking, video object segmentation, video object retrieval, and video inpainting can all benefit from consistent object representations that maintain a cohesive view of each object, invariant to changes in appearance or perspective. Amodal segmentation and content completion are vital in real-world machine-learning applications that require consistent object understanding and object permanence through complex video, such as robotics and autonomous driving. Monocular image amodal segmentation models rely on object priors to estimate the size and shape of occluded objects. Recent monocular video amodal segmentation models use context from temporally distant video features to estimate amodal segmentation masks across time. So far, no existing research has investigated using multi-view images and video to build consistent object representations for amodal segmentation. We extend this research area by introducing multi-view video amodal content completion, a new task in which object appearance is estimated through occlusion using both temporal context and multi-view information. We release the first dataset to contain ground-truth amodal segmentation masks for all objects in every scene, as well as ground-truth amodal content (an "x-ray view") of every object.
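To make the task concrete, here is a minimal sketch of how a predicted amodal view could be scored against the ground-truth unoccluded content that MOVi-MC-AC provides. The array names, shapes, and value ranges are illustrative assumptions, not a prescribed evaluation protocol:

```python
import numpy as np

def amodal_content_error(pred_rgb: np.ndarray,
                         gt_unoccluded_rgb: np.ndarray,
                         amodal_mask: np.ndarray) -> float:
    """Mean absolute error of predicted appearance over the full object extent.

    pred_rgb, gt_unoccluded_rgb: (H, W, 3) floats in [0, 1].
    amodal_mask: (H, W) bool, True over the object's full (unoccluded)
    extent, including pixels hidden behind occluders.
    """
    return float(np.abs(pred_rgb - gt_unoccluded_rgb)[amodal_mask].mean())
```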
### Enabled Tasks
MOVi-MC-AC enables a variety of computer-vision tasks, including:
- Image Segmentation
- Video Object Segmentation
- Object Detection & Classification
- Object Tracking
- Object Re-ID Across Views (multi-view)
- Object Re-ID Across Videos
All of these tasks come with amodal ground truth, making them useful first steps toward more complex computer-vision goals, including:
- Object-based Retrieval
- Video Event Detection
- Amodal Object Detection
- Amodal Video Object Segmentation
- Amodal Content Completion (Improvement on CMU Amodal Content task)
- Consistent Object Reconstruction and Tracking (Improvement on LOCI?)
From the MOVi dataset engine, we also have access to object names and meta-classes/categories, enabling natural-language inference on video (see the sketch after this list):
- Grounded/referring Detection and Tracking
- Grounded/referring Segmentation and VOS
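As an illustration, referring prompts could be assembled from the per-scene instance metadata. This is only a sketch; the file name and the JSON field names (`id`, `category`, `asset_id`) are assumptions to be checked against the actual metadata schema:

```python
import json

# Hypothetical path to a scene's object-instances metadata (.json).
with open("scene_0001/instances.json") as f:
    instances = json.load(f)

# One natural-language phrase per tracked object, usable as a grounding prompt.
prompts = {
    obj["id"]: f"the {obj['category']} instance '{obj['asset_id']}'"
    for obj in instances
}
```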
## Dataset Statistics
Dataset | MOVi-MC-AC (Ours) | MOVi-Amodal (Amazon) | SAIL-VOS 3D | SAIL-VOS | COCOA | COCOA-cls | D2S | DYCE |
---|---|---|---|---|---|---|---|---|
Image or Video | Video | Video | Video | Video | Image | Image | Image | Image |
Synthetic or Real | Synthetic | Synthetic | Synthetic | Synthetic | Real | Real | Real | Synthetic |
Number of Video Scenes | 2041 | 838 | 203 | 201 | - | - | - | - |
Number of Scene Images | 293,904 | 20,112 | 237,611 | 111,654 | 5,073 | 3,499 | 5,600 | 5,500 |
Number of Classes | 1,033 | 930 | 178 | 162 | - | 80 | 60 | 79 |
Number of Instances | 5,899,104 | 295,176 | 3,460,213 | 1,896,296 | 46,314 | 10,562 | 28,720 | 85,975 |
Number of Occluded Instances | 4,089,229 | 247,565 | - | 1,653,980 | 28,106 | 5,175 | 16,337 | 70,766 |
Average Occlusion Rate | 45.2% | 52.0% | - | 56.3% | 18.8% | 10.7% | 15.0% | 27.7% |
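The occlusion statistics above follow directly from the modal and amodal masks. A minimal sketch of the computation, assuming boolean (H, W) arrays (mask loading is omitted):

```python
import numpy as np

def occlusion_rate(modal_mask: np.ndarray, amodal_mask: np.ndarray) -> float:
    """Fraction of an object's full extent that is hidden by occluders."""
    amodal_area = amodal_mask.sum()
    if amodal_area == 0:
        return 0.0
    visible_area = (modal_mask & amodal_mask).sum()
    return float(1.0 - visible_area / amodal_area)

# An instance counts as occluded when any of its amodal extent is hidden.
def is_occluded(modal_mask: np.ndarray, amodal_mask: np.ndarray) -> bool:
    return occlusion_rate(modal_mask, amodal_mask) > 0.0
```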
## Provided Modalities
Modality | MOVi-MC-AC (Ours) | MOVi-Amodal (Amazon) | SAIL-VOS 3D | SAIL-VOS | COCOA | COCOA-cls | D2S | DYCE |
---|---|---|---|---|---|---|---|---|
Scene-Level RGB Frames | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Modal Object Masks | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Modal Object RGB Content | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Scene-Level (Modal) Depth Masks | Yes | Yes | Yes | Yes | No | No | No | No |
Amodal Object Masks | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Amodal Object RGB Content | Yes | No | No | No | No | No | No | No |
Amodal Object Depth Masks | Yes | No | No | No | No | No | No | No |
Multiple Cameras (multi-view) | Yes | No | No | No | No | No | No | No |
Scene-object descriptors (instance re-id) | Yes | Yes (implicitly) | No | No | No | No | No | No |
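All modalities are stored in standard image formats (RGB and masks as .png, depth as .tiff), so they load with common tooling. A sketch, where the directory layout and file names are illustrative assumptions:

```python
import numpy as np
from PIL import Image

# Scene-level modalities for one camera and frame (paths are hypothetical).
rgb   = np.asarray(Image.open("scene_0001/camera_0/rgba_00000.png"))          # (H, W, 3|4) uint8
seg   = np.asarray(Image.open("scene_0001/camera_0/segmentation_00000.png"))  # (H, W) instance ids
depth = np.asarray(Image.open("scene_0001/camera_0/depth_00000.tiff"))        # (H, W) float depth

# Object-level amodal (unoccluded) modalities for one object in the same frame.
amodal_rgb  = np.asarray(Image.open("scene_0001/camera_0/object_3/rgba_00000.png"))
amodal_mask = np.asarray(Image.open("scene_0001/camera_0/object_3/segmentation_00000.png")) > 0
```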
## Dataset Sample Details
```
Dataset (2,041 scenes as .tar.gz)
├── Train set: 1,651 scenes
└── Test set: 390 scenes

Each Scene
├── Cameras (6)
│   └── Camera Types (random per camera)
│       ├── Static
│       ├── Linear Motion
│       └── Linear Motion + Panning to middle
├── Frames (24)
│   └── Captured at 12 fps (2 seconds, object collisions/interactions)
├── Objects (1–40 per scene)
│   ├── Static objects (random 1–20)
│   ├── Dynamic objects (random 1–20)
│   └── Selection depends on train/test set (some objects exclusive to test)
└── Annotations
    ├── Scene-Level
    │   ├── rgb content (.png)
    │   ├── segmentation mask (.png)
    │   └── depth mask (.tiff)
    ├── Object-Level (per object)
    │   ├── unoccluded rgb content (.png)
    │   ├── unoccluded segmentation mask (.png)
    │   └── unoccluded depth mask (.tiff)
    ├── Collisions metadata (.json)
    └── Scene & object instances metadata (.json)

Total Files (~20 million)
└── Calculation:
    └── 2,041 scenes × 6 cameras × 24 frames × ~21 objects/instances × 3 image files ≈ 19,397,664 files
```
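The file count at the bottom of the tree checks out arithmetically if each camera-frame carries one scene-level annotation set plus roughly 21 object-level sets, three image files each (reading the total as including the scene-level set is our assumption):

```python
scenes, cameras, frames = 2041, 6, 24
files_per_set = 3        # rgb (.png) + segmentation (.png) + depth (.tiff)
sets_per_frame = 1 + 21  # scene-level set + ~21 object-level sets (assumption)

total = scenes * cameras * frames * sets_per_frame * files_per_set
print(f"{total:,}")  # 19,397,664
```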
## Citation
```bibtex
@misc{MOVi-MC-AC,
  title = {MOVi-MC-AC: An Open Dataset for Multi-Object Video with Multiple Cameras and Amodal Content},
  author = {Amar Saini and Alexander Moore},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/Amar-S/MOVi-MC-AC}},
  journal = {HuggingFace Repository},
}
```
## License
This dataset is released under the Creative Commons Attribution 4.0 International Public License (CC BY 4.0).
See `CC Attribution 4.0 Intl Public License.pdf` for more information.
## Notice
Copyright (c) 2025, Lawrence Livermore National Security, LLC. Produced at the Lawrence Livermore National Laboratory. Written by Amar Saini ([email protected]). Release number LLNL-DATA-2006933. All rights reserved.
This work was produced under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
This work was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.
Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or Lawrence Livermore National Security, LLC.
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.