VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing (ICLR 2025)
GitHub (⭐ star our repository)
If you find this dataset helpful, please feel free to leave a star ⭐️⭐️⭐️ and cite our paper.
Summary
This is the dataset proposed in our paper VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing (ICLR 2025).
VideoGrain is a zero-shot method for class-level, instance-level, and part-level video editing.
- Multi-grained Video Editing
- class-level: editing objects within the same class (previous SOTA methods were limited to this level)
- instance-level: editing each individual instance into a distinct object
- part-level: adding new objects or modifying attributes of existing objects at the part level
- Training-Free
- Does not require any training/fine-tuning
- One-Prompt Multi-region Control & deep investigation of cross-/self-attention
- modulating cross-attention for multi-region control (visualizations available)
- modulating self-attention for feature decoupling (clustering visualizations available)
Directory
data/
├── 2_cars
│ ├── 2_cars # original video frames
│ └── layout_masks # layout mask subfolders (e.g., bg, left, right)
├── 2_cats
│ ├── 2_cats
│ └── layout_masks
├── 2_monkeys
├── badminton
├── boxer-punching
├── car
├── cat_flower
├── man_text_message
├── run_two_man
├── soap-box
├── spin-ball
├── tennis
└── wolf
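Given this layout, a small helper can pair each sequence's frames with its per-region layout masks. The sketch below is illustrative, not part of the dataset: `load_sequence` is a hypothetical helper that assumes the directory structure shown above, and the region subfolder names (e.g. bg, left, right) vary from sequence to sequence.

```python
from pathlib import Path

def load_sequence(root, name):
    """Collect frame paths and per-region mask paths for one sequence.

    Assumes the layout above: data/<name>/<name>/ holds the original
    frames, and data/<name>/layout_masks/<region>/ holds one mask per
    frame for each region (e.g. bg, left, right).
    """
    seq = Path(root) / name
    # Sorted so frame i lines up with mask i in every region folder.
    frames = sorted((seq / name).glob("*"))
    masks = {
        region.name: sorted(region.glob("*"))
        for region in sorted((seq / "layout_masks").iterdir())
        if region.is_dir()
    }
    return frames, masks
```

For example, `load_sequence("data", "2_cars")` would return the frame paths of the 2_cars clip together with a dict mapping each region name to its mask paths.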
Download
Automatic
First, install the datasets library:
pip install datasets
Then the dataset can be downloaded automatically with:
from datasets import load_dataset

# Downloads the dataset from the Hugging Face Hub and caches it locally
dataset = load_dataset("XiangpengYang/VideoGrain-dataset")
License
This dataset is licensed under the CC BY-NC 4.0 license.
Citation
@article{yang2025videograin,
title={VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing},
author={Yang, Xiangpeng and Zhu, Linchao and Fan, Hehe and Yang, Yi},
journal={arXiv preprint arXiv:2502.17258},
year={2025}
}
Contact
If you have any questions, feel free to contact Xiangpeng Yang ([email protected]).