---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 1M<n<10M
task_categories:
- image-to-video
- text-to-video
- text-to-image
- image-to-image
pretty_name: TIP-I2V
library_name: datasets
dataset_info:
features:
- name: UUID
dtype: string
- name: Text_Prompt
dtype: string
- name: Image_Prompt
dtype: image
- name: Subject
dtype: string
- name: Timestamp
dtype: string
- name: Text_NSFW
dtype: float32
- name: Image_NSFW
dtype: string
splits:
- name: Full
num_bytes: 13440652664.125
num_examples: 1701935
- name: Subset
num_bytes: 790710630
num_examples: 100000
- name: Eval
num_bytes: 78258893
num_examples: 10000
download_size: 27500759907
dataset_size: 27750274851.25
configs:
- config_name: default
data_files:
- split: Full
path: data/Full-*
- split: Subset
path: data/Subset-*
- split: Eval
path: data/Eval-*
tags:
- prompt
- image-to-video
- text-to-video
- visual-generation
- video-generation
---

# Summary
This is the dataset proposed in our paper TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation.

TIP-I2V is the first dataset comprising over 1.70 million unique, user-provided text and image prompts. Besides the prompts, TIP-I2V also includes the corresponding videos generated by five state-of-the-art image-to-video models (Pika, Stable Video Diffusion, Open-Sora, I2VGen-XL, and CogVideoX-5B). TIP-I2V contributes to the development of better and safer image-to-video models.
# Abstract
Video generation models are revolutionizing content creation, with image-to-video models drawing increasing attention due to their enhanced controllability, visual consistency, and practical applications. However, despite their popularity, these models rely on user-provided text and image prompts, and there is currently no dedicated dataset for studying these prompts. In this paper, we introduce TIP-I2V, the first large-scale dataset of over 1.70 million unique user-provided Text and Image Prompts specifically for Image-to-Video generation. Additionally, we provide the corresponding videos generated by five state-of-the-art image-to-video models. We begin by outlining the time-consuming and costly process of curating this large-scale dataset. Next, we compare TIP-I2V to two popular prompt datasets, VidProM (text-to-video) and DiffusionDB (text-to-image), highlighting differences in both basic and semantic information. This dataset enables advancements in image-to-video research. For instance, to develop better models, researchers can use the prompts in TIP-I2V to analyze user preferences and evaluate the multi-dimensional performance of their trained models; and to enhance model safety, they may focus on addressing the misinformation issue caused by image-to-video models. The new research inspired by TIP-I2V and its differences from existing datasets emphasize the importance of a specialized image-to-video prompt dataset. The project is available at this https URL.
# Datapoint

# Statistics
# Download

For users in mainland China, try setting `export HF_ENDPOINT=https://hf-mirror.com` to download the datasets successfully.
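The same mirror can also be selected from within Python; the environment variable should be set before `huggingface_hub` or `datasets` is imported, since the endpoint is read when those libraries load. A minimal sketch:

```python
import os

# Must run before importing huggingface_hub / datasets
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from datasets import load_dataset

ds = load_dataset("WenhaoWang/TIP-I2V", split='Eval', streaming=True)
```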
## Download the text and (compressed) image prompts with related information

```python
from datasets import load_dataset
import pandas as pd

# Full (text and compressed image) prompts: ~13.4G
ds = load_dataset("WenhaoWang/TIP-I2V", split='Full', streaming=True)

# 100k subset (text and compressed image) prompts: ~0.8G
ds = load_dataset("WenhaoWang/TIP-I2V", split='Subset', streaming=True)

# 10k TIP-Eval (text and compressed image) prompts: ~0.08G
ds = load_dataset("WenhaoWang/TIP-I2V", split='Eval', streaming=True)

# Convert the chosen split to Pandas format (may be slow for the larger splits)
df = pd.DataFrame(ds)
```
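Because the splits can be streamed, you can inspect a single datapoint without materializing anything. The sketch below prints a few of the fields listed in the dataset info above and saves the decoded image prompt (the `datasets` Image feature decodes to a PIL image by default):

```python
from datasets import load_dataset

ds = load_dataset("WenhaoWang/TIP-I2V", split='Eval', streaming=True)
sample = next(iter(ds))  # fetch only the first record

print(sample["UUID"])         # unique prompt identifier
print(sample["Text_Prompt"])  # the user-written text prompt
print(sample["Subject"])      # subject category assigned to the prompt
print(sample["Text_NSFW"])    # NSFW score of the text prompt

sample["Image_Prompt"].save("example_image_prompt.png")  # PIL image
```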
## Download the embeddings for text and image prompts

```python
from huggingface_hub import hf_hub_download

# Embeddings for full text prompts (~21G) and image prompts (~3.5G)
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Full_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Full_Image_Embedding.parquet", repo_type="dataset")

# Embeddings for 100k subset text prompts (~1.2G) and image prompts (~0.2G)
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Subset_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Subset_Image_Embedding.parquet", repo_type="dataset")

# Embeddings for 10k TIP-Eval text prompts (~0.1G) and image prompts (~0.02G)
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Eval_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Eval_Image_Embedding.parquet", repo_type="dataset")
```
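The downloaded parquet files can be read with pandas. As a sketch only (the column names below are an assumption; inspect `df.columns` for the actual schema), a cosine-similarity lookup over the TIP-Eval text embeddings could look like:

```python
import numpy as np
import pandas as pd

# Use the local path returned by hf_hub_download above
df = pd.read_parquet("Eval_Text_Embedding.parquet")
print(df.columns)  # confirm the actual schema before relying on it

# Assumption: first column is the UUID, second column the embedding vector
ids = df.iloc[:, 0].to_numpy()
emb = np.stack(df.iloc[:, 1].to_numpy()).astype(np.float32)

# L2-normalize so dot products are cosine similarities
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Five prompts most similar to the first one (the query itself ranks first)
scores = emb @ emb[0]
print(ids[np.argsort(-scores)[:5]])
```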
## Download uncompressed image prompts

```python
from huggingface_hub import hf_hub_download

# Full uncompressed image prompts: ~1T
for i in range(1, 52):
    hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename=f"image_prompt_tar/image_prompt_{i}.tar", repo_type="dataset")

# 100k subset uncompressed image prompts: ~69.6G
for i in range(1, 3):
    hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename=f"sub_image_prompt_tar/sub_image_prompt_{i}.tar", repo_type="dataset")

# 10k TIP-Eval uncompressed image prompts: ~6.5G
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_image_prompt_tar/eval_image_prompt.tar", repo_type="dataset")
```
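The archives are plain tar files, so the standard library suffices to unpack them. A minimal sketch for the TIP-Eval images:

```python
import tarfile
from huggingface_hub import hf_hub_download

# hf_hub_download returns the path of the cached file
path = hf_hub_download(
    repo_id="WenhaoWang/TIP-I2V",
    filename="eval_image_prompt_tar/eval_image_prompt.tar",
    repo_type="dataset",
)

# Unpack into a local directory (~6.5G once extracted)
with tarfile.open(path) as tar:
    tar.extractall("eval_image_prompts")
```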
## Download generated videos

```python
from huggingface_hub import hf_hub_download

# Full videos generated by Pika: ~1T
for i in range(1, 52):
    hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename=f"pika_videos_tar/pika_videos_{i}.tar", repo_type="dataset")

# 100k subset videos generated by Pika (~57.6G), Stable Video Diffusion (~38.9G),
# Open-Sora (~47.2G), I2VGen-XL (~54.4G), and CogVideoX-5B (~36.7G)
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/pika_videos_subset_1.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/pika_videos_subset_2.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/svd_videos_subset.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/opensora_videos_subset.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/i2vgenxl_videos_subset_1.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/i2vgenxl_videos_subset_2.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/cog_videos_subset.tar", repo_type="dataset")

# 10k TIP-Eval videos generated by Pika (~5.8G), Stable Video Diffusion (~3.9G),
# Open-Sora (~4.7G), I2VGen-XL (~5.4G), and CogVideoX-5B (~3.6G)
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/pika_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/svd_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/opensora_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/i2vgenxl_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/cog_videos_eval.tar", repo_type="dataset")
```
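To relate a generated video back to its prompts, the natural key is the UUID. The sketch below assumes each archive member is named `<UUID>.mp4` (an assumption; list the tar contents first to confirm the actual layout):

```python
import tarfile
from datasets import load_dataset

# Build a UUID -> text prompt lookup from the 10k Eval split
# (this streams the whole split once, so it takes a while)
ds = load_dataset("WenhaoWang/TIP-I2V", split='Eval', streaming=True)
prompts = {row["UUID"]: row["Text_Prompt"] for row in ds}

# Assumption: members are named "<UUID>.mp4"; adjust if the layout differs
with tarfile.open("cog_videos_eval.tar") as tar:
    for member in tar.getmembers()[:5]:
        uuid = member.name.rsplit("/", 1)[-1].removesuffix(".mp4")
        print(member.name, "->", prompts.get(uuid, "<no matching prompt>"))
```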
# Comparison with VidProM and DiffusionDB

Click the WizMap (TIP-I2V vs. VidProM) and WizMap (TIP-I2V vs. DiffusionDB) visualizations (allow about 5 seconds for loading) for an interactive view of our 1.70 million prompts.
# License

The prompts and videos in TIP-I2V are licensed under the CC BY-NC 4.0 license.
# Curators

TIP-I2V is created by Wenhao Wang and Professor Yi Yang. We retain the Pika logos in our release to (1) acknowledge Pika's contribution and (2) comply with its attribution requirements.
Citation
@article{wang2024tipi2v,
title={TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation},
author={Wang, Wenhao and Yang, Yi},
booktitle={arXiv preprint arXiv:2411.04709},
year={2024}
}
# Contact

If you have any questions, feel free to contact Wenhao Wang ([email protected]).