Model Card for AIdeaLab VideoJP


AIdeaLab VideoJP is a text-to-video model trained on images under CC-BY, CC0, and similarly permissive licenses. AIdeaLab VideoJP is made in Japan. This model is supported by GENIAC (NEDO, METI).

Model Details

Model Description

At AIdeaLab, we develop AI technology through active dialogue with creators, aiming for mutual understanding and cooperation. We strive to solve the challenges creators face and to grow together with them. One such challenge is that some creators and fans want to use video generation but cannot, largely because permission to use many videos for training has not been obtained. To address this issue, we developed AIdeaLab VideoJP.

Features of AIdeaLab VideoJP

  • Principally trained on images for which permission to use them for training has been obtained
  • Understands both Japanese and English text inputs directly
  • Minimizes the risk of exact reproduction of training images
  • Utilizes cutting-edge technology for high quality and efficiency

Misc.

  • Developed by: alfredplpl, maty0505
  • Funded by: AIdeaLab, Inc., NEDO, and METI
  • Shared by: AIdeaLab, Inc.
  • Model type: Rectified Flow Transformer
  • Language(s) (NLP): Japanese, English
  • License: Apache-2.0

Model Sources

  • Repository: TBA
  • Paper: blog

How to Get Started with the Model

  • diffusers
  1. Install the required libraries (the script below also uses torch, torchvision, and tqdm).
pip install torch torchvision transformers diffusers tqdm
  2. Run the following script.
from diffusers.utils import export_to_video
import tqdm
from torchvision.transforms import ToPILImage
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from diffusers import CogVideoXTransformer3DModel, AutoencoderKLCogVideoX

prompt="ใƒใƒฅใƒผใƒชใƒƒใƒ—ใ‚„่œใฎ่Šฑใ€่‰ฒใจใ‚Šใฉใ‚Šใฎ่ŠฑใŒๆžœใฆใ—ใชใ็ถšใ็•‘ใ‚’ๅŸ‹ใ‚ๅฐฝใใ—ใ€ใพใ‚‹ใงใƒ‘ใƒƒใƒใƒฏใƒผใ‚ฏใฎใ‚ˆใ†ใซใ‚ซใƒฉใƒ•ใƒซใซๅฝฉใ‚‹ใ€‚ๆœใฎๆŸ”ใ‚‰ใ‹ใชๅ…‰ใŒ่Šฑใณใ‚‰ใ‚’้€ใ‹ใ—ใ€ๆทกใ„ใ‚ฐใƒฉใƒ‡ใƒผใ‚ทใƒงใƒณใŒๆ˜ ใˆใ‚‹ใ€‚้ขจใซๆบใ‚Œใ‚‹่Šฑใ€…ใ‚’ใ‚นใƒญใƒผใƒขใƒผใ‚ทใƒงใƒณใงๆ‰ใˆใ€่Šฑใณใ‚‰ใŒๅ„ช้›…ใซ่ˆžใ†ๅงฟใ‚’ๆ˜ ็”ปใฎใ‚ˆใ†ใชๆผ”ๅ‡บใงๆ’ฎๅฝฑใ€‚่ƒŒๆ™ฏใซใฏ้ ใใซ้€ฃใชใ‚‹ๅฑฑไธฆใฟใ‚„้’ใ„็ฉบใ€ๆตฎใ‹ใถ็™ฝใ„้›ฒใŒ็ซ‹ไฝ“ๆ„Ÿใ‚’ๅผ•ใ็ซ‹ใฆใ‚‹ใ€‚"
device="cuda"
shape=(1,48//4,16,256//8,256//8)
sample_N=25
torch_dtype=torch.bfloat16
eps=1
cfg=2.5

tokenizer = AutoTokenizer.from_pretrained(
    "llm-jp/llm-jp-3-1.8b"
)
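
# The prompt is encoded with the llm-jp/llm-jp-3-1.8b language model; its last
# hidden states are used as the text conditioning for the video transformer.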

text_encoder = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-1.8b",
    torch_dtype=torch_dtype
)
text_encoder=text_encoder.to(device)

text_inputs = tokenizer(
    prompt,
    padding="max_length",
    max_length=512,
    truncation=True,
    add_special_tokens=True,
    return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True, attention_mask=text_inputs.attention_mask.to(device)).hidden_states[-1]
prompt_embeds = prompt_embeds.to(dtype=torch_dtype, device=device)

null_text_inputs = tokenizer(
    "",
    padding="max_length",
    max_length=512,
    truncation=True,
    add_special_tokens=True,
    return_tensors="pt",
)
null_text_input_ids = null_text_inputs.input_ids
null_prompt_embeds = text_encoder(null_text_input_ids.to(device), output_hidden_states=True, attention_mask=null_text_inputs.attention_mask.to(device)).hidden_states[-1]
null_prompt_embeds = null_prompt_embeds.to(dtype=torch_dtype, device=device)

# Free VRAM used by the text encoder
del text_encoder
torch.cuda.empty_cache()

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "aidealab/AIdeaLab-VideoJP",
    torch_dtype=torch_dtype
)
transformer=transformer.to(device)
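
# The VAE from THUDM/CogVideoX-2b decodes the latents back into RGB frames;
# enabling slicing and tiling reduces peak VRAM usage during decoding.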

vae = AutoencoderKLCogVideoX.from_pretrained(
    "THUDM/CogVideoX-2b",
    subfolder="vae"
)
vae=vae.to(dtype=torch_dtype, device=device)
vae.enable_slicing()
vae.enable_tiling()

# Euler discrete sampler with classifier-free guidance (CFG):
# start from pure noise at t=0 and integrate the predicted velocity toward data at t=1
z0 = torch.randn(shape, device=device)
latents = z0.detach().clone().to(torch_dtype)

dt = 1.0 / sample_N
with torch.no_grad():
    for i in tqdm.tqdm(range(sample_N)):
        num_t = i / sample_N
        t = torch.ones(shape[0], device=device) * num_t
        pseudo_t=(1000-eps)*(1-t)+eps # map t in [0,1) onto the transformer's decreasing timestep scale
        positive_conditional = transformer(hidden_states=latents, timestep=pseudo_t, encoder_hidden_states=prompt_embeds, image_rotary_emb=None)
        null_conditional = transformer(hidden_states=latents, timestep=pseudo_t, encoder_hidden_states=null_prompt_embeds, image_rotary_emb=None)
        pred = null_conditional.sample+cfg*(positive_conditional.sample-null_conditional.sample) # classifier-free guidance
        latents = latents.detach().clone() + dt * pred.detach().clone() # Euler step

    # Free VRAM used by the transformer
    del transformer
    torch.cuda.empty_cache()

    latents = latents / vae.config.scaling_factor
    latents = latents.permute(0, 2, 1, 3, 4) # [B, F, C, H, W] -> [B, C, F, H, W] for the VAE
    x=vae.decode(latents).sample
    x = x / 2 + 0.5
    x = x.clamp(0,1)
    x=x.permute(0, 2, 1, 3, 4).to(torch.float32) # [B, C, F, H, W] -> [B, F, C, H, W]
    print(x.shape)
    x=[ToPILImage()(frame) for frame in x[0]]

export_to_video(x,"output.mp4",fps=24)
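
The model also accepts English text, so you can replace the Japanese prompt string above with an English description (for example, "Colorful tulip fields swaying in the morning breeze, filmed in slow motion"; this prompt is only an illustrative placeholder) and rerun the script.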

Uses

Direct Use

  • Assistance in creating illustrations, manga, and anime
    • For both commercial and non-commercial purposes
    • Communication with creators when making requests
  • Commercial provision of video generation services
    • Please be cautious when handling generated content
  • Self-expression
    • Using this AI to express "your" uniqueness
  • Research and development
    • Fine-tuning (also known as additional training) such as LoRA
    • Merging with other models (see the sketch after this list)
    • Examining the performance of this model using metrics like FID
  • Education
    • Graduation projects for art school or vocational school students
    • University students' graduation theses or project assignments
    • Teachers demonstrating the current state of video generation AI
  • Uses described in the Hugging Face Community
    • Please ask questions in Japanese or English
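
As a research example, the sketch below shows naive weight merging with another checkpoint. It assumes a second transformer with the same CogVideoX-shaped architecture; "other/compatible-checkpoint" is a placeholder repository name and the interpolation weight is arbitrary.

import torch
from diffusers import CogVideoXTransformer3DModel

# Load this model's transformer and a hypothetical compatible checkpoint
base = CogVideoXTransformer3DModel.from_pretrained("aidealab/AIdeaLab-VideoJP", torch_dtype=torch.bfloat16)
other = CogVideoXTransformer3DModel.from_pretrained("other/compatible-checkpoint", torch_dtype=torch.bfloat16)  # placeholder

# Linearly interpolate the two sets of weights
alpha = 0.5
other_state = other.state_dict()
merged_state = {k: alpha * v + (1.0 - alpha) * other_state[k] for k, v in base.state_dict().items()}

base.load_state_dict(merged_state)
base.save_pretrained("merged-transformer")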

Out-of-Scope Use

  • Generating misinformation or disinformation

Bias, Risks, and Limitations

  • Cannot generate anime-style video

Training Details

Training Data

We used the following datasets to train the transformer:

Technical Specifications

Model Architecture and Objective

Model Architecture

CogVideoX-based architecture
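
To inspect the CogVideoX-style transformer configuration (number of layers, attention heads, hidden size, and so on), the checkpoint can be loaded directly; this is only a quick inspection sketch.

from diffusers import CogVideoXTransformer3DModel

# Load the transformer and print its architecture configuration
transformer = CogVideoXTransformer3DModel.from_pretrained("aidealab/AIdeaLab-VideoJP")
print(transformer.config)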

Objective

Rectified Flow
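
A minimal sketch of the rectified flow objective is shown below, assuming the same convention as the sampler above (noise at t=0, data at t=1); it is illustrative, not the actual training code.

import torch
import torch.nn.functional as F

def rectified_flow_loss(model, x_data, timestep_scale=1000):
    # Sample Gaussian noise and a random interpolation time t in [0, 1)
    x_noise = torch.randn_like(x_data)
    t = torch.rand(x_data.shape[0], device=x_data.device)
    t_ = t.view(-1, *([1] * (x_data.dim() - 1)))

    # Straight-line interpolation between noise (t=0) and data (t=1)
    x_t = t_ * x_data + (1.0 - t_) * x_noise

    # Regress the constant velocity pointing from noise to data
    target = x_data - x_noise
    pred = model(x_t, timestep_scale * (1.0 - t))  # timestep convention matching the sampler
    return F.mse_loss(pred, target)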

Software

Finetrainers-based code

Model Card Contact

Acknowledgement

We appreciate the video providers; we are standing on the shoulders of giants.
