AnimateDiff is a method that allows you to create videos using pre-existing Stable Diffusion text-to-image models.

This repository contains https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt converted to the Hugging Face Diffusers format using the following script, which is based on Diffusers' conversion script (available at https://github.com/huggingface/diffusers/blob/main/scripts/convert_animatediff_motion_module_to_diffusers.py):

```python
import argparse

import torch

from diffusers import MotionAdapter


def convert_motion_module(original_state_dict):
    converted_state_dict = {}
    for k, v in original_state_dict.items():
        # Positional encodings are recreated by MotionAdapter, so skip them here.
        if "pos_encoder" in k:
            continue
        # Rename the original AnimateDiff keys to the Diffusers naming scheme.
        converted_state_dict[
            k.replace(".norms.0", ".norm1")
            .replace(".norms.1", ".norm2")
            .replace(".ff_norm", ".norm3")
            .replace(".attention_blocks.0", ".attn1")
            .replace(".attention_blocks.1", ".attn2")
            .replace(".temporal_transformer", "")
        ] = v

    return converted_state_dict


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--ckpt_path", type=str, required=True)
    parser.add_argument("--output_path", type=str, required=True)
    parser.add_argument("--use_motion_mid_block", action="store_true")
    parser.add_argument("--motion_max_seq_length", type=int, default=32)
    parser.add_argument("--save_fp16", action="store_true")

    return parser.parse_args()


if __name__ == "__main__":
    args = get_args()

    state_dict = torch.load(args.ckpt_path, map_location="cpu")
    # Some checkpoints nest the weights under a "state_dict" key.
    if "state_dict" in state_dict:
        state_dict = state_dict["state_dict"]

    conv_state_dict = convert_motion_module(state_dict)
    adapter = MotionAdapter(
        use_motion_mid_block=args.use_motion_mid_block,
        motion_max_seq_length=args.motion_max_seq_length,
        # The SDXL UNet has three down/up blocks instead of the four used by SD 1.5.
        block_out_channels=(320, 640, 1280),
    )
    # strict=False because the position embeddings were skipped during conversion.
    adapter.load_state_dict(conv_state_dict, strict=False)
    adapter.save_pretrained(args.output_path)

    if args.save_fp16:
        adapter.to(torch.float16).save_pretrained(args.output_path, variant="fp16")
```
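With the script saved locally, the module in this repository could be produced with an invocation along these lines (the script filename and paths are illustrative):

```sh
python convert_animatediff_motion_module_to_diffusers.py \
    --ckpt_path mm_sdxl_v10_beta.ckpt \
    --output_path ./mm_sdxl_v10_beta \
    --save_fp16
```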
        

The following example demonstrates how you can use the converted motion module with an existing Stable Diffusion XL text-to-image model. It is a minimal sketch using the Diffusers `AnimateDiffSDXLPipeline`; the local adapter path (`./mm_sdxl_v10_beta`, i.e. the `--output_path` from the conversion above), the base model, the scheduler settings, and the prompt are illustrative assumptions.
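```python
import torch

from diffusers import AnimateDiffSDXLPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the converted motion adapter (assumption: the --output_path used above).
adapter = MotionAdapter.from_pretrained("./mm_sdxl_v10_beta", torch_dtype=torch.float16)

# Any SDXL base checkpoint can serve as the text-to-image backbone.
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    model_id,
    motion_adapter=adapter,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to("cuda")

# Decode the VAE in slices to reduce peak memory across the frames.
pipe.enable_vae_slicing()

output = pipe(
    prompt="a panda surfing in the ocean, realistic, high quality",
    negative_prompt="low quality, worst quality",
    num_inference_steps=20,
    guidance_scale=8,
    width=1024,
    height=1024,
    num_frames=16,
)
export_to_gif(output.frames[0], "animation.gif")
```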
