This LoRA is trained on the Wan2.1 14B I2V 720p model.
pip install git+https://github.com/huggingface/diffusers.git
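The snippet below also imports transformers and numpy, and CPU offload relies on accelerate; if these are not already in your environment, install them as well (this dependency list is an addition here, with no pinned versions):
pip install transformers accelerate numpy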
import torch
from diffusers.utils import export_to_video, load_image
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from transformers import CLIPVisionModel
import numpy as np
model_id = "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.load_lora_weights("valiantcat/Wan2.1-Fight-LoRA")
pipe.enable_model_cpu_offload() #for low-vram environments
prompt = "dajia,这是一个两个人对打得视频,镜头拉远,保持人物不变,这两个人开始对打,两个人不断旋转,拳打脚踢,姿势像是传统武术, 十分专业,两个人相互比试,打的有来有回,突然左边的人腾空跳起,保持飞在空中用腿不停踢另一个人上半身, 男人被踢得一边用手阻挡女人的飞踢一边后退,镜头跟随,保持背景和人物的服装不改变."
image = load_image("https://huggingface.co/valiantcat/Wan2.1-Fight-LoRA/resolve/main/result/test.jpg")
max_area = 512 * 768
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))
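# Worked example (illustrative; assumes this checkpoint's VAE spatial scale factor is 8 and
# transformer patch_size[1] is 2, so mod_value = 16): for a 1280x720 input image,
# aspect_ratio = 720 / 1280 = 0.5625 and max_area = 512 * 768 = 393216, so
# height = round(sqrt(393216 * 0.5625)) // 16 * 16 = 470 // 16 * 16 = 464 and
# width  = round(sqrt(393216 / 0.5625)) // 16 * 16 = 836 // 16 * 16 = 832,
# i.e. the image is snapped to the largest resolution <= max_area whose sides are multiples of 16.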
output = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    num_frames=81,
    guidance_scale=5.0,
    num_inference_steps=25,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
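If the fight motion is too strong or too weak, the LoRA's influence can be scaled. This is a minimal sketch rather than part of the original recipe: the adapter name "fight" and the 0.8 weight are illustrative choices, and it relies on the standard diffusers adapter APIs (load_lora_weights with adapter_name, then set_adapters).
# Optional: load the LoRA under an explicit adapter name so its strength can be tuned.
pipe.load_lora_weights("valiantcat/Wan2.1-Fight-LoRA", adapter_name="fight")
# Scale the LoRA's influence (1.0 = full strength); lower values soften the fight motion.
pipe.set_adapters(["fight"], adapter_weights=[0.8])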
The key trigger phrase is: dajia
For best results, use this prompt structure:
Simply replace [object] with whatever you want these two people to fight with!
Base model: Wan-AI/Wan2.1-I2V-14B-720P