valiantcat LoRA for Wan2.1 14B I2V 720p

Overview

This LoRA is trained on the Wan2.1 14B I2V 720p model. Given an input image of two people, it animates them into a professional-looking martial-arts fight.

Features

  • Transforms an image of two people into a video of them starting to fight
  • Trained on the Wan2.1 14B 720p I2V base model
  • Consistent results across different subject types
  • Simple prompt structure that's easy to adapt

Example Prompts

Prompt (two people)
dajia,这是一个两个人对打得视频,镜头拉远,保持人物不变,这两个人开始对打,两个人不断旋转,拳打脚踢,姿势像是传统武术, 十分专业,两个人相互比试,打的有来有回,突然左边的人腾空跳起,保持飞在空中用腿不停踢另一个人上半身, 男人被踢得一边用手阻挡女人的飞踢一边后退,镜头跟随,保持背景和人物的服装不改变.

Prompt (a couple)
dajia,这是一对情侣对打得视频,镜头拉远,保持人物不变,这两个人开始对打,两个人不断旋转,拳打脚踢,姿势像是传统武术, 十分专业,两个人相互比试,打的有来有回,突然左边的人腾空跳起,保持飞在空中用腿不停踢另一个人上半身, 男人被踢得一边用手阻挡女人的飞踢一边后退,镜头跟随,保持背景和人物的服装不改变。

Prompt (two close friends)
dajia,这是一对闺蜜对打得视频,镜头拉远,保持人物不变,这两个人开始对打,两个人不断旋转,拳打脚踢,姿势像是传统武术, 十分专业,两个人相互比试,打的有来有回,突然右边的人腾空跳起,保持飞在空中用腿不停踢另一个人上半身, 右边的女人被踢得一边用手阻挡女人的飞踢一边后退,镜头跟随,保持背景和人物的服装不改变。

English translation (first prompt): "dajia, this is a video of two people fighting each other. The camera pulls back; keep the characters unchanged. The two start to fight, spinning continuously, punching and kicking in poses that resemble traditional martial arts, very professional. They spar with each other, trading blows back and forth. Suddenly the person on the left leaps into the air and, staying airborne, keeps kicking the other person's upper body with their legs; the man retreats while blocking the woman's flying kicks with his hands. The camera follows; keep the background and the characters' clothing unchanged." The second and third prompts differ only in the subjects (a couple, two close friends) and, in the third, in which side launches the flying kick.

Model File and Inference Workflow


Using with Diffusers

pip install git+https://github.com/huggingface/diffusers.git

import torch
from diffusers.utils import export_to_video, load_image
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from transformers import CLIPVisionModel
import numpy as np

# Base model: the image encoder and VAE load in float32 for quality;
# the transformer runs in bfloat16
model_id = "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Apply this fight LoRA on top of the base pipeline
pipe.load_lora_weights("valiantcat/Wan2.1-Fight-LoRA")

# For low-VRAM environments, call this instead of pipe.to("cuda")
# pipe.enable_model_cpu_offload()

# The prompt must begin with the trigger word "dajia"
# (English translation in the Example Prompts section above)
prompt = "dajia,这是一个两个人对打得视频,镜头拉远,保持人物不变,这两个人开始对打,两个人不断旋转,拳打脚踢,姿势像是传统武术, 十分专业,两个人相互比试,打的有来有回,突然左边的人腾空跳起,保持飞在空中用腿不停踢另一个人上半身, 男人被踢得一边用手阻挡女人的飞踢一边后退,镜头跟随,保持背景和人物的服装不改变."

# Load the input image (resolve/main fetches the raw file; blob/main returns an HTML page)
image = load_image("https://huggingface.co/valiantcat/Wan2.1-Fight-LoRA/resolve/main/result/test.jpg")

# Resize so both dimensions are multiples of the VAE/patch granularity
# while keeping the aspect ratio and staying near the target pixel budget
max_area = 512 * 768
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))

# Generate 81 frames (about 5 seconds at 16 fps) and export as MP4
output = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    num_frames=81,
    guidance_scale=5.0,
    num_inference_steps=25,
).frames[0]
export_to_video(output, "output.mp4", fps=16)

Recommended Settings

  • LoRA Strength: 1.0
  • Embedded Guidance Scale: 6.0
  • Flow Shift: 5.0
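
If you are running the Diffusers pipeline above rather than a graph-based workflow, both settings can be applied explicitly. A minimal sketch: the adapter name "fight" is an arbitrary label we choose, and the flow_shift override on UniPCMultistepScheduler is assumed from the Diffusers Wan examples.

from diffusers import UniPCMultistepScheduler

# Flow Shift 5.0: override the shift on the flow-matching UniPC scheduler
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, flow_shift=5.0
)

# LoRA Strength 1.0: name the adapter at load time, then set its weight
pipe.load_lora_weights("valiantcat/Wan2.1-Fight-LoRA", adapter_name="fight")
pipe.set_adapters(["fight"], adapter_weights=[1.0])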

Trigger Words

The key trigger phrase is: dajia

Prompt Template

For best results, use this prompt structure:

dajia,这是[object]对打得视频,镜头拉远,保持人物不变,这两个人开始对打,两个人不断旋转,拳打脚踢,姿势像是传统武术,十分专业,两个人相互比试,打的有来有回,突然左边的人腾空跳起,保持飞在空中用腿不停踢另一个人上半身,男人被踢得一边用手阻挡女人的飞踢一边后退,镜头跟随,保持背景和人物的服装不改变。

Simply replace [object] with the pair of characters you want to see fight, for example 一对情侣 (a couple).
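
For scripted use, the template can also be filled in programmatically. A minimal sketch in Python, using the subjects from the example prompts above:

# Fight-prompt template with a {subject} slot in place of [object]
TEMPLATE = (
    "dajia,这是{subject}对打得视频,镜头拉远,保持人物不变,这两个人开始对打,"
    "两个人不断旋转,拳打脚踢,姿势像是传统武术,十分专业,两个人相互比试,"
    "打的有来有回,突然左边的人腾空跳起,保持飞在空中用腿不停踢另一个人上半身,"
    "男人被踢得一边用手阻挡女人的飞踢一边后退,镜头跟随,保持背景和人物的服装不改变。"
)

# Subjects from the example prompts: two people, a couple, two close friends
for subject in ["两个人", "一对情侣", "一对闺蜜"]:
    print(TEMPLATE.format(subject=subject))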
