---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
tags:
- text-to-video
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    [origami] a crafted grasshopper moving on the jungle floor, dead leaves
    all around, huge trees in the background.
  output:
    url: videos/1742855529510.mp4
- text: >-
    [origami] a crafted grasshopper moving on the jungle floor, dead leaves
    all around, huge trees in the background.
  output:
    url: videos/1742861776754.mp4
- text: >-
    [origami] a monkey swinging on a branch of a tree, huge monkeys around
    them.
  output:
    url: videos/1742862552292.mp4
---
# Origami LoRA for WanVideo2.1
<Gallery />
## Trigger words

You should use `origami` to trigger the video generation.
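The gallery captions place the trigger inside square brackets at the start of the prompt; a minimal sketch following that pattern (the scene description itself is made up for illustration):

```python
# Hypothetical prompt that follows the bracketed trigger style used in the gallery captions
prompt = "[origami] a paper crane gliding over a calm lake at sunrise"
```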
## Using with Diffusers

Wan 2.1 support in diffusers is recent, so install it from source:

```bash
pip install git+https://github.com/huggingface/diffusers.git
```
```python
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

# Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# The Wan VAE is loaded in float32 for numerical stability; the rest of the pipeline runs in bfloat16
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

flow_shift = 5.0  # 5.0 for 720P, 3.0 for 480P (the example below renders at 480x720, so 3.0 may be a better fit)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

# Load the origami LoRA on top of the base model
pipe.load_lora_weights("shauray/Origami_WanLora")

pipe.enable_model_cpu_offload()  # optional: for low-VRAM environments (can be used instead of pipe.to("cuda"))

prompt = "origami style bull charging towards a man"

output = pipe(
    prompt=prompt,
    height=480,
    width=720,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
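To tone the origami effect up or down, you can give the LoRA an explicit adapter name and scale it. This is a minimal sketch, assuming your diffusers version exposes `set_adapters` on the Wan pipeline; the adapter name `origami` is just a label chosen here and 0.7 is an arbitrary example weight:

```python
# Sketch: load the LoRA under an explicit adapter name, then scale its influence.
# set_adapters comes from diffusers' LoRA loader mixins; treat this as an
# assumption if your installed version behaves differently.
pipe.load_lora_weights("shauray/Origami_WanLora", adapter_name="origami")
pipe.set_adapters(["origami"], adapter_weights=[0.7])
```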
## Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
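If you'd rather fetch the weights programmatically than through the web UI, `huggingface_hub` can mirror the repository into the local cache; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Downloads every file in the repo (including the LoRA .safetensors) into the local HF cache
local_dir = snapshot_download(repo_id="shauray/Origami_WanLora")
print(local_dir)
```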
Note: this LoRA is not perfect. Every generation has a small "like" artifact toward the bottom of the frame because the training videos contained those (I messed up cleaning them out of the dataset).