multimodalart (HF staff) committed on
Commit f99f972 · verified · 1 Parent(s): 91d8140

Fix tags and add diffusers inference example

Files changed (1): README.md (+75 −1)
README.md CHANGED
@@ -4,7 +4,34 @@ language:
 - en
 base_model:
 - Wan-AI/Wan2.1-I2V-14B-480P
+- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
 pipeline_tag: image-to-video
+tags:
+- text-to-image
+- lora
+- diffusers
+- template:diffusion-lora
+widget:
+- text: >-
+    In the video, a miniature dog is presented. The dog is held in a person's hands. The person then presses on the dog, causing a sq41sh squish effect. The person keeps pressing down on the dog, further showing the sq41sh squish effect.
+  output:
+    url: example_videos/dog_squish.mp4
+- text: >-
+    In the video, a miniature tank is presented. The tank is held in a person's hands. The person then presses on the tank, causing a sq41sh squish effect. The person keeps pressing down on the tank, further showing the sq41sh squish effect.
+  output:
+    url: example_videos/tank_squish.mp4
+- text: >-
+    In the video, a miniature balloon is presented. The balloon is held in a person's hands. The person then presses on the balloon, causing a sq41sh squish effect. The person keeps pressing down on the balloon, further showing the sq41sh squish effect.
+  output:
+    url: example_videos/balloon_squish.mp4
+- text: >-
+    In the video, a miniature rodent is presented. The rodent is held in a person's hands. The person then presses on the rodent, causing a sq41sh squish effect. The person keeps pressing down on the rodent, further showing the sq41sh squish effect.
+  output:
+    url: example_videos/rodent_squish.mp4
+- text: >-
+    In the video, a miniature person is presented. The person is held in a person's hands. The person then presses on the person, causing a sq41sh squish effect. The person keeps pressing down on the person, further showing the sq41sh squish effect.
+  output:
+    url: example_videos/person_squish.mp4
 ---
 <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
 <h1 style="color: #24292e; margin-top: 0;">Squish Effect LoRA for Wan2.1 14B I2V 480p</h1>
@@ -33,6 +60,8 @@ pipeline_tag: image-to-video
 </div>
 </div>
 
+<Gallery />
+
 ## Examples
 
 ### Clay Dog
@@ -79,7 +108,52 @@ pipeline_tag: image-to-video
 ## 📥 Download Links:
 
 - [squish_18.safetensors](./squish_18.safetensors) - LoRA Model File
-- [wan_img2video_lora_workflow.json](./workflow/wan_img2video_lora_workflow.json) - Wan I2V with LoRA Workflow
+- [wan_img2video_lora_workflow.json](./workflow/wan_img2video_lora_workflow.json) - Wan I2V with LoRA Workflow for ComfyUI
+
+## Using with Diffusers
+```shell
+pip install git+https://github.com/huggingface/diffusers.git
+```
+
+```py
+import torch
+import numpy as np
+from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
+from diffusers.utils import export_to_video, load_image
+from transformers import CLIPVisionModel
+
+model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
+image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
+vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
+pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
+pipe.to("cuda")
+
+pipe.load_lora_weights("Remade/Squish")
+
+pipe.enable_model_cpu_offload()  # for low-VRAM environments
+
+prompt = "In the video, a miniature cat toy is presented. The cat toy is held in a person's hands. The person then presses on the cat toy, causing a sq41sh squish effect. The person keeps pressing down on the cat toy, further showing the sq41sh squish effect."
+
+image = load_image("https://huggingface.co/datasets/diffusers/cat_toy_example/resolve/main/1.jpeg")
+
+# Resize so the frame covers at most 480 * 832 pixels while both dimensions
+# stay divisible by the VAE/patch stride.
+max_area = 480 * 832
+aspect_ratio = image.height / image.width
+mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
+height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
+width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
+image = image.resize((width, height))
+
+output = pipe(
+    image=image,
+    prompt=prompt,
+    height=height,
+    width=width,
+    num_frames=81,
+    guidance_scale=5.0,
+    num_inference_steps=28,
+).frames[0]
+export_to_video(output, "output.mp4", fps=16)
+```
 
 ---
 <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
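The dimension-snapping arithmetic in the diffusers example above can be checked without loading the pipeline: it scales the input image to cover roughly `max_area` pixels while keeping both sides divisible by `mod_value`. The sketch below uses a hypothetical `fit_resolution` helper (not part of diffusers) and assumes `mod_value = 16`, which is what `vae_scale_factor_spatial * patch_size[1]` works out to for this pipeline if the spatial VAE stride is 8 and the spatial patch size is 2:

```python
import math

def fit_resolution(width: int, height: int, max_area: int = 480 * 832, mod_value: int = 16):
    """Scale (width, height) to cover ~max_area pixels, preserving aspect
    ratio, with both sides snapped down to a multiple of mod_value."""
    aspect_ratio = height / width
    new_h = int(round(math.sqrt(max_area * aspect_ratio))) // mod_value * mod_value
    new_w = int(round(math.sqrt(max_area / aspect_ratio))) // mod_value * mod_value
    return new_w, new_h

# A 1024x768 (4:3) input lands on 720x544 — both multiples of 16,
# and 720 * 544 = 391,680 pixels, just under the 399,360 budget.
print(fit_resolution(1024, 768))
```

Because the snap always rounds down to a stride multiple, the resulting frame never exceeds the latent grid the transformer expects, at the cost of up to `mod_value - 1` pixels per side.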