# Claydoh LoRA for WAN 2.1 14B by Hot Hams
This LoRA is derived from WAN 2.1 by Wan-AI and is licensed under the Apache 2.0 license.
## Usage
To use this LoRA, load it with the WAN 2.1 base model via the Diffusers library. Make sure the required dependencies, such as `transformers` and `diffusers`, are installed. Here's an example of how to load and use the LoRA:
```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Load the base WAN 2.1 model (the Diffusers-format checkpoint)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load your LoRA weights (replace with your LoRA's path or Hugging Face repo)
pipe.load_lora_weights("path/to/your/lora")

# Generate a video with the LoRA applied
frames = pipe("A description of the video you want to generate").frames[0]
export_to_video(frames, "output.mp4", fps=16)
```
The trigger word for this LoRA is "Claydoh." I like to use "Claydoh," "a Claydoh," "Claydoh style," etc. This LoRA was trained for roughly 2,000 steps (around the 38th epoch) on the same dataset I created in Blender for my FLUX Claydoh LoRA. Enjoy!
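To illustrate the trigger-word note above, here is a minimal sketch of prompt variants that include the trigger; the specific scene descriptions are just hypothetical examples, not part of the LoRA's training data.

```python
# Example prompts built around the "Claydoh" trigger word, following the
# phrasings suggested above ("Claydoh," "a Claydoh," "Claydoh style").
trigger = "Claydoh"
prompts = [
    f"{trigger}, a clay figure waving at the camera",
    f"a {trigger} riding a bicycle through a park",
    f"a cat chasing a ball of yarn, {trigger} style",
]

# Every prompt should contain the trigger word so the LoRA activates.
assert all(trigger in p for p in prompts)
```

Any of these strings can be passed directly as the prompt when calling the pipeline.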