---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    4n1m4l animal documentary The video shows a close-up of a chimpanzee
    grooming its infant, set against the backdrop of a lush green jungle.
  output:
    url: example_videos/animal1.mp4
- text: >-
    4n1m4l animal documentary The video shows a majestic lion resting in tall
    savanna grass, its golden mane catching the sunlight.
  output:
    url: example_videos/animal2.mp4
- text: >-
    4n1m4l animal documentary The video shows a tiger mother and her cub
    walking through a grassy field. The mother is larger and has more
    prominent stripes. The cub is smaller and its stripes are less distinct.
    They are both walking in the same direction.
  output:
    url: example_videos/animal3.mp4
- text: >-
    4n1m4l animal documentary a close-up of a scarlet macaw. The bird is
    perched on a branch and is facing to the left of the camera. Its feathers
    are a vibrant red, with blue and green accents. The macaw's large, black
    beak is slightly open, and its eyes are closed. The background of the
    video is a blurred out green foliage.
  output:
    url: example_videos/animal4.mp4
---
This LoRA is trained on the Wan2.1 14B T2V model and allows you to generate videos in the style of an animal documentary!
The key trigger phrase is: `4n1m4l animal documentary`
For prompting, check out the example prompts above; that structure (the trigger phrase followed by a detailed scene description) seems to work very well.
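The prompt pattern from the examples can be sketched as a tiny helper. This is only an illustration; `TRIGGER` and `build_prompt` are hypothetical names, not part of the release:

```python
# Trigger phrase required by this LoRA (from the model card above).
TRIGGER = "4n1m4l animal documentary"

def build_prompt(description: str) -> str:
    """Prefix a scene description with the LoRA's trigger phrase,
    mirroring the structure of the example prompts."""
    return f"{TRIGGER} {description.strip()}"

print(build_prompt("The video shows a majestic lion resting in tall savanna grass."))
```

Pass the resulting string as the positive prompt in your text-to-video workflow.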
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
Training was done using tdrussell's diffusion-pipe training scripts.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!