---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- image-to-video
widget:
- text: >-
    The video starts with a studio portrait of Goku. Then the image shifts to
    the 848y baby effect, Goku is in front of a crib, surrounded by toys.
    Finally, the 848y baby effect is shown again in a different location. The
    848y baby version of Goku is in the crib and seems excited and amused.
  output:
    url: example_videos/goku_baby.mp4
- text: >-
    The video starts with a studio portrait of an Asian man. Then the image
    shifts to the 848y baby effect, the man is in front of a crib, surrounded
    by toys. Finally, the 848y baby effect is shown again in a different
    location. The 848y baby version of the man is in the crib and seems
    excited and amused.
  output:
    url: example_videos/man_baby.mp4
- text: >-
    The video starts with a studio portrait of a woman. Then the image shifts
    to the 848y baby effect, the woman is in front of a crib, surrounded by
    toys. Finally, the 848y baby effect is shown again in a different
    location. The 848y baby version of the woman is in the crib and seems
    excited and amused.
  output:
    url: example_videos/woman_baby.mp4
---
This LoRA is trained on the Wan2.1 14B I2V 480p model and lets you turn any person or object in an image into a baby!
The key trigger phrase is: 848y baby effect
For best results, use this prompt structure (the same pattern used in the example videos above):

> The video starts with a studio portrait of [object]. Then the image shifts to the 848y baby effect, [object] is in front of a crib, surrounded by toys. Finally, the 848y baby effect is shown again in a different location. The 848y baby version of [object] is in the crib and seems excited and amused.

Simply replace [object] with whatever you want to see as a baby!
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
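Because a Diffusers-format base checkpoint is listed for this model, the LoRA should also be usable outside ComfyUI. The sketch below is an untested illustration, assuming a recent `diffusers` release with Wan2.1 image-to-video support; the LoRA file path and output settings are placeholders, not values confirmed by this card.

```python
TRIGGER = "848y baby effect"

def build_prompt(subject: str) -> str:
    """Fill the recommended prompt template from this card for a given subject."""
    return (
        f"The video starts with a studio portrait of {subject}. "
        f"Then the image shifts to the {TRIGGER}, {subject} is in front of a crib, "
        f"surrounded by toys. Finally, the {TRIGGER} is shown again in a different "
        f"location. The {TRIGGER} version of {subject} is in the crib and seems "
        "excited and amused."
    )

def generate(image_path: str, subject: str, lora_path: str) -> None:
    """Sketch of running the LoRA with Diffusers (assumed API, not from this card)."""
    # Heavy imports are kept local so build_prompt works without diffusers installed.
    import torch
    from diffusers import WanImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    pipe = WanImageToVideoPipeline.from_pretrained(
        "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(lora_path)  # placeholder path to this repo's .safetensors
    pipe.to("cuda")

    frames = pipe(
        image=load_image(image_path),
        prompt=build_prompt(subject),
        num_frames=81,          # assumed setting, adjust to taste
        guidance_scale=5.0,     # assumed setting, adjust to taste
    ).frames[0]
    export_to_video(frames, "output.mp4", fps=16)
```

If you use the ComfyUI workflow instead, the trigger phrase and prompt structure are the same; only the LoRA loading mechanism differs.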
Training was done using the diffusion-pipe training toolkit.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!