---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    The [d00m doom first person gameplay] showcases a player using a rocket
    launcher in a room with a cyberdemon, the text "TARGET ACQUIRED" flashing.
  output:
    url: example_videos/doom1.mp4
- text: >-
    The video shows [d00m doom first person gameplay] in a room with red walls
    and a lava pit. The player shoots a large monster with a rocket launcher.
  output:
    url: example_videos/doom2.mp4
- text: >-
    [d00m doom first person gameplay] in a hellish arena, the player is picking
    up a super shotgun.
  output:
    url: example_videos/doom3.mp4
- text: >-
    The [d00m doom first person gameplay] displays the player in a gray hallway
    with a chainsaw, the text "CHAINSAW ACQUIRED!" appearing on screen.
  output:
    url: example_videos/doom4.mp4
---
This LoRA is trained on the Wan2.1 14B T2V model and allows you to generate videos in the style of Doom!
The key trigger phrase is: [d00m doom first person gameplay]
For prompting, see the example prompts above; that style of phrasing seems to work well.
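Since every prompt needs the trigger phrase, a small helper like the one below can prepend it automatically. This is just an illustrative sketch, not part of the release; the function name is hypothetical.

```python
# Trigger phrase required to activate this LoRA's style.
TRIGGER = "[d00m doom first person gameplay]"

def build_prompt(description: str) -> str:
    """Prepend the trigger phrase unless it is already present (hypothetical helper)."""
    description = description.strip()
    if TRIGGER in description:
        return description
    return f"{TRIGGER} {description}"

print(build_prompt("in a hellish arena, the player is picking up a super shotgun"))
```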
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
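Since the weights are tagged for diffusers, inference outside ComfyUI might look roughly like the sketch below. This is an unverified assumption, not the workflow the author describes: it assumes a recent diffusers release with Wan support, and the `-Diffusers` model ID, the local LoRA path, and the generation parameters are all illustrative.

```python
def generate_doom_clip(lora_path: str, prompt: str) -> None:
    """Sketch: load the base Wan2.1 14B T2V model, attach this LoRA, and
    render a short clip. Assumes diffusers with Wan support and a CUDA GPU;
    model ID and filenames are illustrative, not confirmed by this card."""
    import torch
    from diffusers import WanPipeline
    from diffusers.utils import export_to_video

    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.1-T2V-14B-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(lora_path)  # the .safetensors file from this repo
    pipe.to("cuda")

    # Remember to include the trigger phrase in the prompt.
    frames = pipe(prompt=prompt, num_frames=81).frames[0]
    export_to_video(frames, "doom_clip.mp4", fps=16)
```

In practice the ComfyUI workflow linked above is the tested path; treat this purely as a starting point.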
Training was done using tdrussell's diffusion-pipe.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!