# hdtrnk/FusioniX-All

A complete collection of WanGP FusioniX checkpoints: both pure 16-bit dumps and pre-quantized 8-bit versions, for image-to-video (I2V) and Phantom text-to-video (T2V) pipelines.
## 📁 Files in this repo

### Pure FP16 checkpoints (~28–32 GB each)

* `Wan14Bi2vFusioniX_pure_fp16.safetensors` – Wan2.1 Image→Video FusioniX 14B, pure 16-bit
* `Wan14BT2VFusioniX_Phantom_pure_fp16.safetensors` – Wan2.1 Phantom Text→Video FusioniX 14B, pure 16-bit

### Pre-quantized BF16→Int8 checkpoints (~14–16 GB each)

* `Wan14Bi2vFusioniX_pure_quanto_bf16_int8.safetensors` – I2V FusioniX merged, quantized via Quanto (bf16→int8)
* `Wan14BT2VFusioniX_Phantom_pure_quanto_bf16_int8.safetensors` – Phantom T2V FusioniX merged, quantized via Quanto (bf16→int8)
## 🚀 Quick Start (WanGP)

1. **Copy** your desired file into your WanGP `ckpts/` folder.
2. **Edit** your finetune JSON (in `app/finetunes/`) so that under `"model"`:

   ```json
   "architecture": "i2v",   // or "phantom_14B"
   "URLs": ["ckpts/<filename>.safetensors"],
   "auto_quantize": false
   ```

3. **Restart** WanGP (no `--save-quantized` flag needed).
4. In the web UI, set **Transformer Model Quantization → Scaled Int8** for the int8 files, or leave it at BF16/FP16 for the pure 16-bit files.
5. **Generate**!
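The JSON edit in step 2 can also be done programmatically. A hedged sketch, assuming a finetune file shaped like the snippets later in this README (the stand-in dict below replaces reading an actual `app/finetunes/*.json`):

```python
# Sketch: point a WanGP finetune config at one of these checkpoints.
# The surrounding JSON shape is an assumption based on this README,
# not a documented WanGP API.
import json

# Stand-in for json.load(open("app/finetunes/my_finetune.json"))
finetune = {"model": {"name": "Wan image2video FusioniX 14B"}}

finetune["model"].update(
    {
        "architecture": "i2v",  # or "phantom_14B" for the Phantom T2V file
        "URLs": ["ckpts/Wan14Bi2vFusioniX_pure_quanto_bf16_int8.safetensors"],
        "auto_quantize": False,  # quantization is already baked into the int8 file
    }
)

print(json.dumps(finetune, indent=2))
```

Write the result back with `json.dump` to the same file, then restart WanGP as in step 3.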
---
## ⚙️ Recommended Defaults
### I2V FusioniX
```yaml
resolution: "832x480"
video_length: 81
num_steps: 8          # sweet spot: 6–10
guidance_scale: 6.0
flow_shift: 7.0
embed_guide: 6
prompt_enhancer: on
slg_switch: 0
```
### Phantom T2V FusioniX
```yaml
resolution: "832x480"
video_length: 81
num_steps: 8          # sweet spot: 6–10
guidance_scale: 7.5
flow_shift: 5.0
embed_guide: 6
prompt_enhancer: on
slg_switch: 0
```
---
## 🔌 Architecture & Compatibility
* **WanGP architectures**:
* `i2v` for image-to-video
* `phantom_14B` for Phantom text-to-video
* (See `model_signatures` in `wgp.py` or browse `app/settings/*.json` for all supported keys.)
* **WanGP version**: ≥ v6.0 (mmgp 3.4.9)
* **ComfyUI**: You can load these safetensors in ComfyUI's Checkpoint Merger or as a Diffusers pipeline, but you may need a custom video pipeline config. They are *not* plug-and-play ComfyUI "model cards."
---
## 📄 JSON Snippets
### I2V Finetune JSON
```json
{
"model": {
"name": "Wan image2video FusioniX 14B",
"architecture": "i2v",
"URLs": ["ckpts/Wan14Bi2vFusioniX_pure_quanto_bf16_int8.safetensors"],
"auto_quantize": false
},
"... other settings ..."
}
```
### Phantom T2V Finetune JSON
```json
{
"model": {
"name": "Wan text2video Phantom FusioniX 14B",
"architecture": "phantom_14B",
"URLs": ["ckpts/Wan14BT2VFusioniX_Phantom_pure_quanto_bf16_int8.safetensors"],
"auto_quantize": false
},
"... other settings ..."
}
```
---
## 🔗 Upstream & Licensing
* **Original WanGP repo** by DeepBeepMeep:
[https://github.com/DeepBeepMeep/WanGP](https://github.com/DeepBeepMeep/WanGP)
* **Original HF models**:
[https://huggingface.co/DeepBeepMeep/Wan2.1](https://huggingface.co/DeepBeepMeep/Wan2.1)
* **License**: Apache-2.0 (inherited from upstream)
---
*Happy generating!* 🎥✨