# hdtrnk / FusioniX-All

A complete collection of WanGP FusioniX checkpoints: both pure 16-bit dumps and pre-quantized 8-bit versions for the image-to-video (I2V) and Phantom text-to-video (T2V) pipelines.


## 📂 Files in this repo

### Pure FP16 checkpoints (~28–32 GB each)

* `Wan14Bi2vFusioniX_pure_fp16.safetensors`
  Wan2.1 Image→Video FusioniX 14B, pure 16-bit

* `Wan14BT2VFusioniX_Phantom_pure_fp16.safetensors`
  Wan2.1 Phantom Text→Video FusioniX 14B, pure 16-bit

### Pre-quantized BF16→Int8 checkpoints (~14–16 GB each)

* `Wan14Bi2vFusioniX_pure_quanto_bf16_int8.safetensors`
  I2V FusioniX merged, quantized via Quanto (bf16→int8)

* `Wan14BT2VFusioniX_Phantom_pure_quanto_bf16_int8.safetensors`
  Phantom T2V FusioniX merged, quantized via Quanto (bf16→int8)
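The size ranges above follow directly from the parameter count: a 14B-parameter model at 2 bytes per weight lands near 28 GB, and at 1 byte per weight near 14 GB. A quick back-of-the-envelope sketch (the extra few GB in the stated ranges presumably come from non-transformer tensors and metadata):

```python
# Approximate checkpoint sizes for a 14B-parameter model.
PARAMS_14B = 14e9  # nominal parameter count

def approx_size_gb(params: float, bytes_per_weight: float) -> float:
    """Approximate checkpoint size in GB (1 GB = 1e9 bytes)."""
    return params * bytes_per_weight / 1e9

fp16_gb = approx_size_gb(PARAMS_14B, 2)  # fp16/bf16: 2 bytes per weight
int8_gb = approx_size_gb(PARAMS_14B, 1)  # int8: 1 byte per weight
print(f"fp16: ~{fp16_gb:.0f} GB, int8: ~{int8_gb:.0f} GB")
# fp16: ~28 GB, int8: ~14 GB
```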


## 🚀 Quick Start (WanGP)

1. Copy your desired file into your WanGP `ckpts/` folder.
2. Edit your finetune JSON (in `app/finetunes/`) so that under `"model"` you have:

   ```json
   "architecture": "i2v",
   "URLs": ["ckpts/<filename>.safetensors"],
   "auto_quantize": false
   ```

   (use `"phantom_14B"` as the architecture for the Phantom T2V files)
3. **Restart** WanGP (no `--save-quantized` flag needed).
4. In the web UI, set **Transformer Model Quantization → Scaled Int8** for the int8 files, or leave it at BF16/FP16 for the pure 16-bit ones.
5. **Generate**!
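Before restarting, it can help to confirm which precision a file actually contains. The safetensors format starts with an 8-byte little-endian header length followed by a JSON index of tensor dtypes, so a stdlib-only sketch like this can report them (the path below is an example):

```python
# Count tensor dtypes in a .safetensors file without loading any weights.
# Format: 8-byte little-endian u64 header length, then a JSON header
# mapping tensor names to {"dtype", "shape", "data_offsets"}.
import json
import struct
from collections import Counter

def checkpoint_dtypes(path: str) -> Counter:
    """Return a Counter of tensor dtypes found in the safetensors header."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return Counter(
        info["dtype"]
        for name, info in header.items()
        if name != "__metadata__"  # optional metadata entry, not a tensor
    )

# Example (path is illustrative):
# print(checkpoint_dtypes("ckpts/Wan14Bi2vFusioniX_pure_fp16.safetensors"))
```

A pure-fp16 dump should show mostly `F16` entries, while the Quanto files carry int8 weight tensors alongside their scales.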

---

## โš™๏ธ Recommended Defaults

### I2V FusioniX

```yaml
resolution:     "832x480"
video_length:   81
num_steps:      8    # sweet-spot: 6–10
guidance_scale: 6.0
flow_shift:     7.0
embed_guide:    6
prompt_enhancer: on
slg_switch:     0
```

### Phantom T2V FusioniX

```yaml
resolution:     "832x480"
video_length:   81
num_steps:      8    # sweet-spot: 6–10
guidance_scale: 7.5
flow_shift:     5.0
embed_guide:    6
prompt_enhancer: on
slg_switch:     0
```
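For context on the `video_length` default: Wan 2.1 typically renders at 16 fps (an assumption; check your build's output frame rate), so 81 frames comes out to roughly a five-second clip:

```python
# Clip duration implied by the video_length default above.
frames = 81
fps = 16  # assumed Wan 2.1 output frame rate
seconds = frames / fps
print(f"{frames} frames @ {fps} fps = ~{seconds:.1f} s")
# 81 frames @ 16 fps = ~5.1 s
```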

---

## ๐Ÿ” Architecture & Compatibility

* **WanGP architectures**:

  * `i2v` for image-to-video
  * `phantom_14B` for Phantom text-to-video
  * (See `model_signatures` in `wgp.py` or browse `app/settings/*.json` for all supported keys.)

* **WanGP version**: ≥ v6.0 (mmgp 3.4.9)

* **ComfyUI**: You can load these safetensors in ComfyUI's Checkpoint Merger or as a Diffusers pipeline, but you may need a custom video pipeline config. They're *not* plug-and-play ComfyUI "model cards."

---

## 📋 JSON Snippets

### I2V Finetune JSON

```json
{
  "model": {
    "name": "Wan image2video FusioniX 14B",
    "architecture": "i2v",
    "URLs": ["ckpts/Wan14Bi2vFusioniX_pure_quanto_bf16_int8.safetensors"],
    "auto_quantize": false
  },
  "... other settings ..."
}
```

### Phantom T2V Finetune JSON

```json
{
  "model": {
    "name": "Wan text2video Phantom FusioniX 14B",
    "architecture": "phantom_14B",
    "URLs": ["ckpts/Wan14BT2VFusioniX_Phantom_pure_quanto_bf16_int8.safetensors"],
    "auto_quantize": false
  },
  "... other settings ..."
}
```
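If you prefer not to hand-edit these files, the `"model"` section can be generated with the stdlib `json` module, which sidesteps trailing-comma and quoting mistakes. A minimal sketch (the helper name is mine, not part of WanGP):

```python
# Build a WanGP finetune JSON string for one checkpoint.
import json

def build_finetune_json(name: str, architecture: str, checkpoint: str) -> str:
    """Return the "model" section as pretty-printed JSON.

    architecture: "i2v" or "phantom_14B"; checkpoint: filename under ckpts/.
    """
    config = {
        "model": {
            "name": name,
            "architecture": architecture,
            "URLs": [f"ckpts/{checkpoint}"],
            "auto_quantize": False,
        }
    }
    return json.dumps(config, indent=2)

print(build_finetune_json(
    "Wan image2video FusioniX 14B",
    "i2v",
    "Wan14Bi2vFusioniX_pure_quanto_bf16_int8.safetensors",
))
```

Write the result to a new file under `app/finetunes/`, then add the remaining settings your setup needs.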

---

## 🔗 Upstream & Licensing

* **Original WanGP repo** by DeepBeepMeep:
  [https://github.com/DeepBeepMeep/WanGP](https://github.com/DeepBeepMeep/WanGP)

* **Original HF models**:
  [https://huggingface.co/DeepBeepMeep/Wan2.1](https://huggingface.co/DeepBeepMeep/Wan2.1)

* **License**: Apache-2.0 (inherited from upstream)

---

*Happy generating!* 🎥✨