Text encoders

#2
by jwooldridge - opened

Do you know where we can get ggufs of the Mistral text encoder?

Probably sometime later today. That's going to be a hard one to quant at 32 GB.

Can't you just use the existing GGUFs of Mistral Small 3?

Use mistral_3_small_flux2_fp8.safetensors.
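For anyone unsure where that file goes: a minimal sketch, assuming a standard ComfyUI install where text encoders live under `models/text_encoders` (the `~/ComfyUI` path and the download location are placeholders, not from this thread):

```shell
# Hedged sketch: standard ComfyUI layout keeps text encoders in
# models/text_encoders (adjust COMFY to your actual install path).
COMFY=~/ComfyUI
mkdir -p "$COMFY/models/text_encoders"
# Then drop the FP8 encoder there, e.g.:
# mv ~/Downloads/mistral_3_small_flux2_fp8.safetensors "$COMFY/models/text_encoders/"
echo "$COMFY/models/text_encoders"
```

After restarting ComfyUI (or refreshing the node), the file should show up in the text-encoder loader's dropdown.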

I have a 3060 12GB variant with 64 GB RAM. Should I even bother to download? πŸ«₯

I have a 4060 Ti 8GB variant with 32 GB RAM and it works great. For Q5, generation time is around 3 to 4 minutes. Thank you...

You must have your pagefile max size set to some unimaginable number! haha Running the stock Flux.2 dev template in ComfyUI, resmon tells me the process is committing 93.3 GB 🀯 I don't have the drive space to allocate, so I have to humbly bow out πŸ’” (for now... 😈)
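The arithmetic behind that bow-out is simple: on Windows, total commit charge has to fit in physical RAM plus the pagefile, so the pagefile must cover whatever the commit exceeds RAM by. A quick sketch using the numbers from this thread (93.3 GB committed, 64 GB RAM; the 32 GB case is the earlier poster's machine):

```python
# Rough sizing sketch: Windows requires commit charge <= RAM + pagefile,
# so the pagefile must absorb the overflow. Numbers are from this thread.
def min_pagefile_gb(committed_gb: float, ram_gb: float) -> float:
    """Smallest pagefile (GB) that can back the given commit charge."""
    return max(0.0, committed_gb - ram_gb)

# 93.3 GB commit reported by resmon for the ComfyUI process:
print(min_pagefile_gb(93.3, 64))  # 64 GB RAM machine
print(min_pagefile_gb(93.3, 32))  # 32 GB RAM machine
```

So the 64 GB machine needs roughly a 30 GB pagefile free on disk, and the 32 GB machine roughly 61 GB, which matches the "unimaginable number" joke above.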
