Pure
- pure conversion from the safetensors BF16 checkpoint via an F32 GGUF intermediate
- architecture: flex.2 (not all tensor shapes match flux)
- no imatrix was used for quantization
- biases and norms: F32
- img_in.weight: BF16 (its shape is incompatible with the quantization block sizes)
- all other tensors are quantized according to the file type (see the inspection sketch below the table)
Filename | Quant Type | File Size | Description / L2 Loss (Step 25) | Example Image |
---|---|---|---|---|
Flex.2-preview-BF16.gguf | BF16 | 16.3GB | - | - |
Flex.2-preview-Q8_0.gguf | Q8_0 | 8.68GB | TBC | - |
Flex.2-preview-Q6_K.gguf | Q6_K | 6.70GB | TBC | - |
Flex.2-preview-Q5_1.gguf | Q5_1 | 6.13GB | TBC | - |
Flex.2-preview-Q5_0.gguf | Q5_0 | 5.62GB | TBC | - |
Flex.2-preview-Q4_1.gguf | Q4_1 | 5.11GB | TBC | - |
Flex.2-preview-IQ4_NL.gguf | IQ4_NL | 4.60GB | TBC | - |
Flex.2-preview-Q4_0.gguf | Q4_0 | 4.60GB | TBC | - |
Flex.2-preview-Q3_K_S.gguf | Q3_K_S | 3.52GB | TBC | - |
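
To verify the per-tensor layout described above (F32 biases and norms, BF16 img_in.weight, everything else at the file's quantization type), the GGUF metadata can be read without loading the model. A minimal sketch using the `gguf` Python package (`pip install gguf`); the local filename is an example, and the exact norm key names depend on how the checkpoint was converted:

```python
# Minimal sketch: list per-tensor quantization types in one of the GGUF files.
# Assumes `pip install gguf` and a locally downloaded file; the path is an example.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("Flex.2-preview-Q8_0.gguf")

type_counts = Counter()
for tensor in reader.tensors:
    type_counts[tensor.tensor_type.name] += 1
    # Tensors the card keeps in higher precision; exact norm key names
    # depend on the conversion, so this filter is illustrative only.
    if tensor.name == "img_in.weight" or tensor.name.endswith(".bias") or "norm" in tensor.name:
        dims = [int(d) for d in tensor.shape]
        print(f"{tensor.name}: {tensor.tensor_type.name} {dims}")

print(type_counts)  # e.g. how many tensors are Q8_0 vs. F32 vs. BF16
```

Running the same check on the other files should only change the dominant quantization type; img_in.weight stays BF16 and the biases/norms stay F32 in every pure quant.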
Fluxified
- conversion from the safetensors BF16 checkpoint via an F32 GGUF intermediate
- img_in.weight truncated to the first 16 latent channels (see the sketch after the table below)
- loses the ability to inpaint and to process a control image
- architecture: flux
- dynamic quantization?
Filename | Quant Type | File Size | Description / L2 Loss (Step 25) | Example Image |
---|---|---|---|---|
Flex.2-preview-fluxified-Q8_0.gguf | Q8_0 | 8.39GB | TBC | - |
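
The difference from the pure conversion is confined to img_in: Flex.2's patchified input concatenates the 16 noise-latent channels with additional inpaint/control channels, so its img_in.weight has more input features than flux expects. A minimal sketch of the truncation described above, using safetensors; the key name, the 2x2 patchify factor (16 channels x 2 x 2 = 64 input features), and the output filename are assumptions for illustration, not the actual conversion script:

```python
# Minimal sketch: truncate img_in.weight to the first 16 latent channels so the
# tensor shapes match the stock flux architecture. Assumes 2x2 patchified latents
# (16 * 2 * 2 = 64 input features) and an unprefixed key name; the real checkpoint
# may use a prefix such as "model.diffusion_model.img_in.weight".
from safetensors.torch import load_file, save_file

state = load_file("Flex.2-preview.safetensors")

w = state["img_in.weight"]                       # (hidden_size, in_features), in_features > 64 for Flex.2
state["img_in.weight"] = w[:, :64].contiguous()  # keep only the noise-latent columns

# img_in.bias depends only on the output dimension, so it is left untouched.
save_file(state, "Flex.2-preview-fluxified.safetensors")
```

Since the dropped columns are exactly the inpaint and control inputs, the resulting file loads with standard flux GGUF tooling but can no longer use those conditioning paths, as noted above.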
Base model: ostris/Flex.2-preview