Clybius committed on
Commit 03cb1ed · verified · 1 Parent(s): 9a213ae

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -9,7 +9,7 @@ Based on `FLUX.1 [schnell]` with heavy architectural modifications.
 
 Quantized into GGUF format using a modified llama.cpp & city96's ComfyUI-GGUF/tools. Distillation layers are not quantized.
 
-Also see [silveroxides'](https://huggingface.co/Clybius/Chroma-GGUF/edit/main/README.md) Chroma GGUFs! (BF16, Q8_0, Q6_K, Q5_K_S, Q5_1, Q5_0, Q4_K_M, Q4_1, Q4_0, Q3_K_L)
+Also see [silveroxides'](https://huggingface.co/silveroxides/Chroma-GGUF) Chroma GGUFs! (BF16, Q8_0, Q6_K, Q5_K_S, Q5_1, Q5_0, Q4_K_M, Q4_1, Q4_0, Q3_K_L)
 
 Q*_M GGUFs are mixed quantizations with an aim at maximizing speed by selectively choosing the quantization of certain layers.
 - Q8_M focuses on Q8_0 quantization of weights for performance, mixed with Q6_K on less heavy layers.
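
The Q*_M note in the diff above describes picking a quantization type per layer rather than using one type for the whole model. Below is a minimal, purely illustrative Python sketch of that per-layer selection idea; it is not the actual modified llama.cpp or ComfyUI-GGUF tooling, and the function name, layer-name heuristics, and example tensor names are all assumptions.

```python
# Hypothetical sketch of a Q8_M-style mixed-quantization choice.
# NOT the real llama.cpp / ComfyUI-GGUF conversion code; the layer-name
# heuristics below are illustrative assumptions only.

HEAVY_LAYER_HINTS = ("attn", "linear1", "linear2")        # assumed "heavy" tensors
SKIP_QUANT_HINTS = ("distilled_guidance",)                # distillation layers stay unquantized


def pick_quant_type(tensor_name: str) -> str:
    """Return a GGUF quant type for one tensor in a Q8_M-style mix."""
    if any(hint in tensor_name for hint in SKIP_QUANT_HINTS):
        return "F16"    # leave distillation layers unquantized
    if any(hint in tensor_name for hint in HEAVY_LAYER_HINTS):
        return "Q8_0"   # heavy layers: Q8_0 for performance
    return "Q6_K"       # less heavy layers: smaller Q6_K


if __name__ == "__main__":
    for name in (
        "double_blocks.0.img_attn.qkv.weight",      # assumed tensor names, for illustration
        "distilled_guidance_layer.in_proj.weight",
        "final_layer.linear.weight",
    ):
        print(name, "->", pick_quant_type(name))
```

In the published GGUFs the per-layer choices were made by the modified llama.cpp conversion itself, so the heuristics above are only a stand-in for that selection logic.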