calcuis committed
Commit 4228058 · verified · 1 Parent(s): c40630a

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -50,7 +50,8 @@ widget:
  ### **review**
  - use tag/word(s) as input for more accurate results for those legacy models; not very convenient (compare to the recent models) at the very beginning
  - credits should be given to those contributors from civitai platform
- - fp8 scaled file works fine in this model
+ - fast-illustrious gguf was quantized from fp8 scaled safetensors while illustrious gguf was quantized from the original bf16
+ - fp8 scaled file works fine in this model; including vae and clips
  - good to run on old machines, i.e., 9xx series or before (legacy mode [--disable-cuda-malloc --lowvram] supported); compatible with the new gguf-node
 
  ### **reference**
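
For the legacy-mode note in the hunk above, a minimal launch sketch: the two flags are quoted from the README line itself, while the `python main.py` entry point is an assumption about a standard ComfyUI checkout rather than something stated in this commit.

```sh
# Hedged sketch: running in legacy mode on an older GPU (9xx series or before),
# using the flags quoted in the README note; the main.py entry point is assumed.
python main.py --disable-cuda-malloc --lowvram
```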