maddes8cht committed
Commit a65073c · 1 Parent(s): 15c39e7

"Update README.md"

Files changed (1):
  1. README.md +3 -1
README.md CHANGED
@@ -62,7 +62,7 @@ The core project making use of the ggml library is the [llama.cpp](https://githu
 
 There is a bunch of quantized files available. How to choose the best for you:
 
-# legacy quants
+# Legacy quants
 
 Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
 Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
@@ -76,6 +76,7 @@ With a Q6_K you should find it really hard to find a quality difference to the o
 
 
 
+---
 # Original Model Card:
 # Open-Assistant Falcon 7B SFT OASST-TOP1 Model
 
@@ -192,6 +193,7 @@ python export_model.py --dtype bf16 --hf_repo_name OpenAssistant/falcon-7b-sft-t
 ```
 
 ***End of original Model File***
+---
 
 
 ## Please consider to support my work
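The first hunk above quotes the README's guidance on picking one of the quantized files (legacy quants vs. K-quants). As a rough illustration of how such a file can be used once downloaded, here is a minimal sketch with llama-cpp-python; the local file name, the Q4_K_M choice, and the OpenAssistant-style prompt tokens are assumptions for illustration, not part of this commit.

```python
# Minimal sketch: load one of the quantized GGUF files and run a short prompt.
# Assumptions (not from the commit itself): llama-cpp-python is installed,
# a Q4_K_M quantization of the model has been downloaded locally, and the
# file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon-7b-sft-top1-696-Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=2048,  # context window size
)

# OpenAssistant-style prompt tokens (assumed from the upstream model family).
prompt = "<|prompter|>Explain what a K-quant is in one sentence.<|endoftext|><|assistant|>"

output = llm(prompt, max_tokens=96, temperature=0.7)
print(output["choices"][0]["text"])
```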