This model is a straightforward copy of the [original 3B parameter model](https://huggingface.co/1bitLLM/bitnet_b1_58-3B/tree/main), but only with the following files:

Thanks to [Green-Sky](https://huggingface.co/Green-Sky/bitnet_b1_58-3B-GGUF) for also providing similar work.

* HF to GGUF converted model in `f16` precision -> `model_f16.gguf`
* It was converted using `llama.cpp` at [this specific commit](https://github.com/ggerganov/llama.cpp/pull/8151/commits/45719a2472dd43bc3ba43d27d61fec34c6c14cb2).
* Command: `python3 path_to_llama_cpp/convert_hf_to_gguf.py path_to_hf_model --outfile ./model_f16.gguf --outtype f16` (the script takes the Hugging Face model directory as its positional argument; `path_to_llama_cpp` and `path_to_hf_model` are placeholders)
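After running the conversion above, a quick sanity check is to confirm the output really is a GGUF container: every GGUF file starts with the 4-byte magic `GGUF`, followed by a little-endian `uint32` format version. A minimal sketch of such a check (the `dummy.gguf` file below is illustrative only, not part of this repo):

```python
import struct

def is_gguf(path: str) -> bool:
    """Return True if the file begins with the 4-byte GGUF magic."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Create a tiny stand-in file with a GGUF-style header (magic + uint32 version)
# purely for demonstration; a real converted model would follow with metadata
# key/value pairs and tensor data.
with open("dummy.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(is_gguf("dummy.gguf"))  # True
```

Running this against `model_f16.gguf` should likewise print `True` if the conversion completed cleanly.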