This model is a straightforward copy of the [original 3B-parameter model](https://huggingface.co/1bitLLM/bitnet_b1_58-3B/tree/main), containing only the following files:

* the original HF model converted to GGUF in `f16` precision -> `model_f16.gguf`
  * It was converted using `llama.cpp` at [this specific commit](https://github.com/ggerganov/llama.cpp/pull/8151/commits/45719a2472dd43bc3ba43d27d61fec34c6c14cb2).
  * Command: `python3 path_to_llama_cpp/convert_hf_to_gguf.py path_to_hf_model --outfile ./model_f16.gguf --outtype f16`
* a quantized GGUF version in the [`Q1_3`](https://github.com/ggerganov/llama.cpp/pull/8151#issuecomment-2198043857) format
  * Quantization was done via `llama-quantize` built at that same commit (see the sketch after this list).
* a quantized GGUF version in the [`Q2_2`](https://github.com/ggerganov/llama.cpp/pull/8151#issuecomment-2198043857) format
  * Quantization was done via `llama-quantize` built at that same commit.

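For completeness, here is a minimal sketch of that quantization step. The exact invocation is an assumption (the README only names the tool and the commit); the output file names mirror `model_quant_Q2_2.gguf` from the log below, and `path_to_llama_cpp` is a placeholder:

```bash
# Build llama.cpp at the commit linked above, then quantize the f16 GGUF
# into the two experimental ternary formats from PR #8151.
# Usage: llama-quantize <input.gguf> <output.gguf> <type>
path_to_llama_cpp/llama-quantize ./model_f16.gguf ./model_quant_Q1_3.gguf Q1_3
path_to_llama_cpp/llama-quantize ./model_f16.gguf ./model_quant_Q2_2.gguf Q2_2
```
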
Please keep in mind that if you want to test this model through `llama-cli` on Metal (e.g., on a MacBook Pro with an M3 Pro, as I did), you need to pass the `--n-gpu-layers 0` flag; otherwise, the following error occurs:
```text
/Users/basavyr/Repos/external/llama.cpp/llama-cli -m model_quant_Q2_2.gguf -p "hey there"
Log start
main: build = 3505 (45719a24)
main: built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.6.0
main: seed = 1724230525
llama_model_loader: loaded meta data with 30 key-value pairs and 470 tensors from model_quant_Q2_2.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

.........................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Pro
ggml_metal_init: picking default device: Apple M3 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name: Apple M3 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 12884.92 MB
llama_kv_cache_init: Metal KV buffer size = 650.00 MiB
llama_new_context_with_model: KV self size = 650.00 MiB, K (f16): 325.00 MiB, V (f16): 325.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.12 MiB
llama_new_context_with_model: Metal compute buffer size = 157.00 MiB
llama_new_context_with_model: CPU compute buffer size = 62.50 MiB
llama_new_context_with_model: graph nodes = 1124
llama_new_context_with_model: graph splits = 3
ggml/src/ggml-metal.m:1612: MUL MAT-MAT not implemented
ggml/src/ggml-metal.m:1612: MUL MAT-MAT not implemented[1] 26436 abort /Users/basavyr/Repos/external/llama.cpp/llama-cli -m model_quant_Q2_2.gguf -p
```
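
With that flag in place, a working invocation looks like this (a sketch; `path_to_llama_cpp` is a placeholder, while the model file and prompt are taken from the log above):

```bash
# Keep all layers on the CPU: the Q2_2/Q1_3 matrix-multiplication kernels
# are not implemented in the Metal backend at this commit.
path_to_llama_cpp/llama-cli -m model_quant_Q2_2.gguf -p "hey there" --n-gpu-layers 0
```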