rahular committed
Commit 6a1569f · verified · 1 Parent(s): 8f43073

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -196,7 +196,7 @@ messages.append(
 
 # Running the model on a CPU
 
-This repo contains quantized (q8) version of the model as well. You can use the model on your local machine (without gpu) as explained [here](docs https://github.com/ggml-org/llama.cpp/tree/master/tools/main).
+This repo contains gguf versions of `sarvam-m` in both bf16 and q8 precisions. You can use the model on your local machine (without gpu) as explained [here](https://github.com/ggml-org/llama.cpp/tree/master/tools/main).
 
 Example Command:
 ```
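The "Example Command" block itself lies outside this hunk and is not shown. As a rough sketch only, a local q8 run via llama.cpp's `llama-cli` (the tool in the linked tools/main directory) could look like the following; the gguf filename is an assumed placeholder, not necessarily the file shipped in this repo:

```
# Sketch, not the repo's documented command: the model filename is a placeholder.
llama-cli -m sarvam-m-Q8_0.gguf -p "Write a haiku about monsoon rain." -n 256
```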