TheBloke committed
Commit d9a0975 · 1 parent: fad26f0

Upload README.md

Files changed (1): README.md (+4 -5)
README.md CHANGED
@@ -31,7 +31,6 @@ quantized_by: TheBloke
 # LLaMA 65B - GGUF
 - Model creator: [Meta](https://huggingface.co/none)
 - Original model: [LLaMA 65B](https://ai.meta.com/blog/large-language-model-llama-meta-ai)
-- [Original model card](#original-model-card-metas-llama-65b)
 
 <!-- description start -->
 ## Description
@@ -42,7 +41,7 @@ This repo contains GGUF format model files for [Meta's LLaMA 65B](https://ai.met
 <!-- README_GGUF.md-about-gguf start -->
 ### About GGUF
 
-GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
+GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
 Here is an incomplete list of clients and libraries that are known to support GGUF:
 
@@ -80,7 +79,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- compatibility_gguf start -->
 ## Compatibility
 
-These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
+These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
 
 They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
 
@@ -205,10 +204,10 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-65B-GGUF llama-65b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-65B-GGUF llama-65b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
-Windows Command Line users: You can set the environment variable by running `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before the download command.
+Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
 </details>
 <!-- README_GGUF.md-how-to-download end -->
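As a worked form of the Windows note in the last hunk above, the corrected variable name would be set and then used in the same session; a minimal sketch of a Windows Command Prompt session, using the example filename from the README:

```shell
:: Minimal sketch of the corrected Windows download step (Command Prompt).
:: "set" applies HF_HUB_ENABLE_HF_TRANSFER to the current session only.
set HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download TheBloke/LLaMA-65B-GGUF llama-65b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```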
 
 
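The Compatibility hunk pins commit d0cee0d as the minimum llama.cpp version for these GGUFv2 files. A hedged sketch of checking out and building exactly that commit, assuming a Unix shell with git and make (the smoke-test prompt and `-n` token count are arbitrary):

```shell
# Build llama.cpp at the minimum commit named in the Compatibility section.
# Any later commit should also work.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout d0cee0d36d5be95a0d9088b674dbb27354107221
make
# Smoke test with the quantised file from the download step:
./main -m ../llama-65b.Q4_K_M.gguf -p "Once upon a time" -n 32
```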
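The shortened About GGUF paragraph drops the sentence about metadata support, but the removed text remains accurate: GGUF files embed their own metadata, which can be inspected. A hedged sketch, assuming the `gguf` Python package and its `gguf-dump` entry point are available:

```shell
# Hedged sketch: dump the embedded metadata of a GGUF file.
# Assumes the `gguf` Python package provides the gguf-dump script.
pip3 install gguf
gguf-dump llama-65b.Q4_K_M.gguf
```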