eaddario committed
Commit e05e682 · verified · 1 Parent(s): 4520bb7

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -173,7 +173,7 @@ Scores generated using [llama-bench][bch]. Naive (`llama-quantize` with no optim
 **[Winogrande][wng]:** based on the [Winograd Schema Challenge][wng-chl], is a natural language understanding task requiring models to resolve ambiguities in sentences involving pronoun references.
 
 ## Credits
-A big **Thank You!** to [Colin Kealty][btk] for the many contributions and for being one of the best sources of high quality quantized models available on Hugging Face, and a really big ***Thank You!*** to [Georgi Gerganov][ggg] for his amazing work with **llama.cpp** and the **ggml/gguf** libraries.
+[LLaMa C++][llm] has a large and vibrant community of [contributors][llm-ctt] (~1,200 last time I checked) that actively maintain and extend its functionality, adding new models and architectures almost as fast as they appear (considering the breakneck speed at which the AI/ML field is advancing, this alone is a remarkable feat!). Whilst I'm grateful to each and every one of them, I want to recognise three people in particular: a big **Thank You!** to [Colin Kealty][btk] for the many contributions and for being one of the best sources of high quality quantized models available on Hugging Face; a really big ***Thank You!*** to [Georgi Gerganov][ggg] for his amazing work with **llama.cpp** and the **ggml/gguf** libraries; and to [Iwan Kawrakow][ikk] for being one of the key authors behind many of the quantisation algorithms and the imatrix functionality.
 
 [arc]: https://leaderboard.allenai.org/arc/submissions/get-started
 [btk]: https://huggingface.co/bartowski
@@ -189,6 +189,7 @@ A big **Thank You!** to [Colin Kealty][btk] for the many contributions and for b
 [gh-prn]: https://github.com/EAddario/llama.cpp/tree/prune
 [hsw]: https://rowanzellers.com/hellaswag
 [hsw-tst]: https://github.com/klosax/hellaswag_text_data
+[ikk]: https://github.com/ikawrakow
 [imx-dat]: https://huggingface.co/eaddario/Qwen3-30B-A3B-pruned-pruned-GGUF/tree/main/imatrix
 [imx]: https://github.com/ggml-org/llama.cpp/tree/master/tools/imatrix
 [imtx-pr]: https://github.com/ggml-org/llama.cpp/pull/12718
@@ -196,6 +197,7 @@ A big **Thank You!** to [Colin Kealty][btk] for the many contributions and for b
 [llm]: https://github.com/ggerganov/llama.cpp
 [llm-rel]: https://github.com/ggml-org/llama.cpp/releases/tag/b5580
 [lgt]: https://huggingface.co/eaddario/Qwen3-30B-A3B-pruned-pruned-GGUF/tree/main/logits
+[llm-ctt]: https://github.com/ggml-org/llama.cpp/graphs/contributors
 [lwq-lim]: https://arxiv.org/html/2403.03853v3#S3
 [lwq-ppr]: https://arxiv.org/abs/2406.17415
 [mdm]: https://medium.com/@eaddario/squeezing-tensor-bits-the-quest-for-smaller-llms-86b23bd052ca