Update README.md
README.md
@@ -49,6 +49,10 @@ tags:
This model was converted to GGUF format from [`Spestly/Athena-3-14B`](https://huggingface.co/Spestly/Athena-3-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Spestly/Athena-3-14B) for more details on the model.
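For reference, the conversion that the GGUF-my-repo space automates can be reproduced locally with llama.cpp's own tooling. The sketch below is a minimal assumed-equivalent workflow; the local checkpoint path, the output filenames, and the Q4_K_M quantization type are illustrative assumptions, not the exact settings used for this repo.

```bash
# Fetch llama.cpp and the Python dependencies its conversion script needs
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert a local copy of the original checkpoint to a full-precision GGUF file
python llama.cpp/convert_hf_to_gguf.py ./Athena-3-14B \
    --outfile athena-3-14b-f16.gguf --outtype f16

# Optionally quantize it (Q4_K_M shown here as a common choice);
# llama-quantize is available once llama.cpp is built or installed
llama-quantize athena-3-14b-f16.gguf athena-3-14b-q4_k_m.gguf Q4_K_M
```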
---

Athena-3-14B is a 14.0-billion-parameter causal language model fine-tuned from Qwen2.5-14B-Instruct. This model is designed to provide highly fluent, contextually aware, and logically sound outputs across a broad range of NLP and reasoning tasks. It balances instruction-following with generative flexibility.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
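A minimal sketch of the install step plus a sample invocation follows; the repo id placeholder and the GGUF filename below are assumptions, so substitute the actual quantized file from this repo.

```bash
# Install llama.cpp (provides the llama-cli and llama-server binaries)
brew install llama.cpp

# Run a one-off prompt, pulling the GGUF file straight from the Hub.
# The repo id and filename are placeholders, not the real identifiers.
llama-cli --hf-repo <your-username>/Athena-3-14B-GGUF \
    --hf-file athena-3-14b-q4_k_m.gguf \
    -p "Explain the GGUF format in one sentence."
```

The same `--hf-repo`/`--hf-file` flags also work with `llama-server`, which exposes a local OpenAI-compatible HTTP endpoint for the model.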