---
base_model: HelpingAI/HelpingAI2.5-5B
language:
- en
library_name: transformers
license: other
license_name: helpingai
license_link: https://huggingface.co/OEvortex/HelpingAI2.5-2B/blob/main/LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally-Intelligent
- EQ-focused
- Conversational
- SLM
- llama-cpp
- matrixportal
---

# ysn-rfd/HelpingAI2.5-5B-GGUF
This model was converted to GGUF format from HelpingAI/HelpingAI2.5-5B using llama.cpp via ggml.ai's all-gguf-same-where space. Refer to the original model card for more details on the model.
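
As a usage sketch (not part of the original card), the GGUF files can be loaded with the `llama-cpp-python` bindings. The quant filename pattern below is an assumption; match it to the file you actually pick from the table that follows.

```python
# Sketch: run a quantized HelpingAI2.5-5B GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`; the "*q4_k_m.gguf"
# pattern is an assumption and may need adjusting to the real filename.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ysn-rfd/HelpingAI2.5-5B-GGUF",
    filename="*q4_k_m.gguf",  # assumed pattern; any quant from the table works
    n_ctx=4096,               # context window; adjust to your memory budget
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do you handle a stressful day?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```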
## Quantized Models Download List

Recommended for CPU: Q4_K_M | Recommended for ARM CPU: Q4_0 | Best Quality: Q8_0
| Download | Type | Notes |
|---|---|---|
| Download | Q2_K | Basic quantization |
| Download | Q3_K_S | Small size |
| Download | Q3_K_M | Balanced quality |
| Download | Q3_K_L | Better quality |
| Download | Q4_0 | Fast on ARM |
| Download | Q4_K_S | Fast, recommended |
| Download | Q4_K_M | Best balance |
| Download | Q5_0 | Good quality |
| Download | Q5_K_S | Balanced |
| Download | Q5_K_M | High quality |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Fast, best quality |
| Download | F16 | Maximum accuracy |
Tip: Use F16 for maximum precision when quality is critical.
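
Since the download links above depend on the exact filenames in this repo, here is a small sketch (assuming the `huggingface_hub` package) that lists the available GGUF files and fetches one locally; no filenames are hard-coded.

```python
# Sketch: discover and download a GGUF file from this repo with huggingface_hub.
# list_repo_files() reports whatever is actually uploaded, so nothing is guessed.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "ysn-rfd/HelpingAI2.5-5B-GGUF"

# Show every GGUF variant in the repo so you can pick one from the table above.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print("\n".join(gguf_files))

# Download the first match for the quant you want, e.g. Q8_0 for best quality.
choice = next(f for f in gguf_files if "q8_0" in f.lower())
local_path = hf_hub_download(repo_id=repo_id, filename=choice)
print("Saved to:", local_path)
```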