---
base_model: matrixportal/Turkce-LLM
language:
- tr
- en
library_name: transformers
license: apache-2.0
tags:
- matrixportal
inference: false
---
# Metafor GGUF Quantized Models
## Technical Details
- **Quantization Tool:** llama.cpp
- **Version:** 5170 (658987cf)
## Model Information
- **Base Model:** [matrixportal/Metafor](https://huggingface.co/matrixportal/Metafor)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)
## Available Files
| πŸš€ Download | πŸ”’ Type | πŸ“ Description |
|------------|---------|---------------|
| [Download](https://huggingface.co/matrixportal/Metafor-GGUF/resolve/main/metafor.q4_0.gguf) | Q4_0 | Standard 4-bit (fast on ARM) |
| [Download](https://huggingface.co/matrixportal/Metafor-GGUF/resolve/main/metafor.q4_k_m.gguf) | Q4_K_M | 4-bit balanced (recommended default) |
| [Download](https://huggingface.co/matrixportal/Metafor-GGUF/resolve/main/metafor.q5_k_m.gguf) | Q5_K_M | 5-bit, higher quality (recommended HQ option) |

πŸ’‘ **Q4_K_M** provides the best balance for most use cases.
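
## Usage Example

A minimal sketch of loading the Q4_K_M file with the `llama-cpp-python` bindings (not part of this repo; the prompt, context size, and token count below are illustrative, and the filename matches the table above). The same GGUF files can also be run directly with llama.cpp's command-line tools.

```python
# Minimal sketch: download and run the Q4_K_M quant with llama-cpp-python.
# Assumes: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Downloads metafor.q4_k_m.gguf from the Hub on first call, then loads it.
llm = Llama.from_pretrained(
    repo_id="matrixportal/Metafor-GGUF",
    filename="metafor.q4_k_m.gguf",
    n_ctx=2048,  # context window; adjust to your hardware and use case
)

# Plain text completion; prompt and max_tokens are illustrative values.
out = llm("Metafor nedir? Kısaca açıkla:", max_tokens=128)
print(out["choices"][0]["text"])
```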