---
base_model:
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
- meta-llama/Llama-3.1-8B-Instruct
tags:
- merge
- mergekit
- lazymergekit
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
- meta-llama/Llama-3.1-8B-Instruct
---
# Hibrid-Llama-Linear GGUF Quantized Models
## Technical Details
- Quantization Tool: llama.cpp
- Version: 5126 (307bfa25)
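
For context, GGUF files like the ones listed below are typically produced with llama.cpp's conversion and quantization tools. The following is a minimal Python sketch of that pipeline; the directory name, output filenames, and working paths are illustrative assumptions, not the exact commands used for this repo.

```python
import subprocess

# Assumed local checkout of the merged Hugging Face model (illustrative name).
MERGED_MODEL_DIR = "Hibrid-Llama-Linear"
F16_GGUF = "hibrid-llama-linear-f16.gguf"

# 1. Convert the Hugging Face checkpoint to a full-precision (F16) GGUF
#    using llama.cpp's convert_hf_to_gguf.py script.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", MERGED_MODEL_DIR,
     "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2. Quantize the F16 GGUF to a smaller type, e.g. the recommended Q4_K_M.
subprocess.run(
    ["./llama-quantize", F16_GGUF, "hibrid-llama-linear-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```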
## Model Information
- Base Model: matrixportal/Hibrid-Llama-Linear
- Quantized by: matrixportal
## Available Files

| Download | Type | Description |
|---|---|---|
| Download | Q3_K_M | Small, acceptable quality |
| Download | Q4_0 | Standard 4-bit (fast on ARM) |
| Download | Q4_K_M | 4-bit balanced (recommended default) |
| Download | Q5_K_M | 5-bit best (recommended HQ option) |
| Download | Q6_K | 6-bit near-perfect (premium quality) |
| Download | Q8_0 | 8-bit maximum (overkill for most) |
💡 Q4_K_M provides the best balance for most use cases.
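
As a usage sketch, the recommended Q4_K_M file can be downloaded and run locally with llama-cpp-python. The repo id and filename below are assumptions; use the exact download links in the table above.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M GGUF from the Hub.
# Repo id and filename are assumed examples; check the "Available Files" table.
model_path = hf_hub_download(
    repo_id="matrixportal/Hibrid-Llama-Linear-GGUF",
    filename="hibrid-llama-linear-Q4_K_M.gguf",
)

# Load the model; n_ctx and n_gpu_layers are tunable for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple chat-style generation (Turkish prompt, since the merge includes a Turkish base).
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```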