---
base_model:
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
tags:
- merge
- mergekit
- lazymergekit
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
language:
- tr
- en
---

# Profesor-Dare_Ties GGUF Quantized Models

## Technical Details

- **Quantization Tool:** llama.cpp
- **Version:** 5121 (c94085df)

## Model Information

- **Base Model:** [matrixportal/Profesor-Dare_Ties](https://huggingface.co/matrixportal/Profesor-Dare_Ties)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)

## Available Files

| 🚀 Download | 🔢 Type | 📝 Description |
|------------|---------|---------------|
| [Download](https://huggingface.co/matrixportal/Profesor-Dare_Ties-GGUF/resolve/main/profesor-dare-ties.q4_0.gguf) | Q4_0 | Standard 4-bit quantization (fast on ARM CPUs) |
| [Download](https://huggingface.co/matrixportal/Profesor-Dare_Ties-GGUF/resolve/main/profesor-dare-ties.q4_k_m.gguf) | Q4_K_M | 4-bit, balanced quality and size (recommended default) |
| [Download](https://huggingface.co/matrixportal/Profesor-Dare_Ties-GGUF/resolve/main/profesor-dare-ties.q5_k_m.gguf) | Q5_K_M | 5-bit, higher quality (recommended HQ option) |
| [Download](https://huggingface.co/matrixportal/Profesor-Dare_Ties-GGUF/resolve/main/profesor-dare-ties.q6_k.gguf) | Q6_K | 6-bit, near-perfect quality (premium option) |

💡 **Q4_K_M** provides the best balance for most use cases; a loading sketch follows below.
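## Quick Start (llama-cpp-python)

As a minimal sketch, the snippet below shows one way to download and run the Q4_K_M file with llama-cpp-python, the Python bindings for llama.cpp. The `repo_id` and `filename` come from the table above; the context size, prompt, and generation parameters are illustrative assumptions, not card defaults.

```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Download the Q4_K_M file from this repo and load it.
# repo_id/filename match the table above; n_ctx is an
# illustrative choice, adjust to your available RAM.
llm = Llama.from_pretrained(
    repo_id="matrixportal/Profesor-Dare_Ties-GGUF",
    filename="profesor-dare-ties.q4_k_m.gguf",
    n_ctx=4096,
)

# The chat-completion API uses the chat template embedded
# in the GGUF metadata when one is available. The Turkish
# prompt here is only an example ("Hello! Could you briefly
# introduce yourself?").
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Merhaba! Kendini kısaca tanıtır mısın?"}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

The same call works for the other files in the table; only `filename` changes.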