Turkce-LLM GGUF Quantized Models

Technical Details

  • Quantization Tool: llama.cpp
  • Version: 5170 (658987cf)
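
For reference, below is a minimal sketch of how a Q4_0 GGUF is typically produced with llama.cpp's conversion and quantization tools. The source model directory, output filenames, and build paths are assumptions for illustration, not details taken from this repo.

```python
import subprocess

# Sketch only: paths, filenames, and the source model directory are assumptions.
# Step 1: convert the original Hugging Face checkpoint to an FP16 GGUF.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py", "path/to/source-model",
        "--outfile", "Turkce-LLM-f16.gguf", "--outtype", "f16",
    ],
    check=True,
)

# Step 2: quantize the FP16 GGUF to Q4_0 with llama.cpp's llama-quantize tool.
subprocess.run(
    [
        "llama.cpp/build/bin/llama-quantize",
        "Turkce-LLM-f16.gguf", "Turkce-LLM-Q4_0.gguf", "Q4_0",
    ],
    check=True,
)
```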

Model Information

  • Format: GGUF
  • Model size: 8.03B params
  • Architecture: llama
  • Quantization: 4-bit (Q4_0)

Available Files

🚀 Download | 🔢 Type | 📝 Description
Download    | Q4_0    | Standard 4-bit quantization (fast on ARM)

💡 Q4_K_M generally provides the best balance for most use cases (this repo currently provides a Q4_0 file)
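
To try the Q4_0 file locally, here is a minimal sketch using huggingface_hub and llama-cpp-python. The GGUF filename, context size, and prompt are assumptions; check the repo's file listing for the exact filename.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_0 GGUF from the Hub.
# NOTE: the filename below is an assumption; verify it against the repo's file list.
model_path = hf_hub_download(
    repo_id="matrixportal/Turkce-LLM-GGUF",
    filename="Turkce-LLM-Q4_0.gguf",
)

# Load the quantized model and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Merhaba! Kendini kisaca tanitir misin?", max_tokens=128)  # "Hello! Can you briefly introduce yourself?"
print(out["choices"][0]["text"])
```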

Model tree for matrixportal/Turkce-LLM-GGUF

  • Finetuned: matrixportal/TR
  • Finetuned: matrixportal/TR-V1
  • Quantized (1): this model
