
unsloth/Mistral-Small-24B-Instruct-2501-GGUF
Try reducing gpu_memory_utilization to a lower value.
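For reference, a minimal sketch of how that parameter is passed when loading a model with vLLM; the model name and the 0.7 value here are illustrative placeholders, not a confirmed working configuration:

```python
from vllm import LLM

# gpu_memory_utilization sets the fraction of each GPU's memory that
# vLLM pre-allocates for the model weights and KV cache (default 0.9).
# Lowering it leaves more headroom and can avoid out-of-memory errors,
# at the cost of a smaller KV cache.
llm = LLM(
    model="unsloth/Mistral-Small-24B-Instruct-2501-GGUF",  # placeholder; use your model
    gpu_memory_utilization=0.7,  # try a lower coefficient, e.g. 0.6-0.8
)
```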
Thank you.
I'm also a big fan of Qwen models. However, in this case, I don't think they are appropriate because I'm not entirely confident in their capabilities regarding multilingual contexts. That's why I chose Llama.
Overall, I agree that the Qwen series is excellent for most tasks.