Q6F Q8A: Q6_K ffn, Q8_0 attn, Q8_0 output, Q8_0 embeds
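The name encodes a per-tensor quantization mix: feed-forward (ffn_*) weights in Q6_K, with attention, output, and embedding tensors kept at Q8_0. A minimal sketch of how that mix could be verified locally with the `gguf` Python package; the filename below is a placeholder, not the actual file in this repo:

```python
# Minimal sketch: confirm which quantization each tensor group uses.
# Assumes `pip install gguf`; the filename is a placeholder.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("model-Q6F-Q8A.gguf")  # placeholder path

# Count tensors per quantization type (expect ffn_* in Q6_K, the rest in Q8_0).
print(Counter(t.tensor_type.name for t in reader.tensors))

# Spot-check a few individual tensors by name.
for t in reader.tensors:
    if t.name.endswith(("ffn_down.weight", "attn_q.weight")):
        print(t.name, t.tensor_type.name)
```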

Fits ≥24K context on a 24 GiB GPU
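As a hedged example of what "fits ≥24K context" can look like in practice, here is a sketch using llama-cpp-python with all layers offloaded. The path is a placeholder, and the actual fit depends on KV-cache settings and whatever else is using the card:

```python
# Sketch: load the GGUF at a 24K context window, fully offloaded to a 24 GiB GPU.
# Assumes `pip install llama-cpp-python` built with GPU support; path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q6F-Q8A.gguf",  # placeholder path
    n_ctx=24576,       # >=24K tokens of context
    n_gpu_layers=-1,   # offload every layer to the GPU
    flash_attn=True,   # if supported, trims attention VRAM overhead
)

out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```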

GGUF
Model size: 23.6B params
Architecture: llama