# Model Summary

This repository hosts quantized versions of the Gemma 3 12B instruct model.

- Format: GGUF
- Converter: llama.cpp 7841fc723e059d1fd9640e5c0ef19050fcc7c698
- Quantizer: LM-Kit.NET 2025.3.4

For more detailed information on the base model, please see its original model card.

- Model size: 11.8B params
- Architecture: gemma3

Available quantizations:

- 3-bit
- 4-bit
- 6-bit
- 8-bit
- 16-bit
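As a quick sanity check on which quantization level fits a given machine, the per-weight bit width gives a rough file-size estimate. This is a sketch only: actual GGUF files differ somewhat because k-quants mix precisions across layers and the file carries metadata, and runtime use also needs room for the KV cache.

```python
# Rough memory-footprint estimate per quantization level for an
# 11.8B-parameter model: params * bits_per_weight / 8 bytes.
# Lower bound only -- ignores GGUF metadata, the KV cache, and the
# mixed-precision layers that k-quants actually use.

PARAMS = 11.8e9  # parameter count reported for this model


def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate weight-file size in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9


for bits in (3, 4, 6, 8, 16):
    print(f"{bits:>2}-bit: ~{approx_size_gb(bits):.1f} GB")
```

By this estimate the 4-bit files land near 6 GB while 16-bit is close to 24 GB, which is the usual reason to pick a lower-bit quantization on consumer hardware.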
