DevQuasar / google.gemma-3-12b-it-qat-int4-unquantized-GGUF
Tags: Image-Text-to-Text · GGUF · conversational
Files and versions (branch: main)
1 contributor · History: 17 commits
Latest commit: 721ad33 (verified, 22 days ago) by csabakecskemeti: "Upload mmproj.gguf with huggingface_hub"
| File | Size | Last commit |
|---|---|---|
| .gitattributes | 2.56 kB | Upload mmproj.gguf with huggingface_hub · 22 days ago |
| README.md | 885 Bytes | Update README.md · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q2_K.gguf | 4.77 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q3_K_L.gguf | 6.48 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q3_K_M.gguf | 6.01 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q3_K_S.gguf | 5.46 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q4_K_M.gguf | 7.3 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q4_K_S.gguf | 6.94 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q5_K_M.gguf | 8.45 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q5_K_S.gguf | 8.23 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q6_K.gguf | 9.66 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.Q8_0.gguf | 12.5 GB | Uploaded with huggingface_hub · 22 days ago |
| google.gemma-3-12b-it-qat-int4-unquantized.f16.gguf | 23.5 GB | Uploaded with huggingface_hub · 22 days ago |
| mmproj.gguf | 854 MB | Uploaded with huggingface_hub · 22 days ago |
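The repository exposes a range of quantization levels (Q2_K through Q8_0, plus an f16 export) along with an mmproj.gguf multimodal projector. As a minimal sketch of fetching one quant plus the projector, the snippet below uses huggingface_hub (the same library named in the commit messages); the repo id is taken from the page header, and the choice of the Q4_K_M file is only an example, not a recommendation from the repo itself.

```python
from huggingface_hub import hf_hub_download

# Repo id as shown on this page; pick whichever quant fits your hardware.
repo_id = "DevQuasar/google.gemma-3-12b-it-qat-int4-unquantized-GGUF"

# Example: download the Q4_K_M quant (7.3 GB in the listing above).
model_path = hf_hub_download(
    repo_id=repo_id,
    filename="google.gemma-3-12b-it-qat-int4-unquantized.Q4_K_M.gguf",
)

# The projector file is typically needed for image input with a vision-capable GGUF model.
mmproj_path = hf_hub_download(repo_id=repo_id, filename="mmproj.gguf")

print(model_path)
print(mmproj_path)
```

For local inference, a GGUF file like this is usually loaded with a llama.cpp-based runtime; image input generally also requires passing the mmproj.gguf file, though the exact flag and tool name depend on the llama.cpp version you use.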