steampunque/gemma-3-12b-it-Hybrid-GGUF
Tags: GGUF · gemma · gemma-3 · quantized · 4-bit precision
License: gemma
Files and versions (branch: main)
1 contributor · History: 5 commits
Latest commit: Update README.md (5e65ae9, verified) by steampunque, about 2 months ago
.gitattributes | 1.65 kB | Upload gemma-3-12b-it.mmproj.gguf with huggingface_hub | about 2 months ago
README.md | 4.2 kB | Update README.md | about 2 months ago
gemma-3-12b-it.Q4_K_H.gguf | 6.67 GB (LFS) | Upload gemma-3-12b-it.Q4_K_H.gguf with huggingface_hub | about 2 months ago
gemma-3-12b-it.mmproj.gguf | 854 MB (LFS) | Upload gemma-3-12b-it.mmproj.gguf with huggingface_hub | about 2 months ago
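
The GGUF files above were uploaded with huggingface_hub, so they can be fetched the same way. A minimal sketch using hf_hub_download is shown below; the repo id and filenames come from the listing, while the rest (variable names, printing the cached paths) is illustrative only.

```python
# Sketch: download the quantized weights and the multimodal projector
# from this repo with huggingface_hub (repo id and filenames as listed above).
from huggingface_hub import hf_hub_download

REPO_ID = "steampunque/gemma-3-12b-it-Hybrid-GGUF"

# Main quantized weights (Q4_K_H, ~6.67 GB).
model_path = hf_hub_download(repo_id=REPO_ID, filename="gemma-3-12b-it.Q4_K_H.gguf")

# Multimodal projector (~854 MB), used alongside the main GGUF for image input
# in llama.cpp-style runtimes.
mmproj_path = hf_hub_download(repo_id=REPO_ID, filename="gemma-3-12b-it.mmproj.gguf")

print(model_path)
print(mmproj_path)
```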