jnjj/gemma-3-4b-it-qat-int4-quantized-inference-unrestricted-weights-only-sf
Tags: Image-Text-to-Text · Transformers · Safetensors · gemma3 · conversational · text-generation-inference · 4-bit precision · bitsandbytes
Branch: main
1 contributor · History: 4 commits
Latest commit by jnjj (1751f0d, verified, 4 months ago): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion and GPTQ/AutoGPTQ flags, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
Files (last commit message and date per file; all updated 4 months ago):

- .gitattributes (1.57 kB): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion flag, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
- README.md (34 Bytes): "Create README.md"
- added_tokens.json (35 Bytes): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion flag, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
- config.json (2.29 kB): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion and GPTQ/AutoGPTQ flags, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
- model.safetensors (6.32 GB): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion and GPTQ/AutoGPTQ flags, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
- special_tokens_map.json (662 Bytes): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion flag, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
- tokenizer.json (33.4 MB): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion flag, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
- tokenizer.model (4.69 MB): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion flag, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"
- tokenizer_config.json (1.16 MB): "Upload INT4 quantized Gemma‑3‑4B‑IT QAT with bfloat16 compute, extensive unconventional modifications including instruction conversion flag, and only bfloat16 .weight tensors saved as safetensors (bfloat16 compute)"