marcelone/gemma-3-4b-it-gguf
Tags: GGUF, conversational. License: gemma.
Downloads last month: 161
GGUF details:
- Model size: 3.88B params
- Architecture: gemma3
- Chat template: included
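A minimal sketch for checking these details locally, assuming the `gguf` Python package is installed; `local.gguf` is a stand-in path for any file downloaded from this repo, not an actual filename listed here:

```python
# Inspect the metadata of a downloaded GGUF file (architecture, tensor count,
# chat template key). Requires: pip install gguf
from gguf import GGUFReader

reader = GGUFReader("local.gguf")  # stand-in path; substitute a real file

# Print the metadata keys stored in the file; keys such as
# "general.architecture" and "tokenizer.chat_template" are typically present
# for instruct-tuned GGUF conversions like this one.
for name in reader.fields:
    print(name)

print("tensor count:", len(reader.tensors))
```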
Available quantizations:
- 4-bit IQ4_NL: 2.36 GB
- 6-bit Q6_K: 3.19 GB
- 8-bit Q8_0: 4.13 GB
- 16-bit BF16: 7.77 GB
- 16-bit F16: 7.77 GB
- 32-bit F32: 15.5 GB
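A hedged example of running one of these quantized files locally with huggingface_hub and llama-cpp-python. The filename `gemma-3-4b-it-Q6_K.gguf` is an assumption (check the repo's file list for the actual names), and `n_ctx` / `n_gpu_layers` are illustrative values, not recommendations from this repo:

```python
# Sketch: download one quantized file from this repo and chat with it.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (Q6_K is ~3.19 GB).
model_path = hf_hub_download(
    repo_id="marcelone/gemma-3-4b-it-gguf",
    filename="gemma-3-4b-it-Q6_K.gguf",  # hypothetical filename; verify in the repo
)

# Load the model; -1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# The GGUF includes a gemma3 chat template, so the chat-completion API works directly.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

On an 8 GB GPU, the smaller IQ4_NL file (2.36 GB) leaves considerably more VRAM headroom than Q8_0 or the 16-bit files, which fits the "8GB GPUs" collection this repo appears in below.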
Inference Providers: This model isn't deployed by any Inference Provider.
Model tree for marcelone/gemma-3-4b-it-gguf:
- Base model: google/gemma-3-4b-pt
- Finetuned: google/gemma-3-4b-it
- Quantized (one of 125 quantized variants of the finetuned model): this model
Collection including marcelone/gemma-3-4b-it-gguf:
- Language Learning - 8GB GPUs (collection, 1 item, updated 12 days ago)