These are converted weights from the gemma-2-9b-it-abc-notation model, quantized with Unsloth's 4-bit dynamic quantization using this Colab notebook.

About this Conversion

This conversion uses Unsloth to load the original model in 4-bit precision and then force-save the quantized weights in the same 4-bit format.
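
A rough sketch of that step is shown below, assuming the Unsloth `FastLanguageModel` API and its `save_pretrained_merged` helper; the source repo id, sequence length, and output directory are illustrative placeholders, not the exact notebook contents.

```python
# Hypothetical sketch of the conversion step, not the exact notebook code.
from unsloth import FastLanguageModel

# Load the original model directly in bnb 4-bit precision.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="gemma-2-9b-it-abc-notation",  # placeholder for the source repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Force-save the weights in the same 4-bit format.
model.save_pretrained_merged(
    "gemma-2-9b-it-abc-notation-bnb-4bit",
    tokenizer,
    save_method="merged_4bit_forced",
)
```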

How 4-bit Quantization Works

  • The actual 4-bit quantization is handled by bitsandbytes (bnb), which plugs into PyTorch to quantize the model weights.
  • Unsloth acts as a wrapper, simplifying and optimizing the process for better efficiency.

This allows for reduced memory usage and faster inference while keeping the model compact.
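
For reference, a minimal loading sketch with transformers follows; it assumes the bnb 4-bit quantization config stored with this checkpoint is picked up automatically by `from_pretrained`, and the prompt is only an example.

```python
# Minimal inference sketch; assumes a bitsandbytes-capable GPU environment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Seeker38/gemma-2-9b-it-abc-notation-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The 4-bit (bnb) quantization config is saved with the checkpoint,
# so from_pretrained loads it in 4-bit without extra arguments.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

prompt = "Write a short melody in ABC notation."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```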

Model details

  • Format: Safetensors
  • Model size: 5.21B params
  • Tensor types: F32, FP16, U8
