RedHatAI/gemma-3n-E4B-it-quantized.w8a8

Model Overview

  • Model Architecture: gemma-3n-E4B-it
    • Input: Audio-Vision-Text
    • Output: Text
  • Model Optimizations:
    • Weight quantization: INT8
    • Activation quantization: INT8
  • Release Date: 08/01/2025
  • Version: 1.0
  • Model Developers: RedHatAI

Quantized version of google/gemma-3n-E4B-it.

Model Optimizations

This model was obtained by quantizing the weights and activations of google/gemma-3n-E4B-it to the INT8 data type. It is ready for inference with vLLM >= 0.10.0.
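As a quick check of the scheme, the quantization metadata recorded in the checkpoint's config.json can be inspected (a minimal sketch; the quantization_config attribute is assumed to be populated from the compressed-tensors metadata in the checkpoint):

from transformers import AutoConfig

# Inspect the quantization metadata stored in the checkpoint's config.json.
config = AutoConfig.from_pretrained("RedHatAI/gemma-3n-E4B-it-quantized.w8a8")
print(config.quantization_config)  # assumed to describe the W8A8 (INT8) scheme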

Deployment

Use with vLLM

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="RedHatAI/gemma-3n-E4B-it-quantized.w8a8",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
# Gemma-style chat prompt; verify the exact image placeholder token against
# the model processor's chat template.
inputs = {
    "prompt": f"<start_of_turn>user\n<start_of_image>{question}<end_of_turn>\n<start_of_turn>model\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
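For example, an OpenAI-compatible server can be started with the vllm CLI and queried with any OpenAI client (a minimal sketch; tune the flags to your hardware):

vllm serve RedHatAI/gemma-3n-E4B-it-quantized.w8a8 --max-model-len 4096

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "RedHatAI/gemma-3n-E4B-it-quantized.w8a8",
        "messages": [{"role": "user", "content": "What is INT8 quantization?"}]
      }'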

Creation

This model was created with llm-compressor by running the code snippet below.

Model Creation Code
import torch
from transformers import AutoProcessor, Gemma3nForConditionalGeneration

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.utils import dispatch_for_generation

# Load model.
model_id = "google/gemma-3n-E4B-it"
model = Gemma3nForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048


# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

dampening_frac = 0.01

# Recipe
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        ignore=[
            "re:.*embed_audio.*",
            "re:.*embed_vision.*",
            "re:.*audio_tower.*",
            "re:.*vision_tower.*",
            "re:.*altup.*",
            "re:.*lm_head.*",
            "re:.*laurel.*",
            "re:model\.language_model\.layers\.\d+\.per_layer_input_gate",
            "re:model\.language_model\.layers\.\d+\.per_layer_projection",
            "model.language_model.per_layer_model_projection",
        ],
        dampening_frac=dampening_frac
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.{recipe[0].scheme.lower()}"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=DATASET_ID,
    splits=DATASET_SPLIT,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    # gemma3n has broken weight offloading which is required by the sequential pipeline
    pipeline="basic",
    # gemma3n does not support untying word embeddings
    tie_word_embeddings=True,
    output_dir=SAVE_DIR,
)
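
# Optional sanity check (a sketch following llm-compressor's example pattern):
# run a short generation with the quantized model before saving it.
dispatch_for_generation(model)
sample = processor(text="Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**sample, max_new_tokens=20)
print(processor.decode(output[0]))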

# Save to disk compressed.
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)

Evaluation

The model was evaluated with lm_evaluation_harness on the OpenLLM V1 and Leaderboard V2 text-based benchmarks, using the following commands:

Evaluation Commands

OpenLLM V1

lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=false,max_model_len=4096,gpu_memory_utilization=0.8,enable_chunked_prefill=True,enforce_eager=True,trust_remote_code=True \
  --tasks openllm \
  --batch_size auto \
  --apply_chat_template \
  --fewshot_as_multiturn

Leaderboard V2

lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=false,max_model_len=15000,gpu_memory_utilization=0.5,enable_chunked_prefill=True,enforce_eager=True,trust_remote_code=True \
  --tasks leaderboard \
  --batch_size auto \
  --apply_chat_template \
  --fewshot_as_multiturn

Accuracy

| Category    | Metric         | google/gemma-3n-E4B-it | RedHatAI/gemma-3n-E4B-it-quantized.w8a8 | Recovery (%) |
|-------------|----------------|------------------------|-----------------------------------------|--------------|
| OpenLLM V1  | arc_challenge  | 60.24                  | 59.39                                   | 98.59%       |
| OpenLLM V1  | gsm8k          | 60.12                  | 70.28                                   | 116.91%      |
| OpenLLM V1  | hellaswag      | 74.94                  | 73.19                                   | 97.67%       |
| OpenLLM V1  | mmlu           | 64.14                  | 64.93                                   | 101.23%      |
| OpenLLM V1  | truthfulqa_mc2 | 54.87                  | 55.59                                   | 101.31%      |
| OpenLLM V1  | winogrande     | 68.35                  | 67.17                                   | 98.27%       |
| OpenLLM V1  | Average        | 63.78                  | 65.09                                   | 102.06%      |
| Leaderboard | bbh            | 55.46                  | 55.25                                   | 99.62%       |
| Leaderboard | mmlu_pro       | 34.38                  | 34.02                                   | 98.95%       |
| Leaderboard | musr           | 33.20                  | 33.07                                   | 99.61%       |
| Leaderboard | ifeval         | 84.41                  | 83.81                                   | 99.29%       |
| Leaderboard | gpqa           | 30.87                  | 30.45                                   | 98.64%       |
| Leaderboard | math_hard      | 45.54                  | 45.85                                   | 100.68%      |
| Leaderboard | Average        | 47.31                  | 47.08                                   | 99.50%       |
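Recovery is the quantized model's score expressed as a percentage of the baseline score. A minimal sketch of the computation:

def recovery(baseline: float, quantized: float) -> float:
    # Quantized score as a percentage of the baseline score.
    return 100.0 * quantized / baseline

print(f"{recovery(60.24, 59.39):.2f}%")  # arc_challenge row -> 98.59%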