Qwen3-0.6B-Base Quantized (INT8)

This model is an int8 weight-quantized version of Qwen/Qwen3-0.6B-Base, produced with optimum-quanto.

Model Details

  • Base Model: Qwen/Qwen3-0.6B-Base
  • Quantization: int8 weights using optimum-quanto
  • Library: Transformers + Optimum-Quanto
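
In int8 weight quantization, each weight tensor is stored as 8-bit integers plus a floating-point scale and is dequantized back to floating point for the matrix multiplies. The toy sketch below illustrates the general symmetric per-channel scheme; it is an illustration of the technique only, not optimum-quanto's actual kernels or storage layout.

import torch

# Toy symmetric per-channel int8 quantization (illustrative only;
# optimum-quanto's internal kernels and layouts differ)
w = torch.randn(4, 8)                                # float weights
scale = w.abs().amax(dim=1, keepdim=True) / 127.0    # one scale per output channel
q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
w_hat = q.float() * scale                            # dequantized approximation
print("max abs error:", (w - w_hat).abs().max().item())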

Usage

You can load and use this quantized model directly:

from transformers import AutoTokenizer
from optimum.quanto import QuantizedModelForCausalLM

# Load the tokenizer and the prequantized model
tokenizer = AutoTokenizer.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int8", trust_remote_code=True)
model = QuantizedModelForCausalLM.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int8")

# Generate text
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
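
To run on a GPU, move both the model and the inputs to the device first. A minimal sketch, assuming the quanto wrapper forwards standard nn.Module methods such as .to() to the underlying transformers model:

import torch

# Move model and inputs to GPU when available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_length=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))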

Alternative Loading Method

If loading the prequantized checkpoint fails (for example, due to an optimum-quanto version mismatch), you can reproduce an equivalent model by quantizing the base model locally:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from optimum.quanto import QuantizedModelForCausalLM, qint8

# Quantize the fp16 base model to int8 weights on the fly
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B-Base", torch_dtype=torch.float16)
model = QuantizedModelForCausalLM.quantize(base_model, weights=qint8)

# Use the model for inference
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=100,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id
    )
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
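
If you quantize locally, you can cache the result for reuse with optimum-quanto's save/load round trip (the output directory name below is illustrative):

# Save the locally quantized model, then reload it later
model.save_pretrained("./qwen3-0.6b-int8-local")
model = QuantizedModelForCausalLM.from_pretrained("./qwen3-0.6b-int8-local")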

Performance

This quantized model roughly halves weight memory compared to the fp16 original (a rough estimate follows the list):

  • Memory: ~50% smaller weight storage (8-bit instead of 16-bit weights)
  • Inference Speed: comparable to the original model
  • Quality: int8 weight quantization typically costs little accuracy on most tasks
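
A back-of-the-envelope estimate, assuming roughly 0.6B parameters (the exact count depends on the checkpoint):

# Rough weight-memory estimate (parameter count is approximate)
num_params = 0.6e9
print(f"fp16 weights: ~{num_params * 2 / 1e9:.1f} GB")  # 2 bytes per parameter
print(f"int8 weights: ~{num_params * 1 / 1e9:.1f} GB")  # 1 byte per parameter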

Technical Details

  • Quantization Method: optimum-quanto int8 weight quantization
  • Base Model: Qwen/Qwen3-0.6B-Base
  • Precision: int8 weights; activations are left unquantized and run in float16
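
To verify what was quantized, you can list the modules that quanto swapped in for torch.nn.Linear. This sketch assumes optimum-quanto's QLinear class path, which may vary across versions:

from optimum.quanto.nn import QLinear  # quanto's quantized Linear replacement

# Print every layer that was replaced with a quantized linear module
for name, module in model.named_modules():
    if isinstance(module, QLinear):
        print(name)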

License

This model is distributed under the same license as the base model, Qwen/Qwen3-0.6B-Base.
