Qwen3-0.6B-Base Quantized (INT2)
This model is a quantized version of Qwen/Qwen3-0.6B-Base
using optimum-quanto with int2 weight quantization.
Model Details
- Base Model: Qwen/Qwen3-0.6B-Base
- Quantization: int2 weights using optimum-quanto (see the reproduction sketch below)
- Library: Transformers + Optimum-Quanto
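If you want to reproduce a similar checkpoint, the sketch below shows one way to quantize the base model with optimum-quanto. It assumes you can load the full-precision base model in memory first; the output directory name is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.quanto import QuantizedModelForCausalLM, qint2

# Load the full-precision base model, quantize its weights to int2, and save.
base_id = "Qwen/Qwen3-0.6B-Base"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

qmodel = QuantizedModelForCausalLM.quantize(model, weights=qint2)
qmodel.save_pretrained("Qwen3-0.6B-Base-int2")   # illustrative output path
tokenizer.save_pretrained("Qwen3-0.6B-Base-int2")
```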
Usage
You can load and use this quantized model directly:
```python
from transformers import AutoTokenizer
from optimum.quanto import QuantizedModelForCausalLM

# Load the tokenizer and the quantized model directly from the Hub
tokenizer = AutoTokenizer.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2", trust_remote_code=True)
model = QuantizedModelForCausalLM.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2")

# Generate text
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Extended Inference Example
```python
# Same loading calls as above, shown with an explicit no-grad generation loop
import torch
from transformers import AutoTokenizer
from optimum.quanto import QuantizedModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2", trust_remote_code=True)
model = QuantizedModelForCausalLM.from_pretrained("CarlOwOs/Qwen3-0.6B-Base-int2")

# Use the model for inference
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=100,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
Performance
This quantized model provides significant memory savings compared to the original model: int2 weights take 2 bits per parameter instead of 16, so weight storage shrinks by roughly 8x before accounting for quantization scales and metadata (see the size check below).
- Inference Speed: comparable to the original model
- Quality: intended to maintain good performance for most tasks, though int2 is an aggressive precision and some degradation relative to the full-precision model is expected
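One rough way to check the savings is to compare the total size of the safetensors files in the two repositories. A minimal sketch using huggingface_hub (the repo IDs are the ones referenced in this card; exact numbers depend on what each repo stores):

```python
from huggingface_hub import HfApi

api = HfApi()

def weight_bytes(repo_id: str) -> int:
    """Sum the sizes of all safetensors files in a Hub repo."""
    info = api.model_info(repo_id, files_metadata=True)
    return sum(f.size or 0 for f in info.siblings if f.rfilename.endswith(".safetensors"))

quantized = weight_bytes("CarlOwOs/Qwen3-0.6B-Base-int2")
base = weight_bytes("Qwen/Qwen3-0.6B-Base")
print(f"quantized: {quantized / 1e6:.0f} MB, base: {base / 1e6:.0f} MB, "
      f"ratio: {base / quantized:.1f}x")
```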
Technical Details
- Quantization Method: optimum-quanto int2 weight quantization
- Base Model: Qwen/Qwen3-0.6B-Base
- Precision: int2 weights, float16 activations (see the toy example below)
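For intuition about what the int2 setting means, the toy example below performs a simple symmetric, per-row 2-bit quantize/dequantize round trip in plain PyTorch. It is not optimum-quanto's actual implementation (quanto uses its own group-wise packing and scales); it only illustrates mapping float weights onto a handful of integer levels plus a scale.

```python
import torch

# Toy symmetric per-row 2-bit quantization (NOT optimum-quanto's exact scheme).
w = torch.randn(4, 8)                                   # a small "weight" matrix

qmax = 1                                                # signed 2-bit codes span [-2, 1]
scale = w.abs().amax(dim=1, keepdim=True) / qmax        # one scale per output row
q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax).to(torch.int8)
w_hat = q.float() * scale                               # dequantized weights used at matmul time

print("max abs error:", (w - w_hat).abs().max().item())
```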
License
Same as the base model license.