Model Details

This model is a mixed INT4 model of Qwen/Qwen3-Coder-480B-A35B-Instruct, quantized with group_size 128 and symmetric quantization, generated by intel/auto-round via auto-round-light.

Please follow the license of the original model.
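
For reference, here is a minimal sketch of what symmetric, group-wise INT4 weight quantization looks like: each group of 128 consecutive weights along the input dimension shares one scale, and values are rounded into the signed 4-bit range. This illustrates the storage scheme only, not the AutoRound tuning procedure itself; the function names are illustrative.

import torch

def int4_sym_quantize(weight: torch.Tensor, group_size: int = 128):
    """Illustrative symmetric, group-wise INT4 quantization.

    Each group of `group_size` consecutive input-dim values shares one
    scale; values are rounded into the signed 4-bit range [-8, 7].
    """
    out_f, in_f = weight.shape
    assert in_f % group_size == 0, "input dim must be divisible by group_size"
    grouped = weight.reshape(out_f, in_f // group_size, group_size)
    # Symmetric scale: the largest magnitude in each group maps to 7.
    scale = grouped.abs().amax(dim=-1, keepdim=True).clamp(min=1e-9) / 7
    q = torch.clamp(torch.round(grouped / scale), -8, 7).to(torch.int8)
    return q, scale

def int4_sym_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reconstruct the (approximate) float weights."""
    return (q.float() * scale).reshape(q.shape[0], -1)

# Quick round-trip check on random weights.
w = torch.randn(16, 256)
q, s = int4_sym_quantize(w)
print((w - int4_sym_dequantize(q, s)).abs().max())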

How To Use

INT4 Inference on CPU/Intel GPU/CUDA
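
Note: checkpoints saved in the auto_round format are loaded through the auto-round library, so make sure it is installed alongside transformers (pip install auto-round).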

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/Qwen3-Coder-480B-A35B-Instruct-int4-AutoRound-vo"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompts = [
    "Write a quick sort algorithm.",
    "Write a flappy bird.",
    "Write a llm quantization algorithm.",
]

texts = []
for prompt in prompts:
    messages = [
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    texts.append(text)
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, padding_side="left").to(model.device)

# conduct text completion
outputs = model.generate(
    **inputs,
    max_new_tokens=65536,
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs)
]

decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

for i, prompt in enumerate(prompts):
    print(f"Prompt: {prompt}")
    print(f"Generated: {decoded_outputs[i]}")
    print("-" * 50)

"""
Prompt: Write a quick sort algorithm.
Generated: Here's a QuickSort implementation in Python:

```python
def quicksort(arr):
    """
    QuickSort algorithm implementation
    
    Args:
        arr: List of comparable elements
    
    Returns:
        Sorted list
    """
    # Base case: arrays with 0 or 1 element are already sorted
    if len(arr) <= 1:
        return arr
    
    # Choose pivot (using middle element)
    pivot = arr[len(arr) // 2]
    
    # Partition array into three parts
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    
    # Recursively sort left and right partitions
    return quicksort(left) + middle + quicksort(right)

# In-place version (more memory efficient)
def quicksort_inplace(arr, low=0, high=None):
    """
    In-place QuickSort implementation
    
    Args:
        arr: List to be sorted in-place
        low: Starting index
        high: Ending index
    """
    if high is None:
        high = len(arr) - 1
    
    if low < high:
        # Partition the array and get pivot index
        pivot_index = partition(arr, low, high)
        
        # Recursively sort elements before and after partition
        quicksort_inplace(arr, low, pivot_index - 1)
        quicksort_inplace(arr, pivot_index + 1, high)

def partition(arr, low, high):
    """
    Partition function for in-place QuickSort
    
    Args:
        arr: Array to partition
        low: Starting index
        high: Ending index
    
    Returns:
        Index of the pivot after partitioning
    """
    # Choose rightmost element as pivot
    pivot = arr[high]
    
    # Index of smaller element (indicates right position of pivot)
    i = low - 1
    
    for j in range(low, high):
        # If current element is smaller than or equal to pivot
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    
    # Place pivot in correct position
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

# Example usage
if __name__ == "__main__":
    # Test the
--------------------------------------------------
Prompt: Write a flappy bird.
Generated: # Flappy Bird in PyGame

Here's a complete implementation of Flappy Bird using PyGame:

```python
import pygame
import sys
import random

# Initialize pygame
pygame.init()

# Game constants
WIDTH, HEIGHT = 400, 600
FPS = 60
GRAVITY = 0.25
FLAP_STRENGTH = -5
PIPE_SPEED = 3
PIPE_GAP = 150
PIPE_FREQUENCY = 1800  # milliseconds
GROUND_HEIGHT = 100
BIRD_SIZE = 30

# Colors
SKY_BLUE = (113, 197, 207)
GREEN = (111, 196, 69)
DARK_GREEN = (76, 145, 65)
BROWN = (160, 120, 40)
YELLOW = (255, 221, 45)
RED = (231, 76, 60)
WHITE = (255, 255, 255)
BLACK = (0, 0, 0)

# Set up the display
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Flappy Bird")
clock = pygame.time.Clock()

# Font
font = pygame.font.SysFont(None, 36)
small_font = pygame.font.SysFont(None, 24)

class Bird:
    def __init__(self):
        self.x = WIDTH // 3
        self.y = HEIGHT // 2
        self.velocity = 0
        self.alive = True
        self.rotation = 0
        
    def flap(self):
        if self.alive:
            self.velocity = FLAP_STRENGTH
            
    def update(self):
        # Apply gravity
        self.velocity += GRAVITY
        self.y += self.velocity
        
        # Rotate bird based on velocity
        self.rotation = max(-30, min(self.velocity * 3, 90))
        
        # Check boundaries
        if self.y < 0:
            self.y = 0
            self.velocity = 0
        if self.y > HEIGHT - GROUND_HEIGHT - BIRD_SIZE:
            self.y = HEIGHT - GROUND_HEIGHT - BIRD_SIZE
            self.velocity = 0
            self.alive = False
            
    def
--------------------------------------------------
Prompt: Write a llm quantization algorithm.
Generated: Here's a comprehensive implementation of LLM quantization algorithms, including post-training quantization and QLoRA-style quantization:

```python
import torch
import torch.nn as nn
from typing import Tuple, Dict, Any
import math

class Quantizer:
    """Base class for quantization operations"""
    
    @staticmethod
    def quantize_weights(weight: torch.Tensor, bits: int = 4) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Quantize weights to specified bit width using uniform quantization
        
        Args:
            weight: Input weight tensor
            bits: Number of bits for quantization (1-8)
            
        Returns:
            Tuple of (quantized_weight, scale_factor)
        """
        # Calculate min/max values for scaling
        min_val = weight.min()
        max_val = weight.max()
        
        # For symmetric quantization around zero
        if min_val >= 0:
            # Unsigned quantization
            q_min, q_max = 0, 2**bits - 1
            scale = (max_val - min_val) / (q_max - q_min)
            zero_point = -torch.round(min_val / scale)
        else:
            # Symmetric signed quantization
            q_min, q_max = -(2**(bits-1)), 2**(bits-1) - 1
            scale = max(abs(min_val), abs(max_val)) / (2**(bits-1) - 1)
            zero_point = torch.tensor(0.0)
        
        # Quantize weights
        quantized = torch.round(weight / scale + zero_point).clamp(q_min, q_max)
        quantized = quantized.to(torch.int8)  # Store as int8 for efficiency
        
        return quantized, scale
    
    @staticmethod
    def dequantize_weights(quantized: torch.Tensor, scale: torch.Tensor, 
                          zero_point: torch.Tensor = None) -> torch.Tensor:
        """Dequantize weights back to float32"""
        if zero_point is None:
            zero_point = torch.tensor(0.0)
        return (quantized - zero_point) * scale


class LinearQuantized(nn.Module):
    """Quantized Linear Layer with QLoRA-style implementation"""
    
    def __init__(self, in_features: int, out_features: int, bits: int = 4,
                 r: int = 64, alpha: float = 16.0, dropout
--------------------------------------------------
"""

Generate the model

Here is a sample script to reproduce the model. It requires roughly 3×80 GB of GPU memory (three 80 GB GPUs).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
from auto_round import AutoRound

model_name = "Qwen3/Qwen3-Coder-480B-A35B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cpu", torch_dtype="auto", trust_remote_code=True)

# Spread the MoE expert layers across three 80 GB GPUs; all remaining
# linear layers stay on cuda:0.
block = model.model.layers
device_map = {}

for n, m in block.named_modules():
    if isinstance(m, (torch.nn.Linear, transformers.modeling_utils.Conv1D)):
        if "experts" in n and ("shared_experts" not in n):
            expert_idx = int(n.split('.')[-2])
            if expert_idx < 30:
                device = "cuda:0"
            elif expert_idx < 95:
                device = "cuda:1"
            else:
                device = "cuda:2"
        else:
            device = "cuda:0"

        # Drop the leading layer-index prefix so the keys are relative
        # to each decoder block.
        n = n[2:]

        device_map.update({n: device})

autoround = AutoRound(
    model=model,
    tokenizer=tokenizer,
    device_map=device_map,
    iters=50,
    lr=5e-3,
    nsamples=512,
    dataset="github-code-clean",
)
autoround.quantize_and_save(format="auto_round", output_dir="/dataset/Qwen3-Coder-480B-A35B-Instruct-int4")
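
Once saved, the quantized checkpoint can be loaded back the same way as in the inference example above; a minimal sketch, assuming the output_dir used in the script:

from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_dir = "/dataset/Qwen3-Coder-480B-A35B-Instruct-int4"  # output_dir from above
tokenizer = AutoTokenizer.from_pretrained(quantized_dir)
model = AutoModelForCausalLM.from_pretrained(
    quantized_dir,
    torch_dtype="auto",
    device_map="auto",
)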

Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the fine-tuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

  • Intel Neural Compressor: https://github.com/intel/neural-compressor

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

arXiv: https://arxiv.org/abs/2309.05516 · GitHub: https://github.com/intel/auto-round
