NVFP4-quantized version of `Qwen/Qwen3-30B-A3B` produced with [llmcompressor](https://github.com/vllm-project/llm-compressor).

- Quantization scheme: NVFP4 (linear layers, `lm_head` excluded)
- Calibration samples: 512
- Max sequence length during calibration: 2048
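
For reference, a run with the settings listed above could look roughly like the sketch below, which uses llmcompressor's one-shot flow. It assumes a recent llmcompressor release whose `QuantizationModifier` accepts an `NVFP4` scheme; the calibration dataset (`open_platypus`) and output directory are illustrative placeholders, since the card only records the sample count and sequence length.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-30B-A3B"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# NVFP4 on all linear layers, keeping lm_head in higher precision,
# matching the scheme listed above.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset="open_platypus",      # placeholder: the actual calibration set is not stated
    recipe=recipe,
    max_seq_length=2048,          # matches the card
    num_calibration_samples=512,  # matches the card
)

SAVE_DIR = "Qwen3-30B-A3B-NVFP4"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```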

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "llmat/Qwen3-30B-A3B-NVFP4"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat as a single prompt string using the model's chat template.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
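
As a sketch of that serving path: assuming the server has been launched with `vllm serve llmat/Qwen3-30B-A3B-NVFP4` on the default port, any OpenAI-compatible client can query it. The base URL and `api_key` value below are vLLM's defaults, not values from this card.

```python
from openai import OpenAI

# Points at a local vLLM server started with:
#   vllm serve llmat/Qwen3-30B-A3B-NVFP4
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="llmat/Qwen3-30B-A3B-NVFP4",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(response.choices[0].message.content)
```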