---
tags:
- fp8
- vllm
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: ibm-granite/granite-3.1-2b-base
library_name: transformers
---

# granite-3.1-2b-base-FP8-dynamic

## Model Overview
- **Model Architecture:** granite-3.1-2b-base
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [ibm-granite/granite-3.1-2b-base](https://huggingface.co/ibm-granite/granite-3.1-2b-base).
It achieves an average score of xxxx on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves xxxx.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [ibm-granite/granite-3.1-2b-base](https://huggingface.co/ibm-granite/granite-3.1-2b-base) to the FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.
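
As a rough back-of-the-envelope check of the ~50% figure (a sketch that assumes roughly 2.5B parameters and ignores embeddings and other unquantized tensors):

```python
# Rough estimate of the memory savings from FP8 weight quantization.
# The parameter count is an approximation for granite-3.1-2b-base;
# embeddings and other unquantized tensors are ignored for simplicity.
num_params = 2.5e9               # approximate parameter count (assumption)
bf16_gb = num_params * 2 / 1e9   # 16-bit weights: 2 bytes per parameter
fp8_gb = num_params * 1 / 1e9    # 8-bit weights: 1 byte per parameter
print(f"BF16: ~{bf16_gb:.1f} GB, FP8: ~{fp8_gb:.1f} GB")
# -> BF16: ~5.0 GB, FP8: ~2.5 GB (about 50% smaller)
```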

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 4096, 1
model_name = "neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the FP8 checkpoint; vLLM reads the quantization config from the model files.
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

# Render each conversation to token IDs with the tokenizer's chat template.
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
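
As a minimal sketch, assuming a server started with `vllm serve neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic` and listening on the default local port (8000), the endpoint can be queried with the `openai` Python package:

```python
# Minimal sketch: query an OpenAI-compatible vLLM server.
# Assumes the server was started with:
#   vllm serve neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic
# and is listening on the default port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic",
    prompt="The capital of France is",
    max_tokens=32,
)
print(completion.choices[0].text)
```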

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

```python
import argparse
import os

from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot


def main():
    parser = argparse.ArgumentParser(description="Quantize a transformer model to FP8")
    parser.add_argument("--model_id", type=str, required=True,
                        help='The model ID from HuggingFace (e.g., "ibm-granite/granite-3.1-2b-base")')
    parser.add_argument("--save_path", type=str, default=".",
                        help="Directory in which to save the quantized model; the folder name is derived as model_name-FP8-dynamic")
    args = parser.parse_args()

    # Load model
    model = AutoModelForCausalLM.from_pretrained(
        args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(args.model_id)

    # Configure the quantization algorithm and scheme:
    # dynamic FP8 on all Linear layers, leaving lm_head unquantized
    recipe = QuantizationModifier(
        targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
    )

    # Apply quantization
    oneshot(model=model, recipe=recipe)

    save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
    os.makedirs(save_path, exist_ok=True)

    # Save to disk in compressed-tensors format
    model.save_pretrained(save_path)
    tokenizer.save_pretrained(save_path)
    print(f"Model and tokenizer saved to: {save_path}")


if __name__ == "__main__":
    main()
```
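
Assuming the snippet is saved as `quantize.py` (a hypothetical filename), this checkpoint would be produced with `python quantize.py --model_id ibm-granite/granite-3.1-2b-base --save_path .`.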

## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:

#### OpenLLM Leaderboard V1
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config
```

#### HumanEval
##### Generation
```
python3 codegen/generate.py \
  --model neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval
```
##### Sanitization
```
python3 evalplus/sanitize.py \
  humaneval/neuralmagic-ent--granite-3.1-2b-base-FP8-dynamic_vllm_temp_0.2
```
##### Evaluation
```
evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/neuralmagic-ent--granite-3.1-2b-base-FP8-dynamic_vllm_temp_0.2-sanitized
```

### Accuracy

#### OpenLLM Leaderboard V1 evaluation scores

| Metric                            | ibm-granite/granite-3.1-2b-instruct | neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic |
|-----------------------------------|:-----------------------------------:|:-----------------------------------------------:|
| ARC-Challenge (Acc-Norm, 25-shot) | 55.63                               | 53.50                                            |
| GSM8K (Strict-Match, 5-shot)      | 60.96                               | 46.10                                            |
| HellaSwag (Acc-Norm, 10-shot)     | 75.21                               | 77.76                                            |
| MMLU (Acc, 5-shot)                | 54.38                               | 52.61                                            |
| TruthfulQA (MC2, 0-shot)          | 55.93                               | 39.84                                            |
| Winogrande (Acc, 5-shot)          | 69.67                               | 74.43                                            |
| **Average Score**                 | **61.98**                           | **57.37**                                        |
| **Recovery**                      | **100.00**                          | **99.52**                                        |

#### OpenLLM Leaderboard V2 evaluation scores

| Metric                                  | ibm-granite/granite-3.1-2b-base | neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic |
|-----------------------------------------|:-------------------------------:|:-----------------------------------------------:|
| IFEval (Inst-Level Strict Acc, 0-shot)  | 41.01                           | 41.85                                            |
| BBH (Acc-Norm, 3-shot)                  | 40.19                           | 48.54                                            |
| Math-Hard (Exact-Match, 4-shot)         | 4.86                            | 4.65                                             |
| GPQA (Acc-Norm, 0-shot)                 | 27.11                           | 27.80                                            |
| MUSR (Acc-Norm, 0-shot)                 | 34.85                           | 34.06                                            |
| MMLU-Pro (Acc, 5-shot)                  | 22.49                           | 22.85                                            |
| **Average Score**                       | **28.42**                       | **29.96**                                        |
| **Recovery**                            | **100.00**                      | **105.42**                                       |

#### HumanEval pass@1 scores

| Metric           | ibm-granite/granite-3.1-2b-base | neuralmagic-ent/granite-3.1-2b-base-FP8-dynamic |
|------------------|:-------------------------------:|:-----------------------------------------------:|
| HumanEval Pass@1 | 30.00                           | 30.40                                            |
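
Per the generation command above, pass@1 is estimated from the 50 samples per task produced at temperature 0.2.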