---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
license_link: LICENSE
base_model:
- ibm-granite/granite-3.1-8b-instruct
quantized_by: bartowski
tags:
- llamafile
- language
- granite-3.2
---

# Granite 3.2 8B Instruct - llamafile

- Model creator: [IBM](https://huggingface.co/ibm-granite)
- Original model: [ibm-granite/granite-3.2-8b-instruct](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct)

Mozilla packaged the IBM Granite 3.2 models into executable weights that we
call [llamafiles](https://github.com/Mozilla-Ocho/llamafile). This gives you
the easiest and fastest way to use the model on Linux, macOS, Windows,
FreeBSD, OpenBSD, and NetBSD systems you control, on both AMD64 and ARM64.

*Software Last Updated: 2025-03-31*

*Llamafile Version: 0.9.2*

## Quickstart

To get started, you need both the Granite 3.2 weights and the llamafile
software. Both are included in a single file, which can be downloaded and
run as follows:

```
wget https://huggingface.co/Mozilla/granite-3.2-8b-instruct-llamafile/resolve/main/granite-3.2-8b-instruct-Q6_K.llamafile
chmod +x granite-3.2-8b-instruct-Q6_K.llamafile
./granite-3.2-8b-instruct-Q6_K.llamafile
```

The default mode of operation for these llamafiles is our new command-line
chatbot interface.

## Usage

You can use triple quotes to ask questions on multiple lines. You can pass
commands like `/stats` and `/context` to see runtime status information.
You can change the system prompt by passing the `-p "new system prompt"`
flag. You can press CTRL-C to interrupt the model. Finally, CTRL-D may be
used to exit.

If you prefer to use a web GUI, then a `--server` mode is provided that
will open a tab with a chatbot and completion interface in your browser.
For additional help on how it may be used, pass the `--help` flag. The
server also has an OpenAI API compatible completions endpoint that can
be accessed via Python using the `openai` pip package.

```
./granite-3.2-8b-instruct-Q6_K.llamafile --server
```
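
Because the endpoint speaks the OpenAI protocol, the official `openai` Python client can talk to it. A minimal sketch, assuming the server above is already running on its default local port (8080); the model name and API key are placeholders, since a single-model local server does not check them:

```python
# Sketch: query a local llamafile server via its OpenAI-compatible endpoint.
# Assumes `./granite-3.2-8b-instruct-Q6_K.llamafile --server` is running.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # local llamafile endpoint
    api_key="sk-no-key-required",         # placeholder; not checked locally
)

completion = client.chat.completions.create(
    model="granite-3.2-8b-instruct",      # placeholder name for a single-model server
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(completion.choices[0].message.content)
```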

An advanced CLI mode is provided that's useful for shell scripting. You
can use it by passing the `--cli` flag. For additional help on how it may
be used, pass the `--help` flag.

```
./granite-3.2-8b-instruct-Q6_K.llamafile --cli -p 'four score and seven' --log-disable
```

## Troubleshooting

Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas-and-troubleshooting)
of the README.

On Linux, the way to avoid run-detector errors is to install the APE
interpreter.

```sh
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
```

On Windows, there's a 4GB limit on executable sizes.

## Context Window

This model has a max context window size of 128k tokens. By default, a
context window size of 8192 tokens is used. You can ask llamafile to use
the maximum context size by passing the `-c 0` flag. That's big enough for
a small book. If you want to be able to have a conversation with your
book, you can use the `-f book.txt` flag.
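
For example, the two flags combine naturally; a sketch assuming a plain-text `book.txt` sits in the current directory:

```
./granite-3.2-8b-instruct-Q6_K.llamafile -c 0 -f book.txt
```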

## GPU Acceleration

On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use the
system's NVIDIA or AMD GPU(s). On Windows, if you own an NVIDIA GPU, only
the graphics card driver needs to be installed. On Windows, if you have an
AMD GPU, you should install the ROCm SDK v6.1 and then pass the flags
`--recompile --gpu amd` the first time you run your llamafile.
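
For instance, to offload as many layers as possible to the GPU when starting the chatbot:

```
./granite-3.2-8b-instruct-Q6_K.llamafile -ngl 999
```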

On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to
perform matrix multiplications. This is open-source software, but it
doesn't go as fast as closed-source cuBLAS. If you have the CUDA SDK
installed on your system, then you can pass the `--recompile` flag to
build a GGML CUDA library just for your system that uses cuBLAS. This
ensures you get maximum performance.

For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).

## About llamafile

llamafile is a new format introduced by Mozilla on Nov 20th, 2023. It uses
Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries
that run on the stock installs of six OSes for both ARM64 and AMD64.

---

# Granite-3.2-8B-Instruct

**Model Summary:**
Granite-3.2-8B-Instruct is an 8-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.

- **Developers:** Granite Team, IBM
- **Website:** [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date:** February 26th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.

**Intended Use:**
This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.

**Capabilities**
* **Thinking**
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code-related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks, including long document/meeting summarization, long document QA, etc.

**Generation:**
This is a simple example of how to use the Granite-3.2-8B-Instruct model.

Install the following libraries:

```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```

Then, copy the snippet from the section that is relevant for your use case.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch

model_path = "ibm-granite/granite-3.2-8b-instruct"
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

conv = [{"role": "user", "content": "You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]

input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)

set_seed(42)
output = model.generate(
    **input_ids,
    max_new_tokens=8192,
)

prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
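
To produce an answer without the thinking preamble, only the chat-template call changes; a sketch reusing the variables from the snippet above:

```python
# Same setup as above, but with the thinking capability switched off.
input_ids = tokenizer.apply_chat_template(
    conv,
    return_tensors="pt",
    thinking=False,
    return_dict=True,
    add_generation_prompt=True,
).to(device)
```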

**Example Outputs**
- thinking=True
```md
Here is my thought process:

1. We know we start with 10 liters of a 30% acid solution, so there's 3 liters of pure acid in it (10 liters * 0.3).
2. We need to determine how much of the 70% acid solution to add such that the final mixture is 50% acid.
3. Let's say x liters of the 70% acid solution are added. This will contribute 0.7x liters of pure acid.
4. The total volume of the new mixture will be 10 + x liters, and the total amount of pure acid will be 3 + 0.7x liters.
5. We need to set up an equation where (3 + 0.7x) / (10 + x) = 0.5 (since we want a 50% acid mixture).
6. Solving this equation for x will give us the volume of 70% solution needed.

Here is my response:

To solve this, we need to set up an equation based on the concentration of acid before and after mixing.

1. **Initial Acid Content**:
   - We start with 10 liters of a 30% acid solution.
   - Pure acid in this = 10 * 0.3 = 3 liters.

2. **Variable for Addition**:
   - Let x be the liters of 70% acid solution to be added.
   - Pure acid in x liters of 70% solution = 0.7x liters.

3. **Final Mixture Concentration**:
   - Total volume of mixture = 10 + x liters.
   - Total pure acid in the mixture = 3 (from initial solution) + 0.7x (from added solution).

4. **Setting up the equation for 50% acid concentration**:
   - We require that the ratio of pure acid to total volume is 0.5 (50%).
   - Therefore, the equation is:

\[
\frac{3 + 0.7x}{10 + x} = 0.5
\]

5. **Solving the equation**:

Multiply both sides by (10 + x) to clear the fraction:

\[
3 + 0.7x = 0.5(10 + x)
\]

Expanding the right side:

\[
3 + 0.7x = 5 + 0.5x
\]

Subtract 0.5x from both sides:

\[
3 + 0.2x = 5
\]

Subtract 3 from both sides:

\[
0.2x = 2
\]

Divide by 0.2:

\[
x = 10
\]

So, 10 liters of the 70% acid solution must be added to the initial 10 liters of 30% solution to get a 50% acid mixture.

Let's verify:

- Initial acid: 3 liters
- Acid from added solution: 0.7 * 10 = 7 liters
- Total acid: 3 + 7 = 10 liters
- Total volume: 10 + 10 = 20 liters
- Acid percentage: \(\frac{10}{20} = 0.5 = 50\%\)

This confirms our solution.

Final answer: 10 liters of 70% acid solution should be added.
```
- thinking=False
```md
Let's denote the amount of 70% acid solution to be added as x liters.

We start with 10 liters of a 30% acid solution, which means there are 0.3 * 10 = 3 liters of pure acid in it.

We want to end up with a 50% acid solution after adding x liters of 70% acid solution. The total volume of the mixture will be 10 + x liters, and we want half of this volume to be pure acid.

So, the amount of pure acid in the final mixture should be 0.5 * (10 + x) liters.

The amount of pure acid in the final mixture comes from two sources: the original 30% solution and the added 70% solution. So, we can set up the following equation:

3 (from the original solution) + 0.7x (from the added solution) = 0.5 * (10 + x)

Now, let's solve for x:

3 + 0.7x = 5 + 0.5x
0.7x - 0.5x = 5 - 3
0.2x = 2
x = 2 / 0.2
x = 10

So, you need to add 10 liters of a 70% acid solution to the 10 liters of a 30% acid solution to get a 50% acid mixture.
```
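
The mixture arithmetic in both outputs can be sanity-checked with a short standalone script; exact rational arithmetic avoids floating-point noise:

```python
# Check the worked example: 3 + 0.7*x = 0.5*(10 + x) for x liters of 70% acid.
from fractions import Fraction

initial_acid = 10 * Fraction(3, 10)        # 3 liters of pure acid to start
# Rearranged: x = (0.5*10 - 3) / (0.7 - 0.5)
x = (Fraction(1, 2) * 10 - initial_acid) / (Fraction(7, 10) - Fraction(1, 2))
print(x)                                   # 10

mixture = (initial_acid + Fraction(7, 10) * x) / (10 + x)
print(mixture)                             # 1/2
```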

**Evaluation Results:**
<table>

<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">ArenaHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">Alpaca-Eval-2</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th>
<th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">DROP</th>
<th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th>
<th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Llama-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">36.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.22</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">69.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.66</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.48</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.24</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.32</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.43</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Llama-8B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">17.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.85</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">45.80</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">13.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">44.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.54</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">62.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.50</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.87</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Qwen-2.5-7B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">25.44</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.30</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.06</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">70.40</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">93.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.90</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.90</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Qwen-7B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">10.36</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">15.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">9.94</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.14</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.76</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.47</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.89</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.45</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">37.58</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.7</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.84</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">68.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.78</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.63</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.20</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.73</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-2B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">23.3</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.11</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.68</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.45</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">75.26</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.59</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.7</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.2-2B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">24.86</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">34.51</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">57.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">20.56</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.8</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.27</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.02</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.13</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.39</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.55</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.23</td>
</tr>

<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;"><b>Granite-3.2-8B-Instruct</b></td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">55.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.19</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.92</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">64.77</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.95</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.65</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.31</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.42</td>
</tr>

</tbody></table>

**Training Data:**
Overall, our training data is largely composed of two key sources: (1) publicly available datasets with permissive licenses, and (2) internal synthetically generated data targeted to enhance reasoning capabilities.
<!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->

**Infrastructure:**
We train Granite-3.2-8B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

**Ethical Considerations and Limitations:**
Granite-3.2-8B-Instruct builds upon Granite-3.1-8B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) remain relevant.

**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources

<!-- ## Citation
```
@misc{granite-models,
  author = {author 1, author2, ...},
  title = {},
  journal = {},
  volume = {},
  year = {2024},
  url = {https://arxiv.org/abs/0000.00000},
}
``` -->