Update README.md
This model achieves up to 1.9x speedup in single-stream deployment, depending on hardware and use-case scenario.

The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.6.post1 and [GuideLLM](https://github.com/neuralmagic/guidellm).
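For context on how a figure like 1.9x is derived: single-stream speedup is the ratio of mean end-to-end request latencies between the unquantized baseline and the quantized model. A minimal sketch with illustrative, made-up latency values (not measurements from this card):

```python
# Single-stream speedup = baseline mean latency / quantized-model mean latency.
# The latency values below are illustrative only, not from this README.
baseline_latency_s = 3.8    # hypothetical fp16 baseline, seconds per request
quantized_latency_s = 2.0   # hypothetical w4a16 model, seconds per request

speedup = baseline_latency_s / quantized_latency_s
print(f"{speedup:.1f}x speedup")  # -> 1.9x speedup
```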
<details>
<summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/granite-3.1-2b-instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>" --max-seconds 360 --backend aiohttp_server
```

</details>

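The GuideLLM command above targets `http://localhost:8000/v1`, i.e. it assumes an OpenAI-compatible vLLM server is already running locally. As a sketch (assuming vLLM is installed and the GPU has enough memory for the model), such a server can be started with:

```shell
# Launch an OpenAI-compatible vLLM server for the quantized model on port 8000.
# Run this before invoking guidellm; requires a GPU with sufficient memory.
vllm serve neuralmagic/granite-3.1-2b-instruct-quantized.w4a16 --port 8000
```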
### Single-stream performance (measured with vLLM version 0.6.6.post1)
<table>
<tr>