ig1sa committed · Commit fd7b357 · verified · 1 Parent(s): c253e1c

Update README.md

README.md CHANGED
@@ -8,5 +8,5 @@ Quantized with `AutoAWQ` `v0.2.8` and `transformers` `v4.49.0`.
 
 Example run:
 ```bash
-docker run --rm --runtime nvidia --gpus 'all' --ipc=host -e VLLM_WORKER_MULTIPROC_METHOD=spawn -e 'HF_TOKEN' -v '/data/hf_cache:/root/.cache/huggingface' -v '/data/llmcompressor/output/perplexity-ai/r1-1776-AWQ:/model' -p 127.0.0.1:8000:8000 "vllm/vllm-openai:v0.7.3" --tensor-parallel-size 4 --enable-chunked-prefill=False --enable-reasoning --reasoning-parser deepseek_r1 --model '/model' --trust-remote-code --dtype half --served-model-name "R1 1776" --max-model-len 65536 --override-generation-config '{"temperature":0.6,"top_p":0.95}'
+docker run --rm --runtime nvidia --gpus 'all' --ipc=host -e VLLM_WORKER_MULTIPROC_METHOD=spawn -e 'HF_TOKEN' -v '/root/.cache/huggingface:/root/.cache/huggingface' -p 127.0.0.1:8000:8000 "vllm/vllm-openai:v0.7.3" --tensor-parallel-size 4 --enable-chunked-prefill=False --enable-reasoning --reasoning-parser deepseek_r1 --model 'ig1/r1-1776-AWQ' --trust-remote-code --dtype half --served-model-name "R1 1776" --max-model-len 65536 --override-generation-config '{"temperature":0.6,"top_p":0.95}'
 ```
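Once the container is up, the server exposes vLLM's OpenAI-compatible API on the published port. A minimal sketch of a request, assuming the server is running locally; the prompt text and `max_tokens` value are illustrative, and the model name must match the `--served-model-name` passed above:

```shell
# Query the OpenAI-compatible chat completions endpoint on 127.0.0.1:8000.
# "R1 1776" matches --served-model-name from the docker run command above;
# temperature/top_p mirror the generation config baked in via
# --override-generation-config. Prompt and max_tokens are illustrative.
curl -s http://127.0.0.1:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "R1 1776",
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.6,
        "top_p": 0.95,
        "max_tokens": 128
      }'
```

The response is standard OpenAI-style JSON; with `--reasoning-parser deepseek_r1` enabled, vLLM separates the model's reasoning trace from the final answer in the returned message.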