ekurtic committed (verified)
Commit: b152b05 · Parent(s): 82eb10c

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -48,7 +48,7 @@ Weight quantization also reduces disk size requirements by approximately 50%.
  This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
 
  ```bash
- python quantize.py --mdoel_path mistralai/Devstral-Small-2507 --calib_size 512 --dampening_frac 0.05
+ python quantize.py --model_path mistralai/Devstral-Small-2507 --calib_size 512 --dampening_frac 0.05
  ```
 
  ```python
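
The hunk only corrects the flag name in the bash snippet; the actual quantize.py and the trailing python block lie outside this diff. As context for what such a script typically does, below is a minimal sketch of a GPTQ one-shot quantization flow with llm-compressor that accepts the same flags. The argument names mirror the README command, but the calibration dataset, quantization scheme, and output directory are assumptions, not the repository's real quantize.py.

```python
# Hypothetical sketch only: the repository's quantize.py is not shown in this diff.
# Flags mirror the README command; dataset, scheme, and output path are assumptions.
import argparse

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

parser = argparse.ArgumentParser()
parser.add_argument("--model_path", type=str, required=True)
parser.add_argument("--calib_size", type=int, default=512)
parser.add_argument("--dampening_frac", type=float, default=0.05)
args = parser.parse_args()

# Weight-only GPTQ recipe; dampening_frac regularizes the Hessian used for error compensation.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",  # assumed weight-only scheme
    ignore=["lm_head"],
    dampening_frac=args.dampening_frac,
)

# One-shot calibration pass over a small calibration set, then write the compressed model.
oneshot(
    model=args.model_path,
    dataset="open_platypus",  # assumed calibration dataset
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=args.calib_size,
    output_dir=args.model_path.split("/")[-1] + "-quantized",
)
```

A script along these lines would be invoked exactly as in the corrected README command, e.g. `python quantize.py --model_path mistralai/Devstral-Small-2507 --calib_size 512 --dampening_frac 0.05`.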