Update README.md
README.md CHANGED

@@ -36,7 +36,7 @@ Quantized version of [sapienzanlp/Minerva-1B-base-v1.0](https://huggingface.co/s
 - Asymmetrical Quantization
 - Method AutoGPTQ
 
-Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.
+Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6
 
 Note: this INT8 version of Minerva-1B-base-v1.0 has been quantized to run inference through CPU.
 
@@ -47,9 +47,9 @@ Note: this INT8 version of Minerva-1B-base-v1.0 has been quantized to run infere
 I suggest to install requirements into a dedicated python-virtualenv or a conda enviroment.
 
 ```
-wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.
-tar -xvzf v0.4.
-cd auto-round-0.4.
+wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz
+tar -xvzf v0.4.6.tar.gz
+cd auto-round-0.4.6
 pip install -r requirements-cpu.txt --upgrade
 ```
 
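For context on the settings the card describes (INT8, asymmetrical quantization, AutoGPTQ export format), a quantization run with AutoRound would look roughly like the sketch below. This is an illustration, not the recipe actually used for this checkpoint: the group size and output directory are assumptions, and the AutoRound Python API shown (`AutoRound`, `quantize()`, `save_quantized()`) follows the auto-round project README and should be checked against the v0.4.6 documentation.

```python
# Hypothetical sketch of an INT8 asymmetric AutoRound quantization exported in
# AutoGPTQ-compatible format; argument names follow the auto-round README and
# may differ slightly in v0.4.6.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

base_model = "sapienzanlp/Minerva-1B-base-v1.0"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# bits=8 -> INT8, sym=False -> asymmetrical quantization (as stated in the card);
# group_size=128 is an assumed default, not taken from the card.
autoround = AutoRound(model, tokenizer, bits=8, group_size=128, sym=False)
autoround.quantize()

# Export in AutoGPTQ-compatible format ("Method AutoGPTQ" in the card).
autoround.save_quantized("./Minerva-1B-base-v1.0-int8", format="auto_gptq")
```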
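Once the requirements above are installed, CPU inference with the quantized checkpoint would look roughly like this. The repository id is a placeholder, and the exact loading path depends on how the published weights were exported (a GPTQ backend, or auto-round's own loader, may be required); treat this as a sketch rather than the card's official instructions.

```python
# Minimal CPU-inference sketch; the repo id below is a placeholder, not the
# actual name of the quantized checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<namespace>/Minerva-1B-base-v1.0-int8"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")

inputs = tokenizer("La capitale d'Italia è", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```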