Add files using upload-large-folder tool
README.md CHANGED
@@ -16,19 +16,23 @@ First things first, you need to install the pruna library:
pip install pruna
```

You can [use the diffusers library to load the model](https://huggingface.co/PrunaAI/test-tiny-stable-diffusion-pipe-smashed?library=diffusers) but this might not include all optimizations by default.
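
For illustration, loading through diffusers uses the standard `from_pretrained` call. The snippet below is a generic diffusers sketch, not code shipped with this repository:

```python
# Plain diffusers loading: convenient, but it may skip Pruna-specific optimizations.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "PrunaAI/test-tiny-stable-diffusion-pipe-smashed"
)
```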

To ensure that all optimizations are applied, use the pruna library to load the model with the following code:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_hub(
    "PrunaAI/test-tiny-stable-diffusion-pipe-smashed"
)
```

After loading the model, you can use the inference methods of the original model. Take a look at the [documentation](https://pruna.readthedocs.io/en/latest/index.html) for more usage information.
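
For example, since this repository wraps a tiny Stable Diffusion pipeline, the smashed model is meant to be called like the original diffusers pipeline. The prompt and output handling below are illustrative assumptions, not part of this repository:

```python
# Illustrative sketch: call the loaded model like the original diffusers pipeline.
# The prompt and output file name are placeholders.
image = loaded_model("a tiny red fox in a snowy forest").images[0]
image.save("fox.png")
```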

## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
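
If you want to inspect that configuration programmatically, one option is to download the file from the Hub and read it as plain JSON. This is a generic huggingface_hub sketch, assuming `smash_config.json` sits at the root of the repository:

```python
import json

from huggingface_hub import hf_hub_download

# Download smash_config.json from the model repo and print the applied settings.
config_path = hf_hub_download(
    repo_id="PrunaAI/test-tiny-stable-diffusion-pipe-smashed",
    filename="smash_config.json",
)
with open(config_path) as f:
    print(json.load(f))
```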

```bash
{