---
library_name: diffusers
tags:
- pruna-ai
base_model:
- black-forest-labs/FLUX.1-Canny-dev
---

# Model Card for PrunaAI/FLUX.1-Canny-dev-smashed

This model was created using the [pruna](https://github.com/PrunaAI/pruna) library.

First things first, you need to install the pruna library:

```bash
pip install pruna controlnet_aux
```

You can [use the diffusers library to load the model](https://huggingface.co/PrunaAI/FLUX.1-Canny-dev-smashed?library=diffusers) but this might not include all optimizations by default.

To ensure that all optimizations are applied, use the pruna library to load the model:

```python
from pruna import PrunaModel

import torch
from controlnet_aux import CannyDetector
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

pipe = PrunaModel.from_hub(
    "PrunaAI/FLUX.1-Canny-dev-smashed"
)

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

processor = CannyDetector()
control_image = processor(control_image, low_threshold=50, high_threshold=200, detect_resolution=1024, image_resolution=1024)

image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]
image.save("output.png")
```

After loading the model, you can use the inference methods of the original model. Take a look at the [documentation](https://pruna.readthedocs.io/en/latest/index.html) for more usage information.
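The `low_threshold`/`high_threshold` pair passed to `CannyDetector` controls Canny hysteresis thresholding: gradient magnitudes above the high threshold become strong edges, and pixels between the two thresholds are kept only if they connect to a strong edge. A minimal numpy sketch of just that thresholding stage (illustrative only; real Canny also applies Gaussian smoothing, Sobel gradients, and non-maximum suppression, and the toy gradient map below is made up):

```python
import numpy as np

def hysteresis_threshold(grad, low, high):
    """Keep pixels >= high, plus pixels >= low that are connected
    (8-neighbourhood) to an already-kept pixel."""
    strong = grad >= high
    weak = (grad >= low) & ~strong
    keep = strong.copy()
    changed = True
    while changed:
        changed = False
        # Pad so neighbour lookups stay in bounds; padded border is False.
        padded = np.pad(keep, 1)
        neighbours = np.zeros_like(keep)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                # neighbours[i, j] |= keep[i + dy, j + dx]
                neighbours |= padded[1 + dy : padded.shape[0] - 1 + dy,
                                     1 + dx : padded.shape[1] - 1 + dx]
        grow = weak & neighbours & ~keep
        if grow.any():
            keep |= grow
            changed = True
    return keep

# Toy gradient map: 200 is a strong edge, the adjacent 60 is a weak
# pixel that should survive, the isolated 60 should be dropped.
g = np.array([[200, 60,  0],
              [  0,  0,  0],
              [  0,  0, 60]])
mask = hysteresis_threshold(g, low=50, high=100)
```

Raising `low_threshold` in the README's preprocessing call prunes faint, disconnected edges from the control image, while raising `high_threshold` demands stronger evidence before an edge chain is seeded at all.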