---
library_name: diffusers
tags:
- pruna-ai
base_model:
- black-forest-labs/FLUX.1-Canny-dev
---
# Model Card for PrunaAI/FLUX.1-Canny-dev-smashed
This model was created using the `pruna` library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage
First things first, you need to install the pruna library, plus `controlnet_aux` for the Canny edge preprocessor used below:

```bash
pip install pruna controlnet_aux
```
You can load the model with the `diffusers` library, but this might not include all optimizations by default.
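As a minimal sketch of the plain `diffusers` path (assuming the checkpoint loads as a standard `FluxControlPipeline`; the dtype and device below are illustrative choices, not taken from this card):

```python
import torch
from diffusers import FluxControlPipeline

# Plain diffusers load: works, but may skip Pruna-specific optimizations
pipe = FluxControlPipeline.from_pretrained(
    "PrunaAI/FLUX.1-Canny-dev-smashed", torch_dtype=torch.bfloat16
).to("cuda")
```

To ensure that all optimizations are applied, load the model with the pruna library instead: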
```python
from controlnet_aux import CannyDetector
from diffusers.utils import load_image
from pruna import PrunaModel

# Load the smashed model with all optimizations applied
pipe = PrunaModel.from_hub("PrunaAI/FLUX.1-Canny-dev-smashed")

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."

# Build the Canny edge map that conditions the generation
control_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png"
)
processor = CannyDetector()
control_image = processor(
    control_image,
    low_threshold=50,
    high_threshold=200,
    detect_resolution=1024,
    image_resolution=1024,
)

# Run inference and save the result
image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]
image.save("output.png")
```
After loading the model, you can use the inference methods of the original model. Take a look at the Pruna documentation for more usage information.
## Smash Configuration
The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
```json
{
    "batcher": null,
    "cacher": "fora",
    "compiler": "torch_compile",
    "factorizer": "qkv_diffusers",
    "pruner": null,
    "quantizer": null,
    "fora_interval": 2,
    "fora_start_step": 2,
    "torch_compile_backend": "inductor",
    "torch_compile_dynamic": null,
    "torch_compile_fullgraph": true,
    "torch_compile_make_portable": false,
    "torch_compile_max_kv_cache_size": 400,
    "torch_compile_mode": "default",
    "torch_compile_seqlen_manual_cuda_graph": 100,
    "torch_compile_target": "model",
    "batch_size": 1,
    "device": "cuda",
    "save_fns": [
        "save_before_apply",
        "save_before_apply"
    ],
    "load_fns": [
        "diffusers"
    ],
    "reapply_after_load": {
        "factorizer": "qkv_diffusers",
        "pruner": null,
        "quantizer": null,
        "cacher": "fora",
        "compiler": "torch_compile",
        "batcher": null
    }
}
```
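In this configuration, FORA step caching is combined with QKV factorization and `torch.compile` via the inductor backend. For reference, a similar configuration could be applied to the base model with pruna's `SmashConfig`/`smash` API. The following is a sketch, not the exact recipe used for this checkpoint: it assumes the option keys from `smash_config.json` above map one-to-one onto `SmashConfig` entries, and the dtype is illustrative:

```python
import torch
from diffusers import FluxControlPipeline
from pruna import SmashConfig, smash

# Load the original base model
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
)

# Mirror the smash_config.json above (hypothetical reconstruction)
smash_config = SmashConfig()
smash_config["cacher"] = "fora"
smash_config["compiler"] = "torch_compile"
smash_config["factorizer"] = "qkv_diffusers"
smash_config["fora_interval"] = 2
smash_config["fora_start_step"] = 2
smash_config["torch_compile_fullgraph"] = True
smash_config["torch_compile_mode"] = "default"

# Apply the optimizations to produce a smashed pipeline
smashed_pipe = smash(model=pipe, smash_config=smash_config)
```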