Model Card for PrunaAI/FLUX.1-Depth-dev-smashed

This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.

Usage

First, install the pruna library together with the image_gen_aux package, which provides the depth preprocessor used in the example below:

pip install pruna
pip install git+https://github.com/asomoza/image_gen_aux.git

You can also load the model with the diffusers library directly, but this might not apply all optimizations by default.
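For reference, a plain diffusers load might look like the following. This is a minimal sketch: it assumes the repository stores standard diffusers weights for the depth-control pipeline, and it skips the caching and compilation optimizations listed in the smash configuration below.

import torch
from diffusers import FluxControlPipeline

# Plain diffusers load: runs the model, but without Pruna's
# cacher/compiler optimizations applied
pipe = FluxControlPipeline.from_pretrained(
    "PrunaAI/FLUX.1-Depth-dev-smashed",
    torch_dtype=torch.bfloat16,
).to("cuda")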

To ensure that all optimizations are applied, load the model with the pruna library using the following code:

from pruna import PrunaModel

import torch
from diffusers.utils import load_image
from image_gen_aux import DepthPreprocessor

# Load the smashed pipeline with all Pruna optimizations applied
pipe = PrunaModel.from_hub(
    "PrunaAI/FLUX.1-Depth-dev-smashed"
)

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

# Extract a depth map from the input image to condition the generation
processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")

# Run depth-conditioned generation with a fixed seed for reproducibility
image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=30,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")

After loading the model, you can use the inference methods of the original model. Take a look at the documentation for more usage information.
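For instance, the loaded pipeline can be re-invoked with a different prompt, and the smashed model can be persisted locally for later reuse. This is a minimal sketch: the prompt and the save path are illustrative, and it assumes pruna's save_pretrained interface for smashed models.

# Reuse the loaded pipeline as you would the original diffusers pipeline
image = pipe(
    prompt="A candy robot waving at the camera",  # illustrative prompt
    control_image=control_image,
    num_inference_steps=30,
    guidance_scale=10.0,
).images[0]

# Optionally persist the smashed model locally
pipe.save_pretrained("FLUX.1-Depth-dev-smashed-local")  # path is illustrative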

Smash Configuration

The compression configuration of the model is stored in the smash_config.json file, which describes the optimization methods that were applied to the model.

{
    "batcher": null,
    "cacher": "fora",
    "compiler": "torch_compile",
    "factorizer": "qkv_diffusers",
    "pruner": null,
    "quantizer": null,
    "fora_interval": 2,
    "fora_start_step": 2,
    "torch_compile_backend": "inductor",
    "torch_compile_dynamic": null,
    "torch_compile_fullgraph": true,
    "torch_compile_make_portable": false,
    "torch_compile_max_kv_cache_size": 400,
    "torch_compile_mode": "default",
    "torch_compile_seqlen_manual_cuda_graph": 100,
    "torch_compile_target": "model",
    "batch_size": 1,
    "device": "cuda",
    "save_fns": [
        "save_before_apply",
        "save_before_apply"
    ],
    "load_fns": [
        "diffusers"
    ],
    "reapply_after_load": {
        "factorizer": "qkv_diffusers",
        "pruner": null,
        "quantizer": null,
        "cacher": "fora",
        "compiler": "torch_compile",
        "batcher": null
    }
}
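In this configuration, the "fora" cacher reuses cached transformer outputs every fora_interval (2) denoising steps starting at fora_start_step (2), the "qkv_diffusers" factorizer fuses the attention QKV projections, and the model is compiled with torch.compile using the inductor backend. For reference, a similar configuration could be built programmatically when smashing a model yourself. This is a minimal sketch based on pruna's SmashConfig dict-style interface; the base pipeline load and the exact hyperparameter keys are assumptions taken from the JSON above.

import torch
from diffusers import FluxControlPipeline
from pruna import SmashConfig, smash

# Assumed starting point: the original (un-smashed) base pipeline
base_pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
)

# Mirror smash_config.json: FORA caching, QKV factorization, torch.compile
smash_config = SmashConfig()
smash_config["cacher"] = "fora"
smash_config["fora_interval"] = 2    # reuse cached outputs every 2nd step
smash_config["fora_start_step"] = 2  # start caching from step 2
smash_config["factorizer"] = "qkv_diffusers"
smash_config["compiler"] = "torch_compile"
smash_config["torch_compile_fullgraph"] = True

smashed_pipe = smash(model=base_pipe, smash_config=smash_config)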

🌍 Join the Pruna AI community!

Find us on Twitter, GitHub, LinkedIn, Discord, and Reddit.
