Is this fp8?

#5 · opened by yiki12

Is the GGUF version made from the fp8 version used in ComfyUI, or from the original model?

From the original model.

Hello @bullerwins, sorry to bother you, but none of your GGUFs are working for me in Python code:

from diffusers import FluxKontextPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

from diffusers.utils import load_image
import torch
ckpt_path = (
    "https://huggingface.co/bullerwins/FLUX.1-Kontext-dev-GGUF/blob/main/flux1-kontext-dev-Q8_0.gguf"
)
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipeline = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer
)

After building the pipeline:

input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipeline(
    image=input_image,
    prompt="Add a hat to the cat",
    guidance_scale=2.5
).images[0]

This fails with a dimension-mismatch error.

Can you help me figure out how to overcome this?

@tahercoolguy
Use Nunchaku FLUX dev instead of GGUF. It's much faster than the GGUF route and runs on lower VRAM, as low as 4 GB, with generation taking around 30 seconds on 6 GB of VRAM. I guess SVDQuant Nunchaku FLUX will replace GGUF.
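
Roughly, the Nunchaku route looks like this with diffusers (a minimal sketch: the NunchakuFluxTransformer2dModel import and the mit-han-lab/svdq-int4-flux.1-dev repo id follow Nunchaku's earlier published examples, and newer releases ship per-file checkpoints, so check the Nunchaku docs for the exact checkpoint for your GPU and for a Kontext variant):

import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel  # assumes the nunchaku package is installed

# Load the SVDQuant INT4 transformer (repo id taken from Nunchaku's examples; adjust to your setup)
transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-dev")

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipeline(
    "A cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("nunchaku-flux-dev.png")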

It works:
https://github.com/huggingface/diffusers/issues/11839
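
For anyone landing here with the same error, below is a minimal sketch of the GGUF path that follows the diffusers GGUF docs and the thread above. It points from_single_file at the resolve/main URL (the raw file) rather than blob/main (the web page), keeps the pipeline variable name consistent, and uses CPU offload to save VRAM; the exact fix for the dimension mismatch in your environment may differ, so treat this as a starting point rather than the confirmed solution from the issue:

import torch
from diffusers import FluxKontextPipeline, FluxTransformer2DModel, GGUFQuantizationConfig
from diffusers.utils import load_image

# Use the resolve/main URL so from_single_file downloads the raw GGUF file
ckpt_path = "https://huggingface.co/bullerwins/FLUX.1-Kontext-dev-GGUF/resolve/main/flux1-kontext-dev-Q8_0.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipeline = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipeline.enable_model_cpu_offload()  # or pipeline.to("cuda") if you have enough VRAM

input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)

image = pipeline(
    image=input_image,
    prompt="Add a hat to the cat",
    guidance_scale=2.5,
).images[0]
image.save("kontext-gguf.png")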

Avoid Nunchaku FLUX dev. I found it counter-intuitive and built around a Windows IDE toolchain, which makes little sense when you're working in Python, unless you're rewriting CUDA and C# code anyway.
