---
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
- HighCWu/FLUX.1-Kontext-dev-bnb-hqq-4bit
pipeline_tag: image-to-image
library_name: diffusers
tags:
- Style
- lora
- Jojo
- FluxKontext
- Image-to-Image
datasets:
- HighCWu/OmniConsistencyJojo
---
# Jojo Style LoRA V2 Trained with FLUX.1 Kontext dev 4bit on 16GB VRAM
This repository provides the **Jojo** style LoRA adapter for the [FLUX.1 Kontext Model](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev).
It was trained with [a fork of ostris/ai-toolkit](https://github.com/HighCWu/ai-toolkit/tree/vram-16gb) using the config file [train_lora_flux_kontext_16gb_jojo_v2.yaml](./train_lora_flux_kontext_16gb_jojo_v2.yaml).
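If you want a quick look at the training data before reproducing the run, the paired dataset listed above ([HighCWu/OmniConsistencyJojo](https://huggingface.co/datasets/HighCWu/OmniConsistencyJojo)) can be inspected with the `datasets` library. This is only a sanity-check sketch; consult the dataset card for the actual splits and column names.
```python
# Optional: inspect the dataset this LoRA was trained on.
# Only prints splits, features, and row counts; see the dataset card for the schema.
from datasets import load_dataset

ds = load_dataset("HighCWu/OmniConsistencyJojo")
print(ds)
```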
## Style Showcase
Here are some examples of images generated using this style LoRA:
![Jojo Style Example](./example-1.png)
![Jojo Style Example](./example-2.png)
![Jojo Style Example](./example-3.png)
![Jojo Style Example](./example-4.png)
![Jojo Style Example](./example-5.png)
![Jojo Style Example](./example-6.png)
## Inference Example
```python
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image, peft_utils
import torch
try:  # Temporary workaround for a LoRA-loading issue in some diffusers versions
    from diffusers.utils.peft_utils import _derive_exclude_modules

    def new_derive_exclude_modules(*args, **kwargs):
        # Keep "proj_out" modules out of the exclusion list so the LoRA loads correctly
        exclude_modules = _derive_exclude_modules(*args, **kwargs)
        if exclude_modules is not None:
            exclude_modules = [n for n in exclude_modules if "proj_out" not in n]
        return exclude_modules

    peft_utils._derive_exclude_modules = new_derive_exclude_modules
except ImportError:  # some diffusers versions do not expose this helper
    pass

pipe = FluxKontextPipeline.from_pretrained("HighCWu/FLUX.1-Kontext-dev-bnb-hqq-4bit", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("HighCWu/Jojo_lora_4bit_training_v2", weight_name="flux_kontext_jojo_style_lora_v2.safetensors", adapter_name="jojo")
pipe.set_adapters(["jojo"], [1.0])
pipe.to("cuda")
# Load a source image (you can use any image)
image = load_image("https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg").resize((1024, 1024))
# Prepare the prompt; style_name is only used to build the output filename
style_name = "JojoV2"
prompt = "Turn this image into the style of JoJo's Bizarre Adventure."
# Run inference
result_image = pipe(
image=image,
prompt=prompt,
height=1024,
width=1024,
guidance_scale=4.0,
num_inference_steps=28,
).images[0]
# Save the result
output_filename = f"{style_name.replace(' ', '_')}.png"
result_image.save(output_filename)
print(f"Image saved as {output_filename}")
```
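Two optional adjustments (not part of the recipe above, but standard diffusers APIs) can be useful: the weight passed to `set_adapters` controls how strongly the Jojo style is applied, and model CPU offloading can replace `pipe.to("cuda")` when VRAM is tight. The value below is illustrative, not tuned.
```python
# Optional tweaks (a sketch, assuming the pipeline from the example above):
pipe.set_adapters(["jojo"], [0.8])  # lower the adapter weight for a subtler style

# Trade speed for memory: offload idle sub-models to the CPU instead of
# calling pipe.to("cuda"). Requires the `accelerate` package.
pipe.enable_model_cpu_offload()
```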
Feel free to open an issue or contact us for feedback or collaboration!