---
language:
- en
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
- HighCWu/FLUX.1-Kontext-dev-bnb-hqq-4bit
pipeline_tag: image-to-image
library_name: diffusers
tags:
- Style
- lora
- Jojo
- FluxKontext
- Image-to-Image
datasets:
- HighCWu/OmniConsistencyJojo
---
# Jojo Style LoRA V2 Trained with FLUX.1 Kontext dev 4bit on 16GB VRAM
This repository provides the **Jojo** style LoRA adapter for the [FLUX.1 Kontext Model](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev).
It was trained with [a fork of ostris/ai-toolkit](https://github.com/HighCWu/ai-toolkit/tree/vram-16gb) using the config file [train_lora_flux_kontext_16gb_jojo_v2.yaml](./train_lora_flux_kontext_16gb_jojo_v2.yaml).
## Style Showcase
Here are some examples of images generated using this style LoRA:






## Inference Example
```python
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image, peft_utils
import torch
# Temporary workaround for a LoRA-loading issue in some diffusers versions:
# drop "proj_out" entries from the auto-derived exclude list so those LoRA
# weights are not skipped when the adapter is loaded.
try:
    from diffusers.utils.peft_utils import _derive_exclude_modules

    def new_derive_exclude_modules(*args, **kwargs):
        exclude_modules = _derive_exclude_modules(*args, **kwargs)
        if exclude_modules is not None:
            exclude_modules = [n for n in exclude_modules if "proj_out" not in n]
        return exclude_modules

    peft_utils._derive_exclude_modules = new_derive_exclude_modules
except ImportError:  # older diffusers versions do not have this helper
    pass
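# Load the 4-bit quantized FLUX.1 Kontext base pipeline, then attach the Jojo LoRA.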
pipe = FluxKontextPipeline.from_pretrained("HighCWu/FLUX.1-Kontext-dev-bnb-hqq-4bit", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("HighCWu/Jojo_lora_4bit_training_v2", weight_name="flux_kontext_jojo_style_lora_v2.safetensors", adapter_name="jojo")
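# Activate the "jojo" adapter at full strength (1.0).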
pipe.set_adapters(["jojo"], [1.0])
pipe.to("cuda")
# Load a source image (you can use any image)
image = load_image("https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg").resize((1024, 1024))
# Prepare the prompt
# The style_name is only used for the output filename.
style_name = "JojoV2"
prompt = "Turn this image into the style of JoJo's Bizarre Adventure."
# Run inference
result_image = pipe(
image=image,
prompt=prompt,
height=1024,
width=1024,
guidance_scale=4.0,
num_inference_steps=28,
).images[0]
# Save the result
output_filename = f"{style_name.replace(' ', '_')}.png"
result_image.save(output_filename)
print(f"Image saved as {output_filename}")
```
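To make the stylization weaker or stronger, you can rerun the pipeline with a different adapter weight. The sketch below reuses `pipe`, `image`, and `prompt` from the example above; the 0.8 weight is only an illustrative value, not a tuned recommendation.

```python
# Re-weight the already-loaded "jojo" adapter (1.0 = full strength).
pipe.set_adapters(["jojo"], [0.8])

subtle_image = pipe(
    image=image,
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=28,
).images[0]
subtle_image.save("JojoV2_weight_0.8.png")

# Remove the LoRA entirely to get the base FLUX.1 Kontext behavior back.
pipe.unload_lora_weights()
```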
Feel free to open an issue or contact us for feedback or collaboration!