Update 7/9/25: This model is now quantized and running in this example space. Preliminary VRAM usage is around 10 GB, with faster inference. Will keep experimenting with different weights and schedulers to find particularly well-performing combinations.
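For reference, here is a minimal sketch of 4-bit quantized loading with diffusers and bitsandbytes, which should land in a similar VRAM range. The update does not say which quantization method the example space actually uses, and the sketch assumes this repo ships standard diffusers-format weights with a `transformer` subfolder:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxKontextPipeline, FluxTransformer2DModel

# Assumption: NF4 quantization via bitsandbytes; the example space may use a different method.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize only the transformer, the largest component of the pipeline.
transformer = FluxTransformer2DModel.from_pretrained(
    "LPX55/FLUX.1_Kontext-Lightning",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxKontextPipeline.from_pretrained(
    "LPX55/FLUX.1_Kontext-Lightning",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # offload idle components to further reduce VRAM
```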
FLUX.1 Kontext-dev X LoRA Experimentation
Highly experimental; will update with more details later.
- 6-8 steps
- Euler sampler, SGM Uniform scheduler (recommended starting point)

Getting mixed results so far, so feel free to play around with these settings and share your findings. A minimal inference sketch follows below.
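The sketch below shows the recommended settings in diffusers, assuming the repo loads as a standard FluxKontextPipeline. Flux pipelines in diffusers default to FlowMatchEulerDiscreteScheduler, which roughly corresponds to ComfyUI's Euler sampler with the SGM Uniform schedule; the prompt and `guidance_scale` are illustrative:

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "LPX55/FLUX.1_Kontext-Lightning",
    torch_dtype=torch.bfloat16,
).to("cuda")

# The default FlowMatchEulerDiscreteScheduler plays the role of
# ComfyUI's Euler sampler + SGM Uniform scheduler here.
image = load_image("input.png")  # placeholder input image
result = pipe(
    image=image,
    prompt="Change the car color to red",  # illustrative edit prompt
    num_inference_steps=8,  # recommended range: 6-8
    guidance_scale=2.5,     # assumption; tune to taste
).images[0]
result.save("output.png")
```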
Model Details
Experimenting with FLUX.1-dev LoRAs and how they affect Kontext-dev. This model has been fused with acceleration LoRAs; a sketch of the fusing workflow is shown below.
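The card does not name the specific acceleration LoRAs, so the LoRA repo id below is a placeholder. This is only a sketch of the standard diffusers workflow for baking a LoRA into the base weights, not the exact recipe used here:

```python
import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
)

# Hypothetical acceleration LoRA; the actual LoRAs used are not listed in the card.
pipe.load_lora_weights("some-user/flux-acceleration-lora", adapter_name="accel")
pipe.fuse_lora(lora_scale=1.0)  # merge the LoRA deltas into the base weights
pipe.unload_lora_weights()      # drop the now-redundant adapter modules

pipe.save_pretrained("FLUX.1_Kontext-Lightning")  # save the fused checkpoint
```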
License
This model falls under the FLUX.1 [dev] Non-Commercial License; please familiarize yourself with its terms.
Base model: black-forest-labs/FLUX.1-Kontext-dev