
flux-lora-training_pmf

This is a standard PEFT LoRA derived from black-forest-labs/FLUX.1-dev.

The main validation prompt used during training was:

pmfpmf wearing a chic outfit, posing like a professional model

Validation settings

  • CFG: 3.5
  • CFG Rescale: 0.0
  • Steps: 15
  • Sampler: None
  • Seed: 42
  • Resolution: 1024x1024

Note: The validation settings are not necessarily the same as the training settings.
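For reference, the validation settings above correspond to the following diffusers call arguments (a minimal sketch of an assumed mapping; the full pipeline setup is shown in the Inference section below, and a CFG rescale of 0.0 simply means no rescaling is applied):

import torch

# Assumed mapping of the validation settings onto diffusers call arguments.
validation_kwargs = dict(
    guidance_scale=3.5,        # CFG
    num_inference_steps=15,    # Steps
    width=1024,                # Resolution
    height=1024,
    generator=torch.Generator().manual_seed(42),  # Seed
)
# image = pipeline(prompt=prompt, **validation_kwargs).images[0]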

You can find some example images in the following gallery:

  • Prompt: unconditional (blank prompt); negative prompt: (blank)
  • Prompt: pmfpmf wearing a chic outfit, posing like a professional model; negative prompt: (blank)

The text encoder was not trained. You may reuse the base model text encoder for inference.

Training settings

  • Training epochs: 416
  • Training steps: 5000
  • Learning rate: 0.0001
  • Max grad norm: 1.0
  • Effective batch size: 1
    • Micro-batch size: 1
    • Gradient accumulation steps: 1
    • Number of GPUs: 1
  • Prediction type: flow-matching (extra parameters=['flux_schedule_auto_shift', 'shift=0.0', 'flux_guidance_value=1.0', 'flux_lora_target=all+ffs'])
  • Rescaled betas zero SNR: False
  • Optimizer: adamw_bf16
  • Precision: Pure BF16
  • Quantised: Yes: int8-quanto
  • Xformers: Not used
  • LoRA Rank: 16
  • LoRA Alpha: None
  • LoRA Dropout: 0.1
  • LoRA initialisation style: default
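As a sanity check on the numbers above, the effective batch size and epoch count fit together as simple arithmetic (illustrative only, using the values from this card):

micro_batch_size = 1
grad_accum_steps = 1
num_gpus = 1
effective_batch_size = micro_batch_size * grad_accum_steps * num_gpus  # = 1

training_steps = 5000
dataset_images = 12  # see the Datasets section below
epochs = training_steps * effective_batch_size // dataset_images  # = 416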

Datasets

default_dataset

  • Repeats: 0
  • Total number of images: 12
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: center
  • Crop aspect: square
  • Used for regularisation data: No
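The listed resolution of 1.048576 megapixels is simply 1024 x 1024 pixels after the square center crop (1024 * 1024 = 1,048,576 pixels). As an illustration only, not the trainer's actual preprocessing code, an equivalent crop with Pillow would be:

from PIL import Image, ImageOps

# Illustration of the dataset's crop settings: a centered square crop
# resized to 1024x1024 (1.048576 megapixels).
def center_crop_square_1024(path):
    img = Image.open(path).convert("RGB")
    return ImageOps.fit(img, (1024, 1024), method=Image.Resampling.LANCZOS, centering=(0.5, 0.5))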

Inference

import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'Candler/flux-lora-training_pmf'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
pipeline.load_lora_weights(adapter_id)

prompt = "pmfpmf wearing a chic outfit, posing like a professional model"


## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
    
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
image = pipeline(
    prompt=prompt,
    num_inference_steps=15,
    generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")