# simpletuner-lora
This is a PEFT LoRA derived from stabilityai/stable-diffusion-3.5-large.
The main validation prompt used during training was:
```
cnseah823, industrial sites where worker are walking on gray floor which is outside of the yellow and green colored safety road.
```
## Validation settings

- CFG: 3.0
- CFG Rescale: 0.0
- Steps: 20
- Sampler: FlowMatchEulerDiscreteScheduler
- Seed: 42
- Resolution: 1024x1024
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the training settings.
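These settings map one-to-one onto the `diffusers` pipeline call shown in the Inference section below. The sampler listed above is SD3.5's stock scheduler, so no scheduler swap is needed; a minimal sketch to confirm this:

```python
# Minimal sketch: SD3.5-large's default scheduler already matches the
# validation sampler listed above, which you can confirm after loading.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    'stabilityai/stable-diffusion-3.5-large', torch_dtype=torch.bfloat16
)
print(type(pipe.scheduler).__name__)  # FlowMatchEulerDiscreteScheduler
```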
You can find some example images in the following gallery:

- Prompt: unconditional (blank prompt)
  - Negative prompt: blurry, cropped, ugly
- Prompt: cnseah823, industrial sites where worker are walking on gray floor which is outside of the yellow and green colored safety road.
  - Negative prompt: blurry, cropped, ugly
The text encoder was not trained. You may reuse the base model text encoder for inference.
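Because only the transformer was adapted, the base model's three text encoders can be shared between pipelines to save memory. A minimal sketch, assuming diffusers' standard SD3 component names:

```python
# Sketch: reuse the base model's (untrained) text encoders across pipelines.
# Component names follow diffusers' SD3 layout.
import torch
from diffusers import StableDiffusion3Pipeline

base = StableDiffusion3Pipeline.from_pretrained(
    'stabilityai/stable-diffusion-3.5-large', torch_dtype=torch.bfloat16
)
lora_pipe = StableDiffusion3Pipeline.from_pretrained(
    'stabilityai/stable-diffusion-3.5-large',
    text_encoder=base.text_encoder,
    text_encoder_2=base.text_encoder_2,
    text_encoder_3=base.text_encoder_3,
    torch_dtype=torch.bfloat16,
)
lora_pipe.load_lora_weights('ymb943/simpletuner-lora')
```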
## Training settings

- Training epochs: 162
- Training steps: 7000
- Learning rate: 0.0001
  - Learning rate schedule: polynomial
  - Warmup steps: 100
- Max grad value: 2.0
- Effective batch size: 1
  - Micro-batch size: 1
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow_matching (extra parameters=['shift=3']; see the scheduler sketch after this list)
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Base model precision: no_change
- Caption dropout probability: 0.1%
- LoRA Rank: 16
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
- LoRA mode: Standard
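Because training used flow matching with `shift=3`, sampling should use the same shift. SD3.5-large's stock scheduler config already sets `shift=3.0`, so this is normally a no-op, but a minimal sketch of pinning it explicitly (assuming a loaded `pipeline` as in the Inference section):

```python
# Sketch: pin the scheduler shift to the training-time value (shift=3).
# SD3.5-large's stock scheduler config already uses shift=3.0, so this
# is usually a no-op.
from diffusers import FlowMatchEulerDiscreteScheduler

pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipeline.scheduler.config, shift=3.0
)
```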
## Datasets

### cn
- Repeats: 0
- Total number of images: 43
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
- Used for regularisation data: No
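For context, these options correspond roughly to one entry in SimpleTuner's `multidatabackend.json` dataloader config. A hedged sketch only: the key names follow SimpleTuner's documented format, the data path is a placeholder, and the actual config used for this run is not published:

```python
# Hedged sketch of a SimpleTuner dataloader entry matching the settings
# above; 'instance_data_dir' is a placeholder, not the actual path used.
dataset_cn = {
    "id": "cn",
    "type": "local",
    "instance_data_dir": "/path/to/cn",
    "repeats": 0,
    "resolution": 1.048576,        # with resolution_type 'area', in megapixels
    "resolution_type": "area",
    "crop": True,
    "crop_style": "center",
    "crop_aspect": "square",
}
```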
## Inference
```python
import torch
from diffusers import DiffusionPipeline

model_id = 'stabilityai/stable-diffusion-3.5-large'
adapter_id = 'ymb943/simpletuner-lora'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16
pipeline.load_lora_weights(adapter_id)

prompt = "cnseah823, industrial sites where worker are walking on gray floor which is outside of the yellow and green colored safety road."
negative_prompt = 'blurry, cropped, ugly'

# Optional: quantise the model to save on VRAM.
# Note: the model was not quantised during training, so quantisation is not
# required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already in its target precision level

model_output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]

model_output.save("output.png", format="PNG")
```
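If the adapter's effect is too strong or too weak, its weight can be rescaled after loading. A minimal sketch; the adapter name `default_0` is an assumption (it is what diffusers assigns when `load_lora_weights` is called without an explicit `adapter_name`):

```python
# Sketch: rescale the LoRA at inference time. 'default_0' is the name
# diffusers assigns when no adapter_name is passed to load_lora_weights.
pipeline.set_adapters(["default_0"], adapter_weights=[0.8])

# Alternatively, bake the adapter into the base weights and drop the
# LoRA bookkeeping for slightly faster inference:
# pipeline.fuse_lora(lora_scale=1.0)
# pipeline.unload_lora_weights()
```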