The text encoder was not trained. You may reuse the base model text encoder for inference.
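Since the text encoder was left untouched, inference only needs the base pipeline plus these LoRA weights. A minimal sketch using diffusers; the pipeline loading call and the `true_cfg_scale` argument follow the Qwen-Image examples in the diffusers documentation, and the prompt is a placeholder, so verify the details against your installed version:

```python
import torch
from diffusers import DiffusionPipeline

# Load the unmodified base model (including its original text encoder) in BF16,
# matching the "Pure BF16" precision listed in the training settings below.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("davidrd123/rembrandt_sketches_qwen")
pipe.to("cuda")

image = pipe(
    prompt="a pen-and-ink sketch of an old man in a cloak",  # placeholder prompt
    num_inference_steps=30,
    true_cfg_scale=4.0,
).images[0]
image.save("sample.png")
```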
## Training loss

(The loss curve chart is not reproduced here.)

## Training settings
- Training steps: 500
- Learning rate: Automagic (adaptive)
  - Learning rate schedule: Automagic optimizer
  - Warmup steps: 0 (a 100-step warmup is commented out in the config)
- Max grad value: 1.0
- Effective batch size: 4
  - Micro-batch size: 4
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
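The effective batch size is just the product of the three factors listed beneath it:

```python
# Effective batch size = micro-batch size * gradient accumulation steps * number of GPUs.
micro_batch_size = 4
grad_accum_steps = 1
num_gpus = 1

effective_batch_size = micro_batch_size * grad_accum_steps * num_gpus
print(effective_batch_size)  # 4
```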
- Gradient checkpointing: True (unsloth)
- Prediction type: logit_normal
- Optimizer: automagic
- Trainable parameter precision: Pure BF16
- Base model precision: Pure BF16
- Caption dropout probability: 0.0% (not explicitly specified)
- LoRA Rank: 64
- LoRA Alpha: 64 (auto-set to rank)
- LoRA Dropout: 0.0
- LoRA initialisation style: default
- LoRA mode: Standard
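With alpha auto-set to the rank, the usual LoRA scaling factor `alpha / rank` equals 1, so the low-rank update is added at full strength. A sketch of the standard LoRA forward pass (illustrative dimensions, not this trainer's exact code):

```python
import numpy as np

rank, alpha = 64, 64
scaling = alpha / rank  # 1.0: the update is applied at full strength

d_out, d_in = 128, 128  # illustrative layer sizes, not the model's real dims
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((rank, d_in))   # trained down-projection
B = np.zeros((d_out, rank))             # trained up-projection (zero-init by default)

x = rng.standard_normal(d_in)
# Effective weight seen at inference: W + (alpha / rank) * B @ A
y = (W + scaling * (B @ A)) @ x
```

With `B` at its zero initialisation, the adapted layer reproduces the base layer exactly; training moves `B @ A` away from zero.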
## Datasets

### rembrandt-sketch-640

- Repeats: 8
- Total number of images: TBD
- Total number of aspect buckets: 12 (from ar_buckets config)
- Resolution: 640 px
- Cropped: False
- Crop style: aspect ratio bucketing
- Used for regularisation data: No

### rembrandt-sketch-1328

- Repeats: 4
- Total number of images: TBD
- Total number of aspect buckets: 12 (from ar_buckets config)
- Resolution: 1328 px
- Cropped: False
- Crop style: aspect ratio bucketing
- Used for regularisation data: No
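Repeats control how often each dataset is sampled per epoch, so the lower-resolution set is weighted twice as heavily here. Since the image counts above are listed as TBD, the counts in this sketch are assumed placeholders:

```python
# Hypothetical image counts: the card lists the real totals as TBD.
datasets = {
    "rembrandt-sketch-640":  {"n_images": 100, "repeats": 8},  # assumed count
    "rembrandt-sketch-1328": {"n_images": 100, "repeats": 4},  # assumed count
}

# Each image contributes `repeats` samples per epoch.
samples_per_epoch = sum(d["n_images"] * d["repeats"] for d in datasets.values())
print(samples_per_epoch)  # 1200 with the assumed counts above
```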
## Model tree for davidrd123/rembrandt_sketches_qwen

Base model: Qwen/Qwen-Image