Trained for 0 full epochs (250 steps).
Trained with datasets ['text-embeds', 'pacs'].
Learning rate 0.0001, batch size 4, and 4 gradient accumulation steps.
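A quick sanity check on the optimizer settings above: with a batch size of 4 and 4 gradient accumulation steps, the effective batch size per optimizer update works out as follows (single-device training is assumed here, since the card lists no device count):

```python
# Effective batch size = per-device batch size * gradient accumulation steps.
# Single-device training assumed (the card does not state a device count).
batch_size = 4
gradient_accumulation_steps = 4
effective_batch_size = batch_size * gradient_accumulation_steps
print(effective_batch_size)  # -> 16
```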
Trained with the DDPM noise scheduler (epsilon prediction type, rescaled_betas_zero_snr=False) and 'trailing' timestep spacing.
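For readers unfamiliar with 'trailing' spacing: it samples inference timesteps by walking backwards in equal strides from the end of the training range, so the final training timestep is always included. A minimal sketch, assuming the usual num_train_timesteps=1000 DDPM default (not stated in this card):

```python
# Minimal sketch of 'trailing' timestep spacing, diffusers-style:
# stride backwards from the last training timestep in equal steps.
# num_train_timesteps=1000 is an assumed default, not taken from this card.
def trailing_timesteps(num_train_timesteps: int, num_inference_steps: int) -> list[int]:
    step = num_train_timesteps / num_inference_steps
    return [round(num_train_timesteps - i * step) - 1 for i in range(num_inference_steps)]

print(trailing_timesteps(1000, 4))  # -> [999, 749, 499, 249]
```

Note that, unlike 'leading' spacing, the schedule starts at timestep 999 rather than 0 + offset, which matters for zero-terminal-SNR sampling.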
Base model: /ephemeral/shashmi/llava_lets_go/chimaa_finetuner/stable-diffusion-3.5-medium
VAE: None
pytorch_lora_weights.safetensors (stored as a Git LFS pointer):
version https://git-lfs.github.com/spec/v1
oid sha256:f5ee776252350d1477cee44e8c9e77c3e307ef9e63993315868c7161d4cf0fee
size 116431016
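The pytorch_lora_weights.safetensors file above could be loaded onto the base model with diffusers roughly as follows. This is a hypothetical sketch: the public model id, prompt, and sampler settings are assumptions, and the actual run used the local base-model path listed earlier.

```python
# Hypothetical usage sketch (assumed model id and parameters, not from this card).
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",  # assumed public equivalent of the local base model
    torch_dtype=torch.bfloat16,
)
# Load the trained LoRA adapter from the current directory.
pipe.load_lora_weights(".", weight_name="pytorch_lora_weights.safetensors")
pipe.to("cuda")

image = pipe(
    "a photo of a dog",  # hypothetical prompt
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("sample.png")
```

Loading requires the GPU, the base model weights, and the adapter file to be available locally, so this snippet is illustrative rather than directly runnable here.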