End of training
- .gitattributes +4 -0
- README.md +100 -0
- image_0.png +3 -0
- image_1.png +3 -0
- image_2.png +3 -0
- image_3.png +3 -0
- logs/dreambooth-hidream-lora/1748782938.569996/events.out.tfevents.1748782938.modal.22.1 +3 -0
- logs/dreambooth-hidream-lora/1748782938.5723155/hparams.yml +76 -0
- logs/dreambooth-hidream-lora/events.out.tfevents.1748782938.modal.22.0 +3 -0
- pytorch_lora_weights.safetensors +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+image_0.png filter=lfs diff=lfs merge=lfs -text
+image_1.png filter=lfs diff=lfs merge=lfs -text
+image_2.png filter=lfs diff=lfs merge=lfs -text
+image_3.png filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,100 @@
---
base_model: HiDream-ai/HiDream-I1-Full
library_name: diffusers
license: mit
instance_prompt: TOK
widget:
- text: a woman riding an orca while waving hello and working on her laptop in the
    style of TOK
  output:
    url: image_0.png
- text: a woman riding an orca while waving hello and working on her laptop in the
    style of TOK
  output:
    url: image_1.png
- text: a woman riding an orca while waving hello and working on her laptop in the
    style of TOK
  output:
    url: image_2.png
- text: a woman riding an orca while waving hello and working on her laptop in the
    style of TOK
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- hidream
- hidream-diffusers
- template:sd-lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# HiDream Image DreamBooth LoRA - KristjanRRR/HiDream-I1-Full-ink-drawing-lora-4

<Gallery />

## Model description

These are KristjanRRR/HiDream-I1-Full-ink-drawing-lora-4 DreamBooth LoRA weights for HiDream-ai/HiDream-I1-Full.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md).

## Trigger words

You should use `TOK` to trigger the image generation.

## Download model

[Download the *.safetensors LoRA](KristjanRRR/HiDream-I1-Full-ink-drawing-lora-4/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
>>> import torch
>>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
>>> from diffusers import HiDreamImagePipeline

>>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
>>> text_encoder_4 = LlamaForCausalLM.from_pretrained(
...     "meta-llama/Meta-Llama-3.1-8B-Instruct",
...     output_hidden_states=True,
...     output_attentions=True,
...     torch_dtype=torch.bfloat16,
... )

>>> pipe = HiDreamImagePipeline.from_pretrained(
...     "HiDream-ai/HiDream-I1-Full",
...     tokenizer_4=tokenizer_4,
...     text_encoder_4=text_encoder_4,
...     torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> pipe.load_lora_weights("KristjanRRR/HiDream-I1-Full-ink-drawing-lora-4")
>>> image = pipe("TOK").images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
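Following up on the weighting, merging and fusing pointer in the model card above, here is a minimal sketch of adjusting the LoRA strength after loading it. It assumes a recent diffusers release with PEFT installed, so that `set_adapters` and `fuse_lora(lora_scale=...)` are available on the pipeline, and it continues from the `pipe` object built in the card's snippet; the adapter name `ink` is purely illustrative.

```py
>>> # Load the LoRA under an explicit (illustrative) adapter name.
>>> pipe.load_lora_weights(
...     "KristjanRRR/HiDream-I1-Full-ink-drawing-lora-4", adapter_name="ink"
... )
>>> # Run the adapter at 70% of its trained strength for a subtler style.
>>> pipe.set_adapters("ink", adapter_weights=0.7)
>>> image = pipe(
...     "a woman riding an orca while waving hello and working on her laptop in the style of TOK"
... ).images[0]
>>> # Optionally bake the scaled LoRA into the base weights and drop the adapter
>>> # modules, which avoids the LoRA overhead on subsequent calls.
>>> pipe.fuse_lora(lora_scale=0.7)
>>> pipe.unload_lora_weights()
```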
image_0.png
ADDED
image_1.png
ADDED
image_2.png
ADDED
image_3.png
ADDED
logs/dreambooth-hidream-lora/1748782938.569996/events.out.tfevents.1748782938.modal.22.1
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c97582a9ee81bf2812741bd1fb924c6d32d2ba82b9951a9055018ec78a33ba1b
size 3569
logs/dreambooth-hidream-lora/1748782938.5723155/hparams.yml
ADDED
@@ -0,0 +1,76 @@
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1.0e-08
adam_weight_decay: 0.0001
allow_tf32: false
bnb_quantization_config_path: null
cache_dir: null
cache_latents: false
caption_column: prompt
center_crop: false
checkpointing_steps: 500
checkpoints_total_limit: null
class_data_dir: null
class_prompt: null
dataloader_num_workers: 0
dataset_config_name: null
dataset_name: KristjanRRR/ink-drawing-2
final_validation_prompt: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
hub_model_id: null
hub_token: null
image_column: image
instance_data_dir: null
instance_prompt: TOK
learning_rate: 0.0004
local_rank: -1
logging_dir: logs
logit_mean: 0.0
logit_std: 1.0
lora_dropout: 0.0
lora_layers: null
lr_num_cycles: 1
lr_power: 1.0
lr_scheduler: constant
lr_warmup_steps: 500
max_grad_norm: 1.0
max_sequence_length: 128
max_train_steps: 440
mixed_precision: bf16
mode_scale: 1.29
num_class_images: 100
num_train_epochs: 11
num_validation_images: 4
offload: false
optimizer: AdamW
output_dir: HiDream-I1-Full-ink-drawing-lora-4
pretrained_model_name_or_path: HiDream-ai/HiDream-I1-Full
pretrained_text_encoder_4_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct
pretrained_tokenizer_4_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct
prior_loss_weight: 1.0
prodigy_beta3: null
prodigy_decouple: true
prodigy_safeguard_warmup: true
prodigy_use_bias_correction: true
push_to_hub: true
random_flip: false
rank: 16
repeats: 2
report_to: tensorboard
resolution: 1024
resume_from_checkpoint: null
revision: null
sample_batch_size: 4
scale_lr: false
seed: 1
skip_final_inference: false
train_batch_size: 1
upcast_before_saving: false
use_8bit_adam: false
validation_epochs: 50
validation_prompt: a woman riding an orca while waving hello and working on her laptop
  in the style of TOK
variant: null
weighting_scheme: none
with_prior_preservation: false
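The hparams above are the argparse namespace that the diffusers DreamBooth HiDream LoRA trainer logs at the start of a run, so the run can in principle be reproduced by passing the same values back as CLI flags. Below is a rough, hedged sketch of such a launch; it assumes diffusers' `examples/dreambooth/train_dreambooth_lora_hidream.py` is available locally and that each flag mirrors the hparams key of the same name (flags not listed keep their defaults).

```py
# Hedged reconstruction of the training launch implied by hparams.yml above.
# Assumes accelerate is configured and train_dreambooth_lora_hidream.py (from the
# diffusers examples) sits in the working directory; flag names mirror the hparams keys.
import subprocess

cmd = [
    "accelerate", "launch", "train_dreambooth_lora_hidream.py",
    "--pretrained_model_name_or_path=HiDream-ai/HiDream-I1-Full",
    "--dataset_name=KristjanRRR/ink-drawing-2",
    "--instance_prompt=TOK",
    "--caption_column=prompt",
    "--image_column=image",
    "--resolution=1024",
    "--train_batch_size=1",
    "--repeats=2",
    "--rank=16",
    "--learning_rate=4e-4",
    "--lr_scheduler=constant",
    "--lr_warmup_steps=500",
    "--max_train_steps=440",
    "--mixed_precision=bf16",
    "--seed=1",
    "--validation_prompt=a woman riding an orca while waving hello and working on her laptop in the style of TOK",
    "--validation_epochs=50",
    "--num_validation_images=4",
    "--report_to=tensorboard",
    "--output_dir=HiDream-I1-Full-ink-drawing-lora-4",
    "--push_to_hub",
]
subprocess.run(cmd, check=True)
```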
logs/dreambooth-hidream-lora/events.out.tfevents.1748782938.modal.22.0
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83b70f64484f904ac2361d5f04084880ce294b15a3b11bb84f86f046ee6fa886
size 3705476
pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84818098b5685a1be3f78454c0670a8370dca1fd9d833a04d23c140ca30602a8
size 31510456