prithivMLmods committed
Commit 87b9462 · verified · 1 Parent(s): 7d916f1

Update README.md

Files changed (1)
  1. README.md +45 -1
README.md CHANGED
@@ -28,10 +28,54 @@ base_model: Qwen/Qwen-Image
  instance_prompt: Qwen Anime
  license: apache-2.0
  ---
- # Qwen-Anime-LoRA
+ # Qwen-Image-Anime-LoRA

  <Gallery />

+ ---
+
+ # Model description for Qwen-Image-Anime-LoRA
+
+ Image Processing Parameters
+
+ | Parameter     | Value    | Parameter                 | Value     |
+ |---------------|----------|---------------------------|-----------|
+ | LR Scheduler  | constant | Noise Offset              | 0.03      |
+ | Optimizer     | AdamW    | Multires Noise Discount   | 0.1       |
+ | Network Dim   | 64       | Multires Noise Iterations | 10        |
+ | Network Alpha | 32       | Repeat & Steps            | 25 & 3000 |
+ | Epoch         | 20       | Save Every N Epochs       | 1         |
+
+ Labeling: florence2-en (natural language & English)
+
+ Total Images Used for Training: 44 [HQ Images]
+
+ ## Best Dimensions & Inference
+
+ | **Dimensions** | **Aspect Ratio** | **Recommendation** |
+ |----------------|------------------|--------------------|
+ | 1664 x 928     | 16:9 (approx.)   | Best               |
+ | 1024 x 1024    | 1:1              | Default            |
+
+ ### Inference Range
+
+ - **Recommended Inference Steps:** 40-50
+
+ ## Setting Up
+ ```python
+ import torch
+ from diffusers import DiffusionPipeline
+
+ # Load the Qwen-Image base model in bfloat16
+ base_model = "Qwen/Qwen-Image"
+ pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
+
+ # Attach the LoRA adapter from this repo
+ lora_repo = "prithivMLmods/Qwen-Image-Anime-LoRA"
+ trigger_word = "Qwen Anime"  # include this phrase in prompts to activate the style
+ pipe.load_lora_weights(lora_repo)
+
+ # Move the pipeline to the GPU
+ device = torch.device("cuda")
+ pipe.to(device)
+ ```

  ## Trigger words
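As a usage sketch (not part of this commit), the setup above can be combined with the recommended dimensions and step count into a single generation call; the prompt text and output filename below are illustrative assumptions:

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model and LoRA as in the README's "Setting Up" section
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("prithivMLmods/Qwen-Image-Anime-LoRA")
pipe.to("cuda")

# The prompt should contain the trigger word "Qwen Anime"; the rest is an example
prompt = "Qwen Anime, a girl with silver hair standing in a field of sunflowers, soft lighting"

image = pipe(
    prompt=prompt,
    width=1664,              # "Best" dimensions from the table (approx. 16:9)
    height=928,
    num_inference_steps=50,  # within the recommended 40-50 range
).images[0]
image.save("qwen_anime_sample.png")
```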