# Shenhe LoRA Usage

## Installation

```bash
pip install -U diffusers transformers torch sentencepiece peft controlnet-aux moviepy protobuf
```
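
After installing, a quick sanity check (a minimal sketch, not part of the original instructions) confirms that the core packages import and reports their versions:

```python
# Sanity check: confirm the core packages import and report their versions.
import diffusers
import torch
import transformers

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```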

## Demo

```python
import torch
from diffusers import FluxPipeline

# Load the base FLUX.1-dev model and the Shenhe LoRA weights.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("svjack/FLUX_Shenhe_Lora", weight_name="tj_f1_shenhe_v1.safetensors")
# Offload submodules to CPU between forward passes to reduce VRAM usage.
pipe.enable_sequential_cpu_offload()

prompt = "tj_sthenhe, hair ornament,sliver hair,long hair,braid,"

image = pipe(
    prompt,
    num_inference_steps=24,
    guidance_scale=3.5,
).images[0]
image.save("shenhe.png")

from IPython import display
display.Image("shenhe.png", width=512, height=512)
```

![shenhe](https://github.com/user-attachments/assets/34159126-3058-4078-a101-fdb22839d1f0)

# Shenhe with the Regional Flux Pipeline

This section explains how to use the Regional Flux Pipeline, a PyTorch-based pipeline for generating images with regional control. It lets you assign a different prompt to each region of the image, giving fine-grained control over the generated content.

## Table of Contents

- [Installation](#installation)
- [Usage](#usage)
  - [Step 1: Load the Pipeline](#step-1-load-the-pipeline)
  - [Step 2: Configure Attention Processors](#step-2-configure-attention-processors)
  - [Step 3: Set General Settings](#step-3-set-general-settings)
  - [Step 4: Define Regional Prompts and Masks](#step-4-define-regional-prompts-and-masks)
  - [Step 5: Configure Region Control Factors](#step-5-configure-region-control-factors)
  - [Step 6: Generate the Image](#step-6-generate-the-image)
  - [Step 7: Display the Image](#step-7-display-the-image)
  - [Step 8: Draw a Transparent Rectangle](#step-8-draw-a-transparent-rectangle)
- [Chinese Translations](#chinese-translations)

## Installation

### Create a New Conda Environment

```bash
conda create --name py310 python=3.10 && conda activate py310 && pip install ipykernel && python -m ipykernel install --user --name py310 --display-name "py310"
```

### Install Dependencies

We use a specific commit of the `diffusers` repository to ensure reproducibility, as newer versions may produce different results.

```bash
sudo apt-get update && sudo apt-get install git-lfs ffmpeg cbm
```

```bash
# Install diffusers locally
git clone https://github.com/huggingface/diffusers.git
cd diffusers

# Reset diffusers to the pinned 0.31.dev commit
git reset --hard d13b0d63c0208f2c4c078c4261caf8bf587beb3b
pip install -e ".[torch]"
cd ..

# Install other dependencies
pip install -U transformers sentencepiece protobuf peft

# Clone this repo
git clone https://github.com/svjack/Regional-Prompting-FLUX

# Replace the transformer implementation in diffusers with the regional version
cd Regional-Prompting-FLUX
cp transformer_flux.py ../diffusers/src/diffusers/models/transformers/transformer_flux.py
huggingface-cli login
```

## Usage

### Step 1: Load the Pipeline

First, load the Regional Flux Pipeline from the pretrained model, attach the Shenhe LoRA, and set the desired data type:

```python
import torch
from pipeline_flux_regional import RegionalFluxPipeline, RegionalFluxAttnProcessor2_0

pipeline = RegionalFluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipeline.load_lora_weights("svjack/FLUX_Shenhe_Lora", weight_name="tj_f1_shenhe_v1.safetensors")
pipeline.to("cuda")
```
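
If the full pipeline does not fit in GPU memory, you can use CPU offloading instead of moving everything to CUDA, as the demo above does. A minimal sketch:

```python
# Alternative for smaller GPUs: offload submodules to CPU between forward passes
# instead of keeping the whole pipeline on the GPU (slower, but much lower VRAM).
# Use this in place of pipeline.to("cuda").
pipeline.enable_sequential_cpu_offload()
```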

### Step 2: Configure Attention Processors

Next, swap in `RegionalFluxAttnProcessor2_0` for the attention layers inside the transformer blocks, keeping the original processors everywhere else:

```python
attn_procs = {}
for name in pipeline.transformer.attn_processors.keys():
    if 'transformer_blocks' in name and name.endswith("attn.processor"):
        # Attention layers inside transformer blocks get the regional processor.
        attn_procs[name] = RegionalFluxAttnProcessor2_0()
    else:
        # All other layers keep their original processor.
        attn_procs[name] = pipeline.transformer.attn_processors[name]
pipeline.transformer.set_attn_processor(attn_procs)
```
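
As a quick sanity check (a minimal sketch, not from the original guide), you can count how many processors were actually swapped:

```python
# Count how many attention processors were replaced with the regional processor.
num_regional = sum(
    isinstance(proc, RegionalFluxAttnProcessor2_0)
    for proc in pipeline.transformer.attn_processors.values()
)
print(f"Regional attention processors installed: {num_regional}")
```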

### Step 3: Set General Settings

Define the general settings for image generation:

```python
image_width = 1280
image_height = 768
num_inference_steps = 24
seed = 124

base_prompt = "A snowy chinese hill in the background, A big sun rises."
background_prompt = "a photo of a snowy chinese hill"
```
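
As a side note (an assumption about FLUX's latent packing, not stated in the original guide), width and height are normally kept at multiples of 16, as 1280 and 768 are. A quick guard:

```python
# FLUX packs an 8x-downsampled latent grid into 2x2 patches, so image
# dimensions are normally kept at multiples of 16 (1280 and 768 both are).
assert image_width % 16 == 0 and image_height % 16 == 0, \
    "image_width and image_height should be multiples of 16"
```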

### Step 4: Define Regional Prompts and Masks

Specify the regional prompts and their masks. Each mask is a bounding box in pixel coordinates, `[x1, y1, x2, y2]`, covering the part of the image that the prompt should control:

```python
regional_prompt_mask_pairs = {
    "0": {
        "description": "A dignified woman stands in the foreground, her sliver hair and long braid adorned with a hair ornament, her face illuminated by the cold light of the snow. Her expression is one of determination and sorrow, her clothing and appearance reflecting the historical period. The snow casts a serene yet dramatic light across her features, its cold embrace enveloping her in a world of ice and frost. tj_sthenhe, hair ornament, sliver hair, long hair, braid.",
        "mask": [128, 128, 640, 768]
    }
}
```
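
The dictionary can hold any number of regions. As an illustration only (the second prompt and box below are hypothetical and are not used in the rest of this guide), two side-by-side regions could look like this:

```python
# Hypothetical two-region layout for a 1280x768 image: the character on the
# left half, a rising sun over the hill on the right half.
example_two_regions = {
    "0": {
        "description": "tj_sthenhe, hair ornament, sliver hair, long hair, braid, standing in the snow.",
        "mask": [0, 0, 640, 768],     # left half
    },
    "1": {
        "description": "A big red sun rising over a snowy chinese hill.",
        "mask": [640, 0, 1280, 768],  # right half
    },
}
```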
141
+
142
+ ### Step 5: Configure Region Control Factors
143
+
144
+ Set the control factors for region-specific attention injection:
145
+
146
+ ```python
147
+ mask_inject_steps = 10
148
+ double_inject_blocks_interval = 1
149
+ single_inject_blocks_interval = 1
150
+ base_ratio = 0.2
151
+ ```

### Step 6: Generate the Image

Build the per-region masks from the bounding boxes, treat whatever is left as background, and generate the image:

```python
regional_prompts = []
regional_masks = []
background_mask = torch.ones((image_height, image_width))

# Turn each bounding box into a binary mask and subtract it from the background.
for region_idx, region in regional_prompt_mask_pairs.items():
    description = region['description']
    mask = region['mask']
    x1, y1, x2, y2 = mask

    mask = torch.zeros((image_height, image_width))
    mask[y1:y2, x1:x2] = 1.0

    background_mask -= mask

    regional_prompts.append(description)
    regional_masks.append(mask)

# If any area is not covered by a region, prompt it with the background prompt.
if background_mask.sum() > 0:
    regional_prompts.append(background_prompt)
    regional_masks.append(background_mask)

image = pipeline(
    prompt=base_prompt,
    width=image_width, height=image_height,
    mask_inject_steps=mask_inject_steps,
    num_inference_steps=num_inference_steps,
    generator=torch.Generator("cuda").manual_seed(seed),
    joint_attention_kwargs={
        "regional_prompts": regional_prompts,
        "regional_masks": regional_masks,
        "double_inject_blocks_interval": double_inject_blocks_interval,
        "single_inject_blocks_interval": single_inject_blocks_interval,
        "base_ratio": base_ratio,
    },
).images[0]

image.save("shenhe_in_snow_hill.jpg")
```
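
If a region does not end up where you expect, it can help to inspect the masks before generating. A minimal sketch (not part of the original guide) that writes each regional mask out as a grayscale image:

```python
# Optional: save each regional mask as a grayscale PNG to verify its placement.
from PIL import Image

for idx, m in enumerate(regional_masks):
    Image.fromarray((m.numpy() * 255).astype("uint8")).save(f"region_mask_{idx}.png")
```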

### Step 7: Display the Image

Display the generated image:

```python
from IPython import display
display.Image("shenhe_in_snow_hill.jpg", width=512, height=512)
```

![shenhe_in_snow_hill](https://github.com/user-attachments/assets/8edfb639-a624-4218-845d-b8579b41c62a)

### Step 8: Draw a Transparent Rectangle

Optionally, draw a semi-transparent rectangle on the generated image to highlight the region that was controlled:

```python
from PIL import Image, ImageDraw

def draw_transparent_rectangle(image_path, bbox, color, alpha=128, output_path=None):
    """
    Draw a semi-transparent rectangle over the given region and save the result.

    :param image_path: path to the input image
    :param bbox: bounding box of the rectangle as a list of four values [x1, y1, x2, y2]
    :param color: fill color as an (R, G, B) tuple
    :param alpha: opacity from 0 (fully transparent) to 255 (fully opaque), default 128
    :param output_path: where to save the modified image; if None, the input image is overwritten
    :return: the modified image object
    """
    image = Image.open(image_path).convert("RGBA")
    overlay = Image.new('RGBA', image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    x1, y1, x2, y2 = bbox
    draw.rectangle([x1, y1, x2, y2], fill=(*color, alpha))

    image = Image.alpha_composite(image, overlay)

    if output_path is None:
        output_path = image_path

    image.save(output_path)
    return image

draw_transparent_rectangle("shenhe_in_snow_hill.jpg", [128, 128, 640, 768], (255, 0, 0), alpha=128, output_path="shenhe_in_snow_hill_rec.png")
display.Image("shenhe_in_snow_hill_rec.png", width=512, height=512)
```

![shenhe_in_snow_hill_rec](https://github.com/user-attachments/assets/64914e05-6cc5-4905-92e1-8ad112561e28)

## Chinese Translations

For reference, the prompts used above in Chinese:

- `base_prompt`: "背景是雪中的中国山丘,一轮大太阳正在升起。"
- `background_prompt`: "一张雪中的中国山丘的照片"

The content of `regional_prompt_mask_pairs` translates as follows:

```json
{
    "0": {
        "description": "一位端庄的女子站在前景中,她的银发和长辫子上装饰着发饰,她的脸被雪的冷光照亮。她的表情既坚定又悲伤,她的服装和外貌反映了历史时期。雪花在她脸上投下宁静而戏剧性的光线,它的寒冷拥抱将她包裹在冰雪世界中。tj_sthenhe,发饰,银发,长发,辫子。",
        "mask": [128, 128, 640, 768]
    }
}
```