---
base_model: HiDream-ai/HiDream-I1-Full
library_name: diffusers
license: mit
instance_prompt: a dog, yarn art style
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- hidream
- hidream-diffusers
- template:sd-lora
widget:
- text: yoda, yarn art style
  output:
    url: image_1.png
- text: cookie monster, yarn art style
  output:
    url: cookie.png
- text: the joker, yarn art style
  output:
    url: joker.png
- text: a capybara in a bubble batch, yarn art style
  output:
    url: capy.png
---

# HiDream Image DreamBooth LoRA - linoyts/hidream-yarn-art-lora-v2-trainer

<Gallery />

## Model description

These are DreamBooth LoRA weights for [HiDream-ai/HiDream-I1-Full](https://huggingface.co/HiDream-ai/HiDream-I1-Full), trained with the instance prompt `a dog, yarn art style` and published as `linoyts/hidream-yarn-art-lora-v2-trainer`.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md).

## Trigger words

Use `a dog, yarn art style` to trigger the image generation. As in the widget examples above, you can swap `a dog` for another subject while keeping the `yarn art style` phrase.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/linoyts/hidream-yarn-art-lora-v2-trainer/tree/main) in the Files & versions tab.
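
If you prefer to fetch the weights programmatically instead of through the web UI, the sketch below uses `huggingface_hub`. The filename `pytorch_lora_weights.safetensors` is the diffusers trainer's default output name and is an assumption here; check the Files & versions tab for the actual filename in this repository.

```py
from huggingface_hub import hf_hub_download

# Download the LoRA weights file from the Hub.
# NOTE: "pytorch_lora_weights.safetensors" is the diffusers trainer's default
# output name; verify it against the repository's Files & versions tab.
lora_path = hf_hub_download(
    repo_id="linoyts/hidream-yarn-art-lora-v2-trainer",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)
```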

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
import torch
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
from diffusers import HiDreamImagePipeline

# HiDream-I1 uses Llama-3.1-8B-Instruct as its fourth text encoder.
tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    output_hidden_states=True,
    output_attentions=True,
    torch_dtype=torch.bfloat16,
)

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full",
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to reduce VRAM usage

# Load the DreamBooth LoRA and generate with the trigger phrase.
pipe.load_lora_weights("linoyts/hidream-yarn-art-lora-v2-trainer")
image = pipe("a dog, yarn art style").images[0]
image.save("yarn_art_dog.png")
```

For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
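
As a quick illustration of the weighting and fusing mentioned above, the sketch below adjusts the LoRA's influence at inference time. It assumes the same `pipe` from the snippet above and relies on the generic diffusers adapter API (`adapter_name`, `set_adapters`, `fuse_lora`); treat the adapter name and the 0.8 scale as placeholder choices, not recommended settings.

```py
# Assumes `pipe` is the HiDreamImagePipeline created above.
# Load the LoRA under an explicit adapter name so its weight can be adjusted.
pipe.load_lora_weights(
    "linoyts/hidream-yarn-art-lora-v2-trainer", adapter_name="yarn_art"
)

# Dial the style strength up or down (1.0 = full effect).
pipe.set_adapters(["yarn_art"], adapter_weights=[0.8])
image = pipe("a dog, yarn art style").images[0]

# Optionally fuse the LoRA into the base weights for inference,
# then unfuse to restore the original weights.
pipe.fuse_lora(lora_scale=0.8)
# ... run inference ...
pipe.unfuse_lora()
```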