---
base_model: HiDream-ai/HiDream-I1-Full
library_name: diffusers
license: mit
instance_prompt: 3d icon
widget:
- text: 3dicon of a llama eating ramen
  output:
    url: image_0.png
- text: 3dicon of a llama eating ramen
  output:
    url: image_1.png
- text: 3dicon of a llama eating ramen
  output:
    url: image_2.png
- text: 3dicon of a llama eating ramen
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- hidream
- hidream-diffusers
- template:sd-lora
---

# HiDream Image DreamBooth LoRA - linoyts/hidream-3dicon-lora

## Model description

These are linoyts/hidream-3dicon-lora DreamBooth LoRA weights for [HiDream-ai/HiDream-I1-Full](https://huggingface.co/HiDream-ai/HiDream-I1-Full).

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md).

## Trigger words

You should use `3d icon` to trigger the image generation.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/linoyts/hidream-3dicon-lora/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
>>> import torch
>>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
>>> from diffusers import HiDreamImagePipeline

>>> # Load the Llama 3.1 model used as HiDream's fourth text encoder
>>> # (a gated repository: accept its license and log in with `huggingface-cli login` first).
>>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
>>> text_encoder_4 = LlamaForCausalLM.from_pretrained(
...     "meta-llama/Meta-Llama-3.1-8B-Instruct",
...     output_hidden_states=True,
...     output_attentions=True,
...     torch_dtype=torch.bfloat16,
... )

>>> pipe = HiDreamImagePipeline.from_pretrained(
...     "HiDream-ai/HiDream-I1-Full",
...     tokenizer_4=tokenizer_4,
...     text_encoder_4=text_encoder_4,
...     torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> pipe.load_lora_weights("linoyts/hidream-3dicon-lora")
>>> image = pipe("3d icon").images[0]  # include the `3d icon` trigger phrase in your prompt
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). A short weighting/fusing sketch is also included at the end of this card.

## Intended uses & limitations

#### How to use

Use the 🧨 diffusers snippet above, making sure your prompt contains the `3d icon` trigger phrase.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
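
## Weighting and fusing the LoRA

To change how strongly the LoRA affects generation, or to merge it into the base weights, the standard diffusers LoRA adapter helpers can be used. The snippet below is a minimal sketch, not part of the original card: it reuses the `pipe` object from the loading example above, assumes `HiDreamImagePipeline` exposes the usual `set_adapters`/`fuse_lora` API, and the adapter name `"3dicon"` is only illustrative.

```py
>>> # Sketch (illustrative): load the LoRA under an explicit adapter name.
>>> pipe.load_lora_weights("linoyts/hidream-3dicon-lora", adapter_name="3dicon")

>>> # Generate with the LoRA influence scaled down to 0.8.
>>> pipe.set_adapters(["3dicon"], adapter_weights=[0.8])
>>> image = pipe("3d icon of a llama eating ramen").images[0]

>>> # Alternatively, merge the LoRA permanently into the base weights at the same
>>> # scale, then drop the now-redundant adapter layers to reclaim memory.
>>> pipe.fuse_lora(lora_scale=0.8)
>>> pipe.unload_lora_weights()
```

Fusing avoids the per-step adapter overhead at inference time; keeping the adapter unfused makes it easier to switch the LoRA on and off or combine it with others.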