---
base_model: stabilityai/stable-diffusion-2-1-base
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---

# controlnet-Amitz244/output_dir_controlnet

These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning. You can find some example images below.

prompt: Woman in blue and black on a large plaza.
![images_0](./images_0.png)

prompt: A men's restroom showcasing the toilet through an open door.
![images_1](./images_1.png)

prompt: A man riding a kiteboard over the ocean under a cloudy sky.
![images_2](./images_2.png)

prompt: Two skiers stand on their skis in the snow.
![images_3](./images_3.png)

prompt: A meal of cheese toast, spaghetti, and broccoli on a white plate.
![images_4](./images_4.png)

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]