Update README.md
pipeline_tag: text-to-image
library_name: diffusers
---

# AMD Nitro-1

## Introduction

Nitro-1 is a series of efficient text-to-image generation models that are distilled from popular diffusion models on AMD Instinct™ GPUs. The release consists of:
* [Nitro-1-SD](https://huggingface.co/amd/SD2.1-Nitro): a UNet-based one-step model distilled from [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1-base).
* [Nitro-1-PixArt](https://huggingface.co/amd/PixArt-Sigma-Nitro): a high resolution transformer-based one-step model distilled from [PixArt-Sigma](https://pixart-alpha.github.io/PixArt-sigma-project/).

⚡️ [Open-source code](https://github.com/AMD-AIG-AIMA/AMD-Diffusion-Distillation)! The models are based on our re-implementation of [Latent Adversarial Diffusion Distillation](https://arxiv.org/abs/2403.12015), the method used to build the popular Stable Diffusion 3 Turbo model. Since the original authors didn't provide training code, we release our re-implementation to help advance further research in the field.
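
For intuition, the sketch below illustrates the core adversarial-distillation loop that this family of methods builds on: a one-step student generator is trained against a discriminator that judges latents, with teacher-generated latents serving as the "real" examples. This is a toy, assumption-laden illustration (module names, shapes, and hyperparameters are invented for the example), not the released training code; see the repository and paper linked above for the actual method.

```python
# Toy sketch of adversarial distillation in latent space. Illustrative only:
# the modules, shapes, and hyperparameters are stand-ins, not the real networks.
import torch
import torch.nn as nn

B, C, H, W = 2, 4, 8, 8                       # tiny toy latent shape
latent_dim = C * H * W

student = nn.Sequential(nn.Flatten(), nn.Linear(latent_dim, latent_dim))  # one-step generator stand-in
critic = nn.Sequential(nn.Flatten(), nn.Linear(latent_dim, 1))            # latent-space discriminator stand-in

opt_g = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)

def distillation_step(noise, teacher_latents):
    """One adversarial step: the student turns noise into a latent in a single
    forward pass; the discriminator separates teacher latents from student latents."""
    fake = student(noise).view_as(teacher_latents)

    # Discriminator update (hinge loss): teacher latents are "real", student output is "fake".
    d_loss = (torch.relu(1.0 - critic(teacher_latents)).mean()
              + torch.relu(1.0 + critic(fake.detach())).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Student update: produce latents the discriminator scores as real.
    g_loss = -critic(fake).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

noise = torch.randn(B, C, H, W)
teacher_latents = torch.randn(B, C, H, W)     # in practice: latents produced by the teacher diffusion model
print(distillation_step(noise, teacher_latents))
```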

## Details

* **Model architecture**: Nitro-1-SD has the same architecture as Stable Diffusion 2.1 and is compatible with the diffusers pipeline (see the usage sketch after this list).
* **Inference steps**: This model is distilled to perform inference in just a single step. However, the training code also supports distilling a model for 2, 4 or 8 steps.
* **Hardware**: We use a single node consisting of 4 AMD Instinct™ MI250 GPUs for distilling Nitro-1-SD.
* **Dataset**: We use 1M prompts from [DiffusionDB](https://huggingface.co/datasets/poloclub/diffusiondb) and generate the corresponding images from the base Stable Diffusion 2.1 model.
* **Training cost**: The distillation process achieves reasonable results in less than 2 days on a single node.
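
Because Nitro-1-SD keeps the Stable Diffusion 2.1 architecture, it should be loadable through the standard diffusers text-to-image pipeline. The snippet below is a minimal single-step inference sketch under that assumption; the scheduler and guidance settings shown are illustrative guesses, and if the repository ships only part of the pipeline (e.g., the distilled UNet), the remaining components would need to come from the Stable Diffusion 2.1 base repository. Check the official usage instructions for the exact loading code.

```python
# Minimal sketch of one-step inference with diffusers (settings are illustrative,
# not the officially recommended configuration for this model).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "amd/SD2.1-Nitro",              # model id from this card
    torch_dtype=torch.float16,
).to("cuda")                        # assumes a GPU is available

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=1,          # the model is distilled for single-step inference
    guidance_scale=0.0,             # assumption: distilled one-step models are typically run without CFG
).images[0]

image.save("nitro_sd_sample.png")
```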

Compared to the [Stable Diffusion 2.1 base model](https://huggingface.co/stabilityai/stable-diffusion-2-1-base), Nitro-1-SD delivers a large reduction in FLOPs and latency at comparable FID and CLIP scores:

| Model | FID ↓ | CLIP ↑ | FLOPs | Latency on AMD Instinct MI250 (sec) |
| :---: | :---: | :---: | :---: | :---: |
| Stable Diffusion 2.1 base, 50 steps (cfg=7.5) | 25.47 | 0.3286 | 83.04 | 4.94 |
| **Nitro-1-SD**, 1 step | 26.04 | 0.3204 | 3.36 | 0.18 |