---
license: mit
---

[](https://discord.gg/2JhHVh7CGu)

This is a severely undertrained research network, built as a proof of concept for the architecture. It was trained on ~700 example images for 2000 epochs, reaching a minimum MSE loss of ~0.06. This repo is meant only as a demo of a <100M-parameter model that achieves strong color balance and low loss on pixel diffusion. The next step is scaling up the data.

A semi-custom network based on the following paper: [Simpler Diffusion (SiD2)](https://arxiv.org/abs/2410.19324v1)

This network uses the optimal transport flow matching objective outlined in [Flow Matching for Generative Modeling](https://arxiv.org/abs/2210.02747)

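As a sketch of that objective: along the optimal transport (straight-line) path between a noise sample `x0` and a data sample `x1`, the regression target is the constant velocity `x1 - x0`. A minimal NumPy illustration (function names are illustrative, not this repo's API):

```python
import numpy as np

def ot_interpolate(x0, x1, t):
    """Point on the straight-line (OT) path from noise x0 to data x1 at time t in [0, 1]."""
    return (1.0 - t) * x0 + t * x1

def fm_target(x0, x1):
    """Target velocity along the OT path: constant x1 - x0, independent of t."""
    return x1 - x0

def fm_loss(pred_velocity, x0, x1):
    """MSE between the model's predicted velocity and the OT target."""
    return np.mean((pred_velocity - fm_target(x0, x1)) ** 2)
```

During training, a network `v(x_t, t)` is evaluated at `x_t = ot_interpolate(x0, x1, t)` for a random `t`, and its output plays the role of `pred_velocity` here.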
xATGLU layers are used instead of linear layers for entry into the transformer MLP block; see [Expanded Gating Ranges Improve Activation Functions](https://arxiv.org/pdf/2405.20768)

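As a rough illustration of the xATGLU idea: a GLU-style layer whose gate uses arctan squashed into (0, 1) and then expanded to a learnable range (-alpha, 1 + alpha). This is a minimal NumPy sketch under that reading of the paper; the class name, initialization, and exact parameterization are illustrative and may differ from this repo's actual layer.

```python
import numpy as np

class XATGLU:
    """Sketch of an expanded-range arctan GLU (xATGLU) layer."""

    def __init__(self, d_in, d_out, alpha=0.0, rng=None):
        rng = rng or np.random.default_rng(0)
        # One projection produces both the value half and the gate half.
        self.w = rng.standard_normal((d_in, 2 * d_out)) / np.sqrt(d_in)
        self.alpha = alpha  # learnable in practice; widens the gate range

    def __call__(self, x):
        h = x @ self.w
        value, gate = np.split(h, 2, axis=-1)
        base = np.arctan(gate) / np.pi + 0.5              # arctan gate in (0, 1)
        expanded = (1 + 2 * self.alpha) * base - self.alpha  # range (-alpha, 1 + alpha)
        return value * expanded
```
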
`python train.py` will train a new image network on the provided dataset.

`python test_sample.py step_1799.safetensors`, where `step_1799.safetensors` is the checkpoint to run inference with. This will always generate a sample grid of 16x16 images.

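Under the flow matching formulation, sampling amounts to integrating the learned velocity field from noise (t=0) to data (t=1). A minimal Euler-integration sketch, with `velocity_fn` standing in for the trained network (hypothetical, not this repo's API):

```python
import numpy as np

def euler_sample(velocity_fn, x0, steps=50):
    """Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data) with fixed Euler steps."""
    x = x0
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x
```
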