JoeySalmons committed
Commit bddd2fa · verified · 1 Parent(s): db5f4f8

Update README.md


Add link to Colab to run the models

Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -6,18 +6,21 @@ Checkpoint 278k-full_state_dict.pth has been trained on about 500 epochs and is
  The two checkpoints for 300k and 395k steps were further trained on a Midjourney dataset of 600k images for 9.4 epochs (300k steps) and 50 epochs (395k steps) at a constant LR of 5e-5.
  The additional training on the MJ dataset took ~8 hours on a 4090 with batch size 256.

- The models are the same as in the Google Colab below: embed_dim=512, n_layers=8, total parameters=30507328 (30M)
+ The models are the same as in the Google Colabs below: embed_dim=512, n_layers=8, total parameters=30507328 (30M)
+
+ # Run the Models in Colab
+ https://colab.research.google.com/drive/10yORcKXT40DLvZSceOJ1Hi5z_p5r-bOs?usp=sharing

  # Colab Training Notebook
  https://colab.research.google.com/drive/1sKk0usxEF4bmdCDcNQJQNMt4l9qBOeAM?usp=sharing

- # Github Repo (not mine)
+ # Github Repo
  This repo contains the original training code:
  https://github.com/apapiu/transformer_latent_diffusion

- # Datasets
+ # Datasets used
  https://huggingface.co/apapiu/small_ldt/tree/main

  # Other
- See this Reddit post by u/spring_m for some more information:
+ See this Reddit post by u/spring_m (huggingface.co/apapiu) for some more information:
  https://www.reddit.com/r/MachineLearning/comments/198eiv1/p_small_latent_diffusion_transformer_from_scratch/
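The model configuration called out in the README (embed_dim=512, n_layers=8, total parameters=30507328) can be sanity-checked directly from a downloaded checkpoint. A minimal sketch, assuming 278k-full_state_dict.pth (file name taken from the README) is a plain PyTorch state_dict saved from the ~30M-parameter model; the local path and the tensor-counting approach are illustrative, not part of the original repo:

```python
import torch

# Load the checkpoint on CPU; assumes the .pth file holds a plain state_dict
# (a mapping from parameter names to tensors), not a wrapped training checkpoint.
state_dict = torch.load("278k-full_state_dict.pth", map_location="cpu")

# Sum the element counts of every tensor to compare against the README's
# reported total of 30,507,328 parameters (~30M). Any buffers stored in the
# state_dict are counted too, so treat this as an approximate check.
total = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
print(f"total parameters: {total:,}")
```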