darshanmakwana committed
Commit a902fbb · verified · 1 Parent(s): 911145c

Update README.md

Files changed (1): README.md (+9 -1)
README.md CHANGED
@@ -33,7 +33,15 @@ configs:
  This dataset is derived from [conceptual captions](https://huggingface.co/datasets/pixparse/cc3m-wds) (CC3M), which contains roughly 3.3M image and caption pairs. For images we use the [1d-tokenizer](https://github.com/bytedance/1d-tokenizer) by [ByteDance](https://www.bytedance.com/en/), which tokenizes a 256 × 256 image into 32 tokens while still achieving state-of-the-art reconstruction fidelity. For text we train a BPE-based tokenizer on the image captions with a vocabulary size of 30K, where 4096 tokens are used to represent images, 9 represent special tokens, and the remaining 25895 represent text.
 
  # Visualization
- |![](./vis_1.png) | ![](./vis_2.png) | ![](./vis_3.png) | ![](./vis_4.png) |
+ <table>
+ <tr>
+ <td><img src="vis_1.png" alt="example 1" width="200"/></td>
+ <td><img src="vis_2.png" alt="example 2" width="200"/></td>
+ <td><img src="vis_3.png" alt="example 3" width="200"/></td>
+ <td><img src="vis_4.png" alt="example 4" width="200"/></td>
+ </tr>
+ </table>
+
 
  ## Training Procedure
  For training we prompt the model to generate an image from its caption, e.g.: "a river has burst it 's banks and has spread out onto arable farmland alongside<|startofimage|><|image:2931|><|image:560|><|image:763|><|image:1539|><|image:3161|><|image:1997|><|image:3376|><|image:510|><|image:3036|><|image:1585|><|image:1853|><|image:1970|><|image:2687|><|image:1436|><|image:2213|><|image:3968|><|image:3999|><|image:877|><|image:725|><|image:3013|><|image:438|><|image:3159|><|image:2936|><|image:3003|><|image:2261|><|image:2137|><|image:3821|><|image:1513|><|image:3536|><|image:311|><|image:494|><|image:413|><|endofimage|>". We employ the standard cross entropy loss over masked logits, with the logits masked to the image-token vocabulary; this masking strategy previously showed performance improvements for speech-to-text tasks, where the logits were masked to the audio tokens.
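For reference, here is a minimal sketch of how the 30K vocabulary described in the card could be laid out. Only the split sizes (4096 image tokens, 9 special tokens, 25895 text tokens) come from the card; the id ordering, offsets, and helper names are assumptions for illustration, not the dataset's actual tokenizer code:

```python
# Sketch of the combined 30K vocabulary. Only the sizes below come from
# the dataset card; the id layout is a hypothetical choice.

NUM_IMAGE_TOKENS = 4096    # codebook size of the 1d-tokenizer
NUM_SPECIAL_TOKENS = 9     # e.g. <|startofimage|>, <|endofimage|>, ...
NUM_TEXT_TOKENS = 25895    # BPE tokens trained on the captions
VOCAB_SIZE = NUM_IMAGE_TOKENS + NUM_SPECIAL_TOKENS + NUM_TEXT_TOKENS
assert VOCAB_SIZE == 30_000

# Hypothetical layout: image codes first, then specials, then text ids.
IMAGE_OFFSET = 0
SPECIAL_OFFSET = NUM_IMAGE_TOKENS
TEXT_OFFSET = NUM_IMAGE_TOKENS + NUM_SPECIAL_TOKENS

def image_code_to_vocab_id(code: int) -> int:
    """Map a 1d-tokenizer codebook index (0..4095) into the joint vocab."""
    assert 0 <= code < NUM_IMAGE_TOKENS
    return IMAGE_OFFSET + code

def format_image_tokens(codes: list[int]) -> str:
    """Render 32 image codes the way they appear in the training prompts."""
    body = "".join(f"<|image:{c}|>" for c in codes)
    return f"<|startofimage|>{body}<|endofimage|>"
```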
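And a hedged sketch of the logit masking mentioned in the training procedure, assuming image tokens occupy ids 0..4095 as in the layout above. The function name, shapes, and masking details are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def masked_image_ce(logits: torch.Tensor,
                    targets: torch.Tensor,
                    image_positions: torch.Tensor) -> torch.Tensor:
    """Cross entropy with logits masked to the image-token vocabulary.

    logits:          (batch, seq, vocab) raw model outputs
    targets:         (batch, seq) next-token ids
    image_positions: (batch, seq) bool, True where the target is an image token
    """
    vocab = logits.size(-1)
    # Mark which vocabulary ids are image tokens (assumed range 0..4095).
    is_image_id = torch.zeros(vocab, dtype=torch.bool, device=logits.device)
    is_image_id[:4096] = True
    # Forbid non-image tokens wherever an image token must be predicted.
    forbid = image_positions.unsqueeze(-1) & ~is_image_id  # (B, S, V)
    masked_logits = logits.masked_fill(forbid, float("-inf"))
    # Standard cross entropy over the masked logits.
    return F.cross_entropy(masked_logits.flatten(0, 1), targets.flatten())
```

Setting the forbidden logits to -inf zeroes their softmax probability, so the loss only has to discriminate among the 4096 image codes inside an image span rather than the full 30K vocabulary.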