Donka-V1
This model was part of an undergraduate research project; the goal was to implement the well-known paper "Attention Is All You Need".
The model translates short Macedonian text into English. It is quite small and simple, but it handles short sentences well. It is a Seq2Seq Transformer trained on around 500,000 Macedonian-English sentence pairs, mostly gathered from the internet.
The name of the model is derived from the Macedonian female name Донка, as it appears in Маке-донка (Make-DONKA), a nod to the model's task of translating Macedonian text.
Special thanks to our faculty, our professor, and our assistant for their encouragement and mentorship, and a special shoutout to the people mentioned below for the dataset resources.
Running the model
- A notebook for running inference with the model can be found in the GitHub repo here.
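For reference, inference with a model like this is typically a greedy decoding loop over the decoder. The sketch below is illustrative only, not the notebook's code: `encode`, `decode`, and `generator` stand for hypothetical callables wrapping the model's encoder pass, decoder pass, and output projection, and the token ids are assumed to come from the released .vocab files.

```python
import torch

def greedy_decode(encode, decode, generator, src_ids, bos_id, eos_id, max_len=80):
    # encode/decode/generator are assumed callables for the model's encoder
    # pass, decoder pass, and final linear projection (see the checkpoint
    # layout below); tensors are sequence-first: (S, batch=1).
    memory = encode(src_ids)                          # (S, 1, 512) encoder states
    ys = torch.tensor([[bos_id]], dtype=torch.long)   # running target, starts with <bos>
    for _ in range(max_len):
        out = decode(ys, memory)                      # (T, 1, 512) decoder states
        logits = generator(out[-1])                   # next-token logits at last position
        next_id = int(logits.argmax(dim=-1))
        ys = torch.cat([ys, torch.tensor([[next_id]])], dim=0)
        if next_id == eos_id:                         # stop at end-of-sentence
            break
    return ys.squeeze(1).tolist()                     # target token ids, incl. <bos>
```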
License:
All of the model and .vocab files are licensed under CC BY-NC 4.0.
Dataset attribution:
A huge thank you to the following people and institutions; without them, our research would have been impossible.
Our dataset is a combination of material from the following websites, services, books, and blogs.
We received written permission from the following wonderful people:
- Acad. Марјан Марковиќ - Дигитални ресурси на македонскиот јазик (Digital Resources of the Macedonian Language).
- Prof. Dr. Георге Гоце Митревски - pelister.org.
- Ph.D. Игор Трајковски - time.mk.
Honorable mentions:
Everything we used for our dataset is licensed under CC BY-NC 4.0.
Some checkpoint specs for the model:
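The listing below can be reproduced by loading the checkpoint and walking its state_dict. A minimal sketch, assuming the checkpoint file is named donka_v1.pt (a placeholder, not the actual file name):

```python
import torch

ckpt = torch.load("donka_v1.pt", map_location="cpu")  # placeholder file name

print("Checkpoint keys:", ckpt.keys())
print("Saved at epoch:", ckpt["epoch"])
print("Model state_dict contents:")
for name, tensor in ckpt["model_state_dict"].items():
    print(f"{name} shape: {tuple(tensor.shape)}")
```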
Checkpoint keys: dict_keys(['epoch', 'model_state_dict', 'optimizer_state_dict', 'scaler_state_dict'])
Saved at epoch: 18
Model state_dict contents:
transformer_encoder.layers.0.self_attn.in_proj_weight shape: (1536, 512)
transformer_encoder.layers.0.self_attn.in_proj_bias shape: (1536,)
transformer_encoder.layers.0.self_attn.out_proj.weight shape: (512, 512)
transformer_encoder.layers.0.self_attn.out_proj.bias shape: (512,)
transformer_encoder.layers.0.linear1.weight shape: (512, 512)
transformer_encoder.layers.0.linear1.bias shape: (512,)
transformer_encoder.layers.0.linear2.weight shape: (512, 512)
transformer_encoder.layers.0.linear2.bias shape: (512,)
transformer_encoder.layers.0.norm1.weight shape: (512,)
transformer_encoder.layers.0.norm1.bias shape: (512,)
transformer_encoder.layers.0.norm2.weight shape: (512,)
transformer_encoder.layers.0.norm2.bias shape: (512,)
transformer_encoder.layers.1.self_attn.in_proj_weight shape: (1536, 512)
transformer_encoder.layers.1.self_attn.in_proj_bias shape: (1536,)
transformer_encoder.layers.1.self_attn.out_proj.weight shape: (512, 512)
transformer_encoder.layers.1.self_attn.out_proj.bias shape: (512,)
transformer_encoder.layers.1.linear1.weight shape: (512, 512)
transformer_encoder.layers.1.linear1.bias shape: (512,)
transformer_encoder.layers.1.linear2.weight shape: (512, 512)
transformer_encoder.layers.1.linear2.bias shape: (512,)
transformer_encoder.layers.1.norm1.weight shape: (512,)
transformer_encoder.layers.1.norm1.bias shape: (512,)
transformer_encoder.layers.1.norm2.weight shape: (512,)
transformer_encoder.layers.1.norm2.bias shape: (512,)
transformer_encoder.layers.2.self_attn.in_proj_weight shape: (1536, 512)
transformer_encoder.layers.2.self_attn.in_proj_bias shape: (1536,)
transformer_encoder.layers.2.self_attn.out_proj.weight shape: (512, 512)
transformer_encoder.layers.2.self_attn.out_proj.bias shape: (512,)
transformer_encoder.layers.2.linear1.weight shape: (512, 512)
transformer_encoder.layers.2.linear1.bias shape: (512,)
transformer_encoder.layers.2.linear2.weight shape: (512, 512)
transformer_encoder.layers.2.linear2.bias shape: (512,)
transformer_encoder.layers.2.norm1.weight shape: (512,)
transformer_encoder.layers.2.norm1.bias shape: (512,)
transformer_encoder.layers.2.norm2.weight shape: (512,)
transformer_encoder.layers.2.norm2.bias shape: (512,)
transformer_decoder.layers.0.self_attn.in_proj_weight shape: (1536, 512)
transformer_decoder.layers.0.self_attn.in_proj_bias shape: (1536,)
transformer_decoder.layers.0.self_attn.out_proj.weight shape: (512, 512)
transformer_decoder.layers.0.self_attn.out_proj.bias shape: (512,)
transformer_decoder.layers.0.multihead_attn.in_proj_weight shape: (1536, 512)
transformer_decoder.layers.0.multihead_attn.in_proj_bias shape: (1536,)
transformer_decoder.layers.0.multihead_attn.out_proj.weight shape: (512, 512)
transformer_decoder.layers.0.multihead_attn.out_proj.bias shape: (512,)
transformer_decoder.layers.0.linear1.weight shape: (512, 512)
transformer_decoder.layers.0.linear1.bias shape: (512,)
transformer_decoder.layers.0.linear2.weight shape: (512, 512)
transformer_decoder.layers.0.linear2.bias shape: (512,)
transformer_decoder.layers.0.norm1.weight shape: (512,)
transformer_decoder.layers.0.norm1.bias shape: (512,)
transformer_decoder.layers.0.norm2.weight shape: (512,)
transformer_decoder.layers.0.norm2.bias shape: (512,)
transformer_decoder.layers.0.norm3.weight shape: (512,)
transformer_decoder.layers.0.norm3.bias shape: (512,)
transformer_decoder.layers.1.self_attn.in_proj_weight shape: (1536, 512)
transformer_decoder.layers.1.self_attn.in_proj_bias shape: (1536,)
transformer_decoder.layers.1.self_attn.out_proj.weight shape: (512, 512)
transformer_decoder.layers.1.self_attn.out_proj.bias shape: (512,)
transformer_decoder.layers.1.multihead_attn.in_proj_weight shape: (1536, 512)
transformer_decoder.layers.1.multihead_attn.in_proj_bias shape: (1536,)
transformer_decoder.layers.1.multihead_attn.out_proj.weight shape: (512, 512)
transformer_decoder.layers.1.multihead_attn.out_proj.bias shape: (512,)
transformer_decoder.layers.1.linear1.weight shape: (512, 512)
transformer_decoder.layers.1.linear1.bias shape: (512,)
transformer_decoder.layers.1.linear2.weight shape: (512, 512)
transformer_decoder.layers.1.linear2.bias shape: (512,)
transformer_decoder.layers.1.norm1.weight shape: (512,)
transformer_decoder.layers.1.norm1.bias shape: (512,)
transformer_decoder.layers.1.norm2.weight shape: (512,)
transformer_decoder.layers.1.norm2.bias shape: (512,)
transformer_decoder.layers.1.norm3.weight shape: (512,)
transformer_decoder.layers.1.norm3.bias shape: (512,)
transformer_decoder.layers.2.self_attn.in_proj_weight shape: (1536, 512)
transformer_decoder.layers.2.self_attn.in_proj_bias shape: (1536,)
transformer_decoder.layers.2.self_attn.out_proj.weight shape: (512, 512)
transformer_decoder.layers.2.self_attn.out_proj.bias shape: (512,)
transformer_decoder.layers.2.multihead_attn.in_proj_weight shape: (1536, 512)
transformer_decoder.layers.2.multihead_attn.in_proj_bias shape: (1536,)
transformer_decoder.layers.2.multihead_attn.out_proj.weight shape: (512, 512)
transformer_decoder.layers.2.multihead_attn.out_proj.bias shape: (512,)
transformer_decoder.layers.2.linear1.weight shape: (512, 512)
transformer_decoder.layers.2.linear1.bias shape: (512,)
transformer_decoder.layers.2.linear2.weight shape: (512, 512)
transformer_decoder.layers.2.linear2.bias shape: (512,)
transformer_decoder.layers.2.norm1.weight shape: (512,)
transformer_decoder.layers.2.norm1.bias shape: (512,)
transformer_decoder.layers.2.norm2.weight shape: (512,)
transformer_decoder.layers.2.norm2.bias shape: (512,)
transformer_decoder.layers.2.norm3.weight shape: (512,)
transformer_decoder.layers.2.norm3.bias shape: (512,)
generator.weight shape: (8257, 512)
generator.bias shape: (8257,)
src_tok_emb.embedding.weight shape: (11370, 512)
tgt_tok_emb.embedding.weight shape: (8257, 512)
positional_encoding.pos_embedding shape: (10000, 1, 512)
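The parameter names and shapes above are consistent with a PyTorch nn.TransformerEncoder/nn.TransformerDecoder stack: 3 encoder and 3 decoder layers, d_model = 512, dim_feedforward = 512 (from the linear1/linear2 shapes), a source vocabulary of 11,370, a target vocabulary of 8,257, and positional encodings up to length 10,000. A reconstruction sketch follows; note that the number of attention heads is not recoverable from the shapes, so nhead = 8 is an assumption, as are the class and argument names.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    # Fixed sinusoidal table; stored as a buffer so it appears in the
    # state_dict as positional_encoding.pos_embedding, shape (10000, 1, 512).
    def __init__(self, emb_size: int, maxlen: int = 10000):
        super().__init__()
        den = torch.exp(-torch.arange(0, emb_size, 2) * math.log(10000.0) / emb_size)
        pos = torch.arange(0, maxlen).unsqueeze(1)
        pe = torch.zeros(maxlen, emb_size)
        pe[:, 0::2] = torch.sin(pos * den)
        pe[:, 1::2] = torch.cos(pos * den)
        self.register_buffer("pos_embedding", pe.unsqueeze(1))

    def forward(self, tok_emb):
        return tok_emb + self.pos_embedding[: tok_emb.size(0)]

class TokenEmbedding(nn.Module):
    # Wrapping nn.Embedding in a submodule named `embedding` reproduces the
    # src_tok_emb.embedding.weight / tgt_tok_emb.embedding.weight key names.
    def __init__(self, vocab_size: int, emb_size: int):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        self.emb_size = emb_size

    def forward(self, tokens):
        return self.embedding(tokens) * math.sqrt(self.emb_size)

class Seq2SeqTransformer(nn.Module):
    def __init__(self, src_vocab=11370, tgt_vocab=8257, d_model=512,
                 nhead=8, num_layers=3, dim_feedforward=512):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward)
        self.transformer_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.transformer_decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.generator = nn.Linear(d_model, tgt_vocab)   # weight shape (8257, 512)
        self.src_tok_emb = TokenEmbedding(src_vocab, d_model)
        self.tgt_tok_emb = TokenEmbedding(tgt_vocab, d_model)
        self.positional_encoding = PositionalEncoding(d_model)

# Loading the released weights into this reconstruction (file name assumed):
# model = Seq2SeqTransformer()
# ckpt = torch.load("donka_v1.pt", map_location="cpu")
# model.load_state_dict(ckpt["model_state_dict"])
# model.eval()
```

If every key and shape in the dump above matches, load_state_dict should succeed without errors; a mismatch would indicate that one of the assumed hyperparameters (most likely nhead) differs from the original training setup.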