Update README.md
README.md
CHANGED
@@ -21,14 +21,13 @@ This model is a continued pre-trained version of [xlm-roberta-base](https://hugg
 It achieves the following results on the evaluation set:
 - Loss: 1.1697
 
+We thank the Microsoft Accelerating Foundation Models Research Program for supporting our research.
+Authors: Mammad Hajili, Duygu Ataman
+
 ## Model description
 
 The model was trained on the masked language modeling task on a single V100 GPU for 68 hours. For downstream tasks, it needs to be fine-tuned for the objective of each task.
 
-## Intended uses & limitations
-
-Since some of the dependent datasets carry non-commercial licences, the model is released under the cc-by-nc-4.0 licence.
-
 ## Training and evaluation data
 
 The training data is a clean mix of various Azerbaijani corpora shared by the community.
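For intuition about the reported evaluation loss: assuming it is a mean cross-entropy in nats (the usual convention for Hugging Face `Trainer` output, though the card does not say so explicitly), a loss of 1.1697 corresponds to a pseudo-perplexity of exp(1.1697) ≈ 3.22 over the masked positions:

```python
import math

eval_loss = 1.1697                  # evaluation loss reported in the diff above
perplexity = math.exp(eval_loss)    # perplexity = e^loss for cross-entropy in nats
print(round(perplexity, 2))         # 3.22
```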
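Since the model was pre-trained with a masked language modeling objective, a minimal sketch of the conventional BERT/RoBERTa-style masking scheme may help illustrate what that training task looks like. The 15% selection rate, the 80/10/10 split, and the token ids below are the standard defaults, not values confirmed by this model card:

```python
import random

# Illustrative ids only; the real XLM-R tokenizer defines its own <mask> id and vocab size.
MASK_ID, VOCAB_SIZE = 250001, 250002

def mask_tokens(token_ids, mask_prob=0.15, seed=0):
    """BERT/RoBERTa-style masking: select ~15% of positions as prediction
    targets; of those, 80% become the mask token, 10% a random token, and
    10% are left unchanged. Labels of -100 are ignored by the loss."""
    rng = random.Random(seed)
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok          # this position contributes to the MLM loss
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID  # 80%: replace with the mask token
            elif r < 0.9:
                inputs[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
            # else 10%: keep the original token
    return inputs, labels

inputs, labels = mask_tokens(list(range(100)))
```

The model is then trained to predict the original token at every position whose label is not -100, which is the objective the card's 68-hour V100 run optimizes.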