Update README.md
README.md CHANGED
@@ -22,6 +22,8 @@ The **MaLA Corpus** (Massive Language Adaptation) is a comprehensive, multilingu
 - **Language Coverage**: Includes data for **939 languages**, with **546 languages** having over 100,000 tokens.
 - **Pre-processing**: The corpus is cleaned and deduplicated to ensure high-quality training data.
 
+- Project page: https://mala-lm.github.io/emma-500
+- Paper: https://arxiv.org/abs/2409.17892
 
 ---
 
@@ -60,7 +62,7 @@ We will comply with legitimate requests by removing the affected sources from th
 
 ---
 ## Citation
-
+This dataset is compiled and released in the paper below.
 ```
 @article{ji2024emma500enhancingmassivelymultilingual,
   title={{EMMA}-500: Enhancing Massively Multilingual Adaptation of Large Language Models},
@@ -71,7 +73,7 @@ We will comply with legitimate requests by removing the affected sources from th
 }
 ```
 
-The final version of this dataset 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split) is used for training the models presented in the paper below.
+The final version of this dataset 🤗[MaLA-LM/mala-monolingual-split](https://huggingface.co/datasets/MaLA-LM/mala-monolingual-split) is also used for training the models presented in the paper below.
 ```
 @article{ji2025emma2,
   title={Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data},