edited data card: data fields and wording
README.md
# MaLA Corpus: Massive Language Adaptation Corpus
This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) dataset is the bilingual part of the [**MaLA Corpus**](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529). It is a cleaned and deduplicated version of the OPUS corpus, collected from [OPUS](https://opus.nlpl.eu) with a cutoff of October 2024 (2410). In particular, it contains bilingual translation data (also known as parallel data or bitexts) in 16,829 language pairs.
The [**MaLA Corpus** (Massive Language Adaptation)](https://huggingface.co/collections/MaLA-LM/mala-corpus-66e05127641a51de34d39529) is a series of comprehensive, multilingual datasets designed to support the continual pre-training of large language models. This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) set can also support the training of multilingual translation models.
---
## Data Fields
`text` - the source-target parallel data in the format `${src} \t ${trg}`, i.e., the source and target segments delimited by a tab character (`"\t"`). \
`original_code` - the source and target language codes in the format `${src_code} - ${trg_code}`, delimited by `" - "`.
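
For illustration, a record can be split back into its parts using only the field formats documented above; this is a minimal sketch, and the helper name and sample record are made up for the example:

```python
def parse_example(example: dict) -> dict:
    # `text` holds the source and target segments delimited by a tab.
    src, trg = example["text"].split("\t", 1)
    # `original_code` holds the two language codes delimited by " - ".
    src_code, trg_code = example["original_code"].split(" - ", 1)
    return {"src": src.strip(), "trg": trg.strip(),
            "src_code": src_code, "trg_code": trg_code}

# A made-up record in the documented format:
print(parse_example({"text": "Hello \t Hei", "original_code": "eng - fin"}))
```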
---
## Key Features
- **Language Coverage**: Includes data in 16,829 language pairs.
- **Pre-processing**: The corpus is cleaned and deduplicated to ensure high-quality training data.

---

This [**mala-opus-dedup-2410**](https://huggingface.co/datasets/MaLA-LM/mala-opus-dedup-2410) set was created by processing data from [OPUS](https://opus.nlpl.eu), followed by rigorous pre-processing to ensure the quality of the data:
- **Cleaning**: Noisy and irrelevant data were removed to ensure higher data quality.
- **Deduplication**: Duplicate entries across multiple sources were eliminated.
- **Normalization**: The data was normalized, and language codes were standardized to ISO 639-3 to ensure consistency across all sources (a small sketch follows this list).
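
To make the code standardization concrete, here is a minimal sketch assuming a simple two-letter-to-three-letter conversion; the mapping excerpt and function are illustrative, not the actual pipeline code:

```python
# Tiny illustrative excerpt of an ISO 639-1 -> ISO 639-3 mapping;
# a real pipeline would use a complete conversion table.
ISO_639_3 = {"en": "eng", "fi": "fin", "zh": "zho"}

def normalize_code(code: str) -> str:
    # Strip region/script suffixes such as "en-US" or "zh_Hant",
    # then map two-letter codes to their three-letter equivalents.
    base = code.replace("_", "-").split("-")[0].lower()
    return ISO_639_3.get(base, base)

print(normalize_code("en-US"))  # -> "eng"
```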
In addition, we apply a variety of handcrafted checks to filter out noisy lines, based on [this pipeline](https://github.com/browsermt/students/tree/master/train-student/clean). In detail, we run line-level script detection to ensure that the writing script identified for the dataset during the preliminary stage makes up more than 5% of each line. We remove lines in which the same word or character repeats more than 5 times in a row. We compute the source-to-target ratios in the number of characters and the number of words, as well as the alphabetical ratio, but require only one of these ratios to fall within our pre-determined range, given the wide variety of scripts, languages, and their usual word delimiters. Finally, we ensure that each line is non-empty and falls between a minimum and a maximum character length.
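
Two of these checks are simple enough to sketch: the word-repetition variant of the repetition filter and the character-length bound. The repetition limit of 5 comes from the description above, while the length bounds here are illustrative assumptions:

```python
MIN_CHARS, MAX_CHARS = 1, 2000  # assumed bounds, not the pipeline's real settings

def repeats_too_much(line: str, max_run: int = 5) -> bool:
    # Flag lines in which the same word repeats more than `max_run` times in a row.
    words = line.split()
    run = 1
    for prev, cur in zip(words, words[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_run:
            return True
    return False

def length_ok(line: str) -> bool:
    # Non-empty and within the character-length bounds.
    return MIN_CHARS <= len(line.strip()) <= MAX_CHARS

print(repeats_too_much("spam spam spam spam spam spam"))  # True: six repeats in a row
print(length_ok(""))                                      # False: empty line
```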
---
- **Pre-training** of large language models, particularly continual pre-training, to enhance performance in low-resource languages.
- **Fine-tuning models** on multilingual benchmarks to improve language coverage across a variety of domains.
- **Multilingual tasks** such as machine translation training or fine-tuning (see the loading sketch below).
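
As a hedged sketch of wiring the data into such a workflow (the `load_dataset` call below is an assumption about this repo's layout; the dataset may require a specific subset or config name):

```python
from datasets import load_dataset

# Assumed invocation -- adjust the config/subset name to the repo's actual layout.
ds = load_dataset("MaLA-LM/mala-opus-dedup-2410", split="train", streaming=True)

for example in ds:
    # Reuse the field formats from the Data Fields section above.
    src, trg = example["text"].split("\t", 1)
    print(src.strip(), "->", trg.strip())
    break
```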
---
## Take-down Policy