amanuelbyte committed (verified) · Commit a3dd762 · Parent(s): 0558f5c

Update README.md

Files changed (1):
  1. README.md +181 -0

README.md CHANGED
@@ -17,3 +17,184 @@ configs:
  - split: train
    path: data/train-*
---
# Afrikaans-English Translation Dataset

A high-quality, curated parallel corpus for machine translation between English (en) and Afrikaans (af), created through extensive data processing and quality filtering.

## 📊 Dataset Overview

- **Language Pair**: English → Afrikaans (en-af)
- **Total Sentence Pairs**: 161,644
- **Quality Threshold**: QE ≥ 0.7
- **Processing Date**: September 22, 2025
- **Repository**: [amanuelbyte/af-en-translation-dataset](https://huggingface.co/amanuelbyte/af-en-translation-dataset)

## 🎯 Key Features

- **Quality Assured**: All sentence pairs meet a minimum Quality Estimation (QE) score of 0.7
- **Deduplicated**: An extensive deduplication process removes redundant content
- **Multi-Domain**: Covers diverse domains including technical, religious, educational, and general content
- **Clean Data**: Filtered for language identification, semantic similarity, and rule-based quality checks

## 📈 Dataset Statistics

### Data Volume
- **Before Deduplication**: 192,541 sentence pairs
- **After Deduplication**: 161,644 sentence pairs
- **Deduplication Ratio**: ~16% reduction (30,897 pairs removed)

### Quality Metrics
- **Average QE Score**: 0.78
- **Highest Quality Dataset**: Tatoeba (QE: 0.861)
- **Lowest Quality Dataset**: KDE4 (QE: 0.723)

## 📚 Data Sources

This dataset combines **19 high-quality sources**. Pair counts are pre-deduplication totals; a quick consistency check follows the table.

| Source | Dataset | Pairs | QE Score | Domain |
|--------|---------|-------|----------|--------|
| wikimedia | Wikipedia Articles | 56,766 | 0.781 | General |
| bible-uedin | Bible Translations | 51,118 | 0.748 | Religious |
| OpenSubtitles | Movie Subtitles | 37,731 | 0.795 | Entertainment |
| qed | QCRI Educational Domain (AMARA) | 10,412 | 0.773 | Educational |
| GNOME | Software Localization | 6,169 | 0.761 | Technical |
| spc | Scientific Papers | 6,811 | 0.739 | Academic |
| kde4 | KDE Software | 5,365 | 0.723 | Technical |
| Pontoon-Translations | Mozilla Translations | 5,606 | 0.785 | Technical |
| Weblate-Translations | Weblate Platform | 3,978 | 0.757 | General |
| TED2020 | TED Talks | 1,882 | 0.783 | Educational |
| ntrex | News Articles | 1,806 | 0.777 | News |
| Tatoeba | Community Translations | 2,197 | 0.861 | General |
| smol | Smol Dataset | 812 | 0.789 | General |
| lingvanex_test_references | Test References | 748 | 0.840 | General |
| ELRC-wikipedia_health | Health Wikipedia | 371 | 0.808 | Medical |
| ELRC_2922 | European Language Resources | 369 | 0.808 | General |
| ELRC-3039-wikipedia_health | Health Wikipedia | 371 | 0.808 | Medical |
| ted_talks_iwslt | TED/IWSLT Talks | 27 | 0.770 | Educational |
| ELRC-2638-monumentos_2007 | Cultural Heritage | 2 | 0.848 | Cultural |

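As a quick consistency check, the per-source pair counts above sum to the pre-deduplication total, and the deduplication ratio follows from the two totals (the counts below are copied from the table):

```python
# Per-source pair counts, copied from the table above.
pairs_per_source = [
    56766, 51118, 37731, 10412, 6169, 6811, 5365, 5606, 3978, 1882,
    1806, 2197, 812, 748, 371, 369, 371, 27, 2,
]

before_dedup = sum(pairs_per_source)
after_dedup = 161_644

print(before_dedup)                                     # 192541
print(f"{1 - after_dedup / before_dedup:.1%} removed")  # 16.0% removed
```
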
## 🛠️ Processing Pipeline

The dataset underwent rigorous processing:

1. **Rule-based Filtering**: Length constraints, character validation
2. **Semantic Filtering**: Sentence embedding similarity (threshold: 0.7)
3. **Language Detection**: FastText + AfroLID models for accurate language identification
4. **Quality Estimation**: AfriCOMET QE model for translation quality scoring
5. **Sentence-level QE Filtering**: Only pairs with QE ≥ 0.7 retained (see the sketch after this list)
6. **Deduplication**: 4-stage process removing identical and similar content

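To make the QE filtering step concrete, here is a minimal sketch. It assumes a hypothetical intermediate table in which each pair carries a precomputed `qe_score` column; that column is not part of the released dataset, and the actual pipeline code is linked under Related Resources.

```python
from datasets import Dataset

# Hypothetical intermediate data: merged pairs with precomputed QE scores.
scored_pairs = Dataset.from_dict({
    "en": ["Good morning.", "This pair is noisy."],
    "af": ["Goeie môre.", "Hierdie paar is raserig."],
    "qe_score": [0.84, 0.55],  # assumed column name, for illustration only
})

QE_THRESHOLD = 0.7  # threshold used for this dataset

# Keep only pairs that meet the quality threshold (pipeline step 5).
filtered = scored_pairs.filter(lambda example: example["qe_score"] >= QE_THRESHOLD)
print(filtered.num_rows)  # 1
```
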
### Deduplication Process

1. Remove identical source-target pairs
2. Remove duplicate (source, target) combinations
3. Remove duplicate source sentences
4. Remove duplicate target sentences (a pandas sketch of all four stages follows)

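A minimal pandas sketch of these four stages on toy data; the real pipeline operates on the merged, QE-filtered corpus, and stage 1 is read here as dropping pairs whose source and target strings are identical:

```python
import pandas as pd

# Toy parallel data standing in for the merged, QE-filtered corpus.
df = pd.DataFrame({
    "en": ["Hello.", "Hello.", "OK", "How are you?", "Thank you."],
    "af": ["Hallo.", "Hallo.", "OK", "Hoe gaan dit?", "Dankie."],
})

# 1. Drop pairs whose source and target strings are identical (untranslated copies).
df = df[df["en"] != df["af"]]

# 2. Drop duplicate (source, target) combinations.
df = df.drop_duplicates(subset=["en", "af"])

# 3. Drop rows that repeat a source sentence.
df = df.drop_duplicates(subset=["en"])

# 4. Drop rows that repeat a target sentence.
df = df.drop_duplicates(subset=["af"])

print(len(df))  # 3 unique pairs remain
```
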
## 💾 Dataset Format

The dataset is available in Hugging Face `datasets` format with the following structure:

```python
from datasets import load_dataset

dataset = load_dataset("amanuelbyte/af-en-translation-dataset")
print(dataset)

# Dataset structure:
# DatasetDict({
#     train: Dataset({
#         features: ['en', 'af'],
#         num_rows: 161644
#     })
# })
```

### Columns
- `en`: English source text
- `af`: Afrikaans target text

## 📖 Usage Examples

### Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("amanuelbyte/af-en-translation-dataset")

# Access the training split
train_data = dataset["train"]

# Get a sample
sample = train_data[0]
print(f"English: {sample['en']}")
print(f"Afrikaans: {sample['af']}")
```

### For Machine Translation Training
```python
# Prepare data for training
source_texts = train_data["en"]
target_texts = train_data["af"]

# Statistics
print(f"Dataset size: {len(train_data)}")
print(f"Average source length: {sum(len(text.split()) for text in source_texts) / len(source_texts):.1f} words")
print(f"Average target length: {sum(len(text.split()) for text in target_texts) / len(target_texts):.1f} words")
```

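Beyond extracting raw text, the pairs can be tokenized for sequence-to-sequence fine-tuning. The sketch below is illustrative rather than part of the dataset: it assumes the `transformers` library is installed and uses `Helsinki-NLP/opus-mt-en-af` purely as an example checkpoint.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

train_data = load_dataset("amanuelbyte/af-en-translation-dataset")["train"]

# Example checkpoint; substitute whichever en->af seq2seq model you fine-tune.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-af")

def preprocess(batch):
    # Tokenize English inputs and Afrikaans targets in a single call.
    return tokenizer(
        batch["en"],
        text_target=batch["af"],
        truncation=True,
        max_length=128,
    )

tokenized = train_data.map(preprocess, batched=True, remove_columns=["en", "af"])
print(tokenized)  # columns: input_ids, attention_mask, labels
```
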
## 🎓 Applications

This dataset is suitable for:
- **Machine Translation**: Training and fine-tuning MT models
- **Language Modeling**: Pretraining language models for Afrikaans
- **Cross-lingual Transfer**: Multilingual NLP tasks
- **Quality Estimation**: Research on translation quality
- **Linguistic Studies**: Analysis of the Afrikaans-English language pair

## 🔍 Quality Assessment

The dataset includes comprehensive quality metrics:

- **Individual Dataset QE Scores**: Range from 0.723 to 0.861
- **Overall Average QE**: 0.78
- **Quality Distribution**: All source datasets exceed the 0.7 threshold
- **Consistency**: Uniform quality across different domains

## 📄 License

This dataset is released under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license. The individual source datasets maintain their original licenses.

## 🤝 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{af-en-translation-dataset-2025,
  title={Afrikaans-English Translation Dataset},
  author={Amanuel Tewolde},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/amanuelbyte/af-en-translation-dataset}}
}
```

## 📞 Contact

For questions or issues regarding this dataset, please contact:
- **Maintainer**: Amanuel Tewolde
- **Email**: [email protected]
- **Organization**: Hugging Face Community

## 🔗 Related Resources

- [AfroLID Model](https://huggingface.co/UBC-NLP/afrolid_1.5) - Language identification
- [AfriCOMET QE](https://huggingface.co/masakhane/africomet-qe-stl) - Quality estimation
- [Processing Pipeline](https://github.com/amanuelbyte/mt-data-processing) - Source code

---

**Note**: This dataset was created using automated processing pipelines with quality thresholds. While extensive filtering has been applied, users should perform their own quality assessment based on their specific use cases.