Twi-English Parallel Dataset
Sample Rows
A few pairs from the dataset viewer preview:

| twi_sentence | english_parahrase |
|---|---|
| ɔsɔfo sɔre | pastor get up |
| afuo kyɛ | farm share |
| nkoraa ha | children hundred |
| bɛ mee | VEN to be satisfied |
| skul m | school inside |
| akonwa bene | chair be cooked |
| brodeɛ sakra | plantain repent |
| mframa dí | wind speak |
| twenee mra | drum come |
| kookoo tie | cocoa listen |
Dataset Description
This dataset contains a large-scale parallel corpus of Twi-English sentence pairs, featuring synthetically generated Twi sentences with corresponding English paraphrases. The dataset is designed to support machine translation, cross-lingual understanding, and other NLP tasks involving the Twi language (a dialect of Akan spoken in Ghana).
Dataset Summary
- Total Size: 47,924,398 parallel sentence pairs
- Language Pair: Twi (tw) ↔ English (en)
- Generation Method: Synthetic generation with dictionary-based paraphrasing
Languages
- Twi (tw): A dialect of Akan, primarily spoken in Ghana. Part of the Niger-Congo language family.
- English (en): Used as the target language for translation tasks.
Dataset Structure
Data Fields
- `twi_sentence` (string, 3-124 characters): Synthetically generated sentence in the Twi language
- `english_parahrase` (string, 3-174 characters): Corresponding English paraphrase created using Twi-to-English dictionary mapping
Dataset Size
| Total Examples |
|---|
| 47,924,398 |
Example Usage
```python
from datasets import load_dataset

# Load the dataset (training split)
dataset = load_dataset("michsethowusu/twi-english-parallel-synthetic-50m", split="train")

# Access the first example
print("Twi:", dataset[0]["twi_sentence"])
print("English:", dataset[0]["english_parahrase"])

# Iterate through examples
for example in dataset:
    twi_text = example["twi_sentence"]
    english_text = example["english_parahrase"]
    # Your processing here...
```
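Because the corpus is large (roughly 48 million pairs), streaming can be more practical than downloading and materializing everything up front. A minimal sketch, assuming the default `train` split:

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it in full
streamed = load_dataset(
    "michsethowusu/twi-english-parallel-synthetic-50m",
    split="train",
    streaming=True,
)

# Inspect the first few pairs without loading everything into memory
for i, example in enumerate(streamed):
    print(example["twi_sentence"], "->", example["english_parahrase"])
    if i >= 4:
        break
```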
Dataset Creation
Source Data
The Twi sentences in this dataset were synthetically generated, and English paraphrases were created through dictionary-based translation methods. This approach enables the creation of large-scale parallel corpora for low-resource languages like Twi.
Data Collection Process
- Synthetic Generation: Twi sentences were generated using computational methods
- Dictionary Mapping: English paraphrases were created using Twi-to-English dictionary resources (see the sketch below)
- Quality Control: Data cleaning and filtering were applied to ensure quality
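As a rough illustration of the dictionary-mapping step (a hypothetical sketch, not the actual generation pipeline; the dictionary fragment and `gloss` function are invented for this example), a word-by-word gloss over whitespace tokens might look like this:

```python
# Hypothetical Twi-to-English dictionary fragment; the real resource is far larger
twi_to_english = {
    "maame": "mother",
    "noa": "cook",
    "afuo": "farm",
    "kyɛ": "share",
}

def gloss(twi_sentence: str) -> str:
    """Map each Twi token to its dictionary entry, keeping unknown tokens as-is."""
    return " ".join(twi_to_english.get(token, token) for token in twi_sentence.split())

print(gloss("maame noa"))  # -> "mother cook"
print(gloss("afuo kyɛ"))   # -> "farm share"
```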
Annotations
The dataset does not contain manual annotations but relies on algorithmic generation and dictionary-based translation.
Considerations for Use
Social Impact and Biases
- This dataset uses synthetic generation methods, which may not capture the full linguistic diversity and cultural nuances of natural Twi speech
- Dictionary-based paraphrasing may introduce systematic biases or miss contextual meanings
- Users should be aware that synthetic data may not generalize perfectly to real-world Twi usage
Limitations
- Synthetic Nature: Generated sentences may not reflect natural language patterns
- Dictionary Constraints: English paraphrases are limited by dictionary coverage and may lack contextual accuracy
- Domain Coverage: May not cover all domains or registers of Twi language use
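Given these limitations, it can help to run basic sanity filters before training. The sketch below drops empty pairs and pairs with extreme length ratios; the thresholds are arbitrary placeholders and should be tuned for your use case:

```python
from datasets import load_dataset

dataset = load_dataset("michsethowusu/twi-english-parallel-synthetic-50m", split="train")

def is_reasonable(example):
    twi = example["twi_sentence"].strip()
    eng = example["english_parahrase"].strip()
    if not twi or not eng:
        return False
    # Flag pairs whose character-length ratio is implausible
    ratio = len(eng) / max(len(twi), 1)
    return 0.2 <= ratio <= 5.0

filtered = dataset.filter(is_reasonable)
print(f"Kept {len(filtered):,} of {len(dataset):,} pairs")
```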
Recommended Use Cases
- Machine Translation: Training MT systems for Twi-English translation (a conversion sketch follows this list)
- Cross-lingual NLP: Developing cross-lingual models involving Twi
- Low-resource Language Research: Studying synthetic data approaches for under-resourced languages
- Educational Tools: Creating language learning applications for Twi
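For the machine-translation use case, many seq2seq training recipes expect each pair nested under a single `translation` field keyed by language code. A minimal conversion sketch (the nested layout is a common convention, not something this dataset ships with):

```python
from datasets import load_dataset

dataset = load_dataset("michsethowusu/twi-english-parallel-synthetic-50m", split="train")

def to_translation_format(example):
    # Nest the pair under one "translation" field keyed by language code
    return {"translation": {"tw": example["twi_sentence"], "en": example["english_parahrase"]}}

mt_dataset = dataset.map(
    to_translation_format,
    remove_columns=["twi_sentence", "english_parahrase"],
)
print(mt_dataset[0]["translation"])
```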
Licensing and Attribution
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{twi_english_parallel,
  title={Twi-English Parallel Dataset},
  author={[Your Name/Organization]},
  year={2025},
  url={https://huggingface.co/datasets/michsethowusu/twi-english-parallel-synthetic-50m},
  note={Large-scale synthetic parallel corpus for Twi-English translation}
}
```
Additional Information
Dataset Curators
[Add your information here]
Contact Information
For questions or issues regarding this dataset, please [add contact information].
Disclaimer: This dataset contains synthetically generated content. While efforts have been made to ensure quality, users should validate the data for their specific use cases.