Twi Speech-Text Parallel Dataset - Part 5 of 5

πŸŽ‰ The Largest Speech Dataset for the Twi Language

This dataset contains part 5 of the largest speech dataset for the Twi language, featuring 1 million speech-to-text pairs split across 5 parts (approximately 200,000 samples each). This represents a groundbreaking resource for Twi (Akan), a language spoken primarily in Ghana.

πŸš€ Breaking the Low-Resource Language Barrier

This publication demonstrates that African languages don't have to remain low-resource. Through creative synthetic data generation techniques, we've produced the largest collection of AI training data for speech-to-text models in Twi, proving that innovative approaches can build the datasets African languages need.

πŸ“Š Complete Dataset Series (1M Total Samples)

Part Repository Samples Status
Part 1 michsethowusu/twi-speech-text-parallel-synthetic-1m-part001 ~200,000 βœ… Available
Part 2 michsethowusu/twi-speech-text-parallel-synthetic-1m-part002 ~200,000 βœ… Available
Part 3 michsethowusu/twi-speech-text-parallel-synthetic-1m-part003 ~200,000 βœ… Available
Part 4 michsethowusu/twi-speech-text-parallel-synthetic-1m-part004 ~200,000 βœ… Available
Part 5 michsethowusu/twi-speech-text-parallel-synthetic-1m-part005 ~200,000 πŸ”₯ THIS PART

Dataset Summary

  • Language: Twi (Akan); ISO 639-3 code: aka
  • Total Dataset Size: 1,000,000 speech-text pairs
  • This Part: ~200,000 audio files (filtered, >1KB)
  • Task: Speech Recognition, Text-to-Speech
  • Format: WAV audio files with corresponding text transcriptions
  • Generation Method: Synthetic data generation
  • Modalities: Audio + Text

🎯 Supported Tasks

  • Automatic Speech Recognition (ASR): Train models to convert Twi speech to text
  • Text-to-Speech (TTS): Use parallel data for TTS model development
  • Speech-to-Speech Translation: Cross-lingual speech applications
  • Keyword Spotting: Identify specific Twi words in audio
  • Phonetic Analysis: Study Twi pronunciation patterns
  • Language Model Training: Large-scale Twi language understanding
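
For example, the sketch below prepares this dataset for ASR training. It assumes the datasets and transformers libraries; the wav2vec2 checkpoint named here is an illustrative choice, not something shipped with or prescribed by this dataset.

from datasets import load_dataset, Audio
from transformers import Wav2Vec2FeatureExtractor

# Load this part and decode audio at 16 kHz (see the audio specifications below)
dataset = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part005", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

# Illustrative multilingual checkpoint; any ASR feature extractor can be substituted
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-xls-r-300m")

def prepare(batch):
    audio = batch["audio"]
    # Convert the raw waveform into model input features
    batch["input_values"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["labels"] = batch["text"]  # tokenize with your target tokenizer/vocabulary
    return batch

dataset = dataset.map(prepare, remove_columns=dataset.column_names)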

πŸ“ Dataset Structure

Data Fields

  • audio: Audio file in WAV format (synthetically generated)
  • text: Corresponding text transcription in Twi
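
When loaded with the datasets library, each record exposes these two fields roughly as shown below (the layout follows the standard datasets Audio feature; printed values are illustrative):

from datasets import load_dataset

dataset = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part005", split="train")

sample = dataset[0]
# sample["audio"] -> {"path": ..., "array": decoded waveform, "sampling_rate": ...}
# sample["text"]  -> the Twi transcription as a UTF-8 string
print(sample["text"])
print(sample["audio"]["sampling_rate"])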

Data Splits

This part contains a single training split with approximately 200,000 filtered audio-text pairs (small/corrupted files removed).

Loading the Complete Dataset

from datasets import load_dataset, concatenate_datasets

# Load all parts of the dataset
parts = []
for i in range(1, 6):
    part_name = f"michsethowusu/twi-speech-text-parallel-synthetic-1m-part{i:03d}"
    part = load_dataset(part_name, split="train")
    parts.append(part)

# Combine all parts into one dataset
complete_dataset = concatenate_datasets(parts)
print(f"Complete dataset size: {{len(complete_dataset):,}} samples")

Loading Just This Part

from datasets import load_dataset

# Load only this part
dataset = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part005", split="train")
print(f"Part 5 dataset size: {{len(dataset):,}} samples")

πŸ› οΈ Dataset Creation

Methodology

This dataset was created using synthetic data generation techniques, specifically designed to overcome the challenge of limited speech resources for African languages. The approach demonstrates how AI can be used to bootstrap language resources for underrepresented languages.

Data Processing Pipeline

  1. Text Generation: Synthetic Twi sentences generated
  2. Speech Synthesis: Text-to-speech conversion using advanced models
  3. Quality Filtering: Files smaller than 1KB removed to ensure quality
  4. Alignment Verification: Audio-text alignment validated
  5. Format Standardization: Consistent WAV format and text encoding
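
A minimal sketch of the size-based quality filter described in step 3, assuming a local directory of generated WAV files (the directory name and threshold are illustrative):

from pathlib import Path

MIN_SIZE_BYTES = 1024  # files smaller than 1KB are treated as corrupted/empty

def keep_valid_wavs(audio_dir):
    # Yield only WAV files that pass the minimum-size check
    for wav_path in Path(audio_dir).glob("*.wav"):
        if wav_path.stat().st_size >= MIN_SIZE_BYTES:
            yield wav_path

kept = list(keep_valid_wavs("generated_audio"))  # hypothetical output directory
print(f"{len(kept)} files kept after filtering")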

Technical Details

  • Audio Format: WAV files, various sample rates
  • Text Encoding: UTF-8
  • Language Code: aka (ISO 639-3)
  • Filtering: Minimum file size 1KB to remove corrupted/empty files

🌍 Impact and Applications

Breaking Language Barriers

This dataset represents a paradigm shift in how we approach low-resource African languages:

  • Scalability: Proves synthetic generation can create large datasets
  • Accessibility: Makes Twi ASR/TTS development feasible
  • Innovation: Demonstrates creative solutions for language preservation
  • Reproducibility: Methodology can be applied to other African languages

Use Cases

  • Educational Technology: Twi language learning applications
  • Accessibility: Voice interfaces for Twi speakers
  • Cultural Preservation: Digital archiving of Twi speech patterns
  • Research: Phonetic and linguistic studies of Twi
  • Commercial Applications: Voice assistants for Ghanaian markets

⚠️ Considerations for Using the Data

Social Impact

Positive Impact:

  • Advances language technology for underrepresented communities
  • Supports digital inclusion for Twi speakers
  • Contributes to cultural and linguistic preservation
  • Enables development of Twi-language AI applications

Limitations and Biases

  • Synthetic Nature: Generated data may not capture all nuances of natural speech
  • Dialect Coverage: May not represent all regional Twi dialects equally
  • Speaker Diversity: Limited to synthesis model characteristics
  • Domain Coverage: Vocabulary limited to training data scope
  • Audio Quality: Varies across the synthetic generation process

Ethical Considerations

  • Data created with respect for Twi language and culture
  • Intended to support, not replace, natural language preservation efforts
  • Users should complement with natural speech data when possible

πŸ“š Technical Specifications

Audio Specifications

  • Format: WAV
  • Channels: Mono
  • Sample Rate: 16kHz
  • Bit Depth: 16-bit
  • Duration: Variable per sample
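
A quick way to verify these specifications on a downloaded file, assuming the soundfile package and an illustrative local path:

import soundfile as sf

info = sf.info("sample_from_part005.wav")  # hypothetical local WAV file
print(info.samplerate)  # expected: 16000
print(info.channels)    # expected: 1 (mono)
print(info.subtype)     # expected: PCM_16 (16-bit)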

Quality Assurance

  • Minimum file size: 1KB (corrupted files filtered)
  • Text-audio alignment verified
  • UTF-8 encoding validation
  • Duplicate removal across parts
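
As an illustration of the cross-part duplicate check, a simple hash-based pass over the transcriptions could look like the sketch below (a generic technique shown for clarity, not necessarily the exact procedure used):

import hashlib
from datasets import load_dataset, concatenate_datasets

# Note: this downloads all five parts and is therefore a heavy operation
parts = [
    load_dataset(f"michsethowusu/twi-speech-text-parallel-synthetic-1m-part{i:03d}", split="train")
    for i in range(1, 6)
]
combined = concatenate_datasets(parts)

seen, duplicates = set(), 0
for text in combined["text"]:
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    if digest in seen:
        duplicates += 1
    seen.add(digest)
print(f"{duplicates} duplicate transcriptions found")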

πŸ“„ License and Usage

Licensing Information

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

You are free to:

  • Share: Copy and redistribute the material
  • Adapt: Remix, transform, and build upon the material
  • Commercial use: Use for commercial purposes

Under the following terms:

  • Attribution: Give appropriate credit and indicate if changes were made

πŸ™ Acknowledgments

  • Original Audio Production: The Ghana Institute of Linguistics, Literacy and Bible Translation in partnership with Davar Partners
  • Audio Processing: MMS-300M-1130 Forced Aligner
  • Synthetic Generation: Advanced text-to-speech synthesis pipeline
  • Community: Twi language speakers and researchers who inspire this work

πŸ“– Citation

If you use this dataset in your research, please cite:

@dataset{twi_speech_parallel_1m_2025,
  title={Twi Speech-Text Parallel Dataset: The Largest Speech Dataset for Twi Language},
  author={Owusu, Michael Seth},
  year={2025},
  publisher={Hugging Face},
  note={1 Million synthetic speech-text pairs across 5 parts},
  url={https://huggingface.co/datasets/michsethowusu/twi-speech-text-parallel-synthetic-1m-part005}
}

For the complete dataset series:

@dataset{twi_speech_complete_series_2025,
  title={Complete Twi Speech-Text Parallel Dataset Series (1M samples)},
  author={Owusu, Mich-Seth},
  year={2025},
  publisher={Hugging Face},
  note={Parts 001-005, ~200,000 samples each},
  url={https://huggingface.co/michsethowusu}
}

πŸ“ž Contact and Support

  • Repository Issues: Open an issue in this dataset repository
  • General Questions: Contact through Hugging Face profile
  • Collaboration: Open to partnerships for African language AI development

πŸ”— Related Resources

  • Part 1: michsethowusu/twi-speech-text-parallel-synthetic-1m-part001
  • Part 2: michsethowusu/twi-speech-text-parallel-synthetic-1m-part002
  • Part 3: michsethowusu/twi-speech-text-parallel-synthetic-1m-part003
  • Part 4: michsethowusu/twi-speech-text-parallel-synthetic-1m-part004

🌟 Star this dataset if it helps your research! πŸ”„ Share to support African language AI development!
