Twi Speech-Text Parallel Dataset - Part 2 of 5

πŸŽ‰ The Largest Speech Dataset for Twi Language

This dataset contains part 2 of the largest speech dataset for the Twi language, featuring 1 million speech-text pairs split across 5 parts (approximately 200,000 samples each). It represents a groundbreaking resource for Twi (Akan), a language spoken primarily in Ghana.

πŸš€ Breaking the Low-Resource Language Barrier

This publication demonstrates that African languages don't have to remain low-resource. Through creative synthetic data generation techniques, we've produced the largest collection of AI training data for speech-to-text models in Twi, proving that innovative approaches can build the datasets African languages need.

πŸ“Š Complete Dataset Series (1M Total Samples)

Part Repository Samples Status
Part 1 michsethowusu/twi-speech-text-parallel-synthetic-1m-part001 ~200,000 βœ… Available
Part 2 michsethowusu/twi-speech-text-parallel-synthetic-1m-part002 ~200,000 πŸ”₯ THIS PART
Part 3 michsethowusu/twi-speech-text-parallel-synthetic-1m-part003 ~200,000 βœ… Available
Part 4 michsethowusu/twi-speech-text-parallel-synthetic-1m-part004 ~200,000 βœ… Available
Part 5 michsethowusu/twi-speech-text-parallel-synthetic-1m-part005 ~200,000 βœ… Available

Dataset Summary

  • Language: Twi (Akan); language code: aka
  • Total Dataset Size: 1,000,000 speech-text pairs
  • This Part: ~200,000 audio files (filtered, >1KB)
  • Task: Speech Recognition, Text-to-Speech
  • Format: WAV audio files with corresponding text transcriptions
  • Generation Method: Synthetic data generation
  • Modalities: Audio + Text

🎯 Supported Tasks

  • Automatic Speech Recognition (ASR): Train models to convert Twi speech to text
  • Text-to-Speech (TTS): Use parallel data for TTS model development
  • Speech-to-Speech Translation: Cross-lingual speech applications
  • Keyword Spotting: Identify specific Twi words in audio
  • Phonetic Analysis: Study Twi pronunciation patterns
  • Language Model Training: Large-scale Twi language understanding
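
Because each record pairs audio with its transcription, a common first step for CTC-style ASR is to build a character vocabulary from the text field and extract input features from the audio. A minimal sketch, assuming this part has already been loaded as dataset (see the loading examples below) and that the transformers library is installed:

from transformers import Wav2Vec2FeatureExtractor

# Build a character-level vocabulary from a subset of transcriptions (for a CTC tokenizer)
chars = set()
for text in dataset.select(range(1000))["text"]:
    chars.update(text)
vocab = {c: i for i, c in enumerate(sorted(chars))}
print(f"Character vocabulary size: {len(vocab)}")

# Extract input features from one clip (the card lists 16 kHz audio under Audio Specifications)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
audio = dataset[0]["audio"]
inputs = feature_extractor(audio["array"], sampling_rate=16000, return_tensors="np")
print(inputs["input_values"].shape)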

πŸ“ Dataset Structure

Data Fields

  • audio: Audio file in WAV format (synthetically generated)
  • text: Corresponding text transcription in Twi

Data Splits

This part contains a single training split with approximately 200,000 filtered audio-text pairs (small/corrupted files removed).
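
For a quick look at the record structure without downloading the whole part, a single example can be pulled in streaming mode; a minimal sketch (the field layout follows the standard datasets Audio feature):

from datasets import load_dataset

# Stream the first record of this part instead of downloading all of it
stream = load_dataset(
    "michsethowusu/twi-speech-text-parallel-synthetic-1m-part002",
    split="train",
    streaming=True,
)
sample = next(iter(stream))
print(sample["text"])                    # Twi transcription
print(sample["audio"]["sampling_rate"])  # sample rate of the decoded audio
print(len(sample["audio"]["array"]))     # number of audio samples in the clip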

Loading the Complete Dataset

from datasets import load_dataset, concatenate_datasets

# Load all parts of the dataset
parts = []
for i in range(1, 6):
    part_name = f"michsethowusu/twi-speech-text-parallel-synthetic-1m-part{i:03d}"
    part = load_dataset(part_name, split="train")
    parts.append(part)

# Combine all parts into one dataset
complete_dataset = concatenate_datasets(parts)
print(f"Complete dataset size: {{len(complete_dataset):,}} samples")

Loading Just This Part

from datasets import load_dataset

# Load only this part
dataset = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part002", split="train")
print(f"Part 2 dataset size: {{len(dataset):,}} samples")

πŸ› οΈ Dataset Creation

Methodology

This dataset was created using synthetic data generation techniques, specifically designed to overcome the challenge of limited speech resources for African languages. The approach demonstrates how AI can be used to bootstrap language resources for underrepresented languages.

Data Processing Pipeline

  1. Text Generation: Synthetic Twi sentences generated
  2. Speech Synthesis: Text-to-speech conversion using advanced models
  3. Quality Filtering: Files smaller than 1KB removed to ensure quality
  4. Alignment Verification: Audio-text alignment validated
  5. Format Standardization: Consistent WAV format and text encoding
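
Step 3 is a straightforward size threshold. A minimal sketch of that kind of filter, assuming a local directory of generated WAV files (the directory name here is hypothetical, not part of this repository):

from pathlib import Path

MIN_SIZE_BYTES = 1024  # files under 1KB are treated as corrupted or empty

wav_dir = Path("generated_wavs")  # hypothetical local output directory
wav_files = list(wav_dir.glob("*.wav"))
kept = [p for p in wav_files if p.stat().st_size >= MIN_SIZE_BYTES]
print(f"Kept {len(kept)} of {len(wav_files)} files after the 1KB filter")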

Technical Details

  • Audio Format: WAV files, various sample rates
  • Text Encoding: UTF-8
  • Language Code: aka (ISO 639-3)
  • Filtering: Minimum file size 1KB to remove corrupted/empty files
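
Because sample rates may vary across files, it can help to cast the audio column to a single rate before training. A minimal sketch using the datasets Audio feature (16 kHz matches the rate listed under Audio Specifications below):

from datasets import Audio, load_dataset

dataset = load_dataset(
    "michsethowusu/twi-speech-text-parallel-synthetic-1m-part002", split="train"
)

# Resample every clip to 16 kHz on the fly whenever the audio column is accessed
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
print(dataset[0]["audio"]["sampling_rate"])  # 16000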

🌍 Impact and Applications

Breaking Language Barriers

This dataset represents a paradigm shift in how we approach low-resource African languages:

  • Scalability: Proves synthetic generation can create large datasets
  • Accessibility: Makes Twi ASR/TTS development feasible
  • Innovation: Demonstrates creative solutions for language preservation
  • Reproducibility: Methodology can be applied to other African languages

Use Cases

  • Educational Technology: Twi language learning applications
  • Accessibility: Voice interfaces for Twi speakers
  • Cultural Preservation: Digital archiving of Twi speech patterns
  • Research: Phonetic and linguistic studies of Twi
  • Commercial Applications: Voice assistants for Ghanaian markets

⚠️ Considerations for Using the Data

Social Impact

Positive Impact:

  • Advances language technology for underrepresented communities
  • Supports digital inclusion for Twi speakers
  • Contributes to cultural and linguistic preservation
  • Enables development of Twi-language AI applications

Limitations and Biases

  • Synthetic Nature: Generated data may not capture all nuances of natural speech
  • Dialect Coverage: May not represent all regional Twi dialects equally
  • Speaker Diversity: Limited to synthesis model characteristics
  • Domain Coverage: Vocabulary limited to training data scope
  • Audio Quality: Varies across synthetic generation process

Ethical Considerations

  • Data created with respect for Twi language and culture
  • Intended to support, not replace, natural language preservation efforts
  • Users should complement with natural speech data when possible

πŸ“š Technical Specifications

Audio Specifications

  • Format: WAV
  • Channels: Mono
  • Sample Rate: 16kHz
  • Bit Depth: 16-bit
  • Duration: Variable per sample
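
Since clip duration varies, per-clip durations can be computed from the decoded arrays. A short sketch over the first 100 examples, assuming dataset has been loaded as shown above:

# Duration in seconds = number of audio samples / sample rate
durations = []
for sample in dataset.select(range(100)):
    audio = sample["audio"]
    durations.append(len(audio["array"]) / audio["sampling_rate"])
print(f"Mean duration over 100 clips: {sum(durations) / len(durations):.2f} s")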

Quality Assurance

  • Minimum file size: 1KB (corrupted files filtered)
  • Text-audio alignment verified
  • UTF-8 encoding validation
  • Duplicate removal across parts
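
A quick way to sanity-check text-level uniqueness within this part (a sketch, not the original QA pipeline), again assuming dataset has been loaded as shown above:

# Compare total vs. unique transcriptions in this part
texts = dataset["text"]
print(f"Total transcriptions:  {len(texts):,}")
print(f"Unique transcriptions: {len(set(texts)):,}")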

πŸ“„ License and Usage

Licensing Information

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

You are free to:

  • Share: Copy and redistribute the material
  • Adapt: Remix, transform, and build upon the material
  • Commercial use: Use for commercial purposes

Under the following terms:

  • Attribution: Give appropriate credit and indicate if changes were made

πŸ™ Acknowledgments

  • Original Audio Production: The Ghana Institute of Linguistics, Literacy and Bible Translation in partnership with Davar Partners
  • Audio Processing: MMS-300M-1130 Forced Aligner
  • Synthetic Generation: Advanced text-to-speech synthesis pipeline
  • Community: Twi language speakers and researchers who inspire this work

πŸ“– Citation

If you use this dataset in your research, please cite:

@dataset{twi_speech_parallel_1m_2025,
  title={Twi Speech-Text Parallel Dataset: The Largest Speech Dataset for Twi Language},
  author={Owusu, Mich-Seth},
  year={2025},
  publisher={Hugging Face},
  note={1 Million synthetic speech-text pairs across 5 parts},
  url={https://huggingface.co/datasets/michsethowusu/twi-speech-text-parallel-synthetic-1m-part002}
}

For the complete dataset series:

@dataset{twi_speech_complete_series_2025,
  title={Complete Twi Speech-Text Parallel Dataset Series (1M samples)},
  author={Owusu, Michael Seth},
  year={2025},
  publisher={Hugging Face},
  note={Parts 001-005, 200k samples each},
  url={https://huggingface.co/michsethowusu}
}

πŸ“ž Contact and Support

  • Repository Issues: Open an issue in this dataset repository
  • General Questions: Contact through Hugging Face profile
  • Collaboration: Open to partnerships for African language AI development

πŸ”— Related Resources


🌟 Star this dataset if it helps your research! πŸ”„ Share to support African language AI development!
