Twi Speech-Text Parallel Dataset - Part 2 of 5
The Largest Speech Dataset for the Twi Language
This dataset contains part 2 of the largest speech dataset for the Twi language, featuring 1 million speech-to-text pairs split across 5 parts (approximately 200,000 samples each). This represents a groundbreaking resource for Twi (Akan), a language spoken primarily in Ghana.
Breaking the Low-Resource Language Barrier
This publication demonstrates that African languages don't have to remain low-resource. Through creative synthetic data generation techniques, we've produced the largest collection of AI training data for speech-to-text models in Twi, proving that innovative approaches can build the datasets African languages need.
Complete Dataset Series (1M Total Samples)
| Part | Repository | Samples | Status |
|---|---|---|---|
| Part 1 | michsethowusu/twi-speech-text-parallel-synthetic-1m-part001 | ~200,000 | Available |
| Part 2 | michsethowusu/twi-speech-text-parallel-synthetic-1m-part002 | ~200,000 | This part |
| Part 3 | michsethowusu/twi-speech-text-parallel-synthetic-1m-part003 | ~200,000 | Available |
| Part 4 | michsethowusu/twi-speech-text-parallel-synthetic-1m-part004 | ~200,000 | Available |
| Part 5 | michsethowusu/twi-speech-text-parallel-synthetic-1m-part005 | ~200,000 | Available |
Dataset Summary
- Language: Twi/Akan (`aka`)
- Total Dataset Size: 1,000,000 speech-text pairs
- This Part: ~200,000 audio files (filtered, >1KB)
- Task: Speech Recognition, Text-to-Speech
- Format: WAV audio files with corresponding text transcriptions
- Generation Method: Synthetic data generation
- Modalities: Audio + Text
Supported Tasks
- Automatic Speech Recognition (ASR): Train models to convert Twi speech to text (a preprocessing sketch follows this list)
- Text-to-Speech (TTS): Use parallel data for TTS model development
- Speech-to-Speech Translation: Cross-lingual speech applications
- Keyword Spotting: Identify specific Twi words in audio
- Phonetic Analysis: Study Twi pronunciation patterns
- Language Model Training: Large-scale Twi language understanding
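As a concrete starting point for the ASR task flagged above, the sketch below prepares examples for a Whisper-style model. It is a minimal illustration: the `openai/whisper-small` checkpoint, the 16 kHz cast, and the `input_features`/`labels` column names are assumptions made for the example, not requirements of this dataset.

```python
# Minimal ASR preprocessing sketch (illustrative; any Twi-capable ASR
# architecture could be substituted for Whisper).
from datasets import load_dataset, Audio
from transformers import WhisperProcessor

dataset = load_dataset(
    "michsethowusu/twi-speech-text-parallel-synthetic-1m-part002", split="train"
)
# Enforce a uniform 16 kHz sampling rate before feature extraction.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

def prepare(example):
    audio = example["audio"]
    # Log-Mel features for the encoder input.
    example["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Token ids of the Twi transcription as decoder labels.
    example["labels"] = processor.tokenizer(example["text"]).input_ids
    return example

dataset = dataset.map(prepare, remove_columns=dataset.column_names)
```

From here the prepared dataset can be fed to a standard sequence-to-sequence training loop.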
Dataset Structure
Data Fields
- `audio`: Audio file in WAV format (synthetically generated)
- `text`: Corresponding text transcription in Twi
Data Splits
This part contains a single training split with ~200,000 filtered audio-text pairs (small/corrupted files removed).
Loading the Complete Dataset
```python
from datasets import load_dataset, concatenate_datasets

# Load all parts of the dataset
parts = []
for i in range(1, 6):
    part_name = f"michsethowusu/twi-speech-text-parallel-synthetic-1m-part{i:03d}"
    part = load_dataset(part_name, split="train")
    parts.append(part)

# Combine all parts into one dataset
complete_dataset = concatenate_datasets(parts)
print(f"Complete dataset size: {len(complete_dataset):,} samples")
```
Loading Just This Part
```python
from datasets import load_dataset

# Load only this part
dataset = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part002", split="train")
print(f"Part 2 dataset size: {len(dataset):,} samples")
```
Dataset Creation
Methodology
This dataset was created using synthetic data generation techniques, specifically designed to overcome the challenge of limited speech resources for African languages. The approach demonstrates how AI can be used to bootstrap language resources for underrepresented languages.
Data Processing Pipeline
- Text Generation: Synthetic Twi sentences generated
- Speech Synthesis: Text-to-speech conversion using advanced models
- Quality Filtering: Files smaller than 1KB removed to ensure quality (see the sketch after this list)
- Alignment Verification: Audio-text alignment validated
- Format Standardization: Consistent WAV format and text encoding
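The 1KB quality filter can be reproduced with a plain file-size check. The sketch below is illustrative only (it is not the original pipeline code) and assumes a hypothetical local directory of generated WAV files called `twi_wavs`.

```python
from pathlib import Path

MIN_BYTES = 1024  # files at or below this size are treated as corrupted/empty

wav_dir = Path("twi_wavs")  # hypothetical local directory of generated WAVs
wavs = list(wav_dir.glob("*.wav"))
kept = [p for p in wavs if p.stat().st_size > MIN_BYTES]
print(f"kept {len(kept)} of {len(wavs)} files (dropped {len(wavs) - len(kept)})")
```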
Technical Details
- Audio Format: WAV files, various sample rates
- Text Encoding: UTF-8
- Language Code: `aka` (ISO 639-3)
- Filtering: Minimum file size 1KB to remove corrupted/empty files
Impact and Applications
Breaking Language Barriers
This dataset represents a paradigm shift in how we approach low-resource African languages:
- Scalability: Proves synthetic generation can create large datasets
- Accessibility: Makes Twi ASR/TTS development feasible
- Innovation: Demonstrates creative solutions for language preservation
- Reproducibility: Methodology can be applied to other African languages
Use Cases
- Educational Technology: Twi language learning applications
- Accessibility: Voice interfaces for Twi speakers
- Cultural Preservation: Digital archiving of Twi speech patterns
- Research: Phonetic and linguistic studies of Twi
- Commercial Applications: Voice assistants for Ghanaian markets
Considerations for Using the Data
Social Impact
Positive Impact:
- Advances language technology for underrepresented communities
- Supports digital inclusion for Twi speakers
- Contributes to cultural and linguistic preservation
- Enables development of Twi-language AI applications
Limitations and Biases
- Synthetic Nature: Generated data may not capture all nuances of natural speech
- Dialect Coverage: May not represent all regional Twi dialects equally
- Speaker Diversity: Limited to synthesis model characteristics
- Domain Coverage: Vocabulary limited to training data scope
- Audio Quality: Varies across the synthetic generation process
Ethical Considerations
- Data created with respect for Twi language and culture
- Intended to support, not replace, natural language preservation efforts
- Users should complement with natural speech data when possible
Technical Specifications
Audio Specifications
- Format: WAV
- Channels: Mono
- Sample Rate: 16kHz
- Bit Depth: 16-bit
- Duration: Variable per sample, roughly 1.2-13.2 s (a spot-check sketch follows this list)
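These figures can be spot-checked directly on decoded examples; a minimal sketch, assuming this part has already been loaded as `dataset` as shown earlier:

```python
# Spot-check the advertised specs (16 kHz, mono) on a few decoded samples.
for sample in dataset.select(range(5)):
    audio = sample["audio"]
    assert audio["sampling_rate"] == 16_000, "unexpected sample rate"
    assert audio["array"].ndim == 1, "expected mono audio"
    print(audio["path"], audio["array"].shape, audio["sampling_rate"])
```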
Quality Assurance
- Minimum file size: 1KB (corrupted files filtered)
- Text-audio alignment verified
- UTF-8 encoding validation
- Duplicate removal across parts (an overlap-check sketch follows this list)
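The cross-part duplicate removal can likewise be sanity-checked on the text side. The sketch below counts transcription overlap between two parts; it is an illustrative check, not the original quality-assurance code, and it downloads both parts in full.

```python
from datasets import load_dataset

# Load two parts and count transcriptions that occur in both; ideally zero.
part2 = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part002", split="train")
part3 = load_dataset("michsethowusu/twi-speech-text-parallel-synthetic-1m-part003", split="train")

texts_part2 = set(part2["text"])
overlap = sum(1 for t in part3["text"] if t in texts_part2)
print(f"{overlap} transcriptions appear in both parts")
```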
License and Usage
Licensing Information
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
You are free to:
- Share: Copy and redistribute the material
- Adapt: Remix, transform, and build upon the material
- Commercial use: Use for commercial purposes
Under the following terms:
- Attribution: Give appropriate credit and indicate if changes were made
Acknowledgments
- Original Audio Production: The Ghana Institute of Linguistics, Literacy and Bible Translation in partnership with Davar Partners
- Audio Processing: MMS-300M-1130 Forced Aligner
- Synthetic Generation: Advanced text-to-speech synthesis pipeline
- Community: Twi language speakers and researchers who inspire this work
Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{twi_speech_parallel_1m_2025,
  title={Twi Speech-Text Parallel Dataset: The Largest Speech Dataset for Twi Language},
  author={Owusu, Mich-Seth},
  year={2025},
  publisher={Hugging Face},
  note={1 Million synthetic speech-text pairs across 5 parts},
  url={https://huggingface.co/datasets/michsethowusu/twi-speech-text-parallel-synthetic-1m-part002}
}
```
For the complete dataset series:
```bibtex
@dataset{twi_speech_complete_series_2025,
  title={Complete Twi Speech-Text Parallel Dataset Series (1M samples)},
  author={Owusu, Michael Seth},
  year={2025},
  publisher={Hugging Face},
  note={Parts 001-005, 200k samples each},
  url={https://huggingface.co/michsethowusu}
}
```
Contact and Support
- Repository Issues: Open an issue in this dataset repository
- General Questions: Contact through Hugging Face profile
- Collaboration: Open to partnerships for African language AI development
Related Resources
Star this dataset if it helps your research! Share to support African language AI development!