
Multilingual TTS Dataset (LJSpeech Format)

A high-quality multilingual Text-to-Speech dataset combining English and Chinese speech data, optimized for TTS training and suitable for commercial use.

🎯 Quick Start

from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("ayousanz/multi-dataset-v2")

# Access data
for item in dataset["train"]:
    audio = item["audio"]          # dict with "array" and "sampling_rate" (22050Hz, mono)
    text = item["transcription"]   # Original text
    speaker = item["speaker_id"]   # Speaker identifier  
    language = item["language"]    # "en" or "zh"

πŸ“Š Dataset Statistics

| Metric | Value |
|---|---|
| Total Duration | 97.2 hours |
| Total Utterances | 95,568 |
| Languages | English, Chinese |
| Speakers | 421 unique speakers |
| Audio Format | 22050Hz, 16-bit, mono WAV |

Language Breakdown

| Language | Hours | Speakers | Utterances |
|---|---|---|---|
| English | 48.6 | 247 | 32,310 |
| Chinese | 48.6 | 174 | 63,258 |

Duration Distribution

| Range | Count | Percentage |
|---|---|---|
| 0-2s | 28,555 | 29.9% |
| 2-5s | 48,261 | 50.5% |
| 5-10s | 14,167 | 14.8% |
| 10-15s | 3,417 | 3.6% |
| 15-20s | 1,168 | 1.2% |
| 20s+ | 0 | 0.0% |
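
These bucket counts can be re-derived from the audio itself. Below is a minimal sketch over the train split, assuming the field layout described in the Data Format section; counts will differ from the table if the table aggregates all splits:

from collections import Counter
from datasets import load_dataset

dataset = load_dataset("ayousanz/multi-dataset-v2")

def bucket(seconds):
    # Map a clip duration to one of the ranges in the table above
    for upper, label in [(2, "0-2s"), (5, "2-5s"), (10, "5-10s"), (15, "10-15s"), (20, "15-20s")]:
        if seconds < upper:
            return label
    return "20s+"

counts = Counter()
for item in dataset["train"]:
    audio = item["audio"]
    counts[bucket(len(audio["array"]) / audio["sampling_rate"])] += 1
print(counts)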

πŸ“ Repository Structure

β”œβ”€β”€ audio/                          # Audio files (ZIP compressed)
β”‚   β”œβ”€β”€ train_english.zip          # English training audio
β”‚   β”œβ”€β”€ train_chinese.zip          # Chinese training audio  
β”‚   β”œβ”€β”€ validation_english.zip     # English validation audio
β”‚   β”œβ”€β”€ validation_chinese.zip     # Chinese validation audio
β”‚   β”œβ”€β”€ test_english.zip           # English test audio
β”‚   └── test_chinese.zip           # Chinese test audio
β”œβ”€β”€ metadata/                       # Metadata files
β”‚   β”œβ”€β”€ train.csv                  # Training metadata (all languages)
β”‚   β”œβ”€β”€ validation.csv             # Validation metadata (all languages)
β”‚   └── test.csv                   # Test metadata (all languages)
β”œβ”€β”€ dataset_info.json              # Dataset statistics and info
β”œβ”€β”€ multilingual_tts_ljspeech.py   # Dataset loader script
└── README.md                      # This file
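
The metadata CSVs can also be read directly, without the datasets loader. A minimal sketch with pandas; the exact column names and delimiter are assumptions based on the fields listed in the Data Format section and the LJSpeech naming:

import pandas as pd

# Assumed columns (not documented explicitly): audio_id, transcription,
# normalized_text, speaker_id, language.
# If the CSVs follow LJSpeech conventions they may be pipe-separated;
# pass sep="|" in that case.
train_meta = pd.read_csv("multilingual-tts/metadata/train.csv")
print(train_meta.head())
print(train_meta["language"].value_counts())   # "language" column is assumed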

πŸ’Ύ Download Instructions

Option 1: Using Hugging Face CLI (Recommended)

# Install Hugging Face CLI
pip install huggingface-hub

# Download entire dataset
huggingface-cli download ayousanz/multi-dataset-v2 --repo-type dataset --local-dir ./multilingual-tts

# Download specific files only
huggingface-cli download ayousanz/multi-dataset-v2 audio/train_english.zip metadata/train.csv --repo-type dataset --local-dir ./multilingual-tts

Option 2: Using Python

from huggingface_hub import snapshot_download

# Download entire dataset
snapshot_download(
    repo_id="ayousanz/multi-dataset-v2",
    repo_type="dataset", 
    local_dir="./multilingual-tts"
)
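
The CLI's selective download has a Python equivalent: snapshot_download accepts allow_patterns to restrict which files are fetched. For example, to grab only the English training audio and the training metadata:

from huggingface_hub import snapshot_download

# Download only the English training audio and the training metadata
snapshot_download(
    repo_id="ayousanz/multi-dataset-v2",
    repo_type="dataset",
    local_dir="./multilingual-tts",
    allow_patterns=["audio/train_english.zip", "metadata/train.csv"],
)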

Extracting Audio Files

After downloading, extract the ZIP files:

cd multilingual-tts
for zip_file in audio/*.zip; do
    unzip "$zip_file" -d audio_extracted/
done
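
If unzip is not available (for example on Windows), the same extraction can be done with Python's standard library:

import glob
import zipfile

# Run from the directory that contains the downloaded multilingual-tts folder
for zip_path in glob.glob("multilingual-tts/audio/*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall("multilingual-tts/audio_extracted/")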

πŸš€ Usage Examples

Basic Usage

from datasets import load_dataset

dataset = load_dataset("ayousanz/multi-dataset-v2")

# Filter by language
english_data = dataset["train"].filter(lambda x: x["language"] == "en")
chinese_data = dataset["train"].filter(lambda x: x["language"] == "zh")

# Filter by speaker
speaker_data = dataset["train"].filter(lambda x: x["speaker_id"] == "en_1234")
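
Because speaker IDs carry a language prefix (en_* or zh_*), per-language speaker lists can be derived directly from the same dataset object:

# List unique speakers and split them by language prefix
speakers = dataset["train"].unique("speaker_id")
english_speakers = [s for s in speakers if s.startswith("en_")]
chinese_speakers = [s for s in speakers if s.startswith("zh_")]
print(len(english_speakers), "English speakers /", len(chinese_speakers), "Chinese speakers")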

For TTS Training

# Example with PyTorch DataLoader
from torch.utils.data import DataLoader

def collate_fn(batch):
    audios = [item["audio"]["array"] for item in batch]
    texts = [item["transcription"] for item in batch]
    speakers = [item["speaker_id"] for item in batch]
    return audios, texts, speakers

dataloader = DataLoader(
    dataset["train"], 
    batch_size=32, 
    collate_fn=collate_fn,
    shuffle=True
)
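
Clips vary in length (see the duration table above), so most models will want padded tensors rather than Python lists. A sketch of one way to extend the collate function; this is not part of the dataset itself:

import torch
from torch.nn.utils.rnn import pad_sequence

def padded_collate_fn(batch):
    # Convert each clip to a float tensor and zero-pad to the longest clip in the batch
    audios = [torch.as_tensor(item["audio"]["array"], dtype=torch.float32) for item in batch]
    lengths = torch.tensor([a.shape[0] for a in audios])
    padded = pad_sequence(audios, batch_first=True)   # shape: (batch, max_samples)
    texts = [item["transcription"] for item in batch]
    return padded, lengths, texts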

πŸ“‹ Data Format

Each sample contains:

  • audio_id: Unique identifier for the audio file
  • audio: Audio data (22050Hz, 16-bit, mono)
  • transcription: Original text transcription
  • normalized_text: Normalized text for TTS training
  • speaker_id: Speaker identifier with language prefix (en_* or zh_*)
  • language: Language code (en for English, zh for Chinese)
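
Assuming the fields are exposed under the names above, a quick sanity check on a single sample looks like this:

sample = dataset["train"][0]
print(sample["audio_id"], sample["language"], sample["speaker_id"])
print(sample["transcription"])
print(sample["normalized_text"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))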

πŸ“œ License

This dataset combines data from multiple sources:

  • English data (LibriTTS-R): CC BY 4.0 - requires attribution
  • Chinese data (AISHELL-3): Apache 2.0

Attribution Requirements

When using this dataset, please cite:

@dataset{multilingual_tts_ljspeech,
  title={Multilingual TTS Dataset in LJSpeech Format},
  year={2024},
  note={English: LibriTTS-R (CC BY 4.0), Chinese: AISHELL-3 (Apache 2.0)}
}

πŸ”— Source Datasets

⚑ Performance Notes

  • Audio files are stored in ZIP format for faster download
  • Use datasets library's built-in caching for optimal performance
  • Consider using streaming=True for large-scale training to save memory (see the sketch below)
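
A minimal streaming sketch, assuming the dataset loader supports streaming; streaming=True returns an iterable dataset that fetches samples on the fly instead of materializing everything locally:

from datasets import load_dataset

streamed = load_dataset("ayousanz/multi-dataset-v2", streaming=True)
for i, item in enumerate(streamed["train"]):
    print(item["language"], item["transcription"])
    if i == 4:   # peek at the first five samples only
        break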

🀝 Contributing

Found an issue? Please report it on the repository issues page.
