# TR-Podcast-dataset
This is a merged Turkish speech dataset containing 115,031 audio segments from 281 source datasets.
## Dataset Information

- Total Segments: 115,031
- Speakers: 825
- Languages: tr
- Emotions: happy, neutral, angry, sad
- Original Datasets: 281
## Dataset Structure
Each example contains:

- `audio`: Audio file (WAV format, original sampling rate preserved)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `language`: Language code (`tr` for this dataset)
- `emotion`: Detected emotion (neutral, happy, sad, etc.)
- `original_dataset`: Name of the source dataset this segment came from
- `original_filename`: Original filename in the source dataset
- `start_time`: Start time of the segment in seconds
- `end_time`: End time of the segment in seconds
- `duration`: Duration of the segment in seconds
## Usage

### Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Codyfederer/tr-podcast-dataset")

# Access the training split
train_data = dataset["train"]

# Example: inspect the first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")
print(f"Original Dataset: {sample['original_dataset']}")
print(f"Duration: {sample['duration']}s")

# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
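The `audio` column decodes to a NumPy array plus its sampling rate. As a minimal sketch of the playback comment above (assuming the `soundfile` package is installed), you can write a decoded sample back to a WAV file or play it inline in a notebook:

```python
import soundfile as sf

# Decoded audio: a NumPy array and the original sampling rate
audio = sample["audio"]
sf.write("sample.wav", audio["array"], audio["sampling_rate"])

# In a Jupyter notebook, play it inline instead:
# from IPython.display import Audio
# Audio(audio["array"], rate=audio["sampling_rate"])
```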
### Alternative: Load from JSONL
```python
from datasets import Dataset, Audio, Features, Value
import json

# Load the JSONL file
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

features = Features({
    "audio": Audio(sampling_rate=None),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "language": Value("string"),
    "emotion": Value("string"),
    "original_dataset": Value("string"),
    "original_filename": Value("string"),
    "start_time": Value("float32"),
    "end_time": Value("float32"),
    "duration": Value("float32"),
})

dataset = Dataset.from_list(rows, features=features)
```
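The `audio` values in `data.jsonl` are relative paths, so the snippet above should be run from the dataset root (or the paths prefixed with it first). Alternatively, `datasets` can parse the JSONL directly and cast the path column to `Audio` afterwards; a sketch under the same layout assumption:

```python
from datasets import Audio, load_dataset

# Parse data.jsonl directly; "audio" starts out as a column of path strings
dataset = load_dataset("json", data_files="data.jsonl", split="train")

# Cast the path strings to an Audio feature so they decode on access
dataset = dataset.cast_column("audio", Audio())
```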
## File Structure
The dataset includes:
- `data.jsonl` - Main dataset file with all columns (JSON Lines)
- `*.wav` - Audio files under `audio_XXX/` subdirectories
- `load_dataset.txt` - Python script for loading the dataset (rename to `.py` to use)
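After downloading, the layout can be sanity-checked with the standard library alone; a small sketch:

```python
from pathlib import Path

root = Path(".")  # directory containing data.jsonl and the audio_XXX/ subdirectories

# Count WAV files per audio_XXX/ subdirectory
for subdir in sorted(root.glob("audio_*")):
    n_wav = sum(1 for _ in subdir.glob("*.wav"))
    print(f"{subdir.name}: {n_wav} wav files")
```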
JSONL keys:

- `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `language`: Language code
- `emotion`: Detected emotion
- `original_dataset`: Name of the source dataset
- `original_filename`: Original filename in the source dataset
- `start_time`: Start time of the segment in seconds
- `end_time`: End time of the segment in seconds
- `duration`: Duration of the segment in seconds
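For concreteness, one line of `data.jsonl` looks roughly like this (all values below are illustrative, not taken from the actual file):

```json
{"audio": "audio_000/segment_000000_speaker_0.wav", "text": "...", "speaker_id": "speaker_0", "language": "tr", "emotion": "neutral", "original_dataset": "...", "original_filename": "...", "start_time": 0.0, "end_time": 3.2, "duration": 3.2}
```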
## Speaker ID Mapping
Speaker IDs have been made unique across all merged datasets to avoid conflicts. For example:

- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
Original dataset information is preserved in the metadata for reference.
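Since `original_dataset` travels with each segment, the merged IDs can still be grouped by source; a sketch, assuming `train_data` from the loading example above:

```python
from collections import defaultdict

# Collect the merged speaker IDs contributed by each source dataset
speakers_by_source = defaultdict(set)
for ex in train_data.select_columns(["speaker_id", "original_dataset"]):
    speakers_by_source[ex["original_dataset"]].add(ex["speaker_id"])

for source, speakers in sorted(speakers_by_source.items()):
    print(f"{source}: {len(speakers)} speakers")
```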
## Data Quality
This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
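These filters are applied at build time; if your use case needs stricter criteria, the metadata columns make further filtering straightforward. A sketch with arbitrary thresholds:

```python
# Keep only segments with a non-empty transcription and a moderate duration
filtered = train_data.filter(
    lambda ex: 1.0 <= ex["duration"] <= 20.0 and len(ex["text"].strip()) > 0
)
print(f"Kept {len(filtered)} of {len(train_data)} segments")
```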
## License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
## Citation
```bibtex
@dataset{vyvo_merged_dataset,
  title={TR-Podcast-dataset},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/tr-podcast-dataset}
}
```
This dataset was created using the Vyvo Dataset Builder tool.