# Phonemized-VCTK (speech + features)

Phonemized-VCTK is a lightweight repack of the VCTK corpus that bundles, per utterance:

- the raw audio (`wav/`)
- the plain transcript (`txt/`)
- the IPA phoneme string (`phonemized/`)
- frame-level pitch-aligned segments (`segments/`)
- sentence-level context embeddings (`context_embeddings/`)
- speaker-level embeddings (`speaker_embeddings/`)

The goal is to provide a turn-key dataset for forced alignment, prosody modelling, TTS, and speaker-adaptation experiments without having to regenerate these side products every time.
## Folder layout

| Folder | Contents | Example files |
|---|---|---|
| `wav/<spk>/` | 48 kHz 16-bit mono `.wav` files | `p225_001.wav`, … |
| `txt/<spk>/` | original plain-text transcript | `p225_001.txt`, … |
| `phonemized/<spk>/` | whitespace-separated IPA symbols, `h#` = word boundary | `p225_001.txt`, … |
| `segments/<spk>/` | JSON with per-phoneme timing & mean pitch | `p225_001.json`, … |
| `context_embeddings/<spk>/` | NumPy float32 `.npy`, sentence embedding of the utterance | `p225_001.npy`, … |
| `speaker_embeddings/` | NumPy float32 `.npy`, one vector per speaker, generated with the NVIDIA TitaNet-Large model | `p225.npy`, … |
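Given the layout above, enumerating the utterances for one speaker is a simple glob. A minimal sketch (the `utterance_ids` helper and speaker `p225` are illustrative, not part of the dataset):

```python
from pathlib import Path

# Sketch: list the utterance IDs available for one speaker by globbing wav/.
# Assumes the folder layout above; the other folders reuse the same IDs.
def utterance_ids(root, speaker):
    root = Path(root)
    return sorted(p.stem for p in (root / "wav" / speaker).glob("*.wav"))
```

The same IDs can then be used to look up the matching transcript, phonemes, segments, and embedding files.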
## Example `segments` entry

```json
{
  "0": ["h#", {"start_sec": 0.0,  "end_sec": 0.10, "duration_sec": 0.10, "mean_pitch": 0.0}],
  "1": ["p",  {"start_sec": 0.10, "end_sec": 0.18, "duration_sec": 0.08, "mean_pitch": 0.0}],
  "2": ["l",  {"start_sec": 0.18, "end_sec": 1.32, "duration_sec": 1.14, "mean_pitch": 1377.16}]
}
```
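Because the JSON keys are string indices, they need a numeric sort before use. A minimal sketch of turning one segments file into an ordered list of tuples (the `load_segments` helper is a hypothetical name, not shipped with the dataset):

```python
import json
from pathlib import Path

# Sketch: convert a segments JSON (string-indexed dict, as in the example
# above) into an ordered list of (phoneme, start_sec, end_sec, mean_pitch).
def load_segments(path):
    raw = json.loads(Path(path).read_text())
    out = []
    for key in sorted(raw, key=int):  # keys are "0", "1", ... — sort numerically
        phoneme, info = raw[key]
        out.append((phoneme, info["start_sec"], info["end_sec"], info["mean_pitch"]))
    return out
```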
## Quick start

```python
from pathlib import Path
import json

import numpy as np
import soundfile as sf

root = Path("Phonemized-VCTK")

wav, sr = sf.read(root / "wav/p225/p225_001.wav")
text = (root / "txt/p225/p225_001.txt").read_text().strip()
ipa  = (root / "phonemized/p225/p225_001.txt").read_text().strip()
segs = json.loads((root / "segments/p225/p225_001.json").read_text())
ctx  = np.load(root / "context_embeddings/p225/p225_001.npy")

print(text)
print(ipa.split())  # IPA tokens
print(ctx.shape)    # (384,)
```
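The per-speaker embeddings can be compared directly, e.g. for speaker-similarity checks or nearest-speaker retrieval. A minimal sketch using cosine similarity (the helper name is illustrative; which speakers you compare is up to you):

```python
import numpy as np

# Sketch: compare two speaker embeddings (e.g. loaded from
# speaker_embeddings/p225.npy and another speaker's file) by cosine similarity.
def cosine_similarity(a, b):
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```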
## Known limitations

- The phone set is plain IPA; no stress or intonation markers are included.
- English only (≈109 speakers, various accents).
- `mean_pitch` is 0 on unvoiced phones; interpolate if needed.
- The embedding models were chosen for convenience; swap them out as you like.
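The interpolation over unvoiced phones can be sketched as follows, assuming a 1-D pitch track in Hz where 0.0 marks unvoiced frames (the helper name is illustrative):

```python
import numpy as np

# Sketch: linearly interpolate a pitch track over unvoiced (zero) frames.
# Edge frames are clamped to the nearest voiced value by np.interp.
def interpolate_pitch(pitch):
    pitch = np.asarray(pitch, dtype=np.float32)
    voiced = pitch > 0
    if not voiced.any():
        return pitch  # no voiced frames to interpolate from
    idx = np.arange(len(pitch))
    return np.interp(idx, idx[voiced], pitch[voiced]).astype(np.float32)
```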
## Citation

Please cite both VCTK and this derivative if you use the corpus:

```bibtex
@misc{yours2025phonvctk,
  title        = {Phonemized-VCTK: An enriched version of VCTK with IPA, alignments and embeddings},
  author       = {Your Name},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/your-handle/phonemized-vctk}}
}

@inproceedings{yamagishi2019cstr,
  title     = {The CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
  author    = {Yamagishi, Junichi and others},
  booktitle = {Proc. LREC},
  year      = {2019}
}
```