potsawee committed bf002d0 (verified) · 1 Parent(s): dc38ff4

Update README.md

Files changed (1): README.md +50 -0
@@ -49,3 +49,53 @@ configs:
  - split: other
    path: data/other-*
---

# Thai-TTS-Intelligibility-Eval

**Thai-TTS-Intelligibility-Eval** is a curated evaluation set for measuring the **intelligibility** of Thai Text-to-Speech (TTS) systems.
All 290 items are short, challenging phrases that commonly trip up grapheme-to-phoneme converters, prosody models, or pronunciation lexicons.
It is **not** intended for training; use it purely for benchmarking and regression tests.

## Dataset Summary

| Split     | #Utterances | Description                                                  |
|-----------|-------------|--------------------------------------------------------------|
| `easy`    | 50          | Everyday phrases that most TTS systems should read correctly |
| `medium`  | 50          | More challenging than `easy`                                 |
| `hard`    | 50          | Hard phrases, e.g., mixed Thai and English, and unique names |
| `scbx`    | 50          | SCBX-specific terminology, products, and names               |
| `name`    | 50          | Synthetic Thai personal names (mixed Thai & foreign roots)   |
| `other`   | 40          | Miscellaneous edge cases not covered above                   |
| **Total** | **290**     |                                                              |

Each record contains:
- **`id`** (`string`): unique identifier
- **`text`** (`string`): the sentence/phrase to synthesize
- **`category`** (`string`): one of `easy`, `medium`, `hard`, `scbx`, `name`, `other`

## Loading With 🤗 `datasets`

```python
from datasets import load_dataset

# The dataset id below is spelled as in the original ("intelligiblity");
# update it if the repository is renamed.
ds = load_dataset("scb10x/thai-tts-intelligiblity-eval")

ds_scbx = ds["scbx"]
print(ds_scbx[0])
# {'id': '53ef39464d9c1e6f', 'text': '...', 'category': 'scbx'}
```

## Intended Use

1. **Objective evaluation**
   - Compute WER/CER between automatic transcripts of your TTS output and the gold reference text.
   - Code: https://github.com/scb-10x/thai-tts-eval/tree/main/intelligibility
2. **Subjective evaluation**
   - Conduct human listening tests (MOS, ABX, etc.); the dataset is small enough for quick rounds.
   - Future work
3. **Regression testing**
   - Track intelligibility across model versions with a fixed set of hard sentences.
   - Future work
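
For the objective track, CER is often the more informative metric for Thai, since Thai text is written without spaces between words, which makes word-level alignment fragile. A minimal character-error-rate sketch (a self-contained illustration, not the code from the linked repository) might look like:

```python
def edit_distance(ref: str, hyp: str) -> int:
    # Classic dynamic-programming Levenshtein distance over characters.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: edits needed to turn the hypothesis into the
    # reference, normalized by reference length.
    if not reference:
        raise ValueError("empty reference")
    return edit_distance(reference, hypothesis) / len(reference)

print(cer("สวัสดีครับ", "สวัสดีครับ"))  # 0.0 for a perfect transcript
```

In practice you would normalize both strings first (e.g., strip whitespace and punctuation) so that cosmetic ASR differences do not inflate the score.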