---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: category
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: all
    num_bytes: 91813
    num_examples: 290
  - name: easy
    num_bytes: 9124
    num_examples: 50
  - name: medium
    num_bytes: 20234
    num_examples: 50
  - name: hard
    num_bytes: 27971
    num_examples: 50
  - name: scbx
    num_bytes: 17314
    num_examples: 50
  - name: name
    num_bytes: 10118
    num_examples: 50
  - name: other
    num_bytes: 7052
    num_examples: 40
  download_size: 103240
  dataset_size: 183626
configs:
- config_name: default
  data_files:
  - split: all
    path: data/all-*
  - split: easy
    path: data/easy-*
  - split: medium
    path: data/medium-*
  - split: hard
    path: data/hard-*
  - split: scbx
    path: data/scbx-*
  - split: name
    path: data/name-*
  - split: other
    path: data/other-*
---

# Thai-TTS-Intelligibility-Eval

**Thai-TTS-Intelligibility-Eval** is a curated evaluation set for measuring the **intelligibility** of Thai Text-to-Speech (TTS) systems.  
All 290 items are short, challenging phrases that commonly trip up grapheme-to-phoneme converters, prosody models, or pronunciation lexicons.  
It is **not** intended for training; use it purely for benchmarking and regression tests.

## Dataset Summary

| Split   | #Utterances | Description                                                 |
|---------|-------------|-------------------------------------------------------------|
| `easy`  | 50          | Everyday phrases that most TTS systems should read correctly|
| `medium`| 50          | More challenging phrases than `easy`                        |
| `hard`  | 50          | Hard phrases, e.g., mixed Thai and English and unique names |
| `scbx`  | 50          | SCBX-specific terminology, products, and names              |
| `name`  | 50          | Synthetic Thai personal names (mixed Thai & foreign roots)  |
| `other` | 40          | Miscellaneous edge-cases not covered above                  |
| **Total** | **290**   |                                                             |

Each record contains:
- **`id`** (`string`): unique identifier
- **`text`** (`string`): the sentence/phrase to synthesize
- **`category`** (`string`): one of *easy, medium, hard, scbx, name, other*

## Loading With 🤗 `datasets`

```python
from datasets import load_dataset

ds = load_dataset(
    "scb10x/thai-tts-intelligiblity-eval",
)
ds_scbx = ds["scbx"]
print(ds_scbx[0])
# {'id': '53ef39464d9c1e6f', 'text': '...', 'category': 'scbx'}
```

## Intended Use

1. **Objective evaluation**  
   - *Compute WER/CER* between automatic transcripts of your TTS output and the gold reference text.
   - Code: https://github.com/scb-10x/thai-tts-eval/tree/main/intelligibility
2. **Subjective evaluation**  
   - Conduct human listening tests (MOS, ABX, etc.)—the dataset is small enough for quick rounds.
   - Future work
3. **Regression testing**  
   - Track intelligibility across model versions with a fixed set of hard sentences.
   - Future work
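The official scripts live in the linked repository; as a self-contained illustration only (not the repository's exact implementation), CER can be computed as the Levenshtein edit distance between an ASR transcript of the synthesized audio and the reference `text`, normalized by the reference length:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein distance / reference length."""
    m, n = len(reference), len(hypothesis)
    if m == 0:
        return 0.0
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            substitution_cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(
                prev[j] + 1,                      # deletion
                curr[j - 1] + 1,                  # insertion
                prev[j - 1] + substitution_cost,  # substitution / match
            )
        prev = curr
    return prev[n] / m

# Example: one character substituted out of five -> CER 0.20
print(cer("hello", "hallo"))  # 0.2
```

In practice you would transcribe each synthesized utterance with an ASR system, apply consistent text normalization to both sides, and average CER per split to reproduce a table like the one below.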


## CER Evaluation Results
- CER (Character Error Rate, %): lower is better

| System                            | All  | Easy  | Medium | Hard | SCBX | Name  | Other | 
|-----------------------------------|------|-------|--------|------|------|-------|-------|
| Azure Premwadee                   | 9.39 | 2.87  | 2.92   | 13.80| 10.44| 13.07 | 7.57  |
| `facebook-mms-tts-tha`            | 28.47| 10.31 | 12.40  | 38.83| 36.04| 26.33 | 30.83 |
| `VIZINTZOR-MMS-TTS-THAI-FEMALEV1` | 27.42| 13.30 | 13.13  | 30.92| 34.76| 25.53 | 54.60 |