Validation by me for Turkish
I manually validated part of the Turkish data; unfortunately, mislabeled data makes up a large portion of it.
Hi, thanks for your feedback!
Can you share more info on what "mislabeled" means here? Does it mean the languages are wrong, or that the transcription quality is bad?
The transcription quality is bad.
I'll provide some useful feedback on this topic because I think this dataset has a lot of potential, but needs a lot of work in the filtering and alignment stages.
Consider this Korean sample:
{
'id': '10682',
'utt_id': 'Y6ktQBUClBI-00002-00000754-00000804',
'audio': {
'path': None,
'array': array([1.22070312e-03, 9.76562500e-04, 9.76562500e-04, ..., 3.05175781e-05, 3.05175781e-05, 6.10351562e-05]),
'sampling_rate': 16000
},
'text': '๊ณต๋ถ ์ํ๋๋ฒ ์์ด ์ํ ๊ณผ์ธ ๊ตฌํ๊ธฐ ๊ณต๋ถ ์์ด๊ณผ์ธ ์ํ๊ณผ์ธ ์จ๋ผ์ธ ๊ณผ์ธ ์จ๋ผ์ธ ์์ด ์์ด ํ์ต ๊ณ ๋ฑ ์ํ ํ๋์ค ์คํ์ ์์ด ๊ณผ์ธ ์ด๋ฑ ์์ด ์ด๋ฑ ์ํ ๋
ผ์ ๊ณผ ์์ด ๋ฌธ๋ฒ ํ๊ตญ์ฌ ๊ณต๋ถ ํ๋๋ ์ด๋ฑ ์ํ ๊ณผ์ธ ์์ด ๋ฌธ์ฅ ๋ง๋ค๊ธฐ ์ด๋ฌธ์ ์คํ ์์ด ์์ด ์ ๋ฌธ ๊ณผ์ธ ์์ด ์ํ ๊ณผ์ธ ์ํ ๋ฌธ์ ํ์ด ๊ตญ์ด ๋ฌธ์ ์คํ์ ๊ณต๋ถ ์์ธ ๊ณผ์ธ ์ด๋ฑํ์ ์ํ ์ด๋ฑ ๊ตญ์ด ๊ณต๋ถ ์ด๋ฑ ์ํ ๊ณต๋ถ๋ฒ ์ด๋ฑ ์ํ ๊ณต๋ถ ์คํฐ๋ ์ํ ๊ณผ์ธ ์๋ฃ ์์ ๊ฐ์ ์์ด ๋ฆฌ์ค๋ ํ๋ จ ํ์ต๊ธฐ ebs ์๋จ์ด ์ํ ๋ต์ง ์จ๋ผ์ธ ์์ด ๊ณผ์ธ ๋์
ํ์ ์ํ ๋ฌธ์ ์ง ๋ต์ง ์ด๋ฑ ๊ตญ์ด ebs ๊ณ ๋ฑ ์ค๋ฑ ์ธ๊ฐ ๊ณ ๋ฑ ์ํ ์ธ๊ฐ ๋ถ๋น ์ํ ๊ณผ์ธ ๋ํ ์ํ ๊ณผ์ธ ์์ฐ ์์ด ๊ณผ์ธ ์ ์ฃผ ์์ด ๊ณผ์ธ'
}
You don't need to speak Korean to figure out that something is wrong here; a little inspection suffices:
print(f"{record['audio']['array'].shape=}")
# Output:
# record['audio']['array'].shape=(7999,)
A 246-character transcript paired with 0.5 seconds of audio (7999 samples at 16 kHz): something went very wrong in the processing of this sample.
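Samples like this are trivially catchable with a length-ratio filter: transcript characters per second of audio. Here is a minimal sketch; the 30 chars/sec ceiling is my own assumed threshold, not a value from the dataset, and would need tuning per language:

```python
def chars_per_second(text: str, num_samples: int, sampling_rate: int = 16000) -> float:
    """Transcript characters per second of audio."""
    duration = num_samples / sampling_rate
    return len(text) / duration

def is_suspicious(text: str, num_samples: int,
                  sampling_rate: int = 16000, max_cps: float = 30.0) -> bool:
    # ~30 chars/sec is a generous upper bound for real speech;
    # this threshold is an assumption, tune it per language.
    return chars_per_second(text, num_samples, sampling_rate) > max_cps

# The sample above: 246 characters over 7999 samples (~0.5 s) -> ~492 chars/sec.
print(is_suspicious("x" * 246, 7999))  # True
```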
So let's investigate further.
Whisper reportedly achieves a very low character error rate on Korean. So let's do this:
- Transcribe FLEURS Korean test set with Whisper Turbo
- Transcribe a random subset of YODAS Korean (chunk 000, "manual" transcripts) with Whisper Turbo
- Check distribution of error rates
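For concreteness, CER here is character-level Levenshtein distance divided by reference length. In practice a library like jiwer would compute it; a self-contained version looks like this:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over reference length."""
    ref, hyp = list(reference), list(hypothesis)
    # Standard dynamic-programming Levenshtein distance, one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,              # deletion
                curr[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),   # substitution
            ))
        prev = curr
    return prev[-1] / max(len(ref), 1)

print(cer("abcd", "abxd"))  # 0.25
```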
We should expect Whisper to perform reasonably well on the bulk of the data, albeit not as well as on a test set as easy as FLEURS.
(This is a random 20k-sample subset drawn from the first 200k samples of ko 000, with CER clipped at 1.0.)
I don't expect CER on "wild" data to be as low as on FLEURS, but my spot-checking agrees with Whisper's numbers: the data quality is all over the place, and the bad samples are easy to catch.
While this isn't a perfect analysis (I don't know a single word of Korean), it seems clear that something went wrong in the processing of this data. The sample shown above would have been filtered out by the most basic cleaning algorithm, and the distribution of Whisper's CER is telling.
Another possibility is a bug in the audio-cut extraction: like the example above, many files have exactly 0.5 seconds of audio paired with very long transcripts.
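If the last two fields of utt_id encode start/end offsets in 10 ms units, which the example suggests (00000804 - 00000754 = 50, i.e. 0.5 s, matching the 7999-sample array), then suspect cuts can be listed without decoding any audio. A sketch under that assumption; the field layout and the 50-character transcript threshold are my guesses, not documented facts about the dataset:

```python
def utt_duration_seconds(utt_id: str) -> float:
    """Parse duration from a YODAS-style utt_id, assuming the last two
    dash-separated fields are start/end offsets in units of 10 ms."""
    parts = utt_id.rsplit("-", 3)
    start, end = int(parts[-2]), int(parts[-1])
    return (end - start) / 100.0

def looks_truncated(utt_id: str, text: str) -> bool:
    # Flag cuts that are exactly 0.5 s yet carry a long transcript.
    return utt_duration_seconds(utt_id) == 0.5 and len(text) > 50

print(utt_duration_seconds("Y6ktQBUClBI-00002-00000754-00000804"))  # 0.5
```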