About data quality
Hi~ Thanks for your wonderful work! But I'm confused about this dataset, as the text annotations here seem to be wrong?
@Jamhome First of all, thank you for your interest in Korean speech data.
KsponSpeech was transcribed according to a set of conventions called the ETRI transcription rules, so it requires additional preprocessing that typical ASR data does not.
The main features of the ETRI transcription rules are as follows:
- If a word can be transcribed in more than one way in Korean (e.g., EBS vs. 이비에스), the spelling and the pronunciation are transcribed in parallel as (spelling transcription)/(pronunciation transcription). This is called dual transcription. Only ( ) and / are used, so a dual transcription can be expressed with ASCII characters alone.
- Noise that a human can distinguish while transcribing is labeled separately with the '/' symbol:
- b: breathing sound
- l: laughter
- o: If another person's speech is included, put it at the beginning of the sentence
- n: ambient noise
If you want to learn more, please go to the '한국어 음성' (Korean Speech) transcription rules page and download the attachment.
Since KsponSpeech needs to be preprocessed differently from normal ASR data, I wrote the preprocessing code like this:
jp1924/ASR/src/preprocessor.py
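For illustration, here is a minimal sketch of what that kind of cleanup can look like. It is not the actual contents of preprocessor.py, and the regexes only cover the two rules described above (dual transcription and the b/l/o/n noise markers); the sample sentence is made up.

```python
import re

# Minimal sketch of ETRI-style transcript cleanup (not the real preprocessor.py).
# Covered rules, as described above:
#   1. dual transcription "(spelling)/(pronunciation)" -> keep one side
#   2. noise markers "b/", "l/", "o/", "n/"            -> drop
DUAL = re.compile(r"\(([^()]*)\)/\(([^()]*)\)")
NOISE = re.compile(r"\b[blon]/")

def clean_transcript(text: str, keep: str = "spelling") -> str:
    # Pick either the spelling (left) or the pronunciation (right) side.
    side = 1 if keep == "spelling" else 2
    text = DUAL.sub(lambda m: m.group(side), text)
    # Strip the human-annotated noise markers (breathing, laughter, other speaker, ambient).
    text = NOISE.sub("", text)
    # Collapse the whitespace left behind.
    return re.sub(r"\s+", " ", text).strip()

print(clean_transcript("o/ b/ (EBS)/(이비에스) 봤어? n/", keep="pronunciation"))
# -> 이비에스 봤어?
```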
And the code I used to produce the data is here.
jp1924/HF_builders/src/Audio
Thanks for your response!
The annotation rules mentioned here are useful, but I was actually talking about the mismatch between the audio and the text.
It seems that the 'id' column in this corpus contains many duplicate ids, such as "S000314". I have checked those samples, and they are not correct.
I've noticed that there are audioPath and TextPath attributes in the detailed info, but I have no access to those folders.
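For reference, this is roughly the kind of check I mean; the dataset identifier and split below are assumptions (substitute the actual repository name), and it only counts how often each id occurs.

```python
from collections import Counter

from datasets import load_dataset

# Assumed identifier/split for illustration; replace with the real repository name.
ds = load_dataset("jp1924/KconfSpeech", split="train")

# Count occurrences of each value in the 'id' column and report duplicates.
counts = Counter(ds["id"])
duplicates = {k: v for k, v in counts.items() if v > 1}
print(f"{len(duplicates)} ids appear more than once")
for sample_id, n in list(duplicates.items())[:10]:
    print(sample_id, n)
```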
@Jamhome
Ah, this dataset is KconfSpeech; I confused it with KsponSpeech.
First, thank you for reporting the problem; it is quite critical.
After checking on my end, I also confirmed data where the transcription and audio don't match.
I think I need to reprocess and reupload this.
To explain the circumstances, this issue seems to have occurred because I created this data when I had just started using build scripts and lacked experience.
The ID duplication issue is one of these problems.
I should also review other datasets created during that time.
I'll reprocess and upload it again soon, thank you!
Thank you for the efforts!
Looking forward to the next update~