---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: channel
      dtype: string
    - name: transcript_whisper
      dtype: string
    - name: title
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: transcript_sensevoice
      dtype: string
    - name: emotion_sensevoice
      sequence: string
    - name: event_sensevoice
      sequence: string
    - name: c50
      dtype: float
    - name: snr
      dtype: float
    - name: speech_duration
      dtype: float
    - name: emotion_emotion2vec
      dtype: string
  splits:
    - name: train
      num_bytes: 544892035865.877
      num_examples: 1478373
  download_size: 527025543429
  dataset_size: 544892035865.877
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - automatic-speech-recognition
  - audio-classification
language:
  - zh
  - yue
---

# Cantonese YouTube Pseudo-Transcription Dataset

- Contains approximately 10k hours of audio sourced from YouTube
  - Videos are chosen at random and scraped on a per-channel basis
  - Content includes news, vlogs, entertainment, stories, and health
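Given the ~500 GB download size, streaming is usually the most practical way to explore the data. A minimal sketch, assuming the dataset is loaded from this repo (`alvanlii/cantonese-youtube`):

```python
from datasets import load_dataset

# Stream instead of downloading the full ~500 GB to disk.
ds = load_dataset("alvanlii/cantonese-youtube", split="train", streaming=True)

# Inspect one example; the audio column decodes to a 16 kHz array on access.
example = next(iter(ds))
print(example["title"], "|", example["channel"])
print(example["transcript_whisper"])
print(example["audio"]["sampling_rate"])  # 16000
```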
- Columns
  - `transcript_whisper`: transcribed with Scrya/whisper-large-v2-cantonese, using alvanlii/whisper-small-cantonese for speculative decoding
  - `transcript_sensevoice`: transcribed with FunAudioLLM/SenseVoiceSmall
    - converted to Traditional Chinese with OpenCC
    - event tags isolated into `event_sensevoice`
    - emotion tags isolated into `emotion_sensevoice`
  - `snr`: signal-to-noise ratio, extracted with ylacombe/brouhaha-best
  - `c50`: speech clarity, extracted with ylacombe/brouhaha-best (see the filtering sketch after this list)
  - `emotion_emotion2vec`: emotion label, extracted with emotion2vec/emotion2vec_plus_large
  - Note that `id` does not reflect the ordering of the audio segments within the same video
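The `snr`, `c50`, and `speech_duration` columns are convenient handles for quality filtering. A minimal sketch; the thresholds here are illustrative assumptions, not values recommended by this card:

```python
from datasets import load_dataset

ds = load_dataset("alvanlii/cantonese-youtube", split="train", streaming=True)

# Illustrative cutoffs: higher snr means less background noise,
# higher c50 means clearer (less reverberant) speech.
MIN_SNR = 15.0      # dB, assumed threshold
MIN_C50 = 30.0      # dB, assumed threshold
MIN_DURATION = 1.0  # seconds; drops near-empty clips

clean = ds.filter(
    lambda ex: ex["snr"] >= MIN_SNR
    and ex["c50"] >= MIN_C50
    and ex["speech_duration"] >= MIN_DURATION
)
```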
- Processing
  - The full audio is split using WhisperX with Scrya/whisper-large-v2-cantonese
    - audio is split into chunks of under 30 seconds, segmented by speaker
  - Preliminary filtering removes boilerplate phrases such as:
    - "like/subscribe to the YouTube channel"
    - "subtitles by [xxxx]"
    - Additional filtering is recommended for your own use; see the sketch after this list
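Since additional filtering is left to the user, here is a minimal sketch that drops rows whose pseudo-transcripts still contain common channel-outro boilerplate. The phrase patterns are illustrative assumptions and should be extended for your own corpus:

```python
import re

from datasets import load_dataset

ds = load_dataset("alvanlii/cantonese-youtube", split="train", streaming=True)

# Illustrative patterns for outro boilerplate
# ("subscribe", "subtitles provided by ...", "like"); extend as needed.
BOILERPLATE = re.compile(r"訂閱|字幕.*提供|按讚")

def is_clean(example):
    # Check both pseudo-transcripts, since either model may have caught the phrase.
    text = (example["transcript_whisper"] or "") + (example["transcript_sensevoice"] or "")
    return BOILERPLATE.search(text) is None

filtered = ds.filter(is_clean)
```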
- Note: an earlier version of this dataset contained duplicated data. If you downloaded it before Nov 7, 2024, I recommend re-downloading.