---
dataset_info:
  features:
    - name: audio_a
      dtype: audio
    - name: audio_b
      dtype: audio
    - name: label
      dtype: string
    - name: text_a
      dtype: string
    - name: text_b
      dtype: string
    - name: human_quality_a
      dtype: float64
    - name: human_quality_b
      dtype: float64
    - name: system_a
      dtype: string
    - name: system_b
      dtype: string
  splits:
    - name: train
      num_bytes: 241032306
      num_examples: 600
  download_size: 221703663
  dataset_size: 241032306
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Speech Quality Assessment Data

The dataset is derived from the TMHINT-Q dataset (paper). The speech is in Mandarin Chinese, and noise of different types and levels was added to clean speech. Human annotators scored each audio on its "quality" on a 1-5 scale; `human_quality_a`/`human_quality_b` give the average across multiple annotators. Note that the full pairwise-mapped dataset contains 6,475 pairs; for this work we sample 600 rows.

- `label = a`: `audio_a` is of higher quality than `audio_b` (i.e., less noisy and clearer)
- `label = b`: `audio_b` is of higher quality than `audio_a`
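
Below is a minimal sketch of loading and sanity-checking the pairs with the Hugging Face `datasets` library. The repository ID is a placeholder (substitute the actual repo), and the final check simply assumes the preference label should agree with the averaged human quality scores; it is illustrative, not part of the dataset itself.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the actual dataset repo.
ds = load_dataset("<user>/<speech-quality-pairs>", split="train")

# Inspect one pairwise example: each row holds two audio clips, their
# transcripts, averaged human quality scores, and the preference label.
row = ds[0]
print(row["label"], row["human_quality_a"], row["human_quality_b"])
print(row["text_a"])
# With the standard Audio feature this decodes to the waveform and its
# sampling rate (exact return type can vary by `datasets` version).
print(row["audio_a"]["sampling_rate"], len(row["audio_a"]["array"]))

# Sanity check (illustrative): label "a" should mean audio_a received the
# higher averaged human quality score, and vice versa. Drop the audio
# columns first to avoid decoding every clip.
scores = ds.remove_columns(["audio_a", "audio_b"])
agree = sum(
    (r["label"] == "a") == (r["human_quality_a"] > r["human_quality_b"])
    for r in scores
)
print(f"label/score agreement: {agree}/{len(scores)}")
```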