StyleSet
A spoken language benchmark for evaluating speaking-style-related speech generation.
Released with our paper, Audio-Aware Large Language Models as Judges for Speaking Styles.
Tasks
Voice Style Instruction Following
- Reproduce a given sentence verbatim.
- Match specified prosodic styles (emotion, volume, pace, emphasis, pitch, non-verbal cues).
Role Playing
- Continue a two-turn dialogue prompt in character.
- Generate the next utterance with appropriate prosody and style.
- The dialogue prompts are adapted from IEMOCAP with the consent of its authors. Please refer to IEMOCAP for details and for the original data; we do not redistribute IEMOCAP data here.
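A minimal sketch of loading the benchmark with the Hugging Face datasets library. The repo id below is a placeholder for this card's actual repository path, and the printed fields depend on the released splits:

```python
from datasets import load_dataset

# Placeholder repo id: replace with this dataset card's actual path on the Hub.
ds = load_dataset("your-org/StyleSet")

print(ds)  # show available splits/configs

# Inspect the fields of one example from the first split.
first_split = list(ds.keys())[0]
example = next(iter(ds[first_split]))
print(example.keys())
```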
Evaluation
We use ALLM-as-a-judge for evaluation. We currently find that gemini-2.5-pro-0506
achieves the best agreement with human evaluators.
The complete evaluation prompts and the evaluation pipeline can be found in Tables 3 to 5 of our paper.
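As a rough sketch of how such a judge call might look with the google-generativeai Python SDK: the judging prompt below is a hypothetical stand-in (the official prompts are in Tables 3 to 5 of the paper), and the exact model identifier on the API may differ from the name above.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: API key supplied by the user

# Hypothetical judging prompt; use the prompts from Tables 3-5 of the paper instead.
JUDGE_PROMPT = (
    "You are given a speech sample generated for a voice style instruction. "
    "Rate from 1 to 5 how well the prosody (emotion, volume, pace, emphasis, "
    "pitch, non-verbal cues) matches the instruction, and explain briefly."
)

def judge_sample(audio_path: str, instruction: str) -> str:
    """Ask the audio-aware LLM judge to score one generated speech sample."""
    audio_file = genai.upload_file(path=audio_path)  # upload audio so the model can listen to it
    model = genai.GenerativeModel("gemini-2.5-pro-preview-05-06")  # model name is an assumption
    response = model.generate_content(
        [JUDGE_PROMPT, f"Instruction: {instruction}", audio_file]
    )
    return response.text

# Example usage (hypothetical file name and instruction):
# print(judge_sample("sample_001.wav", "Say the sentence in a whisper, slowly."))
```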
Citation
If you use StyleSet or find ALLM-as-a-judge useful, please cite our paper:
@misc{chiang2025audioawarelargelanguagemodels,
  title={Audio-Aware Large Language Models as Judges for Speaking Styles},
  author={Cheng-Han Chiang and Xiaofei Wang and Chung-Ching Lin and Kevin Lin and Linjie Li and Radu Kopetz and Yao Qian and Zhendong Wang and Zhengyuan Yang and Hung-yi Lee and Lijuan Wang},
  year={2025},
  eprint={2506.05984},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2506.05984},
}