# StyleSet

**A spoken language benchmark for evaluating speaking-style-related speech generation**

Released in our paper, [Audio-Aware Large Language Models as Judges for Speaking Styles](https://arxiv.org/abs/2506.05984)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/622326ae0129f2097d69a3e2/Q8Os1g5vfy22Y9myvSc7X.png)

---

## Tasks

1. **Voice Style Instruction Following**
   - Reproduce a given sentence verbatim.
   - Match specified prosodic styles (emotion, volume, pace, emphasis, pitch, non-verbal cues).
2. **Role Playing**
   - Continue a two-turn dialogue prompt in character.
   - Generate the next utterance with appropriate prosody and style.
   - This task is adapted from IEMOCAP with the consent of its authors. Please refer to [IEMOCAP](https://sail.usc.edu/iemocap/) for details and the original data; we do not redistribute the data here.

---

## Evaluation

We use an ALLM-as-a-judge for evaluation. Among the judges we have tested, `gemini-2.5-pro-0506` currently reaches the best agreement with human evaluators. The complete evaluation prompts and the evaluation pipeline are given in Tables 3 to 5 of our paper; a minimal sketch of a judging call is included at the end of this card.

## Citation

If you use StyleSet or find ALLM-as-a-judge useful, please cite our paper:

```
@misc{chiang2025audioawarelargelanguagemodels,
      title={Audio-Aware Large Language Models as Judges for Speaking Styles},
      author={Cheng-Han Chiang and Xiaofei Wang and Chung-Ching Lin and Kevin Lin and Linjie Li and Radu Kopetz and Yao Qian and Zhendong Wang and Zhengyuan Yang and Hung-yi Lee and Lijuan Wang},
      year={2025},
      eprint={2506.05984},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2506.05984},
}
```
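
---

## Example: Calling an ALLM Judge

As a concrete illustration of the ALLM-as-a-judge setup, below is a minimal sketch using the `google-genai` Python SDK. This is not our official evaluation pipeline: the judging prompt here is a simplified placeholder (the actual prompts are in Tables 3 to 5 of the paper), and the model name, file path, and rating format are assumptions you should adapt.

```python
# Minimal sketch of an ALLM-as-a-judge call with the google-genai SDK.
# NOTE: the prompt below is a simplified placeholder; the judging prompts
# used in the paper are given in Tables 3 to 5. The model name, file path,
# and rating format are assumptions -- substitute your own.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Upload the generated speech clip to be judged.
audio_file = client.files.upload(file="generated_speech.wav")

# Hypothetical judging prompt for the Voice Style Instruction Following task.
judge_prompt = (
    "You will hear a generated speech clip. The speaker was instructed to "
    "say the sentence 'I can't believe you did that!' in an angry voice at "
    "a fast pace. On a 1-5 scale, rate (1) how accurately the sentence is "
    "reproduced and (2) how well the speaking style matches the instruction. "
    "Reply with two integers separated by a comma."
)

response = client.models.generate_content(
    model="gemini-2.5-pro",  # the paper finds gemini-2.5-pro-0506 the best judge
    contents=[judge_prompt, audio_file],
)
print(response.text)  # e.g. "5, 4"
```

In practice you would loop this call over every benchmark item, parse the two scores from the judge's reply, and aggregate them per system; see the paper for the exact prompts and agreement analysis with human evaluators.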