Improve dataset card: update task category and add usage instructions
This PR updates the `task_categories` metadata field from `question-answering` to `audio-text-to-text`. This change more accurately reflects that the C3 benchmark is designed for evaluating Spoken Dialogue Models (SDMs) in complex conversational settings, involving both audio input and text/speech output.
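For reference, a minimal sketch of the proposed frontmatter change in the dataset card's YAML metadata (only the field touched by this PR is shown; all other metadata is omitted):

```yaml
---
task_categories:
  - audio-text-to-text   # previously: question-answering
---
```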
Additionally, a "Sample Usage" section has been added to provide clear instructions on how to utilize the dataset for evaluation, directly referencing the detailed guides available in the accompanying GitHub repository.
Thanks for polishing the `README.md`.
The "Sample Usage" section is indeed helpful.
However, `audio-text-to-text` is confusing. Our benchmark expects spoken dialogue models that accept speech input and produce speech output; we evaluate the text either produced directly by the model or transcribed from its speech output. The `audio-text-to-text` tag incorrectly implies that the model itself outputs text, so the original `question-answering` tag should remain. Could you please restore and clarify this?