Improve dataset card: update task category and add usage instructions

#2
by nielsr HF Staff - opened

This PR updates the task_categories metadata field from question-answering to audio-text-to-text. This change more accurately reflects that the C3 benchmark is designed for evaluating Spoken Dialogue Models (SDMs) in complex conversational settings, involving both audio input and text/speech output.
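In dataset card terms, the change described above amounts to editing the YAML frontmatter at the top of the card's README.md. A minimal sketch of the edit, with the field values taken from this discussion and all other frontmatter fields omitted for brevity:

```yaml
---
# Dataset card metadata (illustrative fragment; other fields omitted)
task_categories:
  - audio-text-to-text   # previously: question-answering
---
```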

Additionally, a "Sample Usage" section has been added to provide clear instructions on how to utilize the dataset for evaluation, directly referencing the detailed guides available in the accompanying GitHub repository.

ChengqianMa changed pull request status to closed
ChengqianMa changed pull request status to open

Thanks for polishing the README.md.
The Sample Usage section is indeed helpful.
However, audio-text-to-text is confusing. Our benchmark expects spoken dialogue models that accept speech input and produce speech output; we evaluate text that is either produced directly by the model or transcribed from its speech output. The audio-text-to-text tag incorrectly implies that the model itself outputs text, so the original question-answering tag should remain. Could you please restore and clarify this?

