
RealTalk-CN: A Realistic Chinese Speech-Text Dialogue Benchmark With Cross-Modal Interaction Analysis

📌 Resources:

RealTalk-CN is the first large-scale, multi-domain, bimodal (speech-text) Chinese Task-Oriented Dialogue (TOD) dataset. All data come from real human-to-human conversations, and the dataset is built specifically to advance research on speech-based large language models (Speech LLMs). Existing TOD datasets are mostly text-based and lack real speech, spontaneous disfluencies, and cross-modal interaction scenarios; RealTalk-CN fills these gaps and fully supports Chinese speech dialogue modeling and evaluation. The dataset is released under the CC BY-NC-SA 4.0 license and may be used freely for non-commercial research.
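Access on the Hub is gated, so you must accept the dataset conditions and authenticate before downloading. Below is a minimal loading sketch, assuming the dataset is published under a repo id like `BAAI/RealTalk-CN` with a default config and a `train` split; the actual id, configs, and split names may differ from these assumptions.

```python
# Minimal sketch, not an official loading script. The repo id, config, and
# split names below are assumptions based on this card; check the Hub page
# for the actual values. The dataset is gated, so accept the conditions on
# the Hub and authenticate first (e.g. `huggingface-cli login`).
from datasets import load_dataset

ds = load_dataset("BAAI/RealTalk-CN", split="train")

print(ds)            # feature schema and number of rows
print(ds[0].keys())  # field names of a single example
```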


Dataset Composition

  • Total Duration: ~150 hours of verified real human-to-human dialogue audio
  • Dialogue Scale: 5,400 multi-turn dialogues, over 60,000 utterances
  • Speakers: 113 individuals, balanced gender ratio, ages 18–50, covering major dialect regions across China
  • Dialogue Domains: 58 task-oriented domains (e.g., dining, transportation, shopping, healthcare, finance), including 55 intents and 115 slots
  • Audio Specifications: 16 kHz sampling rate, WAV format, recorded with both professional and mobile devices
  • Transcription & Annotation (see the record sketch after this list):
    • Manually transcribed at the character level, preserving spoken-language features
    • Annotated with 4 categories of disfluencies (elongation, repetition, self-correction, hesitation)
    • Includes transcriptions, slot values, intents, and speaker metadata (gender, age, region, etc.)
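For orientation, a per-utterance record might look roughly like the sketch below, which mirrors the fields listed above (character-level transcript, intent, slots, disfluency tags, speaker metadata). The field names, intent label, and slot names are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative record layout only; field names, intent labels, and slot names
# are assumptions, not the released schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Utterance:
    audio_path: str        # 16 kHz mono WAV clip for this turn
    transcript: str        # character-level transcript, spoken features kept
    intent: str            # one of the 55 annotated intents
    slots: Dict[str, str]  # slot name -> value (115 slot types overall)
    disfluencies: List[str] = field(default_factory=list)  # elongation / repetition / self-correction / hesitation
    speaker_gender: str = ""
    speaker_age: int = 0
    speaker_region: str = ""

example = Utterance(
    audio_path="dialogue_0001/turn_03.wav",
    transcript="那个……我想订一张明天去上海的票",  # "um... I'd like to book a ticket to Shanghai for tomorrow"
    intent="book_ticket",
    slots={"date": "明天", "destination": "上海"},
    disfluencies=["hesitation"],
)
```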

Dataset Features

  1. Natural and Colloquial: Contains spoken features and disfluencies in real task-oriented dialogues, overcoming the limitation of “read speech” corpora.
  2. Bimodal and Real Interaction: Provides paired speech-text annotations and introduces a cross-modal chat task that supports dynamic switching between speech and text, closer to real-world human-computer interaction (see the representation sketch after this list).
  3. Complete Dialogues and Multi-Domain Coverage: Average of 12 turns per dialogue, covering 58 real-world domains, supporting both single-domain and cross-domain dialogue modeling.
  4. Diverse Speakers: Covers major regions in China, balanced across gender and age, enabling research on the impact of accents, dialects, and demographic differences.
  5. High-Quality Annotation and Strict Quality Control: Multiple rounds of manual verification, detailed timestamps, and slot annotations ensure reliability and research value.
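As a rough illustration of the cross-modal chat setting in point 2, each turn can carry a modality flag so that a Speech LLM receives either the audio clip or the text form of the same utterance. The structure below is an assumed representation for evaluation code, not the dataset's file format.

```python
# Assumed turn representation for cross-modal evaluation; not the dataset's
# actual file format.
from typing import Dict, Iterator, List

def build_model_inputs(dialogue: List[Dict]) -> Iterator[Dict]:
    """Yield one model input per turn, switching between audio and text
    depending on the turn's modality flag."""
    for turn in dialogue:
        if turn["modality"] == "speech":
            yield {"type": "audio", "value": turn["audio_path"]}
        else:
            yield {"type": "text", "value": turn["text"]}

dialogue = [
    {"role": "user", "modality": "speech", "audio_path": "turn_01.wav",
     "text": "我想找一家附近的川菜馆"},   # "I'm looking for a Sichuan restaurant nearby"
    {"role": "user", "modality": "text", "audio_path": None,
     "text": "人均一百以内的"},           # "under 100 yuan per person"
]
for item in build_model_inputs(dialogue):
    print(item)
```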

Advantages

  • The first large-scale Chinese speech-text TOD corpus, filling the gap in benchmark datasets for Chinese spoken dialogue.
  • Provides disfluency annotations, supporting robustness evaluation and error-correction research in speech-based TOD systems (see the normalization sketch after this list).
  • Enables research in speech recognition, speech synthesis, intent recognition, slot filling, dialogue management, and cross-modal studies.
  • Serves as a benchmark for Speech LLMs in Chinese TOD tasks, driving the development of advanced speech interaction systems.
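For instance, if disfluency spans were marked inline with tags such as `<hesitation>...</hesitation>` (an assumption; the release's actual annotation format may differ), a robustness study could compare model performance on raw versus disfluency-stripped transcripts with something like the following.

```python
# Hedged sketch: the inline-tag format below is an assumption, not the
# dataset's documented annotation scheme.
import re

DISFLUENCY_TAGS = ("elongation", "repetition", "self-correction", "hesitation")
_PATTERN = re.compile(r"<({})>.*?</\1>".format("|".join(DISFLUENCY_TAGS)))

def strip_disfluencies(transcript: str) -> str:
    """Remove tagged disfluency spans and collapse leftover whitespace."""
    cleaned = _PATTERN.sub("", transcript)
    return re.sub(r"\s+", " ", cleaned).strip()

raw = "那个 <hesitation>嗯</hesitation> 我想订一张去上海的票"
print(strip_disfluencies(raw))  # "那个 我想订一张去上海的票"
```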

Conclusion

The release of RealTalk-CN lays the foundation for research in Chinese speech-text bimodal dialogue. With its large scale, multi-domain coverage, natural spoken language, diverse speakers, and cross-modal interaction, it not only advances the development of Speech LLMs in task-oriented dialogue but also provides a key resource for future cross-modal and multimodal intelligent systems.
