---
task_categories:
  - translation
language:
  - ja
  - zh
tags:
  - translation
  - ja
  - zh_cn
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: duration
      dtype: float64
    - name: sentence
      dtype: string
    - name: uid
      dtype: string
    - name: group_id
      dtype: string
  splits:
    - name: train
      num_bytes: 2072186696
      num_examples: 8000
    - name: valid
      num_bytes: 259808873
      num_examples: 1000
    - name: test
      num_bytes: 252154427
      num_examples: 1000
  download_size: 2596980172
  dataset_size: 2584149996
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: valid
        path: data/valid-*
      - split: test
        path: data/test-*
---

# ScreenTalk_JA2ZH-XS

ScreenTalk_JA2ZH-XS is a paired dataset of Japanese speech and translated Simplified Chinese text, released by DataLabX. It is designed for training and evaluating speech translation (ST) and multilingual speech understanding models. The data consists of spoken dialogue extracted from real-world Japanese movies and TV shows.

## 📦 Dataset Overview

- Source Language: Japanese (audio)
- Target Language: Simplified Chinese (text)
- Number of Samples: 10,000 (8,000 train / 1,000 valid / 1,000 test)
- Total Duration: ~30 hours (see the sanity-check sketch below)
- Format: Parquet
- License: CC BY 4.0
- Tasks:
  - Speech-to-text translation (ST)
  - Multilingual ASR + MT joint modeling
  - Japanese ASR training with aligned Chinese text
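
The split sizes and total duration listed above can be checked directly from the `duration` column. Below is a minimal sanity-check sketch (note that it downloads the full ~2.6 GB dataset; split names follow the dataset config):

```python
from datasets import load_dataset

# Sum per-clip durations in each split to confirm the ~30-hour total.
total_hours = 0.0
for split in ["train", "valid", "test"]:
    ds = load_dataset("DataLabX/ScreenTalk_JA2ZH-XS", split=split)
    hours = sum(ds["duration"]) / 3600
    total_hours += hours
    print(f"{split}: {len(ds)} samples, {hours:.1f} h")
print(f"total: {total_hours:.1f} h")
```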

## 📝 Data Fields

| Field Name | Type | Description |
| --- | --- | --- |
| `audio` | Audio | Raw Japanese speech audio clip |
| `sentence` | string | Corresponding Simplified Chinese text |
| `duration` | float64 | Duration of the audio in seconds |
| `uid` | string | Unique sample identifier |
| `group_id` | string | Grouping ID (e.g., speaker or scene tag) |
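
For example, a single record decodes into a plain Python dict. The sketch below is illustrative only; the `audio` column follows the standard 🤗 Datasets `Audio` feature, which decodes to a dict containing the waveform array and its sampling rate:

```python
from datasets import load_dataset

ds = load_dataset("DataLabX/ScreenTalk_JA2ZH-XS", split="valid")
sample = ds[0]

print(sample["uid"], sample["group_id"], sample["duration"])
print(sample["sentence"])                # Simplified Chinese translation
print(sample["audio"]["array"].shape,    # raw waveform as a NumPy array
      sample["audio"]["sampling_rate"])  # sampling rate of the clip
```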

## 🔍 Example Samples

| audio | duration (s) | sentence |
| --- | --- | --- |
| JA_00012 | 4.21 | 他不会来了。 |
| JA_00038 | 6.78 | 为什么你会这样说？告诉我真相。 |
| JA_00104 | 3.33 | 安静，有人来了。 |

## 💡 Use Cases

This dataset is ideal for:

- 🎯 Training speech translation models, e.g., fine-tuning Whisper for speech translation (see the sketch after this list)
- 🧪 Research on multilingual speech understanding
- 🧠 Developing multimodal AI systems (audio → Chinese text)
- 🏫 Educational tools for Japanese learners
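
As an illustration of the first use case, here is a minimal preprocessing sketch for Whisper fine-tuning with 🤗 Transformers. The checkpoint `openai/whisper-small` is an arbitrary choice, resampling to 16 kHz is assumed because that is what Whisper expects, and the training loop plus language/task prompt handling are omitted:

```python
from datasets import load_dataset, Audio
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

ds = load_dataset("DataLabX/ScreenTalk_JA2ZH-XS", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz input

def prepare(example):
    audio = example["audio"]
    # Log-mel input features computed from the Japanese waveform.
    example["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Target token ids for the Simplified Chinese translation.
    example["labels"] = processor.tokenizer(example["sentence"]).input_ids
    return example

ds = ds.map(prepare, remove_columns=["audio", "sentence", "duration", "uid", "group_id"])
```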

## 📥 Loading Example (Hugging Face Datasets)

```python
from datasets import load_dataset

# Load the training split (use "valid" or "test" for the other splits).
ds = load_dataset("DataLabX/ScreenTalk_JA2ZH-XS", split="train")
```
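
To inspect a few examples without downloading the full ~2.6 GB first, streaming mode also works (a minimal sketch):

```python
from datasets import load_dataset

# Stream samples on the fly instead of downloading all Parquet shards up front.
ds = load_dataset("DataLabX/ScreenTalk_JA2ZH-XS", split="train", streaming=True)
for sample in ds.take(3):
    print(sample["uid"], sample["duration"], sample["sentence"])
```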

## 📃 Citation

```bibtex
@misc{datalabx2025screentalkja,
  title        = {DataLabX/ScreenTalk_JA2ZH-XS: A Speech Translation Dataset of Japanese Audio and Chinese Text},
  author       = {DataLabX},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/DataLabX/ScreenTalk_JA2ZH-XS}},
}
```

We welcome feedback, suggestions, and contributions! 🙌