---
license: cc-by-sa-4.0
task_categories:
  - question-answering
  - multiple-choice
language:
  - ja
configs:
  - config_name: v1.0
    data_files:
      - split: test
        path: v1.0/test-*
dataset_info:
  config_name: v1.0
  features:
    - name: qid
      dtype: string
    - name: category
      dtype: string
    - name: question
      dtype: string
    - name: choice0
      dtype: string
    - name: choice1
      dtype: string
    - name: choice2
      dtype: string
    - name: choice3
      dtype: string
    - name: answer_index
      dtype: int64
  splits:
    - name: test
      num_bytes: 495590
      num_examples: 2341
  download_size: 291218
  dataset_size: 495590
---

# Dataset Card for JamC-QA

English/Japanese

## Dataset Summary

This benchmark evaluates knowledge of Japan-specific topics, such as culture and customs, using multiple-choice questions. The questions span eight categories: culture, customs, climate, geography, history, government, law, and healthcare. Achieving high accuracy on this benchmark therefore requires extensive knowledge about Japan.

## Supported Tasks and Leaderboards

| Model | Micro-average | culture | custom | climate | geography | history | government | law | healthcare |
|---|---|---|---|---|---|---|---|---|---|
| sarashina2-8x70b | 0.7364 | 0.7220 | 0.8088 | 0.7855 | 0.6522 | 0.7839 | 0.7719 | 0.6436 | 0.8462 |
| sarashina2-70b | 0.7245 | 0.6988 | 0.7892 | 0.7556 | 0.6558 | 0.7781 | 0.7544 | 0.6733 | 0.7885 |
| Llama-3.3-Swallow-70B-v0.4 | 0.6950 | 0.6894 | 0.7353 | 0.6185 | 0.5688 | 0.7781 | 0.7719 | 0.7459 | 0.8462 |
| RakutenAI-2.0-8x7B | 0.6160 | 0.6056 | 0.6814 | 0.6160 | 0.4855 | 0.6888 | 0.6754 | 0.5941 | 0.6923 |
| Mixtral-8x7B-v0.1-japanese | 0.5950 | 0.5885 | 0.7500 | 0.5985 | 0.4601 | 0.6052 | 0.6404 | 0.5710 | 0.7308 |
| plamo-100b | 0.5908 | 0.6102 | 0.6422 | 0.6384 | 0.4565 | 0.6398 | 0.5526 | 0.5182 | 0.6731 |
| llm-jp-3.1-8x13b | 0.5737 | 0.5839 | 0.6275 | 0.6060 | 0.4674 | 0.6110 | 0.6404 | 0.4884 | 0.6538 |
| Meta-Llama-3.1-405B | 0.5724 | 0.5699 | 0.5245 | 0.4688 | 0.5435 | 0.6571 | 0.6579 | 0.6403 | 0.5962 |
| Nemotron-4-340B-Base | 0.5600 | 0.5761 | 0.6176 | 0.5062 | 0.4601 | 0.5821 | 0.6491 | 0.5776 | 0.6346 |
| Qwen2.5-72B | 0.5421 | 0.5419 | 0.6324 | 0.4763 | 0.4746 | 0.5677 | 0.6053 | 0.5644 | 0.6154 |
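
The scores above are per-category accuracies plus a micro-average over all 2,341 test questions. As a minimal sketch of how such numbers can be computed, assuming `preds` is a list of predicted answer indices aligned with the test split (this helper is illustrative, not the official evaluation script):

```python
from collections import defaultdict

def score(dataset, preds):
    """Per-category accuracy and the micro-average over all questions.

    `dataset` is the JamC-QA test split; `preds` is a list of predicted
    answer indices (0-3) aligned with it. Illustrative only, not the
    official evaluation script.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for example, pred in zip(dataset, preds):
        cat = example["category"]
        total[cat] += 1
        correct[cat] += int(pred == example["answer_index"])
    micro = sum(correct.values()) / sum(total.values())
    per_category = {cat: correct[cat] / total[cat] for cat in total}
    return micro, per_category
```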

## Languages

Japanese

## Dataset Structure

### Data Instances

An example from the culture category looks as follows:

```json
{
  "qid": "jamcqa_test_culture_00001",
  "category": "文化",
  "question": "「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?",
  "choice0": "乱",
  "choice1": "羅生門",
  "choice2": "隠し砦の三悪人",
  "choice3": "影武者",
  "answer_index": 0
}
```
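
To query a model, an instance has to be rendered into a prompt. A minimal sketch, where the numbered layout and the Japanese 質問/答え labels are one arbitrary template of my choosing, not a prescribed format:

```python
def format_prompt(example):
    """Render a JamC-QA instance as a numbered multiple-choice prompt.

    The template below is a hypothetical layout, not an official one.
    """
    choices = "\n".join(f"{i}. {example[f'choice{i}']}" for i in range(4))
    return f"質問: {example['question']}\n{choices}\n答え:"
```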

### Data Fields

- `qid` (str): A unique identifier for each question.
- `category` (str): The category of the question, given as a Japanese label (e.g. 文化 for culture).
  - One of eight categories: culture, customs, climate, geography, history, government, law, or healthcare.
- `question` (str): The question text.
  - Converted from full-width to half-width characters, except for katakana characters.
  - Does not contain line breaks (`\n`).
  - Leading/trailing whitespace has been removed.
- `choice0`–`choice3` (str): The four answer options.
  - Converted from full-width to half-width characters, except for katakana characters.
  - Does not contain line breaks (`\n`).
  - Leading/trailing whitespace has been removed.
- `answer_index` (int): The index (0–3) of the correct answer among `choice0`–`choice3`; the sketch below shows how to recover the answer text.
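
The normalization guarantees listed above can be checked mechanically, and `answer_index` maps directly to the gold choice. A small sketch (both helper names are mine, not part of the dataset):

```python
def gold_answer(example):
    """Return the text of the correct choice."""
    return example[f"choice{example['answer_index']}"]

def check_fields(example):
    """Assert the normalization guarantees stated in the field list."""
    for key in ("question", "choice0", "choice1", "choice2", "choice3"):
        value = example[key]
        assert "\n" not in value       # no line breaks
        assert value == value.strip()  # no leading/trailing whitespace
    assert 0 <= example["answer_index"] <= 3
```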

### Data Splits

- dev: 5 examples, intended for few-shot settings (a prompt-assembly sketch follows below)
- test: 2,341 examples
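
The dev examples are meant to be prepended as in-context demonstrations. A minimal sketch of assembling a few-shot prompt, reusing the hypothetical `format_prompt` from above; whether the dev split is exposed through the loader (e.g. `split='dev'`) depends on the released configs, so obtaining `dev_examples` is left to the caller:

```python
def few_shot_prompt(dev_examples, test_example):
    """Prepend each dev example, completed with its gold index, as a demonstration."""
    shots = "\n\n".join(
        format_prompt(ex) + f" {ex['answer_index']}" for ex in dev_examples
    )
    return shots + "\n\n" + format_prompt(test_example)
```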

## Licensing Information
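
This dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license, as declared in the metadata above.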

## How to use

```python
$ python
>>> import datasets
>>> jamcqa_test = datasets.load_dataset('sbintuitions/JamC-QA', 'v1.0', split='test')
>>> print(jamcqa_test)
Dataset({
    features: ['qid', 'category', 'question', 'choice0', 'choice1', 'choice2', 'choice3', 'answer_index'],
    num_rows: 2341
})
>>> print(jamcqa_test[0])
{'qid': 'jamcqa_test_culture_00001', 'category': '文化', 'question': '「狂った世で気が狂うなら気は確かだ」の名言を残した映画はどれ?', 'choice0': '乱', 'choice1': '羅生門', 'choice2': '隠し砦の三悪人', 'choice3': '影武者', 'answer_index': 0}
```
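
To evaluate one category at a time, the standard `datasets.Dataset.filter` method works; category labels are the Japanese strings (e.g. 文化 for culture):

```python
# Keep only the culture questions.
culture = jamcqa_test.filter(lambda x: x["category"] == "文化")
print(culture.num_rows)
```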

## Citation Information

```bibtex
@article{Oka2025,
  author={岡 照晃 and 柴田 知秀 and 吉田 奈央},
  title={JamC-QA: 日本固有の知識を問う多肢選択式質問応答ベンチマークの構築},
  journal={言語処理学会第31回年次大会(NLP2025)},
  year={2025}
}
```