---
task_categories:
- question-answering
language:
- en
- zh
tags:
- biology
- medical
size_categories:
- 1K<n<10K
viewer: true
configs:
- config_name: default
data_files:
- split: test
path:
- AnesBench.json
license: c-uda
---
The AnesBench Datasets Collection comprises three distinct datasets: AnesBench, an anesthesiology reasoning benchmark; AnesQA, a supervised fine-tuning (SFT) dataset; and AnesCorpus, a continual pre-training corpus. This repository hosts AnesBench. For AnesQA and AnesCorpus, please refer to their respective links: https://huggingface.co/datasets/MiliLab/AnesQA and https://huggingface.co/datasets/MiliLab/AnesCorpus.
Dataset Description
AnesBench is designed to assess the anesthesiology-related reasoning capabilities of Large Language Models (LLMs). It contains 4,427 anesthesiology questions, each labeled with a three-level categorization of cognitive demand and provided in both English and Chinese, enabling evaluation of LLMs' knowledge, application, and clinical reasoning abilities across different linguistic contexts.
JSON Sample
```json
{
  "id": "1bb76e22-6dbf-5b17-bbdf-0e6cde9f9440",
  "choice_num": 4,
  "answer": "A",
  "level": 1,
  "en_question": "english question",
  "en_X": "option X",
  "zh_question": "中文问题",
  "zh_X": "选项X"
}
```
Field Explanations
| Field | Type | Description |
|---|---|---|
| id | string | A randomly generated ID using UUID |
| choice_num | int | The number of options in this question |
| answer | string | The correct answer to this question |
| level | int | The cognitive demand level of the question (1, 2, and 3 represent System 1, System 1.x, and System 2, respectively) |
| en_question | string | English text of the question stem |
| zh_question | string | Chinese text of the question stem |
| en_X | string | English text of option X (X takes values from A until the total number of options is reached) |
| zh_X | string | Chinese text of option X (X takes values from A until the total number of options is reached) |
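To illustrate how these fields fit together, the sketch below assembles a multiple-choice prompt from a single record. The helper name `build_prompt` and the prompt wording are assumptions for illustration, not part of the dataset; it only relies on the field definitions above.

```python
import string


def build_prompt(record: dict, lang: str = "en") -> str:
    """Assemble a multiple-choice prompt from one AnesBench record.

    `lang` selects the "en" or "zh" fields; option letters run from "A"
    up to the record's `choice_num`.
    """
    letters = string.ascii_uppercase[: record["choice_num"]]
    lines = [record[f"{lang}_question"]]
    for letter in letters:
        lines.append(f"{letter}. " + record[f"{lang}_{letter}"])
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)
```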
Recommended Usage
- Question Answering: evaluate in a zero-shot or few-shot setting, feeding each question (stem plus options) to the model and comparing its chosen option letter with the `answer` field. Accuracy should be used as the evaluation metric (a minimal sketch follows below).
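A minimal zero-shot evaluation loop might look like the sketch below. The repository ID `MiliLab/AnesBench` is inferred from the sibling dataset links above, `ask_model` is a placeholder for whatever LLM call you evaluate, and `build_prompt` is the sketch from the previous section; none of these names are defined by this repository.

```python
from datasets import load_dataset  # pip install datasets

# Assumption: the repository ID is MiliLab/AnesBench (inferred from the links above);
# the card's config maps the "test" split to AnesBench.json.
records = load_dataset("MiliLab/AnesBench", split="test")


def ask_model(prompt: str) -> str:
    """Placeholder for your LLM call; should return an option letter such as "A"."""
    raise NotImplementedError


correct = 0
for record in records:
    prediction = ask_model(build_prompt(record))  # build_prompt from the sketch above
    if prediction.strip().upper().startswith(record["answer"]):
        correct += 1

print(f"Accuracy: {correct / len(records):.4f}")
```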