---
license: cc-by-4.0
task_categories:
- table-question-answering
- multiple-choice
language:
- en
pretty_name: Internal Medicine MCQ
size_categories:
- n<1K
---
# Dataset Card for **Internal Medicine MCQ**
## Dataset Details
### **Dataset Description**
This dataset consists of **41 high-quality**, two-choice multiple-choice questions (MCQs) focused on **core biomedical knowledge** and clinical scenarios from **internal medicine**. These questions were specifically curated for research evaluating medical knowledge, clinical reasoning, and confidence-based interactions among medical trainees and large language models (LLMs).
* **Curated by:** Tom Sheffer
* **Shared by:** Tom Sheffer (The Hebrew University of Jerusalem)
* **Language:** English
* **License:** [Creative Commons Attribution 4.0 International (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
* **Paper:** *[Information Needed]*
---
## Uses
### **Direct Use**
This dataset is suitable for:
* Evaluating medical knowledge and clinical reasoning skills of medical students and healthcare professionals.
* Benchmarking the performance and reasoning capabilities of large language models (LLMs) on medical question-answering tasks (see the evaluation sketch after this list).
* Research on collaborative human–AI and human–human interactions involving clinical decision-making.
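The benchmarking use case can be sketched as follows. This is a minimal illustration rather than a reference implementation: the repository ID, the `train` split name, and the `ask_model` stub are placeholders, and the field names follow the schema described under *Dataset Structure* below.

```python
from datasets import load_dataset


def format_prompt(row: dict) -> str:
    """Render one MCQ as a plain-text prompt with lettered options."""
    return (
        f"{row['question_text']}\n"
        f"A. {row['optionA']}\n"
        f"B. {row['optionB']}\n"
        "Answer with the letter of the best option (A or B)."
    )


def ask_model(prompt: str) -> str:
    """Placeholder for whichever LLM is being evaluated; must return 'A' or 'B'."""
    raise NotImplementedError


def evaluate(repo_id: str = "tomshe/internal-medicine-mcq") -> float:
    """Compute simple accuracy over the binary-choice questions."""
    ds = load_dataset(repo_id, split="train")  # repo ID and split name are assumptions
    correct = 0
    for row in ds:
        prediction = ask_model(format_prompt(row)).strip().upper()[:1]
        correct += int(prediction == row["answer_idx"])
    return correct / len(ds)
```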
### **Out-of-Scope Use**
* **Not intended** as a diagnostic or clinical decision-making tool in real clinical settings.
* Should **not** be used to train systems intended for direct clinical application without extensive validation.
---
## Dataset Structure
The dataset comprises **41 multiple-choice questions**, each with two answer choices (binary-choice format). Each record includes the following fields (a loading sketch follows this list):
* `question_id`: A unique identifier for each question.
* `question_text`: The clinical vignette or biomedical question.
* `optionA`: First possible answer choice.
* `optionB`: Second possible answer choice.
* `answer`: The correct answer text.
* `answer_idx`: The correct answer choice (A or B).
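A minimal loading sketch with the `datasets` library is shown below; the repository ID and split name are assumptions and may need to be adjusted to the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Hypothetical repository ID and split name -- adjust to the actual Hub path.
ds = load_dataset("tomshe/internal-medicine-mcq", split="train")

# Inspect one record; the field names follow the schema listed above.
example = ds[0]
print(example["question_id"])
print(example["question_text"])
print("A:", example["optionA"])
print("B:", example["optionB"])
print("Correct:", example["answer_idx"], "-", example["answer"])
```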
---
## Dataset Creation
### **Curation Rationale**
The dataset was created to study **knowledge diversity**, internal confidence, and collaborative decision-making between medical trainees and AI agents. Questions were carefully selected to represent authentic licensing exam–style questions in internal medicine, ensuring ecological validity for medical education and AI–human collaborative studies.
---
### **Source Data**
#### **Data Collection and Processing**
The questions were sourced and adapted from standardized medical licensing preparation materials. All questions were reviewed, translated, and validated by licensed physicians.
#### **Who are the source data producers?**
The original data sources are standard medical licensing examination preparation materials.
---
### **Personal and Sensitive Information**
The dataset **does not contain** any personal, sensitive, or identifiable patient or clinician information. All clinical scenarios are fictionalized or generalized for educational and research purposes.
---
## Bias, Risks, and Limitations
* The dataset size (**41 questions**) is limited; therefore, findings using this dataset might not generalize broadly.
* Content is limited to internal medicine; results may not generalize across all medical specialties.
---
## Citation
If you use this dataset, please cite:
```bibtex
```
---
## More Information
For more details, please contact the dataset author listed below.
---
## Dataset Card Author
* **Tom Sheffer** (The Hebrew University of Jerusalem)
---
## Dataset Card Contact
* **Email:** [[email protected]](mailto:[email protected])
---