---
license: cc-by-4.0
task_categories:
- question-answering
- multiple-choice
language:
- en
pretty_name: Internal_Medicine
size_categories:
- n<1K
---

# Dataset Card for **Internal Medicine MCQ**

---

## Dataset Details

### **Dataset Description**

This dataset consists of **41 high-quality**, two-choice multiple-choice questions (MCQs) focused on **core biomedical knowledge** and clinical scenarios from **internal medicine**. These questions were specifically curated for research evaluating medical knowledge, clinical reasoning, and confidence-based interactions among medical trainees and large language models (LLMs).

* **Curated by:** Tom Sheffer
* **Shared by:** Tom Sheffer (The Hebrew University of Jerusalem)
* **Language:** English
* **License:** [Creative Commons Attribution 4.0 International (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
* **Paper:** *[Information Needed]*

---

## Uses

### **Direct Use**

This dataset is suitable for:

* Evaluating medical knowledge and clinical reasoning skills of medical students and healthcare professionals.
* Benchmarking performance and reasoning capabilities of large language models (LLMs) in medical question-answering tasks.
* Research on collaborative human–AI and human–human interactions involving clinical decision-making.

### **Out-of-Scope Use**

* **Not intended** as a diagnostic or clinical decision-making tool in real clinical settings.
* Should **not** be used to train systems intended for direct clinical application without extensive validation.

---
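For the benchmarking use case, each item can be rendered as a two-choice prompt. A minimal sketch in Python (the `format_prompt` helper and the sample vignette are invented for illustration; only the A/B binary-choice format comes from the dataset itself):

```python
def format_prompt(question: str, option_a: str, option_b: str) -> str:
    """Render one MCQ as a two-choice prompt for an LLM or a human rater."""
    return (
        f"{question}\n"
        f"A. {option_a}\n"
        f"B. {option_b}\n"
        "Answer with a single letter (A or B):"
    )

# Invented example vignette, for illustration only:
prompt = format_prompt(
    "A 58-year-old man presents with crushing substernal chest pain.",
    "Acute myocardial infarction",
    "Gastroesophageal reflux disease",
)
print(prompt)
```

The single-letter instruction keeps model outputs easy to parse when computing accuracy over all 41 items.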

## Dataset Structure

The dataset comprises **41 multiple-choice questions** with two answer choices (binary-choice format). The dataset includes the following fields:

* `question_id`: A unique identifier for each question.
* `question_text`: The clinical vignette or biomedical question.
* `optionA`: First possible answer choice.
* `optionB`: Second possible answer choice.
* `answer`: The correct answer text.
* `answer_idx`: The correct answer choice (A or B).

---
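A minimal sketch of working with one record under the schema above (the record shown is a hypothetical example, not an actual item from the dataset, and `score` is an illustrative helper):

```python
# Hypothetical record following the field layout of this dataset;
# the clinical content is invented for illustration only.
record = {
    "question_id": "im_001",
    "question_text": "A 58-year-old man presents with crushing substernal chest pain.",
    "optionA": "Acute myocardial infarction",
    "optionB": "Gastroesophageal reflux disease",
    "answer": "Acute myocardial infarction",
    "answer_idx": "A",
}

def score(rec: dict, predicted_idx: str) -> bool:
    """Return True when the predicted letter ("A" or "B") matches answer_idx."""
    return predicted_idx.upper() == rec["answer_idx"]

# `answer` repeats the text of the option selected by `answer_idx`,
# so the letter can be used to index into optionA/optionB directly.
assert record["option" + record["answer_idx"]] == record["answer"]
```

Scoring against `answer_idx` rather than the answer text avoids false negatives from minor wording differences in model outputs.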

## Dataset Creation

### **Curation Rationale**

The dataset was created to study **knowledge diversity**, internal confidence, and collaborative decision-making between medical trainees and AI agents. Questions were carefully selected to represent authentic licensing exam–style questions in internal medicine, ensuring ecological validity for medical education and AI–human collaborative studies.

---

### **Source Data**

#### **Data Collection and Processing**

The questions were sourced and adapted from standardized medical licensing preparation materials. All questions were reviewed, translated, and validated by licensed physicians.

#### **Who are the source data producers?**

The original data sources are standard medical licensing examination preparation materials.

---

### **Personal and Sensitive Information**

The dataset **does not contain** any personal, sensitive, or identifiable patient or clinician information. All clinical scenarios are fictionalized or generalized for educational and research purposes.

---

## Bias, Risks, and Limitations

* The dataset size (**41 questions**) is limited; therefore, findings using this dataset might not generalize broadly.
* Content is limited to internal medicine; results may not generalize across all medical specialties.

---

## Citation

If using this dataset, please cite:

```bibtex
```

---

## More Information

For more details, please contact the dataset author listed below.

---

## Dataset Card Author

* **Tom Sheffer** (The Hebrew University of Jerusalem)

---

## Dataset Card Contact

* **Email:** [[email protected]](mailto:[email protected])