tomshe committed 0db2266 (verified; parent: c31142c): Update README.md
Files changed: README.md (+83, -1)
---
license: cc-by-4.0
task_categories:

pretty_name: Internal_Medicine
size_categories:
- n<1K
---

# Dataset Card for Internal Medicine MCQ

## Dataset Details

### Dataset Description

This dataset consists of 41 high-quality, two-choice multiple-choice questions (MCQs) focused on core biomedical knowledge and clinical scenarios from internal medicine. The questions were curated for research evaluating medical knowledge, clinical reasoning, and confidence-based interactions among medical trainees and large language models (LLMs).

- **Curated by:** Tom Sheffer
- **Shared by:** Tom Sheffer (The Hebrew University of Jerusalem)
- **Language:** English
- **License:** Creative Commons Attribution 4.0 International (CC-BY 4.0)
- **Paper:** [...]

## Uses

### Direct Use

This dataset is suitable for:

- Evaluating the medical knowledge and clinical reasoning skills of medical students and healthcare professionals.
- Benchmarking the performance and reasoning capabilities of large language models (LLMs) in medical question-answering tasks.
- Research on collaborative human–AI and human–human interactions involving clinical decision-making.
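For the benchmarking use above, scoring reduces to comparing a model's chosen option against the gold choice per question. A minimal sketch follows; the `gold` and `preds` values are hypothetical stand-ins, not dataset content, and note that with only two options chance accuracy is 50%:

```python
# Minimal sketch of scoring a model on the two-choice format.
# `gold` and `preds` are hypothetical stand-ins, not taken from the dataset.

def accuracy(preds, gold):
    """Fraction of questions where the predicted choice matches the gold choice."""
    if len(preds) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    return sum(p == g for p, g in zip(preds, gold)) / len(gold)

gold = ["A", "B", "A", "A"]   # gold answer choice per question
preds = ["A", "B", "B", "A"]  # model's chosen option per question

print(accuracy(preds, gold))  # 0.75, read against the 0.5 chance baseline
```

Because the format is binary, reported accuracies should always be interpreted relative to the 50% chance baseline.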

### Out-of-Scope Use

This dataset is not intended as a diagnostic or clinical decision-making tool in real clinical settings. It should not be used to train systems intended for direct clinical application without extensive validation.

## Dataset Structure

The dataset comprises 41 multiple-choice questions, each with two answer choices (binary-choice format). Each record contains the following fields:

- `question_id`: A unique identifier for each question.
- `question_text`: The clinical vignette or biomedical question.
- `optionA`: The first answer choice.
- `optionB`: The second answer choice.
- `answer`: The correct answer text.
- `answer idx`: The correct answer choice (A or B).
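As a sketch, a record with the fields above can be consistency-checked before use; field names follow the card (including the space in `answer idx`), and the example question is illustrative, not taken from the dataset:

```python
# Sketch of a consistency check for the record schema described above.
# Field names follow the dataset card; the example record is hypothetical.

REQUIRED_FIELDS = ["question_id", "question_text", "optionA", "optionB", "answer", "answer idx"]

def validate_record(rec: dict) -> None:
    """Raise ValueError if a record does not match the documented schema."""
    for field in REQUIRED_FIELDS:
        if field not in rec:
            raise ValueError(f"missing field: {field}")
    if rec["answer idx"] not in ("A", "B"):
        raise ValueError("answer idx must be 'A' or 'B'")
    expected = rec["optionA"] if rec["answer idx"] == "A" else rec["optionB"]
    if rec["answer"] != expected:
        raise ValueError("answer text does not match the option named by answer idx")

example = {
    "question_id": "demo-1",  # hypothetical record, not dataset content
    "question_text": "Which condition is classically associated with a positive Murphy's sign?",
    "optionA": "Acute cholecystitis",
    "optionB": "Acute appendicitis",
    "answer": "Acute cholecystitis",
    "answer idx": "A",
}
validate_record(example)  # passes silently for a well-formed record
```

Checking that `answer` matches the option named by `answer idx` guards against the two redundant fields drifting apart.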

## Dataset Creation

### Curation Rationale

The dataset was created to study knowledge diversity, internal confidence, and collaborative decision-making between medical trainees and AI agents. Questions were carefully selected to reflect authentic licensing-exam-style questions in internal medicine, ensuring ecological validity for medical education and human–AI collaboration studies.
### Source Data

#### Data Collection and Processing

The questions were sourced and adapted from standardized medical licensing preparation materials. All questions were reviewed, translated, and validated by licensed physicians.

#### Who are the source data producers?

The original data sources are standard medical licensing examination preparation materials.
### Personal and Sensitive Information

The dataset does not contain any personal, sensitive, or identifiable patient or clinician information. All clinical scenarios are fictionalized or generalized for educational and research purposes.

## Bias, Risks, and Limitations

- The dataset is small (41 questions), so findings based on it might not generalize broadly.
- Content is limited to internal medicine; results may not generalize to other medical specialties.

## Citation

If you use this dataset, please cite:

**BibTeX:**

## More Information

For more details, please contact the dataset authors listed below.

## Dataset Card Authors

Tom Sheffer (The Hebrew University of Jerusalem)

## Dataset Card Contact

Tom Sheffer: [email protected]