Update README.md

size_categories:
- 100K<n<1M
---

# Unified MCQA: Aggregated Multiple-Choice Datasets

A standardized collection of multiple-choice question answering datasets, organized by the number of answer choices (typically 2, 3, 4, 5, or 8). Only `train` splits (or equivalents, such as MMLU's `auxiliary_train`) are included, to avoid contamination with evaluation sets.

The intended use case is to include phase(s) of multiple-choice training for encoder models as part of a 'multi-task pretraining' setup, using an adapted version of the [multiple choice example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice).

## Dataset Structure

### Data Fields

- `context` (string): Supporting text for the question (may be empty).
- `question` (string): The multiple-choice question.
- `choices` (List[string]): Potential answer options, with prefixes like "a)" removed.
- `label` (int): 0-indexed integer representing the correct answer choice.
- `source_dataset` (string): Identifier for the original source dataset.
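
For reference, this schema could be declared with `datasets.Features` roughly as follows; this is a minimal sketch assuming standard `datasets` types, not the verbatim `TARGET_FEATURES` definition from the aggregation script.

```python
from datasets import Features, Sequence, Value

# Sketch of the unified schema described above (assumed, not copied from
# create_unified_mcqa.py).
mcqa_features = Features(
    {
        "context": Value("string"),            # may be an empty string
        "question": Value("string"),
        "choices": Sequence(Value("string")),  # 2, 3, 4, 5, or 8 options per config
        "label": Value("int64"),               # 0-indexed position in `choices`
        "source_dataset": Value("string"),
    }
)
```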

### Configurations

The dataset is organized into configurations based on the number of choices per question after processing:

- `2-choice`: Examples with 2 answer choices (e.g., from PIQA, Winogrande, COPA).
- `3-choice`: Examples with 3 answer choices (e.g., from Social IQa).
- `4-choice`: Examples with 4 answer choices (e.g., from RACE, ARC, OpenBookQA, SWAG, Cosmos QA, DREAM, Quail, MedMCQA, MMLU, QuALITY, MCTest).
- `5-choice`: Examples with 5 answer choices (e.g., from CommonsenseQA, MathQA).
- `8-choice`: Examples with 8 answer choices (e.g., from QASC).

Only `train` splits are currently available for each configuration.

## Quick Usage

```python
from datasets import load_dataset

# Load 4-choice examples
# Replace 'pszemraj/unified-mcqa' with the actual Hub path if different
ds = load_dataset("pszemraj/unified-mcqa", name="4-choice", split="train")

# Inspect a single example
print(ds[0])
# Example output:
# {
#     'context': 'Article text here...',
#     'question': 'What is the main idea?',
#     'choices': ['Choice text A', 'Choice text B', 'Choice text C', 'Choice text D'],
#     'label': 2,
#     'source_dataset': 'race'
# }

# Load 5-choice examples
ds_5 = load_dataset("pszemraj/unified-mcqa", name="5-choice", split="train")
print(ds_5[0])
# Example output (MathQA):
# {
#     'context': '',
#     'question': 'a monkey ascends a greased pole 26 meters high. he ascends 2 meters in the first minute and then slips down 1 meter in the alternate minute. if this pattern continues until he climbs the pole, in how many minutes would he reach at the top of the pole?',
#     'choices': ['50 th minute', '41 st minute', '45 th minute', '42 nd minute', '49 th minute'],
#     'label': 4,
#     'source_dataset': 'math_qa'
# }
```
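
To feed these examples to an encoder with a multiple-choice head (the intended 'multi-task pretraining' setup mentioned above), each choice is typically paired with the shared context and question. Below is a minimal preprocessing sketch, reusing `ds` from the snippet above; the model name, truncation settings, and column handling are illustrative assumptions.

```python
from transformers import AutoTokenizer

# Illustrative tokenizer choice; swap in whichever encoder you are pretraining.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(example):
    n_choices = len(example["choices"])
    # Pair the shared context + question with every candidate answer.
    firsts = [f"{example['context']} {example['question']}".strip()] * n_choices
    return tokenizer(firsts, example["choices"], truncation=True)

encoded = ds.map(
    preprocess,
    remove_columns=["context", "question", "choices", "source_dataset"],
)
# `encoded` keeps `label` plus per-choice `input_ids`/`attention_mask` lists,
# ready for a multiple-choice data collator as in the example script.
```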

## Source Datasets

This dataset aggregates examples from the following Hugging Face datasets (using the specified paths/configs):

- `commonsense_qa` (path: `commonsense_qa`)
- `sciq` (path: `sciq`)
- `math_qa` (path: `math_qa`)
- `swag` (path: `swag`, name: `regular`)
- `hellaswag` (path: `hellaswag`)
- `social_i_qa` (path: `social_i_qa`)
- `cosmos_qa` (path: `cosmos_qa`)
- `piqa` (path: `piqa`)
- `winogrande` (path: `winogrande`, name: `winogrande_xl`)
- `dream` (path: `dream`)
- `quail` (path: `quail`)
- `ai2_arc` (name: `ARC-Challenge`) -> `arc_challenge`
- `ai2_arc` (name: `ARC-Easy`) -> `arc_easy`
- `openbookqa` (path: `openbookqa`, name: `main`)
- `cais/mmlu` (name: `all`, split: `auxiliary_train`) -> `mmlu`
- `lighteval/QuALITY` (path: `lighteval/QuALITY`) -> `quality`
- `qasc` (path: `qasc`)
- `codah` (path: `codah`, name: `codah`)
- `super_glue` (name: `copa`) -> `copa`
- `copenlu/mctest_corrected` (name: `mc500`) -> `mctest_corrected`
- `openlifescienceai/medmcqa` (path: `openlifescienceai/medmcqa`) -> `medmcqa`
- `race` (path: `race`, name: `all`)
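
For illustration, a few of the specs above correspond to `load_dataset` calls like the following. This is a sketch: the aggregation script may pass additional arguments, and some script-based datasets may require `trust_remote_code=True` on newer `datasets` versions.

```python
from datasets import load_dataset

# Illustrative calls matching entries in the list above.
arc_challenge = load_dataset("ai2_arc", "ARC-Challenge", split="train")
mmlu_aux = load_dataset("cais/mmlu", "all", split="auxiliary_train")
winogrande = load_dataset("winogrande", "winogrande_xl", split="train")
copa = load_dataset("super_glue", "copa", split="train")
```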

## Creation Process

The aggregation script (`create_unified_mcqa.py`) performs the following steps for each source dataset:

1. Loads the specified split (usually `train`).
2. Applies a custom canonicalization function (`SPECIAL`) or a generic mapping (`COLMAP`) to standardize field names (`context`, `question`, `choices`, `label`) and structures.
   - For `math_qa`, the `Problem` field is mapped to `question`, and the `Rationale` is ignored. Choice prefixes (e.g., "a )") are removed.
3. Converts all labels to 0-indexed integers.
4. Adds a `source_dataset` field to identify the origin.
5. Filters out malformed or skipped examples.
6. Casts the resulting examples to the `TARGET_FEATURES` schema.
7. Groups the processed datasets by the number of choices found in each example.
8. Concatenates datasets within each group.
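
As an illustration of step 2, a canonicalization function for `math_qa` might look roughly like this. This is a hypothetical sketch based on the description above, not the verbatim `SPECIAL` entry from the script; the field names (`Problem`, `options`, `correct`) follow the `math_qa` dataset on the Hub.

```python
import re

# Hypothetical math_qa canonicalizer (an assumption, not copied from the script).
_PREFIX = re.compile(r"^[a-e]\s*\)\s*")  # strips leading "a ) ", "b ) ", ...

def canonicalize_math_qa(example):
    # math_qa stores its options in one string, e.g. "a ) 50 th minute , b ) ..."
    raw = example["options"].split(" , ")
    choices = [_PREFIX.sub("", c.strip()) for c in raw]
    return {
        "context": "",                               # no supporting passage
        "question": example["Problem"],              # Rationale is ignored
        "choices": choices,
        "label": "abcde".index(example["correct"]),  # letter -> 0-indexed int
        "source_dataset": "math_qa",
    }
```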

## License

This aggregated dataset combines data from multiple sources, each with its own license. The licenses are generally permissive (MIT, Apache 2.0, CC BY variants, ODC-By 1.0, custom permissive).

Users should consult the original license of each source dataset included in their specific use case, especially before commercial use. The aggregation script attempts to log the license found for each dataset during processing.

## Citation

If you use this dataset, please cite the original datasets that contribute to your specific use case. Refer to the Hugging Face Hub pages of the source datasets listed above for their respective citation information.