---
annotations_creators:
- expert-generated
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
pretty_name: MNLP M3 MCQA Dataset
---

# MNLP M3 MCQA Dataset

The **MNLP M3 MCQA Dataset** is a carefully curated collection of **Multiple-Choice Question Answering (MCQA)** examples, unified from several academic and benchmark datasets.

Developed as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025), this dataset is designed for training and evaluating models on multiple-choice QA tasks, particularly in the **STEM** and general knowledge domains.

## Key Features

- ~30,000 MCQA questions
- 6 diverse sources: `SciQ`, `OpenBookQA`, `MathQA`, `ARC-Easy`, `ARC-Challenge`, and `MedMCQA`
- Each question has exactly 4 options (A–D) and one correct answer
- Covers a wide range of topics: science, technology, engineering, mathematics, and general knowledge

## Dataset Structure

Each example is a dictionary with the following fields:

| Field      | Type     | Description                                                                |
|------------|----------|----------------------------------------------------------------------------|
| `dataset`  | `string` | Source dataset (`sciq`, `openbookqa`, etc.)                                |
| `id`       | `string` | Unique identifier for the question                                         |
| `question` | `string` | The question text                                                          |
| `choices`  | `list`   | List of 4 answer options (corresponding to A–D)                            |
| `answer`   | `string` | The correct option, as a letter: `"A"`, `"B"`, `"C"`, or `"D"`             |
| `support`  | `string` | A brief explanation or fact supporting the correct answer, when available  |

Example:

```json
{
  "dataset": "sciq",
  "id": "sciq_01_00042",
  "question": "What does a seismograph measure?",
  "choices": ["Earthquakes", "Rainfall", "Sunlight", "Temperature"],
  "answer": "A",
  "support": "A seismograph is an instrument that detects and records earthquakes."
}
```
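
In practice, records can be loaded with the Hugging Face `datasets` library. Below is a minimal sketch; the repo id is an assumption (this card does not state it), so substitute the dataset's actual Hub path:

```python
from datasets import load_dataset

# Hypothetical repo id; replace with the dataset's actual Hub path.
ds = load_dataset("youssefbelghmi/MNLP_M3_mcqa_dataset")

example = ds["train"][0]
print(example["question"])

# Recover the text of the correct option from its letter.
letter_to_index = {"A": 0, "B": 1, "C": 2, "D": 3}
print(example["choices"][letter_to_index[example["answer"]]])
```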
## Source Datasets

This dataset combines multiple high-quality MCQA sources to support research and fine-tuning in STEM education and reasoning. The full corpus contains **29,870 multiple-choice questions** from the following sources:

| Source (Hugging Face) | Name | Size | Description & Role in the Dataset |
| ------------------------------------------- | ----------------- | ------ | --------------------------------- |
| `allenai/sciq` | **SciQ** | 11,679 | **Science questions** (Physics, Chemistry, Biology, Earth science). Crowdsourced with 4 answer choices and optional supporting evidence. Provides **well-balanced, factual STEM questions** at a middle/high-school level. |
| `allenai/openbookqa` | **OpenBookQA** | 4,957 | Science exam-style questions requiring **multi-step reasoning** and **commonsense or external knowledge**. Contributes more **challenging**, **inference-based** questions. |
| `allenai/math_qa` | **MathQA** | 5,000 | Subsample of quantitative math word problems derived from AQuA-RAT, annotated with structured answer options. Introduces **numerical reasoning** and **problem-solving** components. |
| `allenai/ai2_arc` (config: `ARC-Easy`) | **ARC-Easy** | 2,140 | Science questions at the middle-school level. Useful for testing **basic STEM understanding** and **factual recall**. Filtered to retain only valid 4-choice entries. |
| `allenai/ai2_arc` (config: `ARC-Challenge`) | **ARC-Challenge** | 1,094 | More difficult science questions requiring **reasoning and inference**. Widely used as a benchmark for evaluating LLMs. Also filtered for clean MCQA format compatibility. |
| `openlifescienceai/medmcqa` | **MedMCQA** | 5,000 | Subsample of multiple-choice questions on **medical topics** from various exams, filtered for a single-choice format. Contains real-world, domain-specific **clinical reasoning** questions across medical disciplines. |
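
The unification code itself is not published with this card, but the mapping it describes (exactly four options, letter answers, optional support) is straightforward. The sketch below shows one plausible way to normalize a single source (ARC-Easy) into the schema above; the ARC fields (`choices.label`, `choices.text`, `answerKey`) are the published ones, while the id scheme and everything else are assumptions:

```python
from datasets import load_dataset

# Illustrative normalization of one source (ARC-Easy) into the unified schema.
arc = load_dataset("allenai/ai2_arc", "ARC-Easy", split="train")

def to_unified(ex, i):
    labels = ex["choices"]["label"]  # e.g. ["A", "B", "C", "D"] or ["1", "2", "3", "4"]
    texts = ex["choices"]["text"]
    # Keep only valid 4-choice entries, as the table above describes.
    if len(texts) != 4 or ex["answerKey"] not in labels:
        return None
    return {
        "dataset": "arc_easy",
        "id": f"arc_easy_{i:05d}",  # hypothetical id scheme
        "question": ex["question"],
        "choices": texts,
        "answer": "ABCD"[labels.index(ex["answerKey"])],  # re-letter to A-D
        "support": "",  # ARC provides no supporting fact
    }

unified = [u for u in (to_unified(ex, i) for i, ex in enumerate(arc)) if u is not None]
```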

## Intended Applications and Structure

This dataset is split into three parts:

- `train` (~70%): for training MCQA models
- `validation` (~15%): for tuning and monitoring performance during training
- `test` (~15%): for final evaluation on unseen questions
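
With the splits in place, a model can be scored on held-out questions in a few lines. A sketch, reusing the hypothetical repo id from above; `predict` is a stand-in for any function that returns an option index:

```python
from datasets import load_dataset

ds = load_dataset("youssefbelghmi/MNLP_M3_mcqa_dataset")  # hypothetical repo id

LETTERS = ["A", "B", "C", "D"]

def predict(question, choices):
    """Stand-in model: always picks the first option."""
    return 0

correct = 0
for ex in ds["validation"]:
    guess = LETTERS[predict(ex["question"], ex["choices"])]
    correct += guess == ex["answer"]

print(f"Validation accuracy: {correct / len(ds['validation']):.2%}")
```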
It is suitable for multiple-choice question answering tasks, especially in the **STEM** domain (Science, Technology, Engineering, Mathematics).
## Author
This dataset was created and published by [Youssef Belghmi](https://huggingface.co/youssefbelghmi) as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025).