---
annotations_creators:
- expert-generated
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
pretty_name: MNLP M3 MCQA Dataset
---
# MNLP M3 MCQA Dataset
The **MNLP M3 MCQA Dataset** is a carefully curated collection of **Multiple-Choice Question Answering (MCQA)** examples, unified from several academic and benchmark datasets.
Developed as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025), this dataset is designed for training and evaluating models on multiple-choice QA tasks, particularly in the **STEM** and general knowledge domains.
## Key Features
- ~30,000 MCQA questions
- 6 diverse sources: `SciQ`, `OpenBookQA`, `MathQA`, `ARC-Easy`, `ARC-Challenge`, and `MedMCQA`
- Each question has exactly 4 options (A–D) and one correct answer
- Covers a wide range of topics: science, technology, engineering, mathematics, and general knowledge
## Dataset Structure
Each example is a dictionary with the following fields:
| Field | Type | Description |
|-----------|----------|---------------------------------------------------|
| `dataset` | `string` | Source dataset (`sciq`, `openbookqa`, etc.) |
| `id` | `string` | Unique identifier for the question |
| `question`| `string` | The question text |
| `choices` | `list` | List of 4 answer options (corresponding to A–D) |
| `answer` | `string` | The correct option, as a letter: `"A"`, `"B"`, `"C"`, or `"D"` |
| `support` | `string` | A brief explanation or fact supporting the correct answer when available |
Example:

```json
{
"dataset": "sciq",
"id": "sciq_01_00042",
"question": "What does a seismograph measure?",
"choices": ["Earthquakes", "Rainfall", "Sunlight", "Temperature"],
"answer": "A",
"support": "A seismograph is an instrument that detects and records earthquakes."
}
```
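These fields map directly onto the `datasets` library. The snippet below is a minimal sketch of loading the dataset and inspecting one example; the repository id shown is a placeholder and should be replaced with this dataset's actual Hub path.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset.
REPO_ID = "youssefbelghmi/MNLP_M3_mcqa_dataset"

ds = load_dataset(REPO_ID)

# Inspect one example and its fields (dataset, id, question, choices, answer, support).
example = ds["train"][0]
print(example["question"])
for letter, choice in zip("ABCD", example["choices"]):
    print(f"{letter}. {choice}")
print("Correct answer:", example["answer"])
```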
## Source Datasets
This dataset combines multiple high-quality MCQA sources to support research and fine-tuning in STEM education and reasoning. The full corpus contains **29,870 multiple-choice questions** from the following sources:
| Source (Hugging Face) | Name | Size | Description & Role in the Dataset |
| ------------------------------------------- | ------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `allenai/sciq` | **SciQ** | 11,679 | **Science questions** (Physics, Chemistry, Biology, Earth science). Crowdsourced with 4 answer choices and optional supporting evidence. Used to provide **well-balanced, factual STEM questions** at a middle/high-school level. |
| `allenai/openbookqa` | **OpenBookQA** | 4,957 | Science exam-style questions requiring **multi-step reasoning** and use of **commonsense or external knowledge**. Contributes more **challenging** and **inference-based** questions. |
| `allenai/math_qa` | **MathQA** | 5,000 | Subsample of quantitative math word problems derived from AQuA-RAT, annotated with structured answer options. Introduces **numerical reasoning** and **problem-solving** components into the dataset. |
| `allenai/ai2_arc` (config: `ARC-Easy`) | **ARC-Easy** | 2,140 | Science questions at the middle school level. Useful for testing **basic STEM understanding** and **factual recall**. Filtered to retain only valid 4-choice entries. |
| `allenai/ai2_arc` (config: `ARC-Challenge`) | **ARC-Challenge** | 1,094 | More difficult science questions requiring **reasoning and inference**. Widely used as a benchmark for evaluating LLMs. Also filtered for clean MCQA format compatibility. |
| `openlifescienceai/medmcqa` | **MedMCQA** | 5,000 | A subsample of multiple-choice questions on **medical topics** from various exams, filtered for a single-choice format. Contains real-world and domain-specific **clinical reasoning** questions covering various medical disciplines. |
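The exact preprocessing scripts are not part of this card. As an illustration only, the sketch below shows one plausible way a source such as ARC-Easy could be filtered to valid 4-choice entries and mapped onto the unified schema; the `id` format and source label are assumptions, not the authors' actual pipeline.

```python
from datasets import load_dataset

# Illustrative sketch (not the authors' exact preprocessing): normalize ARC-Easy
# entries to the unified 4-choice, A-D schema described above.
arc = load_dataset("allenai/ai2_arc", "ARC-Easy", split="train")

def to_mcqa(ex, idx):
    labels = ex["choices"]["label"]   # ARC labels may be "A"-"D" or "1"-"4"
    texts = ex["choices"]["text"]
    answer_pos = labels.index(ex["answerKey"])
    return {
        "dataset": "arc_easy",
        "id": f"arc_easy_{idx:05d}",   # assumed id convention for this sketch
        "question": ex["question"],
        "choices": texts,
        "answer": "ABCD"[answer_pos],  # re-letter every answer as A-D
        "support": "",                 # ARC provides no supporting evidence
    }

# Keep only entries with exactly four answer options, then map to the unified schema.
arc_4 = arc.filter(lambda ex: len(ex["choices"]["text"]) == 4)
unified = arc_4.map(to_mcqa, with_indices=True, remove_columns=arc_4.column_names)
```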
## Intended Applications and Structure
This dataset is split into two parts:
- `train` (~85%) — for training MCQA models
- `validation` (~15%) — for tuning and monitoring performance during training
It is suitable for multiple-choice question answering tasks, especially in the **STEM** domain (Science, Technology, Engineering, Mathematics).
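For evaluation, each example can be rendered as a standard zero-shot MCQA prompt. The helper below is one common convention (question followed by lettered options), not a format prescribed by the dataset.

```python
def format_mcqa_prompt(example):
    """Render one example as a zero-shot MCQA prompt (one common convention)."""
    lines = [example["question"]]
    lines += [f"{letter}. {choice}" for letter, choice in zip("ABCD", example["choices"])]
    lines.append("Answer:")
    return "\n".join(lines)

prompt = format_mcqa_prompt({
    "question": "What does a seismograph measure?",
    "choices": ["Earthquakes", "Rainfall", "Sunlight", "Temperature"],
    "answer": "A",
})
print(prompt)  # a model's predicted letter can then be compared against example["answer"]
```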
## Author
This dataset was created and published by [Youssef Belghmi](https://huggingface.co/youssefbelghmi) as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025).