---
annotations_creators:
- expert-generated
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
pretty_name: MNLP M3 MCQA Dataset
---
# MNLP M3 MCQA Dataset
The **MNLP M3 MCQA Dataset** is a carefully curated collection of **Multiple-Choice Question Answering (MCQA)** examples, unified from several academic and benchmark datasets.
Developed as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025), this dataset is designed for training and evaluating models on multiple-choice QA tasks, particularly in the **STEM** and general knowledge domains.
## Key Features
- 29,870 MCQA questions in total
- 6 diverse sources: `SciQ`, `OpenBookQA`, `MathQA`, `ARC-Easy`, `ARC-Challenge`, and `MedMCQA`
- Each question has exactly 4 options (A–D) and one correct answer
- Covers a wide range of topics: science, technology, engineering, mathematics, and general knowledge
## Dataset Structure
Each example is a dictionary with the following fields:
| Field | Type | Description |
|-----------|----------|---------------------------------------------------|
| `dataset` | `string` | Source dataset (`sciq`, `openbookqa`, etc.) |
| `id` | `string` | Unique identifier for the question |
| `question`| `string` | The question text |
| `choices` | `list` | List of 4 answer options (corresponding to A–D) |
| `answer` | `string` | The correct option, as a letter: `"A"`, `"B"`, `"C"`, or `"D"` |
| `support` | `string` | A brief explanation or fact supporting the correct answer, when available |
Example:
```json
{
"dataset": "sciq",
"id": "sciq_01_00042",
"question": "What does a seismograph measure?",
"choices": ["Earthquakes", "Rainfall", "Sunlight", "Temperature"],
"answer": "A",
"support": "A seismograph is an instrument that detects and records earthquakes."
}
```
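
Examples can be loaded directly with the 🤗 `datasets` library. A minimal sketch, assuming the dataset lives under the author's namespace (the repository id below is an assumption; adjust it to the actual Hugging Face path):

```python
from datasets import load_dataset

# Assumed repository id; replace with the dataset's actual path on the Hub.
dataset = load_dataset("youssefbelghmi/MNLP_M3_mcqa_dataset")

# Inspect one example: question, lettered options, and the gold answer.
example = dataset["train"][0]
print(example["question"])
for letter, choice in zip("ABCD", example["choices"]):
    print(f"{letter}. {choice}")
print("Answer:", example["answer"])
```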
## Source Datasets
This dataset combines multiple high-quality MCQA sources to support research and fine-tuning in STEM education and reasoning. The full corpus contains **29,870 multiple-choice questions** from the following sources:
| Source (Hugging Face) | Name | Size | Description & Role in the Dataset |
| ------------------------------------------- | ------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `allenai/sciq` | **SciQ** | 11,679 | **Science questions** (Physics, Chemistry, Biology, Earth science). Crowdsourced with 4 answer choices and optional supporting evidence. Used to provide **well-balanced, factual STEM questions** at a middle/high-school level. |
| `allenai/openbookqa` | **OpenBookQA** | 4,957 | Science exam-style questions requiring **multi-step reasoning** and use of **commonsense or external knowledge**. Contributes more **challenging** and **inference-based** questions. |
| `allenai/math_qa` | **MathQA** | 5,000 | Subsample of quantitative math word problems derived from AQuA-RAT, annotated with structured answer options. Introduces **numerical reasoning** and **problem-solving** components into the dataset. |
| `allenai/ai2_arc` (config: `ARC-Easy`) | **ARC-Easy** | 2,140 | Science questions at the middle school level. Useful for testing **basic STEM understanding** and **factual recall**. Filtered to retain only valid 4-choice entries. |
| `allenai/ai2_arc` (config: `ARC-Challenge`) | **ARC-Challenge** | 1,094 | More difficult science questions requiring **reasoning and inference**. Widely used as a benchmark for evaluating LLMs. Also filtered for clean MCQA format compatibility. |
| `openlifescienceai/medmcqa` | **MedMCQA** | 5,000 | A subsample of multiple-choice questions on **medical topics** from various exams, filtered for a single-choice format. Contains real-world and domain-specific **clinical reasoning** questions covering various medical disciplines. |
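
As an illustration of the filtering described above, the sketch below normalizes ARC entries into the unified schema: it keeps only questions with exactly four options and maps the source answer labels (`"A"`–`"D"` or `"1"`–`"4"`) to letters. Field names follow the public `allenai/ai2_arc` schema; the `normalize_arc` helper and id format are hypothetical, and the exact pipeline used for this dataset may differ.

```python
from datasets import load_dataset

# ARC labels answers either with letters or with digits; map both to A-D.
LABEL_MAP = {"1": "A", "2": "B", "3": "C", "4": "D",
             "A": "A", "B": "B", "C": "C", "D": "D"}

def normalize_arc(example, idx, source="arc_easy"):
    # Hypothetical helper: reshape one ARC row into the unified MCQA schema.
    return {
        "dataset": source,
        "id": f"{source}_{idx:05d}",
        "question": example["question"],
        "choices": example["choices"]["text"],
        "answer": LABEL_MAP[example["answerKey"]],
        "support": "",  # ARC provides no supporting passage
    }

arc = load_dataset("allenai/ai2_arc", "ARC-Easy", split="train")
# Retain only clean 4-choice entries with a mappable answer key.
arc = arc.filter(lambda ex: len(ex["choices"]["text"]) == 4
                 and ex["answerKey"] in LABEL_MAP)
arc = arc.map(normalize_arc, with_indices=True,
              remove_columns=arc.column_names)
```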
## Intended Applications and Structure
This dataset is split into two parts:
- `train` (~85%) — for training MCQA models
- `validation` (~15%) — for tuning and monitoring performance during training
It is suitable for multiple-choice question answering tasks, especially in the **STEM** domain (Science, Technology, Engineering, Mathematics).
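
Because every example shares the same four-option schema, turning an entry into an evaluation prompt is straightforward. A minimal, illustrative sketch (the `to_prompt` helper is hypothetical, not part of the dataset):

```python
def to_prompt(example):
    # Format one example as a zero-shot MCQA prompt with lettered options.
    lines = [example["question"]]
    lines += [f"{letter}. {choice}"
              for letter, choice in zip("ABCD", example["choices"])]
    lines.append("Answer:")
    return "\n".join(lines)

# A model's predicted letter can then be compared against example["answer"].
```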
## Author
This dataset was created and published by [Youssef Belghmi](https://huggingface.co/youssefbelghmi) as part of the *CS-552: Modern NLP* course at EPFL (Spring 2025).