---
license: mit
language: en
datasets:
- youssefbelghmi/MNLP_M3_mcqa_dataset
library_name: transformers
pipeline_tag: text-classification
tags:
- mcqa
- multiple-choice
- qwen
- qwen3
- supervised-fine-tuning
- mnlp
- epfl
- stem
---
# MNLP M3 MCQA Model (Qwen3-0.6B fine-tuned)
This model is a fine-tuned version of **Qwen/Qwen3-0.6B-Base** on the [MNLP M3 MCQA dataset](https://huggingface.co/datasets/youssefbelghmi/MNLP_M3_mcqa_dataset), a large-scale collection of multiple-choice questions designed for evaluating and training models in **STEM** domains (science, math, engineering, medicine, etc.).
It was trained as part of the final milestone of the **CS-552: Modern NLP** course at EPFL (Spring 2025).
## Task
**Multiple-Choice Question Answering (MCQA):** Given a question and four answer options (A–D), the model must complete the prompt with the correct option letter only (`A`, `B`, `C`, or `D`). It was trained with rationales in the supervision signal but outputs only the letter at inference time, making it compatible with evaluation frameworks such as LightEval.
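A minimal inference sketch (the repository id, the example question, and the generation settings are illustrative; the prompt mirrors the training format shown below, here without the explanation line):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "youssefbelghmi/MNLP_M3_mcqa_model"  # hypothetical id; replace with this repo's actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "The following is a multiple-choice question (with answers) about knowledge "
    "and skills in advanced master's-level STEM fields.\n"
    "Select the correct answer by replying with the option letter (A, B, C, or D) only.\n"
    "Question: What is the SI unit of force?\n"
    "A. Joule\n"
    "B. Newton\n"
    "C. Pascal\n"
    "D. Watt\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt")
# One greedy token is enough: the model was trained to answer with the letter only.
output = model.generate(**inputs, max_new_tokens=1, do_sample=False)
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:]).strip())  # e.g. "B"
```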
## Training Dataset
- **Dataset:** [`youssefbelghmi/MNLP_M3_mcqa_dataset`](https://huggingface.co/datasets/youssefbelghmi/MNLP_M3_mcqa_dataset) (see the loading sketch after this list).
- ~30,000 questions drawn from SciQ, OpenBookQA, MathQA, ARC, and MedMCQA.
- Each sample includes:
  - the question text,
  - four answer choices (A–D),
  - the correct answer as a letter,
  - a short explanation (`support`) used to guide learning.
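The dataset can be loaded with the `datasets` library (a sketch; the split and field names are assumptions based on the description above):

```python
from datasets import load_dataset

# Load the MCQA dataset used for fine-tuning.
dataset = load_dataset("youssefbelghmi/MNLP_M3_mcqa_dataset")

# Inspect one sample; field names (question, choices, answer letter, support)
# are assumed from the description above.
print(dataset["train"][0])
```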
## Training Setup
- **Base model:** `Qwen/Qwen3-0.6B-Base`.
- **Method:** Supervised Fine-Tuning (SFT) with `trl`'s `SFTTrainer` (see the sketch after this list).
- **Tokenizer:** `AutoTokenizer`, with the `eos_token` reused as the padding token.
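A sketch of the corresponding model and tokenizer setup (standard `transformers` APIs; only the padding choice above is specific to this model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-0.6B-Base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
# Reuse the eos token as the padding token, as described above.
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)
```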
## Training Prompt Format
During fine-tuning, each training example is converted into a prompt-completion pair. The prompt includes the question, the four options, and a short explanation to guide the model's reasoning:
```text
The following is a multiple-choice question (with answers) about knowledge and skills in advanced master's-level STEM fields.
You will be provided with an explanation to help you understand the correct answer.
Select the correct answer by replying with the option letter (A, B, C, or D) only.
Question: <question_text>
A. <option_A>
B. <option_B>
C. <option_C>
D. <option_D>
Explanation: <support_text>
Answer:
```
The completion is a single token (" A", " B", " C", or " D", note the leading space) corresponding to the correct answer.
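A sketch of how one dataset row could be mapped to such a prompt-completion pair (the field names `question`, `choices`, `answer`, and `support` are assumptions based on the dataset description above):

```python
def build_example(row):
    """Convert one MCQA row into a prompt/completion pair (field names assumed)."""
    prompt = (
        "The following is a multiple-choice question (with answers) about knowledge "
        "and skills in advanced master's-level STEM fields.\n"
        "You will be provided with an explanation to help you understand the correct answer.\n"
        "Select the correct answer by replying with the option letter (A, B, C, or D) only.\n"
        f"Question: {row['question']}\n"
        f"A. {row['choices'][0]}\n"
        f"B. {row['choices'][1]}\n"
        f"C. {row['choices'][2]}\n"
        f"D. {row['choices'][3]}\n"
        f"Explanation: {row['support']}\n"
        "Answer:"
    )
    # Leading space so the target letter is a single token such as " A".
    return {"prompt": prompt, "completion": f" {row['answer']}"}
```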
## Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-5
- num_train_epochs: 1
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 4
- gradient_accumulation_steps: 4
- gradient_checkpointing: true
- eval_strategy: steps
- eval_steps: 100
- logging_steps: 100
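These settings roughly correspond to a `trl` configuration like the following sketch (argument names follow recent `trl`/`transformers` releases; the output directory, split names, and the `dataset`, `model`, and `tokenizer` variables are assumptions carried over from the sketches above):

```python
from trl import SFTConfig, SFTTrainer

config = SFTConfig(
    output_dir="qwen3-0.6b-mcqa-sft",  # assumed output path
    learning_rate=2e-5,
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    eval_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)

trainer = SFTTrainer(
    model=model,                         # Qwen/Qwen3-0.6B-Base loaded as above
    args=config,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # split name assumed
    processing_class=tokenizer,
)
trainer.train()
```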
## Training Results
| Epoch | Training Loss | Validation Loss |
|------:|--------------:|----------------:|
|  0.08 |        0.3461 |          0.2748 |
|  0.15 |        0.2881 |          0.2666 |
|  0.23 |        0.2938 |          0.2661 |
|  0.31 |        0.2741 |          0.2600 |
|  0.38 |        0.2684 |          0.2570 |
|  0.46 |        0.2603 |          0.2539 |
|  0.54 |        0.2635 |          0.2441 |
|  0.61 |        0.2555 |          0.2457 |
|  0.69 |        0.2459 |          0.2414 |
|  0.77 |        0.2383 |          0.2353 |
|  0.84 |        0.2266 |          0.2337 |
|  0.92 |        0.2112 |          0.2338 |
|  0.99 |        0.2110 |          0.2335 |
- **Final training loss:** 0.211
- **Final validation accuracy:** ~92.0%
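Assuming validation accuracy here means exact match of the predicted letter, a sketch of how it could be computed (reusing `build_example` from above; the helper name is hypothetical):

```python
def letter_accuracy(model, tokenizer, rows):
    """Fraction of rows where the single generated token matches the gold letter."""
    correct = 0
    for row in rows:
        prompt = build_example(row)["prompt"]
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=1, do_sample=False)
        pred = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:]).strip()
        correct += pred == row["answer"]
    return correct / len(rows)
```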
### Framework versions
- TRL: 0.17.0
- Transformers: 4.53.0.dev0
- PyTorch: 2.7.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
  title        = {{TRL: Transformer Reinforcement Learning}},
  author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
  year         = 2020,
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
## Author
Developed by [**Youssef Belghmi**](https://huggingface.co/youssefbelghmi)
CS-552: Modern NLP – EPFL, Spring 2025