# JEE/NEET LLM Benchmark Dataset

## Dataset Description

This repository contains a benchmark dataset designed for evaluating the capabilities of Large Language Models (LLMs) on questions from major Indian competitive examinations:

- JEE (Main & Advanced): Joint Entrance Examination for engineering.
- NEET: National Eligibility cum Entrance Test for medical fields.

The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its corresponding exam details (name, year, subject, question type) and correct answer(s). The benchmark framework supports various question types, including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.
Current Data (Examples):

- NEET 2024 (Code T3)
- NEET 2025 (Code 45)
- (Support for JEE Main & Advanced questions can be added by updating `data/metadata.jsonl` and the `images/` directory accordingly.)
## How to Use

### Using the `datasets` Library

The dataset is designed to be loaded using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the evaluation split
dataset = load_dataset("Reja1/jee-neet-benchmark", split="test")  # Replace with your HF repo name

# Example: access the first question
example = dataset[0]
image = example["image"]
question_id = example["question_id"]
subject = example["subject"]
correct_answers = example["correct_answer"]

print(f"Question ID: {question_id}")
print(f"Subject: {subject}")
print(f"Correct Answer(s): {correct_answers}")

# Display the image (requires Pillow)
# image.show()
```
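If you only need questions from a single paper, the standard `datasets` filtering API works on the metadata fields; a minimal sketch using the field names documented below:

```python
from datasets import load_dataset

dataset = load_dataset("Reja1/jee-neet-benchmark", split="test")

# Keep only NEET 2024 questions via the exam_name/exam_year metadata fields
neet_2024 = dataset.filter(
    lambda ex: ex["exam_name"] == "NEET" and ex["exam_year"] == 2024
)
print(f"NEET 2024 questions: {len(neet_2024)}")
```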
### Manual Usage (Benchmark Scripts)

This repository contains scripts to run the benchmark evaluation directly:

1. Clone the repository:

   ```bash
   # Replace with your actual Hugging Face repository URL
   git clone https://huggingface.co/datasets/Reja1/jee-neet-benchmark
   cd your-repo-name
   # Ensure Git LFS is installed and pull large files if necessary
   # git lfs pull
   ```

2. Install dependencies:

   ```bash
   # It's recommended to use a virtual environment
   python -m venv venv
   # source venv/bin/activate  # or .\venv\Scripts\activate on Windows
   pip install -r requirements.txt
   ```
3. Configure API Key:

   - Create a file named `.env` in the root directory of the project (`your-repo-name/`).
   - Add your OpenRouter API key to this file:

     ```
     OPENROUTER_API_KEY=your_actual_openrouter_api_key
     ```

   - Important: The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.
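   The benchmark scripts read this key from the environment; if you need to load it yourself, a minimal sketch (assuming the `python-dotenv` package is installed) looks like this:

   ```python
   import os

   from dotenv import load_dotenv  # assumes python-dotenv is installed

   # Read OPENROUTER_API_KEY from the project's .env file into os.environ
   load_dotenv()

   api_key = os.environ.get("OPENROUTER_API_KEY")
   if not api_key:
       raise RuntimeError("OPENROUTER_API_KEY is not set; check your .env file")
   ```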
4. Configure Models:

   - Edit the `configs/benchmark_config.yaml` file.
   - Modify the `openrouter_models` list to include the specific model identifiers (e.g., `"openai/gpt-4o"`, `"google/gemini-2.5-pro-preview-03-25"`) you want to evaluate. Ensure these models support vision input on OpenRouter.
   - You can also adjust other parameters like `max_tokens` and `request_timeout` if needed; a sketch of the file is shown below.
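   A minimal sketch of what `configs/benchmark_config.yaml` might contain, using only the keys named above (the real file may include additional options):

   ```yaml
   # Illustrative sketch of configs/benchmark_config.yaml; values are assumptions
   openrouter_models:
     - "openai/gpt-4o"
     - "google/gemini-2.5-pro-preview-03-25"
   max_tokens: 4096       # assumed default; adjust as needed
   request_timeout: 300   # request timeout in seconds (assumed)
   ```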
5. Run the benchmark:

   - Execute the runner script from the root directory:

     ```bash
     python src/benchmark_runner.py --config configs/benchmark_config.yaml
     ```

   - You can override the models list from the command line:

     ```bash
     python src/benchmark_runner.py --config configs/benchmark_config.yaml --models "openai/gpt-4o" "google/gemini-2.5-pro-preview-03-25"
     ```

   - You can specify a different output directory:

     ```bash
     python src/benchmark_runner.py --config configs/benchmark_config.yaml --output_dir my_custom_results
     ```

   - To run the benchmark on a specific exam paper, use the `--exam_name` and `--exam_year` arguments. Both must be provided. The `exam_name` should match the values in your `metadata.jsonl` (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED"). Note: if an exam name contains spaces (not recommended in metadata), enclose it in quotes.

     ```bash
     # Example: Run only NEET 2024 questions
     python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name NEET --exam_year 2024

     # Example: Run only JEE_MAIN 2023 questions (assuming data exists)
     python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name JEE_MAIN --exam_year 2023
     ```
6. Check Results:

   - Results for each model will be saved in subdirectories within the `results/` folder (or your custom output directory).
   - Each model's folder (e.g., `results/openai_gpt-4o_NEET_2024_YYYYMMDD_HHMMSS`) will contain:
     - `predictions.jsonl`: Detailed results for each question (prediction, ground truth, raw response, evaluation status, marks awarded).
     - `summary.json`: Overall scores and statistics for that model run.
     - `summary.md`: A human-readable Markdown version of the summary.
   - Sample benchmark results for some models can be found in the `results/` folder (these may be outdated).
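   For orientation, a `predictions.jsonl` record might look roughly like the following; the field names here are illustrative guesses based on the description above, not a guaranteed schema:

   ```json
   {"question_id": "NEET_2024_T3_001", "prediction": [2], "ground_truth": [2], "raw_response": "...<answer>2</answer>", "evaluation_status": "evaluated", "marks_awarded": 4}
   ```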
## Pros

- Multimodal Reasoning: Uses images of questions directly, testing the multimodal reasoning capability of the model.
- Flexible Exam Support: Designed to support multiple exams (NEET, JEE Main, JEE Advanced) and various question types (MCQ Single Correct, MCQ Multiple Correct, Integer).
- Detailed Scoring: Implements specific scoring rules for different exams and question types, including partial marking for JEE Advanced multiple correct questions (see the sketch after this list).
- Reattempt Mechanism: Implements a reattempt mechanism that encourages the model to provide its final answer within `<answer>` tags, adapted for different question types.
- Reproducibility: Easily reproducible with simple commands and an OpenRouter API key.
- Model Flexibility: Allows testing of various models available through OpenRouter.
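For reference, the sketch below illustrates partial marking in the style of the official JEE Advanced scheme for multiple correct MCQs (+4 when exactly the correct options are chosen, +1 per correct option for a clean subset, -2 if any wrong option is marked, 0 if unattempted). This is an illustration of the public scheme, not necessarily the exact logic in `src/`:

```python
def score_jee_advanced_multi(chosen: set[int], correct: set[int]) -> int:
    """Score a JEE Advanced multiple-correct MCQ (illustrative)."""
    if not chosen:
        return 0            # unattempted
    if not chosen <= correct:
        return -2           # at least one wrong option marked
    if chosen == correct:
        return 4            # all correct options marked
    return len(chosen)      # partial credit: +1 per correct option chosen

# Correct options are {1, 3}: marking only option 1 earns +1,
# while marking options 1 and 2 earns -2.
assert score_jee_advanced_multi({1}, {1, 3}) == 1
assert score_jee_advanced_multi({1, 2}, {1, 3}) == -2
```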
## Dataset Structure

- `data/metadata.jsonl`: Contains metadata for each question image. Each line is a JSON object with fields like `image_path`, `question_id`, `exam_name` (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED"), `exam_year`, `subject`, `question_type` (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER"), and `correct_answer`. An illustrative line is shown after this list.
- `images/`: Contains subdirectories for each exam set (e.g., `images/NEET_2024_T3/`, `images/JEE_MAIN_2023_Example/`), holding the `.png` question images.
- `src/`: Python source code for running the benchmark (data loading, LLM interaction, evaluation).
- `configs/`: Configuration files for the benchmark.
- `results/`: Directory where benchmark results (LLM outputs) will be stored.
- `jee_neet_benchmark_dataset.py`: Hugging Face `datasets` loading script (defines how to load `metadata.jsonl` and images).
- `requirements.txt`: Python dependencies.
- `README.md`: This file.
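An illustrative `metadata.jsonl` line, using only the fields documented above (the path and values are hypothetical):

```json
{"image_path": "images/NEET_2024_T3/question_001.png", "question_id": "NEET_2024_T3_001", "exam_name": "NEET", "exam_year": 2024, "subject": "Physics", "question_type": "MCQ_SINGLE_CORRECT", "correct_answer": [2]}
```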
## Data Fields

The dataset contains the following fields (accessible via `datasets`):

- `image`: The question image (`datasets.Image`).
- `question_id`: Unique identifier for the question (string).
- `exam_name`: Name of the exam (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED") (string).
- `exam_year`: Year of the exam (int).
- `subject`: Subject (e.g., "Physics", "Chemistry", "Botany", "Zoology", "Mathematics") (string).
- `question_type`: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER") (string).
- `correct_answer`: List containing the correct answer index/indices (e.g., `[2]`, `[1, 3]`), or a single integer for INTEGER type questions (list of int, or int); see the sketch after this list.
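Because `correct_answer` may arrive as a list (MCQ questions) or a bare integer (INTEGER questions), downstream code may want to normalize it first; a small sketch:

```python
def normalize_answer(correct_answer) -> list[int]:
    """Return correct_answer as a list of ints, whether it is a list
    (MCQ questions) or a bare int (INTEGER questions)."""
    if isinstance(correct_answer, int):
        return [correct_answer]
    return list(correct_answer)

assert normalize_answer(7) == [7]
assert normalize_answer([1, 3]) == [1, 3]
```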
## Cons / Current Limitations

- Data Expansion: While the framework supports various exams and question types, the current `metadata.jsonl` primarily contains NEET data. More diverse data (especially for JEE Main and Advanced, with varied question types) needs to be added to make the benchmark more comprehensive.
- Max Score in Summary: The overall maximum score in the generated Markdown summary is currently marked as "N/A (variable per question)" due to the complexity of calculating it accurately across mixed question types in a single run. Each question's max score depends on its type and exam.
## Citation

If you use this dataset or benchmark code, please cite:

```bibtex
@misc{rejaullah_2025_jeeneetbenchmark,
  title={JEE/NEET LLM Benchmark},
  author={Md Rejaullah},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/Reja1/jee-neet-benchmark}},
}
```
## Contact

For questions, suggestions, or collaboration, feel free to reach out:

- X (Twitter): https://x.com/RejaullahmdMd

## License

This dataset and associated code are licensed under the MIT License.