Reja1 committed
Commit 2f49b44 · 1 Parent(s): c7f64e6

Huggingface dataset compatibility

README.md CHANGED
@@ -1,3 +1,116 @@
+---
+# Dataset Card Metadata
+# For more information, see: https://huggingface.co/docs/hub/datasets-cards
+# Example: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
+#
+# Important: Fill in all sections. If a section is not applicable, comment it out.
+# Remove this comment block before saving.
+
+# Basic Information
+# ----------------
+license: mit # (already in your README, but good to have here)
+# A list of languages the dataset is in.
+language:
+- en
+# A list of tasks the dataset is suitable for.
+task_categories:
+- visual-question-answering
+- image-text-to-text
+- question-answering
+# task_ids: # More specific task IDs from https://hf.co/tasks
+# - visual-question-answering
+# Pretty name for the dataset.
+pretty_name: JEE/NEET LLM Benchmark
+# Dataset identifier from a recognized benchmark.
+# benchmark: # e.g., super_glue, anli
+# Date of the last update.
+# date: # YYYY-MM-DD or YYYY-MM-DDTHH:MM:SSZ (ISO 8601)
+
+# Dataset Structure
+# -----------------
+# List of configurations for the dataset.
+configs:
+- config_name: default
+  data_files: # How data files are structured for this config
+  - split: test
+    path: data/metadata.jsonl # Path to the data file or glob pattern
+  images_dir: images # Path to the directory containing the image files
+# You can add more configs if your dataset has them.
+
+# Splits
+# ------
+# Information about the data splits.
+splits:
+  test: # Name of the split
+    # num_bytes: # Size of the split in bytes (you might need to calculate this)
+    num_examples: 380 # Number of examples in the split (from your script output)
+# You can add dataset_tags, dataset_summary, etc. for each split.
+
+# Column Naming
+# -------------
+# Information about the columns (features) in the dataset.
+column_info:
+  image:
+    description: The question image.
+    data_type: image
+  question_id:
+    description: Unique identifier for the question.
+    data_type: string
+  exam_name:
+    description: Name of the exam (e.g., "NEET", "JEE Main").
+    data_type: string
+  exam_year:
+    description: Year of the exam.
+    data_type: int32
+  exam_code:
+    description: Specific paper code/session (e.g., "T3", "S1").
+    data_type: string
+  subject:
+    description: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics").
+    data_type: string
+  question_type:
+    description: Type of question (e.g., "MCQ", "Multiple Correct").
+    data_type: string
+  correct_answer:
+    description: List containing the correct answer index/indices (e.g., [2], [1, 3]).
+    data_type: list[int32] # or sequence of int32
+
+# More Information
+# ----------------
+# Add any other relevant information about the dataset.
+dataset_summary: |
+  A benchmark dataset for evaluating Large Language Models (LLMs) on Joint Entrance Examination (JEE)
+  and National Eligibility cum Entrance Test (NEET) questions from India. Questions are provided as
+  images, and metadata includes exam details, subject, and correct answers.
+dataset_tags: # Tags to help users find your dataset
+- education
+- science
+- india
+- competitive-exams
+- llm-benchmark
+- multimodal-reasoning
+annotations_creators: # How annotations were created
+- found # As questions are from existing exams
+- expert-generated # Assuming answers are official/verified
+annotation_types: # Types of annotations
+- multiple-choice
+source_datasets: # If your dataset is derived from other datasets
+- original # If it's original data
+# - extended # If it extends another dataset
+size_categories: # Approximate size of the dataset
+- n<1K # (380 examples)
+# paper: # Link to a paper if available
+# - # "Title of Paper"
+# - # "URL or ArXiv ID"
+dataset_curation_process: |
+  Questions are sourced from official JEE and NEET examination papers.
+  They are provided as images to maintain original formatting and diagrams.
+  Metadata is manually compiled to link images with exam details and answers.
+personal_sensitive_information: false # Does the dataset contain PII?
+# similar_datasets:
+# - # List similar datasets if any
+---
 # JEE/NEET LLM Benchmark Dataset
 
 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) <!-- Choose your license -->
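The `column_info` block in the frontmatter describes the per-question fields stored in `data/metadata.jsonl`. The sketch below parses one such row; the concrete values (question ID, year, answer indices) are hypothetical, not taken from the real dataset:

```python
import json

# A hypothetical metadata.jsonl row matching the column_info schema
# (illustrative values only).
line = (
    '{"image_path": "images/NEET_2025_45/NEET_2025_45_001.png", '
    '"question_id": "NEET_2025_45_001", "exam_name": "NEET", '
    '"exam_year": 2025, "exam_code": "45", "subject": "Physics", '
    '"question_type": "MCQ", "correct_answer": [2]}'
)

row = json.loads(line)
print(row["exam_name"], row["exam_year"], row["correct_answer"])  # NEET 2025 [2]
```

Note that `correct_answer` is always a list, so multiple-correct questions (e.g. `[1, 3]`) use the same shape as single-answer MCQs.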
images/NEET_2025_45/NEET_2025_45_040A.png DELETED

Git LFS Details

  • SHA256: 11164be15c83ecd9eae09633f18bf7095863295b6db6f68ff5575a43982580ea
  • Pointer size: 130 Bytes
  • Size of remote file: 26.6 kB
images/NEET_2025_45/NEET_2025_45_040B.png DELETED

Git LFS Details

  • SHA256: df5c1d066fb3ea143795918ea6cc32b94000f0f07c020b1595821434feed49c9
  • Pointer size: 130 Bytes
  • Size of remote file: 15.6 kB
images/NEET_2025_45/NEET_2025_45_125A.png DELETED

Git LFS Details

  • SHA256: e7443340e59939fc410592feb98b0f8eaf960b8f1cfdca59f362dd173159ba4b
  • Pointer size: 130 Bytes
  • Size of remote file: 35.5 kB
images/NEET_2025_45/NEET_2025_45_125B.png DELETED

Git LFS Details

  • SHA256: efcd0ae744de5a8d823c6750b3af092df99e25c067c483f594ed943c61c9ccda
  • Pointer size: 130 Bytes
  • Size of remote file: 77.1 kB
jee-neet-benchmark.py CHANGED
@@ -1,5 +1,7 @@
 import json
 import os
+import logging
+import tarfile  # dl_manager handles .tar.gz, but kept in case the script evolves
 import datasets
 
 _CITATION = """\
@@ -25,11 +27,13 @@ _LICENSE = "MIT License"
 class JeeNeetBenchmarkConfig(datasets.BuilderConfig):
     """BuilderConfig for JeeNeetBenchmark."""
 
-    def __init__(self, **kwargs):
+    def __init__(self, images_dir="images", **kwargs):
         """BuilderConfig for JeeNeetBenchmark.
         Args:
+            images_dir: Directory containing the image files, relative to the dataset root.
             **kwargs: keyword arguments forwarded to super.
         """
+        self.images_dir = images_dir
         super(JeeNeetBenchmarkConfig, self).__init__(**kwargs)
 
 
@@ -43,6 +47,7 @@ class JeeNeetBenchmark(datasets.GeneratorBasedBuilder):
             name="default",
             version=VERSION,
             description="Default config for JEE/NEET Benchmark",
+            images_dir="images", # Default images directory
         ),
     ]
 
@@ -71,86 +76,82 @@ class JeeNeetBenchmark(datasets.GeneratorBasedBuilder):
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        # dl_manager is useful if downloading/extracting files, but here we use local paths
-        # Determine the base directory for data files
-        # Use data_dir if provided (for local loading), otherwise use the script's directory
-        base_dir = self.config.data_dir if self.config.data_dir is not None else os.path.dirname(__file__)
-        metadata_path = os.path.join(base_dir, "data", "metadata.jsonl")
-        image_dir = os.path.join(base_dir, "images")
-
-        # Check if metadata file exists
+        # Define paths to the files within the Hugging Face dataset repository.
+        # Assumes 'images.tar.gz' is at the repo root and 'metadata.jsonl' is in 'data/'.
+        repo_metadata_path = os.path.join("data", "metadata.jsonl")
+        repo_images_archive_path = "images.tar.gz"  # At the root of the repository
+
+        try:
+            # Download and extract the metadata file and the images archive
+            downloaded_files = dl_manager.download_and_extract({
+                "metadata_file": repo_metadata_path,
+                "images_archive": repo_images_archive_path,
+            })
+        except Exception as e:
+            logging.error(f"Failed to download/extract dataset files. Metadata path in repo: '{repo_metadata_path}', images archive path in repo: '{repo_images_archive_path}'. Error: {e}")
+            raise
+
+        metadata_path = downloaded_files["metadata_file"]
+        # The directory into which dl_manager extracted images.tar.gz
+        images_extracted_root = downloaded_files["images_archive"]
+
+        logging.info(f"Metadata file downloaded to: {metadata_path}")
+        logging.info(f"Images archive extracted to: {images_extracted_root}")
+
+        # Verify that the essential files/directories exist after download/extraction
         if not os.path.exists(metadata_path):
-            # For debugging, let's list what IS in base_dir (which is the dataset_root when run from Hub)
-            files_in_base_dir = []
-            try:
-                files_in_base_dir = os.listdir(base_dir)
-            except Exception as e:
-                files_in_base_dir = [f"Error listing base_dir: {e}"]
-
-            files_in_data_sub_dir = []
-            data_sub_dir_path = os.path.join(base_dir, "data")
-            if os.path.exists(data_sub_dir_path) and os.path.isdir(data_sub_dir_path):
-                try:
-                    files_in_data_sub_dir = os.listdir(data_sub_dir_path)
-                except Exception as e:
-                    files_in_data_sub_dir = [f"Error listing data_sub_dir_path: {e}"]
-            elif not os.path.exists(data_sub_dir_path):
-                files_in_data_sub_dir = ["Data subdirectory does not exist at expected path"]
-            else:
-                files_in_data_sub_dir = ["Data subdirectory path exists but is not a directory"]
-
-            raise FileNotFoundError(
-                f"Metadata file not found at {metadata_path}. "
-                f"Base directory (dataset_root): {base_dir}. Files in base_dir: {files_in_base_dir}. "
-                f"Expected data subdirectory path: {data_sub_dir_path}. Files in data_sub_dir: {files_in_data_sub_dir}. "
-                f"Make sure 'data/metadata.jsonl' exists in your dataset repository. "
-                f"If running locally, you might need to specify the path using --data_dir argument "
-                f"or ensure the script is run from the project root."
-            )
-
+            error_msg = f"Metadata file not found at expected local path after download: {metadata_path}. Check repository path '{repo_metadata_path}'."
+            logging.error(error_msg)
+            raise FileNotFoundError(error_msg)
+
+        if not os.path.isdir(images_extracted_root):
+            error_msg = f"Images archive was not extracted to a valid directory: {images_extracted_root}. Check repository path '{repo_images_archive_path}' and archive integrity."
+            logging.error(error_msg)
+            raise FileNotFoundError(error_msg)
+
+        # The image_base_dir for _generate_examples is the root of the extracted archive.
+        # Paths in metadata.jsonl (e.g., "images/NEET_2024_T3/file.png") are assumed
+        # to be relative to this extracted root.
         return [
             datasets.SplitGenerator(
-                name=datasets.Split.TEST, # Using TEST split as it's standard for evaluation-only data
-                # Or use name="evaluate" if you prefer that specific name
+                name=datasets.Split.TEST,
                 gen_kwargs={
                     "metadata_filepath": metadata_path,
-                    "image_base_dir": image_dir, # Pass the base image directory
+                    "image_base_dir": images_extracted_root,
                 },
             ),
         ]
 
     def _generate_examples(self, metadata_filepath, image_base_dir):
         """Yields examples."""
+        logging.info(f"Generating examples from metadata: {metadata_filepath}")
+        logging.info(f"Using image base directory: {image_base_dir}")
+
        with open(metadata_filepath, "r", encoding="utf-8") as f:
             for idx, line in enumerate(f):
                 try:
                     row = json.loads(line)
                 except json.JSONDecodeError as e:
-                    print(f"Error decoding JSON on line {idx+1}: {e}")
+                    logging.error(f"Error decoding JSON on line {idx+1} in {metadata_filepath}: {e}")
                     continue # Skip malformed lines
 
-                image_path_relative = row.get("image_path")
-                if not image_path_relative:
-                    print(f"Warning: Missing 'image_path' on line {idx+1}. Skipping.")
+                # image_path_from_metadata is e.g. "images/NEET_2024_T3/file.png" and is
+                # assumed to be relative to the root of the extracted archive (image_base_dir)
+                image_path_from_metadata = row.get("image_path")
+                if not image_path_from_metadata:
+                    logging.warning(f"Missing 'image_path' in metadata on line {idx+1} of {metadata_filepath}. Skipping.")
                     continue
 
-                # Construct the full path relative to the dataset root
-                image_path_full = os.path.join(image_base_dir, os.path.relpath(image_path_relative, start="images"))
-                # Alternative if image_path is already relative to root:
-                # image_path_full = os.path.join(image_base_dir, image_path_relative)
+                # Construct the full absolute path to the image file
+                image_path_full = os.path.join(image_base_dir, image_path_from_metadata)
 
                 if not os.path.exists(image_path_full):
-                    print(f"Warning: Image file not found at {image_path_full} referenced on line {idx+1}. Skipping.")
-                    # Yielding with None image might cause issues later, better to skip or handle
-                    # image_data = None
+                    logging.warning(f"Image file not found at {image_path_full} (referenced on line {idx+1} of {metadata_filepath}). Skipping.")
                     continue
-                # else:
-                #     Let datasets.Image() handle the loading by passing the path
-                #     image_data = image_path_full
 
                 yield idx, {
-                    "image": image_path_full, # Pass the full path to the image feature
+                    "image": image_path_full, # Pass the full path; datasets.Image() will load it
                     "question_id": row.get("question_id", ""),
                     "exam_name": row.get("exam_name", ""),
                     "exam_year": row.get("exam_year", -1), # Use a default if missing
src/benchmark_runner.py CHANGED
@@ -1,7 +1,5 @@
 import argparse
 import yaml
-import argparse
-import yaml
 import os
 import json
 import logging
src/llm_interface.py CHANGED
@@ -24,10 +24,6 @@ RETRYABLE_EXCEPTIONS = (
 # Define status codes that warrant a retry
 RETRYABLE_STATUS_CODES = {500, 502, 503, 504}
 
-def should_retry_response(response):
-    """Check if the response status code warrants a retry."""
-    return response.status_code in RETRYABLE_STATUS_CODES
-
 # Retry decorator configuration
 retry_config = dict(
     stop=stop_after_attempt(3), # Retry up to 3 times
@@ -250,10 +246,11 @@ if __name__ == '__main__':
 
     except ValueError as e:
         print(f"Setup Error: {e}")
+    # The previous Exception handler was too broad and could reference raw_resp
+    # before it was defined if the setup ValueError occurred first.
+    # Catch a general Exception only for runtime issues after setup.
     except Exception as e:
-        print(f"Raw Response: {raw_resp}")
-
-    except ValueError as e:
-        print(f"Setup Error: {e}")
-    except Exception as e:
-        print(f"Runtime Error: {e}")
+        # raw_resp may or may not be defined here (it could come from a successful
+        # first call even if a later step failed), so for simplicity in an example
+        # just report the runtime error.
+        print(f"Runtime Error during example execution: {e}")
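The removed `should_retry_response` helper and the `retry_config` built on `stop_after_attempt(3)` express one policy: retry up to three times when the response status code is in `RETRYABLE_STATUS_CODES`. A minimal plain-Python stand-in for that policy (the `call_with_retry` function and the simulated responses are hypothetical, not from the repository):

```python
import time

RETRYABLE_STATUS_CODES = {500, 502, 503, 504}
MAX_ATTEMPTS = 3  # mirrors stop_after_attempt(3) in the tenacity config

def call_with_retry(make_request):
    """Call make_request() until it returns a non-retryable status
    or the attempt budget is exhausted."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        status, body = make_request()
        if status not in RETRYABLE_STATUS_CODES or attempt == MAX_ATTEMPTS:
            return status, body
        time.sleep(0.01 * 2 ** attempt)  # stand-in for exponential backoff

# Simulated server: two transient 5xx responses, then success
responses = iter([(503, None), (502, None), (200, "ok")])
result = call_with_retry(lambda: next(responses))
print(result)  # (200, 'ok')
```

tenacity adds jittered waits and exception-based retry triggers on top of this; the loop above only shows the status-code half of the policy.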