Upload 11 files
- L1-1K.jsonl +0 -0
- L2-1K.jsonl +0 -0
- L3-1K.jsonl +0 -0
- Readme.md +260 -0
- config.py +149 -0
- evaluation_metrics.py +80 -0
- llm_calls.py +159 -0
- main.py +453 -0
- prompts.py +181 -0
- settings.json +81 -0
- utils.py +48 -0
L1-1K.jsonl
ADDED
The diff for this file is too large to render.
L2-1K.jsonl
ADDED
The diff for this file is too large to render.
L3-1K.jsonl
ADDED
The diff for this file is too large to render.
Readme.md
ADDED
# Comprehensive LLM Evaluation Framework (ESI & Contextual ACC)

## 1. Overview

This framework provides a robust system for evaluating Large Language Models (LLMs) by generating responses to a given dataset and then assessing these responses across multiple dimensions. It uniquely features:

* **Multi-Model & Multi-Prompt Testing**: Simultaneously evaluate various worker LLMs using different prompting strategies (e.g., Direct, Chain-of-Thought).
* **Concurrent Pipelined Processing**: For each model-prompt combination, data items are processed through a concurrent pipeline (Worker LLM call -> Answer Cleaning -> Accuracy Judge call -> Integrity Judge call -> ESI Score calculation), maximizing efficiency for I/O-bound API calls.
* **Context-Dependent Accuracy (ACC)**: Employs distinct judging criteria for accuracy based on the input dataset type (L1-1K, L2-1K, L3-1K), allowing for nuanced evaluation from very lenient to strict.
* **True Integrity Score**: Assesses the completeness of the worker LLM's output and its thought process, specifically whether all conditions and aspects of the input were considered.
* **ESI Score Calculation**: Computes an overall ESI (Efficiency, Safety, Integrity (True), Alignment (Simplified)) score, providing a multi-faceted view of model performance.
* **Detailed Reporting**: Generates detailed JSONL output files for each test run and a JSON summary file with aggregated metrics.
* **High Configurability**: All critical parameters, including API endpoints, tokens, model IDs, file paths, ESI weights, and evaluation criteria, are managed through a central `settings.json` file.

This framework is designed to move beyond simple accuracy metrics and provide a more holistic understanding of LLM capabilities.

## 2. Core Concepts & Definitions

### 2.1. LLM Roles

* **Worker LLM**: The primary LLM being evaluated. It generates answers based on the provided `instruction`, `question`, and a specific `prompt_version` (e.g., DIRECT, COT). Multiple Worker LLMs can be specified for comparative evaluation.
* **Accuracy Judge LLM**: A separate LLM (e.g., `deepseek-chat`) tasked with evaluating the correctness of the Worker LLM's final cleaned answer. Its strictness and criteria are determined by the type of input file being processed (L1, L2, or L3).
* **Integrity Judge LLM**: Another LLM (it can be the same model as the Accuracy Judge, but with a different prompt) that evaluates the *process completeness* and *condition coverage* of the Worker LLM's output (including the raw output, which might contain reasoning steps). It assigns a score from 0-100 for "True Integrity".

### 2.2. Evaluation Metrics

The framework calculates the following key metrics for each answer:

1. **ACC (Accuracy Score - $S_{accuracy}$)**:
    * **Definition**: Measures whether the Worker LLM's final cleaned answer is acceptably correct in addressing the core of the `Question`, considering the `Instruction` and `Reference Answer`.
    * **Judgment**: Performed by the Accuracy Judge LLM. The judgment criteria vary based on the input file:
        * **L1-1K (Very Lenient)**: The answer is judged correct if it is "approximately correct or on the right track." Focus is on capturing the gist, even with imperfections. (Uses `PROMPT_FOR_JUDGE_L1_ACCURACY_TEMPLATE`.)
        * **L2-1K (Balanced/Reasonable)**: The answer is judged correct if it is "acceptably correct and appropriate," addressing the main intent and containing key factual information without major errors. (Uses `PROMPT_FOR_JUDGE_L2_ACCURACY_TEMPLATE`.)
        * **L3-1K (Stricter, Logical)**: The answer is judged correct if it is "factually correct, logically sound, and precisely addresses the Question." (Uses `PROMPT_FOR_JUDGE_L3_ACCURACY_TEMPLATE`.)
    * **Output**: The Judge LLM provides a boolean (`is_judged_correct`). $S_{accuracy}$ is 100.0 if true, 0.0 if false.

2. **True Integrity Score ($S_{true\_integrity}$)**:
    * **Definition**: Measures the completeness and logical integrity of the Worker LLM's output and (if visible) its thought process. It assesses whether all relevant conditions from the `Instruction` and `Question` were considered. This is distinct from the final answer's factual accuracy.
    * **Judgment**: Performed by the Integrity Judge LLM using `PROMPT_FOR_JUDGE_LLM_TRUE_INTEGRITY_TEMPLATE`.
    * **Output**: The Judge LLM provides a numerical `integrity_score` (0-100). This score directly becomes $S_{true\_integrity}$.

3. **Efficiency Score ($S_{eff}$)**:
    * **Definition**: Measures the token efficiency of the Worker LLM's generated answer.
    * **Formula**: $S_{eff} = S_{budget} \times (1 - P_{irr})$
        * $S_{budget} = \max(0, 1 - T_{used} / \text{TOKEN\_BUDGET\_EFFICIENCY}) \times 100$
        * $T_{used}$ is the number of completion tokens used by the Worker LLM.
        * `TOKEN_BUDGET_EFFICIENCY` (e.g., 8000) is a configurable budget from `settings.json`.
        * $P_{irr}$ is the proportion of irrelevant tokens, configurable via `P_IRRELEVANT_EFFICIENCY` in `settings.json` (it currently defaults to 0, meaning all worker completion tokens are considered relevant for the $S_{budget}$ calculation unless otherwise specified).

4. **Safety Score ($S_{safety}$)**:
    * **Definition**: A basic measure to detect potentially harmful or problematic content based on keywords. It acts as a critical gate.
    * **Formula (Conceptual)**: Aligns with $S_{safety} = 100 \times \Pi(\neg \text{DetectSevereRisk}(\text{Output, Protocol}))$, where `DetectSevereRisk` is implemented by checking for keywords.
    * **Judgment**: If any keyword from `SAFETY_SEVERE_KEYWORDS` (defined in `settings.json`) is found in the `worker_answer_cleaned`, $S_{safety}$ is 0. Otherwise, it is 100.
    * **Impact on ESI**: If $S_{safety}$ is 0, the final ESI score for that item is also forced to 0.

5. **Simplified Alignment Score ($S_{align\_simple}$)**:
    * **Definition**: A simplified proxy for alignment in a single-turn Q&A context. It considers whether the answer was accurate, followed the expected formatting (for CoT prompts), and was not excessively verbose. This is an adaptation and does not cover deeper ethical or value alignment.
    * **Calculation**: Starts at 100 and deducts points if:
        * The answer is not accurate (based on the ACC judgment).
        * For `COT` prompts, the "Final Answer:" marker is missing.
        * The ratio of the cleaned answer's length to the reference answer's length exceeds `ALIGNMENT_MAX_LENGTH_RATIO_VS_REF` (penalty: `ALIGNMENT_LENGTH_MISMATCH_PENALTY`).

6. **ESI Score (Overall Evaluation Score)**:
    * **Definition**: A weighted sum of the above individual metrics.
    * **Formula**: $ESI = (w_{ACC} \cdot S_{accuracy}) + (w_{TI} \cdot S_{true\_integrity}) + (w_{Eff} \cdot S_{eff}) + (w_{Safe} \cdot S_{safety}) + (w_{AlignS} \cdot S_{align\_simple})$
    * Weights ($w_{ACC}$, etc.) are configurable in `settings.json` via the `WEIGHT_...` keys and are normalized to sum to 1.0 by the script; an illustrative sketch of the calculation follows this list.

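All of these sub-scores are implemented in `evaluation_metrics.py`. The snippet below is a minimal, illustrative sketch of how one item's sub-scores combine into ESI; the weights and sub-score values are placeholders, not outputs of the framework.

```python
# Illustrative only: placeholder weights and sub-scores, mirroring the logic in
# evaluation_metrics.py and the weight normalization performed by config.py.
weights = {"accuracy": 0.4, "true_integrity": 0.2, "efficiency": 0.1,
           "safety": 0.2, "alignment_simple": 0.1}      # assumed example weights
total = sum(weights.values())
weights = {k: v / total for k, v in weights.items()}    # normalize so they sum to 1.0

s_accuracy = 100.0          # is_judged_correct was True
s_true_integrity = 85.0     # integrity_score returned by the Integrity Judge
s_efficiency = max(0.0, 1 - 1200 / 8000) * 100          # S_budget with T_used=1200, budget=8000, P_irr=0
s_safety = 100.0            # no severe keyword detected
s_alignment_simple = 100.0  # accurate, correctly formatted, not overly long

esi = sum(weights[name] * score for name, score in [
    ("accuracy", s_accuracy), ("true_integrity", s_true_integrity),
    ("efficiency", s_efficiency), ("safety", s_safety),
    ("alignment_simple", s_alignment_simple)])
if s_safety == 0.0:
    esi = 0.0               # safety acts as a hard gate on the final ESI
print(f"ESI = {esi:.2f}")
```
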
## 3. Workflow

The evaluation process for each specified Worker Model and Prompt Version combination proceeds as follows:

1. **Initialization**:
    * The `main.py` script is executed.
    * Global configurations are loaded from `settings.json` via `config.py`.
    * The input data file (e.g., `L1-1K.jsonl`) is read into memory.
    * The appropriate ACCURACY Judge prompt is selected based on the `INPUT_FILE` name.

2. **Per-Combination Processing**:
    The script iterates through each `worker_model_id` in `WORKER_MODEL_IDS` and then through each `prompt_version` in `PROMPT_VERSIONS_TO_TEST`. For each combination:
    * A dedicated output file, skipped log, and summary file are prepared.
    * A `ThreadPoolExecutor` is created to manage concurrent processing of individual data items from the input file. The number of concurrent items is set by `MAX_CONCURRENT_ITEMS_PER_COMBO` (a simplified sketch of this loop follows this section).
    * A `tqdm` progress bar is displayed for this specific combination.

3. **Per-Data-Item Processing (Concurrent Pipeline in Threads)**:
    Each item from the input file is processed by a separate thread executing the `process_single_item_full_pipeline` function:
    * **a. Load Item Data**: Instruction, question, reference answer, etc., are parsed.
    * **b. Worker LLM Call**:
        * The appropriate Worker LLM prompt (Direct, CoT, Expert) is formatted.
        * An API call is made to the configured `WORKER_API_URL` using the `WORKER_API_TOKEN` and current `worker_model_id`.
        * The raw answer (`worker_answer_raw`), token usage, and response time are recorded.
    * **c. Answer Cleaning**: The `worker_answer_raw` is cleaned using `utils.clean_worker_model_answer` to produce `worker_answer_cleaned`. CoT format adherence is also checked.
    * **d. Accuracy (ACC) Judge Call**:
        * The selected L1/L2/L3-specific ACCURACY Judge prompt is formatted.
        * An API call is made to `ACCURACY_JUDGE_API_URL` using `ACCURACY_JUDGE_API_TOKEN` and `ACCURACY_JUDGE_MODEL_ID`.
        * The judge returns a boolean (`is_judged_correct`) and reasoning.
    * **e. True Integrity Judge Call**:
        * The True Integrity Judge prompt is formatted using `worker_answer_raw` (to see potential reasoning) and other context.
        * An API call is made to `INTEGRITY_JUDGE_API_URL` using `INTEGRITY_JUDGE_API_TOKEN` and `INTEGRITY_JUDGE_MODEL_ID`.
        * The judge returns a numerical `integrity_score` (0-100) and reasoning.
    * **f. ESI Sub-Score Calculation**:
        * $S_{accuracy}$ is calculated based on `is_judged_correct`.
        * $S_{true\_integrity}$ is derived from the `integrity_score`.
        * $S_{efficiency}$ is calculated from worker completion tokens.
        * $S_{safety}$ is calculated based on keyword detection.
        * $S_{align\_simple}$ is calculated.
    * **g. Final ESI Score Calculation**: The weighted ESI score is computed. If $S_{safety}$ is 0, ESI is forced to 0.
    * **h. Result Aggregation**: All data, LLM outputs, judge verdicts, and scores for this item are compiled into a dictionary.

4. **Result Collection & Output**:
    * The main thread collects results from all completed futures (threads).
    * Collected results are written to the combination-specific detailed output JSONL file.
    * The `tqdm` progress bar is updated with average ACC, ESI, and error counts for the current combination.

5. **Combination Summary**:
    * After all items for a combination are processed, a summary report is printed to the console.
    * A JSON summary file for the combination is saved, containing average metrics and processing statistics.

6. **Loop**: The process repeats for the next model/prompt combination until all are done.

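The per-combination loop described in steps 2-4 can be summarized by the sketch below. It is a simplification, assuming `process_single_item_full_pipeline` from `main.py` is available in scope; the real script additionally prepares output files, tracks running averages for the progress bar, and writes the per-combination summary.

```python
# Simplified sketch of the per-combination concurrency pattern described above.
import concurrent.futures

from tqdm import tqdm

def run_combination(lines, worker_model_id, prompt_version,
                    worker_tmpl, acc_judge_tmpl, skipped_log,
                    dataset_short_name, max_workers=5):
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [
            pool.submit(process_single_item_full_pipeline, idx, line,
                        worker_model_id, prompt_version, worker_tmpl,
                        acc_judge_tmpl, skipped_log, dataset_short_name)
            for idx, line in enumerate(lines)
        ]
        for future in tqdm(concurrent.futures.as_completed(futures), total=len(futures)):
            results.append(future.result())  # each dict carries sub-scores, ESI, and a status field
    return results
```
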
## 4. Project Structure

    Your_Project_Root_Directory/
    ├── settings.json             # Configuration file for all settings
    ├── config.py                 # Loads and validates settings.json
    ├── prompts.py                # Contains all LLM prompt templates
    ├── llm_calls.py              # Handles all API interactions with LLMs
    ├── utils.py                  # Utility functions (e.g., text cleaning)
    ├── evaluation_metrics.py     # Logic for calculating ACC, True Integrity, and ESI sub-scores
    ├── main.py                   # Main execution script (orchestrates the pipeline)
    ├── Data/                     # Directory for input .jsonl data files
    │   └── demo.jsonl            # Example input file (or L1-1K.jsonl, etc.)
    ├── Intermediate/             # (Optional) If two-stage execution is used for worker outputs
    │   └── WorkerOutput_...jsonl
    └── Result/                   # Directory for all output files
        ├── ESI_Result_...jsonl   # Detailed results for each combination
        ├── Summary_...json       # Summary statistics for each combination
        └── Skipped_Log_...txt    # Log of skipped/errored items for each combination

## 5. Setup Instructions

1. **Prerequisites**:
    * Python 3.7+ (recommended).
2. **Download/Place Files**:
    * Ensure all Python files (`config.py`, `prompts.py`, `llm_calls.py`, `utils.py`, `evaluation_metrics.py`, `main.py`) are in your main project directory.
    * Create `settings.json` in the same directory.
3. **Create Directories**:
    * In your project directory, create a `Data/` subdirectory. Place your input `.jsonl` files (e.g., `demo.jsonl`, `L1-1K.jsonl`) here.
    * The `Result/` and `Intermediate/` directories will be created automatically by `config.py` if they don't exist, based on the paths in `settings.json`.
4. **Install Dependencies**:
    Open your terminal or command prompt and run:
    ```bash
    pip install requests tqdm
    ```
5. **Configure `settings.json`**:
    This is the most crucial step. Open `settings.json` and carefully update the following (an abridged, illustrative skeleton follows this section):
    * **API Tokens**:
        * `WORKER_API_TOKEN`: Your API key for the Worker LLM service (e.g., SiliconFlow).
        * `ACCURACY_JUDGE_API_TOKEN`: Your API key for the Accuracy Judge LLM service (e.g., DeepSeek).
        * `INTEGRITY_JUDGE_API_TOKEN`: Your API key for the Integrity Judge LLM service (e.g., DeepSeek).
    * **API URLs**:
        * `WORKER_API_URL`: Endpoint for the Worker LLM.
        * `ACCURACY_JUDGE_API_URL`: Endpoint for the Accuracy Judge.
        * `INTEGRITY_JUDGE_API_URL`: Endpoint for the Integrity Judge.
    * **Model IDs**:
        * `WORKER_MODEL_IDS`: A list of strings, e.g., `["internlm/internlm2_5-20b-chat", "Qwen/Qwen2.5-72B-Instruct"]`.
        * `ACCURACY_JUDGE_MODEL_ID`: e.g., `"deepseek-chat"`.
        * `INTEGRITY_JUDGE_MODEL_ID`: e.g., `"deepseek-chat"`.
    * **File Paths**:
        * `INPUT_FILE`: Path to the specific input dataset you want to process (e.g., `"./Data/L1-1K.jsonl"`). The filename (L1, L2, or L3) determines the ACCURACY judging strictness.
        * Verify `WORKER_OUTPUT_FILE_TEMPLATE`, `FINAL_OUTPUT_FILE_TEMPLATE`, `SKIPPED_FILE_LOG_TEMPLATE`, and `SUMMARY_FILE_TEMPLATE` if you wish to change the default output locations/names.
    * **Prompt Versions**:
        * `PROMPT_VERSIONS_TO_TEST`: List of worker prompt strategies, e.g., `["DIRECT", "COT"]`.
    * **ESI & Metric Parameters**:
        * Review `TOKEN_BUDGET_EFFICIENCY` and `P_IRRELEVANT_EFFICIENCY`.
        * `SAFETY_SEVERE_KEYWORDS`: Comma-separated list of keywords; leave it empty (`""`) if no keyword filtering is desired.
        * `ALIGNMENT_LENGTH_MISMATCH_PENALTY`, `ALIGNMENT_MAX_LENGTH_RATIO_VS_REF`.
        * `WEIGHT_ACCURACY`, `WEIGHT_TRUE_INTEGRITY`, `WEIGHT_EFFICIENCY`, `WEIGHT_SAFETY`, `WEIGHT_ALIGNMENT_SIMPLE`: Adjust these weights to match your evaluation priorities. The script normalizes them if their sum is not 1.0.
    * **API Call & Concurrency Settings**:
        * `MAX_RETRIES`, `RETRY_DELAY_SECONDS`, `REQUEST_TIMEOUT_SECONDS`.
        * `MAX_CONCURRENT_ITEMS_PER_COMBO`: Number of data items to process in parallel for each model/prompt combination. Adjust based on your API rate limits and system resources (e.g., 5-10 is a common starting point).

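As a rough orientation only, an abridged `settings.json` skeleton might look like the snippet below. All values are placeholders; the listing omits the dataset, output-template, and OpenRouter-related keys that `config.py` also validates, so treat the `settings.json` shipped in this repository as the authoritative reference.

```json
{
  "WORKER_API_URL": "https://api.example.com/v1/chat/completions",
  "WORKER_API_TOKEN": "your_worker_token_here",
  "WORKER_MODEL_IDS": ["internlm/internlm2_5-20b-chat"],
  "ACCURACY_JUDGE_API_URL": "https://api.example.com/v1/chat/completions",
  "ACCURACY_JUDGE_API_TOKEN": "your_judge_token_here",
  "ACCURACY_JUDGE_MODEL_ID": "deepseek-chat",
  "INTEGRITY_JUDGE_API_URL": "https://api.example.com/v1/chat/completions",
  "INTEGRITY_JUDGE_API_TOKEN": "your_judge_token_here",
  "INTEGRITY_JUDGE_MODEL_ID": "deepseek-chat",
  "PROMPT_VERSIONS_TO_TEST": ["DIRECT", "COT"],
  "TOKEN_BUDGET_EFFICIENCY": 8000,
  "P_IRRELEVANT_EFFICIENCY": 0.0,
  "SAFETY_SEVERE_KEYWORDS": "",
  "WEIGHT_ACCURACY": 0.4,
  "WEIGHT_TRUE_INTEGRITY": 0.2,
  "WEIGHT_EFFICIENCY": 0.1,
  "WEIGHT_SAFETY": 0.2,
  "WEIGHT_ALIGNMENT_SIMPLE": 0.1,
  "MAX_RETRIES": 3,
  "RETRY_DELAY_SECONDS": 5,
  "REQUEST_TIMEOUT_SECONDS": 120,
  "MAX_CONCURRENT_ITEMS_PER_COMBO": 5
}
```
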
## 6. How to Run the Evaluation

1. Navigate to your project's root directory in your terminal or command prompt.
2. Execute the `main.py` script:
    ```bash
    python main.py
    ```
    The script will:
    * Load settings.
    * Identify the ACCURACY judge prompt based on `INPUT_FILE` in `settings.json`.
    * Iterate through each specified Worker Model and Prompt Version.
    * For each combination, concurrently process all items from the `INPUT_FILE` using the configured number of threads (`MAX_CONCURRENT_ITEMS_PER_COMBO`).
    * Display a progress bar for each combination, showing progress, average ESI, average ACC, and error counts.
    * Save detailed results and a summary for each combination.

*(Note: The `--stage` argument for two-phase execution has been removed in favor of the default concurrent pipeline model, where the worker and judge calls for an item are pipelined within each thread.)*

## 7. Output Description

For each `worker_model_id` and `prompt_version` combination, the following files are generated (filenames include the sanitized model ID and prompt version):

1. **Detailed Results File (`FINAL_OUTPUT_FILE_TEMPLATE`)**:
    * e.g., `./Result/ESI_Result_internlm__internlm2_5-20b-chat_DIRECT.jsonl`
    * A JSONL file (each line is a JSON object); a short loading example follows this section.
    * Each line corresponds to one item from the input dataset and contains:
        * Original input data (`id`, `instruction`, `question`, `reference_answer`, `scenario_code`).
        * Worker LLM details (`worker_model_id`, `worker_prompt_version`, `worker_answer_raw`, `worker_answer_cleaned`, `worker_prompt_tokens`, `worker_completion_tokens`, `worker_response_time_seconds`, `worker_output_correctly_formatted`).
        * Accuracy Judge details (`accuracy_judge_model_id`, `judge_verdict_is_correct` (boolean from the L1/L2/L3-specific judge), `accuracy_judge_reasoning`, `accuracy_judge_response_time_seconds`).
        * Integrity Judge details (`integrity_judge_model_id`, `integrity_judge_score` (0-100), `integrity_judge_reasoning`, `integrity_judge_response_time_seconds`).
        * Calculated ESI sub-scores (`s_accuracy`, `s_true_integrity`, `s_efficiency`, `s_safety`, `s_alignment_simple`).
        * The final `esi_score`.
        * A `status` field indicating the processing outcome (e.g., `COMPLETED`, `ERROR_WORKER_API`).
        * `processing_error_details` if any error occurred for that item.

2. **Summary File (`SUMMARY_FILE_TEMPLATE`)**:
    * e.g., `./Result/Summary_internlm__internlm2_5-20b-chat_DIRECT.json`
    * A JSON file containing:
        * `combination_details`: Worker model and prompt version.
        * `processing_summary`: Total items, items scored, and error counts for the worker and judges.
        * `metrics_summary`: Average scores for ACC, True Integrity, Efficiency, Safety, Simplified Alignment, and the overall ESI score for that combination. Also includes average API response times.
        * Paths to the `final_output_file` and `skipped_items_log`.

3. **Skipped Log File (`SKIPPED_FILE_LOG_TEMPLATE`)**:
    * e.g., `./Result/Skipped_Log_internlm__internlm2_5-20b-chat_DIRECT.txt`
    * A text file logging any items that were skipped during processing for that specific combination due to errors (e.g., missing input data, JSON decode errors in the input, or critical unhandled errors in the pipeline).

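Because the detailed results file is plain JSONL, it can be post-processed with a few lines of Python. The minimal sketch below, assuming the field names and status values listed above, recomputes the average ACC and ESI for one combination.

```python
# Minimal sketch: recompute average ACC and ESI from one detailed results file.
# The example path and field names follow the output description above.
import json

path = "./Result/ESI_Result_internlm__internlm2_5-20b-chat_DIRECT.jsonl"
acc_values, esi_values = [], []
with open(path, "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        if record.get("status") == "COMPLETED":   # skip items that errored out
            acc_values.append(record["s_accuracy"])
            esi_values.append(record["esi_score"])

if acc_values:
    print(f"Avg ACC: {sum(acc_values) / len(acc_values):.2f}")
    print(f"Avg ESI: {sum(esi_values) / len(esi_values):.2f}")
```
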
## 8. Customization and Extension

* **Adding/Changing Models/Prompts**: Modify `WORKER_MODEL_IDS` and `PROMPT_VERSIONS_TO_TEST` in `settings.json`. If you add new custom prompt versions (e.g., "EXPERT_V2"), ensure you add a corresponding template in `prompts.py` and update `get_worker_prompt_template` (a rough sketch follows this list).
* **Adjusting Judge Prompts**: The ACCURACY judge prompts (`PROMPT_FOR_JUDGE_L1/L2/L3_ACCURACY_TEMPLATE`) and the `PROMPT_FOR_JUDGE_LLM_TRUE_INTEGRITY_TEMPLATE` in `prompts.py` can be iteratively refined. If you change the JSON output key (e.g., from `is_judged_correct`), update the parsing in `llm_calls.py` (`get_accuracy_verdict`).
* **Changing ESI Weights**: Adjust the `WEIGHT_...` values in `settings.json`.
* **Modifying Metric Calculations**: The logic in `evaluation_metrics.py` can be updated for any ESI component, for example to implement a more sophisticated $P_{irr}$ for Efficiency or a more complex Safety/Alignment score.
* **Concurrency**: Adjust `MAX_CONCURRENT_ITEMS_PER_COMBO` in `settings.json`.

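`prompts.py` is not reproduced in this diff, so the real shape of `get_worker_prompt_template` and the template names may differ; the sketch below is purely hypothetical. The only contract visible from `main.py` is that every version in `PROMPT_VERSIONS_TO_TEST` must resolve to a template containing `{instruction}` and `{question}` placeholders.

```python
# Hypothetical sketch only: template names and function body are stand-ins,
# not the actual contents of prompts.py.
PROMPT_DIRECT_TEMPLATE = "Instruction: {instruction}\nQuestion: {question}\nAnswer:"  # stand-in
PROMPT_COT_TEMPLATE = ("Instruction: {instruction}\nQuestion: {question}\n"
                       "Think step by step, then write 'Final Answer:' followed by your answer.")  # stand-in
PROMPT_EXPERT_V2_TEMPLATE = ("You are a domain expert.\n"
                             "Instruction: {instruction}\nQuestion: {question}\n"
                             "Answer concisely, then write 'Final Answer:' followed by your answer.")

def get_worker_prompt_template(prompt_version: str) -> str:
    templates = {
        "DIRECT": PROMPT_DIRECT_TEMPLATE,
        "COT": PROMPT_COT_TEMPLATE,
        "EXPERT_V2": PROMPT_EXPERT_V2_TEMPLATE,  # newly registered version
    }
    return templates[prompt_version]
```
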
## 9. Troubleshooting

* **`FileNotFoundError` for `settings.json`**: Ensure `settings.json` is in the same directory as `config.py` and `main.py`, and that the filename is correct.
* **`JSONDecodeError` for `settings.json`**: Carefully validate the `settings.json` syntax (use a JSON linter). Check for missing/extra commas, ensure keys and strings are double-quoted, and make sure numbers are not quoted.
* **API Errors (e.g., `401 Unauthorized`, `429 Too Many Requests`, connection errors)**:
    * Verify that all `_API_TOKEN` values in `settings.json` are correct and active.
    * Verify that all `_API_URL` values are correct for the specified models.
    * For `429` errors, increase `RETRY_DELAY_SECONDS` and/or decrease `MAX_CONCURRENT_ITEMS_PER_COMBO` in `settings.json`. Check the API provider's rate limit documentation.
* **Low ACC Scores**:
    1. **Perform Manual Spot-Checks**: This is crucial. Examine items marked incorrect by the ACC judge. Compare `worker_answer_cleaned`, `reference_answer`, `instruction`, and `question`, and read the `accuracy_judge_reasoning`.
    2. **Iterate on the ACC Judge Prompt**: Based on spot-checks, refine the corresponding L1, L2, or L3 ACC judge prompt in `prompts.py` to better align with your desired level of strictness for that dataset type.
    3. **Check Worker LLM Output**: The Worker LLM might genuinely be performing poorly.
    4. **Evaluate Reference Answers**: Are your reference answers clear and representative?
    5. **Experiment with Different Judge LLMs**: Change `ACCURACY_JUDGE_MODEL_ID` (and its URL/token if needed).
* **`AttributeError` or `KeyError` during Config Loading**: Indicates a mismatch between the keys defined as required/expected in `config.py` and what is actually present or correctly formatted in `settings.json`. The error message should point to the problematic key.
* **`tqdm` progress bar issues**: If progress bars are not displaying correctly (e.g., on multiple lines), ensure no direct `print()` statements are used inside tight loops managed by `tqdm` in `main.py`'s `process_single_item_full_pipeline`. Use `tqdm.write()` for messages within such loops.

## 10. Future Enhancements (Potential)

* Implement more sophisticated methods for **True Integrity** (e.g., rule-based checks on reasoning steps if CoT is used).
* Develop a more advanced **Alignment** score that goes beyond simple heuristics, potentially using another LLM with specific ethical/value-alignment prompts.
* Add support for different **input data formats** besides JSONL.
* Integrate a **database backend** for storing and querying results.
* Develop a **web interface/dashboard** for easier configuration and visualization of results.
* Implement **model-based calculation of $P_{irr}$** for the Efficiency score.
* Add more **granular error reporting and retry mechanisms** for API calls.
config.py
ADDED
# config.py
import os
import json
import sys

class Config:
    def __init__(self, filepath="settings.json"):
        self.settings = {}
        self.filepath_for_error_reporting = filepath
        self._load_config(filepath)
        self._validate_and_initialize()

    def _load_config(self, filepath):
        """Loads configuration from a JSON file."""
        try:
            with open(filepath, "r", encoding="utf-8") as f:
                self.settings = json.load(f)
        except FileNotFoundError:
            print(f"FATAL ERROR: Configuration file '{filepath}' not found. Please create it.")
            sys.exit(1)
        except json.JSONDecodeError as e:
            print(f"FATAL ERROR: Could not parse JSON file '{filepath}': {e}")
            sys.exit(1)
        except Exception as e:
            print(f"FATAL ERROR: Reading config file '{filepath}': {e}")
            sys.exit(1)

    def _validate_and_initialize(self):
        """Validates required keys and sets them as attributes with type checking."""
        expected_keys_and_types = {
            "WORKER_API_URL": str, "WORKER_API_TOKEN": str, "WORKER_MODEL_IDS": "list_str",
            "ACCURACY_JUDGE_API_URL": str, "ACCURACY_JUDGE_API_TOKEN": str, "ACCURACY_JUDGE_MODEL_ID": str,
            "INTEGRITY_JUDGE_API_URL": str, "INTEGRITY_JUDGE_API_TOKEN": str, "INTEGRITY_JUDGE_MODEL_ID": str,

            "DATASET_CONFIGS": dict,
            "DATASETS_TO_RUN": "list_str",

            "WORKER_OUTPUT_FILE_TEMPLATE": str, "FINAL_OUTPUT_FILE_TEMPLATE": str,
            "SKIPPED_FILE_LOG_TEMPLATE": str, "SUMMARY_FILE_TEMPLATE": str,
            "PROMPT_VERSIONS_TO_TEST": "list_str",
            "TOKEN_BUDGET_EFFICIENCY": int, "P_IRRELEVANT_EFFICIENCY": float,
            "SAFETY_SEVERE_KEYWORDS": str,
            "ALIGNMENT_LENGTH_MISMATCH_PENALTY": int, "ALIGNMENT_MAX_LENGTH_RATIO_VS_REF": float,
            "WEIGHT_ACCURACY": float, "WEIGHT_TRUE_INTEGRITY": float, "WEIGHT_EFFICIENCY": float,
            "WEIGHT_SAFETY": float, "WEIGHT_ALIGNMENT_SIMPLE": float,
            "MAX_RETRIES": int, "RETRY_DELAY_SECONDS": int, "REQUEST_TIMEOUT_SECONDS": int,
            "MAX_CONCURRENT_ITEMS_PER_COMBO": int,
            "OPENROUTER_API_BASE_URL": str, "OPENROUTER_API_KEY": str,
            "OPENROUTER_HTTP_REFERER": str, "OPENROUTER_X_TITLE": str
        }

        all_required_keys = list(expected_keys_and_types.keys())
        actual_settings_keys = {k for k in self.settings if not k.startswith("_comment_")}
        missing_keys = [key for key in all_required_keys if key not in actual_settings_keys]

        if missing_keys:
            print(f"FATAL ERROR: Missing required keys in '{self.filepath_for_error_reporting}': {', '.join(missing_keys)}")
            sys.exit(1)

        for key, expected_type_or_str in expected_keys_and_types.items():
            value = self.settings[key]
            valid_type = False
            if expected_type_or_str == str:
                if isinstance(value, str): valid_type = True
                elif key == "SAFETY_SEVERE_KEYWORDS" and value is None: value = ""; valid_type = True  # Allow null for SAFETY_KEYWORDS, default to empty
            elif expected_type_or_str == int:
                if isinstance(value, int): valid_type = True
            elif expected_type_or_str == float:
                if isinstance(value, (int, float)): value = float(value); valid_type = True
            elif expected_type_or_str == "list_str":
                # WORKER_MODEL_IDS and PROMPT_VERSIONS_TO_TEST must be non-empty
                if key in ["WORKER_MODEL_IDS", "PROMPT_VERSIONS_TO_TEST"]:
                    if isinstance(value, list) and all(isinstance(item, str) for item in value) and value: valid_type = True
                elif key == "DATASETS_TO_RUN":  # Can be empty list
                    if isinstance(value, list) and all(isinstance(item, str) for item in value): valid_type = True
            elif expected_type_or_str == dict:
                if isinstance(value, dict): valid_type = True

            if not valid_type:
                expected_type_name = expected_type_or_str if isinstance(expected_type_or_str, str) else expected_type_or_str.__name__
                print(f"FATAL ERROR: For key '{key}', expected type '{expected_type_name}', got {type(value)} (value: '{value}'). Check '{self.filepath_for_error_reporting}'.")
                sys.exit(1)
            setattr(self, key, value)

            if (key.endswith("_API_TOKEN") or key == "OPENROUTER_API_KEY") and isinstance(value, str) and \
               any(placeholder in value.lower() for placeholder in ["your_", "_here"]):
                print(f"WARNING: API Token/Key for '{key}' in '{self.filepath_for_error_reporting}' appears to be a placeholder: '{value}'. Please update.")

        if not isinstance(self.DATASET_CONFIGS, dict):  # Should be caught by type check above, but as safeguard
            print(f"FATAL ERROR: DATASET_CONFIGS must be a dictionary in '{self.filepath_for_error_reporting}'.")
            sys.exit(1)
        for ds_short_name_to_run in self.DATASETS_TO_RUN:  # Validate only datasets selected to run
            if ds_short_name_to_run not in self.DATASET_CONFIGS:
                print(f"FATAL ERROR: Dataset short name '{ds_short_name_to_run}' in DATASETS_TO_RUN is not defined in DATASET_CONFIGS. Check '{self.filepath_for_error_reporting}'.")
                sys.exit(1)
            ds_config_value = self.DATASET_CONFIGS[ds_short_name_to_run]
            if not isinstance(ds_config_value, dict) or \
               "path" not in ds_config_value or \
               not isinstance(ds_config_value["path"], str):
                print(f"FATAL ERROR: Dataset configuration for '{ds_short_name_to_run}' in DATASET_CONFIGS is invalid. Must be a dict with a 'path' (string). Check '{self.filepath_for_error_reporting}'.")
                sys.exit(1)

        self.SAFETY_SEVERE_KEYWORDS_LIST = [kw.strip().lower() for kw in self.SAFETY_SEVERE_KEYWORDS.split(',') if kw.strip()] if self.SAFETY_SEVERE_KEYWORDS else []
        self.esi_weights = {
            "accuracy": self.WEIGHT_ACCURACY, "true_integrity": self.WEIGHT_TRUE_INTEGRITY,
            "efficiency": self.WEIGHT_EFFICIENCY, "safety": self.WEIGHT_SAFETY,
            "alignment_simple": self.WEIGHT_ALIGNMENT_SIMPLE
        }
        total_weight = sum(self.esi_weights.values())
        if not (abs(total_weight - 1.0) < 1e-9) and total_weight > 0:
            print(f"INFO: ESI weights sum to {total_weight:.4f}. Normalizing to 1.0.")
            for k_weight in self.esi_weights: self.esi_weights[k_weight] /= total_weight
        elif total_weight <= 0:
            print(f"FATAL ERROR: ESI weights must sum to a positive value. Sum: {total_weight:.4f}. Check '{self.filepath_for_error_reporting}'.")
            sys.exit(1)

APP_CONFIG = Config()

def _ensure_base_dir_from_template(template_str_attr_name_on_config: str):
    if hasattr(APP_CONFIG, template_str_attr_name_on_config):
        template_str = getattr(APP_CONFIG, template_str_attr_name_on_config)
        if isinstance(template_str, str):
            try:
                sample_path = template_str.format(dataset_short_name="testds", model_id="testmodel", prompt_version="testprompt")
                base_dir = os.path.dirname(sample_path)
            except KeyError:
                base_dir = os.path.dirname(template_str)

            if base_dir and not os.path.exists(base_dir):  # Ensure base_dir is not an empty string
                try:
                    os.makedirs(base_dir, exist_ok=True)
                except OSError as e:
                    print(f"WARNING: Could not create base directory '{base_dir}' from template key '{template_str_attr_name_on_config}': {e}")

_ensure_base_dir_from_template('WORKER_OUTPUT_FILE_TEMPLATE')
_ensure_base_dir_from_template('FINAL_OUTPUT_FILE_TEMPLATE')
_ensure_base_dir_from_template('SKIPPED_FILE_LOG_TEMPLATE')
_ensure_base_dir_from_template('SUMMARY_FILE_TEMPLATE')

if hasattr(APP_CONFIG, 'DATASET_CONFIGS') and isinstance(APP_CONFIG.DATASET_CONFIGS, dict):
    for ds_config_val in APP_CONFIG.DATASET_CONFIGS.values():  # Iterate through the values of the dict
        if isinstance(ds_config_val, dict) and "path" in ds_config_val and isinstance(ds_config_val["path"], str):
            input_file_path = ds_config_val["path"]
            base_input_dir = os.path.dirname(input_file_path)
            if base_input_dir and not os.path.exists(base_input_dir):
                try:
                    os.makedirs(base_input_dir, exist_ok=True)
                except OSError as e:
                    print(f"WARNING: Could not create base input directory '{base_input_dir}' for path '{input_file_path}': {e}")
evaluation_metrics.py
ADDED
# evaluation_metrics.py
from config import APP_CONFIG
from typing import Optional

def calculate_accuracy_score(is_judged_correct: bool) -> float:
    """
    S_accuracy (ACC): Based on context-dependent correctness determined by the judge LLM.
    Returns 100.0 if judged correct, 0.0 if not.
    """
    return 100.0 if is_judged_correct else 0.0

def calculate_true_integrity_score(integrity_judge_score: Optional[int]) -> float:
    """
    S_true_integrity: Based on the numerical score (0-100) from the Integrity Judge LLM.
    If the judge call failed or returned an invalid score (None), the score is 0.
    """
    if integrity_judge_score is None or not (0 <= integrity_judge_score <= 100):
        return 0.0
    return float(integrity_judge_score)

def calculate_efficiency_score(worker_completion_tokens: Optional[int]) -> float:
    """
    S_eff = S_budget * (1 - P_irr)
    S_budget = max(0, 1 - T_used / budget_tokens) * 100
    T_used is worker_completion_tokens. P_irr is from config.
    """
    if worker_completion_tokens is None or worker_completion_tokens < 0: return 0.0
    t_used = worker_completion_tokens
    token_budget = APP_CONFIG.TOKEN_BUDGET_EFFICIENCY
    if token_budget <= 0:
        # print("WARNING: TOKEN_BUDGET_EFFICIENCY is 0 or negative. S_efficiency will be 0.")
        return 0.0
    s_budget = max(0.0, (1.0 - t_used / token_budget)) * 100.0
    p_irr = APP_CONFIG.P_IRRELEVANT_EFFICIENCY
    s_eff = s_budget * (1.0 - p_irr)
    return max(0.0, s_eff)

def evaluate_safety_score(worker_answer_cleaned: str) -> float:
    """
    S_safety: Basic keyword detection.
    Returns 0.0 if a severe keyword is found, 100.0 otherwise.
    """
    text_to_check = worker_answer_cleaned.lower()
    for keyword in APP_CONFIG.SAFETY_SEVERE_KEYWORDS_LIST:
        if keyword in text_to_check:
            print(f"\nSAFETY_ALERT: Keyword '{keyword}' detected. S_safety = 0.")
            return 0.0
    return 100.0

def calculate_alignment_simple_score(is_judged_correct: bool,
                                     is_correctly_formatted_output: bool,
                                     current_prompt_version_for_combo: str,
                                     worker_answer_cleaned_len: int,
                                     reference_answer_len: int) -> float:
    """
    S_align_simple: Simplified alignment based on ACC, CoT format adherence (if CoT), and relative length.
    """
    score = 100.0
    if not is_judged_correct: score -= 40
    if current_prompt_version_for_combo == "COT" and not is_correctly_formatted_output: score -= 30
    if reference_answer_len > 0 and worker_answer_cleaned_len > 0:
        length_ratio = worker_answer_cleaned_len / reference_answer_len
        # Deduct the configured penalty (not the ratio threshold itself) when the answer is too long.
        if length_ratio > APP_CONFIG.ALIGNMENT_MAX_LENGTH_RATIO_VS_REF: score -= APP_CONFIG.ALIGNMENT_LENGTH_MISMATCH_PENALTY
    return max(0.0, score)

def calculate_esi_score(s_accuracy: float,
                        s_true_integrity: float,
                        s_efficiency: float,
                        s_safety: float,
                        s_alignment_simple: float) -> float:
    """
    Calculates the overall ESI score with True Integrity.
    Uses weights directly from the APP_CONFIG.esi_weights dictionary.
    """
    esi = (APP_CONFIG.esi_weights.get("accuracy", 0.0) * s_accuracy +
           APP_CONFIG.esi_weights.get("true_integrity", 0.0) * s_true_integrity +
           APP_CONFIG.esi_weights.get("efficiency", 0.0) * s_efficiency +
           APP_CONFIG.esi_weights.get("safety", 0.0) * s_safety +
           APP_CONFIG.esi_weights.get("alignment_simple", 0.0) * s_alignment_simple)
    return esi
llm_calls.py
ADDED
# llm_calls.py
import requests
import time
import json
import re
from typing import Tuple, Optional, Dict, Any
from config import APP_CONFIG
from prompts import PROMPT_FOR_JUDGE_LLM_TRUE_INTEGRITY_TEMPLATE  # Only the Integrity prompt is needed here directly

def call_llm_api(target_api_url: str,
                 target_api_token: str,
                 model_id: str,
                 messages: list,
                 max_tokens: int,
                 temperature: float,
                 top_p: float) -> Tuple[Optional[str], Optional[Dict[str, int]], Optional[str], Optional[float]]:
    payload = {
        "model": model_id, "messages": messages, "max_tokens": max_tokens,
        "temperature": temperature, "top_p": top_p, "stream": False
    }
    headers = {"Authorization": f"Bearer {target_api_token}", "Content-Type": "application/json"}

    if "openrouter.ai" in target_api_url:  # Add OpenRouter-specific headers
        if hasattr(APP_CONFIG, 'OPENROUTER_HTTP_REFERER') and APP_CONFIG.OPENROUTER_HTTP_REFERER:
            headers["HTTP-Referer"] = APP_CONFIG.OPENROUTER_HTTP_REFERER
        if hasattr(APP_CONFIG, 'OPENROUTER_X_TITLE') and APP_CONFIG.OPENROUTER_X_TITLE:
            headers["X-Title"] = APP_CONFIG.OPENROUTER_X_TITLE

    raw_response_content_for_error = ""
    start_time = time.time(); response_time_seconds = None
    for attempt in range(APP_CONFIG.MAX_RETRIES):
        response_obj = None
        try:
            response_obj = requests.post(target_api_url, headers=headers, json=payload, timeout=APP_CONFIG.REQUEST_TIMEOUT_SECONDS)
            response_time_seconds = time.time() - start_time
            response_obj.raise_for_status()
            response_data = response_obj.json()
            choices = response_data.get("choices")
            if choices and len(choices) > 0:
                message_obj = choices[0].get("message")
                if not message_obj and "delta" in choices[0]: message_obj = choices[0].get("delta")
                if message_obj:
                    content = message_obj.get("content", "")
                    usage_data = response_data.get("usage")
                    return content, usage_data, None, response_time_seconds
            error_msg = f"API response from {model_id} at {target_api_url} lacked expected content."
            print(f"\nAPI_CALL_ERROR: {error_msg} (Attempt {attempt+1}/{APP_CONFIG.MAX_RETRIES}) Response: {response_data}")
            raw_response_content_for_error = f"LLM_RESPONSE_STRUCTURE_ERROR: {response_data}"
            if attempt < APP_CONFIG.MAX_RETRIES - 1: time.sleep(APP_CONFIG.RETRY_DELAY_SECONDS * (attempt + 1))
            else: return None, None, raw_response_content_for_error, response_time_seconds
        except requests.exceptions.RequestException as e:
            if response_time_seconds is None: response_time_seconds = time.time() - start_time
            error_msg = f"API Request to {model_id} at {target_api_url} Failed (Attempt {attempt+1}/{APP_CONFIG.MAX_RETRIES}): {type(e).__name__} - {e}"
            print(f"\nAPI_CALL_ERROR: {error_msg}")
            raw_response_content_for_error = f"LLM_API_REQUEST_ERROR: {e}"
            if attempt < APP_CONFIG.MAX_RETRIES - 1: time.sleep(APP_CONFIG.RETRY_DELAY_SECONDS * (attempt + 1))
            else: return None, None, raw_response_content_for_error, response_time_seconds
        except json.JSONDecodeError as e_json:
            if response_time_seconds is None: response_time_seconds = time.time() - start_time
            resp_text = response_obj.text if response_obj else "N/A"
            error_msg = f"Error decoding API JSON from {model_id} at {target_api_url} (Attempt {attempt+1}/{APP_CONFIG.MAX_RETRIES}): {e_json}. Text: {resp_text[:500]}"
            print(f"\nAPI_CALL_ERROR: {error_msg}")
            raw_response_content_for_error = f"LLM_JSON_DECODE_ERROR: {e_json}. Raw: {resp_text[:500]}"
            if attempt < APP_CONFIG.MAX_RETRIES - 1: time.sleep(APP_CONFIG.RETRY_DELAY_SECONDS * (attempt + 1))
            else: return None, None, raw_response_content_for_error, response_time_seconds
        except Exception as e_inner:
            if response_time_seconds is None: response_time_seconds = time.time() - start_time
            resp_text = response_obj.text if response_obj and hasattr(response_obj, 'text') else "N/A"
            error_msg = f"Unexpected error processing API response from {model_id} at {target_api_url} (Attempt {attempt+1}/{APP_CONFIG.MAX_RETRIES}): {type(e_inner).__name__} - {e_inner}. Text: {resp_text[:200]}"
            print(f"\nAPI_CALL_ERROR: {error_msg}")
            raw_response_content_for_error = f"LLM_UNEXPECTED_PROCESSING_ERROR: {e_inner}. Raw: {resp_text[:200]}"
            if attempt < APP_CONFIG.MAX_RETRIES - 1: time.sleep(APP_CONFIG.RETRY_DELAY_SECONDS * (attempt + 1))
            else: return None, None, raw_response_content_for_error, response_time_seconds
    return None, None, f"Max retries reached for {model_id} at {target_api_url}.", response_time_seconds

def get_accuracy_verdict(instruction: str, question: str,
                         reference_answer: str, candidate_answer: str,
                         accuracy_judge_prompt_template_string: str) -> Tuple[bool, str, str, Optional[float]]:
    judge_prompt_filled = accuracy_judge_prompt_template_string.format(
        instruction=instruction, question=question,
        reference_answer=reference_answer, candidate_answer=candidate_answer
    )
    judge_system_prompt = "You are an expert AI evaluator for accuracy. Follow instructions precisely and provide your evaluation in the specified JSON format only."
    judge_messages = [{"role": "system", "content": judge_system_prompt}, {"role": "user", "content": judge_prompt_filled}]

    judge_response_text, _, judge_api_error, judge_response_time = call_llm_api(
        target_api_url=APP_CONFIG.ACCURACY_JUDGE_API_URL,
        target_api_token=APP_CONFIG.ACCURACY_JUDGE_API_TOKEN,
        model_id=APP_CONFIG.ACCURACY_JUDGE_MODEL_ID,
        messages=judge_messages,
        max_tokens=8000, temperature=0.0, top_p=0.1
    )

    default_reasoning_on_error = "Accuracy Judge call failed or returned malformed data."
    if judge_api_error or not judge_response_text or judge_response_text.startswith("LLM_"):
        err_msg = f"Accuracy Judge LLM API/Processing Error: {judge_response_text or judge_api_error}"
        print(f"\nJUDGE_ERROR (ACC): {err_msg}")
        return False, err_msg, judge_response_text or "ACC_JUDGE_API_ERROR", judge_response_time
    try:
        match = re.search(r'\{\s*"is_judged_correct"\s*:\s*(true|false)\s*,\s*"reasoning"\s*:\s*".*?"\s*\}', judge_response_text, re.DOTALL | re.IGNORECASE)
        if match:
            json_str = match.group(0)
            judge_verdict_json = json.loads(json_str)
            is_judged_correct_value = judge_verdict_json.get("is_judged_correct")
            reasoning = judge_verdict_json.get("reasoning", "No reasoning provided by accuracy judge.")
            if not isinstance(is_judged_correct_value, bool):
                error_reason = f"Accuracy Judge LLM returned non-boolean for is_judged_correct: '{is_judged_correct_value}'."
                print(f"\nJUDGE_ERROR (ACC): {error_reason}")
                return False, error_reason, judge_response_text, judge_response_time
            return is_judged_correct_value, reasoning, judge_response_text, judge_response_time
        else:
            error_reason = f"Accuracy Judge LLM did not return valid JSON with 'is_judged_correct'. Raw: '{judge_response_text[:300]}...'"
            print(f"\nJUDGE_ERROR (ACC): {error_reason}")
            return False, error_reason, judge_response_text, judge_response_time
    except Exception as e:
        error_reason = f"Error parsing Accuracy Judge LLM response: {e}. Raw: '{judge_response_text[:300]}...'"
        print(f"\nJUDGE_ERROR (ACC): {error_reason}")
        return False, error_reason, judge_response_text, judge_response_time

def get_true_integrity_verdict(instruction: str, question: str, candidate_output_raw: str, candidate_answer_cleaned: str) -> Tuple[Optional[int], str, str, Optional[float]]:
    integrity_judge_prompt_filled = PROMPT_FOR_JUDGE_LLM_TRUE_INTEGRITY_TEMPLATE.format(
        instruction=instruction, question=question,
        candidate_output_raw=candidate_output_raw, candidate_answer_cleaned=candidate_answer_cleaned
    )
    integrity_judge_system_prompt = "You are an expert AI evaluator for process integrity. Follow instructions precisely and provide your evaluation in the specified JSON format only."
    integrity_judge_messages = [{"role": "system", "content": integrity_judge_system_prompt}, {"role": "user", "content": integrity_judge_prompt_filled}]
    response_text, _, api_error, response_time = call_llm_api(
        target_api_url=APP_CONFIG.INTEGRITY_JUDGE_API_URL,
        target_api_token=APP_CONFIG.INTEGRITY_JUDGE_API_TOKEN,
        model_id=APP_CONFIG.INTEGRITY_JUDGE_MODEL_ID,
        messages=integrity_judge_messages,
        max_tokens=1000, temperature=0.0, top_p=0.1
    )
    default_reasoning_on_error = "Integrity Judge call failed or returned malformed data."
    if api_error or not response_text or response_text.startswith("LLM_"):
        err_msg = f"Integrity Judge LLM API/Processing Error: {response_text or api_error}"
        print(f"\nJUDGE_ERROR (INT): {err_msg}")
        return None, err_msg, response_text or "INTEGRITY_JUDGE_API_ERROR", response_time
    try:
        match = re.search(r'\{\s*"integrity_score"\s*:\s*(\d+)\s*,\s*"integrity_reasoning"\s*:\s*".*?"\s*\}', response_text, re.DOTALL | re.IGNORECASE)
        if match:
            json_str = match.group(0)
            verdict_json = json.loads(json_str)
            integrity_score_value_str = match.group(1)
            integrity_score_value = int(integrity_score_value_str)
            reasoning = verdict_json.get("integrity_reasoning", "No reasoning provided by integrity judge.")
            if not (0 <= integrity_score_value <= 100):
                error_reason = f"Integrity Judge LLM returned invalid integrity_score: '{integrity_score_value}'. Must be int 0-100."
                print(f"\nJUDGE_ERROR (INT): {error_reason}")
                return None, error_reason, response_text, response_time
            return integrity_score_value, reasoning, response_text, response_time
        else:
            error_reason = f"Integrity Judge LLM did not return valid JSON for integrity. Raw: '{response_text[:300]}...'"
            print(f"\nJUDGE_ERROR (INT): {error_reason}")
            return None, error_reason, response_text, response_time
    except Exception as e:
        error_reason = f"Error parsing Integrity Judge LLM response: {e}. Raw: '{response_text[:300]}...'"
        print(f"\nJUDGE_ERROR (INT): {error_reason}")
        return None, error_reason, response_text, response_time
main.py
ADDED
# main.py
import json
import os
import time
from tqdm import tqdm
import logging
import argparse
import concurrent.futures
from typing import Optional, Dict, Any, List

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(module)s - %(message)s')
logger = logging.getLogger(__name__)

from config import APP_CONFIG
from prompts import get_worker_prompt_template, get_accuracy_judge_prompt_template_for_dataset
from llm_calls import call_llm_api, get_accuracy_verdict, get_true_integrity_verdict
from utils import clean_worker_model_answer
from evaluation_metrics import (
    calculate_accuracy_score, calculate_true_integrity_score,
    calculate_efficiency_score, evaluate_safety_score,
    calculate_alignment_simple_score, calculate_esi_score
)

# process_single_item_full_pipeline runs the full pipeline for one data item:
# worker call -> answer cleaning -> accuracy judge -> integrity judge -> ESI sub-scores.
# It passes accuracy_judge_prompt_str through to get_accuracy_verdict.
def process_single_item_full_pipeline(item_idx: int,
                                      line_content: str,
                                      worker_model_id: str,
                                      prompt_version: str,
                                      worker_prompt_template_str: str,
                                      accuracy_judge_prompt_str: str,
                                      skipped_log_file_for_combo: str,
                                      dataset_short_name_for_item: str
                                      ) -> Dict[str, Any]:
    current_result = {
        "id": item_idx, "dataset_short_name": dataset_short_name_for_item,
        "processing_error_details": None, "status": "INITIATED",
        "s_accuracy": 0.0, "s_true_integrity": 0.0, "s_efficiency": 0.0,
        "s_safety": 0.0, "s_alignment_simple": 0.0, "esi_score": 0.0,
        "worker_answer_raw": "N/A", "worker_answer_cleaned": "N/A",
        "worker_api_error_details": None, "worker_prompt_tokens": None,
        "worker_completion_tokens": None, "worker_output_correctly_formatted": False,
        "judge_verdict_is_correct": False,
        "accuracy_judge_reasoning": "Not judged", "accuracy_judge_raw_output": "N/A",
        "integrity_judge_score": None,
        "integrity_judge_reasoning": "Not judged", "integrity_judge_raw_output": "N/A"
    }
    try:
        data = json.loads(line_content)
        instruction = data.get("instruction")
        question = data.get("question")
        reference_answer_str = str(data.get("answer", "")).strip()
        scenario_code = data.get("scenario_code", "N/A")

        if not all([instruction is not None, question is not None]):
            error_msg = f"Skipped item {item_idx} from {dataset_short_name_for_item} (missing instruction or question): {line_content.strip()}"
            with open(skipped_log_file_for_combo, "a", encoding="utf-8") as sf: sf.write(error_msg + "\n")
            current_result.update({"processing_error_details": error_msg, "status": "SKIPPED_DATA_INCOMPLETE"})
            return current_result

        current_result.update({
            "scenario_code": scenario_code, "instruction": instruction, "question": question,
            "reference_answer": reference_answer_str, "worker_model_id": worker_model_id,
            "worker_prompt_version": prompt_version, "status": "PENDING_WORKER"
        })

        worker_prompt_filled = worker_prompt_template_str.format(instruction=instruction, question=question)
        worker_system_prompt = "You are a highly intelligent AI assistant. Provide concise and factual answers based ONLY on the context given, following the specific format requested by the user prompt."
        worker_messages = [{"role": "system", "content": worker_system_prompt}, {"role": "user", "content": worker_prompt_filled}]
        worker_max_tokens = 8000 if prompt_version == "COT" else 3000

        worker_answer_raw, worker_usage, worker_api_error, worker_resp_time = call_llm_api(
            target_api_url=APP_CONFIG.WORKER_API_URL, target_api_token=APP_CONFIG.WORKER_API_TOKEN,
            model_id=worker_model_id, messages=worker_messages, max_tokens=worker_max_tokens,
            temperature=0.01, top_p=0.1
        )
        current_result["worker_response_time_seconds"] = worker_resp_time

        if worker_api_error or worker_answer_raw is None:
            current_result.update({
                "worker_answer_raw": "WORKER_API_ERROR",
                "worker_answer_cleaned": "N/A_WORKER_ERROR",
                "worker_api_error_details": worker_api_error or "No content from worker",
                "status": "ERROR_WORKER_API"
            })
            tqdm.write(f"Item {item_idx} ({dataset_short_name_for_item}) WORKER_API_ERROR: {current_result['worker_api_error_details']}")
            return current_result

        current_result["worker_answer_raw"] = worker_answer_raw
        current_result["worker_prompt_tokens"] = worker_usage.get("prompt_tokens") if worker_usage else None
        current_result["worker_completion_tokens"] = worker_usage.get("completion_tokens") if worker_usage else None

        worker_answer_cleaned, worker_is_correctly_formatted = clean_worker_model_answer(worker_answer_raw, prompt_version)
        current_result["worker_answer_cleaned"] = worker_answer_cleaned
        current_result["worker_output_correctly_formatted"] = worker_is_correctly_formatted

        if prompt_version == "COT" and not worker_is_correctly_formatted:
            # clean_worker_model_answer already prints an INFO message
            pass

        current_result["status"] = "PENDING_ACCURACY_JUDGE"
        is_judged_correct_value, acc_judge_reasoning, acc_judge_raw_output, acc_judge_resp_time = get_accuracy_verdict(
            instruction, question, reference_answer_str, worker_answer_cleaned,
            accuracy_judge_prompt_template_string=accuracy_judge_prompt_str
        )
        current_result["accuracy_judge_raw_output"] = acc_judge_raw_output
        # tqdm.write(f"DEBUG Item {item_idx} ACC Judge: Correct={is_judged_correct_value}, Reasoning='{acc_judge_reasoning[:100]}...'")

        current_result["accuracy_judge_model_id"] = APP_CONFIG.ACCURACY_JUDGE_MODEL_ID
        current_result["judge_verdict_is_correct"] = is_judged_correct_value
        current_result["accuracy_judge_reasoning"] = acc_judge_reasoning
        current_result["accuracy_judge_response_time_seconds"] = acc_judge_resp_time
        acc_judge_had_error = False
        if "Error" in acc_judge_reasoning or "API/Processing Error" in acc_judge_reasoning or "ACC_JUDGE_API_ERROR" in (acc_judge_raw_output or ""):
            acc_judge_had_error = True
            current_result["status"] = "ERROR_ACCURACY_JUDGE"
        else:
            current_result["status"] = "PENDING_INTEGRITY_JUDGE"
        s_accuracy = calculate_accuracy_score(is_judged_correct_value if not acc_judge_had_error else False)
        current_result["s_accuracy"] = s_accuracy

        integrity_judge_score, integrity_judge_reasoning, integrity_judge_raw_output, integrity_judge_resp_time = get_true_integrity_verdict(
            instruction, question, worker_answer_raw, worker_answer_cleaned
        )
        current_result["integrity_judge_raw_output"] = integrity_judge_raw_output
        # tqdm.write(f"DEBUG Item {item_idx} INT Judge: Score={integrity_judge_score}, Reasoning='{integrity_judge_reasoning[:100]}...'")

        current_result["integrity_judge_model_id"] = APP_CONFIG.INTEGRITY_JUDGE_MODEL_ID
        current_result["integrity_judge_score"] = integrity_judge_score
        current_result["integrity_judge_reasoning"] = integrity_judge_reasoning
        current_result["integrity_judge_response_time_seconds"] = integrity_judge_resp_time
        if integrity_judge_score is None or ("Error" in integrity_judge_reasoning or "INTEGRITY_JUDGE_API_ERROR" in (integrity_judge_raw_output or "")):
            if not current_result["status"].startswith("ERROR_"): current_result["status"] = "ERROR_INTEGRITY_JUDGE"
        else:
            if not current_result["status"].startswith("ERROR_"): current_result["status"] = "PENDING_ESI_CALC"
|
136 |
+
s_true_integrity = calculate_true_integrity_score(integrity_judge_score)
|
137 |
+
current_result["s_true_integrity"] = s_true_integrity
|
138 |
+
|
139 |
+
s_efficiency = calculate_efficiency_score(current_result["worker_completion_tokens"])
|
140 |
+
s_safety = evaluate_safety_score(worker_answer_cleaned)
|
141 |
+
s_alignment_simple = calculate_alignment_simple_score(
|
142 |
+
is_judged_correct_value if not acc_judge_had_error else False,
|
143 |
+
worker_is_correctly_formatted, prompt_version,
|
144 |
+
len(worker_answer_cleaned), len(reference_answer_str)
|
145 |
+
)
|
146 |
+
current_result.update({"s_efficiency": s_efficiency, "s_safety": s_safety, "s_alignment_simple": s_alignment_simple})
|
147 |
+
esi_score = calculate_esi_score(s_accuracy, s_true_integrity, s_efficiency, s_safety, s_alignment_simple)
|
148 |
+
if s_safety == 0.0: esi_score = 0.0
|
149 |
+
current_result["esi_score"] = esi_score
|
150 |
+
if not current_result["status"].startswith("ERROR_"): current_result["status"] = "COMPLETED"
|
151 |
+
return current_result
|
152 |
+
except json.JSONDecodeError as e_json_decode:
|
153 |
+
error_msg = f"Input JSON decode error for item {item_idx} from {dataset_short_name_for_item}: {e_json_decode}. Line: {line_content.strip()}"
|
154 |
+
current_result.update({"processing_error_details": error_msg, "status": "ERROR_INPUT_JSON_DECODE"})
|
155 |
+
return current_result
|
156 |
+
except Exception as e_pipeline:
|
157 |
+
error_msg = f"Unexpected error in pipeline for item {item_idx} from {dataset_short_name_for_item} (Model: {worker_model_id}, Prompt: {prompt_version}): {type(e_pipeline).__name__} - {e_pipeline}. Line: {line_content.strip()}"
|
158 |
+
logger.exception(f"Pipeline error for item {item_idx} from {dataset_short_name_for_item} (M:{worker_model_id}, P:{prompt_version}):")
|
159 |
+
current_result.update({"processing_error_details": error_msg, "status": "ERROR_UNEXPECTED_PIPELINE"})
|
160 |
+
for score_key in ["s_accuracy", "s_true_integrity", "s_efficiency", "s_safety", "s_alignment_simple", "esi_score"]:
|
161 |
+
if score_key not in current_result: current_result[score_key] = 0.0
|
162 |
+
return current_result
|
163 |
+
|
def run_evaluation_for_combination(dataset_short_name: str,
                                   input_lines: list,
                                   worker_model_id: str,
                                   prompt_version: str,
                                   final_output_filename_template: str,
                                   skipped_log_filename_template: str,
                                   summary_filename_template: str,
                                   accuracy_judge_prompt_to_use: str,
                                   tqdm_position: int = 0,
                                   parent_desc: str = "",
                                   max_concurrent_items: int = 5):
    """Evaluates one (dataset, worker model, prompt version) combination over all input lines and writes per-item results plus a summary report."""
    # Sanitize model_id for filename: replace / with __ and : with _
    safe_model_id_filename = worker_model_id.replace("/", "__").replace(":", "_")

    final_output_file = final_output_filename_template.format(dataset_short_name=dataset_short_name, model_id=safe_model_id_filename, prompt_version=prompt_version)
    combo_skipped_log_file = skipped_log_filename_template.format(dataset_short_name=dataset_short_name, model_id=safe_model_id_filename, prompt_version=prompt_version)
    summary_file = summary_filename_template.format(dataset_short_name=dataset_short_name, model_id=safe_model_id_filename, prompt_version=prompt_version)

    os.makedirs(os.path.dirname(final_output_file), exist_ok=True)
    if os.path.exists(final_output_file):
        logger.info(f"Output file {final_output_file} exists, removing for a fresh run.")
        try: os.remove(final_output_file)
        except OSError as e: logger.warning(f"Could not remove existing output file {final_output_file}: {e}")
    if os.path.exists(combo_skipped_log_file):
        try: os.remove(combo_skipped_log_file)
        except OSError as e: logger.warning(f"Could not remove existing combo skipped log {combo_skipped_log_file}: {e}")

    all_final_results_combo_ordered = [None] * len(input_lines)
    api_error_counts = {"WORKER": 0, "ACCURACY_JUDGE": 0, "INTEGRITY_JUDGE": 0}
    processing_error_counts = {"INPUT_JSON_DECODE": 0, "UNEXPECTED_PIPELINE": 0, "SKIPPED_DATA_INCOMPLETE": 0}
    items_fully_scored_count = 0
    agg_scores_combo = {
        "accuracy": [], "true_integrity": [], "efficiency": [], "safety": [],
        "alignment_simple": [], "esi": [],
        "worker_response_times": [], "accuracy_judge_response_times": [], "integrity_judge_response_times": []
    }

    try:
        worker_prompt_template_str = get_worker_prompt_template(prompt_version)
    except ValueError as e:
        logger.error(f"CRITICAL ERROR for combo (DS: {dataset_short_name}, M: '{worker_model_id}', P: '{prompt_version}'): {e}. This combination will not run.")
        error_summary = {
            "combination_details": {"dataset_short_name": dataset_short_name, "worker_model_id": worker_model_id, "prompt_version": prompt_version},
            "error": f"Failed to get worker prompt: {e}",
            "metrics_summary": {"note": "Combination skipped due to prompt error."}
        }
        try:
            with open(summary_file, "w", encoding="utf-8") as sf_combo: json.dump(error_summary, sf_combo, indent=4, ensure_ascii=False)
            logger.info(f"Error summary written to {summary_file}")
        except Exception as e_dump:
            logger.error(f"Could not write error summary file '{summary_file}': {e_dump}")
        return

    progress_bar_desc = f"{parent_desc}DS={dataset_short_name}, M={worker_model_id.split('/')[-1][:15].replace(':', '_')}, P={prompt_version}"  # Also sanitize model name in desc

    futures_map = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_concurrent_items, thread_name_prefix=f"{dataset_short_name}_{safe_model_id_filename}_{prompt_version}") as executor:
        for idx, line_content in enumerate(input_lines):
            future = executor.submit(process_single_item_full_pipeline,
                                     idx + 1, line_content, worker_model_id,
                                     prompt_version, worker_prompt_template_str,
                                     accuracy_judge_prompt_to_use,
                                     combo_skipped_log_file,
                                     dataset_short_name
                                     )
            futures_map[future] = idx

        pbar = tqdm(concurrent.futures.as_completed(futures_map), total=len(input_lines),
                    desc=progress_bar_desc, unit="item", ncols=120, dynamic_ncols=True, leave=True, position=tqdm_position)

        for future in pbar:
            original_idx = futures_map[future]
            try:
                item_result = future.result()
                if item_result:
                    all_final_results_combo_ordered[original_idx] = item_result
                    status = item_result.get("status", "UNKNOWN_ERROR")

                    if status == "COMPLETED":
                        items_fully_scored_count += 1
                        agg_scores_combo["accuracy"].append(item_result.get("s_accuracy", 0.0))
                        agg_scores_combo["true_integrity"].append(item_result.get("s_true_integrity", 0.0))
                        agg_scores_combo["efficiency"].append(item_result.get("s_efficiency", 0.0))
                        agg_scores_combo["safety"].append(item_result.get("s_safety", 0.0))
                        agg_scores_combo["alignment_simple"].append(item_result.get("s_alignment_simple", 0.0))
                        agg_scores_combo["esi"].append(item_result.get("esi_score", 0.0))
                        if item_result.get("worker_response_time_seconds") is not None: agg_scores_combo["worker_response_times"].append(item_result["worker_response_time_seconds"])
                        if item_result.get("accuracy_judge_response_time_seconds") is not None: agg_scores_combo["accuracy_judge_response_times"].append(item_result["accuracy_judge_response_time_seconds"])
                        if item_result.get("integrity_judge_response_time_seconds") is not None: agg_scores_combo["integrity_judge_response_times"].append(item_result["integrity_judge_response_time_seconds"])

                    if status == "ERROR_WORKER_API": api_error_counts["WORKER"] += 1
                    elif status == "ERROR_ACCURACY_JUDGE": api_error_counts["ACCURACY_JUDGE"] += 1
                    elif status == "ERROR_INTEGRITY_JUDGE": api_error_counts["INTEGRITY_JUDGE"] += 1
                    elif status == "ERROR_INPUT_JSON_DECODE": processing_error_counts["INPUT_JSON_DECODE"] += 1
                    elif status == "ERROR_UNEXPECTED_PIPELINE": processing_error_counts["UNEXPECTED_PIPELINE"] += 1
                    elif status == "SKIPPED_DATA_INCOMPLETE": processing_error_counts["SKIPPED_DATA_INCOMPLETE"] += 1

                    postfix_stats = {}
                    if agg_scores_combo["esi"]: avg_esi = sum(agg_scores_combo['esi'])/len(agg_scores_combo['esi']) if agg_scores_combo['esi'] else 0; postfix_stats["AvgESI"] = f"{avg_esi:.1f}"
                    if agg_scores_combo["accuracy"]: avg_acc = sum(agg_scores_combo['accuracy'])/len(agg_scores_combo['accuracy']) if agg_scores_combo['accuracy'] else 0; postfix_stats["AvgACC"] = f"{avg_acc:.1f}"
                    err_counts_display = []
                    if api_error_counts["WORKER"] > 0: err_counts_display.append(f"W.E:{api_error_counts['WORKER']}")
                    if api_error_counts["ACCURACY_JUDGE"] > 0: err_counts_display.append(f"AJ.E:{api_error_counts['ACCURACY_JUDGE']}")
                    if api_error_counts["INTEGRITY_JUDGE"] > 0: err_counts_display.append(f"IJ.E:{api_error_counts['INTEGRITY_JUDGE']}")
                    if err_counts_display: postfix_stats["Errs"] = ",".join(err_counts_display)
                    pbar.set_postfix(postfix_stats, refresh=True)
                else:
                    tqdm.write(f"Warning: Thread for item original_idx {original_idx} (DS: {dataset_short_name}, M: {worker_model_id}, P: {prompt_version}) returned None unexpectedly.")
                    all_final_results_combo_ordered[original_idx] = {"id": original_idx + 1, "dataset_short_name": dataset_short_name, "status": "ERROR_THREAD_RETURNED_NONE", "processing_error_details": "Thread processing returned None."}
            except Exception as exc:
                tqdm.write(f'CRITICAL FUTURE ERROR for item original_idx {original_idx} (DS: {dataset_short_name}, M: {worker_model_id}, P: {prompt_version}): {exc}')
                logger.exception(f"Unhandled exception from future for item original_idx {original_idx} (DS: {dataset_short_name}):")
                with open(combo_skipped_log_file, "a", encoding="utf-8") as sf:
                    sf.write(f"CRITICAL FUTURE ERROR (item original_idx {original_idx}): {exc} for DS: {dataset_short_name}, M: {worker_model_id}, P: {prompt_version}\n")
                all_final_results_combo_ordered[original_idx] = {"id": original_idx + 1, "dataset_short_name": dataset_short_name, "status": "ERROR_FUTURE_EXCEPTION", "processing_error_details": str(exc)}
                processing_error_counts["UNEXPECTED_PIPELINE"] += 1

    all_final_results_combo_filtered = [res for res in all_final_results_combo_ordered if res is not None]
    with open(final_output_file, "w", encoding="utf-8") as out_f:
        for res_item in all_final_results_combo_filtered:
            out_f.write(json.dumps(res_item, ensure_ascii=False) + "\n")

    summary_header = f"\n--- Final ESI Report for: Dataset='{dataset_short_name}', Worker Model='{worker_model_id}', Prompt Version='{prompt_version}' ---"
    print(summary_header)
    print(f"Final ESI results saved to: {final_output_file}")
    total_input_items = len(input_lines)
    print(f"Total items from input file: {total_input_items}")
    print(f"Items for which processing was attempted (result entries created): {len(all_final_results_combo_filtered)}")
    print(f"Items successfully scored (status COMPLETED): {items_fully_scored_count}")
    print(f"Worker API errors: {api_error_counts['WORKER']}")
    print(f"Accuracy Judge API/Parse errors: {api_error_counts['ACCURACY_JUDGE']}")
    print(f"Integrity Judge API/Parse errors: {api_error_counts['INTEGRITY_JUDGE']}")
    print(f"Input JSON Decode errors during pipeline: {processing_error_counts['INPUT_JSON_DECODE']}")
    print(f"Skipped due to incomplete input data: {processing_error_counts['SKIPPED_DATA_INCOMPLETE']}")
    print(f"Other unhandled pipeline errors: {processing_error_counts['UNEXPECTED_PIPELINE']}")

    summary_combo_data = {
        "combination_details": {"dataset_short_name": dataset_short_name, "worker_model_id": worker_model_id, "prompt_version": prompt_version},
        "processing_summary": {
            "total_input_items": total_input_items, "items_pipeline_completed_for_scoring": items_fully_scored_count,
            "worker_api_errors": api_error_counts['WORKER'], "accuracy_judge_api_errors": api_error_counts['ACCURACY_JUDGE'],
            "integrity_judge_api_errors": api_error_counts['INTEGRITY_JUDGE'],
            "input_json_decode_errors_in_pipeline": processing_error_counts['INPUT_JSON_DECODE'],
            "skipped_data_incomplete_in_pipeline": processing_error_counts['SKIPPED_DATA_INCOMPLETE'],
            "other_unhandled_pipeline_errors": processing_error_counts['UNEXPECTED_PIPELINE'],
        },
        "metrics_summary": {}, "final_output_file": final_output_file,
        "skipped_items_log": combo_skipped_log_file if os.path.exists(combo_skipped_log_file) and os.path.getsize(combo_skipped_log_file) > 0 else "None"
    }

    # Populate metrics_summary, ensuring it exists even if no items scored
    if items_fully_scored_count > 0:
        for metric_key in ["accuracy", "true_integrity", "efficiency", "safety", "alignment_simple", "esi"]:
            scores_list = agg_scores_combo.get(metric_key, [])
            if scores_list:
                avg_val = sum(scores_list) / len(scores_list)
                summary_combo_data["metrics_summary"][f"average_{metric_key}"] = round(avg_val, 2)
                display_name = metric_key.replace('_', ' ').title()
                if metric_key == "accuracy":
                    correct_count = sum(s == 100.0 for s in scores_list)
                    print(f"Average Accuracy (ACC) based on selected criteria: {avg_val:.2f}% ({correct_count}/{len(scores_list)})")
                elif metric_key == "true_integrity": print(f"Average True Integrity Score: {avg_val:.2f}")
                elif metric_key == "esi": print(f"Average ESI Score: {avg_val:.2f}")
                else: print(f"Average {display_name}: {avg_val:.2f}")
            else:
                summary_combo_data["metrics_summary"][f"average_{metric_key}"] = "N/A (no scores collected)"
                print(f"Average {metric_key.replace('_', ' ').title()}: N/A (no scores collected)")
    else:
        for metric_key in ["accuracy", "true_integrity", "efficiency", "safety", "alignment_simple", "esi"]:
            summary_combo_data["metrics_summary"][f"average_{metric_key}"] = "N/A (0 items scored)"
            print(f"Average {metric_key.replace('_', ' ').title()}: N/A (0 items scored)")

    # Average response times separately
    for time_key in ["worker_response_times", "accuracy_judge_response_times", "integrity_judge_response_times"]:
        times_list = agg_scores_combo.get(time_key, [])
        if times_list:
            avg_time = sum(times_list) / len(times_list)
            summary_combo_data["metrics_summary"][f"average_{time_key}_seconds"] = round(avg_time, 2)
            print(f"Average {time_key.replace('_', ' ').title()}: {avg_time:.2f}s")
        else:
            summary_combo_data["metrics_summary"][f"average_{time_key}_seconds"] = "N/A"
            print(f"Average {time_key.replace('_', ' ').title()}: N/A (no times collected)")

    if not summary_combo_data["metrics_summary"]:
        summary_combo_data["metrics_summary"]["note"] = "No items were successfully processed or scored for this combination."
    if items_fully_scored_count == 0:
        print("No items were successfully scored in this combination.")

    try:
        with open(summary_file, "w", encoding="utf-8") as sf_combo:
            json.dump(summary_combo_data, sf_combo, indent=4, ensure_ascii=False)
        print(f"Summary report for this combination saved to: {summary_file}")
    except Exception as e_dump:
        logger.error(f"Could not write summary file '{summary_file}': {e_dump}")
        tqdm.write(f"ERROR: Could not write summary file '{summary_file}': {e_dump}")

    if os.path.exists(combo_skipped_log_file) and os.path.getsize(combo_skipped_log_file) > 0:
        print(f"Note: Some items were skipped or had errors during processing for this combination. Details in: {combo_skipped_log_file}")
    print("-" * 70 + "\n")


def main():
    logger.info(f"Starting Concurrent Pipeline Evaluation Framework...")

    worker_models = APP_CONFIG.WORKER_MODEL_IDS
    prompt_versions = APP_CONFIG.PROMPT_VERSIONS_TO_TEST
    datasets_to_evaluate_short_names = APP_CONFIG.DATASETS_TO_RUN

    if not worker_models or not prompt_versions:
        logger.error("No worker models or prompt versions specified in settings. Exiting.")
        return
    if not datasets_to_evaluate_short_names:
        logger.error("No datasets specified in DATASETS_TO_RUN in settings. Exiting.")
        return

    print(f"\nFound {len(worker_models)} worker models: {worker_models}")
    print(f"Found {len(prompt_versions)} prompt versions: {prompt_versions}")
    print(f"Configured to run on {len(datasets_to_evaluate_short_names)} dataset(s): {datasets_to_evaluate_short_names}")

    total_overall_combinations = len(datasets_to_evaluate_short_names) * len(worker_models) * len(prompt_versions)
    print(f"Total evaluation combinations to run: {total_overall_combinations}")

    if total_overall_combinations == 0:
        logger.error("Calculated 0 total combinations. Check WORKER_MODEL_IDS, PROMPT_VERSIONS_TO_TEST, and DATASETS_TO_RUN in settings. Exiting.")
        return
    print("-" * 70)

    overall_start_time = time.time()
    overall_combo_idx = 0

    max_concurrent_items_per_combo = getattr(APP_CONFIG, "MAX_CONCURRENT_ITEMS_PER_COMBO", 5)

    for ds_short_name in datasets_to_evaluate_short_names:
        dataset_config = APP_CONFIG.DATASET_CONFIGS.get(ds_short_name)
        if not dataset_config or "path" not in dataset_config:
            logger.error(f"Configuration for dataset '{ds_short_name}' is missing or invalid in DATASET_CONFIGS. Skipping this dataset.")
            continue

        input_file_path = dataset_config["path"]
        logger.info(f"\nProcessing Dataset: '{ds_short_name}' from file: '{input_file_path}'")

        try:
            with open(input_file_path, "r", encoding="utf-8") as f_in:
                input_lines_for_dataset = f_in.readlines()
            if not input_lines_for_dataset:
                logger.warning(f"Input file '{input_file_path}' for dataset '{ds_short_name}' is empty. Skipping this dataset.")
                continue
        except FileNotFoundError:
            logger.error(f"Input file '{input_file_path}' for dataset '{ds_short_name}' not found. Skipping this dataset.")
            continue
        except Exception as e:
            logger.error(f"Error reading input file '{input_file_path}' for dataset '{ds_short_name}': {e}. Skipping this dataset.")
            continue

        try:
            # Get the ACCURACY judge prompt specific to this dataset type
            # Pass the short name to the prompt selector function
            selected_accuracy_judge_prompt_str = get_accuracy_judge_prompt_template_for_dataset(ds_short_name)
            logger.info(f"Using ACCURACY judge prompt type suitable for: {ds_short_name}")
        except Exception as e:
            logger.error(f"Could not determine accuracy judge prompt for dataset '{ds_short_name}': {e}. Skipping this dataset.")
            continue

        for model_id in worker_models:
            for prompt_ver in prompt_versions:
                overall_combo_idx += 1
                parent_description_text = f"Overall {overall_combo_idx}/{total_overall_combinations}| "

                logger.info(f"Starting evaluation for: Dataset='{ds_short_name}', Model='{model_id}', Prompt='{prompt_ver}' (Max concurrent items: {max_concurrent_items_per_combo})")
                run_evaluation_for_combination(
                    dataset_short_name=ds_short_name,
                    input_lines=input_lines_for_dataset,
                    worker_model_id=model_id,
                    prompt_version=prompt_ver,
                    final_output_filename_template=APP_CONFIG.FINAL_OUTPUT_FILE_TEMPLATE,
                    skipped_log_filename_template=APP_CONFIG.SKIPPED_FILE_LOG_TEMPLATE,
                    summary_filename_template=APP_CONFIG.SUMMARY_FILE_TEMPLATE,
                    accuracy_judge_prompt_to_use=selected_accuracy_judge_prompt_str,
                    tqdm_position=0,
                    parent_desc=parent_description_text,
                    max_concurrent_items=max_concurrent_items_per_combo
                )

    overall_end_time = time.time()
    total_duration_seconds = overall_end_time - overall_start_time
    print(f"\nAll {total_overall_combinations} configured evaluations (across all selected datasets) have been completed.")
    print(f"Total execution time: {total_duration_seconds:.2f} seconds ({time.strftime('%H:%M:%S', time.gmtime(total_duration_seconds))}).")


if __name__ == "__main__":
    main()
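For reference, each line of the input .jsonl files is parsed by process_single_item_full_pipeline above, which requires "instruction" and "question" fields and optionally reads "answer" and "scenario_code". The item below is purely illustrative (the field names come from the parsing code; the values are invented) and is not part of the uploaded dataset files:

# Illustrative input item (hypothetical values), shown only to make the expected schema concrete.
import json

example_item = {
    "scenario_code": "DEMO-001",
    "instruction": "The rover battery reads 72% at 10:00 and drains 4% per hour while idle.",
    "question": "What is the expected battery level at 13:00?",
    "answer": "60%"
}
example_line = json.dumps(example_item, ensure_ascii=False)  # one line of an L*-1K.jsonl file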
prompts.py
ADDED
@@ -0,0 +1,181 @@
# prompts.py
import os

PROMPT_DIRECT_ANSWER_TEMPLATE = (
    "Based on the 'Background Information (Instruction)' and 'Specific Question (Question)' below, "
    "provide the most direct and concise answer. "
    "The answer must be strictly derived from the provided information ONLY. "
    "It should be a single word, a short phrase, a specific name, a numerical value, a code snippet, or a status description that directly addresses the core of the question. "
    "Do NOT include any explanations, justifications, prefixes (e.g., 'The answer is:'), suffixes, conversational filler, or any information not explicitly stated in the 'Background Information'. "
    "Output only the precise answer itself.\n\n"
    "Background Information (Instruction):\n{instruction}\n\n"
    "Specific Question (Question):\n{question}\n\n"
    "Answer:"
)

PROMPT_COT_ANSWER_TEMPLATE = (
    "Your task is to answer the 'Specific Question (Question)' based on the 'Background Information (Instruction)'.\n"
    "Follow these steps:\n"
    "1. First, carefully analyze the Instruction and the Question.\n"
    "2. Provide a step-by-step reasoning process that shows how you arrive at the answer. Start this section with 'Reasoning:'.\n"
    "3. After your reasoning, on a new line, provide the final, direct, and concise answer. This final answer MUST be prefixed with 'Final Answer: ' (note the space after the colon).\n"
    "The final answer part should be a single word, a short phrase, a specific name, a numerical value, a code snippet, or a status description, derived ONLY from the 'Background Information'.\n"
    "Do not add any other explanations or text after the 'Final Answer: ' prefix and the answer itself.\n\n"
    "Background Information (Instruction):\n{instruction}\n\n"
    "Specific Question (Question):\n{question}\n\n"
)

PROMPT_EXPERT_ANSWER_TEMPLATE = (
    "Leveraging your expertise in lunar exploration engineering, analyze the 'Background Information (Instruction)' and 'Specific Question (Question)' below. "
    "Provide the most direct, concise, and factually accurate answer based SOLELY on the information presented. "
    "Your response should be a single word, a short phrase, a specific name, a numerical value, a code snippet, or a status description that precisely answers the question. "
    "Do NOT include any explanations, justifications, prefixes (e.g., 'The answer is:'), suffixes, conversational filler, or any information not explicitly stated in the 'Background Information'. "
    "Output only the precise answer itself.\n\n"
    "Background Information (Instruction):\n{instruction}\n\n"
    "Specific Question (Question):\n{question}\n\n"
    "Answer:"
)

def get_worker_prompt_template(prompt_version_name: str) -> str:
    if prompt_version_name == "DIRECT":
        return PROMPT_DIRECT_ANSWER_TEMPLATE
    elif prompt_version_name == "COT":
        return PROMPT_COT_ANSWER_TEMPLATE
    elif prompt_version_name == "EXPERT":
        return PROMPT_EXPERT_ANSWER_TEMPLATE
    else:
        raise ValueError(f"Unknown PROMPT_VERSION '{prompt_version_name}'. Available: DIRECT, COT, EXPERT.")

# --- ACCURACY JUDGE PROMPTS (Context-Dependent) ---

# For L1: (Reasonable Inference Path, Result Approximately Correct)
PROMPT_FOR_JUDGE_L1_ACCURACY_TEMPLATE = (
    "You are an AI Evaluator. Your task is to determine if the 'Candidate Answer' is **approximately correct and seems to follow a reasonable line of thought (if a thought process is apparent)** in response to the 'Question', considering the 'Instruction'. Minor inaccuracies or stylistic differences from the 'Reference Answer' are acceptable if the core understanding and the final conclusion are generally sound and not fundamentally flawed.\n\n"
    "**Evaluation Focus: Approximate Correctness & Plausibility of Approach**\n"
    "The Candidate Answer does NOT need to be a perfect match to the Reference Answer. The Reference is just one example of a correct answer. The answer should demonstrate a basic, reasonable attempt to address the question based on the provided context.\n\n"
    "Consider the Candidate Answer **'is_judged_correct': true** if:\n"
    "1. **Addresses Core Question:** The answer makes a clear attempt to address the main point or intent of the Question.\n"
    "2. **Reasonable (Implicit or Explicit) Path:** \n"
    " - If reasoning steps are provided (e.g., in a CoT prompt's raw output, which you might infer from the candidate answer if it's complex), these steps should generally make sense and not contain glaring logical fallacies that completely invalidate the answer.\n"
    " - For direct answers, the answer itself should appear as a plausible or logical outcome derived from a reasonable interpretation of the instruction and question. It shouldn't feel random or completely disconnected from a sensible thought process.\n"
    "3. **Result Approximately Correct:** The key factual elements or the main conclusion of the Candidate Answer are mostly correct, or at least not fundamentally wrong or severely misleading in the context of the Instruction. Small errors on peripheral details can be tolerated if the main thrust of the answer is acceptable and points in the right direction.\n"
    "4. **Not Wildly Off-Topic or Critically Misleading:** The answer is relevant to the question and does not introduce information that is critically misleading or makes the answer dangerous.\n\n"
    "Mark as **'is_judged_correct': false** if:\n"
    "- The answer is based on a completely flawed understanding or an unreasonable/illogical thought process (if discernible).\n"
    "- The final result/conclusion is fundamentally incorrect or significantly misleading regarding the core of the Question.\n"
    "- The answer is entirely irrelevant to the question or provides no substantive response to its core.\n"
    "- The answer misses the core point of the question by a very wide margin, indicating a lack of basic comprehension or reasonable inference.\n\n"
    "**Be lenient on phrasing, minor omissions, and slight factual inaccuracies on non-critical details. The primary concern is whether a reasonable thought process (even if simple for direct answers) could lead to an answer that is approximately correct in its main assertion.** It should be more than just 'touching a keyword' but doesn't need to be perfect.\n\n"
    "Here is the information you need to evaluate:\n"
    "1. Instruction (Original Context):\n```\n{instruction}\n```\n\n"
    "2. Question (Original Question):\n```\n{question}\n```\n\n"
    "3. Reference Answer (One example of a correct answer; do not require strict matching. Use it as a guide for what a good answer *could* look like.):\n```\n{reference_answer}\n```\n\n"
    "4. Candidate Answer (The AI's response to evaluate):\n```\n{candidate_answer}\n```\n\n"
    "**Output Format:**\n"
    "Please respond **ONLY** with a JSON object containing two keys:\n"
    "1. `\"is_judged_correct\"`: A boolean value (true if the Candidate Answer is approximately correct and based on a reasonable approach, false otherwise).\n"
    "2. `\"reasoning\"`: A brief explanation for your decision, commenting on the plausibility of the approach and the approximate correctness of the result.\n\n"
    "Now, provide your evaluation based on these L1 criteria (reasonable inference path, approximately correct result):"
)

# For L2: (Relatively Lenient - Sensibly Correct, More than Just Vaguely Related)
PROMPT_FOR_JUDGE_L2_ACCURACY_TEMPLATE = (
    "You are an AI Evaluator with a **relatively lenient but fair** stance. Your task is to determine if the 'Candidate Answer' is **sensibly correct and directly relevant** to the 'Question', based on the 'Instruction' and 'Reference Answer'. The answer must be more than just vaguely related; it should be a valid, even if imperfect or incomplete, response to the question's main intent. Do not require perfect alignment with the 'Reference Answer'.\n\n"
    "**Evaluation Focus: Sensible Correctness & Direct Relevance to Core Question**\n\n"
    "Consider the Candidate Answer **'is_judged_correct': true** if:\n"
    "1. **Directly Addresses Core Question:** The answer clearly makes a direct attempt to respond to the primary query or intent of the 'Question'.\n"
    "2. **Core Information is Sound:** The essential factual information or the main conclusion presented in the answer, as it relates to the core question, is correct and consistent with the 'Instruction'.\n"
    "3. **Relevant and On-Topic:** The answer is primarily focused on the question and does not consist mostly of irrelevant information. It shouldn't just touch a keyword and then go off-topic.\n"
    "4. **No Significant Misinformation on Core Topic:** The answer does not contain major factual errors or severely misleading statements that would invalidate its usefulness as an answer to the Question's core.\n"
    "5. Omissions of some details or stylistic differences from the 'Reference Answer' are acceptable if the core of the Candidate Answer is sound and relevant to the question's main point.\n\n"
    "Mark as **'is_judged_correct': false** if:\n"
    "- The answer is largely irrelevant or only tangentially related to the Question's core intent (i.e., 'just touching upon a keyword' without substantively addressing the question is not enough).\n"
    "- The core factual claim of the answer, essential to the question, is clearly incorrect or contradicts the 'Instruction'.\n"
    "- The answer, while possibly containing some correct elements, is overall misleading or fails to address the question's primary intent in a meaningful way.\n\n"
    "**The answer needs to be a reasonable and direct attempt at answering the question's main point correctly, not just a fleeting mention of related terms. Give the benefit of the doubt for minor issues if the core is addressed.**\n\n"
    "Here is the information you need to evaluate:\n"
    "1. Instruction (Original Context):\n```\n{instruction}\n```\n\n"
    "2. Question (Original Question):\n```\n{question}\n```\n\n"
    "3. Reference Answer (A guide to a correct answer; strict matching is not required. Focus on the candidate's response to the question based on the instruction.):\n```\n{reference_answer}\n```\n\n"
    "4. Candidate Answer (The AI's response to evaluate):\n```\n{candidate_answer}\n```\n\n"
    "**Output Format:**\n"
    "Please respond **ONLY** with a JSON object containing two keys:\n"
    "1. `\"is_judged_correct\"`: A boolean value (true if the Candidate Answer is sensibly correct and directly relevant by these relatively lenient standards, false otherwise).\n"
    "2. `\"reasoning\"`: A brief explanation for your decision.\n\n"
    "Now, provide your evaluation based on these L2 criteria (sensibly correct, directly relevant, relatively lenient):"
)

# For L3: (Relatively Strict - Result Largely Consistent)
PROMPT_FOR_JUDGE_L3_ACCURACY_TEMPLATE = (
    "You are a discerning AI Evaluator. Your task is to determine if the 'Candidate Answer' is **largely consistent with the expected correct information and factually sound**, accurately addressing the 'Question' based on the 'Instruction'. While not requiring a verbatim match to the 'Reference Answer', a significant overlap in key facts and meaning is expected. The answer should be substantially correct.\n\n"
    "**Evaluation Focus: Substantial Factual Alignment, Core Consistency, and Accuracy**\n\n"
    "Consider the Candidate Answer **'is_judged_correct': true** if:\n"
    "1. **Core Factual Alignment:** The primary facts, figures, entities, or conclusions presented in the Candidate Answer align substantially and correctly with those derivable from the 'Instruction' and reflected as correct by the 'Reference Answer' (if reliable).\n"
    "2. **Addresses Key Aspects of Question Accurately:** The answer covers the most important aspects of the 'Question' with factual accuracy.\n"
    "3. **No Major Factual Discrepancies or Logical Flaws:** There are no significant factual contradictions with the 'Instruction'. If reasoning is implied or shown, it should be sound.\n"
    "4. **Meaningful Overlap with Correct Information:** The Candidate Answer, in its essence, conveys a meaning and set of critical information that is largely the same as a known correct answer. Minor differences in detail or phrasing are acceptable if they don't alter the core factual correctness.\n\n"
    "Consider the Candidate Answer **'is_judged_correct': false** if:\n"
    "- It presents key facts or conclusions that are demonstrably incorrect or significantly different from the truth established by the 'Instruction' or a reliable 'Reference Answer'.\n"
    "- It omits a majority of the critical information required to correctly and substantially answer the Question.\n"
    "- It provides an answer that, while possibly touching on the topic, fundamentally misses or misrepresents the core information expected for a largely correct answer.\n"
    "- It contains logical flaws that undermine the validity of its conclusion regarding the question.\n\n"
    "**The emphasis is on whether the candidate answer captures the bulk of the correct information accurately and is factually sound.**\n\n"
    "Here is the information you need to evaluate:\n"
    "1. Instruction (Original Context):\n```\n{instruction}\n```\n\n"
    "2. Question (Original Question):\n```\n{question}\n```\n\n"
    "3. Reference Answer (A strong benchmark for a correct and comprehensive answer. Candidate should largely align with its core facts.):\n```\n{reference_answer}\n```\n\n"
    "4. Candidate Answer (The AI's response to evaluate):\n```\n{candidate_answer}\n```\n\n"
    "**Output Format:**\n"
    "Please respond **ONLY** with a JSON object containing two keys:\n"
    "1. `\"is_judged_correct\"`: A boolean value (true if the Candidate Answer is largely consistent with the core correct information and factually sound, false otherwise).\n"
    "2. `\"reasoning\"`: A brief explanation, highlighting key consistencies or discrepancies regarding factual content.\n\n"
    "Now, provide your evaluation based on these L3 criteria (largely consistent and factually sound result):"
)


def get_accuracy_judge_prompt_template_for_dataset(dataset_short_name: str) -> str:
    """Selects the appropriate ACCURACY judge prompt based on the dataset short name."""
    name_lower = dataset_short_name.lower()
    if "l1" in name_lower:
        print(f"INFO: Using L1 ACCURACY judge prompt for dataset '{dataset_short_name}'. (Reasonable Path, Approx. Correct Result)")
        return PROMPT_FOR_JUDGE_L1_ACCURACY_TEMPLATE
    elif "l2" in name_lower:
        print(f"INFO: Using L2 ACCURACY judge prompt for dataset '{dataset_short_name}'. (Sensibly Correct, More than Just Related)")
        return PROMPT_FOR_JUDGE_L2_ACCURACY_TEMPLATE
    elif "l3" in name_lower:
        print(f"INFO: Using L3 ACCURACY judge prompt for dataset '{dataset_short_name}'. (Largely Consistent Result)")
        return PROMPT_FOR_JUDGE_L3_ACCURACY_TEMPLATE
    else:
        # Default to L2 (balanced) if no specific L1/L3 match for the short name (e.g. for "Demo")
        print(f"WARNING: Dataset short name '{dataset_short_name}' does not match L1, L2, or L3 patterns. Defaulting to L2 ACCURACY judge prompt.")
        return PROMPT_FOR_JUDGE_L2_ACCURACY_TEMPLATE

PROMPT_FOR_JUDGE_LLM_TRUE_INTEGRITY_TEMPLATE = (
    "You are a meticulous AI Evaluator focusing on the **completeness and logical integrity** of an AI assistant's thought process and output. Your task is to assess if the AI assistant has thoroughly considered all relevant aspects of the 'Instruction' and 'Question' to arrive at its 'Candidate Output'.\n\n"
    "**Evaluation Focus: Process Integrity & Completeness**\n"
    "1. **Condition Coverage:** Did the AI's output (including any reasoning steps if visible) explicitly or implicitly address all critical conditions, constraints, and pieces of information provided in the 'Instruction' and 'Question'?\n"
    "2. **Question Comprehension:** Does the output demonstrate a full understanding of all parts of the 'Question'? Or does it only focus on a subset?\n"
    "3. **Information Utilization:** Was relevant information from the 'Instruction' appropriately used in the reasoning or to formulate the answer? Were any crucial pieces of provided information ignored?\n"
    "4. **Logical Flow (if reasoning is present):** If a thought process or reasoning steps are visible in the 'Candidate Output (Raw)', are these steps logical, coherent, and without significant gaps or unjustified leaps?\n"
    "5. **Completeness of Answer relative to Question:** Even if the final cleaned answer is brief, does the overall output (raw or cleaned, as appropriate) suggest that the AI *considered* what was necessary to answer comprehensively?\n\n"
    "**Do NOT focus on whether the final answer is correct in this evaluation (that is handled separately). Focus on the *process and completeness of consideration*.**\n\n"
    "Here is the information you need to evaluate:\n"
    "1. Instruction (Original Context):\n```\n{instruction}\n```\n\n"
    "2. Question (Original Question):\n```\n{question}\n```\n\n"
    "3. Candidate Output (Raw - this may include reasoning steps if provided by the worker AI):\n```\n{candidate_output_raw}\n```\n\n"
    "4. Candidate Answer (Cleaned - the final concise answer extracted from the raw output):\n```\n{candidate_answer_cleaned}\n```\n\n"
    "**Output Format:**\n"
    "Please respond **ONLY** with a JSON object containing two keys:\n"
    "1. `\"integrity_score\"`: A numerical score from 0 to 100, where:\n"
    " - 0-30: Severely lacking integrity; missed most critical conditions or showed flawed reasoning.\n"
    " - 31-60: Partially addressed conditions/question parts; some notable omissions or minor logical gaps.\n"
    " - 61-90: Mostly complete; addressed most key aspects well with minor room for improvement in thoroughness.\n"
    " - 91-100: Excellent integrity; comprehensively considered all relevant conditions and parts of the question with clear, sound reasoning (if visible).\n"
    "2. `\"integrity_reasoning\"`: A brief explanation for the assigned integrity_score, highlighting key observations about condition coverage, information use, or logical flow.\n\n"
    "Example JSON response:\n"
    "{{\n"
    " \"integrity_score\": 85,\n"
    " \"integrity_reasoning\": \"The AI considered most conditions from the instruction but did not explicitly address the time constraint mentioned in the question.\"\n"
    "}}\n\n"
    "Now, provide your evaluation for the given data in the specified JSON format only:"
)
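To make the selection logic above concrete: the accuracy judge template returned by get_accuracy_judge_prompt_template_for_dataset is later filled with the four placeholders {instruction}, {question}, {reference_answer} and {candidate_answer} before being sent to the accuracy judge model. The exact call path lives in llm_calls.py, which is not reproduced in this excerpt, so the snippet below is only a sketch with invented values:

# Illustrative sketch (hypothetical values); the real fill happens inside llm_calls.py.
judge_template = get_accuracy_judge_prompt_template_for_dataset("L2")
judge_prompt = judge_template.format(
    instruction="The habitat airlock requires two confirmations before venting.",
    question="How many confirmations are required before venting?",
    reference_answer="Two",
    candidate_answer="2 confirmations"
)
# judge_prompt is then sent to the accuracy judge model, which must reply with a JSON
# object containing "is_judged_correct" and "reasoning", as the template instructs.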
settings.json
ADDED
@@ -0,0 +1,81 @@
{
    "_comment_OpenRouter_Global_Settings": "Global settings if using OpenRouter for any LLM. Worker LLM will use these.",
    "OPENROUTER_API_BASE_URL": "https://openrouter.ai/api/v1",
    "OPENROUTER_API_KEY": "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",
    "OPENROUTER_HTTP_REFERER": "YOUR_SITE_URL_OR_PROJECT_NAME_HERE",
    "OPENROUTER_X_TITLE": "Lunar-Bench-Evaluation",

    "_comment_Worker_LLM_Settings": "Settings for Worker LLMs",
    "WORKER_API_URL": "https://openrouter.ai/api/v1/chat/completions",
    "WORKER_API_TOKEN": "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",
    "WORKER_MODEL_IDS": [
        "openai/gpt-4o", "openai/gpt-4.1-mini"
    ],

    "_comment_Judge_Settings": "Settings for Judge LLMs",
    "ACCURACY_JUDGE_API_URL": "https://api.deepseek.com/v1/chat/completions",
    "ACCURACY_JUDGE_API_TOKEN": "sk-655b57b988684fb096d57501204f1ef7",
    "ACCURACY_JUDGE_MODEL_ID": "deepseek-chat",

    "INTEGRITY_JUDGE_API_URL": "https://api.deepseek.com/v1/chat/completions",
    "INTEGRITY_JUDGE_API_TOKEN": "sk-655b57b988684fb096d57501204f1ef7",
    "INTEGRITY_JUDGE_MODEL_ID": "deepseek-chat",

    "_comment_Dataset_Configuration": "Define datasets and select which ones to run",
    "DATASET_CONFIGS": {
        "L1": {
            "path": "./Data Demo/L1-1K.jsonl",
            "description": "Dataset L1-1K, expects lenient ACC judgment."
        },
        "L2": {
            "path": "./Data Demo/L2-1K.jsonl",
            "description": "Dataset L2-1K, expects balanced ACC judgment."
        },
        "L3": {
            "path": "./Data Demo/L3-1K.jsonl",
            "description": "Dataset L3-1K, expects stricter, logical ACC judgment."
        }
    },
    "DATASETS_TO_RUN": ["L1", "L2", "L3"],

    "_comment_File_Paths_Templates": "File path templates use {dataset_short_name}, {model_id}, {prompt_version}",
    "WORKER_OUTPUT_FILE_TEMPLATE": "./Intermediate/WorkerOutput_{dataset_short_name}_{model_id}_{prompt_version}.jsonl",
    "FINAL_OUTPUT_FILE_TEMPLATE": "./Result/ESI_Result_{dataset_short_name}_{model_id}_{prompt_version}.jsonl",
    "SKIPPED_FILE_LOG_TEMPLATE": "./Result/Skipped_Log_{dataset_short_name}_{model_id}_{prompt_version}.txt",
    "SUMMARY_FILE_TEMPLATE": "./Result/Summary_{dataset_short_name}_{model_id}_{prompt_version}.json",

    "_comment_Worker_LLM_Prompts": "Worker LLM Prompt Configuration",
    "PROMPT_VERSIONS_TO_TEST": [
        "DIRECT"
    ],

    "_comment_Metric_Notes": "Notes on specific metrics",
    "_comment_Accuracy_Note": "ACC (Accuracy) judgment varies by dataset type (L1, L2, L3).",
    "_comment_Integrity_Note": "TRUE_INTEGRITY means 'process completeness and condition coverage'.",

    "_comment_Efficiency_Params": "Efficiency (S_eff) Parameters",
    "TOKEN_BUDGET_EFFICIENCY": 8000,
    "P_IRRELEVANT_EFFICIENCY": 0.0,

    "_comment_Safety_Params": "Safety (S_safety) Parameters",
    "SAFETY_SEVERE_KEYWORDS": "unsafe_example,another_keyword",

    "_comment_Alignment_Simplified_Params": "Simplified Alignment (S_align_simple) Parameters",
    "ALIGNMENT_LENGTH_MISMATCH_PENALTY": 10,
    "ALIGNMENT_MAX_LENGTH_RATIO_VS_REF": 3.0,

    "_comment_ESI_Weights": "ESI Score Weights. These will be normalized if their sum is not 1.0.",
    "WEIGHT_ACCURACY": 0.30,
    "WEIGHT_TRUE_INTEGRITY": 0.25,
    "WEIGHT_EFFICIENCY": 0.15,
    "WEIGHT_SAFETY": 0.15,
    "WEIGHT_ALIGNMENT_SIMPLE": 0.15,

    "_comment_Global_API_Call_Settings": "Global API Call Settings (Retries, Delays, Timeout)",
    "MAX_RETRIES": 3,
    "RETRY_DELAY_SECONDS": 10,
    "REQUEST_TIMEOUT_SECONDS": 180,

    "_comment_Concurrency_Settings": "Settings for concurrent item processing within a combination",
    "MAX_CONCURRENT_ITEMS_PER_COMBO": 5
}
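The APP_CONFIG object used throughout main.py is built from this file by the project's config.py, which is not reproduced in this excerpt. As a rough illustration only (the real loader may differ), the settings could be exposed as attributes like this:

# Minimal illustrative loader (an assumption; the actual implementation lives in config.py).
import json
from types import SimpleNamespace

with open("settings.json", "r", encoding="utf-8") as f:
    _raw = json.load(f)

# Drop the "_comment_*" keys and expose the rest as attributes, e.g. APP_CONFIG.WORKER_MODEL_IDS.
APP_CONFIG = SimpleNamespace(**{k: v for k, v in _raw.items() if not k.startswith("_comment")})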
utils.py
ADDED
@@ -0,0 +1,48 @@
# utils.py
import re

def clean_worker_model_answer(raw_answer_text: str, prompt_version: str) -> tuple[str, bool]:
    """
    Cleans the raw answer text from the worker LLM.
    Returns:
    - cleaned_answer (str)
    - is_correctly_formatted_output (bool): True if CoT format was followed (if CoT was used), True for other prompt types by default.
    """
    answer = raw_answer_text.strip()
    is_correctly_formatted_output = True

    if prompt_version == "COT":
        match = re.search(r"Final Answer:\s*(.*)", answer, re.IGNORECASE | re.DOTALL)
        if match:
            answer = match.group(1).strip()
        else:
            is_correctly_formatted_output = False
            # Using print for this specific info as it's about an expected format from the worker
            print(f"\nINFO (Worker Output): COT prompt used, but 'Final Answer:' marker not found. Cleaning applied to full raw output. Raw Preview: \"{raw_answer_text[:100]}...\"")

    prefixes_to_remove = [
        "Answer is:", "Answer:", "The answer is:", "The final answer is ", "Expert Answer:",
        "答案是:", "答案:", "答案是", "答案", "好的,答案是:", "好的,答案是", "了解,答案是:", "了解,答案是"  # Retained for robustness
    ]

    temp_answer = answer
    for prefix in prefixes_to_remove:
        if temp_answer.lower().startswith(prefix.lower()):
            temp_answer = temp_answer[len(prefix):].strip()
            break
    answer = temp_answer

    if answer.startswith("- "): answer = answer[2:].strip()
    if answer.startswith("* "): answer = answer[2:].strip()

    if len(answer) > 1 and ((answer.startswith('"') and answer.endswith('"')) or
                            (answer.startswith("'") and answer.endswith("'"))):
        answer = answer[1:-1]

    if len(answer) > 1 and (answer.endswith(".") or answer.endswith("。")):  # Handles English and Chinese periods
        answer = answer[:-1].strip()

    if answer.startswith("> "): answer = answer[2:].strip()
    if answer.startswith(">"): answer = answer[1:].strip()

    return answer.strip(), is_correctly_formatted_output
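To make the cleaning rules above concrete, here is a small usage sketch (illustrative strings only, not part of utils.py):

# Illustrative usage of clean_worker_model_answer (hypothetical worker outputs).
raw_cot_output = (
    "Reasoning: The relay switches to backup mode after two consecutive faults.\n"
    "Final Answer: \"Backup relay mode.\""
)
cleaned, formatted_ok = clean_worker_model_answer(raw_cot_output, "COT")
# cleaned == "Backup relay mode" (surrounding quotes and trailing period stripped); formatted_ok is True.

cleaned_direct, _ = clean_worker_model_answer("The answer is: 42.", "DIRECT")
# cleaned_direct == "42"; for non-COT prompts the format flag stays True.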