xihongshi111 committed (verified)
Commit f2440bb · Parent: f9e1473

Update README.md
Files changed (1): README.md (+93, -81)

README.md (updated):

---
license: mit
task_categories:
- text2text-generation
language:
- en
tags:
- Lunar
- Benchmark
size_categories:
- 1K<n<10K
---
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/mit)

## 🌟 Overview

**Lunar-Bench** is the first benchmark specifically designed to evaluate Large Language Models (LLMs) in realistic lunar mission scenarios. Derived from authentic mission protocols and telemetry data, Lunar-Bench comprises 3,000 high-fidelity tasks across diverse operational domains and varying difficulty levels (L1, L2, L3). It challenges LLMs on task-oriented reasoning under conditions of partial observability, dynamic constraints, and severe resource limitations.

**Key Features**:

![image](https://github.com/user-attachments/assets/e73253da-15d6-4a36-8926-770ebf541206)

## 📊 ESI Metric Framework

To move beyond conventional task-level accuracy, the **Environmental Scenario Indicators (ESI)** provide a structured, multi-faceted framework for quantifying the nuanced qualities of LLM reasoning within mission-critical lunar contexts. While standard Accuracy captures final correctness, ESI is designed to dissect how models reason, plan, and interact.

![image](https://github.com/user-attachments/assets/dfb9aeaf-7298-48ce-8e34-6db1b7619d1c)
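
The precise scoring logic lives in `evaluation_metrics.py` and is driven by the metric parameters and ESI weights in `settings.json`. As a purely illustrative sketch (the dimension names, weights, and aggregation below are assumptions for illustration, not the benchmark's actual definitions), an ESI-style composite can be thought of as a weighted combination of per-dimension sub-scores:

```python
from typing import Dict


def combine_esi(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Aggregate per-dimension scores in [0, 1] into a single weighted composite.

    Dimension names (e.g. "efficiency", "safety", "alignment") are illustrative;
    the real dimensions and weights are configured in settings.json and computed
    in evaluation_metrics.py.
    """
    total_weight = sum(weights.values())
    if total_weight <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(weights[name] * scores.get(name, 0.0) for name in weights) / total_weight


# Hypothetical example values, for illustration only.
composite = combine_esi(
    scores={"efficiency": 0.82, "safety": 0.95, "alignment": 0.74},
    weights={"efficiency": 0.3, "safety": 0.4, "alignment": 0.3},
)
print(f"ESI-style composite: {composite:.3f}")
```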

## 🚀 How to Use

### 1. Prerequisites

- Python (3.8+ recommended).
- Install dependencies:
  ```bash
  pip install requests tqdm
  ```

### 2. Setup & Configuration

1. **Clone/Download Project**: Obtain all project files (`main.py`, `config.py`, `settings.json`, etc.).
2. **Directory Structure**:
   ```
   .
   ├── Data Demo/              # Your .jsonl datasets
   │   ├── L1-1K.jsonl
   │   └── ...
   ├── Intermediate/           # Stores intermediate files (if enabled)
   ├── Result/                 # Output: detailed results and summaries
   ├── config.py
   ├── evaluation_metrics.py
   ├── llm_calls.py
   ├── main.py                 # Main script to run
   ├── prompts.py
   ├── settings.json           # CRITICAL: configure this file
   └── utils.py
   ```
3. **Configure `settings.json`**: This is the **most important step** (an illustrative sketch of this file follows the list).
   * **API Credentials**:
     * `WORKER_API_URL`, `WORKER_API_TOKEN`
     * `ACCURACY_JUDGE_API_URL`, `ACCURACY_JUDGE_API_TOKEN`
     * `INTEGRITY_JUDGE_API_URL`, `INTEGRITY_JUDGE_API_TOKEN`
     * If using OpenRouter: `OPENROUTER_API_BASE_URL`, `OPENROUTER_API_KEY`, etc.
     * **Security**: Avoid committing real API keys; consider environment variables for production or shared use.
   * **Models**:
     * `WORKER_MODEL_IDS`: List of worker LLM IDs to test (e.g., `["openai/gpt-4o", "meta-llama/Llama-3-8b-chat-hf"]`).
     * `ACCURACY_JUDGE_MODEL_ID`, `INTEGRITY_JUDGE_MODEL_ID`: Models used for the judgment tasks.
   * **Datasets**:
     * `DATASET_CONFIGS`: Define your datasets. Each entry maps a short name (e.g., `"L1"`) to an object with a `"path"` (e.g., `"./Data Demo/L1-1K.jsonl"`) and a `"description"`.
     * Dataset files must be in **`.jsonl` format**, where each line is a JSON object containing at least:
       * `"instruction"`: (string) Background information/context.
       * `"question"`: (string) The question for the LLM.
       * `"answer"`: (string) The reference/ground-truth answer.
     * `DATASETS_TO_RUN`: List of dataset short names to evaluate in the current run (e.g., `["L1", "L2"]`).
   * **Prompts**:
     * `PROMPT_VERSIONS_TO_TEST`: List of prompt strategies (e.g., `["DIRECT", "COT"]`). These correspond to templates in `prompts.py`.
   * **Output Paths**: Configure `FINAL_OUTPUT_FILE_TEMPLATE`, `SKIPPED_FILE_LOG_TEMPLATE`, and `SUMMARY_FILE_TEMPLATE`.
   * **Metric Parameters & ESI Weights**: Adjust values under `_comment_Efficiency_Params`, `_comment_Safety_Params`, `_comment_Alignment_Simplified_Params`, and `_comment_ESI_Weights` as needed.
   * **API & Concurrency**: Set `MAX_RETRIES`, `REQUEST_TIMEOUT_SECONDS`, and `MAX_CONCURRENT_ITEMS_PER_COMBO`.
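
For orientation, below is a minimal, illustrative `settings.json` sketch assembled from the keys listed above. All values (URLs, tokens, model IDs, paths, counts) are placeholders, and the sketch is abridged: it omits the output-path templates and the metric/ESI-weight blocks, so consult the `settings.json` shipped with the project for the authoritative structure.

```json
{
  "WORKER_API_URL": "https://api.example.com/v1/chat/completions",
  "WORKER_API_TOKEN": "YOUR_WORKER_TOKEN",
  "ACCURACY_JUDGE_API_URL": "https://api.example.com/v1/chat/completions",
  "ACCURACY_JUDGE_API_TOKEN": "YOUR_ACCURACY_JUDGE_TOKEN",
  "INTEGRITY_JUDGE_API_URL": "https://api.example.com/v1/chat/completions",
  "INTEGRITY_JUDGE_API_TOKEN": "YOUR_INTEGRITY_JUDGE_TOKEN",
  "WORKER_MODEL_IDS": ["openai/gpt-4o", "meta-llama/Llama-3-8b-chat-hf"],
  "ACCURACY_JUDGE_MODEL_ID": "openai/gpt-4o",
  "INTEGRITY_JUDGE_MODEL_ID": "openai/gpt-4o",
  "DATASET_CONFIGS": {
    "L1": { "path": "./Data Demo/L1-1K.jsonl", "description": "Level-1 tasks (illustrative description)" }
  },
  "DATASETS_TO_RUN": ["L1"],
  "PROMPT_VERSIONS_TO_TEST": ["DIRECT", "COT"],
  "MAX_RETRIES": 3,
  "REQUEST_TIMEOUT_SECONDS": 120,
  "MAX_CONCURRENT_ITEMS_PER_COMBO": 4
}
```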

### 3. Prepare Datasets

- Create your `.jsonl` dataset files according to the format specified above; an illustrative line is shown below.
- Place them in the relevant directory (e.g., `Data Demo/`) and ensure the paths in `settings.json` are correct.
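
For reference, a single dataset line could look like the following (a made-up example to show the format only, not an item from Lunar-Bench):

```json
{"instruction": "Habitat telemetry: oxygen reserve at 78%, next resupply in 36 hours, scrubber B offline.", "question": "Which system should the crew prioritize for repair, and why?", "answer": "Scrubber B, because oxygen regeneration capacity is degraded while the reserve must last until resupply."}
```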

### 4. Run Evaluation

Execute the main script from the project's root directory:

```bash
python main.py
```
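
Detailed per-item results and run summaries are then written to the output locations configured in `settings.json` (e.g., under `Result/`, per the directory layout above).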