Update README.md
README.md (changed)
---
license: mit
task_categories:
- text2text-generation
language:
- en
tags:
- Lunar
- Benchmark
size_categories:
- 1K<n<10K
---
[License: MIT](https://opensource.org/licenses/mit)

## Overview

**Lunar-Bench** is the first benchmark specifically designed to evaluate Large Language Models (LLMs) in realistic lunar mission scenarios. Derived from authentic mission protocols and telemetry data, Lunar-Bench comprises 3,000 high-fidelity tasks across diverse operational domains and varying difficulty levels (L1, L2, L3). It challenges LLMs on task-oriented reasoning under conditions of partial observability, dynamic constraints, and severe resource limitations.

**Key Features**:

*(key features figure)*

## ESI Metric Framework

To move beyond conventional task-level accuracy, the **Environmental Scenario Indicators (ESI)** provide a structured, multi-faceted framework for quantifying the nuanced qualities of LLM reasoning within mission-critical lunar contexts. While standard Accuracy captures final correctness, ESI is designed to dissect how models reason, plan, and interact.

*(ESI framework figure)*

## How to Use

### 1. Prerequisites

- Python (3.8+ recommended).
- Install dependencies:
  ```bash
  pip install requests tqdm
  ```

### 2. Setup & Configuration

1. **Clone/Download Project**: Obtain all project files (`main.py`, `config.py`, `settings.json`, etc.).
2. **Directory Structure**:
   ```
   .
   ├── Data Demo/             # Your .jsonl datasets
   │   ├── L1-1K.jsonl
   │   └── ...
   ├── Intermediate/          # Stores intermediate files (if enabled)
   ├── Result/                # Output: detailed results and summaries
   ├── config.py
   ├── evaluation_metrics.py
   ├── llm_calls.py
   ├── main.py                # Main script to run
   ├── prompts.py
   ├── settings.json          # CRITICAL: Configure this file
   └── utils.py
   ```
3. **Configure `settings.json`**: This is the **most important step** (a minimal example is sketched after this list).
   * **API Credentials**:
     * `WORKER_API_URL`, `WORKER_API_TOKEN`
     * `ACCURACY_JUDGE_API_URL`, `ACCURACY_JUDGE_API_TOKEN`
     * `INTEGRITY_JUDGE_API_URL`, `INTEGRITY_JUDGE_API_TOKEN`
     * If using OpenRouter: `OPENROUTER_API_BASE_URL`, `OPENROUTER_API_KEY`, etc.
     * **Security**: Avoid committing real API keys. Consider environment variables for production/shared use.
   * **Models**:
     * `WORKER_MODEL_IDS`: List of worker LLM IDs to test (e.g., `["openai/gpt-4o", "meta-llama/Llama-3-8b-chat-hf"]`).
     * `ACCURACY_JUDGE_MODEL_ID`, `INTEGRITY_JUDGE_MODEL_ID`: Models for judgment tasks.
   * **Datasets**:
     * `DATASET_CONFIGS`: Define your datasets. Each entry maps a short name (e.g., `"L1"`) to an object with a `"path"` (e.g., `"./Data Demo/L1-1K.jsonl"`) and a `"description"`.
     * Dataset files must be in **`.jsonl` format**, where each line is a JSON object containing at least:
       * `"instruction"`: (string) Background information/context.
       * `"question"`: (string) The question for the LLM.
       * `"answer"`: (string) The reference/ground truth answer.
     * `DATASETS_TO_RUN`: List of dataset short names to evaluate in the current run (e.g., `["L1", "L2"]`).
   * **Prompts**:
     * `PROMPT_VERSIONS_TO_TEST`: List of prompt strategies (e.g., `["DIRECT", "COT"]`). These correspond to templates in `prompts.py`.
   * **Output Paths**: Configure `FINAL_OUTPUT_FILE_TEMPLATE`, `SKIPPED_FILE_LOG_TEMPLATE`, and `SUMMARY_FILE_TEMPLATE`.
   * **Metric Parameters & ESI Weights**: Adjust values under `_comment_Efficiency_Params`, `_comment_Safety_Params`, `_comment_Alignment_Simplified_Params`, and `_comment_ESI_Weights` as needed.
   * **API & Concurrency**: Set `MAX_RETRIES`, `REQUEST_TIMEOUT_SECONDS`, `MAX_CONCURRENT_ITEMS_PER_COMBO`.

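For orientation, a minimal `settings.json` skeleton might look like the sketch below. The key names are the ones listed above; the nesting and example values are assumptions made for illustration (the output-path templates and metric/ESI-weight sections are omitted because their exact formats are project-specific), so treat the `settings.json` shipped with the project as the authoritative reference.

```json
{
  "_comment": "Illustrative skeleton only: key names follow the README, but nesting and values are placeholders.",
  "WORKER_API_URL": "https://api.your-provider.com/v1/chat/completions",
  "WORKER_API_TOKEN": "YOUR_WORKER_TOKEN",
  "ACCURACY_JUDGE_API_URL": "https://api.your-provider.com/v1/chat/completions",
  "ACCURACY_JUDGE_API_TOKEN": "YOUR_JUDGE_TOKEN",
  "INTEGRITY_JUDGE_API_URL": "https://api.your-provider.com/v1/chat/completions",
  "INTEGRITY_JUDGE_API_TOKEN": "YOUR_JUDGE_TOKEN",
  "WORKER_MODEL_IDS": ["openai/gpt-4o", "meta-llama/Llama-3-8b-chat-hf"],
  "ACCURACY_JUDGE_MODEL_ID": "openai/gpt-4o",
  "INTEGRITY_JUDGE_MODEL_ID": "openai/gpt-4o",
  "DATASET_CONFIGS": {
    "L1": { "path": "./Data Demo/L1-1K.jsonl", "description": "Level 1 tasks" }
  },
  "DATASETS_TO_RUN": ["L1"],
  "PROMPT_VERSIONS_TO_TEST": ["DIRECT", "COT"],
  "MAX_RETRIES": 3,
  "REQUEST_TIMEOUT_SECONDS": 120,
  "MAX_CONCURRENT_ITEMS_PER_COMBO": 4
}
```
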
### 3. Prepare Datasets

- Create your `.jsonl` dataset files according to the format specified above (an example record is shown below).
- Place them in the relevant directory (e.g., `Data Demo/`) and ensure the paths in `settings.json` are correct.

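Concretely, each line of a dataset file is one self-contained JSON object. The record below uses placeholder text purely to illustrate the required fields:

```json
{"instruction": "Background information and mission context for the task goes here.", "question": "The question posed to the LLM goes here.", "answer": "The reference (ground truth) answer goes here."}
```
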
### 4. Run Evaluation

Execute the main script from the project's root directory:

```bash
python main.py
```

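Per-item results, skipped-item logs, and run summaries are then written to the paths defined by the output templates in `settings.json` (under `Result/` in the directory layout above).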