sw_tuenguyen
committed on
Update README.md
README.md
CHANGED
@@ -22,7 +22,7 @@ For the SFT stage we use the following hyperparameters:
 - Max Length: 16378.
 - Batch Size: 128.
 - Learning-Rate: 5e-5.
-- Number Of Epoch:
+- Number of Epochs: 6.
 
 For the RL stage we set up training with:
 
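As a rough illustration of the SFT settings above, here is a minimal sketch assuming the Hugging Face `transformers` `TrainingArguments` API; the output directory and the split of the effective batch size into per-device batch and gradient accumulation are assumptions, since the README only states the effective values.

```python
# Minimal sketch of the SFT hyperparameters; only max length 16378, batch size
# 128, learning rate 5e-5, and 6 epochs come from the README. The per-device /
# gradient-accumulation split is a single-device assumption, and the output
# directory is hypothetical.
from transformers import TrainingArguments

MAX_LENGTH = 16378  # applied when tokenizing/truncating training sequences

sft_args = TrainingArguments(
    output_dir="sft-checkpoints",    # hypothetical path
    num_train_epochs=6,              # Number of Epochs: 6
    learning_rate=5e-5,              # Learning-Rate: 5e-5
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,  # 8 * 16 = 128 effective batch size
)
```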
@@ -47,7 +47,7 @@ Detailed results for HealthBench can be found [here](https://huggingface.co/datas
 
 
 
-We evaluate on ten medical QA benchmarks include MedMCQA, MedQA, PubMedQA, medical related questions from MMLU-Pro
+We evaluate on ten medical QA benchmarks: MedMCQA, MedQA, PubMedQA, HealthBench, medical-related questions from MMLU-Pro, small QA sets from The Lancet and the New England
 Journal of Medicine, the 4-option and 5-option splits from the MedBullets platform, and MedXpertQA.
 
 | Model | MedMC | MedQA | PubMed | MMLU-P | HealthBench | Lancet | MedB-4 | MedB-5 | MedX | NEJM | Avg |
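The `Avg` column is presumably an unweighted mean of the ten benchmark scores; below is a minimal sketch of that computation (the helper name is illustrative, and no real scores are shown).

```python
# Unweighted mean over the ten benchmark columns, under the assumption that
# the Avg column is a plain macro-average of per-benchmark accuracies. The
# scores passed in would be one model's row from the table.
BENCHMARKS = ["MedMC", "MedQA", "PubMed", "MMLU-P", "HealthBench",
              "Lancet", "MedB-4", "MedB-5", "MedX", "NEJM"]

def macro_avg(scores: dict[str, float]) -> float:
    """Unweighted mean across all ten benchmarks."""
    return sum(scores[b] for b in BENCHMARKS) / len(BENCHMARKS)
```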