| **Med-REFL for Huatuo-o1-8B** | Huatuo-o1-8b | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Huatuo-o1-8B-lora) |
| **Med-REFL for MedReason-8B** | MedReason-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-MedReason-8B-lora) |

# **Qwen2.5-7B Model Performance**

The following table shows the performance of the Qwen2.5-7B model on the In-Domain benchmark before and after applying Med-REFL.

| Domain | Benchmark | Original | **+ Med-REFL** |
| :--- | :--- | :--- | :--- |
| **In-Domain** | MedQA-USMLE | 57.11 | **59.70** <span style="color: #2E8B57; font-size: small;">(+2.59)</span> |

# Usage

You can deploy the model with tools like [vllm](https://github.com/vllm-project/vllm). For more usage examples, please refer to our GitHub page.
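As a sketch of the vllm deployment path: the commands below serve a base model with the Med-REFL LoRA adapter attached and query it through vllm's OpenAI-compatible endpoint. The base-model identifier and the prompt are assumptions for illustration; substitute the Huatuo-o1-8B checkpoint you actually use, and note that `--lora-modules` may need a local path to the downloaded adapter depending on your vllm version.

```shell
# Serve the base model with the Med-REFL LoRA adapter registered under the name "med-refl".
# Base-model ID below is an assumption; replace it with your Huatuo-o1-8B checkpoint.
vllm serve FreedomIntelligence/HuatuoGPT-o1-8B \
    --enable-lora \
    --lora-modules med-refl=HANI-LAB/Med-REFL-Huatuo-o1-8B-lora

# In another terminal, query the served adapter by its registered name:
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "med-refl", "prompt": "Example medical question here", "max_tokens": 256}'
```

Requesting `"model": "med-refl"` routes generation through the LoRA adapter, while the plain base-model name would bypass it.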