Aloe Beta has been tested on the most popular healthcare QA datasets, both with and without the **Medprompt** inference technique. Results show competitive performance, achieving SOTA within models of the same size.
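Medprompt combines few-shot example selection, chain-of-thought prompting, and choice-shuffling ensembling with a majority vote. A minimal sketch of the choice-shuffling step, assuming a hypothetical `ask_model` callable (one LLM query that returns the text of the option the model picked):

```python
import random
from collections import Counter

def choice_shuffle_ensemble(question, choices, ask_model, n_rounds=5, seed=0):
    """Medprompt-style choice-shuffling ensemble: query the model several
    times with the answer options in a different order each round, then
    majority-vote over the answer texts to cancel option-position bias."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_rounds):
        shuffled = choices[:]
        rng.shuffle(shuffled)
        # ask_model is a placeholder for a single LLM call; it must
        # return the text of the option the model chose.
        votes[ask_model(question, shuffled)] += 1
    answer, _ = votes.most_common(1)[0]
    return answer
```

Because the vote is taken over answer texts rather than option letters, a model that always prefers "option A" no longer gets a systematic advantage.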
The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated it on many different medical benchmarks:
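Scores on these benchmarks are typically plain accuracy over multiple-choice questions. A minimal sketch of that metric, assuming aligned lists of predicted and gold option labels:

```python
def mcqa_accuracy(predictions, gold):
    """Fraction of multiple-choice QA items answered correctly."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold labels must align")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)
```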
We also compared the model's performance in the general domain using the OpenLLM Leaderboard benchmark. Aloe-Beta achieves results competitive with the current SOTA general models on the most widely used general benchmarks, and outperforms the medical models: