Update README.md
--- a/README.md
+++ b/README.md
@@ -23,15 +23,15 @@ tags:
 pipeline_tag: question-answering
 ---
 
-<
+<p align="center">
 <picture>
-
-
-
-
-
-
-</
+<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/ARcIVTFxuBMV5DKooCgJH.png">
+<img alt="prompt_engine" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/ARcIVTFxuBMV5DKooCgJH.png" width=55%>
+</picture>
+</p>
+<h1 align="center">
+Aloe: A Family of Fine-tuned Open Healthcare LLMs
+</h1>
 <hr style="margin: 15px">
 <div align="center" style="line-height: 1;">
 <a href="https://hpai.bsc.es/" target="_blank" style="margin: 1px;">
@@ -64,8 +64,6 @@ pipeline_tag: question-answering
 </a>
 </div>
 
----
-
 
 
 Qwen2.5-Aloe-Beta-7B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in four model sizes: [7B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B/), [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B), [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B), and [72B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-72B). All models are trained using the same recipe, on top of two different families of models: Llama3.1 and Qwen2.5.
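The model described above can be tried with a short Transformers sketch. The model ID is taken from the card's own links; the prompt, system message, and generation settings below are illustrative assumptions, not part of the card:

```python
# Minimal usage sketch for Qwen2.5-Aloe-Beta-7B, the model described above.
# The model ID comes from the card's links; the prompt and generation
# settings are illustrative assumptions.
MODEL_ID = "HPAI-BSC/Qwen2.5-Aloe-Beta-7B"

# Chat-style input: Qwen2.5-based instruct models take role/content messages.
messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are common symptoms of iron-deficiency anemia?"},
]

if __name__ == "__main__":
    # Imported lazily: the first run downloads the full 7B weights,
    # so the heavy path stays out of plain imports of this file.
    from transformers import pipeline

    pipe = pipeline("text-generation", model=MODEL_ID)
    result = pipe(messages, max_new_tokens=256)
    # The pipeline returns the full chat transcript; print only the reply.
    print(result[0]["generated_text"][-1]["content"])
```

The same recipe applies to the 8B, 70B, and 72B checkpoints linked in the paragraph above by swapping the model ID.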