JordiBayarri committed
Commit 4bf4fc6 (verified)
Parent: d3eeead

Update README.md

Files changed (1): README.md (+4, -3)
```diff
@@ -56,16 +56,17 @@ pipeline_tag: question-answering
   </a>
   </div>
   <div align="center" style="line-height: 1;">
-  <a href="https://arxiv.org/abs/2409.15127" target="_blank" style="margin: 1px;">
-  <img alt="Arxiv" src="https://img.shields.io/badge/arXiv-2409.15127-b31b1b.svg" style="display: inline-block; vertical-align: middle;"/>
+  <a href="https://arxiv.org/abs/2405.01886" target="_blank" style="margin: 1px;">
+  <img alt="Arxiv" src="https://img.shields.io/badge/arXiv-2405.01886-b31b1b.svg" style="display: inline-block; vertical-align: middle;"/>
   </a>
   <a href="LICENSE" style="margin: 1px;">
-  <img alt="License" src="https://img.shields.io/badge/license-Apache%202.0-green" style="display: inline-block; vertical-align: middle;"/>
+  <img alt="License" src="https://img.shields.io/badge/license-Llama%203.1-green" style="display: inline-block; vertical-align: middle;"/>
   </a>
   </div>
 
 
 
+
 Qwen2.5-Aloe-Beta-7B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in four model sizes: [7B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B/), [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B), [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B), and [72B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-72B). All models are trained using the same recipe, on top of two different families of models: Llama3.1 and Qwen2.5.
 
 Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)) the 7B and 8B version gets close to the performance of closed models like MedPalm-2, GPT4. With the same RAG system, Llama3.1-Aloe-Beta-70B and Qwen2.5-Aloe-Beta-72B outperforms those private alternatives, producing state-of-the-art results.
```
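
For context on the model the README excerpt describes, here is a minimal sketch of loading and querying the 7B checkpoint with the standard Hugging Face transformers API. The repo id comes from the README's 7B link; the dtype choice, system prompt, and sample question are illustrative assumptions, not prescribed by this commit or the model card.

```python
# Minimal sketch, assuming the standard transformers chat-template API.
# The repo id is from the README; everything else below is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPAI-BSC/Qwen2.5-Aloe-Beta-7B"  # 7B repo linked in the README

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits the 7B weights on one GPU
    device_map="auto",           # requires the accelerate package
)

# Chat-style prompting; the system prompt and question are placeholders.
messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are common symptoms of iron-deficiency anemia?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The tokenizer's bundled chat template handles the prompt formatting, so the same snippet should apply to the other Aloe Beta sizes by swapping the repo id.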