Update README.md
README.md CHANGED
```diff
@@ -13,7 +13,7 @@ license: llama2
 </div>
 
 **Model type:**
-Dromedary-2 is an open-source self-aligned language model trained
+Dromedary-2 is an open-source self-aligned language model trained with minimal human supervision, using the SALMON (Self-Alignment with Principle-Following Reward Models) technique.
 The base language model is LLaMA-70b, based on the transformer architecture.
 
 **NOTE: *Dromedary-2* is trained with QLoRA and the bfloat16 data type.** While it is [possible](https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930) to merge the QLoRA weights with the quantized model and thus enable inference with libraries such as [TGI](https://github.com/huggingface/text-generation-inference) and [vLLM](https://github.com/vllm-project/vllm), we found that the merged weights can lead to degraded performance. Therefore, we recommend loading the QLoRA weights directly with the PEFT-LoRA framework.
```
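For reference, the recommendation in the NOTE above (attach the QLoRA adapter with PEFT instead of merging it into the quantized base model) can be sketched as follows. This is a minimal, assumption-based example: the base-model ID and adapter path are placeholders, and the 4-bit NF4 / bfloat16 settings are assumed to mirror a typical QLoRA setup rather than the exact training configuration.

```python
# Minimal sketch (assumed setup): load a 4-bit quantized LLaMA-2 base model with
# bitsandbytes, then attach the Dromedary-2 QLoRA adapter via PEFT without merging.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-70b-hf"       # assumed base model repo
adapter_id = "path/to/dromedary-2-qlora-adapter"  # placeholder adapter path

# 4-bit NF4 quantization with bfloat16 compute (assumed to match the QLoRA recipe)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Load the LoRA adapter on top of the quantized base model (no weight merging)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Keeping the adapter separate avoids the quality loss observed with merged weights, at the cost of not being compatible with merged-checkpoint serving stacks such as TGI or vLLM.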