This is a fine-tuned Llama-3.1-70B-Instruct model, trained on the [Egida-DPO-Llama-3.1-70B-Instruct](http://huggingface.co/datasets/HPAI-BSC/Egida/viewer/Egida-DPO-Meta-Llama-3.1-70B-Instruct) dataset.

The [Egida](https://huggingface.co/datasets/HPAI-BSC/Egida/viewer/Egida?views%5B%5D=egida_full) dataset is a collection of adversarial prompts designed to elicit unsafe behaviors from language models. Specifically for this model, the Egida train split is used to run inference on Llama-3.1-70B-Instruct. Unsafe answers are selected and paired with safe answers to create a customized DPO dataset for this model. This results in a DPO dataset composed of triplets <"question", "chosen answer", "discarded answer">, which contain questions that elicit unsafe responses from this target model, as well as the unsafe responses it produced.
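
To make the triplet structure concrete, the sketch below loads the DPO split with the `datasets` library and prints one example. This is a minimal sketch: the config name is taken from the dataset URL above, and the column names (`prompt`, `chosen`, `rejected`) are assumed from common DPO conventions rather than confirmed here.

```python
# Minimal sketch of inspecting one DPO triplet.
# Config and column names are assumptions; check the dataset viewer for the real schema.
from datasets import load_dataset

dataset = load_dataset(
    "HPAI-BSC/Egida",
    name="Egida-DPO-Meta-Llama-3.1-70B-Instruct",  # config name as in the dataset URL
    split="train",
)

example = dataset[0]
print(example["prompt"])    # adversarial question
print(example["chosen"])    # safe (preferred) answer
print(example["rejected"])  # unsafe answer produced by the target model
```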
## Training Details
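
As a rough illustration of how a triplet dataset like this is consumed, DPO fine-tuning is commonly run with TRL's `DPOTrainer`. The sketch below is not the recipe used for this model: the output path and hyperparameters are placeholders, and the TRL API surface (`DPOConfig`, `processing_class`) varies across versions.

```python
# Illustrative DPO fine-tuning sketch using TRL; not the actual training setup.
# Assumes a triplet dataset with prompt/chosen/rejected columns, as described above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "meta-llama/Llama-3.1-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

train_dataset = load_dataset(
    "HPAI-BSC/Egida",
    name="Egida-DPO-Meta-Llama-3.1-70B-Instruct",  # assumed config name
    split="train",
)

args = DPOConfig(
    output_dir="llama-3.1-70b-instruct-egida-dpo",  # hypothetical output path
    beta=0.1,                        # DPO temperature; placeholder, not from this card
    per_device_train_batch_size=1,   # placeholder
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL versions
)
trainer.train()
```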