---
license: apache-2.0
library_name: transformers
base_model:
- nbeerbower/mistral-nemo-kartoffel-12B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/synthetic-fiction-dpo
- nbeerbower/Arkhaios-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Schule-DPO
---

# Schreiber-mistral-nemo-12B

[nbeerbower/mistral-nemo-kartoffel-12B](https://huggingface.co/nbeerbower/mistral-nemo-kartoffel-12B) fine-tuned on the following DPO datasets (a quick usage sketch follows the list):

* [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1)
* [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo)
* [nbeerbower/gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo)
* [nbeerbower/synthetic-fiction-dpo](https://huggingface.co/datasets/nbeerbower/synthetic-fiction-dpo)
* [nbeerbower/Arkhaios-DPO](https://huggingface.co/datasets/nbeerbower/Arkhaios-DPO)
* [nbeerbower/Purpura-DPO](https://huggingface.co/datasets/nbeerbower/Purpura-DPO)
* [nbeerbower/Schule-DPO](https://huggingface.co/datasets/nbeerbower/Schule-DPO)
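
A minimal inference sketch with `transformers`. The repo id is assumed from this card's title, and the prompt and generation settings are illustrative only:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the model name above
model_id = "nbeerbower/Schreiber-mistral-nemo-12B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write the opening paragraph of a gothic short story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```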
## Method

Tuned with [ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) on a single RTX A6000 for 3 epochs.
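
For background, [ORPO](https://arxiv.org/abs/2403.07687) augments the standard SFT loss with an odds-ratio penalty on rejected completions, so no separate reference model is needed; `beta` in the config below corresponds to the weighting $\lambda$ from the paper:

$$
\mathcal{L}_{\text{ORPO}} = \mathcal{L}_{\text{SFT}} + \lambda \cdot \mathcal{L}_{\text{OR}},
\qquad
\mathcal{L}_{\text{OR}} = -\log \sigma\left(\log \frac{\operatorname{odds}_\theta(y_w \mid x)}{\operatorname{odds}_\theta(y_l \mid x)}\right)
$$

where $\operatorname{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}$, $y_w$ is the chosen completion, and $y_l$ the rejected one.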
### QLoRA config

```
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Assumption: compute dtype matches the bf16=True training flag below
torch_dtype = torch.bfloat16

# QLoRA config: 4-bit NF4 quantization with double quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config: rank-64 adapters on all attention and MLP projections
peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)
```

### ORPO config

```
from trl import ORPOConfig

orpo_args = ORPOConfig(
    learning_rate=8e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_length=4096,
    max_prompt_length=1024,
    max_completion_length=4096,
    beta=0.1,  # weight of the odds-ratio term (lambda in the ORPO paper)
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=64,  # effective batch size: 1 x 64
    optim="paged_adamw_8bit",
    num_train_epochs=3,
    max_grad_norm=0.5,
    bf16=True,
)
```
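
For completeness, a minimal sketch of how these configs could be wired together with trl's `ORPOTrainer`. The base-model loading and the single-dataset placeholder are assumptions; the actual run used the full set of DPO datasets listed at the top of this card:

```
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOTrainer

base_model = "nbeerbower/mistral-nemo-kartoffel-12B"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Load the base model in 4-bit using the QLoRA config above
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

# Placeholder: the real training set concatenates the prompt/chosen/rejected
# DPO datasets listed above
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
    peft_config=peft_config,     # LoRA adapters from the QLoRA config above
)
trainer.train()
```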