Commit 281ca8e
1 Parent(s): f262779

Add more info

README.md CHANGED

@@ -2,4 +2,45 @@
 datasets:
 - CheshireAI/guanaco-unchained
 ---
-Let's see how this goes.
+Let's see how this goes.
+
+Training in 8-bit and at full context. Is 8-bit even a QLoRA?
+
+```
+python qlora.py \
+    --model_name_or_path /UI/text-generation-webui/models/llama-30b \
+    --output_dir ./output/guanaco-33b \
+    --logging_steps 1 \
+    --save_strategy steps \
+    --data_seed 42 \
+    --save_steps 69 \
+    --save_total_limit 999 \
+    --per_device_eval_batch_size 1 \
+    --dataloader_num_workers 3 \
+    --group_by_length \
+    --logging_strategy steps \
+    --remove_unused_columns False \
+    --do_train \
+    --do_eval false \
+    --do_mmlu_eval false \
+    --lora_r 64 \
+    --lora_alpha 16 \
+    --lora_modules all \
+    --bf16 \
+    --bits 8 \
+    --warmup_ratio 0.03 \
+    --lr_scheduler_type constant \
+    --gradient_checkpointing \
+    --gradient_accumulation_steps 32 \
+    --dataset oasst1 \
+    --source_max_len 2048 \
+    --target_max_len 2048 \
+    --per_device_train_batch_size 1 \
+    --num_train_epochs 3 \
+    --learning_rate 0.0001 \
+    --adam_beta2 0.999 \
+    --max_grad_norm 0.3 \
+    --lora_dropout 0.05 \
+    --weight_decay 0.0 \
+    --seed 0
+```
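On the question in the commit: the QLoRA paper's recipe specifically means LoRA adapters over a 4-bit NF4-quantized base model, so `--bits 8` here trains ordinary LoRA adapters on an 8-bit (LLM.int8()) base rather than QLoRA in the strict sense. Below is a minimal sketch of that 8-bit setup, assuming the standard transformers + peft + bitsandbytes stack; it mirrors the flags above but is not qlora.py's actual code, and the explicit `target_modules` list is an assumption approximating what `--lora_modules all` does (attaching adapters to every linear layer).

```python
# Sketch: LoRA on an 8-bit quantized LLaMA base, mirroring the command above.
# Assumption: transformers + peft + bitsandbytes; qlora.py's internals may differ.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_path = "/UI/text-generation-webui/models/llama-30b"  # path from the command

# --bits 8: LLM.int8() quantization, not the 4-bit NF4 scheme QLoRA proper uses
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,  # --bf16 for the non-quantized parts
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # casts/hooks needed to train on a quantized base
model.gradient_checkpointing_enable()           # --gradient_checkpointing

lora_config = LoraConfig(
    r=64,               # --lora_r 64
    lora_alpha=16,      # --lora_alpha 16
    lora_dropout=0.05,  # --lora_dropout 0.05
    bias="none",
    task_type="CAUSAL_LM",
    # Hypothetical stand-in for --lora_modules all: LLaMA's linear projections.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

One note on the hyperparameters: with `--per_device_train_batch_size 1` and `--gradient_accumulation_steps 32`, the effective batch size is 32 sequences per optimizer step, while per-step memory stays at a single 2048-token sequence.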