chchen committed
Commit 43764cb · verified · 1 Parent(s): 200b0cd

Model save

Files changed (2)
  1. README.md +83 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ library_name: peft
+ license: llama3.1
+ base_model: meta-llama/Llama-3.1-8B-Instruct
+ tags:
+ - trl
+ - dpo
+ - llama-factory
+ - generated_from_trainer
+ model-index:
+ - name: Llama-3.1-8B-Instruct-dpo-llama-1000
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Llama-3.1-8B-Instruct-dpo-llama-1000
+
+ This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3613
+ - Rewards/chosen: 1.3392
+ - Rewards/rejected: -1.7432
+ - Rewards/accuracies: 0.8400
+ - Rewards/margins: 3.0824
+ - Logps/chosen: -9.1017
+ - Logps/rejected: -41.8256
+ - Logits/chosen: -0.1378
+ - Logits/rejected: -0.2410
+
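+ The card metadata (`library_name: peft`, `base_model: meta-llama/Llama-3.1-8B-Instruct`) indicates this repository holds a PEFT adapter rather than full model weights, so one plausible way to use it is to load the base model and attach the adapter. The snippet below is a minimal sketch, not an official usage example: the adapter repo id, dtype, and device placement are assumptions rather than details taken from this card.
+
+ ```python
+ # Minimal loading sketch. Assumptions: the adapter repo id below (inferred from the
+ # model name and possibly wrong) and an accelerator with enough memory for the 8B base model.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "meta-llama/Llama-3.1-8B-Instruct"
+ adapter_id = "chchen/Llama-3.1-8B-Instruct-dpo-llama-1000"  # hypothetical repo id
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
+ model = PeftModel.from_pretrained(base, adapter_id)  # attach the DPO-trained PEFT adapter
+
+ messages = [{"role": "user", "content": "Hello!"}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+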
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (an illustrative configuration sketch follows the list):
+ - learning_rate: 5e-06
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10.0
+
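+ The `llama-factory` and `trl` tags suggest the run was launched through LLaMA-Factory, which builds on TRL's DPO trainer; the actual launcher configuration is not included in this card. Purely as an illustration of the values above, here is a sketch of an equivalent TRL `DPOConfig`. The output directory is a placeholder, and the DPO beta is not reported here, so it is left at the trainer default.
+
+ ```python
+ # Illustrative re-expression of the listed hyperparameters as a TRL DPOConfig.
+ # Not the configuration actually used: training was reportedly run via LLaMA-Factory,
+ # and the output directory below is a placeholder.
+ from trl import DPOConfig
+
+ config = DPOConfig(
+     output_dir="Llama-3.1-8B-Instruct-dpo-llama-1000",
+     learning_rate=5e-6,
+     per_device_train_batch_size=2,
+     per_device_eval_batch_size=2,
+     gradient_accumulation_steps=8,  # 2 per device x 8 steps = total batch size 16, as listed
+     num_train_epochs=10.0,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     seed=42,
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     # beta (the DPO temperature) is not listed on this card, so the default is kept.
+ )
+ ```
+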
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/chosen | Logps/rejected | Logits/chosen | Logits/rejected |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:------------:|:--------------:|:-------------:|:---------------:|
+ | 0.6815 | 0.8889 | 50 | 0.6707 | 0.0833 | 0.0353 | 0.6900 | 0.0480 | -21.6601 | -24.0398 | -0.4114 | -0.4792 |
+ | 0.5082 | 1.7778 | 100 | 0.4428 | 1.0308 | 0.1943 | 0.7900 | 0.8366 | -12.1855 | -22.4506 | -0.3559 | -0.4377 |
+ | 0.2979 | 2.6667 | 150 | 0.3215 | 1.3481 | -0.4170 | 0.8600 | 1.7651 | -9.0131 | -28.5637 | -0.2695 | -0.3655 |
+ | 0.2862 | 3.5556 | 200 | 0.3077 | 1.4814 | -0.7600 | 0.8500 | 2.2414 | -7.6796 | -31.9936 | -0.2154 | -0.3106 |
+ | 0.2747 | 4.4444 | 250 | 0.3184 | 1.4147 | -1.2445 | 0.8600 | 2.6592 | -8.3466 | -36.8385 | -0.1872 | -0.2879 |
+ | 0.2688 | 5.3333 | 300 | 0.3195 | 1.4469 | -1.2794 | 0.8500 | 2.7263 | -8.0242 | -37.1874 | -0.1714 | -0.2705 |
+ | 0.2047 | 6.2222 | 350 | 0.3630 | 1.3019 | -1.5956 | 0.8400 | 2.8975 | -9.4749 | -40.3495 | -0.1553 | -0.2578 |
+ | 0.2268 | 7.1111 | 400 | 0.3526 | 1.3609 | -1.6635 | 0.8500 | 3.0245 | -8.8842 | -41.0287 | -0.1452 | -0.2479 |
+ | 0.144 | 8.0 | 450 | 0.3662 | 1.3488 | -1.7032 | 0.8400 | 3.0520 | -9.0059 | -41.4255 | -0.1421 | -0.2448 |
+ | 0.171 | 8.8889 | 500 | 0.3635 | 1.3313 | -1.7326 | 0.8400 | 3.0640 | -9.1805 | -41.7197 | -0.1399 | -0.2430 |
+ | 0.2313 | 9.7778 | 550 | 0.3613 | 1.3392 | -1.7432 | 0.8400 | 3.0824 | -9.1017 | -41.8256 | -0.1378 | -0.2410 |
+
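+ For reading the reward columns above: in the standard DPO objective (Rafailov et al., 2023), which TRL-style trainers implement, the implicit reward of a response is the beta-scaled log-probability ratio between the policy and the frozen reference model. Rewards/chosen and Rewards/rejected report that quantity for the preferred and dispreferred responses, Rewards/margins is their difference, and Rewards/accuracies is the fraction of pairs whose chosen reward exceeds the rejected one. The beta used for this run is not listed on the card.
+
+ $$
+ \mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
+ $$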
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.46.1
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7ce93f7ae2785c5ef2370c16c37beefe9e1e5ad81940f0901b4240ab58806f30
+ oid sha256:4f07d7347324c4456569dd8925d33df733dd8d1e2f6a661e8c26fec67edeafd9
  size 83945296