chchen committed on
Commit
6a3eef4
·
verified ·
1 Parent(s): 020824c

Model save

Files changed (2)
  1. README.md +83 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,83 @@
---
library_name: peft
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- trl
- dpo
- llama-factory
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-dpo-mistral-1000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Llama-3.1-8B-Instruct-dpo-mistral-1000

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4934
- Rewards/chosen: 0.8943
- Rewards/rejected: -0.7017
- Rewards/accuracies: 0.75
- Rewards/margins: 1.5960
- Logps/chosen: -14.2081
- Logps/rejected: -32.2463
- Logits/chosen: -0.0873
- Logits/rejected: -0.1548
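To read these metrics: Rewards/margins is simply Rewards/chosen minus Rewards/rejected, and DPO's per-pair loss is the negative log-sigmoid of that margin (the reported rewards already include the β-scaled policy/reference log-prob ratio). A minimal sketch with the final eval numbers plugged in:

```python
import math

def dpo_pair_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Per-pair DPO loss: -log(sigmoid(margin)), where the margin is the
    difference of the (already beta-scaled) chosen/rejected rewards."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Final evaluation metrics reported above
margin = 0.8943 - (-0.7017)                    # 1.5960 == Rewards/margins
loss_at_mean = dpo_pair_loss(0.8943, -0.7017)  # ~0.18
```

Note that the loss at the mean margin (~0.18) is well below the reported eval Loss of 0.4934: the eval loss averages per-pair losses, so pairs with small or negative margins pull the average up.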
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
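The batch-size bookkeeping above can be sanity-checked in a few lines; the single-device assumption and the dataset-size estimate are inferences from these logs, not stated facts:

```python
# Sanity check of the hyperparameter bookkeeping above.
per_device_batch = 2
grad_accum_steps = 8
num_devices = 1  # assumption: total_train_batch_size == 16 implies one device
total_batch = per_device_batch * grad_accum_steps * num_devices  # 16

# The training log reports epoch 0.8909 at step 50, so one epoch is about
# 50 / 0.8909 ~= 56 optimizer steps, i.e. roughly 56 * 16 ~= 898 training pairs,
# consistent with the "1000" in the model name.
steps_per_epoch = 50 / 0.8909
approx_train_pairs = steps_per_epoch * total_batch
```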
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/chosen | Logps/rejected | Logits/chosen | Logits/rejected |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:------------:|:--------------:|:-------------:|:---------------:|
| 0.6891 | 0.8909 | 50 | 0.6833 | 0.0487 | 0.0276 | 0.6200 | 0.0211 | -22.6647 | -24.9535 | -0.3207 | -0.3690 |
| 0.5716 | 1.7817 | 100 | 0.5618 | 0.6081 | 0.1913 | 0.7000 | 0.4168 | -17.0706 | -23.3165 | -0.2934 | -0.3456 |
| 0.4581 | 2.6726 | 150 | 0.4761 | 0.9362 | -0.0437 | 0.7600 | 0.9799 | -13.7892 | -25.6666 | -0.2093 | -0.2739 |
| 0.4032 | 3.5635 | 200 | 0.4709 | 0.9603 | -0.2844 | 0.8100 | 1.2447 | -13.5486 | -28.0732 | -0.1631 | -0.2306 |
| 0.3836 | 4.4543 | 250 | 0.4675 | 0.9903 | -0.3997 | 0.7900 | 1.3900 | -13.2488 | -29.2269 | -0.1396 | -0.2080 |
| 0.3588 | 5.3452 | 300 | 0.4752 | 0.9745 | -0.4525 | 0.7700 | 1.4270 | -13.4066 | -29.7545 | -0.1255 | -0.1931 |
| 0.2861 | 6.2361 | 350 | 0.4812 | 0.9392 | -0.5503 | 0.7700 | 1.4895 | -13.7591 | -30.7320 | -0.1102 | -0.1785 |
| 0.3662 | 7.1269 | 400 | 0.4868 | 0.9165 | -0.6356 | 0.7700 | 1.5522 | -13.9862 | -31.5858 | -0.0990 | -0.1679 |
| 0.2822 | 8.0178 | 450 | 0.4927 | 0.9099 | -0.6512 | 0.7600 | 1.5612 | -14.0519 | -31.7416 | -0.0936 | -0.1622 |
| 0.2416 | 8.9087 | 500 | 0.4979 | 0.8912 | -0.6958 | 0.7600 | 1.5870 | -14.2398 | -32.1878 | -0.0898 | -0.1585 |
| 0.3096 | 9.7996 | 550 | 0.4934 | 0.8943 | -0.7017 | 0.7500 | 1.5960 | -14.2081 | -32.2463 | -0.0873 | -0.1548 |
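The step column tops out at 550 of roughly 561 total optimizer steps (about 56.1 steps/epoch over 10 epochs, inferred from the epoch column). Under the cosine schedule with warmup_ratio 0.1 listed above, the learning rate ramps linearly for the first ~56 steps and then decays toward zero; a sketch approximating the behaviour of transformers' get_cosine_schedule_with_warmup, where the 561-step total is an estimate:

```python
import math

def cosine_lr(step: int, total_steps: int,
              base_lr: float = 5e-6, warmup_ratio: float = 0.1) -> float:
    """Linear warmup followed by cosine decay to zero (an approximation of
    transformers' get_cosine_schedule_with_warmup)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total_steps = 561  # estimated: ~56.1 optimizer steps/epoch * 10 epochs
```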

### Framework versions

- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f406ba5a9d77114ce92e45de18efc43b04c0750ae18973000b6e9ca2c4c9781c
+ oid sha256:42deb574e845f8684c09bbefceefd19768e431e4053ca1d18e3f2c8bac98a403
  size 83945296
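The adapter_model.safetensors change above only swaps the oid line of a git-lfs pointer file: the pointer is three space-separated key/value lines, while the actual ~84 MB of adapter weights live in LFS storage. A minimal parser for that pointer format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file (one 'key value' pair per line)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer from the diff above
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:42deb574e845f8684c09bbefceefd19768e431e4053ca1d18e3f2c8bac98a403\n"
    "size 83945296\n"
)
info = parse_lfs_pointer(pointer)
```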