JayHyeon committed · verified
Commit 678f007 · 1 parent: 9ba7548

Model save
Files changed (3):
  1. README.md (+2 -3)
  2. all_results.json (+16 -16)
  3. eval_results.json (+16 -16)
README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
-datasets: trl-lib/ultrafeedback_binarized
 library_name: transformers
 model_name: Qwen_0.5-rDPO_5e-7-3ep_0vpo_const
 tags:
@@ -12,7 +11,7 @@ licence: license
 
 # Model Card for Qwen_0.5-rDPO_5e-7-3ep_0vpo_const
 
-This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
+This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -28,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/ezwht1g7)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/pmi8lat0)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
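The `@@ -28,7 +27,7 @@ print(output["generated_text"])` hunk header shows the card's Quick start section is untouched in this commit and ends with a `transformers` pipeline call. For context, a minimal sketch of how such a Quick start is typically used; the repo id (guessed from the card's `model_name`), the prompt, and the generation arguments are illustrative assumptions, not taken from this commit:

```python
from transformers import pipeline

# Repo id assumed from the card's model_name; not confirmed by this commit.
generator = pipeline(
    "text-generation",
    model="JayHyeon/Qwen_0.5-rDPO_5e-7-3ep_0vpo_const",
    device_map="auto",
)

question = "If you had a time machine, when would you visit first?"  # example prompt
output = generator(
    [{"role": "user", "content": question}],  # chat-format input
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```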
all_results.json CHANGED
@@ -1,21 +1,21 @@
 {
     "epoch": 2.999098751126561,
-    "eval_log_ratio_diff/mean": 17.375,
-    "eval_logits/chosen": -2.359375,
-    "eval_logits/rejected": -2.4375,
-    "eval_logps/chosen": -346.0,
-    "eval_logps/rejected": -338.0,
-    "eval_loss": 0.2879411578178406,
-    "eval_nll_loss": 1.234375,
+    "eval_log_ratio_diff/mean": 59.0,
+    "eval_logits/chosen": -2.703125,
+    "eval_logits/rejected": -2.8125,
+    "eval_logps/chosen": -444.0,
+    "eval_logps/rejected": -480.0,
+    "eval_loss": -2.3502578735351562,
+    "eval_nll_loss": 1.7734375,
     "eval_ref_probs/chosen": 0.0038271306548267603,
     "eval_ref_probs/rejected": 0.0034569373819977045,
-    "eval_rejected_term/max": -4.116310446988791e-05,
-    "eval_rejected_term/min": -4.116310446988791e-05,
-    "eval_rewards/accuracies": 0.7440000176429749,
-    "eval_rewards/chosen": -2.375,
-    "eval_rewards/margins": 1.7421875,
-    "eval_rewards/rejected": -4.125,
-    "eval_runtime": 25.3378,
-    "eval_samples_per_second": 39.467,
-    "eval_steps_per_second": 9.867
+    "eval_rejected_term/max": -1.9564737385735498e-07,
+    "eval_rejected_term/min": -1.9564737385735498e-07,
+    "eval_rewards/accuracies": 0.6639999747276306,
+    "eval_rewards/chosen": -12.3125,
+    "eval_rewards/margins": 5.90625,
+    "eval_rewards/rejected": -18.25,
+    "eval_runtime": 25.3061,
+    "eval_samples_per_second": 39.516,
+    "eval_steps_per_second": 9.879
 }
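The changed metrics are TRL's standard DPO eval logs: `eval_rewards/chosen` and `eval_rewards/rejected` are beta-scaled log-probability ratios of the policy against the frozen reference model, and `eval_rewards/margins` is their difference (here -12.3125 - (-18.25) = 5.9375, matching the logged 5.90625 up to low-precision rounding; in both the old and new rows the margin is roughly 0.1 × `eval_log_ratio_diff/mean`, consistent with beta = 0.1). A minimal sketch of the vanilla objective, assuming a PyTorch setup; `beta` is an assumption not recorded in this commit, and the negative new `eval_loss` points to the rDPO variant named in the model id rather than this vanilla form, whose loss is nonnegative:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Vanilla DPO loss from the cited paper; beta=0.1 is an assumed default."""
    # Implicit rewards: beta-scaled log-prob ratios vs. the frozen reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards   # logged as rewards/margins
    loss = -F.logsigmoid(margins).mean()          # nonnegative for vanilla DPO
    return loss, chosen_rewards.mean(), rejected_rewards.mean()
```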
eval_results.json CHANGED
@@ -1,21 +1,21 @@
 {
     "epoch": 2.999098751126561,
-    "eval_log_ratio_diff/mean": 17.375,
-    "eval_logits/chosen": -2.359375,
-    "eval_logits/rejected": -2.4375,
-    "eval_logps/chosen": -346.0,
-    "eval_logps/rejected": -338.0,
-    "eval_loss": 0.2879411578178406,
-    "eval_nll_loss": 1.234375,
+    "eval_log_ratio_diff/mean": 59.0,
+    "eval_logits/chosen": -2.703125,
+    "eval_logits/rejected": -2.8125,
+    "eval_logps/chosen": -444.0,
+    "eval_logps/rejected": -480.0,
+    "eval_loss": -2.3502578735351562,
+    "eval_nll_loss": 1.7734375,
     "eval_ref_probs/chosen": 0.0038271306548267603,
     "eval_ref_probs/rejected": 0.0034569373819977045,
-    "eval_rejected_term/max": -4.116310446988791e-05,
-    "eval_rejected_term/min": -4.116310446988791e-05,
-    "eval_rewards/accuracies": 0.7440000176429749,
-    "eval_rewards/chosen": -2.375,
-    "eval_rewards/margins": 1.7421875,
-    "eval_rewards/rejected": -4.125,
-    "eval_runtime": 25.3378,
-    "eval_samples_per_second": 39.467,
-    "eval_steps_per_second": 9.867
+    "eval_rejected_term/max": -1.9564737385735498e-07,
+    "eval_rejected_term/min": -1.9564737385735498e-07,
+    "eval_rewards/accuracies": 0.6639999747276306,
+    "eval_rewards/chosen": -12.3125,
+    "eval_rewards/margins": 5.90625,
+    "eval_rewards/rejected": -18.25,
+    "eval_runtime": 25.3061,
+    "eval_samples_per_second": 39.516,
+    "eval_steps_per_second": 9.879
 }
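In this commit `eval_results.json` changes identically to the eval block of `all_results.json`. A quick sanity check one could run against the new file (assuming it sits in the working directory):

```python
import json

with open("eval_results.json") as f:
    m = json.load(f)

# Margin should be ~chosen - rejected; low-precision logging rounds each term.
print(m["eval_rewards/chosen"] - m["eval_rewards/rejected"])  # 5.9375
print(m["eval_rewards/margins"])                              # 5.90625

# Throughput consistency: runtime * samples/s ~ eval set size (~1000 samples).
print(m["eval_runtime"] * m["eval_samples_per_second"])       # ~1000.0
```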