chansung committed
Commit ad065f1 · verified · 1 Parent(s): d6da3f1

Model save

Files changed (5)
  1. README.md +68 -0
  2. all_results.json +8 -0
  3. generation_config.json +14 -0
  4. train_results.json +8 -0
  5. trainer_state.json +1443 -0
README.md ADDED
@@ -0,0 +1,68 @@
+ ---
+ base_model: Qwen/Qwen2.5-1.5B-Instruct
+ library_name: transformers
+ model_name: Qwen2.5-1.5B-CCRL-1
+ tags:
+ - generated_from_trainer
+ - trl
+ - grpo
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-1.5B-CCRL-1
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="chansung/Qwen2.5-1.5B-CCRL-1", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chansung18/huggingface/runs/ljp86q65)
+
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - TRL: 0.16.0
+ - Transformers: 4.50.0
+ - Pytorch: 2.5.1
+ - Datasets: 3.4.1
+ - Tokenizers: 0.21.1
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+     title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+     author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+     year = 2024,
+     eprint = {arXiv:2402.03300},
+ }
+
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title = {{TRL: Transformer Reinforcement Learning}},
+     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year = 2020,
+     journal = {GitHub repository},
+     publisher = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
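
The card above names GRPO but does not include the training script itself. Below is a minimal sketch of what a comparable GRPO run with TRL 0.16 could look like, not the exact script used for this commit: the dataset name and the body of the reward function are assumptions, and only the base model, the metric name `rewards/code_format_reward`, and the hyperparameters marked as coming from trainer_state.json are grounded in this repository.

```python
# Minimal sketch of a GRPO run with TRL; NOT the exact script behind this commit.
# The dataset and the reward implementation are placeholders/assumptions.
import re

from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer


def code_format_reward(completions, **kwargs):
    """Hypothetical reward matching the logged metric name: 1.0 if the completion
    contains a fenced code block, else 0.0 (assumes plain-text completions)."""
    pattern = re.compile(r"```.*?```", re.DOTALL)
    return [1.0 if pattern.search(c) else 0.0 for c in completions]


dataset = load_dataset("my-org/my-prompt-dataset", split="train")  # placeholder dataset

training_args = GRPOConfig(
    output_dir="Qwen2.5-1.5B-CCRL-1",
    learning_rate=1e-6,              # peak LR seen in trainer_state.json
    num_train_epochs=2,              # from trainer_state.json
    max_steps=100,                   # from trainer_state.json
    per_device_train_batch_size=8,   # "train_batch_size": 8 in trainer_state.json
    logging_steps=1,                 # from trainer_state.json
    save_steps=50,                   # from trainer_state.json
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    reward_funcs=code_format_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

In GRPO, each prompt is sampled several times and reward differences within the group drive the policy update, which is why the trainer state below logs per-group statistics such as `reward_std` and `completion_length` alongside the reward itself.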
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 0.003220523014695118,
+ "train_runtime": 15510.0274,
+ "train_samples": 949,
+ "train_samples_per_second": 0.825,
+ "train_steps_per_second": 0.006
+ }
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "bos_token_id": 151643,
+ "do_sample": true,
+ "eos_token_id": [
+ 151645,
+ 151643
+ ],
+ "pad_token_id": 151643,
+ "repetition_penalty": 1.1,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8,
+ "transformers_version": "4.50.0"
+ }
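
These are the decoding defaults that `generate()` picks up automatically when the model is loaded from this repo. For reference, a short sketch of reproducing (or overriding) them explicitly with `transformers`; the example prompt, dtype, and device choices are arbitrary.

```python
# Sketch: apply the sampling settings from generation_config.json explicitly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "chansung/Qwen2.5-1.5B-CCRL-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Same values as generation_config.json; passing them overrides the stored defaults.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.1,
    eos_token_id=[151645, 151643],
    pad_token_id=151643,
)

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, generation_config=gen_config, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the same values are stored in the repository, calling `generate()` without an explicit `GenerationConfig` behaves equivalently; passing one is only needed to override individual settings.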
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 0.003220523014695118,
+ "train_runtime": 15510.0274,
+ "train_samples": 949,
+ "train_samples_per_second": 0.825,
+ "train_steps_per_second": 0.006
+ }
trainer_state.json ADDED
@@ -0,0 +1,1443 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 1.6890756302521008,
6
+ "eval_steps": 500,
7
+ "global_step": 100,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "clip_ratio": 0.0,
14
+ "completion_length": 641.9296875,
15
+ "epoch": 0.01680672268907563,
16
+ "grad_norm": 0.1829340010881424,
17
+ "kl": 0.0,
18
+ "learning_rate": 3.333333333333333e-07,
19
+ "loss": -0.0027,
20
+ "num_tokens": 143935.0,
21
+ "reward": 0.015625,
22
+ "reward_std": 0.0289318785071373,
23
+ "rewards/code_format_reward": 0.015625,
24
+ "step": 1
25
+ },
26
+ {
27
+ "clip_ratio": 0.0,
28
+ "completion_length": 558.2109375,
29
+ "epoch": 0.03361344537815126,
30
+ "grad_norm": 0.24132254719734192,
31
+ "kl": 0.0,
32
+ "learning_rate": 6.666666666666666e-07,
33
+ "loss": 0.0106,
34
+ "num_tokens": 275234.0,
35
+ "reward": 0.015625,
36
+ "reward_std": 0.04419417306780815,
37
+ "rewards/code_format_reward": 0.015625,
38
+ "step": 2
39
+ },
40
+ {
41
+ "clip_ratio": 0.0,
42
+ "completion_length": 552.5546875,
43
+ "epoch": 0.05042016806722689,
44
+ "grad_norm": 0.2641178071498871,
45
+ "kl": 0.00027370452880859375,
46
+ "learning_rate": 1e-06,
47
+ "loss": 0.0038,
48
+ "num_tokens": 408041.0,
49
+ "reward": 0.015625,
50
+ "reward_std": 0.04419417306780815,
51
+ "rewards/code_format_reward": 0.015625,
52
+ "step": 3
53
+ },
54
+ {
55
+ "clip_ratio": 0.0,
56
+ "completion_length": 580.0703125,
57
+ "epoch": 0.06722689075630252,
58
+ "grad_norm": 0.0005861425888724625,
59
+ "kl": 0.00024080276489257812,
60
+ "learning_rate": 9.997640060704816e-07,
61
+ "loss": 0.0,
62
+ "num_tokens": 541898.0,
63
+ "reward": 0.0,
64
+ "reward_std": 0.0,
65
+ "rewards/code_format_reward": 0.0,
66
+ "step": 4
67
+ },
68
+ {
69
+ "clip_ratio": 0.0,
70
+ "completion_length": 610.8125,
71
+ "epoch": 0.08403361344537816,
72
+ "grad_norm": 0.19232627749443054,
73
+ "kl": 0.00027179718017578125,
74
+ "learning_rate": 9.990562718069702e-07,
75
+ "loss": 0.0049,
76
+ "num_tokens": 679066.0,
77
+ "reward": 0.0234375,
78
+ "reward_std": 0.051028965041041374,
79
+ "rewards/code_format_reward": 0.0234375,
80
+ "step": 5
81
+ },
82
+ {
83
+ "clip_ratio": 0.0,
84
+ "completion_length": 570.3046875,
85
+ "epoch": 0.10084033613445378,
86
+ "grad_norm": 0.000793790037278086,
87
+ "kl": 0.0002541542053222656,
88
+ "learning_rate": 9.978775395249762e-07,
89
+ "loss": 0.0,
90
+ "num_tokens": 811689.0,
91
+ "reward": 0.0,
92
+ "reward_std": 0.0,
93
+ "rewards/code_format_reward": 0.0,
94
+ "step": 6
95
+ },
96
+ {
97
+ "clip_ratio": 0.0,
98
+ "completion_length": 583.1171875,
99
+ "epoch": 0.11764705882352941,
100
+ "grad_norm": 0.14019447565078735,
101
+ "kl": 0.0003261566162109375,
102
+ "learning_rate": 9.962290455518912e-07,
103
+ "loss": 0.0085,
104
+ "num_tokens": 942160.0,
105
+ "reward": 0.0078125,
106
+ "reward_std": 0.022097086533904076,
107
+ "rewards/code_format_reward": 0.0078125,
108
+ "step": 7
109
+ },
110
+ {
111
+ "clip_ratio": 0.0,
112
+ "completion_length": 602.890625,
113
+ "epoch": 0.13445378151260504,
114
+ "grad_norm": 0.0008551861974410713,
115
+ "kl": 0.00027751922607421875,
116
+ "learning_rate": 9.941125189302508e-07,
117
+ "loss": 0.0,
118
+ "num_tokens": 1081786.0,
119
+ "reward": 0.0,
120
+ "reward_std": 0.0,
121
+ "rewards/code_format_reward": 0.0,
122
+ "step": 8
123
+ },
124
+ {
125
+ "clip_ratio": 0.0,
126
+ "completion_length": 538.1484375,
127
+ "epoch": 0.15126050420168066,
128
+ "grad_norm": 0.6493130326271057,
129
+ "kl": 0.0005221366882324219,
130
+ "learning_rate": 9.915301796042075e-07,
131
+ "loss": 0.0143,
132
+ "num_tokens": 1204973.0,
133
+ "reward": 0.0625,
134
+ "reward_std": 0.16151439771056175,
135
+ "rewards/code_format_reward": 0.0625,
136
+ "step": 9
137
+ },
138
+ {
139
+ "clip_ratio": 0.0,
140
+ "completion_length": 633.46875,
141
+ "epoch": 0.16806722689075632,
142
+ "grad_norm": 0.4631551206111908,
143
+ "kl": 0.0004162788391113281,
144
+ "learning_rate": 9.884847360911167e-07,
145
+ "loss": -0.0393,
146
+ "num_tokens": 1348945.0,
147
+ "reward": 0.0546875,
148
+ "reward_std": 0.12415501847863197,
149
+ "rewards/code_format_reward": 0.0546875,
150
+ "step": 10
151
+ },
152
+ {
153
+ "clip_ratio": 0.0,
154
+ "completion_length": 573.3671875,
155
+ "epoch": 0.18487394957983194,
156
+ "grad_norm": 0.46803170442581177,
157
+ "kl": 0.0006732940673828125,
158
+ "learning_rate": 9.84979382640675e-07,
159
+ "loss": -0.0024,
160
+ "num_tokens": 1478632.0,
161
+ "reward": 0.0703125,
162
+ "reward_std": 0.15308689512312412,
163
+ "rewards/code_format_reward": 0.0703125,
164
+ "step": 11
165
+ },
166
+ {
167
+ "clip_ratio": 0.0,
168
+ "completion_length": 603.7265625,
169
+ "epoch": 0.20168067226890757,
170
+ "grad_norm": 0.40413135290145874,
171
+ "kl": 0.0005345344543457031,
172
+ "learning_rate": 9.81017795884594e-07,
173
+ "loss": 0.0043,
174
+ "num_tokens": 1619165.0,
175
+ "reward": 0.0390625,
176
+ "reward_std": 0.09522314183413982,
177
+ "rewards/code_format_reward": 0.0390625,
178
+ "step": 12
179
+ },
180
+ {
181
+ "clip_ratio": 0.0,
182
+ "completion_length": 564.96875,
183
+ "epoch": 0.2184873949579832,
184
+ "grad_norm": 0.5944288372993469,
185
+ "kl": 0.0011510848999023438,
186
+ "learning_rate": 9.766041309803217e-07,
187
+ "loss": 0.009,
188
+ "num_tokens": 1751681.0,
189
+ "reward": 0.1015625,
190
+ "reward_std": 0.20411096327006817,
191
+ "rewards/code_format_reward": 0.1015625,
192
+ "step": 13
193
+ },
194
+ {
195
+ "clip_ratio": 0.0,
196
+ "completion_length": 509.171875,
197
+ "epoch": 0.23529411764705882,
198
+ "grad_norm": 0.584949254989624,
199
+ "kl": 0.0027894973754882812,
200
+ "learning_rate": 9.717430172528546e-07,
201
+ "loss": 0.0372,
202
+ "num_tokens": 1872271.0,
203
+ "reward": 0.125,
204
+ "reward_std": 0.2398776337504387,
205
+ "rewards/code_format_reward": 0.125,
206
+ "step": 14
207
+ },
208
+ {
209
+ "clip_ratio": 0.0,
210
+ "completion_length": 547.15625,
211
+ "epoch": 0.25210084033613445,
212
+ "grad_norm": 0.6391139030456543,
213
+ "kl": 0.003070831298828125,
214
+ "learning_rate": 9.66439553339217e-07,
215
+ "loss": 0.0615,
216
+ "num_tokens": 2002227.0,
217
+ "reward": 0.296875,
218
+ "reward_std": 0.32535743340849876,
219
+ "rewards/code_format_reward": 0.296875,
220
+ "step": 15
221
+ },
222
+ {
223
+ "clip_ratio": 0.0,
224
+ "completion_length": 557.8125,
225
+ "epoch": 0.2689075630252101,
226
+ "grad_norm": 0.5913247466087341,
227
+ "kl": 0.00417327880859375,
228
+ "learning_rate": 9.60699301840693e-07,
229
+ "loss": -0.013,
230
+ "num_tokens": 2130827.0,
231
+ "reward": 0.234375,
232
+ "reward_std": 0.2630178965628147,
233
+ "rewards/code_format_reward": 0.234375,
234
+ "step": 16
235
+ },
236
+ {
237
+ "clip_ratio": 0.0,
238
+ "completion_length": 454.71875,
239
+ "epoch": 0.2857142857142857,
240
+ "grad_norm": 0.6741214990615845,
241
+ "kl": 0.005828857421875,
242
+ "learning_rate": 9.54528283488428e-07,
243
+ "loss": 0.0233,
244
+ "num_tokens": 2246215.0,
245
+ "reward": 0.3125,
246
+ "reward_std": 0.282998226583004,
247
+ "rewards/code_format_reward": 0.3125,
248
+ "step": 17
249
+ },
250
+ {
251
+ "clip_ratio": 0.0,
252
+ "completion_length": 694.3671875,
253
+ "epoch": 0.3025210084033613,
254
+ "grad_norm": 0.2503214478492737,
255
+ "kl": 0.0009298324584960938,
256
+ "learning_rate": 9.479329708285106e-07,
257
+ "loss": 0.0084,
258
+ "num_tokens": 2399230.0,
259
+ "reward": 0.09375,
260
+ "reward_std": 0.0578637570142746,
261
+ "rewards/code_format_reward": 0.09375,
262
+ "step": 18
263
+ },
264
+ {
265
+ "clip_ratio": 0.0,
266
+ "completion_length": 606.7421875,
267
+ "epoch": 0.31932773109243695,
268
+ "grad_norm": 0.3981899917125702,
269
+ "kl": 0.003948211669921875,
270
+ "learning_rate": 9.409202814331679e-07,
271
+ "loss": -0.0357,
272
+ "num_tokens": 2537525.0,
273
+ "reward": 0.265625,
274
+ "reward_std": 0.14806943386793137,
275
+ "rewards/code_format_reward": 0.265625,
276
+ "step": 19
277
+ },
278
+ {
279
+ "clip_ratio": 0.0,
280
+ "completion_length": 442.84375,
281
+ "epoch": 0.33613445378151263,
282
+ "grad_norm": 0.5728533864021301,
283
+ "kl": 0.0042285919189453125,
284
+ "learning_rate": 9.334975706451861e-07,
285
+ "loss": 0.0178,
286
+ "num_tokens": 2653577.0,
287
+ "reward": 0.3671875,
288
+ "reward_std": 0.2314501255750656,
289
+ "rewards/code_format_reward": 0.3671875,
290
+ "step": 20
291
+ },
292
+ {
293
+ "clip_ratio": 0.0,
294
+ "completion_length": 666.9765625,
295
+ "epoch": 0.35294117647058826,
296
+ "grad_norm": 0.40784379839897156,
297
+ "kl": 0.003398895263671875,
298
+ "learning_rate": 9.256726238631719e-07,
299
+ "loss": 0.0296,
300
+ "num_tokens": 2801526.0,
301
+ "reward": 0.171875,
302
+ "reward_std": 0.12255740165710449,
303
+ "rewards/code_format_reward": 0.171875,
304
+ "step": 21
305
+ },
306
+ {
307
+ "clip_ratio": 0.0,
308
+ "completion_length": 561.1328125,
309
+ "epoch": 0.3697478991596639,
310
+ "grad_norm": 0.3966084122657776,
311
+ "kl": 0.0062713623046875,
312
+ "learning_rate": 9.174536483757448e-07,
313
+ "loss": -0.0061,
314
+ "num_tokens": 2930031.0,
315
+ "reward": 0.3515625,
316
+ "reward_std": 0.11784427240490913,
317
+ "rewards/code_format_reward": 0.3515625,
318
+ "step": 22
319
+ },
320
+ {
321
+ "clip_ratio": 0.0,
322
+ "completion_length": 603.8125,
323
+ "epoch": 0.3865546218487395,
324
+ "grad_norm": 0.4765741229057312,
325
+ "kl": 0.004345893859863281,
326
+ "learning_rate": 9.088492647532243e-07,
327
+ "loss": 0.0237,
328
+ "num_tokens": 3069271.0,
329
+ "reward": 0.25,
330
+ "reward_std": 0.1065337061882019,
331
+ "rewards/code_format_reward": 0.25,
332
+ "step": 23
333
+ },
334
+ {
335
+ "clip_ratio": 0.0,
336
+ "completion_length": 555.90625,
337
+ "epoch": 0.40336134453781514,
338
+ "grad_norm": 0.42516130208969116,
339
+ "kl": 0.0028820037841796875,
340
+ "learning_rate": 8.998684978058422e-07,
341
+ "loss": -0.0071,
342
+ "num_tokens": 3203811.0,
343
+ "reward": 0.3125,
344
+ "reward_std": 0.1422954723238945,
345
+ "rewards/code_format_reward": 0.3125,
346
+ "step": 24
347
+ },
348
+ {
349
+ "clip_ratio": 0.0,
350
+ "completion_length": 552.5703125,
351
+ "epoch": 0.42016806722689076,
352
+ "grad_norm": 0.4370390772819519,
353
+ "kl": 0.008594512939453125,
354
+ "learning_rate": 8.905207671179627e-07,
355
+ "loss": 0.0123,
356
+ "num_tokens": 3332956.0,
357
+ "reward": 0.3828125,
358
+ "reward_std": 0.12863079272210598,
359
+ "rewards/code_format_reward": 0.3828125,
360
+ "step": 25
361
+ },
362
+ {
363
+ "clip_ratio": 0.0,
364
+ "completion_length": 477.546875,
365
+ "epoch": 0.4369747899159664,
366
+ "grad_norm": 0.6266469955444336,
367
+ "kl": 0.006752490997314453,
368
+ "learning_rate": 8.808158771682401e-07,
369
+ "loss": 0.031,
370
+ "num_tokens": 3451954.0,
371
+ "reward": 0.625,
372
+ "reward_std": 0.1615143995732069,
373
+ "rewards/code_format_reward": 0.625,
374
+ "step": 26
375
+ },
376
+ {
377
+ "clip_ratio": 0.0,
378
+ "completion_length": 623.3125,
379
+ "epoch": 0.453781512605042,
380
+ "grad_norm": 0.3876335620880127,
381
+ "kl": 0.0042819976806640625,
382
+ "learning_rate": 8.707640070460731e-07,
383
+ "loss": -0.0099,
384
+ "num_tokens": 3592458.0,
385
+ "reward": 0.328125,
386
+ "reward_std": 0.10205793380737305,
387
+ "rewards/code_format_reward": 0.328125,
388
+ "step": 27
389
+ },
390
+ {
391
+ "clip_ratio": 0.0,
392
+ "completion_length": 568.046875,
393
+ "epoch": 0.47058823529411764,
394
+ "grad_norm": 0.3156704306602478,
395
+ "kl": 0.006195068359375,
396
+ "learning_rate": 8.60375699775147e-07,
397
+ "loss": 0.0065,
398
+ "num_tokens": 3722080.0,
399
+ "reward": 0.546875,
400
+ "reward_std": 0.04419417306780815,
401
+ "rewards/code_format_reward": 0.546875,
402
+ "step": 28
403
+ },
404
+ {
405
+ "clip_ratio": 0.0,
406
+ "completion_length": 612.5703125,
407
+ "epoch": 0.48739495798319327,
408
+ "grad_norm": 0.3149147927761078,
409
+ "kl": 0.004520416259765625,
410
+ "learning_rate": 8.496618512552564e-07,
411
+ "loss": 0.0076,
412
+ "num_tokens": 3861281.0,
413
+ "reward": 0.3125,
414
+ "reward_std": 0.0578637570142746,
415
+ "rewards/code_format_reward": 0.3125,
416
+ "step": 29
417
+ },
418
+ {
419
+ "clip_ratio": 0.0,
420
+ "completion_length": 626.0078125,
421
+ "epoch": 0.5042016806722689,
422
+ "grad_norm": 0.41246238350868225,
423
+ "kl": 0.009387016296386719,
424
+ "learning_rate": 8.386336988340129e-07,
425
+ "loss": -0.0028,
426
+ "num_tokens": 4004770.0,
427
+ "reward": 0.34375,
428
+ "reward_std": 0.13258251920342445,
429
+ "rewards/code_format_reward": 0.34375,
430
+ "step": 30
431
+ },
432
+ {
433
+ "clip_ratio": 0.0,
434
+ "completion_length": 594.40625,
435
+ "epoch": 0.5210084033613446,
436
+ "grad_norm": 0.2296294867992401,
437
+ "kl": 0.0028858184814453125,
438
+ "learning_rate": 8.273028095204173e-07,
439
+ "loss": 0.0033,
440
+ "num_tokens": 4142014.0,
441
+ "reward": 0.1796875,
442
+ "reward_std": 0.022097086533904076,
443
+ "rewards/code_format_reward": 0.1796875,
444
+ "step": 31
445
+ },
446
+ {
447
+ "clip_ratio": 0.0,
448
+ "completion_length": 549.1328125,
449
+ "epoch": 0.5378151260504201,
450
+ "grad_norm": 0.4845007061958313,
451
+ "kl": 0.0059051513671875,
452
+ "learning_rate": 8.156810678526652e-07,
453
+ "loss": -0.017,
454
+ "num_tokens": 4276015.0,
455
+ "reward": 0.3515625,
456
+ "reward_std": 0.14465449005365372,
457
+ "rewards/code_format_reward": 0.3515625,
458
+ "step": 32
459
+ },
460
+ {
461
+ "clip_ratio": 0.0,
462
+ "completion_length": 546.265625,
463
+ "epoch": 0.5546218487394958,
464
+ "grad_norm": 0.3567289412021637,
465
+ "kl": 0.005253791809082031,
466
+ "learning_rate": 8.037806634329078e-07,
467
+ "loss": 0.0042,
468
+ "num_tokens": 4407857.0,
469
+ "reward": 0.34375,
470
+ "reward_std": 0.05444391071796417,
471
+ "rewards/code_format_reward": 0.34375,
472
+ "step": 33
473
+ },
474
+ {
475
+ "clip_ratio": 0.0,
476
+ "completion_length": 505.0703125,
477
+ "epoch": 0.5714285714285714,
478
+ "grad_norm": 0.2423199564218521,
479
+ "kl": 0.00742340087890625,
480
+ "learning_rate": 7.916140781420428e-07,
481
+ "loss": 0.0159,
482
+ "num_tokens": 4534298.0,
483
+ "reward": 0.3828125,
484
+ "reward_std": 0.061278700828552246,
485
+ "rewards/code_format_reward": 0.3828125,
486
+ "step": 34
487
+ },
488
+ {
489
+ "clip_ratio": 0.0,
490
+ "completion_length": 544.9296875,
491
+ "epoch": 0.5882352941176471,
492
+ "grad_norm": 0.36438822746276855,
493
+ "kl": 0.011692047119140625,
494
+ "learning_rate": 7.791940730479434e-07,
495
+ "loss": 0.0236,
496
+ "num_tokens": 4665225.0,
497
+ "reward": 0.3359375,
498
+ "reward_std": 0.061278700828552246,
499
+ "rewards/code_format_reward": 0.3359375,
500
+ "step": 35
501
+ },
502
+ {
503
+ "clip_ratio": 0.0,
504
+ "completion_length": 549.703125,
505
+ "epoch": 0.6050420168067226,
506
+ "grad_norm": 0.2744235098361969,
507
+ "kl": 0.0067291259765625,
508
+ "learning_rate": 7.665336750208623e-07,
509
+ "loss": -0.0033,
510
+ "num_tokens": 4795883.0,
511
+ "reward": 0.484375,
512
+ "reward_std": 0.04419417306780815,
513
+ "rewards/code_format_reward": 0.484375,
514
+ "step": 36
515
+ },
516
+ {
517
+ "clip_ratio": 0.0,
518
+ "completion_length": 480.1328125,
519
+ "epoch": 0.6218487394957983,
520
+ "grad_norm": 0.19300399720668793,
521
+ "kl": 0.00769805908203125,
522
+ "learning_rate": 7.536461630700425e-07,
523
+ "loss": -0.0044,
524
+ "num_tokens": 4914988.0,
525
+ "reward": 0.4296875,
526
+ "reward_std": 0.022097086533904076,
527
+ "rewards/code_format_reward": 0.4296875,
528
+ "step": 37
529
+ },
530
+ {
531
+ "clip_ratio": 0.0,
532
+ "completion_length": 471.4296875,
533
+ "epoch": 0.6386554621848739,
534
+ "grad_norm": 0.01874758116900921,
535
+ "kl": 0.01021575927734375,
536
+ "learning_rate": 7.405450544158706e-07,
537
+ "loss": 0.0001,
538
+ "num_tokens": 5031363.0,
539
+ "reward": 0.5625,
540
+ "reward_std": 0.0,
541
+ "rewards/code_format_reward": 0.5625,
542
+ "step": 38
543
+ },
544
+ {
545
+ "clip_ratio": 0.0,
546
+ "completion_length": 529.78125,
547
+ "epoch": 0.6554621848739496,
548
+ "grad_norm": 0.056958042085170746,
549
+ "kl": 0.0102996826171875,
550
+ "learning_rate": 7.272440903121791e-07,
551
+ "loss": 0.0001,
552
+ "num_tokens": 5158719.0,
553
+ "reward": 0.375,
554
+ "reward_std": 0.0,
555
+ "rewards/code_format_reward": 0.375,
556
+ "step": 39
557
+ },
558
+ {
559
+ "clip_ratio": 0.0,
560
+ "completion_length": 587.6953125,
561
+ "epoch": 0.6722689075630253,
562
+ "grad_norm": 0.1463167667388916,
563
+ "kl": 0.006369590759277344,
564
+ "learning_rate": 7.137572216335694e-07,
565
+ "loss": -0.0141,
566
+ "num_tokens": 5294992.0,
567
+ "reward": 0.3671875,
568
+ "reward_std": 0.022097086533904076,
569
+ "rewards/code_format_reward": 0.3671875,
570
+ "step": 40
571
+ },
572
+ {
573
+ "clip_ratio": 0.0,
574
+ "completion_length": 509.125,
575
+ "epoch": 0.6890756302521008,
576
+ "grad_norm": 0.26510393619537354,
577
+ "kl": 0.0092620849609375,
578
+ "learning_rate": 7.000985942428693e-07,
579
+ "loss": -0.0208,
580
+ "num_tokens": 5417744.0,
581
+ "reward": 0.6015625,
582
+ "reward_std": 0.051028965041041374,
583
+ "rewards/code_format_reward": 0.6015625,
584
+ "step": 41
585
+ },
586
+ {
587
+ "clip_ratio": 0.0,
588
+ "completion_length": 408.5625,
589
+ "epoch": 0.7058823529411765,
590
+ "grad_norm": 0.21098148822784424,
591
+ "kl": 0.011627197265625,
592
+ "learning_rate": 6.862825341540778e-07,
593
+ "loss": 0.0141,
594
+ "num_tokens": 5528912.0,
595
+ "reward": 0.6015625,
596
+ "reward_std": 0.03234682232141495,
597
+ "rewards/code_format_reward": 0.6015625,
598
+ "step": 42
599
+ },
600
+ {
601
+ "clip_ratio": 0.0,
602
+ "completion_length": 544.2578125,
603
+ "epoch": 0.7226890756302521,
604
+ "grad_norm": 0.2268461138010025,
605
+ "kl": 0.00897979736328125,
606
+ "learning_rate": 6.723235325063543e-07,
607
+ "loss": -0.0046,
608
+ "num_tokens": 5657177.0,
609
+ "reward": 0.4921875,
610
+ "reward_std": 0.022097086533904076,
611
+ "rewards/code_format_reward": 0.4921875,
612
+ "step": 43
613
+ },
614
+ {
615
+ "clip_ratio": 0.0,
616
+ "completion_length": 474.453125,
617
+ "epoch": 0.7394957983193278,
618
+ "grad_norm": 0.012474104762077332,
619
+ "kl": 0.00955963134765625,
620
+ "learning_rate": 6.582362303648142e-07,
621
+ "loss": 0.0001,
622
+ "num_tokens": 5780899.0,
623
+ "reward": 0.625,
624
+ "reward_std": 0.0,
625
+ "rewards/code_format_reward": 0.625,
626
+ "step": 44
627
+ },
628
+ {
629
+ "clip_ratio": 0.0,
630
+ "completion_length": 563.0078125,
631
+ "epoch": 0.7563025210084033,
632
+ "grad_norm": 0.3377431333065033,
633
+ "kl": 0.00730133056640625,
634
+ "learning_rate": 6.440354033640738e-07,
635
+ "loss": 0.0134,
636
+ "num_tokens": 5912020.0,
637
+ "reward": 0.4765625,
638
+ "reward_std": 0.051028965041041374,
639
+ "rewards/code_format_reward": 0.4765625,
640
+ "step": 45
641
+ },
642
+ {
643
+ "clip_ratio": 0.0,
644
+ "completion_length": 476.9609375,
645
+ "epoch": 0.773109243697479,
646
+ "grad_norm": 0.0116049125790596,
647
+ "kl": 0.0107574462890625,
648
+ "learning_rate": 6.297359462106502e-07,
649
+ "loss": 0.0001,
650
+ "num_tokens": 6029263.0,
651
+ "reward": 0.625,
652
+ "reward_std": 0.0,
653
+ "rewards/code_format_reward": 0.625,
654
+ "step": 46
655
+ },
656
+ {
657
+ "clip_ratio": 0.0,
658
+ "completion_length": 536.6484375,
659
+ "epoch": 0.7899159663865546,
660
+ "grad_norm": 0.26681584119796753,
661
+ "kl": 0.006664276123046875,
662
+ "learning_rate": 6.153528570604707e-07,
663
+ "loss": 0.0023,
664
+ "num_tokens": 6159890.0,
665
+ "reward": 0.359375,
666
+ "reward_std": 0.04419417306780815,
667
+ "rewards/code_format_reward": 0.359375,
668
+ "step": 47
669
+ },
670
+ {
671
+ "clip_ratio": 0.0,
672
+ "completion_length": 558.078125,
673
+ "epoch": 0.8067226890756303,
674
+ "grad_norm": 0.3406285345554352,
675
+ "kl": 0.007928848266601562,
676
+ "learning_rate": 6.00901221787878e-07,
677
+ "loss": -0.024,
678
+ "num_tokens": 6292572.0,
679
+ "reward": 0.421875,
680
+ "reward_std": 0.04419417306780815,
681
+ "rewards/code_format_reward": 0.421875,
682
+ "step": 48
683
+ },
684
+ {
685
+ "clip_ratio": 0.0,
686
+ "completion_length": 543.0625,
687
+ "epoch": 0.8235294117647058,
688
+ "grad_norm": 0.2923266887664795,
689
+ "kl": 0.009540557861328125,
690
+ "learning_rate": 5.86396198162632e-07,
691
+ "loss": -0.011,
692
+ "num_tokens": 6423868.0,
693
+ "reward": 0.421875,
694
+ "reward_std": 0.04419417306780815,
695
+ "rewards/code_format_reward": 0.421875,
696
+ "step": 49
697
+ },
698
+ {
699
+ "clip_ratio": 0.0,
700
+ "completion_length": 571.78125,
701
+ "epoch": 0.8403361344537815,
702
+ "grad_norm": 0.3251660466194153,
703
+ "kl": 0.00586700439453125,
704
+ "learning_rate": 5.718529999515017e-07,
705
+ "loss": 0.0111,
706
+ "num_tokens": 6556552.0,
707
+ "reward": 0.3046875,
708
+ "reward_std": 0.022097086533904076,
709
+ "rewards/code_format_reward": 0.3046875,
710
+ "step": 50
711
+ },
712
+ {
713
+ "clip_ratio": 0.0,
714
+ "completion_length": 581.1640625,
715
+ "epoch": 0.8571428571428571,
716
+ "grad_norm": 0.17117629945278168,
717
+ "kl": 0.007695198059082031,
718
+ "learning_rate": 5.572868809611257e-07,
719
+ "loss": -0.019,
720
+ "num_tokens": 6694565.0,
721
+ "reward": 0.296875,
722
+ "reward_std": 0.0289318785071373,
723
+ "rewards/code_format_reward": 0.296875,
724
+ "step": 51
725
+ },
726
+ {
727
+ "clip_ratio": 0.0,
728
+ "completion_length": 507.4453125,
729
+ "epoch": 0.8739495798319328,
730
+ "grad_norm": 0.2713668644428253,
731
+ "kl": 0.00688934326171875,
732
+ "learning_rate": 5.427131190388743e-07,
733
+ "loss": 0.0161,
734
+ "num_tokens": 6817558.0,
735
+ "reward": 0.4921875,
736
+ "reward_std": 0.0657544732093811,
737
+ "rewards/code_format_reward": 0.4921875,
738
+ "step": 52
739
+ },
740
+ {
741
+ "clip_ratio": 0.0,
742
+ "completion_length": 524.703125,
743
+ "epoch": 0.8907563025210085,
744
+ "grad_norm": 0.2056041955947876,
745
+ "kl": 0.012969970703125,
746
+ "learning_rate": 5.281470000484985e-07,
747
+ "loss": 0.0066,
748
+ "num_tokens": 6943264.0,
749
+ "reward": 0.4296875,
750
+ "reward_std": 0.022097086533904076,
751
+ "rewards/code_format_reward": 0.4296875,
752
+ "step": 53
753
+ },
754
+ {
755
+ "clip_ratio": 0.0,
756
+ "completion_length": 549.6875,
757
+ "epoch": 0.907563025210084,
758
+ "grad_norm": 0.23319439589977264,
759
+ "kl": 0.007537841796875,
760
+ "learning_rate": 5.136038018373682e-07,
761
+ "loss": 0.0105,
762
+ "num_tokens": 7073792.0,
763
+ "reward": 0.4296875,
764
+ "reward_std": 0.022097086533904076,
765
+ "rewards/code_format_reward": 0.4296875,
766
+ "step": 54
767
+ },
768
+ {
769
+ "clip_ratio": 0.0,
770
+ "completion_length": 529.8125,
771
+ "epoch": 0.9243697478991597,
772
+ "grad_norm": 0.19098737835884094,
773
+ "kl": 0.00901031494140625,
774
+ "learning_rate": 4.990987782121221e-07,
775
+ "loss": -0.0105,
776
+ "num_tokens": 7201304.0,
777
+ "reward": 0.3671875,
778
+ "reward_std": 0.022097086533904076,
779
+ "rewards/code_format_reward": 0.3671875,
780
+ "step": 55
781
+ },
782
+ {
783
+ "clip_ratio": 0.0,
784
+ "completion_length": 526.3515625,
785
+ "epoch": 0.9411764705882353,
786
+ "grad_norm": 0.019878050312399864,
787
+ "kl": 0.00701141357421875,
788
+ "learning_rate": 4.846471429395295e-07,
789
+ "loss": 0.0001,
790
+ "num_tokens": 7331461.0,
791
+ "reward": 0.375,
792
+ "reward_std": 0.0,
793
+ "rewards/code_format_reward": 0.375,
794
+ "step": 56
795
+ },
796
+ {
797
+ "clip_ratio": 0.0,
798
+ "completion_length": 486.59375,
799
+ "epoch": 0.957983193277311,
800
+ "grad_norm": 0.010634235106408596,
801
+ "kl": 0.0092620849609375,
802
+ "learning_rate": 4.7026405378934975e-07,
803
+ "loss": 0.0001,
804
+ "num_tokens": 7452657.0,
805
+ "reward": 0.5625,
806
+ "reward_std": 0.0,
807
+ "rewards/code_format_reward": 0.5625,
808
+ "step": 57
809
+ },
810
+ {
811
+ "clip_ratio": 0.0,
812
+ "completion_length": 413.234375,
813
+ "epoch": 0.9747899159663865,
814
+ "grad_norm": 0.210678830742836,
815
+ "kl": 0.011688232421875,
816
+ "learning_rate": 4.5596459663592625e-07,
817
+ "loss": 0.0237,
818
+ "num_tokens": 7560991.0,
819
+ "reward": 0.6171875,
820
+ "reward_std": 0.022097086533904076,
821
+ "rewards/code_format_reward": 0.6171875,
822
+ "step": 58
823
+ },
824
+ {
825
+ "clip_ratio": 0.0,
826
+ "completion_length": 552.8452529907227,
827
+ "epoch": 0.9915966386554622,
828
+ "grad_norm": 0.3087288439273834,
829
+ "kl": 0.00782012939453125,
830
+ "learning_rate": 4.41763769635186e-07,
831
+ "loss": -0.0165,
832
+ "num_tokens": 7690230.0,
833
+ "reward": 0.4140625,
834
+ "reward_std": 0.06629125960171223,
835
+ "rewards/code_format_reward": 0.4140625,
836
+ "step": 59
837
+ },
838
+ {
839
+ "clip_ratio": 0.0,
840
+ "completion_length": 571.84375,
841
+ "epoch": 1.0168067226890756,
842
+ "grad_norm": 0.2766449451446533,
843
+ "kl": 0.009830474853515625,
844
+ "learning_rate": 4.2767646749364574e-07,
845
+ "loss": 0.0012,
846
+ "num_tokens": 7825994.0,
847
+ "reward": 0.359375,
848
+ "reward_std": 0.04419417306780815,
849
+ "rewards/code_format_reward": 0.359375,
850
+ "step": 60
851
+ },
852
+ {
853
+ "clip_ratio": 0.0,
854
+ "completion_length": 549.734375,
855
+ "epoch": 1.0336134453781514,
856
+ "grad_norm": 0.008657719939947128,
857
+ "kl": 0.00926971435546875,
858
+ "learning_rate": 4.1371746584592227e-07,
859
+ "loss": 0.0001,
860
+ "num_tokens": 7954624.0,
861
+ "reward": 0.4375,
862
+ "reward_std": 0.0,
863
+ "rewards/code_format_reward": 0.4375,
864
+ "step": 61
865
+ },
866
+ {
867
+ "clip_ratio": 0.0,
868
+ "completion_length": 607.4296875,
869
+ "epoch": 1.050420168067227,
870
+ "grad_norm": 0.00941454991698265,
871
+ "kl": 0.00757598876953125,
872
+ "learning_rate": 3.999014057571308e-07,
873
+ "loss": 0.0001,
874
+ "num_tokens": 8095783.0,
875
+ "reward": 0.3125,
876
+ "reward_std": 0.0,
877
+ "rewards/code_format_reward": 0.3125,
878
+ "step": 62
879
+ },
880
+ {
881
+ "clip_ratio": 0.0,
882
+ "completion_length": 466.0625,
883
+ "epoch": 1.0672268907563025,
884
+ "grad_norm": 0.2536774277687073,
885
+ "kl": 0.0111541748046875,
886
+ "learning_rate": 3.862427783664306e-07,
887
+ "loss": -0.0146,
888
+ "num_tokens": 8209831.0,
889
+ "reward": 0.609375,
890
+ "reward_std": 0.04419417306780815,
891
+ "rewards/code_format_reward": 0.609375,
892
+ "step": 63
893
+ },
894
+ {
895
+ "clip_ratio": 0.0,
896
+ "completion_length": 585.3203125,
897
+ "epoch": 1.084033613445378,
898
+ "grad_norm": 0.20105506479740143,
899
+ "kl": 0.004669189453125,
900
+ "learning_rate": 3.7275590968782087e-07,
901
+ "loss": 0.0095,
902
+ "num_tokens": 8347160.0,
903
+ "reward": 0.2421875,
904
+ "reward_std": 0.022097086533904076,
905
+ "rewards/code_format_reward": 0.2421875,
906
+ "step": 64
907
+ },
908
+ {
909
+ "clip_ratio": 0.0,
910
+ "completion_length": 493.7578125,
911
+ "epoch": 1.1008403361344539,
912
+ "grad_norm": 0.010381446219980717,
913
+ "kl": 0.00858306884765625,
914
+ "learning_rate": 3.594549455841296e-07,
915
+ "loss": 0.0001,
916
+ "num_tokens": 8471065.0,
917
+ "reward": 0.5,
918
+ "reward_std": 0.0,
919
+ "rewards/code_format_reward": 0.5,
920
+ "step": 65
921
+ },
922
+ {
923
+ "clip_ratio": 0.0,
924
+ "completion_length": 487.8125,
925
+ "epoch": 1.1176470588235294,
926
+ "grad_norm": 0.21298491954803467,
927
+ "kl": 0.007595062255859375,
928
+ "learning_rate": 3.4635383692995755e-07,
929
+ "loss": 0.011,
930
+ "num_tokens": 8591161.0,
931
+ "reward": 0.4296875,
932
+ "reward_std": 0.022097086533904076,
933
+ "rewards/code_format_reward": 0.4296875,
934
+ "step": 66
935
+ },
936
+ {
937
+ "clip_ratio": 0.0,
938
+ "completion_length": 570.90625,
939
+ "epoch": 1.134453781512605,
940
+ "grad_norm": 0.005000706762075424,
941
+ "kl": 0.005382537841796875,
942
+ "learning_rate": 3.3346632497913773e-07,
943
+ "loss": 0.0001,
944
+ "num_tokens": 8723157.0,
945
+ "reward": 0.25,
946
+ "reward_std": 0.0,
947
+ "rewards/code_format_reward": 0.25,
948
+ "step": 67
949
+ },
950
+ {
951
+ "clip_ratio": 0.0,
952
+ "completion_length": 406.5625,
953
+ "epoch": 1.1512605042016806,
954
+ "grad_norm": 0.2167414426803589,
955
+ "kl": 0.011280059814453125,
956
+ "learning_rate": 3.208059269520568e-07,
957
+ "loss": 0.0024,
958
+ "num_tokens": 8834893.0,
959
+ "reward": 0.5546875,
960
+ "reward_std": 0.022097086533904076,
961
+ "rewards/code_format_reward": 0.5546875,
962
+ "step": 68
963
+ },
964
+ {
965
+ "clip_ratio": 0.0,
966
+ "completion_length": 519.734375,
967
+ "epoch": 1.1680672268907564,
968
+ "grad_norm": 0.22041496634483337,
969
+ "kl": 0.011386871337890625,
970
+ "learning_rate": 3.083859218579573e-07,
971
+ "loss": 0.0064,
972
+ "num_tokens": 8962931.0,
973
+ "reward": 0.4765625,
974
+ "reward_std": 0.03234682232141495,
975
+ "rewards/code_format_reward": 0.4765625,
976
+ "step": 69
977
+ },
978
+ {
979
+ "clip_ratio": 0.0,
980
+ "completion_length": 552.875,
981
+ "epoch": 1.184873949579832,
982
+ "grad_norm": 0.008582229726016521,
983
+ "kl": 0.0049285888671875,
984
+ "learning_rate": 2.9621933656709207e-07,
985
+ "loss": 0.0,
986
+ "num_tokens": 9094475.0,
987
+ "reward": 0.25,
988
+ "reward_std": 0.0,
989
+ "rewards/code_format_reward": 0.25,
990
+ "step": 70
991
+ },
992
+ {
993
+ "clip_ratio": 0.0,
994
+ "completion_length": 505.3203125,
995
+ "epoch": 1.2016806722689075,
996
+ "grad_norm": 0.22665004432201385,
997
+ "kl": 0.00914764404296875,
998
+ "learning_rate": 2.843189321473349e-07,
999
+ "loss": -0.0036,
1000
+ "num_tokens": 9220108.0,
1001
+ "reward": 0.4921875,
1002
+ "reward_std": 0.022097086533904076,
1003
+ "rewards/code_format_reward": 0.4921875,
1004
+ "step": 71
1005
+ },
1006
+ {
1007
+ "clip_ratio": 0.0,
1008
+ "completion_length": 603.0859375,
1009
+ "epoch": 1.2184873949579833,
1010
+ "grad_norm": 0.007497387006878853,
1011
+ "kl": 0.004810333251953125,
1012
+ "learning_rate": 2.7269719047958267e-07,
1013
+ "loss": 0.0,
1014
+ "num_tokens": 9360135.0,
1015
+ "reward": 0.25,
1016
+ "reward_std": 0.0,
1017
+ "rewards/code_format_reward": 0.25,
1018
+ "step": 72
1019
+ },
1020
+ {
1021
+ "clip_ratio": 0.0,
1022
+ "completion_length": 546.234375,
1023
+ "epoch": 1.2352941176470589,
1024
+ "grad_norm": 0.007375677116215229,
1025
+ "kl": 0.005222320556640625,
1026
+ "learning_rate": 2.613663011659871e-07,
1027
+ "loss": 0.0001,
1028
+ "num_tokens": 9490949.0,
1029
+ "reward": 0.3125,
1030
+ "reward_std": 0.0,
1031
+ "rewards/code_format_reward": 0.3125,
1032
+ "step": 73
1033
+ },
1034
+ {
1035
+ "clip_ratio": 0.0,
1036
+ "completion_length": 489.0859375,
1037
+ "epoch": 1.2521008403361344,
1038
+ "grad_norm": 0.24114979803562164,
1039
+ "kl": 0.0102996826171875,
1040
+ "learning_rate": 2.5033814874474356e-07,
1041
+ "loss": 0.0336,
1042
+ "num_tokens": 9614784.0,
1043
+ "reward": 0.5546875,
1044
+ "reward_std": 0.022097086533904076,
1045
+ "rewards/code_format_reward": 0.5546875,
1046
+ "step": 74
1047
+ },
1048
+ {
1049
+ "clip_ratio": 0.0,
1050
+ "completion_length": 477.3828125,
1051
+ "epoch": 1.26890756302521,
1052
+ "grad_norm": 0.013432592153549194,
1053
+ "kl": 0.01093292236328125,
1054
+ "learning_rate": 2.3962430022485305e-07,
1055
+ "loss": 0.0001,
1056
+ "num_tokens": 9735377.0,
1057
+ "reward": 0.625,
1058
+ "reward_std": 0.0,
1059
+ "rewards/code_format_reward": 0.625,
1060
+ "step": 75
1061
+ },
1062
+ {
1063
+ "clip_ratio": 0.0,
1064
+ "completion_length": 555.890625,
1065
+ "epoch": 1.2857142857142856,
1066
+ "grad_norm": 0.013460615649819374,
1067
+ "kl": 0.0073699951171875,
1068
+ "learning_rate": 2.2923599295392694e-07,
1069
+ "loss": 0.0001,
1070
+ "num_tokens": 9867139.0,
1071
+ "reward": 0.4375,
1072
+ "reward_std": 0.0,
1073
+ "rewards/code_format_reward": 0.4375,
1074
+ "step": 76
1075
+ },
1076
+ {
1077
+ "clip_ratio": 0.0,
1078
+ "completion_length": 409.7734375,
1079
+ "epoch": 1.3025210084033614,
1080
+ "grad_norm": 0.3048118054866791,
1081
+ "kl": 0.0102081298828125,
1082
+ "learning_rate": 2.1918412283175994e-07,
1083
+ "loss": -0.003,
1084
+ "num_tokens": 9980222.0,
1085
+ "reward": 0.671875,
1086
+ "reward_std": 0.04419417306780815,
1087
+ "rewards/code_format_reward": 0.671875,
1088
+ "step": 77
1089
+ },
1090
+ {
1091
+ "clip_ratio": 0.0,
1092
+ "completion_length": 542.71875,
1093
+ "epoch": 1.319327731092437,
1094
+ "grad_norm": 0.2550560235977173,
1095
+ "kl": 0.007965087890625,
1096
+ "learning_rate": 2.0947923288203713e-07,
1097
+ "loss": 0.0955,
1098
+ "num_tokens": 10107666.0,
1099
+ "reward": 0.484375,
1100
+ "reward_std": 0.04419417306780815,
1101
+ "rewards/code_format_reward": 0.484375,
1102
+ "step": 78
1103
+ },
1104
+ {
1105
+ "clip_ratio": 0.0,
1106
+ "completion_length": 517.8125,
1107
+ "epoch": 1.3361344537815127,
1108
+ "grad_norm": 0.2195437252521515,
1109
+ "kl": 0.00948333740234375,
1110
+ "learning_rate": 2.0013150219415793e-07,
1111
+ "loss": -0.0028,
1112
+ "num_tokens": 10232114.0,
1113
+ "reward": 0.46875,
1114
+ "reward_std": 0.033407654613256454,
1115
+ "rewards/code_format_reward": 0.46875,
1116
+ "step": 79
1117
+ },
1118
+ {
1119
+ "clip_ratio": 0.0,
1120
+ "completion_length": 521.390625,
1121
+ "epoch": 1.3529411764705883,
1122
+ "grad_norm": 0.008532202802598476,
1123
+ "kl": 0.006832122802734375,
1124
+ "learning_rate": 1.9115073524677572e-07,
1125
+ "loss": 0.0001,
1126
+ "num_tokens": 10357436.0,
1127
+ "reward": 0.4375,
1128
+ "reward_std": 0.0,
1129
+ "rewards/code_format_reward": 0.4375,
1130
+ "step": 80
1131
+ },
1132
+ {
1133
+ "clip_ratio": 0.0,
1134
+ "completion_length": 485.96875,
1135
+ "epoch": 1.3697478991596639,
1136
+ "grad_norm": 0.011241392232477665,
1137
+ "kl": 0.00920867919921875,
1138
+ "learning_rate": 1.8254635162425503e-07,
1139
+ "loss": 0.0001,
1140
+ "num_tokens": 10481120.0,
1141
+ "reward": 0.625,
1142
+ "reward_std": 0.0,
1143
+ "rewards/code_format_reward": 0.625,
1144
+ "step": 81
1145
+ },
1146
+ {
1147
+ "clip_ratio": 0.0,
1148
+ "completion_length": 485.390625,
1149
+ "epoch": 1.3865546218487395,
1150
+ "grad_norm": 0.2583635151386261,
1151
+ "kl": 0.01049041748046875,
1152
+ "learning_rate": 1.7432737613682807e-07,
1153
+ "loss": 0.007,
1154
+ "num_tokens": 10603258.0,
1155
+ "reward": 0.5546875,
1156
+ "reward_std": 0.022097086533904076,
1157
+ "rewards/code_format_reward": 0.5546875,
1158
+ "step": 82
1159
+ },
1160
+ {
1161
+ "clip_ratio": 0.0,
1162
+ "completion_length": 583.484375,
1163
+ "epoch": 1.403361344537815,
1164
+ "grad_norm": 0.007445089519023895,
1165
+ "kl": 0.00675201416015625,
1166
+ "learning_rate": 1.6650242935481388e-07,
1167
+ "loss": 0.0001,
1168
+ "num_tokens": 10736448.0,
1169
+ "reward": 0.375,
1170
+ "reward_std": 0.0,
1171
+ "rewards/code_format_reward": 0.375,
1172
+ "step": 83
1173
+ },
1174
+ {
1175
+ "clip_ratio": 0.0,
1176
+ "completion_length": 565.8515625,
1177
+ "epoch": 1.4201680672268908,
1178
+ "grad_norm": 0.012575228698551655,
1179
+ "kl": 0.006989479064941406,
1180
+ "learning_rate": 1.59079718566832e-07,
1181
+ "loss": 0.0001,
1182
+ "num_tokens": 10867749.0,
1183
+ "reward": 0.375,
1184
+ "reward_std": 0.0,
1185
+ "rewards/code_format_reward": 0.375,
1186
+ "step": 84
1187
+ },
1188
+ {
1189
+ "clip_ratio": 0.0,
1190
+ "completion_length": 466.5,
1191
+ "epoch": 1.4369747899159664,
1192
+ "grad_norm": 0.22415108978748322,
1193
+ "kl": 0.0146026611328125,
1194
+ "learning_rate": 1.5206702917148945e-07,
1195
+ "loss": 0.0055,
1196
+ "num_tokens": 10988109.0,
1197
+ "reward": 0.6171875,
1198
+ "reward_std": 0.022097086533904076,
1199
+ "rewards/code_format_reward": 0.6171875,
1200
+ "step": 85
1201
+ },
1202
+ {
1203
+ "clip_ratio": 0.0,
1204
+ "completion_length": 425.671875,
1205
+ "epoch": 1.453781512605042,
1206
+ "grad_norm": 0.012631588615477085,
1207
+ "kl": 0.00989532470703125,
1208
+ "learning_rate": 1.4547171651157214e-07,
1209
+ "loss": 0.0001,
1210
+ "num_tokens": 11100171.0,
1211
+ "reward": 0.5625,
1212
+ "reward_std": 0.0,
1213
+ "rewards/code_format_reward": 0.5625,
1214
+ "step": 86
1215
+ },
1216
+ {
1217
+ "clip_ratio": 0.0,
1218
+ "completion_length": 484.8125,
1219
+ "epoch": 1.4705882352941178,
1220
+ "grad_norm": 0.19023600220680237,
1221
+ "kl": 0.01287078857421875,
1222
+ "learning_rate": 1.3930069815930697e-07,
1223
+ "loss": -0.0073,
1224
+ "num_tokens": 11218611.0,
1225
+ "reward": 0.5546875,
1226
+ "reward_std": 0.022097086533904076,
1227
+ "rewards/code_format_reward": 0.5546875,
1228
+ "step": 87
1229
+ },
1230
+ {
1231
+ "clip_ratio": 0.0,
1232
+ "completion_length": 542.421875,
1233
+ "epoch": 1.4873949579831933,
1234
+ "grad_norm": 0.18787921965122223,
1235
+ "kl": 0.005619049072265625,
1236
+ "learning_rate": 1.3356044666078315e-07,
1237
+ "loss": -0.0035,
1238
+ "num_tokens": 11349593.0,
1239
+ "reward": 0.2421875,
1240
+ "reward_std": 0.022097086533904076,
1241
+ "rewards/code_format_reward": 0.2421875,
1242
+ "step": 88
1243
+ },
1244
+ {
1245
+ "clip_ratio": 0.0,
1246
+ "completion_length": 414.28125,
1247
+ "epoch": 1.504201680672269,
1248
+ "grad_norm": 0.2461971491575241,
1249
+ "kl": 0.01172637939453125,
1250
+ "learning_rate": 1.2825698274714542e-07,
1251
+ "loss": 0.0123,
1252
+ "num_tokens": 11459157.0,
1253
+ "reward": 0.7421875,
1254
+ "reward_std": 0.022097086533904076,
1255
+ "rewards/code_format_reward": 0.7421875,
1256
+ "step": 89
1257
+ },
1258
+ {
1259
+ "clip_ratio": 0.0,
1260
+ "completion_length": 535.8671875,
1261
+ "epoch": 1.5210084033613445,
1262
+ "grad_norm": 0.009749805554747581,
1263
+ "kl": 0.0081024169921875,
1264
+ "learning_rate": 1.233958690196783e-07,
1265
+ "loss": 0.0001,
1266
+ "num_tokens": 11587884.0,
1267
+ "reward": 0.4375,
1268
+ "reward_std": 0.0,
1269
+ "rewards/code_format_reward": 0.4375,
1270
+ "step": 90
1271
+ },
1272
+ {
1273
+ "clip_ratio": 0.0,
1274
+ "completion_length": 599.015625,
1275
+ "epoch": 1.53781512605042,
1276
+ "grad_norm": 0.0069138724356889725,
1277
+ "kl": 0.00567626953125,
1278
+ "learning_rate": 1.1898220411540583e-07,
1279
+ "loss": 0.0001,
1280
+ "num_tokens": 11723398.0,
1281
+ "reward": 0.375,
1282
+ "reward_std": 0.0,
1283
+ "rewards/code_format_reward": 0.375,
1284
+ "step": 91
1285
+ },
1286
+ {
1287
+ "clip_ratio": 0.0,
1288
+ "completion_length": 516.984375,
1289
+ "epoch": 1.5546218487394958,
1290
+ "grad_norm": 0.26793172955513,
1291
+ "kl": 0.00763702392578125,
1292
+ "learning_rate": 1.1502061735932499e-07,
1293
+ "loss": -0.0202,
1294
+ "num_tokens": 11850164.0,
1295
+ "reward": 0.359375,
1296
+ "reward_std": 0.04419417306780815,
1297
+ "rewards/code_format_reward": 0.359375,
1298
+ "step": 92
1299
+ },
1300
+ {
1301
+ "clip_ratio": 0.0,
1302
+ "completion_length": 506.890625,
1303
+ "epoch": 1.5714285714285714,
1304
+ "grad_norm": 0.010911713354289532,
1305
+ "kl": 0.00727081298828125,
1306
+ "learning_rate": 1.115152639088833e-07,
1307
+ "loss": 0.0001,
1308
+ "num_tokens": 11975358.0,
1309
+ "reward": 0.5,
1310
+ "reward_std": 0.0,
1311
+ "rewards/code_format_reward": 0.5,
1312
+ "step": 93
1313
+ },
1314
+ {
1315
+ "clip_ratio": 0.0,
1316
+ "completion_length": 569.6796875,
1317
+ "epoch": 1.5882352941176472,
1318
+ "grad_norm": 0.00520605593919754,
1319
+ "kl": 0.0041522979736328125,
1320
+ "learning_rate": 1.0846982039579242e-07,
1321
+ "loss": 0.0,
1322
+ "num_tokens": 12111517.0,
1323
+ "reward": 0.3125,
1324
+ "reward_std": 0.0,
1325
+ "rewards/code_format_reward": 0.3125,
1326
+ "step": 94
1327
+ },
1328
+ {
1329
+ "clip_ratio": 0.0,
1330
+ "completion_length": 391.6796875,
1331
+ "epoch": 1.6050420168067228,
1332
+ "grad_norm": 0.014589727856218815,
1333
+ "kl": 0.0121002197265625,
1334
+ "learning_rate": 1.0588748106974918e-07,
1335
+ "loss": 0.0001,
1336
+ "num_tokens": 12219404.0,
1337
+ "reward": 0.625,
1338
+ "reward_std": 0.0,
1339
+ "rewards/code_format_reward": 0.625,
1340
+ "step": 95
1341
+ },
1342
+ {
1343
+ "clip_ratio": 0.0,
1344
+ "completion_length": 583.4453125,
1345
+ "epoch": 1.6218487394957983,
1346
+ "grad_norm": 0.010287419892847538,
1347
+ "kl": 0.0075836181640625,
1348
+ "learning_rate": 1.0377095444810871e-07,
1349
+ "loss": 0.0001,
1350
+ "num_tokens": 12354245.0,
1351
+ "reward": 0.4375,
1352
+ "reward_std": 0.0,
1353
+ "rewards/code_format_reward": 0.4375,
1354
+ "step": 96
1355
+ },
1356
+ {
1357
+ "clip_ratio": 0.0,
1358
+ "completion_length": 551.484375,
1359
+ "epoch": 1.638655462184874,
1360
+ "grad_norm": 0.21819841861724854,
1361
+ "kl": 0.00997161865234375,
1362
+ "learning_rate": 1.0212246047502372e-07,
1363
+ "loss": -0.0046,
1364
+ "num_tokens": 12482835.0,
1365
+ "reward": 0.4296875,
1366
+ "reward_std": 0.022097086533904076,
1367
+ "rewards/code_format_reward": 0.4296875,
1368
+ "step": 97
1369
+ },
1370
+ {
1371
+ "clip_ratio": 0.0,
1372
+ "completion_length": 486.953125,
1373
+ "epoch": 1.6554621848739495,
1374
+ "grad_norm": 0.011982027441263199,
1375
+ "kl": 0.0090484619140625,
1376
+ "learning_rate": 1.0094372819302977e-07,
1377
+ "loss": 0.0001,
1378
+ "num_tokens": 12605909.0,
1379
+ "reward": 0.5625,
1380
+ "reward_std": 0.0,
1381
+ "rewards/code_format_reward": 0.5625,
1382
+ "step": 98
1383
+ },
1384
+ {
1385
+ "clip_ratio": 0.0,
1386
+ "completion_length": 516.625,
1387
+ "epoch": 1.6722689075630253,
1388
+ "grad_norm": 0.008852253668010235,
1389
+ "kl": 0.00608062744140625,
1390
+ "learning_rate": 1.0023599392951829e-07,
1391
+ "loss": 0.0001,
1392
+ "num_tokens": 12732141.0,
1393
+ "reward": 0.3125,
1394
+ "reward_std": 0.0,
1395
+ "rewards/code_format_reward": 0.3125,
1396
+ "step": 99
1397
+ },
1398
+ {
1399
+ "clip_ratio": 0.0,
1400
+ "completion_length": 505.9921875,
1401
+ "epoch": 1.6890756302521008,
1402
+ "grad_norm": 0.016563883051276207,
1403
+ "kl": 0.0088958740234375,
1404
+ "learning_rate": 1e-07,
1405
+ "loss": 0.0001,
1406
+ "num_tokens": 12854748.0,
1407
+ "reward": 0.5,
1408
+ "reward_std": 0.0,
1409
+ "rewards/code_format_reward": 0.5,
1410
+ "step": 100
1411
+ },
1412
+ {
1413
+ "epoch": 1.6890756302521008,
1414
+ "step": 100,
1415
+ "total_flos": 0.0,
1416
+ "train_loss": 0.003220523014695118,
1417
+ "train_runtime": 15510.0274,
1418
+ "train_samples_per_second": 0.825,
1419
+ "train_steps_per_second": 0.006
1420
+ }
1421
+ ],
1422
+ "logging_steps": 1,
1423
+ "max_steps": 100,
1424
+ "num_input_tokens_seen": 0,
1425
+ "num_train_epochs": 2,
1426
+ "save_steps": 50,
1427
+ "stateful_callbacks": {
1428
+ "TrainerControl": {
1429
+ "args": {
1430
+ "should_epoch_stop": false,
1431
+ "should_evaluate": false,
1432
+ "should_log": false,
1433
+ "should_save": true,
1434
+ "should_training_stop": true
1435
+ },
1436
+ "attributes": {}
1437
+ }
1438
+ },
1439
+ "total_flos": 0.0,
1440
+ "train_batch_size": 8,
1441
+ "trial_name": null,
1442
+ "trial_params": null
1443
+ }