Lansechen committed
Commit dfe2499 · verified · 1 Parent(s): 71a07c5

Model save

README.md ADDED
@@ -0,0 +1,68 @@
+ ---
+ base_model: Qwen/Qwen2.5-3B
+ library_name: transformers
+ model_name: Qwen2.5-3B-Open-R1-GRPO-math-selected-default
+ tags:
+ - generated_from_trainer
+ - trl
+ - grpo
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-3B-Open-R1-GRPO-math-selected-default
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="Lansechen/Qwen2.5-3B-Open-R1-GRPO-math-selected-default", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
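Note that the generation_config.json added in this commit sets `max_new_tokens` to 2048, and the training logs below show average completion lengths of several hundred tokens, so the 128-token budget in the snippet above may truncate reasoning traces. A minimal variant with a larger budget (the example question is illustrative only, not from the training data):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Lansechen/Qwen2.5-3B-Open-R1-GRPO-math-selected-default",
    device="cuda",
)
question = "What is the sum of the first 100 positive integers?"
# Use a completion budget closer to the repo default (max_new_tokens=2048)
# so the full reasoning trace and final answer fit in the output.
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=2048,
    return_full_text=False,
)[0]
print(output["generated_text"])
```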
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenran1995-the-chinese-university-of-hong-kong/huggingface/runs/wnriict3)
+
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
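As a rough illustration of how such a run is wired up with TRL (a minimal sketch under assumptions, not the exact recipe behind this checkpoint), GRPO is exposed through `GRPOConfig`/`GRPOTrainer`, and the reward columns logged in trainer_state.json (`rewards/accuracy_reward`, `rewards/format_reward`) suggest two reward functions were combined. The dataset name and reward bodies below are placeholders:

```python
# Hypothetical sketch of a GRPO run with TRL 0.16; dataset and reward logic
# are placeholders, not the actual training script used for this commit.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def format_reward(completions, **kwargs):
    # Placeholder: reward completions that contain an <answer> tag.
    texts = [c if isinstance(c, str) else c[0]["content"] for c in completions]
    return [1.0 if "<answer>" in t else 0.0 for t in texts]

def accuracy_reward(completions, **kwargs):
    # Placeholder: a real implementation would verify the extracted final
    # answer against the gold solution column of the math dataset.
    return [0.0 for _ in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

training_args = GRPOConfig(output_dir="Qwen2.5-3B-Open-R1-GRPO-math-selected-default")
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B",
    reward_funcs=[accuracy_reward, format_reward],
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```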
+ ### Framework versions
+
+ - TRL: 0.16.0
+ - Transformers: 4.50.0
+ - Pytorch: 2.5.1+cu121
+ - Datasets: 3.5.0
+ - Tokenizers: 0.21.1
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+     title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+     author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+     year         = 2024,
+     eprint       = {arXiv:2402.03300},
+ }
+
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title        = {{TRL: Transformer Reinforcement Learning}},
+     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year         = 2020,
+     journal      = {GitHub repository},
+     publisher    = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "total_flos": 0.0,
+     "train_loss": 0.02676454978019156,
+     "train_runtime": 31339.2496,
+     "train_samples": 11040,
+     "train_samples_per_second": 0.705,
+     "train_steps_per_second": 0.006
+ }
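For context (my arithmetic, not part of the saved files): these throughput figures are mutually consistent, since roughly two epochs over 11,040 samples in 31,339 s gives about 0.705 samples/s, and the 196 optimizer steps recorded in trainer_state.json over the same runtime give about 0.006 steps/s. A quick check:

```python
# Sanity check of the reported throughput (assumes ~2 training epochs,
# matching epoch ≈ 1.99 and global_step = 196 in trainer_state.json).
train_runtime = 31339.2496      # seconds
train_samples = 11040
epochs = 2
global_step = 196

print(round(train_samples * epochs / train_runtime, 3))  # ≈ 0.705 samples/s
print(round(global_step / train_runtime, 3))             # ≈ 0.006 steps/s
```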
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+     "bos_token_id": 151643,
+     "eos_token_id": 151643,
+     "max_new_tokens": 2048,
+     "transformers_version": "4.50.0"
+ }
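These generation defaults are picked up automatically when the checkpoint is loaded with transformers; a minimal sketch using the model id published in this repo:

```python
# Inspect the generation defaults shipped with this commit; generate() will
# use max_new_tokens=2048 unless overridden at call time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lansechen/Qwen2.5-3B-Open-R1-GRPO-math-selected-default"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

print(model.generation_config)  # bos/eos = 151643, max_new_tokens = 2048

inputs = tokenizer("1 + 1 =", return_tensors="pt")
outputs = model.generate(**inputs)  # uses the defaults above
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```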
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "total_flos": 0.0,
+     "train_loss": 0.02676454978019156,
+     "train_runtime": 31339.2496,
+     "train_samples": 11040,
+     "train_samples_per_second": 0.705,
+     "train_steps_per_second": 0.006
+ }
trainer_state.json ADDED
@@ -0,0 +1,2787 @@
1
+ {
2
+ "best_global_step": null,
3
+ "best_metric": null,
4
+ "best_model_checkpoint": null,
5
+ "epoch": 1.9936628643852978,
6
+ "eval_steps": 100,
7
+ "global_step": 196,
8
+ "is_hyper_param_search": false,
9
+ "is_local_process_zero": true,
10
+ "is_world_process_zero": true,
11
+ "log_history": [
12
+ {
13
+ "clip_ratio": 0.0,
14
+ "completion_length": 544.7745819091797,
15
+ "epoch": 0.010139416983523447,
16
+ "grad_norm": 0.1392974704504013,
17
+ "learning_rate": 5e-08,
18
+ "loss": -0.0292,
19
+ "num_tokens": 636118.0,
20
+ "reward": 0.12946429196745157,
21
+ "reward_std": 0.22425302118062973,
22
+ "rewards/accuracy_reward": 0.1261160708963871,
23
+ "rewards/format_reward": 0.003348214435391128,
24
+ "step": 1
25
+ },
26
+ {
27
+ "clip_ratio": 0.0,
28
+ "completion_length": 549.2812843322754,
29
+ "epoch": 0.020278833967046894,
30
+ "grad_norm": 0.1566777527332306,
31
+ "learning_rate": 1e-07,
32
+ "loss": -0.0074,
33
+ "num_tokens": 1263330.0,
34
+ "reward": 0.152901791036129,
35
+ "reward_std": 0.25637512281537056,
36
+ "rewards/accuracy_reward": 0.14508928637951612,
37
+ "rewards/format_reward": 0.007812500349245965,
38
+ "step": 2
39
+ },
40
+ {
41
+ "clip_ratio": 0.0,
42
+ "completion_length": 585.7969131469727,
43
+ "epoch": 0.030418250950570342,
44
+ "grad_norm": 0.15250442922115326,
45
+ "learning_rate": 1.5e-07,
46
+ "loss": -0.0299,
47
+ "num_tokens": 1928324.0,
48
+ "reward": 0.17745536379516125,
49
+ "reward_std": 0.26186119951307774,
50
+ "rewards/accuracy_reward": 0.16964285913854837,
51
+ "rewards/format_reward": 0.007812500349245965,
52
+ "step": 3
53
+ },
54
+ {
55
+ "clip_ratio": 0.0,
56
+ "completion_length": 550.0346145629883,
57
+ "epoch": 0.04055766793409379,
58
+ "grad_norm": 0.15349256992340088,
59
+ "learning_rate": 2e-07,
60
+ "loss": -0.0336,
61
+ "num_tokens": 2558107.0,
62
+ "reward": 0.17075893748551607,
63
+ "reward_std": 0.2673841342329979,
64
+ "rewards/accuracy_reward": 0.16071428824216127,
65
+ "rewards/format_reward": 0.010044643306173384,
66
+ "step": 4
67
+ },
68
+ {
69
+ "clip_ratio": 0.0,
70
+ "completion_length": 578.6484642028809,
71
+ "epoch": 0.050697084917617236,
72
+ "grad_norm": 0.3040037155151367,
73
+ "learning_rate": 2.5e-07,
74
+ "loss": -0.0205,
75
+ "num_tokens": 3203360.0,
76
+ "reward": 0.15848214831203222,
77
+ "reward_std": 0.2771828528493643,
78
+ "rewards/accuracy_reward": 0.1517857126891613,
79
+ "rewards/format_reward": 0.006696428754366934,
80
+ "step": 5
81
+ },
82
+ {
83
+ "clip_ratio": 0.0,
84
+ "completion_length": 583.7399826049805,
85
+ "epoch": 0.060836501901140684,
86
+ "grad_norm": 0.13601042330265045,
87
+ "learning_rate": 3e-07,
88
+ "loss": -0.031,
89
+ "num_tokens": 3855583.0,
90
+ "reward": 0.15625000465661287,
91
+ "reward_std": 0.25207988917827606,
92
+ "rewards/accuracy_reward": 0.15066964365541935,
93
+ "rewards/format_reward": 0.005580357392318547,
94
+ "step": 6
95
+ },
96
+ {
97
+ "clip_ratio": 0.0,
98
+ "completion_length": 537.3594093322754,
99
+ "epoch": 0.07097591888466413,
100
+ "grad_norm": 0.16291461884975433,
101
+ "learning_rate": 3.5e-07,
102
+ "loss": -0.0137,
103
+ "num_tokens": 4461481.0,
104
+ "reward": 0.17522322200238705,
105
+ "reward_std": 0.2866264618933201,
106
+ "rewards/accuracy_reward": 0.16183035727590322,
107
+ "rewards/format_reward": 0.013392857508733869,
108
+ "step": 7
109
+ },
110
+ {
111
+ "clip_ratio": 0.0,
112
+ "completion_length": 537.2634048461914,
113
+ "epoch": 0.08111533586818757,
114
+ "grad_norm": 0.19215919077396393,
115
+ "learning_rate": 4e-07,
116
+ "loss": -0.0325,
117
+ "num_tokens": 5070421.0,
118
+ "reward": 0.1785714365541935,
119
+ "reward_std": 0.2945236265659332,
120
+ "rewards/accuracy_reward": 0.1696428582072258,
121
+ "rewards/format_reward": 0.008928571827709675,
122
+ "step": 8
123
+ },
124
+ {
125
+ "clip_ratio": 0.0,
126
+ "completion_length": 575.7366256713867,
127
+ "epoch": 0.09125475285171103,
128
+ "grad_norm": 0.14393264055252075,
129
+ "learning_rate": 4.5e-07,
130
+ "loss": -0.014,
131
+ "num_tokens": 5709609.0,
132
+ "reward": 0.14843750465661287,
133
+ "reward_std": 0.22998391091823578,
134
+ "rewards/accuracy_reward": 0.14285714272409678,
135
+ "rewards/format_reward": 0.005580357392318547,
136
+ "step": 9
137
+ },
138
+ {
139
+ "clip_ratio": 0.0,
140
+ "completion_length": 519.7522468566895,
141
+ "epoch": 0.10139416983523447,
142
+ "grad_norm": 0.16316372156143188,
143
+ "learning_rate": 5e-07,
144
+ "loss": -0.0266,
145
+ "num_tokens": 6302035.0,
146
+ "reward": 0.1930803656578064,
147
+ "reward_std": 0.2917625065892935,
148
+ "rewards/accuracy_reward": 0.1863839291036129,
149
+ "rewards/format_reward": 0.006696428870782256,
150
+ "step": 10
151
+ },
152
+ {
153
+ "clip_ratio": 0.0,
154
+ "completion_length": 595.2377395629883,
155
+ "epoch": 0.11153358681875793,
156
+ "grad_norm": 0.14569994807243347,
157
+ "learning_rate": 5.5e-07,
158
+ "loss": -0.0097,
159
+ "num_tokens": 6972904.0,
160
+ "reward": 0.17299108020961285,
161
+ "reward_std": 0.27831926569342613,
162
+ "rewards/accuracy_reward": 0.16183035913854837,
163
+ "rewards/format_reward": 0.01116071455180645,
164
+ "step": 11
165
+ },
166
+ {
167
+ "clip_ratio": 0.0,
168
+ "completion_length": 575.3605155944824,
169
+ "epoch": 0.12167300380228137,
170
+ "grad_norm": 0.15014010667800903,
171
+ "learning_rate": 6e-07,
172
+ "loss": -0.0216,
173
+ "num_tokens": 7619739.0,
174
+ "reward": 0.17187501303851604,
175
+ "reward_std": 0.2696072347462177,
176
+ "rewards/accuracy_reward": 0.16183035634458065,
177
+ "rewards/format_reward": 0.010044643306173384,
178
+ "step": 12
179
+ },
180
+ {
181
+ "clip_ratio": 0.0,
182
+ "completion_length": 579.7835006713867,
183
+ "epoch": 0.13181242078580482,
184
+ "grad_norm": 0.14231476187705994,
185
+ "learning_rate": 6.5e-07,
186
+ "loss": 0.0066,
187
+ "num_tokens": 8267409.0,
188
+ "reward": 0.18191965110599995,
189
+ "reward_std": 0.27643212117254734,
190
+ "rewards/accuracy_reward": 0.17299107369035482,
191
+ "rewards/format_reward": 0.008928571827709675,
192
+ "step": 13
193
+ },
194
+ {
195
+ "clip_ratio": 0.0,
196
+ "completion_length": 560.2623062133789,
197
+ "epoch": 0.14195183776932827,
198
+ "grad_norm": 0.1491479128599167,
199
+ "learning_rate": 7e-07,
200
+ "loss": -0.007,
201
+ "num_tokens": 8901580.0,
202
+ "reward": 0.17745536752045155,
203
+ "reward_std": 0.28200731612741947,
204
+ "rewards/accuracy_reward": 0.1704155197367072,
205
+ "rewards/format_reward": 0.007812500349245965,
206
+ "step": 14
207
+ },
208
+ {
209
+ "clip_ratio": 0.0,
210
+ "completion_length": 563.2969017028809,
211
+ "epoch": 0.1520912547528517,
212
+ "grad_norm": 0.16368582844734192,
213
+ "learning_rate": 7.5e-07,
214
+ "loss": -0.0445,
215
+ "num_tokens": 9534734.0,
216
+ "reward": 0.22544643841683865,
217
+ "reward_std": 0.3410548157989979,
218
+ "rewards/accuracy_reward": 0.20870535541325808,
219
+ "rewards/format_reward": 0.016741071944124997,
220
+ "step": 15
221
+ },
222
+ {
223
+ "clip_ratio": 0.0,
224
+ "completion_length": 575.6607360839844,
225
+ "epoch": 0.16223067173637515,
226
+ "grad_norm": 0.155720517039299,
227
+ "learning_rate": 8e-07,
228
+ "loss": -0.0125,
229
+ "num_tokens": 10177454.0,
230
+ "reward": 0.2332589402794838,
231
+ "reward_std": 0.33312445878982544,
232
+ "rewards/accuracy_reward": 0.21986606996506453,
233
+ "rewards/format_reward": 0.01339285762514919,
234
+ "step": 16
235
+ },
236
+ {
237
+ "clip_ratio": 0.0,
238
+ "completion_length": 605.8471221923828,
239
+ "epoch": 0.17237008871989862,
240
+ "grad_norm": 0.18089956045150757,
241
+ "learning_rate": 8.499999999999999e-07,
242
+ "loss": -0.0189,
243
+ "num_tokens": 10854805.0,
244
+ "reward": 0.22433037124574184,
245
+ "reward_std": 0.3147235903888941,
246
+ "rewards/accuracy_reward": 0.20647321455180645,
247
+ "rewards/format_reward": 0.017857143306173384,
248
+ "step": 17
249
+ },
250
+ {
251
+ "clip_ratio": 0.0,
252
+ "completion_length": 620.2031478881836,
253
+ "epoch": 0.18250950570342206,
254
+ "grad_norm": 0.14688049256801605,
255
+ "learning_rate": 9e-07,
256
+ "loss": -0.0215,
257
+ "num_tokens": 11533851.0,
258
+ "reward": 0.26227680034935474,
259
+ "reward_std": 0.32751886174082756,
260
+ "rewards/accuracy_reward": 0.2522321417927742,
261
+ "rewards/format_reward": 0.010044643189758062,
262
+ "step": 18
263
+ },
264
+ {
265
+ "clip_ratio": 0.0,
266
+ "completion_length": 567.4386444091797,
267
+ "epoch": 0.1926489226869455,
268
+ "grad_norm": 0.18814197182655334,
269
+ "learning_rate": 9.499999999999999e-07,
270
+ "loss": -0.0224,
271
+ "num_tokens": 12174260.0,
272
+ "reward": 0.33370537497103214,
273
+ "reward_std": 0.3732391260564327,
274
+ "rewards/accuracy_reward": 0.3169642798602581,
275
+ "rewards/format_reward": 0.016741071944124997,
276
+ "step": 19
277
+ },
278
+ {
279
+ "clip_ratio": 0.0,
280
+ "completion_length": 620.7399749755859,
281
+ "epoch": 0.20278833967046894,
282
+ "grad_norm": 0.1669970452785492,
283
+ "learning_rate": 1e-06,
284
+ "loss": -0.0292,
285
+ "num_tokens": 12850867.0,
286
+ "reward": 0.3582589402794838,
287
+ "reward_std": 0.3860209658741951,
288
+ "rewards/accuracy_reward": 0.3359375037252903,
289
+ "rewards/format_reward": 0.02232142922002822,
290
+ "step": 20
291
+ },
292
+ {
293
+ "clip_ratio": 0.0,
294
+ "completion_length": 607.200927734375,
295
+ "epoch": 0.21292775665399238,
296
+ "grad_norm": 1.3311952352523804,
297
+ "learning_rate": 9.999203468625015e-07,
298
+ "loss": -0.0164,
299
+ "num_tokens": 13516191.0,
300
+ "reward": 0.3962053768336773,
301
+ "reward_std": 0.3798532336950302,
302
+ "rewards/accuracy_reward": 0.3694196417927742,
303
+ "rewards/format_reward": 0.026785714901052415,
304
+ "step": 21
305
+ },
306
+ {
307
+ "clip_ratio": 0.0,
308
+ "completion_length": 682.4051742553711,
309
+ "epoch": 0.22306717363751585,
310
+ "grad_norm": 0.14331556856632233,
311
+ "learning_rate": 9.99681412828496e-07,
312
+ "loss": 0.0035,
313
+ "num_tokens": 14260786.0,
314
+ "reward": 0.4140625186264515,
315
+ "reward_std": 0.3847779668867588,
316
+ "rewards/accuracy_reward": 0.3750000037252903,
317
+ "rewards/format_reward": 0.03906250128056854,
318
+ "step": 22
319
+ },
320
+ {
321
+ "clip_ratio": 0.0,
322
+ "completion_length": 633.8303909301758,
323
+ "epoch": 0.2332065906210393,
324
+ "grad_norm": 0.1424940973520279,
325
+ "learning_rate": 9.992832740253644e-07,
326
+ "loss": 0.01,
327
+ "num_tokens": 14966946.0,
328
+ "reward": 0.427455373108387,
329
+ "reward_std": 0.3340139761567116,
330
+ "rewards/accuracy_reward": 0.4073660746216774,
331
+ "rewards/format_reward": 0.02008928614668548,
332
+ "step": 23
333
+ },
334
+ {
335
+ "clip_ratio": 0.0,
336
+ "completion_length": 693.4665451049805,
337
+ "epoch": 0.24334600760456274,
338
+ "grad_norm": 0.1358347088098526,
339
+ "learning_rate": 9.987260573051267e-07,
340
+ "loss": 0.021,
341
+ "num_tokens": 15717836.0,
342
+ "reward": 0.4095982313156128,
343
+ "reward_std": 0.3687121346592903,
344
+ "rewards/accuracy_reward": 0.3805803619325161,
345
+ "rewards/format_reward": 0.029017857974395156,
346
+ "step": 24
347
+ },
348
+ {
349
+ "clip_ratio": 0.0,
350
+ "completion_length": 679.7701263427734,
351
+ "epoch": 0.2534854245880862,
352
+ "grad_norm": 0.12858213484287262,
353
+ "learning_rate": 9.98009940204023e-07,
354
+ "loss": 0.0077,
355
+ "num_tokens": 16458238.0,
356
+ "reward": 0.4821428805589676,
357
+ "reward_std": 0.3479448612779379,
358
+ "rewards/accuracy_reward": 0.4531250074505806,
359
+ "rewards/format_reward": 0.029017857857979834,
360
+ "step": 25
361
+ },
362
+ {
363
+ "clip_ratio": 0.0,
364
+ "completion_length": 643.7042846679688,
365
+ "epoch": 0.26362484157160965,
366
+ "grad_norm": 0.13453267514705658,
367
+ "learning_rate": 9.971351508859486e-07,
368
+ "loss": 0.0002,
369
+ "num_tokens": 17153829.0,
370
+ "reward": 0.4787946566939354,
371
+ "reward_std": 0.33469830453395844,
372
+ "rewards/accuracy_reward": 0.457589291036129,
373
+ "rewards/format_reward": 0.02120535762514919,
374
+ "step": 26
375
+ },
376
+ {
377
+ "clip_ratio": 0.0,
378
+ "completion_length": 647.9174499511719,
379
+ "epoch": 0.2737642585551331,
380
+ "grad_norm": 0.13937269151210785,
381
+ "learning_rate": 9.961019680697591e-07,
382
+ "loss": 0.031,
383
+ "num_tokens": 17862539.0,
384
+ "reward": 0.5033482313156128,
385
+ "reward_std": 0.34470784291625023,
386
+ "rewards/accuracy_reward": 0.4676339291036129,
387
+ "rewards/format_reward": 0.0357142862631008,
388
+ "step": 27
389
+ },
390
+ {
391
+ "clip_ratio": 0.0,
392
+ "completion_length": 729.0391006469727,
393
+ "epoch": 0.28390367553865653,
394
+ "grad_norm": 0.12249790877103806,
395
+ "learning_rate": 9.949107209404663e-07,
396
+ "loss": 0.039,
397
+ "num_tokens": 18649398.0,
398
+ "reward": 0.5368303805589676,
399
+ "reward_std": 0.3495071418583393,
400
+ "rewards/accuracy_reward": 0.5122767835855484,
401
+ "rewards/format_reward": 0.02455357217695564,
402
+ "step": 28
403
+ },
404
+ {
405
+ "clip_ratio": 0.0,
406
+ "completion_length": 696.9040451049805,
407
+ "epoch": 0.29404309252217997,
408
+ "grad_norm": 0.12198278307914734,
409
+ "learning_rate": 9.935617890443554e-07,
410
+ "loss": 0.0252,
411
+ "num_tokens": 19414560.0,
412
+ "reward": 0.541294664144516,
413
+ "reward_std": 0.30416675843298435,
414
+ "rewards/accuracy_reward": 0.5145089328289032,
415
+ "rewards/format_reward": 0.026785714901052415,
416
+ "step": 29
417
+ },
418
+ {
419
+ "clip_ratio": 0.0,
420
+ "completion_length": 760.1384353637695,
421
+ "epoch": 0.3041825095057034,
422
+ "grad_norm": 0.12740154564380646,
423
+ "learning_rate": 9.92055602168058e-07,
424
+ "loss": 0.0327,
425
+ "num_tokens": 20241132.0,
426
+ "reward": 0.5569196678698063,
427
+ "reward_std": 0.30499308556318283,
428
+ "rewards/accuracy_reward": 0.5145089253783226,
429
+ "rewards/format_reward": 0.04241071501746774,
430
+ "step": 30
431
+ },
432
+ {
433
+ "clip_ratio": 0.0,
434
+ "completion_length": 727.1250381469727,
435
+ "epoch": 0.31432192648922685,
436
+ "grad_norm": 0.13364288210868835,
437
+ "learning_rate": 9.90392640201615e-07,
438
+ "loss": 0.0145,
439
+ "num_tokens": 21020244.0,
440
+ "reward": 0.6316964700818062,
441
+ "reward_std": 0.35084592178463936,
442
+ "rewards/accuracy_reward": 0.5758928619325161,
443
+ "rewards/format_reward": 0.05580357206054032,
444
+ "step": 31
445
+ },
446
+ {
447
+ "clip_ratio": 0.0,
448
+ "completion_length": 743.0826187133789,
449
+ "epoch": 0.3244613434727503,
450
+ "grad_norm": 0.1573316901922226,
451
+ "learning_rate": 9.885734329855797e-07,
452
+ "loss": 0.047,
453
+ "num_tokens": 21805790.0,
454
+ "reward": 0.6562500223517418,
455
+ "reward_std": 0.3445834815502167,
456
+ "rewards/accuracy_reward": 0.5792410708963871,
457
+ "rewards/format_reward": 0.0770089291036129,
458
+ "step": 32
459
+ },
460
+ {
461
+ "clip_ratio": 0.0,
462
+ "completion_length": 750.4531631469727,
463
+ "epoch": 0.33460076045627374,
464
+ "grad_norm": 0.15323247015476227,
465
+ "learning_rate": 9.865985601422017e-07,
466
+ "loss": 0.0446,
467
+ "num_tokens": 22612252.0,
468
+ "reward": 0.6238839514553547,
469
+ "reward_std": 0.41414720192551613,
470
+ "rewards/accuracy_reward": 0.49690933898091316,
471
+ "rewards/format_reward": 0.13058035727590322,
472
+ "step": 33
473
+ },
474
+ {
475
+ "clip_ratio": 0.0,
476
+ "completion_length": 765.1116485595703,
477
+ "epoch": 0.34474017743979724,
478
+ "grad_norm": 0.22268784046173096,
479
+ "learning_rate": 9.844686508907537e-07,
480
+ "loss": 0.0329,
481
+ "num_tokens": 23425280.0,
482
+ "reward": 0.7979911044239998,
483
+ "reward_std": 0.4762897975742817,
484
+ "rewards/accuracy_reward": 0.5691964253783226,
485
+ "rewards/format_reward": 0.22879464365541935,
486
+ "step": 34
487
+ },
488
+ {
489
+ "clip_ratio": 0.0,
490
+ "completion_length": 733.958740234375,
491
+ "epoch": 0.3548795944233207,
492
+ "grad_norm": 0.21707987785339355,
493
+ "learning_rate": 9.821843838470534e-07,
494
+ "loss": 0.034,
495
+ "num_tokens": 24225635.0,
496
+ "reward": 0.9587054029107094,
497
+ "reward_std": 0.53031075745821,
498
+ "rewards/accuracy_reward": 0.5680803619325161,
499
+ "rewards/format_reward": 0.390625,
500
+ "step": 35
501
+ },
502
+ {
503
+ "clip_ratio": 0.0,
504
+ "completion_length": 737.1797103881836,
505
+ "epoch": 0.3650190114068441,
506
+ "grad_norm": 0.1912337839603424,
507
+ "learning_rate": 9.797464868072486e-07,
508
+ "loss": 0.033,
509
+ "num_tokens": 25014260.0,
510
+ "reward": 1.112723246216774,
511
+ "reward_std": 0.574417520314455,
512
+ "rewards/accuracy_reward": 0.5881696492433548,
513
+ "rewards/format_reward": 0.5245535708963871,
514
+ "step": 36
515
+ },
516
+ {
517
+ "clip_ratio": 0.0,
518
+ "completion_length": 740.9598541259766,
519
+ "epoch": 0.37515842839036756,
520
+ "grad_norm": 0.22646164894104004,
521
+ "learning_rate": 9.771557365159319e-07,
522
+ "loss": 0.0325,
523
+ "num_tokens": 25803688.0,
524
+ "reward": 1.1941964775323868,
525
+ "reward_std": 0.5038099698722363,
526
+ "rewards/accuracy_reward": 0.5524553544819355,
527
+ "rewards/format_reward": 0.6417410746216774,
528
+ "step": 37
529
+ },
530
+ {
531
+ "clip_ratio": 0.0,
532
+ "completion_length": 741.780158996582,
533
+ "epoch": 0.385297845373891,
534
+ "grad_norm": 0.2826821208000183,
535
+ "learning_rate": 9.744129584186597e-07,
536
+ "loss": 0.0653,
537
+ "num_tokens": 26594931.0,
538
+ "reward": 1.340401828289032,
539
+ "reward_std": 0.47122547030448914,
540
+ "rewards/accuracy_reward": 0.5691964253783226,
541
+ "rewards/format_reward": 0.7712053582072258,
542
+ "step": 38
543
+ },
544
+ {
545
+ "clip_ratio": 0.0,
546
+ "completion_length": 783.4699020385742,
547
+ "epoch": 0.39543726235741444,
548
+ "grad_norm": 0.1316017210483551,
549
+ "learning_rate": 9.71519026398956e-07,
550
+ "loss": 0.0629,
551
+ "num_tokens": 27434704.0,
552
+ "reward": 1.3526786416769028,
553
+ "reward_std": 0.40762051939964294,
554
+ "rewards/accuracy_reward": 0.5200892835855484,
555
+ "rewards/format_reward": 0.832589291036129,
556
+ "step": 39
557
+ },
558
+ {
559
+ "clip_ratio": 0.0,
560
+ "completion_length": 709.8928909301758,
561
+ "epoch": 0.4055766793409379,
562
+ "grad_norm": 0.12696978449821472,
563
+ "learning_rate": 9.68474862499881e-07,
564
+ "loss": 0.0423,
565
+ "num_tokens": 28190032.0,
566
+ "reward": 1.491071492433548,
567
+ "reward_std": 0.35216479003429413,
568
+ "rewards/accuracy_reward": 0.5848214253783226,
569
+ "rewards/format_reward": 0.9062499925494194,
570
+ "step": 40
571
+ },
572
+ {
573
+ "clip_ratio": 0.0,
574
+ "completion_length": 700.1216735839844,
575
+ "epoch": 0.4157160963244613,
576
+ "grad_norm": 0.1216612383723259,
577
+ "learning_rate": 9.652814366302568e-07,
578
+ "loss": 0.0672,
579
+ "num_tokens": 28943317.0,
580
+ "reward": 1.5658482760190964,
581
+ "reward_std": 0.34097408317029476,
582
+ "rewards/accuracy_reward": 0.6484375,
583
+ "rewards/format_reward": 0.917410708963871,
584
+ "step": 41
585
+ },
586
+ {
587
+ "clip_ratio": 0.0,
588
+ "completion_length": 703.3582916259766,
589
+ "epoch": 0.42585551330798477,
590
+ "grad_norm": 0.11864102631807327,
591
+ "learning_rate": 9.619397662556433e-07,
592
+ "loss": 0.0279,
593
+ "num_tokens": 29702494.0,
594
+ "reward": 1.537946492433548,
595
+ "reward_std": 0.2801562137901783,
596
+ "rewards/accuracy_reward": 0.5892857164144516,
597
+ "rewards/format_reward": 0.9486607164144516,
598
+ "step": 42
599
+ },
600
+ {
601
+ "clip_ratio": 0.0,
602
+ "completion_length": 775.4074020385742,
603
+ "epoch": 0.43599493029150826,
604
+ "grad_norm": 0.10538308322429657,
605
+ "learning_rate": 9.5845091607416e-07,
606
+ "loss": 0.0566,
607
+ "num_tokens": 30515963.0,
608
+ "reward": 1.5178572088479996,
609
+ "reward_std": 0.33493360690772533,
610
+ "rewards/accuracy_reward": 0.5758928544819355,
611
+ "rewards/format_reward": 0.9419642984867096,
612
+ "step": 43
613
+ },
614
+ {
615
+ "clip_ratio": 0.0,
616
+ "completion_length": 792.2288360595703,
617
+ "epoch": 0.4461343472750317,
618
+ "grad_norm": 0.10541502386331558,
619
+ "learning_rate": 9.548159976772592e-07,
620
+ "loss": 0.0749,
621
+ "num_tokens": 31355848.0,
622
+ "reward": 1.53683041036129,
623
+ "reward_std": 0.3079599365592003,
624
+ "rewards/accuracy_reward": 0.5747767873108387,
625
+ "rewards/format_reward": 0.9620535746216774,
626
+ "step": 44
627
+ },
628
+ {
629
+ "clip_ratio": 0.0,
630
+ "completion_length": 785.3750228881836,
631
+ "epoch": 0.45627376425855515,
632
+ "grad_norm": 0.11054891347885132,
633
+ "learning_rate": 9.510361691955606e-07,
634
+ "loss": 0.0749,
635
+ "num_tokens": 32181992.0,
636
+ "reward": 1.4888393431901932,
637
+ "reward_std": 0.33042189106345177,
638
+ "rewards/accuracy_reward": 0.5234375037252903,
639
+ "rewards/format_reward": 0.9654017835855484,
640
+ "step": 45
641
+ },
642
+ {
643
+ "clip_ratio": 0.0,
644
+ "completion_length": 743.7846298217773,
645
+ "epoch": 0.4664131812420786,
646
+ "grad_norm": 0.10712749511003494,
647
+ "learning_rate": 9.471126349298556e-07,
648
+ "loss": 0.0636,
649
+ "num_tokens": 32978639.0,
650
+ "reward": 1.5468750447034836,
651
+ "reward_std": 0.34593483060598373,
652
+ "rewards/accuracy_reward": 0.5770089402794838,
653
+ "rewards/format_reward": 0.9698660746216774,
654
+ "step": 46
655
+ },
656
+ {
657
+ "clip_ratio": 0.0,
658
+ "completion_length": 732.7355194091797,
659
+ "epoch": 0.47655259822560203,
660
+ "grad_norm": 0.1037302166223526,
661
+ "learning_rate": 9.430466449674013e-07,
662
+ "loss": 0.0546,
663
+ "num_tokens": 33783522.0,
664
+ "reward": 1.537946492433548,
665
+ "reward_std": 0.2701996862888336,
666
+ "rewards/accuracy_reward": 0.5558035708963871,
667
+ "rewards/format_reward": 0.9821428582072258,
668
+ "step": 47
669
+ },
670
+ {
671
+ "clip_ratio": 0.0,
672
+ "completion_length": 794.5357513427734,
673
+ "epoch": 0.4866920152091255,
674
+ "grad_norm": 0.09865893423557281,
675
+ "learning_rate": 9.388394947836278e-07,
676
+ "loss": 0.0496,
677
+ "num_tokens": 34622162.0,
678
+ "reward": 1.4854911416769028,
679
+ "reward_std": 0.27795105054974556,
680
+ "rewards/accuracy_reward": 0.5133928544819355,
681
+ "rewards/format_reward": 0.972098208963871,
682
+ "step": 48
683
+ },
684
+ {
685
+ "clip_ratio": 0.0,
686
+ "completion_length": 784.8024978637695,
687
+ "epoch": 0.4968314321926489,
688
+ "grad_norm": 0.09806529432535172,
689
+ "learning_rate": 9.344925248293835e-07,
690
+ "loss": 0.0469,
691
+ "num_tokens": 35446353.0,
692
+ "reward": 1.5401786416769028,
693
+ "reward_std": 0.2609728015959263,
694
+ "rewards/accuracy_reward": 0.5658482201397419,
695
+ "rewards/format_reward": 0.9743303582072258,
696
+ "step": 49
697
+ },
698
+ {
699
+ "clip_ratio": 0.0,
700
+ "completion_length": 755.7600708007812,
701
+ "epoch": 0.5069708491761724,
702
+ "grad_norm": 0.09463509172201157,
703
+ "learning_rate": 9.300071201038501e-07,
704
+ "loss": 0.0447,
705
+ "num_tokens": 36249690.0,
706
+ "reward": 1.5591518729925156,
707
+ "reward_std": 0.27370808832347393,
708
+ "rewards/accuracy_reward": 0.5837053693830967,
709
+ "rewards/format_reward": 0.9754464253783226,
710
+ "step": 50
711
+ },
712
+ {
713
+ "clip_ratio": 0.0,
714
+ "completion_length": 683.0134201049805,
715
+ "epoch": 0.5171102661596958,
716
+ "grad_norm": 0.1009419783949852,
717
+ "learning_rate": 9.253847097132655e-07,
718
+ "loss": 0.0259,
719
+ "num_tokens": 36992494.0,
720
+ "reward": 1.6160715073347092,
721
+ "reward_std": 0.2591102607548237,
722
+ "rewards/accuracy_reward": 0.6316964328289032,
723
+ "rewards/format_reward": 0.984375,
724
+ "step": 51
725
+ },
726
+ {
727
+ "clip_ratio": 0.0,
728
+ "completion_length": 744.6250228881836,
729
+ "epoch": 0.5272496831432193,
730
+ "grad_norm": 0.10415738075971603,
731
+ "learning_rate": 9.206267664155906e-07,
732
+ "loss": 0.0411,
733
+ "num_tokens": 37780054.0,
734
+ "reward": 1.561383992433548,
735
+ "reward_std": 0.2772454805672169,
736
+ "rewards/accuracy_reward": 0.5825892947614193,
737
+ "rewards/format_reward": 0.9787946417927742,
738
+ "step": 52
739
+ },
740
+ {
741
+ "clip_ratio": 0.0,
742
+ "completion_length": 768.4509201049805,
743
+ "epoch": 0.5373891001267427,
744
+ "grad_norm": 0.09972584992647171,
745
+ "learning_rate": 9.157348061512726e-07,
746
+ "loss": 0.0551,
747
+ "num_tokens": 38604666.0,
748
+ "reward": 1.5491072237491608,
749
+ "reward_std": 0.27522944658994675,
750
+ "rewards/accuracy_reward": 0.5792410783469677,
751
+ "rewards/format_reward": 0.9698660746216774,
752
+ "step": 53
753
+ },
754
+ {
755
+ "clip_ratio": 0.0,
756
+ "completion_length": 828.3806228637695,
757
+ "epoch": 0.5475285171102662,
758
+ "grad_norm": 0.08794871717691422,
759
+ "learning_rate": 9.107103875602458e-07,
760
+ "loss": 0.0472,
761
+ "num_tokens": 39465447.0,
762
+ "reward": 1.5133929252624512,
763
+ "reward_std": 0.29366694390773773,
764
+ "rewards/accuracy_reward": 0.5345982164144516,
765
+ "rewards/format_reward": 0.9787946417927742,
766
+ "step": 54
767
+ },
768
+ {
769
+ "clip_ratio": 0.0,
770
+ "completion_length": 784.3370895385742,
771
+ "epoch": 0.5576679340937896,
772
+ "grad_norm": 0.10152813047170639,
773
+ "learning_rate": 9.055551114853295e-07,
774
+ "loss": 0.049,
775
+ "num_tokens": 40298605.0,
776
+ "reward": 1.507812574505806,
777
+ "reward_std": 0.2921369504183531,
778
+ "rewards/accuracy_reward": 0.5312499925494194,
779
+ "rewards/format_reward": 0.9765624925494194,
780
+ "step": 55
781
+ },
782
+ {
783
+ "clip_ratio": 0.0,
784
+ "completion_length": 721.1373062133789,
785
+ "epoch": 0.5678073510773131,
786
+ "grad_norm": 0.09925797581672668,
787
+ "learning_rate": 9.002706204621802e-07,
788
+ "loss": 0.0244,
789
+ "num_tokens": 41068424.0,
790
+ "reward": 1.585937574505806,
791
+ "reward_std": 0.26984195969998837,
792
+ "rewards/accuracy_reward": 0.5959821343421936,
793
+ "rewards/format_reward": 0.9899553582072258,
794
+ "step": 56
795
+ },
796
+ {
797
+ "clip_ratio": 0.0,
798
+ "completion_length": 754.9319458007812,
799
+ "epoch": 0.5779467680608364,
800
+ "grad_norm": 0.0985010415315628,
801
+ "learning_rate": 8.948585981959578e-07,
802
+ "loss": 0.055,
803
+ "num_tokens": 41876427.0,
804
+ "reward": 1.5926340222358704,
805
+ "reward_std": 0.2599882632493973,
806
+ "rewards/accuracy_reward": 0.618303582072258,
807
+ "rewards/format_reward": 0.9743303507566452,
808
+ "step": 57
809
+ },
810
+ {
811
+ "clip_ratio": 0.0,
812
+ "completion_length": 735.5156631469727,
813
+ "epoch": 0.5880861850443599,
814
+ "grad_norm": 0.09651587158441544,
815
+ "learning_rate": 8.893207690248775e-07,
816
+ "loss": 0.0342,
817
+ "num_tokens": 42656513.0,
818
+ "reward": 1.5904018431901932,
819
+ "reward_std": 0.2695194110274315,
820
+ "rewards/accuracy_reward": 0.6071428507566452,
821
+ "rewards/format_reward": 0.9832589328289032,
822
+ "step": 58
823
+ },
824
+ {
825
+ "clip_ratio": 0.0,
826
+ "completion_length": 701.5614166259766,
827
+ "epoch": 0.5982256020278834,
828
+ "grad_norm": 0.10193536430597305,
829
+ "learning_rate": 8.836588973708128e-07,
830
+ "loss": 0.0322,
831
+ "num_tokens": 43426704.0,
832
+ "reward": 1.537946492433548,
833
+ "reward_std": 0.2486334890127182,
834
+ "rewards/accuracy_reward": 0.5479910708963871,
835
+ "rewards/format_reward": 0.9899553507566452,
836
+ "step": 59
837
+ },
838
+ {
839
+ "clip_ratio": 0.0,
840
+ "completion_length": 767.4029312133789,
841
+ "epoch": 0.6083650190114068,
842
+ "grad_norm": 0.09067587554454803,
843
+ "learning_rate": 8.778747871771291e-07,
844
+ "loss": 0.0462,
845
+ "num_tokens": 44251545.0,
846
+ "reward": 1.5535715073347092,
847
+ "reward_std": 0.2701433580368757,
848
+ "rewards/accuracy_reward": 0.5714285783469677,
849
+ "rewards/format_reward": 0.9821428582072258,
850
+ "step": 60
851
+ },
852
+ {
853
+ "clip_ratio": 0.0,
854
+ "completion_length": 728.3716888427734,
855
+ "epoch": 0.6185044359949303,
856
+ "grad_norm": 0.10155454277992249,
857
+ "learning_rate": 8.719702813339247e-07,
858
+ "loss": 0.0326,
859
+ "num_tokens": 45026510.0,
860
+ "reward": 1.577008992433548,
861
+ "reward_std": 0.2642171438783407,
862
+ "rewards/accuracy_reward": 0.5904017873108387,
863
+ "rewards/format_reward": 0.9866071343421936,
864
+ "step": 61
865
+ },
866
+ {
867
+ "clip_ratio": 0.0,
868
+ "completion_length": 777.1127548217773,
869
+ "epoch": 0.6286438529784537,
870
+ "grad_norm": 0.09913618117570877,
871
+ "learning_rate": 8.659472610908627e-07,
872
+ "loss": 0.0358,
873
+ "num_tokens": 45852195.0,
874
+ "reward": 1.5312500894069672,
875
+ "reward_std": 0.27686445228755474,
876
+ "rewards/accuracy_reward": 0.5446428544819355,
877
+ "rewards/format_reward": 0.986607126891613,
878
+ "step": 62
879
+ },
880
+ {
881
+ "clip_ratio": 0.0,
882
+ "completion_length": 697.9274826049805,
883
+ "epoch": 0.6387832699619772,
884
+ "grad_norm": 0.10096921026706696,
885
+ "learning_rate": 8.598076454577814e-07,
886
+ "loss": 0.0279,
887
+ "num_tokens": 46601834.0,
888
+ "reward": 1.6305804252624512,
889
+ "reward_std": 0.2701211776584387,
890
+ "rewards/accuracy_reward": 0.6428571492433548,
891
+ "rewards/format_reward": 0.9877232015132904,
892
+ "step": 63
893
+ },
894
+ {
895
+ "clip_ratio": 0.0,
896
+ "completion_length": 724.1774826049805,
897
+ "epoch": 0.6489226869455006,
898
+ "grad_norm": 0.09761769324541092,
899
+ "learning_rate": 8.535533905932737e-07,
900
+ "loss": 0.0398,
901
+ "num_tokens": 47398825.0,
902
+ "reward": 1.601562574505806,
903
+ "reward_std": 0.23534459993243217,
904
+ "rewards/accuracy_reward": 0.6093749962747097,
905
+ "rewards/format_reward": 0.9921875,
906
+ "step": 64
907
+ },
908
+ {
909
+ "clip_ratio": 0.0,
910
+ "completion_length": 717.3169937133789,
911
+ "epoch": 0.6590621039290241,
912
+ "grad_norm": 0.11670506000518799,
913
+ "learning_rate": 8.471864891814304e-07,
914
+ "loss": 0.0516,
915
+ "num_tokens": 48160645.0,
916
+ "reward": 1.6060268580913544,
917
+ "reward_std": 0.26124117337167263,
918
+ "rewards/accuracy_reward": 0.6216517798602581,
919
+ "rewards/format_reward": 0.984375,
920
+ "step": 65
921
+ },
922
+ {
923
+ "clip_ratio": 0.0,
924
+ "completion_length": 766.2712478637695,
925
+ "epoch": 0.6692015209125475,
926
+ "grad_norm": 0.09610045701265335,
927
+ "learning_rate": 8.407089697969456e-07,
928
+ "loss": 0.0429,
929
+ "num_tokens": 48972088.0,
930
+ "reward": 1.5725447088479996,
931
+ "reward_std": 0.25589474849402905,
932
+ "rewards/accuracy_reward": 0.5937499850988388,
933
+ "rewards/format_reward": 0.9787946417927742,
934
+ "step": 66
935
+ },
936
+ {
937
+ "clip_ratio": 0.0,
938
+ "completion_length": 794.2879867553711,
939
+ "epoch": 0.679340937896071,
940
+ "grad_norm": 0.08830226212739944,
941
+ "learning_rate": 8.341228962587881e-07,
942
+ "loss": 0.0485,
943
+ "num_tokens": 49819898.0,
944
+ "reward": 1.545758992433548,
945
+ "reward_std": 0.2661575675010681,
946
+ "rewards/accuracy_reward": 0.5669642947614193,
947
+ "rewards/format_reward": 0.9787946343421936,
948
+ "step": 67
949
+ },
950
+ {
951
+ "clip_ratio": 0.0,
952
+ "completion_length": 702.197582244873,
953
+ "epoch": 0.6894803548795945,
954
+ "grad_norm": 0.08808374404907227,
955
+ "learning_rate": 8.274303669726426e-07,
956
+ "loss": 0.0427,
957
+ "num_tokens": 50570387.0,
958
+ "reward": 1.641741156578064,
959
+ "reward_std": 0.24876852426677942,
960
+ "rewards/accuracy_reward": 0.6595982052385807,
961
+ "rewards/format_reward": 0.9821428582072258,
962
+ "step": 68
963
+ },
964
+ {
965
+ "clip_ratio": 0.0,
966
+ "completion_length": 780.8024978637695,
967
+ "epoch": 0.6996197718631179,
968
+ "grad_norm": 0.0812569409608841,
969
+ "learning_rate": 8.206335142623304e-07,
970
+ "loss": 0.0349,
971
+ "num_tokens": 51399746.0,
972
+ "reward": 1.5446429252624512,
973
+ "reward_std": 0.21720573492348194,
974
+ "rewards/accuracy_reward": 0.5680803582072258,
975
+ "rewards/format_reward": 0.9765624925494194,
976
+ "step": 69
977
+ },
978
+ {
979
+ "clip_ratio": 0.0,
980
+ "completion_length": 757.3337326049805,
981
+ "epoch": 0.7097591888466414,
982
+ "grad_norm": 0.08954399079084396,
983
+ "learning_rate": 8.137345036904259e-07,
984
+ "loss": 0.0439,
985
+ "num_tokens": 52197693.0,
986
+ "reward": 1.5390625596046448,
987
+ "reward_std": 0.25285689160227776,
988
+ "rewards/accuracy_reward": 0.5624999962747097,
989
+ "rewards/format_reward": 0.9765624925494194,
990
+ "step": 70
991
+ },
992
+ {
993
+ "clip_ratio": 0.0,
994
+ "completion_length": 731.8973541259766,
995
+ "epoch": 0.7198986058301647,
996
+ "grad_norm": 0.09004837274551392,
997
+ "learning_rate": 8.067355333682797e-07,
998
+ "loss": 0.0354,
999
+ "num_tokens": 52992241.0,
1000
+ "reward": 1.562500074505806,
1001
+ "reward_std": 0.22563916258513927,
1002
+ "rewards/accuracy_reward": 0.5814732126891613,
1003
+ "rewards/format_reward": 0.9810267761349678,
1004
+ "step": 71
1005
+ },
1006
+ {
1007
+ "clip_ratio": 0.0,
1008
+ "completion_length": 734.0893249511719,
1009
+ "epoch": 0.7300380228136882,
1010
+ "grad_norm": 0.0921608954668045,
1011
+ "learning_rate": 7.996388332556734e-07,
1012
+ "loss": 0.0553,
1013
+ "num_tokens": 53778705.0,
1014
+ "reward": 1.561383992433548,
1015
+ "reward_std": 0.25903933122754097,
1016
+ "rewards/accuracy_reward": 0.5792410857975483,
1017
+ "rewards/format_reward": 0.9821428582072258,
1018
+ "step": 72
1019
+ },
1020
+ {
1021
+ "clip_ratio": 0.0,
1022
+ "completion_length": 783.1897735595703,
1023
+ "epoch": 0.7401774397972116,
1024
+ "grad_norm": 0.09094219654798508,
1025
+ "learning_rate": 7.924466644503264e-07,
1026
+ "loss": 0.0356,
1027
+ "num_tokens": 54612939.0,
1028
+ "reward": 1.5558036416769028,
1029
+ "reward_std": 0.25537824630737305,
1030
+ "rewards/accuracy_reward": 0.5691964365541935,
1031
+ "rewards/format_reward": 0.9866071343421936,
1032
+ "step": 73
1033
+ },
1034
+ {
1035
+ "clip_ratio": 0.0,
1036
+ "completion_length": 725.2801666259766,
1037
+ "epoch": 0.7503168567807351,
1038
+ "grad_norm": 0.09524378925561905,
1039
+ "learning_rate": 7.85161318467482e-07,
1040
+ "loss": 0.0229,
1041
+ "num_tokens": 55398174.0,
1042
+ "reward": 1.6205357909202576,
1043
+ "reward_std": 0.2613362595438957,
1044
+ "rewards/accuracy_reward": 0.6305803582072258,
1045
+ "rewards/format_reward": 0.9899553507566452,
1046
+ "step": 74
1047
+ },
1048
+ {
1049
+ "clip_ratio": 0.0,
1050
+ "completion_length": 710.1853103637695,
1051
+ "epoch": 0.7604562737642585,
1052
+ "grad_norm": 0.0883307084441185,
1053
+ "learning_rate": 7.777851165098011e-07,
1054
+ "loss": 0.0235,
1055
+ "num_tokens": 56165180.0,
1056
+ "reward": 1.6183036416769028,
1057
+ "reward_std": 0.21434970386326313,
1058
+ "rewards/accuracy_reward": 0.6305803619325161,
1059
+ "rewards/format_reward": 0.9877232015132904,
1060
+ "step": 75
1061
+ },
1062
+ {
1063
+ "clip_ratio": 0.0,
1064
+ "completion_length": 690.4687805175781,
1065
+ "epoch": 0.770595690747782,
1066
+ "grad_norm": 0.09709777683019638,
1067
+ "learning_rate": 7.703204087277988e-07,
1068
+ "loss": 0.0316,
1069
+ "num_tokens": 56911080.0,
1070
+ "reward": 1.5267857760190964,
1071
+ "reward_std": 0.23126200772821903,
1072
+ "rewards/accuracy_reward": 0.5401785671710968,
1073
+ "rewards/format_reward": 0.9866071492433548,
1074
+ "step": 76
1075
+ },
1076
+ {
1077
+ "clip_ratio": 0.0,
1078
+ "completion_length": 753.3359756469727,
1079
+ "epoch": 0.7807351077313055,
1080
+ "grad_norm": 0.08559317141771317,
1081
+ "learning_rate": 7.627695734710564e-07,
1082
+ "loss": 0.0362,
1083
+ "num_tokens": 57708293.0,
1084
+ "reward": 1.601562574505806,
1085
+ "reward_std": 0.22745812311768532,
1086
+ "rewards/accuracy_reward": 0.6216517761349678,
1087
+ "rewards/format_reward": 0.979910708963871,
1088
+ "step": 77
1089
+ },
1090
+ {
1091
+ "clip_ratio": 0.0,
1092
+ "completion_length": 771.0870895385742,
1093
+ "epoch": 0.7908745247148289,
1094
+ "grad_norm": 0.08436188846826553,
1095
+ "learning_rate": 7.551350165304499e-07,
1096
+ "loss": 0.0295,
1097
+ "num_tokens": 58516963.0,
1098
+ "reward": 1.6361608058214188,
1099
+ "reward_std": 0.22785574942827225,
1100
+ "rewards/accuracy_reward": 0.6540178693830967,
1101
+ "rewards/format_reward": 0.9821428582072258,
1102
+ "step": 78
1103
+ },
1104
+ {
1105
+ "clip_ratio": 0.0,
1106
+ "completion_length": 738.5379791259766,
1107
+ "epoch": 0.8010139416983524,
1108
+ "grad_norm": 0.08563719689846039,
1109
+ "learning_rate": 7.474191703716338e-07,
1110
+ "loss": 0.0205,
1111
+ "num_tokens": 59309253.0,
1112
+ "reward": 1.6015625894069672,
1113
+ "reward_std": 0.211506649851799,
1114
+ "rewards/accuracy_reward": 0.6071428544819355,
1115
+ "rewards/format_reward": 0.9944196417927742,
1116
+ "step": 79
1117
+ },
1118
+ {
1119
+ "clip_ratio": 0.0,
1120
+ "completion_length": 729.372802734375,
1121
+ "epoch": 0.8111533586818758,
1122
+ "grad_norm": 0.09525574743747711,
1123
+ "learning_rate": 7.396244933600284e-07,
1124
+ "loss": 0.0426,
1125
+ "num_tokens": 60087059.0,
1126
+ "reward": 1.6004464775323868,
1127
+ "reward_std": 0.24874755926430225,
1128
+ "rewards/accuracy_reward": 0.608258917927742,
1129
+ "rewards/format_reward": 0.9921874925494194,
1130
+ "step": 80
1131
+ },
1132
+ {
1133
+ "clip_ratio": 0.0,
1134
+ "completion_length": 742.4721298217773,
1135
+ "epoch": 0.8212927756653993,
1136
+ "grad_norm": 0.0932924821972847,
1137
+ "learning_rate": 7.317534689775527e-07,
1138
+ "loss": 0.035,
1139
+ "num_tokens": 60872018.0,
1140
+ "reward": 1.5714286267757416,
1141
+ "reward_std": 0.25846732780337334,
1142
+ "rewards/accuracy_reward": 0.5881696417927742,
1143
+ "rewards/format_reward": 0.9832589253783226,
1144
+ "step": 81
1145
+ },
1146
+ {
1147
+ "clip_ratio": 0.0,
1148
+ "completion_length": 709.4643173217773,
1149
+ "epoch": 0.8314321926489227,
1150
+ "grad_norm": 0.09472807496786118,
1151
+ "learning_rate": 7.238086050313562e-07,
1152
+ "loss": 0.0341,
1153
+ "num_tokens": 61642954.0,
1154
+ "reward": 1.5870536416769028,
1155
+ "reward_std": 0.23454621247947216,
1156
+ "rewards/accuracy_reward": 0.5993303582072258,
1157
+ "rewards/format_reward": 0.9877232015132904,
1158
+ "step": 82
1159
+ },
1160
+ {
1161
+ "clip_ratio": 0.0,
1162
+ "completion_length": 716.0413208007812,
1163
+ "epoch": 0.8415716096324461,
1164
+ "grad_norm": 0.10518212616443634,
1165
+ "learning_rate": 7.157924328548002e-07,
1166
+ "loss": 0.045,
1167
+ "num_tokens": 62418119.0,
1168
+ "reward": 1.6026786416769028,
1169
+ "reward_std": 0.2639338057488203,
1170
+ "rewards/accuracy_reward": 0.6160714328289032,
1171
+ "rewards/format_reward": 0.9866071343421936,
1172
+ "step": 83
1173
+ },
1174
+ {
1175
+ "clip_ratio": 0.0,
1176
+ "completion_length": 724.6919937133789,
1177
+ "epoch": 0.8517110266159695,
1178
+ "grad_norm": 0.0993557870388031,
1179
+ "learning_rate": 7.077075065009433e-07,
1180
+ "loss": 0.0374,
1181
+ "num_tokens": 63200755.0,
1182
+ "reward": 1.649553656578064,
1183
+ "reward_std": 0.2632820960134268,
1184
+ "rewards/accuracy_reward": 0.659598208963871,
1185
+ "rewards/format_reward": 0.9899553507566452,
1186
+ "step": 84
1187
+ },
1188
+ {
1189
+ "clip_ratio": 0.0,
1190
+ "completion_length": 694.9643096923828,
1191
+ "epoch": 0.861850443599493,
1192
+ "grad_norm": 0.0879211500287056,
1193
+ "learning_rate": 6.995564019287869e-07,
1194
+ "loss": 0.0205,
1195
+ "num_tokens": 63953259.0,
1196
+ "reward": 1.5792411267757416,
1197
+ "reward_std": 0.19979050569236279,
1198
+ "rewards/accuracy_reward": 0.5859375037252903,
1199
+ "rewards/format_reward": 0.9933035671710968,
1200
+ "step": 85
1201
+ },
1202
+ {
1203
+ "clip_ratio": 0.0,
1204
+ "completion_length": 764.1395416259766,
1205
+ "epoch": 0.8719898605830165,
1206
+ "grad_norm": 0.3273586332798004,
1207
+ "learning_rate": 6.913417161825449e-07,
1208
+ "loss": 0.027,
1209
+ "num_tokens": 64772840.0,
1210
+ "reward": 1.5647322237491608,
1211
+ "reward_std": 0.23416299000382423,
1212
+ "rewards/accuracy_reward": 0.5758928582072258,
1213
+ "rewards/format_reward": 0.9888392686843872,
1214
+ "step": 86
1215
+ },
1216
+ {
1217
+ "clip_ratio": 0.0,
1218
+ "completion_length": 723.0658798217773,
1219
+ "epoch": 0.8821292775665399,
1220
+ "grad_norm": 0.231331005692482,
1221
+ "learning_rate": 6.830660665641897e-07,
1222
+ "loss": 0.0197,
1223
+ "num_tokens": 65550427.0,
1224
+ "reward": 1.6183036714792252,
1225
+ "reward_std": 0.2430772576481104,
1226
+ "rewards/accuracy_reward": 0.6272321380674839,
1227
+ "rewards/format_reward": 0.9910714253783226,
1228
+ "step": 87
1229
+ },
1230
+ {
1231
+ "clip_ratio": 0.0,
1232
+ "completion_length": 756.286865234375,
1233
+ "epoch": 0.8922686945500634,
1234
+ "grad_norm": 0.09298628568649292,
1235
+ "learning_rate": 6.747320897995492e-07,
1236
+ "loss": 0.0426,
1237
+ "num_tokens": 66357052.0,
1238
+ "reward": 1.5915179252624512,
1239
+ "reward_std": 0.2366574928164482,
1240
+ "rewards/accuracy_reward": 0.6149553656578064,
1241
+ "rewards/format_reward": 0.9765625,
1242
+ "step": 88
1243
+ },
1244
+ {
1245
+ "clip_ratio": 0.0,
1246
+ "completion_length": 686.9029312133789,
1247
+ "epoch": 0.9024081115335868,
1248
+ "grad_norm": 0.0874035432934761,
1249
+ "learning_rate": 6.66342441198212e-07,
1250
+ "loss": 0.015,
1251
+ "num_tokens": 67096421.0,
1252
+ "reward": 1.6127232909202576,
1253
+ "reward_std": 0.1957838051021099,
1254
+ "rewards/accuracy_reward": 0.6205357126891613,
1255
+ "rewards/format_reward": 0.9921874925494194,
1256
+ "step": 89
1257
+ },
1258
+ {
1259
+ "clip_ratio": 0.0,
1260
+ "completion_length": 701.6573944091797,
1261
+ "epoch": 0.9125475285171103,
1262
+ "grad_norm": 0.09878430515527725,
1263
+ "learning_rate": 6.578997938075125e-07,
1264
+ "loss": 0.0477,
1265
+ "num_tokens": 67849578.0,
1266
+ "reward": 1.5691965073347092,
1267
+ "reward_std": 0.2424814011901617,
1268
+ "rewards/accuracy_reward": 0.5881696492433548,
1269
+ "rewards/format_reward": 0.9810267835855484,
1270
+ "step": 90
1271
+ },
1272
+ {
1273
+ "clip_ratio": 0.0,
1274
+ "completion_length": 717.0368499755859,
1275
+ "epoch": 0.9226869455006337,
1276
+ "grad_norm": 0.0928407534956932,
1277
+ "learning_rate": 6.494068375608646e-07,
1278
+ "loss": 0.0515,
1279
+ "num_tokens": 68626667.0,
1280
+ "reward": 1.5803571939468384,
1281
+ "reward_std": 0.2204410433769226,
1282
+ "rewards/accuracy_reward": 0.5993303582072258,
1283
+ "rewards/format_reward": 0.9810267835855484,
1284
+ "step": 91
1285
+ },
1286
+ {
1287
+ "clip_ratio": 0.0,
1288
+ "completion_length": 750.7065048217773,
1289
+ "epoch": 0.9328263624841572,
1290
+ "grad_norm": 0.09953666478395462,
1291
+ "learning_rate": 6.408662784207149e-07,
1292
+ "loss": 0.0434,
1293
+ "num_tokens": 69438676.0,
1294
+ "reward": 1.6037947237491608,
1295
+ "reward_std": 0.263325035572052,
1296
+ "rewards/accuracy_reward": 0.621651791036129,
1297
+ "rewards/format_reward": 0.9821428507566452,
1298
+ "step": 92
1299
+ },
1300
+ {
1301
+ "clip_ratio": 0.0,
1302
+ "completion_length": 673.391773223877,
1303
+ "epoch": 0.9429657794676806,
1304
+ "grad_norm": 0.09401652216911316,
1305
+ "learning_rate": 6.322808375163895e-07,
1306
+ "loss": 0.0215,
1307
+ "num_tokens": 70168915.0,
1308
+ "reward": 1.6227679252624512,
1309
+ "reward_std": 0.1998270694166422,
1310
+ "rewards/accuracy_reward": 0.629464291036129,
1311
+ "rewards/format_reward": 0.9933035746216774,
1312
+ "step": 93
1313
+ },
1314
+ {
1315
+ "clip_ratio": 0.0,
1316
+ "completion_length": 706.0636520385742,
1317
+ "epoch": 0.9531051964512041,
1318
+ "grad_norm": 0.10031445324420929,
1319
+ "learning_rate": 6.236532502771077e-07,
1320
+ "loss": 0.029,
1321
+ "num_tokens": 70935148.0,
1322
+ "reward": 1.600446492433548,
1323
+ "reward_std": 0.2434455920010805,
1324
+ "rewards/accuracy_reward": 0.6104910634458065,
1325
+ "rewards/format_reward": 0.9899553582072258,
1326
+ "step": 94
1327
+ },
1328
+ {
1329
+ "clip_ratio": 0.0,
1330
+ "completion_length": 702.7511444091797,
1331
+ "epoch": 0.9632446134347274,
1332
+ "grad_norm": 0.09827374666929245,
1333
+ "learning_rate": 6.149862655604403e-07,
1334
+ "loss": 0.0358,
1335
+ "num_tokens": 71690501.0,
1336
+ "reward": 1.5982143431901932,
1337
+ "reward_std": 0.2399199977517128,
1338
+ "rewards/accuracy_reward": 0.6149553507566452,
1339
+ "rewards/format_reward": 0.9832589253783226,
1340
+ "step": 95
1341
+ },
1342
+ {
1343
+ "clip_ratio": 0.0,
1344
+ "completion_length": 713.8114166259766,
1345
+ "epoch": 0.973384030418251,
1346
+ "grad_norm": 0.08831395208835602,
1347
+ "learning_rate": 6.062826447764883e-07,
1348
+ "loss": 0.031,
1349
+ "num_tokens": 72452820.0,
1350
+ "reward": 1.6127232760190964,
1351
+ "reward_std": 0.2217018250375986,
1352
+ "rewards/accuracy_reward": 0.6238839253783226,
1353
+ "rewards/format_reward": 0.9888392761349678,
1354
+ "step": 96
1355
+ },
1356
+ {
1357
+ "clip_ratio": 0.0,
1358
+ "completion_length": 701.4710083007812,
1359
+ "epoch": 0.9835234474017744,
1360
+ "grad_norm": 0.12909561395645142,
1361
+ "learning_rate": 5.975451610080642e-07,
1362
+ "loss": 0.0299,
1363
+ "num_tokens": 73209146.0,
1364
+ "reward": 1.5602679252624512,
1365
+ "reward_std": 0.21769515611231327,
1366
+ "rewards/accuracy_reward": 0.5680803507566452,
1367
+ "rewards/format_reward": 0.9921874925494194,
1368
+ "step": 97
1369
+ },
1370
+ {
1371
+ "clip_ratio": 0.0,
1372
+ "completion_length": 698.564453125,
1373
+ "epoch": 0.9936628643852978,
1374
+ "grad_norm": 0.09522398561239243,
1375
+ "learning_rate": 5.887765981271517e-07,
1376
+ "loss": 0.0337,
1377
+ "num_tokens": 73991697.0,
1378
+ "reward": 1.56808041036129,
1379
+ "reward_std": 0.24109553173184395,
1380
+ "rewards/accuracy_reward": 0.5792410746216774,
1381
+ "rewards/format_reward": 0.9888392761349678,
1382
+ "step": 98
1383
+ },
1384
+ {
1385
+ "clip_ratio": 0.0,
1386
+ "completion_length": 719.4676666259766,
1387
+ "epoch": 1.0101394169835234,
1388
+ "grad_norm": 0.09128111600875854,
1389
+ "learning_rate": 5.7997974990793e-07,
1390
+ "loss": 0.0236,
1391
+ "num_tokens": 74765332.0,
1392
+ "reward": 1.648437574505806,
1393
+ "reward_std": 0.22949284128844738,
1394
+ "rewards/accuracy_reward": 0.65625,
1395
+ "rewards/format_reward": 0.9921874925494194,
1396
+ "step": 99
1397
+ },
1398
+ {
1399
+ "clip_ratio": 0.0,
1400
+ "completion_length": 720.794677734375,
1401
+ "epoch": 1.020278833967047,
1402
+ "grad_norm": 0.09661993384361267,
1403
+ "learning_rate": 5.711574191366427e-07,
1404
+ "loss": 0.0317,
1405
+ "num_tokens": 75538540.0,
1406
+ "reward": 1.617187574505806,
1407
+ "reward_std": 0.21921591460704803,
1408
+ "rewards/accuracy_reward": 0.6316964328289032,
1409
+ "rewards/format_reward": 0.9854910671710968,
1410
+ "step": 100
1411
+ },
1412
+ {
1413
+ "clip_ratio": 0.0,
1414
+ "completion_length": 715.5815048217773,
1415
+ "epoch": 1.0304182509505704,
1416
+ "grad_norm": 0.09119318425655365,
1417
+ "learning_rate": 5.623124167185929e-07,
1418
+ "loss": 0.0238,
1419
+ "num_tokens": 76302229.0,
1420
+ "reward": 1.6149554401636124,
1421
+ "reward_std": 0.23784785345196724,
1422
+ "rewards/accuracy_reward": 0.6328125,
1423
+ "rewards/format_reward": 0.9821428507566452,
1424
+ "step": 101
1425
+ },
1426
+ {
1427
+ "clip_ratio": 0.0,
1428
+ "completion_length": 671.747802734375,
1429
+ "epoch": 1.0405576679340938,
1430
+ "grad_norm": 0.0963052362203598,
1431
+ "learning_rate": 5.534475607825565e-07,
1432
+ "loss": 0.0087,
1433
+ "num_tokens": 77031403.0,
1434
+ "reward": 1.6227679252624512,
1435
+ "reward_std": 0.22612315602600574,
1436
+ "rewards/accuracy_reward": 0.6272321417927742,
1437
+ "rewards/format_reward": 0.9955357015132904,
1438
+ "step": 102
1439
+ },
1440
+ {
1441
+ "clip_ratio": 0.0,
1442
+ "completion_length": 736.8772659301758,
1443
+ "epoch": 1.0506970849176172,
1444
+ "grad_norm": 0.09166496247053146,
1445
+ "learning_rate": 5.445656757828879e-07,
1446
+ "loss": 0.0313,
1447
+ "num_tokens": 77822949.0,
1448
+ "reward": 1.5044643431901932,
1449
+ "reward_std": 0.22669786028563976,
1450
+ "rewards/accuracy_reward": 0.5189732238650322,
1451
+ "rewards/format_reward": 0.9854910746216774,
1452
+ "step": 103
1453
+ },
1454
+ {
1455
+ "clip_ratio": 0.0,
1456
+ "completion_length": 719.7199020385742,
1457
+ "epoch": 1.0608365019011408,
1458
+ "grad_norm": 0.09398168325424194,
1459
+ "learning_rate": 5.356695915996161e-07,
1460
+ "loss": 0.0265,
1461
+ "num_tokens": 78602178.0,
1462
+ "reward": 1.5602679252624512,
1463
+ "reward_std": 0.25012633949518204,
1464
+ "rewards/accuracy_reward": 0.5691964328289032,
1465
+ "rewards/format_reward": 0.9910714253783226,
1466
+ "step": 104
1467
+ },
1468
+ {
1469
+ "clip_ratio": 0.0,
1470
+ "completion_length": 688.8962249755859,
1471
+ "epoch": 1.0709759188846641,
1472
+ "grad_norm": 0.11429024487733841,
1473
+ "learning_rate": 5.267621426368075e-07,
1474
+ "loss": 0.0246,
1475
+ "num_tokens": 79338645.0,
1476
+ "reward": 1.6383929550647736,
1477
+ "reward_std": 0.266002238728106,
1478
+ "rewards/accuracy_reward": 0.6495535746216774,
1479
+ "rewards/format_reward": 0.9888392761349678,
1480
+ "step": 105
1481
+ },
1482
+ {
1483
+ "clip_ratio": 0.0,
1484
+ "completion_length": 645.3884124755859,
1485
+ "epoch": 1.0811153358681875,
1486
+ "grad_norm": 0.0960325375199318,
1487
+ "learning_rate": 5.178461669194903e-07,
1488
+ "loss": 0.0291,
1489
+ "num_tokens": 80044121.0,
1490
+ "reward": 1.670758992433548,
1491
+ "reward_std": 0.21290560998022556,
1492
+ "rewards/accuracy_reward": 0.6808035671710968,
1493
+ "rewards/format_reward": 0.9899553582072258,
1494
+ "step": 106
1495
+ },
1496
+ {
1497
+ "clip_ratio": 0.0,
1498
+ "completion_length": 682.0167694091797,
1499
+ "epoch": 1.091254752851711,
1500
+ "grad_norm": 0.08784578740596771,
1501
+ "learning_rate": 5.08924505189423e-07,
1502
+ "loss": 0.0275,
1503
+ "num_tokens": 80781168.0,
1504
+ "reward": 1.649553656578064,
1505
+ "reward_std": 0.1967218518257141,
1506
+ "rewards/accuracy_reward": 0.65625,
1507
+ "rewards/format_reward": 0.9933035597205162,
1508
+ "step": 107
1509
+ },
1510
+ {
1511
+ "clip_ratio": 0.0,
1512
+ "completion_length": 718.1272659301758,
1513
+ "epoch": 1.1013941698352345,
1514
+ "grad_norm": 0.08991685509681702,
1515
+ "learning_rate": 5e-07,
1516
+ "loss": 0.0394,
1517
+ "num_tokens": 81548730.0,
1518
+ "reward": 1.626116156578064,
1519
+ "reward_std": 0.24366096779704094,
1520
+ "rewards/accuracy_reward": 0.6450892835855484,
1521
+ "rewards/format_reward": 0.981026791036129,
1522
+ "step": 108
1523
+ },
1524
+ {
1525
+ "clip_ratio": 0.0,
1526
+ "completion_length": 746.9297256469727,
1527
+ "epoch": 1.111533586818758,
1528
+ "grad_norm": 0.10698525607585907,
1529
+ "learning_rate": 4.91075494810577e-07,
1530
+ "loss": 0.0338,
1531
+ "num_tokens": 82335971.0,
1532
+ "reward": 1.6015625596046448,
1533
+ "reward_std": 0.2378039713948965,
1534
+ "rewards/accuracy_reward": 0.6160714253783226,
1535
+ "rewards/format_reward": 0.9854910671710968,
1536
+ "step": 109
1537
+ },
1538
+ {
1539
+ "clip_ratio": 0.0,
1540
+ "completion_length": 744.1752624511719,
1541
+ "epoch": 1.1216730038022813,
1542
+ "grad_norm": 0.0870419293642044,
1543
+ "learning_rate": 4.821538330805098e-07,
1544
+ "loss": 0.0489,
1545
+ "num_tokens": 83129272.0,
1546
+ "reward": 1.5982143580913544,
1547
+ "reward_std": 0.22117138653993607,
1548
+ "rewards/accuracy_reward": 0.6183035746216774,
1549
+ "rewards/format_reward": 0.979910708963871,
1550
+ "step": 110
1551
+ },
1552
+ {
1553
+ "clip_ratio": 0.0,
1554
+ "completion_length": 729.8170013427734,
1555
+ "epoch": 1.131812420785805,
1556
+ "grad_norm": 0.09355349838733673,
1557
+ "learning_rate": 4.732378573631924e-07,
1558
+ "loss": 0.0372,
1559
+ "num_tokens": 83905916.0,
1560
+ "reward": 1.6004465073347092,
1561
+ "reward_std": 0.24993346445262432,
1562
+ "rewards/accuracy_reward": 0.609375,
1563
+ "rewards/format_reward": 0.9910714253783226,
1564
+ "step": 111
1565
+ },
1566
+ {
1567
+ "clip_ratio": 0.0,
1568
+ "completion_length": 730.1663208007812,
1569
+ "epoch": 1.1419518377693283,
1570
+ "grad_norm": 0.2215433269739151,
1571
+ "learning_rate": 4.643304084003838e-07,
1572
+ "loss": 0.025,
1573
+ "num_tokens": 84687593.0,
1574
+ "reward": 1.5837054252624512,
1575
+ "reward_std": 0.23716635070741177,
1576
+ "rewards/accuracy_reward": 0.594866082072258,
1577
+ "rewards/format_reward": 0.9888392686843872,
1578
+ "step": 112
1579
+ },
1580
+ {
1581
+ "clip_ratio": 0.0,
1582
+ "completion_length": 704.9989242553711,
1583
+ "epoch": 1.1520912547528517,
1584
+ "grad_norm": 0.10191125422716141,
1585
+ "learning_rate": 4.55434324217112e-07,
1586
+ "loss": 0.0477,
1587
+ "num_tokens": 85448512.0,
1588
+ "reward": 1.5513393580913544,
1589
+ "reward_std": 0.24850638210773468,
1590
+ "rewards/accuracy_reward": 0.5691964291036129,
1591
+ "rewards/format_reward": 0.9821428433060646,
1592
+ "step": 113
1593
+ },
1594
+ {
1595
+ "clip_ratio": 0.0,
1596
+ "completion_length": 696.9486846923828,
1597
+ "epoch": 1.162230671736375,
1598
+ "grad_norm": 0.08355645090341568,
1599
+ "learning_rate": 4.4655243921744367e-07,
1600
+ "loss": 0.0175,
1601
+ "num_tokens": 86197938.0,
1602
+ "reward": 1.5982143729925156,
1603
+ "reward_std": 0.1789538525044918,
1604
+ "rewards/accuracy_reward": 0.6049107164144516,
1605
+ "rewards/format_reward": 0.9933035671710968,
1606
+ "step": 114
1607
+ },
1608
+ {
1609
+ "clip_ratio": 0.0,
1610
+ "completion_length": 697.7143096923828,
1611
+ "epoch": 1.1723700887198987,
1612
+ "grad_norm": 0.08945267647504807,
1613
+ "learning_rate": 4.37687583281407e-07,
1614
+ "loss": 0.0261,
1615
+ "num_tokens": 86950306.0,
1616
+ "reward": 1.5982143580913544,
1617
+ "reward_std": 0.19760245829820633,
1618
+ "rewards/accuracy_reward": 0.6071428619325161,
1619
+ "rewards/format_reward": 0.9910714253783226,
1620
+ "step": 115
1621
+ },
1622
+ {
1623
+ "clip_ratio": 0.0,
1624
+ "completion_length": 681.9843978881836,
1625
+ "epoch": 1.182509505703422,
1626
+ "grad_norm": 0.10237658023834229,
1627
+ "learning_rate": 4.2884258086335745e-07,
1628
+ "loss": 0.017,
1629
+ "num_tokens": 87700604.0,
1630
+ "reward": 1.625000074505806,
1631
+ "reward_std": 0.23482281155884266,
1632
+ "rewards/accuracy_reward": 0.6328124962747097,
1633
+ "rewards/format_reward": 0.9921875,
1634
+ "step": 116
1635
+ },
1636
+ {
1637
+ "clip_ratio": 0.0,
1638
+ "completion_length": 679.974365234375,
1639
+ "epoch": 1.1926489226869454,
1640
+ "grad_norm": 0.09299944341182709,
1641
+ "learning_rate": 4.2002025009206987e-07,
1642
+ "loss": 0.0178,
1643
+ "num_tokens": 88441813.0,
1644
+ "reward": 1.6104911416769028,
1645
+ "reward_std": 0.19839700311422348,
1646
+ "rewards/accuracy_reward": 0.6171875037252903,
1647
+ "rewards/format_reward": 0.9933035597205162,
1648
+ "step": 117
1649
+ },
1650
+ {
1651
+ "clip_ratio": 0.0,
1652
+ "completion_length": 686.677490234375,
1653
+ "epoch": 1.202788339670469,
1654
+ "grad_norm": 0.16542084515094757,
1655
+ "learning_rate": 4.1122340187284845e-07,
1656
+ "loss": 0.0392,
1657
+ "num_tokens": 89179364.0,
1658
+ "reward": 1.609375074505806,
1659
+ "reward_std": 0.2622632719576359,
1660
+ "rewards/accuracy_reward": 0.6183035597205162,
1661
+ "rewards/format_reward": 0.991071417927742,
1662
+ "step": 118
1663
+ },
1664
+ {
1665
+ "clip_ratio": 0.0,
1666
+ "completion_length": 694.8125228881836,
1667
+ "epoch": 1.2129277566539924,
1668
+ "grad_norm": 0.11430428177118301,
1669
+ "learning_rate": 4.0245483899193586e-07,
1670
+ "loss": 0.0139,
1671
+ "num_tokens": 89929292.0,
1672
+ "reward": 1.6049107760190964,
1673
+ "reward_std": 0.22252802923321724,
1674
+ "rewards/accuracy_reward": 0.610491082072258,
1675
+ "rewards/format_reward": 0.9944196417927742,
1676
+ "step": 119
1677
+ },
1678
+ {
1679
+ "clip_ratio": 0.0,
1680
+ "completion_length": 684.7466812133789,
1681
+ "epoch": 1.2230671736375158,
1682
+ "grad_norm": 0.08012707531452179,
1683
+ "learning_rate": 3.937173552235116e-07,
1684
+ "loss": 0.0213,
1685
+ "num_tokens": 90675633.0,
1686
+ "reward": 1.5915179401636124,
1687
+ "reward_std": 0.16910855192691088,
1688
+ "rewards/accuracy_reward": 0.6037946492433548,
1689
+ "rewards/format_reward": 0.9877232164144516,
1690
+ "step": 120
1691
+ },
1692
+ {
1693
+ "clip_ratio": 0.0,
1694
+ "completion_length": 690.8917694091797,
1695
+ "epoch": 1.2332065906210392,
1696
+ "grad_norm": 0.10144107788801193,
1697
+ "learning_rate": 3.850137344395598e-07,
1698
+ "loss": 0.0521,
1699
+ "num_tokens": 91417888.0,
1700
+ "reward": 1.6227679550647736,
1701
+ "reward_std": 0.22748247906565666,
1702
+ "rewards/accuracy_reward": 0.6350446417927742,
1703
+ "rewards/format_reward": 0.9877232164144516,
1704
+ "step": 121
1705
+ },
1706
+ {
1707
+ "clip_ratio": 0.0,
1708
+ "completion_length": 754.5803909301758,
1709
+ "epoch": 1.2433460076045628,
1710
+ "grad_norm": 0.09234697371721268,
1711
+ "learning_rate": 3.763467497228922e-07,
1712
+ "loss": 0.0348,
1713
+ "num_tokens": 92232024.0,
1714
+ "reward": 1.5513393580913544,
1715
+ "reward_std": 0.22778411209583282,
1716
+ "rewards/accuracy_reward": 0.5725446529686451,
1717
+ "rewards/format_reward": 0.9787946343421936,
1718
+ "step": 122
1719
+ },
1720
+ {
1721
+ "clip_ratio": 0.0,
1722
+ "completion_length": 718.1328506469727,
1723
+ "epoch": 1.2534854245880862,
1724
+ "grad_norm": 0.09252262860536575,
1725
+ "learning_rate": 3.677191624836106e-07,
1726
+ "loss": 0.0375,
1727
+ "num_tokens": 92997183.0,
1728
+ "reward": 1.6417411267757416,
1729
+ "reward_std": 0.2288501374423504,
1730
+ "rewards/accuracy_reward": 0.651785708963871,
1731
+ "rewards/format_reward": 0.9899553433060646,
1732
+ "step": 123
1733
+ },
1734
+ {
1735
+ "clip_ratio": 0.0,
1736
+ "completion_length": 738.9018173217773,
1737
+ "epoch": 1.2636248415716096,
1738
+ "grad_norm": 0.09162338823080063,
1739
+ "learning_rate": 3.591337215792851e-07,
1740
+ "loss": 0.0228,
1741
+ "num_tokens": 93787407.0,
1742
+ "reward": 1.6037947237491608,
1743
+ "reward_std": 0.22520777396857738,
1744
+ "rewards/accuracy_reward": 0.6149553656578064,
1745
+ "rewards/format_reward": 0.9888392686843872,
1746
+ "step": 124
1747
+ },
1748
+ {
1749
+ "clip_ratio": 0.0,
1750
+ "completion_length": 709.0446853637695,
1751
+ "epoch": 1.2737642585551332,
1752
+ "grad_norm": 0.09890223294496536,
1753
+ "learning_rate": 3.505931624391355e-07,
1754
+ "loss": 0.0222,
1755
+ "num_tokens": 94540975.0,
1756
+ "reward": 1.6272322088479996,
1757
+ "reward_std": 0.25062085315585136,
1758
+ "rewards/accuracy_reward": 0.6316964328289032,
1759
+ "rewards/format_reward": 0.995535708963871,
1760
+ "step": 125
1761
+ },
1762
+ {
1763
+ "clip_ratio": 0.0,
1764
+ "completion_length": 695.0937805175781,
1765
+ "epoch": 1.2839036755386566,
1766
+ "grad_norm": 0.09244479238986969,
1767
+ "learning_rate": 3.421002061924876e-07,
1768
+ "loss": 0.0234,
1769
+ "num_tokens": 95290643.0,
1770
+ "reward": 1.6171875894069672,
1771
+ "reward_std": 0.23021192848682404,
1772
+ "rewards/accuracy_reward": 0.6350446417927742,
1773
+ "rewards/format_reward": 0.9821428507566452,
1774
+ "step": 126
1775
+ },
1776
+ {
1777
+ "clip_ratio": 0.0,
1778
+ "completion_length": 705.2265853881836,
1779
+ "epoch": 1.29404309252218,
1780
+ "grad_norm": 0.08838233351707458,
1781
+ "learning_rate": 3.3365755880178807e-07,
1782
+ "loss": 0.0274,
1783
+ "num_tokens": 96044230.0,
1784
+ "reward": 1.6517858058214188,
1785
+ "reward_std": 0.19274440594017506,
1786
+ "rewards/accuracy_reward": 0.6629464291036129,
1787
+ "rewards/format_reward": 0.9888392761349678,
1788
+ "step": 127
1789
+ },
1790
+ {
1791
+ "clip_ratio": 0.0,
1792
+ "completion_length": 730.522346496582,
1793
+ "epoch": 1.3041825095057034,
1794
+ "grad_norm": 0.10294985771179199,
1795
+ "learning_rate": 3.2526791020045087e-07,
1796
+ "loss": 0.0338,
1797
+ "num_tokens": 96831378.0,
1798
+ "reward": 1.5580357909202576,
1799
+ "reward_std": 0.2531749904155731,
1800
+ "rewards/accuracy_reward": 0.5758928544819355,
1801
+ "rewards/format_reward": 0.9821428507566452,
1802
+ "step": 128
1803
+ },
1804
+ {
1805
+ "clip_ratio": 0.0,
1806
+ "completion_length": 683.2656555175781,
1807
+ "epoch": 1.3143219264892267,
1808
+ "grad_norm": 0.09631213545799255,
1809
+ "learning_rate": 3.169339334358104e-07,
1810
+ "loss": 0.019,
1811
+ "num_tokens": 97576928.0,
1812
+ "reward": 1.6283482909202576,
1813
+ "reward_std": 0.22065912000834942,
1814
+ "rewards/accuracy_reward": 0.6372767835855484,
1815
+ "rewards/format_reward": 0.9910714328289032,
1816
+ "step": 129
1817
+ },
1818
+ {
1819
+ "clip_ratio": 0.0,
1820
+ "completion_length": 700.1573944091797,
1821
+ "epoch": 1.3244613434727504,
1822
+ "grad_norm": 0.09407830238342285,
1823
+ "learning_rate": 3.086582838174551e-07,
1824
+ "loss": 0.0196,
1825
+ "num_tokens": 98336757.0,
1826
+ "reward": 1.6116072237491608,
1827
+ "reward_std": 0.20910983718931675,
1828
+ "rewards/accuracy_reward": 0.6183035671710968,
1829
+ "rewards/format_reward": 0.9933035597205162,
1830
+ "step": 130
1831
+ },
1832
+ {
1833
+ "clip_ratio": 0.0,
1834
+ "completion_length": 694.3426666259766,
1835
+ "epoch": 1.3346007604562737,
1836
+ "grad_norm": 0.10380922257900238,
1837
+ "learning_rate": 3.004435980712129e-07,
1838
+ "loss": 0.0372,
1839
+ "num_tokens": 99082352.0,
1840
+ "reward": 1.5892857611179352,
1841
+ "reward_std": 0.23904000967741013,
1842
+ "rewards/accuracy_reward": 0.6026785783469677,
1843
+ "rewards/format_reward": 0.9866071343421936,
1844
+ "step": 131
1845
+ },
1846
+ {
1847
+ "clip_ratio": 0.0,
1848
+ "completion_length": 689.7589569091797,
1849
+ "epoch": 1.3447401774397973,
1850
+ "grad_norm": 0.08147790282964706,
1851
+ "learning_rate": 2.922924934990568e-07,
1852
+ "loss": 0.0384,
1853
+ "num_tokens": 99825528.0,
1854
+ "reward": 1.6372768580913544,
1855
+ "reward_std": 0.19569051638245583,
1856
+ "rewards/accuracy_reward": 0.645089291036129,
1857
+ "rewards/format_reward": 0.9921874925494194,
1858
+ "step": 132
1859
+ },
1860
+ {
1861
+ "clip_ratio": 0.0,
1862
+ "completion_length": 652.4018096923828,
1863
+ "epoch": 1.3548795944233207,
1864
+ "grad_norm": 0.10540962219238281,
1865
+ "learning_rate": 2.8420756714519954e-07,
1866
+ "loss": 0.0049,
1867
+ "num_tokens": 100554384.0,
1868
+ "reward": 1.6138393431901932,
1869
+ "reward_std": 0.2248824257403612,
1870
+ "rewards/accuracy_reward": 0.6183035708963871,
1871
+ "rewards/format_reward": 0.995535708963871,
1872
+ "step": 133
1873
+ },
1874
+ {
1875
+ "clip_ratio": 0.0,
1876
+ "completion_length": 691.8192138671875,
1877
+ "epoch": 1.3650190114068441,
1878
+ "grad_norm": 0.10231253504753113,
1879
+ "learning_rate": 2.7619139496864376e-07,
1880
+ "loss": 0.0299,
1881
+ "num_tokens": 101310238.0,
1882
+ "reward": 1.5345982760190964,
1883
+ "reward_std": 0.22884078323841095,
1884
+ "rewards/accuracy_reward": 0.543526791036129,
1885
+ "rewards/format_reward": 0.991071417927742,
1886
+ "step": 134
1887
+ },
1888
+ {
1889
+ "clip_ratio": 0.0,
1890
+ "completion_length": 654.686408996582,
1891
+ "epoch": 1.3751584283903675,
1892
+ "grad_norm": 0.10083113610744476,
1893
+ "learning_rate": 2.6824653102244727e-07,
1894
+ "loss": 0.0201,
1895
+ "num_tokens": 102030909.0,
1896
+ "reward": 1.6171875596046448,
1897
+ "reward_std": 0.22183530405163765,
1898
+ "rewards/accuracy_reward": 0.6227678582072258,
1899
+ "rewards/format_reward": 0.9944196343421936,
1900
+ "step": 135
1901
+ },
1902
+ {
1903
+ "clip_ratio": 0.0,
1904
+ "completion_length": 673.8951187133789,
1905
+ "epoch": 1.385297845373891,
1906
+ "grad_norm": 0.10060420632362366,
1907
+ "learning_rate": 2.603755066399718e-07,
1908
+ "loss": 0.0253,
1909
+ "num_tokens": 102764799.0,
1910
+ "reward": 1.6138393431901932,
1911
+ "reward_std": 0.2160534653812647,
1912
+ "rewards/accuracy_reward": 0.620535708963871,
1913
+ "rewards/format_reward": 0.9933035671710968,
1914
+ "step": 136
1915
+ },
1916
+ {
1917
+ "clip_ratio": 0.0,
1918
+ "completion_length": 719.1886596679688,
1919
+ "epoch": 1.3954372623574145,
1920
+ "grad_norm": 0.08712632954120636,
1921
+ "learning_rate": 2.5258082962836614e-07,
1922
+ "loss": 0.0291,
1923
+ "num_tokens": 103543752.0,
1924
+ "reward": 1.502232238650322,
1925
+ "reward_std": 0.17017750442028046,
1926
+ "rewards/accuracy_reward": 0.5100446492433548,
1927
+ "rewards/format_reward": 0.9921874850988388,
1928
+ "step": 137
1929
+ },
1930
+ {
1931
+ "clip_ratio": 0.0,
1932
+ "completion_length": 696.5636520385742,
1933
+ "epoch": 1.4055766793409379,
1934
+ "grad_norm": 0.20818105340003967,
1935
+ "learning_rate": 2.4486498346955023e-07,
1936
+ "loss": 0.0298,
1937
+ "num_tokens": 104290705.0,
1938
+ "reward": 1.6026786416769028,
1939
+ "reward_std": 0.24658331461250782,
1940
+ "rewards/accuracy_reward": 0.6104910746216774,
1941
+ "rewards/format_reward": 0.9921874925494194,
1942
+ "step": 138
1943
+ },
1944
+ {
1945
+ "clip_ratio": 0.0,
1946
+ "completion_length": 707.4475784301758,
1947
+ "epoch": 1.4157160963244613,
1948
+ "grad_norm": 0.09881385415792465,
1949
+ "learning_rate": 2.372304265289436e-07,
1950
+ "loss": 0.0172,
1951
+ "num_tokens": 105048874.0,
1952
+ "reward": 1.56026791036129,
1953
+ "reward_std": 0.24689678102731705,
1954
+ "rewards/accuracy_reward": 0.5703125037252903,
1955
+ "rewards/format_reward": 0.9899553582072258,
1956
+ "step": 139
1957
+ },
1958
+ {
1959
+ "clip_ratio": 0.0,
1960
+ "completion_length": 685.3381958007812,
1961
+ "epoch": 1.4258555133079849,
1962
+ "grad_norm": 0.0895194411277771,
1963
+ "learning_rate": 2.2967959127220137e-07,
1964
+ "loss": 0.0223,
1965
+ "num_tokens": 105784897.0,
1966
+ "reward": 1.6450893580913544,
1967
+ "reward_std": 0.20549658872187138,
1968
+ "rewards/accuracy_reward": 0.6551339253783226,
1969
+ "rewards/format_reward": 0.9899553582072258,
1970
+ "step": 140
1971
+ },
1972
+ {
1973
+ "clip_ratio": 0.0,
1974
+ "completion_length": 710.3538131713867,
1975
+ "epoch": 1.4359949302915083,
1976
+ "grad_norm": 0.10867197811603546,
1977
+ "learning_rate": 2.2221488349019902e-07,
1978
+ "loss": 0.036,
1979
+ "num_tokens": 106569734.0,
1980
+ "reward": 1.5904018580913544,
1981
+ "reward_std": 0.25699869357049465,
1982
+ "rewards/accuracy_reward": 0.5993303544819355,
1983
+ "rewards/format_reward": 0.9910714253783226,
1984
+ "step": 141
1985
+ },
1986
+ {
1987
+ "clip_ratio": 0.0,
1988
+ "completion_length": 672.2600936889648,
1989
+ "epoch": 1.4461343472750317,
1990
+ "grad_norm": 0.09679386019706726,
1991
+ "learning_rate": 2.1483868153251788e-07,
1992
+ "loss": 0.0437,
1993
+ "num_tokens": 107303623.0,
1994
+ "reward": 1.6272322237491608,
1995
+ "reward_std": 0.22507693991065025,
1996
+ "rewards/accuracy_reward": 0.6484374925494194,
1997
+ "rewards/format_reward": 0.9787946343421936,
1998
+ "step": 142
1999
+ },
2000
+ {
2001
+ "clip_ratio": 0.0,
2002
+ "completion_length": 721.3147583007812,
2003
+ "epoch": 1.456273764258555,
2004
+ "grad_norm": 0.09544187784194946,
2005
+ "learning_rate": 2.0755333554967346e-07,
2006
+ "loss": 0.0345,
2007
+ "num_tokens": 108087121.0,
2008
+ "reward": 1.5524554252624512,
2009
+ "reward_std": 0.2506922725588083,
2010
+ "rewards/accuracy_reward": 0.5658482275903225,
2011
+ "rewards/format_reward": 0.9866071417927742,
2012
+ "step": 143
2013
+ },
2014
+ {
2015
+ "clip_ratio": 0.0,
2016
+ "completion_length": 728.9341812133789,
2017
+ "epoch": 1.4664131812420786,
2018
+ "grad_norm": 0.10036015510559082,
2019
+ "learning_rate": 2.0036116674432652e-07,
2020
+ "loss": 0.0161,
2021
+ "num_tokens": 108876694.0,
2022
+ "reward": 1.5703125447034836,
2023
+ "reward_std": 0.23400406539440155,
2024
+ "rewards/accuracy_reward": 0.5803571455180645,
2025
+ "rewards/format_reward": 0.9899553507566452,
2026
+ "step": 144
2027
+ },
2028
+ {
2029
+ "clip_ratio": 0.0,
2030
+ "completion_length": 633.6451225280762,
2031
+ "epoch": 1.476552598225602,
2032
+ "grad_norm": 0.10225114226341248,
2033
+ "learning_rate": 1.9326446663172035e-07,
2034
+ "loss": 0.0215,
2035
+ "num_tokens": 109575920.0,
2036
+ "reward": 1.6830357909202576,
2037
+ "reward_std": 0.23387996666133404,
2038
+ "rewards/accuracy_reward": 0.6863839328289032,
2039
+ "rewards/format_reward": 0.9966517761349678,
2040
+ "step": 145
2041
+ },
2042
+ {
2043
+ "clip_ratio": 0.0,
2044
+ "completion_length": 653.1841735839844,
2045
+ "epoch": 1.4866920152091254,
2046
+ "grad_norm": 1.6936026811599731,
2047
+ "learning_rate": 1.8626549630957395e-07,
2048
+ "loss": 0.028,
2049
+ "num_tokens": 110289317.0,
2050
+ "reward": 1.6517858058214188,
2051
+ "reward_std": 0.24972818791866302,
2052
+ "rewards/accuracy_reward": 0.6607142947614193,
2053
+ "rewards/format_reward": 0.9910714253783226,
2054
+ "step": 146
2055
+ },
2056
+ {
2057
+ "clip_ratio": 0.0,
2058
+ "completion_length": 688.7846298217773,
2059
+ "epoch": 1.496831432192649,
2060
+ "grad_norm": 0.10226480662822723,
2061
+ "learning_rate": 1.7936648573766954e-07,
2062
+ "loss": 0.0531,
2063
+ "num_tokens": 111043180.0,
2064
+ "reward": 1.5881697088479996,
2065
+ "reward_std": 0.2411833181977272,
2066
+ "rewards/accuracy_reward": 0.5993303693830967,
2067
+ "rewards/format_reward": 0.9888392835855484,
2068
+ "step": 147
2069
+ },
2070
+ {
2071
+ "clip_ratio": 0.0,
2072
+ "completion_length": 730.7690048217773,
2073
+ "epoch": 1.5069708491761724,
2074
+ "grad_norm": 0.09249524772167206,
2075
+ "learning_rate": 1.725696330273575e-07,
2076
+ "loss": 0.0272,
2077
+ "num_tokens": 111828293.0,
2078
+ "reward": 1.5714286267757416,
2079
+ "reward_std": 0.2146799899637699,
2080
+ "rewards/accuracy_reward": 0.5803571343421936,
2081
+ "rewards/format_reward": 0.9910714328289032,
2082
+ "step": 148
2083
+ },
2084
+ {
2085
+ "clip_ratio": 0.0,
2086
+ "completion_length": 666.1719055175781,
2087
+ "epoch": 1.5171102661596958,
2088
+ "grad_norm": 0.09008808434009552,
2089
+ "learning_rate": 1.65877103741212e-07,
2090
+ "loss": 0.018,
2091
+ "num_tokens": 112549911.0,
2092
+ "reward": 1.7042411416769028,
2093
+ "reward_std": 0.18795600719749928,
2094
+ "rewards/accuracy_reward": 0.706473208963871,
2095
+ "rewards/format_reward": 0.9977678507566452,
2096
+ "step": 149
2097
+ },
2098
+ {
2099
+ "clip_ratio": 0.0,
2100
+ "completion_length": 704.3828430175781,
2101
+ "epoch": 1.5272496831432192,
2102
+ "grad_norm": 0.09881480038166046,
2103
+ "learning_rate": 1.592910302030544e-07,
2104
+ "loss": 0.0242,
2105
+ "num_tokens": 113300342.0,
2106
+ "reward": 1.5926340222358704,
2107
+ "reward_std": 0.2511101048439741,
2108
+ "rewards/accuracy_reward": 0.6015624888241291,
2109
+ "rewards/format_reward": 0.9910714253783226,
2110
+ "step": 150
2111
+ },
2112
+ {
2113
+ "clip_ratio": 0.0,
2114
+ "completion_length": 697.3582992553711,
2115
+ "epoch": 1.5373891001267426,
2116
+ "grad_norm": 0.11632478982210159,
2117
+ "learning_rate": 1.5281351081856976e-07,
2118
+ "loss": 0.0248,
2119
+ "num_tokens": 114056431.0,
2120
+ "reward": 1.6015625596046448,
2121
+ "reward_std": 0.20664218626916409,
2122
+ "rewards/accuracy_reward": 0.6127232201397419,
2123
+ "rewards/format_reward": 0.9888392835855484,
2124
+ "step": 151
2125
+ },
2126
+ {
2127
+ "clip_ratio": 0.0,
2128
+ "completion_length": 680.4286041259766,
2129
+ "epoch": 1.5475285171102662,
2130
+ "grad_norm": 0.09516316652297974,
2131
+ "learning_rate": 1.4644660940672627e-07,
2132
+ "loss": 0.0338,
2133
+ "num_tokens": 114791687.0,
2134
+ "reward": 1.648437574505806,
2135
+ "reward_std": 0.19783248752355576,
2136
+ "rewards/accuracy_reward": 0.6584821343421936,
2137
+ "rewards/format_reward": 0.9899553507566452,
2138
+ "step": 152
2139
+ },
2140
+ {
2141
+ "clip_ratio": 0.0,
2142
+ "completion_length": 669.5949096679688,
2143
+ "epoch": 1.5576679340937896,
2144
+ "grad_norm": 0.09097380936145782,
2145
+ "learning_rate": 1.4019235454221856e-07,
2146
+ "loss": 0.0149,
2147
+ "num_tokens": 115525716.0,
2148
+ "reward": 1.6104911714792252,
2149
+ "reward_std": 0.20410908479243517,
2150
+ "rewards/accuracy_reward": 0.6171875,
2151
+ "rewards/format_reward": 0.9933035746216774,
2152
+ "step": 153
2153
+ },
2154
+ {
2155
+ "clip_ratio": 0.0,
2156
+ "completion_length": 677.7366333007812,
2157
+ "epoch": 1.5678073510773132,
2158
+ "grad_norm": 0.09991227835416794,
2159
+ "learning_rate": 1.3405273890913737e-07,
2160
+ "loss": 0.0334,
2161
+ "num_tokens": 116254984.0,
2162
+ "reward": 1.5892857760190964,
2163
+ "reward_std": 0.2337410505861044,
2164
+ "rewards/accuracy_reward": 0.6004464365541935,
2165
+ "rewards/format_reward": 0.9888392761349678,
2166
+ "step": 154
2167
+ },
2168
+ {
2169
+ "clip_ratio": 0.0,
2170
+ "completion_length": 691.3571853637695,
2171
+ "epoch": 1.5779467680608366,
2172
+ "grad_norm": 0.18712472915649414,
2173
+ "learning_rate": 1.280297186660752e-07,
2174
+ "loss": 0.0335,
2175
+ "num_tokens": 116992816.0,
2176
+ "reward": 1.6595982760190964,
2177
+ "reward_std": 0.2452637990936637,
2178
+ "rewards/accuracy_reward": 0.6707589328289032,
2179
+ "rewards/format_reward": 0.9888392835855484,
2180
+ "step": 155
2181
+ },
2182
+ {
2183
+ "clip_ratio": 0.0,
2184
+ "completion_length": 758.325927734375,
2185
+ "epoch": 1.58808618504436,
2186
+ "grad_norm": 0.09767128527164459,
2187
+ "learning_rate": 1.2212521282287093e-07,
2188
+ "loss": 0.0328,
2189
+ "num_tokens": 117792204.0,
2190
+ "reward": 1.5558036416769028,
2191
+ "reward_std": 0.23506544344127178,
2192
+ "rewards/accuracy_reward": 0.5647321492433548,
2193
+ "rewards/format_reward": 0.9910714104771614,
2194
+ "step": 156
2195
+ },
2196
+ {
2197
+ "clip_ratio": 0.0,
2198
+ "completion_length": 704.4464721679688,
2199
+ "epoch": 1.5982256020278833,
2200
+ "grad_norm": 0.08936415612697601,
2201
+ "learning_rate": 1.1634110262918717e-07,
2202
+ "loss": 0.0365,
2203
+ "num_tokens": 118567396.0,
2204
+ "reward": 1.5758929252624512,
2205
+ "reward_std": 0.21123607829213142,
2206
+ "rewards/accuracy_reward": 0.5970982126891613,
2207
+ "rewards/format_reward": 0.9787946417927742,
2208
+ "step": 157
2209
+ },
2210
+ {
2211
+ "clip_ratio": 0.0,
2212
+ "completion_length": 721.3761444091797,
2213
+ "epoch": 1.6083650190114067,
2214
+ "grad_norm": 0.9475556015968323,
2215
+ "learning_rate": 1.1067923097512255e-07,
2216
+ "loss": 0.051,
2217
+ "num_tokens": 119344461.0,
2218
+ "reward": 1.5680804252624512,
2219
+ "reward_std": 0.24346480891108513,
2220
+ "rewards/accuracy_reward": 0.5848214253783226,
2221
+ "rewards/format_reward": 0.9832589253783226,
2222
+ "step": 158
2223
+ },
2224
+ {
2225
+ "clip_ratio": 0.0,
2226
+ "completion_length": 720.2455749511719,
2227
+ "epoch": 1.6185044359949303,
2228
+ "grad_norm": 0.10232140868902206,
2229
+ "learning_rate": 1.0514140180404202e-07,
2230
+ "loss": 0.0421,
2231
+ "num_tokens": 120118857.0,
2232
+ "reward": 1.5736607909202576,
2233
+ "reward_std": 0.2383702825754881,
2234
+ "rewards/accuracy_reward": 0.5859375,
2235
+ "rewards/format_reward": 0.987723208963871,
2236
+ "step": 159
2237
+ },
2238
+ {
2239
+ "clip_ratio": 0.0,
2240
+ "completion_length": 724.2288284301758,
2241
+ "epoch": 1.6286438529784537,
2242
+ "grad_norm": 0.17790253460407257,
2243
+ "learning_rate": 9.972937953781984e-08,
2244
+ "loss": 0.0345,
2245
+ "num_tokens": 120914734.0,
2246
+ "reward": 1.5167411416769028,
2247
+ "reward_std": 0.22692781873047352,
2248
+ "rewards/accuracy_reward": 0.527901791036129,
2249
+ "rewards/format_reward": 0.9888392835855484,
2250
+ "step": 160
2251
+ },
2252
+ {
2253
+ "clip_ratio": 0.0,
2254
+ "completion_length": 671.4531478881836,
2255
+ "epoch": 1.6387832699619773,
2256
+ "grad_norm": 0.08403746038675308,
2257
+ "learning_rate": 9.444488851467041e-08,
2258
+ "loss": 0.0196,
2259
+ "num_tokens": 121634028.0,
2260
+ "reward": 1.6439733058214188,
2261
+ "reward_std": 0.18759915605187416,
2262
+ "rewards/accuracy_reward": 0.6517857015132904,
2263
+ "rewards/format_reward": 0.9921874925494194,
2264
+ "step": 161
2265
+ },
2266
+ {
2267
+ "clip_ratio": 0.0,
2268
+ "completion_length": 686.6283798217773,
2269
+ "epoch": 1.6489226869455007,
2270
+ "grad_norm": 0.11062318086624146,
2271
+ "learning_rate": 8.928961243975436e-08,
2272
+ "loss": 0.0421,
2273
+ "num_tokens": 122382103.0,
2274
+ "reward": 1.5970982909202576,
2275
+ "reward_std": 0.22882544435560703,
2276
+ "rewards/accuracy_reward": 0.6119505539536476,
2277
+ "rewards/format_reward": 0.991071417927742,
2278
+ "step": 162
2279
+ },
2280
+ {
2281
+ "clip_ratio": 0.0,
2282
+ "completion_length": 666.947566986084,
2283
+ "epoch": 1.659062103929024,
2284
+ "grad_norm": 0.09584940969944,
2285
+ "learning_rate": 8.426519384872732e-08,
2286
+ "loss": 0.0095,
2287
+ "num_tokens": 123101824.0,
2288
+ "reward": 1.6573661416769028,
2289
+ "reward_std": 0.2075268942862749,
2290
+ "rewards/accuracy_reward": 0.6607142835855484,
2291
+ "rewards/format_reward": 0.9966517835855484,
2292
+ "step": 163
2293
+ },
2294
+ {
2295
+ "clip_ratio": 0.0,
2296
+ "completion_length": 697.6964645385742,
2297
+ "epoch": 1.6692015209125475,
2298
+ "grad_norm": 0.10291486978530884,
2299
+ "learning_rate": 7.937323358440934e-08,
2300
+ "loss": 0.0226,
2301
+ "num_tokens": 123857992.0,
2302
+ "reward": 1.6450893431901932,
2303
+ "reward_std": 0.25325831212103367,
2304
+ "rewards/accuracy_reward": 0.6506696417927742,
2305
+ "rewards/format_reward": 0.9944196343421936,
2306
+ "step": 164
2307
+ },
2308
+ {
2309
+ "clip_ratio": 0.0,
2310
+ "completion_length": 691.475471496582,
2311
+ "epoch": 1.6793409378960709,
2312
+ "grad_norm": 0.09860701858997345,
2313
+ "learning_rate": 7.461529028673463e-08,
2314
+ "loss": 0.0197,
2315
+ "num_tokens": 124610362.0,
2316
+ "reward": 1.6473215073347092,
2317
+ "reward_std": 0.2233567014336586,
2318
+ "rewards/accuracy_reward": 0.652901791036129,
2319
+ "rewards/format_reward": 0.9944196343421936,
2320
+ "step": 165
2321
+ },
2322
+ {
2323
+ "clip_ratio": 0.0,
2324
+ "completion_length": 712.0446701049805,
2325
+ "epoch": 1.6894803548795945,
2326
+ "grad_norm": 0.10766546428203583,
2327
+ "learning_rate": 6.999287989614971e-08,
2328
+ "loss": 0.0403,
2329
+ "num_tokens": 125380794.0,
2330
+ "reward": 1.5881696939468384,
2331
+ "reward_std": 0.2603952419012785,
2332
+ "rewards/accuracy_reward": 0.5970982126891613,
2333
+ "rewards/format_reward": 0.9910714253783226,
2334
+ "step": 166
2335
+ },
2336
+ {
2337
+ "clip_ratio": 0.0,
2338
+ "completion_length": 671.4163131713867,
2339
+ "epoch": 1.6996197718631179,
2340
+ "grad_norm": 0.15764014422893524,
2341
+ "learning_rate": 6.550747517061656e-08,
2342
+ "loss": 0.0282,
2343
+ "num_tokens": 126108927.0,
2344
+ "reward": 1.6595982909202576,
2345
+ "reward_std": 0.20449624955654144,
2346
+ "rewards/accuracy_reward": 0.6662946343421936,
2347
+ "rewards/format_reward": 0.9933035671710968,
2348
+ "step": 167
2349
+ },
2350
+ {
2351
+ "clip_ratio": 0.0,
2352
+ "completion_length": 703.9542617797852,
2353
+ "epoch": 1.7097591888466415,
2354
+ "grad_norm": 0.09230643510818481,
2355
+ "learning_rate": 6.116050521637218e-08,
2356
+ "loss": 0.0282,
2357
+ "num_tokens": 126867998.0,
2358
+ "reward": 1.6395090073347092,
2359
+ "reward_std": 0.2097290549427271,
2360
+ "rewards/accuracy_reward": 0.6450892835855484,
2361
+ "rewards/format_reward": 0.9944196343421936,
2362
+ "step": 168
2363
+ },
2364
+ {
2365
+ "clip_ratio": 0.0,
2366
+ "completion_length": 672.6339569091797,
2367
+ "epoch": 1.7198986058301649,
2368
+ "grad_norm": 0.11055152863264084,
2369
+ "learning_rate": 5.6953355032598795e-08,
2370
+ "loss": 0.0277,
2371
+ "num_tokens": 127607750.0,
2372
+ "reward": 1.5736607760190964,
2373
+ "reward_std": 0.2487525064498186,
2374
+ "rewards/accuracy_reward": 0.5881696380674839,
2375
+ "rewards/format_reward": 0.9854910746216774,
2376
+ "step": 169
2377
+ },
2378
+ {
2379
+ "clip_ratio": 0.0,
2380
+ "completion_length": 696.6395416259766,
2381
+ "epoch": 1.7300380228136882,
2382
+ "grad_norm": 0.09632628411054611,
2383
+ "learning_rate": 5.288736507014435e-08,
2384
+ "loss": 0.0203,
2385
+ "num_tokens": 128355939.0,
2386
+ "reward": 1.640625074505806,
2387
+ "reward_std": 0.22130538150668144,
2388
+ "rewards/accuracy_reward": 0.6517857238650322,
2389
+ "rewards/format_reward": 0.988839291036129,
2390
+ "step": 170
2391
+ },
2392
+ {
2393
+ "clip_ratio": 0.0,
2394
+ "completion_length": 665.4888763427734,
2395
+ "epoch": 1.7401774397972116,
2396
+ "grad_norm": 0.185902938246727,
2397
+ "learning_rate": 4.896383080443933e-08,
2398
+ "loss": 0.014,
2399
+ "num_tokens": 129076969.0,
2400
+ "reward": 1.59933041036129,
2401
+ "reward_std": 0.22374757565557957,
2402
+ "rewards/accuracy_reward": 0.6037946306169033,
2403
+ "rewards/format_reward": 0.995535708963871,
2404
+ "step": 171
2405
+ },
2406
+ {
2407
+ "clip_ratio": 0.0,
2408
+ "completion_length": 655.0580596923828,
2409
+ "epoch": 1.750316856780735,
2410
+ "grad_norm": 0.0977133959531784,
2411
+ "learning_rate": 4.518400232274078e-08,
2412
+ "loss": 0.0151,
2413
+ "num_tokens": 129794405.0,
2414
+ "reward": 1.5870536416769028,
2415
+ "reward_std": 0.21878547966480255,
2416
+ "rewards/accuracy_reward": 0.599759615957737,
2417
+ "rewards/format_reward": 0.9933035597205162,
2418
+ "step": 172
2419
+ },
2420
+ {
2421
+ "clip_ratio": 0.0,
2422
+ "completion_length": 660.8594131469727,
2423
+ "epoch": 1.7604562737642584,
2424
+ "grad_norm": 0.10144926607608795,
2425
+ "learning_rate": 4.1549083925840165e-08,
2426
+ "loss": 0.0387,
2427
+ "num_tokens": 130503983.0,
2428
+ "reward": 1.6529018580913544,
2429
+ "reward_std": 0.22397751361131668,
2430
+ "rewards/accuracy_reward": 0.6662946417927742,
2431
+ "rewards/format_reward": 0.9866071417927742,
2432
+ "step": 173
2433
+ },
2434
+ {
2435
+ "clip_ratio": 0.0,
2436
+ "completion_length": 672.0837326049805,
2437
+ "epoch": 1.770595690747782,
2438
+ "grad_norm": 0.09892982989549637,
2439
+ "learning_rate": 3.806023374435663e-08,
2440
+ "loss": 0.0305,
2441
+ "num_tokens": 131234698.0,
2442
+ "reward": 1.6082590222358704,
2443
+ "reward_std": 0.21078570373356342,
2444
+ "rewards/accuracy_reward": 0.6149553582072258,
2445
+ "rewards/format_reward": 0.9933035597205162,
2446
+ "step": 174
2447
+ },
2448
+ {
2449
+ "clip_ratio": 0.0,
2450
+ "completion_length": 700.0781631469727,
2451
+ "epoch": 1.7807351077313056,
2452
+ "grad_norm": 0.08607691526412964,
2453
+ "learning_rate": 3.4718563369743213e-08,
2454
+ "loss": 0.0126,
2455
+ "num_tokens": 131995296.0,
2456
+ "reward": 1.5825893580913544,
2457
+ "reward_std": 0.19891513139009476,
2458
+ "rewards/accuracy_reward": 0.5870535746216774,
2459
+ "rewards/format_reward": 0.9955357164144516,
2460
+ "step": 175
2461
+ },
2462
+ {
2463
+ "clip_ratio": 0.0,
2464
+ "completion_length": 683.560302734375,
2465
+ "epoch": 1.790874524714829,
2466
+ "grad_norm": 0.0931280255317688,
2467
+ "learning_rate": 3.15251375001192e-08,
2468
+ "loss": 0.0266,
2469
+ "num_tokens": 132747870.0,
2470
+ "reward": 1.6540179401636124,
2471
+ "reward_std": 0.2136262021958828,
2472
+ "rewards/accuracy_reward": 0.6662946417927742,
2473
+ "rewards/format_reward": 0.987723208963871,
2474
+ "step": 176
2475
+ },
2476
+ {
2477
+ "clip_ratio": 0.0,
2478
+ "completion_length": 673.4799385070801,
2479
+ "epoch": 1.8010139416983524,
2480
+ "grad_norm": 0.09533898532390594,
2481
+ "learning_rate": 2.8480973601043955e-08,
2482
+ "loss": 0.0372,
2483
+ "num_tokens": 133486116.0,
2484
+ "reward": 1.6417411416769028,
2485
+ "reward_std": 0.18038041703402996,
2486
+ "rewards/accuracy_reward": 0.6529017798602581,
2487
+ "rewards/format_reward": 0.9888392761349678,
2488
+ "step": 177
2489
+ },
2490
+ {
2491
+ "clip_ratio": 0.0,
2492
+ "completion_length": 709.9821853637695,
2493
+ "epoch": 1.8111533586818758,
2494
+ "grad_norm": 0.1904437392950058,
2495
+ "learning_rate": 2.558704158134023e-08,
2496
+ "loss": 0.0261,
2497
+ "num_tokens": 134262284.0,
2498
+ "reward": 1.5825893431901932,
2499
+ "reward_std": 0.22126466780900955,
2500
+ "rewards/accuracy_reward": 0.5937500111758709,
2501
+ "rewards/format_reward": 0.9888392761349678,
2502
+ "step": 178
2503
+ },
2504
+ {
2505
+ "clip_ratio": 0.0,
2506
+ "completion_length": 719.8861846923828,
2507
+ "epoch": 1.8212927756653992,
2508
+ "grad_norm": 0.0858464315533638,
2509
+ "learning_rate": 2.2844263484068093e-08,
2510
+ "loss": 0.0161,
2511
+ "num_tokens": 135034302.0,
2512
+ "reward": 1.5937500447034836,
2513
+ "reward_std": 0.18957781419157982,
2514
+ "rewards/accuracy_reward": 0.6004464216530323,
2515
+ "rewards/format_reward": 0.9933035671710968,
2516
+ "step": 179
2517
+ },
2518
+ {
2519
+ "clip_ratio": 0.0,
2520
+ "completion_length": 670.2042694091797,
2521
+ "epoch": 1.8314321926489225,
2522
+ "grad_norm": 0.09248825907707214,
2523
+ "learning_rate": 2.025351319275137e-08,
2524
+ "loss": 0.0237,
2525
+ "num_tokens": 135765773.0,
2526
+ "reward": 1.6517857909202576,
2527
+ "reward_std": 0.20931300334632397,
2528
+ "rewards/accuracy_reward": 0.6607142835855484,
2529
+ "rewards/format_reward": 0.991071417927742,
2530
+ "step": 180
2531
+ },
2532
+ {
2533
+ "clip_ratio": 0.0,
2534
+ "completion_length": 697.1328506469727,
2535
+ "epoch": 1.8415716096324461,
2536
+ "grad_norm": 0.09593623131513596,
2537
+ "learning_rate": 1.781561615294652e-08,
2538
+ "loss": 0.0136,
2539
+ "num_tokens": 136524868.0,
2540
+ "reward": 1.5881697237491608,
2541
+ "reward_std": 0.23646764643490314,
2542
+ "rewards/accuracy_reward": 0.597098208963871,
2543
+ "rewards/format_reward": 0.9910714253783226,
2544
+ "step": 181
2545
+ },
2546
+ {
2547
+ "clip_ratio": 0.0,
2548
+ "completion_length": 709.1886520385742,
2549
+ "epoch": 1.8517110266159695,
2550
+ "grad_norm": 0.10065972059965134,
2551
+ "learning_rate": 1.553134910924636e-08,
2552
+ "loss": 0.0373,
2553
+ "num_tokens": 137296405.0,
2554
+ "reward": 1.539062574505806,
2555
+ "reward_std": 0.22722396813333035,
2556
+ "rewards/accuracy_reward": 0.5479910708963871,
2557
+ "rewards/format_reward": 0.991071417927742,
2558
+ "step": 182
2559
+ },
2560
+ {
2561
+ "clip_ratio": 0.0,
2562
+ "completion_length": 661.6339569091797,
2563
+ "epoch": 1.8618504435994931,
2564
+ "grad_norm": 0.09599631279706955,
2565
+ "learning_rate": 1.340143985779829e-08,
2566
+ "loss": 0.0283,
2567
+ "num_tokens": 138015989.0,
2568
+ "reward": 1.6540179550647736,
2569
+ "reward_std": 0.18772431463003159,
2570
+ "rewards/accuracy_reward": 0.661830373108387,
2571
+ "rewards/format_reward": 0.9921874925494194,
2572
+ "step": 183
2573
+ },
2574
+ {
2575
+ "clip_ratio": 0.0,
2576
+ "completion_length": 697.3761444091797,
2577
+ "epoch": 1.8719898605830165,
2578
+ "grad_norm": 0.09907021373510361,
2579
+ "learning_rate": 1.1426567014420297e-08,
2580
+ "loss": 0.047,
2581
+ "num_tokens": 138767542.0,
2582
+ "reward": 1.6383929550647736,
2583
+ "reward_std": 0.23151767998933792,
2584
+ "rewards/accuracy_reward": 0.6551339402794838,
2585
+ "rewards/format_reward": 0.9832589104771614,
2586
+ "step": 184
2587
+ },
2588
+ {
2589
+ "clip_ratio": 0.0,
2590
+ "completion_length": 706.0480117797852,
2591
+ "epoch": 1.88212927756654,
2592
+ "grad_norm": 0.09566432982683182,
2593
+ "learning_rate": 9.607359798384784e-09,
2594
+ "loss": 0.0372,
2595
+ "num_tokens": 139515641.0,
2596
+ "reward": 1.592633992433548,
2597
+ "reward_std": 0.2507201302796602,
2598
+ "rewards/accuracy_reward": 0.6015625074505806,
2599
+ "rewards/format_reward": 0.991071417927742,
2600
+ "step": 185
2601
+ },
2602
+ {
2603
+ "clip_ratio": 0.0,
2604
+ "completion_length": 675.1484756469727,
2605
+ "epoch": 1.8922686945500633,
2606
+ "grad_norm": 0.09894827008247375,
2607
+ "learning_rate": 7.944397831941951e-09,
2608
+ "loss": 0.031,
2609
+ "num_tokens": 140267070.0,
2610
+ "reward": 1.5837054252624512,
2611
+ "reward_std": 0.21183509565889835,
2612
+ "rewards/accuracy_reward": 0.5937499962747097,
2613
+ "rewards/format_reward": 0.9899553507566452,
2614
+ "step": 186
2615
+ },
2616
+ {
2617
+ "clip_ratio": 0.0,
2618
+ "completion_length": 740.9620971679688,
2619
+ "epoch": 1.9024081115335867,
2620
+ "grad_norm": 0.1215052530169487,
2621
+ "learning_rate": 6.438210955644452e-09,
2622
+ "loss": 0.0443,
2623
+ "num_tokens": 141073396.0,
2624
+ "reward": 1.5044643580913544,
2625
+ "reward_std": 0.2504011485725641,
2626
+ "rewards/accuracy_reward": 0.5256696455180645,
2627
+ "rewards/format_reward": 0.9787946343421936,
2628
+ "step": 187
2629
+ },
2630
+ {
2631
+ "clip_ratio": 0.0,
2632
+ "completion_length": 691.3471298217773,
2633
+ "epoch": 1.9125475285171103,
2634
+ "grad_norm": 0.08761589974164963,
2635
+ "learning_rate": 5.0892790595336575e-09,
2636
+ "loss": 0.0307,
2637
+ "num_tokens": 141829859.0,
2638
+ "reward": 1.6305804550647736,
2639
+ "reward_std": 0.220117699354887,
2640
+ "rewards/accuracy_reward": 0.6406250037252903,
2641
+ "rewards/format_reward": 0.9899553433060646,
2642
+ "step": 188
2643
+ },
2644
+ {
2645
+ "clip_ratio": 0.0,
2646
+ "completion_length": 687.7377548217773,
2647
+ "epoch": 1.9226869455006337,
2648
+ "grad_norm": 0.10144967585802078,
2649
+ "learning_rate": 3.898031930240797e-09,
2650
+ "loss": 0.03,
2651
+ "num_tokens": 142571072.0,
2652
+ "reward": 1.6037946790456772,
2653
+ "reward_std": 0.22627083584666252,
2654
+ "rewards/accuracy_reward": 0.6138392947614193,
2655
+ "rewards/format_reward": 0.9899553582072258,
2656
+ "step": 189
2657
+ },
2658
+ {
2659
+ "clip_ratio": 0.0,
2660
+ "completion_length": 743.5949020385742,
2661
+ "epoch": 1.9328263624841573,
2662
+ "grad_norm": 0.08726272732019424,
2663
+ "learning_rate": 2.8648491140513264e-09,
2664
+ "loss": 0.0307,
2665
+ "num_tokens": 143363781.0,
2666
+ "reward": 1.5948661267757416,
2667
+ "reward_std": 0.2067482192069292,
2668
+ "rewards/accuracy_reward": 0.6004464328289032,
2669
+ "rewards/format_reward": 0.9944196417927742,
2670
+ "step": 190
2671
+ },
2672
+ {
2673
+ "clip_ratio": 0.0,
2674
+ "completion_length": 631.5368728637695,
2675
+ "epoch": 1.9429657794676807,
2676
+ "grad_norm": 0.09442251920700073,
2677
+ "learning_rate": 1.9900597959770505e-09,
2678
+ "loss": 0.0309,
2679
+ "num_tokens": 144054766.0,
2680
+ "reward": 1.7064732909202576,
2681
+ "reward_std": 0.1611343901604414,
2682
+ "rewards/accuracy_reward": 0.7142857164144516,
2683
+ "rewards/format_reward": 0.9921874925494194,
2684
+ "step": 191
2685
+ },
2686
+ {
2687
+ "clip_ratio": 0.0,
2688
+ "completion_length": 702.8839721679688,
2689
+ "epoch": 1.953105196451204,
2690
+ "grad_norm": 0.09530234336853027,
2691
+ "learning_rate": 1.2739426948732424e-09,
2692
+ "loss": 0.0236,
2693
+ "num_tokens": 144818518.0,
2694
+ "reward": 1.6216518580913544,
2695
+ "reward_std": 0.22148274257779121,
2696
+ "rewards/accuracy_reward": 0.6283482164144516,
2697
+ "rewards/format_reward": 0.9933035597205162,
2698
+ "step": 192
2699
+ },
2700
+ {
2701
+ "clip_ratio": 0.0,
2702
+ "completion_length": 705.2678833007812,
2703
+ "epoch": 1.9632446134347274,
2704
+ "grad_norm": 0.11064296960830688,
2705
+ "learning_rate": 7.16725974635568e-10,
2706
+ "loss": 0.0467,
2707
+ "num_tokens": 145576278.0,
2708
+ "reward": 1.6149554401636124,
2709
+ "reward_std": 0.2686098553240299,
2710
+ "rewards/accuracy_reward": 0.6305803507566452,
2711
+ "rewards/format_reward": 0.9843749850988388,
2712
+ "step": 193
2713
+ },
2714
+ {
2715
+ "clip_ratio": 0.0,
2716
+ "completion_length": 709.090446472168,
2717
+ "epoch": 1.9733840304182508,
2718
+ "grad_norm": 0.1301695555448532,
2719
+ "learning_rate": 3.185871715041255e-10,
2720
+ "loss": 0.0256,
2721
+ "num_tokens": 146344471.0,
2722
+ "reward": 1.5904018580913544,
2723
+ "reward_std": 0.20222523249685764,
2724
+ "rewards/accuracy_reward": 0.5959821455180645,
2725
+ "rewards/format_reward": 0.9944196343421936,
2726
+ "step": 194
2727
+ },
2728
+ {
2729
+ "clip_ratio": 0.0,
2730
+ "completion_length": 649.944221496582,
2731
+ "epoch": 1.9835234474017744,
2732
+ "grad_norm": 0.10688284784555435,
2733
+ "learning_rate": 7.96531374983589e-11,
2734
+ "loss": 0.0264,
2735
+ "num_tokens": 147047301.0,
2736
+ "reward": 1.6361607760190964,
2737
+ "reward_std": 0.23337006010115147,
2738
+ "rewards/accuracy_reward": 0.6417410708963871,
2739
+ "rewards/format_reward": 0.9944196343421936,
2740
+ "step": 195
2741
+ },
2742
+ {
2743
+ "clip_ratio": 0.0,
2744
+ "completion_length": 693.314453125,
2745
+ "epoch": 1.9936628643852978,
2746
+ "grad_norm": 0.09134732186794281,
2747
+ "learning_rate": 0.0,
2748
+ "loss": 0.0173,
2749
+ "num_tokens": 147801661.0,
2750
+ "reward": 1.56026791036129,
2751
+ "reward_std": 0.21255235373973846,
2752
+ "rewards/accuracy_reward": 0.565848208963871,
2753
+ "rewards/format_reward": 0.9944196343421936,
2754
+ "step": 196
2755
+ },
2756
+ {
2757
+ "epoch": 1.9936628643852978,
2758
+ "step": 196,
2759
+ "total_flos": 0.0,
2760
+ "train_loss": 0.02676454978019156,
2761
+ "train_runtime": 31339.2496,
2762
+ "train_samples_per_second": 0.705,
2763
+ "train_steps_per_second": 0.006
2764
+ }
2765
+ ],
2766
+ "logging_steps": 1,
2767
+ "max_steps": 196,
2768
+ "num_input_tokens_seen": 0,
2769
+ "num_train_epochs": 2,
2770
+ "save_steps": 500,
2771
+ "stateful_callbacks": {
2772
+ "TrainerControl": {
2773
+ "args": {
2774
+ "should_epoch_stop": false,
2775
+ "should_evaluate": false,
2776
+ "should_log": false,
2777
+ "should_save": true,
2778
+ "should_training_stop": true
2779
+ },
2780
+ "attributes": {}
2781
+ }
2782
+ },
2783
+ "total_flos": 0.0,
2784
+ "train_batch_size": 16,
2785
+ "trial_name": null,
2786
+ "trial_params": null
2787
+ }