AaronHuangWei committed on
Commit b8c3166 · verified · 1 Parent(s): 09557c1

End of training

Files changed (2)
  1. README.md +3 -1
  2. config.json +1 -1
README.md CHANGED
@@ -1,16 +1,18 @@
 ---
+datasets: open-r1/OpenR1-Math-220k
 library_name: transformers
 model_name: Qwen2.5-7B-GRPO
 tags:
 - generated_from_trainer
 - grpo
 - trl
+- open-r1
 licence: license
 ---

 # Model Card for Qwen2.5-7B-GRPO

-This model is a fine-tuned version of [None](https://huggingface.co/None).
+This model is a fine-tuned version of [None](https://huggingface.co/None) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
 It has been trained using [TRL](https://github.com/huggingface/trl).

 ## Quick start
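The diff cuts off at the `## Quick start` heading, so the card's own usage snippet is not visible here. As a rough sketch of how a TRL-trained causal LM like this is typically loaded with the transformers pipeline, assuming the checkpoint is hosted under the hypothetical repo id `AaronHuangWei/Qwen2.5-7B-GRPO`:

```python
from transformers import pipeline

# Hypothetical repo id for this checkpoint; substitute the actual Hub id.
generator = pipeline(
    "text-generation",
    model="AaronHuangWei/Qwen2.5-7B-GRPO",
    device_map="auto",
)

# Chat-style input; the pipeline applies the model's chat template.
messages = [{"role": "user", "content": "What is 12 * 13? Show your reasoning."}]
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"])
```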
config.json CHANGED
@@ -22,7 +22,7 @@
 "tie_word_embeddings": false,
 "torch_dtype": "bfloat16",
 "transformers_version": "4.52.4",
-"use_cache": false,
+"use_cache": true,
 "use_mrope": false,
 "use_sliding_window": false,
 "vocab_size": 152064