wangyichen25 committed
Commit 6ee368b · verified · 1 parent: 5a866c8

Training in progress, step 160, checkpoint

.gitattributes CHANGED
@@ -47,3 +47,4 @@ checkpoint-481/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-40/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-80/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-120/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-160/tokenizer.json filter=lfs diff=lfs merge=lfs -text
checkpoint-160/README.md ADDED
@@ -0,0 +1,209 @@
+ ---
+ base_model: google/medgemma-4b-it
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:google/medgemma-4b-it
+ - lora
+ - sft
+ - transformers
+ - trl
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.16.0
checkpoint-160/adapter_config.json ADDED
@@ -0,0 +1,47 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "google/medgemma-4b-it",
+ "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": [
+ "lm_head",
+ "embed_tokens"
+ ],
+ "peft_type": "LORA",
+ "qalora_group_size": 16,
+ "r": 16,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "o_proj",
+ "fc1",
+ "gate_proj",
+ "up_proj",
+ "q_proj",
+ "v_proj",
+ "down_proj",
+ "k_proj",
+ "out_proj",
+ "fc2"
+ ],
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
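For reference, the config above describes a rank-16 LoRA adapter (alpha 16, dropout 0.05) over the attention and MLP projections of google/medgemma-4b-it, with `lm_head` and `embed_tokens` additionally saved in full. A minimal loading sketch, assuming a local clone of this repo and the multimodal auto class (neither is part of the commit itself):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForImageTextToText, AutoProcessor

CHECKPOINT = "checkpoint-160"  # assumed local path to this checkpoint

# medgemma-4b-it is an image-text model, so the multimodal auto class is
# used here; the adapter's task_type is CAUSAL_LM.
base = AutoModelForImageTextToText.from_pretrained(
    "google/medgemma-4b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, CHECKPOINT)    # applies the LoRA weights
processor = AutoProcessor.from_pretrained(CHECKPOINT)  # tokenizer + image preprocessing
model.eval()
```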
checkpoint-160/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e76d0c80433ea8d0337c233048ef0e05dcf9b7b2e2b9f0f5ddd64c44b0207f64
+ size 2839126480
checkpoint-160/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "<image_soft_token>": 262144
+ }
checkpoint-160/chat_template.jinja ADDED
@@ -0,0 +1,47 @@
+ {{ bos_token }}
+ {%- if messages[0]['role'] == 'system' -%}
+ {%- if messages[0]['content'] is string -%}
+ {%- set first_user_prefix = messages[0]['content'] + '
+
+ ' -%}
+ {%- else -%}
+ {%- set first_user_prefix = messages[0]['content'][0]['text'] + '
+
+ ' -%}
+ {%- endif -%}
+ {%- set loop_messages = messages[1:] -%}
+ {%- else -%}
+ {%- set first_user_prefix = "" -%}
+ {%- set loop_messages = messages -%}
+ {%- endif -%}
+ {%- for message in loop_messages -%}
+ {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
+ {{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
+ {%- endif -%}
+ {%- if (message['role'] == 'assistant') -%}
+ {%- set role = "model" -%}
+ {%- else -%}
+ {%- set role = message['role'] -%}
+ {%- endif -%}
+ {{ '<start_of_turn>' + role + '
+ ' + (first_user_prefix if loop.first else "") }}
+ {%- if message['content'] is string -%}
+ {{ message['content'] | trim }}
+ {%- elif message['content'] is iterable -%}
+ {%- for item in message['content'] -%}
+ {%- if item['type'] == 'image' -%}
+ {{ '<start_of_image>' }}
+ {%- elif item['type'] == 'text' -%}
+ {{ item['text'] | trim }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{ raise_exception("Invalid content type") }}
+ {%- endif -%}
+ {{ '<end_of_turn>
+ ' }}
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+ {{'<start_of_turn>model
+ '}}
+ {%- endif -%}
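The template renders Gemma-style turns: the `assistant` role is mapped to `model`, a leading system message is folded into the first user turn, image content parts become `<start_of_image>`, and strict user/assistant alternation is enforced. A quick sketch of exercising it through the tokenizer, with an illustrative message list and an assumed local path:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("checkpoint-160")  # assumed local path

messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "Summarize the key findings."},
]

# add_generation_prompt=True appends the trailing '<start_of_turn>model'
# block from the template's final branch.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```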
checkpoint-160/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84207718301f02105e5c8d1015e9c66f7cae221097893900f656db6a1ea5e4ce
+ size 5678690152
checkpoint-160/preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "do_convert_rgb": null,
+ "do_normalize": true,
+ "do_pan_and_scan": null,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_processor_type": "Gemma3ImageProcessor",
+ "image_seq_length": 256,
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "pan_and_scan_max_num_crops": null,
+ "pan_and_scan_min_crop_size": null,
+ "pan_and_scan_min_ratio_to_activate": null,
+ "processor_class": "Gemma3Processor",
+ "resample": 2,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "height": 896,
+ "width": 896
+ }
+ }
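Note that `rescale_factor` is exactly 1/255, and with `image_mean` and `image_std` both 0.5 per channel, 8-bit pixels end up in [-1, 1] after the 896×896 resize. The arithmetic, spelled out for a single channel value:

```python
RESCALE = 0.00392156862745098  # == 1 / 255, per the config above

def normalize_pixel(value: int) -> float:
    """Map an 8-bit pixel value into the model's input range."""
    scaled = value * RESCALE     # [0, 255] -> [0.0, 1.0]
    return (scaled - 0.5) / 0.5  # [0.0, 1.0] -> [-1.0, 1.0]

assert abs(normalize_pixel(0) + 1.0) < 1e-12
assert abs(normalize_pixel(255) - 1.0) < 1e-12
```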
checkpoint-160/processor_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "image_seq_length": 256,
+ "processor_class": "Gemma3Processor"
+ }
checkpoint-160/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3329cfafdeec77bac256abf84fcf467c64038d15ac8a566ba59458fef985208
+ size 14244
checkpoint-160/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1baaa49bf252fa8c6ffdbfc05307f10d963aa3cdd2d63ca1e87c1daf7f531f1
+ size 1064
checkpoint-160/special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
+ {
+ "boi_token": "<start_of_image>",
+ "bos_token": {
+ "content": "<bos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eoi_token": "<end_of_image>",
+ "eos_token": {
+ "content": "<eos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "image_token": "<image_soft_token>",
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-160/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ebf1915455f8237564395182c49e3c685cfe3533b3d50ec6d49ce65ec43c32e
+ size 33384723
checkpoint-160/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
+ size 4689074
checkpoint-160/tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-160/trainer_state.json ADDED
@@ -0,0 +1,186 @@
+ {
+ "best_global_step": null,
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.33298647242455776,
+ "eval_steps": 20,
+ "global_step": 160,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.04162330905306972,
+ "grad_norm": 2.6779890060424805,
+ "learning_rate": 0.00019828326180257511,
+ "loss": 5.368,
+ "mean_token_accuracy": 0.8072595901787281,
+ "num_tokens": 161548.0,
+ "step": 20
+ },
+ {
+ "epoch": 0.04162330905306972,
+ "eval_loss": 0.12305498868227005,
+ "eval_mean_token_accuracy": 0.9847841203212738,
+ "eval_num_tokens": 161548.0,
+ "eval_runtime": 68.9772,
+ "eval_samples_per_second": 2.9,
+ "eval_steps_per_second": 0.725,
+ "step": 20
+ },
+ {
+ "epoch": 0.08324661810613944,
+ "grad_norm": 0.5366652011871338,
+ "learning_rate": 0.00018969957081545064,
+ "loss": 0.1516,
+ "mean_token_accuracy": 0.9917096085846424,
+ "num_tokens": 323168.0,
+ "step": 40
+ },
+ {
+ "epoch": 0.08324661810613944,
+ "eval_loss": 0.013321125879883766,
+ "eval_mean_token_accuracy": 0.9938336944580078,
+ "eval_num_tokens": 323168.0,
+ "eval_runtime": 68.8811,
+ "eval_samples_per_second": 2.904,
+ "eval_steps_per_second": 0.726,
+ "step": 40
+ },
+ {
+ "epoch": 0.12486992715920915,
+ "grad_norm": 0.4724605977535248,
+ "learning_rate": 0.0001811158798283262,
+ "loss": 0.0462,
+ "mean_token_accuracy": 0.9939016968011856,
+ "num_tokens": 484780.0,
+ "step": 60
+ },
+ {
+ "epoch": 0.12486992715920915,
+ "eval_loss": 0.010577572509646416,
+ "eval_mean_token_accuracy": 0.99473925948143,
+ "eval_num_tokens": 484780.0,
+ "eval_runtime": 68.7899,
+ "eval_samples_per_second": 2.907,
+ "eval_steps_per_second": 0.727,
+ "step": 60
+ },
+ {
+ "epoch": 0.16649323621227888,
+ "grad_norm": 0.33190304040908813,
+ "learning_rate": 0.00017253218884120172,
+ "loss": 0.0388,
+ "mean_token_accuracy": 0.9948750860989094,
+ "num_tokens": 646431.0,
+ "step": 80
+ },
+ {
+ "epoch": 0.16649323621227888,
+ "eval_loss": 0.00940256379544735,
+ "eval_mean_token_accuracy": 0.9951227140426636,
+ "eval_num_tokens": 646431.0,
+ "eval_runtime": 69.1687,
+ "eval_samples_per_second": 2.891,
+ "eval_steps_per_second": 0.723,
+ "step": 80
+ },
+ {
+ "epoch": 0.2081165452653486,
+ "grad_norm": 0.20264093577861786,
+ "learning_rate": 0.00016394849785407727,
+ "loss": 0.0367,
+ "mean_token_accuracy": 0.9950500458478928,
+ "num_tokens": 808054.0,
+ "step": 100
+ },
+ {
+ "epoch": 0.2081165452653486,
+ "eval_loss": 0.008246215991675854,
+ "eval_mean_token_accuracy": 0.9951834440231323,
+ "eval_num_tokens": 808054.0,
+ "eval_runtime": 68.7059,
+ "eval_samples_per_second": 2.911,
+ "eval_steps_per_second": 0.728,
+ "step": 100
+ },
+ {
+ "epoch": 0.2497398543184183,
+ "grad_norm": 0.30675262212753296,
+ "learning_rate": 0.0001553648068669528,
+ "loss": 0.0341,
+ "mean_token_accuracy": 0.994870014488697,
+ "num_tokens": 969623.0,
+ "step": 120
+ },
+ {
+ "epoch": 0.2497398543184183,
+ "eval_loss": 0.00862042885273695,
+ "eval_mean_token_accuracy": 0.994941633939743,
+ "eval_num_tokens": 969623.0,
+ "eval_runtime": 68.8073,
+ "eval_samples_per_second": 2.907,
+ "eval_steps_per_second": 0.727,
+ "step": 120
+ },
+ {
+ "epoch": 0.29136316337148804,
+ "grad_norm": 0.1850423365831375,
+ "learning_rate": 0.00014678111587982832,
+ "loss": 0.034,
+ "mean_token_accuracy": 0.9949115067720413,
+ "num_tokens": 1131222.0,
+ "step": 140
+ },
+ {
+ "epoch": 0.29136316337148804,
+ "eval_loss": 0.007929541170597076,
+ "eval_mean_token_accuracy": 0.9951228404045105,
+ "eval_num_tokens": 1131222.0,
+ "eval_runtime": 68.7004,
+ "eval_samples_per_second": 2.911,
+ "eval_steps_per_second": 0.728,
+ "step": 140
+ },
+ {
+ "epoch": 0.33298647242455776,
+ "grad_norm": 0.3047815263271332,
+ "learning_rate": 0.00013819742489270387,
+ "loss": 0.0328,
+ "mean_token_accuracy": 0.9950883395969867,
+ "num_tokens": 1292839.0,
+ "step": 160
+ },
+ {
+ "epoch": 0.33298647242455776,
+ "eval_loss": 0.007993862964212894,
+ "eval_mean_token_accuracy": 0.9951430022716522,
+ "eval_num_tokens": 1292839.0,
+ "eval_runtime": 68.1668,
+ "eval_samples_per_second": 2.934,
+ "eval_steps_per_second": 0.733,
+ "step": 160
+ }
+ ],
+ "logging_steps": 20,
+ "max_steps": 481,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 40,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": false
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 3.375947623271309e+16,
+ "train_batch_size": 4,
+ "trial_name": null,
+ "trial_params": null
+ }
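The state above puts training at step 160 of 481 (epoch ≈ 0.33), with logging and eval every 20 steps, checkpoints every 40, and eval loss down from 0.123 at step 20 to about 0.008. A small sketch for pulling the train/eval curves out of this file (local path assumed):

```python
import json

with open("checkpoint-160/trainer_state.json") as f:  # assumed local path
    state = json.load(f)

# Train entries carry "loss"; eval entries carry "eval_loss".
train = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
evals = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]

print(f"progress: step {state['global_step']} / {state['max_steps']}")
for (step, tr), (_, ev) in zip(train, evals):
    print(f"step {step:>4}: train_loss={tr:.4f}  eval_loss={ev:.4f}")
```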
checkpoint-160/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a99c92db5718c8e3fa68a50a104ce7f740a033660d2ea251fbb6febbc7e4942
+ size 5816