willtensora committed (verified) · Commit 2de50d8 · 1 Parent(s): 5c1e4e6

End of training

Files changed (3):
  1. README.md +30 -26
  2. generation_config.json +3 -3
  3. pytorch_model.bin +2 -2
README.md CHANGED
@@ -1,12 +1,12 @@
 ---
 library_name: transformers
-license: mit
-base_model: fxmarty/tiny-random-GemmaForCausalLM
+license: apache-2.0
+base_model: JackFram/llama-68m
 tags:
 - axolotl
 - generated_from_trainer
 model-index:
-- name: fd1980a0-7e71-4e52-addb-318dca5991d5
+- name: 4ada8092-cc1e-445c-9260-a580ef2586ae
   results: []
 ---
@@ -18,21 +18,20 @@ should probably proofread and complete it, then remove this comment. -->
 
 axolotl version: `0.4.1`
 ```yaml
-base_model: fxmarty/tiny-random-GemmaForCausalLM
+base_model: JackFram/llama-68m
 batch_size: 32
 bf16: true
 chat_template: tokenizer_default_fallback_alpaca
 datasets:
 - data_files:
-  - b7c2a4a781c93416_train_data.json
+  - ff3a521d02fa72b2_train_data.json
   ds_type: json
   format: custom
-  path: /workspace/input_data/b7c2a4a781c93416_train_data.json
+  path: /workspace/input_data/ff3a521d02fa72b2_train_data.json
   type:
-    field_input: context
-    field_instruction: question
-    field_output: answer
-    format: '{instruction} {input}'
+    field_instruction: context
+    field_output: question
+    format: '{instruction}'
     no_input_format: '{instruction}'
     system_format: '{system}'
     system_prompt: ''
@@ -41,7 +40,7 @@ flash_attention: true
 gpu_memory_limit: 80GiB
 gradient_checkpointing: true
 group_by_length: true
-hub_model_id: willtensora/fd1980a0-7e71-4e52-addb-318dca5991d5
+hub_model_id: willtensora/4ada8092-cc1e-445c-9260-a580ef2586ae
 hub_strategy: checkpoint
 learning_rate: 0.0002
 logging_steps: 10
@@ -57,13 +56,15 @@ sample_packing: false
 save_steps: 40
 save_total_limit: 1
 sequence_len: 2048
-tokenizer_type: GemmaTokenizerFast
+special_tokens:
+  pad_token: </s>
+tokenizer_type: LlamaTokenizerFast
 train_on_inputs: false
 trust_remote_code: true
 val_set_size: 0.1
 wandb_entity: ''
 wandb_mode: online
-wandb_name: fxmarty/tiny-random-GemmaForCausalLM-/workspace/input_data/b7c2a4a781c93416_train_data.json
+wandb_name: JackFram/llama-68m-/workspace/input_data/ff3a521d02fa72b2_train_data.json
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: default
@@ -74,11 +75,11 @@ xformers_attention: true
 
 </details><br>
 
-# fd1980a0-7e71-4e52-addb-318dca5991d5
+# 4ada8092-cc1e-445c-9260-a580ef2586ae
 
-This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
+This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 11.7971
+- Loss: 0.2208
 
 ## Model description
 
@@ -107,21 +108,24 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 32
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 7
-- training_steps: 156
+- lr_scheduler_warmup_steps: 10
+- training_steps: 205
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| No log        | 0.0008 | 1    | 12.4537         |
-| 12.4357       | 0.0161 | 20   | 12.4267         |
-| 12.392        | 0.0322 | 40   | 12.3762         |
-| 12.3026       | 0.0483 | 60   | 12.2651         |
-| 12.1177       | 0.0645 | 80   | 12.0658         |
-| 11.9286       | 0.0806 | 100  | 11.8860         |
-| 11.8324       | 0.0967 | 120  | 11.8100         |
-| 11.798        | 0.1128 | 140  | 11.7971         |
+| No log        | 0.0006 | 1    | 6.7193          |
+| 1.5212        | 0.0122 | 20   | 1.0774          |
+| 0.7826        | 0.0244 | 40   | 0.6352          |
+| 0.5492        | 0.0366 | 60   | 0.4713          |
+| 0.3663        | 0.0488 | 80   | 0.3924          |
+| 0.3533        | 0.0610 | 100  | 0.3112          |
+| 0.2434        | 0.0732 | 120  | 0.2761          |
+| 0.2989        | 0.0854 | 140  | 0.2445          |
+| 0.2464        | 0.0976 | 160  | 0.2251          |
+| 0.2233        | 0.1098 | 180  | 0.2203          |
+| 0.2213        | 0.1220 | 200  | 0.2208          |
 
 
 ### Framework versions
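The substantive change in the config above is the prompt mapping: the old run combined `question` (instruction) and `context` (input) via `format: '{instruction} {input}'` to predict `answer`, while the new run feeds `context` alone through `format: '{instruction}'` and trains the model to produce `question`. A minimal sketch of that substitution, mimicking axolotl's custom-format templating (the example row is hypothetical, and this is not axolotl's own code):

```python
# Hypothetical row shaped like ff3a521d02fa72b2_train_data.json.
row = {
    "context": "The Eiffel Tower is in Paris.",  # field_instruction: context
    "question": "Where is the Eiffel Tower?",    # field_output: question
}

template = "{instruction}"  # format: '{instruction}' from the new config
prompt = template.format(instruction=row["context"])
target = row["question"]    # text the model is trained to generate

print(repr(prompt))  # 'The Eiffel Tower is in Paris.'
print(repr(target))  # 'Where is the Eiffel Tower?'
```

Because the config sets `train_on_inputs: false`, axolotl masks the prompt tokens so the loss is computed only on the target text.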
generation_config.json CHANGED
@@ -1,8 +1,8 @@
 {
   "_from_model_config": true,
-  "bos_token_id": 2,
+  "bos_token_id": 0,
   "do_sample": true,
-  "eos_token_id": 1,
-  "pad_token_id": 0,
+  "eos_token_id": 2,
+  "pad_token_id": 1,
   "transformers_version": "4.46.0"
 }
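The token-id swap tracks the change of base model: the old values followed the Gemma conventions of `fxmarty/tiny-random-GemmaForCausalLM`, while the new ones come from `JackFram/llama-68m`. One way to confirm the committed ids agree with the uploaded tokenizer; this is a sketch, assuming Hub access and the repo id from the card above:

```python
from transformers import AutoTokenizer, GenerationConfig

repo = "willtensora/4ada8092-cc1e-445c-9260-a580ef2586ae"

gen = GenerationConfig.from_pretrained(repo)
tok = AutoTokenizer.from_pretrained(repo)

# Values written by this commit: bos=0, eos=2, pad=1.
print(gen.bos_token_id, gen.eos_token_id, gen.pad_token_id)
# These should line up with the tokenizer's own special-token ids.
print(tok.bos_token_id, tok.eos_token_id, tok.pad_token_id)
```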
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4bc76ae4e72c9fc13bfe9567ae655234c8d3f2fcf4460d169dedaebd1865dcc9
-size 16392015
+oid sha256:432d9ed4d450961d63ceeda6070006f3b7eae9f4bfd1ec6ba4cd115f7bdb6b5a
+size 136067757
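The size jump in the LFS pointer is consistent with the base-model swap: `JackFram/llama-68m` has roughly 68M parameters, which at 2 bytes each (the run uses `bf16: true`) comes to about 136 MB, matching the new 136,067,757-byte file, while the old 16.4 MB file belonged to the tiny random Gemma. A back-of-the-envelope check, with the parameter count taken loosely from the model name:

```python
params = 68_000_000       # ~68M parameters, per the llama-68m name
bytes_per_param = 2       # bf16, as configured for the run
print(params * bytes_per_param)  # 136_000_000 -- within ~0.05% of 136_067_757;
                                 # the remainder is rounding plus file metadata
```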