duyphu committed (verified)
Commit 8a8c871
1 parent: e354326

End of training

Files changed (2):
  1. README.md +11 -4
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -65,7 +65,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 1
+max_steps: 50
 micro_batch_size: 2
 mlflow_experiment_name: /tmp/7cecb5f0cbdfe3e6_train_data.json
 model_type: AutoModelForCausalLM
@@ -92,7 +92,7 @@ wandb_name: 9c40171a-a397-4067-8fba-d0d97f9c3fb5
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: 9c40171a-a397-4067-8fba-d0d97f9c3fb5
-warmup_steps: 1
+warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
 
@@ -103,6 +103,8 @@ xformers_attention: null
 # d1c8e05a-bc03-c6ca-5c8d-0b7b025a0462
 
 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 8.1050
 
 ## Model description
 
@@ -129,14 +131,19 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 2
-- training_steps: 1
+- lr_scheduler_warmup_steps: 10
+- training_steps: 50
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | No log | 0.0003 | 1 | 8.7375 |
+| 32.7749 | 0.0026 | 10 | 8.6132 |
+| 34.1939 | 0.0052 | 20 | 8.3340 |
+| 33.0339 | 0.0078 | 30 | 8.1862 |
+| 31.6229 | 0.0104 | 40 | 8.1304 |
+| 34.0631 | 0.0130 | 50 | 8.1050 |
 
 
 ### Framework versions
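
The substantive change in this commit is extending the run from 1 step to 50 steps with a 10-step warmup under a cosine schedule. A minimal sketch of that schedule is shown below, using transformers' `get_cosine_schedule_with_warmup`; the real run uses 8-bit AdamW (ADAMW_BNB) inside axolotl, so the plain torch `AdamW`, the dummy module, and the learning-rate value here are stand-ins just to illustrate the warmup/decay shape, not the actual training setup.

```python
# Sketch of the cosine-with-warmup schedule implied by the updated config
# (warmup_steps: 10, max_steps / training_steps: 50). Placeholder module,
# optimizer, and lr; the real run uses ADAMW_BNB via axolotl.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder parameters
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,  # placeholder; the actual learning_rate is set elsewhere in the config
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,    # warmup_steps from the config
    num_training_steps=50,  # max_steps / training_steps from the config
)

lrs = []
for _ in range(50):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])

print(lrs[:10])   # linear warmup over the first 10 steps
print(lrs[-5:])   # cosine decay toward 0 by step 50
```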
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb47e48d8068a70345b5edff8b5e58b84668e713c68a996a1c2785f0db0ebcac
+oid sha256:f04213f7fdfd9d7646783f6a8b54df9680b0297f22e1ba7385d1f843e279057b
 size 410814
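
The only weight change here is the updated `adapter_model.bin` LFS pointer, i.e. a retrained LoRA adapter (lora_r: 8, lora_target_linear: true) for EleutherAI/pythia-14m. A minimal sketch of applying such an adapter with peft follows; the adapter directory path is a placeholder for wherever this repository's files are downloaded, not a confirmed repository id.

```python
# Sketch: load the pythia-14m base model, then apply this LoRA adapter with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-14m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-14m")

adapter_dir = "./d1c8e05a-bc03-c6ca-5c8d-0b7b025a0462"  # placeholder path to this repo's files
model = PeftModel.from_pretrained(base, adapter_dir)

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```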