lapp0 committed on
Commit 8abcce6 · verified · 1 Parent(s): f80052b

End of training
README.md CHANGED
@@ -1,7 +1,9 @@
 ---
- library_name: transformers
- license: mit
 base_model: gpt2
 tags:
 - bitnet
 - 1.58b
@@ -11,75 +13,147 @@ model-index:
 results: []
 ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # distily_multi_experiment

- This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 11.8595

- ## Model description
-
- More information needed

- ## Intended uses & limitations

 More information needed

- ## Training and evaluation data

 More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
 The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 4
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.5
- - num_epochs: 1.0
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:-----:|:---------------:|
- | No log | 0 | 0 | 45.5392 |
- | 19.25 | 0.0404 | 2500 | 20.5160 |
- | 17.0 | 0.0808 | 5000 | 18.1646 |
- | 16.375 | 0.1212 | 7500 | 16.8100 |
- | 18.5 | 0.1616 | 10000 | 15.9662 |
- | 18.125 | 0.2020 | 12500 | 14.8913 |
- | 16.125 | 0.2424 | 15000 | 14.2909 |
- | 13.875 | 0.2828 | 17500 | 13.9054 |
- | 12.5625 | 0.3232 | 20000 | 13.4260 |
- | 13.8125 | 0.3636 | 22500 | 12.9026 |
- | 14.5625 | 0.4040 | 25000 | 12.6783 |
- | 15.1875 | 0.4444 | 27500 | 12.5651 |
- | 13.4375 | 0.4848 | 30000 | 12.5742 |
- | 6.8125 | 0.5253 | 32500 | 12.5106 |
- | 12.0 | 0.5657 | 35000 | 12.3849 |
- | 13.9375 | 0.6061 | 37500 | 12.3297 |
- | 5.375 | 0.6465 | 40000 | 12.2764 |
- | 20.625 | 0.6869 | 42500 | 12.2612 |
- | 10.0 | 0.7273 | 45000 | 12.0058 |
- | 18.75 | 0.7677 | 47500 | 11.9614 |
- | 10.0625 | 0.8081 | 50000 | 11.9339 |
- | 16.0 | 0.8485 | 52500 | 11.9123 |
- | 18.625 | 0.8889 | 55000 | 11.8770 |
- | 15.875 | 0.9293 | 57500 | 11.8680 |
- | 11.25 | 0.9697 | 60000 | 11.8611 |
-
-
- ### Framework versions
 - Transformers 4.44.1
 - Pytorch 2.5.0.dev20240821+cu121
 - Datasets 2.21.0
- - Tokenizers 0.19.1
 
 ---
 base_model: gpt2
+ datasets:
+ - wikimedia/wikipedia
+ library_name: Distily
+ license: mit
 tags:
 - bitnet
 - 1.58b

 results: []
 ---

+ # Summary
+
+ Distilled with the [Distily](https://github.com/lapp0/distily) library,
+ using the teacher model [gpt2](https://huggingface.co/gpt2)
+ on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+
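+ As a quick usage sketch (the repo id below is an assumption based on this card's title, not confirmed; the model loads like any GPT-2 checkpoint):
+
+ ```python
+ # Sketch: load and sample from the distilled student.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "lapp0/distily_multi_experiment"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ inputs = tokenizer("The quick brown fox", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+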
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment.
+
+ # Model description
+
 More information needed
+
+ # Intended uses & limitations
+
 More information needed
+ -->
+
+ # Model Architecture
+ - **Architecture**: `GPT2LMHeadModel`
+ - **Total Parameters**: 124,439,808
+ - **Data Type (dtype)**: torch.bfloat16
+ - **Model Size**: 0.24 GB
+
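+ The figures above can be reproduced with a short PyTorch check (a sketch; assumes the checkpoint is loaded in bfloat16):
+
+ ```python
+ # Sketch: count parameters and estimate in-memory size for a GPT-2 sized model.
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
+ n_params = sum(p.numel() for p in model.parameters())
+ print(f"{n_params:,} parameters")       # 124,439,808
+ print(f"~{n_params * 2 / 1e9:.2f} GB")  # bfloat16 = 2 bytes per parameter
+ ```
+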
+ # Evaluation Metrics Comparison
+
+ | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
+ | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
+ | 0 | 0 | 1133871366144.0 | 97306779058176.0 | 44.6892 | 25.2062 | 99.182 | 12.418 | 2785017856.0 | 54425825574912.0 |
+ | 2500 | 0.0404 | 1648.0 | 17664.0 | 20.4150 | 25.2355 | 99.067 | 12.403 | 1368.0 | 30464.0 |
+ | 5000 | 0.0808 | 498.0 | 3152.0 | 18.2270 | 25.2737 | 98.917 | 12.384 | 338.0 | 620.0 |
+ | 7500 | 0.1212 | 272.0 | 1288.0 | 16.8848 | 25.2363 | 99.064 | 12.403 | 241.0 | 262.0 |
+ | 10000 | 0.1616 | 199.0 | 852.0 | 16.0482 | 25.2552 | 98.99 | 12.394 | 181.0 | 160.0 |
+ | 12500 | 0.2020 | 142.0 | 544.0 | 14.9888 | 25.2627 | 98.96 | 12.39 | 122.0 | 159.0 |
+ | 15000 | 0.2424 | 119.0 | 506.0 | 14.4049 | 25.2537 | 98.995 | 12.394 | 95.5 | 151.0 |
+ | 17500 | 0.2828 | 98.0 | 376.0 | 14.0632 | 25.1898 | 99.247 | 12.426 | 74.0 | 128.0 |
+ | 20000 | 0.3232 | 76.5 | 280.0 | 13.5213 | 25.2312 | 99.084 | 12.405 | 68.0 | 94.0 |
+ | 22500 | 0.3636 | 66.0 | 210.0 | 13.0349 | 25.2005 | 99.204 | 12.42 | 49.25 | 73.5 |
+ | 25000 | 0.4040 | 62.25 | 187.0 | 12.8246 | 25.2755 | 98.91 | 12.384 | 44.75 | 65.5 |
+ | 27500 | 0.4444 | 60.25 | 175.0 | 12.7070 | 25.2654 | 98.949 | 12.388 | 43.25 | 72.5 |
+ | 30000 | 0.4848 | 62.25 | 183.0 | 12.7168 | 25.2653 | 98.95 | 12.389 | 42.25 | 87.0 |
+ | 32500 | 0.5253 | 59.0 | 184.0 | 12.6674 | 25.2119 | 99.16 | 12.415 | 37.75 | 70.5 |
+ | 35000 | 0.5657 | 58.0 | 176.0 | 12.5288 | 25.2238 | 99.113 | 12.409 | 34.75 | 50.0 |
+ | 37500 | 0.6061 | 56.5 | 166.0 | 12.4810 | 25.192 | 99.238 | 12.425 | 36.75 | 69.5 |
+ | 40000 | 0.6465 | 55.0 | 151.0 | 12.4422 | 25.2105 | 99.165 | 12.415 | 34.0 | 48.25 |
+ | 42500 | 0.6869 | 52.75 | 161.0 | 12.3894 | 25.258 | 98.979 | 12.392 | 33.5 | 58.75 |
+ | 45000 | 0.7273 | 51.25 | 134.0 | 12.1660 | 25.1916 | 99.239 | 12.425 | 29.75 | 43.0 |
+ | 47500 | 0.7677 | 48.75 | 129.0 | 12.125 | 25.243 | 99.037 | 12.399 | 28.625 | 38.25 |
+ | 50000 | 0.8081 | 49.75 | 126.5 | 12.0924 | 25.25 | 99.01 | 12.396 | 28.375 | 35.0 |
+ | 52500 | 0.8485 | 50.75 | 125.0 | 12.0760 | 25.2184 | 99.134 | 12.412 | 28.0 | 39.0 |
+ | 55000 | 0.8889 | 49.75 | 124.5 | 12.0411 | 25.2538 | 98.995 | 12.394 | 27.625 | 36.75 |
+ | 57500 | 0.9293 | 49.0 | 120.5 | 12.0289 | 25.2405 | 99.047 | 12.401 | 27.375 | 34.5 |
+ | 60000 | 0.9697 | 48.75 | 120.5 | 12.0196 | 25.192 | 99.238 | 12.425 | 27.375 | 35.0 |
+ | 61875 | 1.0 | 49.0 | 121.0 | 12.0190 | 25.1853 | 99.264 | 12.428 | 27.375 | 35.0 |
+
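+ In the table above, the `*ppl` columns are perplexities on the corresponding evaluation sets (enwiki, frwiki, tinystories, zhwiki); lower is better, with the teacher row as the target to approach. Perplexity is the exponential of the mean token-level cross-entropy; a minimal sketch of that computation (illustrative, not Distily's evaluation code):
+
+ ```python
+ # Sketch: perplexity = exp(mean cross-entropy) of a model on a text.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")
+ model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
+
+ enc = tokenizer("Paris is the capital of France.", return_tensors="pt")
+ with torch.no_grad():
+     out = model(**enc, labels=enc["input_ids"])  # loss = mean cross-entropy
+ print(torch.exp(out.loss).item())  # perplexity
+ ```
+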
+ # Resource Usage Comparison
+
+ - VRAM Use: 7.7823 GB
+
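+ A figure like this is typically the peak CUDA allocation during the run; a minimal sketch of how to capture it (assumes a CUDA device):
+
+ ```python
+ # Sketch: record peak VRAM allocated around a training/eval loop.
+ import torch
+
+ torch.cuda.reset_peak_memory_stats()
+ # ... run training or evaluation here ...
+ peak_gb = torch.cuda.max_memory_allocated() / 1024**3
+ print(f"peak VRAM: {peak_gb:.4f} GB")
+ ```
+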
+ # Distillation (Teacher -> Student) Architecture Difference
+
+ - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
+ - **Total Parameters**: 124,439,808 -> 124,439,808
+ - **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
+ - **Model Size**: 0.24 GB -> 0.24 GB
+
+ <details>
+ <summary>Module Diff Details</summary>
+
+ ```diff
+
+ ```
+
+ </details>
+ <br/>
+
+ # Train Dataset
+ Trained on 145,731,804 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+
+ - Num Samples: `247,500`
+ - Subset: `20231101.en`
+ - Split: `train`
+
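+ The same data can be pulled with the `datasets` library (a sketch; Distily's sampling and tokenization steps are not shown):
+
+ ```python
+ # Sketch: load the English Wikipedia subset/split listed above.
+ from datasets import load_dataset
+
+ ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
+ print(ds[0]["text"][:200])  # articles are in the "text" column
+ ```
+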
+ # Training Objective
+
+ ```
+ DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))
+ ```
+
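+ Read as plain PyTorch, this objective is a KL divergence on the logits plus a cosine distance on attention maps weighted 25x. A minimal sketch of that combination (illustrative only; Distily's actual implementation and its `layer-2` layer mapper are not reproduced here):
+
+ ```python
+ # Sketch: loss = KL(student || teacher logits) + 25.0 * cosine distance on attentions.
+ import torch.nn.functional as F
+
+ def distillation_loss(student_logits, teacher_logits,
+                       student_attns, teacher_attns, attn_weight=25.0):
+     kl = F.kl_div(
+         F.log_softmax(student_logits, dim=-1),
+         F.softmax(teacher_logits, dim=-1),
+         reduction="batchmean",
+     )
+     cos = sum(
+         (1 - F.cosine_similarity(s.flatten(1), t.flatten(1), dim=-1)).mean()
+         for s, t in zip(student_attns, teacher_attns)
+     ) / len(student_attns)
+     return kl + attn_weight * cos
+ ```
+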
+ # Hyperparameters
 The following hyperparameters were used during training:

+ <details>
+ <summary>Expand</summary>
+
+ - learning_rate: `0.0001`
+ - train_batch_size: `4`
+ - eval_batch_size: `8`
+ - seed: `42`
+ - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
+ - lr_scheduler_type: `linear`
+ - lr_scheduler_warmup_ratio: `0.5`
+ - num_epochs: `1.0`
+ - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
+ - train_embeddings: `True`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f040856c4f0>`
+ - student_model_name_or_path: `None`
+ - student_config_name_or_path: `None`
+ - student_model_config: `None`
+ - reinitialize_weights: `None`
+ - copy_teacher_modules: `[('lm_head', False)]`
+ - student_model_as_bitnet: `True`
+ - student_model_compile: `False`
+ - dropout: `None`
+ - teacher_model_name_or_path: `gpt2`
+ - teacher_load_in_8bit: `False`
+ - teacher_load_in_4bit: `False`
+ - teacher_model_compile: `False`
+ - dataset_uri: `wikimedia/wikipedia`
+ - dataset_subset: `20231101.en`
+ - dataset_split: `train`
+ - dataset_column_name: `text`
+ - dataset_sample_size: `250000`
+ - dataset_test_size: `0.01`
+ - gradient_accumulation_steps: `1`
+ - weight_decay: `0.0`
+ - max_grad_norm: `1.0`
+ - warmup_ratio: `0.5`
+ - warmup_steps: `0`
+ - gradient_checkpointing: `True`
+
+ </details>
+ <br/>
+
+
155
+ # Framework Versions
156
+ - Distily 0.2.0
157
  - Transformers 4.44.1
158
  - Pytorch 2.5.0.dev20240821+cu121
159
  - Datasets 2.21.0
 
logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=layer-2, projector=linear/events.out.tfevents.1724403552.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:631e80d7ae1c1bec2f21aa59d1e5e27212e73f59403668997c98e51e71cf8cad
+ size 588