---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library from the teacher model [gpt2](https://huggingface.co/gpt2) on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

# Model Architecture

- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB

# Evaluation Metrics Comparison

Columns ending in `ppl` are perplexities (lower is better) on English, French, and Chinese Wikipedia and on TinyStories.

| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 25.7744 | 30.2402 | 82.672 | 10.35 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 940.0 | 7712.0 | 6.1058 | 30.4416 | 82.125 | 10.282 | 640.0 | 6272.0 |
| 5000 | 0.0808 | 378.0 | 1880.0 | 5.0293 | 30.4354 | 82.141 | 10.284 | 270.0 | 288.0 |
| 7500 | 0.1212 | 230.0 | 820.0 | 4.5127 | 30.4162 | 82.193 | 10.291 | 201.0 | 174.0 |
| 10000 | 0.1616 | 173.0 | 632.0 | 4.2293 | 30.2401 | 82.672 | 10.351 | 151.0 | 172.0 |
| 12500 | 0.2020 | 127.5 | 482.0 | 3.8556 | 30.2143 | 82.742 | 10.359 | 106.5 | 156.0 |
| 15000 | 0.2424 | 109.0 | 436.0 | 3.6684 | 30.2343 | 82.688 | 10.352 | 87.5 | 144.0 |
| 17500 | 0.2828 | 93.5 | 348.0 | 3.5229 | 30.333 | 82.419 | 10.319 | 73.5 | 122.5 |
| 20000 | 0.3232 | 73.5 | 276.0 | 3.3349 | 30.3063 | 82.491 | 10.328 | 63.25 | 99.5 |
| 22500 | 0.3636 | 67.0 | 219.0 | 3.1509 | 30.3619 | 82.34 | 10.309 | 52.25 | 79.0 |
| 25000 | 0.4040 | 64.5 | 189.0 | 3.0823 | 30.4079 | 82.215 | 10.293 | 45.75 | 97.0 |
| 27500 | 0.4444 | 59.0 | 194.0 | 3.0271 | 30.4181 | 82.188 | 10.29 | 41.25 | 85.5 |
| 30000 | 0.4848 | 59.25 | 194.0 | 3.0192 | 30.2505 | 82.643 | 10.347 | 42.75 | 57.75 |
| 32500 | 0.5253 | 58.5 | 175.0 | 3.0025 | 30.2733 | 82.581 | 10.339 | 40.0 | 62.75 |
| 35000 | 0.5657 | 57.0 | 170.0 | 2.9448 | 30.2658 | 82.601 | 10.342 | 37.0 | 54.25 |
| 37500 | 0.6061 | 57.25 | 155.0 | 2.9182 | 30.2187 | 82.73 | 10.358 | 38.75 | 73.5 |
| 40000 | 0.6465 | 54.75 | 164.0 | 2.8978 | 30.2683 | 82.595 | 10.341 | 35.25 | 70.0 |
| 42500 | 0.6869 | 54.25 | 156.0 | 2.8775 | 30.4126 | 82.203 | 10.292 | 34.75 | 61.75 |
| 45000 | 0.7273 | 50.25 | 137.0 | 2.7761 | 30.3396 | 82.401 | 10.317 | 30.5 | 60.75 |
| 47500 | 0.7677 | 50.25 | 126.5 | 2.7499 | 30.3808 | 82.289 | 10.303 | 29.5 | 37.25 |
| 50000 | 0.8081 | 49.25 | 126.5 | 2.7359 | 30.3056 | 82.493 | 10.328 | 28.625 | 37.75 |
| 52500 | 0.8485 | 48.5 | 122.0 | 2.7258 | 30.3024 | 82.502 | 10.329 | 29.125 | 36.25 |
| 55000 | 0.8889 | 48.0 | 119.0 | 2.7099 | 30.201 | 82.779 | 10.364 | 28.125 | 34.0 |
| 57500 | 0.9293 | 47.5 | 119.0 | 2.7046 | 30.1798 | 82.837 | 10.371 | 27.875 | 33.5 |
| 60000 | 0.9697 | 47.75 | 118.5 | 2.7011 | 30.355 | 82.359 | 10.311 | 27.75 | 33.0 |
| 61875 | 1.0 | 47.75 | 119.0 | 2.7006 | 30.4772 | 82.028 | 10.27 | 27.875 | 33.0 |

# Resource Usage Comparison

- VRAM Use: 7.7831 GB

# Distillation (Teacher -> Student) Architecture Difference

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
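The student matches the teacher's architecture exactly. As a quick sanity check of the figures above, the parameter count and dtype can be read off the loaded checkpoint; `"path/to/student"` below is a placeholder for wherever this model is published:

```python
import torch
from transformers import AutoModelForCausalLM

# "path/to/student" is a placeholder; substitute the published checkpoint id.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/student", torch_dtype=torch.bfloat16
)
print(f"{sum(p.numel() for p in model.parameters()):,}")  # expect 124,439,808
print(model.dtype)                                        # expect torch.bfloat16
```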
# Module Diff Details

No module-level differences; the student reuses the teacher's module structure unchanged.

```diff
```

# Train Dataset

Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`

# Training Objective

```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))
```
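The objective string above combines a KL-divergence loss on the logits (weight 1) with a cosine loss on attention activations (weight 5). As a rough illustration only, not Distily's actual implementation, the combined loss might be computed along these lines, assuming student and teacher expose logits and per-layer attention tensors of matching shapes:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, student_attns, teacher_attns):
    # logits component (weight 1): KL divergence between the student's
    # log-probabilities and the teacher's probabilities.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # attn component (weight 5): cosine distance between attention maps.
    # The card's `layer_mapper=layer-2` pairs student with teacher layers;
    # teacher and student have identical depth here, so this sketch simply
    # zips the layers index-for-index.
    attn = sum(
        (1 - F.cosine_similarity(s.flatten(1), t.flatten(1), dim=-1)).mean()
        for s, t in zip(student_attns, teacher_attns)
    ) / len(student_attns)
    return 1 * kl + 5 * attn
```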
# Hyperparameters

The following hyperparameters were used during training:

- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True` (see the quantization sketch after this list)
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
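The `bitnet` and `1.58b` tags together with `student_model_as_bitnet: True` indicate the student's linear weights are trained BitNet b1.58-style, i.e. quantized to ternary values. A minimal sketch of the absmean quantizer from the BitNet b1.58 paper, shown for illustration and not taken from Distily's code:

```python
import torch

def absmean_quantize(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Scale weights by their mean absolute value, round to the nearest
    # integer, and clip to {-1, 0, +1}; rescaling preserves the layer's
    # overall magnitude.
    scale = w.abs().mean().clamp(min=eps)
    return (w / scale).round().clamp(-1, 1) * scale
```

During training such quantization is typically applied in the forward pass with a straight-through estimator, so gradients still update the underlying full-precision weights.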

# Framework Versions

- Distily 0.2.0
- Transformers 4.44.0
- PyTorch 2.3.0
- Datasets 2.21.0