---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library, using the teacher model [gpt2](https://huggingface.co/gpt2) on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

# Model Architecture

- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB

# Evaluation Metrics Comparison

The `*ppl` columns report perplexity on the named evaluation sets (lower is better).

| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 45.7764 | 25.1947 | 99.227 | 12.423 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 2496.0 | 25856.0 | 21.0926 | 25.2323 | 99.079 | 12.405 | 2576.0 | 47616.0 |
| 5000 | 0.0808 | 486.0 | 3120.0 | 18.4104 | 25.2286 | 99.094 | 12.407 | 338.0 | 1104.0 |
| 7500 | 0.1212 | 276.0 | 1296.0 | 17.0012 | 25.2472 | 99.021 | 12.397 | 254.0 | 247.0 |
| 10000 | 0.1616 | 202.0 | 752.0 | 16.2103 | 25.2451 | 99.029 | 12.398 | 187.0 | 296.0 |
| 12500 | 0.2020 | 145.0 | 536.0 | 15.0990 | 25.1956 | 99.224 | 12.423 | 131.0 | 173.0 |
| 15000 | 0.2424 | 123.5 | 488.0 | 14.5275 | 25.2611 | 98.966 | 12.391 | 93.5 | 147.0 |
| 17500 | 0.2828 | 95.5 | 376.0 | 14.1507 | 25.2295 | 99.09 | 12.406 | 76.0 | 134.0 |
| 20000 | 0.3232 | 78.0 | 308.0 | 13.6430 | 25.2434 | 99.036 | 12.399 | 63.5 | 136.0 |
| 22500 | 0.3636 | 66.5 | 218.0 | 13.1470 | 25.2484 | 99.016 | 12.397 | 49.75 | 83.0 |
| 25000 | 0.4040 | 63.0 | 204.0 | 12.9643 | 25.2039 | 99.191 | 12.419 | 43.25 | 82.5 |
| 27500 | 0.4444 | 59.75 | 196.0 | 12.8397 | 25.1629 | 99.353 | 12.439 | 40.0 | 76.5 |
| 30000 | 0.4848 | 58.5 | 192.0 | 12.8201 | 25.1971 | 99.218 | 12.422 | 41.0 | 61.5 |
| 32500 | 0.5253 | 58.25 | 170.0 | 12.7767 | 25.2324 | 99.079 | 12.405 | 39.5 | 58.75 |
| 35000 | 0.5657 | 57.75 | 170.0 | 12.6563 | 25.193 | 99.234 | 12.424 | 36.25 | 44.75 |
| 37500 | 0.6061 | 56.25 | 155.0 | 12.5982 | 25.2106 | 99.165 | 12.415 | 36.75 | 50.5 |
| 40000 | 0.6465 | 55.5 | 163.0 | 12.5850 | 25.222 | 99.12 | 12.41 | 33.5 | 62.25 |
| 42500 | 0.6869 | 54.75 | 151.0 | 12.5192 | 25.2047 | 99.188 | 12.418 | 34.25 | 50.75 |
| 45000 | 0.7273 | 51.25 | 135.0 | 12.2871 | 25.2624 | 98.961 | 12.39 | 29.5 | 42.0 |
| 47500 | 0.7677 | 51.0 | 125.5 | 12.2422 | 25.232 | 99.081 | 12.405 | 28.375 | 35.75 |
| 50000 | 0.8081 | 50.5 | 124.5 | 12.2162 | 25.177 | 99.297 | 12.432 | 28.375 | 37.75 |
| 52500 | 0.8485 | 49.25 | 121.0 | 12.1929 | 25.1839 | 99.27 | 12.429 | 28.5 | 34.5 |
| 55000 | 0.8889 | 49.25 | 120.5 | 12.1608 | 25.2063 | 99.182 | 12.418 | 27.625 | 34.75 |
| 57500 | 0.9293 | 48.75 | 119.5 | 12.1482 | 25.2548 | 98.991 | 12.394 | 27.5 | 32.75 |
| 60000 | 0.9697 | 48.75 | 119.0 | 12.1412 | 25.2068 | 99.18 | 12.417 | 27.5 | 32.5 |
| 61875 | 1.0 | 48.75 | 119.5 | 12.1400 | 25.2169 | 99.14 | 12.412 | 27.5 | 32.75 |

# Resource Usage Comparison

- VRAM Use: 7.7830 GB

# Distillation (Teacher -> Student) Architecture Difference

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
Module Diff Details: none; the module diff is empty because the student's architecture is identical to the teacher's.
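For quick qualitative checks, the student loads like any GPT-2 checkpoint. A minimal sketch, assuming this checkpoint is published under the hypothetical Hub id `lapp0/distily_multi_experiment` (substitute the actual repo path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual Hub path of this checkpoint.
repo_id = "lapp0/distily_multi_experiment"

# The student keeps GPT-2's architecture, so the stock GPT-2 tokenizer applies.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```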

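The `*ppl` columns in the evaluation table are perplexities on text from the named datasets. Distily's exact evaluation harness is not reproduced here, but a generic token-level perplexity over a single passage can be computed along these lines:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def perplexity(model, tokenizer, text, max_length=1024):
    """Token-level perplexity: exp of the mean next-token cross-entropy."""
    ids = tokenizer(
        text, return_tensors="pt", truncation=True, max_length=max_length
    ).input_ids
    logits = model(ids).logits
    # Shift so the prediction at position t is scored against token t+1.
    loss = F.cross_entropy(logits[:, :-1].flatten(0, 1), ids[:, 1:].flatten())
    return loss.exp().item()
```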
# Train Dataset

Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`

# Training Objective

```
DistillationObjective(
    logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl),
    attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2)
)
```

(An illustrative sketch of what this objective computes appears after the hyperparameter list below.)

# Hyperparameters

The following hyperparameters were used during training:
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True` (ternary-weight student; see the BitNet sketch after this list)
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
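The training objective above combines a KL term on the logits (weight 1) with a cosine term on the attention maps (weight 25.0), routed through a `layer-2` layer mapper. The snippet below is only an illustrative PyTorch sketch of that combination, not Distily's implementation; in particular, the `layer-2` mapper's exact pairing is Distily-specific, and the offset-by-two pairing used here is an assumption.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    """Sketch of a logits-KL + attention-cosine objective.

    student_out / teacher_out are transformers model outputs produced
    with output_attentions=True. Illustration only, not Distily's code.
    """
    # Logits component: KL(teacher || student) over the vocabulary.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_p = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_p, reduction="batchmean")

    # Attention component: mean cosine distance between attention maps.
    # ASSUMPTION: pair student layer i with teacher layer i + 2 as a stand-in
    # for the Distily-specific `layer-2` mapper.
    pairs = list(zip(student_out.attentions, teacher_out.attentions[2:]))
    attn_loss = 0.0
    for s_attn, t_attn in pairs:
        s = s_attn.flatten(1)  # (batch, heads * queries * keys)
        t = t_attn.flatten(1)
        attn_loss = attn_loss + (1 - F.cosine_similarity(s, t, dim=-1)).mean()
    attn_loss = attn_loss / max(len(pairs), 1)

    return logits_loss + attn_weight * attn_loss
```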

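The `student_model_as_bitnet: True` flag (and the `bitnet` / `1.58b` tags) indicate the student's linear weights are trained BitNet-style with ternary values in {-1, 0, +1}. A minimal sketch of absmean ternary quantization in the spirit of BitNet b1.58 follows; Distily's actual BitNet conversion may differ.

```python
import torch

def weight_quant_ternary(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Absmean ternary quantizer in the style of BitNet b1.58.

    Scales weights by their mean absolute value, rounds to {-1, 0, +1},
    then rescales. Training would pair this with a straight-through
    estimator; this sketch shows only the forward quantization.
    """
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q * scale
```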
# Framework Versions

- Distily 0.2.0
- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0