---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library, using the teacher model [gpt2](https://huggingface.co/gpt2) on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

# Model Architecture

- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB

# Evaluation Metrics Comparison

All `*ppl` columns are perplexities (lower is better); the **teacher eval** row gives the reference scores of the unmodified gpt2 teacher.

| step | epoch | enwikippl | frwikippl | loss | runtime (s) | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 1133871366144.0 | 97306779058176.0 | 44.6892 | 25.2062 | 99.182 | 12.418 | 2785017856.0 | 54425825574912.0 |
| 2500 | 0.0404 | 1648.0 | 17664.0 | 20.4150 | 25.2355 | 99.067 | 12.403 | 1368.0 | 30464.0 |
| 5000 | 0.0808 | 498.0 | 3152.0 | 18.2270 | 25.2737 | 98.917 | 12.384 | 338.0 | 620.0 |
| 7500 | 0.1212 | 272.0 | 1288.0 | 16.8848 | 25.2363 | 99.064 | 12.403 | 241.0 | 262.0 |
| 10000 | 0.1616 | 199.0 | 852.0 | 16.0482 | 25.2552 | 98.99 | 12.394 | 181.0 | 160.0 |
| 12500 | 0.2020 | 142.0 | 544.0 | 14.9888 | 25.2627 | 98.96 | 12.39 | 122.0 | 159.0 |
| 15000 | 0.2424 | 119.0 | 506.0 | 14.4049 | 25.2537 | 98.995 | 12.394 | 95.5 | 151.0 |
| 17500 | 0.2828 | 98.0 | 376.0 | 14.0632 | 25.1898 | 99.247 | 12.426 | 74.0 | 128.0 |
| 20000 | 0.3232 | 76.5 | 280.0 | 13.5213 | 25.2312 | 99.084 | 12.405 | 68.0 | 94.0 |
| 22500 | 0.3636 | 66.0 | 210.0 | 13.0349 | 25.2005 | 99.204 | 12.42 | 49.25 | 73.5 |
| 25000 | 0.4040 | 62.25 | 187.0 | 12.8246 | 25.2755 | 98.91 | 12.384 | 44.75 | 65.5 |
| 27500 | 0.4444 | 60.25 | 175.0 | 12.7070 | 25.2654 | 98.949 | 12.388 | 43.25 | 72.5 |
| 30000 | 0.4848 | 62.25 | 183.0 | 12.7168 | 25.2653 | 98.95 | 12.389 | 42.25 | 87.0 |
| 32500 | 0.5253 | 59.0 | 184.0 | 12.6674 | 25.2119 | 99.16 | 12.415 | 37.75 | 70.5 |
| 35000 | 0.5657 | 58.0 | 176.0 | 12.5288 | 25.2238 | 99.113 | 12.409 | 34.75 | 50.0 |
| 37500 | 0.6061 | 56.5 | 166.0 | 12.4810 | 25.192 | 99.238 | 12.425 | 36.75 | 69.5 |
| 40000 | 0.6465 | 55.0 | 151.0 | 12.4422 | 25.2105 | 99.165 | 12.415 | 34.0 | 48.25 |
| 42500 | 0.6869 | 52.75 | 161.0 | 12.3894 | 25.258 | 98.979 | 12.392 | 33.5 | 58.75 |
| 45000 | 0.7273 | 51.25 | 134.0 | 12.1660 | 25.1916 | 99.239 | 12.425 | 29.75 | 43.0 |
| 47500 | 0.7677 | 48.75 | 129.0 | 12.1250 | 25.243 | 99.037 | 12.399 | 28.625 | 38.25 |
| 50000 | 0.8081 | 49.75 | 126.5 | 12.0924 | 25.25 | 99.01 | 12.396 | 28.375 | 35.0 |
| 52500 | 0.8485 | 50.75 | 125.0 | 12.0760 | 25.2184 | 99.134 | 12.412 | 28.0 | 39.0 |
| 55000 | 0.8889 | 49.75 | 124.5 | 12.0411 | 25.2538 | 98.995 | 12.394 | 27.625 | 36.75 |
| 57500 | 0.9293 | 49.0 | 120.5 | 12.0289 | 25.2405 | 99.047 | 12.401 | 27.375 | 34.5 |
| 60000 | 0.9697 | 48.75 | 120.5 | 12.0196 | 25.192 | 99.238 | 12.425 | 27.375 | 35.0 |
| 61875 | 1.0 | 49.0 | 121.0 | 12.0190 | 25.1853 | 99.264 | 12.428 | 27.375 | 35.0 |

# Resource Usage Comparison

- VRAM Use: 7.7823 GB

# Distillation (Teacher -> Student) Architecture Difference

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
**Module Diff Details**: the generated module diff is empty; the student reuses the teacher's module structure unchanged.

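The `*ppl` columns in the evaluation table above track the student's perplexity on samples of English Wikipedia, French Wikipedia, TinyStories, and Chinese Wikipedia. Below is a minimal sketch of a standard perplexity computation with transformers; it is not Distily's exact evaluation harness, and the `gpt2` id is a stand-in for this model's repo id once published.

```python
# Minimal perplexity sketch: exponentiate the mean next-token
# cross-entropy returned by a causal LM. Not Distily's exact harness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in: substitute this model's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean
    # cross-entropy over shifted next-token predictions.
    out = model(**enc, labels=enc["input_ids"])

print(f"perplexity: {torch.exp(out.loss).item():.2f}")
```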
# Train Dataset

Trained on 145,731,804 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`

# Training Objective

```
DistillationObjective(
    logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl),
    attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2)
)
```
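The objective pairs a KL-divergence loss on the logits (weight 1) with a cosine loss on attention maps (weight 25.0). A hedged PyTorch sketch of that combination follows; Distily's `LossComponent` and `layer-2` mapper internals are not reproduced here, and the 1:1 layer pairing below is an assumption.

```python
# Sketch of the objective above: KL divergence on logits plus a
# weighted cosine loss on attention maps (weights mirror the printed
# objective). The 1:1 layer pairing is an assumption standing in for
# Distily's `layer-2` mapper.
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    # Both model outputs must be produced with output_attentions=True.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    attn_loss = 0.0
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions):
        # Cosine loss per example: 1 - cos(student map, teacher map).
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_loss = attn_loss + (1.0 - cos).mean()
    attn_loss = attn_loss / len(student_out.attentions)

    return logits_loss + attn_weight * attn_loss
```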
# Hyperparameters

The following hyperparameters were used during training:

- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear` (see the schedule sketch below)
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True` (see the quantization sketch after this list)
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
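`student_model_as_bitnet: True` and the `bitnet`/`1.58b` tags indicate the student's linear layers were trained BitNet-style with ternary weights. Below is a hedged sketch of BitNet b1.58's absmean ternary quantization with a straight-through estimator; Distily's actual bitnet conversion may differ in detail.

```python
# Sketch of BitNet b1.58-style ("1.58-bit") weight quantization:
# ternary weights via absmean scaling, trained with a straight-through
# estimator. An approximation, not Distily's exact implementation.
import torch
import torch.nn.functional as F

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    scale = w.abs().mean().clamp(min=eps)   # absmean scaling factor
    q = (w / scale).round().clamp(-1, 1)    # ternary values {-1, 0, +1}
    return q * scale                        # rescale for the matmul

class BitLinear(torch.nn.Linear):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Straight-through estimator: quantized weights in the forward
        # pass, gradients flow to the full-precision weights.
        w_q = self.weight + (absmean_ternary(self.weight) - self.weight).detach()
        return F.linear(x, w_q, self.bias)

layer = BitLinear(768, 768)  # GPT-2-sized hidden dim, for illustration
y = layer(torch.randn(1, 768))
```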

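With `lr_scheduler_type: linear` and `warmup_ratio: 0.5`, the learning rate ramps linearly to 1e-4 over the first half of the run, then decays linearly to zero. A sketch using transformers' scheduler helper, assuming the 61,875 total optimizer steps shown as the final row of the metrics table:

```python
# Sketch of the implied schedule: linear warmup for the first half of
# training, then linear decay to zero. The student model here is a
# stand-in; 61,875 steps is the last step in the metrics table.
from torch.optim import Adam
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in student
total_steps = 61_875

optimizer = Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.5 * total_steps),  # warmup_ratio: 0.5
    num_training_steps=total_steps,
)
# Each training step: optimizer.step(); scheduler.step(); optimizer.zero_grad()
```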
# Framework Versions

- Distily 0.2.0
- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0