---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library using teacher model [gpt2](https://huggingface.co/gpt2) on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB

# Evaluation Metrics Comparison

| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 25.7744 | 30.1836 | 82.826 | 10.37 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 956.0 | 8192.0 | 6.1231 | 30.1407 | 82.944 | 10.385 | 660.0 | 6464.0 |
| 5000 | 0.0808 | 378.0 | 1880.0 | 5.0294 | 30.1017 | 83.052 | 10.398 | 270.0 | 290.0 |
| 7500 | 0.1212 | 230.0 | 820.0 | 4.5126 | 30.2328 | 82.692 | 10.353 | 201.0 | 174.0 |
| 10000 | 0.1616 | 173.0 | 628.0 | 4.2290 | 30.1373 | 82.954 | 10.386 | 152.0 | 172.0 |
| 12500 | 0.2020 | 127.5 | 482.0 | 3.8556 | 30.2081 | 82.759 | 10.361 | 106.5 | 156.0 |
| 15000 | 0.2424 | 109.0 | 436.0 | 3.6682 | 30.1834 | 82.827 | 10.37 | 87.5 | 146.0 |
| 17500 | 0.2828 | 93.5 | 348.0 | 3.5219 | 30.1772 | 82.844 | 10.372 | 73.0 | 120.5 |
| 20000 | 0.3232 | 75.5 | 272.0 | 3.3368 | 30.1313 | 82.97 | 10.388 | 63.5 | 134.0 |
| 22500 | 0.3636 | 67.5 | 217.0 | 3.1528 | 30.1675 | 82.871 | 10.375 | 52.75 | 77.5 |
| 25000 | 0.4040 | 63.75 | 196.0 | 3.0848 | 30.1989 | 82.784 | 10.365 | 45.5 | 77.0 |
| 27500 | 0.4444 | 58.0 | 205.0 | 3.0296 | 30.1798 | 82.837 | 10.371 | 40.25 | 79.5 |
| 30000 | 0.4848 | 60.5 | 198.0 | 3.0189 | 30.2126 | 82.747 | 10.36 | 43.0 | 64.5 |
| 32500 | 0.5253 | 59.0 | 172.0 | 3.0013 | 30.176 | 82.847 | 10.372 | 41.0 | 76.5 |
| 35000 | 0.5657 | 56.0 | 172.0 | 2.9437 | 30.2238 | 82.716 | 10.356 | 38.25 | 59.5 |
| 37500 | 0.6061 | 57.5 | 161.0 | 2.9153 | 30.1666 | 82.873 | 10.376 | 38.25 | 67.5 |
| 40000 | 0.6465 | 54.75 | 156.0 | 2.8906 | 30.1878 | 82.815 | 10.368 | 35.75 | 58.75 |
| 42500 | 0.6869 | 54.0 | 154.0 | 2.8788 | 30.1733 | 82.855 | 10.373 | 34.75 | 52.0 |
| 45000 | 0.7273 | 50.5 | 136.0 | 2.7766 | 30.1315 | 82.97 | 10.388 | 30.75 | 45.25 |
| 47500 | 0.7677 | 50.0 | 124.5 | 2.7505 | 30.4536 | 82.092 | 10.278 | 29.875 | 37.25 |
| 50000 | 0.8081 | 48.75 | 123.5 | 2.7359 | 30.1393 | 82.948 | 10.385 | 28.75 | 37.0 |
| 52500 | 0.8485 | 48.25 | 120.5 | 2.7269 | 30.1607 | 82.889 | 10.378 | 28.875 | 35.5 |
| 55000 | 0.8889 | 48.0 | 118.5 | 2.7099 | 30.2151 | 82.74 | 10.359 | 27.875 | 34.25 |
| 57500 | 0.9293 | 47.5 | 118.0 | 2.7048 | 30.1727 | 82.856 | 10.374 | 27.625 | 33.0 |
| 60000 | 0.9697 | 47.5 | 117.5 | 2.7013 | 30.2816 | 82.558 | 10.336 | 27.5 | 32.75 |
| 61875 | 1.0 | 47.5 | 117.5 | 2.7006 | 30.27 | 82.59 | 10.34 | 27.625 | 33.0 |

# Resource Usage Comparison

- VRAM Use: 7.7831 GB

# Distillation (Teacher -> Student) Architecture Difference:

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
Module Diff Details

```diff

```
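The `*ppl` columns in the evaluation table above (enwikippl, frwikippl, tinystoriesppl, zhwikippl) report perplexity on held-out text from the corresponding corpora. Distily's exact evaluation windowing and aggregation are not reproduced here; the snippet below is only a minimal sketch of how a perplexity of this kind can be computed with `transformers`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # teacher shown for illustration; swap in the student checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

text = "Wikipedia is a free online encyclopedia."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels provided, the model returns the mean token cross-entropy;
    # perplexity is its exponential.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(torch.exp(loss).item())
```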

# Train Dataset

Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`

# Training Objective

```
DistillationObjective(
    logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl),
    attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2)
)
```
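The objective above combines a KL-divergence loss on the logits (weight 1) with a cosine loss on attention maps (weight 5), paired across layers by the `layer-2` mapper. The snippet below is only a minimal sketch of what such a combined loss can look like in PyTorch, assuming both models are run with `output_attentions=True`; Distily's actual reductions, temperature handling, and layer mapping may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out):
    # KL divergence between teacher and student next-token distributions
    # (the `logits` component, weight 1).
    student_logp = F.log_softmax(student_out.logits, dim=-1)
    teacher_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(student_logp, teacher_prob, reduction="batchmean")

    # Cosine distance between attention maps (the `attn` component, weight 5).
    # The `layer-2` mapper is assumed here to pair student and teacher layers;
    # an identity pairing is shown for simplicity.
    attn_losses = []
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions):
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_losses.append((1.0 - cos).mean())
    attn_loss = torch.stack(attn_losses).mean()

    return 1.0 * logits_loss + 5.0 * attn_loss
```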
# Hyperparameters

The following hyperparameters were used during training:

- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
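With `lr_scheduler_type: linear`, `warmup_ratio: 0.5`, and a single epoch, the learning rate ramps up linearly over the first half of training and decays linearly over the second half. Below is a minimal sketch of an equivalent optimizer and schedule, assuming the 61,875 total steps implied by the metrics table (247,500 samples at batch size 4); it is not the exact Distily training loop.

```python
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

# Placeholder model; the actual student is constructed by Distily.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 247,500 samples / train_batch_size 4 => 61,875 optimizer steps for one epoch.
total_steps = 61_875
warmup_steps = int(0.5 * total_steps)  # warmup_ratio: 0.5

# weight_decay is 0.0, so AdamW behaves like the plain Adam listed above.
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```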

# Framework Versions

- Distily 0.2.0
- Transformers 4.44.0
- Pytorch 2.3.0
- Datasets 2.21.0
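# Usage

Because the student keeps the stock `GPT2LMHeadModel` architecture, the checkpoint can be loaded with `transformers` like any GPT-2 model. A minimal sketch, assuming the repository id `lapp0/distily_multi_experiment` (substitute the actual repository this card belongs to):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; replace with the actual repo id.
repo_id = "lapp0/distily_multi_experiment"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```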