---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library,
using the teacher model [gpt2](https://huggingface.co/gpt2)
on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
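To try the distilled student, a minimal usage sketch with Hugging Face Transformers (the repository id below is a placeholder; substitute the hub path this checkpoint is published under):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with this checkpoint's actual hub path.
model_id = "lapp0/distily_multi_experiment"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```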
# Model Architecture
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
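These figures can be cross-checked in a few lines; since the student mirrors the `gpt2` architecture, the sketch below uses the teacher checkpoint as a stand-in:

```python
import torch
from transformers import AutoModelForCausalLM

# The student mirrors gpt2's architecture; load in bfloat16 as reported above.
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
n_params = sum(p.numel() for p in model.parameters())
n_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(type(model).__name__)        # GPT2LMHeadModel
print(f"{n_params:,} parameters")  # 124,439,808
print(f"{n_bytes / 1e9:.2f} GB")   # roughly the 0.24 GB reported (accounting may differ)
```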
# Evaluation Metrics Comparison
Perplexity columns are measured on English Wikipedia (`enwikippl`), French Wikipedia (`frwikippl`), TinyStories (`tinystoriesppl`), and Chinese Wikipedia (`zhwikippl`); `runtime` is the eval runtime in seconds.
| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 957777707008.0 | 56624848830464.0 | 45.7836 | 30.1745 | 82.852 | 10.373 | 2566914048.0 | 36283883716608.0 |
| 2500 | 0.0404 | 2032.0 | 25856.0 | 20.6614 | 30.1889 | 82.812 | 10.368 | 1432.0 | 92672.0 |
| 5000 | 0.0808 | 488.0 | 3008.0 | 18.3376 | 30.1572 | 82.899 | 10.379 | 412.0 | 984.0 |
| 7500 | 0.1212 | 276.0 | 1488.0 | 16.9326 | 30.1713 | 82.86 | 10.374 | 239.0 | 231.0 |
| 10000 | 0.1616 | 204.0 | 880.0 | 16.0592 | 30.189 | 82.812 | 10.368 | 182.0 | 294.0 |
| 12500 | 0.2020 | 149.0 | 506.0 | 14.9386 | 30.151 | 82.916 | 10.381 | 125.0 | 158.0 |
| 15000 | 0.2424 | 119.5 | 470.0 | 14.3804 | 30.2036 | 82.772 | 10.363 | 86.5 | 144.0 |
| 17500 | 0.2828 | 93.5 | 430.0 | 14.0105 | 30.2613 | 82.614 | 10.343 | 72.5 | 178.0 |
| 20000 | 0.3232 | 77.0 | 280.0 | 13.4770 | 30.1964 | 82.791 | 10.365 | 59.75 | 85.5 |
| 22500 | 0.3636 | 63.75 | 219.0 | 12.9954 | 30.296 | 82.519 | 10.331 | 50.25 | 75.0 |
| 25000 | 0.4040 | 60.75 | 185.0 | 12.7840 | 30.2946 | 82.523 | 10.332 | 46.0 | 74.5 |
| 27500 | 0.4444 | 58.75 | 190.0 | 12.6366 | 30.3968 | 82.246 | 10.297 | 41.0 | 51.25 |
| 30000 | 0.4848 | 58.75 | 177.0 | 12.6497 | 30.3256 | 82.439 | 10.321 | 42.5 | 62.5 |
| 32500 | 0.5253 | 59.5 | 171.0 | 12.5958 | 30.3473 | 82.38 | 10.314 | 38.75 | 69.0 |
| 35000 | 0.5657 | 55.5 | 164.0 | 12.4809 | 30.4047 | 82.224 | 10.294 | 36.25 | 49.25 |
| 37500 | 0.6061 | 55.75 | 165.0 | 12.4218 | 30.2813 | 82.559 | 10.336 | 35.0 | 51.5 |
| 40000 | 0.6465 | 54.0 | 147.0 | 12.3726 | 30.199 | 82.784 | 10.365 | 33.75 | 51.75 |
| 42500 | 0.6869 | 55.0 | 144.0 | 12.3525 | 30.6915 | 81.456 | 10.198 | 34.0 | 53.75 |
| 45000 | 0.7273 | 50.75 | 129.0 | 12.1198 | 30.7649 | 81.262 | 10.174 | 29.875 | 36.5 |
| 47500 | 0.7677 | 50.5 | 122.5 | 12.0744 | 30.289 | 82.538 | 10.334 | 28.875 | 34.75 |
| 50000 | 0.8081 | 49.5 | 121.5 | 12.0388 | 30.3972 | 82.244 | 10.297 | 28.75 | 34.25 |
| 52500 | 0.8485 | 50.0 | 122.5 | 12.0214 | 30.351 | 82.37 | 10.313 | 28.5 | 38.75 |
| 55000 | 0.8889 | 49.5 | 119.0 | 11.9902 | 30.2303 | 82.698 | 10.354 | 27.625 | 34.5 |
| 57500 | 0.9293 | 49.25 | 119.0 | 11.9806 | 30.6005 | 81.698 | 10.229 | 27.625 | 33.25 |
| 60000 | 0.9697 | 49.25 | 118.0 | 11.9745 | 31.1957 | 80.139 | 10.033 | 27.5 | 33.0 |
| 61875 | 1.0 | 49.0 | 118.0 | 11.9734 | 31.2236 | 80.068 | 10.024 | 27.5 | 33.0 |
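The perplexity figures above come from Distily's eval harness; as a rough illustration of the metric itself, here is a generic sketch of causal-LM perplexity (the model, text, and reduction choices are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text):
    # Perplexity = exp(mean next-token cross-entropy).
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
print(perplexity(model, tokenizer, "Paris is the capital of France."))
```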
# Resource Usage Comparison
- VRAM Use: 7.7830 GB
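The measurement methodology is not spelled out here; a minimal sketch of capturing peak VRAM with PyTorch, assuming the figure reflects `max_memory_allocated`:

```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run training (or a few representative steps) here ...
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak VRAM: {peak_gb:.4f} GB")
```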
# Distillation (Teacher -> Student) Architecture Difference
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
Module Diff Details: none (the teacher and student module structures are identical, so the generated diff is empty).
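A sketch of how such a module-level diff could be produced (illustrative, not Distily's internal routine; the student path is a placeholder):

```python
from transformers import AutoModelForCausalLM

teacher = AutoModelForCausalLM.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder for the student checkpoint

teacher_mods = {name: type(mod).__name__ for name, mod in teacher.named_modules()}
student_mods = {name: type(mod).__name__ for name, mod in student.named_modules()}

# Print any module whose class differs between teacher and student.
for name in sorted(set(teacher_mods) | set(student_mods)):
    if teacher_mods.get(name) != student_mods.get(name):
        print(f"- {name}: {teacher_mods.get(name)}")
        print(f"+ {name}: {student_mods.get(name)}")
```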
# Train Dataset
Trained on 145,725,467 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`
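The equivalent Hugging Face Datasets call would look roughly like this (the sampling order and split seeding are assumptions):

```python
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
ds = ds.select(range(250_000))                        # dataset_sample_size
split = ds.train_test_split(test_size=0.01, seed=42)  # dataset_test_size
print(split["train"].num_rows)                        # 247,500 train samples
```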
# Training Objective
```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))
```
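In words: the student is trained to match the teacher's next-token distribution (KL divergence on logits, weight 1) and its attention maps (cosine distance, weight 25.0, routed through a `layer-2` layer mapper). A minimal sketch of such a combined loss (the index-to-index layer pairing and the reductions are assumptions, not Distily's exact implementation):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    # Logits component: KL divergence between student and teacher
    # next-token distributions (weight 1).
    kl = F.kl_div(
        F.log_softmax(student_out.logits, dim=-1),
        F.softmax(teacher_out.logits, dim=-1),
        reduction="batchmean",
    )
    # Attention component: mean cosine distance between attention maps
    # (weight 25.0). Layers are paired index-to-index here; the exact
    # semantics of the `layer-2` mapper are an assumption.
    attn = 0.0
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions):
        attn = attn + (1 - F.cosine_similarity(
            s_attn.flatten(1), t_attn.flatten(1), dim=-1
        ).mean())
    attn = attn / len(student_out.attentions)
    return kl + attn_weight * attn

tokenizer = AutoTokenizer.from_pretrained("gpt2")
teacher = AutoModelForCausalLM.from_pretrained("gpt2").eval()
student = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for the bitnet student
batch = tokenizer("An example sentence for distillation.", return_tensors="pt")
with torch.no_grad():
    t_out = teacher(**batch, output_attentions=True)
s_out = student(**batch, output_attentions=True)
print(distillation_loss(s_out, t_out).item())
```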
# Hyperparameters
The following hyperparameters were used during training (an optimizer and LR-scheduler sketch follows the list):
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
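Taken together, the optimizer and schedule settings correspond roughly to the following (a sketch using standard Transformers utilities; the `gpt2` load is a stand-in for the student):

```python
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for the student
num_training_steps = 61_875  # final step in the eval table (epoch 1.0)

optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
# warmup_ratio 0.5: the LR ramps linearly for the first half of training,
# then decays linearly to zero over the second half.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.5 * num_training_steps),
    num_training_steps=num_training_steps,
)
```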
# Framework Versions
- Distily 0.2.0
- Transformers 4.44.0
- PyTorch 2.3.0
- Datasets 2.21.0