---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library,
using the teacher model [gpt2](https://huggingface.co/gpt2)
on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
# Model Architecture
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
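The student is a standard `GPT2LMHeadModel` and loads like any other causal LM checkpoint. A minimal usage sketch follows; the repo id is a placeholder assumption, so substitute the actual Hub id of this checkpoint.
```python
# Minimal usage sketch. The repo id below is a placeholder assumption;
# replace it with the actual Hugging Face Hub id of this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_multi_experiment"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Wikipedia is a free online encyclopedia", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```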
# Evaluation Metrics Comparison
| step | epoch | enwikippl | frwikippl | loss | runtime (s) | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 45.7764 | 30.605 | 81.686 | 10.227 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 2096.0 | 18560.0 | 20.7820 | 30.3978 | 82.243 | 10.297 | 1352.0 | 77312.0 |
| 5000 | 0.0808 | 486.0 | 3120.0 | 18.4128 | 30.5404 | 81.859 | 10.249 | 338.0 | 1144.0 |
| 7500 | 0.1212 | 276.0 | 1296.0 | 17.0012 | 30.2492 | 82.647 | 10.347 | 255.0 | 250.0 |
| 10000 | 0.1616 | 202.0 | 760.0 | 16.2091 | 30.3937 | 82.254 | 10.298 | 188.0 | 306.0 |
| 12500 | 0.2020 | 145.0 | 536.0 | 15.0998 | 30.5518 | 81.828 | 10.245 | 131.0 | 177.0 |
| 15000 | 0.2424 | 124.0 | 488.0 | 14.5293 | 30.3974 | 82.244 | 10.297 | 93.0 | 147.0 |
| 17500 | 0.2828 | 94.5 | 376.0 | 14.1505 | 30.3113 | 82.477 | 10.326 | 76.0 | 137.0 |
| 20000 | 0.3232 | 78.5 | 308.0 | 13.6441 | 30.3336 | 82.417 | 10.319 | 63.25 | 129.0 |
| 22500 | 0.3636 | 66.5 | 217.0 | 13.1484 | 30.4175 | 82.189 | 10.29 | 49.25 | 84.0 |
| 25000 | 0.4040 | 63.25 | 200.0 | 12.9620 | 30.3475 | 82.379 | 10.314 | 44.25 | 82.0 |
| 27500 | 0.4444 | 59.5 | 202.0 | 12.8450 | 30.6172 | 81.653 | 10.223 | 40.0 | 86.5 |
| 30000 | 0.4848 | 59.25 | 201.0 | 12.8301 | 30.3687 | 82.322 | 10.307 | 42.25 | 67.0 |
| 32500 | 0.5253 | 58.5 | 175.0 | 12.7672 | 30.3752 | 82.304 | 10.304 | 38.75 | 67.5 |
| 35000 | 0.5657 | 57.75 | 171.0 | 12.6596 | 30.4274 | 82.163 | 10.287 | 36.5 | 51.5 |
| 37500 | 0.6061 | 56.25 | 158.0 | 12.5992 | 30.4042 | 82.226 | 10.295 | 37.25 | 47.25 |
| 40000 | 0.6465 | 56.25 | 157.0 | 12.5809 | 30.4949 | 81.981 | 10.264 | 34.0 | 65.5 |
| 42500 | 0.6869 | 55.25 | 149.0 | 12.5176 | 30.4213 | 82.179 | 10.289 | 34.25 | 50.25 |
| 45000 | 0.7273 | 51.0 | 135.0 | 12.2865 | 30.4957 | 81.979 | 10.264 | 30.375 | 42.75 |
| 47500 | 0.7677 | 50.75 | 126.5 | 12.2402 | 30.2924 | 82.529 | 10.333 | 29.375 | 35.5 |
| 50000 | 0.8081 | 50.25 | 124.5 | 12.2185 | 30.3557 | 82.357 | 10.311 | 28.875 | 39.25 |
| 52500 | 0.8485 | 49.25 | 122.0 | 12.1911 | 30.3408 | 82.397 | 10.316 | 28.625 | 35.5 |
| 55000 | 0.8889 | 49.25 | 120.5 | 12.1575 | 30.4254 | 82.168 | 10.287 | 28.0 | 35.0 |
| 57500 | 0.9293 | 48.75 | 119.5 | 12.1461 | 30.4084 | 82.214 | 10.293 | 28.0 | 33.75 |
| 60000 | 0.9697 | 48.75 | 119.5 | 12.1392 | 30.3285 | 82.431 | 10.32 | 27.875 | 33.5 |
| 61875 | 1.0 | 48.75 | 119.5 | 12.1389 | 30.3428 | 82.392 | 10.315 | 27.75 | 33.75 |
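The `*ppl` columns report the student's perplexity on held-out text from each corpus. The exact evaluation protocol (sequence length, batching, corpus slices) is defined by Distily; the sketch below only illustrates the underlying quantity, perplexity as the exponential of the mean cross-entropy loss.
```python
# Sketch of a sequence-level perplexity computation (exp of mean cross-entropy).
# Evaluation details follow Distily, not this snippet.
import torch

def perplexity(model, tokenizer, text, max_length=1024):
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()
```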
# Resource Usage Comparison
- VRAM Use: 7.7830 GB
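A peak-VRAM figure like the one above can be read from PyTorch's CUDA memory statistics; the measurement Distily performs may differ in detail.
```python
# One way to read off peak VRAM after a training run; Distily's own
# measurement may sample memory differently.
import torch

torch.cuda.reset_peak_memory_stats()
# ... training / evaluation happens here ...
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"VRAM Use: {peak_gb:.4f} GB")
```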
# Distillation (Teacher -> Student) Architecture Difference
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
Module Diff Details: none recorded.
# Train Dataset
Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`
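The same subset and split can be loaded with the `datasets` library. In the sketch below, the subset, split, and sample size come from the hyperparameters listed further down; the shuffle seed and sampling strategy are illustrative assumptions.
```python
# Reproducing the training data selection under stated assumptions.
from datasets import load_dataset

dataset = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
sample = dataset.shuffle(seed=42).select(range(250_000))
print(sample[0]["text"][:200])
```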
# Training Objective
```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))
```
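The objective combines a KL-divergence loss on the logits (weight 1) with a cosine loss on attention maps (weight 25.0). A hedged sketch of that combination is shown below; the `layer-2` layer mapper is Distily-specific, so the attention layers are simply paired one-to-one here for illustration.
```python
# Hedged sketch of the objective above: KL divergence between student and teacher
# logits plus a cosine loss over attention maps. Both models must be called with
# output_attentions=True for `.attentions` to be populated.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    # Logits component: KL(teacher || student) over the vocabulary at each position.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Attention component: 1 - cosine similarity between paired attention maps.
    attn_loss = 0.0
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions):
        sims = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_loss = attn_loss + (1 - sims).mean()

    return logits_loss + attn_weight * attn_loss
```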
# Hyperparameters
The following hyperparameters were used during training:
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
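The schedule above amounts to a learning rate of 1e-4 with linear warmup over half of the roughly 61,875 optimizer steps, followed by linear decay. A sketch under stated assumptions: `model` is the student being trained, and the use of `AdamW` mirrors the Hugging Face Trainer default rather than anything confirmed by this card.
```python
# Sketch of the optimizer and LR schedule implied by the hyperparameters above.
import torch
from transformers import get_linear_schedule_with_warmup

num_training_steps = 61_875  # final step reported in the metrics table
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.5 * num_training_steps),
    num_training_steps=num_training_steps,
)
```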
# Framework Versions
- Distily 0.2.0
- Transformers 4.44.0
- PyTorch 2.3.0
- Datasets 2.21.0