---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library, using [gpt2](https://huggingface.co/gpt2) as the teacher model, on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
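A minimal loading sketch (the repo id below is a placeholder, not necessarily this model's actual Hub path):
```python
# Hedged usage sketch; substitute the full Hub repo id for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "distily_multi_experiment"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The history of Wikipedia began", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```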
# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
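As a quick consistency check (not part of the original card), the reported size follows from the parameter count and the 2-byte bfloat16 storage, landing near 0.24 GB under either GB convention:
```python
# Back-of-the-envelope size check for 124,439,808 bfloat16 parameters.
params = 124_439_808
bytes_per_param = 2                     # bfloat16 = 16 bits
size_bytes = params * bytes_per_param
print(f"{size_bytes / 1e9:.2f} GB")     # ~0.25 GB (decimal)
print(f"{size_bytes / 2**30:.2f} GiB")  # ~0.23 GiB (binary)
```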
# Evaluation Metrics Comparison
| step | epoch | enwikippl | frwikippl | loss | runtime (s) | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2130303778816.0 | 135239930216448.0 | 25.4492 | 25.2159 | 99.144 | 12.413 | 10334765056.0 | 38482906972160.0 |
| 2500 | 0.0404 | 892.0 | 6624.0 | 6.0041 | 25.2726 | 98.921 | 12.385 | 572.0 | 7200.0 |
| 5000 | 0.0808 | 372.0 | 1912.0 | 5.0307 | 25.3204 | 98.734 | 12.362 | 294.0 | 298.0 |
| 7500 | 0.1212 | 228.0 | 856.0 | 4.5171 | 25.3333 | 98.684 | 12.355 | 199.0 | 182.0 |
| 10000 | 0.1616 | 181.0 | 696.0 | 4.2020 | 25.3363 | 98.673 | 12.354 | 152.0 | 171.0 |
| 12500 | 0.2020 | 125.0 | 478.0 | 3.8242 | 25.3199 | 98.737 | 12.362 | 98.0 | 141.0 |
| 15000 | 0.2424 | 114.5 | 414.0 | 3.6365 | 25.3181 | 98.744 | 12.363 | 81.0 | 143.0 |
| 17500 | 0.2828 | 97.5 | 346.0 | 3.5083 | 25.3564 | 98.595 | 12.344 | 79.0 | 102.0 |
| 20000 | 0.3232 | 76.0 | 290.0 | 3.3324 | 25.2953 | 98.833 | 12.374 | 66.5 | 106.0 |
| 22500 | 0.3636 | 66.5 | 215.0 | 3.1472 | 25.3749 | 98.523 | 12.335 | 50.5 | 82.0 |
| 25000 | 0.4040 | 60.0 | 197.0 | 3.0655 | 25.3266 | 98.711 | 12.359 | 47.25 | 65.0 |
| 27500 | 0.4444 | 61.75 | 197.0 | 3.0053 | 25.3492 | 98.622 | 12.348 | 43.0 | 80.0 |
| 30000 | 0.4848 | 60.0 | 182.0 | 2.9949 | 25.3186 | 98.741 | 12.362 | 43.5 | 68.5 |
| 32500 | 0.5253 | 59.75 | 179.0 | 2.9704 | 25.3443 | 98.642 | 12.35 | 40.25 | 83.5 |
| 35000 | 0.5657 | 57.5 | 175.0 | 2.9129 | 25.3354 | 98.676 | 12.354 | 38.25 | 64.5 |
| 37500 | 0.6061 | 58.75 | 183.0 | 2.8906 | 25.3463 | 98.634 | 12.349 | 36.75 | 62.5 |
| 40000 | 0.6465 | 54.75 | 157.0 | 2.8612 | 25.3397 | 98.659 | 12.352 | 37.25 | 56.5 |
| 42500 | 0.6869 | 55.25 | 168.0 | 2.8460 | 25.3447 | 98.64 | 12.35 | 36.5 | 54.75 |
| 45000 | 0.7273 | 50.5 | 135.0 | 2.7442 | 25.34 | 98.658 | 12.352 | 30.125 | 42.5 |
| 47500 | 0.7677 | 50.25 | 129.0 | 2.7166 | 25.3456 | 98.637 | 12.349 | 30.75 | 39.5 |
| 50000 | 0.8081 | 49.75 | 132.0 | 2.7034 | 25.2334 | 99.075 | 12.404 | 29.5 | 41.75 |
| 52500 | 0.8485 | 48.5 | 129.0 | 2.6925 | 25.3023 | 98.805 | 12.37 | 28.875 | 37.25 |
| 55000 | 0.8889 | 48.5 | 127.5 | 2.6764 | 25.2784 | 98.898 | 12.382 | 28.25 | 34.5 |
| 57500 | 0.9293 | 48.25 | 122.0 | 2.6696 | 25.3217 | 98.729 | 12.361 | 28.25 | 33.75 |
| 60000 | 0.9697 | 48.25 | 121.5 | 2.6663 | 25.3402 | 98.657 | 12.352 | 28.0 | 33.5 |
| 61875 | 1.0 | 48.25 | 121.5 | 2.6658 | 25.2795 | 98.895 | 12.382 | 28.0 | 33.5 |
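The `*ppl` columns are perplexities measured on text from English, French, and Chinese Wikipedia and from TinyStories. A minimal sketch of how such a perplexity can be computed with Transformers (Distily's exact evaluation windows, stride, and sample selection are not documented here, so treat this as illustrative):
```python
# Hedged sketch: perplexity of a causal LM on a text sample.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # teacher; swap in the distilled student to reproduce its rows
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

text = "Wikipedia is a free online encyclopedia written by volunteers."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss
print(torch.exp(loss).item())  # perplexity = exp(mean token negative log-likelihood)
```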
# Resource Usage Comparison
- VRAM Use: 7.7830 GB
# Distillation (Teacher -> Student) Architecture Difference:
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
Module Diff Details: none recorded.
# Train Dataset
Trained on 145,721,245 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`
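The same corpus configuration can be loaded directly with the `datasets` library (streaming shown here because the full English dump is large; the sampling down to 247,500 examples is handled by Distily):
```python
# Load the training corpus configuration listed above.
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
sample = next(iter(ds))
print(sample["text"][:200])  # the `text` column is the training input
```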
# Training Objective
```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))
```
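In plain terms, the student is trained to match the teacher's next-token distribution with a KL-divergence loss (weight 1) and the teacher's attention maps with a cosine loss (weight 5), using a layer mapper that pairs student layers to teacher layers. A rough PyTorch sketch of that combined loss follows; the names, shapes, and layer pairing are illustrative, not Distily's internal API:
```python
# Hedged sketch of the combined distillation loss; Distily's actual
# implementation and "layer-2" layer mapping may differ in detail.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns,
                      logits_weight=1.0, attn_weight=5.0):
    # KL divergence between teacher and student token distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Cosine distance between paired attention maps, averaged over layers.
    cos = 0.0
    for s_attn, t_attn in zip(student_attns, teacher_attns):
        cos = cos + (1 - F.cosine_similarity(
            s_attn.flatten(1), t_attn.flatten(1), dim=-1)).mean()
    cos = cos / len(student_attns)
    return logits_weight * kl + attn_weight * cos
```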
# Hyperparameters
The following hyperparameters were used during training (a short scheduler sketch follows the list):
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
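With `warmup_ratio: 0.5` and a linear scheduler, the learning rate ramps from 0 up to 1e-4 over the first half of the single epoch and decays linearly back to 0 over the second half. A sketch using Transformers' schedule helper (the total step count is taken from the metrics table; in practice Distily and the Trainer wire this up internally):
```python
# Hedged sketch of the optimizer/schedule implied by the hyperparameters above.
import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 61_875                    # final step reported in the metrics table
warmup_steps = int(0.5 * total_steps)   # warmup_ratio = 0.5

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model.parameters()
optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(optimizer, warmup_steps, total_steps)
```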
# Framework Versions
- Distily 0.2.0
- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0