---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library,
using the teacher model [gpt2](https://huggingface.co/gpt2)
on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
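A minimal usage sketch follows; the repository path is a placeholder for wherever this checkpoint is hosted, and the student is assumed to share the standard gpt2 tokenizer.

```python
# Minimal usage sketch. "<repo-id>" is a placeholder for this model's Hub path;
# the tokenizer is assumed to be the standard gpt2 tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("<repo-id>")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```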
# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
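As a rough sanity check, the reported size follows from the parameter count and the 2-byte bfloat16 dtype; a sketch assuming the stock gpt2 checkpoint:

```python
# Rough arithmetic check: parameter count x 2 bytes (bfloat16).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")                 # 124,439,808
print(f"~{n_params * 2 / 1024**3:.2f} GiB")       # roughly 0.23-0.25 GB depending on GB vs GiB
```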
# Evaluation Metrics Comparison
Columns ending in `ppl` report perplexity (lower is better) on English Wikipedia, French Wikipedia, Chinese Wikipedia, and TinyStories; `runtime` is the evaluation wall-clock time in seconds.
| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 45.7764 | 25.2689 | 98.936 | 12.387 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 2256.0 | 19712.0 | 20.8732 | 25.2001 | 99.206 | 12.421 | 1480.0 | 77312.0 |
| 5000 | 0.0808 | 486.0 | 3104.0 | 18.4096 | 25.2957 | 98.831 | 12.374 | 338.0 | 1128.0 |
| 7500 | 0.1212 | 276.0 | 1296.0 | 17.0014 | 25.2794 | 98.895 | 12.382 | 255.0 | 249.0 |
| 10000 | 0.1616 | 202.0 | 756.0 | 16.2107 | 25.3199 | 98.736 | 12.362 | 187.0 | 296.0 |
| 12500 | 0.2020 | 145.0 | 540.0 | 15.1004 | 25.2816 | 98.886 | 12.381 | 131.0 | 176.0 |
| 15000 | 0.2424 | 123.5 | 488.0 | 14.5287 | 25.2154 | 99.146 | 12.413 | 93.5 | 146.0 |
| 17500 | 0.2828 | 94.5 | 374.0 | 14.1501 | 25.2965 | 98.828 | 12.373 | 76.0 | 134.0 |
| 20000 | 0.3232 | 78.5 | 306.0 | 13.6428 | 25.2525 | 99.0 | 12.395 | 63.25 | 149.0 |
| 22500 | 0.3636 | 66.0 | 218.0 | 13.1496 | 25.2593 | 98.973 | 12.391 | 50.0 | 82.0 |
| 25000 | 0.4040 | 63.5 | 202.0 | 12.9562 | 25.3101 | 98.775 | 12.367 | 44.0 | 74.0 |
| 27500 | 0.4444 | 59.0 | 200.0 | 12.8429 | 25.2626 | 98.96 | 12.39 | 40.25 | 65.0 |
| 30000 | 0.4848 | 58.25 | 198.0 | 12.8220 | 25.2928 | 98.843 | 12.375 | 39.75 | 61.25 |
| 32500 | 0.5253 | 59.25 | 171.0 | 12.7662 | 25.3006 | 98.812 | 12.371 | 39.25 | 55.25 |
| 35000 | 0.5657 | 58.25 | 171.0 | 12.6525 | 25.3008 | 98.811 | 12.371 | 36.5 | 47.25 |
| 37500 | 0.6061 | 57.0 | 158.0 | 12.6054 | 25.3065 | 98.789 | 12.368 | 36.5 | 49.5 |
| 40000 | 0.6465 | 55.5 | 159.0 | 12.5787 | 25.1987 | 99.212 | 12.421 | 33.5 | 64.5 |
| 42500 | 0.6869 | 55.0 | 151.0 | 12.5195 | 25.2996 | 98.816 | 12.372 | 34.75 | 48.25 |
| 45000 | 0.7273 | 50.75 | 135.0 | 12.2859 | 25.2872 | 98.864 | 12.378 | 29.25 | 43.5 |
| 47500 | 0.7677 | 50.75 | 125.5 | 12.2428 | 25.1481 | 99.411 | 12.446 | 28.75 | 38.25 |
| 50000 | 0.8081 | 50.25 | 123.5 | 12.2168 | 25.2928 | 98.842 | 12.375 | 28.375 | 37.25 |
| 52500 | 0.8485 | 49.0 | 121.0 | 12.1935 | 25.3029 | 98.803 | 12.37 | 28.125 | 36.0 |
| 55000 | 0.8889 | 49.0 | 121.0 | 12.1614 | 25.271 | 98.927 | 12.386 | 27.5 | 35.75 |
| 57500 | 0.9293 | 48.75 | 119.5 | 12.1483 | 25.2276 | 99.098 | 12.407 | 27.5 | 34.25 |
| 60000 | 0.9697 | 48.75 | 118.5 | 12.1410 | 25.3014 | 98.809 | 12.371 | 27.375 | 33.75 |
| 61875 | 1.0 | 48.5 | 119.0 | 12.1404 | 25.2551 | 98.99 | 12.394 | 27.25 | 33.75 |
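For reference, the perplexity columns follow the usual causal-LM definition, the exponential of the mean token cross-entropy. A generic sketch of that computation (not necessarily Distily's exact evaluation code):

```python
# Generic causal-LM perplexity: exp(mean token cross-entropy).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over shifted tokens
    return torch.exp(loss).item()

print(perplexity("Paris is the capital of France."))
```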
# Resource Usage Comparison
- VRAM Use: 7.7830 GB
# Distillation (Teacher -> Student) Architecture Difference:
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
# Train Dataset
Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`
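The same subset and split can be pulled with the `datasets` library (streaming here to avoid downloading the full dump); the `text` column is the one used for training, per the hyperparameters below.

```python
# Stream the training subset/split; "text" matches dataset_column_name below.
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
print(next(iter(ds))["text"][:200])
```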
# Training Objective
```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))
```
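One reading of this objective is a weighted sum of a KL term on the logits and a cosine-distance term on the attentions. The sketch below reflects that interpretation only, not Distily's implementation, and omits the `layer-2` layer mapper:

```python
# Sketch: total = 1.0 * KL(logits) + 25.0 * cosine distance(attentions).
# Student/teacher attention layer mapping (layer_mapper=layer-2) is omitted.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, student_attn, teacher_attn):
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    cos_dist = 1.0 - F.cosine_similarity(
        student_attn.flatten(start_dim=1),
        teacher_attn.flatten(start_dim=1),
        dim=-1,
    ).mean()
    return 1.0 * kl + 25.0 * cos_dist
```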
# Hyperparameters
The following hyperparameters were used during training (an illustrative optimizer/scheduler sketch follows the list):
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
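An illustrative reconstruction of the optimizer and schedule from these settings; 247,500 samples at batch size 4 gives the 61,875 steps seen in the evaluation table. This is a sketch, not Distily's training code:

```python
# Illustrative optimizer/scheduler setup matching the listed hyperparameters.
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("gpt2")
total_steps = 61_875  # 247,500 samples / train_batch_size 4, one epoch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.5 * total_steps),  # lr_scheduler_warmup_ratio: 0.5
    num_training_steps=total_steps,
)
```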
# Framework Versions
- Distily 0.2.0
- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0