---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library, using [gpt2](https://huggingface.co/gpt2) as the teacher model and the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
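The figures above can be checked by loading the model with `transformers`. This is a minimal sketch; the repository id is an assumption based on the model name in this card and should be replaced with the actual repo.
```python
# Minimal sketch: load the distilled student and check parameter count / dtype.
# The repo id below is hypothetical, inferred from this card's model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_multi_experiment"  # hypothetical; replace with the actual repo
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print(type(model).__name__)                               # GPT2LMHeadModel
print(f"{sum(p.numel() for p in model.parameters()):,}")  # 124,439,808
print(next(model.parameters()).dtype)                     # torch.bfloat16
```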
# Evaluation Metrics Comparison
The perplexity columns (`enwikippl`, `frwikippl`, `tinystoriesppl`, `zhwikippl`) are measured on English Wikipedia, French Wikipedia, TinyStories, and Chinese Wikipedia text; lower is better. A sketch of this evaluation follows the table.
| step | epoch | enwikippl | frwikippl | loss | runtime (s) | samples/s | steps/s | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 25.7744 | 25.1565 | 99.378 | 12.442 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 956.0 | 7968.0 | 6.1178 | 25.2813 | 98.887 | 12.381 | 668.0 | 6432.0 |
| 5000 | 0.0808 | 380.0 | 1896.0 | 5.0307 | 25.2529 | 98.999 | 12.395 | 270.0 | 286.0 |
| 7500 | 0.1212 | 230.0 | 824.0 | 4.5130 | 25.1994 | 99.209 | 12.421 | 202.0 | 174.0 |
| 10000 | 0.1616 | 171.0 | 628.0 | 4.2264 | 25.2715 | 98.926 | 12.386 | 150.0 | 173.0 |
| 12500 | 0.2020 | 127.0 | 482.0 | 3.8535 | 25.2552 | 98.99 | 12.394 | 106.0 | 156.0 |
| 15000 | 0.2424 | 109.5 | 432.0 | 3.6651 | 25.2716 | 98.925 | 12.385 | 88.0 | 154.0 |
| 17500 | 0.2828 | 93.0 | 350.0 | 3.5204 | 25.3046 | 98.796 | 12.369 | 73.5 | 121.0 |
| 20000 | 0.3232 | 76.5 | 280.0 | 3.3349 | 25.2856 | 98.87 | 12.379 | 64.0 | 115.0 |
| 22500 | 0.3636 | 67.5 | 217.0 | 3.1514 | 25.2809 | 98.889 | 12.381 | 52.75 | 79.5 |
| 25000 | 0.4040 | 64.0 | 191.0 | 3.0794 | 25.2078 | 99.176 | 12.417 | 45.25 | 79.0 |
| 27500 | 0.4444 | 59.5 | 208.0 | 3.0351 | 25.2765 | 98.906 | 12.383 | 41.5 | 77.0 |
| 30000 | 0.4848 | 60.25 | 200.0 | 3.0180 | 25.2853 | 98.872 | 12.379 | 43.5 | 69.5 |
| 32500 | 0.5253 | 58.75 | 174.0 | 2.9977 | 25.2495 | 99.012 | 12.396 | 40.5 | 63.75 |
| 35000 | 0.5657 | 58.25 | 172.0 | 2.9422 | 25.2895 | 98.855 | 12.377 | 37.75 | 51.0 |
| 37500 | 0.6061 | 56.5 | 156.0 | 2.9150 | 25.2595 | 98.973 | 12.391 | 38.25 | 59.5 |
| 40000 | 0.6465 | 54.75 | 164.0 | 2.8974 | 25.2591 | 98.974 | 12.392 | 34.75 | 74.5 |
| 42500 | 0.6869 | 54.5 | 155.0 | 2.8849 | 25.2872 | 98.864 | 12.378 | 34.5 | 66.5 |
| 45000 | 0.7273 | 50.75 | 137.0 | 2.7764 | 25.258 | 98.978 | 12.392 | 30.875 | 39.75 |
| 47500 | 0.7677 | 50.25 | 126.0 | 2.7507 | 25.2765 | 98.906 | 12.383 | 29.5 | 39.0 |
| 50000 | 0.8081 | 49.5 | 124.5 | 2.7361 | 25.2105 | 99.165 | 12.415 | 28.75 | 39.0 |
| 52500 | 0.8485 | 48.5 | 120.0 | 2.7265 | 25.2508 | 99.007 | 12.396 | 29.125 | 35.5 |
| 55000 | 0.8889 | 48.0 | 117.5 | 2.7110 | 25.2721 | 98.923 | 12.385 | 28.375 | 33.75 |
| 57500 | 0.9293 | 47.5 | 118.0 | 2.7049 | 25.2774 | 98.903 | 12.383 | 28.0 | 32.25 |
| 60000 | 0.9697 | 47.5 | 117.0 | 2.7018 | 25.2647 | 98.952 | 12.389 | 27.875 | 32.0 |
| 61875 | 1.0 | 47.5 | 117.0 | 2.7011 | 25.2892 | 98.856 | 12.377 | 28.0 | 32.25 |
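As referenced above, the perplexity columns can be approximately reproduced with a standard causal-LM evaluation: mean cross-entropy over tokens, exponentiated. The sketch below assumes that definition; the exact Distily evaluation (sample selection, windowing) may differ.
```python
# Minimal perplexity sketch: exp(mean token cross-entropy) over a list of texts.
# Assumes the standard definition; Distily's exact evaluation code may differ.
import math
import torch

@torch.no_grad()
def perplexity(model, tokenizer, texts, max_length=1024, device="cuda"):
    model = model.to(device).eval()
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length).to(device)
        out = model(**enc, labels=enc["input_ids"])   # loss = mean CE over shifted tokens
        n_tokens = enc["input_ids"].numel() - 1
        total_nll += out.loss.item() * n_tokens
        total_tokens += n_tokens
    return math.exp(total_nll / total_tokens)
```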
# Resource Usage Comparison
- VRAM Use: 7.7830 GB
# Distillation (Teacher -> Student) Architecture Difference:
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
Module Diff Details: none recorded.
# Train Dataset
Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset; a loading sketch follows the details below.
- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`
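As noted above, the same subset and split can be loaded with the `datasets` library. The sample size and test split mirror the hyperparameters listed further down; the shuffle seed and exact sampling procedure are assumptions, not the Distily internals.
```python
# Minimal sketch: load the Wikipedia subset/split used for distillation and
# take a 250,000-document sample with a 1% test split (per the hyperparameters).
# The shuffle/seed details are assumptions, not the exact Distily procedure.
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
ds = ds.shuffle(seed=42).select(range(250_000))          # dataset_sample_size
splits = ds.train_test_split(test_size=0.01, seed=42)    # dataset_test_size
train_texts, eval_texts = splits["train"]["text"], splits["test"]["text"]
```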
# Training Objective
```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))
```
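In words: the student is trained to match the teacher's output distribution with a KL-divergence loss on the logits (weight 1) and the teacher's attention maps with a cosine loss (weight 5). The sketch below illustrates the two terms; it is not the Distily implementation, and in particular the `layer-2` layer mapper is replaced by a simple index-for-index pairing of layers.
```python
# Illustrative sketch of the objective above: KL on logits (weight 1) plus a
# cosine-distance loss on attention maps (weight 5). Not the Distily code;
# the `layer-2` layer mapper is approximated by pairing layers index-for-index.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, logits_weight=1.0, attn_weight=5.0):
    # KL(teacher || student) over the vocabulary, averaged over token positions.
    s_logp = F.log_softmax(student_out.logits, dim=-1).flatten(0, 1)   # (B*T, V)
    t_prob = F.softmax(teacher_out.logits, dim=-1).flatten(0, 1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Cosine distance between matching attention maps, averaged over layers.
    attn_losses = []
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions):
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_losses.append((1.0 - cos).mean())
    attn_loss = torch.stack(attn_losses).mean()

    return logits_weight * logits_loss + attn_weight * attn_loss
```
Both forward passes need `output_attentions=True` so that the `attentions` field is populated.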
# Hyperparameters
The following hyperparameters were used during training; a sketch of the equivalent optimizer and schedule setup follows the list.
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
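For reference, the optimizer and linear schedule with a 0.5 warmup ratio listed above can be approximated with standard `torch`/`transformers` utilities. This is a sketch, not the exact Distily training loop; the plain GPT-2 model stands in for the BitNet student, and with zero weight decay AdamW coincides with Adam.
```python
# Sketch of the optimizer/schedule above: Adam, lr 1e-4, linear decay with
# 50% warmup over 61,875 total steps. Not the exact Distily/Trainer setup.
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in for the BitNet student

num_training_steps = 61_875                            # final step in the metrics table
num_warmup_steps = int(0.5 * num_training_steps)       # warmup_ratio = 0.5

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps
)
```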
# Framework Versions
- Distily 0.2.0
- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0