---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library, using [gpt2](https://huggingface.co/gpt2) as the teacher model and the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

# Model Architecture

- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB

# Evaluation Metrics Comparison

The perplexity columns (`enwikippl`, `frwikippl`, `tinystoriesppl`, `zhwikippl`) are measured on English Wikipedia, French Wikipedia, TinyStories, and Chinese Wikipedia text respectively; lower is better.

| step | epoch | enwikippl | frwikippl | loss | runtime (s) | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 45.7764 | 30.2746 | 82.577 | 10.339 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 2040.0 | 20608.0 | 20.8472 | 30.2064 | 82.764 | 10.362 | 1472.0 | 60160.0 |
| 5000 | 0.0808 | 488.0 | 3120.0 | 18.4130 | 30.6595 | 81.541 | 10.209 | 338.0 | 1112.0 |
| 7500 | 0.1212 | 276.0 | 1296.0 | 17.0012 | 30.289 | 82.538 | 10.334 | 255.0 | 249.0 |
| 10000 | 0.1616 | 202.0 | 756.0 | 16.2078 | 30.2709 | 82.588 | 10.34 | 188.0 | 304.0 |
| 12500 | 0.2020 | 145.0 | 540.0 | 15.0996 | 30.2185 | 82.731 | 10.358 | 131.0 | 176.0 |
| 15000 | 0.2424 | 123.5 | 490.0 | 14.5283 | 30.2533 | 82.636 | 10.346 | 93.0 | 146.0 |
| 17500 | 0.2828 | 95.0 | 376.0 | 14.1520 | 30.2403 | 82.671 | 10.35 | 75.5 | 137.0 |
| 20000 | 0.3232 | 79.0 | 306.0 | 13.6446 | 30.1204 | 83.0 | 10.392 | 63.25 | 130.0 |
| 22500 | 0.3636 | 66.0 | 219.0 | 13.1452 | 30.158 | 82.897 | 10.379 | 50.0 | 80.5 |
| 25000 | 0.4040 | 63.0 | 200.0 | 12.9619 | 30.1269 | 82.982 | 10.389 | 43.75 | 77.5 |
| 27500 | 0.4444 | 59.0 | 197.0 | 12.8388 | 30.3214 | 82.45 | 10.323 | 40.5 | 73.5 |
| 30000 | 0.4848 | 59.5 | 204.0 | 12.8191 | 30.3164 | 82.464 | 10.324 | 40.5 | 70.5 |
| 32500 | 0.5253 | 58.25 | 176.0 | 12.7778 | 30.2231 | 82.718 | 10.356 | 38.75 | 61.75 |
| 35000 | 0.5657 | 58.25 | 169.0 | 12.6562 | 30.35 | 82.372 | 10.313 | 36.5 | 45.5 |
| 37500 | 0.6061 | 56.75 | 158.0 | 12.6014 | 30.3685 | 82.322 | 10.307 | 37.0 | 50.5 |
| 40000 | 0.6465 | 55.0 | 156.0 | 12.5674 | 30.3598 | 82.346 | 10.31 | 33.75 | 59.5 |
| 42500 | 0.6869 | 54.5 | 147.0 | 12.5141 | 30.3209 | 82.451 | 10.323 | 34.25 | 52.5 |
| 45000 | 0.7273 | 50.75 | 135.0 | 12.2860 | 30.244 | 82.661 | 10.349 | 29.5 | 41.75 |
| 47500 | 0.7677 | 50.5 | 127.0 | 12.2408 | 30.3366 | 82.409 | 10.318 | 28.875 | 35.0 |
| 50000 | 0.8081 | 50.25 | 125.5 | 12.2160 | 30.2563 | 82.627 | 10.345 | 28.625 | 39.0 |
| 52500 | 0.8485 | 49.25 | 123.0 | 12.1936 | 30.2253 | 82.712 | 10.356 | 28.5 | 35.5 |
| 55000 | 0.8889 | 49.25 | 121.0 | 12.1620 | 30.1898 | 82.81 | 10.368 | 27.875 | 35.0 |
| 57500 | 0.9293 | 48.75 | 120.0 | 12.1488 | 30.2559 | 82.628 | 10.345 | 27.75 | 33.5 |
| 60000 | 0.9697 | 48.75 | 119.5 | 12.1404 | 30.1517 | 82.914 | 10.381 | 27.625 | 33.25 |
| 61875 | 1.0 | 48.75 | 120.0 | 12.1402 | 30.2129 | 82.746 | 10.36 | 27.625 | 33.5 |

# Resource Usage Comparison

- VRAM Use: 7.7831 GB

# Distillation (Teacher -> Student) Architecture Difference

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.float32 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
Module Diff Details:

```diff
```
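
# Usage

A minimal sketch for loading the distilled student with the Transformers library. The repository id below is a placeholder (an assumption, not part of this card); substitute the Hub id under which this checkpoint is actually published.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub id of this checkpoint.
repo_id = "path/to/distily_multi_experiment"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The card reports the student weights in bfloat16, so load in that dtype.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "The history of Wikipedia begins with"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```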

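The perplexity columns in the table above are produced by Distily's own evaluation code; the snippet below is only a generic perplexity sketch over a single text and is not guaranteed to match those metric definitions (tokenization, stride, and aggregation may differ).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "path/to/distily_multi_experiment"  # placeholder, see Usage above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id).eval()

text = "Paris is the capital and most populous city of France."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # For causal LMs, transformers shifts the labels internally, so `loss`
    # is the mean negative log-likelihood per predicted token.
    loss = model(**enc, labels=enc["input_ids"]).loss
print("perplexity:", torch.exp(loss).item())
```
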
# Train Dataset

Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`

# Training Objective

```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))
```

An illustrative sketch of this combined loss appears after the hyperparameter list below.

# Hyperparameters

The following hyperparameters were used during training:
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
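
The sketch below illustrates the `DistillationObjective` configured above: a KL-divergence term on the logits (weight 1) plus a cosine-distance term on attention maps (weight 25.0). It is an unofficial re-implementation for illustration only, not Distily's code; in particular the `layer-2` layer mapper is not reproduced (attention layers are compared index-to-index), and both forward passes are assumed to have been run with `output_attentions=True`.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, logits_weight=1.0, attn_weight=25.0):
    """Illustrative KL-on-logits + cosine-on-attention distillation loss.

    `student_out` and `teacher_out` are transformers model outputs obtained
    with `output_attentions=True`.  This is a sketch, not Distily's actual
    DistillationObjective.
    """
    # KL divergence between teacher and student next-token distributions.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_p = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_p, reduction="batchmean")

    # Cosine distance between flattened attention maps, averaged over layers.
    attn_loss = 0.0
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions):
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_loss = attn_loss + (1.0 - cos).mean()
    attn_loss = attn_loss / len(student_out.attentions)

    return logits_weight * logits_loss + attn_weight * attn_loss
```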

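The `bitnet` / `1.58b` tags and `student_model_as_bitnet: True` refer to training the student with BitNet-style 1.58-bit (ternary) weights. The snippet below is only an illustrative sketch of the absmean ternary quantization idea from the BitNet b1.58 paper; Distily's actual BitNet student conversion may differ.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Sketch of BitNet b1.58-style weight quantization: scale by the mean
    absolute value, then round each weight to the nearest of {-1, 0, +1}.
    Illustrative only; not Distily's implementation."""
    scale = w.abs().mean() + eps
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale  # approximate dequantization: w_q * scale

# Example: quantize a random weight matrix to ternary values.
w = torch.randn(4, 4)
w_q, scale = absmean_ternary_quantize(w)
print(w_q)
```
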
# Framework Versions

- Distily 0.2.0
- Transformers 4.44.0
- Pytorch 2.3.0
- Datasets 2.21.0