---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library using teacher model [gpt2](https://huggingface.co/gpt2) on the dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB

# Evaluation Metrics Comparison

| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 25.7744 | 25.1005 | 99.6 | 12.47 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 964.0 | 7552.0 | 6.1230 | 25.2379 | 99.057 | 12.402 | 612.0 | 7936.0 |
| 5000 | 0.0808 | 380.0 | 1896.0 | 5.0307 | 25.2556 | 98.988 | 12.393 | 270.0 | 286.0 |
| 7500 | 0.1212 | 230.0 | 824.0 | 4.5126 | 25.2187 | 99.133 | 12.411 | 201.0 | 174.0 |
| 10000 | 0.1616 | 171.0 | 628.0 | 4.2264 | 25.2809 | 98.889 | 12.381 | 150.0 | 174.0 |
| 12500 | 0.2020 | 127.0 | 482.0 | 3.8535 | 25.192 | 99.238 | 12.425 | 106.0 | 156.0 |
| 15000 | 0.2424 | 109.5 | 432.0 | 3.6645 | 25.2166 | 99.141 | 12.412 | 88.0 | 155.0 |
| 17500 | 0.2828 | 93.0 | 350.0 | 3.5201 | 25.2663 | 98.946 | 12.388 | 73.5 | 120.5 |
| 20000 | 0.3232 | 75.5 | 284.0 | 3.3344 | 25.2826 | 98.882 | 12.38 | 63.75 | 143.0 |
| 22500 | 0.3636 | 67.0 | 213.0 | 3.1515 | 25.2744 | 98.914 | 12.384 | 52.0 | 81.0 |
| 25000 | 0.4040 | 62.5 | 196.0 | 3.0761 | 25.2089 | 99.171 | 12.416 | 44.75 | 100.5 |
| 27500 | 0.4444 | 58.5 | 192.0 | 3.0319 | 25.2844 | 98.875 | 12.379 | 40.75 | 69.0 |
| 30000 | 0.4848 | 59.25 | 194.0 | 3.0142 | 25.2883 | 98.86 | 12.377 | 45.0 | 72.5 |
| 32500 | 0.5253 | 59.75 | 173.0 | 2.9997 | 25.2846 | 98.874 | 12.379 | 39.75 | 62.25 |
| 35000 | 0.5657 | 56.5 | 172.0 | 2.9418 | 25.2601 | 98.97 | 12.391 | 37.25 | 60.0 |
| 37500 | 0.6061 | 57.25 | 155.0 | 2.9178 | 25.2746 | 98.914 | 12.384 | 38.0 | 60.5 |
| 40000 | 0.6465 | 55.75 | 166.0 | 2.8984 | 25.2563 | 98.985 | 12.393 | 35.75 | 67.0 |
| 42500 | 0.6869 | 54.75 | 150.0 | 2.8788 | 25.2709 | 98.928 | 12.386 | 35.25 | 58.0 |
| 45000 | 0.7273 | 50.25 | 134.0 | 2.7761 | 25.1686 | 99.33 | 12.436 | 30.25 | 42.5 |
| 47500 | 0.7677 | 50.5 | 124.5 | 2.7511 | 25.2315 | 99.083 | 12.405 | 29.125 | 38.0 |
| 50000 | 0.8081 | 49.0 | 125.0 | 2.7362 | 25.2205 | 99.126 | 12.411 | 28.375 | 40.25 |
| 52500 | 0.8485 | 48.25 | 121.0 | 2.7264 | 25.2403 | 99.048 | 12.401 | 29.0 | 36.25 |
| 55000 | 0.8889 | 48.0 | 118.0 | 2.7109 | 25.1954 | 99.224 | 12.423 | 27.875 | 33.75 |
| 57500 | 0.9293 | 47.5 | 117.5 | 2.7053 | 25.258 | 98.979 | 12.392 | 27.625 | 32.5 |
| 60000 | 0.9697 | 47.5 | 117.0 | 2.7012 | 25.1729 | 99.313 | 12.434 | 27.5 | 32.5 |
| 61875 | 1.0 | 47.5 | 117.5 | 2.7010 | 25.2633 | 98.958 | 12.39 | 27.625 | 32.5 |
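
The `enwikippl`, `frwikippl`, `tinystoriesppl`, and `zhwikippl` columns report the student's evaluation perplexity on the corresponding corpora at each step, while `runtime` and `samples_per_second` describe the roughly 2,500-sample eval pass. The exact metric pipeline is defined by Distily; purely as an illustration, a causal-LM perplexity can be computed with Transformers as sketched below (model path and sample text are placeholders, not the evaluation data used here).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def causal_lm_perplexity(model, tokenizer, text: str) -> float:
    """Perplexity = exp(mean token-level cross-entropy) under the causal LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the shifted
        # next-token cross-entropy loss averaged over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Placeholder checkpoint: "gpt2" is the teacher; swap in this student's repo id.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
print(causal_lm_perplexity(model, tokenizer, "Wikipedia is a free online encyclopedia."))
```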
# Resource Usage Comparison

- VRAM Use: 7.7830 GB

# Distillation (Teacher -> Student) Architecture Difference:

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB

Module Diff Details

```diff

```
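
Since the student keeps the `GPT2LMHeadModel` architecture and bfloat16 weights, a plain Transformers load should suffice for quick inspection. Note, however, that the student was trained with `student_model_as_bitnet: True`, so loading through Distily itself may be preferable for faithful inference. The repo id below is a placeholder, not a confirmed Hub path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "path/to/distily_multi_experiment"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Should print 124,439,808, matching the parameter count above.
print(sum(p.numel() for p in model.parameters()))

inputs = tokenizer("The city of Paris is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```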

# Train Dataset

Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`

# Training Objective

```
DistillationObjective(
    logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl),
    attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2)
)
```
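
The objective combines a KL-divergence loss on the teacher/student logits (weight 1) with a cosine-distance loss on attention activations (weight 5), paired across layers by the `layer-2` mapper. Below is an illustrative PyTorch sketch of such a combined loss, not Distily's actual implementation; the 1:1 layer pairing and the reductions are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns,
                      logits_weight=1.0, attn_weight=5.0):
    # KL divergence between teacher and student next-token distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.log_softmax(teacher_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )

    # Cosine-distance loss on attention tensors. The real "layer-2" mapper
    # decides which teacher layer supervises which student layer; here we
    # simply assume a 1:1 pairing, since both models have 12 layers.
    attn_losses = []
    for s_attn, t_attn in zip(student_attns, teacher_attns):
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_losses.append((1.0 - cos).mean())
    attn_loss = torch.stack(attn_losses).mean()

    return logits_weight * kl + attn_weight * attn_loss
```

In practice, the logits and attention tensors would come from forward passes over the same batch with `output_attentions=True` on both the teacher and the student.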
# Hyperparameters

The following hyperparameters were used during training:

- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
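
With `lr_scheduler_type: linear` and `warmup_ratio: 0.5`, the learning rate climbs linearly to 1e-4 over the first half of the 61,875 optimizer steps (247,500 samples at batch size 4, one epoch) and decays linearly back toward zero over the second half. A minimal sketch of an equivalent optimizer/scheduler pair, using the stock Transformers helper rather than Distily's internals:

```python
import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 61_875                   # 247,500 samples / train_batch_size 4, one epoch
warmup_steps = int(0.5 * total_steps)  # warmup_ratio 0.5

model = torch.nn.Linear(8, 8)          # stand-in module; the real run optimizes the student GPT-2
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)

# The training loop clips gradients to max_grad_norm=1.0, then calls
# scheduler.step() once per optimizer step.
for _ in range(3):                     # a few dummy steps for illustration
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())
```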

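For reference, the dataset fields above point at the 2023-11-01 English Wikipedia snapshot on the Hub. The sketch below pulls a comparable slice with the `datasets` library; the shuffle-then-select sampling shown is an assumption for illustration, not necessarily how Distily draws its sample.

```python
from datasets import load_dataset

# English Wikipedia snapshot used for distillation.
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

# The run used 250,000 examples with 1% held out for evaluation
# (dataset_sample_size=250000, dataset_test_size=0.01). seed=42 mirrors the
# training seed, but the actual sampling order is Distily's own.
sample = ds.shuffle(seed=42).select(range(250_000))
split = sample.train_test_split(test_size=0.01, seed=42)

print(split["train"].num_rows, split["test"].num_rows)  # 247,500 train / 2,500 eval
print(split["train"][0]["text"][:200])
```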
# Framework Versions

- Distily 0.2.0
- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0