---
library_name: transformers
license: mit
base_model: gpt2
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# distily_multi_experiment

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8595

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 1.0

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| No log        | 0      | 0     | 45.5392         |
| 19.25         | 0.0404 | 2500  | 20.5160         |
| 17.0          | 0.0808 | 5000  | 18.1646         |
| 16.375        | 0.1212 | 7500  | 16.8100         |
| 18.5          | 0.1616 | 10000 | 15.9662         |
| 18.125        | 0.2020 | 12500 | 14.8913         |
| 16.125        | 0.2424 | 15000 | 14.2909         |
| 13.875        | 0.2828 | 17500 | 13.9054         |
| 12.5625       | 0.3232 | 20000 | 13.4260         |
| 13.8125       | 0.3636 | 22500 | 12.9026         |
| 14.5625       | 0.4040 | 25000 | 12.6783         |
| 15.1875       | 0.4444 | 27500 | 12.5651         |
| 13.4375       | 0.4848 | 30000 | 12.5742         |
| 6.8125        | 0.5253 | 32500 | 12.5106         |
| 12.0          | 0.5657 | 35000 | 12.3849         |
| 13.9375       | 0.6061 | 37500 | 12.3297         |
| 5.375         | 0.6465 | 40000 | 12.2764         |
| 20.625        | 0.6869 | 42500 | 12.2612         |
| 10.0          | 0.7273 | 45000 | 12.0058         |
| 18.75         | 0.7677 | 47500 | 11.9614         |
| 10.0625       | 0.8081 | 50000 | 11.9339         |
| 16.0          | 0.8485 | 52500 | 11.9123         |
| 18.625        | 0.8889 | 55000 | 11.8770         |
| 15.875        | 0.9293 | 57500 | 11.8680         |
| 11.25         | 0.9697 | 60000 | 11.8611         |

### Framework versions

- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
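
### Example: hyperparameters as `TrainingArguments`

The values listed under *Training hyperparameters* above correspond to fields of the standard `transformers` `TrainingArguments`. The sketch below is a minimal, hedged reconstruction of that configuration only; the output directory name is assumed, and the distillation trainer, loss, and dataset used for this run are not documented in this card.

```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters listed above as TrainingArguments.
# The output_dir name is assumed; the actual trainer and dataset are unknown.
training_args = TrainingArguments(
    output_dir="distily_multi_experiment",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.5,
    num_train_epochs=1.0,
    # Matches the "Adam with betas=(0.9,0.999) and epsilon=1e-08" line above
    # (these are also the library defaults for AdamW).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```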
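
### Example: loading the model

Because the checkpoint keeps the causal-LM layout of its `gpt2` base, it should load with the standard auto classes. This is an untested sketch: the repository id below is assumed from the model name and may need to be replaced with the actual Hub id or a local path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id / local path; replace with the real location of this checkpoint.
model_id = "distily_multi_experiment"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```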