# lmind_hotpot_train1000_eval200_v1_recite_qa_gpt2-large

This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the tyzhu/lmind_hotpot_train1000_eval200_v1_recite_qa dataset. It achieves the following results on the evaluation set:
- Loss: 0.9935
- Accuracy: 0.6450
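A minimal usage sketch with the Transformers library is given below. The prompt template is an assumption: the exact recite-QA format used for fine-tuning is not documented in this card, so check the dataset for the real template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tyzhu/lmind_hotpot_train1000_eval200_v1_recite_qa_gpt2-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# NOTE: this prompt format is a guess; the card does not document the
# template the model was fine-tuned on.
prompt = "Question: Who wrote the novel Dune?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```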
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10.0
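As a sketch, the hyperparameters above map onto a `transformers.TrainingArguments` configuration roughly as follows. The `output_dir` is a placeholder, and the effective batch size may differ if gradient accumulation or multiple GPUs were used; neither is stated in this card.

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="./lmind_hotpot_recite_qa_gpt2-large",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10.0,
)
```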
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3056        | 1.0   | 106  | 2.0514          | 0.5723   |
| 1.9276        | 2.0   | 212  | 1.7741          | 0.5876   |
| 1.5937        | 3.0   | 318  | 1.5708          | 0.6004   |
| 1.2822        | 4.0   | 424  | 1.3977          | 0.6123   |
| 1.0983        | 5.0   | 530  | 1.2644          | 0.6224   |
| 0.97          | 6.0   | 636  | 1.1759          | 0.6291   |
| 0.815         | 7.0   | 742  | 1.0930          | 0.6361   |
| 0.7608        | 8.0   | 848  | 1.0381          | 0.6406   |
| 0.6872        | 9.0   | 954  | 1.0050          | 0.6437   |
| 0.6498        | 10.0  | 1060 | 0.9935          | 0.6450   |
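Assuming the reported validation loss is a mean token-level cross-entropy in nats (the usual convention for Trainer causal-LM evaluation), the final loss corresponds to a perplexity of roughly exp(0.9935) ≈ 2.70:

```python
import math

# Perplexity = exp(cross-entropy loss), assuming the loss is reported in nats.
print(math.exp(0.9935))  # ~2.70
```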
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1