|
---
license: apache-2.0
base_model: Salesforce/codet5-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_t5
  results: []
---
|
|
|
|
|
|
# my_awesome_t5 |
|
|
|
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on a dataset that is not specified in this card.
|
It achieves the following results on the evaluation set: |
|
- Loss: 1.2171 |
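
For reference, below is a minimal inference sketch. It assumes the checkpoint is available under the repository id `my_awesome_t5` and that the task is a sequence-to-sequence generation task over source code; the repository id, prompt format, and generation settings are assumptions and should be adjusted to the actual use case.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumption: the fine-tuned checkpoint is available locally or on the Hub
# under this id; replace it with the actual repository path.
model_id = "my_awesome_t5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example input; the real prompt format depends on the (unspecified) training data.
source = "def add(a, b):\n    return a + b"

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```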
|
|
|
## Model description |
|
|
|
More information needed |
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
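
The hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as in the sketch below. The output directory and the per-epoch evaluation strategy are assumptions (the training script is not included in this card); the Adam betas and epsilon match the Trainer defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Values copied from the hyperparameter list above. output_dir and the
# per-epoch evaluation strategy are assumptions, not stated in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_t5",      # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",     # assumed, based on the per-epoch losses below
)
```

These arguments would then be passed to a `Seq2SeqTrainer` together with the tokenized train/eval datasets and a `DataCollatorForSeq2Seq`.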
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log        | 1.0   | 230   | 1.2051          |
| No log        | 2.0   | 460   | 1.1370          |
| 0.6555        | 3.0   | 690   | 1.0823          |
| 0.6555        | 4.0   | 920   | 1.0613          |
| 0.6297        | 5.0   | 1150  | 1.0527          |
| 0.6297        | 6.0   | 1380  | 1.0374          |
| 0.5191        | 7.0   | 1610  | 1.0325          |
| 0.5191        | 8.0   | 1840  | 1.0476          |
| 0.3767        | 9.0   | 2070  | 1.0445          |
| 0.3767        | 10.0  | 2300  | 1.0559          |
| 0.3019        | 11.0  | 2530  | 1.0426          |
| 0.3019        | 12.0  | 2760  | 1.0470          |
| 0.3019        | 13.0  | 2990  | 1.0517          |
| 0.2442        | 14.0  | 3220  | 1.0504          |
| 0.2442        | 15.0  | 3450  | 1.0704          |
| 0.2052        | 16.0  | 3680  | 1.0704          |
| 0.2052        | 17.0  | 3910  | 1.0751          |
| 0.1635        | 18.0  | 4140  | 1.0692          |
| 0.1635        | 19.0  | 4370  | 1.1003          |
| 0.1434        | 20.0  | 4600  | 1.0987          |
| 0.1434        | 21.0  | 4830  | 1.1031          |
| 0.1245        | 22.0  | 5060  | 1.1043          |
| 0.1245        | 23.0  | 5290  | 1.1033          |
| 0.1124        | 24.0  | 5520  | 1.1323          |
| 0.1124        | 25.0  | 5750  | 1.1335          |
| 0.1124        | 26.0  | 5980  | 1.1224          |
| 0.1033        | 27.0  | 6210  | 1.1446          |
| 0.1033        | 28.0  | 6440  | 1.1607          |
| 0.0816        | 29.0  | 6670  | 1.1571          |
| 0.0816        | 30.0  | 6900  | 1.1723          |
| 0.0783        | 31.0  | 7130  | 1.1534          |
| 0.0783        | 32.0  | 7360  | 1.1756          |
| 0.0704        | 33.0  | 7590  | 1.1762          |
| 0.0704        | 34.0  | 7820  | 1.1752          |
| 0.0635        | 35.0  | 8050  | 1.1784          |
| 0.0635        | 36.0  | 8280  | 1.1868          |
| 0.0564        | 37.0  | 8510  | 1.1890          |
| 0.0564        | 38.0  | 8740  | 1.1972          |
| 0.0564        | 39.0  | 8970  | 1.2013          |
| 0.0524        | 40.0  | 9200  | 1.1970          |
| 0.0524        | 41.0  | 9430  | 1.2033          |
| 0.0503        | 42.0  | 9660  | 1.2091          |
| 0.0503        | 43.0  | 9890  | 1.2124          |
| 0.048         | 44.0  | 10120 | 1.2094          |
| 0.048         | 45.0  | 10350 | 1.2155          |
| 0.0405        | 46.0  | 10580 | 1.2146          |
| 0.0405        | 47.0  | 10810 | 1.2163          |
| 0.0457        | 48.0  | 11040 | 1.2156          |
| 0.0457        | 49.0  | 11270 | 1.2167          |
| 0.0407        | 50.0  | 11500 | 1.2171          |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
|