# VieModTest

This model is a fine-tuned version of KoichiYasuoka/bert-base-vietnamese-ud-goeswith on the taidng/UIT-ViQuAD2.0 dataset. It achieves the following results on the evaluation set:
- Loss: 1.6558
- Exact Match (EM): 50.28
- F1 Score: 70.75
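Exact Match and F1 above follow the SQuAD-style answer-overlap convention. As a minimal sketch (assuming only basic normalization; the official SQuAD script additionally strips English articles, which is skipped here since the data is Vietnamese):

```python
import string
from collections import Counter

def normalize(text):
    # Lowercase, drop ASCII punctuation, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(pred, gold):
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    # Token-overlap F1 between predicted and gold answer spans.
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

The reported scores are these per-example values averaged over the validation set (taking the max over gold answers when several are given).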
## Model description

- This model is probably not too bad, though it is certainly not the best
- Trained a total of 115,354,368 parameters
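The parameter count can be sanity-checked from the BERT-base architecture. The sketch below computes the count from the standard hyperparameters; with bert-base-uncased's vocabulary (30,522) it gives the well-known 109,482,240 figure, and a larger Vietnamese vocabulary plausibly accounts for the ~115M reported here (the exact vocabulary size of the base model is an assumption not stated in this card):

```python
def bert_param_count(vocab, hidden=768, layers=12, max_pos=512,
                     type_vocab=2, intermediate=3072):
    # Embeddings: word + position + token-type tables, plus one LayerNorm.
    emb = (vocab + max_pos + type_vocab) * hidden + 2 * hidden
    # Self-attention: Q/K/V projections, output projection, LayerNorm.
    attn = 3 * (hidden * hidden + hidden) + (hidden * hidden + hidden) + 2 * hidden
    # Feed-forward: up/down projections with biases, LayerNorm.
    ffn = (hidden * intermediate + intermediate) + (intermediate * hidden + hidden) + 2 * hidden
    # Pooler head (dense layer over [CLS]).
    pooler = hidden * hidden + hidden
    return emb + layers * (attn + ffn) + pooler
```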
## Intended uses & limitations

- Building a simple question-answering chatbot
- Only understands Vietnamese
## Training and evaluation data

- taidng/UIT-ViQuAD2.0, "train" split
- taidng/UIT-ViQuAD2.0, "validation" split
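UIT-ViQuAD2.0 is SQuAD-v2-style: each example carries a context, a question, and a character-level answer start, which must be mapped to token start/end labels for extractive QA. A minimal illustration of that mapping, using a naive whitespace tokenizer (real preprocessing uses the model tokenizer's offset mapping instead):

```python
def char_span_to_token_span(context, answer_start, answer_text):
    # Map a character-level answer span to (start_token, end_token)
    # indices over whitespace tokens.
    answer_end = answer_start + len(answer_text)
    spans, pos = [], 0
    for tok in context.split():
        start = context.index(tok, pos)
        spans.append((start, start + len(tok)))
        pos = start + len(tok)
    start_tok = next(i for i, (s, e) in enumerate(spans) if e > answer_start)
    end_tok = next(i for i, (s, e) in enumerate(spans) if e >= answer_end)
    return start_tok, end_tok
```

Unanswerable questions (the "2.0" part of the dataset) are conventionally labeled with the span pointing at the [CLS] token.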
## Training procedure

Based on the Hugging Face Question Answering fine-tuning example.

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
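With 1,423 optimizer steps per epoch (see the table below) and 3 epochs, the linear schedule decays the learning rate from 2e-05 to 0 over 4,269 steps. A sketch of that schedule, assuming zero warmup steps (the card does not state a warmup value), mirroring transformers' `get_linear_schedule_with_warmup`:

```python
def linear_lr(step, base_lr=2e-5, total_steps=4269, warmup_steps=0):
    # Linear warmup to base_lr, then linear decay to 0.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```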
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8987 | 1.0 | 1423 | 1.6217 |
| 1.3346 | 2.0 | 2846 | 1.5800 |
| 1.0633 | 3.0 | 4269 | 1.6558 |
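Note that validation loss bottoms out at epoch 2 while training loss keeps falling, a typical sign of mild overfitting in epoch 3; the epoch-2 checkpoint would be the pick under loss-based selection (assuming per-epoch checkpoints were saved):

```python
# (epoch, train_loss, val_loss) from the table above.
results = [
    (1, 1.8987, 1.6217),
    (2, 1.3346, 1.5800),
    (3, 1.0633, 1.6558),
]
# Select the checkpoint with the lowest validation loss.
best = min(results, key=lambda r: r[2])
```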
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
## Model tree for ZycckZ/Zk1-QA-VN-test

- Base model: FPTAI/vibert-base-cased