jp-speech-classifier
This model is a fine-tuned version of cl-tohoku/bert-base-japanese-v3 on a dataset created from speech records of the Japanese Diet. It achieves the following results on the evaluation set:
- Loss: 1.1895
- Accuracy: 0.7053
Model description
This model classifies Japanese sentences into one of five categories: factual, question, descriptive, opinion-based, or other.
Intended uses & limitations
This model can be used for tasks that require sentence-level categorization of Japanese text. The training dataset is fairly small and domain-specific (Diet speech records), so accuracy is moderate (about 0.71 on the held-out evaluation set) and may be lower on text from other domains.
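For reference, here is a minimal inference sketch using the Transformers pipeline API. The repository id is the one this card belongs to; the label names returned depend on the model's config and may be generic (e.g. LABEL_0) rather than the category names listed above. The base Japanese BERT tokenizer additionally requires fugashi and unidic-lite to be installed.

```python
from transformers import pipeline

# Load the classifier from the Hub.
# Note: the tokenizer for bert-base-japanese-v3 needs `fugashi` and `unidic-lite`.
classifier = pipeline("text-classification", model="kkatodus/jp-speech-classifier")

# Example Diet-style sentence: "Today's meeting is hereby adjourned."
result = classifier("本日の会議はこれにて終了いたします。")
print(result)  # e.g. [{'label': 'LABEL_2', 'score': 0.87}] — label names come from the model config
```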
Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
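The exact training script is not published with this card; the snippet below is a minimal sketch of how the hyperparameters above map onto Hugging Face TrainingArguments. The output_dir and the per-epoch evaluation strategy are assumptions (the results table reports one evaluation per epoch); the Adam betas and epsilon listed above are the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="jp-speech-classifier",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumed: the results table shows per-epoch evaluation
)
```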
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---------------|-------|------|-----------------|----------|
| No log        | 1.0   | 72   | 1.1048          | 0.6772   |
| No log        | 2.0   | 144  | 1.1895          | 0.7053   |
Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3