---
license: apache-2.0
language:
- en
- fr
- es
- it
- de
tags:
- multilingual-classification
- tweet-classification
datasets:
- None
model-index:
- name: multilingual-ili-detection-bernice
  results: []
---

# multilingual-ili-detection-bernice

This model is a fine-tuned version of [jhu-clsp/bernice](https://huggingface.co/jhu-clsp/bernice) for Influenza-Like Illness (ILI) detection in multilingual tweets.

Please reach out to Niti Mishra K.C. (nitimkc at gmail.com) or open an issue if you have questions.

## Model description

The model can be loaded with the following lines of code:

```python
from transformers import AutoModelForSequenceClassification

ili_classification_model = AutoModelForSequenceClassification.from_pretrained(
    'nitimkc/multilingual-ili-detection-bernice', num_labels=2
)
```
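
Once loaded, the classifier can be applied to tweets with the matching tokenizer. The snippet below is a minimal inference sketch, assuming the tokenizer is hosted in the same repository (otherwise the base jhu-clsp/bernice tokenizer should be compatible) and that class index 1 corresponds to ILI-related tweets; that label mapping is an assumption, not documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = 'nitimkc/multilingual-ili-detection-bernice'
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumption: tokenizer is published with the model
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.eval()

tweets = [
    "Fever and body aches all week, can't get out of bed.",
    "Enjoying a sunny afternoon at the beach!",
]

# max_length=64 mirrors the max_len used during training (see below).
inputs = tokenizer(tweets, padding=True, truncation=True, max_length=64, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1).tolist()  # assumption: index 1 = ILI-related
print(predictions)
```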

### Training hyperparameters

The following hyperparameters were used during training; a sketch of an equivalent `Trainer` setup follows the list:

- learning_rate: 2.27436540339458e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 712
- num_epochs: 3
- max_len: 64
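
The snippet below sketches one way these settings could map onto a Hugging Face `Trainer` run. It is illustrative only: the optimizer, learning-rate schedule, and training data are not documented in this card, and the dataset variables are placeholders.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = 'jhu-clsp/bernice'
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(base_model)

training_args = TrainingArguments(
    output_dir='multilingual-ili-detection-bernice',
    learning_rate=2.27436540339458e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    seed=712,
    evaluation_strategy='epoch',  # assumption: metrics in the table below were computed once per epoch
)

# max_len=64 applies at tokenization time, e.g.:
# tokenizer(texts, truncation=True, padding='max_length', max_length=64)

# train_ds and eval_ds stand in for tokenized datasets that are not provided here:
# trainer = Trainer(model=model, args=training_args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```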

### Training results

| Training Loss | Epoch | Validation Loss | Validation F1 |
|:-------------:|:-----:|:---------------:|:-------------:|
| 0.4067        | 1.0   | 0.3081          | 0.8658        |
| 0.2686        | 2.0   | 0.3037          | 0.8821        |
| 0.1881        | 3.0   | 0.3140          | 0.8800        |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.2
- scikit-learn 1.3.2
- sentencepiece 0.1.99
- Tokenizers 0.15.2
- wandb 0.16.3