---
library_name: transformers
tags:
- sinhala
- bert
- masked-language-model
- sinhala-news
license: apache-2.0
language:
- si
metrics:
- perplexity
base_model:
- Ransaka/sinhala-bert-medium-v2
---

# Model Card for Sinhala-BERT Fine-Tuned MLM

This model is a fine-tuned version of `Ransaka/sinhala-bert-medium-v2` on the Sinhala News Corpus dataset for Masked Language Modeling (MLM).

## Model Details

### Model Description

This Sinhala-BERT model was fine-tuned specifically for the Sinhala language to improve its capabilities in Masked Language Modeling. It leverages the architecture of BERT and was further optimized on the Sinhala News Corpus dataset, aiming to achieve better contextual language understanding for Sinhala text.

- **Developed by:** Thilina Gunathilaka
- **Model type:** Transformer-based Language Model (BERT)
- **Language(s) (NLP):** Sinhala (si)
- **License:** Apache-2.0
- **Finetuned from model:** [Ransaka/sinhala-bert-medium-v2](https://huggingface.co/Ransaka/sinhala-bert-medium-v2)

### Model Sources

- **Repository:** [Your Hugging Face Repository URL]
- **Dataset:** [TestData-CrossLingualDocumentSimilarityMeasurement](https://github.com/UdeshAthukorala/TestData-CrossLingualDocumentSimilarityMeasurement)

## Uses

### Direct Use

This model can be used directly for the following tasks (a short usage sketch follows the list):
- Masked Language Modeling (filling missing words or predicting masked tokens)
- Feature extraction for Sinhala text
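
A minimal sketch of masked-token prediction with the `fill-mask` pipeline. The repository id below is a placeholder, and the example sentence should be replaced with Sinhala text containing the tokenizer's mask token:

```python
from transformers import pipeline

# Placeholder Hub id; replace with the actual repository name of this model.
MODEL_ID = "your-username/your-model-name"

fill_mask = pipeline("fill-mask", model=MODEL_ID)

# Build a sentence containing the tokenizer's mask token (usually "[MASK]" for BERT).
mask = fill_mask.tokenizer.mask_token
sentence = f"Your Sinhala sentence with a {mask} token here."  # replace with real Sinhala text

for prediction in fill_mask(sentence):
    print(prediction["token_str"], round(prediction["score"], 4))
```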

### Downstream Use

This model can be fine-tuned further for various downstream NLP tasks in Sinhala, such as:
- Text Classification
- Named Entity Recognition (NER)
- Sentiment Analysis

### Out-of-Scope Use

- This model is specifically trained for Sinhala. Performance on other languages is likely poor.
- Not suitable for tasks unrelated to textual data.

## Bias, Risks, and Limitations

Like any language model, this model may inherit biases from its training data. It's recommended to assess model predictions for biases before deployment in critical applications.

### Recommendations

- Evaluate model biases before deployment.
- Ensure fair and transparent use of this model in sensitive contexts.

## How to Get Started with the Model

Use the code below to get started with this model:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModelForMaskedLM.from_pretrained("your-username/your-model-name")
```
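
For the feature-extraction use mentioned above, the same checkpoint can be loaded as a plain encoder and its hidden states pooled into sentence vectors. A minimal sketch, again assuming the placeholder repository id:

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "your-username/your-model-name"  # placeholder id, as above

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
encoder = AutoModel.from_pretrained(MODEL_ID)

def embed(texts):
    """Return mean-pooled last-hidden-state embeddings for a list of Sinhala sentences."""
    inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state         # (batch, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1).float()     # ignore padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # (batch, hidden)

embeddings = embed(["Sinhala sentence 1", "Sinhala sentence 2"])  # replace with real Sinhala text
print(embeddings.shape)
```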

## Training Details

### Training Data

The model was trained on the Sinhala News Corpus dataset, comprising Sinhala news articles.

### Training Procedure

- **Tokenization**: Sinhala-specific tokenization and text normalization
- **Max Sequence Length**: 128
- **MLM Probability**: 15%
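
A minimal sketch of this preprocessing setup, assuming a Hugging Face `datasets`-style corpus with a `text` column (the column name and dataset loading are illustrative, not taken from the original training script):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("Ransaka/sinhala-bert-medium-v2")

def tokenize(batch):
    # Truncate Sinhala news text to the 128-token maximum; the collator pads dynamically.
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Dynamic masking: 15% of tokens are selected for masking at collation time.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)
```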

#### Training Hyperparameters

- **Epochs:** 25
- **Batch Size:** 2 (Gradient accumulation steps: 2)
- **Optimizer:** AdamW
- **Learning Rate:** 3e-5
- **Precision:** FP32 (mixed precision not used)
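
These settings map onto a standard `Trainer` setup roughly as follows. This is a sketch only: the output path and the `tokenized_train`/`tokenized_eval` dataset names are placeholders carried over from the preprocessing sketch above, and AdamW is simply the `Trainer` default optimizer.

```python
from transformers import AutoModelForMaskedLM, Trainer, TrainingArguments

model = AutoModelForMaskedLM.from_pretrained("Ransaka/sinhala-bert-medium-v2")

training_args = TrainingArguments(
    output_dir="sinhala-bert-mlm",      # placeholder output path
    num_train_epochs=25,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,      # effective batch size of 4
    learning_rate=3e-5,                 # AdamW is the Trainer's default optimizer
    fp16=False,                         # plain FP32, no mixed precision
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,      # placeholder: tokenized splits from the step above
    eval_dataset=tokenized_eval,
    data_collator=data_collator,
)
trainer.train()
```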

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The test split of the Sinhala News Corpus dataset was used.

#### Metrics

- **Perplexity:** Measures how well the model predicts held-out Sinhala text; lower is better.
- **Loss (Cross-Entropy):** Lower is better.
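
Assuming perplexity is computed as the exponential of the mean evaluation cross-entropy loss (the usual convention for MLM evaluation), the two reported numbers are consistent:

```python
import math

eval_loss = 2.77            # reported validation loss
print(math.exp(eval_loss))  # ≈ 15.96, matching the reported perplexity up to rounding
```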

### Results

The final evaluation metrics obtained:

| Metric        | Value |
|---------------|-------|
| Perplexity    | 15.95 |
| Validation Loss | 2.77 |

#### Summary

On the Sinhala News Corpus test split, the fine-tuned model reached a validation loss of 2.77 (perplexity ≈ 15.95), indicating a good masked language modeling fit to Sinhala news text.

## Environmental Impact

Carbon emissions were not explicitly tracked. For estimation, refer to [Machine Learning Impact calculator](https://mlco2.github.io/impact).

- **Hardware Type:** GPU (Tesla T4)
- **Hours used:** [Approximate training hours]
- **Cloud Provider:** Kaggle
- **Compute Region:** [Region used, e.g., us-central]
- **Carbon Emitted:** [Estimated CO2 emissions]

## Technical Specifications

### Model Architecture and Objective

Transformer-based BERT architecture optimized for Masked Language Modeling tasks.

### Compute Infrastructure

#### Hardware
- NVIDIA Tesla T4 GPU

#### Software
- Python 3.10
- Transformers library by Hugging Face
- PyTorch

## Citation

If you use this model, please cite it as:

```bibtex
@misc{yourusername2024sinhalabert,
  author = {Your Name},
  title = {Sinhala-BERT Fine-Tuned on Sinhala News Corpus},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/your-username/your-model-name}}
}
```

## Model Card Authors

- Thilina Gunathilaka