Update README.md (#1)
- Update README.md (a9202b413084cd223cf1c869d2dcf47330d4fcbb)
README.md CHANGED
@@ -1,199 +1,167 @@
(Removed: the auto-generated 🤗 transformers model-card template, with empty headings and "[More Information Needed]" placeholders throughout.)
---
library_name: transformers
tags:
- sinhala
- bert
- masked-language-model
- sinhala-news
license: apache-2.0
language:
- si
metrics:
- perplexity
base_model:
- Ransaka/sinhala-bert-medium-v2
---

# Model Card for Sinhala-BERT Fine-Tuned MLM

This model is a fine-tuned version of `Ransaka/sinhala-bert-medium-v2` on the Sinhala News Corpus dataset for Masked Language Modeling (MLM).

## Model Details

### Model Description

This Sinhala-BERT model was fine-tuned specifically for the Sinhala language to improve its Masked Language Modeling capabilities. It keeps the BERT architecture and was further optimized on the Sinhala News Corpus dataset, aiming at better contextual understanding of Sinhala text.

- **Developed by:** Thilina Gunathilaka
- **Model type:** Transformer-based Language Model (BERT)
- **Language(s) (NLP):** Sinhala (si)
- **License:** Apache-2.0
- **Finetuned from model:** [Ransaka/sinhala-bert-medium-v2](https://huggingface.co/Ransaka/sinhala-bert-medium-v2)

### Model Sources

- **Repository:** [Your Hugging Face Repository URL]
- **Dataset:** [TestData-CrossLingualDocumentSimilarityMeasurement](https://github.com/UdeshAthukorala/TestData-CrossLingualDocumentSimilarityMeasurement)

## Uses

### Direct Use

This model can be used directly for (see the sketch after this list):

- Masked Language Modeling (filling in missing words by predicting masked tokens)
- Feature extraction for Sinhala text
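
As a minimal sketch of the fill-mask use case (the repository id is the placeholder carried over from this card, and the Sinhala sentence is purely illustrative):

```python
from transformers import pipeline

# Placeholder repository id, as elsewhere in this card.
fill_mask = pipeline("fill-mask", model="your-username/your-model-name")

# Illustrative Sinhala input: "The capital of Sri Lanka is [MASK]."
# Use the tokenizer's own mask token in case it differs from [MASK].
masked = f"ශ්‍රී ලංකාවේ අගනුවර {fill_mask.tokenizer.mask_token} වේ."
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```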

### Downstream Use

This model can be further fine-tuned for various downstream Sinhala NLP tasks, such as:

- Text Classification
- Named Entity Recognition (NER)
- Sentiment Analysis

### Out-of-Scope Use

- This model is trained specifically for Sinhala; performance on other languages is likely to be poor.
- It is not suitable for tasks unrelated to textual data.

## Bias, Risks, and Limitations

Like any language model, this model may inherit biases from its training data. It is recommended to assess model predictions for bias before deployment in critical applications.

### Recommendations

- Evaluate model biases before deployment.
- Ensure fair and transparent use of this model in sensitive contexts.

## How to Get Started with the Model

Use the code below to get started with this model:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModelForMaskedLM.from_pretrained("your-username/your-model-name")
```
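
Building on the snippet above, a hedged sketch of masked-token inference (the Sinhala input is illustrative, not from the training data):

```python
import torch

# "The capital of Sri Lanka is [MASK]." -- illustrative input.
text = f"ශ්‍රී ලංකාවේ අගනුවර {tokenizer.mask_token} වේ."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and list its top-5 predicted tokens.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5_ids))
```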

## Training Details

### Training Data

The model was trained on the Sinhala News Corpus dataset, which comprises Sinhala news articles.

### Training Procedure

Key preprocessing settings (a setup sketch follows this list):

- **Tokenization:** Sinhala-specific tokenization and text normalization
- **Max Sequence Length:** 128
- **MLM Probability:** 15%
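
The training script itself is not published; as a sketch under those settings (15% masking, 128-token sequences), the preprocessing might be wired up as follows. The dataset column name `text` is an assumption:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Tokenizer of the base model being fine-tuned.
tokenizer = AutoTokenizer.from_pretrained("Ransaka/sinhala-bert-medium-v2")

def tokenize(batch):
    # Cap sequences at the 128-token maximum listed above.
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Dynamically masks 15% of tokens per batch, matching the MLM probability above.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```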

#### Training Hyperparameters

The main hyperparameters were (a configuration sketch follows this list):

- **Epochs:** 25
- **Batch Size:** 2 (gradient accumulation steps: 2, for an effective batch size of 4)
- **Optimizer:** AdamW
- **Learning Rate:** 3e-5
- **Precision:** FP32 (no mixed precision)
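
A hedged mapping of these hyperparameters onto `TrainingArguments` (the output directory is a placeholder, and the original script may have used different auxiliary settings):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sinhala-bert-mlm",   # placeholder
    num_train_epochs=25,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,   # effective batch size of 4
    learning_rate=3e-5,              # AdamW is the Trainer default optimizer
    fp16=False,                      # plain FP32, no mixed precision
)
```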

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The test split of the Sinhala News Corpus dataset was used.
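
The evaluation script is likewise unpublished; a sketch under assumptions (placeholder file name, with `model`, `training_args`, `data_collator`, and `tokenize` reused from the snippets above):

```python
import math
from datasets import load_dataset
from transformers import Trainer

# Placeholder path to the held-out split of the news corpus.
test_dataset = (
    load_dataset("text", data_files={"test": "sinhala_news_test.txt"})["test"]
    .map(tokenize, batched=True, remove_columns=["text"])
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    eval_dataset=test_dataset,
)

# The MLM collator supplies labels, so evaluate() reports a cross-entropy loss.
eval_loss = trainer.evaluate()["eval_loss"]
print(f"perplexity = {math.exp(eval_loss):.2f}")
```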

#### Metrics

- **Perplexity:** measures language modeling capability; defined as the exponential of the cross-entropy loss.
- **Loss (cross-entropy):** lower is better.

### Results

The final evaluation metrics obtained:

| Metric          | Value |
|-----------------|-------|
| Perplexity      | 15.95 |
| Validation Loss | 2.77  |
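
The two figures are consistent with each other, since perplexity is the exponential of the loss:

```python
import math

# exp(2.77) ≈ 15.96; the reported 15.95 follows from the unrounded loss.
print(round(math.exp(2.77), 2))
```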

#### Summary

The model achieved strong MLM results on the Sinhala News Corpus dataset, demonstrating improved Sinhala language understanding.

## Environmental Impact

Carbon emissions were not explicitly tracked. For an estimate, see the [Machine Learning Impact calculator](https://mlco2.github.io/impact).

- **Hardware Type:** GPU (NVIDIA Tesla T4)
- **Hours used:** [Approximate training hours]
- **Cloud Provider:** Kaggle
- **Compute Region:** [Region used, e.g., us-central]
- **Carbon Emitted:** [Estimated CO2 emissions]

## Technical Specifications

### Model Architecture and Objective

Transformer-based BERT architecture optimized for Masked Language Modeling.

### Compute Infrastructure

#### Hardware

- NVIDIA Tesla T4 GPU

#### Software

- Python 3.10
- Hugging Face Transformers
- PyTorch

## Citation

If you use this model, please cite it as:

```bibtex
@misc{yourusername2024sinhalabert,
  author       = {Your Name},
  title        = {Sinhala-BERT Fine-Tuned on Sinhala News Corpus},
  year         = {2024},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/your-username/your-model-name}}
}
```

## Model Card Authors

- Thilina Gunathilaka