Model Card for Sinhala-BERT Fine-Tuned MLM
This model is a fine-tuned version of Ransaka/sinhala-bert-medium-v2
on the Sinhala News Corpus dataset for Masked Language Modeling (MLM).
Model Details
Model Description
This Sinhala-BERT model was fine-tuned to improve its Masked Language Modeling capability for Sinhala. It uses the BERT architecture and was further trained on the Sinhala News Corpus dataset to achieve better contextual understanding of Sinhala text.
- Developed by: Thilina Gunathilaka
- Model type: Transformer-based Language Model (BERT)
- Language(s) (NLP): Sinhala (si)
- License: Apache-2.0
- Fine-tuned from model: Ransaka/sinhala-bert-medium-v2
Model Sources
- Repository: https://huggingface.co/ThilinaGunathilaka/fine-tune-sinhala-bert-v3
- Dataset: TestData-CrossLingualDocumentSimilarityMeasurement
Uses
Direct Use
This model can be used directly for:
- Masked Language Modeling (filling in missing words by predicting masked tokens)
- Feature extraction for Sinhala text (a minimal sketch follows this list)
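As a minimal sketch of the feature-extraction use case (assuming the repository id listed on this card and an arbitrary Sinhala example sentence), the encoder's hidden states can be mean-pooled into a sentence embedding:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "ThilinaGunathilaka/fine-tune-sinhala-bert-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # loads the encoder without the MLM head

inputs = tokenizer("ශ්‍රී ලංකාව ලස්සන රටකි.", return_tensors="pt")  # illustrative Sinhala sentence
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state to obtain one embedding vector per sentence.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size)
```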
Downstream Use
This model can be fine-tuned further for various downstream NLP tasks in Sinhala (a loading sketch follows this list), such as:
- Text Classification
- Named Entity Recognition (NER)
- Sentiment Analysis
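The sketch below shows one way to reuse this checkpoint for such a downstream task, here text classification; the repository id comes from this card, while the label count is a hypothetical placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ThilinaGunathilaka/fine-tune-sinhala-bert-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=3,  # hypothetical: e.g. negative / neutral / positive for sentiment analysis
)
# A fresh classification head is initialised on top of the pretrained encoder;
# fine-tune it with the Trainer API or a custom PyTorch loop on labelled Sinhala data.
```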
Out-of-Scope Use
- This model is specifically trained for Sinhala. Performance on other languages is likely poor.
- Not suitable for tasks unrelated to textual data.
Bias, Risks, and Limitations
Like any language model, this model may inherit biases from its training data. It's recommended to assess model predictions for biases before deployment in critical applications.
Recommendations
- Evaluate model biases before deployment.
- Ensure fair and transparent use of this model in sensitive contexts.
How to Get Started with the Model
Use the code below to get started with this model:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ThilinaGunathilaka/fine-tune-sinhala-bert-v3")
model = AutoModelForMaskedLM.from_pretrained("ThilinaGunathilaka/fine-tune-sinhala-bert-v3")
```
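For a quick end-to-end check of masked-token prediction, the fill-mask pipeline can be used as below; the Sinhala example sentence is illustrative only:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ThilinaGunathilaka/fine-tune-sinhala-bert-v3")

# Insert the tokenizer's own mask token into an example sentence and inspect the top predictions.
masked_sentence = f"ශ්‍රී ලංකාවේ අගනුවර {fill_mask.tokenizer.mask_token} වේ."
for prediction in fill_mask(masked_sentence, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```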
Training Details
Training Data
The model was trained on the Sinhala News Corpus dataset, comprising Sinhala news articles.
Training Procedure
- Tokenization: Sinhala-specific tokenization and text normalization
- Max Sequence Length: 128
- MLM Probability: 15% of input tokens masked (a preprocessing sketch follows this list)
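The snippet below sketches how this preprocessing can be reproduced with the Hugging Face data collator; the `text` column name is an assumption, while the 128-token limit and the 15% masking probability come from the list above:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("Ransaka/sinhala-bert-medium-v2")

def tokenize(batch):
    # "text" is an assumed column name holding the raw Sinhala news articles.
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Randomly masks 15% of the tokens in each batch for the MLM objective.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)
```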
Training Hyperparameters
- Epochs: 25
- Batch Size: 2 (Gradient accumulation steps: 2)
- Optimizer: AdamW
- Learning Rate: 3e-5
- Precision: FP32 (mixed precision was not used); a `TrainingArguments` sketch follows this list
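A `TrainingArguments` sketch matching these settings is shown below; the output directory is a placeholder, and `optim="adamw_torch"` is one way to select AdamW in recent Transformers releases:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sinhala-bert-mlm",     # placeholder path
    num_train_epochs=25,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=3e-5,
    optim="adamw_torch",               # AdamW optimizer
    fp16=False,                        # full FP32 training, as noted above
)
```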
Evaluation
Testing Data, Factors & Metrics
Testing Data
The test split of the Sinhala News Corpus dataset was used.
Metrics
- Perplexity: the exponential of the cross-entropy loss; lower values indicate better language modeling.
- Loss (Cross-Entropy): lower is better.
Results
The final evaluation metrics obtained (a quick consistency check follows the table):
| Metric          | Value |
|-----------------|-------|
| Perplexity      | 15.95 |
| Validation Loss | 2.77  |
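Because perplexity is the exponential of the mean cross-entropy loss, the two reported values can be cross-checked directly:

```python
import math

validation_loss = 2.77
perplexity = math.exp(validation_loss)
print(round(perplexity, 2))  # ≈ 15.96, in line with the reported perplexity of 15.95
```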
Summary
The fine-tuned model reached a validation perplexity of about 15.95 on the Sinhala News Corpus, indicating solid masked-language-modeling performance for Sinhala text.
Environmental Impact
Carbon emissions were not explicitly tracked. For an estimate, refer to the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
- Hardware Type: GPU (Tesla T4)
- Hours used: [Approximate training hours]
- Cloud Provider: Kaggle
- Compute Region: [Region used, e.g., us-central]
- Carbon Emitted: [Estimated CO2 emissions]
Technical Specifications
Model Architecture and Objective
Transformer-based BERT architecture optimized for Masked Language Modeling tasks.
Compute Infrastructure
Hardware
- NVIDIA Tesla T4 GPU
Software
- Python 3.10
- Transformers library by Hugging Face
- PyTorch
Citation
If you use this model, please cite it as:
```bibtex
@misc{gunathilaka2024sinhalabert,
  author       = {Thilina Gunathilaka},
  title        = {Sinhala-BERT Fine-Tuned on Sinhala News Corpus},
  year         = {2024},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/ThilinaGunathilaka/fine-tune-sinhala-bert-v3}}
}
```
Model Card Authors
- Thilina Gunathilaka