# Model Card for AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA
This model is a version of pedramyazdipoor/persian_xlm_roberta_large, fine-tuned with LoRA for extractive question answering in Persian using the PersianQA dataset.
## Model Details

### Model Description
This is an XLM-RoBERTa-Large model fine-tuned on the SajjadAyoubi/persian_qa dataset for extractive question answering in Persian. The model was trained with the parameter-efficient LoRA method, which updates only a small fraction of the parameters and significantly speeds up training while maintaining high performance. It is designed to extract the answer to a question directly from a given context, and it achieves strong results on the PersianQA validation set (see the Evaluation section below).
- Developed by: Amir Mohammad Ebrahiminasab
- Shared by: Amir Mohammad Ebrahiminasab
- Model type: xlm-roberta
- Language(s): fa (Persian)
- License: MIT
- Finetuned from model: pedramyazdipoor/persian_xlm_roberta_large
### Model Sources
- Repository: AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA
- Demo: Persian QA Chatbot – Hugging Face Space
## Uses

### Direct Use
The model can be used directly for extractive question answering in Persian via the `pipeline` API:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA",
    tokenizer="AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA",
)

# Context: "Mohandas Karamchand Gandhi was the political and spiritual leader of the Indians,
# who led the Indian nation to freedom from the colonial rule of the British Empire."
context = "مهانداس کارامچاند گاندی رهبر سیاسی و معنوی هندیها بود که ملت هند را در راه آزادی از استعمار امپراتوری بریتانیا رهبری کرد."
# Question: "Who was Gandhi?"
question = "گاندی که بود؟"

result = qa_pipeline(question=question, context=context)
print(f"Answer: '{result['answer']}'")
```
## Bias, Risks, and Limitations
The model's performance is highly dependent on the quality and domain of the context provided. It was trained on the PersianQA dataset, which is largely based on Wikipedia articles. Therefore, its performance may degrade on texts with different styles, such as conversational or technical documents.
Like ParsBERT, this model also shows a preference for shorter answers, with its Exact Match score dropping for answers longer than the dataset's average. However, its F1-score remains high, indicating it can still identify the correct span of text with high token overlap.
### Recommendations
Users should be aware of the model's limitations, particularly the potential for lower Exact Match scores on long-form answers. For applications requiring high precision, outputs should be validated.
## How to Get Started with the Model

Use the code below to get started with the model using PyTorch.
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA")
model = AutoModelForQuestionAnswering.from_pretrained("AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA")

# Context: "The capital of Spain is the city of Madrid."
context = "پایتخت اسپانیا شهر مادرید است."
# Question: "What is the capital of Spain?"
question = "پایتخت اسپانیا کجاست؟"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most probable start and end token positions and decode the span between them.
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
answer = tokenizer.decode(predict_answer_tokens)

print(f"Question: {question}")
print(f"Answer: {answer}")
```
## Training Details

### Training Data
The model was fine-tuned on the SajjadAyoubi/persian_qa dataset, which contains question-context-answer triplets in Persian, primarily drawn from Wikipedia.
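For reference, the dataset can typically be pulled straight from the Hugging Face Hub with the `datasets` library. This is a minimal sketch rather than a step from the authors' training script, and it assumes the dataset loads directly with your installed `datasets` version:

```python
from datasets import load_dataset

# Loads the PersianQA question-context-answer triplets from the Hub.
# Depending on your datasets version, script-based datasets may additionally
# require trust_remote_code=True (an assumption, not stated in this card).
dataset = load_dataset("SajjadAyoubi/persian_qa")
print(dataset)              # inspect the available splits
print(dataset["train"][0])  # one question-context-answer example
```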
### Training Procedure

#### Preprocessing
The training data was tokenized with the XLM-RoBERTa tokenizer. Contexts longer than the model's maximum input length were split into overlapping chunks using a sliding window (`doc_stride=128`), and the start and end token positions of each answer were then mapped onto these chunks.
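The sliding-window behaviour described above can be reproduced with the tokenizer's overflow options. The sketch below is illustrative only: `doc_stride=128` comes from this card, while the maximum sequence length of 512 and the padding strategy are assumptions, and the mapping of answer positions onto each chunk is omitted for brevity.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pedramyazdipoor/persian_xlm_roberta_large")

def tokenize_with_sliding_window(examples, max_length=512, doc_stride=128):
    # Long contexts overflow into several overlapping features; "only_second"
    # truncates only the context, so the question is repeated in every chunk.
    return tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=max_length,           # assumed value, not stated in the card
        stride=doc_stride,               # overlap between consecutive chunks
        return_overflowing_tokens=True,
        return_offsets_mapping=True,     # needed to map answer characters to tokens
        padding="max_length",
    )
```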
#### Training Hyperparameters

- Training regime: LoRA (parameter-efficient fine-tuning; see the configuration sketch after this list)
  - `r`: 16
  - `lora_alpha`: 32
  - `lora_dropout`: 0.1
  - `target_modules`: `["query", "value"]`
- Learning Rate: 2 × 10⁻⁵
- Epochs: 8
- Batch Size: 8
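The hyperparameters above translate directly into a `peft` `LoraConfig`. The sketch below shows one way to set this up; it is not the authors' exact training script.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForQuestionAnswering

base_model = AutoModelForQuestionAnswering.from_pretrained(
    "pedramyazdipoor/persian_xlm_roberta_large"
)

lora_config = LoraConfig(
    task_type=TaskType.QUESTION_ANS,    # extractive QA head
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # LoRA adapters on the attention projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()      # roughly 0.28% of parameters are trainable
```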
#### Speeds, Sizes, Times
- Training Time: ~3 hours on a single GPU
- Trainable Parameters: 0.281% of model parameters
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data
The evaluation was performed on the validation set of the SajjadAyoubi/persian_qa dataset.
#### Factors

- Answer Presence: questions with and without an answer in the context
- Answer Length: answers shorter vs. longer than the dataset's average answer length (22.78 characters)
#### Metrics

- F1-Score: token-level overlap between the predicted and reference answer spans
- Exact Match (EM): whether the predicted span matches a reference answer exactly (a computation sketch for both metrics follows this list)
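The card does not state which metric implementation was used. One common choice that reports both scores and supports unanswerable questions is the `squad_v2` metric from the `evaluate` library; the toy example below only illustrates the expected input format and is not the authors' evaluation code.

```python
import evaluate

squad_v2 = evaluate.load("squad_v2")

predictions = [
    {"id": "0", "prediction_text": "مادرید", "no_answer_probability": 0.0},
]
references = [
    {"id": "0", "answers": {"text": ["مادرید"], "answer_start": [15]}},
]

# Returns exact-match and F1 scores (with HasAns/NoAns breakdowns when both
# answerable and unanswerable examples are present).
print(squad_v2.compute(predictions=predictions, references=references))
```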
### Results

**Overall Performance on the Validation Set (LoRA Fine-Tuned)**

| Model Status | Exact Match | F1-Score |
|---|---|---|
| Fine-Tuned Model (LoRA) | 69.90% | 84.85% |
**Performance on Data Subsets**

| Case Type | Exact Match | F1-Score |
|---|---|---|
| Has Answer | 62.06% | 83.42% |
| No Answer | 88.17% | 88.17% |

| Answer Length | Exact Match | F1-Score |
|---|---|---|
| Longer than Avg. | 49.22% | 81.88% |
| Shorter than Avg. | 62.95% | 80.20% |
## Environmental Impact
- Hardware Type: T4 GPU
- Training Time: ~3 hours
- Cloud Provider: Google Colab
- Carbon Emitted: Not calculated
## Technical Specifications

### Model Architecture and Objective
The model uses the XLM-RoBERTa-Large architecture with a question-answering head. The training objective was to minimize the loss for the start and end token classification.
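Concretely, the question-answering heads in `transformers` compute this objective as the average of two cross-entropy losses, one over the start-token logits and one over the end-token logits. A minimal schematic:

```python
import torch
import torch.nn.functional as F

def qa_span_loss(start_logits, end_logits, start_positions, end_positions):
    """Average cross-entropy over the start- and end-token classifiers,
    as in the standard transformers question-answering heads."""
    start_loss = F.cross_entropy(start_logits, start_positions)
    end_loss = F.cross_entropy(end_logits, end_positions)
    return (start_loss + end_loss) / 2

# Toy usage: a batch of 2 examples over a 10-token sequence.
start_logits = torch.randn(2, 10)
end_logits = torch.randn(2, 10)
loss = qa_span_loss(start_logits, end_logits,
                    torch.tensor([1, 3]), torch.tensor([2, 5]))
```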
### Compute Infrastructure
- Hardware: Single NVIDIA T4 GPU
- Software: `transformers`, `torch`, `datasets`, `evaluate`, `peft`
## Model Card Authors
Amir Mohammad Ebrahiminasab
## Model Card Contact