Subodh_MFND_xlm_roberta

This model is a LoRA fine-tuned version of xlm-roberta-base for multilingual fake news detection (Bangla, English, Hindi, Spanish).
Final test set results:

  • Accuracy: 95.12%
  • F1: 0.95
  • Precision and recall: not reported.

Model description

  • Privacy-preserving, multilingual fake news detection.
  • Fine-tuned with LoRA adapters (r=8, α=16, dropout=0.1); a configuration sketch follows this list.
  • Effective batch size: 8, epochs: 3, learning rate: 2e-4.
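A minimal sketch of the adapter setup described above, assuming the sequence-classification task type; the target_modules choice is an assumption, since the card does not say which modules were adapted.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base model for binary fake/real classification.
base = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,               # rank reported in this card
    lora_alpha=16,     # alpha reported in this card
    lora_dropout=0.1,  # dropout reported in this card
    target_modules=["query", "value"],  # assumption: the card does not name the adapted modules
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA parameters are trainable
```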

Intended uses & limitations

  • Intended for research and production use on multilingual fake news detection tasks; see the inference sketch after this list.
  • Works on Bangla, English, Hindi, and Spanish news content.
  • Not intended for languages outside the fine-tuning set.
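A minimal inference sketch, assuming the adapter is loaded on top of the public xlm-roberta-base checkpoint; the id-to-label mapping used below (0 = real, 1 = fake) is an assumption, so check the adapter config for the actual labels.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "xlm-roberta-base"
adapter_id = "Generative-Subodh/Subodh_MFND_xlm_roberta"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

text = "Example headline to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()

# Assumption: 0 = real, 1 = fake; verify against the adapter's label mapping.
print("fake" if pred == 1 else "real")
```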

Training and evaluation data

  • Dataset: Custom multilingual fake news corpus (Bangla, English, Hindi, Spanish)
  • Supervised binary classification (fake vs. real); a preprocessing sketch follows below.
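The corpus itself is not published with this card; the sketch below only illustrates how fake/real examples might be tokenized for training, with the text and label column names as assumptions.

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Hypothetical rows; the actual corpus and its schema are not released with this card.
rows = [
    {"text": "Government announces new policy ...", "label": 0},   # 0 = real (assumed)
    {"text": "Miracle cure discovered overnight ...", "label": 1},  # 1 = fake (assumed)
]
dataset = Dataset.from_list(rows)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)
```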

Training procedure

Training hyperparameters

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: AdamW
  • lr_scheduler_type: linear
  • num_epochs: 3
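These settings map roughly onto the Trainer configuration below; this is a sketch only, and output_dir plus the per-epoch evaluation/save cadence are assumptions not stated in the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mfnd-xlm-roberta-lora",  # assumption: output path not given in the card
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,       # effective train batch size of 8
    num_train_epochs=3,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
    eval_strategy="epoch",               # assumption: per-epoch eval, matching the results table
    save_strategy="epoch",
)
```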

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|---------------|-------|-------|-----------------|----------|--------|
| 0.4057        | 1.0   | 9375  | 0.4236          | 0.8075   | 0.8039 |
| 0.4334        | 2.0   | 18750 | 0.4312          | 0.8049   | 0.7999 |
| 0.466         | 3.0   | 28125 | 0.4236          | 0.8090   | 0.8047 |
| Final test    | -     | -     | -               | 0.9512   | 0.95   |

Framework versions

  • PEFT 0.17.1
  • Transformers 4.56.1
  • Pytorch 2.8.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.0