---
language:
- vi
library_name: transformers
pipeline_tag: text-classification
license: mit
tags:
- SemViQA
- binary-classification
- fact-checking
---

# SemViQA-BC: Vietnamese Binary Classification for Claim Verification

## Model Description

**SemViQA-BC** is a core component of the **SemViQA** system, specifically designed for **binary classification** in Vietnamese fact-checking tasks. The model predicts whether a given claim is **SUPPORTED** or **REFUTED** based on retrieved evidence.

### **Model Information**
- **Developed by:** [SemViQA Research Team](https://huggingface.co/SemViQA)
- **Fine-tuned from:** [XLM-R](https://huggingface.co/FacebookAI/xlm-roberta-large)
- **Supported Language:** Vietnamese
- **Task:** Binary Classification (Fact Verification)
- **Dataset:** [ViWikiFC](https://arxiv.org/abs/2405.07615)

SemViQA-BC is a key component of the two-step classification (TVC) approach in the SemViQA system. It handles the binary stage, determining whether a claim is SUPPORTED or REFUTED. This step follows an initial three-class classification, in which claims are first categorized as SUPPORTED, REFUTED, or NOT ENOUGH INFORMATION (NEI); the binary model then refines the SUPPORTED/REFUTED decision (a minimal sketch of this cascade is shown below the usage example). By combining Cross-Entropy Loss and Focal Loss during training, SemViQA-BC improves the precision of claim verification, leading to more accurate fact-checking results.

## Usage Example

### Direct Model Usage

```python
# Install semviqa (notebook syntax; use `pip install semviqa` in a shell)
!pip install semviqa

# Load the tokenizer and the binary classification model
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer
from semviqa.tvc.model import ClaimModelForClassification

tokenizer = AutoTokenizer.from_pretrained("SemViQA/bc-xlmr-viwikifc")
model = ClaimModelForClassification.from_pretrained("SemViQA/bc-xlmr-viwikifc", num_labels=2)

# Claim (EN gloss): "The war with Cambodia ended before Vietnam was reunified."
claim = "Chiến tranh với Campuchia đã kết thúc trước khi Việt Nam thống nhất."
# Evidence (EN gloss): after reunification, Vietnam continued to face difficulties due to the
# collapse of its Soviet and Eastern Bloc allies, US embargoes, the war with Cambodia,
# the border with China, and the legacy of years of the subsidy economy.
evidence = "Sau khi thống nhất, Việt Nam tiếp tục gặp khó khăn do sự sụp đổ và tan rã của đồng minh Liên Xô cùng Khối phía Đông, các lệnh cấm vận của Hoa Kỳ, chiến tranh với Campuchia, biên giới giáp Trung Quốc và hậu quả của chính sách bao cấp sau nhiều năm áp dụng."

inputs = tokenizer(
    claim,
    evidence,
    truncation="only_second",
    add_special_tokens=True,
    max_length=256,
    padding="max_length",
    return_attention_mask=True,
    return_token_type_ids=False,
    return_tensors="pt",
)

labels = ["SUPPORTED", "REFUTED"]

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs["logits"]
    probabilities = F.softmax(logits, dim=1).squeeze()

for i, (label, prob) in enumerate(zip(labels, probabilities.tolist()), start=1):
    print(f"{i}) {label} {prob:.4f}")

# 1) SUPPORTED 0.0028
# 2) REFUTED 0.9972
```
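### Two-Step Classification (TVC) Sketch

As described in the Model Description, SemViQA-BC is the second, binary stage of the TVC approach: a three-class model screens for NEI first, and the binary model then settles SUPPORTED vs. REFUTED. The sketch below shows one way the cascade could be wired up. It is a minimal illustration rather than the official SemViQA pipeline: the three-class checkpoint name (`SemViQA/tc-xlmr-viwikifc`), the assumption that it loads with the same `ClaimModelForClassification` class, and the NEI label index are placeholders that should be checked against the released SemViQA checkpoints and source code.

```python
# Minimal sketch of the two-step classification (TVC) cascade.
# ASSUMPTIONS (verify against the SemViQA release):
#   - "SemViQA/tc-xlmr-viwikifc" is a hypothetical three-class checkpoint name
#   - the three-class model loads with ClaimModelForClassification(num_labels=3)
#   - NEI_LABEL_ID is the index of the NEI class in the three-class head
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer
from semviqa.tvc.model import ClaimModelForClassification

TC_NAME = "SemViQA/tc-xlmr-viwikifc"   # hypothetical three-class (SUPPORTED/REFUTED/NEI) checkpoint
BC_NAME = "SemViQA/bc-xlmr-viwikifc"   # this model: binary SUPPORTED/REFUTED
NEI_LABEL_ID = 0                        # assumed NEI index; check the checkpoint's label mapping

tokenizer = AutoTokenizer.from_pretrained(BC_NAME)
tc_model = ClaimModelForClassification.from_pretrained(TC_NAME, num_labels=3)
bc_model = ClaimModelForClassification.from_pretrained(BC_NAME, num_labels=2)

def verify(claim: str, evidence: str) -> str:
    """Return 'SUPPORTED', 'REFUTED', or 'NEI' for a claim/evidence pair."""
    inputs = tokenizer(
        claim,
        evidence,
        truncation="only_second",
        max_length=256,
        padding="max_length",
        return_token_type_ids=False,
        return_tensors="pt",
    )
    with torch.no_grad():
        # Step 1: three-class screening (SUPPORTED / REFUTED / NEI).
        tc_logits = tc_model(**inputs)["logits"]
        if tc_logits.argmax(dim=1).item() == NEI_LABEL_ID:
            return "NEI"
        # Step 2: SemViQA-BC makes the final binary decision.
        bc_probs = F.softmax(bc_model(**inputs)["logits"], dim=1).squeeze()
        return ["SUPPORTED", "REFUTED"][int(bc_probs.argmax())]
```

Only claims that survive the NEI screen reach SemViQA-BC, so the binary model can specialize in separating SUPPORTED from REFUTED.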
## **Evaluation Results**

SemViQA-BC achieved impressive results on the ViWikiFC test set, demonstrating accurate and efficient classification capabilities. The detailed evaluation is presented in the table below: ER denotes the evidence retrieval method, VC the verdict classification model, accuracies are reported in %, and Time is in seconds.

| ER | VC | Strict Acc | VC Acc | ER Acc | Time (s) |
|---|---|---|---|---|---|
| TF-IDF | InfoXLM<sub>large</sub> | 75.56 | 82.21 | 90.15 | 131 |
| | XLM-R<sub>large</sub> | 76.47 | 82.78 | 90.15 | 134 |
| | Ernie-M<sub>large</sub> | 75.56 | 81.83 | 90.15 | 144 |
| BM25 | InfoXLM<sub>large</sub> | 70.44 | 79.01 | 83.50 | 130 |
| | XLM-R<sub>large</sub> | 70.97 | 78.91 | 83.50 | 132 |
| | Ernie-M<sub>large</sub> | 70.21 | 78.29 | 83.50 | 141 |
| SBert | InfoXLM<sub>large</sub> | 74.99 | 81.59 | 89.72 | 195 |
| | XLM-R<sub>large</sub> | 75.80 | 82.35 | 89.72 | 194 |
| | Ernie-M<sub>large</sub> | 75.13 | 81.44 | 89.72 | 203 |
| **QA-based approaches** | **VC** | | | | |
| ViMRC<sub>large</sub> | InfoXLM<sub>large</sub> | 77.28 | 81.97 | 92.49 | 3778 |
| | XLM-R<sub>large</sub> | 78.29 | 82.83 | 92.49 | 3824 |
| | Ernie-M<sub>large</sub> | 77.38 | 81.92 | 92.49 | 3785 |
| InfoXLM<sub>large</sub> | InfoXLM<sub>large</sub> | 78.14 | 82.07 | 93.45 | 4092 |
| | XLM-R<sub>large</sub> | 79.20 | 83.07 | 93.45 | 4096 |
| | Ernie-M<sub>large</sub> | 78.24 | 82.21 | 93.45 | 4102 |
| **LLM** | | | | | |
| Qwen2.5-1.5B-Instruct | — | 51.03 | 65.18 | 78.96 | 7665 |
| Qwen2.5-3B-Instruct | — | 44.38 | 62.31 | 71.35 | 12123 |
| **LLM** | **VC** | | | | |
| Qwen2.5-1.5B-Instruct | InfoXLM<sub>large</sub> | 66.14 | 76.47 | 78.96 | 7788 |
| | XLM-R<sub>large</sub> | 67.67 | 78.10 | 78.96 | 7789 |
| | Ernie-M<sub>large</sub> | 66.52 | 76.52 | 78.96 | 7794 |
| Qwen2.5-3B-Instruct | InfoXLM<sub>large</sub> | 59.88 | 72.50 | 71.35 | 12246 |
| | XLM-R<sub>large</sub> | 60.74 | 73.08 | 71.35 | 12246 |
| | Ernie-M<sub>large</sub> | 60.02 | 72.21 | 71.35 | 12251 |
| **SER Faster (ours)** | **TVC (ours)** | | | | |
| TF-IDF + ViMRC<sub>large</sub> | Ernie-M<sub>large</sub> | 79.44 | 82.93 | 94.60 | 410 |
| TF-IDF + InfoXLM<sub>large</sub> | Ernie-M<sub>large</sub> | 79.77 | 83.07 | 95.03 | 487 |
| **SER (ours)** | **TVC (ours)** | | | | |
| TF-IDF + ViMRC<sub>large</sub> | InfoXLM<sub>large</sub> | 80.25 | 83.84 | 94.69 | 2731 |
| | XLM-R<sub>large</sub> | 80.34 | 83.64 | 94.69 | 2733 |
| | Ernie-M<sub>large</sub> | 79.53 | 82.97 | 94.69 | 2733 |
| TF-IDF + InfoXLM<sub>large</sub> | InfoXLM<sub>large</sub> | 80.68 | 83.98 | 95.31 | 3860 |
| | XLM-R<sub>large</sub> | 80.82 | 83.88 | 95.31 | 3843 |
| | Ernie-M<sub>large</sub> | 80.06 | 83.17 | 95.31 | 3891 |
## **Citation**

If you use **SemViQA-BC** in your research, please cite:

```bibtex
@misc{tran2025semviqasemanticquestionanswering,
      title={SemViQA: A Semantic Question Answering System for Vietnamese Information Fact-Checking},
      author={Dien X. Tran and Nam V. Nguyen and Thanh T. Tran and Anh T. Hoang and Tai V. Duong and Di T. Le and Phuc-Lu Le},
      year={2025},
      eprint={2503.00955},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.00955},
}
```

🔗 **Paper Link:** [SemViQA on arXiv](https://arxiv.org/abs/2503.00955)

🔗 **Source Code:** [GitHub - SemViQA](https://github.com/DAVID-NGUYEN-S16/SemViQA)

## About

*Built by Dien X. Tran*

[![LinkedIn](https://img.shields.io/badge/LinkedIn-Profile-blue?logo=linkedin)](https://www.linkedin.com/in/xndien2004/)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/DAVID-NGUYEN-S16/SemViQA?style=social)](https://github.com/DAVID-NGUYEN-S16/SemViQA)