---
language:
- vi
library_name: transformers
license: mit
pipeline_tag: text-classification
tags:
- SemViQA
- binary-classification
- fact-checking
---

# SemViQA-BC: Vietnamese Binary Classification for Claim Verification

## Model Description

**SemViQA-BC** is a core component of the **SemViQA** system, specifically designed for **binary classification** in Vietnamese fact-checking tasks. The model predicts whether a given claim is **SUPPORTED** or **REFUTED** by the retrieved evidence.

### **Model Information**

- **Developed by:** [SemViQA Research Team](https://huggingface.co/SemViQA)
- **Fine-tuned from:** [Ernie-M](https://huggingface.co/MoritzLaurer/ernie-m-large-mnli-xnli)
- **Supported Language:** Vietnamese
- **Task:** Binary Classification (Fact Verification)
- **Dataset:** [ISE-DSC01](https://codalab.lisn.upsaclay.fr/competitions/15497)

SemViQA-BC is one of the key components of the two-step classification (TVC) approach in the SemViQA system. It handles the binary step, deciding whether a claim is SUPPORTED or REFUTED. This step follows an initial three-class classification, in which claims are first categorized as SUPPORTED, REFUTED, or NOT ENOUGH INFORMATION (NEI). By combining Cross-Entropy Loss with Focal Loss during training, SemViQA-BC improves precision in claim verification and yields more accurate fact-checking results. A minimal sketch of this two-step flow is shown after the usage example below.

## Usage Example

### Direct Model Usage

```Python
# Install semviqa
!pip install semviqa

# Initialize the tokenizer and model
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer
from semviqa.tvc.model import ClaimModelForClassification

tokenizer = AutoTokenizer.from_pretrained("SemViQA/bc-erniem-isedsc01")
model = ClaimModelForClassification.from_pretrained("SemViQA/bc-erniem-isedsc01", num_labels=2)

claim = "Chiến tranh với Campuchia đã kết thúc trước khi Việt Nam thống nhất."
evidence = "Sau khi thống nhất, Việt Nam tiếp tục gặp khó khăn do sự sụp đổ và tan rã của đồng minh Liên Xô cùng Khối phía Đông, các lệnh cấm vận của Hoa Kỳ, chiến tranh với Campuchia, biên giới giáp Trung Quốc và hậu quả của chính sách bao cấp sau nhiều năm áp dụng."

inputs = tokenizer(
    claim,
    evidence,
    truncation="only_second",
    add_special_tokens=True,
    max_length=256,
    padding='max_length',
    return_attention_mask=True,
    return_token_type_ids=False,
    return_tensors='pt',
)

labels = ["SUPPORTED", "REFUTED"]

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs["logits"]
    probabilities = F.softmax(logits, dim=1).squeeze()

for i, (label, prob) in enumerate(zip(labels, probabilities.tolist()), start=1):
    print(f"{i}) {label} {prob:.4f}")
# 1) SUPPORTED 0.0007
# 2) REFUTED 0.9993
```
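The model description above frames SemViQA-BC as the binary stage of the two-step TVC classification. The snippet below is a minimal sketch of that flow, not the official SemViQA pipeline: the three-class checkpoint ID (`SemViQA/tc-erniem-isedsc01`), the assumption that it shares this model's tokenizer, and the label order (SUPPORTED, REFUTED, NEI) are placeholders; substitute the three-class checkpoint and label mapping you actually use.

```Python
# Illustrative sketch of the two-step TVC flow: three-class first, binary second.
import torch
from transformers import AutoTokenizer
from semviqa.tvc.model import ClaimModelForClassification

tokenizer = AutoTokenizer.from_pretrained("SemViQA/bc-erniem-isedsc01")

# Placeholder three-class checkpoint (SUPPORTED / REFUTED / NEI); replace it with the
# three-class SemViQA model you use. The sketch assumes it shares this tokenizer.
tc_model = ClaimModelForClassification.from_pretrained("SemViQA/tc-erniem-isedsc01", num_labels=3)
# This card's binary model (SUPPORTED / REFUTED).
bc_model = ClaimModelForClassification.from_pretrained("SemViQA/bc-erniem-isedsc01", num_labels=2)

def verify(claim: str, evidence: str) -> str:
    inputs = tokenizer(
        claim,
        evidence,
        truncation="only_second",
        max_length=256,
        padding="max_length",
        return_token_type_ids=False,
        return_tensors="pt",
    )
    with torch.no_grad():
        # Step 1: three-class decision; the label order SUPPORTED, REFUTED, NEI is assumed here.
        if tc_model(**inputs)["logits"].argmax(dim=1).item() == 2:
            return "NEI"
        # Step 2: refine SUPPORTED vs. REFUTED with the binary model.
        bc_pred = bc_model(**inputs)["logits"].argmax(dim=1).item()
    return ["SUPPORTED", "REFUTED"][bc_pred]

# With the claim/evidence pair from the example above, the binary stage predicts REFUTED.
# print(verify(claim, evidence))
```

The sketch only mirrors the ordering described above (filter out NEI first, then refine SUPPORTED vs. REFUTED); thresholds and label mappings are determined by the actual checkpoints you load.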
## **Evaluation Results**

SemViQA-BC achieves strong results, combining accurate classification with efficient inference. The detailed evaluation is presented in the table below; all figures are reported on the ISE-DSC01 test set.

| ER | VC | Strict Acc | VC Acc | ER Acc | Time (s) |
|---|---|---|---|---|---|
| TF-IDF | InfoXLM<sub>large</sub> | 73.59 | 78.08 | 76.61 | 378 |
| TF-IDF | XLM-R<sub>large</sub> | 75.61 | 80.50 | 78.58 | 366 |
| TF-IDF | Ernie-M<sub>large</sub> | 78.19 | 81.69 | 80.65 | 403 |
| BM25 | InfoXLM<sub>large</sub> | 72.09 | 77.37 | 75.04 | 320 |
| BM25 | XLM-R<sub>large</sub> | 73.94 | 79.37 | 76.95 | 333 |
| BM25 | Ernie-M<sub>large</sub> | 76.58 | 80.76 | 79.02 | 381 |
| SBert | InfoXLM<sub>large</sub> | 71.20 | 76.59 | 74.15 | 915 |
| SBert | XLM-R<sub>large</sub> | 72.85 | 78.78 | 75.89 | 835 |
| SBert | Ernie-M<sub>large</sub> | 75.46 | 79.89 | 77.91 | 920 |
| **QA-based approaches** | **VC** | | | | |
| ViMRC<sub>large</sub> | InfoXLM<sub>large</sub> | 54.36 | 64.14 | 56.84 | 9798 |
| ViMRC<sub>large</sub> | XLM-R<sub>large</sub> | 53.98 | 66.70 | 57.77 | 9809 |
| ViMRC<sub>large</sub> | Ernie-M<sub>large</sub> | 56.62 | 62.19 | 58.91 | 9833 |
| InfoXLM<sub>large</sub> | InfoXLM<sub>large</sub> | 53.50 | 63.83 | 56.17 | 10057 |
| InfoXLM<sub>large</sub> | XLM-R<sub>large</sub> | 53.32 | 66.70 | 57.25 | 10066 |
| InfoXLM<sub>large</sub> | Ernie-M<sub>large</sub> | 56.34 | 62.36 | 58.69 | 10078 |
| **LLM** | | | | | |
| Qwen2.5-1.5B-Instruct | | 59.23 | 66.68 | 65.51 | 19780 |
| Qwen2.5-3B-Instruct | | 60.87 | 66.92 | 66.10 | 31284 |
| **LLM** | **VC** | | | | |
| Qwen2.5-1.5B-Instruct | InfoXLM<sub>large</sub> | 64.40 | 68.37 | 66.49 | 19970 |
| Qwen2.5-1.5B-Instruct | XLM-R<sub>large</sub> | 64.66 | 69.63 | 66.72 | 19976 |
| Qwen2.5-1.5B-Instruct | Ernie-M<sub>large</sub> | 65.70 | 68.37 | 67.33 | 20003 |
| Qwen2.5-3B-Instruct | InfoXLM<sub>large</sub> | 65.72 | 69.66 | 67.51 | 31477 |
| Qwen2.5-3B-Instruct | XLM-R<sub>large</sub> | 66.12 | 70.44 | 67.83 | 31483 |
| Qwen2.5-3B-Instruct | Ernie-M<sub>large</sub> | 67.48 | 70.77 | 68.75 | 31512 |
| **SER Faster (ours)** | **TVC (ours)** | | | | |
| TF-IDF + ViMRC<sub>large</sub> | Ernie-M<sub>large</sub> | 78.32 | 81.91 | 80.26 | 995 |
| TF-IDF + InfoXLM<sub>large</sub> | Ernie-M<sub>large</sub> | 78.37 | 81.91 | 80.32 | 925 |
| **SER (ours)** | **TVC (ours)** | | | | |
| TF-IDF + ViMRC<sub>large</sub> | InfoXLM<sub>large</sub> | 75.13 | 79.54 | 76.87 | 5191 |
| TF-IDF + ViMRC<sub>large</sub> | XLM-R<sub>large</sub> | 76.71 | 81.65 | 78.91 | 5219 |
| TF-IDF + ViMRC<sub>large</sub> | Ernie-M<sub>large</sub> | 78.97 | 82.54 | 80.91 | 5225 |
| TF-IDF + InfoXLM<sub>large</sub> | InfoXLM<sub>large</sub> | 75.13 | 79.60 | 76.87 | 5175 |
| TF-IDF + InfoXLM<sub>large</sub> | XLM-R<sub>large</sub> | 76.74 | 81.71 | 78.95 | 5200 |
| TF-IDF + InfoXLM<sub>large</sub> | Ernie-M<sub>large</sub> | 78.97 | 82.49 | 80.91 | 5297 |
## **Citation**

If you use **SemViQA-BC** in your research, please cite:

```bibtex
@misc{tran2025semviqasemanticquestionanswering,
  title={SemViQA: A Semantic Question Answering System for Vietnamese Information Fact-Checking},
  author={Dien X. Tran and Nam V. Nguyen and Thanh T. Tran and Anh T. Hoang and Tai V. Duong and Di T. Le and Phuc-Lu Le},
  year={2025},
  eprint={2503.00955},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.00955},
}
```

🔗 **Paper Link:** [SemViQA on arXiv](https://arxiv.org/abs/2503.00955)

🔗 **Source Code:** [GitHub - SemViQA](https://github.com/DAVID-NGUYEN-S16/SemViQA)

## About

*Built by Dien X. Tran*

[![LinkedIn](https://img.shields.io/badge/LinkedIn-Profile-blue?logo=linkedin)](https://www.linkedin.com/in/xndien2004/)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/DAVID-NGUYEN-S16/SemViQA?style=social)](https://github.com/DAVID-NGUYEN-S16/SemViQA)