Model Card for 522H0134-NguyenNhatHuy/Vinallama-2.7b-chat-SFT
This model is a fine-tuned version of viet-mistral/vinallama-2.7b-chat using LoRA + PEFT, targeting Vietnamese open-domain, instruction-following chat. It is aligned for safe, helpful, and fluent conversations in Vietnamese through supervised fine-tuning on high-quality prompt-response pairs.
🧠 Model Details
- Base Model: viet-mistral/vinallama-2.7b-chat
- Model Type: Causal Language Model (Chat)
- Languages: Vietnamese
- License: Apache 2.0
- Fine-tuning Framework: PEFT with LoRA
- Training Dataset: Custom Vietnamese SFT & DPO dataset (~10K SFT + 10K DPO + 1K test prompts)
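The card states only that fine-tuning used PEFT with LoRA, so the sketch below shows one plausible way such an adapter could be configured. The rank, scaling factor, dropout, and target modules are illustrative assumptions, not the values used to train this checkpoint.

```python
# Hypothetical LoRA adapter setup with PEFT; every hyperparameter here
# (rank, alpha, dropout, target modules) is an assumption for illustration,
# not the configuration actually used for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("viet-mistral/vinallama-2.7b-chat")
tokenizer = AutoTokenizer.from_pretrained("viet-mistral/vinallama-2.7b-chat")

lora_config = LoraConfig(
    r=16,                                  # assumed adapter rank
    lora_alpha=32,                         # assumed scaling factor
    lora_dropout=0.05,                     # assumed dropout on adapter layers
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```

Supervised fine-tuning on the ~10K prompt-response pairs would then proceed with a standard causal-LM trainer (e.g. transformers' Trainer or trl's SFTTrainer) over this wrapped model.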
✅ Intended Uses
Direct Use
- Vietnamese open-domain dialogue
- Instruction-following tasks
- Educational or research-based QA
Out-of-Scope Use
- Medical, legal, or financial advice
- Content moderation or safety-critical tasks
- English-centric prompts
🧪 Evaluation
Test Data
The model was evaluated on a Vietnamese test set of 1,000 prompts (60% safe / 40% adversarial) adapted from JailBreak, HarmBench, and OpenAssistant.
Metrics
- Helpfulness
- Toxicity (flagged when the Detoxify score exceeds 0.5)
- Appropriateness / Safety Rejection
Detoxify was used to filter harmful generations during evaluation.
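As a rough illustration of that filtering step (the exact evaluation pipeline is not published with the card), a generation can be flagged with Detoxify and the 0.5 threshold mentioned above; the choice of the multilingual checkpoint and the helper function are assumptions.

```python
# Minimal sketch of toxicity flagging with Detoxify. The "multilingual"
# checkpoint and the is_toxic helper are assumptions; only the 0.5
# threshold comes from the model card.
from detoxify import Detoxify

detector = Detoxify("multilingual")

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    scores = detector.predict(text)        # dict of per-category scores in [0, 1]
    return scores["toxicity"] > threshold  # flag generations above the threshold

print(is_toxic("Xin chào, rất vui được gặp bạn!"))  # "Hello, nice to meet you!" -> expected False
```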
Summary
- 74% of generations were rated safe/aligned
- 86% rejection rate on highly toxic or adversarial prompts
- The model avoids unsafe completions better than its base model
🚀 How to Use the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model and LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("viet-mistral/vinallama-2.7b-chat")
base_model = AutoModelForCausalLM.from_pretrained("viet-mistral/vinallama-2.7b-chat")
model = PeftModel.from_pretrained(base_model, "522H0134-NguyenNhatHuy/Vinallama-2.7b-chat-SFT")

# Chat example
prompt = "Xin chào, bạn có thể giúp tôi học tiếng Anh không?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
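For plain transformers inference without the adapter indirection, the LoRA weights can optionally be folded into the base model. This is generic PEFT usage rather than a step prescribed by the model authors, and the output directory name below is arbitrary.

```python
# Optional: merge the LoRA adapter into the base weights and save the result;
# the output path is just an example, not an official artifact name.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("vinallama-2.7b-chat-sft-merged")
tokenizer.save_pretrained("vinallama-2.7b-chat-sft-merged")
```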