UK Fraud Support Chatbot

A specialized Llama 2-7B model fine-tuned for UK fraud victim support, trained on Q&A data drawn from official UK fraud prevention sources.

Model Details

  • Base Model: meta-llama/Llama-2-7b-chat-hf
  • Fine-tuning Method: QLoRA (4-bit quantization + LoRA)
  • Training Data: 111 Q&A pairs from official UK fraud prevention sources
  • Specialization: UK fraud victim support and guidance
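A QLoRA setup like the one described above can be sketched with `transformers` and `peft` as follows. This is a minimal illustration, not the exact training recipe: the LoRA rank, alpha, dropout, and target modules are assumed values, not the hyperparameters used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization for the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter config -- rank/alpha/targets here are illustrative guesses
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Load the quantized base model, then attach trainable LoRA adapters
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

With a config like this, only a small fraction of parameters (the LoRA matrices) are updated during fine-tuning, which is what makes training a 7B model feasible on a single consumer GPU.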

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model (same repo the adapter was trained on; requires Llama 2 access)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "misee/uk-fraud-chatbot-llama2")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Build a Llama 2 chat prompt and generate a response
system_prompt = "You are a specialized UK fraud victim support assistant..."
prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\nYour question here [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
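The prompt string above follows the Llama 2 chat template. For repeated use it can be wrapped in a small helper; `build_prompt` below is a hypothetical convenience function for illustration, not part of this model's API.

```python
def build_prompt(system_prompt: str, user_message: str) -> str:
    """Format a single-turn Llama 2 chat prompt with a system message."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# Example usage with a hypothetical user question
prompt = build_prompt(
    "You are a specialized UK fraud victim support assistant...",
    "I think I've been a victim of an online scam. What should I do?",
)
```

Keeping the template in one place avoids subtle formatting mistakes (a missing newline or space around the `<<SYS>>` tags can noticeably degrade chat-tuned model output).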

Training Data Sources

  • Action Fraud (UK's national fraud reporting centre)
  • GetSafeOnline (UK's leading online safety resource)
  • Financial Conduct Authority (FCA)
  • UK Finance
  • Which? (consumer protection)