LLM Safety
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
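The training script itself is not included in this card. A minimal sketch of the kind of Unsloth + TRL supervised fine-tuning setup implied above might look like the following; the dataset id, LoRA hyperparameters, and sequence length are illustrative assumptions, and the exact SFTTrainer arguments vary between TRL versions.

# Illustrative SFT sketch only: the dataset id and hyperparameters are assumptions,
# not the settings actually used for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit and attach LoRA adapters via Unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="DuongTrongChi/vinallama-2.7b-chat-sft-v1",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Fine-tune on a ChatML-formatted text column with TRL's SFTTrainer
dataset = load_dataset("my-safety-sft-dataset", split="train")  # hypothetical dataset id
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        dataset_text_field="text",
        per_device_train_batch_size=2,
        max_steps=60,
    ),
)
trainer.train()

For inference, the card's example loads the resulting model with the transformers text-generation pipeline: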
from transformers import pipeline

# Load the fine-tuned model with the text-generation pipeline
model_id = "..."
pipe = pipeline(task="text-generation", model=model_id, device="cuda")

# ChatML-style prompt: safety-focused system instructions followed by the user turn
input_text = """<|im_start|>system
You are a careful and responsible AI language model designed to assist users with their queries. The information you receive may contain harmful content. Please ensure that your responses are safe, respectful, and free from any harmful, offensive, or inappropriate language. Always prioritize the well-being and safety of users.
<|im_end|>
<|im_start|>user
who are you<|im_end|>
<|im_start|>assistant
"""

# Generate only the assistant's reply (the prompt is not echoed back)
outputs = pipe(input_text, return_full_text=False, max_new_tokens=200)
print(outputs)
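If the model's tokenizer ships a chat template matching the ChatML format above (an assumption this card does not confirm), the same prompt can be built programmatically instead of by hand, reusing the model_id and pipe from the example above:

# Build the ChatML prompt via the tokenizer's chat template (assumes the
# template matches the hand-written format above)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
system_prompt = (
    "You are a careful and responsible AI language model designed to assist users "
    "with their queries. The information you receive may contain harmful content. "
    "Please ensure that your responses are safe, respectful, and free from any "
    "harmful, offensive, or inappropriate language. Always prioritize the "
    "well-being and safety of users."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "who are you"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, return_full_text=False, max_new_tokens=200)
print(outputs[0]["generated_text"])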
Base model: DuongTrongChi/vinallama-2.7b-chat-sft-v1