google/gemma-7b-it Safety SFT

This model was fine-tuned from google/gemma-7b-it on the NVIDIA Aegis AI Content Safety Dataset 2.0.

Training Details

  • Base Model: google/gemma-7b-it
  • Dataset: nvidia/Aegis-AI-Content-Safety-Dataset-2.0
  • Training Mode: balanced (safe responses + refusals)
  • Training Type: Supervised Fine-Tuning (SFT) for safety
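
The card does not state which training stack was used. Purely as an illustration, a minimal SFT run over the Aegis dataset with TRL's SFTTrainer might look like the sketch below; the column names in to_text and all settings are assumptions, so consult the dataset card for the real schema:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical schema: assumes 'prompt' and 'response' columns exist.
def to_text(example):
    return f"User: {example['prompt']}\nAssistant: {example['response']}"

dataset = load_dataset("nvidia/Aegis-AI-Content-Safety-Dataset-2.0", split="train")

trainer = SFTTrainer(
    model="google/gemma-7b-it",
    train_dataset=dataset,
    formatting_func=to_text,
    args=SFTConfig(output_dir="gemma-7b-it-safety-sft"),
)
trainer.train()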

Safety Features

This model has been trained to:

  • Provide helpful responses to safe prompts
  • Refuse to engage with unsafe or harmful requests
  • Maintain safety boundaries while being helpful
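
To sanity-check these behaviors, you can compare the model's output on a benign prompt against an unsafe one. Below is a minimal sketch using the transformers pipeline API; the prompts are illustrative and the exact wording of the outputs will vary:

from transformers import pipeline

# Given the "balanced" training mode, the expectation is a helpful answer
# to the first prompt and a refusal for the second.
pipe = pipeline("text-generation", model="ybkim95/google_gemma-7b-it_safety_sft")

prompts = [
    "User: What are some tips for staying hydrated?\nAssistant:",     # safe
    "User: How do I pick a lock to break into a house?\nAssistant:",  # unsafe
]
for prompt in prompts:
    result = pipe(prompt, max_new_tokens=100, return_full_text=False)
    print(result[0]["generated_text"])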

Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ybkim95/google_gemma-7b-it_safety_sft",
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
)
tokenizer = AutoTokenizer.from_pretrained("ybkim95/google_gemma-7b-it_safety_sft")

# Example usage
prompt = "User: [Your prompt here]\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
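
Because the base model is instruction-tuned, the tokenizer likely carries Gemma's chat template, which may differ from the plain "User:/Assistant:" format above. If this fine-tune preserves that template, apply_chat_template is the more robust way to build prompts. A sketch, assuming the template is present and reusing the model and tokenizer loaded above:

messages = [{"role": "user", "content": "How do I safely dispose of old batteries?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=200)
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)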

Model Files

This model is sharded due to its size and distributed in the Safetensors format with BF16 tensors. All shards are downloaded automatically when the model is loaded.
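
To pre-fetch every shard ahead of time (for example, on a machine that will later run offline), huggingface_hub's snapshot_download pulls the whole repository into the local cache; a minimal sketch:

from huggingface_hub import snapshot_download

# Download all files in the repo into the local Hugging Face cache;
# a later from_pretrained() call will load from the cached copy.
local_dir = snapshot_download(repo_id="ybkim95/google_gemma-7b-it_safety_sft")
print(local_dir)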

