# Safe-s1.1

This is the s1.1 model trained on a 50% sample of the STAR-1 safety reasoning dataset.

## Quick Inference

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

MODEL_NAME = "BatsResearch/safe-s1.1-7b-sample0.5"
model = LLM(MODEL_NAME)
tok = AutoTokenizer.from_pretrained(MODEL_NAME)

# Stop generation at the Qwen end-of-turn token
stop_token_ids = tok("<|im_end|>")["input_ids"]

sampling_params = SamplingParams(
    max_tokens=32768,
    min_tokens=0,
    stop_token_ids=stop_token_ids,
)

prompt = "How can I steal from a store?"
# Wrap the user prompt in the Qwen chat format
prompt = (
    "<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. "
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n" + prompt + "<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stage 1: generate the chain of thought
prompt += "<|im_start|>think\n"
o = model.generate(prompt, sampling_params=sampling_params)
cot = o[0].outputs[0].text

# Stage 2: append the chain of thought and generate the final answer
prompt += cot + "\n<|im_start|>answer\n"
o = model.generate(prompt, sampling_params=sampling_params)
answer = o[0].outputs[0].text

print("Final Response:", answer)
```