# sanguine-scribe-4bit-bnb
A 4-bit quantized version of gpt-oss-sanguine-20b-v1, a consequence-based alignment model for character roleplay, quantized with BitsAndBytes for efficient GPU inference.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("paperboygold/sanguine-scribe-4bit-bnb")
model = AutoModelForCausalLM.from_pretrained(
    "paperboygold/sanguine-scribe-4bit-bnb",
    device_map="auto",
    trust_remote_code=True,
)
```
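Once the model is loaded, a minimal generation call might look like the sketch below. This is an illustrative example, not an official recipe: it assumes the tokenizer ships a chat template (typical for gpt-oss checkpoints, but not confirmed here), and the prompt and sampling settings are placeholders.

```python
# Minimal generation sketch; assumes `tokenizer` and `model` from the
# snippet above, and that the tokenizer provides a chat template.
messages = [{"role": "user", "content": "Introduce yourself in character."}]

# Build input IDs from the chat template and move them to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,  # placeholder sampling settings, tune for your use case
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```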
## Original Model
- Base Model: openai/gpt-oss-20b
- Training Dataset: sanguine-dataset-v1 (350K examples)
- Training Loss: 4.1 → 1.31 (500 steps)