sanguine-scribe-4bit-bnb

A 4-bit quantized version of gpt-oss-sanguine-20b-v1 (a consequence-based alignment model for character roleplay), quantized with BitsAndBytes for efficient GPU inference.

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("paperboygold/sanguine-scribe-4bit-bnb")

# The 4-bit quantization config ships with the checkpoint, so no extra
# BitsAndBytesConfig is needed here; the bitsandbytes package must be installed.
model = AutoModelForCausalLM.from_pretrained(
    "paperboygold/sanguine-scribe-4bit-bnb",
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
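As a rough illustration of why the 4-bit weights matter, here is a back-of-envelope VRAM estimate for a 20.9B-parameter model. The ~0.5 bit/param overhead figure for quantization constants is an assumption, and the numbers ignore KV-cache and activation memory:

```python
# Back-of-envelope weight-memory estimate for a 20.9B-parameter checkpoint.
params = 20.9e9

def weight_gib(bits_per_param: float) -> float:
    """Approximate weight memory in GiB at a given precision."""
    return params * bits_per_param / 8 / 1024**3

bf16 = weight_gib(16)   # full BF16 weights
int4 = weight_gib(4.5)  # 4-bit weights plus assumed ~0.5 bit/param overhead

print(f"BF16 weights: ~{bf16:.1f} GiB")
print(f"4-bit weights: ~{int4:.1f} GiB")
```

In practice this is the difference between needing multiple data-center GPUs and fitting the weights on a single 16-24 GB card.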

Original Model

  • Base Model: openai/gpt-oss-20b
  • Training Dataset: sanguine-dataset-v1 (350K examples)
  • Training Loss: 4.1 → 1.31 (500 steps)
  • Model Size: 20.9B params (Safetensors; tensor types F32, BF16, U8)
