πŸͺ„ GPT-OSS 20B β€” FableFlux (MXFP4)

This is a merged and re-exported version of gpt-oss-20b-children-qlora,
fine-tuned on garethpaul/children-stories-dataset to generate structured JSON bedtime stories.


✨ What it does

Produces structured JSON outputs in the form:

{
  "title": "string",
  "characters": ["string"],
  "setting": "string",
  "story": "string (500–800 words, bedtime tone, positive ending)",
  "moral": "string"
}
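A minimal consumer-side check of a generated story against this schema might look like the following. `validate_story` is an illustrative helper, not part of the model repo; the key set and 500–800-word target come from the schema above.

```python
import json

REQUIRED_KEYS = {"title", "characters", "setting", "story", "moral"}

def validate_story(raw: str) -> dict:
    """Parse a model response and check it against the schema above."""
    story = json.loads(raw)
    missing = REQUIRED_KEYS - story.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(story["characters"], list):
        raise ValueError("'characters' must be a list of strings")
    words = len(story["story"].split())
    if not 500 <= words <= 800:
        # Length is a soft target in the schema, so warn rather than fail.
        print(f"warning: story is {words} words, outside the 500-800 target")
    return story
```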

πŸš€ Usage

Transformers (CPU/GPU)

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "garethpaul/gpt-oss-20b-fableflux"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="bfloat16"
)

messages = [
    {"role": "system", "content": "You are StoryWeaver. Always respond in valid JSON with keys: {title, characters, setting, story, moral}."},
    {"role": "user", "content": "Tell me a bedtime story about a brave little car."}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=700, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
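The decoded text includes the prompt as well as the completion, so the JSON object usually needs to be isolated before parsing. A simple approach is to slice from the first `{` to the last `}` (an illustrative sketch that assumes the completion contains exactly one JSON object; `extract_json` is not part of the model repo):

```python
import json

def extract_json(decoded: str) -> dict:
    """Slice the first '{' .. last '}' span from decoded text and parse it.

    Assumes the generation contains exactly one JSON object.
    """
    start = decoded.index("{")
    end = decoded.rindex("}") + 1
    return json.loads(decoded[start:end])
```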

vLLM (recommended for serving)

pip install vllm==0.10.1+gptoss --extra-index-url https://wheels.vllm.ai/gpt-oss/

vllm serve garethpaul/gpt-oss-20b-fableflux \
  --max-model-len 8192 \
  --tensor-parallel-size 1

Then query it with the OpenAI-compatible API:

from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="garethpaul/gpt-oss-20b-fableflux",
    messages=[
        {"role": "system", "content": "You are StoryWeaver. Respond ONLY in JSON."},
        {"role": "user", "content": "Tell me a bedtime story about a ballet dancer named Jones."}
    ]
)
print(resp.choices[0].message.content)

πŸ›  Training Details

Method: QLoRA β†’ merged β†’ MXFP4 re-export

Dataset: garethpaul/children-stories-dataset

LoRA config: rank=8, Ξ±=16, dropout=0.05

Frameworks: transformers, peft, trl

Merged to: BF16 β†’ MXFP4 (vLLM-compatible safetensors)
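The adapter settings above map onto peft's `LoraConfig` roughly as follows. This is a sketch reconstructed from the hyperparameters listed in this card, not the exact training script; the target modules are not listed here, so they are omitted and left to the library/model defaults.

```python
from peft import LoraConfig

# Hedged reconstruction of the QLoRA adapter config described above.
lora_config = LoraConfig(
    r=8,                    # LoRA rank
    lora_alpha=16,          # scaling factor alpha
    lora_dropout=0.05,
    task_type="CAUSAL_LM",  # causal language modeling
)
```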

πŸ“š Related

openai/gpt-oss-20b β€” base model

garethpaul/gpt-oss-20b-children-qlora β€” adapter repo

garethpaul/children-stories-dataset β€” training dataset
