# GPT-OSS 20B – FableFlux (MXFP4)
This is a merged and re-exported version of `gpt-oss-20b-children-qlora`, fine-tuned on `garethpaul/children-stories-dataset` to generate structured JSON bedtime stories.
- Base model: openai/gpt-oss-20b
- Format: MXFP4 quantized (safetensors)
- Context length: 8192 tokens
- License: MIT
- Author: @garethpaul
## What it does
Produces structured JSON outputs in the form:

```json
{
  "title": "string",
  "characters": ["string"],
  "setting": "string",
  "story": "string (500–800 words, bedtime tone, positive ending)",
  "moral": "string"
}
```
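Because the model is prompted (not constrained) to emit this schema, it is worth validating replies on the client side. A minimal sketch, with illustrative helper names that are not part of the model card, that parses a reply and checks the required keys, stripping an optional markdown fence the model may wrap around its JSON:

```python
import json

REQUIRED_KEYS = {"title", "characters", "setting", "story", "moral"}

def parse_story(reply: str) -> dict:
    """Parse a model reply into the story schema.

    Strips an optional ```json fence around the output, then
    verifies that all required top-level keys are present.
    """
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence line and the trailing fence
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    story = json.loads(text)
    missing = REQUIRED_KEYS - story.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return story

sample = (
    '{"title": "Beep the Brave", "characters": ["Beep"], '
    '"setting": "a hillside road", "story": "...", '
    '"moral": "Courage comes in small sizes."}'
)
print(parse_story(sample)["title"])  # Beep the Brave
```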
## Usage

### Transformers (CPU/GPU)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "garethpaul/gpt-oss-20b-fableflux"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="bfloat16",
)

messages = [
    {"role": "system", "content": "You are StoryWeaver. Always respond in valid JSON with keys: {title, characters, setting, story, moral}."},
    {"role": "user", "content": "Tell me a bedtime story about a brave little car."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=700, temperature=0.7, top_p=0.9)

# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
### vLLM (recommended for serving)
```bash
pip install vllm==0.10.1+gptoss --extra-index-url https://wheels.vllm.ai/gpt-oss/

vllm serve garethpaul/gpt-oss-20b-fableflux \
  --max-model-len 8192 \
  --tensor-parallel-size 1
```
Then query with the OpenAI API format:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="garethpaul/gpt-oss-20b-fableflux",
    messages=[
        {"role": "system", "content": "You are StoryWeaver. Respond ONLY in JSON."},
        {"role": "user", "content": "Tell me a bedtime story about a ballet dancer named Jones."},
    ],
)
# In the OpenAI v1 client, message is an object, not a dict
print(resp.choices[0].message.content)
```
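Since the server returns free-form text, a client can re-ask once when the reply fails to parse as JSON. A minimal sketch under the setup above; the helper name and retry policy are illustrative, not part of this repo:

```python
import json

def story_or_retry(client, messages,
                   model="garethpaul/gpt-oss-20b-fableflux", retries=1):
    """Request a story and re-ask once if the reply is not valid JSON."""
    for _ in range(retries + 1):
        resp = client.chat.completions.create(model=model, messages=messages)
        text = resp.choices[0].message.content
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            continue  # ask again with the same messages
    raise ValueError("model did not return valid JSON")
```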
## Training Details
- Method: QLoRA → merged → MXFP4 re-export
- Dataset: garethpaul/children-stories-dataset
- LoRA config: rank=8, α=16, dropout=0.05
- Frameworks: transformers, peft, trl
- Merged to: BF16 → MXFP4 (vLLM-compatible safetensors)
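For reference, the listed hyperparameters map onto the argument names `peft.LoraConfig` uses. A sketch of plausible values, not the exact training script:

```python
# Hypothetical mapping of the hyperparameters above onto peft's
# LoraConfig argument names (r, lora_alpha, lora_dropout); the
# task_type value is an assumption for a causal LM.
lora_kwargs = {
    "r": 8,                # LoRA rank
    "lora_alpha": 16,      # scaling numerator
    "lora_dropout": 0.05,
    "task_type": "CAUSAL_LM",
}

# peft scales LoRA updates by alpha / r; here that factor is 2.0
scaling = lora_kwargs["lora_alpha"] / lora_kwargs["r"]
print(scaling)  # 2.0
```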
## Related
- openai/gpt-oss-20b – base model
- garethpaul/gpt-oss-20b-children-qlora – adapter repo
- garethpaul/children-stories-dataset – training dataset