Overview
gpt-oss-20b-4bit-grpo: an Unsloth LoRA adapter for openai/gpt-oss-20b
Training
Fine-tuned with Unsloth + QLoRA (4-bit) and TRL's GRPO trainer (reinforcement learning); a sketch of this setup is shown below.
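The training script itself is not included in this card, so the following is only a minimal sketch of the named recipe (Unsloth 4-bit base, LoRA via get_peft_model, TRL's GRPOTrainer). The dataset, reward function, LoRA ranks, and other hyperparameters here are illustrative placeholders, not the ones actually used for this adapter.

# Minimal sketch of an Unsloth QLoRA + TRL GRPO run.
# Dataset, reward function, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gpt-oss-20b-unsloth-bnb-4bit",  # 4-bit base from the model tree
    max_seq_length = 2048,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
)

def reward_len(completions, **kwargs):
    # Placeholder reward: prefer completions near 20 characters.
    # Replace with a task-specific reward (e.g. checking the math answer).
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split = "train")  # placeholder prompt dataset

trainer = GRPOTrainer(
    model = model,
    reward_funcs = reward_len,
    args = GRPOConfig(output_dir = "outputs", per_device_train_batch_size = 1),
    train_dataset = dataset,
)
trainer.train()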
QuickStart
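First load the adapter and tokenizer. This is a minimal sketch assuming the repo id zary0/gpt-oss-20b-4bit-grpo from the model tree and an Unsloth-based setup:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "zary0/gpt-oss-20b-4bit-grpo",  # this LoRA adapter (loads the 4-bit base underneath)
    max_seq_length = 2048,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's inference mode

Then run the chat example: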
messages = [
    {"role": "system", "content": "reasoning language: French\n\nYou are a helpful assistant that can solve mathematical problems."},
    {"role": "user", "content": "Solve x^5 + 3x^4 - 10 = 3."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
    return_dict = True,
    reasoning_effort = "medium",  # gpt-oss reasoning effort: "low", "medium", or "high"
).to(model.device)

from transformers import TextStreamer
# Stream the generated tokens to stdout as they are produced.
_ = model.generate(**inputs, max_new_tokens = 2048, streamer = TextStreamer(tokenizer))
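If you prefer plain Transformers + PEFT over Unsloth, a roughly equivalent load (assuming the same repo ids from the model tree) looks like this:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gpt-oss-20b-unsloth-bnb-4bit"  # 4-bit base from the model tree
adapter_id = "zary0/gpt-oss-20b-4bit-grpo"        # this LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map = "auto")
model = PeftModel.from_pretrained(model, adapter_id)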
Acknowledgements
gpt‑oss authors and maintainers
Unsloth / PEFT / TRL / Transformers / Datasets communities
Contact
Author: Ryota Ozawa (zawatti)
X (Twitter): zawattizawawa
Model Tree
Adapter: zary0/gpt-oss-20b-4bit-grpo
Base model: openai/gpt-oss-20b
Quantized base: unsloth/gpt-oss-20b-unsloth-bnb-4bit