# GPT-OSS-20B – Multilingual Reasoning (QLoRA, SFT)

Adapter weights for unsloth/gpt-oss-20b, fine-tuned with QLoRA + SFT to improve multilingual instruction-following and reasoning. This repo contains LoRA adapters only; load them on top of the base model.

## Quick start

```python
from unsloth import FastLanguageModel
from peft import PeftModel
from transformers import AutoTokenizer

BASE = "unsloth/gpt-oss-20b"
ADAPTER = "llmimplementation/gpt-oss-20b-sft-multilingual-reasoning-qlora-v1"

# Load the 4-bit base model and its tokenizer.
base, tok = FastLanguageModel.from_pretrained(
    BASE, load_in_4bit=True, max_seq_length=1024
)

# Prefer the tokenizer pushed with the adapter (if any); otherwise keep the base tokenizer.
try:
    tok = AutoTokenizer.from_pretrained(ADAPTER, use_fast=True)
except Exception:
    pass

# Attach the LoRA adapter and switch to inference mode.
model = PeftModel.from_pretrained(base, ADAPTER)
FastLanguageModel.for_inference(model)

prompt = "<|start|>user<|message|>List 3 creative uses for paper clips.<|end|>\n<|start|>assistant<|message|>"
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=False))
```
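
If you want a standalone checkpoint (no PEFT at inference time), the adapter can be merged into a full-precision copy of the base model. This is a minimal sketch, not a tested recipe: it assumes enough memory for the 20B weights in bf16 (roughly 40 GB), and it reloads the base in 16-bit because merging into the quantized 4-bit base loaded above is generally not supported. The output directory name is just an example.

```python
# Optional: merge the LoRA adapter into a 16-bit copy of the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_fp = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base_fp, ADAPTER).merge_and_unload()
merged.save_pretrained("gpt-oss-20b-multilingual-merged")  # example output dir
tok.save_pretrained("gpt-oss-20b-multilingual-merged")
```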

## Prompt format (GPT-OSS)

```
<|start|>user<|message|>{your text}<|end|>
<|start|>assistant<|message|>
```
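
If the tokenizer ships a chat template (the GPT-OSS tokenizers do), you can let it render this format instead of concatenating special tokens by hand. A sketch, reusing `tok` and `model` from the quick start; note the rendered prompt may include extra system/developer turns depending on the template, so check that it matches the format above:

```python
# Build the prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "List 3 creative uses for paper clips."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=False))
```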

## Intended use

General multilingual instruction following, brainstorming, and light reasoning. Not for high-risk domains without human review.

## Notes

- Inherits base model capabilities; the adapters nudge behavior toward multilingual reasoning.
- May still hallucinate or reflect dataset biases.

## License

Apache-2.0 (respect any upstream licenses of the base model and data).


## Tip: include the README when pushing
- Easiest: create a local folder with your adapter files **and** `README.md`, then call `model.push_to_hub(repo_id)` from there (or use `huggingface_hub`'s `upload_file` if pushing after the fact); see the sketch below.
- Don't forget to push the tokenizer if you customized it:
```python
tokenizer.push_to_hub("llmimplementation/gpt-oss-20b-sft-multilingual-reasoning-qlora-v1")
```
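
For example, to add the README after the adapter has already been pushed, a sketch using `huggingface_hub` (assumes you are authenticated, e.g. via `huggingface-cli login`, and have a local `README.md`):

```python
from huggingface_hub import HfApi

api = HfApi()
# Upload a local README.md into the existing adapter repo.
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="llmimplementation/gpt-oss-20b-sft-multilingual-reasoning-qlora-v1",
    repo_type="model",
)
```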