# 🧠 Neur-0.0-Full
A fine-tuned variant of `openai/gpt-oss-20b`, trained with supervised fine-tuning (SFT) using QLoRA adapters and later merged into a full standalone model.
## 📋 Overview
Neur 0.0 is a 20B-parameter transformer language model derived from GPT-OSS-20B, fine-tuned to improve reasoning, coding, and multi-step tool-use behavior.
It is intended to be used alongside agentic tools to help users complete everyday tasks on their computer.
It can also write or edit code directly in any IDE or platform at the user's request.
This release merges the LoRA adapters into the base model, so it can be loaded directly with `from_pretrained()`; no external adapter weights are required.
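For reference, here is a minimal sketch of how such a merge is typically produced with PEFT's `merge_and_unload()`. The adapter path `out-sft-qlora` is an illustrative assumption; only the `out-sft-qlora/merged-standalone` output directory appears in the table below.

```python
# Sketch only: merging trained QLoRA adapters into a standalone checkpoint.
# The adapter path "out-sft-qlora" is an assumption for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in BF16, then apply and merge the trained adapters.
base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", dtype=torch.bfloat16, device_map="auto"
)
merged = PeftModel.from_pretrained(base, "out-sft-qlora").merge_and_unload()

# Save the merged weights and tokenizer as a standalone model.
merged.save_pretrained("out-sft-qlora/merged-standalone")
tok = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
tok.save_pretrained("out-sft-qlora/merged-standalone")
```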
## 🔧 Model Details
| Attribute | Value |
|---|---|
| Base Model | `openai/gpt-oss-20b` |
| Architecture | Decoder-only Transformer (GPT-style) |
| Parameters | ~20.9B |
| Precision | bfloat16 (BF16) |
| Fine-tuning Method | QLoRA (4-bit quantization, rank r=16, α=32) |
| Dataset | CodeAlpaca-20k + curated reasoning data |
| Training Steps | 1500 |
| Optimizer | AdamW with cosine LR schedule |
| Output Directory | `out-sft-qlora/merged-standalone` |
| Frameworks | 🤗 Transformers, PEFT, BitsAndBytes, Accelerate |
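For context, the hyperparameters above map onto the standard 🤗 PEFT/BitsAndBytes configuration roughly as follows. This is a sketch, not the confirmed training script: the NF4 quantization type, double quantization, and the target attention projections are assumptions.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization of the frozen base model during QLoRA training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumption: NF4 is the usual QLoRA choice
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the BF16 precision above
    bnb_4bit_use_double_quant=True,         # assumption
)

# LoRA adapter configuration matching the table (r=16, alpha=32).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
```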
## 🚀 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "xenon111/neur-0.0-full"

tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    # Prefer BF16 on GPUs that support it, fall back to FP16 on other GPUs,
    # and use FP32 on CPU.
    dtype=torch.bfloat16 if torch.cuda.is_available() and torch.cuda.is_bf16_supported()
    else (torch.float16 if torch.cuda.is_available() else torch.float32),
    trust_remote_code=True,
)

prompt = "Write a Python function that sorts a list of numbers using merge sort."
inputs = tok(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect.
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tok.decode(output[0], skip_special_tokens=True))
```
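Since the model targets multi-step, tool-assisted workflows, chat-style prompting through the tokenizer's chat template may work better than raw prompts. The sketch below reuses `tok` and `model` from above and assumes the merged checkpoint ships the base model's chat template; the system message is illustrative.

```python
# Sketch: chat-style prompting via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},  # assumption
    {"role": "user", "content": "Refactor this nested loop into a list comprehension."},
]
input_ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```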