
Wolf-Rayet-2B-Prime3

Wolf-Rayet-2B-Prime3 is a compact, coding-optimized language model built on the Qwen3 1.7B architecture, fine-tuned for high-accuracy code generation, debugging, and technical reasoning. At roughly 1.7 billion parameters, it offers a strong balance between performance and deployability, making it well suited to developers, educators, and engineers working in resource-constrained or latency-sensitive environments.

GGUF: https://huggingface.co/prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF
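
For local inference with the GGUF weights, the quantized files can be loaded through llama-cpp-python. The snippet below is a minimal sketch; the quantization filename pattern is an assumption, so check the GGUF repository listing for the exact file names.

from llama_cpp import Llama

# Hypothetical quantization filename pattern; verify against the GGUF repo
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function to check if a number is prime."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])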


Key Features

  1. Qwen3 Architecture Core: Built on the modern and efficient Qwen3 1.7B transformer backbone, offering improved context handling and token efficiency for both single-turn and multi-turn programming tasks.

  2. Code-First Fine-Tuning: Trained extensively on diverse code datasets covering Python, JavaScript, C++, and Bash, with auxiliary tuning on software documentation, APIs, and debugging dialogues.

  3. Multi-Step Technical Reasoning: Able to deconstruct complex programming problems, explain logic, refactor code, and correct errors, which makes it particularly useful for students, engineers, and coding educators.

  4. Structured Output Proficiency: Generates structured formats such as JSON, YAML, Markdown, and code blocks, ready to plug into developer tools, notebooks, and documentation pipelines (see the sketch after this list).

  5. Compact Yet Capable: At roughly 1.7B parameters, it delivers competitive performance without the resource requirements of larger models and deploys easily on modern GPUs or high-end CPUs.

  6. Multilingual Coding Support: Generates and understands code in 10+ programming languages, with a focus on real-world use cases, automation scripts, and algorithmic solutions.
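
As a minimal sketch of the structured-output point above, the model can be asked for strict JSON through the Transformers text-generation pipeline and the reply parsed directly. The prompt schema and generation settings here are illustrative, not prescribed by the model.

import json
from transformers import pipeline

# Load the model through the text-generation pipeline (illustrative settings)
generator = pipeline(
    "text-generation",
    model="prithivMLmods/Wolf-Rayet-2B-Prime3",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Reply with valid JSON only, no prose and no code fences."},
    {"role": "user", "content": "Describe a REST endpoint for creating a user as JSON with keys 'method', 'path', and 'body_fields'."},
]

result = generator(messages, max_new_tokens=256)
reply = result[0]["generated_text"][-1]["content"]

# json.loads raises ValueError if the model strays from strict JSON,
# so downstream code should be prepared to re-prompt or retry
data = json.loads(reply)
print(data)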


Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Wolf-Rayet-2B-Prime3"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check if a number is prime."

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Keep only the newly generated tokens, dropping the echoed prompt
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
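
For interactive use, the same generation call can stream tokens as they are produced. The sketch below reuses model, tokenizer, and model_inputs from the quickstart above; the generation settings are illustrative.

from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)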

Intended Use

  • Code generation, refactoring, and cross-language translation
  • Programming education and tutoring
  • Technical documentation and boilerplate generation
  • Debugging assistance and bug-fix suggestions (see the sketch after this list)
  • Lightweight integration into IDEs, developer tools, and offline environments
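
For the debugging use case, one convenient pattern is to wrap the quickstart generation loop in a small helper. suggest_fix below is a hypothetical convenience function for illustration, not part of the model or Transformers API.

def suggest_fix(model, tokenizer, broken_code: str, error_message: str, max_new_tokens: int = 512) -> str:
    """Ask the model to explain and correct a failing snippet (hypothetical helper)."""
    messages = [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": (
            "This code raises an error. Explain the bug and return a corrected version.\n\n"
            f"Code:\n{broken_code}\n\nError:\n{error_message}"
        )},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example: a swapped operator caught by a failing test
print(suggest_fix(model, tokenizer, "def add(a, b): return a - b", "AssertionError: add(2, 2) == 4"))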

Limitations

  • Context length is shorter than that of larger models (>7B)
  • May require prompt engineering for complex or deeply nested code
  • Limited general natural language conversation capabilities
  • Not intended for creative writing or non-technical tasks

References

  1. Qwen3 (1.7B) Model Overview
  2. YaRN: Efficient Context Window Extension of Large Language Models