Qwen3-4B-Agentic-Reasoner

yasserrmd/qwen3-4b-agentic-reasoner is a merged model that combines the agentic instruction-following strength of Menlo/Jan-nano with the reasoning and structured thought capabilities of POLARIS-Project/Polaris-4B-Preview, using the Qwen/Qwen3-4B architecture as the base.

This merge was performed using mergekit and the TIES method for fine-grained parameter blending.


🧠 Intended Use

This model is intended for use in:

  • Multi-step reasoning tasks
  • Agent-style instruction following (CLI assistants, web automation)
  • Educational assistance, planning, and explanation
  • Natural language code generation, JSON/schema design
  • Legal, productivity, and roleplay simulations

πŸ§ͺ Merge Details

πŸ”€ Merge Method

This model was merged using the TIES merge method, with Qwen/Qwen3-4B as the base model.
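TIES (TrIm, Elect Sign & Merge) resolves sign conflicts between the fine-tuned models' task vectors before averaging them. The sketch below is a toy NumPy illustration of the idea on flat parameter vectors, not mergekit's actual implementation; the function name and default values are made up for the example.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.2, lam=1.0):
    """Toy TIES merge over flat parameter vectors (illustration only)."""
    # 1. Task vectors: each fine-tuned model's delta from the base.
    taus = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for tau in taus:
        k = max(1, int(density * tau.size))
        thresh = np.sort(np.abs(tau))[-k]
        trimmed.append(np.where(np.abs(tau) >= thresh, tau, 0.0))
    trimmed = np.stack(trimmed)

    # 3. Elect signs: per-parameter majority sign, weighted by magnitude.
    elected = np.sign(trimmed.sum(axis=0))

    # 4. Disjoint merge: average only entries that agree with the elected sign.
    agree = (np.sign(trimmed) == elected) & (trimmed != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_tau = (trimmed * agree).sum(axis=0) / counts

    return base + lam * merged_tau
```

The trimming step is what the `density`-style knobs in mergekit configs control; the `weight: 0.5` entries in the configuration below scale each model's contribution.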

🀝 Models Merged

| Model | Role |
|---|---|
| POLARIS-Project/Polaris-4B-Preview | Deep reasoning & chain-of-thought |
| Menlo/Jan-nano | Agentic & instruction following |

βš™οΈ Configuration

```yaml
models:
  - model: POLARIS-Project/Polaris-4B-Preview
    parameters:
      weight: 0.5
  - model: Menlo/Jan-nano
    parameters:
      weight: 0.5
merge_method: ties
base_model: Qwen/Qwen3-4B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
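Assuming the YAML above is saved as `config.yaml`, a merge like this can be reproduced with mergekit's CLI (shown as a sketch; available flags may differ across mergekit versions):

```shell
pip install mergekit
mergekit-yaml config.yaml ./qwen3-4b-agentic-reasoner --copy-tokenizer
```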

πŸ“Š Prompt Evaluation

This model was evaluated on handcrafted prompts covering:

  • Chain-of-thought reasoning
  • Math and logic
  • Code writing and CLI instructions
  • JSON/schema generation
  • Role-based planning and writing tasks
  • Arabic translation
  • Legal drafting

βœ… Performance Highlights

| Criterion | Result |
|---|---|
| CoT Reasoning | Excellent (multi-step math, planning) |
| Agentic Tasks | Strong (shell scripts, terminal agents) |
| Code Output | Clean formatting and logical structure |
| Format Awareness | Recognizes JSON, email, legal structure |
| Instruction Follow-through | Reliable and contextual |
| Language Tasks | Accurate Arabic translation and paraphrase |

Average prompt score (0–3 scale): 2.15. All outputs were logical, well-structured, and contextually accurate for the prompt types.


πŸš€ Inference

To use the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yasserrmd/qwen3-4b-agentic-reasoner"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # match the checkpoint's float16 weights
    device_map="auto",    # place layers on available GPU(s)/CPU
    trust_remote_code=True,
)

prompt = "Plan the first 3 steps for launching a nonprofit AI education platform."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

⚠️ License & Use

Respect the licenses of the original merged models. This model is released for research and personal experimentation purposes.


πŸ™ Acknowledgments

Thanks to the teams behind:

  • Alibaba's Qwen3 series
  • Menlo/Jan-nano project
  • POLARIS RL framework
  • MergeKit by @cg123

Model by @yasserrmd

Model size: 4.02B parameters (Safetensors, F16)