# Qwen3-4B-Agentic-Reasoner

[yasserrmd/qwen3-4b-agentic-reasoner](https://huggingface.co/yasserrmd/qwen3-4b-agentic-reasoner) is a merged model that combines the agentic instruction-following strength of [Menlo/Jan-nano](https://huggingface.co/Menlo/Jan-nano) with the reasoning and structured-thought capabilities of [POLARIS-Project/Polaris-4B-Preview](https://huggingface.co/POLARIS-Project/Polaris-4B-Preview), using the [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) architecture as the base.

The merge was performed with [mergekit](https://github.com/arcee-ai/mergekit) using the TIES method (TrIm, Elect Sign & Merge) for fine-grained parameter blending.
## 🧠 Intended Use
This model is intended for use in:
- Multi-step reasoning tasks
- Agent-style instruction following (CLI assistants, web automation)
- Educational assistance, planning, and explanation
- Natural language code generation and JSON/schema design (see the sketch after this list)
- Legal, productivity, and roleplay simulations
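
To make the JSON/schema-design use case concrete, the snippet below shows a hypothetical prompt of the kind this model targets. The task and field names are illustrative assumptions, not taken from the model's evaluation set.

```python
# Hypothetical prompt for the JSON/schema-design use case; the task and
# field names are illustrative only, not from the evaluation set.
schema_prompt = (
    "Design a JSON Schema for a task-tracking API. Each task has an id "
    "(string), a title (string), a priority (one of: low, medium, high), "
    "and an optional list of string tags. Return only the schema document."
)
# Run `schema_prompt` through the snippet in the Inference section below;
# a well-formed answer is a single JSON object with "type": "object" and
# a "properties" map covering all four fields.
```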
## 🧪 Merge Details

### 🔀 Merge Method

This model was merged using the TIES merge method, with Qwen/Qwen3-4B as the base model. TIES trims low-magnitude parameter deltas, resolves sign conflicts between the donor models, and merges the surviving deltas into the base.

### 🤝 Models Merged
| Model | Role |
|---|---|
| POLARIS-Project/Polaris-4B-Preview | Deep reasoning & chain-of-thought (CoT) |
| Menlo/Jan-nano | Agentic behavior & instruction following |
### ⚙️ Configuration

```yaml
models:
  - model: POLARIS-Project/Polaris-4B-Preview
    parameters:
      weight: 0.5
  - model: Menlo/Jan-nano
    parameters:
      weight: 0.5
merge_method: ties
base_model: Qwen/Qwen3-4B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
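
To reproduce the merge, the configuration above can be run through mergekit's Python API. The snippet below is a minimal sketch, assuming the YAML is saved as `merge-config.yaml` (a hypothetical filename) and a recent mergekit release; the `mergekit-yaml` CLI entry point performs the same merge from the command line.

```python
# Minimal sketch of running the merge above with mergekit's Python API.
# Assumes the YAML config is saved as "merge-config.yaml" (hypothetical
# filename) and that mergekit is installed (pip install mergekit).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./qwen3-4b-agentic-reasoner",  # arbitrary output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```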
## 📊 Prompt Evaluation
This model was evaluated on handcrafted prompts covering:
- Chain-of-thought reasoning
- Math and logic
- Code writing and CLI instructions
- JSON/schema generation
- Role-based planning and writing tasks
- Arabic translation
- Legal drafting
## ✅ Performance Highlights

| Criterion | Result |
|---|---|
| CoT reasoning | Excellent (multi-step math, planning) |
| Agentic tasks | Strong (shell scripts, terminal agents) |
| Code output | Clean formatting and logical structure |
| Format awareness | Recognizes JSON, email, and legal structure |
| Instruction follow-through | Reliable and context-aware |
| Language tasks | Accurate Arabic translation and paraphrasing |
**Average prompt score (0–3 scale): 2.15.** All outputs were logical, well-structured, and contextually accurate for their prompt types.
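
For reference, a minimal sketch of how such rubric scores can be aggregated is shown below. The category names and per-prompt scores in the example are hypothetical placeholders; only the 2.15 average is reported for this model.

```python
# Minimal sketch of aggregating 0-3 rubric scores into a single average.
# The example scores are hypothetical placeholders; only the overall
# average (2.15) is published for this model.
from statistics import mean

def average_score(scored_prompts: dict[str, int]) -> float:
    """Average per-prompt rubric scores (each in 0-3) to two decimals."""
    assert all(0 <= s <= 3 for s in scored_prompts.values())
    return round(mean(scored_prompts.values()), 2)

print(average_score({"cot_math": 3, "cli_agent": 2, "json_schema": 2}))  # 2.33
```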
## 🚀 Inference

To use the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yasserrmd/qwen3-4b-agentic-reasoner"

# Load the tokenizer and model; device_map="auto" places the weights on a
# GPU when available, and torch_dtype="auto" keeps the checkpoint's dtype.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Plan the first 3 steps for launching a nonprofit AI education platform."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
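
Qwen3 models ship with a chat template; assuming this merge inherits the base model's template, the sketch below applies it on top of the snippet above. The `enable_thinking` flag is a Qwen3 template option that toggles the explicit reasoning trace; treat it as an assumption and drop it if the template rejects it.

```python
# Continues from the snippet above (tokenizer and model already loaded).
# Sketch of chat-style inference, assuming the merge inherits Qwen3's chat
# template; `enable_thinking` toggles Qwen3's explicit reasoning trace.
messages = [
    {"role": "user", "content": "Plan the first 3 steps for launching a nonprofit AI education platform."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```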
## ⚠️ License & Use
Respect the licenses of the original merged models. This model is released for research and personal experimentation purposes.
## 🙏 Acknowledgments
Thanks to the teams behind:
- Alibaba's Qwen3 series
- Menlo/Jan-nano project
- POLARIS RL framework
- mergekit by @cg123

Model by [@yasserrmd](https://huggingface.co/yasserrmd)