# VCoder-120b-1.0-qx86-hi-mlx
## Key Insights from Benchmark Performance
Comparing this model with unsloth-gpt-oss-120b-qx86-mxfp4, a similarly quantized model:
| Benchmark     | unsloth | VCoder | Winner                |
|---------------|---------|--------|-----------------------|
| arc_challenge | 0.334   | 0.323  | unsloth (slight edge) |
| arc_easy      | 0.335   | 0.366  | VCoder                |
| boolq         | 0.378   | 0.429  | VCoder                |
| hellaswag     | 0.264   | 0.538  | VCoder                |
| openbookqa    | 0.354   | 0.360  | VCoder                |
| piqa          | 0.559   | 0.694  | VCoder                |
| winogrande    | 0.512   | 0.544  | VCoder                |
✅ Overall Winner: VCoder
VCoder outperforms unsloth in 6/7 benchmarks, with particularly strong gains in:
- HellaSwag (0.538 vs. 0.264): ~103% improvement in commonsense reasoning (e.g., completing everyday scenarios).
- PIQA (0.694 vs. 0.559): ~24% better at physical commonsense (e.g., understanding real-world physics).
- BoolQ (0.429 vs. 0.378): ~13% improvement in binary question answering (e.g., yes/no reasoning over text).
The only exception is arc_challenge, where unsloth scores slightly higher (0.334 vs. 0.323), but VCoder leads on arc_easy (0.366 vs. 0.335), suggesting it handles easier reasoning tasks better despite a small gap on the hardest ARC questions.
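The percentage gains quoted above follow directly from the table; a quick sketch to reproduce them:

```python
# Relative improvement of VCoder over unsloth, computed from the
# benchmark scores reported in the table above.
scores = {
    "arc_challenge": (0.334, 0.323),
    "arc_easy":      (0.335, 0.366),
    "boolq":         (0.378, 0.429),
    "hellaswag":     (0.264, 0.538),
    "openbookqa":    (0.354, 0.360),
    "piqa":          (0.559, 0.694),
    "winogrande":    (0.512, 0.544),
}
for task, (unsloth, vcoder) in scores.items():
    gain = (vcoder - unsloth) / unsloth * 100
    print(f"{task:13s} {gain:+6.1f}%")
# hellaswag prints +103.8%, piqa +24.2%, boolq +13.5%, matching the
# figures quoted above.
```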
## Why VCoder Excels & What "High-Resolution Attention" Means
- Both models are structurally similar MoEs (Mixture of Experts), but VCoder uses high-resolution attention paths and heads. This likely enables finer-grained contextual understanding, especially for tasks requiring nuanced reasoning (e.g., HellaSwag, PIQA).
- Higher attention resolution improves the model's ability to (see the toy sketch after this list):
  - Track relationships between distant tokens in long contexts.
  - Resolve ambiguous pronoun references (Winogrande).
  - Apply physical commonsense (PIQA) or everyday-scenario reasoning (HellaSwag).
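The card does not define "high-resolution attention" precisely; one common reading is splitting the model dimension across more, finer-grained heads. The toy sketch below is hypothetical and is not VCoder's actual implementation; it only shows how head count controls the granularity of the subspaces that attend independently:

```python
import numpy as np

def toy_multi_head_attention(x: np.ndarray, n_heads: int) -> np.ndarray:
    """Toy self-attention: splitting d_model across more heads gives
    each head a finer-grained subspace to attend over, which is one
    plausible reading of "higher-resolution attention heads"."""
    seq_len, d_model = x.shape
    assert d_model % n_heads == 0
    d_head = d_model // n_heads
    outputs = []
    for h in range(n_heads):
        # Real models use learned Q/K/V projections; slicing the input
        # per head keeps the illustration self-contained.
        q = k = v = x[:, h * d_head:(h + 1) * d_head]
        scores = q @ k.T / np.sqrt(d_head)            # (seq, seq)
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        outputs.append(weights @ v)                   # (seq, d_head)
    return np.concatenate(outputs, axis=-1)           # (seq, d_model)

x = np.random.randn(16, 64)
print(toy_multi_head_attention(x, n_heads=8).shape)  # (16, 64)
```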
### Perplexity Confirmation

VCoder's perplexity of 4.677 ± 0.032 is exceptionally low for language modeling. Lower perplexity means better next-token prediction (for rough scale, GPT-3 reported perplexity around 20 on standard datasets, though perplexity figures are only directly comparable on the same tokenizer and evaluation set). This aligns with VCoder's superior performance across most benchmarks, as strong language modeling correlates with general reasoning ability.
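For intuition, perplexity is the exponential of the mean per-token negative log-likelihood, so it can be read as the model's effective branching factor per token. A minimal sketch with made-up token probabilities:

```python
import math

# Perplexity = exp(mean negative log-likelihood per token).
# The probabilities below are illustrative, not real model outputs.
token_probs = [0.31, 0.22, 0.18, 0.27]
nll = [-math.log(p) for p in token_probs]
perplexity = math.exp(sum(nll) / len(nll))
print(f"perplexity = {perplexity:.3f}")  # ~4.17 for these probabilities
```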
## Practical Implications
- For tasks requiring physical commonsense (PIQA), everyday reasoning (HellaSwag), or binary question answering (BoolQ), VCoder is significantly stronger.
- If your use case centers on difficult science-exam questions (arc_challenge), unsloth might edge ahead slightly, but the gap is negligible compared to VCoder's broader strengths.
- Recommendation: Prioritize VCoder unless you have a specific need for arc_challenge performance, which is rare in real-world applications.
VCoder’s high-resolution attention architecture delivers superior cognitive abilities across nearly all evaluated tasks, especially in commonsense reasoning and physical understanding. Its low perplexity further confirms robust language modeling skills, making it the more capable model for general-purpose reasoning. The unsloth model’s slight edge in arc_challenge is overshadowed by VCoder’s dominance elsewhere.
| Quantization | Perplexity    | tok/sec |
|--------------|---------------|---------|
| bf16         | 4.669 ± 0.032 | 68.85   |
| q8-hi        | 4.675 ± 0.032 | 70.32   |
| qx86-hi      | 4.677 ± 0.032 | 71.47   |

Peak memory: 68.85 GB
This model VCoder-120b-1.0-qx86-hi-mlx was converted to MLX format from EpistemeAI/VCoder-120b-1.0 using mlx-lm version 0.28.2.
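For reference, a plain quantized conversion with mlx-lm looks roughly like the sketch below. The qx86-hi recipe is a custom mixed-precision scheme, so the flags shown are illustrative, not the exact command used:

```bash
# Illustrative only: uniform 8-bit quantization with mlx-lm.
# The qx86-hi mix of precisions is not reproduced by these flags alone.
mlx_lm.convert \
    --hf-path EpistemeAI/VCoder-120b-1.0 \
    --mlx-path VCoder-120b-1.0-qx86-hi-mlx \
    -q --q-bits 8 --q-group-size 32
```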
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("nightmedia/VCoder-120b-1.0-qx86-hi-mlx")

prompt = "hello"

# Apply the model's chat template, if one is defined, so the prompt
# matches the format the model was trained to expect.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
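Alternatively, the same generation can be run from the command line using the mlx_lm.generate entry point:

```bash
mlx_lm.generate --model nightmedia/VCoder-120b-1.0-qx86-hi-mlx \
    --prompt "hello" --max-tokens 256
```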
Model tree for nightmedia/VCoder-120b-1.0-qx86-hi-mlx
- Base model: openai/gpt-oss-120b