Hercules-Coder-E4B-it (part of the Hercules Gemma 3N collection)
This fine-tuned model is specialized for STEM benchmarks such as LCB, Codeforces, AIME24, AIME25, AMC23, and MATH500.
Note: use Unsloth for inference (a minimal Unsloth sketch follows the transformers example below).
```
!pip install --upgrade transformers
```

```python
import torch
from transformers import pipeline

model_id = "EpistemeAI/Hercules-Coder-E4B-it"

# Build a text-generation pipeline in bfloat16 and let Accelerate
# place the model on the available device(s).
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

print(pipe("Write me a Python function to calculate the nth fibonacci number."))
```
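Since the note above recommends Unsloth for inference, here is a minimal sketch using Unsloth's FastLanguageModel API. It assumes a recent `unsloth` release and a CUDA GPU; the sequence length, 4-bit loading, and generation length are illustrative choices, not the authors' settings.

```python
# Minimal Unsloth inference sketch (illustrative settings, not the authors' exact setup).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="EpistemeAI/Hercules-Coder-E4B-it",
    max_seq_length=4096,   # illustrative; raise for longer prompts
    load_in_4bit=True,     # 4-bit quantization to cut VRAM use
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path

messages = [
    {"role": "user",
     "content": "Write me a Python function to calculate the nth fibonacci number."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```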
Benchmark results (5-shot):
| Tasks | Version | Filter | n-shot | Metric | Value |
|---|---|---|---|---|---|
| arc_challenge | 1 | none | 5 | acc | 0.5759 |
| hellaswag | 1 | none | 5 | acc | 0.7651 |
| winogrande | 1 | none | 5 | acc | 0.7526 |
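These rows follow the output format of EleutherAI's lm-evaluation-harness. Assuming that harness was used, an invocation along these lines (the dtype and batch-size arguments are illustrative) should reproduce the 5-shot numbers:

```
!pip install lm-eval
!lm_eval --model hf --model_args pretrained=EpistemeAI/Hercules-Coder-E4B-it,dtype=bfloat16 --tasks arc_challenge,hellaswag,winogrande --num_fewshot 5 --batch_size auto
```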
GPQA Diamond result
| Tasks | Version | Filter | n-shot | Metric | Value |
|---|---|---|---|---|---|
| gpqa_diamond_zeroshot | 1 | none | 0 | acc | 0.2516 |
| gpqa_diamond_zeroshot | 1 | none | 0 | acc_norm | 0.2516 |
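The GPQA Diamond rows correspond to the harness task `gpqa_diamond_zeroshot`. Under the same assumption (note that the GPQA dataset is gated on the Hugging Face Hub, so you may need to log in and accept its terms first):

```
!lm_eval --model hf --model_args pretrained=EpistemeAI/Hercules-Coder-E4B-it,dtype=bfloat16 --tasks gpqa_diamond_zeroshot --num_fewshot 0
```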
This Gemma 3n model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Citation:

```bibtex
@misc{liu2025rstarcoderscalingcompetitivecode,
  title={rStar-Coder: Scaling Competitive Code Reasoning with a Large-Scale Verified Dataset},
  author={Yifei Liu and Li Lyna Zhang and Yi Zhu and Bingcheng Dong and Xudong Zhou and Ning Shang and Fan Yang and Mao Yang},
  year={2025},
  eprint={2505.21297},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.21297},
}
```