- Safetensors version: Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ (SuperThoughts 14B is based on Phi-4; see the Phi-4 Technical Report)
You must use this prompt format: https://huggingface.co/Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF#format
We are very proud to announce SuperThoughts, but you can just call it o1 mini 😉
A reasoning AI model based on Phi-4 that is better than QwQ at everything except IFEval, at a smaller size. It is really good at math and answers step by step in multiple languages with any prompt, as reasoning is built into the prompt format.
Please check the examples we provided: https://huggingface.co/Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF#%F0%9F%A7%80-examples It beats qwen/QwQ at MATH, MuSR, and GPQA (MuSR being a reasoning benchmark); see the evaluation below.
Unlike previous models we've uploaded, this is the best one we've published! It answers in two steps: reasoning -> final answer, like o1 mini and other similar reasoning AI models.
🧀 Which quant is right for you? (all tested!)
- Q3: This quant should be used on most high-end devices like the RTX 2080 Ti. Responses are very high quality, but it's slightly slower than Q4. (Runs at ~1 token per second or less on a Samsung Z Fold 5 smartphone.)
- Q4: This quant should be used on high-end modern devices like the RTX 3080, or any GPU, TPU, or CPU that is powerful enough and has at minimum 15 GB of available memory. (We personally use it on servers and high-end computers.) Recommended; see the loading sketch below this list.
- Q8: This quant should be used on very high-end modern devices which can handle its size. It is very powerful, but Q4 is more well rounded. Not recommended.
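As a rough illustration, here is how one of these quants could be loaded with the llama-cpp-python bindings. This is a minimal sketch, not part of the card: the GGUF filename and offload settings below are assumptions, so substitute the actual file name from this repo.

```python
# Minimal loading sketch using the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="SuperThoughts-CoT-14B-16k-o1-QwQ.Q4_K_M.gguf",  # hypothetical filename; use the real file from this repo
    n_ctx=16384,      # the model advertises a 16k context window
    n_gpu_layers=-1,  # offload all layers to the GPU if it has enough memory
)
```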
Evaluation Results
Detailed and summarized results can be found on the Open LLM Leaderboard. Please note, the low IFEval result is probably due to the model always reasoning; it does have issues with instruction following.
| Metric | Value (%) |
|---|---|
| Average | 31.17 |
| IFEval (0-Shot) | 5.15 |
| BBH (3-Shot) | 52.85 |
| MATH Lvl 5 (4-Shot) | 40.79 |
| GPQA (0-shot) | 19.02 |
| MuSR (0-shot) | 21.79 |
| MMLU-PRO (5-shot) | 47.43 |
Format
The model uses this prompt format (modified Phi-4 prompt):
{{ if .System }}<|system|>
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|im_end|>
{{ end }}<|assistant|>{{ .CoT }}<|CoT|>
{{ .Response }}<|FinalAnswer|><|im_end|>
It is recommended to use a system prompt like this one:
You are a helpful ai assistant. Make sure to put your finalanswer at the end.
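For runtimes that do not apply the template automatically, a minimal sketch of rendering it by hand in Python might look like this. The special token strings are taken directly from the format above; the example question is our assumption.

```python
# Hand-rendering the modified Phi-4 template above. The question is an
# illustrative assumption; the special tokens come from the format itself.
system = "You are a helpful ai assistant. Make sure to put your finalanswer at the end."
question = "What is 12 * 7?"

prompt = (
    f"<|system|>\n{system}<|im_end|>\n"
    f"<|user|>\n{question}<|im_end|>\n"
    f"<|assistant|>"
)
# The model then generates: its reasoning, the <|CoT|> marker, the final
# answer, and <|FinalAnswer|><|im_end|> to close the turn.
```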
🧀 Examples:
(q4_k_m, 10 GB RTX 3080, 64 GB memory, running inside MSTY; all examples use "You are a friendly ai assistant." as the system prompt.)
[Examples 1-4: screenshots available in the examples section linked above.]
All generated locally and pretty quickly too!
🧀 Information
- ⚠️ A low temperature must be used to ensure it won't fail at reasoning. We use 0.3 - 0.8!
- ⚠️ Due to the current prompt format, it may sometimes emit <|FinalAnswer|> without providing a final answer at the end; you can ignore this or modify the prompt format (see the parsing sketch after this list).
- This is our flagship model, with top-tier reasoning, rivaling gemini-flash-exp-2.0-thinking and o1 mini. Results are overall similar to both of them, and it even beats QwQ at certain benchmarks.
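Given the caveat above, a minimal parsing sketch (our assumption, not part of the card) can split a raw completion into reasoning and final answer using the markers from the Format section, and tolerates a missing or empty final answer:

```python
# Split a raw completion on the <|CoT|> and <|FinalAnswer|> markers from the
# Format section; returns an empty answer if <|FinalAnswer|> has nothing before it.
def split_completion(text: str) -> tuple[str, str]:
    reasoning, _, rest = text.partition("<|CoT|>")    # reasoning precedes <|CoT|>
    answer, _, _ = rest.partition("<|FinalAnswer|>")  # answer precedes <|FinalAnswer|>
    return reasoning.strip(), answer.strip()

reasoning, answer = split_completion(
    "First, 12 * 7 = 84.<|CoT|>\n84<|FinalAnswer|><|im_end|>"
)
print(answer)  # -> 84
```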
Supported languages: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian
🧀 Uploaded model
- Developed by: Pinkstack
- License: MIT
- Finetuned from model: Pinkstack/PARM-V1-phi-4-4k-CoT-pytorch
This Phi-4 model was trained with Unsloth and Hugging Face's TRL library.
Model tree for Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF
Base model: microsoft/phi-4