  • safetensors version: Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ
  • Phi-4 Technical Report (SuperThoughts 14B is based on Phi-4)

You must use this prompt format: https://huggingface.co/Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF#format

We are very proud to announce SuperThoughts, but you can just call it o1 mini 😉

A reasoning AI model based on Phi-4 that is better than QwQ at everything but IFEval, at a smaller size. It is really good at math and answers step by step in multiple languages with any prompt, as reasoning is built into the prompt format.

Please check the examples we provided: https://huggingface.co/Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF#%F0%9F%A7%80-examples. It beats Qwen/QwQ at MATH, MuSR, and GPQA (MuSR being a reasoning benchmark).

Unlike previous models we've uploaded, this one is the best we've published! It answers in two steps, reasoning -> final answer, like o1 mini and other similar reasoning AI models.

🧀 Which quant is right for you? (all tested!)

  • Q3: This quant should be used on most high-end devices like an RTX 2080 Ti. Responses are very high quality, but it is slightly slower than Q4. (Runs at ~1 token per second or less on a Samsung Z Fold 5 smartphone.)
  • Q4: This quant should be used on high-end modern devices like an RTX 3080, or any GPU, TPU, or CPU that is powerful enough and has at least 15 GB of available memory. (This is what we personally use on servers and high-end computers.) Recommended; see the download sketch after this list.
  • Q8: This quant should be used only on very high-end modern devices that can handle it. It is very capable, but Q4 is more well rounded, so Q8 is not recommended for most users.
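If you want to grab a single quant programmatically, here is a minimal sketch using huggingface_hub; the exact GGUF filename is an assumption, so check the repository's file list for the real name:

```python
# Minimal sketch: download one quant from the GGUF repo with huggingface_hub.
# The filename below is an assumption; check the repo's file list for the exact name.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF",
    filename="SuperThoughts-CoT-14B-16k-o1-QwQ.Q4_K_M.gguf",  # hypothetical filename
)
print(model_path)  # local path to the downloaded quant
```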

Evaluation Results

Detailed results can be found here! Summarized results can be found here! Please note that the low IFEval result is probably due to the model always reasoning; it does have issues with instruction following.

Metric Value (%)
Average 31.17
IFEval (0-Shot) 5.15
BBH (3-Shot) 52.85
MATH Lvl 5 (4-Shot) 40.79
GPQA (0-shot) 19.02
MuSR (0-shot) 21.79
MMLU-PRO (5-shot) 47.43

Format

The model uses this prompt format (a modified Phi-4 prompt):

{{ if .System }}<|system|>
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|im_end|>
{{ end }}<|assistant|>{{ .CoT }}<|CoT|>
{{ .Response }}<|FinalAnswer|><|im_end|>

It is recommended to use a system prompt like this one:

You are a helpful ai assistant. Make sure to put your finalanswer at the end. 
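
As a rough, hedged sketch of how you might render this template by hand and run a GGUF quant with llama-cpp-python (the special tokens come from the template above; the model path, context size, sampling values, and the example question are assumptions):

```python
# Sketch: render the modified Phi-4 template by hand and generate with llama-cpp-python.
# Special tokens (<|system|>, <|user|>, <|assistant|>, <|im_end|>) come from the template
# above; the model path, n_ctx, and sampling values are placeholders/assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="SuperThoughts-CoT-14B-16k-o1-QwQ.Q4_K_M.gguf",  # placeholder path
    n_ctx=16384,
)

system = "You are a helpful ai assistant. Make sure to put your finalanswer at the end."
user = "What is 17 * 24?"

# Generation starts right after <|assistant|>; the model then produces its reasoning,
# the <|CoT|> marker, the response, and <|FinalAnswer|> on its own.
prompt = (
    f"<|system|>\n{system}<|im_end|>\n"
    f"<|user|>\n{user}<|im_end|>\n"
    "<|assistant|>"
)

out = llm(
    prompt,
    max_tokens=1024,
    temperature=0.3,        # the card recommends 0.3 - 0.8
    stop=["<|im_end|>"],    # stop at the end-of-turn token
)
print(out["choices"][0]["text"])
```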

🧀 Examples:

(Q4_K_M on a 10 GB RTX 3080 with 64 GB of memory, running inside MSTY; all examples use "You are a friendly ai assistant." as the system prompt. Example screenshots 1-4 are at the link above.)

All generated locally and pretty quickly too!

🧀 Information

  • ⚠️ A low temperature must be used to ensure it won't fail at reasoning. We use 0.3 - 0.8!
  • ⚠️ Due to the current prompt format, it may sometimes emit <|FinalAnswer|> without providing a final answer at the end; you can ignore this, strip it in post-processing (see the sketch after this list), or modify the prompt format.
  • This is our flagship model, with top-tier reasoning rivaling gemini-flash-exp-2.0-thinking and o1 mini. Results are overall similar to both of them, and it even beats QwQ on certain benchmarks.
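
Below is a hedged post-processing sketch that splits the raw output into its reasoning and final-answer parts using the markers from the prompt format; the helper name and the example string are made up for illustration:

```python
# Sketch: split raw model output into (reasoning, final answer) using the markers
# from the prompt format above. Tolerates the quirk where <|FinalAnswer|> is emitted
# with no answer text after it.
def split_superthoughts_output(text: str) -> tuple[str, str]:
    reasoning, sep, rest = text.partition("<|CoT|>")
    if not sep:                 # no <|CoT|> marker: treat everything as the answer
        reasoning, rest = "", text
    final_answer = rest.replace("<|FinalAnswer|>", "").strip()
    return reasoning.strip(), final_answer

# Usage with a made-up output string:
raw = "First, 17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.<|CoT|>\n408<|FinalAnswer|>"
reasoning, answer = split_superthoughts_output(raw)
print(reasoning)  # the step-by-step part
print(answer)     # "408"
```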

Supported languages: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian

🧀 Uploaded model

  • Developed by: Pinkstack
  • License: MIT
  • Finetuned from model: Pinkstack/PARM-V1-phi-4-4k-CoT-pytorch

This Phi-4 model was trained with Unsloth and Hugging Face's TRL library.

GGUF model size: 14.7B params (llama architecture). Available quantizations: 3-bit, 4-bit, 8-bit.