---
base_model:
- Pinkstack/Superthoughts-lite-v1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- cot
- superthoughts
- reasoning
- grpo
license: apache-2.0
language:
- en
datasets:
- openai/gsm8k
- Pinkstack/intructions-sft-sharegpt
---
Demo: https://huggingface.co/spaces/Pinkstack/Chat-with-superthoughts-lite

# Information
Advanced, high-quality and **lite** reasoning at a tiny size you can run on your phone. At original quality, it runs at ~400 tokens/second on a single Nvidia H100 GPU via Friendli.

Trained similarly to DeepSeek-R1: we used SmolLM2 as the base model, then SFT fine-tuned it on reasoning with our own private Superthoughts instruct dataset, which mixes code, website generation, day-to-day chat, math, and counting problems. We also modified the tokenizer slightly, and after the SFT stage we used GRPO to further strengthen its mathematics and problem-solving abilities.

# Which quant is right for you?
- ***F16***: Fewest hallucinations and high-quality reasoning, but heavy to run.
- ***Q8_0***: Few hallucinations, high-quality reasoning. Recommended.
- ***Q6_K***: Hallucinates more; good reasoning but may fail at counting, etc. Use only if you cannot run Q8_0.
- ***Q4_K_M***: Not recommended. Hallucinates and doesn't always think properly, though it is the easiest to run.
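The quant guidance above can be sketched as a simple selection helper. This is a rough illustration, not part of the release: the bits-per-weight figures are approximations of typical GGUF quant sizes, the 1.7B parameter count and the 25% memory headroom are assumptions, and you should check the actual file sizes in the repo before choosing.

```python
# Rough helper mirroring the quant guidance above: prefer the highest-quality
# quant that fits in memory, falling back only when memory is tight.
# Bits-per-weight values are approximate (assumptions, not measured from
# this model's actual GGUF files).
QUANTS = [
    ("F16", 16.0),    # fewest hallucinations, heaviest
    ("Q8_0", 8.5),    # recommended balance
    ("Q6_K", 6.6),    # only if Q8_0 does not fit
    ("Q4_K_M", 4.8),  # last resort
]

def pick_quant(ram_gb: float, n_params_b: float = 1.7) -> str:
    """Return the highest-quality quant whose weights fit in ram_gb,
    leaving ~25% headroom for the KV cache and runtime overhead."""
    budget_bytes = ram_gb * 1e9 * 0.75
    for name, bits in QUANTS:
        size_bytes = n_params_b * 1e9 * bits / 8
        if size_bytes <= budget_bytes:
            return name
    return "Q4_K_M"  # smallest option; may still not fit on tiny devices

print(pick_quant(8.0))  # plenty of RAM: full-precision F16 fits
print(pick_quant(2.0))  # tight budget: falls back past Q8_0
```

In practice the quant also has to share memory with your OS and the context window, so treat the headroom factor as a tunable guess rather than a rule.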
We did not apply additional safety filtering during SFT, so this model is largely uncensored and can be rude at times; unless you specify in the system prompt that it should be harmless, it won't be. Users are solely responsible for their use of this AI. No output from the model represents the views of Pinkstack or any other third party, and it may produce biased, incorrect, or harmful content unless you set it up properly. For commercial use, we recommend either filtering outputs with another model such as LlamaGuard or ensuring harmlessness via the system prompt.