stm4 committed · Commit c0970fc · verified · 1 Parent(s): db4d8ee

Update README.md

Files changed (1): README.md +7 -0
README.md CHANGED
@@ -31,6 +31,7 @@ license: mit
 
 **Highlights**
 - Half the size of SOTA models like QWQ-32b and EXAONE-32b and hence **memory efficient**.
+- It consumes a fraction of the thinking tokens compared to other SOTA models, so it is not only memory efficient but **also faster** 🚀🚀🚀
 - On par with or outperforming SOTA models on tasks like MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval and Multi-Challenge, making it great for **Agentic / Enterprise tasks**.
 - Competitive performance on academic benchmarks like AIME-24, AIME-25, AMC-23, MATH-500 and GPQA considering model size.
 
@@ -48,6 +49,12 @@ Evaluations were conducted using [lm-eval-harness](https://github.com/EleutherAI
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63d3095c2727d7888cbb54e2/psk628cePXQZ9AghKuI5u.png)
 
+
+
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63d3095c2727d7888cbb54e2/jmvBwsJAIZVTP0xtWSTIz.png)
+
+
 ---
 
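The second hunk's context references [lm-eval-harness](https://github.com/EleutherAI) as the evaluation tool. As a minimal sketch of how one of the benchmarks named above could be run with that harness's Python API (the model ID, dtype, and batch size here are placeholders, not values taken from this commit):

```python
# Hypothetical sketch: evaluating a Hugging Face checkpoint on IFEval with
# EleutherAI's lm-eval-harness (pip install lm-eval). The model ID below is
# a placeholder; substitute the actual checkpoint being evaluated.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                   # Hugging Face transformers backend
    model_args="pretrained=org/model-16b,dtype=bfloat16",  # placeholder model ID
    tasks=["ifeval"],                             # one of the benchmarks listed above
    batch_size=8,
)
print(results["results"]["ifeval"])               # per-task metrics dict
```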