Update README.md

README.md CHANGED
@@ -31,6 +31,7 @@ license: mit
 
 **Highlights**
 - Half the size of SOTA models like QWQ-32b and EXAONE-32b, and hence **memory efficient**.
+- It consumes a fraction of the thinking tokens compared to other SOTA models, so it is not only memory efficient but **also faster** 🚀🚀🚀
 - On par with or outperforming SOTA models on tasks like MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval and Multi-Challenge, making it great for **Agentic / Enterprise tasks**.
 - Competitive performance on academic benchmarks like AIME-24, AIME-25, AMC-23, MATH-500 and GPQA, considering the model size.
@@ -48,6 +49,12 @@ Evaluations were conducted using [lm-eval-harness](https://github.com/EleutherAI
 
 
 
+[six lines added here; their content, apparently embedded images, is not recoverable from this view]
 ---
 
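The README states that evaluations were conducted with lm-eval-harness. A minimal sketch of such a run is shown below; the model id, task list, and settings are illustrative placeholders, not the exact configuration behind the reported numbers.

```python
# Minimal lm-eval-harness (v0.4+) sketch. "your-org/your-model-id" and the task
# list are placeholders -- substitute the model and benchmarks you want to score.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                            # Hugging Face transformers backend
    model_args="pretrained=your-org/your-model-id,dtype=bfloat16",
    tasks=["gsm8k", "ifeval"],                             # example tasks available in the harness
    batch_size=8,
)

# Per-task metrics (accuracy, exact match, etc.) live under results["results"]
print(results["results"])
```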