Update README.md
README.md CHANGED

```diff
@@ -18,7 +18,7 @@ Built on Qwen 2.5‑32B‑Base, AM-Thinking‑v1 shows strong performance on r
 </div>
 
 
-## 🧩 Why Another Reasoning
+## 🧩 Why Another 32B Reasoning Model Matters?
 
 Large Mixture‑of‑Experts (MoE) models such as **DeepSeek‑R1** or **Qwen3‑235B‑A22B** dominate leaderboards—but they also demand clusters of high‑end GPUs. Many teams just need *the best dense model that fits on a single card*.
 
 **AM‑Thinking‑v1** fills that gap **while remaining fully based on open-source components**:
```