tjadamlee committed c9b6d55 (verified · 1 parent: e245354)

Update README.md

Files changed (1): README.md (+1 −1)

README.md CHANGED
@@ -18,7 +18,7 @@ Built on Qwen 2.5‑32B‑Base, AM-Thinking‑v1 shows strong performance on r
 </div>
 
 
-## 🧩 Why Another Reasoning 32B Model Matters?
+## 🧩 Why Another 32B Reasoning Model Matters?
 
 Large Mixture‑of‑Experts (MoE) models such as **DeepSeek‑R1** or **Qwen3‑235B‑A22B** dominate leaderboards—but they also demand clusters of high‑end GPUs. Many teams just need *the best dense model that fits on a single card*.
 **AM‑Thinking‑v1** fills that gap **while remaining fully based on open-source components**: