---
license: apache-2.0
base_model:
- DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context
pipeline_tag: text-generation
library_name: mlx
---

## 📌 Overview
A 4-bit MLX quantized version of **Qwen3-30B-A6B-16-Extreme-128k-context** optimized for efficient inference with the **MLX library**, designed to handle long-context tasks (up to 128k tokens) with reduced resource usage. It retains the core capabilities of Qwen3 while enabling deployment on edge devices such as Apple Silicon Macs.
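
## 🚀 Usage

A minimal sketch using the `mlx-lm` package. The repository id below is a placeholder (this card does not state the published repo name), so substitute this model's actual path on the Hugging Face Hub:

```python
# pip install mlx-lm
from mlx_lm import load, generate

# Placeholder repo id -- replace with this repository's actual Hub path.
model, tokenizer = load("your-username/Qwen3-30B-A6B-16-Extreme-128k-context-4bit-mlx")

prompt = "Summarize the key ideas behind mixture-of-experts models."

# Qwen3 chat models expect the chat template to be applied to the prompt.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```

The same model can also be served from the command line with `mlx_lm.generate --model <repo-id> --prompt "..."`.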