---
license: apache-2.0
base_model:
- DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context
pipeline_tag: text-generation
library_name: mlx
---

## 📌 Overview

A 4-bit MLX quantized version of **Qwen3-30B-A6B**, optimized for efficient inference with the **MLX library** and designed for long-context tasks (192k tokens) with reduced resource usage. It retains the core capabilities of Qwen3 while enabling deployment on edge devices.
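
A minimal usage sketch with the `mlx-lm` package (install via `pip install mlx-lm`; requires Apple Silicon). The repository path below is a placeholder, not the published path of this quant, and the prompt text is illustrative:

```python
# Sketch: load a 4-bit MLX quant and generate text with mlx-lm.
# Assumes mlx-lm is installed; the model path is a placeholder.
from mlx_lm import load, generate

# Substitute the actual Hugging Face repo id of this quant here.
model, tokenizer = load("path/to/this-4bit-mlx-quant")

# Qwen3 expects its chat template; apply it before generating.
messages = [{"role": "user", "content": "Summarize mixture-of-experts models."}]
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

Long-context prompts work the same way; memory use grows with the KV cache, so very long inputs still benefit from the 4-bit weights on memory-constrained devices.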