MLX Community
A community organization for MLX model weights that run on Apple Silicon. It hosts ready-to-use models compatible with:
- mlx-lm – a Python package for LLM text generation and fine-tuning with MLX.
- mlx-swift-examples – a Swift package for running MLX models.
- mlx-vlm – a package for inference and fine-tuning of Vision Language Models (VLMs) with MLX.
These are pre-converted weights, ready to use with the example scripts or to integrate into your apps.
Quick start for LLMs
Install mlx-lm:

```shell
pip install mlx-lm
```

You can use mlx-lm from the command line. For example:

```shell
mlx_lm.generate --model mlx-community/Mistral-7B-Instruct-v0.3-4bit --prompt "hello"
```

This will download a 4-bit Mistral 7B model from the Hugging Face Hub and generate text for the given prompt.
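The same model can also be driven from Python. A minimal sketch using the load/generate helpers that ship with mlx-lm (the model name matches the CLI example above; `max_tokens` is one of several optional generation parameters):

```python
# Minimal sketch of text generation with the mlx_lm Python API.
# Assumes mlx-lm is installed and you are running on Apple Silicon.
from mlx_lm import load, generate

# Downloads the weights from the Hub on first use, same as the CLI.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Generate a completion for a prompt.
text = generate(model, tokenizer, prompt="hello", max_tokens=100)
print(text)
```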
To chat with an LLM, use:

```shell
mlx_lm.chat
```

This starts a chat REPL you can use to interact with the LLM; the chat context is preserved for the lifetime of the REPL.
For a full list of options, run --help on the command of interest, for example:

```shell
mlx_lm.chat --help
```
Conversion and Quantization
To quantize a model from the command line, run:

```shell
mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.3 -q
```

For more options, run:

```shell
mlx_lm.convert --help
```
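To get a feel for what -q buys you, here is a back-of-the-envelope memory estimate. It assumes 4-bit affine quantization with a group size of 64 and an fp16 scale and bias per group; these are illustrative assumptions, so check mlx_lm.convert --help for the actual defaults:

```python
# Back-of-the-envelope memory estimate for group-wise affine quantization.
# Assumed (not authoritative) parameters: 4 bits per weight, group size 64,
# plus an fp16 scale and bias stored for every group of weights.

def bits_per_weight(bits: int = 4, group_size: int = 64) -> float:
    """Effective storage cost per weight, including per-group scale/bias."""
    return bits + (2 * 16) / group_size

def model_size_gb(n_params: float, bits: int = 4, group_size: int = 64) -> float:
    """Approximate on-disk size of the quantized weights in gigabytes."""
    return n_params * bits_per_weight(bits, group_size) / 8 / 1e9

n_params = 7.25e9                       # roughly a Mistral-7B-class model
fp16_gb = n_params * 16 / 8 / 1e9       # ~14.5 GB in fp16
q4_gb = model_size_gb(n_params)         # ~4.1 GB at 4 bits + group overhead
print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {q4_gb:.1f} GB")
```

The per-group scale and bias add about half a bit per weight at group size 64, which is why a "4-bit" model is slightly larger than a naive bits-times-parameters count suggests.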
You can upload new models to Hugging Face by passing --upload-repo to mlx_lm.convert. For example, to upload a quantized Mistral-7B model to the MLX Hugging Face community:

```shell
mlx_lm.convert \
    --hf-path mistralai/Mistral-7B-Instruct-v0.3 \
    -q \
    --upload-repo mlx-community/my-4bit-mistral
```
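If you convert first and decide to upload later, the same result can be achieved with the huggingface_hub library. A sketch, where the local folder and target repo name are placeholders (the output directory of mlx_lm.convert defaults to a local folder such as mlx_model, but verify this for your version):

```python
# Sketch: uploading an already-converted model with huggingface_hub,
# as an alternative to --upload-repo. Paths and repo names are placeholders.
from huggingface_hub import HfApi

api = HfApi()  # uses the token cached by `huggingface-cli login`
api.create_repo("mlx-community/my-4bit-mistral", exist_ok=True)
api.upload_folder(
    folder_path="mlx_model",                  # local output of mlx_lm.convert
    repo_id="mlx-community/my-4bit-mistral",  # placeholder repo name
    repo_type="model",
)
```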
Models can also be converted and quantized directly in the mlx-my-repo Hugging Face Space.
For more details on the API, check out the full README.
Other Examples:
For more examples, visit the MLX Examples repo. The repo includes examples of:
- Image generation with Flux and Stable Diffusion
- Parameter-efficient fine-tuning with LoRA
- Speech recognition with Whisper
- Multimodal models such as CLIP and LLaVA
- Many other examples of different machine learning applications and algorithms
Models
The organization hosts 2,955 models, including:
- mlx-community/MinerU2.5-2509-1.2B-bf16
- mlx-community/granite-4.0-h-tiny-4bit
- mlx-community/granite-4.0-h-micro-4bit
- mlx-community/granite-4.0-h-micro-6bit
- mlx-community/InternVL3_5-30B-A3B-4bit
- mlx-community/InternVL3_5-GPT-OSS-20B-A4B-Preview-4bit
- mlx-community/InternVL3_5-1B-4bit
- mlx-community/Granite-4.0-H-Tiny-4bit-DWQ
- mlx-community/Apriel-1.5-15b-Thinker-bf16