# common-pile/comma-v0.1-2t ported to MLX
See common-pile/comma-v0.1-2t for the original model card.
Try this model out using uv like this (the first run will download a 15GB model):

```bash
uv run --python 3.12 \
  --with mlx-lm \
  mlx_lm.generate \
  --model simonw/comma-v0.1-2t-mlx \
  --prompt 'Facts about pelicans:'
```
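If you would rather call the model from Python, mlx-lm also exposes a `load`/`generate` API. Here is a minimal sketch of that approach; the prompt and `max_tokens` value are illustrative, not part of the original card:

```python
# Minimal sketch: generate text with the mlx-lm Python API.
# The prompt and max_tokens below are illustrative values.
from mlx_lm import load, generate

model, tokenizer = load("simonw/comma-v0.1-2t-mlx")
output = generate(
    model,
    tokenizer,
    prompt="Facts about pelicans:",
    max_tokens=200,
)
print(output)
```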
More notes on my blog. I created the MLX port using this command:
```bash
uv run --python 3.12 \
  --with mlx-lm \
  python -m mlx_lm convert \
  --hf-path common-pile/comma-v0.1-2t \
  --mlx-path ./comma-v0.1-2t-mlx
```
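The same conversion can be scripted from Python using mlx-lm's `convert` helper. A sketch, assuming the same unquantized defaults as the command above:

```python
# Sketch of the equivalent conversion via the mlx-lm Python API,
# assuming the same unquantized defaults as the CLI invocation above.
from mlx_lm import convert

convert(
    "common-pile/comma-v0.1-2t",
    mlx_path="./comma-v0.1-2t-mlx",
)
```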