AI & ML interests

Medical LLM | MLX | MLX-LM | Self-hosting | Finetuning | Conversational Agents

Recent Activity

div0-space updated a model about 1 month ago
LibraxisAI/Qwen3-14b-MLX-Q5
div0-space published a model about 1 month ago
LibraxisAI/Qwen3-14b-MLX-Q5

div0-space posted an update 19 days ago
Hello community!

TL;DR
We are proud to announce that the SoTA Llama-3_1-Nemotron-Ultra-253B is now available for Apple Silicon machines with 192 GB+ of unified memory, thanks to mlx>=0.26.1 and the newest mlx-lm runtime!
Enjoy!
LibraxisAI/Llama-3_1-Nemotron-Ultra-253B-v1-MLX-Q5

All the details are covered in the README.md file (we hope).
If you are not comfortable with the native mlx_lm.server runner, you can try running the model in LM Studio using our modified .jinja template and custom tokenizer_config.json.
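
If you prefer to script it directly in Python instead of using the server or LM Studio, here is a minimal sketch using the mlx-lm API; the example prompt and the max_tokens value are illustrative, not part of this release:

```python
# Minimal sketch: load the Q5 quant with mlx-lm and generate a reply.
# Assumes mlx >= 0.26.1, mlx-lm installed, and enough unified memory (192 GB+).
from mlx_lm import load, generate

model, tokenizer = load("LibraxisAI/Llama-3_1-Nemotron-Ultra-253B-v1-MLX-Q5")

# Build the prompt with the repo's chat template (the modified .jinja file).
messages = [{"role": "user", "content": "Summarize this discharge note ..."}]  # hypothetical prompt
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

text = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(text)
```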

@mlx-community @prince-canuma @bartowski @lmstudio-community
div0-space posted an update about 1 month ago
Hello Community!
I am experimenting with q5 quantization on mlx==0.26.2 and it is very promising! Our beloved CohereLabs/c4ai-command-a-03-2025, the model we use for our medical NLP work, goes from 200 GB+ in bfloat16 down to ~70 GB as a q5 MLX quant with no noticeable loss of quality (LibraxisAI/c4ai-command-a-03-2025-q5-mlx).

The q5 and mixed quants arrived with the release of mlx>=0.26.0. This quant is not yet supported by LM Studio, and we have not checked Ollama compatibility. For inference we currently use the native mlx_lm.chat or mlx_lm.server, which work perfectly with the Python mlx_lm generate pipeline. The repository contains the complete safetensors and config files, **plus** our everyday command-generation scripts: convert-to-mlx.sh, which wraps mlx_lm.convert with customizable parameters, and mlx-serve.sh for easily starting a server. Enjoy!
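
For reference, a minimal sketch of this kind of q5 conversion via the mlx-lm Python API; the output path and q_group_size below are illustrative defaults, not necessarily the exact settings used in convert-to-mlx.sh:

```python
# Minimal sketch: convert a bfloat16 HF checkpoint to a 5-bit MLX quant.
# Assumes mlx >= 0.26.0 and mlx-lm installed.
from mlx_lm import convert

convert(
    "CohereLabs/c4ai-command-a-03-2025",        # source HF repo (bfloat16 weights)
    mlx_path="c4ai-command-a-03-2025-q5-mlx",   # local output directory (illustrative)
    quantize=True,
    q_bits=5,          # 5-bit quantization, available with mlx >= 0.26
    q_group_size=64,   # illustrative group size
)
```

The resulting folder can then be served locally, for example with the mlx_lm.server CLI.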

@lmstudio-community @mlx-community @bartowski