---
base_model:
  - mistralai/Mistral-7B-v0.1
tags:
  - mlx
---

# Mistral7B-v0.1-4bit-mlx-LoRA-WikiSQL

A 4-bit, LoRA-fine-tuned Mistral-7B model for WikiSQL text-to-SQL tasks, trained with Apple's MLX framework and the `lora.py` script from mlx-examples.

This model was converted to MLX format from Hinova/mistral-7B-v0.1-4bit-mlx. Refer to the original model card for more details.


## 🚀 Overview

This model was trained using the MLX Examples LoRA tutorial:

- Fine-tuning based on `lora.py` (ml-explore/mlx-examples/lora)
- Adapted Mistral-7B on the WikiSQL dataset
- Applied low-rank ΔW adapters with LoRA, freezing the base weights
- Quantized to 4-bit for Apple Silicon efficiency
- Packaged in MLX format for seamless inference via mlx-lm
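The third bullet is the core of LoRA: the base weight matrix W stays frozen, and only a pair of small low-rank factors is trained, so the effective weight becomes W + (α/r)·A·B. A minimal NumPy sketch of the idea (sizes and variable names are illustrative, not taken from the model):

```python
import numpy as np

# Minimal LoRA sketch: the frozen base weight W is untouched; only the
# low-rank factors A and B are trainable, and the effective weight is
# W + (alpha / r) * (A @ B).
d_out, d_in, r = 8, 8, 2      # tiny illustrative sizes (rank r << d)
alpha = 4.0                   # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))       # frozen base weight
A = rng.normal(size=(d_out, r)) * 0.01   # trainable rank-r factor
B = np.zeros((r, d_in))                  # zero-init, so delta-W starts at 0

def lora_forward(x):
    # Equivalent to (W + scale * A @ B) @ x, without materializing delta-W.
    scale = alpha / r
    return W @ x + scale * (A @ (B @ x))

x = rng.normal(size=(d_in,))
# With B = 0 the adapter contributes nothing, so the output equals W @ x.
assert np.allclose(lora_forward(x), W @ x)
```

"Fusing" the adapters (as in `weights.npz` below) simply means folding the scaled A·B product back into W once training is done, so inference needs no extra matmuls.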

## 📦 Model Files

| File | Description |
| --- | --- |
| `weights.npz` | Fused weights: base model + LoRA adapters |
| `config.json` | Model config with quantization metadata |
| `tokenizer.model` | SentencePiece tokenizer for Mistral-7B |

## 💡 How to Use

### Install

```bash
pip install mlx-lm
```

## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model Hinova/mistral7b-v0.1-4bit-mlx-lora-wikisql --prompt "My name is"
```
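
Since the adapters were trained on WikiSQL, prompts that mirror the training format (table schema plus question, ending at the SQL answer) will work better than free-form text. A small sketch of building such a prompt before passing it to `--prompt`; the exact template is an assumption for illustration and should be matched to the format used during fine-tuning:

```python
# Hypothetical WikiSQL-style prompt builder. The template below is an
# assumption; adjust it to match the data format the adapters were
# actually trained on.
def build_prompt(table: str, columns: list[str], question: str) -> str:
    return (
        f"table: {table}\n"
        f"columns: {', '.join(columns)}\n"
        f"Q: {question}\n"
        f"A: "
    )

prompt = build_prompt(
    table="1-10015132-16",
    columns=["Player", "No.", "Nationality", "Position"],
    question="What position does the player named Jalen Rose play?",
)
print(prompt)
```

The model is then expected to complete the `A:` line with a SQL query over the given columns.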