|
--- |
|
base_model: |
|
- mistralai/Mistral-7B-v0.1 |
|
tags: |
|
- mlx |
|
--- |
|
# Mistral7B-v0.1-4bit-mlx-LoRA-WikiSQL |
|
|
|
A 4-bit, LoRA-fine-tuned Mistral-7B model for WikiSQL, trained with Apple’s MLX framework using the `lora.py` example script.
|
|
|
This model was converted to MLX format from [`Hinova/mistral-7B-v0.1-4bit-mlx`](https://huggingface.co/Hinova/mistral-7B-v0.1-4bit-mlx).
|
Refer to the [original model card](https://huggingface.co/Hinova/mistral-7B-v0.1-4bit-mlx) for more details on the model. |
|
|
|
--- |
|
|
|
## 🚀 Overview |
|
|
|
This model was trained using the MLX Examples LoRA tutorial: |
|
|
|
- Fine-tuning based on `lora.py` (ml-explore/mlx-examples/lora) |
|
- Adapted Mistral-7B on the WikiSQL dataset |
|
- Applied low-rank ΔW adapters with LoRA, freezing base weights |
|
- Quantized to 4-bit for Apple Silicon efficiency |
|
- Packaged in MLX format for seamless inference via `mlx-lm` |
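The LoRA recipe above can be sketched in NumPy (an illustrative toy, not the actual `lora.py` implementation; the dimensions and init scales here are assumptions):

```python
import numpy as np

d_out, d_in, r = 16, 16, 4          # r << d: low-rank bottleneck
W = np.random.randn(d_out, d_in)    # frozen base weight

# LoRA trains only the low-rank factors A and B; the update is delta_W = B @ A
A = np.random.randn(r, d_in) * 0.01
B = np.random.randn(d_out, r) * 0.01  # during training, B starts at zero

x = np.random.randn(d_in)
y_adapter = W @ x + (B @ A) @ x     # forward pass: base output + adapter output

# After training, the adapters can be fused into the base weight
# (as in this repo's weights.npz):
W_fused = W + B @ A
y_fused = W_fused @ x
```

Because `delta_W` has rank at most `r`, only `r * (d_in + d_out)` parameters are trained instead of `d_in * d_out`, and fusing makes inference cost identical to the base model.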
|
|
|
--- |
|
|
|
## 📦 Model Files |
|
|
|
| File | Description | |
|
|-------------------|----------------------------------------------------| |
|
| `weights.npz` | Fused weights: base + LoRA adapters | |
|
| `config.json` | Model config with quantization metadata | |
|
| `tokenizer.model` | SentencePiece tokenizer for Mistral-7B | |
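The 4-bit quantization recorded in `config.json` can be illustrated with a group-wise affine scheme (a simplified sketch; the group size and rounding details are assumptions, not MLX's exact implementation):

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    # Map each group of weights onto 16 levels (4 bits) using an
    # affine (scale, minimum) pair per group.
    g = w.reshape(-1, group_size)
    w_min = g.min(axis=1, keepdims=True)
    scale = (g.max(axis=1, keepdims=True) - w_min) / 15.0
    q = np.round((g - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize_4bit(q, scale, w_min, shape):
    # Reconstruct approximate weights from codes plus per-group metadata.
    return (q * scale + w_min).reshape(shape)

w = np.random.randn(8, 64).astype(np.float32)
q, scale, w_min = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, w_min, w.shape)
```

MLX stores analogous per-group metadata alongside the packed weights, with the group size and bit width noted in the model config.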
|
|
|
--- |
|
|
|
## 💡 How to Use |
|
|
|
### Install |
|
|
|
```bash
pip install mlx-lm
```
|
|
|
### Use with mlx-examples
|
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model Hinova/mistral7b-v0.1-4bit-mlx-lora-wikisql --prompt "My name is"
```
|
|