---
base_model: Hinova/mistral-7B-v0.1-4bit-mlx
tags:
- mlx
---
# Mistral7B-v0.1-4bit-mlx-LoRA-WikiSQL

A 4-bit, LoRA-fine-tuned Mistral-7B model for WikiSQL text-to-SQL tasks, built with Apple's MLX framework and its `lora.py` example script.

This model was converted to MLX format from [`Hinova/mistral-7B-v0.1-4bit-mlx`](https://huggingface.co/Hinova/mistral-7B-v0.1-4bit-mlx).
Refer to the [original model card](https://huggingface.co/Hinova/mistral-7B-v0.1-4bit-mlx) for more details on the model.

---

## 🚀 Overview

This model was trained using the MLX Examples LoRA tutorial:

- Fine-tuning based on `lora.py` (ml-explore/mlx-examples/lora)
- Adapted Mistral-7B on the WikiSQL dataset
- Applied low-rank ΔW adapters with LoRA, freezing the base weights
- Quantized to 4-bit for Apple Silicon efficiency
- Packaged in MLX format for seamless inference via `mlx-lm`

---
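The low-rank adapter idea in the list above can be sketched numerically: the frozen base weight `W` is augmented with `ΔW = B·A`, where `B` and `A` are small trainable matrices. This NumPy sketch is illustrative only; the variable names, dimensions, and scaling factor are assumptions, not the exact internals of `lora.py`.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 8             # r << d: the adapter is low-rank
alpha = 16.0                           # illustrative LoRA scaling factor

W = rng.normal(size=(d_out, d_in))     # frozen base weight (4-bit in the real model)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init so ΔW starts at 0

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
y = lora_forward(x)

# Adapter parameters vs. full fine-tuning of this layer:
full = W.size                 # 4096
lora = A.size + B.size        # 1024
print(full, lora)
```

Because `B` starts at zero, the adapted layer initially reproduces the base model exactly; training then moves only `A` and `B`, which is why the adapter adds so few parameters relative to the frozen weight.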

## 📦 Model Files

| File              | Description                             |
|-------------------|-----------------------------------------|
| `weights.npz`     | Fused weights: base + LoRA adapters     |
| `config.json`     | Model config with quantization metadata |
| `tokenizer.model` | SentencePiece tokenizer for Mistral-7B  |

---
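Since `weights.npz` in the table above is a standard NumPy archive, you can list its tensors with `numpy.load`. The snippet below builds a stand-in archive so it runs anywhere; the tensor name is invented for illustration — with the real model you would point `np.load` at the downloaded `weights.npz` instead.

```python
import numpy as np

# Stand-in archive so the snippet is self-contained; the key is hypothetical.
np.savez("weights.npz",
         **{"layers.0.attention.wq.weight": np.zeros((8, 8), dtype=np.float16)})

with np.load("weights.npz") as weights:
    names = list(weights.files)
    for name in names:
        print(f"{name}: shape={weights[name].shape}, dtype={weights[name].dtype}")
```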

## 💡 How to Use

### Install

```bash
pip install mlx-lm
```

## Use with mlx

```bash
pip install mlx
```
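The WikiSQL adaptation described above conditions the model on a table schema plus a natural-language question. The template below is a hypothetical illustration of that idea — the function name, field labels, and layout are assumptions, not the verbatim format used by the `lora.py` data loader.

```python
# Hypothetical WikiSQL-style prompt builder; the template is illustrative,
# not the exact format used by the mlx-examples LoRA recipe.
def build_prompt(table_columns, question):
    schema = ", ".join(table_columns)
    return (f"table: {schema}\n"
            f"question: {question}\n"
            f"answer: ")

prompt = build_prompt(["Player", "Team", "Points"],
                      "Which team does the highest scorer play for?")
print(prompt)
```

The trailing `answer: ` cue leaves the model to complete the SQL query, which is the usual shape for instruction-style text-to-SQL fine-tuning.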