Update README.md
README.md CHANGED
@@ -23,9 +23,9 @@ tags:
 - mlx
 ---
 
-# 
+# mlx-community/Dolphin3.0-R1-Mistral-24B-4bit
 
-The Model [
+The Model [mlx-community/Dolphin3.0-R1-Mistral-24B-4bit](https://huggingface.co/mlx-community/Dolphin3.0-R1-Mistral-24B-4bit) was
 converted to MLX format from [cognitivecomputations/Dolphin3.0-R1-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-R1-Mistral-24B)
 using mlx-lm version **0.21.4**.
 
@@ -38,7 +38,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("
+model, tokenizer = load("mlx-community/Dolphin3.0-R1-Mistral-24B-4bit")
 
 prompt = "hello"
 
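The second hunk cuts off at README line 44, so the rest of the usage snippet is not visible in this diff. For reference, below is a minimal sketch of how such an mlx-lm model-card snippet typically continues; the chat-template handling and the `generate` call are assumptions based on the common mlx-lm README template, not lines confirmed by this commit.

```python
# Sketch of the full usage snippet (assumed standard mlx-lm template;
# only the first few lines appear in the diff above).
from mlx_lm import load, generate

# Load the 4-bit MLX weights and matching tokenizer from the Hub.
model, tokenizer = load("mlx-community/Dolphin3.0-R1-Mistral-24B-4bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt as a user message.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams tokens and prints timing stats.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```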