lokinfey committed 65e6cde (verified) · parent a049423 · init README.md
Files changed (1) hide show
  1. README.md +51 -3
README.md CHANGED
@@ -1,3 +1,51 @@
1
- ---
2
- license: mit
3
- ---
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
---
license: mit
---

# **Phi-3.5-vision-mlx-int4**

This is an INT4 quantized model converted from Phi-3.5-vision-instruct with the Apple MLX framework. You can deploy it on Apple Silicon devices.

## **Installation**

```bash
pip install -U mlx-vlm
```

## **Conversion**

```bash
python -m mlx_vlm.convert --hf-path microsoft/Phi-3.5-vision-instruct -q
```
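
For context, the `-q` flag applies MLX's group-wise affine quantization (4-bit by default): each small group of weights is mapped to low-bit integers with a per-group scale and bias. A minimal pure-Python sketch of the idea follows; the function names are illustrative and this is not MLX's actual implementation:

```python
def quantize_group(values, bits=4):
    """Affine-quantize one group of floats to `bits`-bit integers.

    Illustrative sketch of group-wise quantization; not MLX's real kernel.
    """
    levels = (1 << bits) - 1           # 15 representable steps for 4-bit
    lo, hi = min(values), max(values)
    scale = (hi - lo) / levels or 1.0  # avoid div-by-zero for constant groups
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo                # ints plus per-group scale and bias


def dequantize_group(q, scale, bias):
    """Reconstruct approximate floats from the quantized integers."""
    return [x * scale + bias for x in q]


weights = [0.12, -0.5, 0.33, 0.9, -0.1, 0.0, 0.75, -0.42]
q, scale, bias = quantize_group(weights)
recovered = dequantize_group(q, scale, bias)

# Every quantized value fits in 4 bits, and the reconstruction error
# is bounded by half a quantization step.
assert all(0 <= x <= 15 for x in q)
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, recovered))
```

Storing 4-bit integers plus one scale/bias pair per group is what shrinks the model relative to 16-bit weights, at the cost of the small reconstruction error bounded above.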

## **Samples**

```python
from mlx_vlm import load, generate

model_path = "Your Phi-3.5-vision-mlx-int4 Path"

# trust_remote_code is required for the Phi-3.5 processor
model, processor = load(model_path, processor_config={"trust_remote_code": True})

# The <|image|> token marks where the image is inserted in the prompt
messages = [
    {"role": "user", "content": "<|image|>\nSummarize the picture"},
]

prompt = processor.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

output = generate(model, processor, "Your Image Path", prompt, verbose=False, max_tokens=1024)

print(output)
```