giangndm committed · Commit cc80234 · verified · 1 Parent(s): c097d66

Update README.md

Files changed (1): README.md (+19 −11)
README.md CHANGED
@@ -18,24 +18,32 @@ This model [giangndm/qwen2.5-omni-7b-mlx](https://huggingface.co/giangndm/qwen2.
 converted to MLX format from [Qwen/Qwen2.5-Omni-7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B)
 using mlx-lm version **0.24.0**.
 
-## Use with mlx
+## Use with mlx (https://github.com/giangndm/mlx-lm-omni)
 
 ```bash
-pip install mlx-lm
+uv add mlx-lm-omni
+# or
+uv add https://github.com/giangndm/mlx-lm-omni.git
 ```
 
 ```python
-from mlx_lm import load, generate
-
-model, tokenizer = load("giangndm/qwen2.5-omni-7b-mlx")
-
-prompt = "hello"
-
-if tokenizer.chat_template is not None:
-    messages = [{"role": "user", "content": prompt}]
-    prompt = tokenizer.apply_chat_template(
-        messages, add_generation_prompt=True
-    )
-
-response = generate(model, tokenizer, prompt=prompt, verbose=True)
+from mlx_lm_omni import load, generate
+import librosa
+from io import BytesIO
+from urllib.request import urlopen
+
+model, tokenizer = load("giangndm/qwen2.5-omni-7b-mlx-4bit")
+
+audio_path = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"
+audio = librosa.load(BytesIO(urlopen(audio_path).read()), sr=16000)[0]
+
+messages = [
+    {"role": "system", "content": "You are a speech recognition model."},
+    {"role": "user", "content": "Transcribe the English audio into text without any punctuation marks.", "audio": audio},
+]
+prompt = tokenizer.apply_chat_template(
+    messages, add_generation_prompt=True
+)
+
+text = generate(model, tokenizer, prompt=prompt, verbose=True)
 ```
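The updated example attaches the waveform to a chat message via an extra `audio` key before calling `apply_chat_template`. A minimal sketch of that message structure, with a placeholder list standing in for the 16 kHz waveform so it runs without `librosa` or the model loaded (the field layout follows the diff above; nothing here invokes the real library):

```python
# Placeholder for a 16 kHz mono waveform; in the README example this is a
# numpy array returned by librosa.load(..., sr=16000)[0]. A silent 1-second
# list stands in so the sketch is self-contained.
audio = [0.0] * 16000

# Same structure as the README's usage example: a system turn, then a user
# turn carrying both the text instruction and the audio payload.
messages = [
    {"role": "system", "content": "You are a speech recognition model."},
    {
        "role": "user",
        "content": "Transcribe the English audio into text without any punctuation marks.",
        "audio": audio,
    },
]

# tokenizer.apply_chat_template(messages, add_generation_prompt=True) would
# then serialize this list into the model's prompt format.
print(messages[1]["role"], len(messages[1]["audio"]))
```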