JustinLin610 committed on
Commit c955ea6 · verified · 1 Parent(s): a437c5a

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -49,7 +49,7 @@ The following contains a code snippet illustrating how to use the model generate
 from mlx_lm import load, generate
 
 model, tokenizer = load("Qwen/Qwen3-4B-MLX-4bit")
-prompt = "hello, Introduce yourself, and what can you do?"
+prompt = "Hello, please introduce yourself and tell me what you can do."
 
 if tokenizer.chat_template is not None:
     messages = [{"role": "user", "content": prompt}]
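
For context, the hunk above only shows part of the README's example. A minimal sketch of the surrounding flow, assuming the standard mlx_lm API (the `max_tokens` and `verbose` values here are illustrative, not taken from the diff):

```python
# Sketch of the README's usage flow; values below are illustrative assumptions.
from mlx_lm import load, generate

# Load the 4-bit MLX quantization of Qwen3-4B.
model, tokenizer = load("Qwen/Qwen3-4B-MLX-4bit")
prompt = "Hello, please introduce yourself and tell me what you can do."

# Wrap the prompt in the model's chat template when the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Run generation; verbose=True streams tokens to stdout as they are produced.
response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=1024)
```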