JustinLin610 committed · Commit 635fb47 · verified · 1 Parent(s): bda9ae8

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -48,7 +48,7 @@ The following contains a code snippet illustrating how to use the model generate
 from mlx_lm import load, generate
 
 model, tokenizer = load("Qwen/Qwen3-4B-MLX-6bit")
-prompt = "hello, Introduce yourself, and what can you do?"
+prompt = "Hello, please introduce yourself and tell me what you can do."
 
 if tokenizer.chat_template is not None:
     messages = [{"role": "user", "content": prompt}]
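For readers who want to run the updated snippet end to end, here is a minimal sketch of how the README example typically continues after the lines shown in this hunk. The continuation (the `apply_chat_template` and `generate` calls) is assumed from the standard mlx_lm usage pattern, not taken from this diff.

```python
from mlx_lm import load, generate

# Load the 6-bit MLX quantization of Qwen3-4B (repo name from the diff above).
model, tokenizer = load("Qwen/Qwen3-4B-MLX-6bit")
prompt = "Hello, please introduce yourself and tell me what you can do."

# If the tokenizer ships a chat template, wrap the prompt as a user message
# and let the template append the assistant generation prompt.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Standard mlx_lm generation call; the README's actual arguments beyond this
# point are not visible in the hunk, so treat them as an assumption.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```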