Unexpected and Incoherent Responses from mlx-community/Phi-4-mini-instruct-8bit
#2 · by TeluguLLM · opened
Description:
When using the mlx-community/Phi-4-mini-instruct-8bit model, I encountered issues where it generates unexpected or incoherent responses: the output does not align with the provided prompt, and the model's behavior is inconsistent across runs. I also observed a possible degradation in generation quality, with unusually low coherence and relevance.
Steps to reproduce:
- Run the following command:
mlx_lm.generate --model mlx-community/Phi-4-mini-instruct-8bit --prompt "What is your name?"
- Observe the generated output.
mlx_lm.generate --model mlx-community/Phi-4-mini-instruct-8bit --prompt "What is your name?"
config.json: 100%|████████████████████████████████████████████████████████████████| 3.30k/3.30k [00:00<00:00, 26.2MB/s]
sample_finetune.py: 100%|█████████████████████████████████████████████████████████| 6.17k/6.17k [00:00<00:00, 17.8MB/s]
model.safetensors.index.json: 100%|███████████████████████████████████████████████| 32.6k/32.6k [00:00<00:00, 22.1MB/s]
modeling_phi3.py: 100%|███████████████████████████████████████████████████████████| 54.3k/54.3k [00:00<00:00, 3.84MB/s]
configuration_phi3.py: 100%|██████████████████████████████████████████████████████| 10.9k/10.9k [00:00<00:00, 8.84MB/s]
added_tokens.json: 100%|███████████████████████████████████████████████████████████████| 249/249 [00:00<00:00, 450kB/s]
special_tokens_map.json: 100%|████████████████████████████████████████████████████████| 587/587 [00:00<00:00, 1.52MB/s]
tokenizer_config.json: 100%|██████████████████████████████████████████████████████| 2.96k/2.96k [00:00<00:00, 6.78MB/s]
merges.txt: 100%|█████████████████████████████████████████████████████████████████| 2.42M/2.42M [00:00<00:00, 2.48MB/s]
tokenizer.json: 100%|█████████████████████████████████████████████████████████████| 15.5M/15.5M [00:01<00:00, 12.6MB/s]
vocab.json: 100%|█████████████████████████████████████████████████████████████████| 3.91M/3.91M [00:01<00:00, 2.84MB/s]
model.safetensors: 100%|██████████████████████████████████████████████████████████| 4.08G/4.08G [02:35<00:00, 26.1MB/s]
Fetching 12 files: 100%|███████████████████████████████████████████████████████████████| 12/12 [02:36<00:00, 13.05s/it]
==========
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
==========
Prompt: 8 tokens, 15.097 tokens-per-sec
Generation: 100 tokens, 11.463 tokens-per-sec
Peak memory: 4.116 GB
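For what it's worth, one common cause of output like the above (a run of repeated tokens) is the prompt reaching the model without the chat template applied. mlx_lm normally applies the repo's chat template automatically, but a quick sanity check is to format the prompt by hand and compare behavior. A minimal sketch, assuming the Phi-3/Phi-4-style role markers (`<|user|>`, `<|end|>`, `<|assistant|>`); `build_prompt` is a hypothetical helper, and the exact markers should be verified against this repo's tokenizer_config.json:

```python
# Hypothetical helper: wrap a raw question in the Phi-3/Phi-4-style chat
# format. The role markers below are an assumption based on the model
# family; check the repo's chat_template before relying on them.
def build_prompt(user_message: str) -> str:
    return f"<|user|>{user_message}<|end|><|assistant|>"

prompt = build_prompt("What is your name?")
print(prompt)  # <|user|>What is your name?<|end|><|assistant|>
```

If passing a hand-formatted prompt like this produces coherent output while the raw prompt does not, the issue is likely in template handling rather than in the 8-bit quantized weights themselves.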