Weird repetition issue
Running with vLLM:

```
--model Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int8
--trust-remote-code
--host 0.0.0.0
--port 8001
--max-model-len 10000
--tensor-parallel-size 4
--gpu-memory-utilization 0.8
```
When prompted with "3+3", it just repeats "3+3" until the max sequence length is reached:
```
Received request cmpl-86e5261a2d4247719d83a2e7fb08d88d-13: prompt: '3+3', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=3000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: [3838, 374, 220, 18, 10, 18], lora_request: None, prompt_adapter_request: None.
```
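For reference, this is roughly how the request is being sent (a sketch using the OpenAI Python client against vLLM's OpenAI-compatible API; the base URL and dummy API key are assumptions based on the launch flags above):

```python
# Minimal repro of the failing request (sketch; endpoint/key assumed).
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8001/v1", api_key="EMPTY")

resp = client.completions.create(
    model="Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int8",
    prompt="3+3",
    temperature=0.0,  # greedy decoding, as in the logged SamplingParams
    max_tokens=3000,
)
print(resp.choices[0].text)  # degenerates into "3+3" repeated until max_tokens
```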
Any ideas?
Edit: Hmm, the same happens on the unquantized version; maybe the system/chat template is missing?
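If the template is the problem, wrapping the raw prompt in the model's chat template before sending it should change the behaviour. A sketch (assumes the tokenizer config on the Hub bundles Qwen's ChatML template, which `apply_chat_template` picks up):

```python
# Sketch: format the prompt with Qwen's chat template instead of sending it raw.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int8")

messages = [{"role": "user", "content": "3+3"}]
prompt = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn header
)
print(prompt)  # the ChatML-formatted prompt the instruct model was trained on
```

Alternatively, hitting /v1/chat/completions instead of /v1/completions lets the server apply the chat template itself.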
Edit 2: temp=0.7 fixes the behaviour for the prompts I tested.
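Concretely, the same repro as above but with sampling enabled (sketch; same assumed endpoint):

```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8001/v1", api_key="EMPTY")

resp = client.completions.create(
    model="Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int8",
    prompt="3+3",
    temperature=0.7,  # non-greedy sampling; temp=0.0 loops
    max_tokens=3000,
)
print(resp.choices[0].text)
```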
I had previously tested with Fireworks AI at temp 0.0 -> they might not even apply the sampling params, who knows.
Edit 3: This only fixes it at batch size 1.
Might be related to this vLLM issue:
https://github.com/vllm-project/vllm/issues/5898