I deployed codeqwen-1_5-7b-chat-q5_0.gguf on Ollama, but the chat output is very strange. Why is this, and is there a fix?
1 · #4 opened 11 months ago by monsterbeasts
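A common cause of garbled chat with a manually imported GGUF is a missing or mismatched ChatML template and stop tokens. Below is a minimal sketch against Ollama's REST API, assuming the model was imported under the hypothetical tag `codeqwen` and the server runs on the default localhost:11434; passing the ChatML end-of-turn token as an explicit stop sequence is one workaround to try.

```python
import requests

# Minimal sketch: /api/chat applies the model's chat template server-side.
# "codeqwen" is a hypothetical tag for the imported GGUF; adjust to yours.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "codeqwen",
        "messages": [{"role": "user", "content": "Write hello world in Python"}],
        # Declare the ChatML turn markers as stop sequences so the model
        # does not ramble past the end of its turn.
        "options": {"stop": ["<|im_end|>", "<|im_start|>"]},
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```

If the output is still strange, the chat template baked into the Modelfile is worth checking against the ChatML format this model was trained on.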
Broken output when GPU is enabled
1 · #3 opened 11 months ago by imareo
Using llama.cpp server, responses always end with <|im_end|>
1 · #2 opened about 1 year ago by gilankpam
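When the llama.cpp server is given a raw ChatML prompt and no stop sequence, the literal `<|im_end|>` token leaks into the generated text. A minimal sketch of the usual workaround, assuming the server's default localhost:8080 endpoint and a hand-built ChatML prompt (both assumptions, not from the thread): declare `<|im_end|>` as a stop string so generation halts before it is emitted.

```python
import requests

# Hand-built ChatML prompt; the assistant header leaves the turn open
# for the model to complete.
prompt = (
    "<|im_start|>user\n"
    "Write hello world in Python<|im_end|>\n"
    "<|im_start|>assistant\n"
)

resp = requests.post(
    "http://localhost:8080/completion",
    json={
        "prompt": prompt,
        "n_predict": 256,        # cap on generated tokens
        "stop": ["<|im_end|>"],  # trim output before the end-of-turn token
    },
)
print(resp.json()["content"])
```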