Update - Tool Calling + Chat Template bug fixes
Just updated DeepSeek-R1-0528 GGUFs and BF16 safetensors (the big 671B model)
1. Native tool calling is now supported. Uses https://github.com/sgl-project/sglang/pull/6765 and https://github.com/vllm-project/vllm/pull/18874, which show DeepSeek-R1 getting 93.25% on the **BFCL** Berkeley Function-Calling Leaderboard (https://gorilla.cs.berkeley.edu/leaderboard.html). Use it via `--jinja` in llama.cpp; native transformers and vLLM should work as well. Had to fix multiple issues in the SGLang and vLLM PRs (dangling newlines etc.). See the request sketch after this post.
2. Chat template bug fixes - `add_generation_prompt` now works. Previously `<|Assistant|>` was auto-appended; now it's toggle-able (see the template sketch after this post). Fixes many issues, and should streamline chat sessions.
3. UTF-8 encoding of `tokenizer_config.json` is now fixed - it now works on Windows.
4. Ollama's higher memory usage is now fixed - I removed `num_ctx` and `num_predict`, so they'll now fall back to Ollama's defaults. Those settings allocated more KV cache VRAM, thus spiking VRAM usage. Please set your context length manually (see the Ollama sketch after this post).

[10th June 2025] Update - LM Studio now also works. Ollama works by using the TQ1_0 quant: `ollama run hf.co/unsloth/DeepSeek-R1-0528-GGUF:TQ1_0`
Please re-download all weights to get the latest updates!
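Here's a rough sketch of what a tool-calling request against llama-server (started with `--jinja`) can look like - assuming the OpenAI-compatible endpoint on the default port 8080 and the `openai` Python package; the `get_weather` tool is just a placeholder:

```python
# Minimal tool-calling sketch against llama-server launched with --jinja.
# Assumes: llama-server listening on the default http://localhost:8080,
# the `openai` package installed; `get_weather` is a made-up example tool.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="DeepSeek-R1-0528",  # llama-server serves whatever model it was started with
    messages=[{"role": "user", "content": "What's the weather in Tokyo right now?"}],
    tools=tools,
)

# With the fixed template the model should come back with a structured tool call
# instead of plain text.
print(resp.choices[0].message.tool_calls)
```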
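For the `add_generation_prompt` toggle, a minimal sketch with the `transformers` tokenizer (assuming the `unsloth/DeepSeek-R1-0528` safetensors repo - only the tokenizer/chat template gets downloaded):

```python
# add_generation_prompt toggle sketch. Assumes the transformers library and the
# unsloth/DeepSeek-R1-0528 repo id; only the tokenizer / chat template is fetched.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("unsloth/DeepSeek-R1-0528")
messages = [{"role": "user", "content": "Hello"}]

# True: the template opens an assistant turn for generation.
with_prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# False: nothing is auto-appended any more - useful for continuing a partial
# assistant turn or building few-shot prompts.
without_prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)

print("<|Assistant|>" in with_prompt)     # expected: True
print("<|Assistant|>" in without_prompt)  # expected: False
```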
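And for Ollama, a sketch of setting the context length yourself via the REST API (assuming a local Ollama server on the default port 11434 and the TQ1_0 quant pulled as above):

```python
# Ollama context-length sketch. Assumes Ollama is running locally on port 11434
# and the model was pulled via `ollama run hf.co/unsloth/DeepSeek-R1-0528-GGUF:TQ1_0`.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/unsloth/DeepSeek-R1-0528-GGUF:TQ1_0",
        "messages": [{"role": "user", "content": "Hello"}],
        # num_ctx / num_predict now fall back to Ollama's defaults, so size the
        # KV cache yourself if you need a longer context:
        "options": {"num_ctx": 8192, "num_predict": 2048},
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```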
What is 3. about? I think I can ignore all the other ones and not re-download.
Why was UD-Q2_XL deleted? Is UD-IQ2_M better?
> What is 3. about? I think I can ignore all the other ones and not re-download.
It's not that important
> Why was UD-Q2_XL deleted? Is UD-IQ2_M better?
Oh crap you're right, it was never supposed to be deleted lol thanks for the warning
I also noticed Q8_0 was gone!! I'll redo Q8_0 and Q2_K_XL
Thank you!
Why is DeepSeek-R1-0528-UD-IQ2_M-00001-of-00005.gguf much newer than the rest of its parts? Are all the files updated (as mentioned above), or just the first one?
Hello.
I am trying to run this on a machine with an MI300X AMD GPU (if that matters), but I get weird tool calling issues.
./build/bin/llama-server -hf unsloth/DeepSeek-R1-0528-GGUF:TQ1_0 --cache-type-k q4_0 --threads -1 --n-gpu-layers 99 --prio 3 --temp 0.6 --top_p 0.95 --min_p 0.01 --ctx-size 16384 --seed 3407 -ot ".ffn_.*_exps.=CPU" --jinja
When the LLM attempts to call a tool, it throws
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007423997107e3 in __GI___wait4 (pid=33102, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
warning: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory
#0 0x00007423997107e3 in __GI___wait4 (pid=33102, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x000074239b74b196 in ggml_print_backtrace () from /root/llama.cpp/build/bin/libggml-base.so
#2 0x000074239b75d9a6 in ggml_uncaught_exception() () from /root/llama.cpp/build/bin/libggml-base.so
#3 0x0000742399abb0da in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x0000742399aa5a55 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x0000742399abb391 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00000000003d2d3b in common_chat_msg_diff::compute_diffs(common_chat_msg const&, common_chat_msg const&) ()
#7 0x00000000003508b9 in server_slot::update_chat_msg(std::vector<common_chat_msg_diff, std::allocator<common_chat_msg_diff> >&) ()
#8 0x000000000034e26d in server_context::send_final_response(server_slot&) ()
#9 0x000000000034c97d in server_context::update_slots() ()
#10 0x00000000002cc464 in server_queue::start_loop() ()
#11 0x0000000000288a5f in main ()
[Inferior 1 (process 33044) detached]
terminate called after throwing an instance of 'std::runtime_error'
what(): Invalid diff: now finding less tool calls!
Aborted (core dumped)
I've seen this same issue on a different (NVIDIA) machine, and with V3-0528. Is there a way to get these models to perform tool calls with llama.cpp correctly?
Template supports tool calls but does not natively describe tools. The fallback behaviour used may produce bad results, inspect prompt w/ --verbose & consider overriding the template.
srv params_from_: Chat format: DeepSeek R1
....
/home/ubuntu/llama.cpp/build/bin/libggml-base.so(+0x158fb)[0x7f51338028fb]
/home/ubuntu/llama.cpp/build/bin/libggml-base.so(ggml_print_backtrace+0x21c)[0x7f5133802d5c]
/home/ubuntu/llama.cpp/build/bin/libggml-base.so(+0x24bff)[0x7f5133811bff]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xbb0da)[0x7f51334bb0da]
/lib/x86_64-linux-gnu/libstdc++.so.6(_ZSt10unexpectedv+0x0)[0x7f51334a5a55]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xbb391)[0x7f51334bb391]
./llama-server(+0x33bcc)[0x62c8fd587bcc]
./llama-server(+0xa596b)[0x62c8fd5f996b]
./llama-server(+0xa79c1)[0x62c8fd5fb9c1]
./llama-server(+0xc014f)[0x62c8fd61414f]
./llama-server(+0x858b5)[0x62c8fd5d98b5]
./llama-server(+0x4e103)[0x62c8fd5a2103]
/lib/x86_64-linux-gnu/libc.so.6(+0x2a1ca)[0x7f513302a1ca]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x8b)[0x7f513302a28b]
./llama-server(+0x500f5)[0x62c8fd5a40f5]
terminate called after throwing an instance of 'std::runtime_error'
what(): Invalid diff: now finding less tool calls!
Aborted (core dumped)