Model Performance Comparison (BFCL)

| Task name | minpeter/Llama-3.2-1B-chatml-tool-v4 | meta-llama/Llama-3.2-1B-Instruct (measured) | meta-llama/Llama-3.2-1B-Instruct (reported) |
|---|---|---|---|
| parallel_multiple | 0.000 | 0.025 | 0.15 |
| parallel | 0.000 | 0.035 | 0.36 |
| simple | 0.7725 | 0.215 | 0.2925 |
| multiple | 0.765 | 0.17 | 0.335 |

Parallel calls are not yet handled by this version, so a score of 0 is expected on the parallel and parallel_multiple categories. We plan to fix this in the next version.
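As a rough illustration of the single-call ("simple") setting measured above, the sketch below prompts the model with one tool definition through the `transformers` chat-template API. The `get_weather` tool, its schema, and the generation settings are illustrative assumptions, as is the chat template accepting a `tools` argument.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minpeter/Llama-3.2-1B-chatml-tool-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Hypothetical tool definition in the JSON-schema style used by BFCL-like evaluations.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Seoul?"}]

# Render the ChatML-style prompt with the tool schema, then generate the tool call.
input_ids = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In the parallel categories, the user turn would require several calls in a single response, which this version does not yet handle.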
