MCP Tool Use Quality Ranger 0.6B (GGUF)
The mcp-tool-use-quality-ranger-0.6b is a fine-tuned sequence classification model created to evaluate the quality of function calls in conversational AI systems. It is designed for evaluating function calls in the context of Model Context Protocol (MCP) tools: it can assess whether a call is correct, uses the wrong tool, has incorrect parameter names, or has incorrect parameter values.
This repository provides a GGUF-converted version of the mcp-tool-use-quality-ranger-0.6b model.

Installation
macOS
Follow the official llama-cpp-python macOS installation guide.
General Installation
pip install llama-cpp-python
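The usage example below also imports torch, numpy, and huggingface_hub; if they are not already in your environment, install them with pip as well:
pip install torch numpy huggingface_hub
Note that n_gpu_layers=-1 (used below) only offloads layers to the GPU when llama-cpp-python was built with a GPU backend (e.g. Metal on macOS or CUDA); otherwise the model runs on CPU.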
Usage
- Load the GGUF Model, Classification Head, and Prompt Template
from llama_cpp import Llama
import numpy as np
import torch
import torch.nn.functional as F
from huggingface_hub import hf_hub_download

# Load the GGUF model locally
llm = Llama.from_pretrained(
    repo_id="qualifire/mcp-tool-use-quality-ranger-0.6b-GGUF",
    filename="mcp-tool-use-quality-ranger-0.6b.q8_0.gguf",
    embedding=True,
    n_ctx=12000,
    n_batch=32048,
    n_gpu_layers=-1
)

# Download and load the prompt template
file_path = hf_hub_download(
    repo_id="qualifire/mcp-tool-use-quality-ranger-0.6b-GGUF",
    filename="prompt_template.txt"
)
with open(file_path, encoding="utf-8") as f:
    PROMPT_TEMPLATE = f.read()

# Download the classification head
cls_head_path = hf_hub_download(
    repo_id="qualifire/mcp-tool-use-quality-ranger-0.6b-GGUF",
    filename="cls_head.pt"
)
print(f"Downloaded classification head to: {cls_head_path}")

# Load the classification head weights
cls_head_weights = torch.load(cls_head_path)
print(f"Loaded classification head weights: {cls_head_weights.shape}")
- Run Inference with an Example
# Example tools list
example_tools_list = '''[
{
"name": "order_food",
"description": "Order food from a restaurant.\nArgs:\nrestaurant_url: URL of the restaurant\nitem_name: Name of the item to order",
"inputSchema": {
"type": "object",
"title": "order_foodArguments",
"required": ["item_url", "item_name"],
"properties": {
"item_url": {
"type": "string",
"title": "Item Url"
},
"item_name": {
"type": "string",
"title": "Item Name"
}
}
}
}
]'''
# Example conversation history
example_message_history = '''[
{
"role": "user",
"content": "Could you please order 2 Margherita pizzas for delivery to 123 Main Street, Anytown?"
},
{
"completion_message": {
"content": {
"type": "text",
"text": ""
},
"role": "assistant",
"stop_reason": "tool_calls",
"tool_calls": [
{
"id": "call_p8yj1p",
"function": {
"name": "order_food",
"arguments": {
"item": "Margherita Pizza",
"quantity": 3,
"delivery_address": "123 Main Street, Anytown"
}
}
}
]
}
}
]'''
# Format the input prompt
example_input = PROMPT_TEMPLATE.format(
    message_history=example_message_history,
    available_tools=example_tools_list
)

# Generate per-token embeddings and use the last token's vector for classification
output = llm.embed(example_input)

# Classification: project the embedding through the classification head
device = cls_head_weights.device
cls_vector = torch.tensor(output[-1]).to(device)
logits_manual = cls_vector @ cls_head_weights.T

# Softmax probabilities
probs = F.softmax(logits_manual, dim=-1).flatten()

id2label = {
    0: "VALID_CALL",
    1: "TOOL_ERROR",
    2: "PARAM_NAME_ERROR",
    3: "PARAM_VALUE_ERROR"
}

# Map probabilities to labels
label_probs = {id2label[i]: float(probs[i]) for i in range(len(probs))}

# Print results
for label, prob in label_probs.items():
    print(f"{label}: {prob:.4f}")

# Predicted class
pred_idx = torch.argmax(probs).item()
pred_label = id2label[pred_idx]
print(f"\nPredicted class: {pred_label} with probability {probs[pred_idx]:.4f}")
Example output:
VALID_CALL: 0.0862
TOOL_ERROR: 0.0196
PARAM_NAME_ERROR: 0.0107
PARAM_VALUE_ERROR: 0.8835
Predicted class: PARAM_VALUE_ERROR with probability 0.8835
Here, the user asked for 2 Margherita pizzas but the call sets 'quantity' to 3, so the value for 'quantity' is wrong and the correct label is PARAM_VALUE_ERROR.
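For repeated evaluations, the steps above can be wrapped in a small helper. This is a minimal sketch (the classify_tool_call name is illustrative, not part of the repository) that reuses the llm, PROMPT_TEMPLATE, cls_head_weights, and id2label objects defined earlier:

def classify_tool_call(message_history: str, available_tools: str) -> dict:
    """Return a label -> probability mapping for one conversation/tools pair."""
    prompt = PROMPT_TEMPLATE.format(
        message_history=message_history,
        available_tools=available_tools
    )
    embeddings = llm.embed(prompt)
    cls_vector = torch.tensor(embeddings[-1]).to(cls_head_weights.device)
    logits = cls_vector @ cls_head_weights.T
    probs = F.softmax(logits, dim=-1).flatten()
    return {id2label[i]: float(probs[i]) for i in range(len(probs))}

scores = classify_tool_call(example_message_history, example_tools_list)
print(max(scores, key=scores.get))  # expected: PARAM_VALUE_ERROR for the example above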
Base model
Qwen/Qwen3-0.6B-Base