This is a fine-tuned version of the Qwen2.5-14B-Instruct model, trained with Hugging Face's TRL library on the xLAM dataset to improve its function-calling capabilities.
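The exact training configuration is not published here; as a point of reference, a minimal TRL SFT setup for this kind of run might look like the sketch below. The dataset ID, prompt template, and all hyperparameters are illustrative assumptions, not the values used for this model.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: the public xLAM function-calling release on the Hub.
dataset = load_dataset("Salesforce/xlam-function-calling-60k", split="train")

# Assumed prompt template, mirroring the inference format shown below.
def to_text(example):
    return {
        "text": f"<user>{example['query']}</user>\n\n"
                f"<tools>{example['tools']}</tools>\n\n{example['answers']}"
    }

dataset = dataset.map(to_text)

training_args = SFTConfig(
    output_dir="Qwen2.5-14B-Instruct_Function_Calling_xLAM",
    per_device_train_batch_size=1,   # illustrative values only
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-14B-Instruct",  # base model
    train_dataset=dataset,
    args=training_args,
)
trainer.train()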
The model uses <|im_end|> (token ID: 151645) as its end-of-sequence token. The following example shows how to load the model and run inference:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM",
    trust_remote_code=True
)

# Prompt in the format used during fine-tuning
text = "<user>Check if the numbers 8 and 1233 are powers of two.</user>\n\n<tools>"

# Tokenize and generate
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id
)

# Decode only the newly generated tokens (more robust than slicing
# the decoded string by the prompt's character length)
generated_text = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True
).strip()
print(generated_text)
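In the xLAM answer format, the completion is a JSON array of tool calls, so the output can usually be parsed directly. A minimal sketch, assuming the model emits objects with name and arguments keys (outputs may occasionally deviate from valid JSON):

import json

# Assumes output such as: [{"name": "is_power_of_two", "arguments": {"n": 8}}, ...]
try:
    tool_calls = json.loads(generated_text)
    for call in tool_calls:
        print(f"Function: {call['name']}, arguments: {call['arguments']}")
except json.JSONDecodeError:
    # Fall back to the raw text if the output is not valid JSON.
    print(generated_text)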
The model was trained on the xLAM dataset.
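To inspect the data yourself, the xLAM function-calling release can be loaded from the Hub. A sketch assuming the Salesforce/xlam-function-calling-60k dataset (which may require accepting its terms on the Hub); field names follow that dataset's schema:

from datasets import load_dataset

dataset = load_dataset("Salesforce/xlam-function-calling-60k", split="train")

example = dataset[0]
print(example["query"])    # natural-language user request
print(example["tools"])    # JSON string describing the available functions
print(example["answers"])  # JSON string with the expected tool calls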
This fine-tuned model demonstrates improved function-calling capabilities: interpreting tool schemas, selecting the appropriate function for a request, and emitting structured call arguments.
This model was developed by ermiaazarkhalili and builds on Qwen2.5-14B-Instruct as the base model, Hugging Face's TRL library for training, and the xLAM dataset for function-calling supervision.
For inquiries or support, please reach out to ermiaazarkhalili on Hugging Face.
We would like to thank the Qwen team, the Hugging Face TRL maintainers, and the creators of the xLAM dataset.
If you use this model, please cite:
@misc{ermiaazarkhalili_Qwen2.5-14B-Instruct_Function_Calling_xLAM,
author = {ermiaazarkhalili},
title = {Fine-tuning Qwen2.5-14B-Instruct on xLAM for Function Calling},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/ermiaazarkhalili/Qwen2.5-14B-Instruct_Function_Calling_xLAM}}
}