retrain-pipelines Function Caller
version 0.48 - 2025-03-23 10:45:39 UTC (retraining source-code | pipeline-card)

Training dataset:

  • retrain-pipelines/func_calls v0.5 (1dea612 - 2025-03-23 10:30:06 UTC), loadable as sketched below
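
The pinned dataset revision can be pulled with the datasets library. A minimal sketch, assuming the default config and a "train" split (the actual split names may differ):

from datasets import load_dataset

# pin the exact dataset commit listed above
func_calls_ds = load_dataset(
    "retrain-pipelines/func_calls", revision="1dea612", split="train")
print(func_calls_ds[0])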

Base model:

  • Qwen/Qwen2.5-1.5B

This LoRA adapter can, for instance, be used as follows:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "retrain-pipelines/function_caller"
revision = "<model_revision_commit_hash>"

# loads the Qwen2.5-1.5B base model with the LoRA adapter applied
# (loading an adapter repo this way requires the peft package)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, revision=revision, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def generate_tool_calls_list(query, max_new_tokens=400) -> str:
    # the chat template is used here as a plain Python format string,
    # with positional slots for the user query and the (empty) assistant answer
    formatted_query = tokenizer.chat_template.format(query, "")
    inputs = tokenizer(formatted_query, return_tensors="pt").input_ids.to(device)
    # greedy decoding, for deterministic tool-call generation
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    # strip the echoed prompt, keep only the newly generated tool-calls list
    return generated_text[len(formatted_query):].strip()

generate_tool_calls_list("Is 49 a perfect square?")
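
The returned string holds the tool call(s) the model proposes for the query. Assuming the adapter emits them as a JSON array of {"name": ..., "arguments": {...}} objects (an assumption based on the function-calling use case, not something this card specifies), the output can be parsed like so:

import json

raw_tool_calls = generate_tool_calls_list("Is 49 a perfect square?")
try:
    tool_calls = json.loads(raw_tool_calls)
except json.JSONDecodeError:
    tool_calls = []  # the model returned something that is not valid JSON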


Powered by retrain-pipelines 0.1.1 - Run by Aurelien-Morgan-Bot - UnslothFuncCallFlow - mf_run_id: 1806

Model tree for retrain-pipelines/function_caller:

  • Base model: Qwen/Qwen2.5-1.5B
  • Adapter: retrain-pipelines/function_caller (this model)
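
As an alternative to loading through the adapter repo directly (as in the snippet above), the base model can be loaded explicitly and the adapter attached with the peft library. A minimal sketch, with the same placeholder commit hash as earlier:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B", torch_dtype="auto", device_map="auto")

# attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(
    base_model, "retrain-pipelines/function_caller",
    revision="<model_revision_commit_hash>")

# keep the adapter repo's tokenizer, so its chat template matches the snippet above
tokenizer = AutoTokenizer.from_pretrained(
    "retrain-pipelines/function_caller", revision="<model_revision_commit_hash>")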
