
Model Card for Blake-XTM Arc

Blake-XTM Arc is a 7B large language model for text generation. It was trained to reason and, optionally, to call provided tools.

Model Details

Model Description

Blake-XTM Arc is a 7B-parameter instruct LLM trained to think and optionally call a tool. It supports only one tool call per assistant message (no parallel tool calling). The model was LoRA fine-tuned from CatNyanster-7B, which is itself a fine-tune of Mistral-7B.

Chat Format

Blake-XTM Arc uses the ChatML format, e.g.:

<|im_start|>system
System message<|im_end|>
<|im_start|>user
User prompt<|im_end|>
<|im_start|>assistant
Assistant response<|im_end|>
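
As a rough illustration, the prompt string can be assembled by hand. The sketch below is a minimal example, not the model's official usage code: build_chatml_prompt is a hypothetical helper, and the special tokens follow the examples in this card.

# Minimal sketch of ChatML prompt assembly (build_chatml_prompt is a
# hypothetical helper; the special tokens follow the examples in this card).
def build_chatml_prompt(messages):
    """messages: list of {'role': ..., 'content': ...} dicts."""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model completes it.
    prompt += "<|im_start|>assistant\n"
    return prompt

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are an advanced reasoning model."},
    {"role": "user", "content": "Hello!"},
])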

Model Usage

The assistant response can take one of the following three formats (the contents below are illustrative examples, not actual model outputs); a parsing sketch follows the list:

  1. Only response:
    <|im_start|>assistant
    Hello! How may I assist you today?<|im_end|>
    
  2. Thought process and response:
    <|im_start|>assistant
    <|think_start|>The user has greeted me with a simple message. I should think about how to respond to them.
    
    Since the user sent a simple greeting, I should reply with a greeting that matches their energy.
    
    Alright, I can reply with a message like 'Hello! How can I help you?'<|think_end|>
    
    Hello! How may I assist you today?<|im_end|>
    
  3. Thought process and tool call:
    <|im_start|>assistant
    <|think_start|>The user has asked me to find all restaurants near Paris. Hmm... let me think this through thoroughly.
    
    I can see that I have a tool available called 'find_restaurants', which I might be able to use for this purpose.
    
    Alright, I think I should use the `find_restaurants` tool to find the restaurants near Paris. For the `city` parameter, I'll use 'Paris', and for the `country` parameter, I'll fill in `France`.
    
    Okay, I can go ahead and make the tool call now.<|think_end|>
    
    <|tool_start|>{'name': 'find_restaurants', 'arguments': {'city': 'Paris', 'country': 'France'}}<|tool_end|><|im_end|>
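
To handle these formats programmatically, the completion can be split on the special tags. The sketch below is a rough example (parse_assistant_output is a hypothetical helper, assuming the output matches the formats above); note that the payloads in this card's examples use Python-style single quotes, so ast.literal_eval is used rather than json.loads.

import ast
import re

# Minimal sketch: split a raw assistant completion into thought,
# tool call, and response, assuming it matches the formats shown above.
def parse_assistant_output(text):
    text = text.removesuffix("<|im_end|>").strip()

    thought = None
    match = re.search(r"<\|think_start\|>(.*?)<\|think_end\|>", text, re.DOTALL)
    if match:
        thought = match.group(1).strip()
        text = text.replace(match.group(0), "").strip()

    tool_call = None
    match = re.search(r"<\|tool_start\|>(.*?)<\|tool_end\|>", text, re.DOTALL)
    if match:
        # The examples use single-quoted, Python-dict-style payloads,
        # so literal_eval is more forgiving here than json.loads.
        tool_call = ast.literal_eval(match.group(1))
        text = text.replace(match.group(0), "").strip()

    return {"thought": thought, "tool_call": tool_call, "response": text or None}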
    

Warning: The model tends to default to the thought process + response format, even for short prompts like "Hello", which may cause it to overthink.

We recommend choosing one of the following system prompts, depending on your use case (a sketch for assembling the tool-calling prompt follows the list):

  • Only thought process:
    You are an advanced reasoning model.
    
    You think between <|think_start|>...<|think_end|> tags. You must think if the user's request involves math or logical thinking/reasoning.
    
  • Thought process and tool calling:
    You are an advanced reasoning model with tool-calling capabilities.
    
    You think between <|think_start|>...<|think_end|> tags. You must think if the user's request involves math, logical thinking/reasoning, or when you want to consider using a tool.
    
    # Tools
    You have access to the following tools:
    [{'type': 'function', 'function': {'name': 'convert_currency', 'description': 'Convert currency from one type to another', 'parameters': {'type': 'object', 'properties': {'amount': {'type': 'number', 'description': 'The amount to be converted'}, 'from_currency': {'type': 'string', 'description': 'The currency to convert from'}, 'to_currency': {'type': 'string', 'description': 'The currency to convert to'}}, 'required': ['amount', 'from_currency', 'to_currency']}}}, {'type': 'function', 'function': {'name': 'get_random_joke', 'description': 'Get a random joke', 'parameters': {'type': 'object', 'properties': {}, 'required': []}}}] </tools> Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}
    
    To call a tool, write a JSON object with the name and arguments inside <|tool_start|>...<|tool_end|>.
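
As a rough sketch, the tool-calling system prompt above can be assembled from Python tool definitions. build_tool_system_prompt is a hypothetical helper that only approximates the recommended prompt; compare against the exact text above (e.g. the </tools> marker and whitespace) before relying on it. str() is used to reproduce the single-quoted style shown in this card.

def build_tool_system_prompt(tools):
    # Approximates the recommended tool-calling system prompt above.
    # NOTE: compare against that prompt for exact formatting details
    # (e.g. the </tools> marker); str() reproduces the single-quoted
    # style used in this card's examples.
    function_call_schema = {
        "title": "FunctionCall",
        "type": "object",
        "properties": {
            "arguments": {"title": "Arguments", "type": "object"},
            "name": {"title": "Name", "type": "string"},
        },
        "required": ["arguments", "name"],
    }
    return (
        "You are an advanced reasoning model with tool-calling capabilities.\n\n"
        "You think between <|think_start|>...<|think_end|> tags. You must think if the "
        "user's request involves math, logical thinking/reasoning, or when you want to "
        "consider using a tool.\n\n"
        "# Tools\n"
        "You have access to the following tools:\n"
        + str(tools)
        + "\n\nUse the following pydantic model json schema for each tool call you will make: "
        + str(function_call_schema)
        + "\n\nTo call a tool, write a JSON object with the name and arguments "
        "inside <|tool_start|>...<|tool_end|>."
    )

tools = [{
    "type": "function",
    "function": {
        "name": "get_random_joke",
        "description": "Get a random joke",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]
system_prompt = build_tool_system_prompt(tools)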
    

To return a tool's result to the model, send a message with the tool role:

<|im_start|>assistant
<|think_start|>The user has asked me to find all restaurants near Paris. Hmm... let me think this through thoroughly.

I can see that I have a tool available called 'find_restaurants', which I might be able to use for this purpose.

Alright, I think I should use the `find_restaurants` tool to find the restaurants near Paris. For the `city` parameter, I'll use 'Paris', and for the `country` parameter, I'll fill in `France`.

Okay, I can go ahead and make the tool call now.<|think_end|>

<|tool_start|>{'name': 'find_restaurants', 'arguments': {'city': 'Paris', 'country': 'France'}}<|tool_end|><|im_end|>
<|im_start|>tool
{'restaurants': [{'name': 'A Restaurant Name', 'rating': 4.5}]}<|im_end|>
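
Putting these pieces together, a full tool-calling round trip might look like the sketch below. It builds on the hypothetical helpers from the earlier sketches; generate() stands in for whatever inference backend you use (it should stop on <|im_end|>), and run_tool() stands in for your own tool dispatch logic.

messages = [
    {"role": "system", "content": build_tool_system_prompt(tools)},
    {"role": "user", "content": "Tell me a joke."},
]

# generate() is a stand-in for your inference backend (stop on <|im_end|>);
# run_tool() is a stand-in for your own tool dispatch.
completion = generate(build_chatml_prompt(messages))
parsed = parse_assistant_output(completion)

if parsed["tool_call"] is not None:
    result = run_tool(parsed["tool_call"]["name"], parsed["tool_call"]["arguments"])

    # Append the assistant turn (with its tool call) and the tool result,
    # then let the model produce the final answer.
    messages.append({"role": "assistant", "content": completion.removesuffix("<|im_end|>").strip()})
    messages.append({"role": "tool", "content": str(result)})
    final = parse_assistant_output(generate(build_chatml_prompt(messages)))["response"]
else:
    final = parsed["response"]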