
Click here to support our open-source dataset and model releases!

Fireplace is a function-calling model for Llama 3 70b Instruct.

  • combines function-calling abilities with a high-performance, versatile chat model
  • system-message function calling using the Llama 3 Instruct format

This version of Fireplace focuses solely on combining chat-instruct capabilities with system-message function calling.

We've just released Fireplace 2 for Llama 3.1 8b, which offers inline function calls as one of several technical skills (JSON, SQL, and more). Try it today!

Version

This is the 2024-05-09 release of Fireplace for Llama 3 70b.

We've also released Fireplace 2 for Llama 3.1 8b and we're working on more Fireplace releases to come :)

Prompting Guide

Fireplace uses the Llama 3 Instruct prompt format:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>{{ user_msg_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{{ model_answer_1 }}<|eot_id|>
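If you load the model with transformers, the bundled tokenizer's chat template should produce this format for you. A minimal sketch, assuming the repo ships the standard Llama 3 Instruct template:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Llama3-70B-Fireplace")

messages = [
    {"role": "system", "content": "You are Fireplace, a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# add_generation_prompt=True appends the assistant header so the model replies next.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # <|begin_of_text|><|start_header_id|>system<|end_header_id|>...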

Example input for function calling:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n You are Fireplace, an expert code assistant with access to the following functions. Use them if required - { "name": "calculate_tip", "description": "Calculate the tip amount for a bill", "parameters": { "type": "object", "properties": { "bill_amount": { "type": "number", "description": "The total amount of the bill" }, "tip_percentage": { "type": "number", "description": "The percentage of tip to be given" } }, "required": [ "bill_amount", "tip_percentage" ] } } { "name": "check_website_availability", "description": "Check the availability of a website", "parameters": { "type": "object", "properties": { "url": { "type": "string", "description": "The URL of the website" } }, "required": [ "url" ] } } <|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHi, I need help with calculating a tip. My bill is $100 and I want to leave a 30% tip. <|eot_id|><|start_header_id|>assistant<|end_header_id|>
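The system message above can also be assembled programmatically. A minimal generation sketch, assuming you keep function schemas as Python dicts and run the model through transformers (generation settings are illustrative, and the 70b weights need multi-GPU or quantized loading in practice):

import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3-70B-Fireplace"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Function schema kept as a plain dict, serialized into the system message.
calculate_tip = {
    "name": "calculate_tip",
    "description": "Calculate the tip amount for a bill",
    "parameters": {
        "type": "object",
        "properties": {
            "bill_amount": {"type": "number", "description": "The total amount of the bill"},
            "tip_percentage": {"type": "number", "description": "The percentage of tip to be given"},
        },
        "required": ["bill_amount", "tip_percentage"],
    },
}

system_msg = (
    "You are Fireplace, an expert code assistant with access to the following functions. "
    "Use them if required - " + json.dumps(calculate_tip)
)

messages = [
    {"role": "system", "content": system_msg},
    {"role": "user", "content": "Hi, I need help with calculating a tip. "
                                "My bill is $100 and I want to leave a 30% tip."},
]

# apply_chat_template reproduces the Llama 3 Instruct format shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
assistant_reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(assistant_reply)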

To have the assistant handle a function's response, deliver it in a new user message:

<|start_header_id|>user<|end_header_id|>\n\n FUNCTION RESPONSE: {"status": "success", "message": "Email has been sent successfully"} <|eot_id|>
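Continuing the generation sketch above: your application executes the function itself, then passes the result back as a user turn with the FUNCTION RESPONSE prefix (the result payload here is hypothetical):

# Append the model's function-call turn, then the executed result as a user message.
messages.append({"role": "assistant", "content": assistant_reply})
messages.append({
    "role": "user",
    # Hypothetical result of running calculate_tip(bill_amount=100, tip_percentage=30).
    "content": 'FUNCTION RESPONSE: {"tip_amount": 30}',
})

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))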

WARNING: text-generation-webui

When using Llama 3 Instruct models (including Fireplace) with text-generation-webui, note that a current bug in webui can cause the model's ending tokens to be read incorrectly, resulting in unfinished outputs and broken structure.

As a temporary workaround if you encounter this issue, edit Fireplace's tokenizer_config.json as shown:

from "eos_token": "<|end_of_text|>",

to "eos_token": "<|eot_id|>",

The Model

Fireplace is built on top of Llama 3 70b Instruct, the highest-performance open-source model available at the time of its release.

This version of Fireplace uses the glaiveai/glaive-function-calling-v2 dataset converted to Llama 3 Instruct format.
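For reference, a rough conversion sketch from the glaive format to the Llama 3 Instruct format; it assumes each dataset row has system and chat fields using SYSTEM:/USER:/ASSISTANT:/FUNCTION RESPONSE: prefixes with <|endoftext|> separators (check the dataset card before relying on this):

import re

ROLE_MAP = {"USER:": "user", "ASSISTANT:": "assistant", "FUNCTION RESPONSE:": "user"}

def glaive_to_llama3(system: str, chat: str) -> str:
    def turn(role: str, text: str) -> str:
        return f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>"

    prompt = "<|begin_of_text|>" + turn("system", system.removeprefix("SYSTEM:").strip())
    # Split the flat transcript into (speaker marker, text) pairs.
    parts = re.split(r"(USER:|ASSISTANT:|FUNCTION RESPONSE:)", chat)
    for marker, text in zip(parts[1::2], parts[2::2]):
        text = text.replace("<|endoftext|>", "").strip()
        if marker == "FUNCTION RESPONSE:":
            text = "FUNCTION RESPONSE: " + text  # keep the prefix, matching the format above
        prompt += turn(ROLE_MAP[marker], text)
    return prompt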

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                 Value
Avg.                   36.82
IFEval (0-shot)        77.74
BBH (3-shot)           49.56
MATH Lvl 5 (4-shot)    19.64
GPQA (0-shot)          13.98
MuSR (0-shot)          16.77
MMLU-PRO (5-shot)      43.25


Fireplace is created by Valiant Labs.

Check out our HuggingFace page for Shining Valiant 2 and our other models!

We care about open source. For everyone to use.

We encourage others to finetune further from our models.
