---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
configs:
  - config_name: simple
    data_files: simple.parquet
    default: true
  - config_name: parallel
    data_files: parallel.parquet
  - config_name: multiple
    data_files: multiple.parquet
  - config_name: parallel_multiple
    data_files: parallel_multiple.parquet
---

# [PARSED] BFCL V1 AST (non-live python)

The data in this dataset is a subset of the original [gorilla-llm/Berkeley-Function-Calling-Leaderboard](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard) dataset.

| Subset name       | Multi-turn | Parallel | Multiple definitions | Last turn type | Rows |
|-------------------|------------|----------|----------------------|----------------|------|
| simple            | no         | no       | no                   | tool_calls     | 400  |
| multiple          | no         | no       | yes                  | tool_calls     | 200  |
| parallel          | no         | yes      | no                   | tool_calls     | 200  |
| parallel_multiple | no         | yes      | yes                  | tool_calls     | 200  |
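Each subset is exposed as its own config (`simple` is the default), so you can load a single subset directly instead of all parquet files at once. A minimal sketch using the standard `datasets` API:

```python
from datasets import load_dataset

# Load one subset by config name; "simple" is the default config.
ds = load_dataset("minpeter/bfcl-v1-non-live-ast-parsed", "parallel")
print(ds["train"].num_rows)  # 200, per the table above
```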

This dataset is a re-parsed, reformatted version of the Python AST portion of the official BFCL V1 dataset.


- **Simple (400 AST):** Single-function evaluation in the simplest and most common format: the user supplies a single JSON function document, and one and only one function call is invoked.
- **Multiple Function (200 AST):** The user question invokes exactly one function call out of 2 to 4 provided JSON function documents. The model must select the best function to invoke given the user-provided context.
- **Parallel Function (200 AST):** A single user query invokes multiple function calls in parallel. The model must work out how many function calls are needed; the query may be a single sentence or multiple sentences (see the inspection sketch after this list).
- **Parallel Multiple Function (200 AST):** The combination of parallel and multiple: the model is provided with multiple function documents, and each corresponding function call may be invoked zero or more times.
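To see the parallel behavior concretely, you can peek at the final turn of a `parallel` row. This is only a sketch: it assumes the last entry of `messages` carries the tool calls, as the "Last turn type" column above suggests, and the exact inner schema may differ.

```python
from datasets import load_dataset

# Sketch: inspect the last turn of one "parallel" example. Assumes the
# final message holds the tool calls (see "Last turn type" above);
# adjust the field access if the actual schema differs.
row = load_dataset("minpeter/bfcl-v1-non-live-ast-parsed", "parallel")["train"][0]
print(row["tools"])         # JSON function document(s) offered to the model
print(row["messages"][-1])  # expected to contain multiple parallel tool calls
```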

## Load the dataset

```python
from datasets import load_dataset

ds = load_dataset("minpeter/bfcl-v1-non-live-ast-parsed", data_files="*.parquet")
print(ds)

# DatasetDict({
#     train: Dataset({
#         features: ['messages', 'tools', 'extra'],
#         num_rows: 1000
#     })
# })
```
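As a quick sanity check, you can also load each config separately and confirm that the per-subset row counts match the table above. A small sketch:

```python
from datasets import load_dataset

# Verify per-config row counts against the subset table
# (expected: 400 / 200 / 200 / 200).
for name in ["simple", "multiple", "parallel", "parallel_multiple"]:
    ds = load_dataset("minpeter/bfcl-v1-non-live-ast-parsed", name)
    print(name, ds["train"].num_rows)
```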