ToolMisuseBench
ToolMisuseBench is a deterministic, offline benchmark dataset for evaluating how tool-using agents handle realistic failure conditions (schema misuse, execution failures, interface drift) and how well they recover under budget constraints.
This dataset is intended for reproducible evaluation of agent tool-use behavior, not for training a general-purpose language model.
Dataset Summary
ToolMisuseBench evaluates whether an agent can:
- make valid tool calls under schema constraints
- recover after failures (timeouts, rate limits, authz, drift, adversarial errors)
- satisfy task goals under bounded tool-call/step/retry budgets
- minimize policy violations and invalid tool invocations
All tasks are synthetic and generated with deterministic seeds to ensure reproducibility.
Repository and Evaluator
- Project repository (code + evaluator + baselines): https://github.com/akgitrepos/toolmisusebench
- Recommended evaluation flow uses the project CLI and harness.
Supported Evaluation Use Cases
- baseline benchmarking for tool-using agents
- robustness testing under controlled tool failures
- recovery-quality analysis after failure injection
- budgeted success tradeoff analysis (success vs tool-call cap)
Data Structure
Dataset layout:
- train/tasks.jsonl
- dev/tasks.jsonl
- test_public/tasks.jsonl
- manifest.json
- v0_1_freeze.json
Each row in tasks.jsonl is a single benchmark task containing:
- task_id
- split (train | dev | test_public)
- difficulty (easy | medium | hard)
- domain (crud | retrieval | files | scheduling | mixed)
- instruction
- toolset_id
- tool_schemas
- initial_state
- success_criteria
- budget (max_steps, max_tool_calls, max_retries, timeout_ms)
- fault_plan
- gold_summary (optional)
- seed
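As a rough sketch (assuming standard JSON Lines encoding, one task object per line), a split can be loaded and sanity-checked with a few lines of Python; the field names below come from the schema above, while `load_tasks` itself is just an illustrative helper, not part of the benchmark:

```python
import json

# Field names taken from the task schema described above.
REQUIRED_FIELDS = {
    "task_id", "split", "difficulty", "domain", "instruction",
    "toolset_id", "tool_schemas", "initial_state", "success_criteria",
    "budget", "fault_plan", "seed",
}

def load_tasks(path):
    """Yield one task dict per non-empty line of a tasks.jsonl file,
    raising if a task is missing any of the documented fields."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            task = json.loads(line)
            missing = REQUIRED_FIELDS - task.keys()
            if missing:
                raise ValueError(f"{task.get('task_id')}: missing {sorted(missing)}")
            yield task
```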
Dataset Size (v0.1 Release)
- Train: 5000
- Dev: 800
- Test Public: 1000
- Total: 6800
Domains
- CRUD
- Retrieval
- Files
- Scheduling
Fault Model
Supported fault types:
- schema_drift
- rate_limit
- timeout
- authz
- adversarial_error
Faults are declaratively specified per task and replayed deterministically.
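The exact replay semantics live in the evaluator, but the idea can be sketched as follows. `FaultInjector`, the trigger key `on_nth_call`, and the payload fields are illustrative names modeled on the per-task `fault_plan` entries; they are not the benchmark's actual implementation:

```python
class FaultInjector:
    """Replays a declarative fault plan: the same plan and the same
    tool-call sequence always yield the same injected faults."""

    def __init__(self, fault_plan):
        self.fault_plan = fault_plan
        self.calls = 0  # global tool-call counter

    def check(self, tool_name):
        """Return the fault to inject for this call, or None."""
        self.calls += 1
        for fault in self.fault_plan:
            trigger = fault.get("trigger", {})
            nth = trigger.get("on_nth_call")  # hypothetical trigger key
            if nth is not None and self.calls == nth:
                return fault
        return None

# Example: a soft rate-limit fault fired on the first tool call.
plan = [{
    "fault_type": "rate_limit",
    "severity": "soft",
    "payload": {"retry_after_ms": 100},
    "trigger": {"on_nth_call": 1},
}]
injector = FaultInjector(plan)
```

Because the trigger is purely a function of the call count, replaying the same agent trace against the same plan reproduces the same failures.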
Viewer Note on Null Values
In the Hugging Face table viewer, nested fields inside fault_plan.trigger and fault_plan.payload
may appear as null for some rows.
This is expected: different fault types use different subsets of fields, and the viewer displays a
unified schema across all rows. A null value in this context typically means "not applicable for
this fault type," not missing or corrupted data.
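When consuming rows programmatically you may want to drop those placeholder nulls; a small recursive helper (not part of the benchmark itself) is enough:

```python
def prune_nulls(value):
    """Recursively drop None entries so only the fields a given
    fault type actually uses remain."""
    if isinstance(value, dict):
        return {k: prune_nulls(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [prune_nulls(v) for v in value if v is not None]
    return value

# A timeout fault's payload as the viewer renders it:
payload = {"timeout_ms": 50, "after_n_calls": None, "rename_args": None}
```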
Data Generation
Generated synthetically using deterministic templates, seeded randomization, and task-level coherence checks.
Generation reference command:
```
toolmisusebench generate \
  --version v0.1 \
  --out data/toolmisusebench_v0_1 \
  --seed 42 \
  --size-profile large
```
Coherence and quality audit reference command:
```
python -m generator.quality_report \
  --dataset data/toolmisusebench_v0_1 \
  --splits train,dev,test_public
```
Scoring and Evaluation
Use the official evaluator in the project repo.
Example:
```
toolmisusebench eval \
  --dataset data/toolmisusebench_v0_1 \
  --split test_public \
  --agent heuristic \
  --report out/report.json
```
For detailed metric definitions, see SCORING.md in this dataset repository.
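Each task's `success_criteria` is a list of small declarative checks (ops such as `equals` on a dotted path into the final state, or `successful_tool_call` counting calls to a named tool). The authoritative checker is the evaluator above; the snippet below is only a minimal sketch of the `equals` op, with the dotted-path convention inferred from the task data:

```python
def get_path(state, dotted):
    """Walk a dotted path like 'accounts.at0000.status' through nested dicts."""
    cur = state
    for part in dotted.split("."):
        cur = cur[part]
    return cur

def check_equals(criterion, final_state):
    """Sketch of the 'equals' success-criterion op."""
    return get_path(final_state, criterion["path"]) == criterion["value"]

criterion = {"op": "equals", "path": "accounts.at0000.status", "value": "closed"}
state = {"accounts": {"at0000": {"status": "closed", "closed_reason": "risk_review"}}}
```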
Reproducibility Notes
- deterministic generation and replay under fixed seeds
- per-task fault plans are deterministic
- checksums included in manifest.json
- freeze metadata included in v0_1_freeze.json
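The exact layout of manifest.json is not documented here, but assuming it maps relative file paths to SHA-256 hex digests (an assumption, not a documented format), verification is straightforward:

```python
import hashlib
import json
import pathlib

def sha256_file(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(dataset_dir):
    """Compare each listed file's digest against manifest.json.
    NOTE: the path -> digest layout of manifest.json is assumed."""
    root = pathlib.Path(dataset_dir)
    manifest = json.loads((root / "manifest.json").read_text())
    return {rel: sha256_file(root / rel) == digest
            for rel, digest in manifest.items()}
```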
Limitations
- synthetic tasks do not capture all real-world API/tool semantics
- benchmark is focused on controlled robustness comparisons, not full production realism
Ethics and Privacy
- no personal data
- no proprietary user logs
- no sensitive external data sources used
License
- Dataset: CC-BY-4.0
- Code/evaluator: MIT (see project repository)
Citation
If you use ToolMisuseBench, please cite the project.
```
@misc{toolmisusebench2026,
  title={ToolMisuseBench: A Deterministic Benchmark for Tool Misuse and Recovery in Agentic Systems},
  author={ToolMisuseBench Authors},
  year={2026},
  howpublished={\url{https://github.com/akgitrepos/toolmisusebench}}
}
```