Dataset Viewer

Columns: images (list, length 1) | data_source (string, 1 class) | reward_model (dict) | extra_info (dict) | prompt (list, length 2) | ability (string, 1 class) | env_name (string, 1 class)
---|---|---|---|---|---|---|
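A minimal sketch of iterating the rows below with the Hugging Face `datasets` library. The repository id is a placeholder assumption; substitute the actual dataset path from the hub page.

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with the real dataset path.
ds = load_dataset("user/sealvqa-spatial", split="train")

row = ds[0]
print(row["data_source"])                     # "sealvqa_spatial"
print(row["reward_model"]["ground_truth"])    # reference answer string
print(row["extra_info"]["target_instances"])  # grounding boxes for the question
print(row["prompt"][1]["content"])            # user turn of the chat prompt
```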
sealvqa_spatial
|
{
"ground_truth": "Yes, the hot dog is on the right side of the cup."
}
|
{
"answer": "Yes, the hot dog is on the right side of the cup.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000558840.jpg",
"index": "sealvqa_spatial_0",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the hot dog on the right side of the cup?"
},
{
"from": "gpt",
"value": "Yes, the hot dog is on the right side of the cup."
}
],
"question": "the object; the object.\nIs the hot dog on the right side of the cup?",
"question_id": "0",
"search": 1,
"target_instances": [
{
"bbox": [
199.84,
200.46,
77.71,
70.88
],
"name": "hot dog"
},
{
"bbox": [
1.92,
87.91,
34.95,
175.35
],
"name": "cup"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the hot dog on the right side of the cup?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
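The `target_instances` boxes appear to follow the COCO `[x, y, width, height]` convention (the images are COCO 2017 frames), while `image_zoom_in_tool` expects corner coordinates `[x1, y1, x2, y2]`. A small conversion sketch under that assumption, using the "hot dog" box from the row above:

```python
def xywh_to_xyxy(box):
    """Convert a COCO-style [x, y, width, height] box to [x1, y1, x2, y2]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

# "hot dog" instance from the row above
print(xywh_to_xyxy([199.84, 200.46, 77.71, 70.88]))
# -> [199.84, 200.46, 277.55, 271.34]
```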
sealvqa_spatial
|
{
"ground_truth": "The handbag is to the left of the dog."
}
|
{
"answer": "The handbag is to the left of the dog.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000116061.jpg",
"index": "sealvqa_spatial_1",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the handbag to the left or right of the dog?"
},
{
"from": "gpt",
"value": "The handbag is to the left of the dog."
}
],
"question": "the object; the object.\nIs the handbag to the left or right of the dog?",
"question_id": "1",
"search": 1,
"target_instances": [
{
"bbox": [
213.81,
192.39,
53.94,
70.28
],
"name": "dog"
},
{
"bbox": [
34.31,
135.73,
39.98,
75.62
],
"name": "handbag"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the handbag to the left or right of the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
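The system prompt defines only the tool's signature, not its behavior (note the schema quirk in the data: the property is named `bbox_2d` but the `required` list says `"bbox"`). One plausible PIL-based implementation, written as an assumption rather than the dataset authors' actual tool:

```python
from PIL import Image

def image_zoom_in_tool(image: Image.Image, bbox_2d, label=None):
    """Crop `image` to the [x1, y1, x2, y2] region; `label` is informational only."""
    x1, y1, x2, y2 = bbox_2d
    # Clamp to the image bounds so slightly out-of-range boxes still crop cleanly.
    x1, y1 = max(0, int(x1)), max(0, int(y1))
    x2, y2 = min(image.width, int(x2)), min(image.height, int(y2))
    return image.crop((x1, y1, x2, y2))

# "handbag" box from the row above, converted from [x, y, w, h] to corners.
img = Image.open("coco2017/train2017/000000116061.jpg")
crop = image_zoom_in_tool(img, [34.31, 135.73, 74.29, 211.35], label="handbag")
```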
sealvqa_spatial
|
{
"ground_truth": "Yes, the handbag is on the left side of the bottle."
}
|
{
"answer": "Yes, the handbag is on the left side of the bottle.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000116061.jpg",
"index": "sealvqa_spatial_2",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the handbag on the left side of the bottle?"
},
{
"from": "gpt",
"value": "Yes, the handbag is on the left side of the bottle."
}
],
"question": "the object; the object.\nIs the handbag on the left side of the bottle?",
"question_id": "2",
"search": 1,
"target_instances": [
{
"bbox": [
34.31,
135.73,
39.98,
75.62
],
"name": "handbag"
},
{
"bbox": [
240.52,
182.44,
11.23,
30.92
],
"name": "bottle"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the handbag on the left side of the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
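Each user turn demands the strict layout `<think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer>`. A regex-based parser for that format, as a sketch (not the dataset's reference implementation):

```python
import json
import re

def parse_response(text: str):
    """Split a model response into its think / tool_call / answer segments."""
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    calls = re.findall(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return {
        "think": think.group(1).strip() if think else None,
        "tool_calls": [json.loads(c) for c in calls],  # {"name": ..., "arguments": ...}
        "answer": answer.group(1).strip() if answer else None,
    }
```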
sealvqa_spatial
|
{
"ground_truth": "No, the dog is not above the motorcycle."
}
|
{
"answer": "No, the dog is not above the motorcycle.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000400728.jpg",
"index": "sealvqa_spatial_3",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog on the left side of the motorcycle?"
},
{
"from": "gpt",
"value": "No, the dog is not on the left side of the motorcycle."
},
{
"from": "human",
"value": "Is the dog above the motorcycle?"
},
{
"from": "gpt",
"value": "No, the dog is not above the motorcycle."
}
],
"question": "Is the dog above the motorcycle?",
"question_id": "3",
"search": 1,
"target_instances": [
{
"bbox": [
289.47,
214.76,
52.47,
72.72
],
"name": "dog"
},
{
"bbox": [
146.02,
125.28,
79.47,
65.02
],
"name": "motorcycle"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the dog above the motorcycle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
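Because every row pairs a spatial-relation question with boxes for both objects, the `ground_truth` string can be sanity-checked geometrically from box centers. A sketch, again assuming `[x, y, width, height]` boxes, worked on the dog/motorcycle row above:

```python
def center(box_xywh):
    x, y, w, h = box_xywh
    return (x + w / 2, y + h / 2)

def is_above(a_xywh, b_xywh):
    """True if box a's center lies above box b's center (smaller y = higher)."""
    return center(a_xywh)[1] < center(b_xywh)[1]

dog        = [289.47, 214.76, 52.47, 72.72]   # center y ~= 251.1
motorcycle = [146.02, 125.28, 79.47, 65.02]   # center y ~= 157.8
print(is_above(dog, motorcycle))  # False -- matches "No, the dog is not above the motorcycle."
```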
sealvqa_spatial
|
{
"ground_truth": "Yes, the traffic light is above the person."
}
|
{
"answer": "Yes, the traffic light is above the person.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000370210.jpg",
"index": "sealvqa_spatial_4",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the traffic light on the left side of the person?"
},
{
"from": "gpt",
"value": "No, the traffic light is not on the left side of the person."
},
{
"from": "human",
"value": "Is the traffic light above the person?"
},
{
"from": "gpt",
"value": "Yes, the traffic light is above the person."
}
],
"question": "Is the traffic light above the person?",
"question_id": "4",
"search": 1,
"target_instances": [
{
"bbox": [
222.92,
143.43,
15.47,
39.41
],
"name": "person"
},
{
"bbox": [
257.66,
80.96,
13.78,
22.04
],
"name": "traffic light"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the traffic light above the person?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the car is not on the left side of the person."
}
|
{
"answer": "No, the car is not on the left side of the person.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000370210.jpg",
"index": "sealvqa_spatial_5",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the car on the left side of the person?"
},
{
"from": "gpt",
"value": "No, the car is not on the left side of the person."
}
],
"question": "the object; the object.\nIs the car on the left side of the person?",
"question_id": "5",
"search": 1,
"target_instances": [
{
"bbox": [
222.92,
143.43,
15.47,
39.41
],
"name": "person"
},
{
"bbox": [
334.66,
159.99,
33.18,
21.52
],
"name": "car"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the car on the left side of the person?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The car is below the traffic light."
}
|
{
"answer": "The car is below the traffic light.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000370210.jpg",
"index": "sealvqa_spatial_6",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the car to the left or right of the traffic light?"
},
{
"from": "gpt",
"value": "The car is to the right of the traffic light."
},
{
"from": "human",
"value": "Is the car above or below the traffic light?"
},
{
"from": "gpt",
"value": "The car is below the traffic light."
}
],
"question": "Is the car above or below the traffic light?",
"question_id": "6",
"search": 1,
"target_instances": [
{
"bbox": [
334.66,
159.99,
33.18,
21.52
],
"name": "car"
},
{
"bbox": [
257.66,
80.96,
13.78,
22.04
],
"name": "traffic light"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the car above or below the traffic light?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the car is above the chair."
}
|
{
"answer": "Yes, the car is above the chair.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000178939.jpg",
"index": "sealvqa_spatial_7",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the car on the left side of the chair?"
},
{
"from": "gpt",
"value": "No, the car is not on the left side of the chair."
},
{
"from": "human",
"value": "Is the car above the chair?"
},
{
"from": "gpt",
"value": "Yes, the car is above the chair."
}
],
"question": "Is the car above the chair?",
"question_id": "7",
"search": 1,
"target_instances": [
{
"bbox": [
0.98,
257.78,
30.57,
140.63
],
"name": "chair"
},
{
"bbox": [
138.11,
181.87,
29.32,
15.74
],
"name": "car"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the car above the chair?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The wine glass is above the knife."
}
|
{
"answer": "The wine glass is above the knife.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000158497.jpg",
"index": "sealvqa_spatial_8",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the knife on the left or right side of the wine glass?"
},
{
"from": "gpt",
"value": "The knife is on the right side of the wine glass."
},
{
"from": "human",
"value": "Is the wine glass above or below the knife?"
},
{
"from": "gpt",
"value": "The wine glass is above the knife."
}
],
"question": "Is the wine glass above or below the knife?",
"question_id": "8",
"search": 1,
"target_instances": [
{
"bbox": [
309.6,
163.21,
16.88,
21.59
],
"name": "knife"
},
{
"bbox": [
257.58,
96.12,
24.5,
26.23
],
"name": "wine glass"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the wine glass above or below the knife?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The backpack is above the dog."
}
|
{
"answer": "The backpack is above the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000390348.jpg",
"index": "sealvqa_spatial_9",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the backpack on the right side of the dog?"
},
{
"from": "gpt",
"value": "No, the backpack is not on the right side of the dog."
},
{
"from": "human",
"value": "Is the backpack above or below the dog?"
},
{
"from": "gpt",
"value": "The backpack is above the dog."
}
],
"question": "Is the backpack above or below the dog?",
"question_id": "9",
"search": 1,
"target_instances": [
{
"bbox": [
465.31,
216.49,
26.44,
46.16
],
"name": "dog"
},
{
"bbox": [
170.58,
102.22,
39.98,
56.42
],
"name": "backpack"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the backpack above or below the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The person is above the dog."
}
|
{
"answer": "The person is above the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000047263.jpg",
"index": "sealvqa_spatial_10",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the person to the left or right of the dog?"
},
{
"from": "gpt",
"value": "The person is to the right of the dog."
},
{
"from": "human",
"value": "Is the person above or below the dog?"
},
{
"from": "gpt",
"value": "The person is above the dog."
}
],
"question": "Is the person above or below the dog?",
"question_id": "10",
"search": 1,
"target_instances": [
{
"bbox": [
222.86,
186.51,
30.04,
41.67
],
"name": "dog"
},
{
"bbox": [
344.44,
96,
29.49,
85.27
],
"name": "person"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the person above or below the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The dog is to the left of the bottle."
}
|
{
"answer": "The dog is to the left of the bottle.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000519838.jpg",
"index": "sealvqa_spatial_11",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog to the left or right of the bottle?"
},
{
"from": "gpt",
"value": "The dog is to the left of the bottle."
}
],
"question": "the object; the object.\nIs the dog to the left or right of the bottle?",
"question_id": "11",
"search": 1,
"target_instances": [
{
"bbox": [
55.29,
168.27,
70.63,
51.07
],
"name": "dog"
},
{
"bbox": [
244.65,
190.11,
18.24,
27.63
],
"name": "bottle"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the dog to the left or right of the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the bottle is not to the left of the person."
}
|
{
"answer": "No, the bottle is not to the left of the person.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000519838.jpg",
"index": "sealvqa_spatial_12",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bottle to the left of the person?"
},
{
"from": "gpt",
"value": "No, the bottle is not to the left of the person."
}
],
"question": "the object; the object.\nIs the bottle to the left of the person?",
"question_id": "12",
"search": 1,
"target_instances": [
{
"bbox": [
244.65,
190.11,
18.24,
27.63
],
"name": "bottle"
},
{
"bbox": [
20.9,
86.97,
39.1,
125.39
],
"name": "person"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the bottle to the left of the person?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The dog is below the book."
}
|
{
"answer": "The dog is below the book.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000151988.jpg",
"index": "sealvqa_spatial_13",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog on the right side of the book?"
},
{
"from": "gpt",
"value": "Yes, the dog is on the right side of the book."
},
{
"from": "human",
"value": "Is the dog above or below the book?"
},
{
"from": "gpt",
"value": "The dog is below the book."
}
],
"question": "Is the dog above or below the book?",
"question_id": "13",
"search": 1,
"target_instances": [
{
"bbox": [
321.92,
292.11,
159.76,
55.71
],
"name": "dog"
},
{
"bbox": [
275.61,
144.87,
21.36,
25.71
],
"name": "book"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the dog above or below the book?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the book is not below the bottle."
}
|
{
"answer": "No, the book is not below the bottle.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000151988.jpg",
"index": "sealvqa_spatial_14",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bottle to the left or right of the book?"
},
{
"from": "gpt",
"value": "The bottle is to the right of the book."
},
{
"from": "human",
"value": "Is the book below the bottle?"
},
{
"from": "gpt",
"value": "No, the book is not below the bottle."
}
],
"question": "Is the book below the bottle?",
"question_id": "14",
"search": 1,
"target_instances": [
{
"bbox": [
381.02,
277.74,
25.12,
82.27
],
"name": "bottle"
},
{
"bbox": [
275.61,
144.87,
21.36,
25.71
],
"name": "book"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the book below the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the dog is not above the traffic light."
}
|
{
"answer": "No, the dog is not above the traffic light.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000205573.jpg",
"index": "sealvqa_spatial_15",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog on the left side of the traffic light?"
},
{
"from": "gpt",
"value": "Yes, the dog is on the left side of the traffic light."
},
{
"from": "human",
"value": "Is the dog above the traffic light?"
},
{
"from": "gpt",
"value": "No, the dog is not above the traffic light."
}
],
"question": "Is the dog above the traffic light?",
"question_id": "15",
"search": 1,
"target_instances": [
{
"bbox": [
217.41,
159.26,
54.72,
44.77
],
"name": "dog"
},
{
"bbox": [
468.48,
63.23,
22.3,
36.56
],
"name": "traffic light"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the dog above the traffic light?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the truck is above the dog."
}
|
{
"answer": "Yes, the truck is above the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000070815.jpg",
"index": "sealvqa_spatial_16",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the truck to the left or right of the dog?"
},
{
"from": "gpt",
"value": "The truck is to the left of the dog."
},
{
"from": "human",
"value": "Is the truck above the dog?"
},
{
"from": "gpt",
"value": "Yes, the truck is above the dog."
}
],
"question": "Is the truck above the dog?",
"question_id": "16",
"search": 1,
"target_instances": [
{
"bbox": [
186.36,
269.28,
59.15,
97.85
],
"name": "dog"
},
{
"bbox": [
0.39,
144.83,
64.79,
49.55
],
"name": "truck"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the truck above the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the apple is not below the oven."
}
|
{
"answer": "No, the apple is not below the oven.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000057936.jpg",
"index": "sealvqa_spatial_17",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the oven on the right side of the apple?"
},
{
"from": "gpt",
"value": "Yes, the oven is on the right side of the apple."
},
{
"from": "human",
"value": "Is the apple below the oven?"
},
{
"from": "gpt",
"value": "No, the apple is not below the oven."
}
],
"question": "Is the apple below the oven?",
"question_id": "17",
"search": 1,
"target_instances": [
{
"bbox": [
421.32,
81.38,
34.24,
154.84
],
"name": "oven"
},
{
"bbox": [
190.15,
62.96,
9.16,
10.82
],
"name": "apple"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the apple below the oven?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The bottle is above the apple."
}
|
{
"answer": "The bottle is above the apple.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000057936.jpg",
"index": "sealvqa_spatial_18",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bottle on the right side of the apple?"
},
{
"from": "gpt",
"value": "Yes, the bottle is on the right side of the apple."
},
{
"from": "human",
"value": "Is the bottle above or below the apple?"
},
{
"from": "gpt",
"value": "The bottle is above the apple."
}
],
"question": "Is the bottle above or below the apple?",
"question_id": "18",
"search": 1,
"target_instances": [
{
"bbox": [
421.12,
16.89,
10.46,
28.26
],
"name": "bottle"
},
{
"bbox": [
190.15,
62.96,
9.16,
10.82
],
"name": "apple"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the bottle above or below the apple?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The dining table is below the cup."
}
|
{
"answer": "The dining table is below the cup.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000560830.jpg",
"index": "sealvqa_spatial_19",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the cup to the left of the dining table?"
},
{
"from": "gpt",
"value": "Yes, the cup is to the left of the dining table."
},
{
"from": "human",
"value": "Is the dining table above or below the cup?"
},
{
"from": "gpt",
"value": "The dining table is below the cup."
}
],
"question": "Is the dining table above or below the cup?",
"question_id": "19",
"search": 1,
"target_instances": [
{
"bbox": [
61.68,
186.99,
9.43,
8.28
],
"name": "cup"
},
{
"bbox": [
200.68,
216.26,
80.68,
86.4
],
"name": "dining table"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the dining table above or below the cup?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The dog is to the left of the cell phone."
}
|
{
"answer": "The dog is to the left of the cell phone.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000535483.jpg",
"index": "sealvqa_spatial_20",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog to the left or right of the cell phone?"
},
{
"from": "gpt",
"value": "The dog is to the left of the cell phone."
}
],
"question": "the object; the object.\nIs the dog to the left or right of the cell phone?",
"question_id": "20",
"search": 1,
"target_instances": [
{
"bbox": [
236.2,
146.01,
26.17,
19.49
],
"name": "dog"
},
{
"bbox": [
416.66,
138.71,
17.25,
21.87
],
"name": "cell phone"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the dog to the left or right of the cell phone?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The cell phone is above the bird."
}
|
{
"answer": "The cell phone is above the bird.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000535483.jpg",
"index": "sealvqa_spatial_21",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the cell phone on the left side of the bird?"
},
{
"from": "gpt",
"value": "No, the cell phone is not on the left side of the bird."
},
{
"from": "human",
"value": "Is the cell phone above or below the bird?"
},
{
"from": "gpt",
"value": "The cell phone is above the bird."
}
],
"question": "Is the cell phone above or below the bird?",
"question_id": "21",
"search": 1,
"target_instances": [
{
"bbox": [
16.25,
185.57,
16.21,
11.76
],
"name": "bird"
},
{
"bbox": [
416.66,
138.71,
17.25,
21.87
],
"name": "cell phone"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the cell phone above or below the bird?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the bird is below the dog."
}
|
{
"answer": "Yes, the bird is below the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000535483.jpg",
"index": "sealvqa_spatial_22",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog on the left side of the bird?"
},
{
"from": "gpt",
"value": "No, the dog is not on the left side of the bird."
},
{
"from": "human",
"value": "Is the bird below the dog?"
},
{
"from": "gpt",
"value": "Yes, the bird is below the dog."
}
],
"question": "Is the bird below the dog?",
"question_id": "22",
"search": 1,
"target_instances": [
{
"bbox": [
236.2,
146.01,
26.17,
19.49
],
"name": "dog"
},
{
"bbox": [
16.25,
185.57,
16.21,
11.76
],
"name": "bird"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the bird below the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The truck is above the fire hydrant."
}
|
{
"answer": "The truck is above the fire hydrant.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000187349.jpg",
"index": "sealvqa_spatial_23",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the truck to the right of the fire hydrant?"
},
{
"from": "gpt",
"value": "No, the truck is not to the right of the fire hydrant."
},
{
"from": "human",
"value": "Is the truck above or below the fire hydrant?"
},
{
"from": "gpt",
"value": "The truck is above the fire hydrant."
}
],
"question": "Is the truck above or below the fire hydrant?",
"question_id": "23",
"search": 1,
"target_instances": [
{
"bbox": [
558.09,
431.71,
4.78,
7.31
],
"name": "fire hydrant"
},
{
"bbox": [
259.17,
73.32,
71.71,
51.58
],
"name": "truck"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the truck above or below the fire hydrant?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the truck is above the stop sign."
}
|
{
"answer": "Yes, the truck is above the stop sign.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000187349.jpg",
"index": "sealvqa_spatial_24",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the truck to the left of the stop sign?"
},
{
"from": "gpt",
"value": "Yes, the truck is to the left of the stop sign."
},
{
"from": "human",
"value": "Is the truck above the stop sign?"
},
{
"from": "gpt",
"value": "Yes, the truck is above the stop sign."
}
],
"question": "Is the truck above the stop sign?",
"question_id": "24",
"search": 1,
"target_instances": [
{
"bbox": [
445.65,
424.39,
5.51,
8.09
],
"name": "stop sign"
},
{
"bbox": [
259.17,
73.32,
71.71,
51.58
],
"name": "truck"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the truck above the stop sign?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the fire hydrant is not to the left of the stop sign."
}
|
{
"answer": "No, the fire hydrant is not to the left of the stop sign.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000187349.jpg",
"index": "sealvqa_spatial_25",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the fire hydrant to the left of the stop sign?"
},
{
"from": "gpt",
"value": "No, the fire hydrant is not to the left of the stop sign."
}
],
"question": "the object; the object.\nIs the fire hydrant to the left of the stop sign?",
"question_id": "25",
"search": 1,
"target_instances": [
{
"bbox": [
558.09,
431.71,
4.78,
7.31
],
"name": "fire hydrant"
},
{
"bbox": [
445.65,
424.39,
5.51,
8.09
],
"name": "stop sign"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the fire hydrant to the left of the stop sign?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The sink is on the right side of the bottle."
}
|
{
"answer": "The sink is on the right side of the bottle.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000282134.jpg",
"index": "sealvqa_spatial_26",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the sink on the left or right side of the bottle?"
},
{
"from": "gpt",
"value": "The sink is on the right side of the bottle."
}
],
"question": "the object; the object.\nIs the sink on the left or right side of the bottle?",
"question_id": "26",
"search": 1,
"target_instances": [
{
"bbox": [
336.37,
24.12,
21.29,
68.24
],
"name": "bottle"
},
{
"bbox": [
534.05,
71.74,
104.79,
20.71
],
"name": "sink"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the sink on the left or right side of the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The bottle is below the dog."
}
|
{
"answer": "The bottle is below the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000007125.jpg",
"index": "sealvqa_spatial_27",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog to the left or right of the bottle?"
},
{
"from": "gpt",
"value": "The dog is to the right of the bottle."
},
{
"from": "human",
"value": "Is the bottle above or below the dog?"
},
{
"from": "gpt",
"value": "The bottle is below the dog."
}
],
"question": "Is the bottle above or below the dog?",
"question_id": "27",
"search": 1,
"target_instances": [
{
"bbox": [
377.68,
258.81,
13.28,
9.36
],
"name": "dog"
},
{
"bbox": [
69.83,
359.86,
18.24,
24.95
],
"name": "bottle"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the bottle above or below the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the bottle is not above the backpack."
}
|
{
"answer": "No, the bottle is not above the backpack.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000007125.jpg",
"index": "sealvqa_spatial_28",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the backpack to the left or right of the bottle?"
},
{
"from": "gpt",
"value": "The backpack is to the right of the bottle."
},
{
"from": "human",
"value": "Is the bottle above the backpack?"
},
{
"from": "gpt",
"value": "No, the bottle is not above the backpack."
}
],
"question": "Is the bottle above the backpack?",
"question_id": "28",
"search": 1,
"target_instances": [
{
"bbox": [
69.83,
359.86,
18.24,
24.95
],
"name": "bottle"
},
{
"bbox": [
88.45,
255.88,
27.06,
48.02
],
"name": "backpack"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the bottle above the backpack?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The dog is to the right of the backpack."
}
|
{
"answer": "The dog is to the right of the backpack.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000007125.jpg",
"index": "sealvqa_spatial_29",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog to the left or right of the backpack?"
},
{
"from": "gpt",
"value": "The dog is to the right of the backpack."
}
],
"question": "the object; the object.\nIs the dog to the left or right of the backpack?",
"question_id": "29",
"search": 1,
"target_instances": [
{
"bbox": [
377.68,
258.81,
13.28,
9.36
],
"name": "dog"
},
{
"bbox": [
88.45,
255.88,
27.06,
48.02
],
"name": "backpack"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the dog to the left or right of the backpack?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the handbag is on the right side of the dog."
}
|
{
"answer": "Yes, the handbag is on the right side of the dog.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000386165.jpg",
"index": "sealvqa_spatial_30",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the handbag on the right side of the dog?"
},
{
"from": "gpt",
"value": "Yes, the handbag is on the right side of the dog."
}
],
"question": "the object; the object.\nIs the handbag on the right side of the dog?",
"question_id": "30",
"search": 1,
"target_instances": [
{
"bbox": [
161.51,
290.24,
79.88,
84.08
],
"name": "dog"
},
{
"bbox": [
286.14,
199.76,
55.61,
116.69
],
"name": "handbag"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the handbag on the right side of the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The stop sign is above the person."
}
|
{
"answer": "The stop sign is above the person.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000559950.jpg",
"index": "sealvqa_spatial_31",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the person on the right side of the stop sign?"
},
{
"from": "gpt",
"value": "Yes, the person is on the right side of the stop sign."
},
{
"from": "human",
"value": "Is the stop sign above or below the person?"
},
{
"from": "gpt",
"value": "The stop sign is above the person."
}
],
"question": "Is the stop sign above or below the person?",
"question_id": "31",
"search": 1,
"target_instances": [
{
"bbox": [
252.39,
326.51,
17.12,
45.78
],
"name": "person"
},
{
"bbox": [
106.5,
280.62,
21.67,
22.01
],
"name": "stop sign"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the stop sign above or below the person?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the clock is not on the left side of the bottle."
}
|
{
"answer": "No, the clock is not on the left side of the bottle.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000020671.jpg",
"index": "sealvqa_spatial_32",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the clock on the left side of the bottle?"
},
{
"from": "gpt",
"value": "No, the clock is not on the left side of the bottle."
}
],
"question": "the object; the object.\nIs the clock on the left side of the bottle?",
"question_id": "32",
"search": 1,
"target_instances": [
{
"bbox": [
384.69,
252.2,
18,
16.65
],
"name": "clock"
},
{
"bbox": [
341.19,
228.58,
38.08,
89.17
],
"name": "bottle"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the clock on the left side of the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The dog is to the left of the person."
}
|
{
"answer": "The dog is to the left of the person.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000278550.jpg",
"index": "sealvqa_spatial_33",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog to the left or right of the person?"
},
{
"from": "gpt",
"value": "The dog is to the left of the person."
}
],
"question": "the object; the object.\nIs the dog to the left or right of the person?",
"question_id": "33",
"search": 1,
"target_instances": [
{
"bbox": [
12.47,
82.61,
107.86,
62.35
],
"name": "dog"
},
{
"bbox": [
294.44,
142.48,
16.35,
22.69
],
"name": "person"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the dog to the left or right of the person?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the oven is not above the cup."
}
|
{
"answer": "No, the oven is not above the cup.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000209468.jpg",
"index": "sealvqa_spatial_34",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the cup to the left or right of the oven?"
},
{
"from": "gpt",
"value": "The cup is to the left of the oven."
},
{
"from": "human",
"value": "Is the oven above the cup?"
},
{
"from": "gpt",
"value": "No, the oven is not above the cup."
}
],
"question": "Is the oven above the cup?",
"question_id": "34",
"search": 1,
"target_instances": [
{
"bbox": [
255.07,
303.44,
64.57,
117.31
],
"name": "oven"
},
{
"bbox": [
162.88,
255.29,
18.17,
23.61
],
"name": "cup"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the oven above the cup?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The bowl is on the left side of the laptop."
}
|
{
"answer": "The bowl is on the left side of the laptop.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000209468.jpg",
"index": "sealvqa_spatial_35",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bowl on the left or right side of the laptop?"
},
{
"from": "gpt",
"value": "The bowl is on the left side of the laptop."
}
],
"question": "the object; the object.\nIs the bowl on the left or right side of the laptop?",
"question_id": "35",
"search": 1,
"target_instances": [
{
"bbox": [
268.58,
222.2,
88.45,
65.8
],
"name": "laptop"
},
{
"bbox": [
136.12,
262.21,
26.33,
12.08
],
"name": "bowl"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the bowl on the left or right side of the laptop?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The cup is on the left side of the microwave."
}
|
{
"answer": "The cup is on the left side of the microwave.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000210012.jpg",
"index": "sealvqa_spatial_36",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the cup on the left or right side of the microwave?"
},
{
"from": "gpt",
"value": "The cup is on the left side of the microwave."
}
],
"question": "the object; the object.\nIs the cup on the left or right side of the microwave?",
"question_id": "36",
"search": 1,
"target_instances": [
{
"bbox": [
354.88,
107.87,
66.87,
33.43
],
"name": "microwave"
},
{
"bbox": [
125.6,
131.24,
13.8,
14.13
],
"name": "cup"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the cup on the left or right side of the microwave?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the cup is above the dog."
}
|
{
"answer": "Yes, the cup is above the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000210012.jpg",
"index": "sealvqa_spatial_37",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the cup to the right of the dog?"
},
{
"from": "gpt",
"value": "No, the cup is not to the right of the dog."
},
{
"from": "human",
"value": "Is the cup above the dog?"
},
{
"from": "gpt",
"value": "Yes, the cup is above the dog."
}
],
"question": "Is the cup above the dog?",
"question_id": "37",
"search": 1,
"target_instances": [
{
"bbox": [
314.29,
209,
89.38,
71.19
],
"name": "dog"
},
{
"bbox": [
125.6,
131.24,
13.8,
14.13
],
"name": "cup"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the cup above the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the surfboard is not to the right of the dog."
}
|
{
"answer": "No, the surfboard is not to the right of the dog.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000272153.jpg",
"index": "sealvqa_spatial_38",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the surfboard to the right of the dog?"
},
{
"from": "gpt",
"value": "No, the surfboard is not to the right of the dog."
}
],
"question": "the object; the object.\nIs the surfboard to the right of the dog?",
"question_id": "38",
"search": 1,
"target_instances": [
{
"bbox": [
261.04,
293.65,
100.2,
97.49
],
"name": "dog"
},
{
"bbox": [
134.51,
183.08,
55.44,
146.28
],
"name": "surfboard"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the surfboard to the right of the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The clock is above the handbag."
}
|
{
"answer": "The clock is above the handbag.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000386968.jpg",
"index": "sealvqa_spatial_39",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the handbag to the right of the clock?"
},
{
"from": "gpt",
"value": "Yes, the handbag is to the right of the clock."
},
{
"from": "human",
"value": "Is the clock above or below the handbag?"
},
{
"from": "gpt",
"value": "The clock is above the handbag."
}
],
"question": "Is the clock above or below the handbag?",
"question_id": "39",
"search": 1,
"target_instances": [
{
"bbox": [
432.26,
204.98,
70.98,
88.61
],
"name": "handbag"
},
{
"bbox": [
187.27,
24.83,
15.3,
20.6
],
"name": "clock"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the clock above or below the handbag?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the dog is not above the backpack."
}
|
{
"answer": "No, the dog is not above the backpack.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000210060.jpg",
"index": "sealvqa_spatial_40",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog to the left or right of the backpack?"
},
{
"from": "gpt",
"value": "The dog is to the right of the backpack."
},
{
"from": "human",
"value": "Is the dog above the backpack?"
},
{
"from": "gpt",
"value": "No, the dog is not above the backpack."
}
],
"question": "Is the dog above the backpack?",
"question_id": "40",
"search": 1,
"target_instances": [
{
"bbox": [
388.18,
222.56,
57.8,
150.09
],
"name": "dog"
},
{
"bbox": [
254.3,
120.08,
42.46,
45.73
],
"name": "backpack"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the dog above the backpack?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The person is above the dog."
}
|
{
"answer": "The person is above the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000106166.jpg",
"index": "sealvqa_spatial_41",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the person on the left or right side of the dog?"
},
{
"from": "gpt",
"value": "The person is on the left side of the dog."
},
{
"from": "human",
"value": "Is the person above or below the dog?"
},
{
"from": "gpt",
"value": "The person is above the dog."
}
],
"question": "Is the person above or below the dog?",
"question_id": "41",
"search": 1,
"target_instances": [
{
"bbox": [
372.2,
205.54,
76.05,
82.66
],
"name": "dog"
},
{
"bbox": [
160.15,
145.02,
27.73,
28.7
],
"name": "person"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the person above or below the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the motorcycle is to the left of the chair."
}
|
{
"answer": "Yes, the motorcycle is to the left of the chair.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000276719.jpg",
"index": "sealvqa_spatial_42",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the motorcycle to the left of the chair?"
},
{
"from": "gpt",
"value": "Yes, the motorcycle is to the left of the chair."
}
],
"question": "the object; the object.\nIs the motorcycle to the left of the chair?",
"question_id": "42",
"search": 1,
"target_instances": [
{
"bbox": [
365.68,
120.52,
30.54,
53.37
],
"name": "chair"
},
{
"bbox": [
272.7,
73.86,
44.73,
56.14
],
"name": "motorcycle"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the motorcycle to the left of the chair?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the dog is to the left of the person."
}
|
{
"answer": "Yes, the dog is to the left of the person.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000444576.jpg",
"index": "sealvqa_spatial_43",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog to the left of the person?"
},
{
"from": "gpt",
"value": "Yes, the dog is to the left of the person."
}
],
"question": "the object; the object.\nIs the dog to the left of the person?",
"question_id": "43",
"search": 1,
"target_instances": [
{
"bbox": [
280.51,
178.63,
37.91,
45.84
],
"name": "dog"
},
{
"bbox": [
324.27,
190.57,
25.83,
35.52
],
"name": "person"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the dog to the left of the person?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the handbag is above the dog."
}
|
{
"answer": "Yes, the handbag is above the dog.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000319055.jpg",
"index": "sealvqa_spatial_44",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the handbag to the left of the dog?"
},
{
"from": "gpt",
"value": "Yes, the handbag is to the left of the dog."
},
{
"from": "human",
"value": "Is the handbag above the dog?"
},
{
"from": "gpt",
"value": "Yes, the handbag is above the dog."
}
],
"question": "Is the handbag above the dog?",
"question_id": "44",
"search": 1,
"target_instances": [
{
"bbox": [
484.33,
224.38,
34.03,
22.54
],
"name": "dog"
},
{
"bbox": [
465.89,
215.5,
5.96,
5.28
],
"name": "handbag"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the handbag above the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the dog is on the right side of the car."
}
|
{
"answer": "Yes, the dog is on the right side of the car.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000027519.jpg",
"index": "sealvqa_spatial_45",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the dog on the right side of the car?"
},
{
"from": "gpt",
"value": "Yes, the dog is on the right side of the car."
}
],
"question": "the object; the object.\nIs the dog on the right side of the car?",
"question_id": "45",
"search": 1,
"target_instances": [
{
"bbox": [
237.1,
375.1,
62.9,
81.29
],
"name": "dog"
},
{
"bbox": [
111.8,
269.47,
40.56,
115.67
],
"name": "car"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the dog on the right side of the car?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the bowl is to the left of the dog."
}
|
{
"answer": "Yes, the bowl is to the left of the dog.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000346560.jpg",
"index": "sealvqa_spatial_46",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bowl to the left of the dog?"
},
{
"from": "gpt",
"value": "Yes, the bowl is to the left of the dog."
}
],
"question": "the object; the object.\nIs the bowl to the left of the dog?",
"question_id": "46",
"search": 1,
"target_instances": [
{
"bbox": [
547.36,
238.72,
56.67,
121.77
],
"name": "dog"
},
{
"bbox": [
6.81,
249.02,
75.77,
46.92
],
"name": "bowl"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the bowl to the left of the dog?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The handbag is below the backpack."
}
|
{
"answer": "The handbag is below the backpack.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000018783.jpg",
"index": "sealvqa_spatial_47",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the handbag to the left of the backpack?"
},
{
"from": "gpt",
"value": "No, the handbag is not to the left of the backpack."
},
{
"from": "human",
"value": "Is the handbag above or below the backpack?"
},
{
"from": "gpt",
"value": "The handbag is below the backpack."
}
],
"question": "Is the handbag above or below the backpack?",
"question_id": "47",
"search": 1,
"target_instances": [
{
"bbox": [
246.88,
190.25,
82.78,
90.56
],
"name": "backpack"
},
{
"bbox": [
361.97,
361.05,
65.54,
71.49
],
"name": "handbag"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the handbag above or below the backpack?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the backpack is on the left side of the car."
}
|
{
"answer": "Yes, the backpack is on the left side of the car.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000018783.jpg",
"index": "sealvqa_spatial_48",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the backpack on the left side of the car?"
},
{
"from": "gpt",
"value": "Yes, the backpack is on the left side of the car."
}
],
"question": "the object; the object.\nIs the backpack on the left side of the car?",
"question_id": "48",
"search": 1,
"target_instances": [
{
"bbox": [
612.18,
269.8,
27.48,
100.29
],
"name": "car"
},
{
"bbox": [
246.88,
190.25,
82.78,
90.56
],
"name": "backpack"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the backpack on the left side of the car?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the handbag is to the left of the car."
}
|
{
"answer": "Yes, the handbag is to the left of the car.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000018783.jpg",
"index": "sealvqa_spatial_49",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the handbag to the left of the car?"
},
{
"from": "gpt",
"value": "Yes, the handbag is to the left of the car."
}
],
"question": "the object; the object.\nIs the handbag to the left of the car?",
"question_id": "49",
"search": 1,
"target_instances": [
{
"bbox": [
612.18,
269.8,
27.48,
100.29
],
"name": "car"
},
{
"bbox": [
361.97,
361.05,
65.54,
71.49
],
"name": "handbag"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the handbag to the left of the car?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the microwave is above the bottle."
}
|
{
"answer": "Yes, the microwave is above the bottle.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000206784.jpg",
"index": "sealvqa_spatial_50",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bottle on the left side of the microwave?"
},
{
"from": "gpt",
"value": "No, the bottle is not on the left side of the microwave."
},
{
"from": "human",
"value": "Is the microwave above the bottle?"
},
{
"from": "gpt",
"value": "Yes, the microwave is above the bottle."
}
],
"question": "Is the microwave above the bottle?",
"question_id": "50",
"search": 1,
"target_instances": [
{
"bbox": [
570.95,
302.58,
22.77,
56.21
],
"name": "bottle"
},
{
"bbox": [
445.12,
237.04,
67.49,
41.19
],
"name": "microwave"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the microwave above the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The bottle is below the clock."
}
|
{
"answer": "The bottle is below the clock.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000206784.jpg",
"index": "sealvqa_spatial_51",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bottle to the left of the clock?"
},
{
"from": "gpt",
"value": "No, the bottle is not to the left of the clock."
},
{
"from": "human",
"value": "Is the bottle above or below the clock?"
},
{
"from": "gpt",
"value": "The bottle is below the clock."
}
],
"question": "Is the bottle above or below the clock?",
"question_id": "51",
"search": 1,
"target_instances": [
{
"bbox": [
570.95,
302.58,
22.77,
56.21
],
"name": "bottle"
},
{
"bbox": [
430.21,
203.48,
19.05,
16.12
],
"name": "clock"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the bottle above or below the clock?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The sink is above the bottle."
}
|
{
"answer": "The sink is above the bottle.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000206784.jpg",
"index": "sealvqa_spatial_52",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bottle to the right of the sink?"
},
{
"from": "gpt",
"value": "Yes, the bottle is to the right of the sink."
},
{
"from": "human",
"value": "Is the sink above or below the bottle?"
},
{
"from": "gpt",
"value": "The sink is above the bottle."
}
],
"question": "Is the sink above or below the bottle?",
"question_id": "52",
"search": 1,
"target_instances": [
{
"bbox": [
570.95,
302.58,
22.77,
56.21
],
"name": "bottle"
},
{
"bbox": [
366.21,
254.72,
59.79,
34.39
],
"name": "sink"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the sink above or below the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
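Every system prompt in these rows specifies the same calling convention: the model wraps a JSON object in `<tool_call>...</tool_call>`. One quirk worth noting is that the advertised JSON schema lists `"bbox"` under `required` while the only declared property is `"bbox_2d"`, so a consumer should validate against `bbox_2d`. A minimal extraction sketch, assuming the model reply is available as a string:

```python
import json
import re
from typing import Optional

# Capture whatever sits between the tool-call tags, then parse it as JSON.
# (A non-greedy brace match would stop at the first nested "}", so we anchor
# on the closing tag instead.)
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def parse_tool_call(reply: str) -> Optional[dict]:
    """Return the parsed {"name": ..., "arguments": ...} object, or None."""
    match = TOOL_CALL_RE.search(reply)
    if match is None:
        return None
    try:
        call = json.loads(match.group(1))
    except json.JSONDecodeError:
        return None
    args = call.get("arguments")
    if call.get("name") != "image_zoom_in_tool" or not isinstance(args, dict):
        return None
    bbox = args.get("bbox_2d")
    if not (isinstance(bbox, list) and len(bbox) == 4):
        return None
    return call

# Illustrative reply; the box is the sink annotation from the row above,
# converted from [x, y, w, h] to [x1, y1, x2, y2].
reply = ('<think>Check the sink.</think> <tool_call> {"name": '
         '"image_zoom_in_tool", "arguments": {"bbox_2d": [366, 255, 426, 289], '
         '"label": "sink"}} </tool_call> <answer>above</answer>')
print(parse_tool_call(reply))
```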
|
sealvqa_spatial
|
{
"ground_truth": "The vase is above the toilet."
}
|
{
"answer": "The vase is above the toilet.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000221869.jpg",
"index": "sealvqa_spatial_53",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the toilet to the left or right of the vase?"
},
{
"from": "gpt",
"value": "The toilet is to the left of the vase."
},
{
"from": "human",
"value": "Is the vase above or below the toilet?"
},
{
"from": "gpt",
"value": "The vase is above the toilet."
}
],
"question": "Is the vase above or below the toilet?",
"question_id": "53",
"search": 1,
"target_instances": [
{
"bbox": [
123.27,
339.37,
7.72,
26.72
],
"name": "toilet"
},
{
"bbox": [
215.94,
246.11,
58.73,
48.4
],
"name": "vase"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the vase above or below the toilet?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the sink is not above the clock."
}
|
{
"answer": "No, the sink is not above the clock.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000022621.jpg",
"index": "sealvqa_spatial_54",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the sink to the left of the clock?"
},
{
"from": "gpt",
"value": "Yes, the sink is to the left of the clock."
},
{
"from": "human",
"value": "Is the sink above the clock?"
},
{
"from": "gpt",
"value": "No, the sink is not above the clock."
}
],
"question": "Is the sink above the clock?",
"question_id": "54",
"search": 1,
"target_instances": [
{
"bbox": [
115.06,
290.52,
159.64,
31.64
],
"name": "sink"
},
{
"bbox": [
410.7,
201.86,
10.3,
15.77
],
"name": "clock"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the sink above the clock?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the cup is not above the sink."
}
|
{
"answer": "No, the cup is not above the sink.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000078260.jpg",
"index": "sealvqa_spatial_55",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the sink on the left side of the cup?"
},
{
"from": "gpt",
"value": "No, the sink is not on the left side of the cup."
},
{
"from": "human",
"value": "Is the cup above the sink?"
},
{
"from": "gpt",
"value": "No, the cup is not above the sink."
}
],
"question": "Is the cup above the sink?",
"question_id": "55",
"search": 1,
"target_instances": [
{
"bbox": [
185.62,
249.99,
25.36,
27.7
],
"name": "cup"
},
{
"bbox": [
365.2,
222.19,
34.13,
7.1
],
"name": "sink"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the cup above the sink?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the microwave is on the right side of the bottle."
}
|
{
"answer": "Yes, the microwave is on the right side of the bottle.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000078260.jpg",
"index": "sealvqa_spatial_56",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the microwave on the right side of the bottle?"
},
{
"from": "gpt",
"value": "Yes, the microwave is on the right side of the bottle."
}
],
"question": "the object; the object.\nIs the microwave on the right side of the bottle?",
"question_id": "56",
"search": 1,
"target_instances": [
{
"bbox": [
160.65,
182.84,
10.11,
32.86
],
"name": "bottle"
},
{
"bbox": [
200.19,
179.2,
56.36,
32.39
],
"name": "microwave"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the microwave on the right side of the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
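Note the unit mismatch a consumer of these rows has to handle: `target_instances` stores COCO-style `[x, y, width, height]` boxes, while `image_zoom_in_tool` expects `[x1, y1, x2, y2]` corners. A small conversion-and-crop sketch with Pillow, using the bottle annotation from the row above; the blank canvas stands in for the actual coco2017 image, which this sketch does not assume is on disk:

```python
from PIL import Image

def coco_to_xyxy(bbox_xywh, pad=0.0):
    """Convert COCO [x, y, w, h] to [x1, y1, x2, y2], optionally padded."""
    x, y, w, h = bbox_xywh
    return [x - pad, y - pad, x + w + pad, y + h + pad]

bottle_xywh = [160.65, 182.84, 10.11, 32.86]   # from sealvqa_spatial_56 above
x1, y1, x2, y2 = coco_to_xyxy(bottle_xywh, pad=4.0)

# Stand-in canvas at a typical COCO resolution; replace with
# Image.open(extra_info["image_path"]) when the images are available.
img = Image.new("RGB", (640, 480))
crop = img.crop((int(x1), int(y1), int(x2), int(y2)))
print(crop.size)
```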
|
sealvqa_spatial
|
{
"ground_truth": "No, the oven is not below the cup."
}
|
{
"answer": "No, the oven is not below the cup.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000128201.jpg",
"index": "sealvqa_spatial_57",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the oven to the left of the cup?"
},
{
"from": "gpt",
"value": "No, the oven is not to the left of the cup."
},
{
"from": "human",
"value": "Is the oven below the cup?"
},
{
"from": "gpt",
"value": "No, the oven is not below the cup."
}
],
"question": "Is the oven below the cup?",
"question_id": "57",
"search": 1,
"target_instances": [
{
"bbox": [
173.88,
294.11,
14.22,
18.44
],
"name": "cup"
},
{
"bbox": [
428.93,
130.95,
55.6,
63.42
],
"name": "oven"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the oven below the cup?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The oven is below the cup."
}
|
{
"answer": "The oven is below the cup.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000092998.jpg",
"index": "sealvqa_spatial_58",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the oven to the left or right of the cup?"
},
{
"from": "gpt",
"value": "The oven is to the right of the cup."
},
{
"from": "human",
"value": "Is the oven above or below the cup?"
},
{
"from": "gpt",
"value": "The oven is below the cup."
}
],
"question": "Is the oven above or below the cup?",
"question_id": "58",
"search": 1,
"target_instances": [
{
"bbox": [
239.26,
142.58,
15.99,
19.19
],
"name": "cup"
},
{
"bbox": [
376.22,
232.43,
37.83,
105.95
],
"name": "oven"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the oven above or below the cup?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The cup is above the orange."
}
|
{
"answer": "The cup is above the orange.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000092998.jpg",
"index": "sealvqa_spatial_59",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the orange on the left or right side of the cup?"
},
{
"from": "gpt",
"value": "The orange is on the left side of the cup."
},
{
"from": "human",
"value": "Is the cup above or below the orange?"
},
{
"from": "gpt",
"value": "The cup is above the orange."
}
],
"question": "Is the cup above or below the orange?",
"question_id": "59",
"search": 1,
"target_instances": [
{
"bbox": [
239.26,
142.58,
15.99,
19.19
],
"name": "cup"
},
{
"bbox": [
185.5,
259.23,
9.58,
11.25
],
"name": "orange"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the cup above or below the orange?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the oven is on the right side of the orange."
}
|
{
"answer": "Yes, the oven is on the right side of the orange.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000092998.jpg",
"index": "sealvqa_spatial_60",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the oven on the right side of the orange?"
},
{
"from": "gpt",
"value": "Yes, the oven is on the right side of the orange."
}
],
"question": "the object; the object.\nIs the oven on the right side of the orange?",
"question_id": "60",
"search": 1,
"target_instances": [
{
"bbox": [
185.5,
259.23,
9.58,
11.25
],
"name": "orange"
},
{
"bbox": [
376.22,
232.43,
37.83,
105.95
],
"name": "oven"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the oven on the right side of the orange?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The chair is below the sink."
}
|
{
"answer": "The chair is below the sink.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000093959.jpg",
"index": "sealvqa_spatial_61",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the chair on the left side of the sink?"
},
{
"from": "gpt",
"value": "No, the chair is not on the left side of the sink."
},
{
"from": "human",
"value": "Is the chair above or below the sink?"
},
{
"from": "gpt",
"value": "The chair is below the sink."
}
],
"question": "Is the chair above or below the sink?",
"question_id": "61",
"search": 1,
"target_instances": [
{
"bbox": [
116.37,
313.13,
169.27,
31.53
],
"name": "sink"
},
{
"bbox": [
397.37,
563.14,
79.93,
76.86
],
"name": "chair"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the chair above or below the sink?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the sink is not to the right of the toaster."
}
|
{
"answer": "No, the sink is not to the right of the toaster.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000093959.jpg",
"index": "sealvqa_spatial_62",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the sink to the right of the toaster?"
},
{
"from": "gpt",
"value": "No, the sink is not to the right of the toaster."
}
],
"question": "the object; the object.\nIs the sink to the right of the toaster?",
"question_id": "62",
"search": 1,
"target_instances": [
{
"bbox": [
381.31,
329.13,
58.51,
57.83
],
"name": "toaster"
},
{
"bbox": [
116.37,
313.13,
169.27,
31.53
],
"name": "sink"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the sink to the right of the toaster?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the bowl is on the right side of the wine glass."
}
|
{
"answer": "Yes, the bowl is on the right side of the wine glass.",
"conversation_turns": 2,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000077102.jpg",
"index": "sealvqa_spatial_63",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the bowl on the right side of the wine glass?"
},
{
"from": "gpt",
"value": "Yes, the bowl is on the right side of the wine glass."
}
],
"question": "the object; the object.\nIs the bowl on the right side of the wine glass?",
"question_id": "63",
"search": 1,
"target_instances": [
{
"bbox": [
82.95,
293.52,
82.28,
30.66
],
"name": "bowl"
},
{
"bbox": [
39.48,
260.01,
31.08,
62.5
],
"name": "wine glass"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nthe object; the object.\nIs the bowl on the right side of the wine glass?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The wine glass is below the bottle."
}
|
{
"answer": "The wine glass is below the bottle.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000077102.jpg",
"index": "sealvqa_spatial_64",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the wine glass on the left side of the bottle?"
},
{
"from": "gpt",
"value": "Yes, the wine glass is on the left side of the bottle."
},
{
"from": "human",
"value": "Is the wine glass above or below the bottle?"
},
{
"from": "gpt",
"value": "The wine glass is below the bottle."
}
],
"question": "Is the wine glass above or below the bottle?",
"question_id": "64",
"search": 1,
"target_instances": [
{
"bbox": [
73.55,
177.5,
26.9,
81.43
],
"name": "bottle"
},
{
"bbox": [
39.48,
260.01,
31.08,
62.5
],
"name": "wine glass"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the wine glass above or below the bottle?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "No, the bottle is not above the scissors."
}
|
{
"answer": "No, the bottle is not above the scissors.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000087567.jpg",
"index": "sealvqa_spatial_65",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the scissors on the left or right side of the bottle?"
},
{
"from": "gpt",
"value": "The scissors is on the right side of the bottle."
},
{
"from": "human",
"value": "Is the bottle above the scissors?"
},
{
"from": "gpt",
"value": "No, the bottle is not above the scissors."
}
],
"question": "Is the bottle above the scissors?",
"question_id": "65",
"search": 1,
"target_instances": [
{
"bbox": [
261.59,
159.02,
11.9,
45.68
],
"name": "scissors"
},
{
"bbox": [
9.71,
212.13,
55.48,
40.56
],
"name": "bottle"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the bottle above the scissors?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "The oven is below the wine glass."
}
|
{
"answer": "The oven is below the wine glass.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000293611.jpg",
"index": "sealvqa_spatial_66",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the wine glass on the right side of the oven?"
},
{
"from": "gpt",
"value": "Yes, the wine glass is on the right side of the oven."
},
{
"from": "human",
"value": "Is the oven above or below the wine glass?"
},
{
"from": "gpt",
"value": "The oven is below the wine glass."
}
],
"question": "Is the oven above or below the wine glass?",
"question_id": "66",
"search": 1,
"target_instances": [
{
"bbox": [
0,
312.95,
41.26,
103.77
],
"name": "oven"
},
{
"bbox": [
338.67,
228.47,
30.89,
20.17
],
"name": "wine glass"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the oven above or below the wine glass?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
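The `ground_truth` strings in these rows are consistent with a simple box-center comparison in image coordinates, where y grows downward. A sketch of that check, verified against the oven and wine glass boxes in the row above:

```python
def center(bbox_xywh):
    x, y, w, h = bbox_xywh
    return (x + w / 2.0, y + h / 2.0)

def vertical_relation(a_xywh, b_xywh):
    """'above' if a's center is higher (smaller y) than b's, else 'below'.
    Image coordinates grow downward, so smaller y means higher up."""
    return "above" if center(a_xywh)[1] < center(b_xywh)[1] else "below"

oven = [0, 312.95, 41.26, 103.77]            # from sealvqa_spatial_66 above
wine_glass = [338.67, 228.47, 30.89, 20.17]
print(vertical_relation(oven, wine_glass))   # -> "below", matching the ground truth
```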
|
sealvqa_spatial
|
{
"ground_truth": "No, the potted plant is not below the wine glass."
}
|
{
"answer": "No, the potted plant is not below the wine glass.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000293611.jpg",
"index": "sealvqa_spatial_67",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the potted plant to the left or right of the wine glass?"
},
{
"from": "gpt",
"value": "The potted plant is to the left of the wine glass."
},
{
"from": "human",
"value": "Is the potted plant below the wine glass?"
},
{
"from": "gpt",
"value": "No, the potted plant is not below the wine glass."
}
],
"question": "Is the potted plant below the wine glass?",
"question_id": "67",
"search": 1,
"target_instances": [
{
"bbox": [
293.71,
96.17,
36.62,
50.3
],
"name": "potted plant"
},
{
"bbox": [
338.67,
228.47,
30.89,
20.17
],
"name": "wine glass"
}
]
}
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the potted plant below the wine glass?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
sealvqa_spatial_spatial
|
visual_toolbox_v2
|
|
sealvqa_spatial
|
{
"ground_truth": "Yes, the oven is below the potted plant."
}
|
{
"answer": "Yes, the oven is below the potted plant.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000293611.jpg",
"index": "sealvqa_spatial_68",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the oven to the right of the potted plant?"
},
{
"from": "gpt",
"value": "No, the oven is not to the right of the potted plant."
},
{
"from": "human",
"value": "Is the oven below the potted plant?"
},
{
"from": "gpt",
"value": "Yes, the oven is below the potted plant."
}
],
"question": "Is the oven below the potted plant?",
"question_id": "68",
"search": 1,
"target_instances": [
{
"bbox": [
293.71,
96.17,
36.62,
50.3
],
"name": "potted plant"
},
{
"bbox": [
0,
312.95,
41.26,
103.77
],
"name": "oven"
}
]
}
prompt:
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the oven below the potted plant?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
]
ability: sealvqa_spatial_spatial
env_name: visual_toolbox_v2

---

data_source: sealvqa_spatial
reward_model:
{
"ground_truth": "Yes, the potted plant is below the orange."
}
extra_info:
{
"answer": "Yes, the potted plant is below the orange.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000107516.jpg",
"index": "sealvqa_spatial_69",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the potted plant on the left or right side of the orange?"
},
{
"from": "gpt",
"value": "The potted plant is on the right side of the orange."
},
{
"from": "human",
"value": "Is the potted plant below the orange?"
},
{
"from": "gpt",
"value": "Yes, the potted plant is below the orange."
}
],
"question": "Is the potted plant below the orange?",
"question_id": "69",
"search": 1,
"target_instances": [
{
"bbox": [
290.6,
315.52,
34.97,
42.94
],
"name": "potted plant"
},
{
"bbox": [
134.79,
278.62,
31.51,
9.43
],
"name": "orange"
}
]
}
prompt:
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the potted plant below the orange?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
]
ability: sealvqa_spatial_spatial
env_name: visual_toolbox_v2

---

data_source: sealvqa_spatial
reward_model:
{
"ground_truth": "The sink is below the potted plant."
}
extra_info:
{
"answer": "The sink is below the potted plant.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000107516.jpg",
"index": "sealvqa_spatial_70",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the sink on the left or right side of the potted plant?"
},
{
"from": "gpt",
"value": "The sink is on the left side of the potted plant."
},
{
"from": "human",
"value": "Is the sink above or below the potted plant?"
},
{
"from": "gpt",
"value": "The sink is below the potted plant."
}
],
"question": "Is the sink above or below the potted plant?",
"question_id": "70",
"search": 1,
"target_instances": [
{
"bbox": [
290.6,
315.52,
34.97,
42.94
],
"name": "potted plant"
},
{
"bbox": [
118.87,
403.14,
130.62,
40.82
],
"name": "sink"
}
]
}
prompt:
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the sink above or below the potted plant?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
]
ability: sealvqa_spatial_spatial
env_name: visual_toolbox_v2

---

data_source: sealvqa_spatial
reward_model:
{
"ground_truth": "No, the sink is not below the vase."
}
extra_info:
{
"answer": "No, the sink is not below the vase.",
"conversation_turns": 4,
"data_split": "train",
"dataset_type": "spatial_relation",
"has_targets": true,
"image_path": "coco2017/train2017/000000200116.jpg",
"index": "sealvqa_spatial_71",
"original_conversations": [
{
"from": "human",
"value": "<image>\nAdditional visual information to focus on: <object>; <object>.\nIs the vase to the right of the sink?"
},
{
"from": "gpt",
"value": "Yes, the vase is to the right of the sink."
},
{
"from": "human",
"value": "Is the sink below the vase?"
},
{
"from": "gpt",
"value": "No, the sink is not below the vase."
}
],
"question": "Is the sink below the vase?",
"question_id": "71",
"search": 1,
"target_instances": [
{
"bbox": [
539.36,
335.73,
39.57,
47.02
],
"name": "vase"
},
{
"bbox": [
227.25,
234.09,
51.91,
9.68
],
"name": "sink"
}
]
}
prompt:
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "<image>\nIs the sink below the vase?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
]
ability: sealvqa_spatial_spatial
env_name: visual_toolbox_v2
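Every prompt in these rows ships the same `image_zoom_in_tool` schema, but the `visual_toolbox_v2` environment that actually executes the call is not part of this preview. As a rough sketch of the semantics the schema implies — cropping the region `[x1, y1, x2, y2]` out of the row's image — the following is an illustrative stand-in, not the dataset's real tool; the clamping and error behavior are assumptions:

```python
# Illustrative stand-in for the image_zoom_in_tool described in the system
# prompts above. The real visual_toolbox_v2 backend is not shown in this
# preview, so bounds clamping and the error path here are assumptions.
from PIL import Image


def image_zoom_in_tool(image: Image.Image, bbox_2d, label=None) -> Image.Image:
    """Crop [x1, y1, x2, y2] out of `image`; `label` is informational only."""
    x1, y1, x2, y2 = bbox_2d
    # Clamp to the image bounds so a malformed model-emitted bbox
    # cannot push the crop outside the image.
    x1, y1 = max(0, int(x1)), max(0, int(y1))
    x2, y2 = min(image.width, int(x2)), min(image.height, int(y2))
    if x2 <= x1 or y2 <= y1:
        raise ValueError(f"empty crop region for bbox_2d={bbox_2d}")
    return image.crop((x1, y1, x2, y2))
```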
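Note that the two bbox conventions in each row differ: the `target_instances` entries store COCO-style `[x, y, width, height]` boxes, while the tool schema above expects corner form `[x1, y1, x2, y2]`. A minimal consumption sketch, assuming the dataset is loadable through the `datasets` library under some hub id (the id below is a placeholder) and that `extra_info` materializes as a Python dict:

```python
# Minimal sketch of reading one row and converting its ground-truth boxes.
# Assumptions: "your-org/sealvqa_spatial" is a placeholder repository id, and
# extra_info arrives as a dict (if it is stored as a JSON string, apply
# json.loads to it first).
from datasets import load_dataset

ds = load_dataset("your-org/sealvqa_spatial", split="train")
row = ds[0]

for inst in row["extra_info"]["target_instances"]:
    x, y, w, h = inst["bbox"]       # COCO-style [x, y, width, height]
    bbox_2d = [x, y, x + w, y + h]  # corner form the tool schema expects
    print(inst["name"], bbox_2d)
```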