This page contains the data for the paper "OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding."
Homepage | Paper | Code | arXiv
Dataset Description
The `imgs` folder contains image data for 1,386 scenes. Each scene has its own subfolder, which stores the observations captured by the agent while exploring that scene.
`ost-bench.json` consists of 10k data samples, where each sample represents one round of Q&A (question and answer) and includes the new observations for that round. The structure of each sample (a dictionary) is as follows:
{
"scan_id" (str): Unique identifier for the scene scan,
"system_prompt" (str): Shared context/prompt for the multi-turn conversation,
"turn_id" (int): Index of the current turn in the dialogue,
"type" (str): Question subtype/category,
"origin_question" (str): Original question text,
"answer" (str): Ground-truth answer,
"option" (list[str]): Multiple-choice options,
"new_observations" (list[str]): Relative paths to new observation images (within `imgs` dir),
"user_message" (str): Formatted input prompt for the model,
}
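For reference, here is a minimal sketch of loading the annotation file and inspecting one sample. It assumes a local root directory (named `DATA_ROOT` below, an assumption) that contains `ost-bench.json` and the `imgs` folder, and that the JSON file is a top-level list of sample dictionaries:

```python
import json
from pathlib import Path

# Assumed local layout: a root directory containing ost-bench.json and the imgs/ folder.
DATA_ROOT = Path("OST-Bench")

with open(DATA_ROOT / "ost-bench.json", "r", encoding="utf-8") as f:
    samples = json.load(f)  # assumed: a list of per-turn sample dictionaries

sample = samples[0]
print(sample["scan_id"], sample["turn_id"], sample["type"])
print(sample["origin_question"], sample["option"], sample["answer"])

# new_observations holds paths relative to the imgs/ directory.
image_paths = [DATA_ROOT / "imgs" / p for p in sample["new_observations"]]
print(image_paths)
```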
Samples with the same `scan_id` belong to the same multi-turn conversation group. During model evaluation, each multi-turn conversation group is processed as a unit: the shared `system_prompt` is provided once, and new observations along with questions are fed in sequentially according to `turn_id`, as sketched below.
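A minimal sketch of this grouping and replay loop follows. `DATA_ROOT` and the loading step carry over the assumptions from the previous snippet, and `model_chat` is a hypothetical stand-in for whatever MLLM interface is being evaluated, not part of this dataset:

```python
import json
from collections import defaultdict
from pathlib import Path

DATA_ROOT = Path("OST-Bench")  # assumed local root, as above

with open(DATA_ROOT / "ost-bench.json", "r", encoding="utf-8") as f:
    samples = json.load(f)

def model_chat(messages):
    # Hypothetical placeholder: replace with a real multimodal model call.
    return "A"

# Group per-turn samples by scene, then replay each conversation in turn order.
groups = defaultdict(list)
for s in samples:
    groups[s["scan_id"]].append(s)

for scan_id, turns in groups.items():
    turns.sort(key=lambda t: t["turn_id"])
    # All turns in one group share the same system prompt.
    messages = [{"role": "system", "content": turns[0]["system_prompt"]}]
    for turn in turns:
        images = [str(DATA_ROOT / "imgs" / p) for p in turn["new_observations"]]
        messages.append({"role": "user", "content": turn["user_message"], "images": images})
        prediction = model_chat(messages)
        messages.append({"role": "assistant", "content": prediction})
```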
Evaluation Instructions
Please refer to our evaluation code for details.