This dataset contains benchmarks for open-world object part segmentation proposed in Find Any Part in 3D (ICCV 2025). It includes two human-annotated benchmarks: Objaverse-General (100 object categories) and ShapeNetPart-Objaverse (the same categories as ShapeNetPart, but with objects sourced from Objaverse to study distribution shift).
Usage
Inside both the objaverse-general and objaverse-shapenetpart directories, each object is stored in its own folder with the structure below (see the loading sketch after the tree):
{object_name}_{objaverse_uid}/
├── label_map.json   # Maps part IDs to natural language part names
├── labels.npy       # Per-point label assignment (0 means no label)
└── points5000.pcd   # Object point cloud with 5000 points
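A minimal loading sketch, assuming the layout above. The folder name is a hypothetical placeholder, and open3d is used here only as one common way to read .pcd files; any PCD reader works.

```python
import json
from pathlib import Path

import numpy as np
import open3d as o3d  # pip install open3d

# Hypothetical example folder; substitute a real {object_name}_{objaverse_uid} directory.
obj_dir = Path("objaverse-general/chair_0123456789abcdef")

# Point cloud with 5000 points -> (5000, 3) array of xyz coordinates
pcd = o3d.io.read_point_cloud(str(obj_dir / "points5000.pcd"))
points = np.asarray(pcd.points)

# Per-point part labels; 0 means the point has no label
labels = np.load(obj_dir / "labels.npy")

# Mapping from part ID to natural-language part name
with open(obj_dir / "label_map.json") as f:
    label_map = json.load(f)

# Count how many points belong to each named part
for part_id, part_name in label_map.items():
    mask = labels == int(part_id)
    print(f"{part_name}: {mask.sum()} points")
```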