JadeRay-42 committed on
Commit ee7da17 · verified · 1 Parent(s): 3e81f45

Update README.md

Files changed (1): README.md (+195, -0)
README.md CHANGED
@@ -1,3 +1,198 @@
---
license: mit
task_categories:
- visual-question-answering
language:
- en
---

# VLM 3TP Data Processing

This document outlines the steps for processing VLM 3TP data, focusing on datasets with ground truth annotations.

## Table of Contents

- [VLM 3TP Data Processing](#vlm-3tp-data-processing)
  - [Table of Contents](#table-of-contents)
  - [Quick Start: Mono3DRefer Preprocessing Workflow](#quick-start-mono3drefer-preprocessing-workflow)
  - [1. Datasets with Ground Truth Annotations](#1-datasets-with-ground-truth-annotations)
    - [Downloading Raw Data](#downloading-raw-data)
  - [2. Data Preprocessing Details](#2-data-preprocessing-details)
    - [Input Data Requirements](#input-data-requirements)
    - [Preprocessing Pipeline](#preprocessing-pipeline)
    - [Output Metadata Format](#output-metadata-format)
  - [3. Downstream Task Generation (QA)](#3-downstream-task-generation-qa)
    - [VSI-Bench Task Details](#vsi-bench-task-details)
      - [3D Fundamental Tasks (VSI-Bench)](#3d-fundamental-tasks-vsi-bench)
      - [Metric Estimation Tasks (VSI-Bench)](#metric-estimation-tasks-vsi-bench)
  - [4. Upload Processed Data to Hugging Face Hub](#4-upload-processed-data-to-hugging-face-hub)
    - [Prerequisites](#prerequisites)
    - [Usage](#usage)

## Quick Start: Mono3DRefer Preprocessing Workflow

We provide a complete, end-to-end workflow for processing the Mono3DRefer dataset, from raw data to the final QA pairs, including detailed steps and command-line examples for each stage of the process. For the full guide, please refer to the documentation at [`src/metadata_generation/Mono3DRefer/README.md`](src/metadata_generation/Mono3DRefer/README.md).

## 1. Datasets with Ground Truth Annotations

### Downloading Raw Data

Follow the instructions from the respective repositories to download the raw datasets:

- **Mono3DRefer**: Download data to `data/Mono3DRefer`. Follow instructions at [Mono3DVG](https://github.com/ZhanYang-nwpu/Mono3DVG).

## 2. Data Preprocessing Details

This section outlines the general pipeline for preprocessing 3D scene data for downstream tasks. The goal is to extract structured metadata from raw inputs, which can then be used to generate diverse question-answering (QA) datasets.

### Input Data Requirements

The preprocessing pipeline requires the following types of data for each scene:

1. **Calibration Data**: The camera intrinsic parameters of the scene (e.g., from `.txt` files), typically containing the focal lengths `fx`, `fy` and the principal point `cx`, `cy` (see the sketch after this list).
2. **Color Images**: RGB images of the scene (e.g., `.jpg` or `.png` files).

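For illustration, here is a minimal sketch of reading such a calibration file. It assumes a KITTI-style `.txt` with a `P2:` line holding a flattened 3 x 4 projection matrix; the key name and file path are assumptions, not guaranteed by this repository.

```python
import numpy as np

def load_intrinsics(calib_path: str) -> np.ndarray:
    """Read a KITTI-style calibration .txt and return the 3 x 4 projection matrix
    stored under the (assumed) 'P2:' key."""
    with open(calib_path) as f:
        for line in f:
            if line.startswith("P2:"):
                values = [float(v) for v in line.split()[1:]]
                return np.array(values).reshape(3, 4)
    raise ValueError(f"No P2 entry found in {calib_path}")

# Example (hypothetical path): fx = P[0, 0], fy = P[1, 1], cx = P[0, 2], cy = P[1, 2]
P = load_intrinsics("data/Mono3DRefer/calib/000000.txt")
fx, fy, cx, cy = P[0, 0], P[1, 1], P[0, 2], P[1, 2]
```
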
### Preprocessing Pipeline

The core preprocessing involves generating metadata files:

1. **Metadata Generation:**
   - Processes the sampled frame data (color) and the camera calibration data.
   - Extracts object annotations such as:
     - Category.
     - 2D bounding boxes (xmin, ymin, xmax, ymax) in image coordinates.
     - 3D size (width, height, length) in real-world units (e.g., meters).
     - 3D location (x, y, z) in real-world coordinates.
     - rotation_y and viewing angle.
     - Occlusion and truncation levels.
   - Typically saves this information in a structured format like JSON (e.g., `metadata.json`).

### Output Metadata Format

The specific structure of the output metadata JSON files, based on the current implementation (e.g., `metadata.py`), is as follows:

1. **`metadata.json`:**
   A JSON file containing a dictionary where keys are scene IDs (e.g., `"000000"`). Each scene ID maps to a dictionary with the following structure:

```json
{
  "scene_id": {
    "camera_intrinsics": [                       // 3 x 4 matrix
      [fx, 0, cx, Tx],
      [0, fy, cy, Ty],
      [0, 0, 1, 0]
    ],
    "frames": [
      {
        "frame_id": 0,                           // Integer frame index/number from sampled data
        "file_path_color": "images/000000.png",  // Relative path to color image within processed dir
        "objects": [
          {
            "instance_id": 0,                    // Unique instance ID for the object
            "category": "Car",                   // Object category
            "bbox_2d": [xmin, ymin, xmax, ymax], // 2D bounding box in image coordinates
            "size_3d": [height, width, length],  // 3D size in meters
            "location_3d": [x, y, z],            // 3D location of the center of the object's upper plane in meters
            "rotation_y": rotation_y,            // Rotation around the Y-axis in radians; facing the camera lies in [0, pi] rather than [0, -pi]
            "angle": angle,                      // Viewing angle in radians
            "occlusion": occlusion_level,        // Occlusion level (0-3)
            "truncation": truncation_level,      // Truncation level (0-1)
            "center_3d": [x_center, y_center, z_center],      // 3D center coordinates in meters
            "center_3d_proj_2d": [x_center_2d, y_center_2d],  // 2D projection of the 3D center in image coordinates
            "depth": depth_value,                // Object's depth from the camera in meters
            "corners_3d": [                      // 8 corners of the 3D bounding box in meters
              [x1, y1, z1],
              [x2, y2, z2],
              ...
              [x8, y8, z8]
            ],
            "corners_3d_proj_2d": [              // 2D projections of the 8 corners in image coordinates
              [x1_2d, y1_2d],
              [x2_2d, y2_2d],
              ...
              [x8_2d, y8_2d]
            ],
            "descriptions": [                    // Optional list of textual descriptions for the object
              "This vehicle is ...",
              "The vehicle ..."
            ]
          },
          ...                                    // other objects
        ],
      },
      ...                                        // other frames
    ],
  },
  ...                                            // other scenes
}
```

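As a quick sanity check on this format, the following sketch loads `metadata.json` (the path is a placeholder) and re-projects each object's `center_3d` through the 3 x 4 `camera_intrinsics` matrix, which should reproduce the stored `center_3d_proj_2d` values.

```python
import json
import numpy as np

# Sketch: load metadata.json and re-project each object's 3D center through the
# 3 x 4 camera_intrinsics matrix; the result should match center_3d_proj_2d.
with open("data/Mono3DRefer/metadata.json") as f:  # hypothetical location
    metadata = json.load(f)

for scene_id, scene in metadata.items():
    P = np.array(scene["camera_intrinsics"])               # 3 x 4 projection matrix
    for frame in scene["frames"]:
        for obj in frame["objects"]:
            point_h = np.append(np.array(obj["center_3d"]), 1.0)  # homogeneous 3D point
            u, v, w = P @ point_h                           # project onto the image plane
            proj_2d = [u / w, v / w]                        # perspective divide
            # proj_2d should agree with obj["center_3d_proj_2d"] up to rounding,
            # and w should be close to obj["depth"].
            print(scene_id, frame["frame_id"], obj["category"], [round(c, 1) for c in proj_2d])
```
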
## 3. Downstream Task Generation (QA)

This section details the Question-Answering (QA) tasks. They are inspired by VSI-Bench, a benchmark for visual-spatial intelligence from "Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces". We extend this kind of spatial intelligence to a more challenging setting: monocular visual grounding. Without external spatial knowledge such as point clouds or depth maps, we assume that advanced vision-language models, which already understand 2D spatial information, can implicitly develop the ability to perceive 3D spatial cues from a monocular image and its calibration data.

### VSI-Bench Task Details

**Task Summary Table (VSI-Bench)**

| Task Name           | Task Category     | Answer Type     |
| :------------------ | :---------------- | :-------------- |
| Object Count        | Fundamental       | Multiple Choice |
| Ind Object Size     | Fundamental       | Multiple Choice |
| Ind Object Depth    | Fundamental       | Multiple Choice |
| Ind Object Rotation | Fundamental       | Multiple Choice |
| Ind Object 3D BBox  | Fundamental       | JSON            |
| Object 3D BBox      | Fundamental       | JSON            |
| Ind Object Detect   | Metric Estimation | JSON            |
| Object 3D Detect    | Metric Estimation | JSON            |

#### 3D Fundamental Tasks (VSI-Bench)

These tasks extend advanced vision-language models' capabilities to understanding the basic 3D spatial properties of objects in a scene.

- **Indicated Object 3D Attributes:** Asks for the basic 3D attributes of an object indicated by a bounding-box range, a point coordinate, or a textual description.
  - **QA Generation (`get_ind_obj_size_qa.py`, `...depth_qa.py`, `...ry_qa.py`, `...3dBbox_qa.py`):** These scripts generate multiple-choice questions about the 3D size, depth, and rotation_y of an indicated object, as well as questions asking for the object's 3D bounding box coordinates in JSON format.
  - **Indicator Types:** The indicated object is specified in one of three ways: its 2D bounding-box range, its center point coordinate, or a textual description.
  - **Answer Types:** Answers come in two formats: multiple choice (for size, depth, and rotation_y) and JSON (for 3D bounding box coordinates).
- **Object Count:** Asks for the total number of instances of a specific object category (e.g., "How many cars are there?").
  - **QA Generation (`get_obj_count_qa.py`):** Generates multiple-choice questions based on `metadata.json`. It iterates through the `object_counts` for each object type in the scene and, for categories with more than one instance, formulates a question. The correct answer is the actual count from the metadata; the other three distractor options are generated by adding small random offsets to the correct answer, creating a four-choice question (a simplified sketch follows this list).
- **Object 3D BBox:** Asks for the 3D coordinates of the eight bounding-box corners for a specific category of objects, probing the model's 3D spatial understanding.
  - **QA Generation (`get_obj_3dBbox_qa.py`):** For each question, this task generates a random combination of the categories present in the scene. The answer is provided in JSON format, detailing the 3D coordinates of the eight corners of the bounding boxes in image coordinates.

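As a concrete illustration of the count-question scheme above, here is a minimal sketch. The offset range and question template are assumptions for illustration only; the actual logic lives in `get_obj_count_qa.py`.

```python
import random
import string

def make_count_mcq(category: str, count: int, num_options: int = 4) -> dict:
    """Sketch of a four-choice count question: the true count plus distractors
    produced by small random offsets (kept positive and unique)."""
    options = {count}
    while len(options) < num_options:
        candidate = count + random.choice([-3, -2, -1, 1, 2, 3])  # assumed offset range
        if candidate > 0:
            options.add(candidate)
    options = list(options)
    random.shuffle(options)
    letters = string.ascii_uppercase[:num_options]
    return {
        "question": f"How many {category.lower()}s are there in the scene?",  # assumed template
        "choices": dict(zip(letters, options)),
        "answer": letters[options.index(count)],
    }

# Usage: make_count_mcq("Car", 5) might return
# {"question": "How many cars are there in the scene?",
#  "choices": {"A": 4, "B": 5, "C": 7, "D": 8}, "answer": "B"}
```
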
#### Metric Estimation Tasks (VSI-Bench)

These tasks require estimating quantitative metrics based on existing 3D object detection benchmarks.

- **Indicated Object 3D Detection:** Asks for the indicated object's 3D size and position, evaluating 3D grounding performance with a 3D bounding-box IoU metric (a simplified IoU sketch follows this list).
  - **QA Generation (`get_ind_obj_3dIou_qa.py`):** Generates questions asking for the 3D size and position of an indicated object in JSON format.
  - **Indicator Types:** The indicated object is specified in one of three ways: its 2D bounding-box range, its center point coordinate, or a textual description.
  - **Answer Types:** The answers are provided in JSON format.
  - **Ambiguity Filtering:** To ensure clarity, the script excludes cases where objects are inappropriately spaced (too close or too far), based on their occlusion and truncation levels.
- **Multiple Objects 3D Detection:** Asks for all 3D attributes (category, angle, 2D bbox, 3D size, 3D location, rotation_y) of all objects in the scene, evaluating 3D detection performance with the KITTI 3D AP metric.
  - **QA Generation (`get_obj_3dDetect_qa.py`):** (Implementation in progress)

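To make the IoU-based evaluation concrete, here is a deliberately simplified sketch that replaces each box by the axis-aligned box enclosing its `corners_3d`. The actual metric compares the rotated boxes, so treat this only as a rough approximation.

```python
import numpy as np

def axis_aligned_iou_3d(corners_a, corners_b) -> float:
    """Simplified 3D IoU: each box is approximated by the axis-aligned box that
    encloses its 8 corners_3d; rotation_y is therefore ignored."""
    a, b = np.asarray(corners_a), np.asarray(corners_b)      # (8, 3) corner arrays
    min_a, max_a = a.min(axis=0), a.max(axis=0)
    min_b, max_b = b.min(axis=0), b.max(axis=0)
    overlap = np.clip(np.minimum(max_a, max_b) - np.maximum(min_a, min_b), 0, None)
    intersection = overlap.prod()
    union = (max_a - min_a).prod() + (max_b - min_b).prod() - intersection
    return float(intersection / union) if union > 0 else 0.0
```
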
## 4. Upload Processed Data to Hugging Face Hub

Because different tasks produce different answer formats (multiple choice, JSON, etc.), we provide the script `upload_datasets.py` to convert the generated QA datasets into a conversational format and upload them to the Hugging Face Hub.

### Prerequisites

Ensure you have the required libraries installed:

```bash
pip install huggingface_hub
```

### Usage

- Only convert the generated QA JSON files to the conversational format and save them to a specified output directory:

```bash
python upload_datasets.py --input_dir data/qa_output --split_type val --output_dir data/vlm_3tp_data
```

- Convert and upload the datasets to the Hugging Face Hub (make sure to set your repository name):

```bash
python upload_datasets.py --input_dir data/qa_output --split_type val --output_dir data/vlm_3tp_data --repo_name your_hf_repo_name
```
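
For reference, the upload step can also be performed directly with `huggingface_hub`. The sketch below assumes the converted files already sit in `data/vlm_3tp_data`; it is an illustration of the upload pattern, not a description of the actual code inside `upload_datasets.py`.

```python
from huggingface_hub import HfApi

# Minimal sketch of the upload step, assuming the conversational JSON files
# produced by upload_datasets.py already live in data/vlm_3tp_data.
api = HfApi()
repo_id = "your_hf_repo_name"  # same value as --repo_name above

api.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)
api.upload_folder(
    folder_path="data/vlm_3tp_data",
    repo_id=repo_id,
    repo_type="dataset",
    commit_message="Add VLM 3TP QA data (val split)",
)
```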