Commit 8d7caf4 (verified) · 1 Parent(s): a1c6f00
rassulya committed

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +143 -31
README.md CHANGED
@@ -3,34 +3,146 @@
  **Repository:** [https://github.com/IS2AI/COHI-O365](https://github.com/IS2AI/COHI-O365)
 
 
- **Summary:**
-
- This work introduces COHI-O365, a benchmark dataset for object detection in hemispherical/fisheye images, designed for field-of-view invariant applications. It also presents RMFV365, a large synthetic fisheye dataset created to train object detection models for COHI-O365. The COHI-O365 dataset comprises 1,000 real fisheye images from the Objects365 dataset, featuring 74 classes and an average of 20,798 object instances per image. These images were captured using a hemispherical camera and manually annotated. The RMFV365 dataset, derived from Objects365, contains 5.1 million fisheye images created through a non-linear mapping. YOLOv7 models were trained on these datasets and evaluated on COHI-O365, demonstrating improved performance compared to other models.
-
-
- **Abstract Summary:** The paper introduces COHI-O365, a real-world benchmark dataset for fisheye object detection, and RMFV365, a large synthetic fisheye dataset for training models. COHI-O365 contains 1000 real images with 74 classes and extensive annotations, while RMFV365 provides 5.1 million synthetic fisheye images derived from Objects365. Experiments using YOLOv7 showcase the effectiveness of RMFV365 in training models that perform well on COHI-O365.
-
-
- **Datasets:**
-
- The COHI-O365 dataset consists of 1,000 real fisheye images with 74 classes and an average of 20,798 object instances per image. The RMFV365 dataset is a synthetic dataset containing 5.1 million fisheye images. Both datasets are used to train and evaluate object detection models.
-
- **Sample Images:**
-
- *(Images would be included here from the provided links if this were a true Hugging Face card. The markdown would point to the images stored within the Hugging Face dataset.)*
-
- **Results:**
-
- | S/N | Model | Objects365 mAP50 | Objects365 mAP50:95 | RMFV365-v1 mAP50 | RMFV365-v1 mAP50:95 | RMFV365 mAP50 | RMFV365 mAP50:95 | COHI-365 mAP50 | COHI-365 mAP50:95 |
- |-----|-------------|-------------------|----------------------|--------------------|----------------------|-----------------|-----------------|-----------------|--------------------|
- | 1 | FPN | 35.5 | 22.5 | N/A | N/A | N/A | N/A | N/A | N/A |
- | 2 | RetinaNet | 27.3 | 18.7 | N/A | N/A | N/A | N/A | N/A | N/A |
- | 3 | YOLOv5m | 27.3 | 18.8 | 22.6 | 14.1 | 18.7 | 10.1 | 40.4 | 28.0 |
- | 4 | YOLOv7-0 | 34.97 | 24.57 | 29.1 | 18.3 | 24.2 | 13.0 | 47.5 | 33.5 |
- | 5 | YOLOv7-T1 | 34.3 | 24.0 | 32.7 | 22.7 | 32.0 | 22.0 | 49.1 | 34.6 |
- | 6 | YOLOv7-T2 | 34 | 23.1 | 32.9 | 23 | 33 | 22.8 | 49.9 | 34.9 |
-
- **Table:** Object recognition results on Objects365, RMFV365-v1, RMFV365, and COHI-365 Testing Sets
-
-
- **Citation:** *(Citation information missing from provided text)*
+ This repository introduces COHI-O365, a benchmark dataset for object detection in hemispherical/fisheye images, designed for field-of-view invariant applications. It also includes RMFV365, a large-scale synthetic fisheye dataset used for training. Pre-trained YOLOv7 models, trained on various combinations of these datasets, are also provided.
+
+
+ ## Dataset Summary
+
+ COHI-O365 is a real-world dataset of 1,000 fisheye images covering 74 object classes drawn from the Objects365 label set. The images were captured with an ELP-USB8MP02G-L180 hemispherical camera (2448x3264 resolution) and manually annotated with axis-aligned bounding boxes, for a total of 20,798 object instances (roughly 21 per image on average).
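+
+ If the annotations are distributed in COCO-style JSON (an assumption, not a confirmed detail of this release; the file name below is a hypothetical placeholder), a minimal sketch for inspecting them could look like this:
+
+ ```python
+ import json
+ from collections import Counter
+
+ # Hypothetical file name: substitute the actual annotation file
+ # shipped with the COHI-O365 release.
+ with open("cohi_o365_annotations.json") as f:
+     coco = json.load(f)
+
+ # Map category ids to names, then count instances per class.
+ id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
+ per_class = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])
+
+ print(f'{len(coco["images"])} images, {len(coco["annotations"])} instances')
+ for name, count in per_class.most_common(5):
+     print(f"{name}: {count}")
+ ```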
+
+
+ RMFV365 is a synthetic fisheye dataset created by applying a non-linear mapping to the Objects365 dataset. It comprises 5.1 million images offering a diverse range of perspectives and distortions, useful for training object detection models that are robust to fisheye geometry.
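+
+ As a concrete illustration, the sketch below applies one possible radial power-law warp (r_src = r_dst^n on normalized radii) with OpenCV; this is a hedged stand-in, not the exact lens- and camera-independent transformation used to build RMFV365:
+
+ ```python
+ import cv2
+ import numpy as np
+
+ def fisheye_warp(img: np.ndarray, n: float = 4.0) -> np.ndarray:
+     """Apply a simple radial (fisheye-style) warp to a rectilinear image.
+
+     Hypothetical mapping: a destination pixel at normalized radius r
+     samples the source at radius r ** n, compressing the periphery.
+     """
+     h, w = img.shape[:2]
+     cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
+     xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
+                          np.arange(h, dtype=np.float32))
+     nx, ny = (xs - cx) / cx, (ys - cy) / cy           # normalized to [-1, 1]
+     r = np.maximum(np.sqrt(nx ** 2 + ny ** 2), 1e-8)  # guard r = 0 (matters when n < 1)
+     scale = r ** (n - 1)                              # r_src / r_dst
+     map_x = (nx * scale * cx + cx).astype(np.float32)
+     map_y = (ny * scale * cy + cy).astype(np.float32)
+     return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
+                      borderMode=cv2.BORDER_CONSTANT)
+ ```
+
+ Whatever mapping is used, the bounding-box coordinates must be passed through the same function so that the labels stay aligned with the warped pixels.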
+
+
+ <img src="https://github.com/IS2AI/COHI-O365/blob/main/pictures/COHI-365 Sample Images.png" width="750">
+
+ *COHI-O365 Sample Images*
+
+
+ <img src="https://github.com/IS2AI/COHI-O365/blob/main/pictures/RMFV365 Sample Images.png" width="750">
+
+ *RMFV365 Sample Images*
+
+
+ ## Pre-trained Models and Results
+
+ Three YOLOv7 models were trained and evaluated (an example evaluation call follows the list):
+
+ * **YOLOv7-0:** Trained on Objects365.
+ * **YOLOv7-T1:** Trained on Objects365 and a variant of RMFV365 (RMFV365-v1) built with a lens- and camera-independent fisheye transformation (n=4).
+ * **YOLOv7-T2:** Trained on RMFV365.
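+
+ Evaluating one of these checkpoints with the standard YOLOv7 codebase would look roughly like the call below; this is a sketch under assumptions: the checkpoint and dataset-config names are hypothetical placeholders, and the flags assume YOLOv7's stock test.py:
+
+ ```python
+ import subprocess
+
+ # Hypothetical file names: substitute the actual checkpoint and the
+ # dataset YAML describing the COHI-O365 test split.
+ subprocess.run(
+     [
+         "python", "test.py",              # YOLOv7's evaluation script
+         "--weights", "yolov7-t2.pt",      # e.g. the YOLOv7-T2 checkpoint
+         "--data", "data/cohi_o365.yaml",  # points at COHI-O365 images/labels
+         "--img-size", "640",
+         "--conf-thres", "0.001",          # low threshold for mAP computation
+         "--iou-thres", "0.65",            # NMS IoU threshold
+     ],
+     check=True,
+ )
+ ```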
+
+
+ <img src="https://github.com/IS2AI/COHI-O365/blob/main/pictures/training_scheme.jpg" width="500">
+
+ *Model Training Scheme*
+
+
+ The models' performance, measured by mAP<sub>50</sub> and mAP<sub>50:95</sub>, is summarized below:
+
+ <table>
+ <thead>
+ <tr>
+ <th rowspan="3">S/N</th>
+ <th rowspan="3">Model</th>
+ <th colspan="8">Test Results (%)</th>
+ </tr>
+ <tr>
+ <th colspan="2">Objects365</th>
+ <th colspan="2">RMFV365-v1</th>
+ <th colspan="2">RMFV365</th>
+ <th colspan="2">COHI-O365</th>
+ </tr>
+ <tr>
+ <th>mAP<sub>50</sub></th>
+ <th>mAP<sub>50:95</sub></th>
+ <th>mAP<sub>50</sub></th>
+ <th>mAP<sub>50:95</sub></th>
+ <th>mAP<sub>50</sub></th>
+ <th>mAP<sub>50:95</sub></th>
+ <th>mAP<sub>50</sub></th>
+ <th>mAP<sub>50:95</sub></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>1</td>
+ <td>FPN</td>
+ <td><strong>35.5</strong></td>
+ <td>22.5</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ </tr>
+ <tr>
+ <td>2</td>
+ <td>RetinaNet</td>
+ <td>27.3</td>
+ <td>18.7</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ <td>N/A</td>
+ </tr>
+ <tr>
+ <td>3</td>
+ <td>YOLOv5m</td>
+ <td>27.3</td>
+ <td>18.8</td>
+ <td>22.6</td>
+ <td>14.1</td>
+ <td>18.7</td>
+ <td>10.1</td>
+ <td>40.4</td>
+ <td>28.0</td>
+ </tr>
+ <tr>
+ <td>4</td>
+ <td>YOLOv7-0</td>
+ <td>34.97</td>
+ <td><strong>24.57</strong></td>
+ <td>29.1</td>
+ <td>18.3</td>
+ <td>24.2</td>
+ <td>13.0</td>
+ <td>47.5</td>
+ <td>33.5</td>
+ </tr>
+ <tr>
+ <td>5</td>
+ <td>YOLOv7-T1</td>
+ <td>34.3</td>
+ <td>24.0</td>
+ <td>32.7</td>
+ <td>22.7</td>
+ <td>32.0</td>
+ <td>22.0</td>
+ <td>49.1</td>
+ <td>34.6</td>
+ </tr>
+ <tr>
+ <td>6</td>
+ <td>YOLOv7-T2</td>
+ <td>34.0</td>
+ <td>23.1</td>
+ <td><strong>32.9</strong></td>
+ <td><strong>23.0</strong></td>
+ <td><strong>33.0</strong></td>
+ <td><strong>22.8</strong></td>
+ <td><strong>49.9</strong></td>
+ <td><strong>34.9</strong></td>
+ </tr>
+ </tbody>
+ </table>
+
+ **Table:** Object recognition results on the Objects365, RMFV365-v1, RMFV365, and COHI-O365 test sets
+
+
+ ## Citation
+
+ *(Citation information not provided in the source content)*