rassulya committed on
Commit eef883d · verified · 1 Parent(s): f0ec404

Upload README.md with huggingface_hub

Files changed (1)

1. README.md +32 -139
README.md CHANGED
@@ -1,148 +1,41 @@
  # COHI-O365: A Benchmark Dataset for Fisheye Object Detection

- **Repository:** [https://github.com/IS2AI/COHI-O365](https://github.com/IS2AI/COHI-O365)
-
- This repository introduces COHI-O365, a benchmark dataset for object detection in hemispherical/fisheye images, designed for field-of-view invariant applications. It also includes the RMFV365 dataset, a large-scale synthetic fisheye dataset used for training. Pre-trained YOLOv7 models are provided, trained on various combinations of these datasets.
-
- ## Dataset Summary
-
- COHI-O365 is a real-world dataset containing 1,000 fisheye images of 74 classes, sampled from the Objects365 dataset. These images, captured using an ELP-USB8MP02G-L180 hemispherical camera (2448x3264 resolution), feature an average of 20,798 object instances per image and are annotated with axis-aligned bounding boxes.
-
- RMFV365 is a synthetic fisheye dataset created by applying a non-linear mapping to the Objects365 dataset. It comprises 5.1 million images offering a diverse range of perspectives and distortions, useful for training robust object detection models.
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/3v4VU8qDxSczHCFHIz9bj.png)
-
- *COHI-O365 Sample Images*
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/Sww7NkXYVnEiXgc4NQyr6.png)
-
- *RMFV365 Sample Images*
-
- ## Pre-trained Models and Results
-
- Three YOLOv7 models were trained and evaluated:
-
- * **YOLOv7-0:** Trained on Objects365.
- * **YOLOv7-T1:** Trained on Objects365 and a variant of RMFV365 (RMFV365-v1), using a lens- and camera-independent fisheye transformation (n=4).
- * **YOLOv7-T2:** Trained on RMFV365.
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/JMJCqaGedoORAURGclSgP.png)
-
- *Model Training Scheme*
-
- The models' performance, measured by mAP<sub>50</sub> and mAP<sub>50:95</sub>, is summarized below:
-
- <table>
-   <thead>
-     <tr><th rowspan="3">S/N</th><th rowspan="3">Model</th><th colspan="8">Test Results (%)</th></tr>
-     <tr><th colspan="2">Objects365</th><th colspan="2">RMFV365-v1</th><th colspan="2">RMFV365</th><th colspan="2">COHI-O365</th></tr>
-     <tr><th>mAP50</th><th>mAP50:95</th><th>mAP50</th><th>mAP50:95</th><th>mAP50</th><th>mAP50:95</th><th>mAP50</th><th>mAP50:95</th></tr>
-   </thead>
-   <tbody>
-     <tr><td>1</td><td>FPN</td><td><strong>35.5</strong></td><td>22.5</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr>
-     <tr><td>2</td><td>RetinaNet</td><td>27.3</td><td>18.7</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr>
-     <tr><td>3</td><td>YOLOv5m</td><td>27.3</td><td>18.8</td><td>22.6</td><td>14.1</td><td>18.7</td><td>10.1</td><td>40.4</td><td>28.0</td></tr>
-     <tr><td>4</td><td>YOLOv7-0</td><td>34.97</td><td><strong>24.57</strong></td><td>29.1</td><td>18.3</td><td>24.2</td><td>13.0</td><td>47.5</td><td>33.5</td></tr>
-     <tr><td>5</td><td>YOLOv7-T1</td><td>34.3</td><td>24.0</td><td>32.7</td><td>22.7</td><td>32.0</td><td>22.0</td><td>49.1</td><td>34.6</td></tr>
-     <tr><td>6</td><td>YOLOv7-T2</td><td>34.0</td><td>23.1</td><td><strong>32.9</strong></td><td><strong>23.0</strong></td><td><strong>33.0</strong></td><td><strong>22.8</strong></td><td><strong>49.9</strong></td><td><strong>34.9</strong></tr>
-   </tbody>
- </table>
-
- **Table:** Object recognition results on the Objects365, RMFV365-v1, RMFV365, and COHI-O365 testing sets
  # COHI-O365: A Benchmark Dataset for Fisheye Object Detection

+ ## Dataset Summary
+
+ This work introduces COHI-O365, a benchmark dataset for object detection in hemispherical/fisheye images, designed for field-of-view invariant applications. It complements RMFV365, a synthetic training dataset created by applying fisheye transformations to the Objects365 dataset. COHI-O365 contains 1,000 real fisheye images covering 74 classes, with an average of 20,798 object instances per image. The images were captured using an ELP-USB8MP02G-L180 hemispherical camera (2448x3264 pixels) and manually annotated with axis-aligned bounding boxes. RMFV365, used for model training, comprises 5.1 million fisheye images generated from Objects365. YOLOv7 models were trained on Objects365, RMFV365, and a variant (RMFV365-v1), and evaluated on COHI-O365.
 
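The card describes RMFV365 only as the result of a non-linear fisheye mapping applied to Objects365, and mentions an n=4 transformation for the RMFV365-v1 variant; the exact formula is not given here. As a rough illustration of this kind of radial remapping (the power-law profile and the role of `n` below are assumptions, not the authors' transform), a minimal NumPy sketch:

```python
import numpy as np

def fisheye_warp_coords(h, w, n=4):
    """Map each output pixel to a source pixel for a fisheye-style warp.

    The normalised output radius r in [0, 1] samples the source at
    r_src = r ** n, which magnifies the centre and compresses the
    periphery, roughly like a fisheye lens. Returns float coordinate
    grids suitable for bilinear sampling (e.g. with cv2.remap).
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dx, dy = (xs - cx) / cx, (ys - cy) / cy      # normalise to [-1, 1]
    r = np.sqrt(dx**2 + dy**2)
    theta = np.arctan2(dy, dx)
    r_src = np.clip(r, 0.0, 1.0) ** n            # non-linear radial profile
    src_x = cx + r_src * np.cos(theta) * cx
    src_y = cy + r_src * np.sin(theta) * cy
    return src_x, src_y
```

The returned coordinate grids can then be fed to an interpolation routine such as `cv2.remap` to produce the warped image.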
+ ## Dataset Contents
+
+ The dataset includes:
+
+ * **COHI-O365:** A benchmark testing dataset with 1,000 real fisheye images of 74 classes.
+ * **RMFV365:** A large-scale synthetic fisheye dataset derived from Objects365, containing 5.1 million images.
+
+ A visualization of sample images from both datasets is provided in the GitHub repository. A table detailing the number of bounding boxes per class in COHI-O365 is planned for future inclusion.
+
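The card states that COHI-O365 boxes are axis-aligned but does not pin down an on-disk annotation format. Assuming pixel-space corner boxes, a typical conversion to the normalised centre/width/height layout used by YOLO-family models (the format choice here is an assumption) could look like:

```python
def to_yolo(box, img_w=2448, img_h=3264):
    """Convert [x_min, y_min, x_max, y_max] in pixels to normalised
    [cx, cy, w, h], using the stated 2448x3264 capture resolution."""
    x0, y0, x1, y1 = box
    return [
        (x0 + x1) / 2 / img_w,  # box centre x in [0, 1]
        (y0 + y1) / 2 / img_h,  # box centre y in [0, 1]
        (x1 - x0) / img_w,      # normalised box width
        (y1 - y0) / img_h,      # normalised box height
    ]
```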
+ ## Benchmarks
+
+ YOLOv7 models were trained on different dataset combinations and evaluated on the Objects365, RMFV365-v1, RMFV365, and COHI-O365 testing sets. The results are summarized below:
+
+ | S/N | Model     | Objects365 mAP50 | Objects365 mAP50:95 | RMFV365-v1 mAP50 | RMFV365-v1 mAP50:95 | RMFV365 mAP50 | RMFV365 mAP50:95 | COHI-O365 mAP50 | COHI-O365 mAP50:95 |
+ |-----|-----------|------------------|---------------------|------------------|---------------------|---------------|------------------|-----------------|--------------------|
+ | 1   | FPN       | **35.5**         | 22.5                | N/A              | N/A                 | N/A           | N/A              | N/A             | N/A                |
+ | 2   | RetinaNet | 27.3             | 18.7                | N/A              | N/A                 | N/A           | N/A              | N/A             | N/A                |
+ | 3   | YOLOv5m   | 27.3             | 18.8                | 22.6             | 14.1                | 18.7          | 10.1             | 40.4            | 28.0               |
+ | 4   | YOLOv7-0  | 34.97            | **24.57**           | 29.1             | 18.3                | 24.2          | 13.0             | 47.5            | 33.5               |
+ | 5   | YOLOv7-T1 | 34.3             | 24.0                | 32.7             | 22.7                | 32.0          | 22.0             | 49.1            | 34.6               |
+ | 6   | YOLOv7-T2 | 34.0             | 23.1                | **32.9**         | **23.0**            | **33.0**      | **22.8**         | **49.9**        | **34.9**           |
+
+ **Table:** Object recognition results on the Objects365, RMFV365-v1, RMFV365, and COHI-O365 testing sets. Bold values mark the best result in each column.
+
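The mAP50 and mAP50:95 columns follow the usual COCO-style convention: a detection counts as a true positive when its IoU with a matched ground-truth box clears the threshold (0.5 for mAP50; thresholds from 0.5 to 0.95 averaged for mAP50:95). A minimal IoU check for axis-aligned boxes, for reference:

```python
def iou(a, b):
    """Intersection over union of two boxes given as [x_min, y_min, x_max, y_max]."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_tp_at_50(pred, gt):
    """True positive under the mAP50 criterion: IoU >= 0.5."""
    return iou(pred, gt) >= 0.5
```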
+ ## Citation
+
+ *(Citation information to be added)*
+
+ ## GitHub Repository
+
+ [https://github.com/IS2AI/COHI-O365](https://github.com/IS2AI/COHI-O365)