# COHI-O365: A Benchmark Dataset for Fisheye Object Detection

## Dataset Summary

This work introduces COHI-O365, a benchmark dataset for object detection in hemispherical/fisheye images, aimed at field-of-view-invariant applications. It complements RMFV365, a synthetic training dataset created by applying fisheye transformations to the Objects365 dataset. COHI-O365 contains 1,000 real fisheye images spanning 74 classes, with 20,798 annotated object instances in total. The images were captured with an ELP-USB8MP02G-L180 hemispherical camera (2448x3264 pixels) and manually annotated with axis-aligned bounding boxes. RMFV365, used for model training, comprises 5.1 million fisheye images generated from Objects365. YOLOv7 models were trained on Objects365, RMFV365, and a variant (RMFV365-v1), and evaluated on COHI-O365.
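
As a rough illustration of how a rectilinear Objects365 image can be remapped onto a fisheye projection, the sketch below warps an image with an equidistant fisheye model. The actual projection, focal lengths, and field of view used to build RMFV365 are not documented here, so every parameter in this snippet is an illustrative assumption rather than the RMFV365 pipeline itself.

```python
"""Minimal sketch of a rectilinear-to-fisheye warp (equidistant model).

Not the exact transformation used to build RMFV365; it only illustrates
remapping a perspective image onto a fisheye projection. The focal
lengths and field of view below are assumptions.
"""
import cv2
import numpy as np


def to_fisheye(img, fov_deg=180.0):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Equidistant fisheye: r_fish = f_fish * theta, where theta is the angle
    # from the optical axis; a pinhole source image has r_src = f_src * tan(theta).
    f_fish = (min(w, h) / 2.0) / np.radians(fov_deg / 2.0)
    f_src = min(w, h) / 2.0  # assumed pinhole focal length in pixels (~90 deg FOV)

    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - cx, ys - cy
    r_fish = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.clip(r_fish / f_fish, 0.0, np.radians(89.0))  # keep tan() finite
    r_src = f_src * np.tan(theta)

    # Radial scale factor from fisheye radius to source radius (1.0 at the center).
    scale = np.ones_like(r_fish)
    np.divide(r_src, r_fish, out=scale, where=r_fish > 0)

    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)


# Usage: fisheye = to_fisheye(cv2.imread("objects365_sample.jpg"))
```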


## Dataset Contents

Two datasets are provided:

* **COHI-O365:** A benchmark testing dataset with 1,000 real fisheye images of 74 classes.
* **RMFV365:** A large-scale synthetic fisheye dataset derived from Objects365, containing 5.1 million images.

A visualization of sample images from both datasets is provided in the GitHub repository.  A table detailing the number of bounding boxes per class in COHI-O365 is planned for future inclusion.
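
To inspect the annotations locally, the sketch below draws the axis-aligned boxes on an image. The on-disk annotation format is an assumption (YOLO-style text files with one normalized `class x_center y_center width height` row per box, matching the YOLOv5/YOLOv7 training setup); consult the repository for the actual layout before relying on it.

```python
"""Hedged sketch: draw axis-aligned boxes on a COHI-O365 image,
assuming YOLO-style normalized .txt labels (an assumption, not the
documented format)."""
import cv2


def draw_boxes(image_path, label_path, class_names):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    with open(label_path) as f:
        for line in f:
            cls, xc, yc, bw, bh = line.split()
            # Convert normalized center/size coordinates to pixel corners.
            xc, yc = float(xc) * w, float(yc) * h
            bw, bh = float(bw) * w, float(bh) * h
            x1, y1 = int(xc - bw / 2), int(yc - bh / 2)
            x2, y2 = int(xc + bw / 2), int(yc + bh / 2)
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(img, class_names[int(cls)], (x1, max(y1 - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return img
```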


## Benchmarks

Several detection models were evaluated on the Objects365, RMFV365-v1, RMFV365, and COHI-O365 test sets. The results are summarized below:

| S/N | Model       | Objects365 mAP50 | Objects365 mAP50:95 | RMFV365-v1 mAP50 | RMFV365-v1 mAP50:95 | RMFV365 mAP50 | RMFV365 mAP50:95 | COHI-O365 mAP50 | COHI-O365 mAP50:95 |
|-----|-------------|--------------------|----------------------|--------------------|---------------------|-----------------|--------------------|-----------------|--------------------|
| 1   | FPN         | **35.5**           | 22.5                  | N/A                | N/A                 | N/A             | N/A                | N/A             | N/A                |
| 2   | RetinaNet   | 27.3               | 18.7                  | N/A                | N/A                 | N/A             | N/A                | N/A             | N/A                |
| 3   | YOLOv5m     | 27.3               | 18.8                  | 22.6               | 14.1                | 18.7            | 10.1               | 40.4            | 28.0               |
| 4   | YOLOv7-0    | 34.97              | **24.57**             | 29.1               | 18.3                | 24.2            | 13.0               | 47.5            | 33.5               |
| 5   | YOLOv7-T1   | 34.3               | 24.0                  | 32.7               | 22.7                | 32.0            | 22.0               | 49.1            | 34.6               |
| 6   | YOLOv7-T2   | 34.0               | 23.1                  | **32.9**           | **23.0**             | **33.0**        | **22.8**           | **49.9**        | **34.9**           |

**Table:** Object detection results on the Objects365, RMFV365-v1, RMFV365, and COHI-O365 test sets. Bold values indicate the best result in each column.
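
For clarity on the metric columns: mAP50 counts a detection as correct when its IoU with a ground-truth box is at least 0.5, while mAP50:95 averages AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05 (the standard COCO-style protocol). The sketch below shows the IoU computation and the threshold grid; it only illustrates the metric definitions and is not the evaluation code behind the table.

```python
"""Illustration of the IoU thresholds behind the mAP50 / mAP50:95 columns."""
import numpy as np


def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


# mAP50 uses the single threshold 0.5; mAP50:95 averages AP over this grid.
iou_thresholds = np.linspace(0.5, 0.95, 10)  # 0.50, 0.55, ..., 0.95
print(iou((10, 10, 50, 50), (20, 20, 60, 60)), iou_thresholds)
```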


## GitHub Repository

[https://github.com/IS2AI/COHI-O365](https://github.com/IS2AI/COHI-O365)