---
license: cc-by-nc-sa-4.0
---
> **Dataset and Benchmark for Enhancing Critical Retained Foreign Object Detection**
[License: CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
[Dataset on 🤗 Hugging Face](https://huggingface.co/datasets/Yuliiiiiiiione/Hopkins_RFO_Bench/tree/main)
*For academic, non-commercial use only*
---
## 🔖 Citation
If you find this dataset useful in your work, please consider citing it:
<details>
<summary>📖 Click to show citation (⚠️ **Double-blind review warning**: If you are a reviewer, please **do not expand** this section if anonymity must be preserved.)</summary>
```bibtex
@misc{wang2025datasetbenchmarkenhancingcritical,
title={Dataset and Benchmark for Enhancing Critical Retained Foreign Object Detection},
author={Yuli Wang and Victoria R. Shi and Liwei Zhou and Richard Chin and Yuwei Dai and Yuanyun Hu and Cheng-Yi Li and Haoyue Guan and Jiashu Cheng and Yu Sun and Cheng Ting Lin and Ihab Kamel and Premal Trivedi and Pamela Johnson and John Eng and Harrison Bai},
year={2025},
eprint={2507.06937},
archivePrefix={arXiv},
primaryClass={eess.IV},
url={https://arxiv.org/abs/2507.06937},
}
```
</details>
[Paper](https://arxiv.org/pdf/2507.06937)
---
## 📦 Usage
```python
from datasets import load_dataset
dataset = load_dataset("Yuliiiiiiiione/Hopkins_RFO_Bench")
```
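To check what was downloaded, the generic `datasets` inspection pattern below works; the split and feature names come from the dataset card, so nothing here is specific to this dataset:

```python
from datasets import load_dataset

# Inspection sketch: split and feature names come from the dataset card,
# so none of them are hard-coded here.
dataset = load_dataset("Yuliiiiiiiione/Hopkins_RFO_Bench")
print(dataset)               # lists the available splits and their features
split = next(iter(dataset))  # first split name, e.g. "train"
print(dataset[split][0])     # one example record (image plus annotations)
```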
## 🧪 Try it out in Colab
You can explore the Hopkins RFO Bench dataset with the baseline code:
[Github/Baseline](https://anonymous.4open.science/r/RFO_Bench-8742/README.md)
---
## 💡 Motivation
Critical retained foreign objects (RFOs), including surgical instruments like sponges and needles, pose serious patient safety risks and carry significant financial and legal implications for healthcare institutions. Detecting critical RFOs using artificial intelligence remains challenging due to their rarity and the limited availability of chest X-ray datasets that specifically feature critical RFO cases. Existing datasets contain only non-critical RFOs, such as necklaces or zippers, further limiting their utility for developing clinically impactful detection algorithms. To address these limitations, we introduce the "Hopkins RFO Bench," a novel dataset containing 144 chest X-ray images of critical RFO cases collected over 18 years from the Johns Hopkins Health System. Using this dataset, we benchmark several state-of-the-art object detection models, highlighting the need for enhanced detection methodologies for critical RFO cases. Recognizing data scarcity challenges, we further explore image synthesis methods to bridge this gap. We evaluate two advanced synthetic image methods, DeepDRR-RFO (physics-based) and RoentGen-RFO (diffusion-based), for creating realistic radiographs featuring critical RFOs. Our comprehensive analysis identifies the strengths and limitations of each synthetic method, providing insights into effectively utilizing synthetic data to enhance model training. The Hopkins RFO Bench and our findings significantly advance the development of reliable, generalizable AI-driven solutions for detecting critical RFOs in clinical chest X-rays.
In this project, we:
- Developed an open-access dataset, [Hopkins RFO Bench](https://huggingface.co/datasets/Yuliiiiiiiione/Hopkins_RFO_Bench/tree/main), the first and largest of its kind, comprising 144 chest X-ray images containing critical RFOs collected from the Johns Hopkins Health System over the past 18 years.
- Benchmarked existing object detection models on the proposed dataset.
- Evaluated two customized synthetic image generation models, [DeepDRR-RFO](https://anonymous.4open.science/r/RFO_DeepDRR-25D5/README.md) and RoentGen-RFO, for creating images with critical RFOs. We trained object detection models on these synthetic images, analyzed the strengths and weaknesses of each approach, and provide insights to guide future improvements using our openly accessible dataset.
---
## 🧠 Dataset Overview
| Type | No. (cases) | Format | Access Link |
| --------------------------| ------------| ---------- | ------------|
| Hopkins RFO Bench | 144 | jpg & json | [link](https://huggingface.co/datasets/Yuliiiiiiiione/Hopkins_RFO_Bench/tree/main/Hopkins_RFO_Bench) |
| Physics-based synthetic images | 4000 | jpg & csv | [link](https://huggingface.co/datasets/Yuliiiiiiiione/Hopkins_RFO_Bench/tree/main/Physics-based_rendering_models) |
| Physics-based rendering models | 14 | obj | [link](https://huggingface.co/datasets/Yuliiiiiiiione/Hopkins_RFO_Bench/tree/main/Physics-based_rendering_models) |
| DDPM-based synthetic images | 4000 | jpg & csv | [link](https://huggingface.co/datasets/Yuliiiiiiiione/Hopkins_RFO_Bench/tree/main/DDPM-based%20sythetic%20images) |
For each data type, the example release includes the following files (the full dataset will be released upon paper acceptance):
**Dataset organization**:
```
xxxxx.jpg   # high-resolution chest X-ray images
xxxxx.csv   # image-level or object-level annotations
xxxxx.json  # image-level or object-level annotations
xxxxx.obj   # rendering volumes of RFOs used by the physics-based synthetic method
```
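As a rough illustration, one image/annotation pair could be read as follows; the placeholder file names mirror the listing above, and the JSON schema is an assumption rather than the released spec:

```python
import json
from PIL import Image

# Sketch: load one image and its JSON annotation from the Hopkins RFO Bench
# folder. File names are placeholders and the JSON keys are not guaranteed.
image = Image.open("Hopkins_RFO_Bench/xxxxx.jpg")   # high-resolution chest X-ray
with open("Hopkins_RFO_Bench/xxxxx.json") as f:
    annotation = json.load(f)                       # image- or object-level labels

print(image.size)
print(annotation)
```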
🛠️ All cases are **2D chest X-ray images** and are available under CC BY-NC-SA 4.0.
---
## 📊 Benchmark Tasks
We use two metrics to evaluate the classification and localization performance of foreign object detection on chest X-rays: Area Under the Curve (AUC) and the Free-response Receiver Operating Characteristic (FROC), the latter adapted from [object-CXR](https://github.com/hlk-1135/object-CXR).
### 🔍 1. Classification
For the classification task, the baseline model will generate a `prediction_classification.csv` file in the format below:
```
image_path,prediction
/path/#####.jpg,0.90
...
```
Each line in the prediction file represents one image. The first column is the image path, followed by the predicted probability (0 to 1) indicating the presence of foreign objects.
We use the Area Under the Curve (AUC) to evaluate binary classification performance—whether a chest X-ray contains foreign objects. AUC is a standard metric in medical imaging and is well-suited for our task, especially given the balanced distribution of positive and negative cases.
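As a reference, here is a minimal scoring sketch with scikit-learn; the ground-truth file `labels.csv` and its column names are hypothetical stand-ins for the released labels:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Sketch: score prediction_classification.csv against hypothetical labels.
# "labels.csv" and its "label" column are assumptions, not the released format.
preds = pd.read_csv("prediction_classification.csv")  # columns: image_path, prediction
labels = pd.read_csv("labels.csv")                    # columns: image_path, label (0/1)

merged = preds.merge(labels, on="image_path")
print("AUC:", roc_auc_score(merged["label"], merged["prediction"]))
```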
### 📝 2. Localization
For the localization task, each algorithm is required to generate a `prediction_localization.csv` file in the format below:
```
image_path,prediction
/path/#####.jpg,0.90 1000 500;0.80 200 400
...
```
Each line in the prediction file corresponds to one image. The first column is the image path, followed by a comma. The second column contains semicolon-separated tuples, each in the format `probability x y`, giving the confidence and coordinates of one predicted foreign object. If no object is detected, a zero-valued placeholder tuple is used; the comma must still follow the image path even when the prediction is empty.
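A small parsing sketch for this format, written to our reading of the spec above (standard library only):

```python
import csv

def parse_localization(path):
    """Parse prediction_localization.csv into {image_path: [(prob, x, y), ...]}."""
    results = {}
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the "image_path,prediction" header row
        for image_path, pred in reader:
            tuples = []
            for item in pred.split(";"):
                if item.strip():
                    prob, x, y = item.split()
                    tuples.append((float(prob), float(x), float(y)))
            results[image_path] = tuples
    return results
```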
We evaluate localization performance using the Free-response Receiver Operating Characteristic (FROC) curve, which suits our heterogeneous annotations (boxes, ellipses, masks) better than mAP. A prediction is correct if any predicted point falls within a ground truth region. Sensitivity is the number of correctly localized objects divided by the total number of annotated objects. False positives are predictions outside all annotations. FROC is calculated as average sensitivity at false positive rates per image: 0.125, 0.25, 0.5, 1, 2, 4, and 8. [froc.py](https://github.com/jfhealthcare/object-CXR/tree/master/froc.py) provides the details of how FROC is computed.
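To make the metric concrete, below is a simplified FROC sketch for the box-annotation case, consuming the parser output above; the official `froc.py` also handles ellipse and mask regions, so treat this as intuition rather than the evaluation code:

```python
# Simplified FROC sketch: a prediction is a hit when its point falls inside an
# unmatched ground-truth box. Ellipse and mask regions are not handled here.
import numpy as np

def froc_score(preds, gts, fp_rates=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """preds: {image: [(prob, x, y), ...]}; gts: {image: [(x1, y1, x2, y2), ...]}."""
    num_images = max(len(gts), 1)
    total_objects = max(sum(len(b) for b in gts.values()), 1)
    thresholds = sorted({p for pts in preds.values() for p, _, _ in pts}, reverse=True)
    sens, fppi = [], []
    for t in thresholds:  # sweep from strict to lenient confidence thresholds
        tp = fp = 0
        for image, boxes in gts.items():
            matched = set()
            for prob, x, y in sorted(preds.get(image, []), reverse=True):
                if prob < t:
                    continue
                hit = next((i for i, (x1, y1, x2, y2) in enumerate(boxes)
                            if i not in matched and x1 <= x <= x2 and y1 <= y <= y2),
                           None)
                if hit is None:
                    fp += 1          # point lies outside every annotation
                else:
                    matched.add(hit)  # each object counts at most once
            tp += len(matched)
        sens.append(tp / total_objects)
        fppi.append(fp / num_images)
    # average sensitivity interpolated at the target false-positives-per-image rates
    return float(np.mean([np.interp(r, fppi, sens) for r in fp_rates]))
```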
## ⚙️ Running the Baselines
### Dependencies
To set up the environment, run the following in your shell:
```
pip install -r requirement.txt
```
This installs the dependencies we used for Hopkins RFO Bench.
### Environment setup
Activate the environment by running:
```
conda activate Hopkins_RFO
```
### Baseline
We provide the code for each baseline model under [Baseline](https://anonymous.4open.science/r/RFO_Bench-8742/README.md).
After downloading the dataset from Hugging Face, you can run the baselines with the following commands:
Baseline Model for FasterRCNN:
```
python ./main_fasterrcnn.py
```
Baseline Model for FCOS:
```
python ./main_fcos.py
```
Baseline Model for RetinaNet:
```
python ./main_retina.py
```
Baseline Model for YOLO:
```
python ./main_yolo.py
```
Baseline Model for ViT:
```
python ./main_vit.py
```
---
## 📬 Contact
Stay tuned for the **public leaderboard** coming soon.
---