---
license: mit
---

# Dataset and Benchmark for Enhancing Critical Retained Foreign Object Detection

License: CC BY-NC-SA 4.0 · Dataset on 🤗 Hugging Face · For academic, non-commercial use only


## 🔖 Citation

If you find this dataset useful in your work, please consider citing it:

📖 Click to show citation (⚠️ **Double-blind review warning**: If you are a reviewer, please **do not expand** this section if anonymity must be preserved.)
```bibtex
@misc{wang2025datasetbenchmarkenhancingcritical,
    title={Dataset and Benchmark for Enhancing Critical Retained Foreign Object Detection},
    author={Yuli Wang and Victoria R. Shi and Liwei Zhou and Richard Chin and Yuwei Dai and Yuanyun Hu and Cheng-Yi Li and Haoyue Guan and Jiashu Cheng and Yu Sun and Cheng Ting Lin and Ihab Kamel and Premal Trivedi and Pamela Johnson and John Eng and Harrison Bai},
    year={2025},
    eprint={2507.06937},
    archivePrefix={arXiv},
    primaryClass={eess.IV},
    url={https://arxiv.org/abs/2507.06937},
}
```

[Paper](https://arxiv.org/abs/2507.06937)


## 📦 Usage

```python
from datasets import load_dataset
dataset = load_dataset("Yuliiiiiiiione/Hopkins_RFO_Bench")
```
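
A quick way to sanity-check the download is to print the splits and inspect one record. The split and field names below are not documented here, so treat them as placeholders for whatever the released schema actually contains:

```python
# Minimal inspection sketch -- split and field names are assumptions,
# not a documented schema; print them to see the actual structure.
from datasets import load_dataset

dataset = load_dataset("Yuliiiiiiiione/Hopkins_RFO_Bench")
print(dataset)                   # available splits and columns

split = list(dataset.keys())[0]  # take whichever split exists
example = dataset[split][0]
print(example.keys())            # fields of a single record
```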

## 🧪 Try it out in Colab

You can explore the Hopkins RFO Bench dataset in Colab or on your own machine: Github/Baseline


## 💡 Motivation

Critical retained foreign objects (RFOs), including surgical instruments such as sponges and needles, pose serious patient safety risks and carry significant financial and legal implications for healthcare institutions. Detecting critical RFOs with artificial intelligence remains challenging due to their rarity and the limited availability of chest X-ray datasets that specifically feature critical RFO cases. Existing datasets contain only non-critical RFOs, such as necklaces or zippers, further limiting their utility for developing clinically impactful detection algorithms. To address these limitations, we introduce "Hopkins RFO Bench", a novel dataset containing 144 chest X-ray images of critical RFO cases collected over 18 years from the Johns Hopkins Health System. Using this dataset, we benchmark several state-of-the-art object detection models, highlighting the need for enhanced detection methodologies for critical RFO cases. Recognizing data scarcity challenges, we further explore image synthesis methods to bridge this gap. We evaluate two advanced synthetic image methods, DeepDRR-RFO (a physics-based method) and RoentGen-RFO (a diffusion-based method), for creating realistic radiographs featuring critical RFOs. Our comprehensive analysis identifies the strengths and limitations of each synthetic method, providing insights into effectively utilizing synthetic data to enhance model training. The Hopkins RFO Bench and our findings significantly advance the development of reliable, generalizable AI-driven solutions for detecting critical RFOs in clinical chest X-rays.

In this project, we:

- Developed an open-access dataset, Hopkins RFO Bench, the first and largest of its kind, comprising 144 chest X-ray images containing critical RFOs collected from the Johns Hopkins Health System over the past 18 years.
- Benchmarked existing object detection models on the proposed dataset.
- Evaluated two customized synthetic image generation models, DeepDRR-RFO and RoentGen-RFO, for creating images with critical RFOs. We trained object detection models using these synthetic images, analyzed the strengths and weaknesses of each approach, and provide insights to guide future improvements using our openly accessible dataset.

## 🧠 Dataset Overview

| Type | No. (cases) | Format | Access Link |
|------|-------------|--------|-------------|
| Hopkins RFO Bench | 144 | jpg & json | link |
| Physics-based synthetic images | 4000 | jpg & csv | link |
| Physics-based rendering models | 14 | obj | link |
| DDPM-based synthetic images | 4000 | jpg & csv | link |

For each data type, the example dataset includes the following files (the full dataset will be released if the paper is accepted):

Dataset organization:

1. `xxxxx.jpg`: high-resolution chest X-ray images
2. `xxxxx.csv`: image-level or object-level annotations
3. `xxxxx.json`: image-level or object-level annotations
4. `xxxxx.obj`: rendering volumes of RFOs used for physics-based synthetic methods
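
As a minimal sketch of how these files might be paired on disk (the local directory name and the assumption that each image has a same-stem JSON annotation are illustrative, not part of the release):

```python
# Pair each chest X-ray with its annotation file by shared stem.
# "Hopkins_RFO_Bench" as the local directory name is a hypothetical
# assumption; adjust it to wherever you unpacked the data.
from pathlib import Path

data_dir = Path("Hopkins_RFO_Bench")
for img_path in sorted(data_dir.glob("*.jpg")):
    ann_path = img_path.with_suffix(".json")  # object-level annotations
    if ann_path.exists():
        print(img_path.name, "->", ann_path.name)
```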

๐Ÿ› ๏ธ All cases are 2D PNG slices and are available under CC BY-NC-SA 4.0.


## 📊 Benchmark Tasks

We use two metrics to evaluate the classification and localization performance of foreign object detection on chest X-rays: Area Under the Curve (AUC) and Free-response Receiver Operating Characteristic (FROC), adopted from object-CXR.

๐Ÿ” 1. Classification

For the classification task, the baseline model will generate a `prediction_classification.csv` file in the format below:

```
image_path,prediction
/path/#####.jpg,0.90
...
```

Each line in the prediction file represents one image. The first column is the image path, followed by the predicted probability (0 to 1) indicating the presence of foreign objects.

We use the Area Under the Curve (AUC) to evaluate binary classification performance, i.e., whether a chest X-ray contains foreign objects. AUC is a standard metric in medical imaging and is well-suited for our task, especially given the balanced distribution of positive and negative cases.
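
As an illustrative sketch (not the official scoring script), AUC can be computed from `prediction_classification.csv` with scikit-learn. The `ground_truth.csv` file and its `label` column are hypothetical names for whatever ground-truth file you derive from the annotations:

```python
# Sketch of AUC scoring -- ground_truth.csv and its columns are
# hypothetical; align them with the actual annotation files.
import csv
from sklearn.metrics import roc_auc_score

def read_column(path, column):
    with open(path, newline="") as f:
        return {row["image_path"]: float(row[column]) for row in csv.DictReader(f)}

preds = read_column("prediction_classification.csv", "prediction")
labels = read_column("ground_truth.csv", "label")  # 1 = RFO present, 0 = clean

paths = sorted(labels)
auc = roc_auc_score([labels[p] for p in paths], [preds[p] for p in paths])
print(f"AUC = {auc:.4f}")
```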

๐Ÿ“ 2. Localization

For the localization task, each algorithm is required to generate a `prediction_localization.csv` file in the format below:

```
image_path,prediction
/path/#####.jpg,0.90 1000 500;0.80 200 400
...
```

Each line in the prediction file corresponds to one image. The first column is the image path, followed by a comma. The second column contains semicolon-separated tuples, each in the format `probability x y`, giving the confidence and coordinates of one predicted foreign object. If no object is detected, a zero-valued placeholder tuple is used. The comma must always follow the image path, even for empty predictions.
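
A minimal parser for this format might look like the sketch below; it follows the description above (semicolon-separated tuples, a trailing comma even when there is no prediction) but is not the official loader:

```python
# Parse prediction_localization.csv: each row is
#   image_path,score x y;score x y;...
# An empty prediction leaves nothing after the comma; a zero-valued
# placeholder tuple ("0 0 0") simply parses to (0.0, 0.0, 0.0).
import csv

def parse_predictions(path):
    results = {}
    with open(path, newline="") as f:
        next(f)  # skip the "image_path,prediction" header
        for row in csv.reader(f):
            if not row:
                continue
            image_path, pred_str = row[0], row[1]
            tuples = []
            for t in pred_str.split(";"):
                if t.strip():
                    score, x, y = t.split()
                    tuples.append((float(score), float(x), float(y)))
            results[image_path] = tuples
    return results
```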

We evaluate localization performance using the Free-response Receiver Operating Characteristic (FROC) curve, which suits our heterogeneous annotations (boxes, ellipses, masks) better than mAP. A prediction is correct if any predicted point falls within a ground-truth region. Sensitivity is the number of correctly localized objects divided by the total number of annotated objects. False positives are predictions outside all annotations. FROC is calculated as the average sensitivity at false positive rates per image of 0.125, 0.25, 0.5, 1, 2, 4, and 8. `froc.py` provides the details of how FROC is computed.
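
Since `froc.py` is the reference implementation, the sketch below only illustrates the idea, assuming ground-truth regions are given as axis-aligned boxes `(x1, y1, x2, y2)` per image (the real script also handles ellipses and masks):

```python
# Illustrative FROC sketch -- hit-testing against bounding boxes only.
import numpy as np

def froc(preds, gt_boxes, num_images,
         fps_per_image=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """preds: list of (score, x, y, image_id); gt_boxes: {image_id: [box, ...]}."""
    total_objects = sum(len(b) for b in gt_boxes.values())
    matched = {img: [False] * len(boxes) for img, boxes in gt_boxes.items()}
    hits = []
    for score, x, y, img in sorted(preds, key=lambda p: -p[0]):
        hit = False
        for i, (x1, y1, x2, y2) in enumerate(gt_boxes.get(img, [])):
            if not matched[img][i] and x1 <= x <= x2 and y1 <= y <= y2:
                matched[img][i] = True   # a point inside a GT region is a hit
                hit = True
                break
        hits.append(hit)
    hits = np.array(hits, dtype=bool)
    sens_curve = np.cumsum(hits) / max(total_objects, 1)  # sensitivity
    fp_curve = np.cumsum(~hits) / num_images              # false positives per image
    sens_at = []
    for t in fps_per_image:
        i = np.searchsorted(fp_curve, t, side="right") - 1
        sens_at.append(sens_curve[i] if i >= 0 else 0.0)
    return float(np.mean(sens_at))
```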

## Usage

### Dependencies

To set up the environment, run the following in your shell:

```
pip install -r requirement.txt
```

This installs the dependencies we used for Hopkins RFO Bench.

### Environment setup

Activate the environment by running:

```
conda activate Hopkins_RFO
```

### Baseline

We provide the code for each baseline model under `Baseline`.

After downloading the dataset from Hugging Face, you can run each baseline with the following commands:

Baseline model for Faster R-CNN:

```
python ./main_fasterrcnn.py
```

Baseline model for FCOS:

```
python ./main_fcos.py
```

Baseline model for RetinaNet:

```
python ./main_retina.py
```

Baseline model for YOLO:

```
python ./main_yolo.py
```

Baseline model for ViT:

```
python ./main_vit.py
```

## 📬 Contact

Stay tuned: a public leaderboard is coming soon.