---
pretty_name: NIH-CXR14-BiomedCLIP-Features
dataset_info:
  features:
    - name: Image Index
      dtype: string
    - name: Texts
      dtype: string
    - name: View Position
      dtype: string
    - name: Image Features
      sequence: float32
    - name: Text Features
      sequence: float32
    - name: Atelectasis
      dtype: int32
    - name: Cardiomegaly
      dtype: int32
    - name: Effusion
      dtype: int32
    - name: Infiltration
      dtype: int32
    - name: Mass
      dtype: int32
    - name: Nodule
      dtype: int32
    - name: Pneumonia
      dtype: int32
    - name: Pneumothorax
      dtype: int32
    - name: Consolidation
      dtype: int32
    - name: Edema
      dtype: int32
    - name: Emphysema
      dtype: int32
    - name: Fibrosis
      dtype: int32
    - name: Hernia
      dtype: int32
    - name: Pleural_Thickening
      dtype: int32
    - name: No_Finding
      dtype: int32
  splits:
    - name: train
      num_bytes: 328886878
      num_examples: 112120
  citation: |
    @article{wang2017chestx,
      title={ChestX-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases},
      author={Wang, Xiaosong and Peng, Yifan and Lu, Le and Lu, Zhiyong and Bagheri, Mohammadhadi and Summers, Ronald M},
      journal={arXiv preprint arXiv:1705.02315},
      year={2017}
    }

    @article{zhang2024biomedclip,
      title={A Multimodal Biomedical Foundation Model Trained from Fifteen Million Image–Text Pairs},
      author={Sheng Zhang and Yanbo Xu and Naoto Usuyama and Hanwen Xu and Jaspreet Bagga and Robert Tinn and Sam Preston and Rajesh Rao and Mu Wei and Naveen Valluri and Cliff Wong and Andrea Tupini and Yu Wang and Matt Mazzola and Swadheen Shukla and Lars Liden and Jianfeng Gao and Angela Crabtree and Brian Piening and Carlo Bifulco and Matthew P. Lungren and Tristan Naumann and Sheng Wang and Hoifung Poon},
      journal={NEJM AI},
      year={2024},
      volume={2},
      number={1},
      doi={10.1056/AIoa2400640},
      url={https://ai.nejm.org/doi/full/10.1056/AIoa2400640}
    }
language:
  - en
license: cc-by-4.0
size_in_bytes: 328886878
task_categories:
  - image-classification
  - text-retrieval
  - text-classification
  - image-feature-extraction
  - feature-extraction
  - image-to-text
task_ids:
  - multi-input-text-classification
tags:
  - medical
  - chest-xray
  - biomedclip
  - multi-modal
  - image-features
  - text-features
  - nih-cxr14
  - healthcare
size_categories:
  - 100M<n<1B
---

# NIH-CXR14-BiomedCLIP-Features Dataset

This dataset is derived from the NIH Chest X-ray Dataset (NIH-CXR14) and processed using the `BiomedCLIP-PubMedBERT_256-vit_base_patch16_224` model from Microsoft. It contains image and text features extracted from chest X-ray images and their corresponding textual findings.

## Dataset Description

The original NIH-CXR14 dataset comprises 112,120 chest X-ray images with disease labels from 30,805 unique patients. This processed dataset includes:

- **Image Features**: extracted using the vision encoder of BiomedCLIP (512 dimensions).
- **Text Features**: extracted using the text encoder of BiomedCLIP (512 dimensions).
- **Finding Labels**: the original disease labels, processed and converted into a multi-label format.
- **Image Index**: unique identifiers for each image.
- **View Position**: the view position of the X-ray (e.g., PA, AP).
- **Processed Text**: a grammatically correct text prompt generated from the finding labels, designed for use with the BiomedCLIP model.

## Processing Steps

1. **Data Loading**: The original NIH-CXR14 image and text datasets were loaded.
2. **Text Preprocessing** (see the first sketch below):
   - The `|` separator characters were replaced with commas.
   - "No Finding" labels were converted to "No_Finding".
   - Finding label strings were split into individual findings.
   - Grammatically correct text prompts were generated from the finding labels and view position.
3. **Feature Extraction** (see the second sketch below):
   - Images and text prompts were preprocessed using the BiomedCLIP preprocessors.
   - Image and text features were extracted using the BiomedCLIP model.
4. **Data Storage**:
   - Extracted features, image indices, view positions, raw texts, and finding labels were stored in Parquet files.
   - The dataset was chunked into multiple Parquet files for efficient storage and retrieval.
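The exact prompt template is not published with this card, so the following is only a minimal sketch of what step 2 might look like; the helper name `build_prompt` and the prompt wording are illustrative assumptions, not the dataset's verbatim code:

```python
# Illustrative sketch of step 2 (prompt generation).
# The template wording below is an assumption; the actual prompts may differ.
def build_prompt(finding_labels: str, view_position: str) -> str:
    """Turn a raw NIH label string such as 'Effusion|Mass' into a text prompt."""
    findings = finding_labels.replace("No Finding", "No_Finding").split("|")
    view = {"PA": "posteroanterior", "AP": "anteroposterior"}.get(
        view_position, view_position
    )
    base = f"a chest X-ray in {view} view"
    if findings == ["No_Finding"]:
        return f"{base} with no findings"
    joined = ", ".join(f.replace("_", " ").lower() for f in findings)
    return f"{base} showing {joined}"


print(build_prompt("Effusion|Mass", "PA"))
# -> "a chest X-ray in posteroanterior view showing effusion, mass"
```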
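Step 3 follows the standard BiomedCLIP usage through the `open_clip_torch` package. A minimal sketch, assuming a local image file (`00000001_000.png` is a placeholder path); note that the card does not state whether the stored features were L2-normalized before saving:

```python
import torch
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load BiomedCLIP plus its matching image preprocessor and tokenizer from the Hub.
name = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
model, preprocess = create_model_from_pretrained(name)
tokenizer = get_tokenizer(name)
model.eval()

# Placeholder inputs: one NIH image and one generated prompt.
image = preprocess(Image.open("00000001_000.png").convert("RGB")).unsqueeze(0)
tokens = tokenizer(
    ["a chest X-ray in posteroanterior view showing effusion, mass"],
    context_length=256,
)

with torch.no_grad():
    image_features = model.encode_image(image)  # shape: (1, 512)
    text_features = model.encode_text(tokens)   # shape: (1, 512)
```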

## Dataset Structure

The dataset is organized into Parquet files, each containing the following columns:

- **Image Index**: string, unique identifier for each image.
- **Image Features**: list of floats, image features extracted by BiomedCLIP.
- **Text Features**: list of floats, text features extracted by BiomedCLIP.
- **View Position**: string, view position of the X-ray.
- **Texts**: string, processed text prompts.
- **[Finding Label]**: integer (0 or 1), one column per finding, giving a multi-label representation.
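As a concrete illustration of this schema, the snippet below pulls one row and assembles the multi-hot label vector. It is a sketch: the split between metadata and label columns is inferred from the list above.

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("Yasintuncer/NIH-CXR14-BiomedCLIP-Features", split="train")

row = ds[0]
img_feat = np.asarray(row["Image Features"], dtype=np.float32)  # shape: (512,)
txt_feat = np.asarray(row["Text Features"], dtype=np.float32)   # shape: (512,)

# Every column that is not metadata or a feature vector is a 0/1 finding label.
meta = {"Image Index", "Texts", "View Position", "Image Features", "Text Features"}
label_names = [c for c in ds.column_names if c not in meta]
labels = np.array([row[c] for c in label_names], dtype=np.int32)  # multi-hot vector
```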

## Usage

This dataset can be used for various tasks, including:

- **Multi-label classification**: using the extracted features to predict disease findings.
- **Retrieval**: retrieving relevant X-ray images based on text queries, or vice versa (a sketch follows this list).
- **Fine-tuning**: fine-tuning models for medical image analysis tasks.
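For example, text-to-image retrieval reduces to cosine similarity between the stored vectors. A minimal sketch; it re-normalizes defensively, since the card does not say whether the features are already unit-length:

```python
import numpy as np

def retrieve_top_k(query_feat: np.ndarray, image_feats: np.ndarray, k: int = 5):
    """Return indices and scores of the k images most similar to a text query.

    query_feat:  (512,) text feature vector
    image_feats: (N, 512) matrix of image feature vectors
    """
    q = query_feat / np.linalg.norm(query_feat)
    m = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    scores = m @ q                    # cosine similarity per image
    idx = np.argsort(-scores)[:k]     # best matches first
    return idx, scores[idx]
```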

## Installation

To load the dataset, use the `datasets` library from Hugging Face:

```python
from datasets import load_dataset

dataset = load_dataset("Yasintuncer/NIH-CXR14-BiomedCLIP-Features")
```
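By default the feature columns come back as plain Python lists. Setting the split's format to NumPy (a standard `datasets` option) returns stacked arrays instead; the shape comment below assumes the 512-dimensional features described above:

```python
train = dataset["train"]
train.set_format(type="numpy", columns=["Image Features", "Text Features"])

X_img = train["Image Features"]  # expected shape: (112120, 512)
X_txt = train["Text Features"]
```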