---
library_name: pytorch
pipeline_tag: image-segmentation
license: cc-by-nc-4.0
datasets:
  - YaroslavPrytula/Revvity-25
tags:
  - image
  - instance-segmentation
  - cell-segmentation
  - microscopy
  - microscopy-images
  - brightfield
  - biomedical-imaging
  - computer-vision
  - unet
  - resnet50
  - R50
  - coco
  - revvity-25
  - research
  - model
language:
  - en
model-index:
  - name: IAUNet-R50 (Revvity-25)
    results:
      - task:
          type: instance-segmentation
        dataset:
          name: Revvity-25
          type: YaroslavPrytula/Revvity-25
        metrics:
          - name: mAP
            type: mean_average_precision
            value: 52.3
          - name: mAP@50
            type: mean_average_precision
            value: 85.1
          - name: mAP@75
            type: mean_average_precision
            value: 58.4
          - name: mAP_S
            type: mean_average_precision
            value: 1.8
          - name: mAP_M
            type: mean_average_precision
            value: 28.8
          - name: mAP_L
            type: mean_average_precision
            value: 58.5
---

# IAUNet‑R50 (trained on Revvity‑25)


Yaroslav Prytula<sup>1,2</sup> | Illia Tsiporenko<sup>1</sup> | Ali Zeynalli<sup>1</sup> | Dmytro Fishman<sup>1,3</sup>

<sup>1</sup>Institute of Computer Science, University of Tartu, <sup>2</sup>Ukrainian Catholic University, <sup>3</sup>STACC OÜ, Tartu, Estonia
![IAUNet architecture](IAUNet_v2-main_v2.png)

- 🔥 Paper: https://arxiv.org/abs/2508.01928
- 🤗 Dataset: https://huggingface.co/datasets/YaroslavPrytula/Revvity-25
- ⭐️ GitHub: https://github.com/SlavkoPrytula/IAUNet
- 🌐 Project page: https://slavkoprytula.github.io/IAUNet/

IAUNet is a novel query-based U‑Net architecture for cell instance segmentation in microscopy images. This checkpoint uses a ResNet‑50 (R50) backbone and was trained on the Revvity‑25 brightfield microscopy dataset.

## Evaluation Results

| Epoch | mAP  | mAP@50 | mAP@75 | mAP_S | mAP_M | mAP_L |
|-------|------|--------|--------|-------|-------|-------|
| 2000  | 52.3 | 85.1   | 58.4   | 1.8   | 28.8  | 58.5  |
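
These are standard COCO-style metrics. For reference, a minimal sketch of how such numbers can be computed with `pycocotools`, assuming ground truth and predictions are exported in COCO format (both file names below are placeholders, not files shipped with this repo):

```python
# Minimal COCO-style evaluation sketch (file names are placeholders).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("revvity25_val_instances.json")       # ground-truth annotations
coco_dt = coco_gt.loadRes("iaunet_r50_predictions.json")  # model predictions

# iouType="segm" evaluates mask IoU, as used for instance-segmentation mAP
evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints mAP, mAP@50, mAP@75, and mAP_S/M/L, among others
```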

## Files

- `model.pth` - pretrained weights (PyTorch)
- `config.yaml` - model/backbone and dataset‑specific settings (e.g., num_classes, input size, model params); see the snippet below
- `README.md` - this model card
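
For a quick look at the configuration, the YAML file can be inspected with PyYAML; the exact keys depend on the repository's config schema, so this is only an illustration:

```python
# Inspect the model configuration (key names depend on the actual schema).
import yaml  # pip install pyyaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg)  # e.g. num_classes, input size, decoder settings, ...
```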

## How to use (PyTorch)

Install the model code (either from the official GitHub repository or from a provided `model.py`), then load the weights from the Hub:

### 1) Get the checkpoint from the Hub

You can download the model checkpoint directly from the Hugging Face Hub using:

```python
from huggingface_hub import hf_hub_download

# Download the pretrained weights from the Hugging Face Hub
ckpt_path = hf_hub_download(
    repo_id="YaroslavPrytula/iaunet-r50-revvity25",
    filename="model.pth",  # the weights file listed above
)
print("Checkpoint downloaded to:", ckpt_path)
```

### 2) Install from GitHub

For more information, refer to the official GitHub repository. Clone it and install the dependencies:

```bash
git clone https://github.com/SlavkoPrytula/IAUNet.git
cd IAUNet
pip install -r requirements.txt

python main.py model=v2/iaunet-r50 \
               model.ckpt_path=<path_to_checkpoint> \
               model.decoder.type=iadecoder_ml_fpn/experimental/deep_supervision \
               model.decoder.num_classes=1 \
               model.decoder.dec_layers=3 \
               model.decoder.num_queries=100 \
               model.decoder.dim_feedforward=1024 \
               dataset=<dataset_name>
```
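
Here, `<path_to_checkpoint>` can point to the `ckpt_path` returned by `hf_hub_download` in step 1, and `<dataset_name>` selects the dataset configuration; the `key=value` overrides are resolved by the repository's configuration system.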

## Citing IAUNet

If you use this work in your research, please cite:

```bibtex
@InProceedings{Prytula_2025_CVPR,
    author    = {Prytula, Yaroslav and Tsiporenko, Illia and Zeynalli, Ali and Fishman, Dmytro},
    title     = {IAUNet: Instance-Aware U-Net},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {4739--4748}
}
```

## License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0). You are free to share and adapt the work for non-commercial purposes as long as you give appropriate credit. For more details, see the LICENSE file or visit https://creativecommons.org/licenses/by-nc/4.0/.


## Contact

📧 [email protected] or [email protected]


## Acknowledgements

This work was supported by Revvity and funded by the TEM-TA101 grant “Artificial Intelligence for Smart Automation.” Computational resources were provided by the High-Performance Computing Cluster at the University of Tartu 🇪🇪. We thank the Biomedical Computer Vision Lab for their invaluable support. We express gratitude to the Armed Forces of Ukraine 🇺🇦 and the bravery of the Ukrainian people for enabling a secure working environment, without which this work would not have been possible.