

IAUNet‑R50 (trained on Revvity‑25)


Yaroslav Prytula1,2  |  Illia Tsiporenko1  |  Ali Zeynalli1  |  Dmytro Fishman1,3
1Institute of Computer Science, University of Tartu,
2Ukrainian Catholic University, 3STACC OÜ, Tartu, Estonia

🔥 Paper: https://arxiv.org/abs/2508.01928
🤗 Dataset: https://huggingface.co/datasets/YaroslavPrytula/Revvity-25
⭐️ Github: https://github.com/SlavkoPrytula/IAUNet
🌐 Project page: https://slavkoprytula.github.io/IAUNet/

IAUNet is a novel query-based U‑Net architecture for cell instance segmentation in microscopy images. This checkpoint uses a ResNet‑50 backbone (R50) and was trained on the Revvity‑25 brightfield microscopy dataset for cell instance segmentation.
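As background for the query-based design, here is a generic sketch of how query-based segmenters turn learnable instance queries into per-instance masks: each query is dotted against a per-pixel feature map to produce one mask logit map per instance. This illustrates the general family of query-based decoders, not the exact IAUNet decoder, and all shapes below are made up:

```python
import torch

# Generic query-based mask prediction (illustrative shapes, not IAUNet's):
# N queries, each a d-dim embedding, score every pixel of a d-dim feature map.
num_queries, dim, H, W = 100, 256, 64, 64
queries = torch.randn(num_queries, dim)   # learnable instance queries
pixel_features = torch.randn(dim, H, W)   # per-pixel decoder features

# Dot product of each query with each pixel -> one mask logit map per query.
mask_logits = torch.einsum("qc,chw->qhw", queries, pixel_features)
print(mask_logits.shape)  # torch.Size([100, 64, 64])
```

Each of the 100 maps is then thresholded (after sigmoid) into one candidate instance mask.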

Evaluation Results

| Epoch | mAP | mAP@50 | mAP@75 | mAP_S | mAP_M | mAP_L |
|------:|----:|-------:|-------:|------:|------:|------:|
| 2000 | 52.3 | 85.1 | 58.4 | 1.8 | 28.8 | 58.5 |
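For context on the metrics: mAP@50 counts a predicted instance as correct when its mask overlaps a ground-truth mask with IoU ≥ 0.5, while mAP averages over IoU thresholds from 0.5 to 0.95. A minimal, self-contained illustration of the mask-IoU matching criterion (toy pixel sets, not data from this benchmark):

```python
def mask_iou(a, b):
    """IoU between two binary masks given as collections of (row, col) pixels."""
    a, b = set(a), set(b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

pred = [(0, 0), (0, 1), (1, 0), (1, 1)]  # predicted instance mask
gt   = [(0, 1), (1, 1), (2, 1)]          # ground-truth instance mask

iou = mask_iou(pred, gt)
print(iou)  # 2 shared pixels / 5 total pixels = 0.4
```

At IoU 0.4 this prediction would be a false positive under mAP@50 (threshold 0.5), showing how stricter thresholds like mAP@75 drive the score down.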

Files

  • model.pth - pretrained weights (PyTorch)
  • config.yaml - model/backbone and dataset‑specific settings (e.g., num_classes, input size, model params)
  • README.md - this model card

How to use (PyTorch)

Install the model code (either from the official repository or the provided model.py), then load the weights from the Hub:

1) Get the checkpoint from the Hub

You can download the model checkpoint directly from the Hugging Face Hub using:

```python
from huggingface_hub import hf_hub_download

# Download the pretrained weights (model.pth, listed under Files above)
ckpt_path = hf_hub_download(
    repo_id="YaroslavPrytula/iaunet-r50-revvity25",
    filename="model.pth",
)
print("Checkpoint downloaded to:", ckpt_path)
```
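Once `ckpt_path` is on disk, the weights can be read with `torch.load` and passed to the model's `load_state_dict`. The exact state-dict layout of this checkpoint isn't shown here, so the sketch below builds a toy checkpoint (the parameter name is made up) purely to demonstrate the load step, using `map_location` so it also works on CPU-only machines:

```python
import os
import tempfile

import torch

# Illustrative only: write a tiny checkpoint in the same spirit as model.pth
# (the key name below is invented), then load it back on CPU.
state = {"backbone.conv1.weight": torch.zeros(4, 3, 3, 3)}
path = os.path.join(tempfile.gettempdir(), "toy_model.pth")
torch.save(state, path)

ckpt = torch.load(path, map_location="cpu")
print(sorted(ckpt.keys()))                    # parameter names stored in the file
print(ckpt["backbone.conv1.weight"].shape)    # torch.Size([4, 3, 3, 3])
```

For the real checkpoint, inspecting `ckpt.keys()` this way is a quick check of whether the file holds a bare state dict or a wrapper with extra training metadata.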

2) Install from GitHub

For more information, refer to the official GitHub repository. Clone it, install the dependencies, and run with the pretrained checkpoint:

```shell
git clone https://github.com/SlavkoPrytula/IAUNet.git
cd IAUNet
pip install -r requirements.txt
python main.py model=v2/iaunet-r50 \
               model.ckpt_path=<path_to_checkpoint> \
               model.decoder.type=iadecoder_ml_fpn/experimental/deep_supervision \
               model.decoder.num_classes=1 \
               model.decoder.dec_layers=3 \
               model.decoder.num_queries=100 \
               model.decoder.dim_feedforward=1024 \
               dataset=<dataset_name>
```
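The `key=value` arguments above look like Hydra-style dotted overrides (an assumption based on the syntax; the repo's config system may differ), where each dotted key selects a node in a nested config. A toy, dependency-free illustration of that pattern:

```python
def apply_overrides(cfg: dict, overrides: list) -> dict:
    """Apply 'a.b.c=value' style overrides to a nested dict (toy illustration)."""
    for item in overrides:
        keys, _, value = item.partition("=")
        node = cfg
        parts = keys.split(".")
        for k in parts[:-1]:
            node = node.setdefault(k, {})   # walk/create intermediate dicts
        node[parts[-1]] = int(value) if value.isdigit() else value
    return cfg

cfg = apply_overrides({}, [
    "model.decoder.num_classes=1",
    "model.decoder.num_queries=100",
])
print(cfg["model"]["decoder"])  # {'num_classes': 1, 'num_queries': 100}
```

This mirrors how the command above ends up setting `decoder.num_classes` and friends inside the run's configuration.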

Citing IAUNet

If you use this work in your research, please cite:

```bibtex
@InProceedings{Prytula_2025_CVPR,
    author    = {Prytula, Yaroslav and Tsiporenko, Illia and Zeynalli, Ali and Fishman, Dmytro},
    title     = {IAUNet: Instance-Aware U-Net},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {4739--4748}
}
```

License

License: CC BY-NC 4.0

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). You are free to share and adapt the work for non-commercial purposes as long as you give appropriate credit. For more details, see the LICENSE file or visit Creative Commons.


Contact

📧 [email protected] or [email protected]


Acknowledgements

This work was supported by Revvity and funded by the TEM-TA101 grant “Artificial Intelligence for Smart Automation.” Computational resources were provided by the High-Performance Computing Cluster at the University of Tartu 🇪🇪. We thank the Biomedical Computer Vision Lab for their invaluable support. We express gratitude to the Armed Forces of Ukraine 🇺🇦 and the bravery of the Ukrainian people for enabling a secure working environment, without which this work would not have been possible.
