---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- pit
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- J24
metrics:
- accuracy
- auc
- f1
model-index:
- name: PiT-b1
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.7504
name: Average Accuracy
- type: auc
value: 0.7049
name: Average AUC-ROC
- type: f1
value: 0.4053
name: Average F1-Score
---
# 🌌 pit-gravit-b1
πŸ”­ This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
πŸ”— **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## πŸ›°οΈ Model Details
- **πŸ€– Model Type**: PiT
- **πŸ§ͺ Experiment**: B1 - J24-classification-head
- **🌌 Dataset**: J24
- **πŸͺ Fine-tuning Strategy**: classification-head
## πŸ’» Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
    'hf-hub:parlange/pit-gravit-b1',
    pretrained=True
)
model.eval()

# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    output = model(dummy_input)
    predictions = torch.softmax(output, dim=1)

# Class index 1 corresponds to the lens class
print(f"Lens probability: {predictions[0][1]:.4f}")
```
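For real cutouts, the resize and normalization expected by the model can be resolved from its pretrained configuration using timm's data utilities (recent timm, β‰₯ 0.9). The snippet below is a minimal sketch; `cutout.png` is a placeholder path for your own 3-channel candidate image.

```python
from PIL import Image
import timm
import torch

model = timm.create_model('hf-hub:parlange/pit-gravit-b1', pretrained=True)
model.eval()

# Resolve the preprocessing (resize, interpolation, normalization) from the model's config
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

# "cutout.png" is a placeholder for a 3-channel lens-candidate image
image = Image.open("cutout.png").convert("RGB")
batch = transform(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)
print(f"Lens probability: {probs[0][1]:.4f}")
```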
## ⚑️ Training Configuration
**Training Dataset:** J24 (Jaelani et al. 2024)
**Fine-tuning Strategy:** classification-head
| πŸ”§ Parameter | πŸ“ Value |
|--------------|----------|
| Batch Size | 192 |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | classification_head |
| Stochastic Depth Probability | 0.1 |
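As a rough illustration of how these settings map to code, the sketch below freezes the backbone and trains only the classifier head with AdamW and ReduceLROnPlateau. The timm backbone name (`pit_b_224`), the initial learning rate, and the quantity monitored by the scheduler are assumptions, since they are not specified in the table above.

```python
import timm
import torch

# Assumed ImageNet-pretrained backbone; the exact timm variant is an assumption
model = timm.create_model(
    'pit_b_224',
    pretrained=True,
    num_classes=2,       # lens / non-lens
    drop_path_rate=0.1,  # stochastic depth probability from the table
)

# Classification-head strategy: freeze everything except the classifier head
for name, param in model.named_parameters():
    param.requires_grad = "head" in name  # name-based heuristic for the head parameters

# AdamW + ReduceLROnPlateau as in the table; lr=1e-4 is an assumed placeholder
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", patience=10  # patience value from the table; monitored quantity assumed to be validation loss
)

# Sketch of the epoch loop (train_one_epoch / evaluate are hypothetical helpers):
# for epoch in range(100):
#     train_one_epoch(model, optimizer)
#     val_loss = evaluate(model)
#     scheduler.step(val_loss)  # reduce LR when validation loss plateaus
```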
## πŸ“ˆ Training Curves
![Combined Training Metrics](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/training_curves/PiT_combined_metrics.png)
## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| πŸ“‰ Loss | 0.2301 | 0.2265 |
| 🎯 Accuracy | 0.9086 | 0.9105 |
| πŸ“Š AUC-ROC | 0.9665 | 0.9668 |
| βš–οΈ F1 Score | 0.9071 | 0.9094 |
## β˜‘οΈ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):
![ROC + Confusion Matrix - Dataset A](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_a.png)
![ROC + Confusion Matrix - Dataset B](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_b.png)
![ROC + Confusion Matrix - Dataset C](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_c.png)
![ROC + Confusion Matrix - Dataset D](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_d.png)
![ROC + Confusion Matrix - Dataset E](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_e.png)
![ROC + Confusion Matrix - Dataset F](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_f.png)
![ROC + Confusion Matrix - Dataset G](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_g.png)
![ROC + Confusion Matrix - Dataset H](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_h.png)
![ROC + Confusion Matrix - Dataset I](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_i.png)
![ROC + Confusion Matrix - Dataset J](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_j.png)
![ROC + Confusion Matrix - Dataset K](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_k.png)
![ROC + Confusion Matrix - Dataset L](https://huggingface.co/parlange/pit-gravit-b1/resolve/main/roc_confusion_matrix/PiT_roc_confusion_matrix_l.png)
### πŸ“‹ Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.7504 |
| πŸ“ˆ Average AUC-ROC | 0.7049 |
| βš–οΈ Average F1-Score | 0.4053 |
## πŸ“˜ Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
      title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
      author={RenΓ© Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and TomΓ‘s Verdugo and Anupreeta More and Anton T. Jaelani},
      year={2025},
      eprint={2509.00226},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author via GitHub: https://github.com/parlange/