BLIP for RSICD image captioning

The blip-image-captioning-base model has been fine-tuned on the RSICD dataset. The training parameters used are as follows:

- learning_rate = 5e-7
- optimizer = AdamW
- scheduler = ReduceLROnPlateau
- epochs = 5
- More details (demo, testing, evaluation, metrics) are available at the GitHub repo
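A minimal usage sketch (not from the model card) of captioning an image with this kind of BLIP checkpoint via Hugging Face transformers. The `MODEL_ID` below is the base checkpoint this card says was fine-tuned; substitute the fine-tuned model's own repo id when using it, and replace the blank placeholder image with a real RSICD aerial image.

```python
# Hypothetical captioning sketch using the transformers BLIP API.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Base checkpoint; swap in this fine-tuned model's repo id.
MODEL_ID = "Salesforce/blip-image-captioning-base"

processor = BlipProcessor.from_pretrained(MODEL_ID)
model = BlipForConditionalGeneration.from_pretrained(MODEL_ID)

# A blank RGB image stands in for a real RSICD aerial image here.
image = Image.new("RGB", (384, 384))

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True)
print(caption)
```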