paligemma_vqav2
This model is a fine-tuned version of google/paligemma-3b-pt-224 on the VQAv2 dataset.
Transformers PaliGemma 3B weights, pre-trained with 224×224 input images and 128-token input/output text sequences. The models are available in float32, bfloat16 and float16 formats for fine-tuning.
Original model: Google/PaliGemma
Model description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by PaLI-3 and based on open components such as the SigLIP vision model and the Gemma language model. It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video captioning, visual question answering, text reading, object detection and object segmentation.
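As a rough illustration of this image-plus-text-in, text-out interface, the sketch below loads the fine-tuned checkpoint with Transformers and runs a single visual-question-answering query. It is a minimal sketch, not the author's exact pipeline: the image URL and prompt are placeholders, and loading in bfloat16 with device_map="auto" (which needs accelerate) is one of several options given the available weight formats.

```python
# Minimal inference sketch (assumptions: the example image URL and the prompt
# are illustrative only; bfloat16 + device_map="auto" is one possible setup).
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "eagle0504/paligemma_vqav2"  # or the base "google/paligemma-3b-pt-224"
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image; substitute your own.
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
prompt = "answer en What is in the image?"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=20, do_sample=False)

# Strip the prompt tokens and decode only the newly generated answer.
print(processor.decode(generation[0][input_len:], skip_special_tokens=True))
```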
Intended uses & limitations
PaliGemma is a single-turn vision-language model not meant for conversational use, and it works best when fine-tuned to a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes, such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue them with a rich set of capabilities (question answering, captioning, segmentation, etc.). However, they are not designed to be used directly, but to be transferred (by fine-tuning) to specific tasks using a similar prompt structure. For interactive testing, you can use the "mix" family of models, which have been fine-tuned on a mixture of tasks. To see the google/paligemma-3b-mix-448 model in action, check this Space that uses the Transformers codebase.
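For concreteness, the snippet below lists a few example prompts built from such task prefixes. The prefix conventions follow the public PaliGemma documentation; the specific objects and questions are placeholders, not part of this card.

```python
# Illustrative prompts using task prefixes (prefix strings follow the public
# PaliGemma documentation; the objects/questions are placeholders).
prompts = [
    "caption en",                          # short English caption of the image
    "answer en How many cats are there?",  # visual question answering
    "detect cat",                          # object detection, returns location tokens
    "segment cat",                         # referring segmentation, returns segmentation tokens
]
# Each prompt is paired with an image and encoded with the processor exactly as
# in the inference sketch above.
```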
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
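As a sketch only, the hyperparameters above map onto Transformers TrainingArguments roughly as follows; the output directory is a placeholder, and the dataset, collator, model, and Trainer wiring are omitted.

```python
# Rough TrainingArguments equivalent of the hyperparameters listed above
# (output_dir is a placeholder; data loading and Trainer setup are omitted).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="paligemma_vqav2",    # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # with accumulation, effective batch size = 16
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=10,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the Transformers defaults.
)
```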
Training results
Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
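To reproduce results against these versions, a quick environment check like the following can be run first; it is only an illustrative snippet, not part of the original training script.

```python
# Verify that the local environment matches the framework versions listed above.
import transformers, torch, datasets, tokenizers

print(transformers.__version__)  # expected 4.42.0.dev0
print(torch.__version__)         # expected 2.3.0+cu121
print(datasets.__version__)      # expected 2.19.2
print(tokenizers.__version__)    # expected 0.19.1
```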