Word of Caution: The captions are pretty low entropy.

#11
by pshishodia - opened

Hey, first of all, thank you for creating this multimodal dataset spanning text, audio, and images.

But a word of caution for anyone using this in the future, and feedback that could perhaps be incorporated into later versions: the generated captions are very simple and have very low entropy. This is a byproduct of asking people, impromptu, what comes to mind when they look at an image, which makes the resulting captions very simple and "low entropy".

This makes the dataset not so usable for connecting the image and text modalities (CLIP, Stable Diffusion), since the captions only capture very high-level features. I do think those high-level features would still be good enough if the dataset were very large, with O(100M) images. But since it's small, a lot of value could be unlocked just by increasing the information content of each caption.

For future versions, a better approach would probably be to ask people to describe the image in as much detail as possible.

Example:
यह एक जिला परिवहन कार्यालय पूर्णिया का फ़ोटो है। ("This is a photo of the District Transport Office, Purnia.")
[attached image: image.png]

यह कार्यालय नगद परिसर ("This office cash premises")
[attached image: image.png]

SigLIP 2's (multilingual) performance on text-to-image retrieval:
[attached image: image.png]
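For reference, here is a minimal sketch of how a text-to-image recall@k number like this could be computed with a SigLIP 2 checkpoint from transformers; the checkpoint name and the (image, caption) pairing are assumptions, not the exact evaluation setup behind the figure.

```python
# Minimal sketch: zero-shot text-to-image retrieval recall@k with a SigLIP 2
# checkpoint via Hugging Face transformers. The checkpoint name and the way
# (image, caption) pairs are supplied are assumptions.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip2-base-patch16-224"  # assumed multilingual SigLIP 2 checkpoint
model = AutoModel.from_pretrained(ckpt).eval()
processor = AutoProcessor.from_pretrained(ckpt)

def recall_at_k(image_paths, captions, k=5):
    """captions[i] is the (Hindi) caption describing image_paths[i]."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    with torch.no_grad():
        img_inputs = processor(images=images, return_tensors="pt")
        txt_inputs = processor(text=captions, padding="max_length",
                               max_length=64, return_tensors="pt")
        img_emb = model.get_image_features(**img_inputs)
        txt_emb = model.get_text_features(**txt_inputs)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    sims = txt_emb @ img_emb.T                       # one row of scores per caption
    topk = sims.topk(k, dim=-1).indices              # top-k retrieved image indices
    hits = (topk == torch.arange(len(captions)).unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```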

Thank you for pointing this out.

Not sure whether the fact that each image has multiple descriptions from different speakers was taken into account.

When these transcriptions are combined, they can produce a much richer and more detailed caption for the image.

For the first image you shared, I found the following associated transcriptions:

[ "यह एक जिला परिवहन कार्यालय पूर्णिया का फ़ोटो है।",
"एक कार्यालय भवन है उसमें बहुत सारी विंडो हैं।",
" यहां पे जो है घर बने है जो की बहोत लम्बा-चौड़ा घर है और घर जो है देखनें में काफी खूबसूरत लग रहे हैं यह जो है जिला परिवहन कार्यालय पूर्णिया का घर है। ",
"यहां पे एक सी.पी.यु कूलर भी लगे है जो की काफी खूबसूरत हैबोहोत {बहुत} ही शानदार है जो देखने में यह नज़ारा बार-बार नहीं मिलता है। " ]

For the second image, the transcriptions are:

[ "यह कार्यालय नगद परिसर ", "इस फोटो में आप देख सकते है की एक टेंपू नजर आ रहा है टेंपू पीले रंग का है उसके अगल-बगल काफी सारे बाइके और साइकिल दिखाई दे रहे है।" ]

Another possible way to further enrich the image descriptions is to use a reliable ASR system to transcribe the remaining audio data; since less than 10% has been transcribed so far, this could significantly improve both coverage and detail.
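As a rough sketch of that idea, an off-the-shelf multilingual ASR model could be run over the untranscribed audio; the Whisper checkpoint below is only an example, not necessarily the ASR system that would actually be used.

```python
# Minimal sketch: transcribe the untranscribed utterances with an off-the-shelf
# multilingual ASR model. "openai/whisper-large-v3" is just an example checkpoint;
# an Indic-focused ASR model may be more reliable for Hindi.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    chunk_length_s=30,  # handle utterances longer than 30 seconds
)

def transcribe(audio_path):
    # Force Hindi decoding and transcription (rather than translation).
    out = asr(audio_path, generate_kwargs={"language": "hindi", "task": "transcribe"})
    return out["text"]

print(transcribe("example_utterance.wav"))  # hypothetical audio file path
```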

There are also two categories of images: generic images, which are common across districts, and specific images, which are unique to a particular district.
The number of utterances associated with generic images is significantly higher than for specific images.

We estimated the number of words obtained by combining the transcriptions from the multiple utterances associated with each image in the Hindi portion of the dataset. Details:

Number of generic images: 1,323
Number of specific images: 53,241
Average number of words per generic image: 1,527.11
Average number of words per specific image: 36.51
Overall average (generic + specific): 72.65
Min number of words per image: 1
Max number of words per image: 2,226

(Please note that multiple people may describe the same details, leading to overlaps in content, and this redundancy has not been accounted for in the current analysis.)
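For reference, a minimal sketch of how these per-image word counts could be computed is below; the image_id / is_generic fields are assumptions about how the metadata is organised, not the exact analysis script.

```python
# Minimal sketch of the word-count estimate: group transcriptions per image and
# count words across all utterances. The "image_id" / "transcription" /
# "is_generic" fields are assumed, not the actual metadata schema.
from collections import defaultdict
from statistics import mean

def word_count_stats(rows):
    """rows: iterable of dicts like {"image_id": ..., "transcription": ..., "is_generic": bool}."""
    words = defaultdict(int)
    generic_ids = set()
    for row in rows:
        words[row["image_id"]] += len(row["transcription"].split())
        if row.get("is_generic"):
            generic_ids.add(row["image_id"])
    generic = [n for img, n in words.items() if img in generic_ids]
    specific = [n for img, n in words.items() if img not in generic_ids]
    return {
        "avg_words_generic": mean(generic) if generic else 0.0,
        "avg_words_specific": mean(specific) if specific else 0.0,
        "avg_words_overall": mean(words.values()),
        "min_words": min(words.values()),
        "max_words": max(words.values()),
    }
```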
[attached figure: sigLIP2.png]

We also attempted to fine-tune SigLIP-2 for Hindi, using approximately 45,000 images for training and 9,500 for testing. The model’s performance is shown in the figure.
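For anyone wanting to try something similar, here is a minimal sketch of one fine-tuning step with the SigLIP sigmoid loss; the checkpoint name, batch construction, and hyperparameters are assumptions, not the exact recipe behind the numbers in the figure.

```python
# Minimal sketch of one SigLIP-style fine-tuning step on (Hindi caption, image)
# pairs. The checkpoint name, batching, and learning rate are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip2-base-patch16-224"  # assumed checkpoint
model = AutoModel.from_pretrained(ckpt).train()
processor = AutoProcessor.from_pretrained(ckpt)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(images, captions):
    """images: list of PIL images; captions[i] describes images[i]."""
    inputs = processor(text=captions, images=images, padding="max_length",
                       max_length=64, return_tensors="pt")
    out = model(**inputs)
    logits = out.logits_per_text                  # (batch, batch) similarity logits
    # Sigmoid loss: matching pairs on the diagonal are positives, the rest negatives.
    targets = 2 * torch.eye(logits.size(0)) - 1   # +1 on the diagonal, -1 elsewhere
    loss = -F.logsigmoid(targets * logits).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```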

Thank you @SujithPulikodan for taking the time to write such a detailed response.

> When these transcriptions are combined, they can produce a much richer and more detailed caption for the image.

This is a great point, and I agree that once we have good transcriptions, we will be able to create much more information-dense captions. We're working towards improving transcription quality so the whole Vaani dataset can be transcribed, since, from what I know, Vaani is one of the most diverse Indian datasets, with the highest speaker count and number of languages.
