---
license: mit
---
# Pre-computed vision-language model image embeddings

Embeddings are stored as Parquet files named according to the following pattern:

`<DATASET_NAME>_<OP>_<MODEL_NAME>.parquet`

- `DATASET_NAME`: name of the dataset, e.g., "imagenette".
- `OP`: split of the dataset (either "train" or "test").
- `MODEL_NAME`: name of the model, e.g., "clip_vit-l_14".

Each file exposes two columns:

- `dataset["embedding"]` contains the embeddings.
- `dataset["label"]` contains the labels.
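
For example, a file can be read locally with pandas (a minimal sketch, assuming Parquet support via `pyarrow` and that each row stores its embedding as a list or array):

```python
import numpy as np
import pandas as pd

# File name follows the pattern above; this particular file is an example.
dataset = pd.read_parquet("imagenette_train_clip_vit-l_14.parquet")

# Stack the per-row embeddings into a single (n_samples, embedding_dim) array.
embeddings = np.stack(dataset["embedding"].to_numpy())
labels = dataset["label"].to_numpy()
print(embeddings.shape, labels.shape)
```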

To generate the dataset, run `python make_dataset.py`.
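
`make_dataset.py` itself is not reproduced here. As a rough illustration of the kind of pipeline it implements, the sketch below embeds one split with open_clip and writes the two columns to Parquet; the pretrained tag, directory layout, and batch size are assumptions, not the script's actual settings.

```python
import open_clip
import pandas as pd
import torch
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"  # pretrained tag is an assumption
)
model = model.to(device).eval()

# Hypothetical directory layout: one subfolder per class.
dataset = ImageFolder("data/imagenette/train", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, num_workers=4)

embeddings, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features = model.encode_image(images.to(device))
        embeddings.extend(features.cpu().numpy().tolist())
        labels.extend(targets.tolist())

pd.DataFrame({"embedding": embeddings, "label": labels}).to_parquet(
    "imagenette_train_open_clip_vit-b-32.parquet"
)
```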

Supported dataset names (see `supported_datasets.txt`):

- imagenette

Supported model names (see `supported_models.txt`):

- clip:RN50
- clip:ViT-B/32
- clip:ViT-L/14
- open_clip:ViT-B-32
- open_clip:ViT-L-14
- FLAVA
- ALIGN
- BLIP
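
The `clip:` and `open_clip:` prefixes indicate which library provides the backbone. The dispatch below is a hypothetical sketch of how such a name could be resolved to a loaded model; the `pretrained` tag is an assumption, and the FLAVA, ALIGN, and BLIP entries (typically loaded via Hugging Face transformers) are omitted.

```python
# Hypothetical dispatch from a supported model name to a loaded backbone;
# not part of the repository.
def load_backbone(name: str, pretrained: str = "openai"):
    source, _, arch = name.partition(":")  # e.g., "clip:ViT-L/14"
    if source == "clip":
        import clip  # https://github.com/openai/CLIP

        model, preprocess = clip.load(arch)
        return model, preprocess
    if source == "open_clip":
        import open_clip

        # The pretrained tag is an assumption; "openai" weights exist for
        # both ViT-B-32 and ViT-L-14 in open_clip.
        model, _, preprocess = open_clip.create_model_and_transforms(
            arch, pretrained=pretrained
        )
        return model, preprocess
    raise ValueError(f"Unsupported model name: {name}")


model, preprocess = load_backbone("clip:ViT-L/14")
```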

## References

```bibtex
@inproceedings{teneggi24testing,
  title={Testing Semantic Importance via Betting},
  author={Teneggi, Jacopo and Sulam, Jeremias},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
}
```