---
license: mit
---

### Pre-computed vision-language model image embeddings
Embeddings are stored as [Parquet](https://parquet.apache.org/) files with the following structure:

```python
<DATASET_NAME>_<OP>_<MODEL_NAME>.parquet

"""
DATASET_NAME: name of the dataset, e.g. "imagenette".
OP: split of the dataset (either "train" or "test").
MODEL_NAME: name of the model, e.g. "clip_vit-l_14".
"""

dataset["embedding"] contains the embeddings
dataset["label"] contains the labels
```
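
For example, a single file can be read with standard Parquet tooling. This is a minimal loading sketch using pandas, assuming the `imagenette` train split embedded with `clip_vit-l_14` and that each row of the `embedding` column holds one fixed-length vector:

```python
import numpy as np
import pandas as pd

# Hypothetical file name following <DATASET_NAME>_<OP>_<MODEL_NAME>.parquet
df = pd.read_parquet("imagenette_train_clip_vit-l_14.parquet")

# Stack the per-image vectors into a single (n_samples, embedding_dim) array
embeddings = np.stack(df["embedding"].to_numpy())
labels = df["label"].to_numpy()

print(embeddings.shape, labels.shape)
```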

To generate the dataset, run

```bash
$ python make_dataset.py
```

An illustrative sketch of the pipeline this script implements appears after the model list below.

Supported dataset names (see [supported_datasets.txt](supported_datasets.txt)):

* `imagenette` [[dataset](https://github.com/fastai/imagenette)]

Supported model names (see [supported_models.txt](supported_models.txt)):

* `clip:RN50` [[model](https://github.com/openai/CLIP)]
* `clip:ViT-B/32` [[model](https://github.com/openai/CLIP)]
* `clip:ViT-L/14` [[model](https://github.com/openai/CLIP)]
* `open_clip:ViT-B-32` [[model](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K)]
* `open_clip:ViT-L-14` [[model](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K)]
* `FLAVA` [[model](https://huggingface.co/facebook/flava-full)]
* `ALIGN` [[model](https://huggingface.co/kakaobrain/align-base)]
* `BLIP` [[model](https://huggingface.co/Salesforce/blip-itm-base-coco)]
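
For reference, the sketch below illustrates the kind of pipeline `make_dataset.py` implements: encode each image with a pre-trained vision-language model and write the vectors to Parquet. It is an assumption-laden sketch rather than the actual script; the `open_clip` checkpoint is the `ViT-B-32` model listed above, while the `images/train/<label>/*.jpg` folder layout and the output file name are hypothetical.

```python
from pathlib import Path

import open_clip
import pandas as pd
import torch
from PIL import Image

# Load a pre-trained open_clip model and its matching image preprocessing
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

rows = []
with torch.no_grad():
    # Hypothetical layout: images/train/<label>/<image>.jpg
    for path in sorted(Path("images/train").glob("*/*.jpg")):
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        embedding = model.encode_image(image).squeeze(0)
        rows.append({"embedding": embedding.numpy().tolist(), "label": path.parent.name})

# Assumed output name following <DATASET_NAME>_<OP>_<MODEL_NAME>.parquet
pd.DataFrame(rows).to_parquet("imagenette_train_open_clip_vit-b-32.parquet")
```
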
**References**

```
@inproceedings{teneggi24testing,
  title={Testing Semantic Importance via Betting},
  author={Teneggi, Jacopo and Sulam, Jeremias},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
}
```