---
language:
- en
license: mit
size_categories:
- 1M<n<10M
task_categories:
- visual-question-answering
- image-text-to-text
pretty_name: ABC-Pretraining-Data
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: caption
dtype: string
- name: url
dtype: string
- name: id
dtype: int64
- name: image
dtype: string
- name: negatives
sequence: int64
splits:
- name: train
num_bytes: 2289772991
num_examples: 2252041
download_size: 1855548818
dataset_size: 2289772991
tags:
- visual
---
## ABC Pretraining Data
This is the pretraining data for ABC. The dataset is derived from Google's [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/) dataset.
Each item in the dataset contains a caption, a URL where the corresponding image can be downloaded, and a set of mined negatives. The full dataset is ~300 GB of images. For a detailed description of how we mined the negatives, please check out our paper ;).
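As a quick illustration (not the official loader; the repo id below is a guess, so substitute this dataset's actual path on the Hub), you can stream the metadata with the `datasets` library and fetch an image from its URL:

```python
from datasets import load_dataset
import requests

# Hypothetical repo id -- replace with this dataset's actual path on the Hub.
ds = load_dataset("TIGER-Lab/ABC-Pretraining-Data", split="train", streaming=True)

row = next(iter(ds))
print(row["caption"])    # caption: string
print(row["negatives"])  # negatives: list of int64 ids of mined negatives

# Download the corresponding image from its URL (some links may have rotted).
resp = requests.get(row["url"], timeout=10)
resp.raise_for_status()
with open(f"{row['id']}.jpg", "wb") as f:
    f.write(resp.content)
```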
**Update:** I have added the images to this repository. For an example of how to use and download this dataset, see our [repository](https://github.com/TIGER-AI-Lab/ABC).
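For instance, if the `negatives` ids refer to the `id` field of other rows (our reading of the schema; see the repository above for the canonical usage), they can be resolved with an id-to-row map:

```python
from datasets import load_dataset

# Hypothetical repo id -- replace with this dataset's actual path on the Hub.
ds = load_dataset("TIGER-Lab/ABC-Pretraining-Data", split="train")

# Build an id -> row index map, since negatives store item ids, not row indices.
id_to_row = {item_id: i for i, item_id in enumerate(ds["id"])}

anchor = ds[0]
for neg_id in anchor["negatives"]:
    negative = ds[id_to_row[neg_id]]
    print(neg_id, negative["caption"])
```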
## Paper and Website
For more information, please refer to [Website](https://tiger-ai-lab.github.io/ABC/).
## Citation
If you find any of our work helpful, please consider citing:
```
@misc{schneider2025abcachievingbettercontrol,
title={ABC: Achieving Better Control of Multimodal Embeddings using VLMs},
author={Benjamin Schneider and Florian Kerschbaum and Wenhu Chen},
year={2025},
eprint={2503.00329},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.00329},
}
```