---
license: mit
library_name: colpali
base_model: HuggingFaceTB/SmolVLM-500M-Instruct
language:
- en
tags:
- colsmolvlm
- vidore-experimental
- vidore
---
# ColSmolVLM-500M-Instruct: Visual Retriever based on SmolVLM-500M-Instruct with ColBERT strategy
ColSmolVLM is a model that uses a novel architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
This version is the untrained base version, released to guarantee deterministic initialization of the projection layer.
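## Usage
Below is a minimal retrieval sketch showing how the ColBERT-style multi-vector embeddings are produced and scored. It assumes the `colpali-engine` package's `ColIdefics3` / `ColIdefics3Processor` classes (SmolVLM belongs to the Idefics3 family) and uses `vidore/colsmolvlm-v0.1` as an illustrative trained-checkpoint name; adjust both to your installed version and the adapter you actually use.

```python
import torch
from PIL import Image

from colpali_engine.models import ColIdefics3, ColIdefics3Processor

# Load a trained ColSmolVLM checkpoint (name assumed for illustration).
model = ColIdefics3.from_pretrained(
    "vidore/colsmolvlm-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "cpu"
).eval()
processor = ColIdefics3Processor.from_pretrained("vidore/colsmolvlm-v0.1")

# Inputs: page images to index and text queries to search with.
images = [Image.new("RGB", (448, 448), color="white")]
queries = ["Is attention really all you need?"]

batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Each forward pass yields one multi-vector embedding per image/query.
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# ColBERT-style late interaction: for each query token, take the max
# similarity over all image patch vectors, then sum over query tokens.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)  # shape: (num_queries, num_images)
```

Note that this repository holds the untrained base weights, so scores obtained from it directly will not be meaningful; load a trained adapter for actual retrieval.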
## License
ColSmolVLM's vision-language backbone model (SmolVLM) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.
## Contact
- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]
## Citation
If you use any datasets or models from this organization in your research, please cite the original paper as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```