How to Run the Vidore Evaluation
If you want to run the Vidore evaluation on the jina-embeddings-v4 model (and on the Document Retrieval Benchmark curated by Jina AI), you need to install the requirements from this fork/branch (these changes should be merged into the Vidore source code soon):
pip install "vidore-benchmark[jina-v4]"
You can run the evaluation with the following command:
vidore-benchmark evaluate-retriever \
--model-class jev4 \
--model-name jinaai/jina-embeddings-v4 \
--collection-name jinaai/jinavdr-visual-document-retrieval-684831c022c53b21c313b449 \
--dataset-format qa \
--split test
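Before launching a full benchmark run, it can help to check that the model loads and encodes correctly. The sketch below follows the encoding interface shown on the jina-embeddings-v4 model card (encode_text / encode_image with a task argument); the query text and image URL are placeholders, and the exact method signatures may change with the model's remote code.

# Quick sanity check: load the model and encode a query and a page image.
# encode_text / encode_image and their arguments follow the public
# jina-embeddings-v4 model card; the inputs below are placeholders.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "jinaai/jina-embeddings-v4",
    trust_remote_code=True,  # the model ships its own encoding code
)

# Queries and document page images are embedded into the same vector space.
query_embeddings = model.encode_text(
    texts=["What is the reported revenue for 2023?"],
    task="retrieval",
    prompt_name="query",
)
image_embeddings = model.encode_image(
    images=["https://example.com/page.png"],  # placeholder page image
    task="retrieval",
)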
Evaluate Pure Text Retrieval Models on Refined Vidore Tasks
The original Vidore datasets contain multiple text chunks per image so that text retrieval models can be evaluated on them.
These text chunks were extracted from the document pages with different tools, such as Unstructured, OCR models, and LLMs.
To evaluate text retrieval models on our filtered versions of the Vidore datasets, use the datasets in the collection https://huggingface.co/collections/jinaai/jina-vdr-vidoreocr-tasks-6852cfc55ccf837e7fecfa1b (see the example command below).
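For example, a text retrieval model can be evaluated on this collection with the same CLI. The bge-m3 model class is one of the text retrievers bundled with vidore-benchmark; the class name and the qa dataset format are assumptions here, so check the retriever registry and the dataset cards if the command fails:

vidore-benchmark evaluate-retriever \
--model-class bge-m3 \
--model-name BAAI/bge-m3 \
--collection-name jinaai/jina-vdr-vidoreocr-tasks-6852cfc55ccf837e7fecfa1b \
--dataset-format qa \
--split test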
It is also possible to evaluate jina-embeddings-v4 and other vision retrieval models on these datasets. This takes more time, however, and should produce the same evaluation results as running on the versions of the datasets in the Jina VDR collection.
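To do so, reuse the command from the first section and point --collection-name at jinaai/jina-vdr-vidoreocr-tasks-6852cfc55ccf837e7fecfa1b; the remaining arguments should stay the same, assuming these datasets also use the qa format.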