This repo contains the code needed to run AIR-Bench using LlamaIndex.
It implements a custom retriever that uses text-embedding-ada-002
as the dense embedding model and BM25 for sparse lexical retrieval, combining
the two with QueryFusionRetriever(), which also generates additional queries to broaden retrieval.
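For context, a hybrid setup along these lines can be assembled with the standard LlamaIndex components (QueryFusionRetriever, BM25Retriever, OpenAIEmbedding). The following is a minimal sketch, not the exact code in run_airbench.py; the node list, top-k values, and query count are placeholders:

```python
from llama_index.core import VectorStoreIndex
from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.core.schema import TextNode
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.retrievers.bm25 import BM25Retriever

# Placeholder corpus; AIR-Bench supplies the real documents.
nodes = [TextNode(text="..."), TextNode(text="...")]

# Dense retriever backed by text-embedding-ada-002.
dense_index = VectorStoreIndex(
    nodes,
    embed_model=OpenAIEmbedding(model="text-embedding-ada-002"),
)
dense_retriever = dense_index.as_retriever(similarity_top_k=10)

# Sparse lexical retriever (BM25) over the same nodes.
bm25_retriever = BM25Retriever.from_defaults(nodes=nodes, similarity_top_k=10)

# Fuse both result lists; the default LLM (Settings.llm) generates extra query variations.
retriever = QueryFusionRetriever(
    [dense_retriever, bm25_retriever],
    similarity_top_k=10,
    num_queries=4,          # 1 original query + 3 generated variations
    mode="reciprocal_rerank",
)

results = retriever.retrieve("example query")
```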
Usage
pip install llama-index llama-index-retrievers-bm25
python ./run_airbench.py
Customization
Feel free to use this as a template for evaluating other LlamaIndex retrieval pipelines! Just replace the setup in create_retriever_fn()
with the pipeline you want to test; a sketch of what that might look like follows.
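The sketch below swaps in a dense-only pipeline as an example. The argument names here are illustrative; the exact signature create_retriever_fn() uses in run_airbench.py may differ.

```python
from llama_index.core import VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding

def create_retriever_fn(nodes, similarity_top_k=10):
    """Example customization: a dense-only retriever instead of the hybrid one.

    Illustrative signature; adapt it to match the one in run_airbench.py.
    """
    index = VectorStoreIndex(
        nodes,
        embed_model=OpenAIEmbedding(model="text-embedding-3-small"),
    )
    return index.as_retriever(similarity_top_k=similarity_top_k)
```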