---
license: mit
base_model: BAAI/bge-reranker-v2-m3
tags:
- generated_from_trainer
library_name: sentence-transformers
pipeline_tag: text-ranking
model-index:
- name: bge_reranker
results: []
---
# Reranker model
- [Reranker model](#reranker-model)
- [Brief information](#brief-information)
- [Supported architectures](#supported-architectures)
- [Local inference example](#local-inference-example)
## Brief information
This repository contains the reranker model `bge-reranker-v2-m3`, which you can run on HuggingFace Inference Endpoints.
- Base model: [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3), without any fine-tuning.
- Commit: [953dc6f6f85a1b2dbfca4c34a2796e7dde08d41e](https://huggingface.co/BAAI/bge-reranker-v2-m3/commit/953dc6f6f85a1b2dbfca4c34a2796e7dde08d41e)
**For more details, please refer to the [repository of the base model](https://huggingface.co/BAAI/bge-reranker-v2-m3).**
## Supported architectures
- Apple Silicon (MPS)
- Nvidia GPU
- HuggingFace Inference Endpoints (AWS), see the request sketch after this list:
  - CPU (Intel Sapphire Rapids, 4 vCPU, 8 GB)
  - GPU (Nvidia T4)
  - Inferentia2 (2 cores, 32 GB RAM)
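Once the model is deployed as an Inference Endpoint, it can be called with a plain HTTP request. The sketch below is only illustrative: the endpoint URL, the `HF_TOKEN` environment variable, and the `query`/`texts` payload fields are assumptions, so adapt them to the handler actually deployed with your endpoint.
```python
import os
import requests

# Hypothetical endpoint URL and payload schema: adjust both to match
# the handler deployed with your Inference Endpoint.
ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"
headers = {
    "Authorization": f"Bearer {os.environ['HF_TOKEN']}",
    "Content-Type": "application/json",
}
payload = {
    "query": "what is panda?",
    "texts": [
        "hi",
        "The giant panda is a bear species endemic to China.",
    ],
}

response = requests.post(ENDPOINT_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())  # relevance scores, format defined by the handler
```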
## Local inference example
```python
from FlagEmbedding import FlagReranker

# Example (query, passage) pairs to score; replace with your own data
arr = [
    ['what is panda?', 'hi'],
    ['what is panda?', 'The giant panda is a bear species endemic to China.'],
]
# use_fp16=True speeds up inference at a negligible cost in quality
reranker = FlagReranker('netandreus/bge-reranker-v2-m3', use_fp16=True)
# normalize=True maps the raw logits to [0, 1] with a sigmoid
scores = reranker.compute_score(arr, normalize=True)
if not isinstance(scores, list):
    scores = [scores]
print(scores)  # normalized relevance scores, higher means more relevant
```
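Since the card declares `sentence-transformers` as the library, the same scoring can also be done with its `CrossEncoder` class. A minimal sketch, assuming the illustrative query/passage pairs below:
```python
from sentence_transformers import CrossEncoder

# Load the reranker as a cross-encoder; device (cpu/cuda/mps) is picked automatically
model = CrossEncoder('netandreus/bge-reranker-v2-m3', max_length=512)

pairs = [
    ('what is panda?', 'hi'),
    ('what is panda?', 'The giant panda is a bear species endemic to China.'),
]

# predict() returns one relevance score per (query, passage) pair
scores = model.predict(pairs)
print(scores)
```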