id
stringlengths 14
14
| text
stringlengths 89
1.21k
| source
stringclasses 1
value |
---|---|---|
147d8b44b127-0 | 111An Analysis of Fusion Functions for Hybrid Retrieval
SEBASTIAN BRUCH, Pinecone, USA
SIYU GAIโ,University of California, Berkeley, USA
AMIR INGBER, Pinecone, Israel
We study hybrid search in text retrieval where lexical and semantic search are fused together with the intuition
that the two are complementary in how they model relevance. In particular, we examine fusion by a convex
combination (CC) of lexical and semantic scores, as well as the Reciprocal Rank Fusion (RRF) method, and
identify their advantages and potential pitfalls. Contrary to existing studies, we find RRF to be sensitive to
its parameters; that the learning of a CC fusion is generally agnostic to the choice of score normalization;
that CC outperforms RRF in in-domain and out-of-domain settings; and finally, that CC is sample efficient,
requiring only a small set of training examples to tune its only parameter to a target domain.
CCS Concepts: โขInformation systems โRetrieval models and ranking ;Combination, fusion and
federated search . | 2210.11934.pdf |
147d8b44b127-1 | federated search .
Additional Key Words and Phrases: Hybrid Retrieval, Lexical and Semantic Search, Fusion Functions
1 INTRODUCTION
Retrieval is the first stage in a multi-stage ranking system [ 1,2,43], where the objective is to find
the top-๐set of documents, that are the most relevant to a given query ๐, from a large collection of
documentsD. Implicit in this task are two major research questions: a) How do we measure the
relevance between a query ๐and a document ๐โD?; and, b) How do we find the top- ๐documents
according to a given similarity metric efficiently ? In this work, we are primarily concerned with
the former question in the context of text retrieval.
As a fundamental problem in Information Retrieval ( IR), the question of the similarity between
queries and documents has been explored extensively. Early methods model text as a Bag of
Words ( BoW ) and compute the similarity of two pieces of text using a statistical measure such as | 2210.11934.pdf |
147d8b44b127-2 | Words ( BoW ) and compute the similarity of two pieces of text using a statistical measure such as
the term frequency-inverse document frequency (TF-IDF) family, with BM25 [ 32,33] being its most
prominent member. We refer to retrieval with a BoW model as lexical search and the similarity
scores computed by such a system as lexical scores .
Lexical search is simple, efficient, (naturally) โzero-shot, โ and generally effective, but has important
limitations: It is susceptible to the vocabulary mismatch problem and, moreover, does not take
into account the semantic similarity of queries and documents [ 5]. That, it turns out, is what deep
learning models are excellent at. With the rise of pre-trained language models such as BERT [ 8],
it is now standard practice to learn a vector representation of queries and documents that does
capture their semantics, and thereby, reduce top- ๐retrieval to the problem of finding ๐nearest
neighbors in the resulting vector space [ 9,12,15,31,39,42]โwhere closeness is measured using | 2210.11934.pdf |
147d8b44b127-3 | vector similarity or distance. We refer to this method as semantic search and the similarity scores
computed by such a system as semantic scores .
Hypothesizing that lexical and semantic search are complementary in how they model relevance,
recent works [ 5,12,13,18,19,41] began exploring methods to fusetogether lexical and semantic
retrieval: For a query ๐and ranked lists of documents ๐
Lexand๐
Semretrieved separately by lexical
and semantic search systems respectively, the task is to construct a final ranked list ๐
Fusion so as to
improve retrieval quality. This is often referred to as hybrid search .
โContributed to this work during a research internship at Pinecone.
Authorsโ addresses: Sebastian Bruch, [email protected], Pinecone, New York, NY, USA; Siyu Gai, [email protected], | 2210.11934.pdf |
147d8b44b127-4 | University of California, Berkeley, Berkeley, CA, USA; Amir Ingber, Pinecone, Tel Aviv, Israel, [email protected]:2210.11934v1 [cs.IR] 21 Oct 2022 | 2210.11934.pdf |
d2f12e5b35df-0 | 111:2 Sebastian Bruch, Siyu Gai, and Amir Ingber
It is becoming increasingly clear that hybrid search does indeed lead to meaningful gains in
retrieval quality, especially when applied to out-of-domain datasets [ 5,39]โsettings in which the
semantic retrieval component uses a model that was not trained or fine-tuned on the target dataset.
What is less clear and is worthy of further investigation, however, is how this fusion is done.
One intuitive and common approach is to linearly combine lexical and semantic scores [ 12,19,39].
If๐Lex(๐,๐)and๐Sem(๐,๐)represent the lexical and semantic scores of document ๐with respect
to query๐, then a linear (or more accurately, convex) combination is expressed as ๐Convex =
๐ผ๐Sem+(1โ๐ผ)๐Lexwhere 0โค๐ผโค1. Because lexical scores (such as BM25) and semantic scores | 2210.11934.pdf |
d2f12e5b35df-1 | (such as dot product) may be unbounded, often they are normalized with min-max scaling [ 15,39]
prior to fusion.
A recent study [ 5] argues that convex combination is sensitive to its parameter ๐ผand the
choice of score normalization.1They claim and show empirically, instead, that Reciprocal Rank
Fusion ( RRF) [6] may be a more suitable fusion as it is non-parametric and may be utilized in a
zero-shot manner. They demonstrate its impressive performance even in zero-shot settings on a
number of benchmark datasets.
This work was inspired by the claims made in [ 5]; whereas [ 5] addresses how various hybrid
methods perform relative to one another in an empirical study, we re-examine their findings
and analyze why these methods work and what contributes to their relative performance. Our
contributions thus can best be summarized as an in-depth examination of fusion functions and
their behavior.
As our first research question (RQ1), we investigate whether the convex combination fusion is
a reasonable choice and study its sensitivity to the normalization protocol. We show that, while | 2210.11934.pdf |
d2f12e5b35df-2 | a reasonable choice and study its sensitivity to the normalization protocol. We show that, while
normalization is essential to create a bounded function and thereby bestow consistency to the
fusion across domains, the specific choice of normalization is a rather small detail: There always
exist convex combinations of scores normalized by min-max, standard score, or any other linear
transformation that are rank-equivalent. In fact, when formulated as a per-query learning problem,
the solution found for a dataset that is normalized with one scheme can be transformed to a solution
for a different choice.
We next investigate the properties of RRF. We first unpack RRF and examine its sensitivity to its
parameters as our second research question (RQ2)โcontrary to [ 5], we adopt a parametric view
ofRRF where we have as many parameters as there are retrieval functions to fuse, a quantity
that is always one more than that in a convex combination. We find that, in contrast to a convex
combination, a tuned RRFgeneralizes poorly to out-of-domain datasets. We then intuit that, because
RRF is a function of ranks , it disregards the distribution of scores and, as such, discards useful | 2210.11934.pdf |
d2f12e5b35df-3 | information. Observe that the distance between raw scores plays no role in determining their
hybrid scoreโa behavior we find counter-intuitive in a metric space where distance does matter.
Examining this property constitutes our third and final research question (RQ3).
Finally, we empirically demonstrate an unsurprising yet important fact: Tuning ๐ผin a convex
combination fusion function is extremely sample-efficient, requiring just a handful of labeled
queries to arrive at a value suitable for a target domain, regardless of the magnitude of shift in
the data distribution. RRF, on the other hand, is relatively less sample-efficient and converges to a
relatively less effective retrieval system.
We believe our findings, both theoretical and empirical, are important and pertinent to the
research in this field. Our analysis leads us to believe that the convex combination formulation is
theoretically sound, empirically effective, sample-efficient, and robust to domain shift. Moreover, | 2210.11934.pdf |
a1a34c5d3489-0 | research in this field. Our analysis leads us to believe that the convex combination formulation is
theoretically sound, empirically effective, sample-efficient, and robust to domain shift. Moreover,
1c.f. Section 3.1 in [ 5]: โThis fusion method is sensitive to the score scales ...which needs careful score normalizationโ
(emphasis ours). | 2210.11934.pdf |
f957f16ac078-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:3
unlike the parameters in RRF, the parameter(s) of a convex function are highly interpretable and, if
no training samples are available, can be adjusted to incorporate domain knowledge.
We organized the remainder of this article as follows. In Section 2, we review the relevant
literature on hybrid search. Section 3 then introduces our adopted notation and provides details of
our empirical setup, thereby providing context for the theoretical and empirical analysis of fusion
functions. In Section 4, we begin our analysis by a detailed look at the convex combination of
retrieval scores. We then examine RRF in Section 5. In Section 6, we summarize our observations
and identify the properties a fusion function should have to behave well in hybrid retrieval. We
then conclude this work and state future research directions in Section 7.
2 RELATED WORK
A multi-stage ranking system is typically comprised of a retrieval stage and several subsequent re-
ranking stages, where the retrieved candidates are ordered using more complex ranking functions [ 2,
38]. Conventional wisdom has that retrieval must be recall-oriented while improving ranking quality | 2210.11934.pdf |
f957f16ac078-1 | 38]. Conventional wisdom has that retrieval must be recall-oriented while improving ranking quality
may be left to the re-ranking stages, which are typically Learning to Rank ( LtR) models [ 16,23,
28,38,40]. There is indeed much research on the trade-offs between recall and precision in such
multi-stage cascades [ 7,20], but a recent study [ 44] challenges that established convention and
presents theoretical analysis that suggests retrieval must instead optimize precision . We therefore
report both recall andNDCG [ 10], but focus on NDCG where space constraints prevent us from
presenting both or when similar conclusions can be reached regardless of the metric used.
One choice for retrieval that remains popular to date is BM25 [ 32,33]. This additive statistic
computes a weighted lexical match between query and document terms: It computes, for each
query term, the product of its โimportanceโ (i.e., frequency of a term in a document, normalized
by document and global statistics such as average length) and its propensityโa quantity that is
inversely proportionate to the fraction of documents that contain the termโand adds the scores of | 2210.11934.pdf |
f957f16ac078-2 | inversely proportionate to the fraction of documents that contain the termโand adds the scores of
query terms to arrive at the final similarity or relevance score. Because BM25, like other lexical
scoring functions, insists on an exact match of terms, even a slight typo can throw the function off.
This vocabulary mismatch problem has been subject to much research in the past, with remedies
ranging from pseudo-relevance feedback to document and query expansion techniques [ 14,29,35].
Trying to address the limitations of lexical search can only go so far, however. After all, they
additionally do not capture the semantic similarity between queries and documents, which may
be an important signal indicative of relevance. It has been shown that both of these issues can
be remedied by Transformer-based [ 37] pre-trained language models such as BERT [ 8]. Applied
to the ranking task, such models [ 24,26โ28] have advanced the state-of-the-art dramatically on
benchmark datasets [25].
The computationally intensive inference of these deep models often renders them too ineffi-
cient for first-stage retrieval, however, making them more suitable for re-ranking stages. But by | 2210.11934.pdf |
f957f16ac078-3 | cient for first-stage retrieval, however, making them more suitable for re-ranking stages. But by
cleverly disentangling the query and document transformations into the so-called dual-encoder
architecture, where, in the resulting design, the โembeddingโ of a document can be computed
independently of queries, we can pre-compute document vectors and store them offline. In this
way, we substantially reduce the computational cost during inference as it is only necessary to
obtain the vector representation of the query during inference. At a high level, these models project
queries and documents onto a low-dimensional vector space where semantically-similar points | 2210.11934.pdf |
357f2ffd45bc-0 | obtain the vector representation of the query during inference. At a high level, these models project
queries and documents onto a low-dimensional vector space where semantically-similar points
stay closer to each other. By doing so we transform the retrieval problem to one of similarity search
or Approximate Nearest Neighbor (ANN) searchโthe ๐nearest neighbors to a query vector are the
desired top-๐documents. This ANN problem can be solved efficiently using a number of algorithms
such as FAISS [ 11] or Hierarchical Navigable Small World Graphs [ 21] available as open source | 2210.11934.pdf |
f16a465ce965-0 | 111:4 Sebastian Bruch, Siyu Gai, and Amir Ingber
packages or through a managed service such as Pinecone2, creating an opportunity to use deep
models and vector representations for first-stage retrieval [ 12,42]โa setup that we refer to as
semantic search.
Semantic search, however, has its own limitations. Previous studies [ 5,36] have shown, for
example, that when applied to out-of-domain datasets, their performance is often worse than
BM25. Observing that lexical and semantic retrievers can be complementary in the way they
model relevance [ 5], it is only natural to consider a hybrid approach where lexical and semantic
similarities both contribute to the makeup of final retrieved list. To date there have been many
studies [ 12,13,17โ19,39,41,45] that do just that, where most focus on in-domain tasks with one
exception [ 5] that considers a zero-shot application too. Most of these works only use one of the
many existing fusion functions in experiments, but none compares the main ideas comprehensively. | 2210.11934.pdf |
f16a465ce965-1 | many existing fusion functions in experiments, but none compares the main ideas comprehensively.
We review the popular fusion functions from these works in the subsequent sections and, through
a comparative study, elaborate what about their behavior may or may not be problematic.
3 SETUP
In the sections that follow, we study fusion functions with a mix of theoretical and empirical analysis.
For that reason, we present our notation as well as empirical setup and evaluation measures here
to provide sufficient context for our arguments.
3.1 Notation
We adopt the following notation in this work. We use ๐o(๐,๐):QรDโ Rto denote the score of
document๐โD to query๐โQaccording to the retrieval system oโO. Ifois a semantic retriever,
Sem, thenQandDare the space of (dense) vectors in R๐and๐Semis typically cosine similarity or
inner product. Similarly, when ois a lexical retriever, Lex,QandDare high-dimensional sparse | 2210.11934.pdf |
f16a465ce965-2 | vectors in R|๐|, with|๐|denoting the size of the vocabulary, and ๐Lexis typically BM25. A retrieval
system ois the spaceQรD equipped with a metric ๐o(ยท,ยท)โwhich need not be a proper metric.
We denote the set of top- ๐documents retrieved for query ๐by retrieval system oby๐
๐
o(๐). We
write๐o(๐,๐)to denote the rank of document ๐with respect to query ๐according to retrieval
system o. Note that,๐o(๐,๐๐)can be expressed as the sum of indicator functions:
๐o(๐,๐๐)=1+โ๏ธ
๐๐โ๐
๐o(๐)1๐o(๐,๐๐)>๐o(๐,๐๐), (1) | 2210.11934.pdf |
f16a465ce965-3 | where 1๐is1when the predicate ๐holds and 0otherwise. In words, and ignoring the subtleties
introduced by the presence of score ties, the rank of document ๐is the count of documents whose
score is larger than the score of ๐.
Hybrid retrieval operates on the product space ofรo๐with metric ๐Fusion :ร๐o๐โR. Without
loss of generality, in this work, we restrictรo๐to be LexรSem. That is, we only consider the
problem of fusing two retrieval scores, but note that much of the anlysis can be trivially extended
to the fusion of multiple retrieval systems. We refer to this hybrid metric as a fusion function.
A fusion function ๐Fusion is typically applied to documents in the union of retrieved sets U๐(๐)=ร
o๐
๐
o(๐), which we simply call the union set . When a document ๐in the union set is not present in | 2210.11934.pdf |
f16a465ce965-4 | one of the top- ๐sets (i.e.,๐โU๐(๐)but๐โ๐
๐
o๐(๐)for some o๐), we compute its missing score
(i.e.,๐o๐(๐,๐)) prior to fusion.
2http://pinecone.io | 2210.11934.pdf |
2509e69bc750-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:5
3.2 Empirical Setup
Datasets : We evaluate our methods on a variety of publicly available benchmark datasets, both in
in-domain and out-of-domain, zero-shot settings. One of the datasets is the MS MARCO Passage
Retrieval v1 dataset [ 25], a publicly available retrieval and ranking collection from Microsoft. It
consists of roughly 8.8million short passages which, along with queries in natural language,
originate from Bing. The queries are split into train, dev, and eval non-overlapping subsets. We
use the train queries for any learning or tuning and evaluate exclusively on the small dev query
set (consisting of 6,980labeled queries) in our analysis. Included in the dataset also are relevance
labels.
We additionally experiment with 8datasets from the BeIR collection [ 36]3: Natural Questions
(NQ, question answering), Quora (duplicate detection), NFCorpus (medical), HotpotQA (question
answering), Fever (fact extraction), SciFact (scientific claim verification), DBPedia (entity search), | 2210.11934.pdf |
2509e69bc750-1 | andFiQA (financial). For details of and statistics for each dataset, we refer the reader to [36].
Lexical search : We use PISA [ 22] for keyword-based lexical retrieval. We tokenize queries
and documents by space and apply stemming available in PISAโwe do not employ any other
preprocessing steps such as stopword removal, lemmatization, or expansion. We use BM25 with
the same hyperparameters as [5] (k1= 0.9and b= 0.4) to retrieve the top 1000 candidates.
Semantic search : We use the all-MiniLM-L6-v2 model checkpoint available on HuggingFace4
to project queries and documents into 384-dimensional vectors, which can subsequently be used
for indexing and top- ๐retrieval using cosine similarity. This model has been shown to achieve
competitive quality on an array of benchmark datasets while remaining compact in size and efficient
to infer5, thereby allowing us to conduct extensive experiments with results that are competitive
with existing state-of-the-art models. This model has been fine-tuned on a large number of datasets, | 2210.11934.pdf |
2509e69bc750-2 | exceeding a total of 1 billion pairs of text, including NQ, MS MARCO Passage, and Quora . As such,
we consider all experiments on these three datasets as in-domain, and the rest as out-of-domain.
We use the exact search for inner product algorithm ( IndexFlatIP ) from FAISS [ 11] to retrieve top
1000 approximate nearest neighbors.
Evaluation : Unless noted otherwise, we form the union set for every query from the candidates
retrieved by the lexical and semantic search systems. We then compute missing scores where
required, compute ๐Fusion on the union set, and re-order according to the hybrid scores. We then
measure Recall@ 1000 and NDCG@ 1000 to quantify ranking quality, as recommended by Zamani
et al. [ 44]. On SciFact andNFCorpus , we evaluate Recall and NDCG at rank cutoff 100due to the
small size of these two collections. Note that, we choose to evaluate deep (i.e., with a larger rank
cut-off) rather than shallow metrics per the discussion in [ 39] to understand the performance of | 2210.11934.pdf |
2509e69bc750-3 | cut-off) rather than shallow metrics per the discussion in [ 39] to understand the performance of
each system more completely.
4 ANALYSIS OF CONVEX COMBINATION OF RETRIEVAL SCORES
We are interested in understanding the behavior and properties of fusion functions. In the remainder
of this work, we study through that lens two popular methods that are representative of existing
ideas in the literature, beginning with a convex combination of scores.
As noted earlier, most existing works use a convex combination of lexical and semantic scores as
follows:๐Convex(๐,๐)=๐ผ๐Sem(๐,๐)+(1โ๐ผ)๐Lex(๐,๐)for some 0โค๐ผโค1. When๐ผ=1the above
collapses to semantic scores and when it is 0, to lexical scores.
3Available at https://github.com/beir-cellar/beir
4https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
5c.f. https://sbert.net for details. | 2210.11934.pdf |
daf800e846e5-0 | 111:6 Sebastian Bruch, Siyu Gai, and Amir Ingber
An interesting property of this fusion is that it takes into account the distribution of scores. In
other words, the distance between lexical (or semantic) scores of two documents plays a significant
role in determining their final hybrid score. One disadvantage, however, is that the range of ๐Sem
can be very different from ๐Lex. Moreover, as with TF-IDF in lexical search or with inner product in
semantic search, the range of individual functions ๐omay depend on the norm of the query and
document vectors (e.g., BM25 is a function of the number of query terms). As such any constant ๐ผ
is likely to yield inconsistently-scaled hybrid scores.
The problem above is trivially addressed by applying score normalization prior to fusion [ 15,39].
Suppose we have collected a union set U๐(๐)for๐, and that for every candidate we have computed
both lexical and semantic scores. Now, consider the min-max scaling of scores ๐mm:Rโ[0,1]
below: | 2210.11934.pdf |
daf800e846e5-1 | below:
๐mm(๐o(๐,๐))=๐o(๐,๐)โ๐๐
๐๐โ๐๐, (2)
where๐๐=min๐โU๐(๐)๐o(๐,๐)and๐๐=max๐โU๐(๐)๐o(๐,๐). We note that, min-max scaling is
thede facto method in the literature, but other choices of ๐o(ยท)in the more general expression
below:
๐Convex(๐,๐)=๐ผ๐Sem(๐Sem(๐,๐))+( 1โ๐ผ)๐Lex(๐Lex(๐,๐)), (3) | 2210.11934.pdf |
daf800e846e5-2 | are valid as well so long as ๐Sem,๐Lex:RโRare monotone in their argument. For example, for
reasons that will become clearer later, we can redefine the normalization by replacing the minimum
of the set with the theoretical minimum of the function (i.e., the maximum value that is always less
than or equal to all values attainable by the scoring function, or its infimum) to arrive at:
๐tmm(๐o(๐,๐))=๐o(๐,๐)โinf๐o(๐,ยท)
๐๐โinf๐o(๐,ยท). (4)
As an example, when ๐Lexis BM25, then its infimum is 0. When๐Semis cosine similarity, then that
quantity isโ1.
Another popular choice is the standard score (z-score) normalization which is defined as follows: | 2210.11934.pdf |
daf800e846e5-3 | Another popular choice is the standard score (z-score) normalization which is defined as follows:
๐z(๐o(๐,๐))=๐o(๐,๐)โ๐
๐, (5)
where๐and๐denote the mean and standard deviation of the set of scores ๐o(๐,ยท)for query๐.
We will return to normalization shortly, but we make note of one small but important fact: In
cases where the variance of lexical (semantic) scores in the union set is 0, we may skip the fusion
step altogether because retrieval quality will be unaffected by lexical (semantic) scores. The case
where the variance is arbitrarily close to 0, however, creates challenges for certain normalization
functions. While this would make for an interesting theoretical analysis, we do not study this
particular setting in this work as, empirically, we do observe a reasonably large variance among
scores in the union set on all datasets using state-of-the-art lexical and semantic retrieval functions.
4.1 Suitability of Convex Combination | 2210.11934.pdf |
daf800e846e5-4 | 4.1 Suitability of Convex Combination
A convex combination of scores is a natural choice for creating a mixture of two retrieval systems,
but is it a reasonable choice? It has been established in many past empirical studies that ๐Convex with
min-max normalization often serves as a strong baseline. So the answer to our question appears to
be positive. Nonetheless, we believe it is important to understand precisely why this fusion works.
We investigate this question empirically, by visualizing lexical and semantic scores of query-
document pairs from an array of datasets. Because we operate in a two-dimensional space, observing
the pattern of positive (where document is relevant to query) and negative samples in a plot can
reveal a lot about whether and how they are separable and how the fusion function behaves. To | 2210.11934.pdf |
6598f6af8980-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:7
(a) MS MARCO
(b)Quora
(c)NQ
(d)FiQA
(e)HotpotQA
(f)Fever
Fig. 1. Visualization of the normalized lexical ( ๐tmm(๐Lex)) and semantic ( ๐tmm(๐Sem)) scores of query-
document pairs sampled from the validation split of each dataset. Shown in red are up to 20,000positive
samples where document is relevant to query, and in black up to the same number of negative samples.
Adding a lexical (semantic) dimension to query-document pairs helps tease out the relevant documents
that would be statistically indistinguishable in a one-dimensional semantic (lexical) view of the dataโwhen
samples are projected onto the ๐ฅ(๐ฆ) axis.
that end, we sample up to 20,000positive and up to the same number of negative query-document
pairs from the validation split of each dataset, and illustrate the collected points in a scatter plot in
Figure 1. | 2210.11934.pdf |
8eacecb53293-0 | 111:8 Sebastian Bruch, Siyu Gai, and Amir Ingber
(a)๐ผ=0.6
(b)๐ผ=0.8
Fig. 2. Effect of ๐Convex on pairs of lexical and semantic scores.
From these figures, it is clear that positive and negative samples form clusters that are, with
some error, separable by a linear function. What is different between datasets is the slope of this
separating line. For example, in MS MARCO, Quora , and NQ, which are in-domain datasets, the
separating line is almost vertical, suggesting that the semantic scores serve as a sufficiently strong
signal for relevance. This is somewhat true of FiQA . In other out-of-domain datasets, however, the
line is rotated counter-clockwise, indicating a more balanced weighting of lexical and semantic
scores. Said differently, adding a lexical (semantic) dimension to query-document pairs helps tease
out the relevant documents that would be statistically indistinguishable in a one-dimensional
semantic (lexical) view of the data. Interestingly, across all datasets, there is a higher concentration
of negative samples where lexical scores vanish. | 2210.11934.pdf |
8eacecb53293-1 | of negative samples where lexical scores vanish.
This empirical evidence suggests that lexical and semantic scores may indeed be complementaryโ
an observation that is in agreement with prior work [ 5]โand a line may be a reasonable choice for
distinguishing between positive and negative samples. But while these figures shed light on the
shape of positive and negative clusters and their separability, our problem is not classification but
ranking . We seek to order query-document pairs and, as such, separability is less critical and, in fact,
not required. It is therefore instructive to understand the effect of a particular convex combination
on pairs of lexical and semantic scores. This is visualized in Figure 2 for two values of ๐ผin๐Convex .
The plots in Figure 2 illustrate how the parameter ๐ผdetermines how different regions of the
plane are ranked relative to each other. This is a trivial fact, but it is nonetheless interesting to map
these patterns to the distributions in Figure 1. In-domain datasets, for example, form a pattern of | 2210.11934.pdf |
8eacecb53293-2 | positives and negatives that is unsurprisingly more in tune with the ๐ผ=0.8setting of๐Convex than
๐ผ=0.6.
4.2 Role of Normalization
We have thus far used min-max normalization to be consistent with the literature. In this section,
we ask the question first raised by Chen et al. [ 5] on whether and to what extent the choice of nor-
malization matters and how carefully one must choose the normalization protocol. In other words,
we wish to examine the effect of ๐Sem(ยท)and๐Lex(ยท)on the convex combination in Equation (3).
Before we begin, let us consider the following suite of functions: | 2210.11934.pdf |
a09d9440f156-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:9
โข๐mm: Min-max scaling of Equation (2);
โข๐tmm: Theoretical min-max scaling of Equation (4);
โข๐z: z-score normalization of Equation (5);
โข๐mmโLex: Min-max scaling of lexical scores, unnormalized semantic scores;
โข๐tmmโLex: Theoretical min-max normalized lexical scores, unnormalized semantic scores;
โข๐zโLex: z-score normalized lexical scores, unnormalized semantic scores; and,
โข๐ผ: The identity transformation, leaving both semantic and lexical scores unnormalized.
We believe these transformations together test the various conditions in our upcoming arguments.
Let us first state the notion of rank-equivalence more formally:
Definition 4.1. We say two functions ๐and๐arerank-equivalent on the setUand write๐๐=๐, if
the order among documents in a set Uinduced by ๐is the same as that induced by ๐. | 2210.11934.pdf |
a09d9440f156-1 | For example, when ๐Sem(๐ฅ)=๐๐ฅ+๐and๐Lex(๐ฅ)=๐๐ฅ+๐are linear transformations of scores
for some positive coefficients ๐,๐and real intercepts ๐,๐, then they can be reduced to the following
rank-equivalent form:
๐Convex(๐,๐)๐=(๐๐ผ)๐Sem(๐,๐)+๐(1โ๐ผ)๐Lex(๐,๐).
In fact, letting ๐ผโฒ=๐๐ผ/[๐๐ผ+๐(1โ๐ผ)]transforms the problem to one of learning a convex
combination of the original scores with a modified weight. This family of functions includes ๐mm,
๐z, and๐tmm, and as such solutions for one family can be transformed to solutions for another
normalization protocol. More formally: | 2210.11934.pdf |
a09d9440f156-2 | normalization protocol. More formally:
Lemma 4.2. For every query, given an arbitrary ๐ผ, there exists a ๐ผโฒsuch that the convex combination
of min-max normalized scores with parameter ๐ผis rank-equivalent to a convex combination of z-score
normalized scores with ๐ผโฒ, and vice versa.
Proof. Write๐oand๐ofor the minimum and maximum scores retrieved by system o, and๐o
and๐ofor their mean and standard deviation. We also write ๐
o=๐oโ๐ofor brevity. For every
document๐, we have the following:
๐ผ๐Sem(๐,๐)โ๐Sem
๐
Sem+(1โ๐ผ)๐Lex(๐,๐)โ๐Lex
๐
Lex๐=๐ผ
๐
sem๐Sem(๐,๐)+1โ๐ผ | 2210.11934.pdf |
a09d9440f156-3 | ๐
Lex๐Lex(๐,๐)
๐=1
๐Sem๐Lex๐ผ
๐
Sem๐Sem(๐,๐)+1โ๐ผ
๐
Lex๐Lex(๐,๐)โ๐ผ
๐
Sem๐Semโ1โ๐ผ
๐
Lex๐Lex
๐=๐ผ
๐
Sem๐Lex ๐Sem(๐,๐)โ๐Sem
๐Sem+1โ๐ผ
๐
Lex๐Sem ๐Lex(๐,๐)โ๐Lex
๐Lex,
where in every step we either added a constant or multiplied the expression by a positive constant,
both rank-preserving operations. Finally, setting
๐ผโฒ=๐ผ
๐
Sem๐Lex/(๐ผ | 2210.11934.pdf |
a09d9440f156-4 | ๐
Sem๐Lex/(๐ผ
๐
Sem๐Lex+1โ๐ผ
๐
Lex๐Sem)
completes the proof. The other direction is similar. โก
The fact above implies that the problem of tuning ๐ผfor a query in a min-max normalization
regime is equivalent to learning ๐ผโฒin a z-score normalized setting. In other words, there is a
one-to-one relationship between these parameters, and as a result solutions can be mapped from
one problem space to the other. However, this statement is only true for individual queries and
does not have any implications for the learning of the weight in the convex combination over an
entire collection of queries. Let us now consider this more complex setup. | 2210.11934.pdf |
79a058f189f6-0 | 111:10 Sebastian Bruch, Siyu Gai, and Amir Ingber
The question we wish to answer is as follows: Under what conditions is ๐Convex with parameter
๐ผand a pair of normalization functions (๐Sem,๐Sem)rank-equivalent to an ๐โฒ
Convexof a new pair of
normalization functions (๐โฒ
Sem,๐โฒ
Lex)with weight ๐ผโฒ? That is, for a constant ๐ผwith one normalization
protocol, when is there a constant ๐ผโฒthat produces the same ranked lists for every query but with
a different normalization protocol? The answer to this question helps us understand whether and
when changing normalization schemes from min-max to z-score, for example, matters. We state
the following definitions followed by a theorem that answers this question.
Definition 4.3. We say๐:RโRis a๐ฟ-expansion with respect to ๐:RโRif for any๐ฅand๐ฆ | 2210.11934.pdf |
79a058f189f6-1 | in the domains of ๐and๐we have that|๐(๐ฆ)โ๐(๐ฅ)|โฅ๐ฟ|๐(๐ฆ)โ๐(๐ฅ)|for some๐ฟโฅ1.
For example, ๐mm(ยท)is an expansion with respect to ๐tmm(ยท)with a factor ๐ฟthat depends on the
range of the scores. As another example, ๐z(ยท)is an expansion with respect to ๐mm(ยท).
Definition 4.4. For two pairs of functions ๐,๐:RโRand๐โฒ,๐โฒ:RโR, and two points ๐ฅand
๐ฆin their domains, we say that ๐โฒexpands with respect to ๐more rapidly than๐โฒexpands with
respect to๐, with a relative expansion rate of๐โฅ1, if the following condition holds: | 2210.11934.pdf |
79a058f189f6-2 | |๐โฒ(๐ฆ)โ๐โฒ(๐ฅ)|
|๐(๐ฆ)โ๐(๐ฅ)|=๐|๐โฒ(๐ฆ)โ๐โฒ(๐ฅ)|
|๐(๐ฆ)โ๐(๐ฅ)|.
When๐is independent of the points ๐ฅand๐ฆ, we call this relative expansion uniform :
|ฮ๐โฒ|/|ฮ๐|
|ฮ๐โฒ|/|ฮ๐|=๐,โ๐ฅ,๐ฆ.
As an example, if ๐and๐are min-max scaling and ๐โฒand๐โฒare z-score normalization, then
their respective rate of expansion is roughly similar. We will later show that this property often
holds empirically across different transformations.
Theorem 4.5. For every choice of ๐ผ, there exists a constant ๐ผโฒsuch that the following functions are | 2210.11934.pdf |
79a058f189f6-3 | rank-equivalent on a collection of queries ๐:
๐Convex =๐ผ๐(๐Sem(๐,๐))+( 1โ๐ผ)๐(๐Lex(๐,๐)),
and
๐โฒ
Convex =๐ผโฒ๐โฒ(๐Sem(๐,๐))+( 1โ๐ผโฒ)๐โฒ(๐Lex(๐,๐)),
if for the monotone functions ๐,๐,๐โฒ,๐โฒ:RโR,๐โฒexpands with respect to ๐more rapidly than ๐โฒ
expands with respect to ๐with a uniform rate ๐.
Proof. Consider a pair of documents ๐๐and๐๐in the ranked list of a query ๐such that๐๐is | 2210.11934.pdf |
79a058f189f6-4 | ranked above ๐๐according to ๐Convex . Shortening ๐o(๐,๐๐)to๐(๐)
ofor brevity, we have that:
๐(๐)
Convex>๐(๐)
Convex=โ๐ผ
(๐(๐(๐)
Sem)โ๐(๐(๐)
Sem))
| {z }
ฮ๐๐ ๐+(๐(๐(๐)
Lex)โ๐(๐(๐)
Lex))
| {z }
ฮ๐๐๐
>๐(๐(๐)
Lex)โ๐(๐(๐)
Lex)
This holds if and only if we have the following:
( | 2210.11934.pdf |
79a058f189f6-5 | Lex)
This holds if and only if we have the following:
(
๐ผ>1/(1+ฮ๐๐ ๐
ฮ๐๐๐),ifฮ๐๐ ๐+ฮ๐๐๐>0,
๐ผ<1/(1+ฮ๐๐ ๐
ฮ๐๐๐),otherwise.(6)
Observe that, because of the monotonicity of a convex combination and the monotonicity of
the normalization functions, the case ฮ๐๐ ๐<0andฮ๐๐๐>0(which implies that the semantic and
lexical scores of ๐๐are both larger than ๐๐) is not valid as it leads to a reversal of ranks. Similarly, | 2210.11934.pdf |
525a513a60ab-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:11
the opposite case ฮ๐๐ ๐>0andฮ๐๐๐<0always leads to the correct order regardless of the weight
in the convex combination. We consider the other two cases separately below.
Case 1: ฮ๐๐ ๐>0andฮ๐๐๐>0. Because of the monotonicity property, we can deduce that
ฮ๐โฒ
๐ ๐>0andฮ๐โฒ
๐๐>0. From Equation (6), for the order between ๐๐and๐๐to be preserved under
the image of ๐โฒ
Convex, we must therefore have the following:
๐ผโฒ>1/(1+ฮ๐โฒ
๐ ๐
ฮ๐โฒ
๐๐).
By assumption, using Definition 4.4, we observe that: | 2210.11934.pdf |
525a513a60ab-1 | By assumption, using Definition 4.4, we observe that:
ฮ๐โฒ
๐ ๐
ฮ๐๐ ๐โฅฮ๐โฒ
๐๐
ฮ๐๐๐=โฮ๐โฒ
๐ ๐
ฮ๐โฒ
๐๐โฅฮ๐๐ ๐
ฮ๐๐๐.
As such, the lower-bound on ๐ผโฒimposed by documents ๐๐and๐๐of query๐,๐ฟโฒ
๐ ๐(๐), is smaller than
the lower-bound on ๐ผ,๐ฟ๐ ๐(๐). Like๐ผ, this case does not additionally constrain ๐ผโฒfrom above (i.e.,
the upper-bound does not change: ๐โฒ | 2210.11934.pdf |
525a513a60ab-2 | the upper-bound does not change: ๐โฒ
๐ ๐(๐)=๐๐ ๐(๐)=1).
Case 2: ฮ๐๐ ๐<0,ฮ๐๐๐<0. Once again, due to monotonicity, it is easy to see that ฮ๐โฒ
๐ ๐<0and
ฮ๐โฒ
๐๐<0. Equation (6) tells us that, for the order to be preserved under ๐โฒ
Convex, we must similarly
have that:
๐ผโฒ<1/(1+ฮ๐โฒ
๐ ๐
ฮ๐โฒ
๐๐).
Once again, by assumption we have that the upper-bound on ๐ผโฒis a translation of the upper-bound
on๐ผto the left. The lower-bound is unaffected and remains 0.
For๐โฒ | 2210.11934.pdf |
525a513a60ab-3 | For๐โฒ
Convexto induce the same order as ๐Convex among all pairs of documents for all queries in ๐,
the intersection of the intervals produced by the constraints on ๐ผโฒhas to be non-empty:
๐ผโฒโร
๐[max
๐ ๐๐ฟโฒ
๐ ๐(๐),min
๐ ๐๐โฒ
๐ ๐(๐)]=[max
๐,๐ ๐๐ฟโฒ
๐ ๐(๐),min
๐,๐ ๐๐โฒ
๐ ๐(๐)]โ โ
.
We next prove that ๐ผโฒis always non-empty to conclude the proof of the theorem. | 2210.11934.pdf |
525a513a60ab-4 | We next prove that ๐ผโฒis always non-empty to conclude the proof of the theorem.
By Equation (6) and the existence of ๐ผ, we know that max ๐,๐ ๐๐ฟ๐ ๐(๐)โคmin ๐,๐ ๐๐๐ ๐(๐). Suppose
that documents ๐๐and๐๐of query๐1maximize the lower-bound, and that documents ๐๐and๐๐of
query๐2minimize the upper-bound. We therefore have that:
1/(1+ฮ๐๐ ๐
ฮ๐๐๐)โค1/(1+ฮ๐๐๐
ฮ๐๐๐)=โฮ๐๐ ๐
ฮ๐๐๐โฅฮ๐๐๐ | 2210.11934.pdf |
525a513a60ab-5 | ฮ๐๐๐
Because of the uniformity of the relative expansion rate, we can deduce that:
ฮ๐โฒ
๐ ๐
ฮ๐โฒ
๐๐โฅฮ๐โฒ
๐๐
ฮ๐โฒ๐๐=โmax
๐,๐ ๐๐ฟโฒ
๐ ๐(๐)โคmin
๐,๐ ๐๐โฒ
๐ ๐(๐).
โก
It is easy to show that the theorem above also holds when the condition is updated to reflect a
shift of lower- and upper-bounds to the right, which happens when ๐โฒcontracts with respect to ๐
more rapidly than ๐โฒdoes with respect to ๐.
The picture painted by Theorem 4.5 is that switching from min-max scaling to z-score nor- | 2210.11934.pdf |
525a513a60ab-6 | malization or any other linear transformation that is bounded and does not severely distort the
distribution of scores, especially among the top-ranking documents, results in a rank-equivalent
function. At most, for any given value of the ranking metric of interest such as NDCG, we should
observe a shift of the weight in the convex combination to the right or left. Figure 3 illustrates this | 2210.11934.pdf |
dc53db166775-0 | 111:12 Sebastian Bruch, Siyu Gai, and Amir Ingber
(a) MS MARCO
(b)Quora
(c)HotpotQA
(d)FiQA
Fig. 3. Effect of normalization on the performance of ๐Convex as a function of ๐ผon the validation set.
effect empirically on select datasets. As anticipated, the peak performance in terms of NDCG shifts
to the left or right depending on the type of normalization.
The uniformity requirement on the relative expansion rate, ๐, in Theorem 4.5 is not as strict and
restrictive as it may appear. First, it is only necessary for ๐to be stable on the set of ordered pairs
of documents as ranked by ๐Convex :
|ฮ๐โฒ
๐ ๐|/|ฮ๐๐ ๐|
|ฮ๐โฒ | 2210.11934.pdf |
dc53db166775-1 | |ฮ๐โฒ
๐๐|/|ฮ๐๐๐|=๐,โ(๐๐,๐๐)st๐Convex(๐๐)>๐Convex(๐๐).
Second, it turns out, close to uniformity (i.e., when ๐isconcentrated around one value) is often
sufficient for the effect to materialize in practice. We observe this phenomenon empirically by fixing
the parameter ๐ผin๐Convex with one transformation and forming ranked lists, then choosing another
transformation and computing its relative expansion rate ๐on all ordered pairs of documents. We
show the measured relative expansion rate in Figure 4 for various transformations.
Figure 4 shows that most pairs of transformations yield a stable relative expansion rate. For
example, if๐Convex uses๐tmmand๐โฒ | 2210.11934.pdf |
dc53db166775-2 | example, if๐Convex uses๐tmmand๐โฒ
Convexuses๐mmโdenoted by ๐tmmโ๐mmโfor every choice of ๐ผ, | 2210.11934.pdf |
415eb4f03f65-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:13
(a) MS MARCO
(b)Quora
(c)HotpotQA
(d)FiQA
Fig. 4. Relative expansion rate of semantic scores with respect to lexical scores, ๐, when changing from one
transformation to another, with 95% confidence intervals. Prior to visualization, we normalize values of ๐
to bring them into a similar scaleโthis only affects aesthetics and readability, but is the reason why the
vertical axis is not scaled. For most transformations and every value of ๐ผ, we observe a stable relative rate of
expansion where ๐concentrates around one value for the vast majority of queries.
the relative expansion rate ๐is concentrated around a constant value. This implies that any ranked
list obtained from ๐Convex can be reconstructed by ๐โฒ
Convex. Interestingly, ๐zโLexโ๐mmโLexhas a
comparatively less stable ๐, but removing normalization altogether (i.e., ๐mmโLexโ๐ผ) dramatically | 2210.11934.pdf |
415eb4f03f65-1 | distorts the expansion rates. This goes some way to explain why normalization and boundedness
are important properties.
In the last two sections, we have answered RQ1: Convex combination is an appropriate fusion
function and its performance is not sensitive to the choice of normalization so long as the transfor-
mation has reasonable properties. Interestingly, the behavior of ๐tmmappears to be more robust to
the data distributionโits peak remains within a small neighborhood as we move from one dataset
to another. We believe the reason ๐tmm-normalized scores are more stable is because it has one
fewer data-dependent statistic in the transformation (i.e., minimum score in the retrieved set is | 2210.11934.pdf |
fe3c91102a70-0 | 111:14 Sebastian Bruch, Siyu Gai, and Amir Ingber
Table 1. Recall@1000 and NDCG@1000 (except SciFact andNFCorpus where cutoff is 100) on the test
split of various datasets for lexical and semantic search as well as hybrid retrieval using RRF [5] (๐=60)
and TM2C2 ( ๐ผ=0.8). The symbolsโกandโindicate statistical significance ( ๐-value <0.01) with respect to
TM2C2 and RRF respectively, according to a paired two-tailed ๐ก-test.
Recall NDCG
Dataset Lex. Sem. TM2C2 RRF Lex. Sem. TM2C2 RRF Oraclein-domainMS MARCO 0.836โกโ0.964โกโ0.974 0.969โก0.309โกโ0.441โกโ0.454 0.425โก0.547 | 2210.11934.pdf |
fe3c91102a70-1 | NQ 0.886โกโ0.978โกโ0.985 0.984 0.382โกโ0.505โก0.542 0.514โก0.637
Quora 0.992โกโ0.999 0.999 0.999 0.800โกโ0.889โกโ0.901 0.877โก0.936zero-shotNFCorpus 0.283โกโ0.314โกโ0.348 0.344 0.298โกโ0.309โกโ0.343 0.326โก0.371
HotpotQA 0.878โกโ0.756โกโ0.884 0.888 0.682โกโ0.520โกโ0.699 0.675โก0.767 | 2210.11934.pdf |
fe3c91102a70-2 | FEVER 0.969โกโ0.931โกโ0.972 0.972 0.689โกโ0.558โกโ0.744 0.721โก0.814
SciFact 0.900โกโ0.932โกโ0.958 0.955 0.698โกโ0.681โกโ0.753 0.730โก0.796
DBPedia 0.540โกโ0.408โกโ0.564 0.567 0.415โกโ0.425โกโ0.512 0.489โก0.553
FiQA 0.720โกโ0.908 0.907 0.904 0.315โกโ0.467โก0.496 0.464โก0.561
replaced with minimum feasible value regardless of the candidate set). In the remainder of this
work, we use ๐tmmand denote a convex combination of scores normalized by it by TM2C2 for | 2210.11934.pdf |
fe3c91102a70-3 | brevity.
5 ANALYSIS OF RECIPROCAL RANK FUSION
Chen et al. [ 5] show that RRF performs better and more reliably than a convex combination of
normalized scores. RRF is computed as follows:
๐RRF(๐,๐)=1
๐+๐Lex(๐,๐)+1
๐+๐Sem(๐,๐), (7)
where๐is a free parameter. The authors of [ 5] take a non-parametric view of RRF, where the
parameter๐is set to its default value 60, in order to apply the fusion to out-of-domain datasets
in a zero-shot manner. In this work, we additionally take a parametric view of RRF, where as we
elaborate later, the number of free parameters is the same as the number of functions being fused
together, a quantity that is always larger than the number of parameters in a convex combination. | 2210.11934.pdf |
fe3c91102a70-4 | together, a quantity that is always larger than the number of parameters in a convex combination.
Let us begin by comparing the performance of RRF and TM2C2 empirically to get a sense of their
relative efficacy. We first verify whether hybrid retrieval leads to significant gains in in-domain and
out-of-domain experiments. In a way, we seek to confirm the findings reported in [ 5] and compare
the two fusion functions in the process.
Table 1 summarizes our results. We note that, we set RRFโs๐to60per [ 5] but tuned TM2C2โs ๐ผ
on the validation set of the in-domain datasets and found that ๐ผ=0.8works well for the three
datasets. In the experiments leading to Table 1, we fix ๐ผ=0.8and evaluate methods on the test
split of the datasets. Per [ 5,39], we have also included the performance of an oracle system that
uses a per-query ๐ผ, to establish an upper-boundโthe oracle knows which value of ๐ผworks best for
any given query. | 2210.11934.pdf |
fe3c91102a70-5 | any given query.
Our results show that hybrid retrieval using RRF outperforms pure-lexical and pure-semantic
retrieval on most datasets. This fusion method is particularly effective on out-of-domain datasets, | 2210.11934.pdf |
82bb45ff1fd9-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:15
(a) in-domain
(b) out-of-domain
Fig. 5. Difference in NDCG@ 1000 of TM2C2 and RRF (positive indicates better ranking quality by TM2C2)
as a function of ๐ผ. When๐ผ=0the model is rank-equivalent to lexical search while ๐ผ=1is rank-equivalent
to semantic search.
rendering the observation of [ 5] a robust finding and asserting once more the remarkable perfor-
mance of RRF in zeros-shot settings.
Contrary to [ 5], however, we find that TM2C2 significantly outperforms RRF on all datasets
in terms of NDCG, and does generally better in terms of Recall. Our observation is consistent
with [39] that TM2C2 substantially boosts NDCG even on in-domain datasets.
To contextualize the effect of ๐ผon ranking quality, we visualize a parameter sweep on the
validation split of in-domain datasets in Figure 5(a), and for completeness, on the test split of | 2210.11934.pdf |
82bb45ff1fd9-1 | out-of-domain datasets in Figure 5(b). These figures also compare the performance of TM2C2 with
RRF by reporting the difference between NDCG of the two methods. These plots show that there
always exists an interval of ๐ผfor which๐TM2C2โป๐RRFwithโปindicating better rank quality.
5.1 Effect of Parameters
Chen et al. rightly argue that because RRF is merely a function of ranks, rather than scores, it
naturally addresses the scale and range problem without requiring normalizationโwhich, as we
showed, is not a consequential choice anyway. While that statement is accurate, we believe it
introduces new problems that must be recognized too.
The first, more minor issue is that ranks cannot be computed exactly unless the entire collection
Dis ranked by retrieval system ofor every query. That is because, there may be documents that
appear in the union set, but not in one of the individual top- ๐sets. Their true rank is therefore
unknown, though is often approximated by ranking documents within the union set. We take this
approach when computing ranks. | 2210.11934.pdf |
82bb45ff1fd9-2 | approach when computing ranks.
The second issue is that, unlike TM2C2, RRF ignores the raw scores and discards information
about their distribution. In this regime, whether or not a document has a low or high semantic score
does not matter so long as its rank in ๐
๐
Semstays the same. It is arguable in this case whether rank is
a stronger signal of relevance than score, a measurement in a metric space where distance matters
greatly. We intuit that, such distortion of distances may result in a loss of valuable information that
would lead to better final ranked lists. | 2210.11934.pdf |
4ea204941815-0 | 111:16 Sebastian Bruch, Siyu Gai, and Amir Ingber
(a) MS MARCO
(b)Quora
(c)NQ
(d)FiQA
(e)HotpotQA
(f)Fever
Fig. 6. Visualization of the reciprocal rank determined by lexical ( ๐๐(๐Lex)=1/(60+๐Lex)) and semantic
(๐๐(๐Sem)=1/(60+๐Sem)) retrieval for query-document pairs sampled from the validation split of each
dataset. Shown in red are up to 20,000positive samples where document is relevant to query, and in black
up to the same number of negative samples.
To understand these issues better, let us first repeat the exercise in Section 4.1 for RRF. In Figure 6,
we have plotted the reciprocal rank (i.e., ๐๐(๐o)=1/(๐+๐o)with๐=60) for sampled query- | 2210.11934.pdf |
4ea204941815-1 | document pairs as before. From the figure, we can see that samples are pulled towards one of the
poles at(0,0)and(1/61,1/61). The former attracts a higher concentration of negative samples
while the latter positive samples. While this separation is somewhat consistent across datasets, | 2210.11934.pdf |
e88b15fd1cbc-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:17
(a) MS MARCO
(b)HotpotQA
Fig. 7. Difference in NDCG@1000 of ๐RRFwith distinct values ๐Lexand๐Sem, and๐RRFwith๐Lex=๐Sem=60
(positive indicates better ranking quality by the former). On MS MARCO, an in-domain dataset, NDCG
improves when ๐Lex>๐Sem, while the opposite effect can be seen for HotpotQA , an out-of-domain dataset.
the concentration around poles and axes changes. Indeed, on HotpotQA andFever there is a
higher concentration of positive documents near the top, whereas on FiQA and the in-domain
datasets more positive samples end up along the vertical line at ๐๐(๐Sem)=1/61, indicating that
lexical ranks matter less. This suggests that a simple addition of reciprocal ranks does not behave
consistently across domains. | 2210.11934.pdf |
e88b15fd1cbc-1 | consistently across domains.
We argued earlier that RRF is parametric and that it, in fact, has as many parameters as there are
retrieval functions to fuse. To see this more clearly, let us rewrite Equation (7) as follows:
๐RRF(๐,๐)=1
๐Lex+๐Lex(๐,๐)+1
๐Sem+๐Sem(๐,๐). (8)
We study the effect of parameters on ๐RRFby comparing the NDCG obtained from RRF with a
particular choice of ๐Lexand๐Semagainst a realization of RRF with๐Lex=๐Sem=60. In this way,
we are able to visualize the impact on performance relative to the baseline configuration that is
typically used in the literature. This difference in NDCG is rendered as a heatmap in Figure 7 for
select datasetsโfigures for all other datasets show a similar pattern.
As a general observation, we note that NDCG swings wildly as a function of RRF parameters. | 2210.11934.pdf |
e88b15fd1cbc-2 | Crucially, performance improves off-diagonal, where the parameter takes on different values for
the semantic and lexical components. On MS MARCO, shown in Figure 7(a), NDCG improves when
๐Lex>๐Sem, while the opposite effect can be seen for HotpotQA , an out-of-domain dataset. This
can be easily explained by the fact that increasing ๐ofor retrieval system oeffectively discounts the
contribution of ranks from oto the final hybrid score. On in-domain datasets where the semantic
model already performs strongly, for example, discounting the lexical system by increasing ๐Lex
leads to better performance.
Having observed that tuning RRF potentially leads to gains in NDCG, we ask if tuned parameters
generalize on out-of-domain datasets. To investigate that question, we tune RRF on in-domain
datasets and pick the value of parameters that maximize NDCG on the validation split of in-domain
datasets, and measure the performance of the resulting function on the test split of all (in-domain
and out-of-domain) datasets. We present the results in Table 2. While tuning a parametric RRF does | 2210.11934.pdf |
4346c658ecf5-0 | 111:18 Sebastian Bruch, Siyu Gai, and Amir Ingber
(a)๐Lex=60,๐Sem=60
(b)๐Lex=10,๐Sem=4
(c)๐Lex=3,๐Sem=5
Fig. 8. Effect of ๐RRFwith select configurations of ๐Lexand๐Semon pairs of ranks from lexical and semantic
systems. When ๐Lex>๐Sem, the fusion function discounts the lexical systemโs contribution.
Table 2. Mean NDCG@1000 (NDCG@100 for SciFact andNFCorpus ) on the test split of various datasets
for hybrid retrieval using TM2C2 ( ๐ผ=0.8) and RRF (๐Lex,๐Sem). The symbolsโกandโindicate statistical
significance ( ๐-value <0.01) with respect to TM2C2 and baseline RRF ( 60,60) respectively, according to a
paired two-tailed ๐ก-test.
NDCG | 2210.11934.pdf |
4346c658ecf5-1 | paired two-tailed ๐ก-test.
NDCG
Dataset TM2C2 RRF(60,60)RRF(5,5)RRF(10,4)in-domainMS MARCO 0.454 0.425โก0.435โกโ0.451โ
NQ 0.542 0.514โก0.521โกโ0.528โกโ
Quora 0.901 0.877โก0.885โกโ0.896โzero-shotNFCorpus 0.343 0.326โก0.335โกโ0.327โก
HotpotQA 0.699 0.675โก0.693โ0.621โกโ
FEVER 0.744 0.721โก0.727โกโ0.649โกโ
SciFact 0.753 0.730โก0.738โก0.715โกโ | 2210.11934.pdf |
4346c658ecf5-2 | DBPedia 0.512 0.489โก0.489โก0.480โกโ
FiQA 0.496 0.464โก0.470โกโ0.482โกโ
indeed lead to gains in NDCG on in-domain datasets, the tuned function does not generalize well
to out-of-domain datasets.
The poor generalization can be explained by the reversal of patterns observed in Figure 7 where
๐Lex>๐Semsuits in-domain datasets better but the opposite is true for out-of-domain datasets. By
modifying๐Lexand๐Semwe modify the fusion of ranks and boost certain regions and discount
others in an imbalanced manner. Figure 8 visualizes this effect on ๐RRFfor particular values of its
parameters. This addresses RQ2.
5.2 Effect of Lipschitz Continuity
In the previous section, we stated an intuition that because RRF does not preserve the distribution
of raw scores, it loses valuable information in the process of fusing retrieval systems. In our final | 2210.11934.pdf |
4346c658ecf5-3 | of raw scores, it loses valuable information in the process of fusing retrieval systems. In our final
research question, RQ3, we investigate if this indeed matters in practice. | 2210.11934.pdf |
6c143f58ddf2-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:19
(a) in-domain
(b) out-of-domain
Fig. 9. The difference in NDCG@1000 of ๐SRRF and๐RRFwith๐=60(positive indicates better ranking quality
bySRRF ) as a function of ๐ฝ.
The notion of โpreservingโ information is well captured by the concept of Lipschitz continuity.6
When a function is Lipschitz continuous with a small Lipschitz constant, it does not oscillate wildly
with a small change to its input. RRF does not have this property because the moment one lexical
(or semantic) score becomes larger than another the function makes a hard transition to a new
value.
We can therefore cast RQ3 as a question of whether Lipschitz continuity is an important property
in practice. To put that hypothesis to the test, we design a smooth approximation of RRF using
known techniques [4, 30].
As expressed in Equation (1), the rank of a document is simply the sum of indicators. It is | 2210.11934.pdf |
6c143f58ddf2-1 | thus trivial to approximate this quantity using a generalized sigmoid with parameter ๐ฝ:๐๐ฝ(๐ฅ)=
1/(1+exp(โ๐ฝ๐ฅ)). As๐ฝapproaches 1, the sigmoid takes its usual Sshape, while ๐ฝโโ produces
a very close approximation of the indicator. Interestingly, the Lipschitz constant of ๐๐ฝ(ยท)is, in fact,
๐ฝ. As๐ฝincreases, the approximation of ranks becomes more accurate, but the Lipschitz constant
becomes larger. When ๐ฝis too small, however, the approximation breaks down but the function
transitions more slowly, thereby preserving much of the characteristics of the underlying data
distribution.
RRF being a function of ranks can now be approximated by plugging in approximate ranks in
Equation (7), resulting in SRRF:
๐SRRF(๐,๐)=1
๐+ห๐Lex(๐,๐)+1 | 2210.11934.pdf |
6c143f58ddf2-2 | ๐+ห๐Lex(๐,๐)+1
๐+ห๐Sem(๐,๐), (9)
where ห๐o(๐,๐๐)=0.5+ร
๐๐โ๐
๐o(๐)๐๐ฝ(๐o(๐,๐๐)โ๐o(๐,๐๐)). By increasing ๐ฝwe increase the Lipschitz
constant of ๐SRRF. This is the lever we need to test the idea that Lipschitz continuity matters and
that functions that do not distort the distributional properties of raw scores lead to better ranking
quality. | 2210.11934.pdf |
6c143f58ddf2-3 | that functions that do not distort the distributional properties of raw scores lead to better ranking
quality.
6A function ๐is Lipschitz continous with constant ๐ฟif||๐(๐ฆ)โ๐(๐ฅ)||๐โค๐ฟ||๐ฆโ๐ฅ||๐for some norm||ยท|| ๐and||ยท|| ๐
on the output and input space of ๐. | 2210.11934.pdf |
7f308cf6cc65-0 | 111:20 Sebastian Bruch, Siyu Gai, and Amir Ingber
(a) in-domain
(b) out-of-domain
Fig. 10. The difference in NDCG@1000 of ๐SRRF and๐RRFwith๐=5(positive indicates better ranking quality
bySRRF ) as a function of ๐ฝ.
Figures 9 and 10 visualize the difference between SRRF and RRF for two settings of η, selected based on the results in Table 2. As anticipated, when β is too small, the approximation error is large and ranking quality degrades. As β becomes larger, ranking quality trends in the direction of RRF. Interestingly, in between these extremes, as β becomes gradually smaller the performance of SRRF improves over the RRF baseline. This effect is more pronounced for the η = 60 setting of RRF, as well as on the out-of-domain datasets.
While we acknowledge the possibility that the approximation in Equation (9) may cause a change
in ranking quality, we expected that change to be a degradation, not an improvement. Given that we instead observe gains from smoothing the function, and that the only other difference between SRRF and RRF is their Lipschitz constant, we believe these results highlight the role of Lipschitz continuity in ranking quality. For completeness, we have also included a comparison of SRRF, RRF, and TM2C2 in Table 3.
6 DISCUSSION
The analysis in this work motivates us to identify and document the properties of a well-behaved
fusion function, and present the principles that, we hope, will guide future research in this space.
These desiderata are stated below.
Monotonicity: When f_o is positively correlated with a target ranking metric (i.e., ordering documents in decreasing order of f_o leads to higher quality), it is natural to require that f_Hybrid be monotone increasing in its arguments. We have already seen, and indeed used, this property in our analysis of the convex combination fusion function. It is trivial to see why this property is crucial: without it, a document that scores higher than another in both systems could nonetheless be ranked lower.
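A tiny counterexample (ours, with made-up scores) makes the failure mode explicit:

```python
# A non-monotone fusion: decreasing in its second argument.
def bad_fusion(lex: float, sem: float) -> float:
    return lex - sem

doc_a = (0.9, 0.9)  # dominates doc_b in both lexical and semantic scores
doc_b = (0.8, 0.1)
assert bad_fusion(*doc_a) < bad_fusion(*doc_b)  # yet doc_a would be ranked lower
```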
Homogeneity: The order induced by a fusion function must be unaffected by a positive rescaling of query and document vectors. That is, \( f_{\text{Hybrid}}(q, d) \stackrel{\pi}{=} f_{\text{Hybrid}}(q, \gamma d) \stackrel{\pi}{=} f_{\text{Hybrid}}(\gamma q, d) \), where \( \stackrel{\pi}{=} \) denotes rank-equivalence and γ > 0. This property prevents any retrieval system from inflating its contribution to the final hybrid score by simply boosting its document or query vectors.
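As a quick numerical check (ours; it assumes a similarity such as the dot product, for which rescaling vectors by γ rescales raw scores by γ), min-max normalization restores homogeneity for a convex combination:

```python
import numpy as np

def minmax(s: np.ndarray) -> np.ndarray:
    return (s - s.min()) / (s.max() - s.min())

rng = np.random.default_rng(0)
lex, sem = rng.random(100), rng.random(100)
alpha, gamma = 0.8, 7.3  # arbitrary illustrative values

fused = (1 - alpha) * minmax(lex) + alpha * minmax(sem)
# (gamma*s - gamma*min) / (gamma*max - gamma*min) = (s - min) / (max - min),
# so rescaling one system's raw scores by gamma cancels out entirely.
rescaled = (1 - alpha) * minmax(gamma * lex) + alpha * minmax(sem)

assert np.allclose(fused, rescaled)  # same fused scores, hence the same order
```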
Table 3. Mean NDCG@1000 (NDCG@100 for SciFact and NFCorpus) on the test split of various datasets for hybrid retrieval using TM2C2 (α = 0.8), RRF(η), and SRRF(η, β). The parameters β are fixed to values that maximize NDCG on the validation split of in-domain datasets. The symbols ‡ and † indicate statistical significance (p-value < 0.01) with respect to TM2C2 and RRF respectively, according to a paired two-tailed t-test.

            Dataset     TM2C2    RRF(60)   SRRF(60,40)   RRF(5)    SRRF(5,100)
in-domain   MS MARCO    0.454    0.425‡    0.431‡†       0.435‡    0.431‡†
            NQ          0.542    0.514‡    0.516‡        0.521‡    0.517‡
            Quora       0.901    0.877‡    0.889‡†       0.885‡    0.889‡†
zero-shot   NFCorpus    0.343    0.326‡    0.338‡†       0.335‡    0.339‡
            HotpotQA    0.699    0.675‡    0.695†        0.693‡    0.705‡†
            FEVER       0.744    0.721‡    0.725‡        0.727‡    0.735‡†
            SciFact     0.753    0.730‡    0.740‡        0.738‡    0.740‡
            DBPedia     0.512    0.489‡    0.501‡†       0.489‡    0.492‡
            FiQA        0.496    0.464‡    0.468‡        0.470‡    0.469‡
Boundedness: Recall that a convex combination without score normalization is often ineffective and inconsistent because BM25 is unbounded, leaving lexical and semantic scores on very different scales. To see this effect, we turn to Figure 11.

We observe in Figure 11(a) that, for in-domain datasets, adding unnormalized lexical scores through a convex combination leads to a severe degradation of ranking quality. We believe this is because the semantic retrieval model, which is fine-tuned on these datasets, already produces ranked lists of high quality, and adding lexical scores that are on a very different scale distorts the rankings. In the out-of-domain experiments shown in Figure 11(b), however, the addition of lexical scores often leads to significant gains in quality. We believe this can be explained in the same way as the in-domain observations: the semantic model generally does poorly on out-of-domain datasets while the lexical retriever does well, but because the semantic scores are bounded and relatively small, they do not significantly distort the rankings produced by the lexical retriever.

To avoid this pitfall, we require that f_Hybrid be bounded: |f_Hybrid| ≤ M for some M > 0. As we have seen, normalizing the raw scores addresses this issue.
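The scale mismatch is easy to reproduce with toy numbers (ours, purely illustrative):

```python
import numpy as np

def minmax(s: np.ndarray) -> np.ndarray:
    return (s - s.min()) / (s.max() - s.min())

bm25 = np.array([42.1, 37.8, 12.3])  # unbounded lexical scores
cos = np.array([0.52, 0.71, 0.93])   # bounded semantic scores; best document last
alpha = 0.8                          # nominally 80% weight on the semantic system

raw = (1 - alpha) * bm25 + alpha * cos
normalized = (1 - alpha) * minmax(bm25) + alpha * minmax(cos)

print(np.argsort(-raw))         # [0 1 2]: the BM25 scale dominates despite alpha
print(np.argsort(-normalized))  # [2 1 0]: the semantic order now carries weight alpha
```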
Lipschitz Continuity: We argued that because RRF does not take the raw scores into consideration, it distorts their distribution and thereby loses valuable information. TM2C2 (or any convex combination of scores), on the other hand, is a smooth function of the scores and preserves much of the character of their underlying distribution. We formalized this idea using the notion of Lipschitz continuity: a larger Lipschitz constant leads to a larger distortion of the retrieval score distribution.
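A small numerical experiment (ours) makes the hard transition visible: an arbitrarily small perturbation of one raw score can swap two ranks and move the RRF contribution by a fixed step, whereas a convex combination of normalized scores moves by an amount proportional to the perturbation.

```python
import numpy as np

def rrf_term(scores: np.ndarray, eta: float = 60.0) -> np.ndarray:
    """Per-document contribution of one system: 1 / (eta + rank), best rank = 1."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(-scores)] = np.arange(1, len(scores) + 1)
    return 1.0 / (eta + ranks)

scores = np.array([0.5000, 0.5001, 0.9000])
bumped = scores.copy()
bumped[0] += 2e-4  # a tiny bump that swaps documents 0 and 1

jump = np.abs(rrf_term(bumped) - rrf_term(scores)).max()
print(jump)  # |1/62 - 1/63|, about 2.6e-4: the same fixed step for any swap margin
```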
Interpretability and Sample Efficiency: The question of hybrid retrieval is an important topic in IR. What makes it particularly pertinent is its zero-shot applicability, a property that makes deep models reusable, reducing computational costs and emissions as a result [3, 34], and enabling resource-constrained research labs to innovate. Given the strong evidence that hybrid retrieval is most valuable when applied to out-of-domain datasets [5], we believe that f_Hybrid should be robust to distributional shifts and should not need training or fine-tuning on target datasets.
Fig. 11. The difference in NDCG of a convex combination of unnormalized scores and pure semantic search (positive indicates better ranking quality by the convex combination) as a function of α. Panels: (a) in-domain; (b) out-of-domain.
This implies that either the function must be non-parametric, that its parameters can be tuned efficiently with respect to the number of training samples required, or that its parameters are interpretable enough that their values can be guided by expert knowledge.

In the absence of a truly non-parametric approach, we believe a fusion function that is more sample-efficient to tune is preferable. Because a convex combination has fewer parameters than the fully parameterized RRF, we expect it to have this property. To confirm, we ask how many training queries it takes to converge to the correct α on a target dataset.
Figure 12 visualizes our experiments, where we plot the NDCG of RRF (η = 60) and TM2C2 with α = 0.8 from Table 1. Additionally, we take the train split of each dataset, sample from it progressively larger subsets (with a step size of 5%), and use each subset to tune the parameters of each function. We then measure the NDCG of the tuned functions on the test split. For the depicted datasets, as well as all other datasets in this work, we observe a similar trend: with less than 5% of the training data, which is often a small set of queries, TM2C2's α converges, regardless of the magnitude of domain shift. This sample efficiency is remarkable because it enables significant gains with little labeling effort. Finally, while RRF does not settle on a value and its parameters are sensitive to the training sample, its performance does more or less converge. Even so, the performance of the fully parameterized RRF remains sub-optimal compared with TM2C2.
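The tuning protocol itself fits in a few lines; the sketch below is our paraphrase of the procedure, where `ndcg` stands for a hypothetical helper that evaluates the fused ranking with a given α over a set of queries.

```python
import numpy as np

def tune_alpha(val_queries, ndcg, step=0.05, seed=0):
    """Grid-search alpha on progressively larger samples of validation queries."""
    grid = np.linspace(0.0, 1.0, 21)  # candidate values of alpha
    rng = np.random.default_rng(seed)
    for frac in np.arange(step, 1.0 + 1e-9, step):
        n = max(1, int(frac * len(val_queries)))
        idx = rng.choice(len(val_queries), size=n, replace=False)
        sample = [val_queries[i] for i in idx]
        best_alpha = max(grid, key=lambda a: ndcg(a, sample))
        yield frac, best_alpha
```

In our reading of Figure 12, the α selected at the 5% mark already matches the one selected on the full validation split.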
In Figure 12, we also include a convex combination of fully parameterized RRF terms, denoted by RRF-CC and defined as:

\[ f_{\text{RRF}}(q, d) = (1 - \alpha)\,\frac{1}{\eta_{\text{Lex}} + \pi_{\text{Lex}}(q, d)} + \alpha\,\frac{1}{\eta_{\text{Sem}} + \pi_{\text{Sem}}(q, d)}, \tag{10} \]

where α, η_Lex, and η_Sem are tunable parameters. The question this formulation tries to answer is whether adding a weight to the combination of the RRF terms affects retrieval quality. From the figure, it is clear that the addition of this parameter does not have a significant impact on overall performance. This serves as further evidence supporting the claim that Lipschitz continuity is an important property.
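For reference, Equation (10) translates directly into code; this sketch (ours) takes precomputed ranks as input.

```python
def rrf_cc(pi_lex: float, pi_sem: float, alpha: float = 0.5,
           eta_lex: float = 60.0, eta_sem: float = 60.0) -> float:
    """Convex combination of fully parameterized RRF terms (Equation 10)."""
    return (1.0 - alpha) / (eta_lex + pi_lex) + alpha / (eta_sem + pi_sem)
```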
Fig. 12. Sample efficiency of TM2C2 and the parameterized variants of RRF (a single parameter where η_Sem = η_Lex; two parameters where η_Sem and η_Lex may take different values; and a third variant that is a convex combination of RRF terms, defined in Equation (10)). We sample progressively larger subsets of the validation set (with a step size of 5%), tune the parameters of each function on the resulting set, and evaluate the tuned function on the test split. The figure depicts NDCG@1000 as a function of the size of the tuning set, averaged over 5 trials, with shaded regions illustrating 95% confidence intervals. For reference, NDCG on the test split for RRF (η = 60) and TM2C2 with α = 0.8 from Table 1 is also plotted. Panels: (a) MS MARCO; (b) Quora; (c) HotpotQA; (d) FEVER.
7 CONCLUSION
We studied the behavior of two popular functions that fuse lexical and semantic retrieval into hybrid retrieval, and identified their advantages and pitfalls. Importantly, we investigated several questions and claims in prior work. We established theoretically that the choice of normalization is not as consequential as once thought for a convex combination-based fusion function. We found that RRF is sensitive to its parameters. We also observed empirically that a convex combination of normalized scores outperforms RRF on in-domain and out-of-domain datasets, a finding that is in disagreement with [5].
We believe that a convex combination with theoretical minimum-maximum normalization (TM2C2) indeed enjoys the properties that are important in a fusion function. Its parameter, too, can be tuned sample-efficiently or set to a reasonable value based on domain knowledge. In our experiments, for example, we found the range α ∈ [0.6, 0.8] to consistently lead to improvements. While we observed that a line appears to be appropriate for a collection of query-document pairs, we acknowledge that this may change if our analysis were conducted on a per-query basis, itself a rather non-trivial effort. For example, it is unclear if bringing non-linearity into the design of the fusion function, or into the normalization itself, leads to a more accurate prediction of α on a per-query basis. We leave an exploration of this question to future work.
We also note that, while our analysis does not exclude the use of multiple retrieval engines as input, and can indeed be extended, both theoretically and empirically, to a setting with more than just lexical and semantic scores, it is nonetheless important to conduct experiments and validate that our findings generalize. We believe, however, that our current assumptions are practical and reflective of the current state of hybrid search, where we typically fuse only lexical and semantic retrieval systems. As such, we leave an extended analysis of fusion over multiple retrieval systems to future work.
ACKNOWLEDGMENTS
We benefited greatly from conversations with Brian Hentschel, Edo Liberty, and Michael Bendersky.
We are grateful to them for their insight and time.
REFERENCES
[1] Nima Asadi. 2013. Multi-Stage Search Architectures for Streaming Documents. University of Maryland.
[2] Nima Asadi and Jimmy Lin. 2013. Effectiveness/Efficiency Tradeoffs for Candidate Generation in Multi-Stage Retrieval Architectures. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval (Dublin, Ireland). 997–1000.
[3] Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2022. ReNeuIR: Reaching Efficiency in Neural Information Retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 3462–3465.
[4] Sebastian Bruch, Masrour Zoghi, Michael Bendersky, and Marc Najork. 2019. Revisiting Approximate Metric Optimization in the Age of Deep Neural Networks. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (Paris, France). 1241–1244.
[5] Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. 2022. Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models. In Advances in Information Retrieval: 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022, Proceedings, Part I (Stavanger, Norway). 95–110.
[6] Gordon V. Cormack, Charles L A Clarke, and Stefan Buettcher. 2009. Reciprocal Rank Fusion Outperforms Condorcet and Individual Rank Learning Methods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. 758–759.
[7] Van Dang, Michael Bendersky, and W. Bruce Croft. 2013. Two-Stage Learning to Rank for Information Retrieval. In Advances in Information Retrieval. Springer, 423–434.
[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186.
[9] Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2353–2359.
[10] Kalervo Järvelin and Jaana Kekäläinen. 2000. IR Evaluation Methods for Retrieving Highly Relevant Documents. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 41–48.
[11] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-Scale Similarity Search with GPUs. IEEE Transactions on Big Data 7 (2021), 535–547.
[12] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
[13] Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020. Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach. arXiv (2020).
[14] Hang Li, Shuai Wang, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, and Guido Zuccon. 2022. To Interpolate or Not to Interpolate: PRF, Dense and Sparse Retrievers. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2495–2500.
[15] Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021. Pretrained Transformers for Text Ranking: BERT and Beyond. arXiv:2010.06467 [cs.IR]
[16] Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Found. Trends Inf. Retr. 3, 3 (2009), 225–331.
[17] Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association for Computational Linguistics 9 (2021), 329–345.
[18] Ji Ma, Ivan Korotkov, Keith Hall, and Ryan T. McDonald. 2020. Hybrid First-Stage Retrieval Models for Biomedical Literature. In CLEF.
[19] Xueguang Ma, Kai Sun, Ronak Pradeep, and Jimmy J. Lin. 2021. A Replication Study of Dense Passage Retriever. arXiv (2021).
[20] Craig Macdonald, Rodrygo L. T. Santos, and Iadh Ounis. 2013. The Whens and Hows of Learning to Rank for Web Search. Information Retrieval 16, 5 (2013), 584–628.
[21] Yu. A. Malkov and D. A. Yashunin. 2016. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. arXiv:1603.09320
[22] Antonio Mallia, Michal Siedlaczek, Joel Mackenzie, and Torsten Suel. 2019. PISA: Performant Indexes and Search for Academia. In Proceedings of the Open-Source IR Replicability Challenge co-located with the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, OSIRRC@SIGIR 2019, Paris, France, July 25, 2019. 50–56. http://ceur-ws.org/Vol-2409/docker08.pdf
[23] Yoshitomo Matsubara, Thuy Vu, and Alessandro Moschitti. 2020. Reranking for Efficient Transformer-Based Answer Selection. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 1577–1580.