An Analysis of Fusion Functions for Hybrid Retrieval

SEBASTIAN BRUCH, Pinecone, USA
SIYU GAI*, University of California, Berkeley, USA
AMIR INGBER, Pinecone, Israel

We study hybrid search in text retrieval where lexical and semantic search are fused together with the intuition that the two are complementary in how they model relevance. In particular, we examine fusion by a convex combination (CC) of lexical and semantic scores, as well as the Reciprocal Rank Fusion (RRF) method, and identify their advantages and potential pitfalls. Contrary to existing studies, we find RRF to be sensitive to its parameters; that the learning of a CC fusion is generally agnostic to the choice of score normalization; that CC outperforms RRF in in-domain and out-of-domain settings; and finally, that CC is sample-efficient, requiring only a small set of training examples to tune its only parameter to a target domain.

CCS Concepts: • Information systems → Retrieval models and ranking; Combination, fusion and federated search.
Additional Key Words and Phrases: Hybrid Retrieval, Lexical and Semantic Search, Fusion Functions

1 INTRODUCTION

Retrieval is the first stage in a multi-stage ranking system [1, 2, 43], where the objective is to find the top-$k$ set of documents that are the most relevant to a given query $q$ from a large collection of documents $\mathcal{D}$. Implicit in this task are two major research questions: (a) how do we measure the relevance between a query $q$ and a document $d \in \mathcal{D}$; and (b) how do we find the top-$k$ documents according to a given similarity metric efficiently? In this work, we are primarily concerned with the former question in the context of text retrieval.

As a fundamental problem in Information Retrieval (IR), the question of the similarity between queries and documents has been explored extensively. Early methods model text as a Bag of Words (BoW) and compute the similarity of two pieces of text using a statistical measure such as
the term frequency-inverse document frequency (TF-IDF) family, with BM25 [32, 33] being its most prominent member. We refer to retrieval with a BoW model as lexical search and to the similarity scores computed by such a system as lexical scores. Lexical search is simple, efficient, (naturally) "zero-shot," and generally effective, but it has important limitations: it is susceptible to the vocabulary mismatch problem and, moreover, does not take into account the semantic similarity of queries and documents [5]. That, it turns out, is what deep learning models are excellent at. With the rise of pre-trained language models such as BERT [8], it is now standard practice to learn a vector representation of queries and documents that does capture their semantics, and thereby to reduce top-$k$ retrieval to the problem of finding the $k$ nearest neighbors in the resulting vector space [9, 12, 15, 31, 39, 42], where closeness is measured using
vector similarity or distance. We refer to this method as semantic search and to the similarity scores computed by such a system as semantic scores.

Hypothesizing that lexical and semantic search are complementary in how they model relevance, recent works [5, 12, 13, 18, 19, 41] began exploring methods to fuse together lexical and semantic retrieval: for a query $q$ and ranked lists of documents $R_{\text{Lex}}$ and $R_{\text{Sem}}$ retrieved separately by lexical and semantic search systems respectively, the task is to construct a final ranked list $R_{\text{Fusion}}$ so as to improve retrieval quality. This is often referred to as hybrid search.

* Contributed to this work during a research internship at Pinecone.

Authors' addresses: Sebastian Bruch, [email protected], Pinecone, New York, NY, USA; Siyu Gai, [email protected],
University of California, Berkeley, Berkeley, CA, USA; Amir Ingber, Pinecone, Tel Aviv, Israel, [email protected].
It is becoming increasingly clear that hybrid search does indeed lead to meaningful gains in retrieval quality, especially when applied to out-of-domain datasets [5, 39]: settings in which the semantic retrieval component uses a model that was not trained or fine-tuned on the target dataset. What is less clear and is worthy of further investigation, however, is how this fusion is done. One intuitive and common approach is to linearly combine lexical and semantic scores [12, 19, 39]. If $f_{\text{Lex}}(q, d)$ and $f_{\text{Sem}}(q, d)$ represent the lexical and semantic scores of document $d$ with respect to query $q$, then a linear (or, more accurately, convex) combination is expressed as $f_{\text{Convex}} = \alpha f_{\text{Sem}} + (1 - \alpha) f_{\text{Lex}}$, where $0 \le \alpha \le 1$. Because lexical scores (such as BM25) and semantic scores
(such as dot product) may be unbounded, they are often normalized with min-max scaling [15, 39] prior to fusion. A recent study [5] argues that a convex combination is sensitive to its parameter $\alpha$ and to the choice of score normalization.¹ Its authors claim, and show empirically, that Reciprocal Rank Fusion (RRF) [6] may instead be a more suitable fusion, as it is non-parametric and may be utilized in a zero-shot manner. They demonstrate its impressive performance even in zero-shot settings on a number of benchmark datasets.

This work was inspired by the claims made in [5]; whereas [5] addresses how various hybrid methods perform relative to one another in an empirical study, we re-examine their findings and analyze why these methods work and what contributes to their relative performance. Our contributions thus can best be summarized as an in-depth examination of fusion functions and their behavior.

As our first research question (RQ1), we investigate whether the convex combination fusion is a reasonable choice and study its sensitivity to the normalization protocol. We show that, while
normalization is essential to create a bounded function and thereby bestow consistency on the fusion across domains, the specific choice of normalization is a rather small detail: there always exist convex combinations of scores normalized by min-max, standard score, or any other linear transformation that are rank-equivalent. In fact, when the fusion is formulated as a per-query learning problem, the solution found for a dataset that is normalized with one scheme can be transformed into a solution for a different choice.

We next investigate the properties of RRF. We first unpack RRF and examine its sensitivity to its parameters as our second research question (RQ2). Contrary to [5], we adopt a parametric view of RRF in which there are as many parameters as there are retrieval functions to fuse, a quantity that is always one more than that in a convex combination. We find that, in contrast to a convex combination, a tuned RRF generalizes poorly to out-of-domain datasets. We then intuit that, because RRF is a function of ranks, it disregards the distribution of scores and, as such, discards useful
information. Observe that the distance between raw scores plays no role in determining their hybrid score: a behavior we find counter-intuitive in a metric space where distance does matter. Examining this property constitutes our third and final research question (RQ3).

Finally, we empirically demonstrate an unsurprising yet important fact: tuning $\alpha$ in a convex combination fusion function is extremely sample-efficient, requiring just a handful of labeled queries to arrive at a value suitable for a target domain, regardless of the magnitude of shift in the data distribution. RRF, on the other hand, is relatively less sample-efficient and converges to a relatively less effective retrieval system.

We believe our findings, both theoretical and empirical, are important and pertinent to research in this field. Our analysis leads us to believe that the convex combination formulation is theoretically sound, empirically effective, sample-efficient, and robust to domain shift. Moreover,
unlike the parameters in RRF, the parameter(s) of a convex function are highly interpretable and, if no training samples are available, can be adjusted to incorporate domain knowledge.

¹ Cf. Section 3.1 in [5]: "This fusion method is sensitive to the score scales ... which needs careful score normalization" (emphasis ours).
We organize the remainder of this article as follows. In Section 2, we review the relevant literature on hybrid search. Section 3 then introduces our adopted notation and provides details of our empirical setup, thereby providing context for the theoretical and empirical analysis of fusion functions. In Section 4, we begin our analysis with a detailed look at the convex combination of retrieval scores. We then examine RRF in Section 5. In Section 6, we summarize our observations and identify the properties a fusion function should have to behave well in hybrid retrieval. We then conclude this work and state future research directions in Section 7.

2 RELATED WORK

A multi-stage ranking system is typically comprised of a retrieval stage and several subsequent re-ranking stages, where the retrieved candidates are ordered using more complex ranking functions [2, 38]. Conventional wisdom has it that retrieval must be recall-oriented, while improving ranking quality
may be left to the re-ranking stages, which are typically Learning to Rank (LtR) models [16, 23, 28, 38, 40]. There is indeed much research on the trade-offs between recall and precision in such multi-stage cascades [7, 20], but a recent study [44] challenges that established convention and presents theoretical analysis suggesting that retrieval must instead optimize precision. We therefore report both recall and NDCG [10], but focus on NDCG where space constraints prevent us from presenting both, or when similar conclusions can be reached regardless of the metric used.

One choice for retrieval that remains popular to date is BM25 [32, 33]. This additive statistic computes a weighted lexical match between query and document terms: it computes, for each query term, the product of its "importance" (i.e., the frequency of the term in a document, normalized by document and global statistics such as average length) and its propensity, a quantity that is inversely proportional to the fraction of documents that contain the term, and adds the scores of
query terms to arrive at the final similarity or relevance score. Because BM25, like other lexical scoring functions, insists on an exact match of terms, even a slight typo can throw the function off. This vocabulary mismatch problem has been the subject of much research in the past, with remedies ranging from pseudo-relevance feedback to document and query expansion techniques [14, 29, 35].
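To make the statistic described above concrete, the sketch below implements one common BM25 variant. The exact formula and smoothing differ across implementations (including PISA's), so treat this as an illustration rather than the paper's implementation; `doc_freq` is a hypothetical mapping from a term to the number of documents containing it, and `k1` and `b` match the hyperparameters used later in Section 3.2.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_doc_len,
               k1=0.9, b=0.4):
    """Score one document against a query with a common BM25 variant."""
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if tf[term] == 0 or doc_freq.get(term, 0) == 0:
            continue  # exact-match requirement: absent terms contribute nothing
        # Propensity (IDF): inversely related to the fraction of documents
        # that contain the term.
        idf = math.log(1 + (num_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
        # Importance: within-document term frequency, normalized by document
        # length relative to the average document length.
        importance = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * importance
    return score
```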
Trying to address the limitations of lexical search can only go so far, however; after all, lexical methods additionally do not capture the semantic similarity between queries and documents, which may be an important signal indicative of relevance. It has been shown that both of these issues can be remedied by Transformer-based [37] pre-trained language models such as BERT [8]. Applied to the ranking task, such models [24, 26-28] have advanced the state-of-the-art dramatically on benchmark datasets [25].

The computationally intensive inference of these deep models often renders them too inefficient for first-stage retrieval, however, making them more suitable for re-ranking stages. But by cleverly disentangling the query and document transformations into the so-called dual-encoder architecture, in which the "embedding" of a document can be computed independently of queries, we can pre-compute document vectors and store them offline. In this way, we substantially reduce the computational cost at inference time, as only the vector representation of the query must be computed online. At a high level, these models project queries and documents onto a low-dimensional vector space where semantically-similar points
stay closer to each other. By doing so, we transform the retrieval problem into one of similarity search or Approximate Nearest Neighbor (ANN) search: the $k$ nearest neighbors to a query vector are the desired top-$k$ documents. This ANN problem can be solved efficiently using a number of algorithms, such as those implemented in FAISS [11] or Hierarchical Navigable Small World Graphs [21], available as open source
packages or through a managed service such as Pinecone², creating an opportunity to use deep models and vector representations for first-stage retrieval [12, 42], a setup that we refer to as semantic search.

Semantic search, however, has its own limitations. Previous studies [5, 36] have shown, for example, that when semantic models are applied to out-of-domain datasets, their performance is often worse than that of BM25. Observing that lexical and semantic retrievers can be complementary in the way they model relevance [5], it is only natural to consider a hybrid approach in which lexical and semantic similarities both contribute to the makeup of the final retrieved list. To date there have been many studies [12, 13, 17-19, 39, 41, 45] that do just that; most focus on in-domain tasks, with one exception [5] that also considers a zero-shot application. Most of these works use only one of the many existing fusion functions in their experiments, but none compares the main ideas comprehensively.
We review the popular fusion functions from these works in the subsequent sections and, through a comparative study, elaborate on what about their behavior may or may not be problematic.

3 SETUP

In the sections that follow, we study fusion functions with a mix of theoretical and empirical analysis. For that reason, we present our notation as well as our empirical setup and evaluation measures here, to provide sufficient context for our arguments.

3.1 Notation

We adopt the following notation in this work. We use $f_o(q, d): \mathcal{Q} \times \mathcal{D} \rightarrow \mathbb{R}$ to denote the score of document $d \in \mathcal{D}$ for query $q \in \mathcal{Q}$ according to the retrieval system $o \in \mathcal{O}$. If $o$ is a semantic retriever, Sem, then $\mathcal{Q}$ and $\mathcal{D}$ are the space of (dense) vectors in $\mathbb{R}^d$ and $f_{\text{Sem}}$ is typically cosine similarity or inner product. Similarly, when $o$ is a lexical retriever, Lex, $\mathcal{Q}$ and $\mathcal{D}$ are high-dimensional sparse
vectors in $\mathbb{R}^{|V|}$, with $|V|$ denoting the size of the vocabulary, and $f_{\text{Lex}}$ is typically BM25. A retrieval system $o$ is the space $\mathcal{Q} \times \mathcal{D}$ equipped with a metric $f_o(\cdot, \cdot)$, which need not be a proper metric. We denote the set of top-$k$ documents retrieved for query $q$ by retrieval system $o$ by $R^k_o(q)$.

We write $\pi_o(q, d)$ to denote the rank of document $d$ with respect to query $q$ according to retrieval system $o$. Note that $\pi_o(q, d_i)$ can be expressed as a sum of indicator functions:

$$\pi_o(q, d_i) = 1 + \sum_{d_j \in R^k_o(q)} \mathbb{1}_{f_o(q, d_j) > f_o(q, d_i)}, \quad (1)$$
where $\mathbb{1}_p$ is $1$ when the predicate $p$ holds and $0$ otherwise. In words, and ignoring the subtleties introduced by the presence of score ties, the rank of document $d$ is one plus the count of documents whose score is larger than the score of $d$.

Hybrid retrieval operates on the product space of $\prod o_i$ with metric $f_{\text{Fusion}}: \prod f_{o_i} \rightarrow \mathbb{R}$. Without loss of generality, in this work we restrict $\prod o_i$ to be Lex × Sem. That is, we only consider the problem of fusing two retrieval scores, but note that much of the analysis can be trivially extended to the fusion of multiple retrieval systems. We refer to this hybrid metric as a fusion function.

A fusion function $f_{\text{Fusion}}$ is typically applied to documents in the union of retrieved sets, $U_k(q) = \bigcup_o R^k_o(q)$, which we simply call the union set. When a document $d$ in the union set is not present in
one of the top-$k$ sets (i.e., $d \in U_k(q)$ but $d \notin R^k_{o_i}(q)$ for some $o_i$), we compute its missing score (i.e., $f_{o_i}(q, d)$) prior to fusion.

² http://pinecone.io
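As an illustration of this notation, the following minimal sketch, with plain dictionaries standing in for real retrieval systems and all names hypothetical, computes ranks per Equation (1), forms the union set, and fills in missing scores prior to fusion.

```python
def rank(scores, d):
    """Rank per Equation (1): one plus the number of documents in the
    retrieved set whose score is strictly larger than that of d."""
    return 1 + sum(1 for s in scores.values() if s > scores[d])

def form_union_set(lex_scores, sem_scores, score_missing):
    """Form U_k(q) from the two top-k sets and compute missing scores.

    lex_scores / sem_scores: dict of doc id -> score for each top-k set.
    score_missing(system, d): stand-in for re-scoring document d with the
    system that did not retrieve it.
    """
    union = set(lex_scores) | set(sem_scores)
    for d in union:
        if d not in lex_scores:
            lex_scores[d] = score_missing("Lex", d)
        if d not in sem_scores:
            sem_scores[d] = score_missing("Sem", d)
    return union
```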
3.2 Empirical Setup

Datasets: We evaluate our methods on a variety of publicly available benchmark datasets, in both in-domain and out-of-domain (zero-shot) settings. One of the datasets is the MS MARCO Passage Retrieval v1 dataset [25], a publicly available retrieval and ranking collection from Microsoft. It consists of roughly 8.8 million short passages which, along with queries in natural language, originate from Bing. The queries are split into non-overlapping train, dev, and eval subsets. We use the train queries for any learning or tuning, and evaluate exclusively on the small dev query set (consisting of 6,980 labeled queries) in our analysis. The dataset also includes relevance labels. We additionally experiment with 8 datasets from the BeIR collection [36]³: Natural Questions (NQ, question answering), Quora (duplicate detection), NFCorpus (medical), HotpotQA (question answering), Fever (fact extraction), SciFact (scientific claim verification), DBPedia (entity search),
and FiQA (financial). For details of and statistics for each dataset, we refer the reader to [36].

Lexical search: We use PISA [22] for keyword-based lexical retrieval. We tokenize queries and documents by whitespace and apply the stemming available in PISA; we do not employ any other preprocessing steps such as stopword removal, lemmatization, or expansion. We use BM25 with the same hyperparameters as [5] (k1 = 0.9 and b = 0.4) to retrieve the top 1,000 candidates.

Semantic search: We use the all-MiniLM-L6-v2 model checkpoint available on HuggingFace⁴ to project queries and documents into 384-dimensional vectors, which can subsequently be used for indexing and top-$k$ retrieval using cosine similarity. This model has been shown to achieve competitive quality on an array of benchmark datasets while remaining compact in size and efficient at inference time⁵, thereby allowing us to conduct extensive experiments with results that are competitive with existing state-of-the-art models. This model has been fine-tuned on a large number of datasets,
exceeding a total of 1 billion pairs of text, including NQ, MS MARCO Passage, and Quora. As such, we consider all experiments on these three datasets as in-domain, and the rest as out-of-domain. We use the exact search for inner product algorithm (IndexFlatIP) from FAISS [11] to retrieve the top 1,000 nearest neighbors.
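A minimal sketch of this retrieval stack under the stated setup, using the sentence-transformers and faiss packages, follows; the toy corpus is a placeholder. Requesting L2-normalized embeddings at encoding time makes exact inner-product search with IndexFlatIP coincide with cosine similarity.

```python
import faiss  # pip install faiss-cpu
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder corpus; in the paper's setup these would be, e.g., MS MARCO passages.
docs = ["a passage about information retrieval", "another passage about dense vectors"]
doc_vectors = model.encode(docs, normalize_embeddings=True)

# Exact inner-product search; with unit-norm vectors this is cosine similarity.
index = faiss.IndexFlatIP(doc_vectors.shape[1])  # 384 dimensions for this model
index.add(doc_vectors)

query_vector = model.encode(["an example query"], normalize_embeddings=True)
scores, ids = index.search(query_vector, 2)  # k = 1000 in the paper's experiments
```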
Evaluation: Unless noted otherwise, we form the union set for every query from the candidates retrieved by the lexical and semantic search systems. We then compute missing scores where required, compute $f_{\text{Fusion}}$ on the union set, and re-order according to the hybrid scores. We then measure Recall@1000 and NDCG@1000 to quantify ranking quality, as recommended by Zamani et al. [44]. On SciFact and NFCorpus, we evaluate Recall and NDCG at rank cutoff 100 due to the small size of these two collections. Note that we choose to evaluate deep (i.e., with a larger rank cut-off) rather than shallow metrics, per the discussion in [39], to understand the performance of each system more completely.

³ Available at https://github.com/beir-cellar/beir
⁴ https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
⁵ Cf. https://sbert.net for details.
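The two measures used throughout can be sketched compactly for binary relevance labels. NDCG conventions vary (gain function, log base); this uses a common formulation and is not necessarily the exact tooling behind the reported numbers.

```python
import math

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant documents found in the top k."""
    if not relevant_ids:
        return 0.0
    hits = sum(1 for d in ranked_ids[:k] if d in relevant_ids)
    return hits / len(relevant_ids)

def ndcg_at_k(ranked_ids, relevant_ids, k):
    """NDCG@k with binary gains: DCG of the ranking divided by the DCG of
    an ideal ranking that places all relevant documents first."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked_ids[:k]) if d in relevant_ids)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant_ids), k)))
    return dcg / ideal if ideal > 0 else 0.0
```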
4 ANALYSIS OF CONVEX COMBINATION OF RETRIEVAL SCORES

We are interested in understanding the behavior and properties of fusion functions. In the remainder of this work, we study through that lens two popular methods that are representative of existing ideas in the literature, beginning with a convex combination of scores. As noted earlier, most existing works use a convex combination of lexical and semantic scores as follows: $f_{\text{Convex}}(q, d) = \alpha f_{\text{Sem}}(q, d) + (1 - \alpha) f_{\text{Lex}}(q, d)$ for some $0 \le \alpha \le 1$. When $\alpha = 1$ the above collapses to semantic scores, and when it is $0$, to lexical scores.

An interesting property of this fusion is that it takes into account the distribution of scores. In other words, the distance between the lexical (or semantic) scores of two documents plays a significant role in determining their final hybrid score. One disadvantage, however, is that the range of $f_{\text{Sem}}$ can be very different from that of $f_{\text{Lex}}$. Moreover, as with TF-IDF in lexical search or inner product in semantic search, the range of an individual function $f_o$ may depend on the norms of the query and document vectors (e.g., BM25 is a function of the number of query terms). As such, any constant $\alpha$ is likely to yield inconsistently-scaled hybrid scores.

The problem above is trivially addressed by applying score normalization prior to fusion [15, 39]. Suppose we have collected a union set $U_k(q)$ for $q$, and that for every candidate we have computed both lexical and semantic scores. Now, consider the min-max scaling of scores, $\phi_{\text{mm}}: \mathbb{R} \rightarrow [0, 1]$, below:
$$\phi_{\text{mm}}(f_o(q, d)) = \frac{f_o(q, d) - m_q}{M_q - m_q}, \quad (2)$$

where $m_q = \min_{d \in U_k(q)} f_o(q, d)$ and $M_q = \max_{d \in U_k(q)} f_o(q, d)$. We note that min-max scaling is the de facto method in the literature, but other choices of $\phi_o(\cdot)$ in the more general expression below:

$$f_{\text{Convex}}(q, d) = \alpha \phi_{\text{Sem}}(f_{\text{Sem}}(q, d)) + (1 - \alpha) \phi_{\text{Lex}}(f_{\text{Lex}}(q, d)), \quad (3)$$
are valid as well, so long as $\phi_{\text{Sem}}, \phi_{\text{Lex}}: \mathbb{R} \rightarrow \mathbb{R}$ are monotone in their argument. For example, for reasons that will become clearer later, we can redefine the normalization by replacing the minimum of the set with the theoretical minimum of the function (i.e., the maximum value that is always less than or equal to all values attainable by the scoring function, or its infimum) to arrive at:

$$\phi_{\text{tmm}}(f_o(q, d)) = \frac{f_o(q, d) - \inf f_o(q, \cdot)}{M_q - \inf f_o(q, \cdot)}. \quad (4)$$

As an example, when $f_{\text{Lex}}$ is BM25, its infimum is $0$; when $f_{\text{Sem}}$ is cosine similarity, that quantity is $-1$. Another popular choice is the standard score (z-score) normalization, which is defined as follows:
$$\phi_{\text{z}}(f_o(q, d)) = \frac{f_o(q, d) - \mu}{\sigma}, \quad (5)$$

where $\mu$ and $\sigma$ denote the mean and standard deviation of the set of scores $f_o(q, \cdot)$ for query $q$.

We will return to normalization shortly, but we make note of one small but important fact: in cases where the variance of lexical (semantic) scores in the union set is $0$, we may skip the fusion step altogether, because retrieval quality will be unaffected by the lexical (semantic) scores. The case where the variance is arbitrarily close to $0$, however, creates challenges for certain normalization functions. While this would make for an interesting theoretical analysis, we do not study this particular setting in this work because, empirically, we observe a reasonably large variance among scores in the union set on all datasets using state-of-the-art lexical and semantic retrieval functions.
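The normalization schemes of Equations (2), (4), and (5), together with the convex combination of Equation (3), can be sketched for a single query's union set as follows; the infimum argument follows the BM25 and cosine similarity examples above.

```python
import statistics

def phi_mm(scores):
    """Min-max scaling, Equation (2)."""
    m, M = min(scores.values()), max(scores.values())
    return {d: (s - m) / (M - m) for d, s in scores.items()}

def phi_tmm(scores, inf_score):
    """Theoretical min-max, Equation (4); inf_score is the scoring
    function's infimum (0 for BM25, -1 for cosine similarity)."""
    M = max(scores.values())
    return {d: (s - inf_score) / (M - inf_score) for d, s in scores.items()}

def phi_z(scores):
    """z-score normalization, Equation (5)."""
    mu = statistics.mean(scores.values())
    sigma = statistics.pstdev(scores.values())
    return {d: (s - mu) / sigma for d, s in scores.items()}

def f_convex(lex_norm, sem_norm, alpha):
    """Convex combination of normalized scores, Equation (3)."""
    return {d: alpha * sem_norm[d] + (1 - alpha) * lex_norm[d] for d in lex_norm}
```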
4.1 Suitability of Convex Combination

A convex combination of scores is a natural choice for creating a mixture of two retrieval systems, but is it a reasonable choice? It has been established in many past empirical studies that $f_{\text{Convex}}$ with min-max normalization often serves as a strong baseline, so the answer to our question appears to be positive. Nonetheless, we believe it is important to understand precisely why this fusion works.

We investigate this question empirically, by visualizing the lexical and semantic scores of query-document pairs from an array of datasets. Because we operate in a two-dimensional space, observing the pattern of positive (where the document is relevant to the query) and negative samples in a plot can reveal a lot about whether and how they are separable, and how the fusion function behaves. To
that end, we sample up to 20,000 positive and up to the same number of negative query-document pairs from the validation split of each dataset, and illustrate the collected points in a scatter plot in Figure 1.

Fig. 1. Visualization of the normalized lexical ($\phi_{\text{tmm}}(f_{\text{Lex}})$) and semantic ($\phi_{\text{tmm}}(f_{\text{Sem}})$) scores of query-document pairs sampled from the validation split of each dataset ((a) MS MARCO, (b) Quora, (c) NQ, (d) FiQA, (e) HotpotQA, (f) Fever). Shown in red are up to 20,000 positive samples where the document is relevant to the query, and in black up to the same number of negative samples. Adding a lexical (semantic) dimension to query-document pairs helps tease out the relevant documents that would be statistically indistinguishable in a one-dimensional semantic (lexical) view of the data, i.e., when samples are projected onto the x (y) axis.
Fig. 2. Effect of $f_{\text{Convex}}$ on pairs of lexical and semantic scores, for (a) $\alpha = 0.6$ and (b) $\alpha = 0.8$.

From these figures, it is clear that positive and negative samples form clusters that are, with some error, separable by a linear function. What differs between datasets is the slope of this separating line. For example, in MS MARCO, Quora, and NQ, which are in-domain datasets, the separating line is almost vertical, suggesting that the semantic scores serve as a sufficiently strong signal of relevance. This is somewhat true of FiQA as well. In the other, out-of-domain datasets, however, the line is rotated counter-clockwise, indicating a more balanced weighting of lexical and semantic scores. Said differently, adding a lexical (semantic) dimension to query-document pairs helps tease out the relevant documents that would be statistically indistinguishable in a one-dimensional semantic (lexical) view of the data. Interestingly, across all datasets, there is a higher concentration of negative samples where lexical scores vanish.
This empirical evidence suggests that lexical and semantic scores may indeed be complementary, an observation that is in agreement with prior work [5], and that a line may be a reasonable choice for distinguishing between positive and negative samples. But while these figures shed light on the shape of the positive and negative clusters and their separability, our problem is not classification but ranking. We seek to order query-document pairs and, as such, separability is less critical and, in fact, not required. It is therefore instructive to understand the effect of a particular convex combination on pairs of lexical and semantic scores. This is visualized in Figure 2 for two values of $\alpha$ in $f_{\text{Convex}}$.

The plots in Figure 2 illustrate how the parameter $\alpha$ determines how different regions of the plane are ranked relative to each other. This is a trivial fact, but it is nonetheless interesting to map these patterns to the distributions in Figure 1. In-domain datasets, for example, form a pattern of
positives and negatives that is, unsurprisingly, more in tune with the $\alpha = 0.8$ setting of $f_{\text{Convex}}$ than with $\alpha = 0.6$.

4.2 Role of Normalization

We have thus far used min-max normalization to be consistent with the literature. In this section, we ask the question first raised by Chen et al. [5]: whether and to what extent the choice of normalization matters, and how carefully one must choose the normalization protocol. In other words, we wish to examine the effect of $\phi_{\text{Sem}}(\cdot)$ and $\phi_{\text{Lex}}(\cdot)$ on the convex combination in Equation (3). Before we begin, let us consider the following suite of functions:
• $\phi_{\text{mm}}$: min-max scaling of Equation (2);
• $\phi_{\text{tmm}}$: theoretical min-max scaling of Equation (4);
• $\phi_{\text{z}}$: z-score normalization of Equation (5);
• $\phi_{\text{mm-Lex}}$: min-max scaled lexical scores, unnormalized semantic scores;
• $\phi_{\text{tmm-Lex}}$: theoretical min-max normalized lexical scores, unnormalized semantic scores;
• $\phi_{\text{z-Lex}}$: z-score normalized lexical scores, unnormalized semantic scores; and
• $I$: the identity transformation, leaving both semantic and lexical scores unnormalized.

We believe these transformations together test the various conditions in our upcoming arguments. Let us first state the notion of rank-equivalence more formally:

Definition 4.1. We say two functions $f$ and $g$ are rank-equivalent on the set $U$, and write $f \stackrel{\pi}{=} g$, if the order among documents in $U$ induced by $f$ is the same as that induced by $g$.
For example, when $\phi_{\text{Sem}}(x) = ax + b$ and $\phi_{\text{Lex}}(x) = cx + d$ are linear transformations of scores, for some positive coefficients $a, c$ and real intercepts $b, d$, they can be reduced to the following rank-equivalent form:

$$f_{\text{Convex}}(q, d) \stackrel{\pi}{=} (a\alpha) f_{\text{Sem}}(q, d) + c(1 - \alpha) f_{\text{Lex}}(q, d).$$

In fact, letting $\alpha' = a\alpha / [a\alpha + c(1 - \alpha)]$ transforms the problem into one of learning a convex combination of the original scores with a modified weight. This family of functions includes $\phi_{\text{mm}}$, $\phi_{\text{z}}$, and $\phi_{\text{tmm}}$, and as such, solutions for one normalization protocol can be transformed into solutions for another. More formally:
Lemma 4.2. For every query, given an arbitrary $\alpha$, there exists an $\alpha'$ such that the convex combination of min-max normalized scores with parameter $\alpha$ is rank-equivalent to the convex combination of z-score normalized scores with parameter $\alpha'$, and vice versa.

Proof. Write $m_o$ and $M_o$ for the minimum and maximum scores retrieved by system $o$, and $\mu_o$ and $\sigma_o$ for their mean and standard deviation. We also write $R_o = M_o - m_o$ for brevity. For every document $d$, we have the following:
$$\alpha \frac{f_{\text{Sem}}(q,d) - m_{\text{Sem}}}{R_{\text{Sem}}} + (1-\alpha) \frac{f_{\text{Lex}}(q,d) - m_{\text{Lex}}}{R_{\text{Lex}}} \stackrel{\pi}{=} \frac{\alpha}{R_{\text{Sem}}} f_{\text{Sem}}(q,d) + \frac{1-\alpha}{R_{\text{Lex}}} f_{\text{Lex}}(q,d)$$
$$\stackrel{\pi}{=} \frac{1}{\sigma_{\text{Sem}}\sigma_{\text{Lex}}} \left[ \frac{\alpha}{R_{\text{Sem}}} f_{\text{Sem}}(q,d) + \frac{1-\alpha}{R_{\text{Lex}}} f_{\text{Lex}}(q,d) - \frac{\alpha}{R_{\text{Sem}}} \mu_{\text{Sem}} - \frac{1-\alpha}{R_{\text{Lex}}} \mu_{\text{Lex}} \right]$$
$$\stackrel{\pi}{=} \frac{\alpha}{R_{\text{Sem}}\sigma_{\text{Lex}}} \cdot \frac{f_{\text{Sem}}(q,d) - \mu_{\text{Sem}}}{\sigma_{\text{Sem}}} + \frac{1-\alpha}{R_{\text{Lex}}\sigma_{\text{Sem}}} \cdot \frac{f_{\text{Lex}}(q,d) - \mu_{\text{Lex}}}{\sigma_{\text{Lex}}},$$

where in every step we either added a constant or multiplied the expression by a positive constant, both rank-preserving operations. Finally, setting

$$\alpha' = \frac{\alpha / (R_{\text{Sem}}\sigma_{\text{Lex}})}{\alpha / (R_{\text{Sem}}\sigma_{\text{Lex}}) + (1-\alpha) / (R_{\text{Lex}}\sigma_{\text{Sem}})}$$

completes the proof. The other direction is similar. □

The fact above implies that the problem of tuning $\alpha$ for a query in a min-max normalization regime is equivalent to learning $\alpha'$ in a z-score normalized setting. In other words, there is a one-to-one relationship between these parameters, and as a result, solutions can be mapped from one problem space to the other.
However, this statement is only true for individual queries; it does not have any implications for the learning of the weight in a convex combination over an entire collection of queries. Let us now consider this more complex setup.

The question we wish to answer is as follows: under what conditions is $f_{\text{Convex}}$ with parameter $\alpha$ and a pair of normalization functions $(\phi_{\text{Sem}}, \phi_{\text{Lex}})$ rank-equivalent to an $f'_{\text{Convex}}$ with a new pair of normalization functions $(\phi'_{\text{Sem}}, \phi'_{\text{Lex}})$ and weight $\alpha'$? That is, for a constant $\alpha$ with one normalization protocol, when is there a constant $\alpha'$ that produces the same ranked lists for every query but with a different normalization protocol? The answer to this question helps us understand whether and when changing normalization schemes, from min-max to z-score for example, matters. We state the following definitions, followed by a theorem that answers this question.

Definition 4.3. We say $f: \mathbb{R} \rightarrow \mathbb{R}$ is a $\delta$-expansion with respect to $g: \mathbb{R} \rightarrow \mathbb{R}$ if for any $x$ and $y$
in the domains of $f$ and $g$, we have that $|f(y) - f(x)| \ge \delta |g(y) - g(x)|$ for some $\delta \ge 1$.

For example, $\phi_{\text{mm}}(\cdot)$ is an expansion with respect to $\phi_{\text{tmm}}(\cdot)$, with a factor $\delta$ that depends on the range of the scores. As another example, $\phi_{\text{z}}(\cdot)$ is an expansion with respect to $\phi_{\text{mm}}(\cdot)$.

Definition 4.4. For two pairs of functions $f, g: \mathbb{R} \rightarrow \mathbb{R}$ and $f', g': \mathbb{R} \rightarrow \mathbb{R}$, and two points $x$ and $y$ in their domains, we say that $f'$ expands with respect to $f$ more rapidly than $g'$ expands with respect to $g$, with a relative expansion rate of $\lambda \ge 1$, if the following condition holds:
$$\frac{|f'(y) - f'(x)|}{|f(y) - f(x)|} = \lambda \frac{|g'(y) - g'(x)|}{|g(y) - g(x)|}.$$

When $\lambda$ is independent of the points $x$ and $y$, we call this relative expansion uniform:

$$\frac{|\Delta f'| / |\Delta f|}{|\Delta g'| / |\Delta g|} = \lambda, \quad \forall x, y.$$

As an example, if $f$ and $g$ are min-max scaling and $f'$ and $g'$ are z-score normalization, then their respective rates of expansion are roughly similar. We will later show that this property often holds empirically across different transformations.

Theorem 4.5. For every choice of $\alpha$, there exists a constant $\alpha'$ such that the following functions are
rank-equivalent on a collection of queries $Q$:

$$f_{\text{Convex}} = \alpha \phi(f_{\text{Sem}}(q, d)) + (1 - \alpha) \omega(f_{\text{Lex}}(q, d)), \text{ and}$$
$$f'_{\text{Convex}} = \alpha' \phi'(f_{\text{Sem}}(q, d)) + (1 - \alpha') \omega'(f_{\text{Lex}}(q, d)),$$

if, for the monotone functions $\phi, \omega, \phi', \omega': \mathbb{R} \rightarrow \mathbb{R}$, $\phi'$ expands with respect to $\phi$ more rapidly than $\omega'$ expands with respect to $\omega$, with a uniform rate $\lambda$.

Proof. Consider a pair of documents $d_i$ and $d_j$ in the ranked list of a query $q$ such that $d_i$ is
ranked above $d_j$ according to $f_{\text{Convex}}$. Shortening $f_o(q, d_k)$ to $f^{(k)}_o$ for brevity, we have that:

$$f^{(i)}_{\text{Convex}} > f^{(j)}_{\text{Convex}} \implies \alpha \Big[ \underbrace{\big(\phi(f^{(i)}_{\text{Sem}}) - \phi(f^{(j)}_{\text{Sem}})\big)}_{\Delta\phi_{ij}} + \underbrace{\big(\omega(f^{(j)}_{\text{Lex}}) - \omega(f^{(i)}_{\text{Lex}})\big)}_{\Delta\omega_{ji}} \Big] > \omega(f^{(j)}_{\text{Lex}}) - \omega(f^{(i)}_{\text{Lex}}).$$

This holds if and only if we have the following:
$$\begin{cases} \alpha > 1 \big/ \big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\big), & \text{if } \Delta\phi_{ij} + \Delta\omega_{ji} > 0, \\ \alpha < 1 \big/ \big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\big), & \text{otherwise.} \end{cases} \quad (6)$$

Observe that, because of the monotonicity of a convex combination and the monotonicity of the normalization functions, the case $\Delta\phi_{ij} < 0$ and $\Delta\omega_{ji} > 0$ (which implies that the semantic and lexical scores of $d_j$ are both larger than those of $d_i$) is not valid, as it leads to a reversal of ranks. Similarly,
the opposite case, $\Delta\phi_{ij} > 0$ and $\Delta\omega_{ji} < 0$, always leads to the correct order regardless of the weight in the convex combination. We consider the other two cases separately below.

Case 1: $\Delta\phi_{ij} > 0$ and $\Delta\omega_{ji} > 0$. Because of the monotonicity property, we can deduce that $\Delta\phi'_{ij} > 0$ and $\Delta\omega'_{ji} > 0$. From Equation (6), for the order between $d_i$ and $d_j$ to be preserved under the image of $f'_{\text{Convex}}$, we must therefore have the following:

$$\alpha' > 1 \big/ \big(1 + \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}}\big).$$

By assumption, using Definition 4.4, we observe that:
$$\frac{\Delta\phi'_{ij}}{\Delta\phi_{ij}} \ge \frac{\Delta\omega'_{ji}}{\Delta\omega_{ji}} \implies \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}} \ge \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}.$$

As such, the lower-bound on $\alpha'$ imposed by documents $d_i$ and $d_j$ of query $q$, $L'_{ij}(q)$, is smaller than the lower-bound on $\alpha$, $L_{ij}(q)$. As with $\alpha$, this case does not additionally constrain $\alpha'$ from above (i.e., the upper-bound does not change: $U'_{ij}(q) = U_{ij}(q) = 1$).
Case 2: $\Delta\phi_{ij} < 0$ and $\Delta\omega_{ji} < 0$. Once again, due to monotonicity, it is easy to see that $\Delta\phi'_{ij} < 0$ and $\Delta\omega'_{ji} < 0$. Equation (6) tells us that, for the order to be preserved under $f'_{\text{Convex}}$, we must similarly have that:

$$\alpha' < 1 \big/ \big(1 + \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}}\big).$$

Once again, by assumption, we have that the upper-bound on $\alpha'$ is a translation of the upper-bound on $\alpha$ to the left. The lower-bound is unaffected and remains $0$.
For $f'_{\text{Convex}}$ to induce the same order as $f_{\text{Convex}}$ among all pairs of documents for all queries in $Q$, the intersection of the intervals produced by the constraints on $\alpha'$ has to be non-empty:

$$I' \triangleq \bigcap_q \Big[ \max_{ij} L'_{ij}(q), \min_{ij} U'_{ij}(q) \Big] = \Big[ \max_{q, ij} L'_{ij}(q), \min_{q, ij} U'_{ij}(q) \Big] \neq \emptyset.$$

We next prove that $I'$ is always non-empty, to conclude the proof of the theorem.
By Equation (6) and the existence of $\alpha$, we know that $\max_{q, ij} L_{ij}(q) \le \min_{q, ij} U_{ij}(q)$. Suppose that documents $d_i$ and $d_j$ of query $q_1$ maximize the lower-bound, and that documents $d_m$ and $d_n$ of query $q_2$ minimize the upper-bound. We therefore have that:

$$1 \big/ \big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\big) \le 1 \big/ \big(1 + \frac{\Delta\phi_{mn}}{\Delta\omega_{nm}}\big) \implies \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}} \ge \frac{\Delta\phi_{mn}}{\Delta\omega_{nm}}.$$
Because of the uniformity of the relative expansion rate, we can deduce that:

$$\frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}} \ge \frac{\Delta\phi'_{mn}}{\Delta\omega'_{nm}} \implies \max_{q, ij} L'_{ij}(q) \le \min_{q, ij} U'_{ij}(q). \qquad \Box$$

It is easy to show that the theorem above also holds when the condition is updated to reflect a shift of the lower- and upper-bounds to the right, which happens when $\phi'$ contracts with respect to $\phi$ more rapidly than $\omega'$ does with respect to $\omega$.

The picture painted by Theorem 4.5 is that switching from min-max scaling to z-score normalization
or any other linear transformation that is bounded and does not severely distort the distribution of scores, especially among the top-ranking documents, results in a rank-equivalent function. At most, for any given value of the ranking metric of interest, such as NDCG, we should observe a shift of the weight in the convex combination to the right or left. Figure 3 illustrates this
effect empirically on select datasets. As anticipated, the peak performance in terms of NDCG shifts to the left or right depending on the type of normalization.

Fig. 3. Effect of normalization on the performance of $f_{\text{Convex}}$ as a function of $\alpha$ on the validation set ((a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA).

The uniformity requirement on the relative expansion rate, $\lambda$, in Theorem 4.5 is not as strict and restrictive as it may appear. First, it is only necessary for $\lambda$ to be stable on the set of ordered pairs of documents as ranked by $f_{\text{Convex}}$:
$$\frac{|\Delta\phi'_{ij}| / |\Delta\phi_{ij}|}{|\Delta\omega'_{ji}| / |\Delta\omega_{ji}|} = \lambda, \quad \forall (d_i, d_j) \text{ s.t. } f_{\text{Convex}}(d_i) > f_{\text{Convex}}(d_j).$$

Second, it turns out that near-uniformity (i.e., $\lambda$ being concentrated around one value) is often sufficient for the effect to materialize in practice. We observe this phenomenon empirically by fixing the parameter $\alpha$ in $f_{\text{Convex}}$ with one transformation and forming ranked lists, then choosing another transformation and computing its relative expansion rate $\lambda$ on all ordered pairs of documents. We show the measured relative expansion rate in Figure 4 for various transformations.
Fig. 4. Relative expansion rate of semantic scores with respect to lexical scores, $\lambda$, when changing from one transformation to another, with 95% confidence intervals ((a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA). Prior to visualization, we normalize the values of $\lambda$ to bring them onto a similar scale; this only affects aesthetics and readability, but it is the reason the vertical axis is not scaled. For most transformations and every value of $\alpha$, we observe a stable relative rate of expansion, where $\lambda$ concentrates around one value for the vast majority of queries.

Figure 4 shows that most pairs of transformations yield a stable relative expansion rate. For example, if $f_{\text{Convex}}$ uses $\phi_{\text{tmm}}$ and $f'_{\text{Convex}}$ uses $\phi_{\text{mm}}$ (denoted by $\phi_{\text{tmm}} \rightarrow \phi_{\text{mm}}$), then for every choice of $\alpha$,
the relative expansion rate $\lambda$ is concentrated around a constant value. This implies that any ranked list obtained from $f_{\text{Convex}}$ can be reconstructed by $f'_{\text{Convex}}$. Interestingly, $\phi_{\text{z-Lex}} \rightarrow \phi_{\text{mm-Lex}}$ has a comparatively less stable $\lambda$, but removing normalization altogether (i.e., $\phi_{\text{mm-Lex}} \rightarrow I$) dramatically
distorts the expansion rates. This goes some way toward explaining why normalization and boundedness are important properties.

In the last two sections, we have answered RQ1: a convex combination is an appropriate fusion function, and its performance is not sensitive to the choice of normalization so long as the transformation has reasonable properties. Interestingly, the behavior of $\phi_{\text{tmm}}$ appears to be more robust to the data distribution: its peak remains within a small neighborhood as we move from one dataset to another. We believe $\phi_{\text{tmm}}$-normalized scores are more stable because the transformation has one fewer data-dependent statistic (i.e., the minimum score in the retrieved set is
replaced with the minimum feasible value regardless of the candidate set). In the remainder of this work, we use $\phi_{\text{tmm}}$ and, for brevity, denote a convex combination of scores normalized by it by TM2C2.

Table 1. Recall@1000 and NDCG@1000 (except SciFact and NFCorpus, where the cutoff is 100) on the test split of various datasets for lexical and semantic search, as well as hybrid retrieval using RRF [5] ($\eta = 60$) and TM2C2 ($\alpha = 0.8$). The symbols ‡ and * indicate statistical significance ($p$-value < 0.01) with respect to TM2C2 and RRF, respectively, according to a paired two-tailed $t$-test.

                           Recall                                NDCG
Dataset        Lex.     Sem.     TM2C2   RRF        Lex.     Sem.     TM2C2   RRF      Oracle
in-domain
  MS MARCO     0.836‡*  0.964‡*  0.974   0.969‡     0.309‡*  0.441‡*  0.454   0.425‡   0.547
  NQ           0.886‡*  0.978‡*  0.985   0.984      0.382‡*  0.505‡   0.542   0.514‡   0.637
  Quora        0.992‡*  0.999    0.999   0.999      0.800‡*  0.889‡*  0.901   0.877‡   0.936
zero-shot
  NFCorpus     0.283‡*  0.314‡*  0.348   0.344      0.298‡*  0.309‡*  0.343   0.326‡   0.371
  HotpotQA     0.878‡*  0.756‡*  0.884   0.888      0.682‡*  0.520‡*  0.699   0.675‡   0.767
  FEVER        0.969‡*  0.931‡*  0.972   0.972      0.689‡*  0.558‡*  0.744   0.721‡   0.814
  SciFact      0.900‡*  0.932‡*  0.958   0.955      0.698‡*  0.681‡*  0.753   0.730‡   0.796
  DBPedia      0.540‡*  0.408‡*  0.564   0.567      0.415‡*  0.425‡*  0.512   0.489‡   0.553
  FiQA         0.720‡*  0.908    0.907   0.904      0.315‡*  0.467‡   0.496   0.464‡   0.561
5 ANALYSIS OF RECIPROCAL RANK FUSION

Chen et al. [5] show that RRF performs better and more reliably than a convex combination of normalized scores. RRF is computed as follows:

$$f_{\text{RRF}}(q, d) = \frac{1}{\eta + \pi_{\text{Lex}}(q, d)} + \frac{1}{\eta + \pi_{\text{Sem}}(q, d)}, \quad (7)$$

where $\eta$ is a free parameter. The authors of [5] take a non-parametric view of RRF, in which the parameter $\eta$ is set to its default value of 60, in order to apply the fusion to out-of-domain datasets in a zero-shot manner. In this work, we additionally take a parametric view of RRF in which, as we elaborate later, the number of free parameters is the same as the number of functions being fused together, a quantity that is always larger than the number of parameters in a convex combination.
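A minimal sketch of Equation (7), assuming both score dictionaries already cover the union set; ranks are approximated by ranking documents within the union, as discussed in Section 5.1.

```python
def rrf(lex_scores, sem_scores, eta=60):
    """Reciprocal Rank Fusion, Equation (7), with eta = 60 as in [5]."""
    def ranks(scores):
        # Rank within the union set, ignoring ties; true collection-wide
        # ranks are unknown for documents outside a system's top-k list.
        ordered = sorted(scores, key=scores.get, reverse=True)
        return {d: r for r, d in enumerate(ordered, start=1)}
    pi_lex, pi_sem = ranks(lex_scores), ranks(sem_scores)
    return {d: 1 / (eta + pi_lex[d]) + 1 / (eta + pi_sem[d])
            for d in lex_scores.keys() & sem_scores.keys()}
```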
Let us begin by comparing the performance of RRF and TM2C2 empirically, to get a sense of their relative efficacy. We first verify whether hybrid retrieval leads to significant gains in in-domain and out-of-domain experiments. In a way, we seek to confirm the findings reported in [5] and to compare the two fusion functions in the process. Table 1 summarizes our results. We note that we set RRF's $\eta$ to 60 per [5], but tuned TM2C2's $\alpha$ on the validation set of the in-domain datasets and found that $\alpha = 0.8$ works well for all three. In the experiments leading to Table 1, we fix $\alpha = 0.8$ and evaluate the methods on the test split of each dataset. Per [5, 39], we have also included the performance of an oracle system that uses a per-query $\alpha$ to establish an upper-bound; the oracle knows which value of $\alpha$ works best for any given query.
Our results show that hybrid retrieval using RRF outperforms pure-lexical and pure-semantic retrieval on most datasets. This fusion method is particularly effective on out-of-domain datasets,
Fig. 5. Difference in NDCG@1000 between TM2C2 and RRF (positive indicates better ranking quality by TM2C2) as a function of $\alpha$, (a) in-domain and (b) out-of-domain. When $\alpha = 0$ the model is rank-equivalent to lexical search, while at $\alpha = 1$ it is rank-equivalent to semantic search.

rendering the observation of [5] a robust finding and asserting once more the remarkable performance of RRF in zero-shot settings. Contrary to [5], however, we find that TM2C2 significantly outperforms RRF on all datasets in terms of NDCG, and does generally better in terms of Recall. Our observation is consistent with [39] in that TM2C2 substantially boosts NDCG even on in-domain datasets.

To contextualize the effect of $\alpha$ on ranking quality, we visualize a parameter sweep on the validation split of the in-domain datasets in Figure 5(a) and, for completeness, on the test split of
the out-of-domain datasets in Figure 5(b). These figures also compare the performance of TM2C2 with RRF by reporting the difference between the NDCG of the two methods. These plots show that there always exists an interval of $\alpha$ for which $f_{\text{TM2C2}} \succ f_{\text{RRF}}$, with $\succ$ indicating better rank quality.

5.1 Effect of Parameters

Chen et al. rightly argue that because RRF is merely a function of ranks, rather than scores, it naturally addresses the scale and range problem without requiring normalization, which, as we showed, is not a consequential choice anyway. While that statement is accurate, we believe RRF introduces new problems that must be recognized too.

The first, more minor, issue is that ranks cannot be computed exactly unless the entire collection $\mathcal{D}$ is ranked by retrieval system $o$ for every query. That is because there may be documents that appear in the union set but not in one of the individual top-$k$ sets. Their true rank is therefore unknown, though it is often approximated by ranking documents within the union set. We take this approach when computing ranks.
The second issue is that, unlike TM2C2, RRF ignores the raw scores and discards information about their distribution. In this regime, whether a document has a low or a high semantic score does not matter so long as its rank in $R^k_{\text{Sem}}$ stays the same. It is arguable in this case whether rank is a stronger signal of relevance than score, a measurement in a metric space where distance matters greatly. We intuit that such distortion of distances may result in a loss of valuable information that would otherwise lead to better final ranked lists.
Fig. 6. Visualization of the reciprocal ranks determined by lexical ($rr(\pi_{\text{Lex}}) = 1/(60 + \pi_{\text{Lex}})$) and semantic ($rr(\pi_{\text{Sem}}) = 1/(60 + \pi_{\text{Sem}})$) retrieval for query-document pairs sampled from the validation split of each dataset ((a) MS MARCO, (b) Quora, (c) NQ, (d) FiQA, (e) HotpotQA, (f) Fever). Shown in red are up to 20,000 positive samples where the document is relevant to the query, and in black up to the same number of negative samples.

To understand these issues better, let us first repeat the exercise of Section 4.1 for RRF. In Figure 6, we have plotted the reciprocal rank (i.e., $rr(\pi_o) = 1/(\eta + \pi_o)$ with $\eta = 60$) for sampled query-
document pairs as before. From the figure, we can see that samples are pulled towards one of the poles at $(0, 0)$ and $(1/61, 1/61)$. The former attracts a higher concentration of negative samples, the latter of positive samples. While this separation is somewhat consistent across datasets,
Fig. 7. Difference in NDCG@1000 between $f_{\text{RRF}}$ with distinct values of $\eta_{\text{Lex}}$ and $\eta_{\text{Sem}}$ and $f_{\text{RRF}}$ with $\eta_{\text{Lex}} = \eta_{\text{Sem}} = 60$ (positive indicates better ranking quality by the former), for (a) MS MARCO and (b) HotpotQA. On MS MARCO, an in-domain dataset, NDCG improves when $\eta_{\text{Lex}} > \eta_{\text{Sem}}$, while the opposite effect can be seen for HotpotQA, an out-of-domain dataset.

the concentration around the poles and axes changes. Indeed, on HotpotQA and Fever there is a higher concentration of positive documents near the top, whereas on FiQA and the in-domain datasets more positive samples end up along the vertical line at $rr(\pi_{\text{Sem}}) = 1/61$, indicating that lexical ranks matter less. This suggests that a simple addition of reciprocal ranks does not behave consistently across domains.
We argued earlier that RRF is parametric and that it, in fact, has as many parameters as there are retrieval functions to fuse. To see this more clearly, let us rewrite Equation (7) as follows:

$$f_{\text{RRF}}(q, d) = \frac{1}{\eta_{\text{Lex}} + \pi_{\text{Lex}}(q, d)} + \frac{1}{\eta_{\text{Sem}} + \pi_{\text{Sem}}(q, d)}. \quad (8)$$

We study the effect of these parameters on $f_{\text{RRF}}$ by comparing the NDCG obtained from RRF with a particular choice of $\eta_{\text{Lex}}$ and $\eta_{\text{Sem}}$ against a realization of RRF with $\eta_{\text{Lex}} = \eta_{\text{Sem}} = 60$. In this way, we are able to visualize the impact on performance relative to the baseline configuration that is typically used in the literature.
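This comparison can be sketched as a parameter sweep; `rrf2` generalizes the earlier RRF sketch to Equation (8), `ndcg_at_k` is the NDCG sketch from Section 3.2, the rank dictionaries are assumed precomputed, and the grid values are illustrative.

```python
def rrf2(pi_lex, pi_sem, eta_lex, eta_sem):
    """Parametric RRF, Equation (8): one eta per retrieval system."""
    return {d: 1 / (eta_lex + pi_lex[d]) + 1 / (eta_sem + pi_sem[d])
            for d in pi_lex}

def sweep(pi_lex, pi_sem, relevant_ids, grid=(3, 5, 10, 20, 60)):
    """NDCG@1000 for each (eta_lex, eta_sem) configuration, reported as a
    difference against the eta_lex = eta_sem = 60 baseline."""
    def ndcg(eta_lex, eta_sem):
        fused = rrf2(pi_lex, pi_sem, eta_lex, eta_sem)
        ranked = sorted(fused, key=fused.get, reverse=True)
        return ndcg_at_k(ranked, relevant_ids, k=1000)
    base = ndcg(60, 60)
    return {(el, es): ndcg(el, es) - base for el in grid for es in grid}
```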
This difference in NDCG is rendered as a heatmap in Figure 7 for select datasets; figures for all other datasets show a similar pattern. As a general observation, we note that NDCG swings wildly as a function of the RRF parameters. Crucially, performance improves off-diagonal, where the parameter takes on different values for the semantic and lexical components. On MS MARCO, shown in Figure 7(a), NDCG improves when $\eta_{\text{Lex}} > \eta_{\text{Sem}}$, while the opposite effect can be seen for HotpotQA, an out-of-domain dataset. This can be easily explained by the fact that increasing $\eta_o$ for retrieval system $o$ effectively discounts the contribution of ranks from $o$ to the final hybrid score. On in-domain datasets where the semantic model already performs strongly, for example, discounting the lexical system by increasing $\eta_{\text{Lex}}$ leads to better performance.

Having observed that tuning RRF potentially leads to gains in NDCG, we ask whether tuned parameters generalize to out-of-domain datasets. To investigate that question, we tune RRF on the in-domain datasets, pick the parameter values that maximize NDCG on their validation splits, and measure the performance of the resulting function on the test split of all (in-domain and out-of-domain) datasets. We present the results in Table 2. While tuning a parametric RRF does
indeed lead to gains in NDCG on the in-domain datasets, the tuned function does not generalize well to out-of-domain datasets. The poor generalization can be explained by the reversal of the patterns observed in Figure 7, where $\eta_{\text{Lex}} > \eta_{\text{Sem}}$ suits in-domain datasets better but the opposite is true for out-of-domain datasets. By modifying $\eta_{\text{Lex}}$ and $\eta_{\text{Sem}}$, we modify the fusion of ranks, boosting certain regions and discounting others in an imbalanced manner. Figure 8 visualizes this effect on $f_{\text{RRF}}$ for particular values of its parameters. This addresses RQ2.

Fig. 8. Effect of $f_{\text{RRF}}$ with select configurations of $\eta_{\text{Lex}}$ and $\eta_{\text{Sem}}$ on pairs of ranks from the lexical and semantic systems: (a) $\eta_{\text{Lex}} = 60, \eta_{\text{Sem}} = 60$; (b) $\eta_{\text{Lex}} = 10, \eta_{\text{Sem}} = 4$; (c) $\eta_{\text{Lex}} = 3, \eta_{\text{Sem}} = 5$. When $\eta_{\text{Lex}} > \eta_{\text{Sem}}$, the fusion function discounts the lexical system's contribution.

Table 2. Mean NDCG@1000 (NDCG@100 for SciFact and NFCorpus) on the test split of various datasets for hybrid retrieval using TM2C2 ($\alpha = 0.8$) and RRF ($\eta_{\text{Lex}}, \eta_{\text{Sem}}$). The symbols ‡ and * indicate statistical significance ($p$-value < 0.01) with respect to TM2C2 and the baseline RRF (60, 60), respectively, according to a paired two-tailed $t$-test.

                           NDCG
Dataset        TM2C2   RRF(60,60)   RRF(5,5)   RRF(10,4)
in-domain
  MS MARCO     0.454   0.425‡       0.435‡*    0.451*
  NQ           0.542   0.514‡       0.521‡*    0.528‡*
  Quora        0.901   0.877‡       0.885‡*    0.896*
zero-shot
  NFCorpus     0.343   0.326‡       0.335‡*    0.327‡
  HotpotQA     0.699   0.675‡       0.693*     0.621‡*
  FEVER        0.744   0.721‡       0.727‡*    0.649‡*
  SciFact      0.753   0.730‡       0.738‡     0.715‡*
  DBPedia      0.512   0.489‡       0.489‡     0.480‡*
  FiQA         0.496   0.464‡       0.470‡*    0.482‡*

5.2 Effect of Lipschitz Continuity

In the previous section, we stated an intuition that because RRF does not preserve the distribution of raw scores, it loses valuable information in the process of fusing retrieval systems. In our final
research question, RQ3, we investigate whether this indeed matters in practice.
Fig. 9. The difference in NDCG@1000 between $f_{\text{SRRF}}$ and $f_{\text{RRF}}$ with $\eta = 60$ (positive indicates better ranking quality by SRRF) as a function of $\beta$, (a) in-domain and (b) out-of-domain.

The notion of "preserving" information is well captured by the concept of Lipschitz continuity.⁶ When a function is Lipschitz continuous with a small Lipschitz constant, it does not oscillate wildly with a small change to its input. RRF does not have this property: the moment one lexical (or semantic) score becomes larger than another, the function makes a hard transition to a new value. We can therefore cast RQ3 as a question of whether Lipschitz continuity is an important property in practice. To put that hypothesis to the test, we design a smooth approximation of RRF using known techniques [4, 30]. As expressed in Equation (1), the rank of a document is simply a sum of indicators. It is
As expressed in Equation (1), the rank of a document is simply a sum of indicators. It is thus trivial to approximate this quantity using a generalized sigmoid with parameter $\beta$: $\sigma_\beta(x) = 1/(1 + \exp(-\beta x))$. As $\beta$ approaches 1, the sigmoid takes its usual S shape, while $\beta \to \infty$ produces a very close approximation of the indicator. Interestingly, the Lipschitz constant of $\sigma_\beta(\cdot)$ is, in fact, $\beta$. As $\beta$ increases, the approximation of ranks becomes more accurate, but the Lipschitz constant becomes larger. When $\beta$ is too small, however, the approximation breaks down, but the function transitions more slowly, thereby preserving much of the characteristics of the underlying data distribution.

RRF, being a function of ranks, can now be approximated by plugging approximate ranks into Equation (7), resulting in SRRF:

$$f_{\text{SRRF}}(q, d) = \frac{1}{\eta + \tilde{\pi}_{\text{Lex}}(q, d)} + \frac{1}{\eta + \tilde{\pi}_{\text{Sem}}(q, d)}, \quad (9)$$

where $\tilde{\pi}_o(q, d_i) = 0.5 + \sum_{d_j \in R_o^k(q)} \sigma_\beta\big(f_o(q, d_j) - f_o(q, d_i)\big)$. By increasing $\beta$ we increase the Lipschitz constant of $f_{\text{SRRF}}$. This is the lever we need to test the idea that Lipschitz continuity matters and that functions that do not distort the distributional properties of raw scores lead to better ranking quality.
⁶ A function $f$ is Lipschitz continuous with constant $L$ if $\lVert f(y) - f(x) \rVert_o \le L \lVert y - x \rVert_i$ for some norms $\lVert \cdot \rVert_o$ and $\lVert \cdot \rVert_i$ on the output and input spaces of $f$.
Fig. 10. The difference in NDCG@1000 of $f_{\text{SRRF}}$ and $f_{\text{RRF}}$ with $\eta=5$ (positive indicates better ranking quality by SRRF) as a function of $\beta$, on (a) in-domain and (b) out-of-domain datasets.

Figures 9 and 10 visualize the difference between SRRF and RRF for two settings of $\eta$, selected based on the results in Table 2. As anticipated, when $\beta$ is too small, the approximation error is large and ranking quality degrades. As $\beta$ becomes larger, ranking quality trends in the direction of RRF. Interestingly, as $\beta$ becomes gradually smaller, the performance of SRRF improves over the RRF baseline. This effect is more pronounced for the $\eta=60$ setting of RRF, as well as on the out-of-domain datasets.
While we acknowledge the possibility that the approximation in Equation (9) may cause a change in ranking quality, we expected that change to be a degradation, not an improvement. However, given that we do observe gains by smoothing the function, and that the only other difference between SRRF and RRF is their Lipschitz constant, we believe these results highlight the role of Lipschitz continuity in ranking quality. For completeness, we have also included a comparison of SRRF, RRF, and TM2C2 in Table 3.

6 DISCUSSION

The analysis in this work motivates us to identify and document the properties of a well-behaved fusion function, and to present the principles that, we hope, will guide future research in this space. These desiderata are stated below.

Monotonicity: When $f_o$ is positively correlated with a target ranking metric (i.e., ordering documents in decreasing order of $f_o$ leads to higher quality), it is natural to require that $f_{\text{Hybrid}}$ be monotone increasing in its arguments.
We have already seen, and indeed used, this property in our analysis of the convex combination fusion function. It is trivial to show why this property is crucial.

Homogeneity: The order induced by a fusion function must be unaffected by a positive re-scaling of query and document vectors. That is, $f_{\text{Hybrid}}(q, d) \overset{\pi}{=} f_{\text{Hybrid}}(q, \gamma d) \overset{\pi}{=} f_{\text{Hybrid}}(\gamma q, d)$, where $\overset{\pi}{=}$ denotes rank-equivalence and $\gamma > 0$. This property prevents any retrieval system from inflating its contribution to the final hybrid score by simply boosting its document or query vectors.
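As a toy illustration of why normalization helps here, the check below verifies a score-level analogue of this property: with min-max normalization, uniformly rescaling one subsystem's raw scores by any $\gamma > 0$ leaves the fused scores, and hence the induced order, unchanged. The numbers are fabricated for illustration; for similarity functions that scale linearly in their inputs, such as the inner product, a uniform rescaling of vectors manifests exactly as such a rescaling of scores. Without the normalization step, the same rescaling would change the fused order.

```python
def min_max(scores):
    """Min-max normalize retrieval scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def convex(lex, sem, alpha=0.8):
    """Convex combination of min-max normalized lexical and semantic scores."""
    return [(1 - alpha) * l + alpha * s
            for l, s in zip(min_max(lex), min_max(sem))]

lex = [12.3, 7.1, 3.4]    # e.g., unbounded BM25 scores (made up)
sem = [0.91, 0.75, 0.88]  # e.g., cosine similarities (made up)
gamma = 5.0               # positive rescaling of the lexical system
original = convex(lex, sem)
rescaled = convex([gamma * s for s in lex], sem)
assert all(abs(a - b) < 1e-9 for a, b in zip(original, rescaled))
```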
Table 3. Mean NDCG@1000 (NDCG@100 for SciFact and NFCorpus) on the test split of various datasets for hybrid retrieval using TM2C2 ($\alpha=0.8$), RRF ($\eta$), and SRRF ($\eta, \beta$). The parameters $\beta$ are fixed to values that maximize NDCG on the validation split of in-domain datasets. The symbols ‡ and ∗ indicate statistical significance ($p$-value < 0.01) with respect to TM2C2 and RRF respectively, according to a paired two-tailed $t$-test.

             Dataset     TM2C2   RRF(60)   SRRF(60,40)   RRF(5)   SRRF(5,100)
in-domain    MS MARCO    0.454   0.425‡    0.431‡∗       0.435‡   0.431‡∗
             NQ          0.542   0.514‡    0.516‡        0.521‡   0.517‡
             Quora       0.901   0.877‡    0.889‡∗       0.885‡   0.889‡∗
zero-shot    NFCorpus    0.343   0.326‡    0.338‡∗       0.335‡   0.339‡
             HotpotQA    0.699   0.675‡    0.695∗        0.693‡   0.705‡∗
             FEVER       0.744   0.721‡    0.725‡        0.727‡   0.735‡∗
             SciFact     0.753   0.730‡    0.740‡        0.738‡   0.740‡
             DBPedia     0.512   0.489‡    0.501‡∗       0.489‡   0.492‡
             FiQA        0.496   0.464‡    0.468‡        0.470‡   0.469‡
Boundedness: Recall that a convex combination without score normalization is often ineffective and inconsistent, because BM25 is unbounded and lexical and semantic scores are on different scales. To see this effect we turn to Figure 11. We observe in Figure 11(a) that, for in-domain datasets, adding the unnormalized lexical scores through a convex combination leads to a severe degradation of ranking quality. We believe this is because the semantic retrieval model, which is fine-tuned on these datasets, already produces ranked lists of high quality, and adding lexical scores that are on a very different scale distorts the rankings and leads to poor performance. In out-of-domain experiments, shown in Figure 11(b), the addition of lexical scores instead often leads to significant gains in quality. We believe this can be explained in the same way as the in-domain observations: the semantic model generally does poorly on out-of-domain datasets while the lexical retriever does well, but because the semantic scores are bounded and relatively small, they do not significantly distort the rankings produced by the lexical retriever. To avoid that pitfall, we require that $f_{\text{Hybrid}}$ be bounded: $|f_{\text{Hybrid}}| \le M$ for some $M > 0$. As we have seen before, normalizing the raw scores addresses this issue.
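A small numeric illustration of this pitfall, with made-up scores: without normalization, the unbounded BM25 magnitudes dominate the convex combination and the semantic signal is effectively discarded.

```python
bm25 = [35.2, 28.9, 11.4]   # unnormalized lexical scores (unbounded)
cos  = [0.62, 0.88, 0.91]   # semantic scores, bounded in [-1, 1]
alpha = 0.8

fused = [(1 - alpha) * b + alpha * c for b, c in zip(bm25, cos)]
# fused is approximately [7.54, 6.48, 3.01]: even with alpha = 0.8
# favoring the semantic system, the induced order follows BM25 alone,
# while the semantic system would have ranked these documents in reverse.
```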
Lipschitz Continuity: We argued that because RRF does not take the raw scores into consideration, it distorts their distribution and thereby loses valuable information. On the other hand, TM2C2 (or any convex combination of scores) is a smooth function of scores and preserves much of the characteristics of its underlying distribution. We formalized this idea using the notion of Lipschitz continuity: a larger Lipschitz constant leads to a larger distortion of the retrieval score distribution.

Interpretability and Sample Efficiency: The question of hybrid retrieval is an important topic in IR. What makes it particularly pertinent is its zero-shot applicability, a property that makes deep models reusable, reducing computational costs and emissions as a result [3, 34], and enabling
resource-constrained research labs to innovate. Given the strong evidence supporting the idea that hybrid retrieval is most valuable when applied to out-of-domain datasets [5], we believe that $f_{\text{Hybrid}}$ should be robust to distributional shifts and should not need training or fine-tuning on target datasets.
Fig. 11. The difference in NDCG between a convex combination of unnormalized scores and pure semantic search (positive indicates better ranking quality by the convex combination) as a function of $\alpha$, on (a) in-domain and (b) out-of-domain datasets.

This implies that either the function must be non-parametric, that its parameters can be tuned efficiently with respect to the number of training samples required, or that its parameters are highly interpretable such that their values can be guided by expert knowledge. In the absence of a truly non-parametric approach, we believe a fusion function that is more sample-efficient to tune is preferred. Because a convex combination has fewer parameters than the fully parameterized RRF, we believe it should have this property. To confirm, we ask how many training queries it takes to converge to the correct $\alpha$ on a target dataset.
Figure 12 visualizes our experiments, where we plot NDCG of RRF ($\eta=60$) and TM2C2 with $\alpha=0.8$ from Table 1. Additionally, we take the train split of each dataset, sample from it progressively larger subsets (with a step size of 5%), and use each subset to tune the parameters of each function. We then measure NDCG of the tuned functions on the test split. For the depicted datasets, as well as all other datasets in this work, we observe a similar trend: with less than 5% of the training data, which is often a small set of queries, TM2C2's $\alpha$ converges, regardless of the magnitude of the domain shift. This sample efficiency is remarkable because it enables significant gains with little labeling effort. Finally, while RRF does not settle on a value and its parameters are sensitive to the training sample, its performance does more or less converge. However, the performance of the fully parameterized RRF is still sub-optimal compared with TM2C2.
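A minimal sketch of this tuning loop follows. The evaluation callback `ndcg_at_alpha` is a stand-in for the reader's own retrieval-and-evaluation harness, not an API from this work; the 5% sampling fraction mirrors the step size used in our experiments.

```python
import random
from typing import Callable, List, Sequence

def tune_alpha(train_queries: List[str],
               ndcg_at_alpha: Callable[[Sequence[str], float], float],
               sample_frac: float = 0.05,
               grid: Sequence[float] = tuple(i / 50 for i in range(51)),
               seed: int = 0) -> float:
    """Grid-search the convex-combination weight on a small query sample.

    Samples `sample_frac` of the training queries, then returns the
    alpha in `grid` that maximizes mean NDCG on that sample.
    """
    rng = random.Random(seed)
    n = max(1, int(sample_frac * len(train_queries)))
    sample = rng.sample(train_queries, n)
    return max(grid, key=lambda alpha: ndcg_at_alpha(sample, alpha))
```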
In Figure 12, we also include a convex combination of fully parameterized RRF terms, denoted by RRF-CC and defined as:

$$f_{\text{RRF-CC}}(q, d) = (1 - \alpha)\,\frac{1}{\eta_{\text{Lex}} + \pi_{\text{Lex}}(q, d)} + \alpha\,\frac{1}{\eta_{\text{Sem}} + \pi_{\text{Sem}}(q, d)}, \quad (10)$$

where $\alpha$, $\eta_{\text{Lex}}$, and $\eta_{\text{Sem}}$ are tunable parameters. The question this formulation tries to answer is whether adding a weight to the combination of the RRF terms affects retrieval quality. From the figure, it is clear that the addition of this parameter does not have a significant impact on overall performance. This serves as additional evidence supporting the claim that Lipschitz continuity is an important property.
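In code, the difference from standard RRF is a single extra weight; a minimal sketch with our own naming:

```python
def rrf_cc(rank_lex: int, rank_sem: int, alpha: float,
           eta_lex: float, eta_sem: float) -> float:
    """Convex combination of RRF terms (Equation 10). All three of
    alpha, eta_lex, and eta_sem are tuned on validation data."""
    return ((1.0 - alpha) / (eta_lex + rank_lex)
            + alpha / (eta_sem + rank_sem))
```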
Fig. 12. Sample efficiency of TM2C2 and the parameterized variants of RRF (a single parameter where $\eta_{\text{Sem}} = \eta_{\text{Lex}}$; two parameters where we allow different values of $\eta_{\text{Sem}}$ and $\eta_{\text{Lex}}$; and a third variant that is a convex combination of RRF terms, defined in Equation (10)), on (a) MS MARCO, (b) Quora, (c) HotpotQA, and (d) FEVER. We sample progressively larger subsets of the validation set (with a step size of 5%), tune the parameters of each function on the resulting set, and evaluate the tuned function on the test split. The figures depict NDCG@1000 as a function of the size of the tuning set, averaged over 5 trials, with shaded regions illustrating 95% confidence intervals. For reference, we have also plotted NDCG on the test split for RRF ($\eta=60$) and TM2C2 with $\alpha=0.8$ from Table 1.
7 CONCLUSION

We studied the behavior of two popular functions that fuse lexical and semantic retrieval into hybrid retrieval, and identified their advantages and pitfalls. Importantly, we investigated several questions and claims in prior work. We established theoretically that the choice of normalization is not as consequential as once thought for a convex combination-based fusion function. We found that RRF is sensitive to its parameters. We also observed empirically that a convex combination of normalized scores outperforms RRF on in-domain and out-of-domain datasets, a finding that disagrees with [5].
We believe that a convex combination with theoretical minimum-maximum normalization (TM2C2) indeed enjoys the properties that are important in a fusion function. Its parameter, too, can be tuned sample-efficiently or set to a reasonable value based on domain knowledge. In our experiments, for example, we found the range $\alpha \in [0.6, 0.8]$ to consistently lead to improvements. While we observed that a line appears to be appropriate for a collection of query-document pairs, we acknowledge that this may change if our analysis were conducted on a per-query basis, itself a rather non-trivial effort. For example, it is unclear whether bringing non-linearity into the design of the fusion function, or into the normalization itself, leads to a more accurate prediction of $\alpha$ on a per-query basis. We leave an exploration of this question to future work.

We also note that, while our analysis does not exclude the use of multiple retrieval engines as
input, and indeed can be extended, both theoretically and empirically, to a setting with more than just lexical and semantic scores, it is nonetheless important to conduct experiments to validate that our findings generalize. We believe, however, that our current assumptions are practical and reflective of the current state of hybrid search, where we typically fuse only lexical and semantic retrieval systems. As such, we leave an extended analysis of fusion over multiple retrieval systems to future work.

ACKNOWLEDGMENTS

We benefited greatly from conversations with Brian Hentschel, Edo Liberty, and Michael Bendersky. We are grateful to them for their insight and time.

REFERENCES

[1] Nima Asadi. 2013. Multi-Stage Search Architectures for Streaming Documents. University of Maryland.
[2] Nima Asadi and Jimmy Lin. 2013. Effectiveness/Efficiency Tradeoffs for Candidate Generation in Multi-Stage Retrieval Architectures. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval (Dublin, Ireland). 997–1000.
[3] Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2022. ReNeuIR: Reaching Efficiency in Neural Information Retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 3462–3465.
[4] Sebastian Bruch, Masrour Zoghi, Michael Bendersky, and Marc Najork. 2019. Revisiting Approximate Metric Optimization in the Age of Deep Neural Networks. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (Paris, France). 1241–1244.
[5] Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. 2022. Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models. In Advances in Information Retrieval: 44th European Conference on IR
Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022, Proceedings, Part I (Stavanger, Norway). 95–110.
[6] Gordon V. Cormack, Charles L. A. Clarke, and Stefan Buettcher. 2009. Reciprocal Rank Fusion Outperforms Condorcet and Individual Rank Learning Methods. 758–759.
[7] Van Dang, Michael Bendersky, and W. Bruce Croft. 2013. Two-Stage Learning to Rank for Information Retrieval. In Advances in Information Retrieval. Springer, 423–434.
[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association
for Computational Linguistics, Minneapolis, Minnesota, 4171–4186.
[9] Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From Distillation to Hard
Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2353–2359.
[10] Kalervo Järvelin and Jaana Kekäläinen. 2000. IR Evaluation Methods for Retrieving Highly Relevant Documents. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 41–48.
[11] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-Scale Similarity Search with GPUs. IEEE Transactions on Big Data 7 (2021), 535–547.
[12] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on
Empirical Methods in Natural Language Processing (EMNLP).
[13] Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020. Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach. ArXiv (2020).
[14] Hang Li, Shuai Wang, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, and Guido Zuccon. 2022. To Interpolate or Not to Interpolate: PRF, Dense and Sparse Retrievers. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2495–2500.
[15] Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021. Pretrained Transformers for Text Ranking: BERT and Beyond.
arXiv:2010.06467 [cs.IR]
[16] Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Found. Trends Inf. Retr. 3, 3 (2009), 225–331.
[17] Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association for Computational Linguistics 9 (2021), 329–345.
[18] Ji Ma, Ivan Korotkov, Keith Hall, and Ryan T. McDonald. 2020. Hybrid First-stage Retrieval Models for Biomedical Literature. In CLEF.
[19] Xueguang Ma, Kai Sun, Ronak Pradeep, and Jimmy J. Lin. 2021. A Replication Study of Dense Passage Retriever. ArXiv (2021).
[20] Craig Macdonald, Rodrygo L. T. Santos, and Iadh Ounis. 2013. The Whens and Hows of Learning to Rank for Web Search.
Information Retrieval 16, 5 (2013), 584–628.
[21] Yu. A. Malkov and D. A. Yashunin. 2016. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs.
[22] Antonio Mallia, Michal Siedlaczek, Joel Mackenzie, and Torsten Suel. 2019. PISA: Performant Indexes and Search for Academia. In Proceedings of the Open-Source IR Replicability Challenge co-located with the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, OSIRRC@SIGIR 2019, Paris, France, July 25, 2019. 50–56. http://ceur-ws.org/Vol-2409/docker08.pdf
[23] Yoshitomo Matsubara, Thuy Vu, and Alessandro Moschitti. 2020. Reranking for Efficient Transformer-Based Answer Selection. 1577–1580.