An Analysis of Fusion Functions for Hybrid Retrieval

SEBASTIAN BRUCH, Pinecone, USA
SIYU GAI*, University of California, Berkeley, USA
AMIR INGBER, Pinecone, Israel

We study hybrid search in text retrieval, where lexical and semantic search are fused together with the intuition that the two are complementary in how they model relevance. In particular, we examine fusion by a convex combination (CC) of lexical and semantic scores, as well as the Reciprocal Rank Fusion (RRF) method, and identify their advantages and potential pitfalls. Contrary to existing studies, we find RRF to be sensitive to its parameters; that the learning of a CC fusion is generally agnostic to the choice of score normalization; that CC outperforms RRF in in-domain and out-of-domain settings; and finally, that CC is sample-efficient, requiring only a small set of training examples to tune its only parameter to a target domain.

CCS Concepts: • Information systems → Retrieval models and ranking; Combination, fusion and federated search.

Additional Key Words and Phrases: Hybrid Retrieval, Lexical and Semantic Search, Fusion Functions

1 INTRODUCTION

Retrieval is the first stage in a multi-stage ranking system [1, 2, 43], where the objective is to find the top-$k$ set of documents that are the most relevant to a given query $q$ from a large collection of documents $\mathcal{D}$. Implicit in this task are two major research questions: (a) how do we measure the relevance between a query $q$ and a document $d \in \mathcal{D}$? and (b) how do we find the top-$k$ documents according to a given similarity metric efficiently? In this work, we are primarily concerned with the former question in the context of text retrieval.

As a fundamental problem in Information Retrieval (IR), the question of the similarity between queries and documents has been explored extensively. Early methods model text as a Bag of Words (BoW) and compute the similarity of two pieces of text using a statistical measure from the term frequency-inverse document frequency (TF-IDF) family, with BM25 [32, 33] being its most prominent member. We refer to retrieval with a BoW model as lexical search and to the similarity scores computed by such a system as lexical scores.

Lexical search is simple, efficient, (naturally) "zero-shot," and generally effective, but it has important limitations: It is susceptible to the vocabulary mismatch problem and, moreover, does not take into account the semantic similarity of queries and documents [5]. That, it turns out, is what deep learning models are excellent at. With the rise of pre-trained language models such as BERT [8], it is now standard practice to learn a vector representation of queries and documents that does capture their semantics, and thereby reduce top-$k$ retrieval to the problem of finding the $k$ nearest neighbors in the resulting vector space [9, 12, 15, 31, 39, 42], where closeness is measured using vector similarity or distance. We refer to this method as semantic search and to the similarity scores computed by such a system as semantic scores.

Hypothesizing that lexical and semantic search are complementary in how they model relevance, recent works [5, 12, 13, 18, 19, 41] began exploring methods to fuse together lexical and semantic retrieval: For a query $q$ and ranked lists of documents $R_{\text{Lex}}$ and $R_{\text{Sem}}$ retrieved separately by lexical and semantic search systems respectively, the task is to construct a final ranked list $R_{\text{Fusion}}$ so as to improve retrieval quality. This is often referred to as hybrid search.

* Contributed to this work during a research internship at Pinecone.

Authors' addresses: Sebastian Bruch, [email protected], Pinecone, New York, NY, USA; Siyu Gai, [email protected], University of California, Berkeley, Berkeley, CA, USA; Amir Ingber, [email protected], Pinecone, Tel Aviv, Israel.
It is becoming increasingly clear that hybrid search does indeed lead to meaningful gains in retrieval quality, especially when applied to out-of-domain datasets [5, 39], i.e., settings in which the semantic retrieval component uses a model that was not trained or fine-tuned on the target dataset. What is less clear and is worthy of further investigation, however, is how this fusion is done.

One intuitive and common approach is to linearly combine lexical and semantic scores [12, 19, 39]. If $f_{\text{Lex}}(q, d)$ and $f_{\text{Sem}}(q, d)$ represent the lexical and semantic scores of document $d$ with respect to query $q$, then a linear (or, more accurately, convex) combination is expressed as $f_{\text{Convex}} = \alpha f_{\text{Sem}} + (1 - \alpha) f_{\text{Lex}}$, where $0 \le \alpha \le 1$. Because lexical scores (such as BM25) and semantic scores (such as dot product) may be unbounded, they are often normalized with min-max scaling [15, 39] prior to fusion.

A recent study [5] argues that the convex combination is sensitive to its parameter $\alpha$ and to the choice of score normalization.¹ They claim and show empirically, instead, that Reciprocal Rank Fusion (RRF) [6] may be a more suitable fusion, as it is non-parametric and may be utilized in a zero-shot manner. They demonstrate its impressive performance even in zero-shot settings on a number of benchmark datasets.

¹ Cf. Section 3.1 in [5]: "This fusion method is sensitive to the score scales ... which needs careful score normalization" (emphasis ours).

This work was inspired by the claims made in [5]; whereas [5] addresses how various hybrid methods perform relative to one another in an empirical study, we re-examine their findings and analyze why these methods work and what contributes to their relative performance. Our contributions thus can best be summarized as an in-depth examination of fusion functions and their behavior.

As our first research question (RQ1), we investigate whether the convex combination fusion is a reasonable choice and study its sensitivity to the normalization protocol. We show that, while normalization is essential to create a bounded function and thereby bestow consistency on the fusion across domains, the specific choice of normalization is a rather small detail: There always exist convex combinations of scores normalized by min-max, standard score, or any other linear transformation that are rank-equivalent. In fact, when the fusion is formulated as a per-query learning problem, the solution found for a dataset that is normalized with one scheme can be transformed into a solution for a different choice.

We next investigate the properties of RRF. We first unpack RRF and examine its sensitivity to its parameters as our second research question (RQ2). Contrary to [5], we adopt a parametric view of RRF, in which there are as many parameters as there are retrieval functions to fuse, a quantity that is always one more than that in a convex combination. We find that, in contrast to a convex combination, a tuned RRF generalizes poorly to out-of-domain datasets. We then intuit that, because RRF is a function of ranks, it disregards the distribution of scores and, as such, discards useful information. Observe that the distance between raw scores plays no role in determining their hybrid score, a behavior we find counter-intuitive in a metric space where distance does matter. Examining this property constitutes our third and final research question (RQ3).

Finally, we empirically demonstrate an unsurprising yet important fact: Tuning $\alpha$ in a convex combination fusion function is extremely sample-efficient, requiring just a handful of labeled queries to arrive at a value suitable for a target domain, regardless of the magnitude of shift in the data distribution. RRF, on the other hand, is relatively less sample-efficient and converges to a relatively less effective retrieval system.

We believe our findings, both theoretical and empirical, are important and pertinent to research in this field. Our analysis leads us to believe that the convex combination formulation is theoretically sound, empirically effective, sample-efficient, and robust to domain shift. Moreover,
unlike the parameters in RRF, the parameter(s) of a convex function are highly interpretable and, if no training samples are available, can be adjusted to incorporate domain knowledge.

We organized the remainder of this article as follows. In Section 2, we review the relevant literature on hybrid search. Section 3 then introduces our adopted notation and provides details of our empirical setup, thereby providing context for the theoretical and empirical analysis of fusion functions. In Section 4, we begin our analysis with a detailed look at the convex combination of retrieval scores. We then examine RRF in Section 5. In Section 6, we summarize our observations and identify the properties a fusion function should have to behave well in hybrid retrieval. We then conclude this work and state future research directions in Section 7.

2 RELATED WORK

A multi-stage ranking system typically comprises a retrieval stage and several subsequent re-ranking stages, where the retrieved candidates are ordered using more complex ranking functions [2, 38]. Conventional wisdom holds that retrieval must be recall-oriented, while improving ranking quality may be left to the re-ranking stages, which are typically Learning to Rank (LtR) models [16, 23, 28, 38, 40]. There is indeed much research on the trade-offs between recall and precision in such multi-stage cascades [7, 20], but a recent study [44] challenges that established convention and presents theoretical analysis suggesting that retrieval must instead optimize precision. We therefore report both Recall and NDCG [10], but focus on NDCG where space constraints prevent us from presenting both, or when similar conclusions can be reached regardless of the metric used.

One choice for retrieval that remains popular to date is BM25 [32, 33]. This additive statistic computes a weighted lexical match between query and document terms: For each query term, it computes the product of the term's "importance" (i.e., its frequency in a document, normalized by document and global statistics such as average length) and its propensity, a quantity that is inversely proportional to the fraction of documents that contain the term, and adds up the scores of the query terms to arrive at the final similarity or relevance score.
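To make the description above concrete, the following is a minimal sketch of a standard Okapi BM25 scorer in Python. It is illustrative rather than the exact variant implemented in PISA (the system used in our experiments); IDF formulations differ slightly across systems, and all names here are our own.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_doc_len,
               k1=0.9, b=0.4):
    """Score one document against a query with Okapi BM25 (a sketch).

    doc_freq maps a term to the number of documents containing it;
    num_docs and avg_doc_len are global collection statistics.
    """
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue  # BM25 only rewards exact term matches
        # Propensity: inversely related to the fraction of documents
        # containing the term (one common IDF variant).
        idf = math.log(1 + (num_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
        # Importance: within-document frequency, normalized by document
        # length relative to the collection average.
        importance = (tf[term] * (k1 + 1)) / (
            tf[term] + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * importance
    return score
```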
Because BM25, like other lexical scoring functions, insists on an exact match of terms, even a slight typo can throw the function off. This vocabulary mismatch problem has been the subject of much research in the past, with remedies ranging from pseudo-relevance feedback to document and query expansion techniques [14, 29, 35].

Trying to address the limitations of lexical search can only go so far, however. After all, lexical scores additionally do not capture the semantic similarity between queries and documents, which may be an important signal indicative of relevance. It has been shown that both of these issues can be remedied by Transformer-based [37] pre-trained language models such as BERT [8]. Applied to the ranking task, such models [24, 26-28] have advanced the state-of-the-art dramatically on benchmark datasets [25].

The computationally intensive inference of these deep models often renders them too inefficient for first-stage retrieval, however, making them more suitable for re-ranking stages. But by cleverly disentangling the query and document transformations into the so-called dual-encoder architecture, in which the "embedding" of a document can be computed independently of queries, we can pre-compute document vectors and store them offline. In this way, we substantially reduce the computational cost at inference time, as only the vector representation of the query must be computed then. At a high level, these models project queries and documents onto a low-dimensional vector space in which semantically-similar points stay closer to each other. By doing so, we transform the retrieval problem into one of similarity search, or Approximate Nearest Neighbor (ANN) search: the $k$ nearest neighbors to a query vector are the desired top-$k$ documents. This ANN problem can be solved efficiently using a number of algorithms and libraries, such as FAISS [11] or Hierarchical Navigable Small World Graphs [21], available as open-source packages or through a managed service such as Pinecone (http://pinecone.io), creating an opportunity to use deep models and vector representations for first-stage retrieval [12, 42], a setup that we refer to as semantic search.

Semantic search, however, has its own limitations. Previous studies [5, 36] have shown, for example, that when applied to out-of-domain datasets, its performance is often worse than BM25's. Observing that lexical and semantic retrievers can be complementary in the way they model relevance [5], it is only natural to consider a hybrid approach where lexical and semantic similarities both contribute to the makeup of the final retrieved list. To date there have been many studies [12, 13, 17-19, 39, 41, 45] that do just that, where most focus on in-domain tasks, with one exception [5] that also considers a zero-shot application. Most of these works use only one of the many existing fusion functions in their experiments, but none compares the main ideas comprehensively.
We review the popular fusion functions from these works in the subsequent sections and, through a comparative study, elaborate on what about their behavior may or may not be problematic.

3 SETUP

In the sections that follow, we study fusion functions with a mix of theoretical and empirical analysis. For that reason, we present our notation as well as our empirical setup and evaluation measures here, to provide sufficient context for our arguments.

3.1 Notation

We adopt the following notation in this work. We use $f_o(q, d): \mathcal{Q} \times \mathcal{D} \rightarrow \mathbb{R}$ to denote the score of document $d \in \mathcal{D}$ for query $q \in \mathcal{Q}$ according to the retrieval system $o \in \mathcal{O}$. If $o$ is a semantic retriever, Sem, then $\mathcal{Q}$ and $\mathcal{D}$ are the space of (dense) vectors in $\mathbb{R}^d$ and $f_{\text{Sem}}$ is typically cosine similarity or inner product. Similarly, when $o$ is a lexical retriever, Lex, $\mathcal{Q}$ and $\mathcal{D}$ are high-dimensional sparse vectors in $\mathbb{R}^{|V|}$, with $|V|$ denoting the size of the vocabulary, and $f_{\text{Lex}}$ is typically BM25. A retrieval system $o$ is the space $\mathcal{Q} \times \mathcal{D}$ equipped with a metric $f_o(\cdot, \cdot)$, which need not be a proper metric.

We denote the set of top-$k$ documents retrieved for query $q$ by retrieval system $o$ as $R^k_o(q)$. We write $\pi_o(q, d)$ to denote the rank of document $d$ with respect to query $q$ according to retrieval system $o$. Note that $\pi_o(q, d_i)$ can be expressed as a sum of indicator functions:

$$\pi_o(q, d_i) = 1 + \sum_{d_j \in R^k_o(q)} \mathbb{1}_{f_o(q, d_j) > f_o(q, d_i)}, \qquad (1)$$

where $\mathbb{1}_c$ is $1$ when the predicate $c$ holds and $0$ otherwise. In words, and ignoring the subtleties introduced by the presence of score ties, the rank of document $d$ is one plus the count of documents whose score is larger than the score of $d$.

Hybrid retrieval operates on the product space $\prod o_i$ with metric $f_{\text{Fusion}}: \prod f_{o_i} \rightarrow \mathbb{R}$. Without loss of generality, in this work we restrict $\prod o_i$ to Lex $\times$ Sem. That is, we only consider the problem of fusing two retrieval scores, but note that much of the analysis can be trivially extended to the fusion of multiple retrieval systems. We refer to this hybrid metric as a fusion function.

A fusion function $f_{\text{Fusion}}$ is typically applied to documents in the union of the retrieved sets, $U^k(q) = \bigcup_o R^k_o(q)$, which we simply call the union set. When a document $d$ in the union set is not present in one of the top-$k$ sets (i.e., $d \in U^k(q)$ but $d \notin R^k_{o_i}(q)$ for some $o_i$), we compute its missing score (i.e., $f_{o_i}(q, d)$) prior to fusion.
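The following sketch translates this notation into code: it computes ranks per Equation (1), forms the union set, and fills in missing scores. The score dictionaries and the `score_lex`/`score_sem` callables are hypothetical stand-ins for the two retrieval systems, not part of the paper's artifacts.

```python
def rank(scores, d):
    """Rank of d per Equation (1): one plus the number of documents in
    the set whose score is strictly higher (score ties ignored)."""
    return 1 + sum(1 for s in scores.values() if s > scores[d])

def union_set_scores(lex_topk, sem_topk, score_lex, score_sem):
    """Form the union set and fill in missing scores prior to fusion.

    lex_topk/sem_topk map document ids in each top-k set to scores;
    score_lex/score_sem are callables computing f_o(q, d) on demand,
    hypothetical hooks into the two retrieval systems.
    """
    union = set(lex_topk) | set(sem_topk)
    lex = {d: lex_topk[d] if d in lex_topk else score_lex(d) for d in union}
    sem = {d: sem_topk[d] if d in sem_topk else score_sem(d) for d in union}
    return lex, sem
```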
3.2 Empirical Setup

Datasets: We evaluate our methods on a variety of publicly available benchmark datasets, in both in-domain and out-of-domain, zero-shot settings. One of the datasets is the MS MARCO Passage Retrieval v1 dataset [25], a publicly available retrieval and ranking collection from Microsoft. It consists of roughly 8.8 million short passages which, along with queries in natural language, originate from Bing. The queries are split into train, dev, and eval non-overlapping subsets, and relevance labels are included in the dataset. We use the train queries for any learning or tuning, and evaluate exclusively on the small dev query set (consisting of 6,980 labeled queries) in our analysis.

We additionally experiment with 8 datasets from the BeIR collection [36] (available at https://github.com/beir-cellar/beir): Natural Questions (NQ, question answering), Quora (duplicate detection), NFCorpus (medical), HotpotQA (question answering), Fever (fact extraction), SciFact (scientific claim verification), DBPedia (entity search), and FiQA (financial). For details of and statistics for each dataset, we refer the reader to [36].

Lexical search: We use PISA [22] for keyword-based lexical retrieval. We tokenize queries and documents by whitespace and apply the stemming available in PISA; we do not employ any other preprocessing steps such as stopword removal, lemmatization, or expansion. We use BM25 with the same hyperparameters as [5] ($k_1 = 0.9$ and $b = 0.4$) to retrieve the top 1,000 candidates.

Semantic search: We use the all-MiniLM-L6-v2 model checkpoint available on HuggingFace (https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) to project queries and documents into 384-dimensional vectors, which can subsequently be used for indexing and top-$k$ retrieval using cosine similarity. This model has been shown to achieve competitive quality on an array of benchmark datasets while remaining compact in size and efficient to infer (cf. https://sbert.net for details), thereby allowing us to conduct extensive experiments with results that are competitive with existing state-of-the-art models. This model has been fine-tuned on a large number of datasets, exceeding a total of 1 billion pairs of text, including NQ, MS MARCO Passage, and Quora. As such, we consider all experiments on these three datasets as in-domain, and the rest as out-of-domain. We use the exact search for inner product algorithm (IndexFlatIP) from FAISS [11] to retrieve the top 1,000 nearest neighbors.
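For concreteness, below is a minimal sketch of this dense-retrieval pipeline using the sentence-transformers and faiss packages. The corpus and query are placeholders; with embeddings normalized to unit length, inner product coincides with cosine similarity.

```python
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

corpus = ["first passage ...", "second passage ..."]  # placeholder documents
# Unit-normalized embeddings make inner product equal to cosine similarity.
doc_vecs = model.encode(corpus, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # exact inner-product search
index.add(doc_vecs)

query_vec = model.encode(["example query"], normalize_embeddings=True)
scores, ids = index.search(query_vec, 1000)  # top-1000 semantic candidates
```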
Evaluation: Unless noted otherwise, we form the union set for every query from the candidates retrieved by the lexical and semantic search systems. We then compute missing scores where required, compute $f_{\text{Fusion}}$ on the union set, and re-order according to the hybrid scores. We then measure Recall@1000 and NDCG@1000 to quantify ranking quality, as recommended by Zamani et al. [44]. On SciFact and NFCorpus, we evaluate Recall and NDCG at rank cutoff 100, due to the small size of these two collections. Note that we choose to evaluate deep metrics (i.e., with a larger rank cutoff) rather than shallow ones, per the discussion in [39], to understand the performance of each system more completely.

4 ANALYSIS OF CONVEX COMBINATION OF RETRIEVAL SCORES

We are interested in understanding the behavior and properties of fusion functions. In the remainder of this work, we study through that lens two popular methods that are representative of existing ideas in the literature, beginning with a convex combination of scores.

As noted earlier, most existing works use a convex combination of lexical and semantic scores as follows: $f_{\text{Convex}}(q, d) = \alpha f_{\text{Sem}}(q, d) + (1 - \alpha) f_{\text{Lex}}(q, d)$ for some $0 \le \alpha \le 1$. When $\alpha = 1$, the above collapses to semantic scores; when $\alpha = 0$, to lexical scores.
An interesting property of this fusion is that it takes into account the distribution of scores. In other words, the distance between the lexical (or semantic) scores of two documents plays a significant role in determining their final hybrid score. One disadvantage, however, is that the range of $f_{\text{Sem}}$ can be very different from that of $f_{\text{Lex}}$. Moreover, as with TF-IDF in lexical search or inner product in semantic search, the range of an individual function $f_o$ may depend on the norm of the query and document vectors (e.g., BM25 is a function of the number of query terms). As such, any constant $\alpha$ is likely to yield inconsistently-scaled hybrid scores.

The problem above is trivially addressed by applying score normalization prior to fusion [15, 39]. Suppose we have collected a union set $U^k(q)$ for $q$, and that for every candidate we have computed both lexical and semantic scores. Now, consider the min-max scaling of scores, $\phi_{\text{mm}}: \mathbb{R} \rightarrow [0, 1]$, below:

$$\phi_{\text{mm}}(f_o(q, d)) = \frac{f_o(q, d) - m_q}{M_q - m_q}, \qquad (2)$$

where $m_q = \min_{d \in U^k(q)} f_o(q, d)$ and $M_q = \max_{d \in U^k(q)} f_o(q, d)$. We note that min-max scaling is the de facto method in the literature, but other choices of $\phi_o(\cdot)$ in the more general expression below:

$$f_{\text{Convex}}(q, d) = \alpha \, \phi_{\text{Sem}}(f_{\text{Sem}}(q, d)) + (1 - \alpha) \, \phi_{\text{Lex}}(f_{\text{Lex}}(q, d)), \qquad (3)$$

are valid as well, so long as $\phi_{\text{Sem}}, \phi_{\text{Lex}}: \mathbb{R} \rightarrow \mathbb{R}$ are monotone in their argument. For example, for reasons that will become clearer later, we can redefine the normalization by replacing the minimum of the set with the theoretical minimum of the function (i.e., the maximum value that is always less than or equal to all values attainable by the scoring function, its infimum) to arrive at:

$$\phi_{\text{tmm}}(f_o(q, d)) = \frac{f_o(q, d) - \inf f_o(q, \cdot)}{M_q - \inf f_o(q, \cdot)}. \qquad (4)$$

As an example, when $f_{\text{Lex}}$ is BM25, its infimum is $0$; when $f_{\text{Sem}}$ is cosine similarity, that quantity is $-1$.

Another popular choice is the standard score (z-score) normalization, defined as follows:

$$\phi_{\text{z}}(f_o(q, d)) = \frac{f_o(q, d) - \mu}{\sigma}, \qquad (5)$$

where $\mu$ and $\sigma$ denote the mean and standard deviation of the set of scores $f_o(q, \cdot)$ for query $q$.

We will return to normalization shortly, but we make note of one small but important fact: In cases where the variance of the lexical (semantic) scores in the union set is $0$, we may skip the fusion step altogether because retrieval quality will be unaffected by the lexical (semantic) scores. The case where the variance is arbitrarily close to $0$, however, creates challenges for certain normalization functions. While this would make for an interesting theoretical analysis, we do not study this particular setting in this work because, empirically, we observe a reasonably large variance among scores in the union set on all datasets using state-of-the-art lexical and semantic retrieval functions.
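A sketch of these normalization schemes and the convex combination of Equation (3), operating on per-query dictionaries that map documents in the union set to raw scores (our own convention, not code from the paper):

```python
import statistics

def phi_mm(scores):
    """Min-max scaling (Equation 2) over one query's union set; assumes
    the scores are not all equal (see the zero-variance note above)."""
    m, M = min(scores.values()), max(scores.values())
    return {d: (s - m) / (M - m) for d, s in scores.items()}

def phi_tmm(scores, inf_score):
    """Theoretical min-max (Equation 4): the observed minimum is replaced
    by the function's infimum, e.g. 0 for BM25 and -1 for cosine."""
    M = max(scores.values())
    return {d: (s - inf_score) / (M - inf_score) for d, s in scores.items()}

def phi_z(scores):
    """z-score normalization (Equation 5); assumes non-zero variance."""
    mu = statistics.mean(scores.values())
    sigma = statistics.pstdev(scores.values())
    return {d: (s - mu) / sigma for d, s in scores.items()}

def f_convex(lex, sem, alpha):
    """Convex combination (Equation 3) of normalized score dictionaries."""
    return {d: alpha * sem[d] + (1 - alpha) * lex[d] for d in lex}
```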
4.1 Suitability of Convex Combination

A convex combination of scores is a natural choice for creating a mixture of two retrieval systems, but is it a reasonable choice? It has been established in many past empirical studies that $f_{\text{Convex}}$ with min-max normalization often serves as a strong baseline, so the answer to our question appears to be positive. Nonetheless, we believe it is important to understand precisely why this fusion works.

We investigate this question empirically, by visualizing the lexical and semantic scores of query-document pairs from an array of datasets. Because we operate in a two-dimensional space, observing the pattern of positive (where the document is relevant to the query) and negative samples in a plot can reveal a lot about whether and how they are separable and how the fusion function behaves. To that end, we sample up to 20,000 positive and up to the same number of negative query-document pairs from the validation split of each dataset, and illustrate the collected points in a scatter plot in Figure 1.

[Figure 1 appears here; panels: (a) MS MARCO, (b) Quora, (c) NQ, (d) FiQA, (e) HotpotQA, (f) Fever.] Fig. 1. Visualization of the normalized lexical ($\phi_{\text{tmm}}(f_{\text{Lex}})$) and semantic ($\phi_{\text{tmm}}(f_{\text{Sem}})$) scores of query-document pairs sampled from the validation split of each dataset. Shown in red are up to 20,000 positive samples, where the document is relevant to the query, and in black up to the same number of negative samples. Adding a lexical (semantic) dimension to query-document pairs helps tease out the relevant documents that would be statistically indistinguishable in a one-dimensional semantic (lexical) view of the data, i.e., when samples are projected onto the $x$ ($y$) axis.
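A plot in the style of Figure 1 can be reproduced with a few lines of matplotlib, given normalized (lexical, semantic) score pairs and their relevance labels collected as described above (a hypothetical sketch; all names are ours):

```python
import matplotlib.pyplot as plt

def plot_scores(pairs, labels, title):
    """Scatter normalized (lexical, semantic) score pairs as in Figure 1.
    `pairs` is a list of (phi_tmm(f_Lex), phi_tmm(f_Sem)) tuples and
    `labels` marks each pair as relevant (True) or not (False)."""
    neg = [p for p, y in zip(pairs, labels) if not y]
    pos = [p for p, y in zip(pairs, labels) if y]
    plt.scatter(*zip(*neg), s=2, c="black", label="negative")
    plt.scatter(*zip(*pos), s=2, c="red", label="positive")
    plt.xlabel("normalized lexical score")
    plt.ylabel("normalized semantic score")
    plt.title(title)
    plt.legend()
    plt.show()
```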
[Figure 2 appears here; panels: (a) $\alpha = 0.6$, (b) $\alpha = 0.8$.] Fig. 2. Effect of $f_{\text{Convex}}$ on pairs of lexical and semantic scores.

From these figures, it is clear that positive and negative samples form clusters that are, with some error, separable by a linear function. What differs between datasets is the slope of this separating line. For example, in MS MARCO, Quora, and NQ, which are in-domain datasets, the separating line is almost vertical, suggesting that the semantic scores serve as a sufficiently strong signal for relevance. This is somewhat true of FiQA. In the other out-of-domain datasets, however, the line is rotated counter-clockwise, indicating a more balanced weighting of lexical and semantic scores. Said differently, adding a lexical (semantic) dimension to query-document pairs helps tease out the relevant documents that would be statistically indistinguishable in a one-dimensional semantic (lexical) view of the data. Interestingly, across all datasets, there is a higher concentration of negative samples where lexical scores vanish.

This empirical evidence suggests that lexical and semantic scores may indeed be complementary, an observation that is in agreement with prior work [5], and that a line may be a reasonable choice for distinguishing between positive and negative samples. But while these figures shed light on the shape of positive and negative clusters and their separability, our problem is not classification but ranking. We seek to order query-document pairs and, as such, separability is less critical and, in fact, not required. It is therefore instructive to understand the effect of a particular convex combination on pairs of lexical and semantic scores. This is visualized in Figure 2 for two values of $\alpha$ in $f_{\text{Convex}}$.

The plots in Figure 2 illustrate how the parameter $\alpha$ determines how different regions of the plane are ranked relative to each other. This is a trivial fact, but it is nonetheless interesting to map these patterns to the distributions in Figure 1. In-domain datasets, for example, form a pattern of positives and negatives that is, unsurprisingly, more in tune with the $\alpha = 0.8$ setting of $f_{\text{Convex}}$ than with $\alpha = 0.6$.

4.2 Role of Normalization

We have thus far used min-max normalization to be consistent with the literature. In this section, we ask the question first raised by Chen et al. [5]: whether and to what extent the choice of normalization matters, and how carefully one must choose the normalization protocol. In other words, we wish to examine the effect of $\phi_{\text{Sem}}(\cdot)$ and $\phi_{\text{Lex}}(\cdot)$ on the convex combination in Equation (3). Before we begin, let us consider the following suite of functions:
• $\phi_{\text{mm}}$: min-max scaling of Equation (2);
• $\phi_{\text{tmm}}$: theoretical min-max scaling of Equation (4);
• $\phi_{\text{z}}$: z-score normalization of Equation (5);
• $\phi_{\text{mm-Lex}}$: min-max scaled lexical scores, unnormalized semantic scores;
• $\phi_{\text{tmm-Lex}}$: theoretical min-max scaled lexical scores, unnormalized semantic scores;
• $\phi_{\text{z-Lex}}$: z-score normalized lexical scores, unnormalized semantic scores; and
• $I$: the identity transformation, leaving both semantic and lexical scores unnormalized.

We believe these transformations together test the various conditions in our upcoming arguments. Let us first state the notion of rank-equivalence more formally:

Definition 4.1. We say two functions $f$ and $g$ are rank-equivalent on the set $U$, and write $f \overset{\pi}{=} g$, if the order among documents in $U$ induced by $f$ is the same as that induced by $g$.

For example, when $\phi_{\text{Sem}}(x) = ax + b$ and $\phi_{\text{Lex}}(x) = cx + d$ are linear transformations of scores, for some positive coefficients $a$, $c$ and real intercepts $b$, $d$, the fusion can be reduced to the following rank-equivalent form:

$$f_{\text{Convex}}(q, d) \overset{\pi}{=} (a\alpha) f_{\text{Sem}}(q, d) + c(1 - \alpha) f_{\text{Lex}}(q, d).$$

In fact, letting $\alpha' = a\alpha / [a\alpha + c(1 - \alpha)]$ transforms the problem into one of learning a convex combination of the original scores with a modified weight. This family of functions includes $\phi_{\text{mm}}$, $\phi_{\text{z}}$, and $\phi_{\text{tmm}}$, and as such, solutions for one normalization protocol can be transformed into solutions for another. More formally:
Lemma 4.2. For every query, given an arbitrary $\alpha$, there exists an $\alpha'$ such that the convex combination of min-max normalized scores with parameter $\alpha$ is rank-equivalent to a convex combination of z-score normalized scores with $\alpha'$, and vice versa.

Proof. Write $m_o$ and $M_o$ for the minimum and maximum scores retrieved by system $o$, and $\mu_o$ and $\sigma_o$ for their mean and standard deviation. We also write $R_o = M_o - m_o$ for brevity. For every document $d$, we have the following:

$$\alpha \frac{f_{\text{Sem}}(q, d) - m_{\text{Sem}}}{R_{\text{Sem}}} + (1 - \alpha) \frac{f_{\text{Lex}}(q, d) - m_{\text{Lex}}}{R_{\text{Lex}}} \overset{\pi}{=} \frac{\alpha}{R_{\text{Sem}}} f_{\text{Sem}}(q, d) + \frac{1 - \alpha}{R_{\text{Lex}}} f_{\text{Lex}}(q, d)$$

$$\overset{\pi}{=} \frac{1}{\sigma_{\text{Sem}} \sigma_{\text{Lex}}} \Big[ \frac{\alpha}{R_{\text{Sem}}} f_{\text{Sem}}(q, d) + \frac{1 - \alpha}{R_{\text{Lex}}} f_{\text{Lex}}(q, d) - \frac{\alpha}{R_{\text{Sem}}} \mu_{\text{Sem}} - \frac{1 - \alpha}{R_{\text{Lex}}} \mu_{\text{Lex}} \Big]$$

$$\overset{\pi}{=} \frac{\alpha}{R_{\text{Sem}} \sigma_{\text{Lex}}} \Big( \frac{f_{\text{Sem}}(q, d) - \mu_{\text{Sem}}}{\sigma_{\text{Sem}}} \Big) + \frac{1 - \alpha}{R_{\text{Lex}} \sigma_{\text{Sem}}} \Big( \frac{f_{\text{Lex}}(q, d) - \mu_{\text{Lex}}}{\sigma_{\text{Lex}}} \Big),$$

where in every step we either added a constant or multiplied the expression by a positive constant, both rank-preserving operations. Finally, setting

$$\alpha' = \frac{\alpha}{R_{\text{Sem}} \sigma_{\text{Lex}}} \Big/ \Big( \frac{\alpha}{R_{\text{Sem}} \sigma_{\text{Lex}}} + \frac{1 - \alpha}{R_{\text{Lex}} \sigma_{\text{Sem}}} \Big)$$

completes the proof. The other direction is similar. □

The fact above implies that the problem of tuning $\alpha$ for a query in a min-max normalization regime is equivalent to learning $\alpha'$ in a z-score normalized setting. In other words, there is a one-to-one relationship between these parameters, and as a result, solutions can be mapped from one problem space to the other. However, this statement is only true for individual queries and has no implications for the learning of the weight in the convex combination over an entire collection of queries. Let us now consider this more complex setup.
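Before moving on, the per-query mapping in Lemma 4.2 is easy to verify numerically. The sketch below builds both convex combinations on synthetic scores, dropping additive constants (which are rank-preserving), and checks that the induced orders match; the data is fabricated purely for illustration.

```python
import random
import statistics

random.seed(0)
sem = [random.uniform(-1, 1) for _ in range(1000)]  # fabricated semantic scores
lex = [random.gauss(15, 5) for _ in range(1000)]    # fabricated lexical scores

def order(w_sem, w_lex):
    """Ranking induced by w_sem * f_Sem + w_lex * f_Lex (constants dropped)."""
    return sorted(range(len(sem)),
                  key=lambda i: w_sem * sem[i] + w_lex * lex[i], reverse=True)

alpha = 0.8
R_s, R_l = max(sem) - min(sem), max(lex) - min(lex)
sig_s, sig_l = statistics.pstdev(sem), statistics.pstdev(lex)

# alpha' as constructed in the proof of Lemma 4.2.
a = alpha / (R_s * sig_l)
b = (1 - alpha) / (R_l * sig_s)
alpha_p = a / (a + b)

# Min-max combination vs. z-score combination with the mapped weight:
# the two orderings coincide (barring exact floating-point ties).
assert order(alpha / R_s, (1 - alpha) / R_l) == \
       order(alpha_p / sig_s, (1 - alpha_p) / sig_l)
```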
The question we wish to answer is as follows: Under what conditions is $f_{\text{Convex}}$ with parameter $\alpha$ and a pair of normalization functions $(\phi_{\text{Sem}}, \phi_{\text{Lex}})$ rank-equivalent to an $f'_{\text{Convex}}$ with a new pair of normalization functions $(\phi'_{\text{Sem}}, \phi'_{\text{Lex}})$ and weight $\alpha'$? That is, for a constant $\alpha$ with one normalization protocol, when is there a constant $\alpha'$ that produces the same ranked lists for every query but with a different normalization protocol? The answer to this question helps us understand whether and when changing normalization schemes from min-max to z-score, for example, matters. We state the following definitions, followed by a theorem that answers this question.

Definition 4.3. We say $f: \mathbb{R} \rightarrow \mathbb{R}$ is a $\delta$-expansion with respect to $g: \mathbb{R} \rightarrow \mathbb{R}$ if for any $x$ and $y$ in the domains of $f$ and $g$ we have that $|f(y) - f(x)| \ge \delta |g(y) - g(x)|$ for some $\delta \ge 1$.

For example, $\phi_{\text{mm}}(\cdot)$ is an expansion with respect to $\phi_{\text{tmm}}(\cdot)$, with a factor $\delta$ that depends on the range of the scores. As another example, $\phi_{\text{z}}(\cdot)$ is an expansion with respect to $\phi_{\text{mm}}(\cdot)$.

Definition 4.4. For two pairs of functions $f, g: \mathbb{R} \rightarrow \mathbb{R}$ and $f', g': \mathbb{R} \rightarrow \mathbb{R}$, and two points $x$ and $y$ in their domains, we say that $f'$ expands with respect to $f$ more rapidly than $g'$ expands with respect to $g$, with a relative expansion rate of $\lambda \ge 1$, if the following condition holds:

$$\frac{|f'(y) - f'(x)|}{|f(y) - f(x)|} = \lambda \, \frac{|g'(y) - g'(x)|}{|g(y) - g(x)|}.$$

When $\lambda$ is independent of the points $x$ and $y$, we call this relative expansion uniform:

$$\frac{|\Delta f'| / |\Delta f|}{|\Delta g'| / |\Delta g|} = \lambda, \quad \forall x, y.$$

As an example, if $f$ and $g$ are min-max scaling and $f'$ and $g'$ are z-score normalization, then their respective rates of expansion are roughly similar. We will later show that this property often holds empirically across different transformations.

Theorem 4.5. For every choice of $\alpha$, there exists a constant $\alpha'$ such that the following functions are rank-equivalent on a collection of queries $Q$:

$$f_{\text{Convex}} = \alpha \, \phi(f_{\text{Sem}}(q, d)) + (1 - \alpha) \, \omega(f_{\text{Lex}}(q, d)),$$

and

$$f'_{\text{Convex}} = \alpha' \, \phi'(f_{\text{Sem}}(q, d)) + (1 - \alpha') \, \omega'(f_{\text{Lex}}(q, d)),$$

if, for the monotone functions $\phi, \omega, \phi', \omega': \mathbb{R} \rightarrow \mathbb{R}$, $\phi'$ expands with respect to $\phi$ more rapidly than $\omega'$ expands with respect to $\omega$, with a uniform rate $\lambda$.

Proof. Consider a pair of documents $d_i$ and $d_j$ in the ranked list of a query $q$ such that $d_i$ is ranked above $d_j$ according to $f_{\text{Convex}}$. Shortening $f_o(q, d_k)$ to $f_o^{(k)}$ for brevity, we have that:

$$f_{\text{Convex}}^{(i)} > f_{\text{Convex}}^{(j)} \implies \alpha \Big[ \underbrace{\big(\phi(f_{\text{Sem}}^{(i)}) - \phi(f_{\text{Sem}}^{(j)})\big)}_{\Delta\phi_{ij}} + \underbrace{\big(\omega(f_{\text{Lex}}^{(j)}) - \omega(f_{\text{Lex}}^{(i)})\big)}_{\Delta\omega_{ji}} \Big] > \omega(f_{\text{Lex}}^{(j)}) - \omega(f_{\text{Lex}}^{(i)}).$$

This holds if and only if we have the following:

$$\begin{cases} \alpha > 1 \big/ \big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\big), & \text{if } \Delta\phi_{ij} + \Delta\omega_{ji} > 0, \\[4pt] \alpha < 1 \big/ \big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\big), & \text{otherwise.} \end{cases} \qquad (6)$$

Observe that, because of the monotonicity of a convex combination and the monotonicity of the normalization functions, the case $\Delta\phi_{ij} < 0$ and $\Delta\omega_{ji} > 0$ (which implies that the semantic and lexical scores of $d_j$ are both larger than those of $d_i$) is not valid, as it leads to a reversal of ranks. Similarly,
the opposite case, $\Delta\phi_{ij} > 0$ and $\Delta\omega_{ji} < 0$, always leads to the correct order regardless of the weight in the convex combination. We consider the other two cases separately below.

Case 1: $\Delta\phi_{ij} > 0$ and $\Delta\omega_{ji} > 0$. Because of the monotonicity property, we can deduce that $\Delta\phi'_{ij} > 0$ and $\Delta\omega'_{ji} > 0$. From Equation (6), for the order between $d_i$ and $d_j$ to be preserved under the image of $f'_{\text{Convex}}$, we must therefore have the following:

$$\alpha' > 1 \Big/ \Big(1 + \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}}\Big).$$

By assumption, using Definition 4.4, we observe that:

$$\frac{\Delta\phi'_{ij}}{\Delta\phi_{ij}} \ge \frac{\Delta\omega'_{ji}}{\Delta\omega_{ji}} \implies \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}} \ge \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}.$$

As such, the lower-bound on $\alpha'$ imposed by documents $d_i$ and $d_j$ of query $q$, $L'_{ij}(q)$, is smaller than the lower-bound on $\alpha$, $L_{ij}(q)$. Like $\alpha$, this case does not additionally constrain $\alpha'$ from above (i.e., the upper-bound does not change: $U'_{ij}(q) = U_{ij}(q) = 1$).

Case 2: $\Delta\phi_{ij} < 0$ and $\Delta\omega_{ji} < 0$. Once again, due to monotonicity, it is easy to see that $\Delta\phi'_{ij} < 0$ and $\Delta\omega'_{ji} < 0$. Equation (6) tells us that, for the order to be preserved under $f'_{\text{Convex}}$, we must similarly have that:

$$\alpha' < 1 \Big/ \Big(1 + \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}}\Big).$$

Once again, by assumption, the upper-bound on $\alpha'$ is a translation of the upper-bound on $\alpha$ to the left. The lower-bound is unaffected and remains $0$.

For $f'_{\text{Convex}}$ to induce the same order as $f_{\text{Convex}}$ among all pairs of documents for all queries in $Q$, the intersection of the intervals produced by the constraints on $\alpha'$ has to be non-empty:

$$I' \triangleq \bigcap_q \big[ \max_{ij} L'_{ij}(q), \, \min_{ij} U'_{ij}(q) \big] = \big[ \max_{q, ij} L'_{ij}(q), \, \min_{q, ij} U'_{ij}(q) \big] \neq \emptyset.$$

We next prove that $I'$ is always non-empty, concluding the proof of the theorem.

By Equation (6) and the existence of $\alpha$, we know that $\max_{q, ij} L_{ij}(q) \le \min_{q, ij} U_{ij}(q)$. Suppose that documents $d_i$ and $d_j$ of query $q_1$ maximize the lower-bound, and that documents $d_m$ and $d_n$ of query $q_2$ minimize the upper-bound. We therefore have that:

$$1 \Big/ \Big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\Big) \le 1 \Big/ \Big(1 + \frac{\Delta\phi_{mn}}{\Delta\omega_{nm}}\Big) \implies \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}} \ge \frac{\Delta\phi_{mn}}{\Delta\omega_{nm}}.$$

Because of the uniformity of the relative expansion rate, we can deduce that:

$$\frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}} \ge \frac{\Delta\phi'_{mn}}{\Delta\omega'_{nm}} \implies \max_{q, ij} L'_{ij}(q) \le \min_{q, ij} U'_{ij}(q). \qquad \square$$

It is easy to show that the theorem above also holds when the condition is updated to reflect a shift of the lower- and upper-bounds to the right, which happens when $\phi'$ contracts with respect to $\phi$ more rapidly than $\omega'$ does with respect to $\omega$.

The picture painted by Theorem 4.5 is that switching from min-max scaling to z-score normalization, or to any other linear transformation that is bounded and does not severely distort the distribution of scores, especially among the top-ranking documents, results in a rank-equivalent function. At most, for any given value of the ranking metric of interest, such as NDCG, we should observe a shift of the weight in the convex combination to the right or left.
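Curves like those in Figure 3 below come from a simple sweep over $\alpha$. A sketch, assuming a list of per-query score dictionaries with relevance judgments and a hypothetical `ndcg` helper (neither is an artifact of the paper):

```python
import numpy as np

def sweep_alpha(queries, ndcg, normalize, step=0.05):
    """Average NDCG as a function of alpha, tracing a curve like Figure 3.

    `queries` is a list of (lex_scores, sem_scores, qrels) triples and
    `ndcg` scores a ranked list of documents against qrels; both are
    hypothetical helpers supplied by the caller.
    """
    curve = {}
    for alpha in np.arange(0.0, 1.0 + step, step):
        total = 0.0
        for lex, sem, qrels in queries:
            lex_n, sem_n = normalize(lex), normalize(sem)
            fused = {d: alpha * sem_n[d] + (1 - alpha) * lex_n[d] for d in lex_n}
            ranked = sorted(fused, key=fused.get, reverse=True)
            total += ndcg(ranked, qrels)
        curve[round(float(alpha), 2)] = total / len(queries)
    return curve
```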
Figure 3 illustrates this effect empirically on select datasets. As anticipated, the peak performance in terms of NDCG shifts to the left or right depending on the type of normalization.

[Figure 3 appears here; panels: (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA.] Fig. 3. Effect of normalization on the performance of $f_{\text{Convex}}$ as a function of $\alpha$ on the validation set.

The uniformity requirement on the relative expansion rate $\lambda$ in Theorem 4.5 is not as strict and restrictive as it may appear. First, it is only necessary for $\lambda$ to be stable on the set of ordered pairs of documents as ranked by $f_{\text{Convex}}$:

$$\frac{|\Delta\phi'_{ij}| / |\Delta\phi_{ij}|}{|\Delta\omega'_{ji}| / |\Delta\omega_{ji}|} = \lambda, \quad \forall (d_i, d_j) \text{ s.t. } f_{\text{Convex}}(d_i) > f_{\text{Convex}}(d_j).$$

Second, it turns out that near-uniformity (i.e., when $\lambda$ is concentrated around one value) is often sufficient for the effect to materialize in practice. We observe this phenomenon empirically by fixing the parameter $\alpha$ in $f_{\text{Convex}}$ with one transformation and forming ranked lists, then choosing another transformation and computing its relative expansion rate $\lambda$ on all ordered pairs of documents. We show the measured relative expansion rates in Figure 4 for various transformations.

[Figure 4 appears here; panels: (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA.] Fig. 4. Relative expansion rate of semantic scores with respect to lexical scores, $\lambda$, when changing from one transformation to another, with 95% confidence intervals. Prior to visualization, we normalize the values of $\lambda$ to bring them onto a similar scale; this affects only aesthetics and readability, but is the reason the vertical axis is not scaled. For most transformations and every value of $\alpha$, we observe a stable relative rate of expansion, where $\lambda$ concentrates around one value for the vast majority of queries.

Figure 4 shows that most pairs of transformations yield a stable relative expansion rate. For example, if $f_{\text{Convex}}$ uses $\phi_{\text{tmm}}$ and $f'_{\text{Convex}}$ uses $\phi_{\text{mm}}$ (denoted by $\phi_{\text{tmm}} \rightarrow \phi_{\text{mm}}$), then for every choice of $\alpha$, the relative expansion rate $\lambda$ is concentrated around a constant value. This implies that any ranked list obtained from $f_{\text{Convex}}$ can be reconstructed by $f'_{\text{Convex}}$. Interestingly, $\phi_{\text{z-Lex}} \rightarrow \phi_{\text{mm-Lex}}$ has a comparatively less stable $\lambda$, but removing normalization altogether (i.e., $\phi_{\text{mm-Lex}} \rightarrow I$) dramatically distorts the expansion rates. This goes some way toward explaining why normalization and boundedness are important properties.
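Measuring $\lambda$ as in Figure 4 amounts to evaluating Definition 4.4 on ordered document pairs. A sketch follows, with score pairs and transformations supplied by the caller; all names are ours.

```python
def relative_expansion_rates(pairs, phi, omega, phi_p, omega_p):
    """Empirical relative expansion rates (Definition 4.4).

    `pairs` holds ((sem_i, lex_i), (sem_j, lex_j)) raw-score tuples for
    document pairs ordered by f_Convex; phi/omega and phi_p/omega_p are
    the old and new transformations of semantic and lexical scores.
    A tight concentration of the returned values around one constant
    indicates a near-uniform relative expansion rate.
    """
    rates = []
    for (s_i, l_i), (s_j, l_j) in pairs:
        d_phi = abs(phi(s_i) - phi(s_j))
        d_phi_p = abs(phi_p(s_i) - phi_p(s_j))
        d_om = abs(omega(l_j) - omega(l_i))
        d_om_p = abs(omega_p(l_j) - omega_p(l_i))
        if 0.0 in (d_phi, d_om, d_om_p):
            continue  # skip degenerate pairs with tied scores
        rates.append((d_phi_p / d_phi) / (d_om_p / d_om))
    return rates
```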
In the last two sections, we have answered RQ1: A convex combination is an appropriate fusion function, and its performance is not sensitive to the choice of normalization, so long as the transformation has reasonable properties. Interestingly, the behavior of $\phi_{\text{tmm}}$ appears to be more robust to the data distribution: its peak remains within a small neighborhood as we move from one dataset to another. We believe $\phi_{\text{tmm}}$-normalized scores are more stable because the transformation has one fewer data-dependent statistic (i.e., the minimum score in the retrieved set is replaced with the minimum feasible value, regardless of the candidate set). In the remainder of this work, we use $\phi_{\text{tmm}}$ and, for brevity, denote a convex combination of scores normalized by it by TM2C2.
Table 1. Recall@1000 and NDCG@1000 (except SciFact and NFCorpus, where the cutoff is 100) on the test split of various datasets for lexical and semantic search, as well as hybrid retrieval using RRF [5] ($\eta = 60$) and TM2C2 ($\alpha = 0.8$). The symbols ‡ and ∗ indicate statistical significance ($p$-value < 0.01) with respect to TM2C2 and RRF respectively, according to a paired two-tailed $t$-test.

|  | Dataset | Recall Lex. | Recall Sem. | Recall TM2C2 | Recall RRF | NDCG Lex. | NDCG Sem. | NDCG TM2C2 | NDCG RRF | NDCG Oracle |
|---|---|---|---|---|---|---|---|---|---|---|
| in-domain | MS MARCO | 0.836‡∗ | 0.964‡∗ | 0.974 | 0.969‡ | 0.309‡∗ | 0.441‡∗ | 0.454 | 0.425‡ | 0.547 |
| | NQ | 0.886‡∗ | 0.978‡∗ | 0.985 | 0.984 | 0.382‡∗ | 0.505‡ | 0.542 | 0.514‡ | 0.637 |
| | Quora | 0.992‡∗ | 0.999 | 0.999 | 0.999 | 0.800‡∗ | 0.889‡∗ | 0.901 | 0.877‡ | 0.936 |
| zero-shot | NFCorpus | 0.283‡∗ | 0.314‡∗ | 0.348 | 0.344 | 0.298‡∗ | 0.309‡∗ | 0.343 | 0.326‡ | 0.371 |
| | HotpotQA | 0.878‡∗ | 0.756‡∗ | 0.884 | 0.888 | 0.682‡∗ | 0.520‡∗ | 0.699 | 0.675‡ | 0.767 |
| | FEVER | 0.969‡∗ | 0.931‡∗ | 0.972 | 0.972 | 0.689‡∗ | 0.558‡∗ | 0.744 | 0.721‡ | 0.814 |
| | SciFact | 0.900‡∗ | 0.932‡∗ | 0.958 | 0.955 | 0.698‡∗ | 0.681‡∗ | 0.753 | 0.730‡ | 0.796 |
| | DBPedia | 0.540‡∗ | 0.408‡∗ | 0.564 | 0.567 | 0.415‡∗ | 0.425‡∗ | 0.512 | 0.489‡ | 0.553 |
| | FiQA | 0.720‡∗ | 0.908 | 0.907 | 0.904 | 0.315‡∗ | 0.467‡ | 0.496 | 0.464‡ | 0.561 |
5 ANALYSIS OF RECIPROCAL RANK FUSION

Chen et al. [5] show that RRF performs better and more reliably than a convex combination of normalized scores. RRF is computed as follows:

$$f_{\text{RRF}}(q, d) = \frac{1}{\eta + \pi_{\text{Lex}}(q, d)} + \frac{1}{\eta + \pi_{\text{Sem}}(q, d)}, \qquad (7)$$

where $\eta$ is a free parameter. The authors of [5] take a non-parametric view of RRF, with the parameter $\eta$ set to its default value of 60, in order to apply the fusion to out-of-domain datasets in a zero-shot manner. In this work, we additionally take a parametric view of RRF in which, as we elaborate later, the number of free parameters is the same as the number of functions being fused together, a quantity that is always larger than the number of parameters in a convex combination.
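Equation (7) and its parametric generalization are a few lines of code. The sketch below takes per-system rank dictionaries over the union set; the two-parameter variant reflects the parametric view described above, with one $\eta$ per retrieval function.

```python
def rrf(lex_rank, sem_rank, eta=60):
    """Reciprocal Rank Fusion, Equation (7), over the union set."""
    return {d: 1.0 / (eta + lex_rank[d]) + 1.0 / (eta + sem_rank[d])
            for d in lex_rank}

def rrf_parametric(lex_rank, sem_rank, eta_lex=60, eta_sem=60):
    """Parametric view: one free parameter per retrieval system."""
    return {d: 1.0 / (eta_lex + lex_rank[d]) + 1.0 / (eta_sem + sem_rank[d])
            for d in lex_rank}
```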
{"id": "fe3c91102a70-4", "text": "together, a quantity that is always larger than the number of parameters in a convex combination.\nLet us begin by comparing the performance of RRF and TM2C2 empirically to get a sense of their\nrelative efficacy. We first verify whether hybrid retrieval leads to significant gains in in-domain and\nout-of-domain experiments. In a way, we seek to confirm the findings reported in [ 5] and compare\nthe two fusion functions in the process.\nTable 1 summarizes our results. We note that, we set RRF\u2019s\ud835\udf02to60per [ 5] but tuned TM2C2\u2019s \ud835\udefc\non the validation set of the in-domain datasets and found that \ud835\udefc=0.8works well for the three\ndatasets. In the experiments leading to Table 1, we fix \ud835\udefc=0.8and evaluate methods on the test\nsplit of the datasets. Per [ 5,39], we have also included the performance of an oracle system that\nuses a per-query \ud835\udefc, to establish an upper-bound\u2014the oracle knows which value of \ud835\udefcworks best for\nany given query.", "source": "2210.11934.pdf"} | |
{"id": "fe3c91102a70-5", "text": "any given query.\nOur results show that hybrid retrieval using RRF outperforms pure-lexical and pure-semantic\nretrieval on most datasets. This fusion method is particularly effective on out-of-domain datasets,", "source": "2210.11934.pdf"} | |
{"id": "82bb45ff1fd9-0", "text": "An Analysis of Fusion Functions for Hybrid Retrieval 111:15\n(a) in-domain\n (b) out-of-domain\nFig. 5. Difference in NDCG@ 1000 of TM2C2 and RRF (positive indicates better ranking quality by TM2C2)\nas a function of \ud835\udefc. When\ud835\udefc=0the model is rank-equivalent to lexical search while \ud835\udefc=1is rank-equivalent\nto semantic search.\nrendering the observation of [ 5] a robust finding and asserting once more the remarkable perfor-\nmance of RRF in zeros-shot settings.\nContrary to [ 5], however, we find that TM2C2 significantly outperforms RRF on all datasets\nin terms of NDCG, and does generally better in terms of Recall. Our observation is consistent\nwith [39] that TM2C2 substantially boosts NDCG even on in-domain datasets.\nTo contextualize the effect of \ud835\udefcon ranking quality, we visualize a parameter sweep on the\nvalidation split of in-domain datasets in Figure 5(a), and for completeness, on the test split of", "source": "2210.11934.pdf"} | |
{"id": "82bb45ff1fd9-1", "text": "out-of-domain datasets in Figure 5(b). These figures also compare the performance of TM2C2 with\nRRF by reporting the difference between NDCG of the two methods. These plots show that there\nalways exists an interval of \ud835\udefcfor which\ud835\udc53TM2C2\u227b\ud835\udc53RRFwith\u227bindicating better rank quality.\n5.1 Effect of Parameters\nChen et al. rightly argue that because RRF is merely a function of ranks, rather than scores, it\nnaturally addresses the scale and range problem without requiring normalization\u2014which, as we\nshowed, is not a consequential choice anyway. While that statement is accurate, we believe it\nintroduces new problems that must be recognized too.\nThe first, more minor issue is that ranks cannot be computed exactly unless the entire collection\nDis ranked by retrieval system ofor every query. That is because, there may be documents that\nappear in the union set, but not in one of the individual top- \ud835\udc58sets. Their true rank is therefore\nunknown, though is often approximated by ranking documents within the union set. We take this\napproach when computing ranks.", "source": "2210.11934.pdf"} | |
{"id": "82bb45ff1fd9-2", "text": "approach when computing ranks.\nThe second issue is that, unlike TM2C2, RRF ignores the raw scores and discards information\nabout their distribution. In this regime, whether or not a document has a low or high semantic score\ndoes not matter so long as its rank in \ud835\udc45\ud835\udc58\nSemstays the same. It is arguable in this case whether rank is\na stronger signal of relevance than score, a measurement in a metric space where distance matters\ngreatly. We intuit that, such distortion of distances may result in a loss of valuable information that\nwould lead to better final ranked lists.", "source": "2210.11934.pdf"} | |
{"id": "4ea204941815-0", "text": "111:16 Sebastian Bruch, Siyu Gai, and Amir Ingber\n(a) MS MARCO\n (b)Quora\n(c)NQ\n (d)FiQA\n(e)HotpotQA\n (f)Fever\nFig. 6. Visualization of the reciprocal rank determined by lexical ( \ud835\udc5f\ud835\udc5f(\ud835\udf0bLex)=1/(60+\ud835\udf0bLex)) and semantic\n(\ud835\udc5f\ud835\udc5f(\ud835\udf0bSem)=1/(60+\ud835\udf0bSem)) retrieval for query-document pairs sampled from the validation split of each\ndataset. Shown in red are up to 20,000positive samples where document is relevant to query, and in black\nup to the same number of negative samples.\nTo understand these issues better, let us first repeat the exercise in Section 4.1 for RRF. In Figure 6,\nwe have plotted the reciprocal rank (i.e., \ud835\udc5f\ud835\udc5f(\ud835\udf0bo)=1/(\ud835\udf02+\ud835\udf0bo)with\ud835\udf02=60) for sampled query-", "source": "2210.11934.pdf"} | |
{"id": "4ea204941815-1", "text": "document pairs as before. From the figure, we can see that samples are pulled towards one of the\npoles at(0,0)and(1/61,1/61). The former attracts a higher concentration of negative samples\nwhile the latter positive samples. While this separation is somewhat consistent across datasets,", "source": "2210.11934.pdf"} | |
{"id": "e88b15fd1cbc-0", "text": "An Analysis of Fusion Functions for Hybrid Retrieval 111:17\n(a) MS MARCO\n (b)HotpotQA\nFig. 7. Difference in NDCG@1000 of \ud835\udc53RRFwith distinct values \ud835\udf02Lexand\ud835\udf02Sem, and\ud835\udc53RRFwith\ud835\udf02Lex=\ud835\udf02Sem=60\n(positive indicates better ranking quality by the former). On MS MARCO, an in-domain dataset, NDCG\nimproves when \ud835\udf02Lex>\ud835\udf02Sem, while the opposite effect can be seen for HotpotQA , an out-of-domain dataset.\nthe concentration around poles and axes changes. Indeed, on HotpotQA andFever there is a\nhigher concentration of positive documents near the top, whereas on FiQA and the in-domain\ndatasets more positive samples end up along the vertical line at \ud835\udc5f\ud835\udc5f(\ud835\udf0bSem)=1/61, indicating that\nlexical ranks matter less. This suggests that a simple addition of reciprocal ranks does not behave\nconsistently across domains.", "source": "2210.11934.pdf"} | |
{"id": "e88b15fd1cbc-1", "text": "consistently across domains.\nWe argued earlier that RRF is parametric and that it, in fact, has as many parameters as there are\nretrieval functions to fuse. To see this more clearly, let us rewrite Equation (7) as follows:\n\ud835\udc53RRF(\ud835\udc5e,\ud835\udc51)=1\n\ud835\udf02Lex+\ud835\udf0bLex(\ud835\udc5e,\ud835\udc51)+1\n\ud835\udf02Sem+\ud835\udf0bSem(\ud835\udc5e,\ud835\udc51). (8)\nWe study the effect of parameters on \ud835\udc53RRFby comparing the NDCG obtained from RRF with a\nparticular choice of \ud835\udf02Lexand\ud835\udf02Semagainst a realization of RRF with\ud835\udf02Lex=\ud835\udf02Sem=60. In this way,\nwe are able to visualize the impact on performance relative to the baseline configuration that is\ntypically used in the literature. This difference in NDCG is rendered as a heatmap in Figure 7 for\nselect datasets\u2014figures for all other datasets show a similar pattern.\nAs a general observation, we note that NDCG swings wildly as a function of RRF parameters.", "source": "2210.11934.pdf"} | |
{"id": "e88b15fd1cbc-2", "text": "Crucially, performance improves off-diagonal, where the parameter takes on different values for\nthe semantic and lexical components. On MS MARCO, shown in Figure 7(a), NDCG improves when\n\ud835\udf02Lex>\ud835\udf02Sem, while the opposite effect can be seen for HotpotQA , an out-of-domain dataset. This\ncan be easily explained by the fact that increasing \ud835\udf02ofor retrieval system oeffectively discounts the\ncontribution of ranks from oto the final hybrid score. On in-domain datasets where the semantic\nmodel already performs strongly, for example, discounting the lexical system by increasing \ud835\udf02Lex\nleads to better performance.\nHaving observed that tuning RRF potentially leads to gains in NDCG, we ask if tuned parameters\ngeneralize on out-of-domain datasets. To investigate that question, we tune RRF on in-domain\ndatasets and pick the value of parameters that maximize NDCG on the validation split of in-domain\ndatasets, and measure the performance of the resulting function on the test split of all (in-domain\nand out-of-domain) datasets. We present the results in Table 2. While tuning a parametric RRF does", "source": "2210.11934.pdf"} | |
{"id": "4346c658ecf5-0", "text": "111:18 Sebastian Bruch, Siyu Gai, and Amir Ingber\n(a)\ud835\udf02Lex=60,\ud835\udf02Sem=60\n (b)\ud835\udf02Lex=10,\ud835\udf02Sem=4\n (c)\ud835\udf02Lex=3,\ud835\udf02Sem=5\nFig. 8. Effect of \ud835\udc53RRFwith select configurations of \ud835\udf02Lexand\ud835\udf02Semon pairs of ranks from lexical and semantic\nsystems. When \ud835\udf02Lex>\ud835\udf02Sem, the fusion function discounts the lexical system\u2019s contribution.\nTable 2. Mean NDCG@1000 (NDCG@100 for SciFact andNFCorpus ) on the test split of various datasets\nfor hybrid retrieval using TM2C2 ( \ud835\udefc=0.8) and RRF (\ud835\udf02Lex,\ud835\udf02Sem). The symbols\u2021and\u2217indicate statistical\nsignificance ( \ud835\udc5d-value <0.01) with respect to TM2C2 and baseline RRF ( 60,60) respectively, according to a\npaired two-tailed \ud835\udc61-test.\nNDCG", "source": "2210.11934.pdf"} | |
{"id": "4346c658ecf5-1", "text": "paired two-tailed \ud835\udc61-test.\nNDCG\nDataset TM2C2 RRF(60,60)RRF(5,5)RRF(10,4)in-domainMS MARCO 0.454 0.425\u20210.435\u2021\u22170.451\u2217\nNQ 0.542 0.514\u20210.521\u2021\u22170.528\u2021\u2217\nQuora 0.901 0.877\u20210.885\u2021\u22170.896\u2217zero-shotNFCorpus 0.343 0.326\u20210.335\u2021\u22170.327\u2021\nHotpotQA 0.699 0.675\u20210.693\u22170.621\u2021\u2217\nFEVER 0.744 0.721\u20210.727\u2021\u22170.649\u2021\u2217\nSciFact 0.753 0.730\u20210.738\u20210.715\u2021\u2217", "source": "2210.11934.pdf"} | |
{"id": "4346c658ecf5-2", "text": "DBPedia 0.512 0.489\u20210.489\u20210.480\u2021\u2217\nFiQA 0.496 0.464\u20210.470\u2021\u22170.482\u2021\u2217\nindeed lead to gains in NDCG on in-domain datasets, the tuned function does not generalize well\nto out-of-domain datasets.\nThe poor generalization can be explained by the reversal of patterns observed in Figure 7 where\n\ud835\udf02Lex>\ud835\udf02Semsuits in-domain datasets better but the opposite is true for out-of-domain datasets. By\nmodifying\ud835\udf02Lexand\ud835\udf02Semwe modify the fusion of ranks and boost certain regions and discount\nothers in an imbalanced manner. Figure 8 visualizes this effect on \ud835\udc53RRFfor particular values of its\nparameters. This addresses RQ2.\n5.2 Effect of Lipschitz Continuity\nIn the previous section, we stated an intuition that because RRF does not preserve the distribution\nof raw scores, it loses valuable information in the process of fusing retrieval systems. In our final", "source": "2210.11934.pdf"} | |
{"id": "4346c658ecf5-3", "text": "of raw scores, it loses valuable information in the process of fusing retrieval systems. In our final\nresearch question, RQ3, we investigate if this indeed matters in practice.", "source": "2210.11934.pdf"} | |
{"id": "6c143f58ddf2-0", "text": "An Analysis of Fusion Functions for Hybrid Retrieval 111:19\n(a) in-domain\n (b) out-of-domain\nFig. 9. The difference in NDCG@1000 of \ud835\udc53SRRF and\ud835\udc53RRFwith\ud835\udf02=60(positive indicates better ranking quality\nbySRRF ) as a function of \ud835\udefd.\nThe notion of \u201cpreserving\u201d information is well captured by the concept of Lipschitz continuity.6\nWhen a function is Lipschitz continuous with a small Lipschitz constant, it does not oscillate wildly\nwith a small change to its input. RRF does not have this property because the moment one lexical\n(or semantic) score becomes larger than another the function makes a hard transition to a new\nvalue.\nWe can therefore cast RQ3 as a question of whether Lipschitz continuity is an important property\nin practice. To put that hypothesis to the test, we design a smooth approximation of RRF using\nknown techniques [4, 30].\nAs expressed in Equation (1), the rank of a document is simply the sum of indicators. It is", "source": "2210.11934.pdf"} | |
{"id": "6c143f58ddf2-1", "text": "thus trivial to approximate this quantity using a generalized sigmoid with parameter \ud835\udefd:\ud835\udf0e\ud835\udefd(\ud835\udc65)=\n1/(1+exp(\u2212\ud835\udefd\ud835\udc65)). As\ud835\udefdapproaches 1, the sigmoid takes its usual Sshape, while \ud835\udefd\u2192\u221e produces\na very close approximation of the indicator. Interestingly, the Lipschitz constant of \ud835\udf0e\ud835\udefd(\u00b7)is, in fact,\n\ud835\udefd. As\ud835\udefdincreases, the approximation of ranks becomes more accurate, but the Lipschitz constant\nbecomes larger. When \ud835\udefdis too small, however, the approximation breaks down but the function\ntransitions more slowly, thereby preserving much of the characteristics of the underlying data\ndistribution.\nRRF being a function of ranks can now be approximated by plugging in approximate ranks in\nEquation (7), resulting in SRRF:\n\ud835\udc53SRRF(\ud835\udc5e,\ud835\udc51)=1\n\ud835\udf02+\u02dc\ud835\udf0bLex(\ud835\udc5e,\ud835\udc51)+1", "source": "2210.11934.pdf"} | |
{"id": "6c143f58ddf2-2", "text": "\ud835\udf02+\u02dc\ud835\udf0bLex(\ud835\udc5e,\ud835\udc51)+1\n\ud835\udf02+\u02dc\ud835\udf0bSem(\ud835\udc5e,\ud835\udc51), (9)\nwhere \u02dc\ud835\udf0bo(\ud835\udc5e,\ud835\udc51\ud835\udc56)=0.5+\u00cd\n\ud835\udc51\ud835\udc57\u2208\ud835\udc45\ud835\udc58o(\ud835\udc5e)\ud835\udf0e\ud835\udefd(\ud835\udc53o(\ud835\udc5e,\ud835\udc51\ud835\udc57)\u2212\ud835\udc53o(\ud835\udc5e,\ud835\udc51\ud835\udc56)). By increasing \ud835\udefdwe increase the Lipschitz\nconstant of \ud835\udc53SRRF. This is the lever we need to test the idea that Lipschitz continuity matters and\nthat functions that do not distort the distributional properties of raw scores lead to better ranking\nquality.", "source": "2210.11934.pdf"} | |
{"id": "6c143f58ddf2-3", "text": "that functions that do not distort the distributional properties of raw scores lead to better ranking\nquality.\n6A function \ud835\udc53is Lipschitz continous with constant \ud835\udc3fif||\ud835\udc53(\ud835\udc66)\u2212\ud835\udc53(\ud835\udc65)||\ud835\udc5c\u2264\ud835\udc3f||\ud835\udc66\u2212\ud835\udc65||\ud835\udc56for some norm||\u00b7|| \ud835\udc5cand||\u00b7|| \ud835\udc56\non the output and input space of \ud835\udc53.", "source": "2210.11934.pdf"} | |
{"id": "7f308cf6cc65-0", "text": "111:20 Sebastian Bruch, Siyu Gai, and Amir Ingber\n(a) in-domain\n (b) out-of-domain\nFig. 10. The difference in NDCG@1000 of \ud835\udc53SRRF and\ud835\udc53RRFwith\ud835\udf02=5(positive indicates better ranking quality\nbySRRF ) as a function of \ud835\udefd.\nFigures 9 and 10 visualize the difference between SRRF and RRF for two settings of \ud835\udf02selected\nbased on the results in Table 2. As anticipated, when \ud835\udefdis too small, the approximation error is\nlarge and ranking quality degrades. As \ud835\udefdbecomes larger, ranking quality trends in the direction\nofRRF. Interestingly, as \ud835\udefdbecomes gradually smaller, the performance of SRRF improves over\ntheRRF baseline. This effect is more pronounced for the \ud835\udf02=60setting of RRF, as well as on the\nout-of-domain datasets.\nWhile we acknowledge the possibility that the approximation in Equation (9) may cause a change", "source": "2210.11934.pdf"} | |
{"id": "7f308cf6cc65-1", "text": "While we acknowledge the possibility that the approximation in Equation (9) may cause a change\nin ranking quality, we expected that change to be a degradation, not an improvement. However,\ngiven we do observe gains by smoothing the function, and that the only other difference between\nSRRF and RRF is their Lipschitz constant, we believe these results highlight the role of Lipschitz\ncontinuity in ranking quality. For completeness, we have also included a comparison of SRRF, RRF,\nand TM2C2 in Table 3.\n6 DISCUSSION\nThe analysis in this work motivates us to identify and document the properties of a well-behaved\nfusion function, and present the principles that, we hope, will guide future research in this space.\nThese desiderata are stated below.\nMonotonicity : When\ud835\udc53ois positively correlated with a target ranking metric (i.e., ordering\ndocuments in decreasing order of \ud835\udc53omust lead to higher quality), then it is natural to require\nthat\ud835\udc53Hybrid be monotone increasing in its arguments. We have already seen and indeed used this", "source": "2210.11934.pdf"} | |
{"id": "7f308cf6cc65-2", "text": "property in our analysis of the convex combination fusion function. It is trivial to show why this\nproperty is crucial.\nHomogeneity : The order induced by a fusion function must be unaffected by a positive re-\nscaling of query and document vectors. That is: \ud835\udc53Hybrid(\ud835\udc5e,\ud835\udc51)\ud835\udf0b=\ud835\udc53Hybrid(\ud835\udc5e,\ud835\udefe\ud835\udc51)\ud835\udf0b=\ud835\udc53Hybrid(\ud835\udefe\ud835\udc5e,\ud835\udc51)where\n\ud835\udf0b=denotes rank-equivalence and \ud835\udefe>0. This property prevents any retrieval system from inflating\nits contribution to the final hybrid score by simply boosting its document or query vectors.", "source": "2210.11934.pdf"} | |
{"id": "b436cf251fee-0", "text": "An Analysis of Fusion Functions for Hybrid Retrieval 111:21\nTable 3. Mean NDCG@1000 (NDCG@100 for SciFact andNFCorpus ) on the test split of various datasets\nfor hybrid retrieval using TM2C2 ( \ud835\udefc=0.8),RRF (\ud835\udf02), and SRRF( \ud835\udf02,\ud835\udefd). The parameters \ud835\udefdare fixed to values\nthat maximize NDCG on the validation split of in-domain datasets. The symbols \u2021and\u2217indicate statistical\nsignificance ( \ud835\udc5d-value <0.01) with respect to TM2C2 and RRF respectively, according to a paired two-tailed\n\ud835\udc61-test.\nNDCG\nDataset TM2C2 RRF(60) SRRF ( 60,40)RRF(5) SRRF ( 5,100)in-domainMS MARCO 0.454 0.425\u20210.431\u2021\u22170.435\u20210.431\u2021\u2217", "source": "2210.11934.pdf"} | |
{"id": "b436cf251fee-1", "text": "NQ 0.542 0.514\u20210.516\u20210.521\u20210.517\u2021\nQuora 0.901 0.877\u20210.889\u2021\u22170.885\u20210.889\u2021\u2217zero-shotNFCorpus 0.343 0.326\u20210.338\u2021\u22170.335\u20210.339\u2021\nHotpotQA 0.699 0.675\u20210.695\u22170.693\u20210.705\u2021\u2217\nFEVER 0.744 0.721\u20210.725\u20210.727\u20210.735\u2021\u2217\nSciFact 0.753 0.730\u20210.740\u20210.738\u20210.740\u2021\nDBPedia 0.512 0.489\u20210.501\u2021\u22170.489\u20210.492\u2021\nFiQA 0.496 0.464\u20210.468\u20210.470\u20210.469\u2021", "source": "2210.11934.pdf"} | |
{"id": "b436cf251fee-2", "text": "Boundedness : Recall that, a convex combination without score normalization is often ineffective\nand inconsistent because BM25 is unbounded and that lexical and semantic scores are on different\nscales. To see this effect we turn to Figure 11.\nWe observe in Figure 11(a) that, for in-domain datasets, adding the unnormalized lexical scores\nusing a convex combination leads to a severe degradation of ranking quality. We believe this is\nbecause of the fact that the semantic retrieval model, which is fine-tuned on these datasets, already\nproduces ranked lists of high quality, and that adding the lexical scores which are on a very different\nscale distorts the rankings and leads to poor performance. In out-of-domain experiments as shown\nin Figure 11(b), however, the addition of lexical scores leads to often significant gains in quality.\nWe believe this can be explained exactly as the in-domain observations: The semantic model\ngenerally does poorly on out-of-domain datasets while the lexical retriever does well. But because\nthe semantic scores are bounded and relatively small, they do not significantly distort the rankings\nproduced by the lexical retriever.", "source": "2210.11934.pdf"} | |
{"id": "b436cf251fee-3", "text": "produced by the lexical retriever.\nTo avoid that pitfall, we require that \ud835\udc53Hybrid be bounded:|\ud835\udc53Hybrid|\u2264\ud835\udc40for some\ud835\udc40>0. As we\nhave seen before, normalizing the raw scores addresses this issue.\nLipschitz Continuity : We argued that because RRF does not take into consideration the raw\nscores, it distorts their distribution and thereby loses valuable information. On the other hand,\nTM2C2 (or any convex combination of scores) is a smooth function of scores and preserves much\nof the characteristics of its underlying distribution. We formalized this idea using the notion of\nLipschitz continuity: A larger Lipschitz constant leads to a larger distortion of retrieval score\ndistribution.\nInterpretability and Sample Efficiency : The question of hybrid retrieval is an important topic\ninIR. What makes it particularly pertinent is its zero-shot applicability, a property that makes\ndeep models reusable , reducing computational costs and emissions as a result [ 3,34], and enabling\nresource-constrained research labs to innovate. Given the strong evidence supporting the idea that", "source": "2210.11934.pdf"} | |
{"id": "b436cf251fee-4", "text": "resource-constrained research labs to innovate. Given the strong evidence supporting the idea that\nhybrid retrieval is most valuable when applied to out-of-domain datasets [ 5], we believe that \ud835\udc53Hybrid\nshould be robust to distributional shifts and should not need training or fine-tuning on target", "source": "2210.11934.pdf"} | |
{"id": "94824fc6dcf4-0", "text": "111:22 Sebastian Bruch, Siyu Gai, and Amir Ingber\n(a) in-domain\n (b) out-of-domain\nFig. 11. The difference in NDCG of convex combination of unnormalized scores and a pure semantic search\n(positive indicates better ranking quality by a convex combination) as a function of \ud835\udefc.\ndatasets. This implies that either the function must be non-parametric, that its parameters can be\ntuned efficiently with respect to the training samples required, or that they are highly interpretable\nsuch that their value can be guided by expert knowledge.\nIn the absence of a truly non-parametric approach, however, we believe a fusion that is more\nsample-efficient to tune is preferred. Because convex combination has fewer parameters than the\nfully parameterized RRF, we believe it should have this property. To confirm, we ask how many\ntraining queries it takes to converge to the correct \ud835\udefcon a target dataset.\nFigure 12 visualizes our experiments, where we plot NDCG of RRF (\ud835\udf02=60) and TM2C2 with", "source": "2210.11934.pdf"} | |
{"id": "94824fc6dcf4-1", "text": "\ud835\udefc=0.8from Table 1. Additionally, we take the train split of each dataset and sample from it\nprogressively larger subsets (with a step size of 5%), and use it to tune the parameters of each\nfunction. We then measure NDCG of the tuned functions on the test split. For the depicted datasets\nas well as all other datasets in this work, we observe a similar trend: With less than 5%of the training\ndata, which is often a small set of queries, TM2C2\u2019s \ud835\udefcconverges, regardless of the magnitude of\ndomain shift. This sample efficiency is remarkable because it enables significant gains with little\nlabeling effort. Finally, while RRF does not settle on a value and its parameters are sensitive to the\ntraining sample, its performance does more or less converge. However, the performance of the\nfully parameterized RRF is still sub-optimal compared with TM2C2.\nIn Figure 12, we also include a convex combination of fully parameterized RRF terms, denoted\nby RRF-CC and defined as:", "source": "2210.11934.pdf"} | |
{"id": "94824fc6dcf4-2", "text": "by RRF-CC and defined as:\n\ud835\udc53RRF(\ud835\udc5e,\ud835\udc51)=(1\u2212\ud835\udefc)1\n\ud835\udf02Lex+\ud835\udf0bLex(\ud835\udc5e,\ud835\udc51)+\ud835\udefc1\n\ud835\udf02Sem+\ud835\udf0bSem(\ud835\udc5e,\ud835\udc51), (10)\nwhere\ud835\udefc,\ud835\udf02Lex, and\ud835\udf02Semare tunable parameters. The question this particular formulation tries to\nanswer is whether adding an additional weight to the combination of the RRF terms affects retrieval\nquality. From the figure, it is clear that the addition of this parameter does not have a significant\nimpact on the overall performance. This also serves as additional evidence supporting the claim\nthat Lipschitz continuity is an important property.", "source": "2210.11934.pdf"} | |
{"id": "0e96be142e14-0", "text": "An Analysis of Fusion Functions for Hybrid Retrieval 111:23\n(a) MS MARCO\n (b)Quora\n(c)HotpotQA\n (d)Fever\nFig. 12. Sample efficiency of TM2C2 and the parameterized variants of RRF (single parameter where \ud835\udf02Sem=\n\ud835\udf02Lex, and two parameters where we allow different values of \ud835\udf02Semand\ud835\udf02Lex, and a third variation that is a\nconvex combination of RRF terms defined in Equation 10). We sample progressively larger subsets of the\nvalidation set (with a step size of 5%), tune the parameters of each function on the resulting set, and evaluate\nthe resulting function on the test split. These figures depict NDCG@1000 as a function of the size of the\ntuning set, averaged over 5trials with the shaded regions illustrating the 95% confidence intervals. For\nreference, we have also plotted NDCG on the test split for RRF (\ud835\udf02=60) and TM2C2 with \ud835\udefc=0.8from Table 1.\n7 CONCLUSION", "source": "2210.11934.pdf"} | |
{"id": "0e96be142e14-1", "text": "7 CONCLUSION\nWe studied the behavior of two popular functions that fuse together lexical and semantic retrieval\nto produce hybrid retrieval, and identified their advantages and pitfalls. Importantly, we inves-\ntigated several questions and claims in prior work. We established theoretically that the choice\nof normalization is not as consequential as once thought for a convex combination-based fusion\nfunction. We found that RRF is sensitive to its parameters. We also observed empirically that convex\ncombination of normalized scores outperforms RRF on in-domain and out-of-domain datasets\u2014a\nfinding that is in disagreement with [5].", "source": "2210.11934.pdf"} | |
{"id": "8fb80edd1315-0", "text": "111:24 Sebastian Bruch, Siyu Gai, and Amir Ingber\nWe believe that a convex combination with theoretical minimum-maximum normalization\n(TM2C2) indeed enjoys properties that are important in a fusion function. Its parameter, too,\ncan be tuned sample-efficiently or set to a reasonable value based on domain knowledge. In our\nexperiments, for example, we found the range \ud835\udefc\u2208[0.6,0.8]to consistently lead to improvements.\nWhile we observed that a line appears to be appropriate for a collection of query-document pairs,\nwe acknowledge that that may change if our analysis was conducted on a per-query basis\u2014itself a\nrather non-trivial effort. For example, it is unclear if bringing non-linearity to the design of the\nfusion function or the normalization itself leads to a more accurate prediction of \ud835\udefcon a per-query\nbasis. We leave an exploration of this question to future work.\nWe also note that, while our analysis does not exclude the use of multiple retrieval engines as\ninput, and indeed can be extended, both theoretically and empirically, to a setting where we have", "source": "2210.11934.pdf"} | |
{"id": "8fb80edd1315-1", "text": "input, and indeed can be extended, both theoretically and empirically, to a setting where we have\nmore than just lexical and semantic scores, it is nonetheless important to conduct experiments\nand validate that our findings generalize. We believe, however, that our current assumptions are\npractical and are reflective of the current state of hybrid search where we typically fuse only lexical\nand semantic retrieval systems. As such, we leave an extended analysis of fusion on multiple\nretrieval systems to future work.\nACKNOWLEDGMENTS\nWe benefited greatly from conversations with Brian Hentschel, Edo Liberty, and Michael Bendersky.\nWe are grateful to them for their insight and time.\nREFERENCES\n[1] Nima Asadi. 2013. Multi-Stage Search Architectures for Streaming Documents . University of Maryland.\n[2]Nima Asadi and Jimmy Lin. 2013. Effectiveness/Efficiency Tradeoffs for Candidate Generation in Multi-Stage Retrieval\nArchitectures. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information\nRetrieval (Dublin, Ireland). 997\u20131000.", "source": "2210.11934.pdf"} | |
{"id": "8fb80edd1315-2", "text": "Retrieval (Dublin, Ireland). 997\u20131000.\n[3]Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2022. ReNeuIR: Reaching Efficiency in Neural Information\nRetrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information\nRetrieval (Madrid, Spain). 3462\u20133465.\n[4]Sebastian Bruch, Masrour Zoghi, Michael Bendersky, and Marc Najork. 2019. Revisiting Approximate Metric Optimiza-\ntion in the Age of Deep Neural Networks. In Proceedings of the 42nd International ACM SIGIR Conference on Research\nand Development in Information Retrieval (Paris, France). 1241\u20131244.\n[5]Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. 2022. Out-of-Domain Semantics to the\nRescue! Zero-Shot Hybrid Retrieval Models. In Advances in Information Retrieval: 44th European Conference on IR", "source": "2210.11934.pdf"} | |
{"id": "8fb80edd1315-3", "text": "Research, ECIR 2022, Stavanger, Norway, April 10\u201314, 2022, Proceedings, Part I (Stavanger, Norway). 95\u2013110.\n[6]Gordon V. Cormack, Charles L A Clarke, and Stefan Buettcher. 2009. Reciprocal Rank Fusion Outperforms Condorcet\nand Individual Rank Learning Methods. 758\u2013759.\n[7]Van Dang, Michael Bendersky, and W Bruce Croft. 2013. Two-Stage learning to rank for information retrieval. In\nAdvances in Information Retrieval . Springer, 423\u2013434.\n[8]Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional\nTransformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the\nAssociation for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) . Association\nfor Computational Linguistics, Minneapolis, Minnesota, 4171\u20134186.", "source": "2210.11934.pdf"} | |
{"id": "8fb80edd1315-4", "text": "for Computational Linguistics, Minneapolis, Minnesota, 4171\u20134186.\n[9]Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and St\u00e9phane Clinchant. 2022. From Distillation to Hard\nNegative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM", "source": "2210.11934.pdf"} | |
{"id": "3ca2c422a8e4-0", "text": "Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM\nSIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2353\u20132359.\n[10] Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2000. IR evaluation methods for retrieving highly relevant documents. In\nProceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval .\nACM, 41\u201348.\n[11] Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2021. Billion-Scale Similarity Search with GPUs. IEEE Transactions on\nBig Data 7 (2021), 535\u2013547.\n[12] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau\nYih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on", "source": "2210.11934.pdf"} | |
{"id": "c98cf58f7990-0", "text": "An Analysis of Fusion Functions for Hybrid Retrieval 111:25\nEmpirical Methods in Natural Language Processing (EMNLP) .\n[13] Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020. Leveraging Semantic and Lexical\nMatching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach. ArXiv (2020).\n[14] Hang Li, Shuai Wang, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, and Guido Zuccon. 2022. To\nInterpolate or Not to Interpolate: PRF, Dense and Sparse Retrievers. In Proceedings of the 45th International ACM SIGIR\nConference on Research and Development in Information Retrieval (Madrid, Spain). 2495\u20132500.\n[15] Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021. Pretrained Transformers for Text Ranking: BERT and Beyond.\narXiv:2010.06467 [cs.IR]", "source": "2210.11934.pdf"} | |
{"id": "c98cf58f7990-1", "text": "arXiv:2010.06467 [cs.IR]\n[16] Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Found. Trends Inf. Retr. 3, 3 (2009), 225\u2013331.\n[17] Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations\nfor Text Retrieval. Transactions of the Association for Computational Linguistics 9 (2021), 329\u2013345.\n[18] Ji Ma, Ivan Korotkov, Keith Hall, and Ryan T. McDonald. 2020. Hybrid First-stage Retrieval Models for Biomedical\nLiterature. In CLEF .\n[19] Xueguang Ma, Kai Sun, Ronak Pradeep, and Jimmy J. Lin. 2021. A Replication Study of Dense Passage Retriever. ArXiv\n(2021).\n[20] Craig Macdonald, Rodrygo LT Santos, and Iadh Ounis. 2013. The whens and hows of learning to rank for web search.", "source": "2210.11934.pdf"} | |
{"id": "c98cf58f7990-2", "text": "Information Retrieval 16, 5 (2013), 584\u2013628.\n[21] Yu. A. Malkov and D. A. Yashunin. 2016. Efficient and robust approximate nearest neighbor search using Hierarchical\nNavigable Small World graphs.\n[22] Antonio Mallia, Michal Siedlaczek, Joel Mackenzie, and Torsten Suel. 2019. PISA: Performant Indexes and Search for\nAcademia. In Proceedings of the Open-Source IR Replicability Challenge co-located with 42nd International ACM SIGIR\nConference on Research and Development in Information Retrieval, OSIRRC@SIGIR 2019, Paris, France, July 25, 2019.\n50\u201356. http://ceur-ws.org/Vol-2409/docker08.pdf\n[23] Yoshitomo Matsubara, Thuy Vu, and Alessandro Moschitti. 2020. Reranking for Efficient Transformer-Based Answer\nSelection . 1577\u20131580.", "source": "2210.11934.pdf"} | |
{"id": "c98cf58f7990-3", "text": "Selection . 1577\u20131580.\n[24] Bhaskar Mitra, Eric Nalisnick, Nick Craswell, and Rich Caruana. 2016. A dual embedding space model for document\nranking. arXiv preprint arXiv:1602.01137 (2016).\n[25] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS\nMARCO: A Human Generated MAchine Reading COmprehension Dataset. (November 2016).\n[26] Rodrigo Nogueira and Kyunghyun Cho. 2020. Passage Re-ranking with BERT. arXiv:1901.04085 [cs.IR]\n[27] Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document Ranking with a Pretrained\nSequence-to-Sequence Model. In Findings of the Association for Computational Linguistics: EMNLP 2020 . 708\u2013718.", "source": "2210.11934.pdf"} | |
{"id": "c98cf58f7990-4", "text": "[28] Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. arXiv\npreprint arXiv:1910.14424 (2019).\n[29] Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019. Document Expansion by Query Prediction. arXiv\npreprint arXiv:1904.08375 (2019).\n[30] Tao Qin, Tie-Yan Liu, and Hang Li. 2010. A general approximation framework for direct optimization of information\nretrieval measures. Information retrieval 13, 4 (2010), 375\u2013397.\n[31] Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In\nProceedings of the 2019 Conference on Empirical Methods in Natural Language Processing . Association for Computational\nLinguistics.\n[32] Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations", "source": "2210.11934.pdf"} | |
{"id": "c98cf58f7990-5", "text": "and Trends in Information Retrieval 3, 4 (April 2009), 333\u2013389.\n[33] Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at", "source": "2210.11934.pdf"} | |
{"id": "7cb55e3368fc-0", "text": "and Trends in Information Retrieval 3, 4 (April 2009), 333\u2013389.\n[33] Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at\nTREC-3.. In TREC (NIST Special Publication, Vol. 500-225) , Donna K. Harman (Ed.). National Institute of Standards and\nTechnology (NIST), 109\u2013126.\n[34] Harrisen Scells, Shengyao Zhuang, and Guido Zuccon. 2022. Reduce, Reuse, Recycle: Green Information Retrieval\nResearch. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information\nRetrieval (Madrid, Spain). 2825\u20132837.\n[35] Tao Tao, Xuanhui Wang, Qiaozhu Mei, and ChengXiang Zhai. 2006. Language Model Information Retrieval with\nDocument Expansion. In Proceedings of the Main Conference on Human Language Technology Conference of the North", "source": "2210.11934.pdf"} | |
{"id": "7cb55e3368fc-1", "text": "Document Expansion. In Proceedings of the Main Conference on Human Language Technology Conference of the North\nAmerican Chapter of the Association of Computational Linguistics (New York, New York). 407\u2013414.\n[36] Nandan Thakur, Nils Reimers, Andreas R\u00fcckl\u00e9, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A Heterogeneous\nBenchmark for Zero-shot Evaluation of Information Retrieval Models. In Thirty-fifth Conference on Neural Information\nProcessing Systems Datasets and Benchmarks Track (Round 2) .\n[37] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia\nPolosukhin. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information", "source": "2210.11934.pdf"} | |
{"id": "6b12950b2069-0", "text": "111:26 Sebastian Bruch, Siyu Gai, and Amir Ingber\nProcessing Systems (Long Beach, California, USA). 6000\u20136010.\n[38] Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In Proceedings\nof the 34th international ACM SIGIR conference on Research and development in Information Retrieval . ACM, 105\u2013114.\n[39] Shuai Wang, Shengyao Zhuang, and Guido Zuccon. 2021. BERT-Based Dense Retrievers Require Interpolation with\nBM25 for Effective Passage Retrieval. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of\nInformation Retrieval (Virtual Event, Canada). 317\u2013324.\n[40] Qiang Wu, Christopher J.C. Burges, Krysta M. Svore, and Jianfeng Gao. 2010. Adapting boosting for information\nretrieval measures. Information Retrieval (2010).", "source": "2210.11934.pdf"} | |
{"id": "6b12950b2069-1", "text": "retrieval measures. Information Retrieval (2010).\n[41] Xiang Wu, Ruiqi Guo, David Simcha, Dave Dopson, and Sanjiv Kumar. 2019. Efficient Inner Product Approximation in\nHybrid Spaces. ArXiv (2019).\n[42] Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2021.\nApproximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. In International Conference on\nLearning Representations .\n[43] Dawei Yin, Yuening Hu, Jiliang Tang, Tim Daly, Mianwei Zhou, Hua Ouyang, Jianhui Chen, Changsung Kang, Hongbo\nDeng, Chikashi Nobata, et al .2016. Ranking relevance in yahoo search. In Proceedings of the 22nd ACM SIGKDD\nInternational Conference on Knowledge Discovery and Data Mining . ACM, 323\u2013332.", "source": "2210.11934.pdf"} | |
{"id": "6b12950b2069-2", "text": "International Conference on Knowledge Discovery and Data Mining . ACM, 323\u2013332.\n[44] Hamed Zamani, Mike Bendersky, Donald Metzler, Honglei Zhuang, and Marc Najork. 2022. Stochastic Retrieval-\nConditioned Reranking. In Proceedings of the 2022 ACM SIGIR International Conference on the Theory of Information\nRetrieval (Madrid, Spain).\n[45] Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma. 2020. RepBERT: Contextualized Text Embeddings\nfor First-Stage Retrieval.", "source": "2210.11934.pdf"} | |