Why do Nearest Neighbor Language Models Work?

Frank F. Xu  Uri Alon  Graham Neubig
Language Technologies Institute, Carnegie Mellon University
{fangzhex,ualon,gneubig}@cs.cmu.edu

Abstract

Language models (LMs) compute the probability of a text by sequentially computing a representation of an already-seen context and using this representation to predict the next word. Currently, most LMs calculate these representations through a neural network consuming the immediate previous context. Recently, however, retrieval-augmented LMs have been shown to improve over standard neural LMs by accessing information retrieved from a large datastore, in addition to their standard, parametric, next-word prediction. In this paper, we set out to understand why retrieval-augmented language models, and specifically k-nearest neighbor language models (kNN-LMs), perform better than standard parametric LMs, even when the k-nearest neighbor component retrieves examples from the same training set that the LM was originally trained on. To this end, we perform a careful analysis of the various dimensions over which kNN-LM diverges from standard LMs, and investigate these dimensions one by one. Empirically, we identify three main reasons why kNN-LM performs better than standard LMs: using a different input representation for predicting the next tokens, approximate kNN search, and the importance of softmax temperature for the kNN distribution. Further, we incorporate these insights into the architecture or training procedure of the standard parametric LM, improving its results without the need for an explicit retrieval component. The code is available at https://github.com/frankxu2004/knnlm-why.
1 Introduction

Language modeling is the task of predicting the probability of a text (often conditioned on context), with broad-spanning applications across natural language processing (Bengio et al., 2003; Merity et al., 2018; Baevski and Auli, 2018; Brown et al., 2020). This modeling is usually done by sequentially encoding a context c_t using a trained neural network function f, and computing the probability of the next word w_t according to f(c_t) and a vector representation of w_t. Recently, retrieval-augmented LMs have shown a series of impressive results (Grave et al., 2017; Guu et al., 2018; He et al., 2020; Khandelwal et al., 2020b; Borgeaud et al., 2022; Alon et al., 2022).
Retrieval-augmented LMs compute next-token distributions based not only on the immediately preceding context c_t and the model parameters, but also on an external datastore, from which examples are retrieved and incorporated into the base LM's prediction. One retrieval-augmented model that is notable for both its simplicity and efficacy is the k-nearest neighbor language model (kNN-LM; Khandelwal et al., 2020b). It extends a trained base LM by linearly interpolating the output word distribution with a kNN model. The nearest neighbors are retrieved according to the distances between the current context embedding of the base LM and all the context embeddings in the datastore. The datastore is created by encoding all contexts from any text collection, including the original LM training data. One of the most surprising results from Khandelwal et al. (2020b) is that kNN-LM reduces the perplexity of the base LM even when the kNN component retrieves examples from the same training set that the LM was originally trained on, indicating that kNN-LM improves the ability to model the training data and is not simply benefiting from access to more data.

[Figure 1: An illustration of the generalized formulation of kNN-LM in Equation 5. The parametric component computes softmax(W_sm · h_sm) from the feedforward (FFN) output of the last transformer block; the non-parametric component computes a softmax over W_ds ⊗ h_ds (with N_ds up to roughly 5000V entries), sparsified by mask-to-k(), which in kNN-LM is top-k(), using the attention (ATT) output as h_ds.]
Intrigued by this, we ask questions such as: could kNN-LM be improving because of capacity issues in the parametric base LM? In this paper, we set out to understand why kNN-LMs work even in this setting. In the following sections, we first elucidate connections between the added kNN component and the standard LM component. Specifically, we note that the word distributions from the two components are both calculated using a softmax function, based on the similarity of the current context embedding with a set of embeddings that corresponds to different next words. With this intuition, we formalize and generalize the non-parametric distribution calculation with the softmax layer and word embedding layer used in parametric LMs. We then show that this generalized form exposes a variety of design choices, e.g., the number of context embeddings in the datastore, the input representation used in the softmax layer, different similarity functions, and the approximation and sparsification implementations in the kNN search. This provides a general framework for analyzing kNN-LM and similar models, and allows us to perform ablation studies that test the importance of various design decisions. We proceed to propose multiple hypotheses for why kNN-LM works, which are testable by adjusting the various parameters exposed by our generalized formulation. Based on these hypotheses, we perform ablation experiments and analyze the nuances between different implementations of the generalized version of P_kNN. As the answer to our question, "why kNN-LMs work", we eventually show that the most probable reasons are threefold:
1. Ensembling the output of softmax using two representations from different layers of the transformer is important; in our experiments, this accounts for 55% of the performance gain of kNN-LM, or a 6.5% relative perplexity improvement over the base LM.

2. kNN-LM uses approximate nearest neighbor search to handle the large number of candidates, and the lack of preciseness in this algorithm actually helps kNN-LM generalize better than exact nearest neighbor search and distance calculation, possibly due to a regularization effect. The relative perplexity improvement from this factor is about 2.6%.

3. Depending on the design decisions chosen for modeling, adding a temperature term to the kNN non-parametric component can become crucial (although coincidentally, in the original settings of Khandelwal et al. (2020b), a temperature of 1.0 is close to optimal, which hid the importance of this term). In some settings, the relative perplexity gap between the default and the optimal temperature can be as high as 3.7%.

Finally, one significant drawback of the current kNN-LM is the inefficiency of the kNN search performed at each step (He et al., 2021; Borgeaud et al., 2022; Alon et al., 2022; Wang et al., 2022).
Because of the similarity between kNN-LM and the parametric LM's last layers, and the many design choices available, we also demonstrate that we can make kNN-LM more efficient by substituting the kNN search with another matrix operation that fits in accelerator memory, while maintaining more than half of the perplexity improvement, i.e., more than a 6.5% relative improvement over the base LM.

2 Formalizing and Generalizing kNN-LM

kNN-LM (Khandelwal et al., 2020b) is a linear interpolation between a base LM and a kNN model. Given a set of contexts c_i and their corresponding next tokens w_i as pairs (c_i, w_i) ∈ D, kNN-LM creates a datastore (K, V) = {(k_i, v_i)} as a set of key-value pairs:

$(\mathcal{K}, \mathcal{V}) = \{(f(c_i), w_i) \mid (c_i, w_i) \in \mathcal{D}\} \quad (1)$

During inference, the parametric component of the LM generates the output distribution p_LM(w_t | c_t; θ) over the next tokens and produces the corresponding context representation f(c_t), given the test input context c_t. Then, the non-parametric component of the LM queries the datastore with the f(c_t) representation to retrieve its k nearest neighbors N according to a distance function d(·, ·).
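To make Equation 1 concrete, below is a minimal PyTorch sketch of datastore construction. It assumes a causal LM exposing per-position context representations f(c_i); the `context_vectors` helper and the corpus iterator are illustrative stand-ins, not the paper's released implementation.

```python
import torch

@torch.no_grad()
def build_datastore(model, corpus):
    # corpus: iterable of LongTensors of token ids, each of shape (T,)
    keys, values = [], []
    for tokens in corpus:
        # f(c_i): one context vector per position, shape (T, D)
        # (assumed helper; any causal LM's final representations work)
        ctx = model.context_vectors(tokens.unsqueeze(0)).squeeze(0)
        # context c_i = tokens[:i+1] is paired with its next token w_i
        keys.append(ctx[:-1])          # contexts for positions 0 .. T-2
        values.append(tokens[1:])      # the corresponding next tokens
    K = torch.cat(keys).half()         # stored in fp16, as in the paper
    V = torch.cat(values)
    return K, V                        # (N_ds, D) keys, (N_ds,) values
```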
Next, kNN-LM computes a probability distribution over these neighbors using the softmax of their negative distances, and aggregates the probability mass for each vocabulary item across all of its occurrences in the retrieved targets:

$p_{kNN}(w_t \mid c_t) \propto \sum_{(k_i, v_i) \in \mathcal{N}} \mathbb{1}_{w_t = v_i} \exp(-d(k_i, f(c_t))) \quad (2)$

Finally, this distribution is interpolated with the parametric LM distribution p_LM to produce the final kNN-LM distribution:

$p(w_t \mid c_t; \theta) = (1 - \lambda)\, p_{LM}(w_t \mid c_t; \theta) + \lambda\, p_{kNN}(w_t \mid c_t) \quad (3)$

where λ is a scalar that controls the weighting of the interpolation between the two components, with higher λ putting more weight on the non-parametric component.

Looking closely at Equation 2, we notice a similarity between the calculation of P_kNN and the standard P_LM. The kNN distribution is based on the distances between the current context and the nearest neighbors from the datastore, normalized by a softmax function. Recall that in (standard) parametric language models, the distribution over the vocabulary is also based on a measure of distance: the inner product between the current context embedding and the word embeddings of every token in the vocabulary. Because each context embedding in the datastore (K, V) corresponds to a target token, we can also view this datastore as a large word embedding matrix with multiple word embeddings for each vocabulary word. Theoretically, given unlimited computation, we could calculate the distribution based on the distances to every embedding in the datastore and aggregate by vocabulary item, making it more closely resemble P_LM. In this case, k = |D|, the size of the entire datastore, and Equation 2 becomes the following, based on the distances to every context in the datastore D instead of a subset of nearest neighbors N:

$p_{\mathcal{D}}(w_t \mid c_t) \propto \sum_{(k_i, v_i) \in \mathcal{D}} \mathbb{1}_{w_t = v_i} \exp(-d(k_i, f(c_t))) \quad (4)$

In practice, kNN search is used as an approximation, limiting the calculation to only the k nearest neighbors to avoid the computational cost of computing the distribution over the entire datastore.
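Below is a minimal PyTorch sketch of Equations 2 and 3, reusing the key matrix K and value vector V from the construction sketch above. For clarity it scores the entire datastore exactly before taking the top k; the actual kNN-LM replaces this with approximate search, and the λ value shown is illustrative.

```python
import torch

def knn_lm_prob(p_lm, h_query, K, V, vocab_size, k=1024, lam=0.25):
    # Exact negative squared L2 distance to every datastore key (for
    # clarity only; in practice an approximate index supplies the top k).
    d = ((K.float() - h_query) ** 2).sum(-1)        # shape (N_ds,)
    neg_dist, idx = torch.topk(-d, k)               # k nearest neighbors
    # Equation 2: softmax over negative distances, then aggregate the
    # probability mass per vocabulary item across the retrieved targets.
    probs = torch.softmax(neg_dist, dim=-1)
    p_knn = torch.zeros(vocab_size).scatter_add_(0, V[idx], probs)
    # Equation 3: linear interpolation with a fixed weight lambda.
    return (1 - lam) * p_lm + lam * p_knn
```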
If we rewrite and generalize Equation 2, both the kNN-LM of Khandelwal et al. (2020b) and a large number of related models can be expressed through the following equation:

$P_{interp} = (1-\lambda) \underbrace{\mathrm{softmax}(W_{sm} \cdot h_{sm})}_{P_{LM}\ \text{parametric component}} + \lambda \underbrace{M\, \mathrm{softmax}(\text{mask-to-}k(W_{ds} \otimes h_{ds}) / \tau)}_{P_{kNN}\ \text{non-parametric component}} \quad (5)$

Figure 1 provides an illustration of Equation 5. The first term of the equation is the standard parametric language model, whereas the second represents a generalized version of utilizing an external datastore. The first component, the output layer of a common parametric language model, is relatively straightforward: W_sm, of size V × D, is the embedding matrix of the output tokens, and h_sm is the context vector used to calculate the distribution of the output token, usually the output of the final feedforward layer in the transformer. In the second component, W_ds represents the datastore, of size N_ds × D, where N_ds is the number of entries in the datastore and D is the size of each context vector. h_ds represents the context vector used to query the datastore. As shown in Figure 1, these vectors can come from different layers of the transformer architecture. ⊗ represents the operation used to calculate the similarity between the context vectors and the query vector, which also has several alternatives that we discuss below. mask-to-k(·) represents a function that sparsifies similarity scores across the datastore, setting all but k similarity scores to −∞, which results in probabilities of zero for all masked similarity scores after the softmax. Practically, this is necessary for kNN-LM because the size of the datastore N_ds makes it infeasible to calculate all outputs at the same time.
With the masked logits, we apply a more generalized version of softmax with temperature τ; intuitively, the temperature adjusts the peakiness, or confidence, of the output probability distribution. After the softmax, the matrix M, of dimension V × N_ds, sums the probabilities of the N_ds datastore entries corresponding to each of the V vocabulary entries: each column of M is a one-hot vector with a value of 1 at the index of the vocabulary item w_i corresponding to the datastore entry for c_i.

Within this formulation, it becomes obvious that there are many design choices for kNN-LM-like models. One important thing to note is that the right side of Equation 5 is actually very similar to the left side representing the standard parametric language model, with a few additional components: M, mask-to-k, and ⊗. More specifically, some of the design decisions that go into kNN-LM, and their parallels with standard parametric models, are:

1. Size of W_ds: In the standard parametric model, W_sm consists of V embedding vectors, each with D dimensions. In kNN-LM it is very large: N_ds, the size of the datastore, usually the number of tokens in the entire training corpus.

2. Input representation: In the parametric model, h_sm is the output of the feedforward layer in the last transformer block, which we abbreviate "ffn". In contrast, Khandelwal et al. (2020b) rather use as h_ds the output of the multi-headed attention layer of the last transformer block (before running the representations through the feed-forward network, and after the LayerNorm (Ba et al., 2016)), which we abbreviate "att".
3. Similarity & temperature: In the parametric model, the functional form of ⊗ is the inner product (abbreviated IP), whereas Khandelwal et al. (2020b) use the negative squared L2 distance (abbreviated L2) as the similarity function between W_ds and h_ds. As the similarity scores are turned into probability distributions with the softmax function, the softmax temperature τ controls the scaling of the similarity scores and thus the peakiness of the non-parametric component's distribution.

4. Approximation & sparsification: In the parametric model, k = V and no values are masked, but in kNN-LM, k ≪ V and most of the datastore entries are pruned out. The definition of the mask-to-k(·) function, i.e., how to select the important datastore embeddings to include in the similarity calculation (in kNN-LM's case, the k nearest neighbors), is a crucial open design choice.

In the following sections, we set out to better understand how each of these design decisions contributes to the improvement in accuracy due to the use of kNN-LM.
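Below is a minimal sketch of Equation 5 that exposes these design choices directly. The aggregation matrix M is realized with a scatter-add rather than an explicit V × N_ds one-hot matrix; all names are illustrative.

```python
import torch

def interp_prob(W_sm, h_sm, W_ds, h_ds, ds_targets, lam, tau=1.0, k=1024, sim="l2"):
    # Parametric component: softmax(W_sm . h_sm).
    p_lm = torch.softmax(W_sm @ h_sm, dim=-1)
    # Similarity op (the ⊗ in Equation 5): inner product or negative L2.
    if sim == "ip":
        scores = W_ds @ h_ds                           # inner product
    else:
        scores = -((W_ds - h_ds) ** 2).sum(-1)         # negative squared L2
    # mask-to-k: keep the k largest similarities, set the rest to -inf
    # (ties at the k-th score may also survive; fine for a sketch).
    kth = torch.topk(scores, k).values[-1]
    scores = scores.masked_fill(scores < kth, float("-inf"))
    # Temperature softmax over the datastore entries.
    p_entries = torch.softmax(scores / tau, dim=-1)
    # M: sum entry probabilities into their vocabulary items.
    p_knn = torch.zeros_like(p_lm).scatter_add_(0, ds_targets, p_entries)
    return (1 - lam) * p_lm + lam * p_knn
```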
3 Baseline kNN-LM Results

First, we evaluate the kNN-LM baseline on the Wikitext-103 dataset (Merity et al., 2016), and examine the importance of two design choices: the input representation h_ds and the similarity function ⊗. In the models examined in this paper, the parametric model is a transformer language model with mostly the same architecture as in Khandelwal et al. (2020b). However, we do make modifications to the original base LM (Baevski and Auli, 2018) to accommodate our experimentation needs. We use BPE tokenization (Sennrich et al., 2015) to train a smaller vocabulary (33K) than the original (260K) on the training corpus of Wikitext-103, as subword tokenization is ubiquitous in state-of-the-art language models (Radford et al., 2019; Devlin et al., 2018; Liu et al., 2019; Brown et al., 2020). Using subword tokenization also eliminates the need for adaptive softmax (Joulin et al., 2017); it makes the output layer more generalized, sharing more resemblance to the kNN component as described in Section 2, and facilitates the ablation studies in this paper.[1] This base LM has 268M parameters. To give a perspective on how large the datastore is: it is built on the training data, which contains nearly 150M BPE tokens, each paired with a context vector of size 1024, for a total memory consumption of about 300GB.
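Scoring 150M 1024-dimensional keys exactly at every decoding step is infeasible, so retrieval relies on an approximate, quantized index (FAISS, in the original kNN-LM). The sketch below shows the general shape of such an index; the specific parameters (number of IVF lists, sub-quantizers, nprobe) and file name are illustrative assumptions, not the paper's configuration.

```python
import faiss
import numpy as np

N, D = 150_000_000, 1024
keys = np.memmap("datastore_keys.fp16", dtype=np.float16, mode="r", shape=(N, D))

quantizer = faiss.IndexFlatL2(D)                       # coarse quantizer
index = faiss.IndexIVFPQ(quantizer, D, 4096, 64, 8)    # IVF lists + product quantization
index.train(np.asarray(keys[::1500], dtype=np.float32))  # learn codebooks on a sample
for s in range(0, N, 1_000_000):                       # add keys in chunks so the
    index.add(np.asarray(keys[s:s + 1_000_000], dtype=np.float32))  # memmap stays on disk
index.nprobe = 32                                      # IVF lists probed per query

def retrieve(h_ds, k=1024):
    # h_ds: a query context vector of shape (D,); returns (distances, ids)
    return index.search(h_ds.astype(np.float32)[None, :], k)
```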
At every retrieval step, we take the top 1024 nearest neighbors, i.e., k = 1024, following Khandelwal et al. (2020b). The interpolated perplexity is computed with the optimal interpolation parameter λ, tuned according to the perplexity on the development set; λ is then fixed during inference for all predictions, the same as in the standard kNN-LM.

[1] By training our own version of the base LM from scratch with BPE tokenization and a standard output softmax layer, our LM's perplexity is worse than that used in the original kNN-LM paper. However, we observe similar relative gains from the additional kNN component. We argue that the base LM's performance is orthogonal to the study of the factors behind kNN-LM's improvements.
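A minimal sketch of this λ tuning, assuming precomputed per-token distributions from both components on the development set (a plain grid search; all names are illustrative):

```python
import torch

def tune_lambda(p_lm_dev, p_knn_dev, targets):
    # p_lm_dev, p_knn_dev: (N_dev, V) next-token distributions; targets: (N_dev,)
    best_lam, best_ppl = 0.0, float("inf")
    for lam in torch.arange(0.0, 1.0, 0.05):
        p = (1 - lam) * p_lm_dev + lam * p_knn_dev
        nll = -torch.log(p.gather(1, targets[:, None]).squeeze(1) + 1e-10)
        ppl = nll.mean().exp().item()            # dev-set perplexity
        if ppl < best_ppl:
            best_lam, best_ppl = lam.item(), ppl
    return best_lam, best_ppl                    # fixed lambda reused at test time
```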
| Model | h_ds | ⊗ | +#params | PPL | Interp. PPL | Oracle |
|---|---|---|---|---|---|---|
| Base LM | | | 0 | 21.750 | | |
| kNN-LM-L2 | att | L2 | N_ds × D | ∞ | 19.174 | 14.230 |
| kNN-LM-IP | att | IP | N_ds × D | ∞ | 19.095 | 14.077 |
| kNN-LM-L2 | ffn | L2 | N_ds × D | ∞ | 20.734 | 15.594 |
| kNN-LM-IP | ffn | IP | N_ds × D | ∞ | 21.101 | 16.254 |

Table 1: Performance of the parametric language model and several kNN-LM variants.

Results comparing multiple kNN-LM variants are shown in Table 1. The first row gives the base parametric language model's perplexity. The second is a formulation analogous to that of Khandelwal et al. (2020b), and in the remaining rows we vary the input representation h_ds and the similarity function ⊗ from Equation 5. All variants use a large datastore of size N_ds, approximately 5000 times the size of the vocabulary V, as also reflected in "+#params", the number of additional parameters beyond the base LM. We report several important quantities for each model. "PPL" shows the perplexity of only the kNN component of the model, p_kNN(·); this is ∞ for all kNN-LM models, since whenever the kNN search does not retrieve any datastore entries corresponding to the true target word w_t, the probability of the target word is zero. "Oracle" shows the lower bound of the interpolation performance obtained by choosing the best λ for each token in the evaluation dataset, which is either λ = 0 or λ = 1, depending on whether P_LM(w_t|c_t) > P_kNN(w_t|c_t) or not, respectively. From the table, we can see that:

1. Using the output of the multi-headed attention layer ("att") as h_ds (instead of the standard "ffn" layer) is crucial for the better performance of kNN-LM.

2. In general, using the negative squared L2 distance or the inner product as the similarity function does not make a large and consistent difference, although in our setting IP performs slightly better with "att" inputs and slightly worse with "ffn" inputs.

3. Interestingly, when using "ffn" and IP, the same input representation and similarity metric used in the parametric model, the results are the worst, indicating that kNN-LM particularly benefits when the kNN component obtains a different view of the data from the parametric model.
We found in preliminary experiments that kNN-LM generalizes to other base language models as well, ranging from small models with 82M parameters to larger models with 774M parameters. As expected, the gain from kNN-LM is more significant when used with a smaller, less capable base language model; details are shown in Appendix A. In this paper, we are mainly focused on the factors contributing to the relative improvements from kNN-LM, rather than on absolute performance, so we use the 268M model for the remainder of the paper. In the next sections, we perform further experiments with ablations on the general formulation in Equation 5 to elucidate the key elements contributing to the performance improvements of kNN-LM.

4 Effect of Different W_ds Formulations

4.1 Replacing the Datastore with Trainable Embeddings

From the observations in Section 3, we see that the choice of h_ds has a large impact on the performance of kNN-LM. This intrigues us to explore whether one key to the improvements afforded by kNN-LM lies in the use of two different input representations together, namely the attention output (h_ds = att) and the feedforward output (h_ds = ffn). However, from the experiments above alone, it is not possible to disentangle the effect of the choice of h_ds from that of the other design choices and factors in Equation 5. To test the effect of h_ds in a more controlled setting, we remove the non-parametric datastore entirely, initialize W_ds in Equation 5 as a randomly initialized word embedding matrix with the same size (N_ds = V) as the LM's output embedding W_sm, and train W_ds with all other parameters fixed.[2] The loss function for training is the cross-entropy loss of softmax(W_ds · h_ds) with respect to the ground-truth tokens, identical to how the base LM is trained.

[2] Because we previously found little difference between IP and L2 as similarity functions, we use IP in these experiments; for simplicity, we set the temperature τ = 1.
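A minimal sketch of this controlled setting: a fresh V × D matrix trained with cross-entropy on top of the frozen base LM, fed by either the "att" or "ffn" representation. The `representations` helper and the batch iterator are hypothetical stand-ins for the frozen model's internals.

```python
import torch
import torch.nn.functional as F

V, D = 33_000, 1024
W_ds = torch.nn.Parameter(torch.randn(V, D) * 0.02)  # the only trainable weights
opt = torch.optim.Adam([W_ds], lr=1e-3)

for tokens in train_batches:                          # assumed iterator, tokens: (B, T)
    with torch.no_grad():                             # the base LM stays frozen
        h_att, h_ffn = frozen_lm.representations(tokens)  # each (B, T, D)
    h_ds = h_att                                      # or h_ffn for the FFN variant
    logits = h_ds[:, :-1] @ W_ds.t()                  # softmax(W_ds . h_ds) logits
    loss = F.cross_entropy(logits.reshape(-1, V), tokens[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```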
We compare how using h_ds = att or h_ds = ffn affects the interpolated performance. The results are shown in Table 2, with the results from the kNN-LMs using these two input representations included for reference. From these experiments we draw several interesting conclusions:

Effectiveness of re-training W_ds: In the case of "Learned W_ds w/ FFN", we are essentially re-learning the weights feeding into the softmax function, separately from the underlying LM encoder. Despite this fact, the model achieves a perplexity of 20.920, which is 0.83 points better than the base model. This suggests that there is some benefit in learning the parameters of W_ds after freezing the body of the transformer encoder.

Effectiveness of ensembling two predictors: For both choices of W_ds, the interpolated perplexity is significantly better than that of a single predictor. This is particularly the case when using the "att" representation for h_ds, suggesting that the utility of ensembling predictions from two views of the data is not unique to kNN-LM, but extends to standard parametric models as well.

Parametric ensembles as an alternative to kNN-LM?: Overall, by using a separate word embedding matrix of size V × D as an alternative to kNN, we can recover about 55% of the performance gain achieved by kNN-LM, with only a limited number of additional parameters and without the need for slow kNN retrieval at every predicted token. This suggests that the majority of the gain afforded by kNN-LM could be achieved by other, more efficient means.
| Model | h_ds | N_ds | ⊗ | +#params | PPL | Interp. | Oracle |
|---|---|---|---|---|---|---|---|
| Base LM | | | | 0 | 21.750 | | |
| kNN-LM w/ ATT | att | Big | IP | N_ds × D | ∞ | 19.095 | 14.077 |
| Learned W_ds w/ ATT | att | 1x | IP | V × D | 22.584 | 20.353 | 16.954 |
| kNN-LM w/ FFN | ffn | Big | IP | N_ds × D | ∞ | 21.101 | 16.254 |
| Learned W_ds w/ FFN | ffn | 1x | IP | V × D | 20.920 | 20.694 | 18.772 |

Table 2: Performance comparison of how the choice of input representation h_ds affects the kNN-LM baselines and the models with learnable embeddings as a datastore alternative; "att" denotes that h_ds is the attention layer output.

4.2 Increasing the Softmax Capacity

One premise behind kNN-LM is that the large datastore is the key reason the model works well: the larger the softmax capacity, the better the performance. Naturally, as a first step, we need to check whether such a big datastore is warranted and whether the high rank of W_ds leads to better performance. We test the effect of the datastore size used for kNN retrieval on the kNN-LM interpolated perplexity.
If a bigger datastore (a higher-rank W_ds) is better in kNN-LM than a smaller one, then the softmax-capacity hypothesis is more probable. We randomly subsample the full datastore at varying percentages; the results are shown in Figure 2. The full datastore contains more than 150M entries, and storing them takes 293GB using half-precision floating point (fp16). We can see that, whether or not approximate kNN is used, the final perplexity decreases almost linearly with the percentage of the original datastore retained. Even with just 5% of the datastore (15GB), kNN-LM still provides a benefit over the base LM alone. Moreover, even when the subsampling percentage reaches 90%, adding more entries to the datastore still helps, without significant diminishing returns, suggesting that a large datastore is beneficial.

One possible reason a larger datastore is helpful is that some words are difficult to predict, for two main reasons: (1) they are rare, or (2) they are frequent but have multiple meanings and appear in different contexts. The softmax bottleneck (Yang et al., 2017) suggests that the final dot product of the language model, W_sm · h_sm, limits the expressivity of the output probability distributions given the context; that is, a single output vector of a fixed size (1024) cannot express all the possible mappings between 100M training examples and 33K vocabulary outputs. We hypothesize that kNN-LM improves performance by alleviating this problem, since W_ds ⊗ h_ds has a higher rank and is more expressive than W_sm · h_sm alone. In other words, kNN is a sparse approximation of the full softmax over all the embeddings in the datastore W_ds.
To test this hypothesis, we disentangle the effect of the high rank of W_ds from the effect of the actual saved context embeddings in W_ds, by training an embedding matrix of the desired size from scratch. (Because we previously found little difference between IP and L2 as similarity functions, we use IP in these experiments; for simplicity, we set the temperature τ = 1.)

Figure 2: The effect of the size of the datastore used for kNN retrieval on the final interpolated perplexity.

We explore several potential solutions for increasing the capacity of the softmax, and examine whether they can achieve an effect similar to that of kNN-LM. The first and simplest solution is to increase the embedding matrix size by adding more embedding vectors for each word type in the vocabulary. To test this, we replace W_ds with a much smaller learned matrix of size nV × D, where we allocate n embedding vectors for each word type. When calculating the probability from this component, we compute the softmax over all nV items and sum the probabilities belonging to each vocabulary entry to obtain the final probability.
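A minimal sketch of this multi-embedding softmax, with our own toy sizes and names (the actual model uses V ≈ 33K and D = 1024):

    import torch
    import torch.nn.functional as F

    V, D, n = 1_000, 64, 4   # vocabulary size, hidden size, embeddings per word type

    # Learned replacement for W_ds: n embedding vectors per word type, laid
    # out so that row e * V + v is the e-th embedding of word v.
    W = torch.nn.Parameter(0.02 * torch.randn(n * V, D))

    def multi_embedding_probs(h):           # h: (batch, D) input representation h_ds
        logits = h @ W.t()                  # (batch, n*V): softmax over all n*V items
        p = F.softmax(logits, dim=-1)
        return p.view(-1, n, V).sum(dim=1)  # sum the n probabilities of each word type

    probs = multi_embedding_probs(torch.randn(2, D))   # (2, V); each row sums to 1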
mask-to-k(·) is no longer needed here, as this formulation is small enough to fit the entire matrix on the GPU. We then finetune the new W_ds on the training data until convergence. Figure 3 compares the base LM and the original kNN-LM against models using either the attention layer output ("att") or the feedforward layer output ("ffn") as h_ds. We plot the number of embeddings per word type (nV total embeddings in W_ds) against the interpolated perplexity; full details are in Appendix B. In both cases, comparing with the top horizontal line, which represents the perplexity of the base LM, replacing the datastore with a much smaller weight matrix (from N_ds × D to nV × D) by assigning only a few more embeddings to each word helps, although it is only about half as effective as kNN-LM. For perspective, the original datastore size is about 5000V.

Surprisingly, we find that increasing n does not always bring better performance, even though a larger datastore is better than a smaller one in kNN-LM. When h_ds = ffn, over-parameterization provides very limited improvements, while for h_ds = att it does not bring consistent improvements at all. Comparing the trend of increasing the number of embeddings in W_ds against the bottom horizontal line in the plot, which represents the perplexity of the standard kNN-LM using the full datastore (W_ds with approximately 5000V embeddings), we see no clear trend that more trainable embeddings yield better perplexity, and the gap between trained embeddings and the full datastore remains significant. This suggests that simply over-parameterizing W_ds is not an effective way of achieving accuracy gains similar to kNN-LM.
We hypothesize that this is because, by just adding more embeddings while still using the same training procedure as the original LM, the multiple embeddings learned for each word type can remain very close to each other, and thus do not increase the softmax capacity much. This suggests that regularization terms may be needed during training to keep the multiple embeddings from converging to the same vector, which would render the over-parameterization useless.

Besides simply increasing the number of embedding vectors equally for each word type, we also propose other alternatives for increasing softmax capacity. First, we hypothesize that different word types are differently difficult for the language model to predict: words that appear very frequently may appear in many different contexts. As a result, instead of adding an equal number of additional embeddings to each word type, we propose to adaptively increase the number of embeddings per word type based on word frequency, or on the total training loss for the word. Second, we try to break the softmax bottleneck with a Mixture of Softmaxes (MoS): Yang et al. (2017) propose MoS to produce more linearly independent probability distributions over words given different contexts (a minimal sketch is shown below). Last, opposite to training word embeddings of increased size, we also consider compressing the datastore down to a similar-sized embedding matrix for the softmax computation, by clustering the whole datastore and further finetuning the embedding matrix consisting of the cluster centroids. However, none of these alternative methods provided additional benefits over the simple multi-embedding approach. More details on these attempts can be found in Appendix C.
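As a reference point, here is a minimal Mixture-of-Softmaxes sketch in the spirit of Yang et al. (2017); the layer names and sizes are illustrative assumptions, not the exact configuration we trained:

    import torch
    import torch.nn.functional as F

    V, D, n_mix = 1_000, 64, 4

    W_out = 0.02 * torch.randn(V, D)            # shared output word embeddings
    to_queries = torch.nn.Linear(D, n_mix * D)  # context -> one query per component
    to_prior = torch.nn.Linear(D, n_mix)        # context -> mixture weights

    def mos_probs(h):                           # h: (batch, D)
        pi = F.softmax(to_prior(h), dim=-1)     # (batch, n_mix)
        q = torch.tanh(to_queries(h)).view(-1, n_mix, D)
        p = F.softmax(q @ W_out.t(), dim=-1)    # (batch, n_mix, V): one softmax each
        # Mixing after the softmax (not in logit space) is what lets MoS express
        # distributions beyond the rank-D log-softmax bottleneck.
        return (pi.unsqueeze(-1) * p).sum(dim=1)  # (batch, V)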
Figure 3: The number of embeddings per word type (nV total embeddings in W_ds) versus interpolated perplexity, for h_ds = att and h_ds = ffn. The horizontal line at the top represents the perplexity of the base LM; the horizontal line at the bottom represents the interpolated perplexity of kNN-LM with the full datastore.

5 Approximate kNN Search & Softmax Temperature

5.1 Comparing Approximate kNN Search

To calculate P_kNN for the non-parametric component in Equation 5, it is usually prohibitive to use exhaustive kNN search, and thus Khandelwal et al. (2020a) use approximate kNN search with the FAISS library (Johnson et al., 2019). The use of FAISS (like other approximate search libraries) introduces two kinds of approximation.

Approximate Neighbors: Because the search for nearest neighbors is not exact, the retrieved set of neighbors might differ from the actual nearest neighbors. Recall that mask-to-k(·) in Equation 5 is the function that selects the kNN entries from the datastore W_ds. We denote by "real mask" the accurate nearest neighbors for the mask-to-k(·) selection, and by "FAISS mask" the approximate nearest neighbors returned by the FAISS library. (To calculate the real mask over a large datastore, we shard the datastore into several smaller datastores, calculate the nearest neighbors for each shard, and combine the results.)

Approximate Scores: In addition, FAISS approximates the distances between the query and the retrieved neighbors for efficiency. We denote by "real score" the scores calculated from ground-truth distances between the embeddings, and by "FAISS score" the distances returned by FAISS approximate search.

The comparison of the different approximation settings is shown in Table 3.
Quite surprisingly, we find that the interpolated perplexity with approximate search is better than with exact search, with respect to both the mask and the score calculation. Intrigued by this counter-intuitive result, we explore the effect of kNN search approximation further.

Model                              | h_ds | ⊗  | +#params | PPL    | λ     | Interp. PPL | Oracle
Base LM                            |      |    | 0        | 21.750 |       |             |
kNN-LM w/ FAISS mask, FAISS score  | att  | L2 | N_ds × D | ∞      | 0.271 | 19.174      | 14.230
kNN-LM w/ FAISS mask, real score   | att  | L2 | N_ds × D | ∞      | 0.176 | 19.672      | 14.393
kNN-LM w/ real mask, real score    | att  | L2 | N_ds × D | ∞      | 0.172 | 19.735      | 14.480

Table 3: Performance of the parametric language model and comparison of kNN-LMs using approximate versus ground-truth kNN.

First, we plot the subsampled datastore size against the interpolated perplexity in Figure 4, similarly to Figure 2, but comparing approximate versus real masks and approximate versus real scores, on both the full datastore and a small subsampled datastore.
We find that using the approximate FAISS mask to find nearest neighbors is better than using the ground-truth nearest neighbors, and that using the approximate score returned by FAISS is better than recomputing the ground-truth distances between embeddings for the kNN distribution, at both datastore sizes (5% and 100%). Interestingly, the gap between using the approximate and the real score given the same approximate nearest neighbors ("FAISS mask, FAISS score" vs. "FAISS mask, real score") is larger than the gap between using approximate and real nearest neighbors given the same ground-truth distance calculation ("real mask, real score" vs. "FAISS mask, real score"), for reasons we will elucidate in the next section.

Figure 4: The differences between approximate and accurate kNN search at varying datastore sizes ("FAISS mask, FAISS score", "FAISS mask, real score", and "real mask, real score").
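The three settings above can be reproduced in miniature with FAISS; this is a sketch on random toy data, and the index parameters (IVF list count, PQ code size, nprobe) are illustrative assumptions rather than the configuration used in the paper:

    import numpy as np
    import faiss

    d, k = 64, 8
    rng = np.random.default_rng(0)
    keys = rng.standard_normal((100_000, d)).astype("float32")
    query = rng.standard_normal((1, d)).astype("float32")

    # "real mask, real score": exhaustive exact L2 search.
    flat = faiss.IndexFlatL2(d)
    flat.add(keys)
    real_scores, real_mask = flat.search(query, k)

    # "FAISS mask, FAISS score": approximate IVF+PQ search.
    quantizer = faiss.IndexFlatL2(d)
    ivfpq = faiss.IndexIVFPQ(quantizer, d, 256, 8, 8)  # 256 lists, 8-byte PQ codes
    ivfpq.train(keys)
    ivfpq.add(keys)
    ivfpq.nprobe = 8
    faiss_scores, faiss_mask = ivfpq.search(query, k)

    # "FAISS mask, real score": keep the approximate neighbor ids, but
    # recompute the exact (squared) L2 distances for them.
    recomputed = ((keys[faiss_mask[0]] - query) ** 2).sum(axis=1)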
5.2 Adding Softmax Temperature to the kNN Distribution

Because the number of retrieved nearest neighbors k is usually much smaller than the vocabulary size V, the kNN distribution P_kNN used for interpolation intuitively tends to be peakier than the standard LM output distribution. When k = 1024 and V = 33000, as in our experiments, P_kNN has only a few vocabulary items with non-zero probability. Furthermore, many of the retrieved neighbors share the same target token, which makes the kNN distribution even peakier. One way to control the entropy, or peakiness, of the distribution is to add a temperature to the logits that go into the softmax function (Holtzman et al., 2019). We calculate the probability of the non-parametric component P_kNN with the following equation, where t is the softmax temperature:

P_kNN = M · softmax(mask-to-k(W_ds ⊗ h_ds) / t)    (6)

In general, the higher the temperature, the less peaky the distribution becomes. We experiment with both the 5% subsampled datastore and the full datastore, using temperatures ranging from 0 to 3 at 0.1 intervals. The results are shown in Figure 5a and Figure 5b, respectively.

Figure 5: Interpolated perplexity under different softmax temperature values. (a) On the 5% subsampled datastore. (b) On the full datastore.

We can see that the default temperature t = 1 does not always result in the best interpolated perplexity, and tuning the softmax temperature is desirable for all datastore sizes. The lesson learned here is that tuning the softmax temperature for the kNN distribution is crucial for getting optimal results from each setting.
Only coincidentally, a temperature of 1.0 was close to optimal in the original settings of Khandelwal et al. (2020b), which hid the importance of this hyperparameter. In both the 5% subsampled datastore and the full datastore scenarios, the temperature t = 1 is close to optimal when using "FAISS mask, FAISS score"; when using either the "real mask" or the "real score", this is no longer true. Even at the optimal temperature for each setting, "real mask, real score" somewhat underperforms "FAISS mask, real score", consistent with the counter-intuitive phenomenon discussed in Section 5.1. There are also differences between the two datastore sizes: with the full datastore, using the "real score" outperforms the "FAISS score" given the same "FAISS mask", while the opposite is true with the 5% datastore. This suggests that as the datastore grows, accurate distance values become better than approximate ones.
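As a concrete sketch (our own minimal code, not the paper's implementation), the temperature of Equation (6) enters the kNN distribution as follows; `lam`, `scores`, and `ids` are hypothetical names:

    import torch
    import torch.nn.functional as F

    def knn_distribution(scores, target_ids, V, t=1.0):
        # Equation (6): P_kNN = M . softmax(mask-to-k(W_ds ⊗ h_ds) / t).
        # `scores` are the similarity scores of the k retrieved neighbors
        # (already masked to the top-k), `target_ids` their vocabulary ids;
        # t > 1 flattens the peaky kNN distribution, t < 1 sharpens it.
        p = F.softmax(scores / t, dim=-1)
        return torch.zeros(V).scatter_add(0, target_ids, p)

    # Hypothetical interpolation with the base LM, as in kNN-LM:
    # p_final = lam * knn_distribution(scores, ids, V, t=1.2) + (1 - lam) * p_lm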
The relatively small gap between using the "real score" and the "FAISS score" in both datastore settings shows that the main contributor to the improvement is using approximate nearest neighbors ("FAISS mask") rather than approximate distance values ("FAISS score"). We hypothesize that this is related to regularization: approximate search provides fuzziness that functions as a regularizer, preventing overfitting. We can think of the non-parametric part of kNN-LM, the kNN component, as a model whose capacity is the datastore size and whose training data is the datastore itself. Considering that the kNN component uses exactly the same training data as the base parametric LM, ground-truth, accurate kNN search may cause the kNN component to overfit the training data. Comparing the small 5% datastore with the original datastore, a small datastore means a small training set for the kNN "model", and it thus benefits more from this regularization, both through the FAISS mask and through the FAISS score (at optimal temperature settings).

From these experiments we can see that, surprisingly, one of the important ingredients of kNN-LM seems to be approximate kNN search, which likely prevents overfitting to the datastore created from the same training set. We further analyze this unexpected result in Appendix D, where we find that longer words and words that appear in many different contexts have slightly better results with approximate nearest neighbors. Notably, similar effects, where an approximate component leads to better generalization, have been reported in other NLP tasks as well, and are sometimes referred to as "beneficial search bias" when modeling errors cause the highest-scoring solution not to be the correct one: Meister et al. (2020b) suggest that "quite surprisingly, beam search often returns better results than exact inference due to beneficial search bias for NLP tasks." Stahlberg and Byrne (2019) likewise conclude that "vanilla NMT in its current form requires just the right amount of beam search errors, which, from a modeling perspective, is a highly unsatisfactory conclusion indeed, as the model often prefers an empty translation".
6 Probably Wrong Hypotheses for Why kNN-LMs Work

The results in the previous sections come from extensive analysis and experimentation, in which we also tested a number of hypotheses that turned out not to have a significant effect. Additional details on these hypotheses are given in Appendix E; we hope that they may provide ideas for future improvements of retrieval-based LMs.

Ensemble of Distance Metrics: We hypothesized that the ensemble of two distance metrics, the standard inner-product distance (which the LM uses) and the L2 distance (which the kNN component uses), is the key to the improvement. However, we found that similar gains can be achieved using the inner-product metric for the retrieved kNN. More details can be found in Appendix E.1.

Ensembling of Two Models: We hypothesized that the kNN component merely provides another model for ensembling, and that the improvement from kNN-LM is purely due to the ensembling effect of different models. However, we found that kNN-LM's improvement is orthogonal to ensembling with a different base LM. More details can be found in Appendix E.5.

Sparsification: The mask-to-k(·) used by kNN retrieval induces sparsity in the distribution over the vocabulary, because k (typically 1024) is small compared to the vocabulary size V (33K in our experiments and 260K in the original settings of Khandelwal et al. (2020b)). We hypothesized that kNN-LM increases the probability of the top-k entries while taking probability mass from the long tail of unlikely word types.
However, we could not gain any benefit solely from sparsifying the output probability of a standard LM and interpolating it with the original LM. More details can be found in Appendix E.2.

Stolen Probabilities: The stolen probabilities effect (Demeter et al., 2020) refers to the situation where the output embeddings of an LM are learned such that some words lie geometrically inside the convex hull formed by other word embeddings, and can thus never be "selected" as the argmax word. We hypothesized that kNN-LM solves the stolen probabilities problem by making it possible to assign the highest probability to any word, given a test context that is close enough to that word's datastore key. However, we found that none of the vectors in our embedding matrix, nor in the original embedding matrix of Khandelwal et al. (2020b), lies in the convex hull of the others, which is consistent with the findings of Grivas et al. (2022). More details can be found in Appendix E.4.

Memorization: We hypothesized that the kNN component simply provides memorization of the training set. However, we could not improve a standard LM by interpolating its probability with that of another standard LM that was further trained to overfit the training set. More details can be found in Appendix E.6.1.
Soft Labels: We hypothesized that kNN-LM's improvement lies in reducing the "over-correction" error of training with one-hot labels, as hypothesized by Yang et al. (2022), and that retrieving neighbors is not actually important. If "soft labels" alone were the key, we could hypothetically improve a fresh LM with the same model architecture by training it with the soft labels from the base LM, instead of from kNN-LM; this separates the effect of soft labeling from the additional guidance provided by kNN. However, this does not help the interpolated perplexity at all. More details can be found in Appendix E.6.2.

Optimizing Interpolated Loss: We hypothesized that the standard LM cross-entropy training loss does not emphasize the examples on which the base LM performs badly and which could benefit from kNN, and that directly optimizing the interpolated loss of the standard LM and a separate trainable softmax layer could be a better alternative. However, we could not gain any benefit by training an additional softmax layer together with the base LM using the interpolated loss. More details can be found in Appendix E.6.3.

7 Conclusion

In this paper, we investigate why kNN-LM improves perplexity, even when retrieving examples from the same training data that the base LM was trained on. By proposing and testing various hypotheses and performing extensive ablation studies, we find that the key to kNN-LM's success is threefold:
1. Ensembling different input representations (the feedforward layer output and the attention layer output) can recover 55% of the performance, even without retrieval.

2. One of the most unexpected discoveries in this paper is that approximate nearest neighbor search allows kNN-LMs to generalize better than exact nearest neighbor search, possibly due to a regularization effect.

3. Tuning the softmax temperature for the kNN distribution is crucial for adjusting the standard LM output distribution to the distribution created by the retrieved neighbors' distances.

We performed extensive experiments that ruled out other hypotheses as to why kNN-LMs work, such as over-parameterization, datastore clustering, sparsification, overfitting, ensembling of distance metrics, and alternative training methods. We believe that this work unlocks a variety of exciting research directions for efficient kNN alternatives, for example, methods that replace the kNN component with trainable parameters and achieve comparable results without the latency burden of kNN-LM.

References

Uri Alon, Frank F. Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. Neuro-symbolic language modeling with automaton-augmented retrieval. arXiv preprint arXiv:2201.12431, 2022.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240. PMLR, 2022.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.

Aaron Clauset, Cosma Rohilla Shalizi, and Mark E. J. Newman. Power-law distributions in empirical data. SIAM Review, 51(4):661–703, 2009.
David Demeter, Gregory Kimmel, and Doug Downey. Stolen probability: A structural weakness of neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2191–2197, 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Edouard Grave, Moustapha Cissé, and Armand Joulin. Unbounded cache model for online language modeling with open vocabulary. arXiv preprint arXiv:1711.02604, 2017.

Andreas Grivas, Nikolay Bogoychev, and Adam Lopez. Low-rank softmax can have unargmaxable classes in theory but rarely in practice. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6738–6758, 2022.

Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450, 2018.

Junxian He, Taylor Berg-Kirkpatrick, and Graham Neubig. Learning sparse prototypes for text generation. arXiv preprint arXiv:2006.16336, 2020.
Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. Efficient nearest neighbor language models. arXiv preprint arXiv:2109.04212, 2021.

Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7), 2015.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2019.

Armand Joulin, Moustapha Cissé, David Grangier, Hervé Jégou, et al. Efficient softmax approximation for GPUs. In International Conference on Machine Learning, pages 1302–1310. PMLR, 2017.
Armand Joulin, Moustapha Cissé, David Grangier, Hervé Jégou, et al. Efficient softmax approximation for GPUs. In International Conference on Machine Learning, pages 1302–1310. PMLR, 2017.

Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Nearest neighbor machine translation. arXiv preprint arXiv:2010.00710, 2020a.

Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations (ICLR), 2020b.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

Clara Meister, Elizabeth Salesky, and Ryan Cotterell. Generalized entropy regularization or: There's nothing special about label smoothing. arXiv preprint arXiv:2005.00820, 2020a.
Clara Meister, Tim Vieira, and Ryan Cotterell. Best-first beam search. Transactions of the Association for Computational Linguistics, 8:795–809, 2020b.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing LSTM language models. In Proceedings of ICLR, 2018.

Hermann Ney, Ute Essen, and Reinhard Kneser. On structuring probabilistic dependences in stochastic language modelling. Computer Speech & Language, 8(1):1–38, 1994.

Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.

Felix Stahlberg and Bill Byrne. On NMT search errors and model errors: Cat got your tongue? arXiv preprint arXiv:1908.10090, 2019.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.

Dexin Wang, Kai Fan, Boxing Chen, and Deyi Xiong. Efficient cluster-based k-nearest-neighbor machine translation. arXiv preprint arXiv:2204.06175, 2022.
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. arXiv preprint arXiv:1711.03953, 2017.

Zhixian Yang, Renliang Sun, and Xiaojun Wan. Nearest neighbor knowledge distillation for neural machine translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5546–5556, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.406. URL https://aclanthology.org/2022.naacl-main.406.

A kNN-LM Generalization to Other LMs

                 #params   Base LM PPL   kNN-LM PPL   Absolute PPL Gain
Ours             268M      21.75         19.17        2.58
Distilled-GPT2   82M       18.25         14.84        3.41
GPT2-small       117M      14.84         12.55        2.29
GPT2-medium      345M      11.55         10.37        1.18
GPT2-large       774M      10.56          9.76        0.80

Table 4: Performance of kNN-LM applied to other pretrained language models of different sizes.
To test the generalizability of kNN-LM, we follow the same experimental setup as used in Section 3. We select several pretrained models from the GPT2 family (Radford et al., 2019) with various parameter counts, plus a distilled version of GPT2, DistilGPT2 (Sanh et al., 2019). For each model, we take the pretrained checkpoint, build the datastore, and evaluate on the Wikitext-103 dataset splits. The results are shown in Table 4. kNN-LM generalizes well to these models: it improves the perplexity of all the base LMs tested. However, the larger the model, and usually the better the base LM's perplexity, the smaller the gain obtained from adding kNN. Note that our model is trained from scratch on the Wikitext-103 dataset; thus, even with a relatively large model size, its perplexity and the perplexity gain from adding kNN are lower than those of the pretrained models. Without loss of generality, we use our own trained-from-scratch model as the base LM in the following sections for ablation studies.
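For concreteness, the datastore construction and interpolation can be sketched as follows, assuming the HuggingFace transformers library and FAISS (Johnson et al., 2019); the iterable training_corpus, the use of the final hidden state as the key, and the helper names are illustrative assumptions, not the exact pipeline of our released code.

# Sketch: build a kNN-LM datastore from a pretrained GPT-2, then interpolate.
import faiss
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")  # or "distilgpt2", "gpt2-medium", ...
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

keys, values = [], []
with torch.no_grad():
    for text in training_corpus:  # assumed: iterable over training text chunks
        ids = tokenizer(text, return_tensors="pt").input_ids
        # The hidden state at position t is the key; the token at t+1 is its value.
        hidden = model(ids, output_hidden_states=True).hidden_states[-1][0]
        keys.append(hidden[:-1])
        values.append(ids[0, 1:])

keys = torch.cat(keys).numpy()
values = torch.cat(values)

index = faiss.IndexFlatL2(keys.shape[1])  # exact L2 search; an IVFPQ index is approximate
index.add(keys)

def knn_lm_prob(hidden_t, logits_t, k=1024, temperature=1.0, lam=0.25):
    # P = lam * P_kNN + (1 - lam) * P_LM, where P_kNN is a softmax over
    # negative distances, aggregated by the retrieved value tokens.
    dists, idx = index.search(hidden_t[None].numpy(), k)
    weights = torch.softmax(-torch.from_numpy(dists[0]) / temperature, dim=0)
    p_knn = torch.zeros(logits_t.shape[-1])
    p_knn.scatter_add_(0, values[torch.from_numpy(idx[0])].long(), weights)
    return lam * p_knn + (1 - lam) * torch.softmax(logits_t, dim=-1)

Which intermediate representation serves as the key (the attention vs. feedforward layer output, h_ds) is precisely the design choice analyzed in the tables below.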
B Detailed Results for Increasing the Softmax Capacity

hds       Nds   ⊗    +#params   PPL      Interp.   Oracle
Base LM              0          21.750
att       Big   IP   Nds × D    ∞        19.095    14.077
att       1x    IP   V × D      22.584   20.353    16.954
att       2x    IP   2V × D     21.903   20.529    17.432
att       3x    IP   3V × D     22.434   20.395    17.132
att       4x    IP   4V × D     21.936   20.521    17.423
att       5x    IP   5V × D     22.025   20.643    17.560
att       6x    IP   6V × D     21.972   20.519    17.422
att       9x    IP   9V × D     22.084   20.696    17.631
ffn       Big   IP   Nds × D    ∞        21.101    16.254
ffn       1x    IP   V × D      20.920   20.694    18.772
ffn       2x    IP   2V × D     20.889   20.646    18.701
ffn       3x    IP   3V × D     20.829   20.603    18.717
ffn       4x    IP   4V × D     20.769   20.629    18.876
ffn       5x    IP   5V × D     20.720   20.594    18.878
ffn       6x    IP   6V × D     20.726   20.599    18.902
ffn       9x    IP   9V × D     20.687   20.567    18.887

Table 5: Performance comparison of kNN baselines and models with learnable embeddings as a datastore alternative. hds is either the attention layer output (att) or the feedforward layer output (ffn).
C Alternative Methods for Increasing Softmax Capacity

C.1 Adaptively Increasing the Embedding Size
We hypothesize that different word types are differently difficult for the language model to predict: words that appear very frequently may appear in many different contexts. As a result, instead of adding an equal number of additional embeddings to each word type, we propose to adaptively increase the number of embeddings per word type based on word frequency, or on the total training loss for the word. Following the intuition of Zipf's law (Clauset et al., 2009), we assign 1 + \log_b f_v embeddings to each word type v ∈ V, where f_v is either the frequency or the total training loss of the word, and b is a tunable hyperparameter. To ensure a fair comparison, we tune b so that for each experiment the total number of embeddings matches the fixed budget:

\sum_{v \in V} (1 + \log_b f_v) = nV.

The results are shown in Table 6.
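As a concrete illustration, the sketch below allocates embeddings with the 1 + \log_b f_v rule and tunes b by bisection until the total matches the budget nV; the flooring of each count to an integer and the function names are our assumptions.

import math

def total_embeddings(freqs, b):
    # Allocation rule from the text: word type v receives 1 + log_b(f_v)
    # embeddings; we assume the count is floored to an integer, minimum 1.
    return sum(1 + max(0, int(math.log(f, b))) for f in freqs)

def tune_base(freqs, budget, lo=1.001, hi=1e9, iters=200):
    # total_embeddings(freqs, b) decreases monotonically in b, so bisect
    # (on a log scale, since b may span orders of magnitude) until the
    # total matches the budget nV.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if total_embeddings(freqs, mid) > budget:
            lo = mid  # too many embeddings: increase the base
        else:
            hi = mid
    return hi

# Usage: with vocabulary size V = len(freqs) and n = 3,
# tune_base(freqs, budget=3 * len(freqs)) matches the "3x" rows of Table 6.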
We can see that, although appealing on paper, adaptively increasing the number of embeddings assigned to each word type does not make a significant difference in final perplexity given the same total number of embeddings, when compared with the models that use an equal number of embeddings for each word type.

                  hds   Nds   ⊗    +#params   PPL      λ       Interp. PPL   Oracle
Base LM                            0          21.750
KNN               att   Big   L2   Nds × D    ∞        0.271   19.174        14.230
KNN               att   Big   IP   Nds × D    ∞        0.266   19.095        14.077
Equal Per Word    att   3x    IP   3V × D     22.434   0.417   20.395        17.132
Loss Weighted     att   3x    IP   3V × D     21.948   0.437   20.440        17.303
Freq. Weighted    att   3x    IP   3V × D     22.507   0.412   20.387        17.105
KNN               ffn   Big   L2   Nds × D    ∞        0.065   20.734        15.594
KNN               ffn   Big   IP   Nds × D    ∞        0.050   21.101        16.254
Equal Per Word    ffn   3x    IP   3V × D     20.829   0.622   20.603        18.717
Loss Weighted     ffn   3x    IP   3V × D     20.764   0.713   20.659        18.978
Freq. Weighted    ffn   3x    IP   3V × D     20.757   0.658   20.572        18.782

Table 6: Performance comparison of kNN baselines and several configurations that adaptively increase the embedding size with training loss or word frequency.
C.2 Mixture of Softmaxes
Yang et al. (2017) propose a solution to this problem using a Mixture of Softmaxes (MoS) to produce more linearly independent probability distributions of words given different contexts. Suppose that there are a total of R mixture components. MoS first uses R linear layers with weights w_r to transform the current query context vector h_ds into w_r h_ds. With a shared word embedding matrix W_sm, each softmax component's probability distribution is softmax(W_sm · w_r h_ds). The mixture distribution is then given by:

P_MoS = \sum_{r=1}^{R} \pi_{r,h_ds} \, softmax(W_sm · w_r h_ds)    (7)

The prior weights are calculated using another linear layer with weight w_π, as π_{r,h_ds} = softmax(w_π h_ds); the softmax ensures that \sum_{r=1}^{R} \pi_{r,h_ds} = 1. Comparing MoS with the first term in Equation 5, M · softmax(mask-to-k(W_ds ⊗ h_ds)), we can see some connections between the two. MoS eliminates the mask-to-k(·) operation and replaces the single softmax over a very large vector (the size of the datastore) with multiple smaller softmaxes, each over a vector only the size of the vocabulary. As a result, the huge W_ds is replaced by several linear layers that project the word embedding matrix. The first term now becomes:

M(\oplus_{r=1}^{R} softmax(W_sm · w_r h_ds))    (8)

M_{ir} = \pi_{r,h_ds}, \quad \forall i \le V    (9)

where ⊕ represents vector concatenation, and the aggregation matrix M now contains the mixture weights for each concatenated softmax. We perform experiments with a varying number of mixtures R, different definitions of h_ds, and with or without fine-tuning the output word embeddings W_sm. We allow fine-tuning the word embeddings when we use the attention layer output as the context vector, since the word embedding matrix was originally trained against the feedforward layer output. The results for this formulation are shown in Table 7.
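A minimal PyTorch sketch of Equation 7 follows; the class and parameter names are ours, and in the experiments above the shared embedding W_sm would be initialized from the base LM's output embedding matrix (and optionally fine-tuned).

import torch
import torch.nn as nn

class MixtureOfSoftmaxes(nn.Module):
    """Equation 7: P_MoS = sum_r pi_{r,h} * softmax(W_sm @ (w_r h))."""
    def __init__(self, d_model, vocab_size, n_mix):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_mix))
        self.prior = nn.Linear(d_model, n_mix)                    # w_pi
        self.embed = nn.Linear(d_model, vocab_size, bias=False)   # shared W_sm

    def forward(self, h):                            # h: (batch, d_model)
        pi = torch.softmax(self.prior(h), dim=-1)    # mixture weights, (batch, n_mix)
        components = torch.stack(
            [torch.softmax(self.embed(p(h)), dim=-1) for p in self.proj], dim=1
        )                                            # (batch, n_mix, vocab_size)
        return (pi.unsqueeze(-1) * components).sum(dim=1)  # (batch, vocab_size)

Note that the added parameter count is only the R projection layers (RD² + RD), matching the "+#params" column of Table 7.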
On their own, MoS models improve the performance of the language model only marginally. Compared with Table 5, these models are worse than those that simply increase the number of embeddings. This is expected, because MoS adds fewer parameters: it only requires several additional linear projection layers over the embeddings.

                 hds   R   ⊗    +#params        PPL      λ       Interp. PPL   Oracle
Base LM                         0               21.750
KNN              att       L2   Nds × D         ∞        0.271   19.174        14.230
KNN              att       IP   Nds × D         ∞        0.266   19.095        14.077
KNN              ffn       L2   Nds × D         ∞        0.065   20.734        15.594
KNN              ffn       IP   Nds × D         ∞        0.050   21.101        16.254
Ft. MoS+embed    att   2   IP   V D + 2D² + 2D  21.986   0.437   20.720        17.573
Ft. MoS+embed    att   3   IP   V D + 3D² + 3D  22.106   0.422   20.779        17.609
Ft. MoS Only     att   2   IP   2D² + 2D        22.552   0.371   21.011        17.796
Ft. MoS Only     att   3   IP   3D² + 3D        22.573   0.371   21.024        17.812
Ft. MoS Only     ffn   2   IP   2D² + 2D        21.351   0.843   21.338        20.258
Ft. MoS Only     ffn   3   IP   3D² + 3D        21.495   0.733   21.460        20.322
Ft. MoS Only     ffn   4   IP   4D² + 4D        21.321   0.994   21.321        20.396
Ft. MoS Only     ffn   5   IP   5D² + 5D        21.371   0.909   21.367        20.406

Table 7: Performance comparison of kNN baselines and several MoS configurations. R is the number of mixtures.

C.3 Clustering Datastore

Opposite to training word embeddings of an increased size, we also consider compressing the datastore down to a similarly-sized embedding matrix for the softmax computation. The intuition is that the datastore contains redundant context vectors, so compression could make the datastore smaller without sacrificing too much of the performance gain. He et al. (2021) show that the datastore can be safely compressed by clustering to 50% of its original size without losing performance. We test this idea further by clustering the entire datastore into a size that could fit in GPU memory (e.g., 2V or 3V), and thus could easily be fine-tuned further, using the resulting centroids to replace W_ds.
Within each cluster there is a distribution of different words with their contexts, and we use the frequency of words within each cluster to calculate the aggregation matrix M in Equation 5. This has the added benefit of "multi-sense" embeddings: similar meanings are clustered together to form a new "meta word", while the same word with different meanings forms different "meta words" (a notable example is bank, as in a river bank versus a financial institution). However, this does not work, mostly because of the high compression loss after clustering and the imbalanced distribution of word types across clusters.
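For reference, the clustering step we attempted can be sketched as follows, assuming FAISS k-means over precomputed numpy arrays keys (N_ds × D context vectors, float32) and values (next-token ids); the variable names and the column-normalization of M are our assumptions.

import faiss
import numpy as np

# keys: (N_ds, D) float32 context vectors; values: (N_ds,) int next-token ids.
n_centroids = 3 * vocab_size                   # e.g. a 3V-sized centroid inventory
kmeans = faiss.Kmeans(d=keys.shape[1], k=n_centroids, niter=20)
kmeans.train(keys)
W_ds_compressed = kmeans.centroids             # (n_centroids, D), replaces W_ds

# Aggregation matrix M: each centroid maps to a distribution over word types,
# estimated from how often each value token falls into that cluster.
_, assignment = kmeans.index.search(keys, 1)
M = np.zeros((vocab_size, n_centroids), dtype=np.float32)
np.add.at(M, (values, assignment[:, 0]), 1.0)
M /= np.maximum(M.sum(axis=0, keepdims=True), 1.0)  # normalize per centroid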
D Which Words Benefit from Approximation?

To further understand the unexpected results obtained with the different approximate kNN retrieval settings in Sections 5.1 and 5.2, we analyze at the token level how often each ground-truth token's probability in the evaluation set is helped by each kNN setting. That is, for each ground-truth token in the evaluation set, we count the occurrences in which the kNN probability is higher than the base LM probability, i.e., P_kNN > P_LM. Since we previously found that approximate kNN provides an additional performance boost over ground-truth kNN, we compare "real mask, real score" versus "FAISS mask, real score" in this analysis. To exclude outliers, we filter out words with fewer than 10 occurrences in the evaluation set.

For each setting, we calculate the percentage of occurrences in the evaluation set in which the kNN module achieves a better probability than the base LM, for each token in the vocabulary. We then plot the absolute difference between the percentages of the two settings against various attributes of the token. Figure 6 shows that longer tokens, which usually suggest proper nouns and harder, less common English words, do better with approximate neighbors than with ground-truth neighbors, and vice versa. We hypothesize that this is because longer words are more prone to overfitting in kNN-LM, so approximate kNN provides an effect similar to smoothing and regularization.

Figure 6: The effect of token character length on how much accurate nearest neighbors are better than approximate FAISS neighbors. Negative values mean worse. The trend line of the scatter points is shown.

We also compare words that can appear in more diverse contexts with words that co-occur with few distinct contexts. To measure how diverse the contexts of each word in the vocabulary are, we calculate both the forward and the backward bigram entropy for each word in the evaluation set with more than 10 occurrences. Bigram entropy is a simple yet good indicator of context diversity for a given word, as used in Kneser–Ney smoothing (Ney et al., 1994). We calculate the forward and backward bigram entropy of each word w as follows, where w_after and w_before denote the words after and before w:

H_forward(w) = -\sum_{w_after} p(w_after | w) \log p(w_after | w)    (10)

H_backward(w) = -\sum_{w_before} p(w_before | w) \log p(w_before | w)    (11)

Forward and backward entropy represent how diverse the context after and before the given word is. Intuitively, bigram entropy indicates which words can appear in many different contexts: the higher the entropy of a word, the more diverse its context, and vice versa. For example, a word like "Francisco" has low entropy because it mostly comes after "San".
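A small self-contained sketch of Equations 10 and 11 over a tokenized corpus; the function names are illustrative.

import math
from collections import Counter, defaultdict

def bigram_entropies(tokens):
    """Forward (Eq. 10) and backward (Eq. 11) bigram entropy per word."""
    fwd, bwd = defaultdict(Counter), defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        fwd[prev][nxt] += 1   # distribution over words following `prev`
        bwd[nxt][prev] += 1   # distribution over words preceding `nxt`

    def entropy(counts):
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total) for c in counts.values())

    return ({w: entropy(c) for w, c in fwd.items()},
            {w: entropy(c) for w, c in bwd.items()})

# "Francisco" mostly follows "San", so its backward entropy is near zero.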
Figure 7: The effect of the forward and backward entropy of words on how much accurate nearest neighbors are better than approximate FAISS neighbors. Negative values mean worse. The trend lines of the scatter points are shown.

The comparison is shown in Figure 7. We can see that the higher the entropy, in both the forward and the backward case, the better approximate nearest neighbor search becomes. This suggests that words appearing in many different contexts are better off with approximate kNN, while "easy-to-predict" examples such as "Jersey" and "Francisco" are better with accurate kNN, possibly because these examples are less prone to overfitting errors and thus require less regularization from the approximation.

E Failed Hypotheses

E.1 Distance Metric

We hypothesize that the key to kNN-LM's performance gain is the ensemble of two distance metrics: the standard dot-product distance (which the LM uses) and the L2 distance (which the kNN component uses as ⊗). We tried to replace the kNN component with a component that simply takes the tokens retrieved by the kNN search and returns their L2 distance to the LM output word embeddings: W_sm ⊗ h_ds instead of W_ds ⊗ h_ds, where ⊗ represents the negative L2 distance. We tried this with both variants of h_ds, the attention layer output and the feedforward layer output. None of these helped.
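For clarity, scoring the vocabulary by negative L2 distance instead of the dot product is a one-liner; this sketch assumes h_ds of shape (batch, D) and output embeddings W_sm of shape (V, D).

import torch

def neg_l2_logits(h_ds, W_sm):
    # Replace the dot-product scores with negative L2 distances, i.e. the
    # E.1 variant of W_sm (x) h_ds where (x) denotes -||.||_2.
    return -torch.cdist(h_ds, W_sm)          # (batch, V)

# torch.softmax(neg_l2_logits(h, W_sm), dim=-1) can then be interpolated
# with the ordinary LM distribution, in place of the kNN distribution.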
We tried this with both variants of h_ds: the attention-layer output and the feedforward-layer output. Neither helped.

E.2 Sparsification

In Equation 5, the mask-to-k(·) operation used by kNN retrieval induces sparsity in the distribution over the vocabulary, since k is small compared to the vocabulary size V. We hypothesize that in kNN-LM, the sparsity of the kNN distribution is what matters, practically increasing the probability of the top-k entries: the kNN distribution has at most 1024 non-zero entries, concentrating more probability mass on the most likely tokens. This effect is similar to the redistribution of probability mass for text generation in Holtzman et al. (2019). We test this hypothesis by taking only the top 32, 64, 128, 512, or 1024 tokens of the parametric LM probability and zeroing out the probabilities of the remaining tokens. To compensate, we experiment with different softmax temperatures and then interpolate with the parametric LM probability. This removes the effect of the datastore and retrieval entirely, and it does not help at all, suggesting that sparsification of the output probability alone is not enough.
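A minimal sketch of this no-retrieval control, assuming the usual logits-in, distribution-out interface; the function name and defaults are ours, and k, the temperature, and the interpolation weight are the quantities swept in the experiment.

```python
import torch
import torch.nn.functional as F

def sparsify_and_interpolate(lm_logits, k=1024, temperature=1.0, lam=0.25):
    """Keep only the LM's top-k tokens, renormalize under a softmax
    temperature, and interpolate back with the full LM distribution."""
    p_lm = F.softmax(lm_logits, dim=-1)
    top = torch.topk(lm_logits, k, dim=-1)
    masked = torch.full_like(lm_logits, float("-inf"))
    masked.scatter_(-1, top.indices, top.values)        # keep top-k logits
    p_sparse = F.softmax(masked / temperature, dim=-1)  # rest become zero
    return lam * p_sparse + (1.0 - lam) * p_lm
```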
Another hypothesis is that the key in kNN-LM is that it selects which tokens to include in the kNN distribution, not their distances. The intuition is that the selection of the top tokens according to the kNN search may be better than the selection given by the dot-product distance between the language model's output vector and all the vocabulary embeddings. We perform experiments similar to the previous attempt, sparsifying the output probability with the tokens retrieved by the kNN search (but ignoring the distances provided by the kNN search) rather than with the top-k tokens of the LM, with and without removing duplicates. In the best case, these variants reduce the perplexity by 0.5 (whereas kNN-LM reduces it by nearly 2).

E.3 Location within Context Window

Supposedly, words at the beginning of the transformer's "context window" at test time have less contextual information than words toward the end of the context window. We hypothesized that the base LM might perform worse in one of these regions (the beginning vs. the end of the context window), and that kNN-LM might provide a larger improvement in one of them. We measured the per-token test perplexity with respect to the location of each token in the context window. However, we found no significant correlation between the performance of the base LM and the location, and no significant correlation between the location and the difference between kNN-LM and the base LM. We also hypothesized that the beginning of every Wikipedia article might be more "predictable", with the text becoming harder to predict as the article goes into details. However, we again found no correlation with the location of the word within the document it appears in.
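The per-position measurement itself is straightforward. Here is a sketch under the assumption that the model returns logits of shape (batch, seq, vocab) and that evaluation batches come as (inputs, targets) pairs of shape (batch, seq); the interface is ours, for illustration.

```python
import math
from collections import defaultdict

import torch
import torch.nn.functional as F

@torch.no_grad()
def perplexity_by_position(model, batches):
    """Aggregate test NLL by each target token's index in the context window."""
    nll_sum = defaultdict(float)
    n_tokens = defaultdict(int)
    for inputs, targets in batches:
        logits = model(inputs)
        # cross_entropy wants the class dim second: (batch, vocab, seq)
        nll = F.cross_entropy(logits.transpose(1, 2), targets,
                              reduction="none")  # (batch, seq)
        for pos in range(nll.size(1)):
            nll_sum[pos] += nll[:, pos].sum().item()
            n_tokens[pos] += nll.size(0)
    return {pos: math.exp(nll_sum[pos] / n_tokens[pos]) for pos in nll_sum}
```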
E.4 Stolen Probabilities

The stolen probabilities effect (Demeter et al., 2020) refers to the situation where the output embeddings of an LM are learned such that some words are geometrically placed inside the convex hull formed by the other word embeddings. Since language models generate a score for every output word by computing the dot product of a hidden state with all word embeddings, Demeter et al. (2020) prove that in such a case it is impossible for words inside the convex hull to be predicted as the LM's most probable word (the "argmax"). We hypothesized that kNN-LM solves the stolen probabilities problem by allowing the model to assign the highest probability to any word, given a test hidden state that is close enough to that word's datastore key. Nevertheless, as shown by Grivas et al. (2022), although this problem might occur in small RNN-based language models, it rarely happens in practice in modern transformers. Using the code of Grivas et al. (2022), we checked the embedding matrix of our model and of the checkpoint provided by Khandelwal et al. (2020b). Indeed, we found that in both models no word is un-argmaxable.
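The underlying geometric test can be posed as a linear-programming feasibility problem. The sketch below is a naive per-word check that is only practical for toy vocabularies; Grivas et al. (2022) provide efficient tooling for real embedding matrices, and their actual test also accounts for bias terms, which we omit here.

```python
import numpy as np
from scipy.optimize import linprog

def inside_convex_hull(point, others):
    """Is `point` (shape D) a convex combination of the rows of `others`
    (shape N x D)?  A word whose embedding lies inside the hull of the
    other embeddings cannot win the argmax of a pure dot-product output
    layer (Demeter et al., 2020).
    """
    n = others.shape[0]
    # Constraints: others.T @ lam = point, sum(lam) = 1, lam >= 0.
    A_eq = np.vstack([others.T, np.ones((1, n))])
    b_eq = np.concatenate([point, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * n, method="highs")
    return res.status == 0  # feasible => inside (or on) the hull
```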
E.5 Is kNN-LM Just Ensembling?

Our hypothesis here is that the kNN component merely provides another model for ensembling: the interpolation process is basically an ensemble of two models. Technically, it is unsurprising that kNN-LM benefits from ensembling, but we perform experiments to see how it compares to other ensembles. We trained another language model with the same architecture as the base LM used throughout the experiments, with some variants having more than one embedding vector per word (similar to Section 4.2). We interpolate these models with the original base LM; the results are shown in Table 8. We can see that even just ensembling the base LM with another identical model, trained with a different random seed, provides a large performance boost, both in interpolated perplexity and in oracle perplexity.

Prev. Layers | h_ds | N_ds | ⊗  | +#params   | PPL    | Interp. | Oracle
same         |      |      |    | 0          | 21.750 |         |
same         | att  | Big  | L2 | N_ds × D   | ∞      | 19.174  | 14.230
same         | att  | Big  | IP | N_ds × D   | ∞      | 19.095  | 14.077
same         | ffn  | Big  | L2 | N_ds × D   | ∞      | 20.734  | 15.594
same         | ffn  | Big  | IP | N_ds × D   | ∞      | 21.101  | 16.254
diff         | ffn  | 1x   | IP | F + V × D  | 21.569 | 18.941  | 14.980
diff         | ffn  | 2x   | IP | F + 2V × D | 21.914 | 18.948  | 14.885
diff         | ffn  | 3x   | IP | F + 3V × D | 22.206 | 18.981  | 14.853

Table 8: Performance comparison of kNN baselines and models with different-size output embeddings re-trained from scratch.
However, just because ensembling two LMs of the same architecture provides better performance than interpolating the base LM with kNN does not necessarily mean that kNN's performance improvement can be fully replaced by model ensembling. In other words, we are interested in whether the kNN performance improvements are orthogonal to those of model ensembling. To test this, we compare the performance of an ensemble of K LMs against an ensemble of K − 1 LMs plus the kNN component. The comparison is fair because both ensembles contain the same number of models; the only difference is whether the kNN component is included. The results are shown in Figure 8. For the "LM" series, each point is an ensemble of K LMs, and for the "LM and kNN" series, each point is K − 1 LMs plus kNN. We can see that even at a 4-way ensemble, the ensemble that contains kNN as a component still has a considerable edge over the 4-way ensemble that contains only LMs.

Figure 8: Ensembling effect comparison between multiple base LMs and multiple base LMs plus the kNN component, as a function of the number of ensemble components.
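Mechanically, the ensembles compared here are just weighted averages of next-token distributions. A sketch, with uniform weights as our simplifying assumption:

```python
import torch

def ensemble_probs(prob_list, weights=None):
    """Average a list of next-token distributions, each of shape (V,)
    and summing to 1.  K base-LM distributions give the "LM" series of
    Figure 8; replacing one of them with the kNN distribution gives
    the "LM and kNN" series.
    """
    if weights is None:
        weights = [1.0 / len(prob_list)] * len(prob_list)
    out = torch.zeros_like(prob_list[0])
    for w, p in zip(weights, prob_list):
        out = out + w * p
    return out
```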
E.6 Is kNN-LM Just an Alternative Training Method?

E.6.1 Overfitting

Since kNN-LM improves perplexity even when using the same training dataset as the datastore, we are curious whether kNN-LM works only by "memorizing" the training data. The hypothesis is that the datastore and the kNN search are trying to memorize the training data.
In other words, the parametric LM may be under-fitting some tokens. The intuition behind this is that the kNN component retrieves examples directly from the training set: what if we could retrieve the same examples using an overfitted LM? We took the trained LM, removed the dropout, and continued training until an almost perfect fit (a very small training loss). We then interpolated the overfitted transformer with the original LM. The results are shown in Table 9, where F represents the number of parameters in the base LM, excluding the output embedding matrix.

            | Prev. Layers | h_ds | N_ds | ⊗  | +#params  | PPL      | Interp. | Oracle
Base LM     | same         |      |      |    | 0         | 21.750   |         |
KNN         | same         | att  | Big  | L2 | N_ds × D  | ∞        | 19.174  | 14.230
KNN         | same         | att  | Big  | IP | N_ds × D  | ∞        | 19.095  | 14.077
KNN         | same         | ffn  | Big  | L2 | N_ds × D  | ∞        | 20.734  | 15.594
KNN         | same         | ffn  | Big  | IP | N_ds × D  | ∞        | 21.101  | 16.254
Overfit@92  | diff         | ffn  | V    | IP | F + V × D | 1702.806 | 21.732  | 17.764
Overfit@129 | diff         | ffn  | V    | IP | F + V × D | 8966.508 | 21.733  | 17.814

Table 9: Performance comparison of several baselines with two overfitted models, trained for 92 and 129 additional epochs.

We can see that overfitting provides very little help after interpolation. Looking at the oracle performance, we think that the overfitted model memorizes some rare contexts and tokens in the training set that can be useful during evaluation. However, overfitting hurts the performance on other tokens so much that even interpolation cannot balance the two effects.
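For concreteness, the overfitting procedure can be sketched as follows. The optimizer, learning rate, and step budget are illustrative choices on our part, not the exact settings; at evaluation time the overfitted model's distribution is interpolated with the original LM's, exactly as the kNN distribution would be.

```python
import torch
import torch.nn.functional as F

def overfit(model, train_batches, steps, lr=1e-4):
    """Continue training a converged LM with dropout removed until the
    training loss is near zero (cf. the Overfit@92 / Overfit@129 rows
    of Table 9)."""
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.p = 0.0  # remove dropout so the model can fit the data exactly
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _, (inputs, targets) in zip(range(steps), train_batches):
        logits = model(inputs)  # (batch, seq, vocab)
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               targets.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```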
E.6.2 Soft-Label Training

Yang et al. (2022) claim that the key to kNN-LM's success is training with "soft labels" that interpolate the ground-truth labels with the kNN-LM model outputs, effectively "distilling" kNN-LM. This is based on the hypothesis that the room for kNN-LM's improvement over the base LM lies in the "over-correction" incurred when training with one-hot labels, an effect related to label smoothing methods (Szegedy et al., 2016; Pereyra et al., 2017; Meister et al., 2020a). However, we believe that this explanation is not satisfactory: if the key is training with soft labels, why must these soft labels be provided specifically by a kNN search? If soft labels were the key, then soft-label training where the labels come from the base LM itself should work as well. To separate the effect of soft labeling from the kNN's additional guidance, we train another LM with the same model architecture as the base LM, using the soft labels produced by the base LM. This teacher-student training distills the knowledge of the base LM (Hinton et al., 2015). We find that merely training with "soft labels" from the base LM, to alleviate the alleged "over-correction" problem, is not the key: it does not help the interpolated perplexity at all. This suggests that even with the same training data, kNN still provides valuable additional guidance.
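The teacher-student objective used in this control is standard distillation. A minimal sketch, with a distillation temperature of 1 assumed for simplicity:

```python
import torch
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_logits):
    """Cross-entropy against the base LM's full output distribution
    instead of one-hot labels (Hinton et al., 2015)."""
    soft_targets = F.softmax(teacher_logits, dim=-1).detach()
    log_probs = F.log_softmax(student_logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()
```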
E.6.3 Training to Optimize Interpolated Loss

In Section 4.2, we discovered that over-parameterization with the standard LM training loss does not further close the gap toward kNN-LM. This suggests that some regularization term may be needed during training to keep the multiple embeddings from converging to the same vector, which would render the over-parameterization useless. From Table 2, we see that a better interpolated perplexity may not require a very low perplexity when measured only with the extra input representation. However, we still use a standard LM loss to train only the additional embedding matrix, which directly minimizes the perplexity using only the extra input representation. This discrepancy between training and the interpolated evaluation suggests that training with an alternative loss function, one that interpolates the base LM's output with the output computed from the extra input representation, may be beneficial. To test the hypothesis that the standard LM training loss does not emphasize the examples on which the base LM performs badly, we train the extra parameters W_ds with the interpolated loss L:

L = \text{CrossEntropy}\big(\lambda \, \text{softmax}(W_{ds} \cdot h_{ds}) + (1 - \lambda) \, \text{softmax}(W_{sm} \cdot h_{sm}), \; y\big)    (12)

where y represents the ground-truth label for each context. We learn only the parameters W_ds while freezing all other parameters, as in all other experiments. We choose λ = 0.25, as it is the best hyper-parameter in the kNN-LM experiments, and our goal for this training is to mimic the loss of kNN-LM after interpolation. This training loss effectively assigns a higher weight to the training examples on which the base LM's loss is high, reflecting the need for the extra W_ds to help with these hard cases. However, for either "att" or "ffn" as h_ds, and for either V or 3V embeddings in W_ds, we are unable to achieve a better perplexity than the base LM alone. This suggests that, while nice on paper, optimizing the interpolated loss is not trivial.
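For reference, Equation (12) can be implemented directly; the sketch below freezes everything except W_ds, matching the setup above. The shapes and the small epsilon for numerical stability are our own choices.

```python
import torch
import torch.nn.functional as F

def interpolated_loss(h_ds, h_sm, W_ds, W_sm, targets, lam=0.25):
    """Cross-entropy of the lambda-interpolated distribution (Eq. 12).

    Shapes assumed: h_ds, h_sm (N, D); W_ds, W_sm (V, D); targets (N,).
    Only W_ds should require gradients; the hidden states and the
    original output embeddings W_sm are frozen.
    """
    p_ds = F.softmax(h_ds @ W_ds.t(), dim=-1)
    p_sm = F.softmax(h_sm @ W_sm.t(), dim=-1).detach()
    p = lam * p_ds + (1.0 - lam) * p_sm
    # nll_loss expects log-probabilities; add eps to avoid log(0)
    return F.nll_loss(torch.log(p + 1e-10), targets)
```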