Effective and Efficient Training for Sequential Recommendation Using Cumulative Cross-Entropy Loss

Fangyu Li,1 Shenbao Yu,2 Feng Zeng,3 Fang Yang1*
Department of Automation, Xiamen University, Xiamen, China
{lifangyu, yushenbao}@stu.xmu.edu.cn, {zengfeng, yang}@xmu.edu.cn

Abstract

Increasing research interest focuses on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the most commonly used loss functions in state-of-the-art sequential recommendation models have essential limitations.
To name a few: Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples and prediction biases; Binary Cross-Entropy (BCE) loss is subject to the number of negative samples, so it is likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only focuses on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, and enjoys the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, can reach 125.63%, 69.90%, and 33.24% average improvement of full-ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data increases rapidly with wall-clock time and is superior to that of other loss functions during almost the whole process of model training.

Introduction

With the rapid development of recurrent neural networks (RNN), the transformer, graph neural networks (GNN), convolutional neural networks (CNN), and other deep neural networks, sequential recommendation models based on user interaction records are becoming increasingly popular in recommender systems. For instance, GRU4Rec (Hidasi et al. 2016), GRU4Rec+ (Hidasi and Karatzoglou 2018), and NARM (Li et al. 2017) are based on RNNs; SASRec (Kang and McAuley 2018), BERT4Rec (Sun et al. 2019), S3-Rec (Zhou et al. 2020), and NOVA-BERT (Liu et al. 2021) are based on the transformer; SR-GNN (Wu et al. 2019) and Caser (Tang and Wang 2018) are based on GNNs and CNNs, respectively.

*Corresponding author. Email: yang@xmu.edu.cn

In order to unleash the full potential of a sequential recommendation model, it needs to be matched with a suitable loss function, which plays an essential role in determining the effectiveness and efficiency of model training. However, existing loss functions used in sequential recommendation have their own defects.
For example, one of the popular methods, GRU4Rec, utilizes BPR (Rendle et al. 2009) or TOP1 loss as the objective function, which suffers from the gradient vanishing problem (Hidasi and Karatzoglou 2018).

We focus on two rarely discussed issues concerning loss functions. First, most loss functions only calculate the loss on the last timestamp of the training sequence, which ignores the natural sequential properties of sequence data. Fig. 1 gives an illustrative example: Fig. 1(a) and Fig. 1(b) show the difference in loss calculation between GRU4Rec and SASRec, where the former involves only the last timestamp while the latter covers all timestamps. Fig. 1(c) visualizes the NDCG@10 scores of GRU4Rec on each timestamp of the user sequence (the length is fixed to 50) of the Yelp data, using three different loss functions. As shown in Fig. 1(c), the vanilla GRU4Rec optimizes the loss on the last timestamp of the training sequence, so it achieves its highest performance at the last timestamp (the 48th) but has the poorest performance at other timestamps, including the validation (the 49th) and test data (the 50th). Instead, the GRU4Rec model trained with BCE loss optimizes all timestamps of the training sequence, which results in performance improvements over vanilla GRU4Rec on the validation and test data. This observation indicates that only calculating the last-timestamp loss in the objective function cannot guarantee the accuracy of the intermediate timestamps, which causes low utilization of sequence information and generates inferior user sequence representations.

Second, negative sampling is a widely used approach to improve performance in sequential recommendation.
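The gap between the two loss families can be seen with a toy computation (a minimal sketch with made-up scores, not the paper's code): cross-entropy evaluated only at the last timestamp says nothing about how the model ranks the targets at earlier steps, while averaging over all timestamps exposes poor intermediate predictions.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a score vector.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def ce(scores, target):
    # Cross-entropy at one timestamp: -log p(target).
    return -math.log(softmax(scores)[target])

# Toy per-timestamp scores over 4 items (made-up numbers): the model is
# wrong at steps 1 and 2 but right at the last step.
logits = [[0.1, 2.0, 0.1, 0.1],   # target 0, model favors item 1
          [2.0, 0.1, 0.1, 0.1],   # target 1, model favors item 0
          [0.1, 0.1, 0.1, 2.0]]   # target 3, correct
targets = [0, 1, 3]

last_loss = ce(logits[-1], targets[-1])                                  # last-timestamp family
all_loss = sum(ce(l, t) for l, t in zip(logits, targets)) / len(targets)  # all-timestamp family
```

Here `all_loss` is much larger than `last_loss`: the last-timestamp objective is blind to the two bad intermediate steps.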
Correspondingly, loss functions such as the BCE used in SASRec consider a small number of negative examples for each timestamp in each user sequence, which means they involve tiny parts of the negative samples and are likely to ignore some informative negative examples. On the other hand, increasing the number of negative samples reduces computational efficiency, hence the trade-off between model effectiveness and efficiency is hard to balance when employing negative sampling in model training.

arXiv:2301.00979v1 [cs.IR] 3 Jan 2023

[Figure 1 diagrams: (a) GRU4Rec model architecture (GRU and embedding/prediction layers; loss calculated on the last timestamp). (b) SASRec model architecture (transformer and embedding/prediction layers; loss calculated on all timestamps).]
[(c) Simplified experimental results of GRU4Rec with different losses.]

Figure 1: The architectures of GRU4Rec & SASRec and a performance comparison of three loss functions. We display the average NDCG@10 scores of GRU4Rec using three loss functions at each timestamp on the Yelp dataset. The sequence length is fixed to 50, with the 49th and 50th timestamps representing the validation and test items, respectively.

To tackle these problems, in this paper we propose a novel Cumulative Cross-Entropy (CCE) loss that jointly considers all timestamps in the training process and all negative samples in the loss calculation, without negative sampling (see also the performance of the proposed method in Fig. 1(c)). In addition, CCE sufficiently covers the gradient of the item embedding matrix through each item's softmax score. Furthermore, the proposed method employs a masking strategy for the varied lengths of user sequences to guarantee training efficiency.

We validate our method on three typical sequential recommendation models (i.e., GRU4Rec, SASRec, and S3-Rec) on five benchmark datasets from different domains. Experimental results show that our method obtains average improvements of 125.63%, 69.90%, and 33.24% in terms of full-ranking NDCG@5 for GRU4Rec, SASRec, and S3-Rec, respectively. Specifically, GRU4Rec trained with CCE loss markedly improves the NDCG@5 score by 266.67% over vanilla GRU4Rec on the Toys dataset (McAuley et al. 2015).

The main contributions are threefold. First, we identify limitations in the existing loss functions used by sequential recommendation models. Second, we design the Cumulative Cross-Entropy loss, which extends cross-entropy to all timestamps of the training sequence and effectively resolves the timestamp and negative-sampling limitations. Lastly, we conduct extensive experiments on five real-world datasets, demonstrating significant improvements in HIT@k and NDCG@k metrics over existing state-of-the-art methods.

Related Work

According to the sequence timestamps involved in the loss computation, we divide the loss functions used in existing sequential recommendation models into three categories. To the best of our knowledge, this issue has not received much attention in existing studies.
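The idea described above can be sketched in a few lines (a hedged sketch of our reading of CCE, not the authors' implementation): a full-softmax cross-entropy term at every timestamp of the training sequence, with padded positions masked out and no negative sampling.

```python
import math

def cumulative_cross_entropy(logits, targets, pad=-1):
    """Sketch of a cumulative cross-entropy: full-softmax cross-entropy
    summed over every timestamp (no negative sampling), averaged over the
    non-padded positions. `pad` marks padding targets to be masked out."""
    total, count = 0.0, 0
    for scores, target in zip(logits, targets):
        if target == pad:  # masking strategy for variable-length sequences
            continue
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += log_z - scores[target]  # -log softmax(scores)[target]
        count += 1
    return total / count

# Toy sequence of length 3 over a 2-item vocabulary; the last step is padding.
loss = cumulative_cross_entropy([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]],
                                [0, 1, -1])
```

Every item contributes to the normalizer at every timestamp, which is how all negatives enter the gradient without being sampled.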
Last Timestamp Loss Family

This family refers to loss functions that only involve the last timestamp of the training sequence. Generally, most neural network-based sequential recommendation models belong to this family. The first RNN-based sequential recommendation method is GRU4Rec (Hidasi et al. 2016), which utilizes Gated Recurrent Units (GRU) and employs several pointwise and pairwise ranking losses, such as BPR, TOP1, and CE, that only calculate the loss of the last timestamp. Moreover, the improved GRU4Rec+ (Hidasi and Karatzoglou 2018) argues that the original pairwise loss functions used in GRU4Rec likely cause the gradient vanishing problem, and thereby proposes the improved listwise loss functions BPR-max and TOP1-max. Most recent works influenced by GRU4Rec directly adopt or adapt BPR loss, e.g., the hierarchical gating network HGN (Ma, Kang, and Liu 2019), the GNN-based model MA-GNN (Ma et al. 2020), and STEN (Li et al. 2021). Besides, some models use CE loss as the objective function, such as NARM (Li et al. 2017), STAMP (Liu et al. 2018), SmartSense (Jeon et al. 2022), and SR-GNN (Wu et al. 2019). To alleviate the item cold-start problem, Mecos (Zheng et al. 2021) uses CE loss to optimize a meta-learning task.
Besides, a recent work (Petrov and Macdonald 2022) utilizes the LambdaRank (Burges 2010) loss function, which still belongs to the last-timestamp family.

Masked Language Model Loss Family

The Masked Language Model (MLM) (Devlin et al. 2018) loss is derived from the cloze task (Taylor 1953); its objective is to accurately predict the items randomly masked in the input sequence. Recent works adopt the idea of MLM and employ MLM loss in sequential recommendation. For example, BERT4Rec (Sun et al. 2019) utilizes BERT (Devlin et al. 2018) to model user behavior; NOVA-BERT (Liu et al. 2021) introduces an attention mechanism that sufficiently leverages side information as guidance while preserving the item representations invariant in their vector space.
However, item masking methods sacrifice much training time to achieve good performance.

All Timestamp Loss Family

As the name suggests, this family considers all timestamps of the training sequence in the loss computation. To the best of our knowledge, BCE loss is the main member of this family besides the CCE loss proposed in this paper. It is employed in the CNN-based model Caser (Tang and Wang 2018), the attention-based models SASRec (Kang and McAuley 2018), RKSA (Ji et al. 2020), ELECRec (Chen, Li, and Xiong 2022), and CAFE (Li et al. 2022), and the state-of-the-art self-supervised learning model S3-Rec (Zhou et al. 2020). Note that S3-Rec uses BCE loss at its fine-tuning stage, and utilizes item attributes and Mutual Information Maximization (MIM) to capture the fusion between context data and sequence data at the pre-training stage.
In addition, the generator module in ELECRec extends the CE loss to all timestamps, but unlike BCE loss, its loss calculation does not ignore the masked items. There is a paucity of discussion on the training objective of BCE loss. In our opinion, an all-timestamp loss is able to take full advantage of the properties of sequence data, namely that the input at the current timestamp is the label of the previous timestamp. However, BCE loss is inevitably affected by negative sampling, and the number of negative samples affects both its performance and its computational efficiency.

Typical Models and Loss Functions in Sequential Recommendation

We first formulate the problem of sequential recommendation, then introduce the two most representative model structures of neural network-based sequential recommendation models and the most commonly used loss functions, i.e., BPR, TOP1, BCE, and CE.
Problem Statement

Suppose that there are a set of users $U = \{u_1, u_2, \dots, u_{|U|}\}$ and a set of items $I = \{i_1, i_2, \dots, i_{|I|}\}$, where $|U|$ and $|I|$ denote the number of users and items, respectively. In sequential recommendation, we mainly focus on the user's historical interaction records. Therefore, we formulate a user sequence $S_{1:n} = (S_1, S_2, \dots, S_n)$ from the interaction records in chronological order, where $n$ denotes the length of the user sequence and $S_t$ denotes the item the user interacted with at timestamp $t$.
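The construction of $S_{1:n}$, and the shift-by-one pairing of inputs and labels mentioned earlier, can be sketched as follows (the interaction log is hypothetical; `build_sequences` and `to_training_pair` are illustrative helpers, not from the paper):

```python
from collections import defaultdict

# Hypothetical interaction log of (user, item, timestamp) triples.
log = [("u1", "i3", 30), ("u1", "i1", 10), ("u2", "i2", 5), ("u1", "i2", 20)]

def build_sequences(interactions):
    """Group each user's interactions and sort by timestamp to form S_1:n."""
    per_user = defaultdict(list)
    for user, item, ts in interactions:
        per_user[user].append((ts, item))
    return {u: [item for _, item in sorted(events)]
            for u, events in per_user.items()}

def to_training_pair(seq):
    """The input at the current timestamp is the label of the previous one:
    inputs are S_1:n-1, targets are S_2:n."""
    return seq[:-1], seq[1:]

seqs = build_sequences(log)
inputs, targets = to_training_pair(seqs["u1"])
```

For user `u1` this yields the chronological sequence `["i1", "i2", "i3"]`, with inputs `["i1", "i2"]` and targets `["i2", "i3"]`.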
We first define two kinds of sequential recommendation models below:

R_n = f_last(S_{1:n}),  (1)
R_{1:n} = f_all(S_{1:n}),  (2)

where f_last and f_all are models that adopt the last-timestamp loss and the all-timestamp loss, respectively. R_n = (r_{n,1}, r_{n,2}, ..., r_{n,|I|}) denotes the outputs over all items at timestamp n, where r_{n,t} is the prediction score of item i_t at timestamp n. R_{1:n} = (R_1, R_2, ..., R_n) is the result on all timestamps.

Next, we define the embedding layer and the prediction layer, which are the typical operations in sequential recommendation. Given an input sequence with fixed length l, the input of the embedding layer (i.e., S_{1:l}) is transformed into the embedding vectors E_{1:l} = (e_1, e_2, ..., e_l) ∈ R^{l×e} by the embedding matrix W_e ∈ R^{|I|×e}. In addition, the prediction layer is an unbiased dense layer with the weight matrix W_e^T, which shares its weights with the embedding layer. We now proceed to introduce the GRU4Rec and SASRec models, as well as the corresponding loss functions.

GRU4Rec

Model Architecture. GRU4Rec is one of the most classical sequential recommendation models; it utilizes a GRU to model the user sequence and outputs a sequence representation. Given the three components of the GRU, i.e., the update gate z, the candidate hidden state ĥ, and the reset gate r, the hidden state h_t ∈ R^d is calculated as:

h_t = z_t ĥ_t + (1 − z_t) h_{t−1}.  (3)

In Eq. (3), we have:

z_t = σ(W_z e_t + U_z h_{t−1}),  (4)
ĥ_t = σ(W_h e_t + U_h(r_t ⊙ h_{t−1})),  (5)
r_t = σ(W_r e_t + U_r h_{t−1}),  (6)

where W_{z,r,h} ∈ R^{d×e} and U_{z,r,h} ∈ R^{d×d} are the corresponding weight matrices. The last hidden state h_l of the GRU is the vector that represents the input sequence S_{1:l}; it passes through the prediction layer to obtain the final result R_l = h_l W_e^T = (r_{l,1}, r_{l,2}, ..., r_{l,|I|}).

Loss Function. Vanilla GRU4Rec offers three loss functions, i.e., BPR loss (Rendle et al. 2009), TOP1 loss, and CE loss.
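Before turning to the losses, the single-step GRU update of Eqs. (3)-(6) above can be sketched in plain Python. This is a minimal, dimension-agnostic sketch, not the authors' implementation; matrices are given as lists of rows, and the names mirror the symbols in the equations.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(e_t, h_prev, Wz, Uz, Wh, Uh, Wr, Ur):
    """One GRU update (Eqs. 3-6).

    e_t: input embedding (length e); h_prev: previous hidden state (length d);
    W*: d x e matrices and U*: d x d matrices, each a list of row vectors.
    """
    def matvec(M, v):
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

    z = [sigmoid(a + b) for a, b in zip(matvec(Wz, e_t), matvec(Uz, h_prev))]   # update gate, Eq. (4)
    r = [sigmoid(a + b) for a, b in zip(matvec(Wr, e_t), matvec(Ur, h_prev))]   # reset gate, Eq. (6)
    rh = [r_i * h_i for r_i, h_i in zip(r, h_prev)]                             # r_t ⊙ h_{t-1}
    h_cand = [sigmoid(a + b) for a, b in zip(matvec(Wh, e_t), matvec(Uh, rh))]  # candidate state, Eq. (5)
    # Eq. (3): interpolate element-wise between candidate and previous state.
    return [z_i * hc + (1 - z_i) * hp for z_i, hc, hp in zip(z, h_cand, h_prev)]
```

With z_t near 0 the previous state is carried over unchanged, and with z_t near 1 it is replaced by the candidate, which is exactly what Eq. (3) expresses.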
Here we give the calculation of BPR and TOP1 as follows:

L_bpr = −(1/N_s) Σ_{neg=1}^{N_s} log σ(r_{l,pos} − r_{l,neg}),  (7)
L_top1 = (1/N_s) Σ_{neg=1}^{N_s} σ(r_{l,neg} − r_{l,pos}),  (8)

where N_s is the number of negative samples, and r_{l,pos}, r_{l,neg} are the scores of the positive item and a negative item at the last timestamp l, respectively. Note that we omit the regularization term for readability, since it has nothing to do with the following discussion. To simplify the formulas, we use b^{+,−} to denote the prediction bias (r_{l,pos} − r_{l,neg}) and b^{−,+} to denote (r_{l,neg} − r_{l,pos}). We then examine the gradients w.r.t. the score of the positive item r_{l,pos} as follows:

∂L_bpr/∂r_{l,pos} = −(1/N_s) Σ_{neg=1}^{N_s} (1 − σ(b^{+,−})),  (9)
∂L_top1/∂r_{l,pos} = −(1/N_s) Σ_{neg=1}^{N_s} σ(b^{−,+})(1 − σ(b^{−,+})).  (10)

Obviously, the vanishing gradient problem occurs for both loss functions when the number of negative samples N_s increases. In addition, a prediction bias b^{+,−} for BPR (or b^{−,+} for TOP1) that tends to infinity also induces the vanishing gradient problem. In practice, due to the huge size of the negative set, such prediction biases occur frequently. Therefore, GRU4Rec+ proposed the improved BPR-max and TOP1-max losses by applying softmax scores to the negative examples, which are calculated as follows:

L_bpr-max = −log Σ_{neg=1}^{N_s} s_neg σ(b^{+,−}),  (11)
L_top1-max = Σ_{neg=1}^{N_s} s_neg σ(b^{−,+}),  (12)

where s_neg is the softmax score of the negative example i_neg. We also examine the gradients w.r.t. the score of the positive item r_{l,pos}:

∂L_bpr-max/∂r_{l,pos} = − [Σ_{neg=1}^{N_s} s_neg σ(b^{+,−})(1 − σ(b^{+,−}))] / [Σ_{neg=1}^{N_s} s_neg σ(b^{+,−})],  (13)
∂L_top1-max/∂r_{l,pos} = − Σ_{neg=1}^{N_s} s_neg σ(b^{−,+})(1 − σ(b^{−,+})).  (14)
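To make the averaging effect in Eq. (9) concrete, a tiny numeric sketch follows. The scores are purely illustrative and not from the paper: one "hard" negative close to the positive keeps the gradient alive, while averaging it with many well-separated negatives shrinks the gradient toward zero.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bpr_grad_pos(r_pos, r_negs):
    """Gradient of the BPR loss w.r.t. the positive score (Eq. 9)."""
    n = len(r_negs)
    return -sum(1.0 - sigmoid(r_pos - r_neg) for r_neg in r_negs) / n

# One hard negative close to the positive keeps the gradient large ...
g_hard = bpr_grad_pos(1.0, [0.9])
# ... but averaging with 99 easy, well-separated negatives nearly cancels it,
# even though the hard negative is still present.
g_mixed = bpr_grad_pos(1.0, [0.9] + [-8.0] * 99)
```

This is the vanishing behavior the BPR-max weighting of Eq. (11) is designed to counter: the softmax scores s_neg concentrate the loss on the hardest negatives instead of averaging them away.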
Through the softmax scores s_neg, the new losses can mitigate the vanishing gradient problem. However, as mentioned above, the trade-off between model effectiveness and efficiency is hard to balance when employing negative sampling in model training. Meanwhile, the sampling operation may skip informative negative samples.

SASRec

Model Architecture. SASRec is the first model to introduce the Transformer (Vaswani et al. 2017) into sequential recommendation. SASRec stacks two layers of transformer encoders; for readability, we only introduce a single transformer encoder block. Before the transformer encoder, SASRec adds a position vector P_{1:l} ∈ R^{l×e}, so the final input is Ê = E_{1:l} + P_{1:l}.
Then it uses a Multi-Head Self-Attention (MH) layer to learn asymmetric interactions and make the model more flexible. The MH layer consists of multiple independent self-attention (SA) layers whose outputs are transformed by a weight matrix W^O ∈ R^{e×e}:

MH(Ê) = [SA_1(Ê), SA_2(Ê), ..., SA_H(Ê)] W^O,  (15)
SA_j(Ê) = attention(Ê W^{Q_j}, Ê W^{K_j}, Ê W^{V_j}),  (16)
attention(Q, K, V) = softmax(Q K^T / √e) V,  (17)
where W^{Q_j}, W^{K_j}, W^{V_j} ∈ R^{e×(e/H)} are linear projection matrices that map the input Ê into a smaller space. Note that in the case of self-attention, the queries Q, keys K, and values V all equal the input Ê. To respect the sequential nature of the data, SASRec cuts off the connections between Q_i and K_j (j > i) in the attention calculation (Eq. 17). The multi-head self-attention layer aggregates all previous item embeddings with adaptive weights, but it is still a linear model. Therefore, to endow the model with nonlinearity, SASRec applies a point-wise two-layer feed-forward network F with the ReLU (Nair and Hinton 2010) activation function:

F(Ê) = ReLU(MH(Ê) W_1) W_2.  (18)

To avoid overfitting, dropout (Srivastava et al. 2014) and layer normalization (Ba, Kiros, and Hinton 2016) are applied to the inputs of both modules (MH and F). Further, to stabilize training, a residual connection (He et al. 2016) is applied:

g(x) = x + Dropout(g(LayerNormalization(x))),  (19)

where g(x) is the multi-head self-attention layer or the point-wise feed-forward network. Finally, through the prediction layer, the result of SASRec is R_{1:l} = F(Ê) W_e^T.

Loss Function. SASRec adopts the binary cross-entropy (BCE) loss as its objective function; here we use a mask to simplify it:

L_bce = − Σ_{t=1}^{l} mask_t [log σ(r_{t,pos}) + log σ(1 − r_{t,neg})],  (20)

where MASK = (mask_1, mask_2, ..., mask_l) is the mask vector: mask_t is False when S_t in the sequence S_{1:l} is the mask item, and True otherwise.
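The masked BCE objective of Eq. (20) can be sketched in a few lines of plain Python. The per-timestamp score lists and the boolean mask below are illustrative inputs, not the paper's code; note that each timestamp sees exactly one sampled negative, which matters for the discussion that follows.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce_loss(pos_scores, neg_scores, mask):
    """Masked BCE over all timestamps (Eq. 20).

    pos_scores[t] / neg_scores[t]: scores of the positive item and the single
    sampled negative item at timestamp t; mask[t] is False for mask positions.
    """
    loss = 0.0
    for r_pos, r_neg, m in zip(pos_scores, neg_scores, mask):
        if m:  # masked timestamps contribute nothing
            loss -= math.log(sigmoid(r_pos)) + math.log(sigmoid(1.0 - r_neg))
    return loss
```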
We can see that the main difference in the loss function between GRU4Rec and SASRec is the cumulative term over time. Intuitively, this loss allows more positive samples to participate in the optimization process. However, it depends on the negative sampling operation and randomly generates only one negative item for each timestamp. Further, we give the gradients w.r.t. the scores of the positive item r_{t,pos} and the negative item r_{t,neg} as follows:

∂L_bce/∂r_{t,pos} = −mask_t (1 − σ(r_{t,pos})),  (21)
∂L_bce/∂r_{t,neg} = mask_t (1 − σ(1 − r_{t,neg})).  (22)

As we can see, the gradient coincides with the objective of sequential recommendation. However, the majority of negative items do not participate in the loss calculation due to the sampling strategy, which means that they contribute little to the update of the model parameters. Therefore, BCE is essentially prone to losing information. Intuitively, adding more negative examples can alleviate this problem, but it would spend much more time on the sampling operation.

Our Method: Cumulative Cross-Entropy Loss

Based on the above discussions, we observe that, instead of an average loss, an adaptive loss via the softmax function may be more suitable for sequential recommendation. In this sense, the Cross-Entropy (CE) loss is a natural choice. Its calculation and gradients can be described as follows:

L_ce = −log [ exp(r_{l,pos}) / Σ_{j=1}^{|I|} exp(r_{l,j}) ],  (23)
∂L_ce/∂r_{l,pos} = exp(r_{l,pos}) / Σ_{j=1}^{|I|} exp(r_{l,j}) − 1,  (24)
∂L_ce/∂r_{l,j} = exp(r_{l,j}) / Σ_{j=1}^{|I|} exp(r_{l,j}).  (25)

Note that without sampling, the CE loss aggregates the prediction scores over the whole item set, which contains the entire negative example set.
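The gradients in Eqs. (24)-(25) are simply "softmax minus one-hot", which a short sketch can verify. The score list below is illustrative; the numerically stable softmax (subtracting the maximum) is a standard implementation detail, not part of the paper's formulas.

```python
import math

def ce_loss_and_grad(scores, pos):
    """CE loss over the full item set (Eq. 23) and its gradient (Eqs. 24-25).

    scores: prediction scores r_{l,1}, ..., r_{l,|I|}; pos: positive item index.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    z = sum(exps)
    probs = [x / z for x in exps]
    loss = -math.log(probs[pos])
    # Eq. (24) for the positive item, Eq. (25) for every other item.
    grad = [p - (1.0 if j == pos else 0.0) for j, p in enumerate(probs)]
    return loss, grad
```

Every item receives a nonzero gradient in a single step, which is the coverage property argued above.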
Compared with BCE, the CE loss is more suitable for sequential recommendation for the following reasons: 1) sequential recommendation can be regarded as a multi-class classification task, and the softmax function used in the CE loss was born for this; 2) the gradient of the CE loss can cover the whole item set in a single step; 3) CE avoids negative sampling, and hence refrains from the difficulties arising therefrom, such as the additional time cost of sampling. Therefore, CE can improve the training efficiency and reduce the risk of information loss. However, the current form of the CE loss used in sequential recommendation only focuses on the last timestamp. In this paper, we directly extend it to all timestamps and propose a novel Cumulative Cross-Entropy (CCE) loss, which is calculated as follows:

L_cce = − Σ_{t=1}^{l} mask_t log [ exp(r_{t,pos}) / Σ_{j=1}^{|I|} exp(r_{t,j}) ].  (26)

The idea of CCE is simple and direct.
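In code, Eq. (26) reduces to a masked sum of log-softmax terms over all timestamps. A minimal pure-Python sketch follows; the input names `all_scores`, `positives`, and `mask` are illustrative, not from the paper.

```python
import math

def cce_loss(all_scores, positives, mask):
    """Cumulative Cross-Entropy over all timestamps (Eq. 26).

    all_scores[t]: scores over the full item set at timestamp t;
    positives[t]: index of the ground-truth next item at timestamp t;
    mask[t]: False for masked (padding) timestamps.
    """
    loss = 0.0
    for scores, pos, m in zip(all_scores, positives, mask):
        if not m:
            continue
        mx = max(scores)
        log_z = mx + math.log(sum(math.exp(s - mx) for s in scores))
        loss -= scores[pos] - log_z   # log-softmax of the positive item
    return loss
```

In a deep-learning framework this is typically a single cross-entropy call with padding positions ignored, applied to all timestamps at once, so no per-timestamp sampling loop is needed.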
It revises the short-sighted training objective of CE and takes the advantage of BCE of performing the loss calculation on each timestamp of the sequence; further, it avoids the negative sampling operation in BCE and calculates the gradient on the entire item set like CE. Extensive experiments verify the effectiveness of the CCE loss.

Experiments

We conduct extensive experiments on five benchmark datasets to validate the effectiveness and efficiency of the proposed CCE loss, aiming to answer the following research questions. RQ1: How does the CCE loss perform when employed in the state-of-the-art models? RQ2: How efficient is the training of the models using the CCE loss? RQ3: How does the CCE loss perform across all timestamps?

Experiments Setup

Datasets. We use five public benchmark datasets collected from three real-world platforms, namely, three sub-category datasets of Amazon1 (McAuley et al. 2015): Beauty, Sports, and Toys; a business recommendation dataset, Yelp2; and a music artist recommendation dataset, LastFM3 (Cantador, Brusilovsky, and Kuflik 2011). Note that for Yelp we only use the transaction records after January 1st, 2019.

1 http://jmcauley.ucsd.edu/data/amazon/links.html
2 https://www.yelp.com/dataset
3 https://grouplens.org/datasets/hetrec-2011/

Table 1: Statistics of five datasets after preprocessing

Dataset           | Sports  | Toys    | Yelp    | Beauty  | LastFM
# of sequences    | 35598   | 19412   | 30431   | 22362   | 1090
# of items        | 18357   | 11924   | 20033   | 12101   | 3646
# of interactions | 296337  | 167597  | 316454  | 198502  | 52551
Average length    | 16.14   | 14.06   | 15.80   | 16.40   | 14.41
Density           | 0.05%   | 0.07%   | 0.05%   | 0.07%   | 1.32%

Data Processing. Following recent state-of-the-art work in sequential recommendation (Kang and McAuley 2018; Zhou et al. 2020; Tang and Wang 2018; Sun et al. 2019), we divide each dataset into train, validation, and test sets according to the leave-one-out strategy. In addition, to reproduce the pre-training model S3-Rec, we preprocess the original datasets as follows. (1) We remove users and items with fewer than five interaction records. (2) We group the interaction records by user and sort them chronologically. (3) We keep user sequences at the fixed length l.
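The leave-one-out split described above can be sketched as follows. This is a hypothetical helper, not the authors' code: for each user, the last interaction becomes the test target and the second-to-last the validation target, with everything earlier used for training.

```python
def leave_one_out_split(user_items):
    """Leave-one-out split of chronologically sorted interactions per user.

    user_items: dict mapping user -> chronologically sorted item list
    (assumed to contain at least three items after the 5-core filtering).
    Returns (train, valid, test) dicts.
    """
    train, valid, test = {}, {}, {}
    for user, items in user_items.items():
        train[user] = items[:-2]   # all but the last two interactions
        valid[user] = items[-2]    # second-to-last item for validation
        test[user] = items[-1]     # last item for testing
    return train, valid, test
```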
After preprocessing, the statistics of the five datasets are summarized in Table 1.

Baseline Methods. Since most sequential recommendation models only output results at the final timestamp (R_l), we choose three representative models that are not only able to output results at all timestamps (R_{1:l}) but are also equipped with stable and superior performance: GRU4Rec (Hidasi et al. 2016), which is the first to apply a GRU to model user interaction sequences for session-based recommendation; SASRec (Kang and McAuley 2018), which is a transformer-based model that uses a multi-head attention mechanism to learn asymmetric interactions and make the model more flexible; and S3-Rec (Zhou et al. 2020), which is the first to introduce self-supervised learning to sequential recommendation.

For comparative loss functions, we choose CE as the representative last-timestamp loss function, since it performs better than the BPR and TOP1 losses in our preliminary experiments. Besides, we use BCE as the representative all-timestamp loss function. Note that we ignore the masked language model loss due to its large training cost.

Implementation Details. To reproduce the sequential recommendation models GRU4Rec, SASRec, and S3-Rec, we use the open-source S3-Rec code4 and the RecBole5 repository. The hyperparameters of these models are set as suggested in the original papers. For each dataset, the fixed length of the input sequence is set to 50 and the size of the item embeddings to 64. Besides, we use the Adam optimizer with the default learning rate of 0.001; the parameters β1 and β2 are set to 0.9 and 0.999, respectively. We train the models for 150 epochs with an early-stop strategy6, and save the optimal model based on the evaluation metrics on the validation set.

4 https://github.com/RUCAIBox/CIKM2020-S3Rec
5 https://github.com/RUCAIBox/RecBole
6 We terminate the training when the evaluation metric does not improve for ten consecutive epochs.

Table 2: Comparing three loss functions with respect to the performance of GRU4Rec, SASRec, and S3-Rec on five datasets. Best results are in boldface, and the best one between L_bce and L_ce is indicated by underline. "Improve" denotes the improvement over the better of L_bce and L_ce, while degradation cases are marked with ↓.
                  GRU4Rec                            SASRec                             S3-Rec
Dataset  Metric   L_bce   L_ce    L_cce   Improve.   L_bce   L_ce    L_cce   Improve.   L_bce   L_ce    L_cce   Improve.
Sports   HR@5     0.0100  0.0099  0.0221  121.00%    0.0216  0.0168  0.0380  75.93%     0.0217  0.0325  0.0456  40.31%
         HR@10    0.0184  0.0163  0.0357  94.02%     0.0330  0.0229  0.0541  63.94%     0.0359  0.0478  0.0642  34.31%
         HR@20    0.0297  0.0253  0.0548  84.51%     0.0491  0.0330  0.0752  53.16%     0.0567  0.0709  0.0908  28.07%
         NDCG@5   0.0063  0.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0064 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0143 123.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='44% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0147 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0117 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0267 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='63% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0137 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0213 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0311 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='01% NDCG@10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0090 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0085 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0187 107.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='78% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0184 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0137 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0318 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='83% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0182 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0262 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0371 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='60% NDCG@20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0118 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0107 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0235 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='15% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0225 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0162 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0371 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='89% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0234 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0320 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0438 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='88% Toys HR@5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0128 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0097 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0420 228.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='13% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0430 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0385 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0736 71.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='16% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0409 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0568 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0791 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='26% HR@10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0236 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0153 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0597 152.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='97% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0613 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0485 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0989 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='34% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0641 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0796 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='1096 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='69% HR@20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0401 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0229 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0834 107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='98% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0862 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0616 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='1299 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='70% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0998 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='1119 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='1492 33.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='33% NDCG@5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0081 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0065 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0297 266.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='67% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0288 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0291 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0533 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='16% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0261 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0398 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0566 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='21% NDCG@10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0116 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0083 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0354 205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='17% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0347 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0323 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0615 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='23% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0335 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0472 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0664 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='68% NDCG@20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0157 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0102 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0414 163.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='69% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0410 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0356 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0693 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='02% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0425 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0553 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0764 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='16% Yelp HR@5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0128 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0094 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0211 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='84% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0166 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0101 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0232 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='76% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0206 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0178 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0290 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='78% HR@10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0220 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0164 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0367 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='82% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0273 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0174 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0379 38.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='83% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0354 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0311 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0474 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='90% HR@20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0378 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0273 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0603 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='52% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0499 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0275 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0623 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='85% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0552 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0498 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0756 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='96% NDCG@5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0080 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0055 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0134 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='50% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0106 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0064 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0151 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='45% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0126 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0115 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0184 46.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='03% NDCG@10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0109 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0078 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0184 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='81% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0140 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0087 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0198 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='43% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0173 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0157 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0243 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='46% NDCG@20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0149 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0105 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0244 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='76% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0184 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0112 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0259 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='76% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0223 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0204 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0314 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='81% Beauty HR@5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0161 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0223 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='0489 119.' 
[Table 2: HR@{5, 10, 20} and NDCG@{5, 10, 20} of GRU4Rec, SASRec, and S3-Rec with the BCE, CE, and CCE losses on the five datasets (Sports, Toys, Yelp, Beauty, and LastFM), together with the relative improvement of CCE over the best of BCE and CE; ↓ marks the few cases where CCE performs slightly worse.]
We train the models at the training stage and then report their performance on the test set. Note that for the pre-training model S3-Rec, we use the reproduced model offered by its source code and retrain it at the fine-tuning stage. All experiments are conducted using 10 cores of an Intel i9-10900K CPU, 24 GB of memory, and an NVIDIA GeForce RTX 3090 GPU.

Evaluation Metrics To evaluate the performance of sequential recommendation models, we adopt the top-k Hit Ratio (HR@k, k = 5, 10, 20) and the top-k Normalized Discounted Cumulative Gain (NDCG@k, k = 5, 10, 20), which are commonly used in previous studies (Hidasi et al. 2016; Kang and McAuley 2018; Zhou et al. 2020). The details of the metrics can be found in (Krichene and Rendle 2020).
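With a single relevant test item per user, both metrics reduce to simple functions of the target item's position in the full ranking. A minimal sketch of this computation (the function and variable names are ours, not from the paper's code):

```python
import numpy as np

def hr_and_ndcg_at_k(scores, target, k):
    """Full-ranking HR@k and NDCG@k for one user.

    scores: 1-D array of model scores over the whole item catalog.
    target: index of the single ground-truth (test) item.
    """
    # Rank of the target item: number of items scored strictly higher.
    rank = int(np.sum(scores > scores[target]))  # 0-based rank
    hit = 1.0 if rank < k else 0.0
    # With one relevant item, NDCG@k is 1 / log2(rank + 2) if it is in the top k.
    ndcg = 1.0 / np.log2(rank + 2) if rank < k else 0.0
    return hit, ndcg

scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
print(hr_and_ndcg_at_k(scores, target=3, k=2))  # target ranked 2nd -> (1.0, ~0.63)
```

Averaging these per-user values over all test users gives the reported HR@k and NDCG@k.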
Recent work on sampling strategies (Dallmann, Zoller, and Hotho 2021; Krichene and Rendle 2020) found that, under the same sampled test set, the evaluation metrics are inconsistent when different sampling strategies are used. To avoid this inconsistency, we report full-ranking metrics.

Experimental Results

Overall Results (RQ1) Table 2 shows the performance of GRU4Rec, SASRec, and S3-Rec using CCE, BCE, and CE, respectively. We observe that the CCE loss improves on the best performance of BCE (or CE) for all models in most cases. In addition, we perform a t-test on the results, which shows that the performance of all models using the proposed CCE loss differs significantly from that with BCE or CE (at significance level p < .001). Note that in only 4 out of 90 cases do the results produced by the CCE loss show a very slight performance decrease (up to 4.03%).

For GRU4Rec, the proposed CCE loss greatly promotes the performance of the model compared with BCE and CE. The average improvements on the five datasets in terms of HR@5, HR@10, HR@20, NDCG@5, NDCG@10, and NDCG@20 are 113.99%, 82.90%, 68.66%, 125.63%, 103.05%, and 89.76%, respectively. Interestingly, the CCE loss brings an astonishing 266.67% improvement in NDCG@5 on Toys. In addition, the experiments show that GRU4Rec with CCE achieves better performance on Sports, Yelp, Beauty, and LastFM than the original SASRec, which indicates that the loss function has a great influence on model performance. For SASRec, our CCE loss achieves an overall increase in all metrics on the five datasets. Specifically, the average improvements in terms of HR@5, HR@10, HR@20, NDCG@5, NDCG@10, and NDCG@20 are 65.82%, 54.41%, 46.38%, 69.90%, 62.05%, and 62.09%, respectively. Compared with the GRU4Rec and SASRec models, S3-Rec with BCE (or CE) obtains the best performance, yet the CCE loss still shows a substantial improvement for S3-Rec on the six metrics: the average improvements are 29.75% (HR@5), 28.96% (HR@10), 25.73% (HR@20), 33.24% (NDCG@5), 32.24% (NDCG@10), and 29.83% (NDCG@20), respectively.

Figure 2: The performance curves (NDCG@10) of GRU4Rec, SASRec, and S3-Rec using different loss functions on the test data during the training process, with panels (a) Sports, (b) Toys, (c) Yelp, (d) Beauty, and (e) LastFM.
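The gap between CE and CCE in these results comes down to which timestamps contribute to the loss. The sketch below is our own illustration, not the authors' implementation: given per-timestamp logits over the full item catalog, CE supervises only the last position, while CCE accumulates the cross-entropy over every position, with no negative sampling in either case.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the item (last) axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

def ce_loss(logits, targets):
    # Standard CE: cross-entropy at the last timestamp only.
    log_probs = log_softmax(logits)
    return -log_probs[-1, targets[-1]]

def cce_loss(logits, targets):
    # Cumulative CE: cross-entropy accumulated over every timestamp.
    log_probs = log_softmax(logits)
    t = np.arange(len(targets))
    return -log_probs[t, targets].sum()

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))      # 5 timestamps, 100-item catalog
targets = np.array([3, 17, 42, 8, 99])  # next item at each timestamp
print(ce_loss(logits, targets), cce_loss(logits, targets))
```

Because every prefix of the sequence provides a training signal, one forward pass yields len(targets) supervised predictions instead of one, which is consistent with the faster convergence observed in Fig. 2.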
Figure 3: The performance of GRU4Rec using different loss functions at all timestamps on the five datasets, with panels (a) Sports, (b) Toys, (c) Yelp, (d) Beauty, and (e) LastFM.

Training Efficiency (RQ2) We evaluate the training efficiency of our approach from two aspects, as suggested in (Kang and McAuley 2018). Fig. 2 displays the NDCG@10 scores on the test sets during the training process of the baseline models with different loss functions on the five benchmark datasets. We also show the training speed, measured as the average time consumption for one training epoch (seconds/epoch; see the bottom-right corner of each graph). As can be seen from Fig. 2, although all the loss functions share similar training speeds, the performance curves of the models with CCE on the test data rise rapidly with wall-clock time and dominate those of the other loss functions for nearly the entire training process.
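Seconds-per-epoch figures like these can be collected by wrapping each epoch in a wall-clock timer; a generic sketch (train_one_epoch is a placeholder callable, not from the paper's code):

```python
import time

def seconds_per_epoch(train_one_epoch, num_epochs):
    """Average wall-clock seconds per training epoch."""
    durations = []
    for _ in range(num_epochs):
        start = time.perf_counter()
        train_one_epoch()          # one pass over the training data
        durations.append(time.perf_counter() - start)
    return sum(durations) / len(durations)

# Example with a stand-in workload instead of a real training epoch.
avg = seconds_per_epoch(lambda: sum(i * i for i in range(100_000)), num_epochs=3)
print(f"{avg:.4f} s/epoch")
```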
For example, SASRec+CCE takes about 100 seconds to reach a much higher NDCG@10 value (i.e., 0.035) on Sports, while spending 12.24 seconds per epoch, which is close to BCE (11.21 s/epoch) and CE (12.62 s/epoch). In summary, we argue that the CCE loss helps model training effectively and efficiently.

Performance on All Timestamps (RQ3) In this section, we extend the results in Fig. 1c to the five benchmark datasets. As shown in Fig. 3, the vertical axes represent the NDCG@10 scores of GRU4Rec, while the horizontal axes represent all the timestamps of the input sequence. The last two timestamps refer to the validation and test items, respectively, where the model performance naturally drops drastically. On Beauty, Sports, Toys, and Yelp, CCE yields a very significant boost across all timestamps, which shows that CCE can better guarantee the accuracy of the intermediate steps of model inference. On the LastFM dataset, CCE has only a slight improvement over BCE on the training sequence, which may explain why it does not show a great advantage on the test data. Intuitively, a loss function that guarantees accuracy at all timestamps of the training sequence can effectively improve recommendation accuracy.

Conclusion In this paper, we address the issue of loss function design in sequential recommendation models.
We point out that the whole training sequence, rather than only the last timestamp, should be considered when calculating the loss. Meanwhile, avoiding negative sampling can improve both the training efficiency and the accuracy of recommendations. We propose a novel cumulative cross-entropy loss and apply it to three typical models, i.e., GRU4Rec, SASRec, and S3-Rec. Experiments on five benchmark datasets demonstrate its effectiveness. We hope that this work can inspire the design of loss functions in subsequent research on sequential recommendation models and contribute to effective and efficient training for sequential recommendation.

References
Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Burges, C. J. 2010. From RankNet to LambdaRank to LambdaMART: An overview. Learning.
Cantador, I.; Brusilovsky, P.; and Kuflik, T. 2011. Second workshop on information heterogeneity and fusion in recommender systems (HetRec2011). In Proceedings of the fifth ACM conference on Recommender systems, 387–388.
Chen, Y.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Xiong, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' ELECRec: Training Sequential Recommenders as Discriminators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceed- ings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Dallmann, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zoller, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Hotho, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' A Case Study on Sampling Strategies for Evaluating Neural Sequen- tial Item Recommendation Models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Fifteenth ACM Con- ference on Recommender Systems, 505–514.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Devlin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Chang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Lee, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Toutanova, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Bert: Pre-training of deep bidirectional transformers for lan- guage understanding.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' arXiv preprint arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='04805.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' He, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ren, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Sun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Deep resid- ual learning for image recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, 770–778.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Hidasi, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Karatzoglou, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Recurrent neural net- works with top-k gains for session-based recommendations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 27th ACM international conference on information and knowledge management, 843–852.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Hidasi, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Karatzoglou, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Baltrunas, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Tikk, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Session-based recommendations with recurrent neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In ICLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Jeon, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Kim, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Yoon, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Lee, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Kang, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ac- curate Action Recommendation for Smart Home via Two- Level Encoders and Commonsense Knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ji, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Joo, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Song, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Kim, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Moon, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Sequential recommendation with relation-aware kernelized self-attention.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the AAAI conference on artificial intelligence, 4304–4311.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Kang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and McAuley, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Self-attentive sequen- tial recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In 2018 IEEE international confer- ence on data mining (ICDM), 197–206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' IEEE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Krichene, W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Rendle, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' On Sampled Metrics for Item Recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discov- ery & Data Mining.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ren, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Chen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ren, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Lian, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Ma, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Neural attentive session-based recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Pro- ceedings of the 2017 ACM on Conference on Information and Knowledge Management, 1419–1428.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhao, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Li, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Chan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Faloutsos, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Karypis, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Pantel, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and McAuley, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Coarse-to-Fine Sparse Sequential Recommendation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 44th In- ternational ACM SIGIR Conference on Research and Devel- opment in Information Retrieval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ding, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Chen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Xin, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Shi, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Tang, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Wang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Extracting Attentive Social Tempo- ral Excitation for Sequential Recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceed- ings of the 30th ACM international conference on informa- tion and knowledge managemen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Liu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Cai, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Dong, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Shang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Noninvasive self-attention for side information fusion in sequential recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In AAAI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Liu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zeng, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Mokhosi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Zhang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' STAMP: short-term attention/memory priority model for session-based recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 24th ACM SIGKDD international conference on knowledge dis- covery & data mining, 1831–1839.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ma, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Kang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Liu, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Hierarchical gating net- works for sequential recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 825–833.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ma, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ma, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Sun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Liu, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Coates, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Memory augmented graph neural networks for se- quential recommendation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the AAAI con- ference on artificial intelligence, 5045–5052.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' McAuley, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Targett, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Shi, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Van Den Hengel, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2015.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Image-based recommendations on styles and substi- tutes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 38th international ACM SIGIR conference on research and development in information re- trieval, 43–52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Nair, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Hinton, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Rectified linear units im- prove restricted boltzmann machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In ICML.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Petrov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Macdonald, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Effective and Effi- cient Training for Sequential Recommendation using Re- cency Sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 16th ACM confer- ence on recommender systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Rendle, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Freudenthaler, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Gantner, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Schmidt- Thieme, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' BPR: Bayesian Personalized Ranking from Implicit Feedback.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Uncertainty in Artificial Intel- ligence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Srivastava, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Hinton, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Krizhevsky, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Sutskever, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Salakhutdinov, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Dropout: a simple way to prevent neural networks from overfitting.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' The journal of machine learning research, 15(1): 1929–1958.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Sun, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Wu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Pei, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Lin, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Ou, W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Jiang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' BERT4Rec: Sequential recommendation with bidi- rectional encoder representations from transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Pro- ceedings of the 28th ACM international conference on infor- mation and knowledge management, 1441–1450.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Tang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Wang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Personalized top-n sequential recommendation via convolutional sequence embedding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the eleventh ACM international conference on web search and data mining, 565–573.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Taylor, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 1953.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' “Cloze procedure”: A new tool for mea- suring readability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Journalism quarterly, 30(4): 415–433.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Vaswani, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Shazeer, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Parmar, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Uszkoreit, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Jones, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Gomez, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Kaiser, Ł.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Polosukhin, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' At- tention is all you need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Advances in neural information pro- cessing systems, 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Wu, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Tang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Xie, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Tan, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Session-based recommendation with graph neural net- works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In AAAI.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zheng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Wu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Cold-start se- quential recommendation via meta learner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the AAAI Conference on Artificial Intelligence, 4706– 4713.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhou, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Wang, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Wang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Zhang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' and Wen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content='-R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' S3-rec: Self-supervised learning for sequential recommendation with mutual infor- mation maximization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'} +page_content=' In Proceedings of the 29th ACM In- ternational Conference on Information & Knowledge Man- agement, 1893–1902.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NAzT4oBgHgl3EQfDfro/content/2301.00979v1.pdf'}