You Truly Understand What I Need: Intellectual and Friendly Dialogue Agents grounding Knowledge and Persona

Jungwoo Lim1, Myunghoon Kang1∗, Yuna Hur1∗, Seungwon Jung1∗, Jinsung Kim1∗, Yoonna Jang1, Dongyub Lee3, Hyesung Ji2, Donghoon Shin2, Seungryong Kim1§ and Heuiseok Lim1§
1Korea University, 2Dialogue Tech Division, NCSOFT, 3Naver Corporation
{wjddn803,chaos8527,yj72722,redlion0929,jin62304,seungryong_kim,limhseok}@korea.ac.kr, {hyesung84,dhshin}@ncsoft.com, dongyub.lee@navercorp.com

Abstract

To build a conversational agent that interacts fluently with humans, previous studies blend knowledge or a personal profile into the pre-trained language model. However, models that consider knowledge and persona at the same time are still limited, leading to hallucination and a passive way of using personas.
We propose an effective dialogue agent that grounds external knowledge and persona simultaneously. The agent selects the proper knowledge and persona to use for generating the answers with our candidate scoring implemented with a poly-encoder. Then, our model generates utterances with lesser hallucination and more engagingness, utilizing retrieval-augmented generation with a knowledge-persona enhanced query. We conduct experiments on the persona-knowledge chat and achieve state-of-the-art performance in grounding and generation tasks on the automatic metrics. Moreover, we validate the answers from the models regarding hallucination and engagingness through human evaluation and qualitative results. We show our retriever's effectiveness in extracting relevant documents compared to previous retrievers, along with a comparison of multiple candidate scoring methods. Code is available at https://github.com/dlawjddn803/INFO

arXiv:2301.02401v1 [cs.CL] 6 Jan 2023

1 Introduction

To build an ultimate conversational agent that interacts with humans fluently, previous studies provide generative neural network-based models (Sordoni et al., 2015; Vinyals and Le, 2015). Although the answers generated from those models are plausible, they lack informativeness and engagingness, resulting in bland responses compared to humans (Li et al., 2016; Gao et al., 2018).

∗ Equal Contributors. § Corresponding author.

Dialogue
Human: Is it in England?
Machine: No, it is actually in Scotland where you are going.
Human: Where in Scotland?

Human's Persona
I will travel through North Ayrshire.
I am going to Scotland.
I like history.
I am interested in architecture.
I love to garden.

Ground Truth Knowledge
Eglinton Castle was a large Gothic castellated mansion in Kilwinning, North Ayrshire, Scotland.

Predicted Answers
BARTbase: It is in Scotland, which is a place you love.
BARTlarge: It is in Scotland. in Scotland. in Scotland. in

Ground Truth Response
It is in North Ayrshire so you could visit when you travel through.

Table 1: Example of the generated answers from a typical generative model, i.e., BART. We can find that BARTbase uses a persona sentence that does not appear in the human's personal profile, resulting in a hallucinated answer. Also, BARTlarge generates a less engaging answer by making use of the knowledge only. Both generated responses exhibit hallucination and are less engaging.

However, for a knowledgeable and attractive conversation, people usually provide informative replies by considering the background of the person whom they are talking to. Towards a human-like manner of dialogue, Ghazvininejad et al. (2018) and Dinan et al. (2018) introduce knowledge-grounded conversation for knowledgeable and informative responses, whereas Zhang et al. (2018a) suggest persona-grounded dialogue for personalized responses to the users.

To improve the machine's answers with an external knowledge base, one line of work injects factual knowledge into the parameters of the language model (Raffel et al., 2020; Roberts et al., 2020). Despite the models' capability of utilizing external knowledge implicitly, they produce "hallucinations" in the responses (Marcus, 2020). Hallucination in dialogue covers the situation where the generated output contradicts the reference knowledge, and also the situation where the generated output cannot be confirmed from the knowledge source (Ji et al., 2022). To mitigate these hallucinated answers, hybrid models employing parametric memory with non-parametric (i.e., retrieval-based) memory are introduced to directly access external memories, allowing the source to be inspected and interpreted (Karpukhin et al., 2020; Petroni et al., 2020; Lewis et al., 2020b).

On the other hand, Zhang et al. (2018a) suggest persona-chat dialogues with corresponding personal profiles of each interlocutor to avoid general and monotonous answers from the machine. Though See et al. (2019) and Liu et al. (2020) show comparable quality in generating personalized conversation, the generated utterances merely confirm each interlocutor's persona, resulting in a passive manner of speaking such as "I have four children". In addition, the incoherent topics of the dialogues lead to shallow levels of conversation between the interlocutors. To elaborate on this chit-chat conversation supported by external knowledge, Jang et al. (2022) present a novel persona-knowledge chat with a generative model that considers persona information and world knowledge altogether. Despite obtaining the knowledge and persona when generating the answers, the generative models' responses still exhibit both hallucination and lesser engagingness, as in Table 1.

In this paper, we propose INFO (Intellectual and Friendly dialOg agents), which responds with external knowledge and persona simultaneously. Owing to its enhanced capturing of the relevancy between the context and each candidate set, the knowledge selector and persona selector for the grounding task are implemented with the poly-encoder. To alleviate hallucinated responses from the model, we adopt retrieval-augmented generation (RAG) (Lewis et al., 2020b), utilizing non-parametric memory and a parametric generator in addition to the enhanced input query. By injecting the predicted sources as input to the retrieval-augmented generator, our model maintains consistency between grounding and generation while training.
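As a rough illustration, the enhanced input query can be thought of as the selected persona and knowledge sentences aligned with the dialogue history into a single retrieval query. The sketch below is only a minimal illustration of that idea; the separator token, the `build_kpeq` helper, and the exact ordering are assumptions, not INFO's actual format.

```python
def build_kpeq(predicted_persona, predicted_knowledge, dialogue_history,
               sep=" <sep> "):
    """Join the selected persona, knowledge, and dialogue history into one
    query string for the retrieval-augmented generator (format assumed)."""
    parts = list(predicted_persona) + list(predicted_knowledge) + list(dialogue_history)
    return sep.join(parts)

# Toy example using the dialogue from Table 1.
query = build_kpeq(
    predicted_persona=["I will travel through North Ayrshire."],
    predicted_knowledge=["Eglinton Castle was a large Gothic castellated mansion "
                         "in Kilwinning, North Ayrshire, Scotland."],
    dialogue_history=["Is it in England?",
                      "No, it is actually in Scotland where you are going.",
                      "Where in Scotland?"],
)
```

Feeding the predicted (rather than gold) sources into such a query is what lets training keep the grounding and generation steps consistent with each other.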
Therefore, our model generates more knowledgeable and engaging answers in an active manner with less hallucination. We show that INFO achieves the highest scores on both grounding and generation tasks in empirical experiments. Also, we compare diverse candidate scoring modules, including the bi-encoder, cross-encoder, and poly-encoder, and demonstrate their effect on generation. We additionally conduct experiments to show the effectiveness of the retriever module compared to sparse and dense retrievers. Qualitative results and human evaluation are also presented to validate our model's capability to generate human-like answers. Our contributions are as follows:

- We propose a model that simultaneously grounds persona information and external knowledge, with lesser hallucination and adequate, active utilization of persona.
- Our approach suggests that the generated responses from the model are interpretable regarding what the model refers to while generating.
- We show that INFO achieves SoTA performance on all of the automatic metrics and demonstrate its comparable quality with human evaluation and qualitative analysis.

2 Related Works

2.1 Knowledge Grounded Conversation

To let neural network models ground external knowledge and generate informative answers, Ghazvininejad et al. (2018) suggest a data-driven neural conversational agent that provides knowledgeable answers. Also, Dinan et al. (2018) introduce an open-domain dialogue setting where the two speakers talk with Wikipedia knowledge. To inject external knowledge into the pre-trained language model efficiently, Raffel et al. (2020) and Roberts et al. (2020) succeed in equipping the knowledge into the parameters and show comparable performance on open-domain question answering tasks. However, this approach is not capable of expanding or revising its inherent knowledge and produces hallucination (Marcus, 2020). To overcome these limitations, Lewis et al. (2020b) combine a pre-trained parametric model and non-parametric memory for open-domain question answering to reduce hallucination. Since their non-parametric memory can be updated without extra pre-training, revising knowledge is more efficient. Furthermore, it is found that a retrieval-augmented generator also reduces hallucination in knowledge-grounded conversation (Shuster et al., 2021), and a similar approach recently achieves outstanding performance in knowledge-grounded conversation (Paranjape et al., 2021).

Figure 1: Overview of our method. U is the input comprising the dialogue history and knowledge snippet, and cand denotes each candidate from the grounding tasks. The grounding score is obtained through a dot product between the representation of the input context Udial and the candidate. The predicted sources are converted into the knowledge-persona enhanced query (KPEQ) together with the dialogue history, and the KPEQ is fed into the retrieval-augmented generator to generate the responses.

2.2 Persona Grounded Conversation

In order to alleviate bland and general answers and maintain a consistent personality, Zhang et al. (2018a) construct a persona-chat dataset in which the two interlocutors chat conditioned on persona profile sentences. Along with this dataset, Zhang et al. (2018a) introduce a model with a profile memory network that considers the dialogue history to perform attention over the persona. Mazare et al. (2018) enlarge the persona-chat dataset with a Reddit corpus, pre-train a model on it, and then fine-tune the pre-trained model on persona-chat. Also, Liu et al. (2020) train a receiver to reinforce the mutual persona understanding between interlocutors, and Wolf et al. (2019) utilize pre-trained models (Radford et al., 2019) to build personalized dialogue agents.

2.3 Encoders for Sentence Scoring

There exist diverse encoder structures for sentence scoring.
The bi-encoder scores the relevance between sentences by feeding the context and candidates into separate encoders. Examples of bi-encoders are memory networks (Zhang et al., 2018a), transformer memory networks (Dinan et al., 2018), and LSTMs (Lowe et al., 2015). Since the bi-encoder computes scores with cached encoded sentence representations, it is relatively fast. However, the bi-encoder is limited in capturing mutual information between the context and candidates. The cross-encoder, on the other hand, scores by aligning the context and a candidate in one sequence. One type of cross-encoder is the sequential matching network, which is based on deep matching networks (Yang et al., 2018) and gated self-attention (Zhang et al., 2018b). Although a cross-encoder achieves rich interaction between the sentences within the encoder, the problem of slow processing remains. To exploit the benefits of both models, the poly-encoder adopts an attention mechanism on top of the bi-encoder architecture and shows performance comparable to the cross-encoder with fast inference time (Humeau et al., 2019). For an enhanced representation when grounding knowledge and persona, we employ a poly-encoder as the selector for each grounding task.

3 Method

To generate more knowledgeable and engaging dialogue, we introduce our conversational model that grounds external knowledge and persona information, as in Figure 1. We first encode the input with the pre-trained language model, and then choose the proper knowledge and persona from the given candidates with each selector.
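The candidate scoring the selectors rely on can be sketched as follows. This is a minimal NumPy illustration of the poly-encoder mechanism of Humeau et al. (2019), not INFO's actual implementation; the dimensions, the `softmax` helper, and the random toy inputs are assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_score(ctx_tokens, codes, cand_vec):
    """Score one candidate against the context, poly-encoder style.

    ctx_tokens: (T, d) encoded context (dialogue history) token vectors
    codes:      (m, d) learned query codes that attend over the context
    cand_vec:   (d,)   encoded candidate (knowledge or persona) vector
    """
    # 1) m global context features: each code attends over the context tokens.
    attn = softmax(codes @ ctx_tokens.T)      # (m, T)
    ctx_feats = attn @ ctx_tokens             # (m, d)
    # 2) The candidate attends over the m features, giving one context vector.
    w = softmax(cand_vec @ ctx_feats.T)       # (m,)
    ctx_vec = w @ ctx_feats                   # (d,)
    # 3) The grounding score is a dot product, as in Figure 1.
    return float(ctx_vec @ cand_vec)

# Toy run: pick the best of five random candidates.
rng = np.random.default_rng(0)
T, m, d = 12, 4, 8
ctx = rng.normal(size=(T, d))
codes = rng.normal(size=(m, d))
cands = rng.normal(size=(5, d))
scores = [poly_encoder_score(ctx, codes, c) for c in cands]
best = int(np.argmax(scores))  # index of the selected candidate
```

In the model described here, one such selector scores the knowledge candidates and another the persona candidates; because the candidate only interacts with m cached context features, scoring stays close to bi-encoder speed while retaining cross-attention-like expressiveness.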
We employ a poly-encoder (Humeau et al., 2019) as the knowledge selector and the persona selector to exploit its enhanced capability of capturing the relevance between the candidate set and the context (i.e., dialogue history). Then, the predicted persona and knowledge are aligned into one sequence with the dialogue history for consistency between grounding and generation.

[Figure 1: Model overview. A poly-encoder-based knowledge selector and persona selector (with context and candidate encoders, a candidate aggregator, and a persona level indicator) score the knowledge and persona candidates. The selections form the KPEQ, which feeds a non-parametric retriever over a document index and a parametric generator that marginalizes over the retrieved documents to produce the generated answer.]

The sequence is defined as a knowledge-persona enhanced query (KPEQ), which is fed into the retrieval-augmented generator (RAG). The generator then extracts the relevant paragraphs to refer to from the knowledge index to reduce hallucination.

3.1 Input Construction

The given dialogue is notated as {(u_1^hm, u_1^mc), ..., (u_o^hm, u_o^mc)}, where o is the number of rounds, and u^hm and u^mc indicate the utterances of the human and the machine, respectively.
We first take the o-th round dialogue history, except for the final machine reply u_o^mc, as the initial input to the model. We define the clue of the dialogue as a knowledge snippet cl_k that informs the machine which topic the user is interested in. The knowledge snippet is the name of the landmark that the user encounters, which is the given topic of the dialogue. We then align the dialogue history and the knowledge snippet into one sequence for the model input as U = {u_1^hm, u_1^mc, ..., u_o^hm, cl_k}.

3.2 Model Components
3.2.1 Poly-Encoder Based Candidate Scoring

For the knowledge and persona grounding tasks, we suggest poly-encoder-based candidate scoring to leverage its capability of capturing the semantic similarities between the context input and the candidates. It is employed to select the proper sources to be used when generating the utterance. Given the context input U, we compute a grounding score for each candidate using the embeddings of the context input and the encoded candidates in the poly-encoder. The grounding score is used to select the most suitable source(s) in the knowledge selector and the persona selector, which are introduced in Sections 3.2.2 and 3.2.3. In the poly-encoder architecture (Humeau et al.
, 2019), the candidates are fed into the candidate encoder and denoted as {a_1, ..., a_T}, where T is the number of candidates in the set. Each candidate embedding a_t is the first output of the candidate encoder, which is implemented as a transformer model. After encoding the candidates, the context input (i.e., dialogue history) is embedded with a separate context encoder. Unlike the candidate encoder, the context encoder embeds the dialogue into multiple vectors through M context codes {c_1, ..., c_M}, which are learned to capture diverse aspects of a given context rather than using one embedding.
Each context code is used to extract U_dial^m by attending over all of the previous layer's outputs as follows:

    U_dial^m = \sum_j w_j^{c_m} h_j    (1)

Note that h_1, ..., h_n are the outputs of the pre-trained language model and n is the number of tokens in the input. The weights are computed as (w_1^{c_m}, ..., w_n^{c_m}) = softmax(c_m · h_1, ..., c_m · h_n). Then, the final attention proceeds between the global features of the input and a given candidate.
In other words, the final dialogue feature U_dial is obtained by aggregating each dialogue feature U_dial^m, gaining richer interactions through the context codes, as in Equation 2:

    U_dial = \sum_m w_m U_dial^m,    (2)

where (w_1, ..., w_M) is obtained from softmax(a_t · U_dial^1, ..., a_t · U_dial^M). The final predicted candidate is chosen by the highest score acquired from the dot product (U_dial · a_t).

3.2.2 Knowledge Selector (KS)

We build a knowledge selector for the knowledge grounding task, employing poly-encoder-based candidate scoring.
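As a concrete illustration, the candidate scoring of Equations 1 and 2, as used by the selectors, can be sketched in NumPy. This is a minimal sketch with random toy tensors, not the authors' implementation; the function name `poly_encoder_score` and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def poly_encoder_score(H, codes, a):
    """Score one candidate embedding `a` against context-encoder outputs `H`.

    H:     (n, d) token outputs h_1..h_n of the context encoder
    codes: (M, d) learned context codes c_1..c_M
    a:     (d,)   candidate embedding from the candidate encoder
    """
    # Eq. (1): each context code attends over the token outputs -> U_dial^m
    U = np.stack([softmax(c @ H.T) @ H for c in codes])   # (M, d)
    # Eq. (2): the candidate attends over the M context features
    w = softmax(U @ a)                                    # (M,)
    U_dial = w @ U                                        # (d,)
    # final grounding score is the dot product U_dial . a_t
    return float(U_dial @ a)

rng = np.random.default_rng(0)
H = rng.normal(size=(10, 16))       # 10 tokens, hidden size 16
codes = rng.normal(size=(4, 16))    # M = 4 context codes
cands = rng.normal(size=(5, 16))    # T = 5 candidate embeddings
scores = [poly_encoder_score(H, codes, a) for a in cands]
best = int(np.argmax(scores))       # predicted candidate index
```

In practice the candidate embeddings would be precomputed, which is what makes the poly-encoder nearly as fast as a bi-encoder at inference.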
When the grounding scores are produced by the candidate scoring module, the label with the highest score is selected as the predicted knowledge. The knowledge loss L_KG for the knowledge grounding task is computed with cross-entropy loss (Brier et al., 1950) as in Equation 3:

    L_KG = -\sum_j kl_j \cdot \log \hat{kl}_j,    (3)

where kl_j is the ground-truth label from the knowledge candidates of the j-th example.

3.2.3 Persona Selector (PS)

We also implement a persona selector for the persona grounding task. Since multiple personas can be chosen to generate the responses, one or more persona sentences must be considered. Similar to the knowledge selector, we assign a grounding score to each persona candidate with the candidate scoring module as in Equations 1 and 2.
When the scores of each candidate are computed by the candidate scoring module, the persona level indicator classifies how many personas should be selected using the [CLS] token of the model input U. After predicting the level of persona engagement, we pick the persona sentences to be grounded according to the predicted number. For example, if the persona level indicator predicts 2, the top-2 persona sentences are chosen in the persona grounding task. The selected persona sentence(s) are marked as 1; otherwise, 0. We use binary cross-entropy loss for persona grounding as in Equation 4:

    L_PG = -\sum_j [ pl_j \cdot \log \hat{pl}_j + (1 - pl_j) \cdot \log(1 - \hat{pl}_j) ]    (4)

Note that pl_j is the ground-truth label from the persona candidates of the j-th example.
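The top-k selection driven by the persona level indicator, together with the binary cross-entropy of Equation 4, might look as follows. This is a toy sketch; `select_personas` and `persona_bce` are hypothetical names and the scores are made-up values.

```python
import numpy as np

def select_personas(scores, level):
    """Mark the top-`level` persona candidates as grounded (1), others 0.

    scores: grounding scores for the persona candidates
    level:  number of personas predicted by the persona level indicator
    """
    mask = np.zeros(len(scores), dtype=int)
    if level > 0:
        top = np.argsort(scores)[::-1][:level]   # indices of the highest scores
        mask[top] = 1
    return mask

def persona_bce(pred_probs, labels):
    """Binary cross-entropy of Eq. (4), summed over the candidates."""
    p = np.clip(np.asarray(pred_probs, dtype=float), 1e-7, 1 - 1e-7)
    y = np.asarray(labels, dtype=float)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).sum())

mask = select_personas([0.1, 0.9, 0.4, 0.7, 0.2], level=2)  # -> [0, 1, 0, 1, 0]
```

A predicted level of 0 yields an all-zero mask, i.e., a response that grounds no persona at all.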
3.2.4 Query-Enhanced Generator

Following the work of Lewis et al. (2020b), we exploit retrieval-augmented generation's capability to reduce hallucination and access memory directly. For a consistent way of training while solving the grounding and generation tasks, we reconstruct the query that is fed into the retriever. When the knowledge and persona are predicted by each selector, we aggregate them with the dialogue history into one sequence. The final query is denoted as KPEQ = {U; \hat{P}; \hat{K}} and defined as a knowledge-persona enhanced query, where \hat{P} and \hat{K} are the predicted persona and knowledge from each candidate set, respectively. The retriever r_\eta aims to search the top-K latent paragraphs with the KPEQ.
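With a bi-encoder retriever, top-K search reduces to ranking documents by the dot product d(z)^T q(KPEQ) between document and query embeddings; the exponential in the retriever's scoring is monotonic, so it does not change the ranking. A minimal sketch with a toy one-hot index in place of real DPR embeddings (`retrieve_top_k` is an illustrative name):

```python
import numpy as np

def retrieve_top_k(doc_embs, query_emb, k=2):
    """Rank documents by the dot product between d(z) and q(KPEQ).

    doc_embs:  (N, d) document-encoder embeddings d(z)
    query_emb: (d,)   query-encoder embedding q(KPEQ)
    Returns the indices of the top-k documents and their scores.
    """
    scores = doc_embs @ query_emb
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]

# toy index: 4 one-hot "documents"; the query points mostly at document 1
docs = np.eye(4)
query = np.array([0.0, 1.0, 0.2, 0.0])
top, top_scores = retrieve_top_k(docs, query, k=2)  # top -> [1, 2]
```

Real systems replace the exhaustive dot product with an approximate maximum inner product search index over the document embeddings.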
We utilize a pre-trained dense passage retriever (DPR) (Karpukhin et al., 2020) trained on the Natural Questions dataset (Kwiatkowski et al., 2019), which has parametric memory and a bi-encoder architecture to retrieve latent document embeddings, following Lewis et al. (2020b):

    r_\eta(z | KPEQ) \propto \exp(d(z)^\top q(KPEQ)),    (5)

where d(·) is an embedding from the document encoder and q(·) is a representation from the query encoder, both implemented with BERT_base, and z denotes a document from the index. With the relevant paragraphs from the retriever, we employ the RAG-Token architecture as the generator to borrow its strength of predicting each target token based on top-K different paragraphs. Since RAG-Sequence, whose architecture differs from RAG-Token, uses the same document from the retriever to predict every token as depicted in Equation 6, the result may come to depend on a single retrieved document (Lewis et al., 2020a).
The two versions of RAG (Lewis et al., 2020b) are as follows:

    S_RS(y|x) \approx \sum_{z \in top-k(p(\cdot|x))} r_\eta(z|x) \prod_i^N g_\theta(y_i | x, z, y_{1:i-1})    (6)

    S_RT(y|x) \approx \prod_i^N \sum_{z \in top-k(p(\cdot|x))} r_\eta(z|x) g_\theta(y_i | x, z, y_{1:i-1}),    (7)

where S_RS indicates our method with the RAG-Sequence architecture and S_RT denotes ours with the RAG-Token model. x is a token of the KPEQ and y_i is a single token from the ground-truth response. Also, z is a retrieved paragraph from the retriever and N is the maximum sequence length. The S_RT generator g(·) marginalizes the loss over different paragraphs when generating answers. In detail, the generator outputs a distribution over the next token for each document before marginalizing as in Equation 7, where η denotes the parameters of the retriever and θ the parameters of the generator. After that, the generator repeats the process for the following output token.
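One decoding step of the RAG-Token marginalization in Equation 7 amounts to mixing the per-document next-token distributions with the retriever probabilities. A minimal sketch with made-up numbers (k = 2 documents, vocabulary of 2 tokens):

```python
import numpy as np

def rag_token_step(doc_probs, token_dists):
    """One decoding step of Eq. (7): marginalize the next-token
    distribution over the top-k retrieved documents.

    doc_probs:   (k,)   retriever probabilities r_eta(z|x)
    token_dists: (k, V) generator distributions g_theta(y_i | x, z, y_<i)
    """
    return doc_probs @ token_dists   # (V,) mixture distribution

doc_probs = np.array([0.7, 0.3])
token_dists = np.array([[0.6, 0.4],
                        [0.1, 0.9]])
mix = rag_token_step(doc_probs, token_dists)  # -> [0.45, 0.55]
```

RAG-Sequence (Equation 6) would instead score whole sequences per document and marginalize once at the sequence level, so every token of a hypothesis is conditioned on the same document.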
Finally, S_RT generates the next token in an auto-regressive manner with a standard beam search. In other words, the model minimizes the negative marginal log-likelihood for each input/output pair (KPEQ_j, y_j). The language model loss is formulated as:

    L_S = -\sum_j \log p(y_j | KPEQ_j)    (8)

3.3 Final Objectives

We then train the full model in a multi-task manner. The full objective of the model is given in Equation 9:

    L = \lambda_KG L_KG + \lambda_PG L_PG + \lambda_S L_S    (9)

Models       | Generation                                    | Grounding (Acc.)
             | chrF++  BLEU   R-1    R-2    R-L    BERTScore | Persona  Knowledge
GPT2small    | 28.73   11.43  36.58  19.44  32.62  88.56     | 67.44    69.59
GPT2medium   | 30.12   12.31  38.29  21.17  34.12  88.92     | 67.44    72.42
BARTbase     | 29.77   11.99  36.24  19.73  32.13  88.35     | 67.45    72.18
BARTlarge    | 30.69   11.91  36.57  19.83  32.05  88.10     | 67.44    71.01
INFO (SRS)   | 51.33   29.36  53.36  40.36  51.16  92.00     | 82.70    99.24
INFO (SRT)   | 53.29   31.46  58.26  42.35  53.06  92.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='29 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='87 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='22 Table 2: Main results on the official validation set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' SRS denotes our method with RAG-Sequence architecture and SRT indicates the model with RAG-Token model as generator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' The models are evaluated by generation metrics, including chrF++, BLEU, ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L), and BERTScore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' We control the proportion of each task and we set λKG, λPG, and λS as 1:1:5 for the experiments, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' We find the value of each λ with manual search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 4 Experiments 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='1 Experiment Details Dataset FoCus (Jang et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=', 2022) is the dataset for customized dialogue benchmark, where each conversation is directly grounded with knowledge and persona.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' The dataset includes knowledge- aware dialogue with personal profiles between humans and machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' There are 12,484 dialogues about 5,152 knowledge sources from Wikipedia and 32,855 persona sentences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' To validate the knowledge grounding capability and customized dialogue generation, we evaluate our method with the official FoCus validation set for the effectiveness of experiments since the result from the official test set can be tested only through the leaderboard*.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Experimental Setup For each candidate scoring module, we implement poly-encoder (Humeau et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=', 2019) with BERTlarge, and the number of context codes is 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' For the dialogue generation, we implement our method with Hugging Face (Wolf et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=', 2020) and use facebook/rag-token-nq as the backbone model.' 
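The poly-encoder scoring mentioned above can be sketched in a few lines. This is a minimal NumPy illustration of the mechanism (learned context codes attend over context tokens, the candidate attends over the resulting global vectors, and the score is a final dot product); the dimensions and random vectors are placeholder assumptions, not the trained BERTlarge encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_score(ctx_tokens, cand_vec, codes):
    """Score one candidate against a context, poly-encoder style.

    ctx_tokens: (T, d) token embeddings of the context
    cand_vec:   (d,)   pooled candidate embedding
    codes:      (m, d) learned context codes (m = 16 in our setup)
    """
    # 1) Each code attends over the context tokens -> m global context vectors.
    attn = softmax(codes @ ctx_tokens.T)      # (m, T)
    global_ctx = attn @ ctx_tokens            # (m, d)
    # 2) The candidate attends over the m global vectors -> one context vector.
    w = softmax(global_ctx @ cand_vec)        # (m,)
    ctx_vec = w @ global_ctx                  # (d,)
    # 3) The final score is a dot product, as in a bi-encoder.
    return float(ctx_vec @ cand_vec)

rng = np.random.default_rng(0)
d, T, m = 8, 5, 16
ctx = rng.normal(size=(T, d))                 # placeholder context embeddings
codes = rng.normal(size=(m, d))               # placeholder context codes
candidates = rng.normal(size=(3, d))          # three candidate embeddings
scores = [poly_encoder_score(ctx, c, codes) for c in candidates]
best = int(np.argmax(scores))                 # index of the top-scoring candidate
```

The candidate embeddings here can be precomputed, which is what makes the poly-encoder nearly as fast as a bi-encoder while retaining some context-candidate interaction.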
We use the same retriever and generator architecture as RAG, along with its decoding, and leverage our knowledge index for non-parametric query-document ranking with the FAISS library (Johnson et al., 2019). The knowledge index consists of the paragraphs from the given Wikipedia knowledge, entitled with the name of the given landmark. We set the learning rate to 6.25e-6 with AdamW (Kingma and Ba, 2014) for optimization. The batch size is set to 32, and the number of dialogue history turns is 1. The whole model was trained for three epochs on an RTX A6000 GPU and took 8 hours per epoch.

* https://codalab.lisn.upsaclay.fr/competitions/3754
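The non-parametric query-document ranking over the knowledge index boils down to maximum inner-product search. Below is a small sketch of what a flat inner-product index (e.g. FAISS's IndexFlatIP) computes, with random placeholder embeddings in place of real paragraph vectors:

```python
import numpy as np

def top_k_inner_product(query, index, k=5):
    """Return indices and scores of the k most similar paragraphs.

    query: (d,) query embedding; index: (N, d) paragraph embeddings.
    Equivalent in spirit to an exact inner-product search in FAISS.
    """
    scores = index @ query             # (N,) inner products
    top = np.argsort(-scores)[:k]      # highest scores first
    return top, scores[top]

rng = np.random.default_rng(1)
index = rng.normal(size=(100, 16))     # 100 fake paragraph embeddings
index[42] = np.full(16, 2.0)           # plant an easy-to-find target paragraph
query = np.ones(16)                    # query aligned with paragraph 42
ids, scores = top_k_inner_product(query, index, k=5)
# ids[0] is the planted paragraph, since its inner product dominates
```

FAISS performs the same computation with optimized (and optionally approximate) index structures, which is what makes it practical over a full Wikipedia-scale paragraph set.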
Baselines. We implement the baselines from the previous study (Jang et al., 2022) and conduct experiments with GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020a) as well. For a fair comparison, we report results on GPT-2small, which has 12 decoder layers, and BARTbase, which has 6 encoder and 6 decoder layers. Also, GPT-2medium contains 24 decoder layers, and BARTlarge possesses 12 layers for each of the encoder and decoder.

4.2 Automatic Evaluation

We present the main results on the FoCus dataset with automatic metrics for the grounding and generation tasks. The official metrics of the benchmark are chrF++ (Popović, 2017), BLEU (Papineni et al., 2002), ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004). To account for the semantic similarity of each token between candidate and reference sentences using contextual representations, we additionally adopt BERTScore (Zhang et al., 2020). For the grounding task, we use accuracy for both knowledge and persona grounding, and the F1 score for persona grounding. Table 2 shows that our method achieves substantial improvements over the baselines in every metric, from generation to grounding. In particular, INFO improves by at least 18% on the generation metrics, except for BERTScore. Furthermore, our model achieves remarkable gains in persona and knowledge accuracy. Unlike its performance on the other generation metrics, SRS demonstrates better persona accuracy than SRT.
This result might be attributed to the architecture of the generator, which is more applicable to sentence classification tasks such as persona grounding. The official test result is also reported in Appendix A, but BERTScore is missing there due to the unreleased ground truth.

SRT variant     chrF++  BLEU   R-1    R-2    R-L    BERTScore  Persona (Acc.)  Persona (F1)  Knowledge (Acc.)
Bi-encoder      51.83   29.51  56.35  40.80  51.37  91.86      88.10           38.20         99.18
Cross-encoder   49.90   27.18  53.57  38.25  49.29  91.52      87.09           35.32         99.49
Poly-encoder    53.29   31.46  58.26  42.35  53.06  92.29      80.87           39.56         99.22

Table 3: Performance comparison between the encoding modules for the generation and grounding tasks.

4.3 Human Evaluation

We conduct a human evaluation to validate the responses from our model through the Amazon MTurk service†. The assessment criteria are fluency, adequacy, provenance, engagingness, and hallucination. Specifically, provenance is the degree to which the ground-truth knowledge is utilized in the responses, whereas engagingness means how persona-related the answers are.
Also, hallucination indicates whether the answer contradicts the persona and knowledge or cannot be verified from the source content. We randomly chose 50 dialogues from the official test set, and three workers were allocated to evaluate each dialogue generated by our model and the baselines. We asked the workers to rank the answers according to each criterion, following Cho and May (2020). Ranks are scaled from 1 to 5, and a lower number maps to better quality, except for hallucination. Agreement between the annotators, calculated with Fleiss' kappa, is 0.4185, indicating fair agreement. Relations between the annotators hardly exist, since we collected the results from Amazon MTurk workers. As shown in Table 4, INFO surpasses BARTbase, BARTlarge, GPT-2small, and GPT-2medium in all of the criteria.
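Fleiss' kappa for this kind of multi-rater annotation can be computed from a per-item category-count matrix. Below is a sketch with hypothetical counts (not our actual annotations), assuming each item is rated by the same number of raters, each assigning one category:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. counts: (N items, k categories) rating counts,
    where each row sums to the number of raters n."""
    counts = np.asarray(counts, dtype=float)
    n = counts[0].sum()                                 # raters per item
    p_cat = counts.sum(axis=0) / counts.sum()           # category proportions
    # Per-item observed agreement, averaged over items.
    p_item = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_item.mean()
    p_e = np.square(p_cat).sum()                        # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement on two items rated by 3 raters -> kappa = 1.0
perfect = [[3, 0], [0, 3]]
# Mixed agreement -> a lower kappa value
partial = [[2, 1], [1, 2], [3, 0]]
k1 = fleiss_kappa(perfect)
k2 = fleiss_kappa(partial)
```

The 0.4185 reported above falls in the conventional "fair to moderate" band of the kappa scale.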
INFO achieves the highest rank in adequacy, fluency, and provenance, and generates more human-like responses than the other generative models. Also, the workers ranked our model lowest when asked to order the responses from most to least hallucinated. Thus, INFO generates more engaging and less hallucinated utterances from the human point of view. The distribution of ranks per criterion is illustrated in Appendix B.

† https://www.mturk.com/

Models        Ad. ↓  Fl. ↓  Prov. ↓  Eng. ↓  Hall. ↑
GPT-2small    3.57   3.41   3.58     3.46    2.49
GPT-2medium   3.11   3.10   3.04     3.25    3.02
BARTbase      3.43   3.29   3.47     3.22    2.45
BARTlarge     3.31   3.63   3.29     3.44    2.69
INFO (Ours)   1.57   1.57   1.62     1.63    4.35

Table 4: Human evaluation. The value in the table is the average rank of each model's responses. The abbreviations Ad., Fl., Prov., Eng., and Hall. denote adequacy, fluency, provenance, engagingness, and hallucination, respectively.

5 Results and Analysis

5.1 Variants of the Candidate Scoring Module

To validate the poly-encoder as the candidate scoring module, we apply diverse candidate scoring modules, including the bi-encoder and the cross-encoder. From the results in Table 3, we find that the poly-encoder outperforms the others on the generation task. In the grounding task, SRT with cross-encoder scoring shows improved accuracy on grounding persona and knowledge.
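The structural difference between these scoring modules can be sketched as follows. The toy encode function is a stand-in for BERT, so the numbers are meaningless, but the call patterns show why the bi-encoder can precompute and cache candidate vectors while the cross-encoder must jointly re-encode every (context, candidate) pair:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(32, 8))          # placeholder embedding table (not BERT)

def encode(token_ids):
    """Toy encoder: mean of per-token embedding rows (stand-in for BERT)."""
    return W[np.asarray(token_ids)].mean(axis=0)

def bi_encoder_scores(ctx_ids, cand_ids_list):
    # Candidates are encoded independently of the context, so their
    # vectors can be precomputed once and cached for the whole corpus.
    cand_vecs = np.stack([encode(c) for c in cand_ids_list])
    return cand_vecs @ encode(ctx_ids)

def cross_encoder_scores(ctx_ids, cand_ids_list):
    # Each pair is concatenated and encoded jointly: richer interaction,
    # but no caching -- the cost grows with (contexts x candidates).
    return np.array([encode(list(ctx_ids) + list(c)).sum()
                     for c in cand_ids_list])

ctx = [1, 5, 9]
cands = [[2, 3], [7, 8], [4, 6]]
bi = bi_encoder_scores(ctx, cands)        # one context pass, cached candidates
cross = cross_encoder_scores(ctx, cands)  # one joint pass per pair
```

The poly-encoder sits between the two: it keeps precomputable candidate vectors like the bi-encoder but adds a light attention step over learned context codes for interaction.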
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' The result seems to be SRT with bi-encoder and cross-encoder are better than that with poly-encoder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' However, the F1 score of INFO is higher than the two candidate scoring modules implying that low accuracy in persona is due to the tendency of active use on the persona in poly-encoder while the other two models opt to predict not to use persona sentence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' The results suggest that the high accuracy of persona not always guarantees the engagingness in the dialogue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='2 Comparison on other Retrievers We show that INFO is effective in retrieving knowledge compared to other sparse and dense retrievers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' We retrieve the knowledge from our knowledge index built with Wikipedia paragraphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' We utilize TF-IDF (Joachims, 1996), and deep passage retrieval (DPR) (Karpukhin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=', 2020).' 
In the case of TF-IDF, we cap the sum of query and knowledge tokens at 512, which is the maximum sequence length of DPR and INFO, and use bert-base-uncased as the tokenizer. For DPR, we extract fewer than 40 knowledge paragraphs using TF-IDF due to memory limitations. We first retrieve the five paragraphs related to the query, which comprises the knowledge snippet, dialogue history, predicted knowledge candidate, and selected persona sentences. As Table 5 shows, our retriever outperforms TF-IDF and DPR on all metrics, including BERTScore. The results imply that INFO's retriever is better suited to extracting relevant paragraphs than the other retrievers.

Model    chrF++  BLEU  R-1    R-2    R-L    BERTScore
TF-IDF   19.91   3.52  13.91  9.96   12.43  51.54
DPR      20.57   3.86  12.44  6.55   10.20  47.48
INFO     26.36   7.40  15.48  12.18  14.32  53.14

Table 5: Comparison with other retrievers.

5.3 Effect of Selectors on Generation

We measure each selector module's effect on the generation task by changing the query fed into the retriever on a validation set. The experimental results are shown in Table 6, where GTK and GTP represent ground-truth knowledge and persona. Although the query that comprises the ground-truth sources shows the highest scores, INFO demonstrates comparable results on the generation task. Since the performance gain from INFO + GTP is about 2.8%p larger than that from INFO + GTK, we can identify that our persona selector still has room to reach its maximum level.

Query      chrF++  BLEU   R-1    R-2    R-L    BERTScore
INFO (RT)  53.29   31.46  58.26  42.35  53.06  92.29
+GTK       53.35   31.56  58.31  42.55  53.18  92.29
+GTP       56.19   34.39  61.61  45.46  56.01  92.79
+GTK+GTP   56.40   34.60  61.88  45.64  56.16  92.84

Table 6: Comparison of generation performance across query variants with ground-truth knowledge and persona. Note that all performance is evaluated on the official validation set.

5.4 Qualitative Analysis

Table 7 illustrates an example from the predicted results.
Given Landmark: Finding Nemo Submarine Voyage

Dialogue
Human: What area of the park is this ride in?
Machine: This ride is located in the Tomorrowland area of Disneyland.
Human: Has this ride always been about Finding Nemo?

Human's Persona
I've never been to California.
My favorite cartoon is Finding Nemo.
I would like to visit Disneyland.
My favorite color is yellow.
I enjoy swimming.

Ground Truth Knowledge (Grounding)
Based on the characters and settings of the 2003 Disney·Pixar film Finding Nemo, it is a re-theming of the classic Submarine Voyage attraction that operated from 1959 to 1998.

Retrieved Knowledge (Generation)
The original Submarine Voyage was built in 1959 as part of the then new Tomorrowland... In 2008, Finding Nemo Submarine Voyage received an award for outstanding achievement from the Themed Entertainment Association. Finding Nemo Submarine Voyage is ... which opened on June 11, 2007. ... it is a re-theming of the classic Submarine Voyage attraction that operated from 1959 to 1998. On July 15, 2005, two days before the 50th Anniversary of Disneyland, the Finding Nemo Submarine Voyage was officially announced at the new Turtle Talk... The attraction reuses the eight original 1959 Submarine Voyage through Liquid Space attraction...

Predicted Answers
BARTbase: Yes, that's right. You're a fan of the "Fantasy" film, so I.
BARTlarge: Yes, the ride is based on the characters and settings of the 2003 Disney·Pixar film
GPT-2small: No, it was originally a way to show that you love Finding Nemo.
GPT-2medium: Yes, it has operated from 1959 to 1998.
INFO (Ours): No, this attraction is actually a re-theme of the classic submarine voyage attraction that operated from 1959 to 1998. The attraction is based on the characters and settings of the 2003 Disney Pixar film Finding Nemo, which is your favorite cartoon.

Ground Truth Response
No, your favorite cartoon is a new addition to this ride. The current Finding Nemo ride is a re-theming of the classic "Submarine Voyage" attraction that operated here from 1959 to 1998.

Table 7: Qualitative result. All the predicted results in the grounding task are from our model, INFO, and it predicts the correct answers in both tasks. We add the other baselines' responses for comparative analysis.

In the case of BARTlarge and GPT-2medium, the responses reflect only the ground-truth knowledge, resulting in less engaging answers without any persona-related phrases. Although BARTbase seems to employ a persona sentence in the phrase "You're a fan of the 'Fantasy' film", the sentence it uses does not appear in the human's personal profile. This result also indicates that it is hard to identify the utterance's provenance in the knowledge source.
Moreover, GPT-2small generates an utterance that contradicts the ground-truth knowledge. From these results, we find that the generated responses from the baselines show hallucination with respect to both persona and knowledge. Unlike the baselines, our model blends the ground-truth knowledge and persona sentence into the response with less hallucination and more engagingness. In addition, the retrieved knowledge source that our model refers to provides users with interpretability and the provenance of the responses. More examples are depicted in Appendix C.

6 Conclusions

In this paper, we presented a conversational agent that generates responses grounded in the user's persona and external knowledge. We utilized poly-encoder-based candidate scoring for each grounding task, and additionally implemented a persona-level indicator to consider multiple persona selections for delicate persona grounding.
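The poly-encoder scoring referenced here can be sketched as follows. This is a simplified numpy illustration with toy embeddings; the actual model uses trained transformer encoders and learned codes, which this sketch does not reproduce.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_scores(ctx_tokens, codes, cand_embs):
    """Score candidates against a dialogue context, poly-encoder style.

    ctx_tokens: (T, d) context token embeddings
    codes:      (m, d) learned query codes
    cand_embs:  (C, d) candidate (knowledge/persona) embeddings
    Returns a (C,) array of dot-product scores.
    """
    # Each of the m codes attends over the context tokens ...
    attn = softmax(codes @ ctx_tokens.T, axis=-1)   # (m, T)
    views = attn @ ctx_tokens                       # (m, d) pooled context views
    scores = np.empty(len(cand_embs))
    for i, cand in enumerate(cand_embs):
        # ... then each candidate attends over the m views before scoring.
        w = softmax(views @ cand)                   # (m,)
        ctx_vec = w @ views                         # (d,)
        scores[i] = ctx_vec @ cand
    return scores
```

Grounding then amounts to ranking the knowledge and persona candidates by these scores, which is cheaper than a cross-encoder (candidates are encoded independently) while remaining more expressive than a plain bi-encoder.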
With the predicted sources, we construct a knowledge-persona enhanced query to retrieve latent paragraphs, which are used to generate informative and engaging responses by marginalizing the loss for each token. We show that our method achieves state-of-the-art (SoTA) scores on both the grounding and generation tasks of the persona-knowledge conversation dataset. We also demonstrate, through human evaluation and qualitative analysis, that the responses from INFO show less hallucination and more engagingness, and we compare grounding modules and retrievers to show INFO's effectiveness.

7 Limitations

The proposed model INFO has limitations. Given INFO's settings, the model cannot deal with real-world applications in which ground-truth knowledge or persona candidates are absent from the grounding task. We also conducted a human evaluation to assess the proposed model's capability of mitigating hallucination in dialogue generation.
However, the number of cases is relatively small for evaluating this capability. Finally, INFO demands substantial GPU computation resources, since it marginalizes the loss at the token level.

We plan to improve INFO in future work. We will train and evaluate INFO in open-domain as well as real-world settings to build applicable conversational agents. Moreover, we will conduct human evaluations with more cases; in particular, we will enhance the quantitative measurement of the model's hallucinated answers. Last but not least, we will improve INFO's generator with more computationally efficient components.

8 Acknowledgement

This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.
2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques). This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01405) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). This work was also supported by an Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00369, (Part 4) Development of AI Technology to support Expert Decision-making that can Explain the Reasons/Grounds for Judgment Results based on Expert Knowledge).

References

Glenn W. Brier et al. 1950. Verification of forecasts expressed in terms of probability.
Monthly Weather Review, 78(1):1–3.

Hyundong Cho and Jonathan May. 2020. Grounding conversations with improvised dialogues. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2398–2413, Online. Association for Computational Linguistics.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of Wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.

Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. ACL 2018, page 2.

Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.

Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In International Conference on Learning Representations.

Yoonna Jang, Jungwoo Lim, Yuna Hur, Dongsuk Oh, Suhyune Son, Yeonsoo Lee, Donghoon Shin, Seungryong Kim, and Heuiseok Lim. 2022. Call for customized conversation: Customized conversation grounding persona and knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10803–10812.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. arXiv preprint arXiv:2202.03629.

Thorsten Joachims. 1996. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. Technical report, Carnegie Mellon University, Pittsburgh, PA, Dept. of Computer Science.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B. Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, and Dongmei Zhang. 2020.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' You impress me: Dialogue generation via mutual persona perception.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Ryan Lowe, Nissan Pow, Iulian Vlad Serban, and Joelle Pineau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' The ubuntu dialogue corpus: A large dataset for research in unstructured multi- turn dialogue systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Gary Marcus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' The next decade in ai: four steps towards robust artificial intelligence.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' arXiv preprint arXiv:2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='06177.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Training millions of personalized dialogue agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Bleu: a method for automatic evaluation of machine translation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' pages 311–318.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Ashwin Paranjape, Omar Khattab, Christopher Potts, Matei Zaharia, and Christopher D Manning.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Hindsight: Posterior-guided training of retrievers for improved open-ended generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In International Conference on Learning Representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' How context affects language models’ factual predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Automated Knowledge Base Construction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Maja Popovi´c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' chrF++: words helping character n-grams.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the Second Conference on Machine Translation, pages 612–618, Copenhagen, Denmark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Language models are unsupervised multitask learners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' OpenAI blog, 1(8):9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Exploring the limits of transfer learning with a unified text-to-text transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Journal of Machine Learning Research, 21:1–67.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Adam Roberts, Colin Raffel, and Noam Shazeer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' How much knowledge can you pack into the parameters of a language model?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' What makes a good conversation?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' how controllable attributes affect human judgments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702–1723.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Retrieval augmentation reduces hallucination in conversation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784–3803.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and William B Dolan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' A neural network approach to context-sensitive generation of conversational responses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Oriol Vinyals and Quoc V Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' A neural conversational model.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' arXiv preprint arXiv:1506.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='05869.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Rush.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Transformers: State-of-the-art natural language processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Transfertransfo: A transfer learning approach for neural network based conversational agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' arXiv preprint arXiv:1901.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content='08149.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W Bruce Croft, Jun Huang, and Haiqing Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Response ranking with deep matching networks and external knowledge in information-seeking conversation systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In The 41st international acm sigir conference on research & development in information retrieval, pages 245– 254.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2018a.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Personalizing dialogue agents: I have a dog, do you have pets too?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Weinberger, and Yoav Artzi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Bertscore: Evaluating text generation with bert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' In International Conference on Learning Representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' 2018b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dE0T4oBgHgl3EQffQBI/content/2301.02401v1.pdf'} +page_content=' Modeling multi- turn conversation with deep utterance aggregation.' 
A Automatic Evaluation on Official Test Set

                        Generation                          Grounding (Acc.)
Models          chrF++   BLEU    R-1     R-2     R-L      Persona   Knowledge
GPT-2-small      28.83   11.60   36.28   19.56   32.42     67.83      70.95
GPT-2-medium     30.34   12.58   38.35   21.16   34.34     67.64      72.46
BART-base        29.80   12.15   36.26   19.73   32.06     67.66      72.02
BART-large       30.63   11.86   36.36   19.42   31.73     67.62      70.53
INFO (RS)        52.81   29.41   56.37   40.41   51.16     82.74      98.88
INFO (RT)        54.61   32.33   58.27   42.39   53.09     80.83      99.10

Table 8: Main results on the official test set. RT indicates the model with RAG-Token as the generator. The models are evaluated with generation metrics, including chrF++, BLEU, ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L). Accuracy on the persona grounding and knowledge grounding tasks is also reported. Since BERTScore is not an official generation metric, we cannot evaluate results on it, as the ground truth of the test set has not yet been disclosed.

B Human Evaluation Distribution on Each Criterion

[Figure 2: bar charts omitted; panels (a) Adequacy and (b) Fluency]
Figure 2: The distribution of ranks on the adequacy and fluency criteria.
Guides A to E indicate INFO, BART-base, BART-large, GPT-2-small, and GPT-2-medium, in that order.

[Figure 3: bar charts omitted; panels (a) Provenance and (b) Engagingness; x-axis: Rank 1-5, y-axis: # of evaluations]
Figure 3: The distribution of ranks on the provenance and engagingness criteria. Guides A to E indicate INFO, BART-base, BART-large, GPT-2-small, and GPT-2-medium, in that order.

[Figure 4: bar chart omitted]
Figure 4: The distribution of ranks on the less-hallucination criterion. Note that the highest rank (1) means the most hallucinated. Guides A to E indicate INFO, BART-base, BART-large, GPT-2-small, and GPT-2-medium, in that order.
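As a supplement to the automatic metrics reported in Table 8, the LCS-based ROUGE-L score can be sketched in a few lines. This is an illustrative pure-Python re-implementation with whitespace tokenization, not the official scorer used to produce the reported numbers.

```python
# Illustrative ROUGE-L (F1 over the longest common subsequence).
# Not the official ROUGE implementation; tokenization here is a
# simple lowercase whitespace split for clarity.

def _lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(hypothesis: str, reference: str) -> float:
    """ROUGE-L F1 between a hypothesis and a single reference."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    lcs = _lcs_len(hyp, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hyp), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_l("the cat", "the cat sat")` gives precision 1.0 and recall 2/3, hence F1 = 0.8.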
C Qualitative Results

Given Landmark: Nocton Hall

Dialogue
Human: I know this place, but I don't remember the name of this place.

Human's Persona
I have a hall in my house.
I have worked with military hospital.
I would like to go to England.
I have heard about Frederick John Robinson.
I have respect to American Officers.

Ground Truth Knowledge (Grounding)
Today the site is maintained by English Heritage.

Retrieved Knowledge (Generation)
"Hogesdon" is first recorded in the Domesday Book, meaning an Anglo-Saxon farm (or "fortified enclosure") belonging to Hoch, or Hocq.
... By 1677, the church was in such poor structural condition that the Dean of Chichester convened a consistory court with three parishioners and asked them why repairs had not been carried out. ... The south transept had historically served as the mausoleum for the Cavendish family, who resided at nearby Latimer. ... Following the diminished influence of the ancient Priory, predecessor of the two parishes of St Bartholomew, disputes began to arise over rights to tithes and taxes payable by lay residents who claimed allegiance with the nearby and anciently associated parish of St Botolph Aldersgate - an unintended consequence and legacy of King Henry VIII’s religious reforms. ... a wall-mounted stone pulpit, a stone altar, a series of tall, pointed windows high in the walls, an unadorned stone font and a short wooden steeple (little more than an extended belfry) extending from the nave roof. ...
Predicted Answers:
BARTbase: This is the Nocton Hall, a historic Grade II listed building in the village of Nocton,
BARTlarge: Nocton Hall is a historic Grade II listed building in the village of Nocton, in Lincoln
GPT-2small: This is Nocton Hall, a historic Grade II listed building in the village of Nocton, in
GPT-2medium: This is the Nocton Hall, a historic Grade II listed building in the village of Nocton,
INFO (Ours): This will be a nice vision for you. Nocton Hall is a historic Grade II listed building in England that you would like to visit.
Ground Truth Response: You can see Nocton Hall in the village of Nocton, in Lincolnshire of England, the country you want to visit.

Given Landmark: Maiden Castle, Dorset
Dialogue:
Human: Wow, this is amazing! What is this?
Machine: It is Maiden Castle in Dorset. I thought you would like it since you are interested in historic forts.
Human: Who owns the site today?
Human’s Persona:
I like Britain.
I have been to Dorset.
I am interested in historic forts.
I hope to work for English Heritage.
I would like to visit an old fort.
Ground Truth Knowledge (Grounding): Today the site is protected as a Scheduled Ancient Monument and is maintained by English Heritage.
Retrieved Knowledge (Generation): Portland Castle is an artillery fort constructed by Henry VIII on the Isle of Portland, Dorset, between 1539 and 1541. ...
... this version of events, or even that the hill fort was attacked by the Romans. ... Between 1985 and 1986 further excavations under Niall Sharples were prompted by the hill fort’s deteriorating condition, partly caused by the large number of visitors to the site. ... a Tudor rose and the initials E.R. (Elizabeth Regina), has been preserved and can be seen in the inner bailey of the castle mounted on a replica carriage. ... Constructed on a territorial boundary in about 600 BC, the first hill fort at Maiden Castle was a 6.4-hectare (16-acre) area surrounded by a single ditch. ...
Predicted Answers:
BARTbase: The site is maintained by English Heritage, the country you are from.
BARTlarge: Today the site is owned by English Heritage. ...
GPT-2small: Today the site is protected as a Scheduled Ancient Monument and is maintained by English Heritage.
GPT-2medium: Today the site is maintained by English Heritage.
INFO (Ours): Today the site is owned by English Heritage. You may wish to research this further since you hope to work for English Heritage.
Ground Truth Response: It is owned by English Heritage; a company you hope to work for.

Table 9: Qualitative results. All the predicted results in the grounding task are from our model, INFO, which predicts the correct answers in both tasks. We add other baselines’ responses for comparative analysis.
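The examples in Table 9 pair a dialogue turn with persona sentences and retrieved knowledge before generation. A minimal sketch of how such a knowledge-persona enhanced query might be assembled is shown below; the separator tokens, function name, and field ordering are illustrative assumptions, not the authors' actual input format.

```python
def build_generation_query(dialogue, persona_sentences, retrieved_knowledge):
    """Concatenate persona, retrieved knowledge, and dialogue history into a
    single conditioning string for a seq2seq generator (hypothetical format)."""
    persona = " ".join(persona_sentences)
    return f"<persona> {persona} <knowledge> {retrieved_knowledge} <dialogue> {dialogue}"

# Hypothetical usage with fields drawn from the second Table 9 example.
query = build_generation_query(
    "Human: Who owns the site today?",
    ["I like Britain.", "I hope to work for English Heritage."],
    "Today the site is protected as a Scheduled Ancient Monument "
    "and is maintained by English Heritage.",
)
```

In practice the selected persona and knowledge candidates would come from the model's candidate scoring step rather than being passed in directly.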