diff --git "a/AtFQT4oBgHgl3EQfMjZB/content/tmp_files/load_file.txt" "b/AtFQT4oBgHgl3EQfMjZB/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/AtFQT4oBgHgl3EQfMjZB/content/tmp_files/load_file.txt" @@ -0,0 +1,612 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf,len=611 +page_content='Contextual Dynamic Prompting for Response Generation in Task-oriented Dialog Systems Sandesh Swamy AWS AI Labs sanswamy@amazon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content='com Narges Tabari AWS AI Labs nargesam@amazon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content='com Chacha Chen∗ University of Chicago chacha@uchicago.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content='edu Rashmi Gangadharaiah AWS AI Labs rgangad@amazon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content='com Abstract Response generation is one of the critical com- ponents in task-oriented dialog systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=' Exist- ing studies have shown that large pre-trained language models can be adapted to this task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=' The typical paradigm of adapting such ex- tremely large language models would be by fine-tuning on the downstream tasks which is not only time-consuming but also involves sig- nificant resources and access to fine-tuning data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=' Prompting (Schick and Schütze, 2020) has been an alternative to fine-tuning in many NLP tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=' In our work, we explore the idea of using prompting for response generation in task-oriented dialog systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=' Specifically, we propose an approach that performs con- textual dynamic prompting where the prompts are learnt from dialog contexts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=' We aim to distill useful prompting signals from the dia- log context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=' On experiments with MultiWOZ 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content='2 dataset (Zang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtFQT4oBgHgl3EQfMjZB/content/2301.13268v1.pdf'} +page_content=', 2020), we show that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al.' 
Furthermore, human annotators preferred agents that incorporate context over agents with vanilla prefix-tuning.

1 Introduction

With the advent of large language models (LLMs), the vast majority of NLP tasks, including dialog systems, are approached by further fine-tuning these LMs on the downstream task. Although these approaches provide substantial improvements over traditional task-specific models (Ham et al., 2020; Hosseini-Asl et al., 2020; He et al., 2022), fine-tuning is a time-consuming process that also involves significant use of energy and resources in the form of compute. These approaches also require tuning and storing parameters for each downstream task.

∗ Work done during an internship at AWS AI Labs.

A more recent line of work explores "prompting" LLMs to elicit the knowledge required for the downstream task (Shin et al., 2020; Gao et al., 2020; Schick and Schütze, 2020; Petroni et al., 2019; Lee et al., 2021; Zhu et al., 2022).
Prompts are composed of tokens or short pieces of text (discrete prompts) inserted at the end of the input examples; they are typically defined manually based on the specific downstream task. The main motivation behind these approaches is that the large corpora these language models are trained on contain information pertinent to the task at hand.

Adapter-tuning has been proposed as another alternative to fine-tuning. These methods train only task-specific layers that are inserted within pre-trained LMs. Such a lightweight approach, which adds only about 4% task-specific parameters, has been shown to obtain performance comparable to fine-tuning counterparts (Rebuffi et al., 2017; Houlsby et al., 2019; Lin et al., 2020a).

Drawing inspiration from prompting, prefix-tuning (Li and Liang, 2021) was proposed as yet another alternative to fine-tuning. These approaches prepend a sequence of task-specific continuous vectors (the prefix) to the input. In contrast to prompting, the prefix consists of free parameters that do not correspond to actual real tokens.
Such an approach is appealing because it optimizes only the prefix and does not tune the parameters of the entire LM.

Most existing approaches use static prompts, i.e., the same set of tokens is used as "prompt tokens" regardless of the input. However, we believe that taking context into consideration is critical, especially in response generation, since the current response has to fit not only the domain but also the information requested in previous turns. For example, in the MultiWOZ dataset, if a customer asks about train bookings, the agent response has to restrict itself to that particular domain. To address this problem, we explore the idea of generating input-dependent, or contextual, prompts. We want the prompts to capture and encode different signals for different turns of a dialog depending on the context; hence, we call our approach dynamic context prompting. In this way, we hope to distill useful signals into the prompts and provide the model with adequate signals to generate the desired system response. In this work, we explore the potential of using dialog context within a prefix-tuning approach for the task of response generation in task-oriented dialog (TOD) systems.

The contributions of this paper are summarized as follows:
- We propose a context-dependent prefix-tuning method for dialog response generation in TOD systems.
- To illustrate the benefits of such an approach, we conduct experiments on the MultiWOZ dataset.
We show that our model significantly outperforms the original task-dependent design of the prefix-tuning method.

2 Related Work

2.1 Dialog Generation

With the prevalence of LLMs, the question of "how do we effectively adapt such models for dialog generation?" has been at the forefront of researchers' minds in the dialog community. For task-oriented dialog, fine-tuning large pre-trained models such as GPT-2 or T5 has recently made great progress on benchmarks (Ham et al., 2020; Hosseini-Asl et al., 2020). Building upon these advances, a more recent line of work investigates the effectiveness of multi-task learning (Su et al., 2021; Lin et al., 2020b; Yang et al., 2021) and of pre-training the model on external dialog corpora (Peng et al., 2021; Liu et al., 2021). More recently, prompting has been used to address the sub-task of dialog state tracking (Lee et al., 2021; Zhu et al., 2022).
Different from those works, we focus on the task of dialog response generation.

2.2 Prompt-based Learning

As an alternative to the fine-tuning paradigm, prompting involves a sequence of tokens appended to the input text, which can induce the model to engage in a certain behavior suited to the task. Since the release of GPT-2 (Radford et al., 2018, 2019; Brown et al., 2020), many prompt-related papers have emerged. Most of the leading approaches use task-specific prompts, ranging from discrete prompts (Shin et al., 2020; Gao et al., 2020; Schick and Schütze, 2020; Petroni et al., 2019) to continuous "soft prompts" (Li and Liang, 2021; Lester et al., 2021). These methods have a fixed prompt for each task.
However, in dialog systems specifically, the context varies at every turn. In our work, we aim to design prompts that are context-dependent.

3 Problem Statement

Response generation is one of the tasks carried out in dialog systems, usually in addition to dialog state tracking (DST). Given a dialog context (the previous turns between the system and the user) $C = [u_1, s_1, \ldots, u_{n-1}, s_{n-1}]$ and the current user utterance $u_n$, the goal of response generation is to generate the system response $s_n$. Note that in the actual task, we generate delexicalized system responses, given all ground-truth previous turns as input, following previous work (Hosseini-Asl et al., 2020; Wen et al., 2015). Techniques such as those of (Ham et al., 2020; Hosseini-Asl et al., 2020) rely on fully fine-tuning LLMs to carry out this task. In contrast, our approach builds on the prefix-tuning framework but incorporates the dialog context $C$ as an additional signal for the prefix tokens. As a supplement to the context $C$, we add dialog state information $D$ (up to the current turn) to further help response generation.
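For concreteness, the sketch below shows one plausible way to flatten the dialog context $C$ and current utterance $u_n$ into a single model input. The speaker tags and the delexicalized placeholders (e.g., [value_time]) are illustrative assumptions, not the paper's exact serialization format.

```python
# Minimal sketch of serializing a dialog for response generation.
# The "user:"/"system:" tags and the slot placeholders are assumptions;
# the paper does not specify its exact input format.

def serialize_dialog(context_turns, current_user_utterance):
    """Flatten C = [u1, s1, ..., u_{n-1}, s_{n-1}] plus u_n into one string."""
    parts = []
    for i, turn in enumerate(context_turns):
        speaker = "user" if i % 2 == 0 else "system"
        parts.append(f"{speaker}: {turn}")
    parts.append(f"user: {current_user_utterance}")
    return " ".join(parts)

context = [
    "I need a train from Cambridge to London on Friday.",                  # u1
    "There are [choice] trains on Friday. When would you like to leave?",  # s1 (delexicalized)
]
u_n = "I would like to leave after 14:00."
model_input = serialize_dialog(context, u_n)
# Target output: the delexicalized system response s_n, e.g.
# "The [train_id] leaves at [value_time]. Shall I book it?"
```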
4 Contextual Dynamic Prompting Framework

4.1 Prefix-tuning for Response Generation

Our work is built on top of prefix-tuning for generation tasks (Li and Liang, 2021), which adds a fixed set of tunable prefix tokens/prompts to the original input $x$ to obtain a new input $[\mathrm{PREFIX}; x]$. Following the notation of (Li and Liang, 2021), we use $P_\theta[i, :]$ to denote the $i$-th prefix. $P_\theta$ is generated by

$$P_\theta[:, :] = \mathrm{MLP}_\theta(P'), \quad (1)$$

where $P'$ is a smaller fixed-size matrix given as input to a feedforward neural network ($\mathrm{MLP}_\theta$). The training objective of prefix-tuning is the same as that of fine-tuning, i.e., the log-likelihood objective

$$\max_\theta \; \log p_\phi(y \mid x),$$

where $y$ is the decoder output and $x$ is the input; $\theta$ represents the trainable parameters of the prefix-tuning feedforward neural network, and $\phi$ denotes all other parameters, including the frozen parameters of the large language model.

[Figure 1: Differences between the vanilla prefix-tuning approach (a) and our approach (b). In both variants, only the prefix tokens are tuned.]

For our task of response generation, we concatenate the prefix with the dialog context and the current user utterance as the input $[\mathrm{PREFIX}; u_1, s_1, \ldots, u_{n-1}, s_{n-1}, u_n]$. The target output is the system response $s_n$, as seen in Figure 1(a). We adopt T5 (Raffel et al., 2020) as the pre-trained language model. T5 employs an encoder-decoder framework, which is prevalent in seq2seq tasks (Sutskever et al., 2014; Cho et al., 2014).
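To make Equation (1) and the frozen-LM setup concrete, here is a minimal PyTorch sketch of prefix-tuning for this task. It is a simplified, embedding-level variant: Li and Liang (2021) actually inject the prefix as key/value pairs at every attention layer, whereas this sketch prepends the prefix vectors to the input embeddings. The model name t5-small and the sizes prefix_len and mid_dim are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class PrefixTuningT5(nn.Module):
    """Embedding-level sketch of prefix-tuning: only theta (P' and MLP_theta)
    is trained; phi (the T5 LM) stays frozen."""

    def __init__(self, model_name="t5-small", prefix_len=10, mid_dim=512):
        super().__init__()
        self.lm = T5ForConditionalGeneration.from_pretrained(model_name)
        for p in self.lm.parameters():
            p.requires_grad = False              # freeze phi
        d_model = self.lm.config.d_model
        # P': a small fixed-size learnable matrix, the input to MLP_theta.
        self.p_prime = nn.Parameter(torch.randn(prefix_len, d_model))
        # MLP_theta: reparameterizes P' into the actual prefix P_theta, Eq. (1).
        self.mlp = nn.Sequential(
            nn.Linear(d_model, mid_dim), nn.Tanh(), nn.Linear(mid_dim, d_model))

    def forward(self, input_ids, attention_mask, labels):
        batch = input_ids.size(0)
        prefix = self.mlp(self.p_prime)                        # Eq. (1)
        prefix = prefix.unsqueeze(0).expand(batch, -1, -1)     # broadcast over batch
        embeds = self.lm.get_input_embeddings()(input_ids)     # embed x
        inputs_embeds = torch.cat([prefix, embeds], dim=1)     # [PREFIX; x]
        prefix_mask = torch.ones(batch, prefix.size(1),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prefix_mask, attention_mask], dim=1)
        # max_theta log p_phi(y | x): standard seq2seq cross-entropy via labels.
        return self.lm(inputs_embeds=inputs_embeds, attention_mask=mask, labels=labels)
```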
4.2 Contextual Prefix-tuning

In vanilla prefix-tuning, the parameters of the prefix are fixed after training so that they can be reused for a particular task. However, a dialog system involves multiple turns of conversation between the system and the user, and it is imperative in such systems to dynamically incorporate contextual information to carry out a meaningful conversation. We explore how to distill the dialog context information into the prefix with a prompt encoder. Different from the original design, we want to encode additional signals into the prefix that differ for each input instance. In other words, we want to generate a contextual prefix, or contextual dynamic prompts. Formally, we modify Equation (1) as follows:

$$P_\theta[:, :] = \mathrm{MLP}_\theta(\mathrm{encoder}(C)), \quad (2)$$

where $C = [u_1, s_1, \ldots, u_{n-1}, s_{n-1}]$ represents the dialog context. We first obtain the representation of the dialog context by feeding $C$ into a T5 encoder that is kept frozen, as shown in Figure 1(b). Subsequently, we use the prompt encoder, i.e., the feedforward neural network, to obtain the prefix. The generated prefix $P_\theta$ is then concatenated with only the current user utterance: instead of concatenating the whole context as the input to the T5 decoder, we first distill the signal into the prefix tokens. Because the T5 encoder that generates the context representation is frozen, we still have the same number of tunable parameters as the original prefix-tuning framework.
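Under the same assumptions as the previous sketch (embedding-level prefixes, t5-small, illustrative sizes), here is a minimal sketch of Equation (2): the frozen T5 encoder embeds the context $C$, a mean-pooled summary of its hidden states (our assumption; the paper does not specify the pooling) is mapped by the prompt-encoder MLP to the prefix, and the prefix is concatenated with only the embedded current utterance $u_n$.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class ContextualPrefixT5(nn.Module):
    """Sketch of Eq. (2): P_theta = MLP_theta(encoder(C)). The T5 encoder that
    embeds C is frozen, so only the MLP parameters are tuned, as in Sec. 4.2."""

    def __init__(self, model_name="t5-small", prefix_len=10, mid_dim=512):
        super().__init__()
        self.lm = T5ForConditionalGeneration.from_pretrained(model_name)
        for p in self.lm.parameters():
            p.requires_grad = False              # LM and context encoder stay frozen
        d = self.lm.config.d_model
        self.prefix_len = prefix_len
        # Prompt encoder: maps one pooled context vector to prefix_len vectors.
        self.mlp = nn.Sequential(
            nn.Linear(d, mid_dim), nn.Tanh(), nn.Linear(mid_dim, prefix_len * d))

    def forward(self, context_ids, context_mask, utterance_ids, utterance_mask, labels):
        batch, d = context_ids.size(0), self.lm.config.d_model
        with torch.no_grad():                    # frozen T5 encoder over C
            ctx = self.lm.encoder(input_ids=context_ids,
                                  attention_mask=context_mask).last_hidden_state
        # Mean-pool the context states (assumption), then generate the dynamic prefix.
        pooled = (ctx * context_mask.unsqueeze(-1)).sum(1) / context_mask.sum(1, keepdim=True)
        prefix = self.mlp(pooled).view(batch, self.prefix_len, d)
        # Concatenate the prefix with only the current user utterance u_n.
        utt_embeds = self.lm.get_input_embeddings()(utterance_ids)
        inputs_embeds = torch.cat([prefix, utt_embeds], dim=1)
        prefix_mask = torch.ones(batch, self.prefix_len,
                                 dtype=utterance_mask.dtype,
                                 device=utterance_mask.device)
        mask = torch.cat([prefix_mask, utterance_mask], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, attention_mask=mask, labels=labels)
```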
4.3 Input-dependent Prefix-tuning with Dialog State

In most task-oriented dialog systems, we also have access to the dialog state at every turn, in addition to the dialog context. The dialog state contains information such as the requested and filled slots at every turn. We provide the dialog state $D$ in addition to the context $C$ to obtain contextual dynamic prompts. As a result, we modify Equation (2) as follows:

$$P_\theta[:, :] = \mathrm{MLP}_\theta(\mathrm{encoder}(C; D_{n-1})), \quad (3)$$

where we provide only the most recent dialog state $D_{n-1}$, which is an amalgamation of all previous dialog states $D_1, \ldots, D_{n-2}$.
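A small sketch of how Equation (3) might be realized at the input level: the most recent dialog state $D_{n-1}$ is serialized to text and appended to the context before the frozen encoder. The slot-value flattening and the "&lt;state&gt;" separator below are our assumptions; the paper does not specify the state serialization.

```python
# Sketch of Eq. (3): encoder(C; D_{n-1}) is approximated by encoding the
# context text with the serialized dialog state appended. The flattening
# scheme and the "<state>" separator are assumptions for illustration.

def serialize_state(dialog_state):
    """e.g. {"train": {"departure": "cambridge", "day": "friday"}} ->
    'train departure=cambridge day=friday'"""
    parts = []
    for domain, slots in dialog_state.items():
        slot_str = " ".join(f"{s}={v}" for s, v in slots.items())
        parts.append(f"{domain} {slot_str}")
    return " ; ".join(parts)

def build_prompt_encoder_input(context_text, dialog_state):
    # The contextual prefix is then P_theta = MLP_theta(encoder(C; D_{n-1})),
    # computed exactly as in the ContextualPrefixT5 sketch above.
    return context_text + " <state> " + serialize_state(dialog_state)
```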