Dataset columns: text (string, lengths 4 to 222k) and label (int64, values 0 to 4).
Summarization, or the task of condensing a document's main points into a shorter document, is important for many text domains, such as headlines for news and abstracts for research papers. This paper presents a novel unsupervised abstractive summarization method that generates summaries directly from source documents, without the aid of example summaries. This approach simultaneously optimizes for the following important properties of a good summary: • coverage of the keywords of the document, • fluency of the generated language, and • brevity of the generated summaries. One of the main contributions of this work is a novel method of inducing good coverage of important concepts from the original article. The coverage model we propose takes as input the original document with keywords masked out (see Figure 1). It uses the current best automatically generated summary to try to uncover the missing keywords. The more informative the current summary is, the more successful the coverage model is at guessing the blanked-out keywords from the original document. The resulting coverage score is fed back into the training process of the summarization model with the objective of producing summaries with high coverage. A second contribution is our unsupervised training procedure for summarization, the Summary Loop, which leverages the coverage model as well as a simple fluency model to generate and score summaries. During training, the procedure is conditioned on a desired summary length, forcing the Summarizer model to adapt to a length budget. Figure 1 shows Summary Loop summaries obtained for the same document under three different length budgets. A third contribution is a set of specialized techniques employed during training to guide the model away from pathological behavior. These guard rails include methods for reducing repetition, for encouraging the model to complete sentences, and for avoiding frame-filling patterns. The models trained through the Summary Loop outperform all prior unsupervised summarization methods by at least 2 ROUGE-1 points on common news summarization datasets (CNN/DM and Newsroom), and come within a few points of state-of-the-art supervised algorithms, without ever being exposed to any summaries. In addition, summaries generated by our method use 50% more summarization techniques (compression, merging, etc.) than prior automatic work and achieve higher levels of abstraction, reducing by almost half the gap between human-generated summaries and automatic summaries in terms of the length of copied spans. Abstractive Summarization. Sequence-to-sequence (seq2seq) models (Sutskever et al., 2014) trained using teacher forcing are the most common approach to abstractive summarization (Nallapati et al., 2016). A common architecture is the Pointer-Generator (See et al., 2017). Performance can be further improved by constraining the attention (Gehrmann et al., 2018; Gui et al., 2019) and by using pretrained Transformer-based language models (Lewis et al., 2019; Chi et al., 2019; Edunov et al., 2019). Across these architectural changes, the training procedure remains constant: using a large corpus of document-summary pairs, the model is trained to reproduce target summaries. Unsupervised Summarization. Most unsupervised summarization work is extractive: sentences deemed relevant are pulled out of the original document and stitched into a summary, based on a heuristic for a sentence's relevance (Mihalcea and Tarau, 2004; Barrios et al., 2015; West et al., 2019).
Nikolov and Hahnloser (2019)'s abstractive approach is partially unsupervised: it does not require parallel data, only a group of documents and a group of summaries. In contrast, our work does not require any summaries and is trained using only documents. Radford et al. (2019) summarize documents using a language model (GPT2) in a zero-shot learning setting. The model reads the document followed by a special token "TL/DR", and is tasked with continuing the document with a summary. Our work is an extension of this work: we initialize our Summarizer model with a GPT2 and specialize it with a second unsupervised method. Summarization and Q&A. Eyal et al. (2019) and Arumae and Liu (2018) turn reference summaries into fill-in-the-blank (FIB) questions, either as an evaluation metric or to train an extractive summarization model. In this work, we directly generate FIB questions on the document being summarized, bypassing the need for a reference summary. Scialom et al. (2019)'s work stays closer to a Q&A scenario, and uses a Question Generation module to generate actual questions about the document, answered by a SQuAD-based (Rajpurkar et al., 2018) model using the generated summary. We refrain from using actual questions because question generation remains a challenge, and it is unclear how many questions should be generated to assess the quality of a summary. RL in Summarization. Paulus et al. (2018) introduced Reinforcement Learning (RL) to neural summarization methods by optimizing for ROUGE scores, leading to unreadable summaries. Since then, Reinforcement Learning has been used to select sentences with high ROUGE potential (Chen and Bansal, 2018), or to optimize modified versions of ROUGE that account for readability. In all cases, the reward being computed relies on a reference summary, making the methods supervised. We craft a reward that does not require a target summary, allowing our training process to remain unsupervised.
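To make the coverage idea above concrete, the following is a minimal Python sketch of masked-keyword coverage scoring, assuming a fill-in model is available as a callable; the fill_in function and the toy stand-in below are hypothetical placeholders, not the Summary Loop implementation.

from typing import Callable, List

MASK = "[MASK]"

def mask_keywords(document: str, keywords: List[str]) -> str:
    # replace every keyword occurrence in the document with a mask token
    tokens = document.split()
    return " ".join(MASK if t.lower().strip(".,") in keywords else t for t in tokens)

def coverage_score(document: str, summary: str, keywords: List[str],
                   fill_in: Callable[[str, str], List[str]]) -> float:
    # coverage = fraction of masked keywords the fill-in model recovers from the summary
    masked_doc = mask_keywords(document, keywords)
    guesses = fill_in(masked_doc, summary)          # one guess per masked slot
    gold = [t.lower().strip(".,") for t in document.split()
            if t.lower().strip(".,") in keywords]
    correct = sum(g == k for g, k in zip(guesses, gold))
    return correct / max(len(gold), 1)

# Toy stand-in "model" that naively guesses the first words of the summary.
def dummy_fill_in(masked_doc: str, summary: str) -> List[str]:
    n_masks = masked_doc.split().count(MASK)
    words = summary.lower().split()
    return (words + ["<unk>"] * n_masks)[:n_masks]

doc = "the storm flooded the coastal town overnight"
print(coverage_score(doc, "storm floods coastal town", ["storm", "coastal", "town"], dummy_fill_in))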
0
A customary pattern for papers on natural language processing runs roughly as follows: 1. Here's a difficult language situation. 2. Here's the semantic (pragmatic / discourse / whatever) information necessary to interpret the situation as we humans interpret it. 3. Here's how I put this information into my system. 4. Therefore, my system can handle the situation. Although the amount of effort which has been expended in doing this kind of thing is admirable, there is still something less than totally satisfactory about the approach. First, it tends to trivialize the notion of "solution to a problem." Second, and more important, the approach relies on hand-coding pieces of information (commonly called "knowledge"). The more we want our programs to know, the more we have to hand-feed them. We can keep on doing this, and maybe, after several generations, someone will put all the pieces together and discover that AI researchers have succeeded in rewriting the history of the universe as we understand it. Without question, this will be a form of success. But there might be a faster way to the goal, one that doesn't involve reinventing the wheel of knowledge. We already have volumes of hand-coded natural language information in our reference books (dictionaries and encyclopedias, for instance). If computers could access that information, and use it to help get out of difficult situations, they (and we) would be much further ahead. Problems should be expected, of course: reference works are arbitrary and inconsistent; just having the information does not guarantee being able to use it. It is nevertheless possible to think of on-line reference books as knowledge bases. In this paper we propose techniques for processing the definitions of an on-line standard dictionary, and for extracting from them the semantic information necessary to resolve the ambiguities inherent in the attachment of English prepositional phrases. We consult the on-line dictionary as if it were a semantic expert, and we find in it the kind of information that has previously been supplied by means of scripts, frames, templates, and similar hand-crafted devices. We start by presenting PEG, a broad-coverage computational grammar of English which is our tool for analyzing on-line definitions, and by discussing the tools necessary for extracting from these definitions the knowledge relevant for disambiguation. Subsequent sections describe these tools in greater detail, focussing on specific sentences and on the heuristic machinery used to discover their most likely interpretations. A final section sums up the work and suggests some important areas for future research. Two appendices provide traces of the processing of some examples, and technical information about the "approximate reasoning" techniques used in our system.
0
Computational modeling of human multimodal language is an upcoming research area in natural language processing. This research area focuses on modeling tasks such as multimodal sentiment analysis (Morency et al., 2011), emotion recognition (Busso et al., 2008), and personality traits recognition (Park et al., 2014). The multimodal temporal signals include the language (spoken words), visual (facial expressions, gestures), and acoustic (prosody, vocal expressions) modalities. At their core, these multimodal signals are highly structured, with two prime forms of interactions: intra-modal and cross-modal interactions (Rajagopalan et al., 2016). Intra-modal interactions refer to information within a specific modality, independent of other modalities: for example, the arrangement of words in a sentence according to the generative grammar of a language (Chomsky, 1957), or the sequence of facial muscle activations for the presentation of a frown. Cross-modal interactions refer to interactions between modalities: for example, the simultaneous co-occurrence of a smile with a positive sentence, or the delayed occurrence of laughter after the end of a sentence. Modeling these interactions lies at the heart of human multimodal language analysis and has recently become a central research direction in multimodal natural language processing (Liu et al., 2018; Pham et al., 2018), multimodal speech recognition (Gupta et al., 2017; Harwath and Glass, 2017; Kamper et al., 2017), as well as multimodal machine learning (Tsai et al., 2018; Srivastava and Salakhutdinov, 2012; Ngiam et al., 2011). Recent advances in cognitive neuroscience have demonstrated the existence of multistage aggregation across human cortical networks and functions (Taylor et al., 2015), particularly during the integration of multisensory information (Parisi et al., 2017). At later stages of cognitive processing, higher-level semantic meaning is extracted from phrases, facial expressions, and tone of voice, eventually leading to the formation of higher-level cross-modal concepts (Parisi et al., 2017; Taylor et al., 2015). Inspired by these discoveries, we hypothesize that the computational modeling of cross-modal interactions also requires a multistage fusion process. In this process, cross-modal representations can build upon the representations learned during earlier stages. This decreases the burden on each stage of multimodal fusion and allows each stage of fusion to be performed in a more specialized and effective manner. In this paper, we propose the Recurrent Multistage Fusion Network (RMFN), which automatically decomposes the multimodal fusion problem into multiple recursive stages. At each stage, a subset of multimodal signals is highlighted and fused with previous fusion representations (see Figure 1). This divide-and-conquer approach decreases the burden on each fusion stage, allowing each stage to be performed in a more specialized and effective way. This is in contrast with conventional fusion approaches, which usually model interactions over all multimodal signals together in one iteration (e.g., early fusion). (From the Figure 1 caption: the second stage selects the loud voice behavior, which is locally interpreted as emphasis before being fused with previous stages into a strongly negative representation; the third stage selects the shrugging and speech elongation behaviors, which reflect ambivalence and, when fused with previous stages, are interpreted as a representation of the disappointed emotion.)
In RMFN, temporal and intra-modal interactions are modeled by integrating our new multistage fusion process with a system of recurrent neural networks. Overall, RMFN jointly models intra-modal and cross-modal interactions for multimodal language analysis and is differentiable end-to-end. We evaluate RMFN on three different tasks related to human multimodal language: sentiment analysis, emotion recognition, and speaker traits recognition across three public multimodal datasets. RMFN achieves state-of-the-art performance in all three tasks. Through a comprehensive set of ablation experiments and visualizations, we demonstrate the advantages of explicitly defining multiple recursive stages for multimodal fusion.
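The highlight-then-fuse loop described above can be sketched schematically in a few lines of numpy; this is an illustrative sketch of the multistage fusion idea, not the authors' implementation, and the weight matrices are random placeholders standing in for learned parameters.

import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multistage_fusion(language, visual, acoustic, num_stages=3, fusion_dim=8):
    signals = np.concatenate([language, visual, acoustic])      # all modality features
    fusion = np.zeros(fusion_dim)                                # initial fusion representation
    W_high = rng.normal(size=(num_stages, signals.size))         # per-stage highlight weights (placeholders)
    W_fuse = rng.normal(size=(num_stages, fusion_dim, signals.size + fusion_dim))
    for k in range(num_stages):
        attention = softmax(W_high[k] * signals)                 # HIGHLIGHT: soft selection of a signal subset
        highlighted = attention * signals
        # FUSE: combine the highlighted subset with the previous stage's fusion representation
        fusion = np.tanh(W_fuse[k] @ np.concatenate([highlighted, fusion]))
    return fusion

# Toy usage with random per-modality feature vectors.
out = multistage_fusion(rng.normal(size=4), rng.normal(size=3), rng.normal(size=3))
print(out.shape)   # (8,)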
0
The standard keyboard was initially designed for native English speakers. In Asian countries such as China, Japan, and Thailand, people cannot input their language through the standard keyboard directly, so Asian text input has become one of the challenges for computer users in Asia, and an Asian language input method is one of the most difficult problems in Asian language processing. For Chinese, the input methods can be roughly divided into two types. One is the structure-based or shape-based input method, which was developed based on the structure of Chinese characters, such as the Wubi method [Wang 2005], the Cangjie method, and the Boshiamy method, among others. These methods can reach a high input speed for a skilled user; however, a lot of effort is required to master them. The other is the pronunciation-based input method, such as the Insun input method [Wang 1993], the Microsoft input method, and Bopomofo, among others. These methods are easy to learn: the user can input Chinese characters with scarcely any training, on the condition that they can pronounce them correctly. Hybrid input methods have also been proposed, e.g., the Renzhi and Tze-loi input methods; however, they only possess a limited share of the market. Pinyin is the standard romanization of Chinese [ISO 7098: 1991], and the pinyin-based input method dominates the market of Chinese input methods; it is said that over 97% of Chinese computer users use pinyin to input Chinese [Chen 1997]. According to the scale of the input unit, the pinyin-based input method can be divided into three types: the character-level input method, the word-level or phrase-level input method, and the sentence-level input method. The sentence-level input method usually achieves higher accuracy than the other two by exploiting more context information, and it has become the most prevalent pinyin-based input method. The Pinyin-to-Character Conversion task aims to convert a sequence of pinyin strings into one Chinese sentence. It is the core process of the sentence-level pinyin-based input method; therefore, improving the performance of the Pinyin-to-Character Conversion system is well worth studying. In addition, the Pinyin-to-Character Conversion task can be taken as a simplified version of automatic speech recognition, since both aim to convert phonetic information into a character sequence. However, the Pinyin-to-Character Conversion task doesn't have to deal with acoustic ambiguity, because the pinyin strings are directly input through the keyboard. Therefore, the technique is also illuminating for the task of automatic speech recognition. The linguistic approach [Wang 1993; Hsu and Chen 1993; Kuo 1995] and the statistical approach [Zhang et al. 1998; Xu et al. 2000; Wu 2000; Gao et al. 2002; Gao et al. 2005; Xiao et al. 2005a] are the two technical approaches to the Pinyin-to-Character Conversion task. The statistical approach is mainly based on statistical language models, especially the ngram model and its variant forms. In recent years, it has drawn great interest due to its efficiency and robustness. However, several drawbacks have also been found in the traditional ngram model. First, according to Zipf's law [Zipf 1935], there are a lot of words which rarely or never occur in the training corpus, so the data sparseness problem is severe [Brown et al. 1992] in the ngram model. Second, long-distance constraints are difficult to capture, since the ngram model only focuses on local lexical constraints.
Third, it is hard to incorporate linguistic knowledge into the ngram model. Many techniques have been proposed to address the drawbacks of the traditional ngram model. To solve the data sparseness problem, various kinds of smoothing techniques have been proposed, such as additive smoothing [Jeffreys 1948], Katz smoothing [Katz 1987], linear interpolation smoothing [Jelinek and Mercer 1980], and semantic-based smoothing [Xiao et al. 2005b; Xiao et al. 2006]. To utilize linguistic knowledge, a set of linguistic rules can be generated automatically and incorporated into the traditional ngram model through a hybrid ngram model [Wang et al. 2005]. Hsu [Hsu 1995] proposes the context sensitive model (CSM), in which semantic patterns are captured by templates. As much as 96% accuracy, which is the best result among traditional Chinese input methods as far as we know, is reported for CSM on the Phoneme-to-Character Conversion task. Trigger techniques [Zhou and Lua 1998] and word-pair techniques [Tsai and Hsu 2002; Tsai et al. 2004; Tsai 2005; Tsai 2006] have also been proposed: linguistic knowledge can be effectively described by triggers and pairs, and at the same time long-distance constraints can be well captured. Compared with a commercial input system (MS-IME 2003), effective improvements have been achieved by these techniques [Tsai 2006]. Wang [Wang et al. 2004] utilizes the theory of rough sets to discover linguistic knowledge and incorporate it into the Pinyin-to-Character Conversion system; compared with the traditional ngram model, Wang's system achieves higher accuracy with a smaller storage requirement. Xiao [Xiao et al. 2005a] incorporates word positional information into the Pinyin-to-Character Conversion system and achieves encouraging experimental results. Gao [Gao et al. 2005] proposes the Minimum Sample Risk (MSR) principle to estimate the parameters of the ngram model; success has been achieved with this principle for a Japanese input method. What's more, some techniques have been proposed especially for Chinese text input methods. A Pinyin-to-Character Conversion system with spelling-error correction was developed by Zhang [Zhang et al. 1997]. In that system, a rule-based model is designed to correct typing errors when the user inputs pinyin strings: not only can the system accept correct pinyin input, but it can also tolerate common typing errors. Similar work has been done by Chen [Chen and Lee 2000], who constructs a statistical model to correct user typing errors. Moreover, Chen proposes a modeless input technique in which the user can input English using a Chinese input method, without requiring a language-mode switch. However, there is another drawback of the ngram model in the Pinyin-to-Character Conversion task which has been ignored by most researchers: it takes no account of the constraints that the input pinyin sequence itself imposes during Pinyin-to-Character Conversion. This paper argues that the pinyin information from the pinyin sequence is helpful for selecting the correct character sequence in the Pinyin-to-Character Conversion task. First, the current input pinyin string is helpful for selecting the correct character which corresponds to that pinyin. For example, the input pinyin sequence "ta1 shi4 di2 shi4 you3?" should be converted into "他是敌是友?" ("Is he an enemy or friend?"). Let's focus on the third pinyin string, "di2".
There are two homonyms which correspond to it: "敌" and "的". (There are actually many homonyms, but let's only focus on "敌" and "的" for simplification.) "的" is one of the most frequent Chinese characters, and its frequency is usually much higher than that of "敌". According to the ngram model, the above pinyin sequence should therefore be converted into "他是的是友?", which is a wrong conversion. However, "的" is a polyphone which corresponds to both "di2" and "de5". In Chinese, "的" is usually pronounced as "de5" instead of "di2" ("的" is pronounced as "di2" only in the word "的确" (certainly)), so the frequency of "的" mainly comes with its pronunciation "de5". If the pinyin information is considered in the above conversion, the co-occurrence of "的" with "di2" is usually lower than that of "敌" with "di2", and the above pinyin sequence is correctly converted into "他是敌是友?". Second, contextual information, especially future information, can be well exploited through the pinyin constraints. For example, consider two pinyin sequences. The first one is "yi4 zhi1 ke3 ai4 de5 xiao3 hua1", which should be converted into "一枝可爱的小花" (This is a lovely flower); here the second pinyin "zhi1" should be converted into "枝", which is determined by the future character "花" (flower). The second pinyin sequence is "yi4 zhi1 ke3 ai4 de5 xiao3 hua1 mao1", which should be converted into "一只可爱的小花猫" (This is a lovely cat); here the second pinyin "zhi1" should be converted into "只", which is determined by the future character "猫" (cat). However, according to the ngram model, the conversion of "zhi1" is only determined by its history information, which is the character "一" in both of the above cases. The characters "花" and "猫" are both future information that the ngram model cannot exploit. Therefore, the same probabilities are assigned to both "只" and "枝"; they cannot be distinguished by the ngram model, and at least one of the two conversions above would be incorrect. However, if the pinyin constraints are considered, the constraints of "hua1" and "mao1", which correspond to the characters "花" and "猫", are exploited and imposed on the conversion of "zhi1", so the above two cases can be distinguished and the correct conversions can be obtained. Third, long-distance constraints can be exploited from the pinyin sequence. The ngram model has to construct a high-order model to capture long-distance constraints; however, high-order ngram models suffer from the curse of dimensionality, which usually leads to a severe data sparseness problem, so the model order used in practice is usually 2 or 3. In the above example, in order to exploit the constraints of "花" and "猫" on the conversion of "zhi1", one would have to build at least a 7th-order ngram model, which suffers from a great data sparseness problem and cannot work well in reality. However, in this paper the pinyin constraints are collected as features and exploited under the Maximum Entropy (ME) framework. The context window size can be relatively large (e.g., 5 or 7 pinyin strings) without the curse of dimensionality, so the constraints of "花" and "猫" can be imposed on the conversion of "zhi1" by exploiting their pinyin information. This paper aims to improve the performance of the Pinyin-to-Character Conversion system by exploiting the pinyin constraints from the pinyin sequence. The pinyin constraints are described under the ME framework [Berger et al.
1996], and the character constraints are modeled by the traditional ngram model. Combining these two models into a unified framework, the paper builds the Pinyin-to-Character Conversion system on a MEMM model [McCallum et al. 2000]. However, the label set in the Pinyin-to-Character Conversion task is the Chinese lexicon, which is far too large to train a MEMM over directly; we therefore use a class-based MEMM (C-MEMM), in which the class of each label can be obtained by automatic algorithms [Li 1998; Chen and Huang 1999; Gao et al. 2001] or from pre-defined thesauri [Mei et al. 1983]. The class set is usually much smaller than the target label set, which makes it feasible to train C-MEMM under the Maximum Entropy principle. These constraints are then conveyed from the class sequence to the target label sequence. Thus, C-MEMM can efficiently exploit the pinyin constraints from the pinyin sequence and obtain effective improvements in the Pinyin-to-Character Conversion task. The paper is organized as follows: the MEMM model is briefly reviewed in Section 2. In Section 3, the C-MEMM model is proposed and its probability functions are derived according to the Bayes rule and the Markov property; both the hard-class and soft-class cases are discussed in detail. Experimental results and discussion are provided in Section 4. Related work is described in Section 5, and conclusions are drawn in Section 6.
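The role of a pinyin-emission term in the "di2" example can be illustrated with a toy log-linear score; the probabilities below are invented for illustration only and are not taken from the paper or its model.

import math

# Character language-model probability P(char | previous char) -- toy values.
p_ngram = {("是", "的"): 0.30, ("是", "敌"): 0.02}
# Pinyin-emission probability P(pinyin | char) -- "的" is almost always pronounced "de5".
p_pinyin = {("的", "di2"): 0.01, ("敌", "di2"): 1.00}

def score(prev_char, cand, pinyin, lam=0.5):
    # interpolate the character ngram score with the pinyin-constraint score
    return (lam * math.log(p_ngram[(prev_char, cand)])
            + (1 - lam) * math.log(p_pinyin[(cand, pinyin)]))

for cand in ["的", "敌"]:
    print(cand, round(score("是", cand, "di2"), 3))
# Without the pinyin term, "的" wins on raw frequency; with it, "敌" is preferred.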
0
One of the main stumbling blocks for Spoken Dialogue Systems (SDSs) is the lack of reliability of Automatic Speech Recognizers (ASRs) (Pellegrini and Trancoso, 2010). Recent research prototypes of ASRs yield Word Error Rates (WERs) between 15.6% (Pellegrini and Trancoso, 2010) and 18.7% (Sainath et al., 2011) for broadcast news. However, the commercial ASR employed in this research had a WER of 30% and a Sentence Error Rate (SER) (the proportion of sentences for which no correct textual transcription was produced) of 65.3% for descriptions of household objects. In addition to mis-recognized entities or actions, ASR errors often yield ungrammatical sentences that cannot be processed by subsequent interpretation modules of an SDS, e.g., "the blue plate" being mis-heard as "to build played", and hesitations (e.g., "ah"s) being mis-heard as "and" or "on", all of which happened in our trials. In this paper, we offer a general framework for error detection and correction in spoken utterances that is based on the noisy channel model, and present a first-stage implementation of this framework that performs simple corrections of referring expressions. Our model is implemented as a preprocessing step for the Scusi? spoken language interpretation system (Zukerman et al., 2008; Zukerman et al., 2009). The idea of the noisy channel model is that a message is sent through a channel that introduces errors, and the receiver endeavours to reconstruct the original message by taking into account the characteristics of the noisy channel and of the transmitted information (Ringger and Allen, 1996; Brill and Moore, 2000; Zwarts et al., 2010). The system described in this paper handles three types of errors: noise (which is removed), missing prepositions (which are inserted), and mis-heard words (which are replaced). Table 1 shows two descriptions that illustrate these errors. The first row for each description displays what was spoken, the second row displays what was heard by the ASR, and the third row shows the semantic labels assigned to each segment in the description by a shallow semantic parser (Section 3.2). Specifically, in the first example, the preposition "to" is missing, and the object "stool" is mis-heard as "storm"; in the second example, "the plate" is mis-heard as "to play", and the noisy "it" has been inserted by the ASR. Ideally, we would like to replace mis-heard words with phonetically similar words, e.g., use "plate" instead of "play". However, at present, as a first step, we replace mis-heard words with generic options, e.g., "thing" for an object or landmark, and we insert the generic preposition "at" for a missing preposition. Thus, we deviate from the noisy channel approach in that we do not quite reconstruct the original message. Instead, we construct a grammatically correct version of this message that enables the generation of reasonable interpretations (rather than no interpretation or nonsensical ones). For example, the mis-heard description "to play it in the microwave" in Table 1 is modified to "the thing in the microwave". Clearly, this is not what the speaker said, but hopefully this modified text, which describes an object rather than an action, enables the identification of the intended object, e.g., a plate, or at least a small set of candidates, in light of the rest of the description. Our mechanism was evaluated on a corpus of 295 spoken referring expressions, significantly improving the interpretation performance of the original Scusi?
system (Section 6.3). The rest of this paper is organized as follows. In the next section, we discuss related work. In Section 3, we outline the design of our system. Our probabilistic model is described in Section 4, followed by the noisy channel error correction procedure. In Section 6, we discuss our evaluation, and then present concluding remarks.
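As a rough illustration of the three corrections just described, the following Python sketch applies them to (word, label) pairs; the label names and the segmentation below are illustrative assumptions, not the actual output format of the shallow semantic parser.

def correct(segments):
    """segments: list of (word, label) pairs with labels such as 'object',
    'preposition', 'landmark', 'noise', 'misheard-object' (illustrative labels)."""
    out = []
    for word, label in segments:
        if label == "noise":                        # noise: drop the word
            continue
        if label == "misheard-object":              # mis-heard word: generic replacement
            out.append(("thing", "object"))
            continue
        # missing preposition: insert the generic "at" before a landmark
        # that is not already preceded by a preposition
        if label == "landmark" and (not out or out[-1][1] != "preposition"):
            out.append(("at", "preposition"))
        out.append((word, label))
    return out

heard = [("to", "noise"), ("play", "misheard-object"), ("it", "noise"),
         ("in", "preposition"), ("the microwave", "landmark")]
print(correct(heard))
# [('thing', 'object'), ('in', 'preposition'), ('the microwave', 'landmark')]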
0
As we all know, we currently face a network environment that is not entirely harmonious or secure. The number of Internet users is very large, and the proportion of minors among them is steadily increasing, which shows how important it is to create a hopeful social media environment. An environment that embodies equality, tolerance, and diversity can help people struggling with depression, confusion, a lack of identity, and other difficulties to gain hope. At the same time, it also brings a better online social experience to the entire user community on social media. Current work on the detection and classification of social media speech is mostly oriented towards offensive and controversial speech, whereas this task is hope speech detection. The difference is that the former detects negative speech in order to eliminate the impact of offensive speech on the creation of a healthy network environment, while hope speech detection focuses on detecting hopeful content in order to create a good network environment and encourage people, rather than detecting and deleting negative comments, which can deprive individuals of their freedom of speech (Chakravarthi, 2020). The data for this task comes from YouTube comments. Our task is to classify each comment into three categories: 'Hope speech', 'Not hope speech', and 'Not in intended language'. We use K-fold cross-validation and an ALBERT model to perform hope speech detection.
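A minimal sketch of the K-fold setup is shown below, assuming scikit-learn is available; the train_model and evaluate functions are dummy stand-ins for ALBERT fine-tuning and scoring, introduced only to keep the sketch self-contained, and the toy data is invented.

from collections import Counter
from sklearn.model_selection import StratifiedKFold

def train_model(train_labels):
    # dummy stand-in: "train" a majority-class predictor instead of fine-tuning ALBERT
    return Counter(train_labels).most_common(1)[0][0]

def evaluate(majority_label, dev_labels):
    # dummy stand-in: accuracy of the majority-class predictor
    return sum(l == majority_label for l in dev_labels) / len(dev_labels)

def cross_validate(texts, labels, k=5):
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    scores = []
    for train_idx, dev_idx in skf.split(texts, labels):
        model = train_model([labels[i] for i in train_idx])
        scores.append(evaluate(model, [labels[i] for i in dev_idx]))
    return sum(scores) / len(scores)

texts = ["you can do it"] * 8 + ["random noise"] * 4
labels = ["Hope speech"] * 8 + ["Not hope speech"] * 4
print(cross_validate(texts, labels, k=3))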
0
Our long-term goal is the development of methods which will allow one to produce optimal analyses from arbitrary natural language corpora, where by optimization we understand an MDL (minimum description length; Rissanen, 1989) interpretation of the term: an optimal analysis is one which finds a grammar that simultaneously minimizes grammar length and data compression length. Our specific and primary focus is on morphology, and on how knowledge of morphology can be a useful step towards a more complete knowledge of a language's linguistic structure. Our strategy is based on the following observation: knowing the rightmost suffix of a word is very useful information in inferring (or guessing) a word's part of speech (POS), but due to the ambiguity of many suffixes, it is even better to know both a word's suffix and the range of other suffixes that the word's stem appears with elsewhere, i.e., its signature. As we will see below, this conjunction of "better" information is what we call the signature transform, and in this paper we explore how knowledge of the signature transform can be merged with knowledge of the context vector to draw conclusions about morphology and syntax. In the distant future, we would like to be able to use the signature transform in a general process of grammar induction, but that day is not here; we therefore test our experiments by seeing how well we are able to predict POS as assigned by an available tagger (TreeTagger; Schmid 1994). In particular, we wish to decrease the uncertainty of a word's POS through the morphological analysis described here. This decrease of uncertainty will enter into our calculation through an increase in the probability assigned to our test corpus once the corpus has been augmented with TreeTagger-assigned POS tags. But to be clear on our process: we analyze a completely raw text morphologically, and use the POS tags from TreeTagger only to evaluate the signature transforms that we generate. We assume without argument here that any adequate natural language grammar will contain a lexicon which includes both lexical stems that are specified for morphological properties, such as the specific affixes with which they may occur, and affixes associated with lexical categories. We also explicitly note that many affixes are homophonous: they are pronounced (or written) identically, but have different morphological or syntactic characteristics, such as the English plural -s and the verbal 3rd person singular present -s. We focus initially on unsupervised learning of morphology for three reasons: first, because we already have a quite successful unsupervised morphological learner; second, because the final suffix of a word is typically the strongest single indicator of its syntactic category; and third, because analysis of a word into a stem T plus suffix F allows us (given our knowledge that the suffix F is a stronger indicator of category than the stem T) to collapse many distinct stems into a single cover symbol for purposes of analysis, simplifying our task, as we shall see. We eschew the use of linguistic resources with hand- (i.e., human-) assigned morphological information in order for this work to contribute, eventually, to a better theoretical understanding of human language acquisition. We present in this paper an algorithm that modifies the output of the morphology analyzer by combining redundant signatures.
Since we ultimately want to use signatures and signature transforms to learn syntactic categories, we developed an algorithm that uses syntactic contextual information. We evaluate the changes to the morphological analysis from the standpoint of efficient and adequate representation of lexical categories. This paper presents a test conducted on English, and can thus only be considered a preliminary step in the eventual development of a language-independent tool for grammar induction based on morphology. Nonetheless, the concepts that motivate the process are language-independent, and we are optimistic that similar results would be found in tests based on texts from other languages. In section 2 we discuss the notion of signature and signature transform, and section 3 presents a more explicit formulation of the general problem. In section 4 we present our algorithm for signature collapse. Section 5 describes the experiments we ran to test the signature collapsing algorithm, and section 6 presents and discusses our results.
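To make the notion of a signature and the signature transform concrete, here is a small illustrative Python sketch; the suffix list and the stem-splitting heuristic are toy assumptions, not the authors' unsupervised morphological learner.

from collections import defaultdict

SUFFIXES = ["ing", "ed", "s", ""]   # "" marks the bare stem

def split(word):
    # naive stem/suffix split: strip the longest matching suffix from a long-enough word
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if suf and word.endswith(suf) and len(word) > len(suf) + 2:
            return word[:-len(suf)], suf
    return word, ""

def signatures(vocab):
    # a stem's signature is the set of suffixes it appears with in the vocabulary
    stem_to_sufs = defaultdict(set)
    for w in vocab:
        stem, suf = split(w)
        stem_to_sufs[stem].add(suf)
    return stem_to_sufs

vocab = ["jump", "jumps", "jumped", "jumping", "walk", "walks", "walked", "dog", "dogs"]
sigs = signatures(vocab)
for w in ["jumps", "dogs"]:
    stem, suf = split(w)
    print(w, "->", (tuple(sorted(sigs[stem])), suf))
# "jumps" and "dogs" share the suffix -s but have different signatures,
# which is exactly the extra information the signature transform provides.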
0
Although extensive and varied text data is easily available in the present age, readers need technology that overcomes differences in linguistic competence in order to gather information effectively. For example, technology that bridges the differences in the linguistic competence of foreign language learners, children, the elderly, and disabled persons is useful (Inui and Fujita, 2004). We present our research on paraphrasing to control language at the elementary school level in order to simplify texts for children. We believe that vocabulary simplification for children can be realized by paraphrasing text according to the Basic Vocabulary to Learn (BVL) (Kai and Matsukawa, 2002). BVL is a collection of words selected on the basis of a lexical analysis of elementary school textbooks; it contains 5,404 words that can help children write expressively. Previous work has performed lexical paraphrasing using definition statements from a Japanese dictionary (Kajiwara et al., 2013). A definition statement explains a given headword in several easy words, so paraphrasing the headword with words from its definition is expected to yield meaning-preserving lexical simplification. However, definition statements are short sentences consisting of only a few words. Consequently, there are few paraphrase candidates, and natural paraphrasing is difficult even if several dictionaries are used together. In addition, the definition statement as a whole is equivalent to the headword; there is no guarantee that any individual word extracted from the definition statement can paraphrase the headword. Against this background, we propose lexical paraphrasing based on a variety of contexts obtained from a large corpus, without depending on existing lexical resources. The proposed method is language-independent, so it can perform lexical paraphrasing using a corpus of any language. In this paper we examine and report on Japanese nouns.
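One common way of using corpus contexts for this kind of lexical paraphrasing is sketched below; this is a hedged illustration (bag-of-context vectors and cosine similarity restricted to a basic vocabulary), not necessarily the authors' exact method, and the toy English data stands in for the Japanese setting.

import math
from collections import Counter, defaultdict

corpus = [
    "the physician examined the patient",
    "the doctor examined the patient",
    "the doctor helped the patient",
    "the physician helped the sick patient",
]
BASIC_VOCABULARY = {"doctor", "helped", "sick", "patient", "the"}   # toy stand-in for BVL

def context_vectors(sentences, window=2):
    # represent each word by the counts of words appearing within a small window
    vecs = defaultdict(Counter)
    for s in sentences:
        toks = s.split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if j != i:
                    vecs[w][toks[j]] += 1
    return vecs

def cosine(a, b):
    num = sum(a[k] * b[k] for k in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

vecs = context_vectors(corpus)
target = "physician"
candidates = [(cosine(vecs[target], vecs[w]), w) for w in BASIC_VOCABULARY if w != target]
print(max(candidates))   # the basic-vocabulary word whose contexts best match "physician"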
0
A key characteristic that speech-to-speech machine translation systems strive for is a good trade-off between translation accuracy and low latency (Waibel and Fuegen, 2012; Bangalore et al., 2012). Latency is defined as the delay between the input speech and the delivered translation (Niehues et al., 2016) and roughly corresponds to the interpreter's décalage in human interpreting. While a number of engineering approaches have been proposed to reduce latency while at the same time maintaining good automatic speech translation quality (Waibel and Fuegen, 2012; Bangalore et al., 2012; Sridhar et al., 2013b; Schmid and Garside, 2005), few approaches draw explicit inspiration from human interpreting by learning from the strategies which interpreters employ in order to produce good quality translation (Niehues et al., 2016; He et al., 2015; Sridhar et al., 2013a). In line with this area of research, and starting from the initial objective of boosting a speech machine translation system for the English/Arabic language pair (Dalvi et al., 2017), we conduct experiments on a subset of sessions from the WAW corpus (Abdelali et al., 2018), a corpus of simultaneously interpreted conference speeches, in order to learn about interpreters' behaviour and which strategies they employ to maintain good output accuracy while not letting their delay from the speaker grow too large. Our task is complex, as we want to find a way in which human expertise in interpreting can boost the performance of speech machine translation systems. With this article, we enrich our previous research (Temnikova et al., 2017; Abdelali et al., 2018) and run an extensive multilateral analysis on a subset of WAW corpus interpreted sessions, before extending to a larger number of sessions. The aim of this article is to test how much and what information we can extract through a combined manual (expert) and automatic analysis, and also to propose a new automatic method for décalage calculation. We present the results of a manual evaluation run by two human experts on the points of reference generated by our décalage method. Knowing that the strategies applied by interpreters and their décalage (including décalage as a sign of cognitive challenges and as a strategy) depend on source input characteristics, and that décalage can subsequently influence other interpreter variables (Lee, 2002), we analyze: 1) the source speech characteristics of several conference sessions (including the presence of noise and other interruptions), and 2) several output variables of interpreters (such as décalage, average interpreters' output speed, and the number of hesitations, repetitions, and false starts), and we interpret our findings using the rich knowledge of a practitioner interpreter with a background in Interpreting Studies. We address all these issues with a combination of automatic methods and manual (expert) annotations of both speech recordings and speakers' and interpreters' transcripts.
We link our new findings with the interpreting strategies manually annotated by two human annotators in the same subset of conference sessions (Abdelali et al., 2018; Temnikova et al., 2017); see Section 3. The rest of the article is structured as follows: Section 2 presents some of the relevant related work; Section 3 introduces the data and the general methodology; Sections 4 and 5 present the analysis of source speeches (both manual annotation and automatic analysis of fluency indicators and external conditions tags); Sections 6 and 8 discuss the analysis of interpreter variables (décalage and fluency indicators) and present our automatic décalage calculation method; Section 7 shows an approximate analysis of speakers' input rate and interpreters' delivery rate (speaking speed). Section 9 provides the overall discussion of results, and Section 10 concludes the article.
0
The use of notes written by healthcare providers in clinical settings has long been recognized as a source of valuable information for clinical practice and medical research. Access to large quantities of clinical reports may help in identifying causes of diseases, establishing diagnoses, detecting side effects of beneficial treatments, and monitoring clinical outcomes (Agus, 2016; Goldacre, 2014; Murdoch and Detsky, 2013). The goal of clinical natural language processing (NLP) is to develop and apply computational methods for linguistic analysis and extraction of knowledge from free-text reports (Demner-Fushman et al., 2009; Hripcsak et al., 1995; Meystre et al., 2008). But while the benefits of clinical NLP and data mining have been universally acknowledged, progress in the development of clinical NLP techniques has been slow. Several contributing factors have been identified, most notably difficult access to data, limited collaboration between researchers from different groups, and little sharing of implementations and trained models (Chapman et al., 2011). For comparison, in biomedical NLP, where the working data consist of biomedical research literature, these conditions have been present to a much lesser degree, and progress has been more rapid (Cohen and Demner-Fushman, 2014). The main contributing factor to this situation has been the sensitive nature of the data, whose processing may in certain situations put patients' privacy at risk. The ethics discussion is gaining momentum in general NLP (Hovy and Spruit, 2016). In this paper, we aim to gather the ethical challenges that are especially relevant for clinical NLP and to stimulate discussion about them in the broader NLP community. Although enhancing privacy through restricted data access has been the norm, we do not only discuss the right to privacy, but also draw attention to the social impact and biases emanating from clinical notes and their processing. The challenges we describe here are in large part not unique to clinical NLP, and are applicable to general data science as well.
0
Deep neural network-based machine learning (ML) models are powerful but vulnerable to adversarial examples. Adversarial examples also yield broader insights into the targeted models by exposing their behaviour on maliciously crafted inputs. The introduction of adversarial examples and adversarial training ushered in a new era in understanding and improving ML models, and has received significant attention recently (Szegedy et al., 2013; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016b; Carlini and Wagner, 2017; Yuan et al., 2019; Eykholt et al., 2018; Xu et al., 2019). Even though generating adversarial examples for texts has proven to be a more challenging task than for images and audio due to their discrete nature, a few methods have been proposed to generate adversarial text examples and reveal the vulnerability of deep neural networks in natural language processing (NLP) tasks, including reading comprehension (Jia and Liang, 2017), text classification (Samanta and Mehta, 2017; Wong, 2017; Liang et al., 2018; Alzantot et al., 2018), machine translation (Zhao et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2018), and dialogue systems. These recent methods attack text examples mainly by replacing, scrambling, and erasing characters, words, or other language units under certain semantics-preserving constraints. Although adversarial examples have been studied recently for NLP tasks, previous work has almost exclusively focused on semantic tasks, where the attacks aim to alter the semantic prediction of ML models (e.g., sentiment prediction or question answering) without changing the meaning of the original texts. To the best of our knowledge, adversarial examples for syntactic tasks, such as dependency parsing, have not been studied in the literature. Motivated by this, we take neural network-based dependency parsers as targeted models and aim to answer the following questions: Can we construct syntactic adversarial examples to fool a dependency parser without changing the original syntactic structure? And can we make dependency parsers robust with respect to these attacks? To answer these questions, we propose two approaches to study where and how parsers make mistakes by searching over perturbations to existing texts at the sentence and phrase (corresponding to subtrees in a parse tree) levels. For the sentence-level attack, we modify an input sentence to fool a dependency parser while keeping the modification syntactically imperceptible to humans (see Figure 1). Figure 1 shows an adversarial example for the output of a deep neural dependency parser (Dozat and Manning, 2017): replacing the word "stock" with the adversarially chosen word "exchange" causes the parser to make four mistakes in arc prediction, while the adversarial example preserves the original syntactic structure and the substitute word is assigned the same part of speech (POS) as the replaced one. Any new error (excluding the arcs directly connected to the modified parts) made by the parser counts as a successful attack. For the phrase-level (or subtree-level) attack, we choose two phrases from a sentence, which are separated by at least k words (with k ≥ 0), and modify one phrase to cause prediction errors in another target phrase (see Figure 2).
Unlike the sentence-level attack, any error occurring outside the target subtree is not counted as a successful attack. This setting helps us to investigate whether an error in one part of a parse tree may exert long-range influence and cause cascading errors (Ng and Curran, 2015). We study the sentence-level and subtree-level attacks in both white-box and black-box settings. In the former setting, an attacker can access the model's architecture and parameters, while in the latter this is not allowed. Our contributions are summarized as follows: (1) we explore the feasibility of generating syntactic adversarial sentence examples that cause a dependency parser to make mistakes without altering the original syntactic structures; (2) we propose two approaches to construct syntactic adversarial examples by searching over perturbations to existing texts at the sentence and phrase levels in both the black-box and white-box settings; (3) our experiments with a close-to-state-of-the-art parser on the English Penn Treebank show that up to 77% of input examples admit adversarial perturbations, and moreover that the robustness and generalization of parsing models can be improved by adversarial training with the proposed attacks. The source code is available at https://github.com/zjiehang/DPAttack. Figure 2 (phrase-level attack) illustrates the example sentence "In a stock-index arbitrage sell program, traders buy or sell big baskets of stocks and offset the trade in futures to lock in a price difference": two separate subtrees in the parse tree are selected, and one of them (the leftmost) is deliberately modified to cause the parser to make an incorrect arc prediction for another target subtree. For example, we can make a neural dependency parser (Dozat and Manning, 2017) attach the word "difference" in the target subtree to its sibling "in" instead of the correct head "lock" (the subtree's root) by maliciously manipulating the selected leftmost subtree only.
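A hedged sketch of a black-box, sentence-level substitution attack in this spirit is shown below; it is illustrative only and not the released DPAttack code, and the dummy parser and candidate list are placeholders so the example is self-contained. Candidate substitutes are assumed to already share the original word's POS tag.

def errors_outside_modification(gold_heads, pred_heads, modified_idx):
    # count wrong arcs, ignoring arcs whose dependent or gold head is the modified token
    return sum(1 for i, (g, p) in enumerate(zip(gold_heads, pred_heads))
               if g != p and modified_idx not in (i, g))

def sentence_level_attack(tokens, gold_heads, candidates, parse):
    best = (0, tokens)
    for idx, substitutes in candidates.items():
        for sub in substitutes:
            perturbed = tokens[:idx] + [sub] + tokens[idx + 1:]
            score = errors_outside_modification(gold_heads, parse(perturbed), idx)
            best = max(best, (score, perturbed))
    return best   # (number of induced arc errors, adversarial token sequence)

def dummy_parse(tokens):
    # toy parser stand-in: attach each word to the previous one, but mis-attach the
    # last word whenever "exchange" appears (imitating a long-range parsing error)
    heads = [0] + list(range(len(tokens) - 1))
    if "exchange" in tokens:
        heads[-1] = 0
    return heads

tokens = ["traders", "buy", "stock", "baskets", "today"]
gold_heads = dummy_parse(tokens)                    # the toy parser is correct on the original
candidates = {2: ["exchange", "share"]}             # same-POS substitutes for "stock"
print(sentence_level_attack(tokens, gold_heads, candidates, dummy_parse))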
0
The communication of critical imaging findings from the radiologist to the referring physician is a key factor in providing efficacious patient care (Lakhani et al., 2012). Currently, the most common form of communication is a physician-to-physician telephone conversation, initiated by the radiologist at the time of image interpretation. This process is tedious, inefficient, and error-prone. A missed communication can result in progressed disease, hospital readmission, and even death. In the United States, the American College of Radiology suggests three hallmarks of effective methods of communication: a) supporting the ordering provider in providing optimal patient care, b) using methods that are tailored to satisfy the need for timeliness, and c) implementing methods to minimize risk of communication errors (American College of Radiology, 2014). Critical findings may result in death or severe morbidity and require urgent or emergent attention (Larson et al., 2014). These critical test results are often documented in free-text imaging notes. Natural language processing (NLP) can automatically extract, track, and report these findings in a timely manner to support patient safety efforts.
0
Any text-to-speech (TTS) system that aims at producing understandable and natural-sounding output needs to have on-board methods for predicting prosody. Most systems start with generating a prosodic representation at the linguistic or symbolic level, followed by the actual phonetic realization in terms of (primarily) pitch, pauses, and segmental durations. The first step involves placing pitch accents and inserting prosodic boundaries at the right locations (and may involve tune choice as well). Pitch accents correspond roughly to pitch movements that lend emphasis to certain words in an utterance. Prosodic breaks are audible interruptions in the flow of speech, typically realized by a combination of a pause, a boundary-marking pitch movement, and lengthening of the phrase-final segments. Errors at this level may impede the listener in the correct understanding of the spoken utterance (Cutler et al., 1997). Predicting prosody is known to be a hard problem that is thought to require information on syntactic boundaries, syntactic and semantic relations between constituents, discourse-level knowledge, and phonological well-formedness constraints (Hirschberg, 1993). However, producing all this information (using full parsing, including establishing semanto-syntactic relations, and full discourse analysis) is currently infeasible for a real-time system. Resolving this dilemma has been the topic of several studies in pitch accent placement (Hirschberg, 1993; Black, 1995; Pan and McKeown, 1999; Pan and Hirschberg, 2000; Marsi et al., 2002) and in prosodic boundary placement (Wang and Hirschberg, 1997; Taylor and Black, 1998). The commonly adopted solution is to use shallow information sources that approximate full syntactic, semantic, and discourse information, such as the words of the text themselves, their part-of-speech tags, or their information content (in general, or in the text at hand), since words with a high (semantic) information content or load tend to receive pitch accents (Ladd, 1996). Within this research paradigm, we investigate pitch accent and prosodic boundary placement for Dutch, using an annotated corpus of newspaper text and machine learning algorithms to produce classifiers for both tasks. We address two questions that have been left open thus far in previous work: 1. Is there an advantage in inducing decision trees for both tasks, or is it better not to abstract from individual instances and to use a memory-based k-nearest neighbour classifier? 2. Is there an advantage in inducing classifiers for both tasks individually, or can both tasks be learned together? The first question deals with a key difference between standard decision tree induction and memory-based classification: how to deal with exceptional instances. Decision trees, CART (Classification and Regression Trees) in particular (Breiman et al., 1984), have been among the first successful machine learning algorithms applied to predicting pitch accents and prosodic boundaries for TTS (Hirschberg, 1993; Wang and Hirschberg, 1997). Decision tree induction finds, through heuristics, a minimally sized decision tree that is estimated to generalize well to unseen data. Its minimality strategy makes the algorithm reluctant to remember individual outlier instances that would take long paths in the tree: typically, these are discarded. This may work well when outliers do not reoccur, but as demonstrated by Daelemans et al. (1999), exceptions do typically reoccur in language data.
Hence, machine learning algorithms that retain a memory trace of individual instances, like memory-based learning algorithms based on the k-nearest neighbour classifier, outperform decision tree or rule inducers precisely for this reason. Comparing the performance of machine learning algorithms is not straightforward, and deserves careful methodological consideration. For a fair comparison, both algorithms should be objectively and automatically optimized for the task to be learned. This point is made by Daelemans and Hoste (2002), who show that, for tasks such as word-sense disambiguation and part-of-speech tagging, tuning algorithms in terms of feature selection and classifier parameters gives rise to significant improvements in performance. In this paper, therefore, we optimize both CART and MBL individually and per task, using a heuristic optimization method called iterative deepening. The second issue, that of task combination, stems from the intuition that the two tasks have a lot in common. For instance, Hirschberg (1993) reports that knowledge of the location of breaks facilitates accent placement. Although pitch accents and breaks do not consistently occur at the same positions, they are to some extent analogous to phrase chunks and head words in parsing: breaks mark boundaries of intonational phrases, in which typically at least one accent is placed. A learner may thus be able to learn both tasks at the same time. Apart from the two issues raised, our work is also practically motivated. Our goal is a good algorithm for real-time TTS. This is reflected in the type of features that we use as input: these can be computed in real time, and are language-independent. We intend to show that this approach goes a long way towards generating high-quality prosody, casting doubt on the need for more expensive sentence and discourse analysis. The remainder of this paper has the following structure. In Section 2 we define the task, describe the data, and describe the feature generation process, which involves POS tagging, syntactic chunking, and computing several information-theoretic metrics. Furthermore, a brief overview is given of the algorithms we used (CART and MBL). Section 3 describes the experimental procedure (ten-fold iterative deepening) and the evaluation metrics (F-scores). Section 4 reports the results for predicting accents and major prosodic boundaries with both classifiers. It also reports their performance on held-out data and on two fully independent test sets. The final section offers some discussion and concluding remarks.
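The contrast with decision trees can be made concrete with a minimal memory-based classifier: every training instance is kept in memory, and a label is assigned by the k nearest neighbours under a simple overlap distance. The features and instances below are invented for illustration and are not the paper's feature set.

from collections import Counter

def overlap_distance(a, b):
    # number of feature positions where two symbolic feature vectors disagree
    return sum(x != y for x, y in zip(a, b))

def knn_classify(instance, memory, k=3):
    neighbours = sorted(memory, key=lambda m: overlap_distance(instance, m[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# memory: (features, accent label); features = (word form, POS tag, clause-final?)
memory = [
    (("the", "DET", False), "unaccented"),
    (("a", "DET", False), "unaccented"),
    (("storm", "NOUN", True), "accented"),
    (("flood", "NOUN", False), "accented"),
    (("of", "PREP", False), "unaccented"),
]
print(knn_classify(("rain", "NOUN", True), memory, k=3))   # -> accented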
0
The Arabic language is one of the oldest languages in the world, and Arabic dialects have emerged from it over the years. Although Modern Standard Arabic (MSA) is the only standardized form of the Arabic language with a predefined set of grammatical rules, it is only used in education, some media channels, and official written documents. This owes to the fact that people tend to use dialects more in their daily life. Those dialects deviate from MSA in terms of morphology, phonology, lexicon, and syntax (Janet, 2007). For example, a morphological difference can be seen in the affixes that are appended to the verb to indicate its tense, like the prefix that indicates the present tense in the Jordanian dialect. The existence of many varieties of Arabic dialects gave rise to the task of automatic identification of written Arabic dialects, since a prior identification of those dialects is essential to many applications, such as sentiment analysis, opinion mining, author profiling, and machine translation. Despite the significant differences between the dialects, they still share some similarities, such as having common character/vocabulary sets and basic language rules, which makes dialect identification an interesting but challenging problem. Moreover, the closeness between some dialects within the same country makes it even more challenging to distinguish between them. Unlike most of the previous work, which targeted coarse-grained Arabic dialect identification, this work presents the participation of the Qatar University team in the Multi Arabic Dialect Applications and Resources (MADAR) shared task (Bouamor et al., 2019), which addresses a fine-grained classification of 25 dialects of different Arabic cities in addition to MSA. We propose a simple classification approach that only utilizes word and character n-grams with a Naïve Bayes learning model. While our approach is very simple (depending only on two categories of lexical features), it proved not to be naïve; the official testing results show that our best submitted run achieved reasonably good F1 scores across the different dialects, ranging from 0.52 to 0.84. The rest of the paper is organized as follows. Section 2 outlines the data used in building/training our models. Section 3 details our proposed approach. Section 4 presents our runs and official testing results. Section 5 discusses and analyzes the performance of our best run. Finally, Section 6 concludes our work with some directions for future work.
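The kind of classifier just described can be sketched in a few lines with scikit-learn; the n-gram ranges, the toy transliterated sentences, and the dialect labels below are illustrative placeholders, not the team's exact configuration or the MADAR data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import FeatureUnion, make_pipeline

# combine word n-grams and character n-grams as the two categories of lexical features
features = FeatureUnion([
    ("word_ngrams", CountVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char_ngrams", CountVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
])
classifier = make_pipeline(features, MultinomialNB())

# toy transliterated examples standing in for sentences from two dialects
train_texts = ["shlonak shako mako", "shlonak shged", "izayak eh el akhbar", "izayak amel eh"]
train_dialects = ["IRQ", "IRQ", "EGY", "EGY"]
classifier.fit(train_texts, train_dialects)
print(classifier.predict(["shako mako shged"]))   # -> ['IRQ']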
0
Recently, many vision and language (V&L) models that combine images and text have been proposed (Lu et al., 2019; Tan and Bansal, 2019; Li et al., 2019; Su et al., 2020) . These models follow the pretrain-andfinetune paradigm, i.e. they are pretrained using self-supervision on large amounts of image-caption pairs 1 and are then finetuned on the task(s) of interest. Such V&L models have obtained state-of-theart performance across a number of different V&L tasks, e.g. visual question answering (VQA); visual commonsense reasoning; grounding referring expressions; and image retrieval, among others.Pretrained V&L models use a combination of masked multimodal modelling -i.e., masking out words and object bounding boxes from the input and predicting them -and image-sentence alignment, i.e., predicting whether an image-sentence pair is correctly aligned or not. Such models hold the promise of partially addressing the 'meaning gap' in unimodal pretrained language models such as BERT (Devlin et al., 2019) by directly connecting language to visual representations (Bender and Koller, 2020; Bisk et al., 2020) .In this paper, we use foiling to investigate how well pretrained V&L models integrate and reason upon textual and visual representations. The foiling strategy, introduced by Shekhar et al. (2017) in the context of vision and language tasks, relies on replacing an element in a text with another element, such that the replacement results in a mismatch with the image. We propose two tasks which require effective multimodal integration: (1) discriminating a correctly aligned image-sentence pair from an incorrectly aligned one, and (2) counting entities in the image.V&L models are commonly pretrained on task (1), and should not have many difficulties detecting incorrect image-sentence pairs. Counting, our task (2), nicely puts together visual and textual reasoning. It requires the detection of object instances in the visual input, mapping these instances to categories, as well as properly aligning such instances to references in the textual input. Model architectures have been proposed specifically for counting, which is known to be a hard V&L problem (Zhang et al., 2018; Acharya et al., 2019; Trott et al., 2018; Chattopadhyay et al., 2017) . Unlike these specialised approaches, we focus on generalpurpose V&L models. Related V&L work has also investigated generalised quantifiers (such as most) in a V&L context, but this work has generally exploited synthetic datasets (Sorodoc et al., 2018; Pezzelle and Fernández, 2019; Testoni et al., 2019 ). Here, we task the model to judge whether an unambiguous question or statement about the number of entities visible in a natural image is correct.We use three publicly available, representative V&L models in our investigation: LXMERT 2 (Tan and Bansal, 2019) , ViLBERT and ViLBERT 12in-1 3 (Lu et al., 2019 . ViLBERT and ViL-BERT 12-in-1 use the same BERT-based model architecture, which incorporates two separate visual and linguistic streams that interact through multiple co-attention transformer layers. ViLBERT is trained using self-supervised learning on imagecaption pairs, while ViLBERT 12-in-1 is further finetuned on 12 different tasks using multi-task learning. LXMERT is also a dual-stream architecture and combines textual and visual transformerbased encoders with cross-modal layers. 
However, LXMERT is pretrained not only on image-caption pairs but also directly on the visual question answering (VQA) task using multiple VQA datasets. While all models are trained on image-sentence alignment, only ViLBERT is not directly trained on VQA; hence the model can be probed "zero-shot" on our counting task. LXMERT, by contrast, is pretrained on VQA (including how many questions, the focus of our counting probe). Hence, LXMERT was exposed to examples where answering a question correctly requires the model to detect and categorise instances in an image, and then align these to the text. Finally, the tasks ViLBERT 12-in-1 is finetuned on also include VQA, including instances with numerical answers requiring counting abilities. We therefore believe it serves as a solid baseline and should be well equipped to detect foiled probing instances. To our surprise, we find that none of these models perform particularly well in our counting experiments. Our main contributions are: i) We show that all three models perform image-sentence alignment well, as expected given their pretraining; ii) We build a counting probe, which requires a model to adequately perform cross-modal grounding; iii) We find that ViLBERT, ViLBERT 12-in-1 and LXMERT perform similarly to the random baseline when directly applying the image-sentence alignment head to perform counting without finetuning; iv) We find that all models seem to exploit dataset bias and fail to generalise to out-of-distribution quantities. Even when finetuned, they only partially solve our counting probe. (Footnotes: 2 github.com/huggingface/transformers; 3 github.com/facebookresearch/vilbert-multi-task)
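To make the foiling setup concrete, here is a minimal sketch of how a counting foil could be constructed by swapping a number word in a caption so that the pair no longer matches the image. The number-word inventory and replacement policy are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: build a counting foil by replacing a number word in the caption so that
# the image-sentence pair becomes mismatched. Number-word list and replacement
# policy are illustrative assumptions.
import re

NUMBER_WORDS = ["one", "two", "three", "four", "five", "six"]

def make_counting_foil(caption):
    """Return a foiled caption with the first number word changed, or None if no number is found."""
    pattern = re.compile(r"\b(" + "|".join(NUMBER_WORDS) + r")\b", re.IGNORECASE)
    match = pattern.search(caption)
    if match is None:
        return None
    original = match.group(1).lower()
    # pick a different, nearby quantity so the foil stays plausible
    replacement = NUMBER_WORDS[(NUMBER_WORDS.index(original) + 1) % len(NUMBER_WORDS)]
    return caption[:match.start()] + replacement + caption[match.end():]

print(make_counting_foil("Two dogs are playing in the park."))
# -> "three dogs are playing in the park."  (the image shows two dogs, so the pair is now foiled)
```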
0
In applications of complex Natural Language Processing tasks, such as automatic knowledge base construction, entity summarization, and question answering systems, it is essential to first have high quality systems for lower level tasks, such as partof-speech (POS) tagging, chunking, named entity recognition (NER), entity linking, and parsing among others. These lower level tasks are usually decoupled and optimized separately to keep the system tractable. The disadvantage of the decoupled approach is that each lower level task is not aware of other tasks and thus not able to leverage information provided by others to improve performance. What is more, there is no guarantee that their outputs will be consistent.This paper addresses the problem by building a joint model for Entity Recognition and Disambiguation (ERD). The goal of ERD is to extract named entities in text and link extracted names to a knowledge base, usually Wikipedia or Freebase. ERD is closely related to NER and linking tasks. NER aims to identify named entities in text and classify mentions into predefined categories such as persons, organizations, locations, etc. Given a mention and context as input, entity linking connects the mention to a referent entity in a knowledge base.Existing ERD systems typically run a NER to extract entity mentions first, then run an entity linking model to link mentions to a knowledge base. Such a decoupled approach makes the system tractable, and both NER and linking models can be optimized separately. The disadvantages are also obvious: 1) errors caused by NER will be propagated to linking and are not recoverable 2) NER can not benefit from information available used in entity linking; 3) NER and linking may create inconsistent outputs.We argue that there is strong mutual dependency between NER and linking tasks. Consider the following two examples:1. The New York Times (NYT) is an American daily newspaper. 2. Clinton plans to have more news conferences in 2nd term. WASHINGTON 1996-12-06 Example 1 is the first sentence from the Wikipedia article about "The New York Times". It is reasonable but incorrect for NER to identify "New York Times" without "The" as a named entity, while entity linking has no trouble connecting "The New York Times" to the correct entity.Example 2 is a news title where our NER classifies "WASHINGTON" as a location, since a location followed by a date is a frequent pattern in news articles it learned, while the entity linking prefers linking this mention to the U.S. president "George Washington" since another president's name "Clinton" is mentioned in the context. Both the entity boundaries and entity types predicted by NER are correlated to the knowledge of entities linked by entity linking. Modeling such mutual dependency is helpful in resolving inconsistency and improving performance for both NER and linking.We propose JERL, Joint Entity Recognition and Linking, to jointly model NER and linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. If NER is highly confident on its outputs of entity boundaries and types, it will encourage entity linking to link an entity which is consistent with NER's outputs, and vice versa. In other words, JERL is able to model how consistent NER and linking's outputs are, and predict coherent outputs. According to our experiments, this approach does improve the end to end performance. 
To the best of our knowledge, JERL is the first model to jointly optimize NER and linking tasks together completely .Sil (2013) also proposes jointly conducting NER and linking tasks. They leverage existing NER/chunking systems and Freebase to over generate mention candidates and leave the linking algorithm to make final decisions, which is a reranking model. Their model captures the dependency between entity linking decisions and mention boundary decisions with impressive results. The difference between our model and theirs is that our model jointly models NER and linking tasks from the training phrase, while their model is a combined one which depends on an existing state-of-art NER system. Our model is more powerful in capturing mutual dependency by considering entity type and confidences information, while in their model the confidence of outputs is lost in the linking phrase. Furthermore, in our model NER can naturally benefit from entity linking's decision since both decisions are made together, while in their model, it is not clear how the linking decision can help the NER decision in return.Joint optimization is costly. It increases the problem complexity, is usually inefficient, and requires the careful consideration of features of multiple tasks and mutual dependency, making proper assumptions and approximations to enable tractable training and inference. However, we believe that joint optimization is a promising direction for improving performance for NLP tasks since it is closer to how human beings process text information. Experiment result indicates that our joint model does a better job at both NER and linking tasks than separate models with the same features, and outperforms state-of-art systems on a widely used data set. We found improvements of 0.4% absolute F 1 for NER on CoNLL'03 and 0.36% absolute precision@1 for linking on AIDA. NER is a widely studied problem, and we believe our improvement is significant.The contributions of this paper are as follows: 1. We identify the mutual dependency between NER and linking tasks, and argue that NER and linking should be conducted together to improve the end to end performance. 2. We propose the first completely joint NER and linking model, JERL, to train and inference the two tasks together. Efficient training and inference algorithms are also presented. 3. The JERL outperforms the best NER record on the CoNLL'03 data set, which demonstrates how NER could be improved further by leveraging knowledge base and linking techniques.The remainder of this paper is organized as follows: the next section discusses related works on NER, entity linking, and joint optimization; section 3 presents our Joint Entity Recognition and Linking model in detail; section 4 describes experiments, results, and analysis; and section 5 concludes.
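As a toy illustration of the mutual-dependency argument (not the JERL model itself), the snippet below contrasts a pipeline decision, where NER commits to a boundary alone, with a joint decision that also weighs the linking score; all candidate boundaries and scores are invented.

```python
# Toy illustration (not JERL itself): joint decoding picks the candidate whose
# combined NER + linking score is highest, instead of committing to the NER-best
# mention boundary first. All scores below are made up.
candidates = [
    # (mention boundary, NER score, best KB link, linking score)
    ("New York Times",     0.62, "The New York Times (newspaper)", 0.40),
    ("The New York Times", 0.55, "The New York Times (newspaper)", 0.95),
]

pipeline_choice = max(candidates, key=lambda c: c[1])          # NER decides alone
joint_choice = max(candidates, key=lambda c: c[1] + c[3])      # NER + linking decide together

print("pipeline:", pipeline_choice[0])   # -> "New York Times"   (boundary error would propagate)
print("joint   :", joint_choice[0])      # -> "The New York Times"
```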
0
People are social beings who communicate their feelings, emotions, thoughts, ideas, etc. through verbal and non-verbal interactions. Based on these interactions, we build relationships, and these relationships, in turn, help create and maintain a network of peers. Peers in a network cooperate with each other, help each other to learn, and exchange ideas. However, they also compete for the same resources (Vega-Redondo et al., 2019), not least attention. Peer networks are particularly important for innovation and entrepreneurship (Gonzalez-Uribe and Leatherbee, 2017), as they produce an active exchange of ideas. People are usually assumed to be altruistic in networks like online social forums. They cooperate with and help one another with answers, advice, and ideas. The motivations behind helping a peer include, but are not limited to, getting pure pleasure from helping, self-advancement, building a reputation, developing relationships, or sheer entertainment (Tausczik and Pennebaker, 2012). When people interact with each other, their interactions vary along a number of communicative styles, such as showing cooperativeness, equality, business orientation, etc. (Rashid and Blanco, 2018). Varying these communication styles provides tools to achieve communicative goals. For example, someone trying to build a reputation will tend to use a more cooperative style. Someone who tries to be helpful may use more words of advice in their interactions. The usage of relationship-establishing styles is more prevalent in certain personalities (Cheng, 2011) and in specific settings. Business-oriented people communicate more independence, tolerance of ambiguity, risk-taking propensity, innovativeness, and leadership qualities (Wagener et al., 2010). The impact of these styles is, therefore, an essential factor in text analysis. However, due to their complex, decentralized nature, these communication styles have been studied very little in NLP. Cooperativeness is more than just a few keywords; it includes a whole inventory of communicative tools. This property makes it harder to annotate and predict. Part of the reason is the lack of adequate corpora. We provide such a corpus and report encouraging results for the above styles. Contributions: We introduce a new task, predicting the communicative strategies of interlocutors in a real-life setting, and provide a new, multiply-annotated data set of 5k+ instances. We find that the various communicative dimensions can be efficiently predicted. Additional tests suggest that the communicative strategy of a person is somewhat predictive of their business success.
0
Wikipedia, one of the most frequently visited web sites nowadays, contains the largest amount of knowledge ever gathered in one place by volunteer contributors around the world (Poe, 2006). Each Wikipedia article contains information about one entity or concept, gathers information about entities of one particular type (the so-called list pages), or provides information about homonyms (disambiguation pages). As of July 2007, Wikipedia contains close to two million articles in English. In addition to the English-language version, there are 200 versions in other languages. Wikipedia has about 5 million registered contributors, averaging more than 10 edits per contributor. Natural language processing and search tools can greatly benefit from Wikipedia by using it as an authoritative source of common knowledge, by exploiting its interlinked structure and disambiguation pages, or by extracting concept co-occurrence information. This paper presents a successful study on enriching the Wikipedia data with named entity tags. Such tags could be employed by disambiguation systems such as Bunescu and Paşca (2006) and Cucerzan (2007), in mining relationships between named entities, or in extracting useful facet terms from news articles (e.g., Dakka and Ipeirotis, 2008). In this work, we classify the Wikipedia pages into categories similar to those used in the CoNLL shared tasks (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) and ACE (Doddington et al., 2004). To the best of our knowledge, this is the first attempt to perform such classification on the English language version of the collection. Although the task settings are different, the results we obtained are comparable with those previously reported in document classification tasks. We examined the Wikipedia pages to extract several feature groups for our classification task. We also observed that each entity/concept has at least two pseudo-independent views (page-based features and link-based features), which allows the use of a co-training method to boost the performance of classifiers trained separately on each view. The classifier that achieved the best accuracy on our test set was then applied to all Wikipedia pages, and its classifications are provided to the academic community for use in future studies through a Web service. (Footnotes: 1 Watanabe et al. (2007) have recently reported experiments on categorizing named entities in the Japanese version of Wikipedia using a graph-based approach. 2 The Web service is available at wikinet.stern.nyu.edu.)
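A minimal co-training sketch in the spirit of the two-view setup described above, using synthetic stand-ins for the page-based and link-based feature views and a shared labeled pool (a simplification of the classic formulation).

```python
# Minimal co-training sketch: two classifiers, one per pseudo-independent view
# (page-based vs. link-based features), repeatedly label unlabeled pages and grow
# a shared labeled pool (a simplification of the classic formulation). Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
y_true = rng.integers(0, 2, size=n)
view_page = y_true[:, None] + rng.normal(scale=1.0, size=(n, 5))   # page-based view
view_link = y_true[:, None] + rng.normal(scale=1.0, size=(n, 5))   # link-based view

labels = {i: int(y_true[i]) for i in range(40)}                    # small hand-labeled seed

for _ in range(5):                                                 # a few co-training rounds
    for view in (view_page, view_link):
        idx = sorted(labels)
        clf = LogisticRegression().fit(view[idx], [labels[i] for i in idx])
        unlabeled = [i for i in range(n) if i not in labels]
        if not unlabeled:
            break
        proba = clf.predict_proba(view[unlabeled])
        for j in np.argsort(proba.max(axis=1))[-10:]:              # most confident predictions
            labels[unlabeled[j]] = int(proba[j].argmax())

acc = np.mean([labels[i] == y_true[i] for i in labels])
print(f"labeled pool grew to {len(labels)} pages, accuracy {acc:.2f}")
```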
0
Pretrained language models (LMs), like BERT and GPTs (Brown et al., 2020), have shown remarkable performance on many natural language processing (NLP) tasks, such as text classification and question answering (Raffel et al., 2020), becoming the foundation of modern NLP systems. By performing self-supervised learning on text, such as masked language modeling, LMs learn to encode various knowledge from text corpora and produce informative language representations for downstream tasks (Bosselut et al., 2019; Bommasani et al., 2021). However, existing LM pretraining methods typically consider a single document in each input context (Joshi et al., 2020), and do not model links between documents. This can pose limitations because documents often have rich dependencies with each other (e.g. hyperlinks, references), and knowledge can span across documents. As a simple example, in Figure 1, the Wikipedia article "Tidal Basin, Washington D.C." (left) describes that the basin hosts "National Cherry Blossom Festival", and the hyperlinked article (right) reveals the background that the festival celebrates "Japanese cherry trees". Taken together, the hyperlink offers new, multi-hop knowledge "Tidal Basin has Japanese cherry trees", which is not available in the single article "Tidal Basin" alone. Acquiring such multi-hop knowledge in pretraining could be useful for various applications including question answering. In fact, document links like hyperlinks and references are ubiquitous (e.g. web, books, scientific literature), and guide how we humans acquire knowledge and even make discoveries too (Margolis et al., 1999). [Figure 2 caption: Overview of our approach, LinkBERT. Given a pretraining corpus, we view it as a graph of documents, with links such as hyperlinks (§4.1). To incorporate the document link knowledge into LM pretraining, we create LM inputs by placing a pair of linked documents in the same context (linked), besides the existing options of placing a single document (contiguous) or a pair of random documents (random) as in BERT. We then train the LM with two self-supervised objectives: masked language modeling (MLM), which predicts masked tokens in the input, and document relation prediction (DRP), which classifies the relation of the two text segments in the input (contiguous, random, or linked) (§4.2).] In this work, we propose LinkBERT, an effective language model pretraining method that incorporates document link knowledge. Given a text corpus, we obtain links between documents such as hyperlinks, and create LM inputs by placing linked documents in the same context window, besides the existing option of placing a single document or random documents as in BERT. Specifically, as in Figure 2, after sampling an anchor text segment, we place either (1) the contiguous segment from the same document, (2) a random document, or (3) a document linked from the anchor segment, as the next segment in the input. We then train the LM with two joint objectives: we use masked language modeling (MLM) to encourage learning multi-hop knowledge of concepts brought into the same context by document links (e.g. "Tidal Basin" and "Japanese cherry" in Figure 1). Simultaneously, we propose a Document Relation Prediction (DRP) objective, which classifies the relation of the second segment to the first segment (contiguous, random, or linked). DRP encourages learning the relevance and bridging concepts (e.g.
"National Cherry Blossom Festival") between documents, beyond the ability learned in the vanilla next sentence prediction objective in BERT.Viewing the pretraining corpus as a graph of documents, LinkBERT is also motivated as self-supervised learning on the graph, where DRP and MLM correspond to link prediction and node feature prediction in graph machine learning (Yang et al., 2015; . Our modeling approach thus provides a natural fusion of language-based and graph-based self-supervised learning.We train LinkBERT on two domains: the general domain, using Wikipedia articles with hyperlinks ( §4), and the biomedical domain, using PubMed articles with citation links ( §6). We then evaluate the pretrained models on a wide range of downstream tasks including question answering, in both domains. LinkBERT consistently improves on baseline LMs across domains and tasks. For the general domain, LinkBERT outperforms BERT on MRQA benchmark (+4% absolute in F1-score) as well as GLUE benchmark. For the biomedical domain, LinkBERT exceeds PubmedBERT (Gu et al., 2020) and attains new state-of-the-art on BLURB biomedical NLP benchmark (+3% absolute in BLURB score) and MedQA-USMLE reasoning task (+7% absolute in accuracy). Overall, LinkBERT attains notably large gains for multi-hop reasoning, multidocument understanding, and few-shot question answering, suggesting that LinkBERT internalizes significantly more knowledge than existing LMs by pretraining with document link information.
0
In dependency semantic parsing, one is given a natural language sentence and has to output a directed graph representing an associated, mostlikely semantic analysis. Semantic parsing integrates tasks that have usually been addressed separately in statistical natural language processing, such as named entity recognition, word sense disambiguation, semantic role labeling, and coreference resolution. Semantic parsing is currently receiving considerable attention, as attested by the number of approaches being proposed for its solution (Oepen et al., 2014 (Oepen et al., , 2015 and by the variety of existing semantic representations and available datasets (Kuhlmann and Oepen, 2016) .A successful approach to dependency semantic parsing by Wang et al. (2015b,a) first parses the input sentence into a dependency tree t, and then applies a transition-based algorithm that translates t into a dependency graph in Abstract Meaning Representation (AMR), a popular semantic representation developed by Banarescu et al. (2013) . In this work, we present a finite-state transducer for tree-to-graph translation that can serve as a mathematical model for transition-based systems such as the one by Wang et al. (2015b) and, more in general, for work on the syntax-semantics interface.Bottom-up tree transducers (Thatcher, 1973) have gained significant attention in the field of machine translation, where they are used to map syntactic phrase structure trees from source to target languages. This holds in particular for their "extended" version, which may process, in a single step, sections of the input consisting of several symbols; see (Maletti et al., 2009) and references therein. We propose a similar formalism for dependency semantic parsing, mapping syntactic dependency trees into directed graphs that represent the associated semantic interpretation.When translating dependency trees into graphs in a bottom-up fashion, we face two problems. Firstly, bottom-up tree transducers process ranked trees, i.e., the number of children at each node is bounded by some constant. Thus, typically, these tree transducers use a single rule to process in one shot a node along with all of its (previously processed) children in the source tree. In contrast, in the case of dependency trees there is no global constant that limits the number of children a node may have, and processing all of the children by means of a single rule is problematic.Secondly, in an output tree of a bottom-up tree transducer, nodes that are located near one another are translations of nodes in a source tree that are in close proximity as well. This condition is often referred to as locality. Locality does no longer hold true when translating trees into graphs. In fact, so-called reentrancy nodes in a graph have several parents, which are translations of nodes in the source tree whose distance from one another may not be bounded by a constant. Reentrancies thus require some form of nonlocal processing, generally not found in tree transducers.The main contribution of this work is a finitestate tree-to-graph transducer that processes dependency trees in a bottom-up, left-to-right fashion. Our solution to the two problems mentioned above is rather simple. Each node is processed together with its children in several translation steps which consume the children left to right. 
Furthermore, in order to implement reentrancy, each translated subtree produces a graph annotated with a record of selected vertices, to be made accessible later in the translation process.While our transducers use extended translation rules in the sense of Maletti et al. (2009) , they can be cast in a simple normal form, facilitating algorithmic processing. We provide a polynomial time algorithm for translating an input dependency tree into a packed graph forest, from which each translation graph can efficiently be recovered.Related work. Bottom-up tree-to-graph transducers were introduced by Vogler (1994, 1998) who based their work on hyperedge replacement. Since the graph construction mechanism we use is equivalent to hyperedge replacement, our notion of tree-to-graph transducers is essentially an unranked and extended generalization of theirs, except for the fact that ours cannot create multiple copies of unbounded material in the input. This ability seems inappropriate for modeling natural language semantics.The system by Wang et al. (2015b) has inspired our work. A technical comparison between their formalism and ours is made in Remark 1. An alternative approach to the syntax-semantics interface exploits multi-component synchronous tree-adjoining grammars; see Nesson and Shieber (2006) and references therein. However, these formal models yield tree-like semantic representations, as opposed to general graphs.A common approach in semantic parsing is to extend existing syntactic dependency parsers to produce graphs, realizing translation models from strings to graphs, as opposed to the treeto-graph model investigated here. On this line, transition-based, greedy parsers have been adapted by Ballesteros and Al-Onaizan (2017) , Damonte et al. (2017), Hershcovich et al. (2017) , Peng et al. (2018) and Vilares and Gómez-Rodríguez (2018) . Despite the fact that the input is a bare string, these systems exploit features obtained from a precomputed run of a dependency parser, thus committing to some best parse tree, similarly to the pipeline model of Wang et al. (2015b) . Dynamic programming parsers have also been adapted to produce graphs by Kuhlmann and Jonsson (2015) and Schluter (2015) . Semantic translation from strings to graphs is further investigated by Jones et al. (2012) and Peng et al. (2015) using synchronous hyperedge replacement grammars, who provide unsupervised learning algorithms for grammar extraction. Finally, Groschwitz et al. (2018) use a neural supertag parser to map a string into a dependency-style tree representation of the com-positional structure of the corresponding AMR graph. More precisely, this tree is a term in a special algebra: its constants denote lexicalized AMR graph fragments, which are combined into larger and larger AMR graphs by two binary algebraic operations for graph combination. These operations supply a partial AMR graph either with an argument or with a modifier. The evaluation of the term then yields the output AMR for the input sentence. The tree-to-graph mapping is entirely deterministic, in contrast to our approach. Groschwitz et al. (2018) also provide an unsupervised alignment algorithm that extracts rules from semantic graph banks.
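The following toy sketches only the processing order discussed above: an unranked dependency tree is traversed bottom-up, children are consumed left to right, and each subtree threads along a record of selected vertices so that a later step can attach a reentrant edge. It is not the transducer formalism itself; the tree, rule conditions, and edge labels are invented.

```python
# Toy illustration of bottom-up, left-to-right processing of an unranked dependency
# tree: each subtree contributes edges and a record of selected vertices, which a
# later rule can use to add a reentrant edge. This sketches the processing order
# only, not the transducer formalism.
tree = ("want", [("boy", []), ("sleep", [])])   # "the boy wants to sleep"

def translate(node, edges, record):
    head, children = node
    for child in children:                      # children consumed left to right
        child_head = translate(child, edges, record)
        edges.append((head, "arg", child_head))
        if child_head == "boy":                 # expose this vertex for later reentrancy
            record["exposed_subject"] = child_head
    if head == "sleep" and "exposed_subject" in record:
        edges.append((head, "arg0", record["exposed_subject"]))   # reentrant edge
    return head

edges, record = [], {}
translate(tree, edges, record)
print(edges)
# [('want', 'arg', 'boy'), ('sleep', 'arg0', 'boy'), ('want', 'arg', 'sleep')]
```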
0
Emotion classification has become increasingly important due to the large-scale deployment of artificial emotional intelligence. In various aspects of our lives, these systems now play a crucial role. For example, customer care solutions are now gradually shifting to a hybrid mode where an AI will try to solve the problem first, and only when it fails, will a human intervene. The WASSA 2022 Shared Task covers four different tasks on Empathy Detection, Emotion Classification, Personality Prediction, and Interpersonal Reactivity Index Prediction. We participated in task 1 on Empathy Detection and task 2 on Emotion Classification.Most of the existing emotion classification tasks are restricted to only using signals such as video, audio, or text, but seldom using demographic data, partly because such information is often not available. However, using demographic information also raises ethical concerns. In the current shared task, additional demographic information was made available, thus implicitly inviting participants to investigate the interaction between empathy, emotion, and demographic information. In this work, we will compare two different systems, one using demographic data and one that does not.Our text-only system performs very competitively. In the evaluation, we ranked first in the empathy detection task and second in the emotion classification task 1 . Adding demographic information to the systems makes them less competitive.The remainder of the paper is structured as follows: In section 2, we will discuss the related work on emotion classification. In section 3, we will present our two systems and discuss their differences. We will also discuss the challenges we encountered and how we addressed them. In section 4, we will present the evaluation results of our systems and the performance of our other systems. We will also discuss the implications of these results. In section 5 we will conclude and discuss future research efforts.
0
There is much computer-assisted language learning (CALL) literature that explores effective methods of teaching vocabulary. In recent studies conducted using the REAP system, which finds documents from the internet to teach vocabulary, we have shown that speech synthesis reinforces written text for learning in reading activities (Dela Rosa et al., 2010) , and we have also shown that contextsensitive dictionary definitions afford better vocabulary learning for L2 language students (Dela Rosa and Eskenazi, 2011) .One issue that remains to be explored in this context: determining what factors make an individual word easier to learn. We propose that word complexity, on both the phonetic and semantic levels, can affect how easily an L2 vocabulary word can be learned.In this paper we first discuss past work on factors that impact vocabulary acquisition in intelligent tutoring environments, and then explore work on defining the complexity of a word with respect to vocabulary learning. Next we describe two classroom studies we conducted with ESL college students to test the effect of word complexity on L2 vocabulary learning. Finally we examine our results and suggest future research directions.
0
Meaning relations refer to the way in which two sentences can be connected, e.g. if they express approximately the same content, they are considered paraphrases. Other meaning relations we focus on here are textual entailment and contradiction 1 (Dagan et al., 2005) , and specificity.Meaning relations have applications in many NLP tasks, e.g. recognition of textual entailment is used for summarization (Lloret et al., 2008) or machine translation evaluation (Padó et al., 2009) , and paraphrase identification is used in summarization (Harabagiu and Lacatusu, 2010) .The complex nature of the meaning relations makes it difficult to come up with a precise and widely accepted definition for each of them. Also, there is a difference between theoretical definitions and definitions adopted in practical tasks. In this paper, we follow the approach taken in pre-vious annotation tasks and we give the annotators generic and practically oriented instructions.Paraphrases are differently worded texts with approximately the same content (Bhagat and Hovy, 2013; De Beaugrande and Dressler, 1981) . The relation is symmetric. In the following example, (a) and (b) are paraphrases. Textual Entailment is a directional relation between pieces of text in which the information of the Text entails the information of the Hypothesis (Dagan et al., 2005) . In the following example, Text (t) entails Hypothesis Specificity is a relation between phrases in which one phrase is more precise and the other more vague. Specificity is mostly regarded between noun phrases (Cruse, 1977; Enç, 1991; Farkas, 2002) . However, there has also been work on specificity on the sentence level (Louis and Nenkova, 2012) . In the following example, (c) is more specific than (d) as it gives information on who does not get good education: (c) Girls do not get good education. Semantic Similarity between texts is not a meaning relation in itself, but rather a gradation of meaning similarity. It has often been used as a proxy for the other relations in applications such as summarization (Lloret et al., 2008) , plagiarism detection (Alzahrani and Salim, 2010; Bär et al., 2012) , machine translation (Padó et al., 2009) , question answering (Harabagiu and Hickl, 2006) , and natural language generation (Agirre et al., 2013) . We use it in this paper to quantify the strength of relationship on a continuous scale. Given two linguistic expressions, semantic text similarity measures the degree of semantic equivalence (Agirre et al., 2013) . For example, (a) and (b) have a semantic similarity score of 5 (on a scale from 0-5 as used in the SemEval STS task) (Agirre et al., 2013 (Agirre et al., , 2014 .Interaction between Relations Despite the interactions and close connection of these meaning relations, to our knowledge, there exists neither an empirical analysis of the connection between them nor a corpus enabling it. We bridge this gap by creating and analyzing a corpus of sentence pairs annotated with all discussed meaning relations.Our analysis finds that previously made assumptions on some relations (e.g. paraphrasing being bi-directional entailment (Madnani and Dorr, 2010; Androutsopoulos and Malakasiotis, 2010; Sukhareva et al., 2016) ) are not necessarily right in a practical setting. Furthermore, we explore the interactions of the meaning relation of specificity, which has not been extensively studied from an empirical point of view. We find that it can be found in pairs on all levels of semantic relatedness and does not correlate with entailment.
0
Relation extraction (RE), defined as the task of identifying the relationship between concepts mentioned in text, is a key component of many natural language processing applications, such as knowledge base population (Ji and Grishman, 2011) and question answering (Yu et al., 2017) . Distant supervision (Mintz et al., 2009; Hoffmann et al., 2011 ) is a popular approach to heuristically generate labeled data for training RE systems by aligning entity tuples in text with known relation instances from a knowledge base, but suffers from noisy labels and incomplete knowledge base information (Min et al., 2013; Fan et al., 2014) . Figure 1 shows an example of three sentences labeled with an existing KB relation, two of which are false positives and do not actually express the relation.Current state-of-the-art RE methods try to address these challenges by applying multi-instance learning methods (Mintz et al., 2009; Surdeanu et al., 2012; Lin et al., 2016 ) and guiding the model by explicitly provided semantic and syntactic knowledge, e.g. part-of-speech tags (Zeng et al., 2014) and dependency parse information (Surdeanu et al., 2012; . Recent methods also utilize side information, e.g. paraphrases, relation aliases, and entity types (Vashishth et al., 2018) . However, we observe that these models are often biased towards recognizing a limited set of relations with high precision, while ignoring those in the long tail (see Section 5.2).Deep language representations, e.g. those learned by the Transformer (Vaswani et al., 2017) via language modeling (Radford et al., 2018) , have been shown to implicitly capture useful semantic and syntactic properties of text solely by unsupervised pre-training (Peters et al., 2018) , as demonstrated by state-of-the-art performance on a wide range of natural language processing tasks (Vaswani et al., 2017; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018) , including supervised relation extraction (Alt et al., 2019) . Radford et al. (2019) even found language models to perform fairly well on answering open-domain questions without being trained on the actual task, suggesting they capture a limited amount of "common-sense" knowledge. We hypothesize that pre-trained language models provide a stronger signal for distant supervision, better guiding relation extraction based on the knowledge acquired during unsupervised pre-training. Replacing explicit linguistic and side-information with implicit features improves domain and language independence and could increase the diversity of the recognized relations.In this paper, we introduce a Distantly Supervised Transformer for Relation Extraction (DIS-TRE). We extend the standard Transformer architecture by a selective attention mechanism to handle multi-instance learning and prediction, which allows us to fine-tune the pre-trained Transformer language model directly on the distantly supervised RE task. This minimizes explicit feature extraction and reduces the risk of error accumulation. In addition, the self-attentive architecture allows the model to efficiently capture longrange dependencies and the language model to utilize knowledge about the relation between entities and concepts acquired during unsupervised pre-training. 
Our model achieves a state-of-the-art AUC score of 0.422 on the NYT10 dataset, and performs especially well at higher recall levels when compared to competitive baseline models. We selected the GPT as our language model because of its fine-tuning efficiency and reasonable hardware requirements, compared to e.g. LSTM-based language models (Howard and Ruder, 2018; Peters et al., 2018) or BERT (Devlin et al., 2018). The contributions of this paper can be summarized as follows:
• We extend the GPT to handle bag-level, multi-instance training and prediction for distantly supervised datasets, by aggregating sentence-level information with selective attention to produce bag-level predictions (§3).
• We evaluate our fine-tuned language model on the NYT10 dataset and show that it achieves a state-of-the-art AUC compared to RESIDE (Vashishth et al., 2018) and PCNN+ATT (Lin et al., 2016) in held-out evaluation (§4, §5.1).
• We follow up on these results with a manual evaluation of ranked predictions, demonstrating that our model predicts a more diverse set of relations and performs especially well at higher recall levels (§5.2).
• We make our code publicly available at https://github.com/DFKI-NLP/DISTRE.
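A small numeric sketch of the selective-attention aggregation used for bag-level prediction: sentence representations are weighted by their affinity to a relation query and summed into a bag representation, so noisier sentences in a bag can be down-weighted. Dimensions and vectors are random stand-ins for the Transformer outputs.

```python
# Sketch of bag-level selective attention (in the spirit of Lin et al., 2016):
# weight each sentence representation by its affinity to a relation query and sum
# them into a bag representation. Shapes and vectors are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
d = 8
sentences = rng.normal(size=(3, d))   # representations of 3 sentences in one bag
relation_query = rng.normal(size=d)   # learned query for the candidate relation

scores = sentences @ relation_query            # relevance of each sentence to the relation
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                           # softmax over the bag
bag_repr = alpha @ sentences                   # weighted sum -> bag representation

print("attention weights:", np.round(alpha, 3))
print("bag representation shape:", bag_repr.shape)
```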
0
Our participation in SemEval 2015 was focused on solving the technical problems that afflicted our previous participation (Buscaldi et al., 2014) and including additional features based on alignments, such as the Sultan similarity (Sultan et al., 2014b) and the measure available in CMU Sphinx-4 (Lamere et al., 2003) for speech recognition. We baptised the new system SOPA from the Spanish word for "soup", since it uses a heterogeneous mix of features. Well aware of the importance that the training corpus and the regression algorithms have for the STS task, we used language models to select the most appropriate training corpus for a given text, and we explored some alternatives to the ν-Support Vector Regression (ν-SVR) (Schölkopf et al., 1999) used in our previous participations, specifically the Multi-Layer Perceptron (Bishop and others, 1995) and Random Forest (Breiman, 2001 ) regression algorithms. The obtained results show that Random Forests outperforms the other algorithms on every test set. We describe all the features in Section 2; the details on the learning algorithms and the training corpus selection process are described in Section 3, and the results obtained by the system are detailed in Section 4.
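A hedged sketch of the regressor comparison described above, cross-validating nu-SVR, an MLP, and Random Forest with Pearson correlation as the figure of merit; the feature matrix stands in for the system's similarity features and is synthetic.

```python
# Sketch comparing nu-SVR, an MLP, and Random Forest regression for STS scores,
# cross-validated with Pearson correlation. The feature matrix is a synthetic
# stand-in for the similarity features described above.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import NuSVR
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                    # pairwise similarity features
y = np.clip(2.5 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300), 0, 5)

for name, model in [("nu-SVR", NuSVR()),
                    ("MLP", MLPRegressor(max_iter=2000, random_state=0)),
                    ("Random Forest", RandomForestRegressor(random_state=0))]:
    pred = cross_val_predict(model, X, y, cv=5)
    print(f"{name:14s} Pearson r = {pearsonr(y, pred)[0]:.3f}")
```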
0
System Combination refers to the method of combining the output of multiple MT systems to produce an output better than each individual system. Currently, there are several approaches to machine translation, which can be classified as phrase-based, hierarchical, and syntax-based (Hildebrand and Vogel, 2008), and which are equally good in their translation quality even though the underlying frameworks are completely different. The motivation behind System Combination arises from this diversity in state-of-the-art MT systems, which suggests that systems with different paradigms make different errors and can be made better by combining their strengths. One approach to combining translations is based on representing translations as confusion networks and then aligning these confusion networks using string alignment algorithms (Rosti et al., 2009). Another approach generates features for every translation to train algorithms for ranking systems based on their quality, and the top-ranking output is considered to be a candidate translation; (Hildebrand and Vogel, 2008) is an example of ranking-based combination. We use ideas from ranking-based approaches to learn the order in which systems should be aligned in a confusion-network-based approach. Our approach is based on incremental alignment of confusion networks (Karakos et al., 2008), wherein each system output is represented by a confusion network. The confusion networks are then aligned in a pre-defined order to generate a combination output. This paper contributes two enhancements to (Karakos et al., 2008). First, the use of Support Vector Machines to learn the order in which the system outputs should be aligned. Second, we explore the use of Google n-grams for building a dynamic language model and interpolate the resulting language model with a large static language model for rescoring of system combination outputs. The rest of the paper is organized as follows: Section 2 illustrates the idea and pipeline of the baseline combination system; Section 3 gives details of SVM ranking for learning the system order for combination; Section 4 explains the use of Google n-gram based language models; results are discussed in Section 5; concluding remarks are given in Section 6.
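The language-model interpolation used for rescoring can be sketched as a simple convex combination of a dynamic and a static model; the probability tables and interpolation weight below are placeholders, not values from the paper.

```python
# Sketch of interpolating a dynamic n-gram LM (e.g. built from Google n-gram counts
# for the current document) with a large static LM for rescoring:
#   P(w | h) = lambda * P_dynamic(w | h) + (1 - lambda) * P_static(w | h)
# The probability tables and weight below are illustrative placeholders.
LAMBDA = 0.3

p_static = {("of", "the"): 0.20, ("of", "a"): 0.05}
p_dynamic = {("of", "the"): 0.10, ("of", "a"): 0.15}

def interpolated(history, word, floor=1e-6):
    key = (history, word)
    return LAMBDA * p_dynamic.get(key, floor) + (1 - LAMBDA) * p_static.get(key, floor)

for w in ("the", "a"):
    print(f"P({w} | of) = {interpolated('of', w):.3f}")
```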
0
Crowdsourcing is no longer a new term in the domain of Computational Linguistics and Machine Translation research (Callison-Burch and Dredze, 2010; Snow et al., 2008; Callison-Burch, 2009) . Crowdsourcing -basically where task outsourcing is delegated to a largely unknown Internet audience -is emerging as a new paradigm of human in the loop approaches for developing sophisticated techniques for understanding and generating natural language content. Amazon Mechanical Turk(AMT) and CrowdFlower 1 are representative general purpose crowdsourcing platforms where as Lingotek and Gengo 2 are companies targeted at localization and translation of content typically leveraging freelancers.Our interest is towards developing a crowdsourcing based system to enable general, nonexpert crowd-workers generate natural language content equivalent in quality to that of expert linguists. Realization of the potential of attaining great scalability and cost-benefit of crowdsourcing for natural language tasks is limited by the ability of novice multi-lingual workers generate high quality translations. We have specific interest in Indian languages due to the large linguistic diversity as well as the scarcity of linguistic resources in these languages when compared to European languages. Crowdsourcing is a promising approach as many Indian languages are spoken by hundreds of Millions of people (approximately, Hindi-Urdu by 500M, Bangla by 200M, Punjabi by over 100M 3 ) coupled with the fact that representation of Indian workers in online crowdsourcing platforms is very high (close to 40% in Amazon Mechanical Turk (AMT)).However, this is a non-trivial task owing to lack of expertise of novice crowd workers in translation of content. It is well understood that familiarity with multiple languages might not be good enough for people to generate high quality translations. This is compounded by lack of sincerity and in certain cases, dishonest intention of earning rewards disproportionate to the effort and time spent for online tasks. Common techniques for quality control like gold data based validation and worker reputation are not effective for a subjective task like translation which does not have any task specific measurements. Having expert linguists manually validate crowd generated content defies the purpose of deploying crowdsourcing on a large scale.In this work, we propose a technique, based on the Divide-and-Conquer principle. The technique can be considered similar to a Map-Reduce task run on crowd processors, where the translation task is split into simpler tasks distributed to the crowd (the map stage) and the results are later combined in a reduce stage to generate complete translations. The attempt is to make translation tasks easy and intuitive for novice crowd-workers by providing translations aids to help them generate high quality of translations. Our contribution in this work is a end-to-end, crowdsourcingplatform-independent, translation crowdsourcing system that completely automates the translation crowdsourcing task by (i) managing the translation pipeline through software components and the crowd; (ii) performing quality control on workers' output; and (iii) interfacing with crowdsourcing service providers. The multi-stage, Mapreduce approach simplifies the translation task for crowd workers, while novel design of user interface makes the task convenient for the worker and discourages spamming. 
The system thus offers the potential to generate high quality parallel corpora on a large scale.We discuss related work in Section 2 and the multi-staged approach which is central to our system in Section 3. Section 4 describes the system architecture and workflow, while Section 5 presents important aspects of the user interfaces in the system. We present our preliminary experiments and observations in Section 6. Section 7 concludes the paper, pointing to future directions.
0
Swearing is the use of taboo language (also referred to as bad language, swear words, offensive language, curse words, or vulgar words) to express the speaker's emotional state to their listeners (Jay, 1992; Jay, 1999) . Not limited to face to face conversation, swearing also occurs in online conversations, across different languages, including social media and online forums, such as Twitter, typically featured by informal language and spontaneous writing. Twitter is considered a particularly interesting data source for investigations related to swearing. According to the study in Wang et al. (2014) the rate of swear word use in English Twitter is 1.15%, almost double compared to its use in daily conversation (0.5 − 0.7%) as observed in previous work (Jay, 1992; Mehl and Pennebaker, 2003) . The work by Wang et al. (2014) also reports that a portion of 7.73% tweets in their random sampling collection is containing swear words, which means that one tweet out of thirteen includes at least one swear word. Interestingly, they also observed that a list of only seven words covers about 90% of all the swear words occurrences in their Twitter sample: fuck, shit, ass, bitch, nigga, hell, and whore. Swearing in social media can be linked to an abusive context, when it is intended to offend, intimidate or cause emotional or psychological harm, contributing to the expression of hatred, in its various forms. In such contexts, indeed, swear words are often used to insult, such as in case of sexual harassment, hate speech, obscene telephone calls (OTCs), and verbal abuse (Jay et al., 2006; Jay and Janschewitz, 2008) . However, swearing is a multifaceted phenomenon. The use of swear words does not always result in harm, and the harm depends on the context where the swear word occurs (Jay, 2009a) . Some studies even found that the use of swear words has also several upsides. Using swear words in communication with friends could promote some advantageous social effects, including strengthen the social bonds and improve conversation harmony, when swear word is used in ironic or sarcastic contexts (Jay, 2009a) . Another study by Stephens and Umland (2011) found that swearing in cathartic ways is able to increase pain tolerance. Furthermore, Johnson (2012) has shown that the use of swear words can improve the effectiveness and persuasiveness of a message, especially when used to express an emotion of positive surprise. Also accounts of appropriated uses of slurs should not be neglected (Bianchi, 2014) , that is those uses by targeted groups of their own slurs for non-derogatory purposes (e.g., the appropriation of 'nigger' by the African-American community, or the appropriation of 'queer' by the homosexual community). Many studies have been proposed in recent years to deal with online abuse, where swear words have an important role, providing a signal to spot abusive content. However, as we can expect observing the different facets of swearing in social environments, the presence of swear words could also lead to false positives when they occur in a nonabusive context. Distinguishing between abusive and notabusive swearing contexts seems to be crucial to support and implement better content moderation practices. 
Indeed, on the one hand, there is a considerable urgency for most popular social media, such as Twitter and Facebook, to develop robust approaches for abusive language detection, also for guaranteeing a better compliance to governments demands for counteracting the phenomenon (see, e.g., the recently issued EU commission Code of Conduct on countering illegal hate speech online (EU Commission, 2016). On the other hand, as reflected in statements from the Twitter Safety and Security 1 users should be allowed to post potentially inflammatory content, as long as they are not-abusive 2 . The idea is that, as long as swear words are used but do not contain abuse/harassment, hateful conduct, sensitive content, and so on, they should not be censored. Our Motivation and Contribution. We explore the phenomenon of swearing in Twitter conversations, taking the possibility of predicting the abusiveness of a swear word in a tweet context as the main investigation perspective. The main goal is to automatically differentiate between abusive swearing, which should be regulated and countered in online communications, and not-abusive one, that should be allowed as part of freedom of speech, also recognising its positive functions, as in the case of reclaimed uses of slurs. To achieve our goal, we propose a two-fold contribution. First, we develop a new benchmark Twitter corpus, called SWAD (Swear Words Abusiveness Dataset), where abusive swearing is manually annotated at the word level. Based on several previous studies (Jay, 2009a; Dinakar et al., 2011; Golbeck et al., 2017) , we define abusive swearing as the use of swear word or profanity in several cases such as namecalling, harassment, hate speech, and bullying involving several sensitive topic including physical appearance, sexuality, race & culture, and intelligence, with intention from the author to insult or abuse a target (person or group).The other uses such as reclaimed uses, catharsis, humor, or conversational uses, are considered as not-abusive swearing. Second, we develop and experiment with supervised models to automatically predicting abusive swearing. Such models are trained on the novel SWAD corpus, to predict the abusiveness of a swear word within a tweet. The results confirm the robustness of the annotation in the SWAD corpus. We obtained 0.788 in macro F 1 -score in sequence labeling setting by using BERT, and explored the role of different features, also related to affect, in a standard text classification setting, with the aim to shed a better light on the properties which allow to distinguish between abusive and not-abusive swearing. The paper is organized as follows. Section 2 introduces related work on swearing in context. Section 3 reports on the various steps of development of the SWAD Twitter corpus. The annotation scheme applied and the main issues in the annotation process are described in Section 4. Section 5 presents the experimental setting and discusses the result. Finally, Section 6 includes conclusive remarks and ideas for future work. Wang et al. (2014) examines the cursing activity in the social media platform Twitter 3 . They explore several research questions including the ubiquity, utility, and also contextual dependency of textual swearing in Twitter. On the same platform, Bak et al. (2012) found that swearing is used frequently between people who have a stronger social relationship, as a part of their study on self-disclosure in Twitter conversation. Furthermore, Gauthier et al. 
(2015) provide an analysis of swearing on Twitter from several sociolinguistic aspects including age and gender. This study presents a deep exploration of the way British men and women use swear words. A gender-and age-based study of swearing was also conducted by Thelwall (2008) , using the social network MySpace 4 to develop the corpus. Swearing is not always offensive or abusive and its offensiveness or abusiveness is context-dependant. Swearing context is explored by several prior studies. Fägersten (2012) , following the dichotomy introduced by Ross (1969) , classifies swearing context into two types: annoyance swearing, "occurring in situations of increased stress", where the use of swear words appears to be "a manifestation of a release of tension", and social swearing, "occurring in situations of low stress and intended as a solidarity builder", which is related to a use of swear words in settings that are socially relaxed. The work by Jay (2009b) found that the offensiveness of taboo words is very dependant on their context, and postulates the use of taboo words in conversational context (less offensive) and hostile context (very offensive). These findings support prior work by Rieber et al. (1979) who found that obscenities/swear words used in a denotative way are far more offensive than those used in a connotative way. Furthermore, Pinker (2007) classified the use of swear words into five categories based on why people swear: dysphemistic, exact opposite of euphemism; abusive, using taboo words to abuse or insult someone; idiomatic, using taboo words to arouse interest of listeners without really referring to the matter; emphatic, to emphasize another word; cathartic, the use of swear words as a response to stress or pain. The most similar work to ours is the study by Holgate et al. (2018) that introduced six vulgar word use functions, and built a novel English dataset based on them. The classification of the function of swear words is used to improve the classification of hate speech in social media. In this work, instead we focus on the abusiveness prediction of swear words, rather than their function, with the goal of discovering the context of a given swear word whether abusive (should be censored) or not-abusive.
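A minimal sketch of the sequence-labeling setting described above, assuming a BERT token classifier with a hypothetical three-tag inventory; the classification head is untrained here, so the printed tags are arbitrary until the model is fine-tuned on SWAD.

```python
# Minimal sketch of the sequence-labeling setting: a BERT token classifier tags each
# word as abusive swearing, not-abusive swearing, or other. The tag inventory and the
# example are hypothetical; the head is untrained, so outputs are arbitrary until
# the model is fine-tuned on SWAD.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "ABUSIVE", "NOT-ABUSIVE"]                      # assumed tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased",
                                                        num_labels=len(labels))

words = ["that", "was", "a", "damn", "good", "game"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits                              # (1, seq_len, num_labels)

pred = logits.argmax(-1)[0].tolist()
seen = set()
for tok_idx, w_id in enumerate(enc.word_ids()):               # map subwords back to words
    if w_id is not None and w_id not in seen:                 # first subword wins
        seen.add(w_id)
        print(words[w_id], "->", labels[pred[tok_idx]])
```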
0
In the area of biomedical NLP, Named Entity Recognition (NER) is a widely discussed and studied topic. The aim of the task is to identify biomedical entities such as genes, proteins, cell types, and diseases in biomedical documents, to allow for knowledge discovery in this domain. Models for Biomedical NER (Bio-NER) offer the opportunity to mine information and knowledge and thereby foster biomedical and drug discovery research (Habibi et al., 2017) . Several shared tasks addressing Bio-NER have been organized. These attempts and tasks resulted in benchmark datasets for solving English Bio-NER tasks, e.g. the GENIA corpus (Kim et al., 2003) , JNLPBA (Kim et al., 2004) , and BC2GM (Smith et al., 2008) .However, when turning our attention to Chinese bio-NER, only a few attempts have been made, and these attempts either had limitations in text resource types and amount (Gu et al., 2008) or did not predefine biomedical named entity categories . Another limitation is that these attempts mostly focus on clinical texts or biomedical scientific publications but not include other relevant text resources, in particular biomedical patents. Many biomedical discoveries are patented in China, not only because of the encouraging policy on patentability of genetic products, but also since the existence of the speedily progressed and cheaper gene sequencing services (Du, 2018) .Patent texts are highly technical with long sentences (Verberne et al., 2010) . Two additional challenges of Chinese biomedical patents that we encountered are OCR errors and the heavy usage of code-mixing expressions, mixing English and Chinese in one entity. This is mainly because the protein and gene names are commonly written in English (or the English names are given after the Chinese ones), while the disease names and other contents are written in Chinese. For this reason, it is not possible to directly apply pre-trained NLP models to Chinese biomedical patents. Moreover, as mentioned before, because of the lack of related studies, we not only lack pre-trained models which were trained on Chinese biomedical text data, but also well-organized Chinese biomedical patents datasets.The contributions of this paper are threefold. First, we release a hand-labeled dataset with 5,813 sentences and 2,267 unique named entities from 21 Chinese biomedical patents. Second, we obtain promising results for the extraction of genes, proteins, and diseases with BERT models using our labeled data in limited training time and with limited computing power. Third, we show that when we use our NER model to extract entities from a large patent collection, we can potentially identify novel gene-gene interactions. We release our data and code for use by others. 1 In the following parts of this paper, we discuss previous attempts to solve Chinese Bio-NER tasks and other related tasks in Section 2; our methods and implementation details are explained in Section 3; the results of all experiments, along with the post analysis results, are described and discussed in Section 4; in Section 5 we discuss challenges and limitations of our study, followed by conclusions in section 6.
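As a sketch of the post-analysis step that surfaces candidate gene-gene interactions, the snippet below counts within-sentence co-occurrences of extracted gene entities; the extractions themselves are made up, and the real pipeline would use the NER model's output over the full patent collection.

```python
# Sketch of the post-analysis step: once genes have been extracted from patent
# sentences with the NER model, candidate gene-gene interactions can be surfaced
# by counting within-sentence co-occurrences. The extractions below are made up.
from collections import Counter
from itertools import combinations

# hypothetical NER output: set of gene mentions found in each sentence
sentence_genes = [
    {"TP53", "MDM2"},
    {"TP53", "MDM2", "EGFR"},
    {"EGFR"},
]

pair_counts = Counter()
for genes in sentence_genes:
    for a, b in combinations(sorted(genes), 2):
        pair_counts[(a, b)] += 1

for pair, count in pair_counts.most_common():
    print(pair, count)
```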
0
This paper presents the phrase-based machine translation system developed at RALI in order to participate in both the French-English and English-French translation tasks. In these two tasks, we used all the corpora supplied for the constrained data condition apart from the LDC Gigaword corpora. We describe the system's components in Section 2. Section 3 reports our experiments on subsampling the available out-of-domain corpora in order to adapt the translation models to the news domain. Section 4, dedicated to post-processing, presents how N-best lists are reranked and how the French 1-best output is corrected by a grammatical checker. Section 5 studies how the original source language of the news affects translation quality. We conclude in Section 6.
2 System Architecture
0
After its introduction in 2017, the Transformer architecture (Vaswani et al., 2017) quickly became the gold standard for the task of neural machine translation (NMT) (Ott et al., 2018). Furthermore, variants of the Transformer have since been used very successfully for a variety of other tasks such as language modeling (LM), natural language understanding (NLU) (Devlin et al., 2019; Liu et al., 2019), speech translation (ST) (Vila et al., 2018), automatic speech recognition (ASR) (Mohamed et al., 2019) and image processing (Parmar et al., 2018). A major advantage of the Transformer compared to previous architectures is the faster training speed achieved by complete parallelization across timesteps. However, this also leads to one of the biggest problems of the Transformer, namely the quadratic time and memory complexity of attention layers with respect to the sequence length. For sentence-level NMT this is not a big issue, as sequences are usually relatively short and can be handled efficiently, even if subword segmentation is applied (Sennrich et al., 2016; Kudo, 2018). However, this drastically changes when moving towards character-level (Gupta et al., 2019) or document-level (Tiedemann and Scherrer, 2017) NMT. Especially for the latter, speed and memory issues are among the biggest roadblocks towards 'true' document-level systems (Junczys-Dowmunt, 2019). This leads to the situation where most works make do with including just a few sentences as a form of 'local' context information (Tiedemann and Scherrer, 2017; Jean et al., 2017; Bawden et al., 2018) or heavily compress the document information (Tu et al., 2018; Kuang et al., 2018; Morishita et al., 2021). More recently, research focus has been shifting towards more efficient attention calculation for longer input sequences in several LM and NLU tasks (Tay et al., 2020). Among these works is the approach by Kitaev et al. (2020), in which the authors propose to make the attention matrix sparse by pre-selecting the relevant positions. They report good results on the LM objective while at the same time drastically reducing computational complexity. In this work we take the approach of Kitaev et al. (2020) as a starting point to improve the efficiency of (document-level) NMT systems. Our contribution is three-fold: We adapt the locality-sensitive hashing (LSH) approach of Kitaev et al. (2020) to self-attention in the Transformer NMT framework. We expand the concept of LSH to encoder-decoder cross-attention and provide insights on how this concept affects the behavior of the system. We use this more memory-efficient NMT framework to conduct experiments on document-level NMT with more context information than would be possible with the baseline architecture.
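As a rough illustration of the bucketing idea behind LSH attention (following the general scheme of Kitaev et al. (2020), not this paper's exact adaptation to NMT), the sketch below hashes shared query/key vectors with a random rotation and restricts attention to positions falling in the same bucket; all shapes and parameter values are toy assumptions.

```python
# Toy sketch of LSH-bucketed attention: queries/keys are hashed via a random
# rotation and attention is computed only within each bucket, reducing the
# quadratic cost. Details (chunking, multi-round hashing, causal masking,
# cross-attention) are omitted for brevity.
import numpy as np

def lsh_buckets(x, n_buckets, rng):
    # x: (seq_len, d). One random rotation; the argmax over the concatenated
    # [proj, -proj] columns gives each position's bucket id.
    d = x.shape[-1]
    R = rng.normal(size=(d, n_buckets // 2))
    proj = x @ R
    return np.argmax(np.concatenate([proj, -proj], axis=-1), axis=-1)

def lsh_attention(q, k, v, n_buckets=8, seed=0):
    rng = np.random.default_rng(seed)
    buckets = lsh_buckets(q, n_buckets, rng)   # shared-QK style: q == k below
    out = np.zeros_like(v)
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]
        scores = q[idx] @ k[idx].T / np.sqrt(q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[idx] = weights @ v[idx]            # attend only within the bucket
    return out

q = k = np.random.randn(128, 64)
v = np.random.randn(128, 64)
print(lsh_attention(q, k, v).shape)  # (128, 64)
```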
0
A Text-to-Speech (TTS) system converts input text into synthetic speech with high naturalness and intelligibility. Naturalness is mainly influenced by prosody modeling, especially by Phrase Break (PB) prediction. Because PB prediction is the first step of TTS, any error in this step propagates to downstream steps such as intonation prediction and duration modeling. These errors result in synthetic speech that is unnatural and difficult to understand, so many researchers have devoted themselves to improving the performance of PB prediction. Typical PB prediction methods use machine learning models such as Hidden Markov Models (HMMs) or Conditional Random Fields (CRFs), trained on large sets of labeled training data. In these PB prediction models, the Part-of-Speech (POS) tag has been shown to be an effective feature and is usually included in the input feature set. POS estimation itself is also a challenging task and likewise relies on a large labeled training corpus; its accuracy often falls short of expectations, especially for low-resource languages like Mongolian, where the required linguistic resources are not readily available and manual annotation is expensive and time-consuming. In recent years, many works have applied word embedding techniques to Natural Language Processing (NLP) tasks such as question answering, machine translation and so on (Bordes, 2014; Xiong, 2017; Devlin, 2014). Previous work has shown that the POS prediction task can be solved with high accuracy using only the word embedding feature as input (Wang, 2015); POS information is therefore most likely encoded in word embedding representations. Consequently, some PB prediction systems that do not rely on the POS feature have been developed (Watts, 2011; Vadapalli, 2014; Vadapalli, 2016). In (Watts, 2011), the authors obtain continuous-valued word embedding features that summarize the distributional characteristics of word types as surrogates for POS features. In (Vadapalli, 2014), researchers propose a neural network dictionary learning architecture to induce task-specific word embedding representations and show that these features perform better on the PB prediction task. (Vadapalli, 2016) presents an investigation of recurrent neural networks (RNNs) for the phrase break prediction task using word embeddings. These efforts have also been directed toward unsupervised methods of inducing word representations, which can be used as surrogates for POS tags in the PB prediction task. [Figure 1: NNBS suffixes within a Mongolian sentence; the highlighted parts are the NNBS suffixes segmented from the words. There are three pauses in the sentence, one of which is located at the NNBS suffix "-yin".] Although word embedding training operates in an unsupervised way, this approach faces an issue when applied to Mongolian, which is agglutinative in nature, because the available Mongolian corpus is not large enough for the huge Mongolian vocabulary. Fortunately, Mongolian is a morphologically rich language. Its suffixes often act as a positive signal that implies the POS information of the word, much as the suffix '-ly' suggests that an English word is an adverb. Morphologically, unlike many other languages, a Mongolian word is not just a concatenation of characters; it is constructed through this agglutinative property.
Mongolian words can be decomposed into a set of morphemes: one root and several suffixes. In this paper, we investigate Mongolian PB prediction models that operate on the level of sub-word units: the stem (the part of the word without suffixes) and the suffixes. We hypothesize that stems and suffixes serve to discriminate words based on syntactic meaning, and that these sub-word units can be used to model PB. We automatically segment every Mongolian word into a sequence of sub-word units, then map all sub-word units to continuous vector representations via a lookup table, which are fed into a neural network. Instead of a feed-forward network, we use a Long Short-Term Memory (LSTM) network to predict the PB label. The segmentation process reduces the vocabulary and alleviates the data sparsity problem; the learned sub-word embeddings are therefore more robust, and the performance of the PB prediction system improves accordingly. Our experiments show that the proposed model achieves significantly better performance than conventional CRF-based models, and that the sub-word embedding based method outperforms the whole-word embedding based method.
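A minimal sketch of the sub-word (stem + suffix) embedding and LSTM tagging setup described above follows; the vocabulary size, dimensions, bidirectional choice and two-label {NB, B} scheme are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of a sub-word phrase-break tagger: embed stems/suffixes, run an LSTM,
# and predict a break / no-break label per unit.
import torch
import torch.nn as nn

class SubwordPBTagger(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=128, hidden=256, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # labels: NB (0), B (1)

    def forward(self, unit_ids):
        # unit_ids: (batch, seq_len) indices of stems/suffixes obtained from
        # morphological segmentation of each Mongolian word.
        h, _ = self.lstm(self.embed(unit_ids))
        return self.out(h)                          # per-unit logits over {NB, B}

model = SubwordPBTagger()
dummy = torch.randint(1, 20000, (4, 30))   # 4 sentences, 30 sub-word units each
print(model(dummy).shape)                   # torch.Size([4, 30, 2])
```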
0
Translation from one source language into multiple target languages at the same time is a difficult task for humans. A person often needs to be familiar with specific translation rules for different language pairs. Machine translation systems suffer from the same problem. Under the classic statistical machine translation framework, it is hard to share information across the phrase tables of different language pairs, and translation quality decreases rapidly when the training corpus for a minority language pair becomes smaller. To address these problems, we propose a multi-task learning framework based on a sequence learning model to perform machine translation from one source language to multiple target languages, inspired by the recently proposed neural machine translation (NMT) framework. Specifically, we extend the recurrent neural network based encoder-decoder framework to a multi-task learning model that shares an encoder across all language pairs and uses a different decoder for each target language. The neural machine translation approach has recently achieved promising results in improving translation quality. Different from conventional statistical machine translation approaches, neural machine translation approaches aim to learn an end-to-end neural network model that optimizes translation performance by treating machine translation as a sequence learning problem. Within the neural translation framework, the lexical sparsity problem and the long-range dependency problem of traditional statistical machine translation can be alleviated through neural networks such as long short-term memory networks, which provide strong lexical generalization and long-term sequence memorization abilities. The basic assumption of our proposed framework is that many languages differ lexically but are closely related on the semantic and/or syntactic levels. We explore such correlations across different target languages and realize them under a multi-task learning framework. We treat each translation direction as a separate RNN encoder-decoder sub-task that shares the same encoder (i.e., the same source language representation) across all translation directions and uses a different decoder for each target language. In this way, the proposed multi-task learning model can make full use of the source language corpora across different language pairs. Since the encoder shares the same source language representation across all translation tasks, it may learn semantic and structured predictive representations that cannot be learned from only a small amount of data. Moreover, during training we jointly model the alignment and translation processes for different language pairs under the same framework. For example, when we simultaneously translate from English into Korean and Japanese, we can jointly learn latent similar semantic and structural information across Korean and Japanese, because these two languages share some common language structures. The contributions of this work are threefold. First, we propose a unified machine learning framework to explore the problem of translating one source language into multiple target languages. To the best of our knowledge, this problem has not been studied carefully in the statistical machine translation field before.
Second, given large-scale training corpora for different language pairs, we show that our framework can improve translation quality for each target language compared with a neural translation model trained on a single language pair. Finally, our framework is able to alleviate the data scarcity problem, using language pairs with large-scale parallel training corpora to improve the translation quality of those with little parallel training data. The rest of the paper is organized as follows: related work is described in Section 2, our multi-task learning method in Section 3, and experiments that demonstrate the effectiveness of our framework in Section 4. We conclude our work in Section 5.
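The following schematic sketch illustrates the one-to-many setup with a single shared encoder and one decoder per target language; the GRU-based architecture, dimensions and the absence of an attention mechanism are simplifications for illustration and do not reproduce the exact model described here.

```python
# Shared-encoder, per-language-decoder sketch for one-to-many NMT.
import torch
import torch.nn as nn

class MultiTargetNMT(nn.Module):
    def __init__(self, src_vocab, tgt_vocabs, emb=256, hid=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)        # shared across tasks
        self.decoders = nn.ModuleDict({                            # one decoder per language
            lang: nn.GRU(emb, hid, batch_first=True) for lang in tgt_vocabs
        })
        self.tgt_embs = nn.ModuleDict({
            lang: nn.Embedding(v, emb) for lang, v in tgt_vocabs.items()
        })
        self.outputs = nn.ModuleDict({
            lang: nn.Linear(hid, v) for lang, v in tgt_vocabs.items()
        })

    def forward(self, src_ids, tgt_ids, lang):
        _, h = self.encoder(self.src_emb(src_ids))   # shared source representation
        dec_out, _ = self.decoders[lang](self.tgt_embs[lang](tgt_ids), h)
        return self.outputs[lang](dec_out)

model = MultiTargetNMT(src_vocab=30000, tgt_vocabs={"ja": 30000, "ko": 30000})
logits = model(torch.randint(0, 30000, (2, 12)),
               torch.randint(0, 30000, (2, 10)), "ja")
print(logits.shape)  # torch.Size([2, 10, 30000])
```

Training would alternate (or mix) mini-batches from the different language pairs so that the shared encoder sees all of the source-side data while each decoder only sees its own target language.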
0
Recently, pre-trained language models like Bert (Devlin et al., 2018), XLnet (Yang et al., 2019b), Elmo (Peters et al., 2018) and GPT (Radford et al., 2018) have been demonstrated to offer substantial performance boosts for many NLP tasks such as Machine Reading Comprehension, Named Entity Recognition, and Natural Language Inference. There are usually two steps in such models: pre-training and fine-tuning. Model parameters are trained on unlabeled data over different pre-training tasks and then applied to different labeled downstream tasks for fine-tuning. In recent months, researchers have devoted much effort to improving fine-tuning techniques to enhance the performance of downstream tasks, but several studies (Sun et al., 2019b; Yang et al., 2019a; Dong et al., 2019) have shown that the self-supervised tasks used during pre-training have a huge impact on fine-tuning performance, as these tasks determine the learning method and whether pre-trained models can exploit massive unlabeled data efficiently. Bert, the most popular pre-trained language model in the NLP community, includes two pre-training tasks: Masked LM (MLM) and Next Sentence Prediction (NSP). In the MLM task, Bert randomly masks a certain percentage of tokens in the sentences and learns to predict these masked tokens. In the NSP task, Bert learns to predict whether two sentences are adjacent. In fact, the NSP task has two obvious shortcomings: (1) Its initial target is to model the relationship between two sentences, which is usually consequential, adversative or contradictory. However, due to the long length of the two sentences, they inevitably contain many domain-specific words, so the Bert model can decide simply by judging whether the two sentences belong to the same field rather than by performing complex sentence-level inference. As a result, the NSP task is more like a simple document-level task than a complex sentence-level task; in practice, the NSP task usually takes only 1/10 of the total training time to reach nearly 100% accuracy. (2) The input format of the NSP task is inconsistent with some downstream tasks. For example, in Machine Reading Comprehension (MRC) tasks the input is always a query and passage pair, which differs from the NSP input. This setup difference produces a deviation between pre-training and fine-tuning and leads Bert to make decisions based on repeated or highly matched tokens between the two sentences rather than on semantic analysis and inference. This tendency to learn word-level shortcut rules makes the model prone to over-fitting, especially on small datasets. Some studies have shown that pre-training tasks customized for downstream tasks can effectively help a model capture the corresponding knowledge and semantic information. Hence a new pre-training task, Sentence Insertion (SI), is proposed in this paper to replace the NSP task in BERT for MRC datasets. The SI task randomly extracts a sentence from the document as the query, and its training objective is to predict the location of that sentence. Meanwhile, to handle segments with no answer that arise from long MRC inputs in downstream tasks, the query is extracted from another document 40% of the time and the model must judge whether an answer exists. The SI task has the following advantages: (1) It models representations at the sentence level, since the model has to analyze the logical relationships between sentences to reach an exact decision, and it avoids the tendency to rely on highly matched words.
(2) It is more compatible with the query-passage pairs mode, because its input format is consistent with MRC tasks. In this mode, the query in MRC also pays more attention to the relevant parts of the passage after SI pre-training, even without fine-tuning; that is, the SI task significantly strengthens the model's search ability. Finally, to further enhance the model, a Chinese word segmentation method based on SentencePiece (Kudo, 2018) is used to make long sequences short enough to embed. For comparison, a baseline similar to English WordPiece is built, tokenized by the Chinese tokenizer pkuseg (Luo et al., 2019). SI and NSP are compared on eight different types of Chinese NLP tasks. Because the pre-training corpora differ, a Bert-NSP model trained in the same setup on our own corpus serves as another baseline. Two Chinese pre-training works, Bert-WWM (Cui et al., 2019) and Ernie 1.0 (Sun et al., 2019a), are also used as baselines. The SI model outperforms all other pre-trained models on query-passage pairs tasks, and takes a slim lead over Bert on the other tasks. The contributions of this paper are summarized as follows: (I) A new pre-training task, SI, is presented to eliminate the difference between fine-tuning and pre-training in query-passage pairs NLP tasks. The resulting model, named SiBert, ranks first on the Chinese Machine Reading Comprehension 2019 leaderboard and exceeds the official Bert baseline by nearly 20%. (II) A Chinese word segmentation method based on SentencePiece is used for tasks with long texts, which saves a large amount of memory and helps the model fit more words in a segment, providing more context information for model decisions. (III) The optimization techniques of BlockSparse (Child et al., 2019) and the fast-gelu activation function are used, enabling the model to be trained on 8 16GB Tesla V100 GPUs with batch size 256 and sequence length 512. The code and model will be published as open source on GitHub.
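As an illustration of how Sentence Insertion training instances could be constructed from raw documents, the sketch below removes a random sentence to serve as the query and uses its original position as the label, drawing the query from another document 40% of the time (in which case the label is "no answer"); apart from that rate, which follows the description above, the details are assumptions.

```python
# Hedged sketch of Sentence Insertion (SI) example construction.
import random

NO_ANSWER = -1

def make_si_example(doc_sents, other_docs, p_negative=0.4, rng=random):
    if rng.random() < p_negative and other_docs:
        # Query drawn from an unrelated document: the model must predict
        # that no insertion position (no answer) exists.
        query = rng.choice(rng.choice(other_docs))
        return query, doc_sents, NO_ANSWER
    pos = rng.randrange(len(doc_sents))
    query = doc_sents[pos]
    context = doc_sents[:pos] + doc_sents[pos + 1:]
    return query, context, pos   # target: original insertion position

doc = ["北京是中国的首都。", "它有超过两千万人口。", "故宫位于市中心。"]
other = [["巴黎是法国的首都。", "埃菲尔铁塔很有名。"]]
print(make_si_example(doc, other))
```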
0
Today's Machine Translation (MT) outputs often contain serious grammatical errors. This is particularly apparent in statistical MT systems (SMT), which do not employ structural linguistic rules and have dominated the area in recent years (Callison-Burch et al., 2011). Such errors make the translated text less fluent and may even lead to unintelligibility or misleading statements. The problem is more evident in languages with rich morphology, such as Czech, where morphological agreement is of relatively high importance for the interpretation of syntactic relations. The DEPFIX system (Mareček et al., 2011) attempts to correct some of the frequent SMT errors in English-to-Czech translations. It analyzes the target sentence (the SMT output in Czech) using a morphological tagger and a dependency parser and attempts to correct it by applying several rules which enforce consistency with Czech grammar. Most of the rules use the source sentence (the SMT input in English) as a source of information about the sentence structure. The source sentence is also tagged and parsed, and word-to-word alignment with the target sentence is determined. In this paper, we present DEPFIX 2012, an improved version of the original DEPFIX 2011 system. It makes use of a new parser, described briefly in Section 3, which is adapted to better handle the generally ungrammatical target sentences. We have also enhanced the set of grammar correction rules, for which we give a detailed description in Section 4. Section 5 gives an account of the experiments performed to evaluate the DEPFIX 2012 system and compare it to DEPFIX 2011. Section 6 then concludes the paper.
0
The objective of our work is the enrichment of semantic knowledge bases such as BabelNet (Navigli & Ponzetto, 2012) or DBpedia (Lehmann et al., 2014) from the information contained in semi-structured textual documents. These knowledge bases now play a key role in many NLP applications, and populating them is therefore an important challenge for making large-scale multilingual lexico-semantic information available. At present, the construction of these networks relies mainly on existing resources such as WordNet or on the exploitation of the structured portions of Wikipedia's encyclopedic documents. Thus, dedicated extractors focus on the infoboxes, categories, or links defined in Wikipedia pages (Morsey et al., 2012; Lehmann et al., 2014). The textual content of the documents, which is rich in information describing concepts and the relations between these concepts but harder to access, is generally under-exploited. Various methods have nevertheless been defined to extract from texts information (terms and semantic relations between terms) that can feed these knowledge bases. Such work generally uses term extractors and relies on techniques based on the application of morpho-syntactic patterns in the tradition of (Hearst, 1992), on the principle of distributional proximity (Lenci & Benotto, 2012), or on the exploitation of specific textual structures, for example definitions (Malaisé et al., 2004) or enumerative structures (Fauconnier & Kamel, 2015). Our research aims to enrich the Web of Data for French by combining several methods for extracting terms and relations between terms from texts, in order to acquire different types of semantic relations, first and foremost hypernymy and meronymy. As shown, for example, by (Schropp et al., 2013), combining several approaches is a promising way to take advantage of the multiplicity of textual cues signaling a semantic relation and to overcome the limitations identified for each individual method. Our approach will rely on a corpus drawn from the French Wikipedia, whose articles have the particularity of combining different levels of structuring of textual information.
0
Many works in sentiment analysis make use of shallow processing techniques. What these works have in common is that they merely try to identify sentiment-bearing expressions, as shown by Ruppenhofer and Rehbein (2012). No effort has been made to identify which expressions actually contribute to the overall sentiment of the text. In Mukherjee and Bhattacharyya (2012), these expressions are weighted according to their position w.r.t. the discourse elements in the text, but every expression is still taken into account. Semantic analysis is essential to understand the exact meaning conveyed in the text. Some words tend to mislead the meaning of a given piece of text, as shown in the previous example. WSD (Word Sense Disambiguation) is a technique which can be used to obtain the right sense of a word. Balamurali et al. (2012) have made use of WordNet synsets for a supervised sentiment classification task. Tamare (2010) and Rentoumi (2009) have also shown a performance improvement by using WSD as compared to word-based features for supervised sentiment classification. In Hasan et al. (2012), semantic concepts have been used as features in addition to word-based features, again showing a performance improvement. Syntagmatic or structural properties of text are used in many NLP applications like machine translation, speech recognition, named entity recognition, etc. A clustering-based approach which makes use of syntactic features of text has been shown to improve performance in Kashyap et al. (2013). Another approach can be found in Mukherjee and Bhattacharyya (2012), which makes use of lightweight discourse for sentiment analysis. In general, approaches using semantic analysis are more expensive than syntax-based approaches, owing to the shallow processing involved in the latter. As pointed out earlier, all these works incorporate every sentiment-bearing expression when evaluating the overall sentiment of the text, ignoring the fact that not all expressions contribute to it. Our approach tries to resolve this issue. To do this, we create a UNL graph for each piece of text and include only the relevant expressions to predict the sentiment. Relevant expressions are those which satisfy our rules/conditions. After obtaining these expressions, we use a simple dictionary lookup along with the attributes of words in the UNL graph to calculate the sentiment. The rest of the paper is organized as follows. Section 2 discusses related work. Section 3 explains our approach in detail. The experimental setup is explained in Section 4. Results of the experiments are presented in Section 5. Section 6 discusses these results, followed by the conclusion in Section 7. Section 8 hints at some future work.
0
In situated human-robot dialogue, humans and robots have mismatched capabilities of perceiving the shared environment. Thus referential communication between them becomes extremely challenging. To address this problem, our previous work has conducted a simulation-based study to collect a set of human-human conversation data that explain how partners with mismatched perceptions strive to succeed in referential communication (Liu et al., 2012; Liu et al., 2013) . Our data have shown that, when conversation partners have mismatched perceptions, they tend to make extra collaborative effort in referential communication. For example, the speaker often refers to the intended object iteratively: first issuing an initial installment, and then refashioning till the hearer identifies the referent correctly. The hearer, on the other hand, often provides useful feedback based on which further refashioning can be made. This data has demonstrated the importance of incorporating collaborative discourse for referential grounding.Based on this data, as a first step we developed a graph-matching approach for referential grounding (Liu et al., 2012; Liu et al., 2013) . This approach uses Attributed Relational Graph to capture collaborative discourse and employs a statespace search algorithm to find proper grounding results. Although it has made meaningful progress in addressing collaborative referential grounding under mismatched perceptions, the state-space search based approach has two major limitations. First, it is neither flexible to obtain multiple grounding hypotheses, nor flexible to incorporate different hypotheses incrementally for follow-up grounding. Second, the search algorithm tends to have a high time complexity for optimal solutions. Thus, the previous approach is not ideal for collaborative and incremental dialogue systems that interact with human users in real time.To address these limitations, this paper describes a new approach to referential grounding based on probabilistic labeling. This approach aims to integrate different types of evidence from the collaborative referential discourse into a unified probabilistic scheme. It is formulated under the Bayesian reasoning framework to easily support generation and incorporation of multiple grounding hypotheses for follow-up processes. Our empirical results have shown that the probabilistic labeling approach significantly outperforms the state-space search approach in both grounding accuracy and efficiency. This new approach provides a good basis for processing collaborative discourse and enabling collaborative dialogue system in situated referential communication.
0
The novel coronavirus disease (COVID-19) is affecting public health and the economy worldwide. The surge in social media usage during the pandemic has made online content an excellent tool for examining risk communication (Lazer et al., 2018; Beaunoyer et al., 2020). As more people seek and share information online, NGOs and especially the WHO have warned of the danger of increasing misinformation about the pandemic; a new term, infodemic, was coined to describe this phenomenon. At the same time, recent developments in the natural language processing (NLP) community enable in-depth analysis of topical changes from online resources. Latent Dirichlet Allocation (LDA) can detect major topics in unstructured text data (e.g., extracting topics in the context of a global pandemic (Park et al., 2020)). Advanced language models like BERT can be used to learn representations (e.g., of the pandemic discourse on Twitter (Müller et al., 2020)), and a language-agnostic version of BERT further extends this capability to multiple languages (Gencoglu, 2020). Despite social media's potential for understanding risk communication patterns and the infodemic during the pandemic, several challenges remain. One of them is the temporal aspect. Existing topic models that exclude temporal information cannot represent how conversations and themes developed over time (Blei and Lafferty, 2006). While some works propose refinements of such models by incorporating time or metadata information (Blei and Lafferty, 2006; Roberts et al., 2013), they still need to dissect the data into arbitrarily chosen time chunks. Particularly in risk communication, where public attention evolves quickly over a short period, the contextual information intertwining topic and time becomes a critical component (Atefeh and Khreich, 2015). Considering the time aspect also allows identifying topics that are of significant influence yet span only a short period of time. Addressing the limitations above, we present a time-topic cohesive model to detect contextualized events and topics over time. Our model marries key ideas from contrastive learning (hereafter CL) and multilingual BERT (hereafter mBERT). CL is a machine learning and computer vision technique for classifying similar objects by devising a triplet loss function over one anchor and two targets (Dai and Lin, 2017). By designing a triplet loss that, for example, computes the difference in topic and time between anchor and target tweets, our model can jointly consider temporal and topical characteristics when detecting major events about the pandemic. mBERT allows us to apply the model to multiple countries for comparison. The final model is applied to a collection of Twitter messages gathered from three Asian countries: South Korea, Vietnam, and Iran. Based on the detected events, we can understand what information (or misinformation) was mainly talked about at which stage of the pandemic in each country. Unlike existing topic models, this new method also captures the temporal coherence of topics. We present a case study showing how risk communication on COVID-19 starts with several key events and then expands to diverse domains in South Korea.
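A minimal sketch of a triplet loss that jointly penalizes embedding distance and temporal distance is shown below; the margin, the additive combination and the time weighting are illustrative assumptions about how topic and time could be tied together, not the exact loss used here.

```python
# Illustrative triplet loss over tweet embeddings (e.g. mBERT [CLS] vectors)
# where the positive is close to the anchor in both content and posting time.
import torch
import torch.nn.functional as F

def time_topic_triplet_loss(anchor, positive, negative,
                            t_anchor, t_pos, t_neg,
                            margin=1.0, time_weight=0.1):
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # Add temporal distance (e.g. in days) so that temporally coherent pairs
    # are pulled together as well as topically coherent ones.
    d_pos = d_pos + time_weight * (t_anchor - t_pos).abs()
    d_neg = d_neg + time_weight * (t_anchor - t_neg).abs()
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = (torch.randn(8, 768) for _ in range(3))
ta, tp, tn = torch.full((8,), 5.), torch.full((8,), 6.), torch.full((8,), 40.)
print(time_topic_triplet_loss(a, p, n, ta, tp, tn))
```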
0
With the increasing penetration of ecommerce, reliance on and the importance of contact centers are increasing. While emails and automated chat-bots are gaining popularity, voice continues to be the overwhelmingly preferred communication medium, leading to millions of phone calls landing at contact centers. Handling such a high volume of calls with human agents requires hiring and maintaining a large employee base. Additionally, managing periodic peaks (owing to sale periods, festive seasons, etc.) as well as hiring, training, and monitoring make the entire process a demanding operation. To address these challenges, and piggybacking on the recent progress of NLP and dialog systems research, voice-bots are gaining popularity. Voice-bot is a common name for automated dialog systems built to conduct task-oriented conversations with callers. They are placed as the first line of response to address customer concerns, and only on failure are calls transferred to human agents. The goodness of voice-bots, measured by automation rate, is proportional to the fraction of calls they can handle successfully end-to-end. Customer contacts in the ecommerce domain are broadly of two types, viz. general enquiries about products before making a purchase and post-purchase issue resolution, with the overwhelming majority of contacts being of the latter type. For post-purchase contacts, one of the first pieces of information that a voice-bot needs to gather is which product the customer is calling about. The most common practice has been to enumerate all products she has purchased, say in reverse chronological order, and ask her to respond with her choice by pressing a numeric key. This is limiting in two important ways. Firstly, it limits the scope to a maximum of ten products, which is insufficient in a large fraction of cases. Secondly, and more importantly, listening to a long announcement of product titles to select one is a time-consuming and tedious customer experience. In this paper, we introduce the problem of order identification and propose a technique to identify or predict the product of interest for which a customer has contacted the contact center. [Table 1: Examples of top matches from fuzzy search and predictive model. The first column shows transcribed customer utterances and the second column shows all active orders at the time of the call with the top match from fuzzy search emphasized. The examples under Predictive Model show the most likely order at the time of the call along with the top-k features leading to the prediction.] We do this in a natural and efficient manner based on minimal or no explicit additional input from the customer, through a novel combination of two complementary approaches, viz. search and prediction. The system is not restricted by the number of products purchased, even over a long time period. It has been shown to be highly accurate, with 87% accuracy and over 65% coverage in a real-life, noisy environment. After customer verification, a question was introduced in the voice-bot flow: "Which product are you calling about?". The response was recorded and transcribed to text in real time by an automatic speech recognition (ASR) system. We modeled the search problem as the task of retrieving the most closely matching product, treating this response as a query over the search space of all active products, each represented as a set of product attributes, e.g., title, description, brand, color, author, etc. While simple in formulation, the task presents a few practical challenges.
Customers do not describe their products in a standard manner or as they are described in the product catalog. For example, to describe a "SAMSUNG Galaxy F41 (Fusion Blue, 128 GB) (6 GB RAM)" phone, they may say F41, Galaxy F41, mobile, phone, mobile phone, cellphone, Samsung mobile, etc. (more examples can be seen in Table 1). Secondly, the responses from customers varied widely, from heavily code-mixed, to containing only fillers (ummms, aahs, whats, etc.), to blank responses. This is compounded by background noise as well as imperfections in ASR systems. Finally, in a not so uncommon scenario, customers' active orders often include multiple instances of the same product, minor variations thereof (e.g., in color), or related products which share many attributes (e.g., a charger for "SAMSUNG Galaxy F41 (Fusion Blue, 128 GB) (6 GB RAM)") and which are indistinguishable based on the response to the prompt. We propose an unsupervised n-gram based fuzzy search built on a round of pre-processing followed by custom lexical and phonetic similarity metrics. In spite of its simplicity, this solution achieves 32% coverage with an accuracy of 87%, leveraging the relatively small search space. The custom nature of this solution achieves much higher accuracy compared to the more sophisticated general-purpose product search available on ecommerce mobile apps and websites. This simple technique also does not require additional steps such as named entity recognition (NER), which has been used in product identification related work in the literature (Wen et al., 2019). Additionally, NER systems' performance is comparatively poor on ASR transcripts owing to the high degree of recognition and lexical noise (e.g., missing capitalization) (Yadav et al., 2020). While fuzzy search works with high accuracy, its coverage is limited owing to the various kinds of noise in the data. We observed that about 25% of callers did not answer when asked to identify the product they were calling about. To overcome this challenge, we introduced a complementary solution based on predictive modeling which does not require explicit utterances from customers. In simple words, the model ranks active orders by the likelihood of the customer calling about them. This is based on the intuition that certain characteristics of orders make them more likely to be called about, e.g., a return, or an order which was supposed to be delivered on the day of calling. Based on such features of orders and the customer profile, a random forest model gives prediction accuracy of 72%, 88% and 94% at top-1, 2, and 3. For high-confidence predictions, the voice-bot's prompt is changed to "Are you calling for the <PRODUCT-NAME> you ordered?" For correct predictions, this not only reduces the duration of the call but also increases customer delight through the personalized experience. In combination, fuzzy search and the predictive model cover 64.70% of all voice-bot calls with an accuracy of 87.18%. Organization of the paper: The rest of the paper is organized as follows. Section 2 narrates the background of order identification for the voice-bot, Section 3 discusses the proposed approach, and Sections 4 and 5 discuss the datasets used in our study and the experiments, respectively. Section 6 briefly reviews literature related to our work before we conclude in Section 7.
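The following is a simplified, self-contained sketch of the fuzzy-matching idea: normalize the ASR transcript and the active order titles, score lexical overlap with a character-similarity fallback standing in for the custom lexical/phonetic metrics, and return the best order above a threshold; the preprocessing, similarity function and threshold are assumptions for illustration only.

```python
# Toy fuzzy search over a customer's active orders, using the ASR transcript
# as the query. A real system would add n-gram indexing and phonetic keys.
from difflib import SequenceMatcher

def normalize(text):
    return [t for t in text.lower().replace(",", " ").split() if len(t) > 1]

def lexical_score(query_tokens, title_tokens):
    # Fraction of query tokens that fuzzily match some title token.
    def close(a, b):
        return SequenceMatcher(None, a, b).ratio() >= 0.8
    hits = sum(any(close(q, t) for t in title_tokens) for q in query_tokens)
    return hits / max(len(query_tokens), 1)

def best_order(asr_text, active_orders, threshold=0.5):
    query = normalize(asr_text)
    scored = [(lexical_score(query, normalize(title)), title) for title in active_orders]
    score, title = max(scored)
    return title if score >= threshold else None   # None -> fall back to the predictive model

orders = ["SAMSUNG Galaxy F41 (Fusion Blue, 128 GB)", "boAt Rockerz 450 Headphones"]
print(best_order("samsung mobile galaxy f41", orders))
```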
0
Over the past few years, much work has focussed on inferring political preferences of people from their behavior, both in unsupervised and supervised settings. Classical ideal point models (Poole and Rosenthal, 1985; Martin and Quinn, 2002) estimate the political ideologies of legislators through their observed voting behavior, possibly paired with the textual content of bills (Gerrish and Blei, 2012) and debate text (Nguyen et al., 2015) ; other unsupervised models estimate ideologies of politicians from their speeches alone (Sim et al., 2013) . Twitter users have also been modeled in a similar framework, using their observed following behavior of political elites as evidence to be explained (Barberá, 2015) . Supervised models, likewise, have not only been used for assessing the political stance of sentences (Iyyer et al., 2014) but are also very popular for predicting the holistic ideologies of everyday users on Twitter (Rao et al., 2010; Pennacchiotti and Popescu, 2011; Al Zamal et al., 2012; Cohen and Ruths, 2013; Volkova et al., 2014) , Facebook (Bond and Messing, 2015) and blogs (Jiang and Argamon, 2008) , where training data is relatively easy to obtaineither from user self-declarations, political following behavior, or third-party categorizations.Aside from their intrinsic value, estimates of users' political ideologies have been useful for quantifying the orientation of news media sources (Park et al., 2011; Zhou et al., 2011) . We consider in this work a different task: estimating the political import of propositions like OBAMA IS A SOCIALIST.In focusing on propositional statements, we draw on a parallel, but largely independent, strand of research in open information extraction. IE systems, from early slot-filling models with predetermined ontologies (Hobbs et al., 1993) to the largescale open-vocabulary systems in use today (Fader et al., 2011; Mitchell et al., 2015) have worked toward learning type-level propositional information from text, such as BARACK OBAMA IS PRES-IDENT. To a large extent, the ability to learn these facts from text is dependent on having data sources that are either relatively factual in their presentation (e.g., news articles and Wikipedia) or are sufficiently diverse to average over conflicting opinions (e.g., broad, random samples of the web).Many of the propositional statements that individuals make online are, of course, not objective descriptions of reality at all, but rather reflect their own beliefs, opinions and other private mental states (Wiebe et al., 2005) . While much work has investigated methods for establishing the truth content of individual sentences -whether from the perspective of veridicality (de Marneffe et al., 2012) , fact assessment (Nakashole and Mitchell, 2014) , or subjectivity analysis (Wiebe et al., 2003; Wilson, 2008) -the structure that exists between users and their assertions gives us an opportunity to situate them both in the same political space: in this work we operate at the level of subject-predicate propositions, and present models that capture not only the variation in what subjects (e.g., OBAMA, ABORTION, GUN CONTROL) that individual communities are more likely to discuss, but also the variation in what predicates different communities assert of the same subject (e.g., GLOBAL WARMING IS A HOAX vs. IS A FACT). The contributions of this work are as follows:• We present a new evaluation dataset of 766propositions judged according to their positions in a political spectrum. 
• We present and evaluate several models for estimating the ideal points of subject-predicate propositions, and find that unsupervised methods perform best (on sufficiently partisan data).
0
Quality Estimation (QE) for Machine Translation (MT) is the task of predicting the overall quality of an automatically generated translation, e.g., at the word, sentence or document level (Blatz et al., 2004; Ueffing and Ney, 2007). In contrast to automatic metrics and manual evaluation, which rely on gold standard reference translations, QE models can produce quality estimates on unseen data and at runtime. QE has already proven its usefulness in many applications, such as improving productivity in post-editing of MT, and recent neural-based approaches to QE have been shown to provide promising performance in predicting the quality of neural MT output (Fonseca et al., 2019). QE models are trained under full supervision, which requires quality-labelled training data. Obtaining annotated data for all the domains and languages of interest is costly and often impractical. As a result, QE models can suffer from the same limitations as neural MT models themselves, such as a drastic degradation of performance on out-of-domain data. As an alternative, QE models are often trained under weak supervision, using training instances labelled from noisy or limited sources (e.g. data labelled with automatic metrics for MT). Here, we focus on sentence-level QE, where, given a pair of sentences (the source and its translation), the aim is to train supervised Machine Learning (ML) models that can predict a quality label as a numerical value. The most widely used label for sentence-level QE is the Human-mediated Translation Edit Rate (HTER) (Snover et al., 2006), which represents the post-editing effort. HTER is the minimum number of edits a human language expert is required to make in order to fix the translation errors in a sentence, taking values between 0 and 1. The main limitation of HTER is that it does not represent an actual translation error rate, but a noisy approximation of it. The noise stems mostly from errors in the heuristics used to automatically align the machine translation and its post-edited version, but also from the fact that some edits represent preferential choices of humans rather than errors. To overcome such limitations, QE models can be improved by using data that has been directly annotated for translation errors by human experts. [Figure 1: Example of a German sentence (top) and its automatic translation into English. The HTER between the translation and its post-edited version (ANN-1) is 0.091, while the proportion of fine-grained expert-annotated MT errors (ANN-2) is 6/23 = 0.261.] Figure 1 shows an example of the discrepancy between the HTER score and the proportion of actual errors from expert annotation, for a raw translation and its post-edited version. Annotations of MT errors usually follow fine-grained error taxonomies such as the Multidimensional Quality Metrics (MQM) framework (Lommel et al., 2014). While such annotations provide highly reliable labelled data, they are more expensive to produce than HTER. This often results in datasets that are orders of magnitude smaller than HTER-based ones, making it hard to use only such high-quality resources for training neural-based QE models, which typically require large amounts of training data. In this paper, we use transfer learning to develop QE models that exploit the advantages of both noisy and high-quality labelled data.
We leverage information from large amounts of HTER data and small amounts of MQM annotations to train more reliable sentence-level QE models. Our aim is to predict the proportion of actual errors in MT outputs. More fine-grained error prediction is left for future work.Main contributions: (1) We introduce a new task of predicting the proportion of actual translation errors using transfer-learning for QE 1 , by leveraging large scale noisy HTER annotations and smaller but of higher quality expert MQM annotations; (2) we show that our simple yet effective approach using transfer-learning yields better performance at predicting the proportion of actual errors in MT, compared to models trained directly on expert-annotated MQM or HTER-only data; (3) we report experiments on four language pairs and both statistical and neural MT systems.
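A conceptual sketch of the transfer-learning recipe is shown below: a sentence-level QE regressor is first trained on large amounts of noisy HTER-labelled data and then fine-tuned on the small expert-annotated set labelled with the proportion of MQM errors. The feed-forward regressor, optimizer settings and toy data are assumptions; the actual models are neural QE architectures over source-translation pairs.

```python
# Two-stage training sketch: pre-train on noisy HTER labels, then fine-tune on
# the proportion of expert-annotated (MQM) errors, typically with a lower
# learning rate so that the noisy-stage knowledge is not forgotten.
import torch
import torch.nn as nn

def train_regressor(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for features, label in loader:              # label in [0, 1]
            opt.zero_grad()
            loss_fn(model(features).squeeze(-1), label).backward()
            opt.step()
    return model

def toy_loader(n, dim=1024, batch=8):
    feats, labels = torch.randn(n, dim), torch.rand(n)
    return [(feats[i:i + batch], labels[i:i + batch]) for i in range(0, n, batch)]

qe_model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))
qe_model = train_regressor(qe_model, toy_loader(64), epochs=1, lr=1e-4)   # HTER stage
qe_model = train_regressor(qe_model, toy_loader(16), epochs=2, lr=1e-5)   # MQM stage
```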
0
Urdu is written in the Nastalique style of the Arabic script, which is cursive in nature. Characters join together to form ligatures, which end either with a space or with a non-joining character. A word may be composed of one or more ligatures. In Urdu, space is not used to separate two consecutive words in a sentence; instead, readers themselves identify word boundaries, as sequences of ligatures, as they read along the text. Space is used to obtain appropriate character shapes and may thus even be used within a word to break it into its constituent ligatures (Naseem, 2007; Durrani, 2008). Therefore, as in other languages (Theeramunkong & Usanavasin, 2001; Wan and Liu, 2007; Khankasikam & Muansuwan, 2005; Haruechaiyasak et al., 2008; Haizhou & Baosheng, 1998), word segmentation or word tokenization is a preliminary task for Urdu language processing. It has applications in many areas like spell checking, POS tagging, speech synthesis, information retrieval, etc. This paper focuses on the word segmentation problem from the point of view of an Optical Character Recognition (OCR) system. As spaces are not visible in typed and scanned text, spacing cues are not available to the OCR for word separation, and therefore segmentation has to be done more explicitly. Our word segmentation model for an Urdu OCR system takes as input a sequence of ligatures recognized by the OCR and constructs a sequence of words from it.
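To make the task setup concrete, the sketch below segments an OCR ligature sequence into words by dictionary lookup with dynamic programming, minimizing the number of characters left uncovered by dictionary words. This is a generic illustration of the problem rather than the model proposed in the paper, and the toy lexicon and ligature split shown are assumptions, not an exact linguistic analysis.

```python
# Dictionary-based ligature-to-word segmentation sketch.
from functools import lru_cache

LEXICON = {"پاکستان", "کر", "اچھا"}   # toy word list; words are concatenations of ligatures

def segment(ligature_seq):
    ligatures = tuple(ligature_seq)

    @lru_cache(maxsize=None)
    def best(i):
        # Returns (cost, words) for ligatures[i:], where cost counts the
        # characters belonging to out-of-vocabulary groupings.
        if i == len(ligatures):
            return 0, ()
        options = []
        for j in range(i + 1, len(ligatures) + 1):
            word = "".join(ligatures[i:j])
            cost, rest = best(j)
            penalty = 0 if word in LEXICON else len(word)
            options.append((cost + penalty, (word,) + rest))
        return min(options, key=lambda o: o[0])

    return list(best(0)[1])

# Illustrative ligature split of one in-vocabulary word followed by an OOV word.
print(segment(["پا", "کستان", "زندہ"]))
```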
0
Question Generation (QG) systems play a vital role in question answering (QA), dialogue systems, and automated tutoring applications - by enriching the training QA corpora, helping chatbots start conversations with intriguing questions, and automatically generating assessment questions, respectively. Existing QG research has typically focused on generating factoid questions relevant to one fact obtainable from a single sentence (Duan et al., 2017; Zhao et al., 2018; Kim et al., 2019), as exemplified in Figure 1 a). However, the comprehension and reasoning aspects of questioning have been less explored, resulting in questions that are shallow and not reflective of the true creative human process. [Figure 1: a) single-sentence factoid example passage: "Oxygen is used in cellular respiration and released by photosynthesis, which uses the energy of sunlight to produce oxygen from water." b) deep question example: Input Paragraph A: Pago Pago International Airport. Pago Pago International Airport, also known as Tafuna Airport, is a public airport located 7 miles (11.3 km) southwest of the central business district of Pago Pago, in the village and plains of Tafuna on the island of Tutuila in American Samoa, an unincorporated territory of the United States. Input Paragraph B: Hoonah Airport. Hoonah Airport is a state-owned public-use airport located one nautical mile (2 km) southeast of the central business district of Hoonah, Alaska. Question: Are Pago Pago International Airport and Hoonah Airport both on American territory? Answer: Yes] People have the ability to ask deep questions about events, evaluation, opinions, synthesis, or reasons, usually in the form of Why, Why-not, How, or What-if, which requires an in-depth understanding of the input source and the ability to reason over disjoint relevant contexts; e.g., asking Why did Gollum betray his master Frodo Baggins? after reading the fantasy novel The Lord of the Rings. Learning to ask such deep questions has intrinsic research value concerning how human intelligence embodies the skills of curiosity and integration, and will have broad application in future intelligent systems. Despite a clear push towards answering deep questions (exemplified by multi-hop reading comprehension (Cao et al., 2019) and commonsense QA (Rajani et al., 2019)), generating deep questions remains un-investigated. There is thus a clear need to push QG research towards generating deep questions that demand higher cognitive skills. In this paper, we propose the problem of Deep Question Generation (DQG), which aims to generate questions that require reasoning over multiple pieces of information in the passage. Figure 1 b) shows an example of a deep question which requires comparative reasoning over two disjoint pieces of evidence. DQG introduces three additional challenges that are not captured by traditional QG systems. First, unlike generating questions from a single sentence, DQG requires document-level understanding, which may introduce long-range dependencies when the passage is long. Second, we must be able to select relevant contexts to ask meaningful questions; this is non-trivial as it involves understanding the relation between disjoint pieces of information in the passage.
Third, we need to ensure correct reasoning over multiple pieces of information so that the generated question is answerable by information in the passage. To facilitate the selection of and reasoning over disjoint relevant contexts, we distill important information from the passage and organize it as a semantic graph, in which the nodes are extracted based on semantic role labeling or dependency parsing and connected by different intra- and inter-semantic relations (Figure 2). Semantic relations provide important clues about which contents are question-worthy and what reasoning should be performed; e.g., in Figure 1, both the entities Pago Pago International Airport and Hoonah Airport have the located at relation with a city in the United States. It is then natural to ask a comparative question, e.g., Are Pago Pago International Airport and Hoonah Airport both on American territory?. To efficiently leverage the semantic graph for DQG, we introduce three novel mechanisms: (1) proposing a novel graph encoder, which incorporates an attention mechanism into the Gated Graph Neural Network (GGNN), to dynamically model the interactions between different semantic relations; (2) enhancing the word-level passage embeddings and the node-level semantic graph representations to obtain a unified semantic-aware passage representation for question decoding; and (3) introducing an auxiliary content selection task, trained jointly with question decoding, which assists the model in selecting relevant contexts in the semantic graph to form a proper reasoning chain. We evaluate our model on HotpotQA (Yang et al., 2018), a challenging dataset in which the questions are generated by reasoning over text from separate Wikipedia pages. Experimental results show that our model - incorporating both the use of the semantic graph and the content selection task - improves performance by a large margin, in terms of both automated metrics (Section 4.3) and human evaluation (Section 4.5). Error analysis (Section 4.6) validates that our use of the semantic graph greatly reduces the number of semantic errors in generated questions. In summary, our contributions are: (1) the very first work, to the best of our knowledge, to investigate deep question generation, (2) a novel framework which combines a semantic graph with the input passage to generate deep questions, and (3) a novel graph encoder that incorporates attention into a GGNN approach.
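The sketch below shows one way a gated graph layer can be combined with attention over neighbouring nodes, in the spirit of the attention-enhanced GGNN encoder mentioned above; it collapses relation types into a single adjacency matrix and uses a GRUCell update, so it is an illustrative simplification rather than the authors' encoder.

```python
# Simplified attention-augmented gated graph layer over a semantic graph.
import torch
import torch.nn as nn

class AttnGGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.att = nn.Linear(2 * dim, 1)
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        # h: (n_nodes, dim); adj: (n_nodes, n_nodes) 0/1 adjacency of the graph.
        n = h.size(0)
        msgs = self.msg(h)                                   # candidate messages
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          msgs.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.att(pair).squeeze(-1)                  # (n, n) attention logits
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1).nan_to_num()   # isolated nodes -> zero weights
        agg = alpha @ msgs                                   # attention-weighted aggregation
        return self.gru(agg, h)                              # gated node-state update

h = torch.randn(5, 64)
adj = (torch.rand(5, 5) > 0.5).float()
print(AttnGGNNLayer(64)(h, adj).shape)   # torch.Size([5, 64])
```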
0
Sentiment analysis has a long and successful history in the context of natural language processing. As with the majority of problems in this domain, we have seen a gradual shift towards solutions based on neural models. Nowadays, such models can be readily used as part of a larger solution, for example to analyse communication on social networks. Perhaps unsurprisingly, then, the format of the EmoContext task (Chatterjee et al., 2019b) is highly evocative of social networks: it consists of conversational triplets, where the task is to correctly guess the emotion category of the last conversational turn. Over the years, there have been a number of different sentiment analysis and recognition competitions, workshops and shared tasks (Klinger et al., 2018; Rosenthal et al., 2017); however, the conversational nature of the data is not common. Our system is based on recurrent neural networks, using both simple and hierarchical architectures. We have experimented with various techniques and hyper-parameters, such as regularization, class weights, embeddings, and the strength of dropout and added noise. Finally, we have improved our results by creating an ensemble, with various voting methods and sample reweighting.
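A small sketch of the ensembling step is given below, combining per-model class probabilities with weighted soft voting and, alternatively, majority voting; the EmoContext class names come from the task, while the model weights and toy probabilities are assumptions.

```python
# Soft and majority voting over the predictions of several RNN models.
import numpy as np

CLASSES = ["happy", "sad", "angry", "others"]

def soft_vote(prob_matrices, model_weights=None):
    # prob_matrices: list of (n_samples, n_classes) arrays, one per model.
    stacked = np.stack(prob_matrices)                              # (n_models, n, c)
    w = np.ones(len(prob_matrices)) if model_weights is None else np.asarray(model_weights)
    avg = np.tensordot(w / w.sum(), stacked, axes=1)               # weighted average
    return [CLASSES[i] for i in avg.argmax(axis=-1)]

def majority_vote(prob_matrices):
    votes = np.stack([p.argmax(axis=-1) for p in prob_matrices])   # (n_models, n)
    return [CLASSES[np.bincount(col, minlength=len(CLASSES)).argmax()] for col in votes.T]

m1, m2, m3 = (np.random.dirichlet(np.ones(4), size=6) for _ in range(3))
print(soft_vote([m1, m2, m3], model_weights=[0.5, 0.3, 0.2]))
print(majority_vote([m1, m2, m3]))
```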
0
A great deal of software is developed in research laboratories, either to support research or as the outcome of research. These developments are often innovative and quickly attract interest from entities beyond the laboratory. This raises the question of which choices enable and support their valorization, increase their visibility, and increase their capacity to foster collaborations. The portability and accessibility of the software are key points here. Portability, because choosing a single platform for a program restricts its audience. Accessibility to the software's contents and functions is equally essential for wide dissemination to different user communities, since a piece of software is both a scientific object and, potentially, an object of technology transfer. Many toolkits for performing different levels of speech segmentation and for training the underlying models are made available on the web. They sometimes benefit from extensive documentation, a user community, tutorials and active forums. Resources (dictionaries, models) are also available for a few languages. Yet when it comes to text/speech alignment, most phoneticians choose to do it manually, even though several hours are often needed to align a single minute of signal. The main reasons given are that no tool is at once freely available, simple and ergonomic to use, multi-platform and, of course, able to handle the language the user wants to process. Thus, although they are widely used by computer scientists, toolkits such as HTK (Young, 1994), Sphinx (Carnegie Mellon University, 2011) or Julius (Lee et al., 2001) still do not benefit from development that would make them accessible to a wider community of users, in particular non-computer scientists. HTK (Hidden Markov Toolkit) indeed requires a very high level of technical knowledge both for its installation and for its use. Moreover, HTK requires registration and is offered under a license that limits the terms of its distribution ("The Licensed Software either in whole or in part can not be distributed or sub-licensed to any third party in any form."). In addition, the latest version (3.4.1) dates from 2005. Despite this, HTK is widely used and its data formats have been largely adopted by other tools. Unlike HTK, Sphinx and Julius are distributed under the GPL license. As such, they can be redistributed by third parties, and they are regularly updated. Compared to Sphinx, Julius nevertheless offers the advantage of being able to use models and dictionaries in HTK format and of being very easy to install. Developing an automatic alignment tool that relies only on free resources (tools and data) and meets the criteria needed to make it accessible to non-computer scientists is not merely a technical challenge; one may indeed assume that if it were, such a tool would have existed long ago! A few tools are nevertheless already available. P2FA (Yuan and Liberman, 2008) is a multi-platform Python program that simplifies the use of HTK for alignment.
De même, EasyAlign (Goldman, 2011) repose sur HTK, pour l'alignement automatique. Il se présente sous la forme d'un plugin pour le logiciel Praat (Boersma et Weenink, 2009) , très utilisé pour l'annotation phonétique. EasyAlign (Goldman, 2011) offre l'avantage d'être simple à utiliser et propose une segmentation semi-automatique en Unités Inter-Pausales (IPUs), mots, syllabes et phonèmes pour 5 langues, mais il ne fonctionne que sous Windows. Dans (Cangemi et al., 2010) , les auteurs proposent les ressources pour l'italien et un logiciel d'alignement (licence GPL), également seulement pour Windows. L'outil présenté dans cet article s'appelle SPPAS, acronyme de « SPeech Phonetization Alignment and Syllabification ». L'article en présente d'abord une vue d'ensemble puis décrit les 4 modules principaux : la segmentation en unités inter-pausales, la phonétisation, l'alignement et la syllabation. Enfin, une évaluation de la phonétisation est proposée.
0
Tweeting is a modern phenomenon. Complementing short message texting, instant messaging, and email, tweeting is a public outlet for netizens to broadcast themselves. The short, informal nature of tweets allows users to post often and quickly react to others' posts, making Twitter an important form of close-to-real-time communication.

Perhaps as a consequence of its usability, form, and public nature, tweets are becoming an important source of data for mining emerging trends and for opinion analysis. Of particular interest are retweets, tweets that share previous tweets from others. Tweets with a high retweet count can be taken as a first cut towards trend detection.

It is known that social network effects exert a marked influence on retweeting (Wu et al., 2011; Recuero et al., 2011). But what about the content of the post? To the best of our knowledge, little is known about which properties of tweet content motivate people to share. Are there content signals that mark a tweet as important and worthy of sharing?

To answer these questions, we delve into the data, analyzing tweets to better understand posting behavior. Using a classification scheme informed by previous work, we annotate 860 tweets and propagate the labeling to a large 9M corpus (Section 2). On this corpus, we observe regularities in emoticon use, sentiment, verb tense, named entities and hashtags (Section 3) that enable us to specify feature classes for retweet prediction. Importantly, the outcome of our analysis is that a single holistic treatment of tweets is suboptimal, and that retweeting is better understood with respect to the specific function of the individual tweet. These building blocks allow us to build a per-function retweet predictor (Section 4) that outperforms a baseline.
0
Complex nominals (CNs) (e.g. wind power) are very frequent in English specialized texts (Nakov, 2013) . They are distinguished by their syntactic-semantic complexity, since at least two concepts are juxtaposed with no clear indication of the link between them (Rosario et al., 2002) . This means that in CNs such as air pollution and oil pollution, which have the same external form (the head pollution combines with a noun modifier), different semantic relations can be established between their constituents (has_patient vs. caused_by, respectively) (Maguire et al., 2010) . The root of this issue is noun packing, which can be addressed by analyzing the formation processes of CNs, involving predicate deletion (e.g. power system, instead of a system produces power) and predicate nominalization (e.g. energy transfer, instead of energy is transferred) (Levi, 1978) . This paper describes the use of paraphrases conveying the conceptual content of English two-term CNs (Nakov and Hearst, 2006; Butnariu and Veale, 2008; Cabezas-García and Faber, 2017) in the specialized domain of environmental science. Verb paraphrases were used to access the concealed propositions in two-term CNs formed by predicate nominalization and verb deletion. Some of these paraphrases were based on the lexico-syntactic patterns that generally convey semantic relations in real texts (Meyer, 2001; Marshman, 2006) . Our goal was to access the semantics of these multi-word terms (MWTs) in order to (i) disambiguate the semantic relation between the constituents of the CN; and (ii) develop a procedure to infer the semantic relations in these MWTs.
0
The specificity of a term represents the quantity of domain-specific information contained in the term: if a term carries a large quantity of domain-specific information, its specificity is high. The specificity of a term X is quantified as a positive real number, Spec(X) ∈ R+ (Equation 1).

Specificity is a kind of necessary condition for a term hierarchy, i.e., if X1 is an ancestor of X2, then Spec(X1) is less than Spec(X2). This condition can therefore be applied to the automatic construction or evaluation of term hierarchies. Specificity can also be applied to automatic term recognition.

Many domain-specific terms are multiword terms. When domain-specific concepts are represented as multiword terms, the terms fall into two categories based on the composition of their unit words. In the first category, new terms are created by adding modifiers to existing terms. For example, "insulin-dependent diabetes mellitus" was created by adding the modifier "insulin-dependent" to its hypernym "diabetes mellitus", as in Table 1. In English, specific-level terms are very commonly compounds of a generic-level term and some modifier (Croft, 2004). In this case, compositional information is important for getting at the meaning of the terms. In the second category, new terms are independent of existing terms. For example, "wolfram syndrome" is semantically related to its ancestor terms as in Table 1, but it shares no common words with them. In this case, contextual information is important for getting at the meaning of the terms.

Contextual information has mainly been used to represent the meaning of terms in previous work: (Grefenstette, 1994), (Pereira, 1993) and (Sanderson, 1999) used contextual information to find hyponymy relations between terms, and (Caraballo, 1999) also used contextual information to determine the specificity of nouns. In contrast, the compositional information of terms has not been commonly discussed. We propose new specificity-measuring methods based on both compositional and contextual information, formulated as information-theory-like measures.

The remainder of this paper is organized as follows: the new specificity-measuring methods are introduced in Section 2, the experiments and evaluation of the methods are discussed in Section 3, and conclusions are drawn in Section 4.
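The paper's own measures are not reproduced here; purely as one hedged illustration of an information-theory-like, compositional specificity score, the sketch below sums the information content, -log p(w), of a term's unit words, so that adding a rare modifier (as in "insulin-dependent diabetes mellitus") raises the score. The function names, the smoothing constant and the toy corpus are all assumptions for demonstration.

```python
# A hedged, illustrative instantiation (not the paper's exact Equation 1):
# compositional specificity as summed information content of the unit words.
import math
from collections import Counter

def word_probabilities(corpus_tokens):
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def compositional_specificity(term, probs, unseen_p=1e-6):
    # Rarer unit words contribute more information, hence higher specificity.
    return sum(-math.log(probs.get(w, unseen_p)) for w in term.split())

corpus = ("diabetes mellitus is common . insulin dependent diabetes mellitus "
          "is a specific form of diabetes").split()
p = word_probabilities(corpus)
print(compositional_specificity("diabetes mellitus", p))
print(compositional_specificity("insulin dependent diabetes mellitus", p))  # higher
```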
0
Opinion summarization, i.e., the aggregation of user opinions as expressed in online reviews, blogs, internet forums, or social media, has drawn much attention in recent years due to its potential for various information access applications. For example, consumers have to wade through many product reviews in order to make an informed decision. The ability to summarize these reviews succinctly would allow customers to efficiently absorb large amounts of opinionated text and manufacturers to keep track of what customers think about their products (Liu, 2012).

The majority of work on opinion summarization is entity-centric, aiming to create summaries from text collections that are relevant to a particular entity of interest, e.g., a product, person, or company. A popular decomposition of the problem involves three subtasks (Hu and Liu, 2004, 2006): (1) aspect extraction, which aims to find specific features pertaining to the entity of interest (e.g., battery life, sound quality, ease of use) and identify expressions that discuss them; (2) sentiment prediction, which determines the sentiment orientation (positive or negative) on the aspects found in the first step; and (3) summary generation, which presents the identified opinions to the user (see Figure 1 for an illustration of the task).

Figure 1: Aspect-based opinion summarization. Opinions on image quality, sound quality, connectivity, and price of an LCD television are extracted from a set of reviews. Their polarities are then used to sort them into positive and negative, while neutral or redundant comments are discarded.

A number of techniques have been proposed for aspect discovery using part-of-speech tagging (Hu and Liu, 2004), syntactic parsing (Lu et al., 2009), clustering (Mei et al., 2007; Titov and McDonald, 2008b), data mining (Ku et al., 2006), and information extraction (Popescu and Etzioni, 2005). Various lexicon- and rule-based methods (Hu and Liu, 2004; Ku et al., 2006; Blair-Goldensohn et al., 2008) have been adopted for sentiment prediction, together with a few learning approaches (Lu et al., 2009; Pappas and Popescu-Belis, 2017; Angelidis and Lapata, 2018). As for the summaries, a common format involves a list of aspects and the number of positive and negative opinions for each (Hu and Liu, 2004). While this format gives an overall idea of people's opinions, reading the actual text might be necessary to gain a better understanding of specific details. Textual summaries are created following mostly extractive methods (but see Ganesan et al. 2010 for an abstractive approach), in various formats ranging from lists of words (Popescu and Etzioni, 2005), to phrases (Lu et al., 2009), and sentences (Mei et al., 2007; Blair-Goldensohn et al., 2008; Lerman et al., 2009; Wang and Ling, 2016).

In this paper, we present a neural framework for opinion extraction from product reviews. We follow the standard architecture for aspect-based summarization, while taking advantage of the success of neural network models in learning continuous features without recourse to preprocessing tools or linguistic annotations. Central to our system is the ability to accurately identify aspect-specific opinions by using different sources of information freely available with product reviews (product domain labels, user ratings) and minimal domain knowledge (essentially a few aspect-denoting keywords).
We incorporate these ideas into a recently proposed aspect discovery model (He et al., 2017) which we combine with a weakly supervised sentiment predictor (Angelidis and Lapata, 2018) to identify highly salient opinions. Our system outputs extractive summaries using a greedy algorithm to minimize redundancy. Our approach takes advantage of weak supervision signals only, requires minimal human intervention and no gold-standard salience labels or summaries for training.Our contributions in this work are three-fold: a novel neural framework for the identification and extraction of salient customer opinions that combines aspect and sentiment information and does not require unrealistic amounts of supervision; the introduction of an opinion summarization dataset which consists of Amazon reviews from six product domains, and includes development and test sets with gold standard aspect annotations, salience labels, and multi-document extractive summaries; a large-scale user study on the quality of the final summaries paired with automatic evaluations for each stage in the summarization pipeline (aspects, extraction accuracy, final summaries). Experimental results demonstrate that our approach outperforms strong baselines in terms of opinion extraction accuracy and similarity to gold standard summaries. Human evaluation further shows that our summaries are preferred over comparison systems across multiple criteria.
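As a hedged illustration of the greedy, redundancy-minimizing extraction step mentioned above (not the released system), the sketch below selects opinions by salience while penalizing similarity to opinions already selected; the salience scores, embeddings and trade-off parameter are assumed inputs.

```python
# Minimal sketch of greedy extraction that trades salience against redundancy.
import numpy as np

def greedy_select(embeddings, salience, k=3, lam=0.7):
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    selected, candidates = [], list(range(len(salience)))
    while candidates and len(selected) < k:
        def score(i):
            if not selected:
                return salience[i]
            redundancy = max(float(embeddings[i] @ embeddings[j]) for j in selected)
            return lam * salience[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 16))                      # toy opinion embeddings
print(greedy_select(emb, salience=[0.9, 0.85, 0.4, 0.7, 0.2], k=3))
```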
0
Language is an indispensable and important part of human daily life. Natural language is everywhere, as the most direct and simple tool of expression. Natural language processing transforms the language used for human communication into a machine language that can be understood by machines; it is a framework of models and algorithms for studying language capabilities. In recent years, NLP research has increasingly used new deep learning methods. As an important branch of artificial intelligence, language models are models that can estimate the probability distribution of a group of language units (usually word sequences). These models can be built at relatively low cost and have significantly improved several NLP tasks, such as machine translation, speech recognition and parsing. The processing flow of natural language can be roughly divided into five steps: obtaining the corpus, preprocessing the corpus, feature representation, model training, and evaluating the resulting model.

With the rapid development of the Internet, the frequency of online communication on social platforms such as Weibo, Twitter, and forums is getting higher and higher, and the Internet itself has changed from a "reading Internet" to an "interactive Internet". The Internet has not only become an important source for people to obtain information, but also an important platform for people to express their opinions, share their own experiences and directly express their emotions. The achievements of NLP research have laid a good foundation for text sentiment analysis. Text sentiment analysis is an important research branch in the field of natural language understanding, involving theories and methods from linguistics, psychology, artificial intelligence and other fields. It mainly includes the processing of text sources, the subjective/objective classification of web text, and the analysis of subjective text, among other steps.

Due to the huge inclusiveness and openness of the Internet, it attracts users of different races, languages, cultural backgrounds and religious beliefs to communicate with each other. Therefore, mixed-language sentiment classification will be an important research direction for NLP.
0
The task of temporal annotation, which is addressed in the TempEval-3 challenge, consists of three subtasks: (A) the extraction and normalization of temporal expressions, (B) event extraction, and (C) the annotation of temporal relations. This makes subtask A, i.e., temporal tagging, a prerequisite for the full task of temporally annotating documents. In addition, temporal tagging is important for many further natural language processing and understanding tasks, and can also be exploited for search and exploration scenarios in information retrieval (Alonso et al., 2011).

In the context of the TempEval-2 challenge (Verhagen et al., 2010), we developed our temporal tagger HeidelTime (Strötgen and Gertz, 2010), which achieved the best results for the extraction and normalization of temporal expressions in English documents. For our work on multilingual information retrieval, we extended HeidelTime with a focus on supporting the simple integration of further languages (Strötgen and Gertz, 2012a). For TempEval-3, we have now tuned HeidelTime's English resources and developed new Spanish resources to address both languages that are part of TempEval-3. As the evaluation results demonstrate, HeidelTime outperforms the systems of all other participants on the full task of temporal tagging by achieving high-quality results for extraction and normalization in English and Spanish.

The remainder of the paper is structured as follows: we explain HeidelTime's system architecture in Section 2. Section 3 covers the tuning of HeidelTime's English resources and the development of the Spanish resources. Finally, we discuss the evaluation results in Section 4 and conclude the paper in Section 5.
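HeidelTime itself is built from hand-crafted, language-specific rules and normalization resources, which are not reproduced here; purely as an illustration of the general extract-and-normalize idea behind temporal tagging, the toy sketch below handles a single English date pattern.

```python
# Illustrative only: a toy extract-and-normalize rule for one date format.
import re

MONTHS = {"january": 1, "february": 2, "march": 3, "april": 4, "may": 5,
          "june": 6, "july": 7, "august": 8, "september": 9, "october": 10,
          "november": 11, "december": 12}
PATTERN = re.compile(r"\b(" + "|".join(MONTHS) + r")\s+(\d{1,2}),\s*(\d{4})\b",
                     re.IGNORECASE)

def tag_dates(text):
    """Return (surface string, normalized ISO value) pairs."""
    results = []
    for m in PATTERN.finditer(text):
        month = MONTHS[m.group(1).lower()]
        day, year = int(m.group(2)), int(m.group(3))
        results.append((m.group(0), f"{year:04d}-{month:02d}-{day:02d}"))
    return results

print(tag_dates("The workshop took place on June 14, 2013 in Atlanta."))
```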
0
Text style transfer aims to convert an input text into a generated text with a different style but the same basic semantics as the input. One major challenge in this setting is that many style transfer tasks lack parallel corpora, since the absence of human references makes it impossible to train text style transfer models using maximum likelihood estimation (MLE), which aims to maximize the predicted likelihood of the references. As a result, some of the earliest work (Shen et al., 2017; Hu et al., 2017; Fu et al., 2018) on unsupervised text style transfer proposed training algorithms that are still based on MLE, by formulating the style transfer models as auto-encoders optimized with a reconstruction loss. Specifically, during training the model is tasked to generate a style-agnostic encoding and to reconstruct the input text based on this encoding with style-specific embeddings or decoders. During inference, the model aims to transfer the source text style using the target style information. While these methods have seen empirical success, they face the inherent difficulty of coming up with a style-agnostic but content-preserving encoding; this is a non-trivial task, and failure at this first step will diminish the style transfer accuracy and content preservation of the final output.

Another line of work (Xu et al., 2018; Pang and Gimpel, 2019) proposes training algorithms based on rewards related to the automatic evaluation metrics, which can assess model performance more directly during training. This approach is conceptually similar to training algorithms that optimize models using rewards related to the corresponding evaluation metrics for other NLP tasks, such as machine translation (Shen et al., 2016; Wieting et al., 2019a) or text summarization (Paulus et al., 2018). For unsupervised style transfer, the widely used automatic metrics mainly attend to three desiderata: (1) style transfer accuracy: the generated sentence must be in the target style, commonly measured by the accuracy of a style classifier applied to the transferred text; (2) fluency: the generated text must be grammatically correct and natural, commonly measured by the perplexity of a language model; and (3) content preservation: the semantics need to be preserved between the source and target, commonly measured by the BLEU score between the system outputs and source texts. Since these automatic metrics only require the system outputs and source texts, they can be used as rewards for training. Moreover, the two lines of approaches can be used together, and previous work (Yang et al., 2018; John et al., 2019; Madaan et al., 2020) proposed methods which use auto-encoders as the backbone, augmented with task-specific rewards. In particular, the style transfer accuracy reward is used by most recent work.

However, reward-based training algorithms still have their limitations, and in this paper we aim to identify and address the bottlenecks of these methods. Specifically, we focus on two problems: (1) the difficulty of designing an efficient reward for content preservation, and (2) the lack of robustness of the existing automatic evaluation metrics.

Content preservation is more difficult to measure than style transfer accuracy and fluency because it needs to consider the overlap in semantics between the source text and the system outputs.
While using the BLEU score between the source text and the system output would be a direct solution (Xu et al., 2018), this approach has an inherent limitation in that n-gram based metrics such as BLEU are sensitive to lexical differences and will penalize modifications that are necessary for transferring text style. In fact, previous work has proposed various proxy rewards for content preservation. One of the most popular is the cycle-consistency loss (Dai et al., 2019; Pang and Gimpel, 2019), which introduces a round-trip generation process: the model generates an output in the target style, and the ability of a reconstruction model to re-generate the original text is used as a proxy for content preservation. While this method is more tolerant of lexical differences, the correlation between the reconstruction loss and content preservation can be weak.

Therefore, we aim to design a reward for content preservation which can directly assess the semantic similarity between the system outputs and the input texts. Specifically, we note that models of semantic similarity are widely studied (Wieting et al., 2016; Sharma et al., 2017; Pagliardini et al., 2018), and we can leverage these methods to directly calculate the similarity between the system outputs and the input texts. This renders our method applicable even in unsupervised settings where no human references are available.

Another key challenge for reward-based training algorithms is that the existing automatic evaluation metrics are not well correlated with human evaluation. This poses general risks to work in this field with respect to both model training and evaluation, since these metrics are widely used. An important observation from our experiments is that style transfer models can exploit the weaknesses of the automatic metrics: they make minimal changes to the input texts which are enough to trick the classifier used for style transfer accuracy, while achieving high content preservation and fluency scores due to the high lexical similarity with the input texts. Upon identifying this risk, we revisit and propose several strategies that serve as auxiliary regularization on the style transfer models, effectively mitigating the problem discussed above.

We empirically show that our proposed reward functions provide significant gains in both automatic and human evaluation over strong baselines from the literature. In addition, the problems we identify with existing automatic evaluation metrics suggest that these metrics need to be used with caution, for either model training or evaluation, so that they truthfully reflect human judgments.
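As a hedged sketch of how the three reward signals discussed above can be combined, the snippet below mixes a style-classifier probability, a perplexity-based fluency score and a semantic-similarity score; the three callables and the weights are placeholders, not components of any particular system.

```python
# Sketch of a combined style-transfer reward under assumed scoring components.
import math

def transfer_reward(src, hyp, style_prob, lm_perplexity, semantic_sim,
                    w_style=1.0, w_fluency=1.0, w_content=1.0):
    style = style_prob(hyp)                                # P(target style | hyp)
    fluency = 1.0 / (1.0 + math.log(lm_perplexity(hyp)))   # squash perplexity (>= 1)
    content = semantic_sim(src, hyp)                       # e.g. embedding cosine
    return w_style * style + w_fluency * fluency + w_content * content

# Toy stand-ins, only to make the sketch runnable:
reward = transfer_reward(
    "the food was awful", "the food was wonderful",
    style_prob=lambda s: 0.95,
    lm_perplexity=lambda s: 40.0,
    semantic_sim=lambda a, b: 0.8,
)
print(round(reward, 3))
```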
0
The system described in this paper was submitted for the CoNLL-SIGMORPHON 2018 Shared Task (Cotterell et al., 2018), part 1 only. This task challenges participants to design systems that generate inflected forms from an input lemma and feature set, as shown in Figure 1. Training data is usually provided in three different volumes (see Table 1), all conforming to the UniMorph standard proposed by Kirov et al. (2018). The entire data set comprises 103 languages, although not every training volume is available for every language. In addition, some languages have significantly fewer training samples than the maximum depicted in Table 1.

(Table 1: Maximum training data volumes.)

With such a high count of diverse languages, our system is not tailored towards the specific linguistic features of a language, but instead learns transition-based character actions to transform a lemma into its inflected form. We try to limit the number of output actions that our network has to learn by grouping certain characters into common groups based on graphical features like accents or symbol modifiers. Lastly, we propose a method to enhance the training data of the low setting without the use of external resources. The source code is available at https://gitlab.com/nats/sigmorphon18.
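As an illustration of the transition-based character actions mentioned above (not the submitted system), the sketch below derives COPY/DELETE/INSERT actions from a lemma and its inflected form using Python's standard sequence matcher; the action names are assumptions.

```python
# Minimal sketch: character-level edit actions from lemma to inflected form.
from difflib import SequenceMatcher

def edit_actions(lemma, inflected):
    actions = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, lemma, inflected).get_opcodes():
        if op == "equal":
            actions.extend(("COPY", c) for c in lemma[i1:i2])
        else:  # 'replace', 'delete', 'insert'
            actions.extend(("DELETE", c) for c in lemma[i1:i2])
            actions.extend(("INSERT", c) for c in inflected[j1:j2])
    return actions

print(edit_actions("release", "releasing"))
# e.g. COPY r,e,l,e,a,s + DELETE e + INSERT i,n,g
```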
0
This paper presents a proposition bank for Russian (RuPB) that balances parallelism with the English PropBank against guidance from linguistic properties specific to Russian. A proposition bank, or PropBank, is a lexical resource that follows the PropBank scheme (Palmer et al., 2005) to provide consistent labeling of semantic roles for large corpora. Semantic Role Labeling (SRL) provides consistent semantic information for natural language processing at a level appropriate for statistical machine learning of semantic relations. Data annotated with proposition bank labels supports the training of automatic SRL, which improves question answering (Zapirain et al., 2013), information extraction (Moreda et al., 2005), textual entailment (Sammons et al., 2010), and statistical machine translation (Bazrafshan and Gildea, 2013). Semantic roles communicate "Who does What to Whom and How and When and Where?" They may appear in various positions in the sentence, as in the examples below, where "John" is always the semantic agent but appears in (1) as the subject noun phrase, in (2) as the direct object in a passive transitive clause, and in (3) as the object of the final prepositional phrase of a passive ditransitive clause. Despite these syntactic alternations, in all three sentences the semantic role of "John" as the Agent of the hitting event never changes.

(1) John hit the nail with the hammer.
(2) The nail was hit by John.
(3) The nail was hit with a hammer by John.

It is important that natural language processing models work with multiple languages and language structures. Systems that have been evaluated on multiple languages are more likely to generalize well to new languages (Cohen, 2020). The initial goal for developing a proposition bank for Russian was to support the alignment of English and Russian predicates and the automatic projection of English semantic roles onto Russian texts. This is a complicated task because Russian is more complex morphologically and exhibits more flexible word order than English or the other Indo-European languages currently modeled by proposition banks. This project tests the portability of the PropBank scheme. During the development of the Russian PropBank (RuPB), language-specific issues presented unique challenges. Yet an in-depth linguistic understanding of these issues also provided guidance towards a more consistent and appropriate representation of the semantic structure of Russian verbs. Russian is spoken by 150 million people across twelve time zones as a first language and in most of the former Soviet Union as a second language, and is one of the official languages of the United Nations. It is written with the Cyrillic alphabet, which is also used by other languages such as Bulgarian, Ukrainian, and Kazakh. With respect to available related resources, there is a rich lexicon of verb structure in Czech, another Slavic language with many similarities in morphosyntax, but the only Russian semantic-syntactic resource available today relies as much on syntactic as on semantic structure.

The following sections describe the process of developing RuPB. Related resources that supported the process are introduced in Section 2. The creation and subdivision of a predicate's rolesets, as well as the rationale for including derivations as predicates or aliases of another predicate, are explained in Section 3. Language-specific issues that both complicated and enriched the process, and the language-internal guidance that they provided when creating the PropBank, are described in Section 4. RuPB's coverage is described in Section 5. We conclude with some notes on Inter-Annotator Agreement and plans for future work in Sections 6 and 7.
0
Various kinds of corpora developed for the analysis of linguistic phenomena and for gathering statistical information are now accessible via electronic media and can be utilized for the study of natural language processing. Since these are mostly written-language and monolingual corpora, however, they are not necessarily useful for research and development in multilingual spoken language processing. A multilingual spoken language corpus is indispensable for research on areas of spoken language communication such as speech-to-speech translation.

Many projects on speech-to-speech translation began in the 1990s [Rayner et al. 1993; Roe et al. 1992; Wahlster et al. 2000]. SRI International and Swedish Telecom developed a prototype speech translation system that could translate queries from spoken English to spoken Swedish in the domain of air travel information systems [Rayner et al. 1993]. AT&T Bell Laboratories and Telefónica Investigación y Desarrollo developed a restricted-domain spoken language translation system called Voice English/Spanish Translator (VEST) [Roe et al. 1992]. In Germany, Verbmobil [Wahlster 2000] was created as a major speech-to-speech translation research project. The Verbmobil scenario assumes native speakers of German and of Japanese who both possess at least a basic knowledge of English. The Verbmobil system supports them by translating from their mother tongue, i.e. Japanese or German, into English.

In the 1990s, speech recognition and synthesis research shifted from rule-based to corpus-based approaches such as HMMs and N-grams. However, machine translation research still depended mainly on rule-based or knowledge-based approaches. In the 2000s, wholly corpus-based projects such as the European TC-STAR [Höge 2002; Lazzari 2006] and DARPA GALE [Roukos 2006] began to deal with monologue speech such as broadcast news and European Parliament plenary speeches. In this paper, we report corpus construction activities for the translation of spoken dialogs in travel conversations.

There are a variety of requirements for every component technology, such as speech recognition and language processing. A variety of speakers and pronunciations may be important for speech recognition, and a variety of expressions and information on parts of speech may be important for natural language processing. The speech and natural language processing essential to multilingual spoken language research requires a unified structure and annotation, such as tagging.

In this paper, we introduce an interpreter-aided spoken dialog corpus and discuss corpus configuration. Next, we introduce the basic travel expression corpus developed to train machine translation of spoken language among Japanese, English, and Chinese speakers. Finally, we discuss the Japanese, English, and Chinese multilingual spoken dialog corpus that we created using speech-to-speech translation systems.
0
The investigation of Chinese information extraction is one of the topics of the project COLLATE, which is dedicated to building up the German Competence Center for Language Technology. After accomplishing the task of named entity (NE) identification, we go on to study identification issues for named entity relations (NERs). As an initial step, we define 14 different NERs based on the six NEs identified by a sports-domain Chinese named entity recognition system (Yao et al., 2003). In order to learn NERs, we annotate the output texts from this system with XML; the NER annotation is performed in an interactive mode.

The goal of the learning is to capture valuable information from NER and non-NER patterns, which is carried by different features and helps us identify NERs and non-NERs. Generally speaking, because not all of the features we predefine are important for each NER or non-NER, we have to distinguish them using a reasonable measure. According to the selection criterion we propose, self-similarity, which is a quantitative measure of how concentrated the same kind of NERs or non-NERs are in the corresponding pattern library, we build the effective feature sets: general-character feature (GCF) sets for NERs and individual-character feature (ICF) sets for non-NERs. Moreover, the GCF and ICF feature weights determine, in proportion, the degree of importance of the features for identifying NERs against non-NERs. Subsequently, identification thresholds can also be determined.

During NER identification, we may be confronted with the problem that an NER candidate in a new case matches more than one positive case, or both positive and negative cases. In such situations, we have to employ a vote to decide which existing case environment is more similar to the new case. In addition, a number of special circumstances should also be considered, such as relation conflict and relation omission.
0
Automatic processing of curricula vitae (CVs) is important in multiple real-life scenarios. This includes analyzing, organizing and deriving actionable business intelligence from CVs. For corporations, such processing is interesting in scenarios such as hiring applicants as employees, or promoting and transitioning employees to new roles. For individuals, it is possible to add value by designing CV improvement and organization tools, enabling them to create more effective CVs specific to their career objectives as well as to maintain their CVs easily over time. Hence, it is important to transform CVs to follow a unified structure, thereby paving the way for smoother and more effective manual and automated CV analysis.

The semi-structured nature of CVs, and the diversity that different CVs exhibit, however, make CV processing a challenging task. For example, a first CV could have the sections personal details, education, technical skills, project experience, managerial skills, others, and a second CV, equivalent to the first one, could have the sections about me, career objective, work experience, academic background, proficiency, professional interests, in that order. Note that some sections are equivalent in the two CVs (e.g., personal details and about me), some sections are simply absent in some CVs (e.g., any equivalent of others, which is present in the first CV, is missing in the second CV), and some sections in one CV are a composition of multiple sections in another CV (e.g., proficiency in the second CV is a combination of technical skills and managerial skills in the first). In real life, the variation is high, and the solutions available today are far from perfect. Clearly, the problem at hand requires attention.

Multiple industrial solutions, such as Text Kernel, Burning Glass and Sovren, have attempted to solve the problem at hand and are offered as commercial products. Several researchers have also investigated the problem. Yu et al. (2005) proposed a hybrid (multipass) information extraction model to assign labels to blocks of CVs. Subsequent works, such as Chuang et al. (2009) and Maheshwari et al. (2010), also used multipass approaches and feature-based machine learning techniques. Kopparapu (2010) suggested a knowledge-based approach, using section-specific keywords and n-grams. Tosik et al. (2015) found word embeddings to be more effective than word types and other features for CRF models. Singh et al. (2010) and Marjit et al. (2012), amongst others, also proposed different solutions.

We use a phrase-embedding based approach to identify and label sections, and also investigate the usefulness of traditional language resources such as WordNet (Miller, 1995) and ConceptNet (Liu and Singh, 2004). Empirically, our approach significantly outperforms other approaches.
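As a hedged sketch of phrase-embedding based section labeling (not the paper's model), the snippet below assigns a CV header to the canonical section whose prototype embedding is most similar; the toy embedder, the prototype phrases and the section names are all assumptions for demonstration.

```python
# Illustrative only: cosine similarity between a header embedding and
# prototype embeddings of canonical CV sections.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_section(header, prototypes, embed):
    h = embed(header)
    return max(prototypes, key=lambda name: cosine(h, embed(prototypes[name])))

# Toy embedder: average of random-but-fixed word vectors (demonstration only).
rng = np.random.default_rng(0)
VOCAB = {}
def embed(phrase):
    vecs = [VOCAB.setdefault(w, rng.normal(size=32)) for w in phrase.lower().split()]
    return np.mean(vecs, axis=0)

canonical = {"education": "academic background education degrees",
             "skills": "technical skills proficiency tools",
             "experience": "work experience employment history"}
print(label_section("Academic Background", canonical, embed))
```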
0
The Internet offers a constantly growing source of information, not only in terms of size, but also in terms of the languages and communication settings it includes. As a consequence, Web corpora, i.e. language resources developed by automatically crawling the Web, offer revolutionary potential for fields using textual data, such as Natural Language Processing (NLP), linguistics and other humanities (Kilgariff and Grefenstette, 2003).

Despite this potential, Web corpora are underused. One of the important reasons behind this is the fact that in existing Web corpora, all of the different documents have an equal status. This complicates their use, as for many applications knowing the composition of the corpus would be beneficial. In particular, it would be important to know what registers, i.e. text varieties such as a user manual or a blog post, the corpus consists of (see Section 2 for a definition). In NLP, detecting the register of a text has been noted to be useful, for instance, in POS tagging (Giesbrecht and Evert, 2009), discourse parsing (Webber, 2009) and information retrieval (Vidulin et al., 2007). In linguistics, the correct constitution of a corpus and the criteria used to assemble it have been the subject of long discussions (Sinclair, 1996), and it has been noted that without systematic classification, Web corpora cannot be fully benefited from.

In this paper, we explore the development of register sub-corpora for the Finnish Internet Parsebank, a Web-crawled corpus of Internet Finnish. We assemble the sub-corpora by first naively deducing four register classes from the Parsebank document URLs and then creating a classifier based on these classes to detect texts representing these registers in the rest of the Parsebank (see Section 4). The register classes we develop are news, blogs, forum discussions and encyclopedia articles. Instead of creating a full-coverage taxonomy of all the registers covered by the Parsebank, our aim in this article is to test this method in the detection of these four registers. If the method works, the number of registers will be extended in future work.

In the register detection and analysis, we compare three methods: the traditional bag-of-words as a baseline, lexical trigrams as proposed by Gries et al. (2011), and Dependency Profiles (DPs), i.e. co-occurrence patterns between the documents labelled with a specific class, assumed to be a register, and dependency syntax relations.

In addition to reporting the standard metrics for estimating classifier performance, we evaluate the created sub-corpora by analysing the mismatches between the naively assumed register classes and the classifier predictions. In addition, we analyse the linguistic register characteristics estimated by the classifier. This validates the quality of the sub-corpora and is informative about the linguistic variation inside the registers (see Section 5).

Finally, we publish the four register-specific sub-corpora for the Parsebank that we develop in this paper: blogs, forum discussions, encyclopedia articles and news (see Section 6). We release two sets of sub-corpora: collection A consists of two million documents with register-specific labels; for these documents, we estimate the register prediction precision to be >90%. Collection B consists of four million documents; for these, the precision is >80%. These sub-corpora allow users to focus on specific registers, which improves the usability of the Parsebank significantly (see the discussion in (Asheghi et al., 2016)).
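As a small illustration of the bag-of-words baseline mentioned above, the sketch below trains a simple scikit-learn classifier on a handful of invented register-labelled texts; the data, labels and hyper-parameters are toy assumptions, not the paper's setup.

```python
# Toy bag-of-words register classifier (baseline illustration only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["The government announced new measures today.",
         "I tried this recipe yesterday and loved it, here are my photos!",
         "Has anyone else had this error? Any help appreciated.",
         "The city is the capital and the most populous municipality."]
labels = ["news", "blog", "forum", "encyclopedia"]

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["Does anybody know how to fix this, thanks in advance?"]))
```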
0
Deep data-driven (or stochastic) sentence generation needs to be able to map abstract semantic structures onto syntactic structures. This has been a problem so far, since the two types of structures differ in their topology and number of nodes (i.e., they are non-isomorphic). For instance, a truly semantic structure will not contain any functional nodes (see, for instance, Bouayad-Agha et al., 2012), while a surface-syntactic structure or a chain of tokens in a linearized tree will contain all of them. Some state-of-the-art proposals use a rule-based module to handle the projection between non-isomorphic semantic and syntactic structures/chains of tokens, e.g., (Varges and Mellish, 2001; Belz, 2008; Bohnet et al., 2011), and some adapt the semantic structures to be isomorphic with the syntactic structures.

In this paper, we present two alternative stochastic approaches to the projection between non-isomorphic structures, both based on a cascade of Support Vector Machine (SVM) classifiers (obviously, other machine learning techniques could also be used). The first approach addresses the projection as a generic non-isomorphic graph transduction problem in terms of four classifiers for (1) identification of the (non-isomorphic) correspondences between fragments of the source and target structure, (2) generation of the nodes of the target structure, (3) generation of the dependencies between corresponding fragments of the source and target structure, and (4) generation of the internal dependencies in all fragments of the target structure. The second approach takes advantage of linguistic knowledge about the projection of the individual linguistic token types. It replaces each of the above four classifiers by a set of classifiers, with each single classifier dealing with only one individual linguistic token type (verb, noun, adverb, etc.) or with a configuration thereof. As will be seen, the linguistic knowledge pays off: the second approach achieves considerably better results.

Since our goal is to address the challenge of projecting non-isomorphic structures, we focus on this task in what follows. That is, we do not build a complete generation pipeline down to the surface. This could be done, for instance, by feeding the output obtained from the projection of a semantic structure onto a syntactic structure to an existing surface realizer.
0
Surface realisation consists in producing all the sentences associated by a grammar with a given semantic formula. For lexicalist grammars such as LTAG (Lexicalised Tree Adjoining Grammar), surface realisation usually proceeds bottom-up from a set of flat semantic literals. However, surface realisation from flat semantic formulae is known to be exponential in the length of the input (Kay96; Bre92; KS02). In this paper, we abstract away from GenI (GK05), a TAG-based surface realiser for French, and argue that TAG naturally supports the integration, into a TAG-based, lexically driven surface realiser, of various proposals made to help reduce either surface realisation or parsing complexity. Specifically, we show: (1) that TAG elementary trees naturally support the implementation of a technique called polarity filtering, used to reduce the exponential factor introduced by lexical ambiguity (Per03); (2) that TAG's two operations of substitution and adjunction provide a natural framework for implementing a delayed adjunction mechanism capable of reducing the complexity due to the lack of ordering information; and (3) that TAG's extended domain of locality helps reduce the potential complexity increment introduced by semantically empty items such as the infinitival "to" or the complementiser "that".
0
The idea of HFST (Helsinki Finite-State Technology; Lindén et al. 2009, 2011) is to provide open-source replicas of well-known tools for building morphologies, including XFST (Beesley and Karttunen 2003). HFST's lack of replace rules such as those supported by XFST prompted us to implement them using the present method, which replicates XFST's behavior (with minor differences that will be detailed in later work), but will also allow easy expansion with new functionalities.

The semantics of replacement rules mixes contextual conditions with replacement strategies that are specified by replace rule operators. This paper describes the implementation of replace rules using a preference operator, .r-glc., that disambiguates alternative replacement strategies according to a preference relation. The use of preference relations (Yli-Jyrä 2008b) is similar to the worsener relations used by Gerdemann (2009). The current approach was first described in Yli-Jyrä (2008b), and is closely related to the matching-based finite-state approaches to optimality in OT phonology (Noord and Gerdemann 1999; Eisner 2000). The preference operator, .r-glc., is the reversal of generalized lenient composition (glc), a preference operator construct proposed by Jäger (2001). The implementation is developed using the HFST library, and is now part of that library.

The purpose of this paper is to explain a general method of compiling replace rules with the .r-glc. operator and to show how the preference constraints described in Yli-Jyrä (2008b) can be combined to form different replace rules.
0
Switzerland has four national languages: German/Swiss German (63%), French (22.7%), Italian (8.1%) and Romansh (0.5%); the numbers in parentheses are the percentages of the population speaking them. As can be seen from Figure 1, French is spoken in the west, Italian is spoken primarily in Ticino, Val Bregaglia and Val Poschiavo, and Romansh speakers are distributed over Graubünden. Swiss German, which is primarily spoken in the center and east of Switzerland, is highly dialectal. Typically, speakers speak a dialect representative of their region. To be understood by visitors, Swiss German speakers switch to standard German (Garner et al., 2014). Swiss German and its dialectal variants do not have a standard written form; instead, the standard written form is standard German. Although regional Swiss German dialects are manifold and their differences can be very subtle, there is a common Swiss German speaking style which is used in Swiss broadcasts (e.g. the weather reports of Schweizer Radio und Fernsehen (SRF)) and which is understood well by the vast majority of Swiss German speakers.

Broadcast companies are typically interested in the automatic transcription of speech data, including (live) subtitling and automatic speech recognition (ASR). In the case of Swiss German, the desired output of the ASR system is standard German, as there is no standardized written form of the Swiss German dialects. Due to the mismatch between the dialectal pronunciation of Swiss German and the written form of standard German, these systems often fail. In this paper we explore methods to close this gap, among them a data-driven method. In the remainder of the paper, we first describe the data investigated in this work and the available annotation (Section 2.). In Section 3., we then describe the ASR methods involved, including the data-driven method to improve the pronunciation model, and evaluate the results. A conclusion is given in Section 4.
0
Open-domain question answering (Voorhees, 1999; Chen et al., 2017) is a long-standing task where a question answering system goes through a large-scale corpus to answer information-seeking questions. Previous work typically assumes that there is only one well-defined answer for each question, or only requires systems to predict one correct answer, which largely simplifies the task. However, humans may lack sufficient knowledge or patience to frame very specific information-seeking questions, leading to open-ended and ambiguous questions with multiple valid answers. According to Min et al. (2020b), over 50% of a sampled set of Google search queries (Kwiatkowski et al., 2019) are ambiguous. Figure 1 shows an example with at least three interpretations. As can be seen from this example, the number of valid answers depends on both the question and the relevant evidence, which challenges a system's ability to comprehensively exploit evidence from a large-scale corpus.

Figure 1:
Original Question: When did [You Don't Know Jack] come out?
Interpretation #1: When did the first video game called [You Don't Know Jack] come out?
Evidence #1: You Don't Know Jack is a video game released in 1995, and the first release in ...
Answer #1: 1995
Interpretation #2: When did the Facebook game [You Don't Know Jack] come out on Facebook?
Evidence #2: In 2012, Jackbox Games developed and published a social version of the game on Facebook ...
Answer #2: 2012
Interpretation #3: When did the film [You Don't Know Jack] come out?
Evidence #3: "You Don't Know Jack" premiered April 24, 2010 on HBO.
Answer #3: April 24, 2010

Existing approaches mostly adopt the rerank-then-read framework. A retriever retrieves hundreds or thousands of relevant passages, which are later reranked by a reranker; a generative reader then predicts all answers in sequence conditioned on the top-ranking passages. With a fixed memory constraint, there is a trade-off between the size of the reader and the number of passages the reader can process at a time. Prior work has found that, provided the reranker is capable of selecting a small set of highly relevant passages with high coverage of diverse answers, adopting a larger reader can outperform a smaller reader that uses more passages. However, as shown in Section 4.4, this framework faces three problems: first, due to the small reading budget, the reranker has to balance relevance and diversity, which is non-trivial, as it is unknown beforehand which answers should be allotted more passages to convince the reader and which answers can safely be allotted fewer to save the budget for the other answers; second, the reader has no access to further retrieved evidence that may be valuable but is filtered out by the reranker, while combining information from more passages was found to be beneficial for open-domain QA (Izacard and Grave, 2021b); third, as the reader predicts answers in sequence all at once, it learns pathological dependencies among answers, i.e., whether a valid answer will be predicted also depends on passages that cover some other valid answer(s), while ideally the prediction of a particular answer should depend on the soundness of the associated evidence itself.

To address these issues, we propose to answer open-domain multi-answer questions with a recall-then-verify framework.
Specifically, we first use an answer recaller to predict possible answers from each retrieved passage individually; this can be done with high recall, even when using a weak model for the recaller, but at the cost of low precision due to insufficient evidence to support or refute a candidate. We then aggregate retrieved evidence relevant to each candidate, and verify each candidate with a large answer verifier. By separating the reasoning process of each answer, our framework avoids the problem of multiple answers sharing a limited reading budget, and makes better use of retrieved evidence while also leveraging strong large models under the same memory constraint.Our contributions are summarized as follows:• We empirically analyze the problems faced by the rerank-then-read framework when dealing with open-domain multi-answer QA.• To address these issues, we propose to answer open-domain multi-answer questions with a recall-then-verify framework, which makes better use of retrieved evidence while also leveraging the power of large models under the same memory constraint.• Our framework establishes a new state-of-the-art record on two multi-answer QA datasets with significantly more valid predictions.
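As a schematic sketch of the recall-then-verify flow just described (with placeholder callables standing in for the answer recaller and the answer verifier, and a naive string-match evidence aggregation), consider:

```python
# Sketch of a recall-then-verify pipeline under assumed components.
def recall_then_verify(question, passages, recall_answers, verify, threshold=0.5):
    # Step 1: recall candidate answers from each passage independently.
    candidates = set()
    for p in passages:
        candidates.update(recall_answers(question, p))
    # Step 2: aggregate evidence per candidate and verify each one separately.
    accepted = []
    for cand in candidates:
        evidence = [p for p in passages if cand.lower() in p.lower()]
        if verify(question, cand, evidence) >= threshold:
            accepted.append(cand)
    return accepted

# Toy stand-ins, only to make the sketch runnable:
passages = ["The video game was released in 1995.", "The film premiered in 2010."]
recall = lambda q, p: [tok.strip(".") for tok in p.split() if tok.strip(".").isdigit()]
verify = lambda q, a, ev: 1.0 if ev else 0.0
print(recall_then_verify("When did it come out?", passages, recall, verify))
```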
0
Evidence-based medicine (EBM) is of primary importance in the medical field. Its goal is to present statistical analyses of issues of clinical focus based on retrieving and analyzing numerous papers in the medical literature (Haynes et al., 1997) . The PubMed database is one of the most commonly used databases in EBM (Sackett et al., 1996) .Biomedical papers, describing randomized controlled trials in medical intervention, are published at a high rate every year. The volume of these publications makes it very challenging for physicians to find the best medical intervention for a given patient group and condition (Borah et al., 2017) . Computational methods and natural language processing (NLP) could be adopted in order to expedite the process of biomedical evidence synthesis. Specifically, NLP tasks applied to well structured documents and queries can help physicians extract appropriate information to identify the best available evidence in the context of medical treatment.Clinical questions are formed using the PIO framework, where clinical issues are broken down into four components: Population/Problem (P), Intervention (I), Comparator (C), and Outcome (O). We will refer to these categories as PIO elements, by using the common practice of merging the C and I categories. In (Rathbone et al., 2017) a literature screening performed in 10 systematic reviews was studied. It was found that using the PIO framework can significantly improve literature screening efficacy. Therefore, efficient extraction of PIO elements is a key feature of many EBM applications and could be thought of as a multilabel sentence classification problem.Previous works on PIO element extraction focused on classical NLP methods, such as Naive Bayes (NB), Support Vector Machines (SVM) and Conditional Random Fields (CRF) (Chung, 2009; Boudin et al., 2010) . These models are shallow and limited in terms of modeling capacity. Furthermore, most of these classifiers are trained to extract PIO elements one by one which is suboptimal since this approach does not allow the use of shared structure among the individual classifiers.Deep neural network models have increased in popularity in the field of NLP. They have pushed the state of the art of text representation and information retrieval. More specifically, these techniques enhanced NLP algorithms through the use of contextualized text embeddings at word, sentence, and paragraph levels (Mikolov et al., 2013; Le and Mikolov, 2014; Peters et al., 2017; Devlin et al., 2018; Logeswaran and Lee, 2018; Radford et al., 2018) .More recently, Jin and Szolovits (2018) proposed a bidirectional long short term memory (LSTM) model to simultaneously extract PIO components from PubMed abstracts. To our knowledge, that study was the first in which a deep learning framework was used to extract PIO elements from PubMed abstracts.In the present paper, we build a dataset of PIO elements by improving the methodology found in (Jin and Szolovits, 2018) . Furthermore, we built a multi-label PIO classifier, along with a boosting framework, based on the state of the art text embedding, BERT. This embedding model has been proven to offer a better contextualization compared to a bidirectional LSTM model (Devlin et al., 2018) .
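As a hedged sketch of the multi-label formulation described above, the snippet below puts a per-label sigmoid head on top of (placeholder) sentence embeddings and trains it with a binary cross-entropy objective; the 768-dimensional input, the three-label layout and the data are toy assumptions, not the paper's model.

```python
# Minimal multi-label classification head over sentence embeddings.
import torch
import torch.nn as nn

class PIOClassifier(nn.Module):
    def __init__(self, hidden_size=768, num_labels=3):  # P, I/C, O
        super().__init__()
        self.linear = nn.Linear(hidden_size, num_labels)

    def forward(self, sentence_embeddings):
        return self.linear(sentence_embeddings)  # raw logits

model = PIOClassifier()
loss_fn = nn.BCEWithLogitsLoss()                 # multi-label objective
emb = torch.randn(4, 768)                        # 4 sentences (toy embeddings)
gold = torch.tensor([[1., 0., 0.], [0., 1., 1.], [0., 0., 1.], [1., 1., 0.]])
loss = loss_fn(model(emb), gold)
loss.backward()
probs = torch.sigmoid(model(emb))                # per-label probabilities
print(probs.shape)                               # torch.Size([4, 3])
```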
0
Since the development of the Prolog programming language (Colmerauer 1973; Roussel 1975), logic programming (Kowalski 1974, 1979; Van Emden 1975) has been applied in many different fields. In natural language processing, useful grammar formalisms have been developed and incorporated in Prolog: metamorphosis grammars, due to Colmerauer (1978), and extraposition grammars, defined by F. Pereira (1981); definite clause grammars (Pereira and Warren 1980) are a special case of metamorphosis grammars. The first sizable application of logic grammars was a Spanish/French-consultable database system by Dahl (1977, 1981), which was later adapted to Portuguese.

Coordination has generally been treated as a "metagrammatical" construction, in the sense that metarules, general system operations, or "second-pass" operations such as transformations are needed for its formulation. Perhaps the most general and powerful metagrammatical device for handling coordination in computational linguistics has been the SYSCONJ facility for augmented transition networks (ATNs) (Woods 1973; Bates 1978). The ATN interpreter with this facility built into it can take an ATN that does not itself mention conjunctions at all, and will parse reduced coordinate constructions, which are of the form A X and Y B, for example,

[A John] [X drove his car through] and [Y completely demolished] [B a plate glass window],

where the unreduced deep structure corresponds to A X B and A Y B. The result of the parse is this unreduced structure. SYSCONJ accomplishes this by treating the conjunction as an interruption which causes the parser to back up in its history of the parse. Before backing up, the current configuration (immediately before the interruption) is suspended for later merging. The backing up is done to a point when the string X was being parsed (this defines X), and with this configuration the string Y is parsed. The parsing of Y stops when a configuration is reached that can be merged with the suspended configuration, whereupon B is parsed. The choices made in this process can be deterministic or non-deterministic, and can be guided by syntactic or semantic heuristics.

There are some problems with SYSCONJ, however. It suffers from inefficiency, due to the combinatorial explosion from all the choices it makes. Because of this inefficiency, it has in fact not been used to a great extent in ATN parsing. Another problem is that it does not handle embedded complex structures. Furthermore, it is not clear to us that SYSCONJ offers a good basis for handling the scoping problems that arise for semantic interpretation when conjunctions interact with quantifiers (and other modifiers) in the sentence. This latter problem is discussed in detail below.

In this paper we present a system for handling coordination in logic grammars. The system consists of three things: (1) a new formalism for logic grammars, which we call modifier structure grammars (MSGs); (2) an interpreter (or parser) for MSGs that takes all the responsibility for the syntactic aspects of coordination (as with SYSCONJ); and (3) a semantic interpretation component that produces logical forms from the output of the parser and deals with scoping problems for coordination. The whole system is implemented in Prolog-10 (Pereira, Pereira, and Warren 1978).

Coordination has of course received some treatment in standard logic grammars through the writing of specific grammar rules. The most extensive treatment of this sort that we know of is in Pereira et al. (1982), which also deals with ellipsis.
However, we are aware of no general, metagrammatical treatment of coordination in logic grammars previous to ours.Modifier structure grammars, described in detail in Section 2, are true logic grammars, in that they can be translated (compiled) directly into Horn clause systems, the program format for Prolog.In fact, the treatment of extraposition in MSGs is based on F. Pereira's (1981) extraposition grammars (XGs), and MSGs can be compiled into XGs (which in turn can be compiled into Horn clause systems). A new element in MSGs is that the formation of analysis structures of sentences has been made largely implicit in the grammar formalism. For previous logic grammar formalisms, the formation of analyses is entirely the responsibility of the grammar writer. Compiling MSGs into XGs consists in making this formation of analyses explicit.Although MSGs can be compiled into XGs, it seems difficult to do this in a way that treats coordination automatically (it appears to require more metalogical facilities than are currently available in Prolog systems). Therefore, we are using an interpreter for MSGs (written in Prolog).For MSGs, the analysis structure associated (by the sYstem) with a sentence is called the modifier structure (MS) of the sentence. This structure can be considered an annotated phrase structure tree, and in fact the name "modifier structure grammar" is intended to be parallel to "phrase structure grammar". If extraposition and coordination are neglected, there is a context-free phrase structure grammar underlying an MSG; and the MS trees are indeed derivation trees for this underlying grammar, but with extra information attached to the nodes.In an MS tree, each node contains not only syntactic information but also a term called a semantic item (supplied in the grammar), which determines the node's contribution to the logical form of the sentence.This contribution is for the node alone, and does not refer to the daughters of the node, as in the approach of Gazdar (1981) . Through their semantic items, the daughters of a node act as modifiers of the node, in a fairly traditional sense made precise below -hence the term "modifier structure". The notion of modifier structure used here and the semantic interpretation component, which depends on it, are much the same as in previous work by McCord (1982, 1981) , especially the latter paper. But new elements are the notion of MSG (making modifier structure implicit in the grammar), the MSG interpreter, with its treatment of coordination, and the specific rules for semantic interpretation of coordination. The MSG interpreter is described in Section 3. As indicated above, the interpreter completely handles the syntax of coordination.The MSG grammar itself should not mention conjunctions at all. The interpreter has a general facility for treating certain words as demons (cf. Winograd 1972) , and conjunctions are handled in this way. When a conjunction demon appears in a sentence A X conj Y B, a process is set off which in outline is like SYSCONJ, in that backing up is done in the parse history in order to parse Y parallel to X, and B is parsed by merger with the state interrupted by the conjunction. However, our system has the following interesting features:(1) The MSG interpreter manipulates stacks in such a way that embedded coordination (and coordination of more than two elements) and interactions with extraposition are handled. 
(Examples are given in the Appendix.) (2) The interpreter produces a modifier structure for the sentence A X conj Y B which remains close to the surface form, as opposed to the unreduced structure A X B conj A Y B (but it does show all the pertinent semantic relations through unification of variables). Not expanding to the unreduced form is important for keeping the modifier relationships straight, in particular, getting the right quantifier scoping. Our system analyzes the sentence Each man drove a car through and completely demolished a glass window, producing the logical form

each(X,man(X),exists(Y,car(Y), exists(Z,glass(Z)&window(Z), drove_through(X,Y,Z) & completely(demolished(X,Z)) )))

This logical form would be difficult to recover from the unreduced structure, because the quantified noun phrases are repeated in the unreduced structure, and the logical form that corresponds most naturally to the unreduced structure is not logically equivalent to the above logical form. (3) In general, the use of modifier structures and the associated semantic interpretation component permits a good treatment of scoping problems involving coordination. Examples are given below. (4) The system seems reasonably efficient. For example, the analysis of the example sentence under (2) above (including syntactic analysis and semantic interpretation) was done in 177 milliseconds. The reader can examine analysis times for other examples in the Appendix. One reason for the efficiency is just that the system is formulated as a logic programming system, and especially that it uses Prolog-10, with its compiler. Another reason presumably lies in the details of the MSG interpreter. For example, the interpreter does not save the complete history of the parse, so that the backing up necessary for coordination does not examine as much. (5) The code for the system seems short, and most of it is listed in this paper.

The semantic interpretation component is described in Section 4, but not in complete detail since it is taken in the main from McCord (1982, 1981). Emphasis is on the new aspects involving semantic interpretation of coordinate modifiers. Semantic interpretation of a modifier structure tree is done in two stages. The first stage, called reshaping, deals heuristically with the well-known scoping problem, which arises because of the discrepancies that can exist between (surface) syntactic relations and intended semantic relations. Reshaping is a transformation of the syntactic MS tree into another MS tree with the (hopefully) correct modifier relations. The second stage takes the reshaped tree and translates it into logical form. The modifiers actually do their work of modification in this second stage, through their semantic items.

As an example of the effects of reshaping on coordinate structures involving quantifiers, the sentence Each man and each woman ate an apple is given the logical form

each(X,man(X),exists(Y,apple(Y),ate(X,Y))) & each(X,woman(X),exists(Y,apple(Y),ate(X,Y))),

whereas the sentence A man and a woman sat at each table is given the form

each(Y,table(Y), exists(X,man(X),sat_at(X,Y)) & exists(X,woman(X),sat_at(X,Y))).

Section 5 of the paper presents a short discussion of possible improvements for the system, and Section 6 consists of concluding remarks. The Appendix to the paper contains a listing of most of the system, a sample MSG, and sample parses. The reader may wish to examine the sample parses at this point.
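Purely as an illustration of the shape of the logical forms quoted above, the following sketch represents them as nested Python tuples; the constructor names mirror the notation used in the text, but this is an outside illustration, not part of the paper, whose system builds such terms in Prolog.

# Minimal sketch: representing the quoted logical forms as nested tuples.
def each(var, restriction, scope):
    return ("each", var, restriction, scope)

def exists(var, restriction, scope):
    return ("exists", var, restriction, scope)

def conj(*terms):
    return ("&",) + terms

def pred(name, *args):
    return (name,) + args

# "Each man and each woman ate an apple" after reshaping:
# coordination of the two quantified subjects distributes over the sentence.
form = conj(
    each("X", pred("man", "X"),
         exists("Y", pred("apple", "Y"), pred("ate", "X", "Y"))),
    each("X", pred("woman", "X"),
         exists("Y", pred("apple", "Y"), pred("ate", "X", "Y"))),
)
print(form)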
0
This paper describes our development of systems for the WMT17 Neural Machine Translation (NMT) Training Task (WMT, 2017). This task tests methods of adjusting the NMT training process, with a fixed size and format for the final English-to-Czech system. A large (approx. 50 million line) general-domain (mostly subtitles) bilingual corpus is provided as a training set. A domain is provided for each line of this corpus. News text, the application domain, makes up about 0.5% of the corpus (see Table 1, column "Given"). A subword expansion to be used is explicitly provided as well. We preprocess the training data to standardize some punctuation and character encoding differences. We filter the data to remove lines that are in foreign languages or carry little information, approximately 5% of the training data.

We follow a teacher-student (also known as knowledge distillation) paradigm for this task (Ba and Caruana, 2014). We train ten replicate systems larger than the final system, based on all the training data available. These systems are aware of different factors (domain, case, subword location) for each subword, allowing them to use this information to learn finer details of translation. They also produce different outputs, based on randomness in training. We translate the entire news-domain training corpus with all replicate systems. These outputs are added to the most applicable training data as another set of references, and the final NMT systems are trained from this decimated and augmented training set (a schematic sketch of this step is given below).

We choose to resist making many changes to the given systems, in order to provide useful a posteriori comparisons. To this end, for all intermediate systems we use:
• only neuralmonkey, or branches thereof, for NMT,
• the given data only, and
• alterations to the given 4GB and 8GB configurations only.
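As a rough illustration of the augmentation step described above, the sketch below adds teacher translations of the in-domain source sentences as extra synthetic references. The translate method and the stand-in teacher class are placeholders and not part of the task's actual tooling.

# Sketch: add teacher translations of news-domain sources as synthetic references.
def augment_with_teachers(src_sentences, ref_sentences, teachers):
    augmented = []
    for src, ref in zip(src_sentences, ref_sentences):
        augmented.append((src, ref))  # keep the original reference
        for teacher in teachers:
            augmented.append((src, teacher.translate(src)))  # synthetic reference
    return augmented

class EchoTeacher:
    def translate(self, s):
        return s  # placeholder; a real teacher returns a Czech translation

pairs = augment_with_teachers(["a cat sat"], ["kočka seděla"], [EchoTeacher()])
print(pairs)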
0
Metonymy is a figure of speech that uses "one entity to refer to another that is related to it" (Lakoff and Johnson, 1980, p.35) . In example (1), for instance, China and Taiwan stand for the governments of the respective countries:(1)China has always threatened to use force if Taiwan declared independence. (BNC) Metonymy resolution is the task of automatically recognizing these words and determining their referent. It is therefore generally split up into two phases: metonymy recognition and metonymy interpretation (Fass, 1997) .The earliest approaches to metonymy recognition identify a word as metonymical when it violates selectional restrictions (Pustejovsky, 1995) .Indeed, in example (1), China and Taiwan both violate the restriction that threaten and declare require an animate subject, and thus have to be interpreted metonymically. However, it is clear that many metonymies escape this characterization. Nixon in example (2) does not violate the selectional restrictions of the verb to bomb, and yet, it metonymically refers to the army under Nixon's command.( 2)Nixon bombed Hanoi.This example shows that metonymy recognition should not be based on rigid rules, but rather on statistical information about the semantic and grammatical context in which the target word occurs. This statistical dependency between the reading of a word and its grammatical and semantic context was investigated by Markert and Nissim (2002a) and Nissim and Markert (2003; 2005) . The key to their approach was the insight that metonymy recognition is basically a subproblem of Word Sense Disambiguation (WSD). Possibly metonymical words are polysemous, and they generally belong to one of a number of predefined metonymical categories. Hence, like WSD, metonymy recognition boils down to the automatic assignment of a sense label to a polysemous word. This insight thus implied that all machine learning approaches to WSD can also be applied to metonymy recognition.There are, however, two differences between metonymy recognition and WSD. First, theoretically speaking, the set of possible readings of a metonymical word is open-ended (Nunberg, 1978) . In practice, however, metonymies tend to stick to a small number of patterns, and their labels can thus be defined a priori. Second, classic WSD algorithms take training instances of one particular word as their input and then disambiguate test instances of the same word. By contrast, since all words of the same semantic class may undergo the same metonymical shifts, metonymy recognition systems can be built for an entire semantic class instead of one particular word (Markert and Nissim, 2002a) .To this goal, Markert and Nissim extracted from the BNC a corpus of possibly metonymical words from two categories: country names (Markert and Nissim, 2002b) and organization names (Nissim and Markert, 2005) . All these words were annotated with a semantic label -either literal or the metonymical category they belonged to. For the country names, Markert and Nissim distinguished between place-for-people, place-for-event and place-for-product. For the organization names, the most frequent metonymies are organization-for-members and organization-for-product. In addition, Markert and Nissim used a label mixed for examples that had two readings, and othermet for examples that did not belong to any of the pre-defined metonymical patterns.For both categories, the results were promising. The best algorithms returned an accuracy of 87% for the countries and of 76% for the organizations. 
Grammatical features, which gave the function of a possibly metonymical word and its head, proved indispensable for the accurate recognition of metonymies, but led to extremely low recall values, due to data sparseness. Therefore Nissim and Markert (2003) developed an algorithm that also relied on semantic information, and tested it on the mixed country data. This algorithm used Dekang Lin's (1998) thesaurus of semantically similar words in order to search the training data for instances whose head was similar, and not just identical, to the test instances. Nissim and Markert (2003) showed that a combination of semantic and grammatical information gave the most promising results (87%).However, Nissim and Markert's (2003) approach has two major disadvantages. The first of these is its complexity: the best-performing algorithm requires smoothing, backing-off to grammatical roles, iterative searches through clusters of semantically similar words, etc. In section 2, I will therefore investigate if a metonymy recognition al-gorithm needs to be that computationally demanding. In particular, I will try and replicate Nissim and Markert's results with the 'lazy' algorithm of Memory-Based Learning.The second disadvantage of Nissim and Markert's (2003) algorithms is their supervised nature. Because they rely so heavily on the manual annotation of training and test data, an extension of the classifiers to more metonymical patterns is extremely problematic. Yet, such an extension is essential for many tasks throughout the field of Natural Language Processing, particularly Machine Translation. This knowledge acquisition bottleneck is a well-known problem in NLP, and many approaches have been developed to address it. One of these is active learning, or sample selection, a strategy that makes it possible to selectively annotate those examples that are most helpful to the classifier. It has previously been applied to NLP tasks such as parsing (Hwa, 2002; Osborne and Baldridge, 2004) and Word Sense Disambiguation (Fujii et al., 1998) . In section 3, I will introduce active learning into the field of metonymy recognition.
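As a minimal illustration of the 'lazy', memory-based classification mentioned above, the snippet below trains a nearest-neighbour classifier over simple grammatical-context features (grammatical role and head word). The toy feature values and labels are invented for illustration and do not reproduce the actual feature encoding or data used in this work.

# Sketch of memory-based (k-NN) metonymy recognition over toy features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Each instance: the possibly metonymic word's grammatical role and its head.
train_X = [
    {"role": "subj", "head": "threaten"},
    {"role": "subj", "head": "declare"},
    {"role": "subj", "head": "border"},
    {"role": "obj",  "head": "visit"},
]
train_y = ["place-for-people", "place-for-people", "literal", "literal"]

clf = make_pipeline(DictVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(train_X, train_y)
print(clf.predict([{"role": "subj", "head": "announce"}]))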
0
For digitization of incoming mails in business contexts as well as for retro-digitizing archives, batch scanning of documents can be a major simplification of the processing workflow. In this scenario, scanned images of multipage documents arrive at a document management system as an ordered stream of single pages lacking information on document boundaries. Page stream segmentation (PSS) then is the task of dividing the continuous document stream into sequences of pages that represent single physical documents. 1 Applying a fully automated approach of document page segmentation can be favorable over manually separating and scanning documents, especially in contexts of very large data sets which need to be separated (Gallo et al., 2016) . In a joint research project together with a German research archive, we supported the task of retro-digitization of a paper archive consisting of circa one million pages put on file between 1922 and 2010 (Isemann et al., 2014) . The collection contains documents of varying content, types and lengths around the topic of ultimate disposal of nuclear waste, mostly administrative letter correspondence and research reports, but also stock lists, meeting minutes and email printouts. The 1M pages were archived in roughly 20.000 binders which were batch-scanned due to limited manual capacities for separating individual documents. The long time range of archived material affects document quality, proliferation of layout standards, different fonts and the use of hand-written texts. All these circumstances pose severe challenges to OCR as well as to page stream segmentation (PSS). In this article, we introduce our approach to PSS comparing (linear) support vector machines (SVM) and convolutional neural networks (CNN). For the first time for this task, we combine textual and visual features into one net-work to achieve most-accurate results. The upcoming section 2. elaborates on related work. In section 3. we describe our dataset together with one reference dataset for this task. In section 4. we introduce our neural network based architecture for PSS. As a baseline, we introduce an SVM-based model solely operating on text features. Then, we introduce CNN for PSS on text and image data separately as well as in a combined architecture. Section 5. presents a quantitative and a qualitative evaluation of the approach on the two datasets.
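The combined text-and-image idea described above can be sketched roughly as a small PyTorch module with a text branch and an image branch whose representations are concatenated before a binary "new document starts here" decision. The layer sizes, the bag-of-words text encoding, and the precomputed visual features are placeholder assumptions, not the configuration used in this work.

# Rough sketch of a combined text+image page-break classifier (PyTorch).
import torch
import torch.nn as nn

class PageBreakClassifier(nn.Module):
    def __init__(self, vocab_size=5000, img_feat_dim=256, hidden=128):
        super().__init__()
        # Text branch: bag-of-words vector -> dense representation.
        self.text_branch = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        # Image branch: precomputed visual features -> dense representation.
        self.image_branch = nn.Sequential(nn.Linear(img_feat_dim, hidden), nn.ReLU())
        # Combined head: predicts whether the page starts a new document.
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, text_vec, img_vec):
        combined = torch.cat([self.text_branch(text_vec),
                              self.image_branch(img_vec)], dim=-1)
        return self.head(combined)

model = PageBreakClassifier()
logits = model(torch.rand(4, 5000), torch.rand(4, 256))  # batch of 4 pages
print(logits.shape)  # torch.Size([4, 2])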
0
The need for statistical hypothesis testing for machine translation (MT) has been acknowledged since at least Och (2003) . In that work, the proposed method was based on bootstrap resampling and was designed to improve the statistical reliability of results by controlling for randomness across test sets. However, there is no consistently used strategy that controls for the effects of unstable estimates of model parameters. 1 While the existence of optimizer instability is an acknowledged problem, it is only infrequently discussed in relation to the reliability of experimental results, and, to our knowledge, there has yet to be a systematic study of its effects on hypothesis testing. In this paper, we present a series of experiments demonstrating that optimizer instability can account for substantial amount of variation in translation quality, 2 which, if not controlled for, could lead to incorrect conclusions. We then show that it is possible to control for this variable with a high degree of confidence with only a few replications of the experiment and conclude by suggesting new best practices for significance testing for machine translation.
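The bootstrap-resampling idea attributed to Och (2003) can be illustrated with the paired test sketched below, which resamples per-sentence score contributions for two systems on the same test set. Treating corpus-level quality as a sum of per-sentence scores is a simplification (corpus BLEU does not decompose exactly over sentences), and the toy numbers are made up.

# Sketch of paired bootstrap resampling over per-sentence scores.
import random

def paired_bootstrap(scores_a, scores_b, n_samples=1000, seed=0):
    """Estimate how often system A beats system B under resampling of the test set."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_samples

a = [0.31, 0.42, 0.25, 0.38, 0.29]
b = [0.30, 0.35, 0.27, 0.33, 0.31]
print(paired_bootstrap(a, b))  # fraction of resamples where A outscores B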
0
Learning from examples is the predominant approach for many NLP tasks: A model is trained on a set of labeled examples from which it then generalizes to unseen data. Due to the vast number of languages, domains and tasks and the cost of annotating data, it is common in real-world uses of NLP to have only a small number of labeled examples, making few-shot learning a highly important research area. Unfortunately, applying standard supervised learning to small training sets often performs poorly; many problems are difficult to grasp from just looking at a few examples. For instance, assume we are given the following pieces of text:
• T1: This was the best pizza I've ever had.
• T2: You can get better sushi for half the price.
• T3: Pizza was average. Not worth the price.

Figure 1: (1) A number of patterns encoding some form of task description are created to convert training examples to cloze questions; for each pattern, a pretrained language model is finetuned. (2) The ensemble of trained models annotates unlabeled data. (3) A classifier is trained on the resulting soft-labeled dataset.

Furthermore, imagine we are told that the labels of T1 and T2 are l and l', respectively, and we are asked to infer the correct label for T3. Based only on these examples, this is impossible because plausible justifications can be found for both l and l'. However, if we know that the underlying task is to identify whether the text says anything about prices, we can easily assign l' to T3. This illustrates that solving a task from only a few examples becomes much easier when we also have a task description, i.e., a textual explanation that helps us understand what the task is about.

With the rise of pretrained language models (PLMs) such as GPT (Radford et al., 2018), BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), the idea of providing task descriptions has become feasible for neural architectures: We can simply append such descriptions in natural language to an input and let the PLM predict continuations that solve the task (Radford et al., 2019; Puri and Catanzaro, 2019). So far, this idea has mostly been considered in zero-shot scenarios where no training data is available at all.

In this work, we show that providing task descriptions can successfully be combined with standard supervised learning in few-shot settings: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that uses natural language patterns to reformulate input examples into cloze-style phrases. As illustrated in Figure 1, PET works in three steps: First, for each pattern a separate PLM is finetuned on a small training set T. The ensemble of all models is then used to annotate a large unlabeled dataset D with soft labels. Finally, a standard classifier is trained on the soft-labeled dataset. We also devise iPET, an iterative variant of PET in which this process is repeated with increasing training set sizes. (Our implementation is publicly available at https://github.com/timoschick/pet.)

On a diverse set of tasks in multiple languages, we show that given a small to medium number of labeled examples, PET and iPET substantially outperform unsupervised approaches, supervised training and strong semi-supervised baselines. Radford et al. (2019) provide hints in the form of natural language patterns for zero-shot learning of challenging tasks such as reading comprehension and question answering (QA).
This idea has been applied to unsupervised text classification (Puri and Catanzaro, 2019), commonsense knowledge mining (Davison et al., 2019) and argumentative relation classification (Opitz, 2019). Srivastava et al. (2018) use task descriptions for zero-shot classification but require a semantic parser. For relation extraction, Bouraoui et al. (2020) automatically identify patterns that express given relations. McCann et al. (2018) rephrase several tasks as QA problems. Raffel et al. (2020) frame various problems as language modeling tasks, but their patterns only loosely resemble natural language and are unsuitable for few-shot learning. Another recent line of work uses cloze-style phrases to probe the knowledge that PLMs acquire during pretraining; this includes probing for factual and commonsense knowledge (Trinh and Le, 2018; Petroni et al., 2019; Wang et al., 2019; Sakaguchi et al., 2020), linguistic capabilities (Ettinger, 2020; Kassner and Schütze, 2020), understanding of rare words (Schick and Schütze, 2020), and the ability to perform symbolic reasoning (Talmor et al., 2019). Jiang et al. (2020) consider the problem of finding the best pattern to express a given task.
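A rough sketch of the cloze-style reformulation at the heart of this line of work: a pattern turns an input into a sentence with a mask token, and a masked language model scores a small set of label words. The pattern, the label words, and the model name below are illustrative choices, not the exact configuration used by PET.

# Sketch: score label words for a cloze-reformulated input with a masked LM.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"  # placeholder choice of PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

text = "Pizza was average. Not worth the price."
pattern = f"{text} All in all, it was {tokenizer.mask_token}."  # illustrative pattern
verbalizers = {"positive": "great", "negative": "terrible"}     # illustrative label words

inputs = tokenizer(pattern, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
for label, word in verbalizers.items():
    word_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word))[0]
    print(label, logits[0, mask_pos, word_id].item())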
0
In questions where the wh-word is embedded into a larger NP, there are two structural possibilities, shown in (1) and (2).(1) (a) The picture of whom does John like?(b) Which boy's father did you see?(2) (a) Whom does John like a picture of? (b) Which painting did you see a photograph of?The larger NP containing the question word can be pied-piped as in (1) to the beginning of the sentence together with the wh-word. This requires some kind of syntactic or semantic reconstruction, i.e.: For scopal purposes, the matrix NP must contribute its semantics (at least in one of the readings) approximately in the position of its trace, while the wh-word itself has of course the widest possible scope.Native speakers judge pied-piping of embedding NPs ungrammatical in some cases. Particularly, although pied-piping is always fine in relative clauses, a direct question like (3b) is ungrammatical. 1 (3) (a) On the corner of which street does his friend live? (b) A picture of whom does John like? However, as examples (1a) and 3ashow, pied-piping is found with some determiners. We therefore generally allow this construction in the grammar, and attribute the infelicity of some examples to independent factors.In another construction, shown in (2), the matrix NP can be stranded in its object position, yielding potential problems for semantic compositionality in frameworks that do not use transformations.Constructions as (2) are claimed to be only possible if all embedding NPs (those which are stranded) are nonspecific. This goes back to Fiengo and Higginbotham (1981) , who show in a much broader context that extraction out of NPs is not possible if an embedding NP is specific. Thus, we get the following judgments:(4) (a) Who did John see a picture of? (b) Who did John see the picture of? (c) Who did John see every picture of?We see that the range of determiners is lexically specified by the construction that they appear in (i.e., the extraction configuration). As for the lexical restrictions with regard to pied-piping above, these effects will not concern us in this paper. They must be dealt with by independent processes, e.g. lexical constraints.In this paper we show how an approach to the semantics of Tree Adjoining Grammar that uses semantic feature structures and variable unification as in can provide the correct variable bindings for both types of questions. The paper proposes elementary trees and semantic representations that allow to account for both constructions, (1) and (2), in a uniform way.use the framework presented in that follows this line: We use flat semantic representations with unification variables (similar to MRS, Copestake et al., 1999) . The semantic representations contain propositional metavariables. Constraints on the relative scope of these metavariables and propositional labels are used to provide underspecified representations of scope ambiguities. To keep track of the necessary variable unifications, semantic feature structures are associated with each node in the elementary tree. For semantic computation, the nodes in the derivation tree contain the semantic information associated with the elementary trees. At the end of a derivation, all possible disambiguations, i.e. injective functions from the remaining propositional variables to labels, must be found to obtain the different possible scopings of the sentence. The disambiguated representations are interpreted conjunctively.
0
The objective of the ILLICO project, is the development of a generator of natural language interfaces for the consultation of different kinds of knowledge bases in French. The main external characteristic of the ILLICO interface lies in the fact that it can guide, if necessary, the user while he/she composes sentences. Guided composition is done according to the principle of partial synthesis of a sentence. The main internal characteristic of an ILLICO interface is the modularity of its linguistic knowledge specifying the lexical, syntactic, semantic and conceptual levels of wellformedness. In order to take the consequences of these two main characteristics into account, we have developped an approach and a system in which all the constraints on the different levels of well-formedness are coroutined when the system analyzes a given sentence or synthesizes a partial one. In this paper, we describe the external and internal characteristics of the ILLICO interface, and their consequences on sentence processing. Then we describe the principle of coroutining constraints on different levels of well-formedness and the associated Prolog program.
0
A working definition of coreference resolution is partitioning the noun phrases we are interested in into equivalence classes, each of which refers to a physical entity. We adopt the terminology used in the Automatic Content Extraction (ACE) task (NIST, 2003a) and call each individual phrase a mention and each equivalence class an entity. For example, in the following text segment:

(1) "The American Medical Association voted yesterday to install the heir apparent as its president-elect, rejecting a strong, upstart challenge by a district doctor who argued that the nation's largest physicians' group needs stronger ethics and new leadership."

the mentions are underlined in the original text; "American Medical Association", "its" and "group" refer to the same organization (object) and form an entity. Similarly, "the heir apparent" and "president-elect" refer to the same person and form another entity. It is worth pointing out that the entity definition here is different from what is used in the Message Understanding Conference (MUC) task (MUC, 1995; MUC, 1998): an ACE entity is called a coreference chain or equivalence class in MUC, and an ACE mention is called an entity in MUC.

An important problem in coreference resolution is how to evaluate a system's performance. A good performance metric should have the following two properties:

Discriminativity: This refers to the ability to differentiate a good system from a bad one. While this criterion sounds trivial, not all performance metrics used in the past possess this property.

Interpretability: A good metric should be easy to interpret. That is, there should be an intuitive sense of how good a system is when a metric suggests that a certain percentage of coreference results are correct. For example, when a metric reports a high score (say, 90% or above) for a system, we would expect that the vast majority of mentions are in the right entities or coreference chains.

A widely-used metric is the link-based F-measure (Vilain et al., 1995) adopted in the MUC task. It is computed by first counting the number of common links between the reference (or "truth") and the system output (or "response"); the link precision is the number of common links divided by the number of links in the system output, and the link recall is the number of common links divided by the number of links in the reference. There are known problems associated with the link-based F-measure. First, it ignores single-mention entities since no link can be found in these entities; second, and more importantly, it fails to distinguish system outputs with different qualities: the link-based F-measure intrinsically favors systems producing fewer entities, and may result in higher F-measures for worse systems. We will revisit these issues in Section 3.

To counter these shortcomings, Bagga and Baldwin (1998) proposed a B-cubed metric, which first computes a precision and recall for each individual mention, and then takes the weighted sum of these individual precisions and recalls as the final metric. While the B-cubed metric fixes some of the shortcomings of the MUC F-measure, it has its own problems: for example, the mention precision/recall is computed by comparing entities containing the mention and therefore an entity can be used more than once. The implication of this drawback will be revisited in Section 3.

In the ACE task, a value-based metric called ACE-value (NIST, 2003b) is used. The ACE-value is computed by counting the number of false alarms, the number of misses, and the number of mistaken entities.
Each error is associated with a cost factor that depends on things such as entity type (e.g., "LOCATION", "PERSON") and mention level (e.g., "NAME," "NOMINAL," and "PRONOUN"). The total cost is the sum of the three costs, which is then normalized against the cost of a nominal system that does not output any entity. The ACE-value is finally computed by subtracting the normalized cost from 1. A perfect coreference system will get a 100% ACE-value, while a system that outputs no entities will get a 0% ACE-value. A system outputting many erroneous entities could even get a negative ACE-value. The ACE-value is computed by aligning entities and thus avoids the problems of the MUC F-measure. The ACE-value is, however, hard to interpret: a system with, say, a 90% ACE-value does not mean that 90% of the system entities or mentions are correct, but only that the cost of the system, relative to the one outputting no entity, is 10%.

In this paper, we aim to develop an evaluation metric that is able to measure the quality of a coreference system (that is, an intuitively better system should get a higher score than a worse system) and that is easy to interpret. To this end, we observe that coreference systems are meant to recognize entities and propose a metric called Constrained Entity-Aligned F-Measure (CEAF). At the core of the metric is the optimal one-to-one map between subsets of reference and system entities: system entities and reference entities are aligned by maximizing the total entity similarity under the constraint that a reference entity is aligned with at most one system entity, and vice versa. Once the total similarity is defined, it is straightforward to compute recall, precision and F-measure. The constraint imposed in the entity alignment makes it impossible to "cheat" the metric: a system outputting too many entities will be penalized in precision while a system outputting too few entities will be penalized in recall. It also has the property that a perfect system gets an F-measure of 1 while a system outputting no entities or no common mentions gets an F-measure of 0. The proposed CEAF has a clear meaning: for mention-based CEAF, it reflects the percentage of mentions that are in the correct entities; for entity-based CEAF, it reflects the percentage of correctly recognized entities.

The rest of the paper is organized as follows. In Section 2, the Constrained Entity-Alignment F-Measure is presented in detail: the constrained entity alignment can be represented by a bipartite graph, and the optimal alignment can be found by the Kuhn-Munkres algorithm (Kuhn, 1955; Munkres, 1957). We also present two entity-pair similarity measures that can be used in CEAF: one is the absolute number of common mentions between two entities, and the other is a "local" mention F-measure between two entities. The two measures lead to the mention-based and entity-based CEAF, respectively. In Section 3, we compare the proposed metric with the MUC link-based metric and the ACE-value on both artificial and real data, and point out the problems of the MUC F-measure.
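The constrained alignment at the core of CEAF can be sketched with the Hungarian (Kuhn-Munkres) algorithm as implemented in SciPy. The snippet below uses the mention-overlap similarity (number of common mentions) and computes the mention-based F-measure; it is a simplified illustration with made-up entities, not the official scorer.

# Sketch of mention-based CEAF using an optimal one-to-one entity alignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ceaf_mention_based(reference, system):
    """reference, system: lists of entities, each entity a set of mention strings."""
    sim = np.array([[len(r & s) for s in system] for r in reference])
    # Maximize total similarity under the one-to-one alignment constraint.
    rows, cols = linear_sum_assignment(-sim)
    total_sim = sim[rows, cols].sum()
    recall = total_sim / sum(len(r) for r in reference)
    precision = total_sim / sum(len(s) for s in system)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

ref = [{"AMA", "its", "group"}, {"heir apparent", "president-elect"}]
sys_out = [{"AMA", "its"}, {"group"}, {"heir apparent", "president-elect"}]
print(ceaf_mention_based(ref, sys_out))  # (0.8, 0.8, 0.8) on this toy example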
0
The increasing popularity of social media services such as Facebook, Twitter and Google+, and the advance of Web 2.0 have enabled users to share information and, as a result, to have influence on the content distributed via these services. The ease of sharing, e.g., directly from a laptop, a tablet or a smart phone, have contributed to the tremendous growth of the content that users share on a daily basis, to the extent that nowadays social networks have no choice but to filter part of the information stream even when it comes from our closest friends.Naturally, soon this unprecedented abundance of data has attracted business and research interest from various fields including marketing, political science, and social studies, among many others, which are interested in questions like these: Do people like the new Apple Watch? What do they hate about iPhone6? Do Americans support Oba-maCare? What do Europeans think of Pope's visit to Palestine? How do we recognize the emergence of health problems such as depression?Such questions can be answered by studying the sentiment of the opinions people express in social media. As a result, the interest for sentiment analysis, especially in social media, has grown, further boosted by the needs of various applications such as mining opinions from product reviews, detecting inappropriate content, and many others.Below we describe the creation of data and the development of a system for sentiment polarity classification in Twitter for Macedonian: positive, negative, neutral. We are inspired by a similar task at SemEval, which is an ongoing series of evaluations of computational semantic analysis systems, composed by multiple challenges such as text similarity, word sense disambiguation, etc. One of the challenges there was on Sentiment Analysis in Twitter, at SemEval 2013-2015 (Nakov et al., 2013; Rosenthal et al., 2014; , where over 40 teams participated three years in a row. 1 Here we follow a similar setup, focusing on message-level sentiment analysis of tweets, but for Macedonian instead of English. Moreover, while at SemEval the task organizers used Mechanical Turk to do the annotations, where the control for quality is hard (everybody can pretend to know English), our annotations are done by native speakers of Macedonian.The remainder of the paper is organized as follows: Section 2 presents some related work. Sections 3 and 4 describe the datasets and the various lexicons we created for Macedonian. Section 5 gives detail about our system, including the preprocessing steps and the features used. Section 6 describes our experiments and discusses the results. Section 7 concludes with possible directions for future work.
0
When electronic means became the prime instrument for storage and exchange of personal health data, the risks of inadvertent disclosure of personal health information (i.e., details of the individual's health) had increased. Inadvertently disclosed personal health information facilitates criminals to commit medical identity theft, i.e., allows an imposter to obtain care or medications under someone else's identity [10] . Furthermore, PHI is an important source of identity theft [14] , and has been used by terrorist organizations to target law enforcement personnel and intimidate witnesses [21] . PHI security breaches had happened in various domains. PHI has leaked from a Canadian provincial government agency [6] and from health care providers, through documents sent by employees and medical students [18] . There are several examples of the confirmed leaks on peer-to-peer file sharing networks: a chiropractor exposed his patient files on a peer-to-peer network, including notes on treatments and medications taken [20] , a criminal obtained passwords for 117, 000 medical records through a file sharing network [24] . In this work, we present a system which detects personal health information (PHI) in free-form heterogenous texts. It can be used to detect the inadvertent disclosure of PHI, thus, benefit information leak detection.Texts which contain personal health information can be written by doctors, nurses, medical students or patients and can be obtained from various sources within the health care network. Hospitals provide patient health records (e.g. speech assessment, discharge summaries, nurse notes), patients write letters, notes, etc. These texts can be found on the web, within peerto-peer file exchange networks, and on second-hand disk drives [12, 25] . Within those texts, we seek the information which refers to individual's health: disease (pneumonia) 1 , treatment procedures (X-rays), prescribed drugs (aspirin), health care providers (the Apple Tree Medical Centre). Our system contributes to information leak prevention, a growing content-based part of data leak prevention.There are several differences between our tool and the previous work on PHI leak prevention. Our system detects personally identifiable and health information. Previous work focussed on detection and de-identification of personally identifiable information (.e.g,, person names, phone numbers, age-related dates), but did not retrieve health information. Our system processes data of unknown content, context and structure. Whereas, previously the PHI leak prevention systems operated within a closed domain of hospital patient records, where the input data was guaranteed to contain PHI. As we mentioned, these systems were built to find and alter personally identifiable information, e.g., name, age, phone.In our case, the input files come with unknown content. Sometimes the file content excludes a possibility of personal information, e.g., a young-adult vampireromance novel Twilight, a research presentation Statistical Learning Theory, a song Quel Temps Fait Il A Paris.Sometimes, file contents may suggest holding personal information and PHI, e.g., personal correspondence, documents from lawyer or physician offices. In many other cases, files fall between these two categories. We discard files which we identify as being highly unlikely to contain PHI and concentrate on the analysis of the remaining files. 
In the remainder of the presentation, we define personal health information, provide examples of texts containing PHI and discuss the extent of confirmed inadvertent PHI leaks. We define pairs of possible/impossible and probable/improbable PHI containers. Our data and empirical results are presented after that. We follow with discussion of related work and motives for the adaptation of medical knowledge sources. At the end, we present plans for future work and conclusions.
0
In this paper, we describe some new extensions to the hybrid data-driven MT system developed at DCU, MATREX (Machine Translation using Examples), subsequent to our participation at IWSLT 2006 [1] , IWSLT 2007 [2] and IWSLT 2008 [3] . In this year's participation, optimising the system in a low-resource scenario is our main focus.The first technique deployed in our system is word lattice decoding, where the input of the system is not a string of words, but rather a lattice encoding multiple segmentations for a single sentence. This method has been repeatedly demonstrated to be effective in improving the coverage of the MT systems [4, 5, 6, 7] . Another technique investigated is a novel data selection method, which differentiates high-and low-quality bilingual sentence pairs in the training data, and use them separately in training MT systems.We participate in the CHALLENGE tasks and the BTEC Chinese-English and Turkish-English tasks. For CHAL-LENGE tasks, both the single-best ASR hypotheses and the correct recognition results are translated. Three different prototype SMT systems are built for each translation task and a few novel techniques are then applied to different systems.The final submission is a combination of the outputs from different systems.The remainder of the paper is organized as follows. In Section 2, we describe the various components of the system; in particular, we give details about the various novel extensions to MATREX as summarised above. In Section 3, the experimental setup is presented and experimental results obtained for various language pairs are reported in Section 4. In Section 5, we conclude, and provide avenues for further research.
0
Recently, we have been witnessing the steady increasing demand for human-computer systems and interfaces of various complexity. The current research efforts in humancomputer system design diverge more and more from traditional paradigms to modelling of two-party task-oriented systems like information-seeking dialogues. The research community is currently targeting more flexible, adaptable, open-domain dialogue systems driven by modelling natural human multimodal behaviour. Advances are also being made in modelling and managing multi-party interactions, e.g. for meetings or multi-player games. Existing approaches developed for two-party dialogue have undergone certain changes. For instance, it has been acknowledged that assumptions that conversational agents act fully rationally and cooperatively do not hold in many conversational settings, see e.g. (Traum et al., 2008) and (Asher and Quinley, 2011) . This is particularly true in competitive games, debates, and negotiations where participants do not have fully aligned preferences and do not adopt shared intentions or goals. In this paper we focus on modelling negotiations, more precisely multi-issue bargaining dialogues. Much good work has been done on simple, well-structured negotiations -interactions among a few parties with fixed interests and alternatives, see (Georgila and Traum, 2011) ; (Efstathiou and Lemon, 2015) and (DeVault et al., 2015) . In many real-life negotiations, parties negotiate over not one but multiple issues. Moreover, negotiators bargaining over one or multiple issues today may, and in real life most certainly will, come back to the negotiation table. So, there may be delays in making complete agreements, and previously reached agreements can be cancelled. In this paper we discuss multi-issue repetitive bargaining interactions collection and analysis as important steps towards computational modelling of such conversations. The paper is structured as follows. Section 2 discusses the application task domain, specifying participants roles and goals, and possible interactive phenomena to be encountered. Section 3 presents the designed scenario, interfaces and data collection procedure. In Section 4 we specify the annotation design in detail by describing the type of annotations performed and annotation scheme used proposing possible extension of those. We also provide various corpus statistics, examples and a corpus overview in terms of type of data, annotations performed and formats used. Section 5 presents initial task and interaction control models built/learned using the annotated data. Section 6 concludes the reported work by summarizing corpus collection, data annotation activities, finding derived from initial models, and outlines future research.
0
Recently, fully-connected attention-based models like the Transformer (Vaswani et al., 2017) have become popular in natural language processing (NLP) applications, notably machine translation (Vaswani et al., 2017) and language modeling (Radford et al., 2018). Some recent work also suggests that the Transformer can be an alternative to recurrent neural networks (RNNs) and convolutional neural networks (CNNs) in many NLP tasks, such as GPT (Radford et al., 2018), BERT (Devlin et al., 2018), Transformer-XL (Dai et al., 2019) and Universal Transformer (Dehghani et al., 2018).

Figure 1: Left: Connections of one layer in Transformer; circle nodes indicate the hidden states of input tokens. Right: Connections of one layer in Star-Transformer; the square node is the virtual relay node. Red edges and blue edges are ring and radical connections, respectively.

More specifically, there are two limitations of the Transformer. First, the computation and memory overhead of the Transformer are quadratic in the sequence length. This is especially problematic with long sentences. Transformer-XL (Dai et al., 2019) provides a solution which achieves both acceleration and performance improvement, but it is specifically designed for the language modeling task. Second, studies indicate that the Transformer would fail on many tasks if the training data is limited, unless it is pre-trained on a large corpus (Radford et al., 2018; Devlin et al., 2018).

A key observation is that the Transformer does not exploit prior knowledge well. For example, local compositionality is already a robust inductive bias for modeling text sequences. However, the Transformer learns this bias from scratch, along with non-local compositionality, thereby increasing the learning cost. The key question is then whether leveraging strong prior knowledge can help to "lighten up" the architecture.

To address the above limitations, we propose a new lightweight model named "Star-Transformer". The core idea is to sparsify the architecture by moving the fully-connected topology into a star-shaped structure. Figure 1 gives an overview. Star-Transformer has two kinds of connections. The radical connections preserve non-local communication and remove the redundancy in the fully-connected network. The ring connections embody the local-compositionality prior, which plays the same role as in CNNs/RNNs. The direct outcome of our design is an improvement in both efficiency and learning cost: the computation cost is reduced from quadratic to linear as a function of input sequence length. An inherent advantage is that the ring connections can effectively reduce the burden of the unbiased learning of local and non-local compositionality and improve the generalization ability of the model. What remains to be tested is whether one shared relay node is capable of capturing the long-range dependencies.

We evaluate the Star-Transformer on three NLP tasks, including Text Classification, Natural Language Inference, and Sequence Labelling. Experimental results show that Star-Transformer outperforms the standard Transformer consistently and has lower computational complexity.
An additional analysis on a simulation task indicates that Star-Transformer preserves the ability to handle long-range dependencies, which is a crucial feature of the standard Transformer.

In this paper, we make the following three contributions; our code is available on GitHub:
• Compared to the standard Transformer, Star-Transformer has a lightweight structure but an approximate ability to model long-range dependencies. It reduces the number of connections from n^2 to 2n, where n is the sequence length.
• The Star-Transformer divides the labor of semantic composition between the radical and the ring connections. The radical connections focus on non-local composition and the ring connections focus on local composition. Therefore, Star-Transformer works for modestly sized datasets and does not rely on heavy pre-training.
• We design a simulation task, "Masked Summation", to probe the ability to deal with long-range dependencies. In this task, we verify that both Transformer and Star-Transformer are good at handling long-range dependencies compared to the LSTM and BiLSTM.
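A highly simplified sketch of one star-shaped update round is given below: every token attends only over its ring neighbourhood plus the shared relay node, and the relay attends over all tokens. Single-head dot-product attention, the cyclic neighbourhood, and the mean initialization of the relay are schematic assumptions; multi-head attention, layer normalization and other details of the published model are omitted.

# Schematic single-head sketch of a star-shaped update (not the full model).
import torch
import torch.nn.functional as F

def attend(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    scores = keys @ query / keys.shape[-1] ** 0.5
    return F.softmax(scores, dim=0) @ values

def star_update(h, relay):
    """h: (n, d) token states; relay: (d,) shared relay state."""
    n, _ = h.shape
    new_h = torch.empty_like(h)
    for i in range(n):
        # Ring connections (left, self, right) plus the radial connection to the relay.
        context = torch.stack([h[(i - 1) % n], h[i], h[(i + 1) % n], relay])
        new_h[i] = attend(h[i], context, context)
    new_relay = attend(relay, new_h, new_h)  # relay attends over all tokens
    return new_h, new_relay

h = torch.randn(8, 16)       # 8 tokens, dimension 16
relay = h.mean(dim=0)        # illustrative initialization of the relay node
h, relay = star_update(h, relay)
print(h.shape, relay.shape)  # torch.Size([8, 16]) torch.Size([16])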
0
Transformer (Vaswani et al., 2017) has achieved the state-of-the-art performance on a variety of translation tasks. It consists of different stacked components, including self-attention, encoder-attention, and feed-forward layers. However, so far not much is known about the internal properties and functionalities it learns to achieve the performance, which poses significant challenges for designing optimal architectures.In this work, we bridge the gap by conducting a granular analysis of components on trained Transformer models. We attempt to understand how does each component contribute to the model outputs. Specifically, we explore two metrics to evaluate the impact of a particular component on the model performance: 1) contribution in information flow that manually masks individual component each time and evaluate the performance without that component; and 2) criticality in representation generalization that depends on how much closer the weights can get for each component to the initial weights while still maintaining performance. Those two metrics evaluate the component importance of a trained Transformer model from different perspectives. Empirical results on two benchmarking datasets reveal the following observations ( §3.1):• The decoder self-attention layers are least important, and the decoder feed-forward layers are most important.• The components that are closer to the model input and output (e.g., lower layers of encoder and higher layers of decoder) are more important than components on other layers.• Upper encoder-attention layers in decoder are more important than lower encoder-attention layers.The findings are consistent across different evaluation metrics, translation datasets, initialization seeds and model capacities, demonstrating their robustness.We further analyze the underlying reason ( §3.2), and find that lower dropout ratio and more training data lead to less unimportant components. Unimportant components can be identified at early stage of training, which are not due to deficient training. Finally, we show that unimportant components can be rewound (Frankle and Carbin, 2019) to further improve the translation performance of Transformer models ( §3.3).
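The "contribution in information flow" analysis can be approximated, in spirit, by zeroing out one component's output at a time via a forward hook and re-evaluating the model, as in the sketch below. The evaluate function, the stand-in model, and the dummy quality score are placeholders; the actual study works with trained Transformer translation models evaluated with BLEU.

# Sketch: measure a component's importance by masking its output and re-evaluating.
import torch
import torch.nn as nn

def zero_output_hook(module, inputs, output):
    return torch.zeros_like(output)  # mask this component's contribution

def component_importance(model, components, evaluate):
    """components: dict name -> submodule; evaluate: model -> quality score."""
    baseline = evaluate(model)
    drops = {}
    for name, module in components.items():
        handle = module.register_forward_hook(zero_output_hook)
        drops[name] = baseline - evaluate(model)  # larger drop = more important
        handle.remove()
    return drops

# Toy usage with a stand-in model and a dummy "quality" score (not BLEU).
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))
x = torch.randn(8, 4)
evaluate = lambda m: float(m(x).sum())
print(component_importance(model, {"layer0": model[0], "layer2": model[2]}, evaluate))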
0
Basic research is critically needed to guide the development of a new generation of complex natural language systems that are still in the planning stages, such as ones that support multimodal, multilingual, or multiparty exchanges across a variety of intended applications. In the case of planned multimodal systems, for example, the potential exists to support more robust, productive, and flexible human-computer interaction than that afforded by current unimodal ones [3]. However, since multimodal systems are relatively complex, the problem of how to design optimal configurations is unlikely to be solved through simple intuition alone. Advance empirical work with human subjects will be needed to generate a factual basis for designing multimodal systems that can actually deliver performance superior to unimodal ones.

In particular, there is a special need for both methodological tools and research results based on high-quality simulations of proposed complex NL systems. Such simulations can reveal specific information about people's language, task performance, and preferential use of different types of systems, so that they can be designed to handle expected input. Likewise, simulation research provides a relatively affordable and nimble way to compare the specific advantages and disadvantages of alternative architectures, such that more strategic designs can be developed in support of particular applications. In the longer term, conclusions based on a series of related simulation studies also can provide a broader and more principled perspective on the best application prospects for emerging technologies such as speech, pen, and multimodal systems incorporating them.

In part for these reasons, simulation studies of spoken language systems have become common in the past few years, and have begun to contribute to our understanding of human speech to computers [1, 5, 6, 7, 8, 17]. However, spoken language simulations typically have been slow and cumbersome. There is concern that delayed responding may systematically distort the data that these simulation studies were designed to collect, especially for a modality like speech from which people expect speed [6, 10, 15]. Unlike research on spoken language systems, there currently is very little literature on handwriting and pen systems. In particular, no simulation studies have been reported on: (1) interactive handwriting [6] (although we are familiar with noninteractive writing from everyday activities like personal notetaking, very little is known about interactive writing and pen use as a modality of human-computer interaction), (2) comparing interactive speech versus handwriting as alternative ways to interact with a system, or (3) examining the combined use of speech and handwriting with simulated multimodal systems of different types. Potential advantages of a combined pen/voice system have been outlined previously [4, 12]. High quality simulation
research on these topics will be especially important to the successful design of mobile computing technology, much of which will emphasize communications and be keyboardless.The simulation technique developed for this research aims to: (1) support a very rapid exchange with simulated speech, pen, and pen/voice systems, such that response delays are less than 1 second and interactions can be subject-paced, (2) provide a tool for investigating interactive handwriting and other pen functionality, and (3) devise a technique appropriate for comparing people's use of speech and writing, such that differences between these communication modalities and their related technologies can be better understood. Toward these ends, an adaptable simulation method was designed that supports a wide range of studies investigating how people speak, write, or use both pen and voice when interacting with a system to complete qualitatively different tasks (e.g., verbal/temporal, computational/numeric, graphic/cartographic). The method also supports examination of different issues in spoken, written, and combined pen/voice interactions (e.g., typical error patterns and resolution strategies).In developing this simulation, an emphasis was placed on providing automated support for streamlining the simulation to the extent needed to create facile, subject-paced interactions with clear feedback, and to have comparable specifications for the different modalities. Response speed was achieved in part by using scenarios with correct solutions, and by preloading information. This enabled the assistant to click on predefined fields in order to respond quickly. In addition, the simulated system was based on a conversational model that provides analogues of human backchannel and propositional confirmations. Initial tasks involving service transactions embedded propositional-level confirmations in a compact transaction "receipt," an approach that contributed to the simulation's clarity and speed. Finally, emphasis was placed on automating features to reduce attentional demand on the simulation assistant, which also contributed to the fast pace and low rate of technical errors in the present simulation.
0
In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods. (Borin and Forsberg, 2011; Riedl et al., 2014) . These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.In particular, we are interested in this paper in a certain type of diachronic thesaurus. It contains entries for modern terms, denoted as target terms. Each entry includes a list of ancient related terms. Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents. For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods. A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries. In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain. As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles. Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus. In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP. The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy. In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language. Instead, we use a given candidate list and apply only the term scoring phase. As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP). QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection. Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query. Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection. However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution. Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.Most of the QPP methods prioritize query terms with high frequency in the corpus. However, in a diachronic corpus, such criterion may sometimes be problematic. 
A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents. Therefore, we would like our prediction measure to be aware of these ancient documents as well. Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE). Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features. Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.
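One family of the QPP measures referred to above can be illustrated with a simplified clarity-style score: the KL divergence between a language model induced from the documents matching the (possibly expanded) query and the collection language model. The tiny corpus, whitespace tokenization, and additive smoothing below are illustrative assumptions only, not the measures actually used in this work.

# Sketch of a clarity-style QPP score for a (possibly expanded) candidate term.
import math
from collections import Counter

def unigram_lm(texts):
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def clarity(query_terms, corpus, mu=0.01):
    matching = [d for d in corpus if any(q in d.lower().split() for q in query_terms)]
    if not matching:
        return 0.0
    p_q = unigram_lm(matching)  # language model of the matching documents
    p_c = unigram_lm(corpus)    # collection language model
    # KL(p_q || p_c), with small additive smoothing on the collection side.
    return sum(p * math.log(p / (p_c.get(w, 0.0) + mu)) for w, p in p_q.items())

corpus = ["the patient was dying", "living for the moment", "terminal patient care",
          "weather report for tomorrow", "sports results and scores"]
print(clarity(["patient"], corpus))                     # original query
print(clarity(["patient", "dying", "moment"], corpus))  # expanded query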
0
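The entry above scores modern candidate terms by expanding them with related terms and then computing QPP-style features over the diachronic corpus. The Python sketch below illustrates one such feature, a clarity-style score computed with and without query expansion; the toy corpus, the expansion lexicon, and the specific score are assumptions for illustration, not the paper's actual feature set.

```python
import math
from collections import Counter

def clarity_score(query_terms, corpus_docs):
    """Simple clarity-style QPP score: KL divergence between the language
    model of documents matching the query and the whole-collection model."""
    collection = Counter()
    for doc in corpus_docs:
        collection.update(doc)
    total = sum(collection.values())

    matched = [d for d in corpus_docs if any(t in d for t in query_terms)]
    if not matched:
        return 0.0
    match_counts = Counter()
    for doc in matched:
        match_counts.update(doc)
    match_total = sum(match_counts.values())

    score = 0.0
    for w, c in match_counts.items():
        p_q = c / match_total                  # word probability in matched docs
        p_c = collection[w] / total            # word probability in the collection
        score += p_q * math.log(p_q / p_c)
    return score

# Hypothetical tokenised diachronic corpus and an assumed query-expansion lexicon.
corpus = [
    ["terminal", "patient", "hospital", "care"],
    ["dying", "living", "for", "the", "moment", "law"],
    ["dying", "prayer", "law", "ruling"],
    ["market", "trade", "price"],
]
expansion = {"terminal patient": ["dying", "living"]}

candidate = "terminal patient"
plain = clarity_score(candidate.split(), corpus)
expanded = clarity_score(candidate.split() + expansion[candidate], corpus)
print(f"clarity without expansion: {plain:.3f}, with expansion: {expanded:.3f}")
```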
Large countries such as India and China have many languages that vary by region. For instance, India has 23 constitutionally recognized official languages (e.g., Hindi, Tamil, and Panjabi) and several hundred unofficial local languages. Although India's population is approximately 1.3 billion, only about 10% of them speak English. Some studies report that, of these 10% English speakers, only 2% can speak, write, and read English well, while the remaining 8% can merely understand simple English and speak broken English with a wide variety of accents (sta). Considering that a significant amount of valuable resources is available on the web in English and that most people in India cannot understand it well, it is essential to translate such content into local languages. Sharing information between people is necessary not only for business purposes but also for sharing feelings, opinions, and acts. To this end, translation plays an important role in minimizing the communication gap between different people. Considering the vast amount of information, it is not feasible to translate the content manually. Hence, it is essential to translate text from one language (say, English) to another language (say, Tamil) automatically. This process is also known as machine translation. There are many challenges in machine translation for Indian languages; two of the major ones are (i) the limited size of parallel corpora and (ii) differences among languages, mainly morphological richness and word-order differences due to syntactic divergence. Indian languages (IL) suffer from both of these problems, especially when they are translated from English. There are only a few parallel corpora for English and Indian languages. Moreover, Indian languages such as Tamil differ from English in word order as well as in morphological complexity. For instance, English has Subject-Verb-Object (SVO) order whereas Tamil has Subject-Object-Verb (SOV) order. Moreover, English is a fusional language whereas Tamil is agglutinative. While syntactic differences contribute to the difficulties of translation models, morphological differences contribute to data sparsity. We attempt to address both issues in this paper. Although much work has been done on machine translation for foreign and Indian languages, most of the work on Indian languages is limited to conventional machine translation techniques. We observe that techniques such as word embeddings and Byte-Pair Encoding (BPE), which have brought great improvements in natural language processing, have not been applied to many Indian languages. Thus, in this paper, we apply a neural machine translation technique (torch implementation) with word embeddings and BPE. In particular, we work on the English-Tamil language pair, as it is one of the most difficult language pairs to translate (Zdenek Žabokrtskỳ, 2012) due to the morphological richness of Tamil. We obtain the data from EnTamv2.0 and Opus, and evaluate our results using the widely used evaluation metric BLEU. Experimental results confirm that we obtain much better results than conventional machine translation techniques for Tamil.
We believe that our work can also be applied to other Indian language pairs. The main contributions of our work are as follows: • This is the first work to apply BPE with word embeddings to an Indian language pair (English-Tamil) using an NMT technique. • We achieve comparable accuracy with a simpler model in less training time, rather than training a deep and complex neural network which requires much more time to train. • We show how and why data preprocessing is a crucial step in neural machine translation. • Our model outperforms Google Translator by a margin of 4.58 BLEU points. The rest of the paper is organized as follows. Sections 2 and 3 describe related work and the methodology of our MIDAS translator, respectively. Evaluation is presented in Section 4. Finally, Section 5 concludes the paper.
0
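The entry above relies on Byte-Pair Encoding to reduce the data sparsity caused by Tamil's rich morphology. Below is a minimal sketch of the classic BPE merge loop on a toy vocabulary; the example words and the number of merges are assumptions, and a real system would learn merges from the full training corpus (e.g., with the subword-nmt or sentencepiece tools).

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs over a space-separated word vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every standalone occurrence of the pair into a single symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Toy word-frequency table; real merges would be learned from the training corpus.
vocab = {"p o n g a l </w>": 5, "p o o k a l </w>": 3, "p o n </w>": 4}
for step in range(6):
    stats = get_pair_stats(vocab)
    if not stats:
        break
    best = max(stats, key=stats.get)       # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    print(f"merge {step + 1}: {best}")
print(vocab)
```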
Sentiment Analysis (SA) is an active field of research in Natural Language Processing and deals with opinions in text. A typical application of classical SA in an industrial setting would be to classify a document like a product review into positive, negative or neutral sentiment polarity. In contrast to SA, the more fine-grained task of Aspect-Based Sentiment Analysis (ABSA) (Hu and Liu, 2004; Pontiki et al., 2015) aims to find both the aspect of an entity like a restaurant and the sentiment associated with this aspect. It is important to note that ABSA comes in two variants. We will use the sentence "I love their dumplings" to explain these variants in detail. Both variants are implemented as a two-step procedure. The first variant is comprised of Aspect-Category Detection (ACD) followed by Aspect-Category Sentiment Classification (ACSC). ACD is a multilabel classification task, where a sentence can be associated with a set of predefined aspect categories like "food" and "service" in the restaurants domain. In the second step, ACSC, the sentiment polarity associated with the aspect category is classified. For our example sentence the correct result is the tuple ("food", "positive"). The second variant consists of Aspect-Target Extraction (ATE) followed by Aspect-Target Sentiment Classification (ATSC). ATE is a sequence labeling task, where terms like "dumplings" are detected. In the second step, ATSC, the sentiment polarity associated with the aspect target is determined. In our example the correct result is ("dumplings", "positive"). In this paper, we focus on ATSC. In recent years, specialized neural architectures (Tang et al., 2016a; Tang et al., 2016b) have been developed that substantially improved the modeling of this target-context relationship. More recently, the Natural Language Processing community experienced a substantial shift towards using pre-trained language models (Peters et al., 2018; Radford and Salimans, 2018; Howard and Ruder, 2018; Devlin et al., 2019) as a base for many downstream tasks, including ABSA (Song et al., 2019; Xu et al., 2019; Sun et al., 2019). We still see huge potential in this trend, which is why we approach the ATSC task using the BERT architecture. As shown by Xu et al. (2019), for the ATSC task the performance of models that were pre-trained on general text corpora is improved substantially by finetuning the language model on domain-specific corpora (in their case, review corpora) that have not been used for pre-training BERT or other language models. We extend the work by Xu et al. by further investigating the behavior of finetuning the BERT language model in relation to ATSC performance. In particular, our contributions are: 1. Analysis of the influence of the number of training steps used for BERT language model finetuning on the Aspect-Target Sentiment Classification performance. 2. Findings on how exploiting BERT language model finetuning enables us to achieve new state-of-the-art performance on the SemEval 2014 restaurants dataset. 3. Analysis of cross-domain adaptation between the laptops and restaurants domains. Adaptation is tested by self-supervised finetuning of the BERT language model on the target domain and then supervised training on the ATSC task in the source domain. In addition, the performance of training on the combination of both datasets is measured.
0
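The entry above finetunes a BERT language model and then trains it for Aspect-Target Sentiment Classification. The sketch below shows only the supervised ATSC step as sentence-pair classification with Hugging Face Transformers; the model name, label set and data are assumptions, and the preceding domain-specific masked-LM finetuning described in the entry is omitted here.

```python
# Minimal ATSC sketch: classify the sentiment of an (sentence, aspect target) pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)      # assumed labels: negative / neutral / positive

sentences = ["I love their dumplings", "The waiter ignored us all night"]
targets = ["dumplings", "waiter"]           # aspect targets, used as the second segment
labels = torch.tensor([2, 0])               # assumed label ids

# Sentence-pair encoding: [CLS] sentence [SEP] target [SEP]
enc = tokenizer(sentences, targets, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
out = model(**enc, labels=labels)           # cross-entropy loss on the [CLS] representation
out.loss.backward()
optimizer.step()
print(float(out.loss))
```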
Many real-world speech recognition applications, including teleconferencing and AI assistants, require recognizing and understanding long conversations. In a long conversation, semantically related words or phrases tend to recur across sentences; that is, there is topical coherence. Thus, such conversational context information, higher-level knowledge that spans sentences, provides important information that can improve speech recognition. However, long conversations are typically split into short sentence-level audio segments to make building speech recognition models computationally feasible in current state-of-the-art recognition systems (Xiong et al., 2017; Saon et al., 2017). Over the years, many studies have attempted to inject longer context information into language models. Building on recurrent neural network (RNN) language models (Mikolov et al., 2010), several works (Mikolov and Zweig, 2012; Wang and Cho, 2015; Ji et al., 2015; Liu and Lane, 2017; Xiong et al., 2018) proposed using a context vector that encodes the longer context information as an additional network input. However, all of these models were developed on text data, and therefore must still be integrated with a conventional acoustic model, which is built separately without longer context information, for speech recognition on long conversations. Recently, new approaches to speech recognition integrate all available information (e.g., acoustic and linguistic resources) in a so-called end-to-end manner (Graves et al., 2006; Graves and Jaitly, 2014; Hannun et al., 2014; Miao et al., 2015; Chorowski et al., 2014, 2015; Chan et al., 2015). In these approaches, a single neural network is trained to recognize graphemes or even words from speech directly. In particular, models using semantically meaningful units, such as words or sub-words (Sennrich et al., 2015), rather than graphemes have shown promising results (Audhkhasi et al., 2017b; Li et al., 2018; Soltau et al., 2016; Zenkel et al., 2017; Palaskar and Metze, 2018; Sanabria and Metze, 2018; Rao et al., 2017; Zeyer et al., 2018). In this work, motivated by this property of end-to-end speech recognition approaches, we propose to integrate conversational context information within a direct acoustic-to-word, end-to-end speech recognition model to better process long conversations. Thus far, research in speech recognition systems has focused on recognizing sentences, and to the best of our knowledge there have been no studies of word-based models incorporating conversational context information. Recent work has attempted to use conversational context information from the preceding graphemes (Kim and Metze, 2018); however, it is limited in its ability to encode semantically meaningful context representations. Another recent work attempted to use context information (Pundak et al., 2018); however, their method requires a list of phrases at inference time (i.e., a personalized contact list). We evaluate our proposed approach on the Switchboard conversational speech corpus (Godfrey and Holliman, 1993; Godfrey et al., 1992), and show that our model outperforms the sentence-level end-to-end speech recognition model.
0
Peer review provides learning opportunities for students in their roles as both author and reviewer, and is a promising approach for helping students improve their writing (Lundstrom and Baker, 2009). However, one limitation of peer review is that student reviewers are generally novices in their disciplines and typically inexperienced in constructing helpful textual reviews (Cho and Schunn, 2007). Research in the learning sciences has identified properties of helpful comments in textual reviews, e.g., localizing where problems occur in a paper and suggesting solutions to problems (Nelson and Schunn, 2009), or providing review justifications such as explanations of judgments (Gielen et al., 2010). Research in computer science, in turn, has used natural language processing and machine learning to build models for automatically identifying helpful review properties, including localization and solution (Nguyen and Litman, 2013; Xiong et al., 2012), as well as quality and tone (Ramachandran and Gehringer, 2015). While such prediction models have been evaluated intrinsically (i.e., with respect to predicting gold-standard labels), few have actually been incorporated into working peer review systems and evaluated extrinsically (Ramachandran and Gehringer, 2013). The SWoRD research project involves different active research threads for improving the utility of an existing web-based peer review system. Our research in the SWoRD project aims at building instant feedback components for improving the quality of textual peer reviews. Our initial work focused on improving review localization. Here we focus on increasing the presence of solutions in reviews. When students submit reviews, natural language processing is used to automatically predict whether a solution is present in each peer review comment (Figure 1). If not enough critical comments are predicted to contain explicit solutions for how to make the paper better, students are taken from the original review interface to a new instant feedback interface which scaffolds them in productively revising the original peer reviews (Figure 2). Sections 2 and 3 describe the Instant-feedback workflow and the supporting natural language processing techniques. Section 4 demonstrates the promise of our system in supporting student review revision in a recent system deployment.
0
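The entry above predicts whether each peer-review comment contains an explicit solution. The sketch below is a hedged stand-in for such a detector, a simple TF-IDF plus logistic regression classifier on invented comments; it is not the SWoRD project's actual model or feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = comment contains an explicit solution, 0 = it does not.
comments = [
    "The thesis is unclear and the evidence is weak.",
    "State your three rhetorical strategies directly in the thesis.",
    "Pathos is only mentioned twice in the essay.",
    "Add a topic sentence to each body paragraph to connect it to the thesis.",
]
has_solution = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, has_solution)

new_comment = "It would be better to directly state what each paragraph will cover."
print(clf.predict([new_comment])[0])   # 1 would mean no scaffolding prompt is triggered
```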
Due to the rapid proliferation of textual information in digital form, various security-related organisations have recently acknowledged the benefits of deploying techniques to automate the extraction of structured information on events from free text (Appelt, 1999; Ashish et al., 2006; Ji et al., 2009; Piskorski and Yangarber, 2013). Examples of current capabilities of such techniques for the extraction of disease outbreaks, crisis situations, cross-border crimes and computer security events from on-line sources are given in (Grishman et al., 2002; King and Lowe, 2003; Yangarber et al., 2008; Gao et al., 2013; Danilova and Popova, 2014; Ritter et al., 2015). This paper reports on the creation of a corpus of structured information on security-related events automatically extracted from online news over a period of 8 years, part of which has been manually curated. The main drive behind this endeavour is to provide material to the NLP community working on event extraction, which could be used in various ways, e.g., for: (a) carrying out evaluations of detection and extraction of security-related events from online news (human-curated data), (b) training event type classifiers, (c) learning domain-specific terminology, and (d) creating full-fledged inline or stand-off annotations with event-centric information based on the automatically extracted event templates. Other efforts on the creation of corpora with event-related annotations of various kinds include GDELT, FactBank (Saurí and Pustejovsky, 2009), ICEWS (Ward et al., 2013), EventCorefBank (Cybulska and Vossen, 2014), ASTRE (Nguyen et al., 2016) and (Hong et al., 2016). Contrary to most other initiatives, our corpus contains aggregated information on events extracted at the news-cluster level, without links to the concrete phrases in the news articles from which the information was inferred. Section 2 briefly presents our news event extraction system. Section 3 reports on an evaluation thereof to provide insights into the quality of extraction. Section 4 provides some corpus statistics.
0
As people gain access to an increasingly large amount of information, technologies may enable them to consume that information more efficiently. Existing technologies have focused on automated summarization techniques. However, summarization techniques are not fully mature: emphasis mistakes are frequent and may cause the reader to miss crucial points in the summarized document. To address this issue, as an alternative to summarization, key portions of a document can instead be highlighted (or made more visible with bold, italics, etc.). Highlights appear within their context (unlike a summary), and the impact of "bad" highlights is of much lower consequence than that of "bad" summaries. We believe highlights to be motivated by reading intentions. Thus, we must determine whether a difference exists between extractive summary sentences and human highlights. The framework presented in this paper allows users to efficiently and scalably crowdsource two related tasks: collecting highlight annotations, and comparing the performance of automated highlighting systems.
0
Human beings are known to perceive and feel various, highly-nuanced emotions, expressed both in spoken and written texts. Modeling emotions in user-generated content has been shown to benefit domains such as commerce, public health, and disaster management (Bollen et al., 2011b; Neppalli et al., 2017; Hu et al., 2018; Pamungkas, 2019). For example, emotion cues from social media posts were used to identify depression and PTSD (Deshpande and Rao, 2017; Aragón et al., 2019) and for personalizing chatbots to improve user satisfaction (Wei et al., 2019). Recent studies list as many as 154 human emotions (Smith, 2015). However, most researchers in Psychology have largely agreed on a set of basic emotions such as anger, fear, disgust, sadness, surprise, and happiness (Ekman, 2016) and showed that complex emotions can be expressed using this basic set (Ekman, 1992; Plutchik, 2001). For example, Plutchik uses combinations, intensity, and opposites of basic emotions for capturing the higher-order emotions. That is, annoyance and rage can be viewed as less or more intense forms of anger, and anticipation is the opposite of surprise. Thus, most recent studies on automatic emotion detection use Ekman's or Plutchik's sets of 6 or 8 emotions, respectively (Mohammad et al., 2018). Existing models for automatic emotion identification in user-generated texts typically use supervised learning techniques. The state-of-the-art emotion detection performance on tweets, news articles, blogs, reviews, and TV-show transcripts is obtained using complex, deep learning architectures that combine a range of features, including terms, embeddings, and domain-specific aspects such as emojis, as well as human-generated lexicons of emotion-word associations (Chatterjee et al., 2019; Zahiri and Choi, 2018; Mundra et al., 2017; Abdul-Mageed and Ungar, 2017; Köper et al., 2017). Much manual effort is involved in collecting annotated data for a given domain and fine-tuning domain-specific models. Other auxiliary work enabling emotion detection can be placed under two complementary directions. The first is lexicon development for emotions via manual vocabulary labeling or automatic generation, for example based on similarity to a set of seed words (Mohammad and Turney, 2013; Araque et al., 2019). The second direction uses a latent space of embeddings to compare sentences with emotion lexicons (Xu et al., 2015; Savigny and Purwarianti, 2017). Compiling a lexicon of high quality and coverage is a labor-intensive task, and even when automation and crowdsourcing are involved, close manual control is required. As for latent space representations, the embedding model must include sufficient information about the underlying emotions, obtained, e.g., from lexicons or labeled datasets (Agrawal et al., 2018; Tang et al., 2014). Both embeddings and lexicons enable basic techniques for unsupervised emotion prediction, for example by using word embedding similarities (Kim et al., 2010) or the overlap between lexicon words and the input text (Araque et al., 2019). Considering the abundance of user-generated texts on the current-day Web with its ever-changing topics (for example, "COVID-19 lockdown"), we argue that it is desirable to develop advanced unsupervised models that detect emotions accurately across domains and offer a probabilistic explanation for the predicted emotions, while not depending on large quantities of labeled data.
These desiderata are precisely our objectives in this paper. We present Emotion-Sensitive TextRank (ESTeR) and its variants as similarity functions that use word graphs for scoring input texts with reference to a given set of emotions. ESTeR is designed based on the following two observations: (1) For a given language, words expressing emotions are fairly stable across domains (Agrawal et al., 2018). For example, the same words ("This is absurd...") may be used to express anger (emotion) regarding a product on an e-commerce website as well as in a tweet related to a government policy. (2) Word co-occurrence graphs are known to capture contextual and latent language information and have been successfully used in various NLP tasks (Mihalcea and Tarau, 2004; Yan et al., 2013; Chen and Kao, 2015; Kong et al., 2016). We make the following contributions: • For identifying emotions in textual content, we propose similarity functions that incorporate word co-occurrence information from large-scale, publicly available text corpora and word associations from lexicons. Our novel similarity functions are based on random walks on word graphs and score an input text with respect to a given emotion. • Next, we formally show the relation between the proposed similarity functions and Personalized PageRank. In addition, we provide a computational method based on solving a linear system of equations to compute our similarity functions efficiently at the dataset level, rather than per instance. • We present experiments illustrating the superior performance of our models on five recent, publicly available datasets for emotion detection (Klinger et al., 2018). • Finally, we showcase our proposed model on a newly collected dataset of COVID-19 tweets by highlighting various interesting aspects of public emotions during the current pandemic. In the next section (Section 2), we present our scoring framework for emotion detection along with derivations of how to compute our solution efficiently. In Section 3, we summarize datasets and experiments illustrating the performance of our proposed model. In Section 4, we demonstrate, anecdotally, the effectiveness of the model on COVID-19 tweets. Finally, we present closely related work in Section 5 and conclusions in Section 6.
0
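The entry above scores an input text against an emotion via random walks on a word graph, with a connection to Personalized PageRank and a linear-system solution. The sketch below reproduces that idea at toy scale; the co-occurrence graph, the anger lexicon, the damping factor and the averaging over tokens are illustrative assumptions rather than the ESTeR formulation.

```python
import numpy as np

words = ["absurd", "policy", "angry", "refund", "happy", "gift"]
idx = {w: i for i, w in enumerate(words)}

# Symmetric co-occurrence counts (assumed to come from a large public corpus).
A = np.array([[0, 2, 3, 1, 0, 0],
              [2, 0, 1, 1, 0, 0],
              [3, 1, 0, 2, 0, 0],
              [1, 1, 2, 0, 0, 0],
              [0, 0, 0, 0, 0, 4],
              [0, 0, 0, 0, 4, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

def personalized_pagerank(seed_words, d=0.85):
    """Solve r = (1 - d) * v + d * P^T r exactly via a linear system."""
    v = np.zeros(len(words))
    for w in seed_words:
        v[idx[w]] = 1.0
    v /= v.sum()
    return np.linalg.solve(np.eye(len(words)) - d * P.T, (1 - d) * v)

anger_lexicon = ["angry", "absurd"]            # assumed lexicon entries for "anger"
scores = personalized_pagerank(anger_lexicon)

text = "this policy is absurd, I want a refund"
tokens = [t.strip(",") for t in text.split() if t.strip(",") in idx]
print("anger score:", sum(scores[idx[t]] for t in tokens) / len(tokens))
```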
Halliday distinguishes between two kinds of variation in language: social variation, which he calls dialect, and functional variation, which he calls register (e.g. Halliday, 1989, p. 44). VarDial's focus is on the first kind of variation, in particular diatopic variation, and addresses topics such as automatic identification of dialects but also includes topics like diachronic language variation. In this paper, we look at variation of the second kind, namely variation between literate/written and oral/spoken language (different registers, as Halliday would call it). However, we assume that the phenomenon of literate/written vs. oral/spoken language interacts with diachronic language change, which, in turn, interacts with diatopic variation (e.g. one dialect becomes more important than another one and has a larger impact on the further development of the language). Hence, if we want to understand language change, we have to take into account different kinds of variation. In general, human language is used in two major forms of representation: written and spoken. Both discourse modes place different demands on the language user. Spoken discourse has to be processed online by speakers and hearers and, hence, strongly depends on the capacity of the working memory. In contrast, written discourse proceeds independently of production and reading speed, and allows for a rather free and elaborate structuring of texts. This discrepancy can result in quite different utterances. Moreover, as many linguists have noticed, there is also a high amount of variation within written and spoken language (Koch and Oesterreicher, 2007; Halliday, 1989; Biber and Conrad, 2009). For example, the language used in scientific presentations is rather similar to prototypical written language, despite its spoken realization. Chat communication, on the other hand, although realized in the written medium, rather resembles spontaneous spoken speech. In other words, independently of their medial realization, language can show characteristics that are typical of the written or spoken mode. As Halliday (1989, p.32) puts it, "'written' and 'spoken' do not form a simple dichotomy; there are all sorts of writing and all sorts of speech, many of which display features characteristic of the other medium". In the 1980s, Koch and Oesterreicher (1985) proposed to distinguish between medial and conceptual orality and literacy. On the medial dimension, an utterance can be realized either phonetically (spoken) or graphically (written), while the conceptual dimension forms a broad continuum between the extremes of conceptual orality and conceptual literacy. Example (1) from Halliday (1989, p.79) illustrates this continuum, from a clear conceptually-literate sentence in (a) to a clear conceptually-oral equivalent in (c). (1) a. The use of this method of control unquestionably leads to safer and faster train running in the most adverse weather conditions. b. If this method of control is used trains will unquestionably (be able to) run more safely and faster (even) when the weather conditions are most adverse. c. You can control the trains this way and if you do that you can be quite sure that they'll be able to run more safely and more quickly than they would otherwise no matter how bad the weather gets. The work reported here is part of a larger project which investigates syntactic change in German across a long period of time (1000 years).
One of the working hypotheses of the project is that certain parts of syntactic change can be attributed to changes in discourse mode: early writings showed many features of the oral mode, and the dense, complex structure which is characteristic of many modern elaborate written texts is the product of a long development. Interestingly, spoken language has also developed denser structures over time. It is commonly assumed that this is a reflex of the written language, and is due to the increasing amount of written language which became available after the invention of printing and since then has played a prominent role in society. As Halliday (1989, p.45) argues, this feedback happens "particularly because of the prestige" of written registers. The aim of the project is to trace these two strands of development, by investigating and comparing texts that are located at different positions on the orality scale. Of course, we do not have records of historical spoken language. Rather, we have to rely on written texts that are as close as possible to the spoken language. So we need to be able to identify conceptually-oral, i.e. spoken-like, texts. The present paper addresses the first step in this enterprise, namely to find means to automatically measure the conceptual orality of a given modern text. In particular, we investigate a range of linguistic features that can be automatically determined and seem useful for this task. The remainder of this paper is structured as follows: Section 2 gives an overview of the related work. In Section 3, features of orality as proposed in the literature are presented, and the set of linguistic features used in the present study is specified. Section 4 introduces the data and describes their linguistic annotation as well as the way we determine expected orality. In Section 5, results from training a classifier on the linguistic features are discussed. Finally, Section 6 summarizes the results and gives an outlook on future investigations. An appendix provides further details of the analysis.
0
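The entry above proposes automatically determinable linguistic features for measuring the conceptual orality of a text and training a classifier on them. The sketch below is a minimal stand-in: three surface features (mean sentence length, 1st/2nd-person pronoun rate, a rough lexical density) and a logistic regression classifier; the feature set, word lists and tiny training examples are assumptions, not the study's actual setup.

```python
import re
from sklearn.linear_model import LogisticRegression

PRONOUNS = {"ich", "du", "wir", "ihr", "mir", "dir", "uns", "euch"}   # 1st/2nd person
FUNCTION_WORDS = {"und", "oder", "aber", "der", "die", "das", "ein", "eine",
                  "ist", "sind", "nicht", "auch", "so", "dann", "ja", "halt"}

def features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"\w+", text.lower())
    n = max(len(tokens), 1)
    return [
        n / max(len(sentences), 1),                        # mean sentence length
        sum(t in PRONOUNS for t in tokens) / n,            # pronoun rate
        1 - sum(t in FUNCTION_WORDS for t in tokens) / n,  # rough lexical density
    ]

texts = ["Ja und dann sind wir so ins Kino, weisst du.",
         "Die Anwendung dieses Verfahrens ermoeglicht eine sichere Steuerung der Zuege."]
labels = [1, 0]   # 1 = conceptually oral, 0 = conceptually literate (assumed labels)

clf = LogisticRegression().fit([features(t) for t in texts], labels)
print(clf.predict([features("Dann haben wir das halt so gemacht, ja.")]))
```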
The use of machine translation (MT) has now become widespread in many areas thanks to improvements in neural modelling (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017). Accordingly, researchers have attempted to integrate discourse into neural machine translation (NMT) systems. As a consequence, document-level human evaluation of MT has gained interest in the community, as it enables a more detailed assessment of suprasentential context. However, the definition of document-level, in terms of how much of the text needs to be shown, is still unclear. Moreover, although a few works have looked into document-level evaluation (Läubli et al., 2018; Toral et al., 2018; Barrault et al., 2019), little is known about the issues of using document-level methodologies to assess MT quality. The present research attempts to shed light on the differences in inter-annotator agreement (IAA) when evaluating MT with different methodologies, namely random single sentences, sentences in context, and full document scores. We also look into the effort perceived by translators when evaluating the translations under the different methodologies. Results show a good level of IAA for a methodology in which translators assess individual sentences within the context of a document, compared to a methodology with random sentence assessments, while a methodology in which translators give a single score per document yields low IAA. Furthermore, we note that misevaluation cases recur in the random single-sentence evaluation scenario.
0
Neural machine translation (NMT) has recently achieved excellent results in the news translation task. Hassan et al. [1] report achieving "human parity" on Chinese→English news translation. The WMT 2018 overview paper [2, p. 291] reports that our English→Czech system "CUNI Transformer" [3] was evaluated as significantly better (p < 0.05) than the human reference. However, it has been shown [4, 5] that evaluating the quality of translation of news articles on isolated sentences, without the context of the whole document (as done in WMT 2018), is not sufficient. Thus, research has focused on document-level translation (see e.g. [6, 7, 8]), which is achieved simply by training on multi-sentence sequences. Another line of research focuses on domain adaptation of NMT; see [10] for an overview. One of the simplest and most effective techniques is fine-tuning [11], where an NMT model trained on (large) general-domain (or "out-domain") data is further trained on (smaller) in-domain data. The term "domain" in domain adaptation is usually understood very broadly: a domain can be defined by any property of the training data (and expected test data), such as the topic, genre, formality, style, written vs. spoken language, etc. As far as we know, there is no prior work on the interaction of the above-mentioned approaches to NMT: document-level translation and domain adaptation. Is domain adaptation of document-level systems different from the domain adaptation of sentence-level systems? What are the differences in the translation output? While we have no definite answers to these questions, we hope our present work brings some new insights into the issue.
0
Extracting temporal information from text is an important linguistic capability for processing health-related text. There are also many potential applications of temporal information extraction in the health-related domain, including forecasting treatment effects (Choi et al., 2016), early detection of diseases (Khanday et al., 2020), and tracking treatment progress (Demner-Fushman et al., 2021). With the recent trend towards telehealth, an automated system that can extract temporal information from health-related narrative text can benefit not only healthcare professionals but also care recipients, enabling active engagement such as self-monitoring. In this paper, we consider the use-case of a sleep diary, a summary of sleep designed to gather information about daily sleep patterns (Carney et al., 2012). A typical sleep diary consists of a series of close-ended questions to record times. By writing sleep diaries, people can keep track of their sleep, monitor sleep habits, and document sleeping problems which can be shared with their sleep therapists. We focus on extracting temporal information from a free-text sleep diary. To achieve this, a system should extract temporal expressions from the unstructured user-generated text and normalise the extracted temporal expressions into a standard format, as illustrated in Figure 1. Temporal information extraction from user-generated text is a challenging task. First of all, it requires processing not only text but also numbers (e.g., 11pm or 23:00). However, recent pre-trained language models (Devlin et al., 2019; Yang et al., 2019) have difficulty processing numbers (Saxton et al., 2018; Ravichander et al., 2019; Dua et al., 2019) because these language models are pretrained with language modelling objectives. Even though there have been recent studies on training language models to process numerical information (Andor et al., 2019; Geva et al., 2020), the remaining challenge is how to obtain a large amount of training data. A second challenge is that there are various ways of describing the same normalised time. For example, the normalised time 23:00 can be expressed as 11, 11 pm, 23:00, eleven o'clock, etc. This issue is even more severe when dealing with user-generated text, which is typically noisy: it is prone to spelling and grammatical errors and contains many abbreviations (Petz et al., 2013). To address this, a sufficiently large training dataset containing pairs of varied temporal expressions and normalised time values is required. A third challenge is that there are different types of temporal expressions, each of which is difficult to extract. For example, temporal expressions include not only standalone times (e.g., 23:00) but also relative times (e.g., 5 minutes after), counts (e.g., 3 times), durations (e.g., for an hour), and frequencies (e.g., once per hour). For relative time expressions, the challenge is how to annotate temporal expressions and model dependencies. For count expressions, the challenge is to deal with ambiguous terms, such as 'several times' and 'a few times'. The last challenge is how to collect large-scale data while developing a proof-of-concept model to validate the hypothesis. Especially for health-related data, data collection requires a rigorous process addressing privacy and ethical aspects, which can be slow.
Moreover, a typical machine learning development process includes multiple cycles of collecting a new dataset and updating a model to improve its performance. Therefore, the challenge is how to train a machine learning model when only a very small amount of training data is available. The main research question of this paper is thus how to extract temporal information from noisy user-generated text with a limited amount of training data. To this end, we propose a synthetic data generation algorithm to augment the training data. We also propose a multi-task model and investigate whether the multi-task learning strategy is beneficial to the target task by exploiting additional training signals from the existing training data. The main contributions of this paper include the following: • A new custom dataset has been collected to demonstrate the free-text sleep diary use-case (Section 3). • The temporal information extraction and normalisation tasks are reformulated as a question answering task (Section 4.1). • A novel model that can extract temporal expressions from unstructured text and normalise them into the standard format is proposed (Section 4.2). • Experimental results show that utilising synthetic data and multi-task learning is beneficial to performance (Section 5.5). • We also provide further analysis of the experimental results to reveal insights into the model's behaviour (Section 6).
0
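The entry above augments scarce training data with synthetically generated pairs of temporal expressions and normalised times. The sketch below shows one way such a generator could look; the templates, surface forms and noise operations are assumptions and do not reproduce the paper's generation algorithm.

```python
import random

HOUR_WORDS = ["one", "two", "three", "four", "five", "six", "seven", "eight",
              "nine", "ten", "eleven", "twelve"]

def render(hour24, minute):
    """Render a clock time in one of several surface forms, with occasional noise."""
    h12 = hour24 % 12 or 12
    suffix = "am" if hour24 < 12 else "pm"
    forms = [
        f"{hour24:02d}:{minute:02d}",
        f"{h12}:{minute:02d} {suffix}",
        f"{h12}{suffix}" if minute == 0 else f"{h12}:{minute:02d}{suffix}",
        f"{HOUR_WORDS[h12 - 1]} o'clock" if minute == 0 else f"{h12} {minute:02d}",
    ]
    surface = random.choice(forms)
    if random.random() < 0.2:                  # inject user-style noise
        surface = surface.replace(":", ".").replace(" ", "")
    return surface

def sample_pair():
    """Return (synthetic diary sentence, normalised time)."""
    hour, minute = random.randint(0, 23), random.choice([0, 15, 30, 45])
    template = random.choice(["I went to bed around {}", "woke up at {}",
                              "fell asleep by {}"])
    return template.format(render(hour, minute)), f"{hour:02d}:{minute:02d}"

random.seed(0)
for _ in range(3):
    print(sample_pair())
```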
Case markers express semantic roles, describing the relationship between the arguments they apply to and the action of a verb. Adpositions (prepositions, postpositions, and circumpositions) further express a range of semantic relations, including space, time, possession, properties, and comparison. The use of specific case markers and adpositions for particular semantic roles is idiosyncratic to every language. Hindi-Urdu has a case-marking system along with a large postposition inventory. Idiosyncratic bundling of case and adpositional relations poses problems in many natural language processing tasks for Hindi, such as machine translation (Ratnam et al. 2018, Jha 2017, Ramanathan et al. 2009, Rao et al. 1998) and semantic role labelling (Pal and Sharma 2019, Gupta 2019). Many models for these tasks rely on human-annotated corpora as training data, such as the one created for the Hindi-Urdu PropBank (Bhatt et al., 2009), and by Kumar et al. (2019). The study of adposition and case semantics in corpora is also useful from a linguistic perspective, for comparing and categorizing the encoding of such relations across languages. There is a lack of corpora in South Asian languages for such tasks. Even Hindi, despite being a resource-rich language, is limited in available labelled data (Joshi et al., 2020). This extended abstract reports on in-progress annotation of case markers and adpositions in a Hindi corpus, employing the cross-lingual SNACS scheme (Semantic Network of Adposition and Case Supersenses). The guidelines we are developing also apply to Urdu, since the grammatical base of Hindi and Urdu is largely the same.
0
This paper reviews the currently available design strategies for software infrastructure for NLP and presents an implementation of a system called GATE, a General Architecture for Text Engineering. By software infrastructure we mean what has been variously referred to in the literature as: software architecture; software support tools; language engineering platforms; development environments. Our gloss on these terms is: common models for the representation, storage and exchange of data in and between processing modules in NLP systems, along with graphical interface tools for the management of data and processing and the visualisation of data. NLP systems produce information about texts (these texts may sometimes be the results of automatic speech recognition; see section 2.6), and existing systems that aim to provide software infrastructure for NLP can be classified as belonging to one of three types according to the way in which they treat this information: additive, or markup-based: information produced is added to the text in the form of markup, e.g. in SGML (Thompson and McKelvie, 1996); referential, or annotation-based: information is stored separately with references back to the original text, e.g. in the TIPSTER architecture (Grishman, 1996); abstraction-based: the original text is preserved in processing only as part of an integrated data structure that represents information about the text in a uniform, theoretically-motivated model, e.g. attribute-value structures in the ALEP system (Simkins, 1994). A fourth category might be added to cater for those systems that provide communication and control infrastructure without addressing the text-specific needs of NLP (e.g. Verbmobil's ICE architecture (Amtrup, 1995)). We begin by reviewing examples of the three approaches we sketched above (and a system that falls into the fourth category). Next we discuss current trends in the field and motivate a set of requirements that have formed the design brief for GATE, which is then described. The initial distribution of the system includes a MUC-6 (Message Understanding Conference 6 (Grishman and Sundheim, 1996)) style information extraction (IE) system, and an overview of these modules is given. GATE is now available for research purposes; see http://www.dcs.shef.ac.uk/research/groups/nlp/gate/ for details of how to obtain the system. It is written in C++ and Tcl/Tk and currently runs on UNIX (SunOS, Solaris, Irix, Linux and AIX are known to work); a Windows NT version is in preparation.
0
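The entry above contrasts markup-based, annotation-based (referential) and abstraction-based infrastructures. The sketch below illustrates the referential style that GATE adopts from TIPSTER, with annotations stored separately from the text and pointing back into it by character offsets; the class and method names are illustrative, not the actual GATE or TIPSTER API.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    kind: str                 # e.g. "Token", "Sentence", "NamedEntity"
    start: int                # character offsets into the original text
    end: int
    features: dict = field(default_factory=dict)

@dataclass
class Document:
    text: str
    annotations: list = field(default_factory=list)

    def add(self, kind, start, end, **features):
        self.annotations.append(Annotation(kind, start, end, features))

    def spans(self, kind):
        # The original text is never modified; annotations only reference it.
        return [(a, self.text[a.start:a.end]) for a in self.annotations if a.kind == kind]

doc = Document("GATE was developed at the University of Sheffield.")
doc.add("NamedEntity", 0, 4, type="Software")
doc.add("NamedEntity", 26, 49, type="Organization")
for ann, surface in doc.spans("NamedEntity"):
    print(ann.features["type"], "->", surface)
```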
Task-oriented dialogue systems play an important role in helping users accomplish a variety of tasks through verbal interactions (Young et al., 2013; Gao et al., 2019). Dialogue state tracking (DST) is an essential component of the dialogue manager in pipeline-based task-oriented dialogue systems. It aims to keep track of users' intentions at each turn of the conversation (Mrkšić et al., 2017). The state information indicates the progress of the conversation and is leveraged to determine the next system action and generate the next system response (Chen et al., 2017). As shown in Figure 1, the dialogue state is typically represented as a set of (slot, value) pairs. [Figure 1: An example dialogue spanning two domains. The dialogue context is shown on the left and the dialogue state at each turn on the right.] Therefore, the problem of DST is defined as extracting the values for all slots from the dialogue context at each turn of the conversation. Over the past few years, DST has made significant progress, attributed to a number of publicly available dialogue datasets, such as DSTC2, FRAMES (El Asri et al., 2017), MultiWOZ 2.0 (Budzianowski et al., 2018), CrossWOZ, and SGD. Among these datasets, MultiWOZ 2.0 is the most popular one. So far, many DST models have been built on top of it (Wu et al., 2019; Ouyang et al., 2020; Hu et al., 2020; Ye et al., 2021b; Lin et al., 2021). However, it has been found that there is substantial noise in the state annotations of MultiWOZ 2.0 (Eric et al., 2020). These noisy labels may impede the training of robust DST models and lead to a noticeable performance decrease (Zhang et al., 2016). To remedy this issue, massive efforts have been devoted to rectifying the annotations, and four refined versions, including MultiWOZ 2.1 (Eric et al., 2020), MultiWOZ 2.2, MultiWOZ 2.3 (Han et al., 2020b), and MultiWOZ 2.4 (Ye et al., 2021a), have been released. Even so, there are still plenty of noisy and inconsistent labels. For example, in the latest version, MultiWOZ 2.4, the validation set and test set have been manually re-annotated and tend to be noise-free, while the training set is still noisy, as it remains intact. In reality, it is costly and laborious to refine existing large-scale noisy datasets or collect new ones with fully precise annotations (Wei et al., 2020), let alone dialogue datasets with multiple domains and multiple turns. In view of this, we argue that it is essential to devise dedicated learning algorithms to train DST models robustly from noisy labels. Although many noisy-label learning algorithms (Natarajan et al., 2013; Han et al., 2020a) have been proposed in the machine learning community, most of them target only multi-class classification (Song et al., 2020).
However, as illustrated in Figure 1, the dialogue state may contain multiple labels, which makes it not straightforward to apply existing noisy-label learning algorithms to the DST task. In this paper, we propose a general framework, named ASSIST (lAbel noiSe-robuSt dIalogue State Tracking), to train DST models robustly from noisy labels. ASSIST first trains an auxiliary model on a small clean dataset to generate pseudo labels for each sample in the noisy training set. Then, it leverages both the generated pseudo labels and the vanilla noisy labels to train the primary model. Since the auxiliary model is trained on the clean dataset, it can be expected that the pseudo labels will help us train the primary model more robustly. Note that ASSIST is based on the assumption that we have access to a small clean dataset. This assumption is reasonable, as it is feasible to manually collect a small noise-free dataset or re-annotate a portion of a large noisy dataset. In summary, our main contributions include: • We propose a general framework, ASSIST, to train robust DST models from noisy labels. To the best of our knowledge, we are the first to tackle the DST problem by taking label noise into consideration. • We theoretically analyze why the pseudo labels are beneficial and show that a proper combination of the pseudo labels and vanilla noisy labels can approximate the unknown true labels more accurately. • We conduct extensive experiments on MultiWOZ 2.0 & 2.4. The results demonstrate that ASSIST can improve DST performance on both datasets by a large margin.
0
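The entry above trains the primary DST model on a combination of pseudo labels from an auxiliary model and the vanilla noisy labels. The sketch below shows the combination idea for a single slot; the candidate values, the mixing weight alpha and the cross-entropy formulation are assumptions, not ASSIST's exact objective.

```python
import numpy as np

values = ["autumn house", "vue cinema", "none"]      # candidate values for one slot

noisy_label = np.array([0.0, 1.0, 0.0])              # annotated value (possibly wrong)
pseudo_label = np.array([0.8, 0.1, 0.1])             # auxiliary model trained on clean data

alpha = 0.6                                           # assumed trust in the pseudo label
combined = alpha * pseudo_label + (1 - alpha) * noisy_label

def cross_entropy(target, predicted):
    return -np.sum(target * np.log(predicted + 1e-12))

primary_prediction = np.array([0.7, 0.2, 0.1])        # primary model output for this slot
print("loss vs noisy label   :", cross_entropy(noisy_label, primary_prediction))
print("loss vs combined label:", cross_entropy(combined, primary_prediction))
```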
The usage of social media sites has significantly increased over the years. Every minute, people upload thousands of new videos on YouTube, write blogs on Tumblr (www.tumblr.com), take pictures on Flickr and Instagram, and send messages on Twitter and Facebook. This has led to an information overload that makes it hard for people to search for and discover relevant information. Social media sites have attempted to mitigate this problem by allowing users to follow, or subscribe to, updates from specific users. However, as the number of followers grows over time, the information overload problem returns. One possible solution to this problem is the use of recommendation systems, which can display to users items and followers that are related to their interests and past activities. Over time, recommender methods have significantly evolved. By observing the history of user-item interactions, the systems learn the preferences of the users and use this information to accurately filter through vast amounts of items, allowing the user to quickly discover new, interesting and relevant items such as movies, clothes, books and posts. There is a substantial body of work on building recommendation systems for discovering new items, following people on social media platforms, and predicting what people like (Purushotham et al., 2012; Chua et al., 2013; Kim et al., 2013). However, these models either do not consider the characteristics of user-item adoption behaviors or cannot scale to the magnitude of the data. It is important to note that the problem of recommending blog posts differs from traditional collaborative filtering settings, such as the Netflix rating prediction problem, in two main aspects. First, the interactions between users and blogs are binary, in the form of follows, and there is no explicit rating information available about the user's preference. The follow information can be represented as a unidirectional unweighted graph, and popular proximity measures based on the structural properties of the graph have been applied to the problem (Yin et al., 2011). Second, blog recommendation inherently has richer side information in addition to the conventional user-item matrix (i.e., the follower graph). In Tumblr, text data carries a lot of information, since posts have no length limitation, in contrast to other microblogging sites such as Twitter. While such user-generated content characterizes the various blogs, user activity is a more direct and informative signal of user preference, as users can explicitly express their interests by liking and reblogging a post. This implies that users who liked or reblogged the same posts are likely to follow similar blogs. The challenge is how to combine multiple sources of information (text and activity) at the same time. For this purpose, we propose a novel convex collective matrix completion (CCMC) social media recommender model, which can scale to a million-by-million matrix using Hazan's algorithm (Gunasekar et al., 2015). Our contributions are as follows: • We propose a novel CCMC-based Tumblr blog post recommendation model. • We represent users and blogs with an extensive set of side information sources, such as user/blog activity and text/tags. • We conduct extensive experimental evaluations on Tumblr data and show that our approach significantly outperforms existing methods.
0
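The entry above completes the user-blog follow matrix jointly with side-information matrices. The sketch below is a deliberately simplified, non-convex collective matrix factorization in which the follow matrix and a user-feature matrix share the same user factors; it stands in for, but does not implement, the paper's convex CCMC with Hazan's algorithm, and all dimensions and data are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_blogs, n_feats, k = 20, 15, 10, 4

R = (rng.random((n_users, n_blogs)) < 0.15).astype(float)   # observed follows (binary)
S = rng.random((n_users, n_feats))                           # user side information

U = rng.normal(scale=0.1, size=(n_users, k))
B = rng.normal(scale=0.1, size=(n_blogs, k))
F = rng.normal(scale=0.1, size=(n_feats, k))

lr, lam = 0.05, 0.01
for _ in range(200):
    err_r = U @ B.T - R            # reconstruction error for the follow matrix
    err_s = U @ F.T - S            # reconstruction error for the side-information matrix
    U -= lr * (err_r @ B + err_s @ F + lam * U)   # shared user factors see both losses
    B -= lr * (err_r.T @ U + lam * B)
    F -= lr * (err_s.T @ U + lam * F)

scores = U @ B.T                    # higher score = more likely follow
print("top unfollowed blog for user 0:", int(np.argmax(scores[0] - 1e9 * R[0])))
```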
The assessment of learners' language abilities is a significant part of language learning. In conventional assessment, the problem of limited teacher availability has become increasingly serious as the population of language learners grows. Fortunately, with the development of computer techniques and machine learning techniques (natural language processing and automatic speech recognition), Computer-Assisted Language Learning (CALL) systems help people to learn a language by themselves. One form of CALL is evaluating the speech of the learner. Efforts in speech assessment usually focus on the completeness, fluency, pronunciation, and prosody (Cucchiarini et al., 2000; Neumeyer et al., 2000; Maier et al., 2009; Huang et al., 2010) of the speech, which are highly predictable in exam forms such as the read-aloud text passage. Another form of CALL is textual assessment, also known as Automated Essay Scoring (AES). Efforts in this area usually focus on the content, arrangement and language usage (Landauer et al., 2003; Ishioka and Kameda, 2004; Kakkonen et al., 2005; Attali and Burstein, 2006; Burstein et al., 2010; Persing et al., 2010; Peng et al., 2010; Attali, 2011; Yannakoudakis et al., 2011) of the text written by the learner under a certain form of examination. In this paper, our evaluation objects are oral English picture compositions in an English as a Second Language (ESL) examination. This examination requires students to talk about four successive pictures with at least five sentences in one minute, with the beginning sentence given. This examination form combines both of the two forms described above. Therefore, we need two steps in the scoring task. The first step is Automatic Speech Recognition (ASR), in which we obtain the speech scoring features as well as the textual transcriptions of the speeches. The second step then grades the free-text transcription in a (conventional) AES system. The present work is mainly about the AES system in this particular setting, as the examination grading criterion is mostly concerned with the integrated content of the speech (the reason will be given in subsection 3.1). There are many features and techniques which are very powerful in conventional AES systems, but applying them to this task raises two problems, since the objects being scored are ASR outputs. The first problem is that the inevitable recognition errors of the ASR affect the performance of the feature extraction and scoring system. The second problem is caused by a particular characteristic of the ASR output: since all these methods are designed for the normal AES situation, they are not suited to this characteristic. The impact of the first problem can be reduced either by perfecting the results of the ASR system or by building an AES system that is not sensitive to ASR errors. Improving the performance of the ASR is not our concern, so building an error-insensitive AES system is what we care about in this paper. This makes many conventional features, such as spelling errors, punctuation errors and even grammar errors, no longer useful in the AES system. The second problem arises when bag-of-words (BOW) techniques are applied to score the ASR transcription. BOW methods are very useful for measuring content features and are usually robust even if there are some errors in the transcription being scored. However, this robustness no longer holds because of the characteristic of the ASR output.
It is known that better ASR performance (i.e., a lower word error rate) usually requires a strongly constrained Language Model (LM). This means that more of the meaningless parts of the oral speeches will be recognized as words closely related to the topic content. These words will usually be the keywords in the BOW methods, which leads to a great disturbance for those methods. Therefore, the conventional BOW methods are no longer appropriate because of this characteristic of the ASR output. To tackle the two problems described above, we apply the Finite State Transducer (FST) (Mohri, 2004). As the objects being evaluated come from an oral English picture composition examination, they have two important features that make the FST approach quite suitable. • Picture composition examinations require students to speak according to the sequence of the pictures, so there is strong sequentiality in the speech. • The sentences describing the same picture are very similar in expression, so there is a hierarchy between the word sequences in the sentences (the expression) and the sense of the same picture. An FST is designed to describe a structure mapping two different types of information sequences. It is very useful for expressing the sequences and the hierarchy in picture composition. Therefore, we build an FST-based model to extract features related to the transcription assessment in this paper. As the FST-based model is similar to the BOW metrics, it is also an error-insensitive model. In this way, the impact of the first problem can be reduced. The FST model is very powerful at capturing sequence information, so a meaningless sequence of words related to the topic content will get a low score under the model. Therefore, it works well with respect to the second problem. In a word, the FST model is not only insensitive to recognition errors in the ASR system, but also remedies the weakness of BOW methods in ASR-result scoring. In the remainder of the paper, related work on conventional AES methods is addressed in Section 2. The details of the speech corpus and the examination grading criterion are introduced in Section 3. The FST model and its improved method are proposed in Section 4. The experiments and the results are presented in Section 5. The final section presents the conclusion and future work.
0
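The entry above scores ASR transcriptions with an FST-based model that rewards on-topic words only when they follow the expected picture sequence. The sketch below mimics that behaviour with a tiny hand-built weighted acceptor and a greedy matching loop; the states, phrases and scoring scheme are invented for illustration and are much simpler than an actual FST built from training transcriptions.

```python
# States correspond to pictures; arcs accept alternative wordings for each picture.
ARCS = {
    0: {("a", "boy", "flies", "a", "kite"): 1,
        ("the", "boy", "is", "flying", "a", "kite"): 1},
    1: {("the", "kite", "gets", "stuck", "in", "a", "tree"): 2},
    2: {("his", "father", "helps", "him"): 3, ("dad", "takes", "it", "down"): 3},
    3: {},
}

def score(tokens):
    state, i, credit = 0, 0, 0
    while i < len(tokens) and ARCS.get(state):
        for phrase, nxt in ARCS[state].items():
            if tuple(tokens[i:i + len(phrase)]) == phrase:
                credit += len(phrase)      # reward in-order, on-topic spans
                i += len(phrase)
                state = nxt
                break
        else:
            i += 1                         # skip words the automaton does not accept here
    return credit / max(len(tokens), 1)

good = "a boy flies a kite the kite gets stuck in a tree his father helps him".split()
shuffled = "kite tree boy a a flies father stuck helps him the gets in his the kite".split()
print(score(good), score(shuffled))        # ordered topical words score high, shuffled ones do not
```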
The field of NLP has seen a resurgence of research in shallow semantic analysis. The bulk of this recent work views semantic analysis as a tagging, or labeling, problem and has applied various supervised machine learning techniques to it (Gildea and Jurafsky, 2000, 2002; Gildea and Palmer, 2002; Surdeanu et al., 2003; Thompson et al., 2003; Pradhan et al., 2003). Note that, while all of these systems are limited to the analysis of verbal predicates, many underlying semantic relations are expressed via nouns, adjectives, and prepositions. This paper presents a preliminary investigation into the semantic parsing of eventive nominalizations (Grimshaw, 1990) in English and Chinese.
0
Swiss German ("Schwyzerdütsch" or "Schwiizertüütsch", abbreviated "GSW") is the name of a large continuum of dialects attached to the Germanic language tree spoken by more than 60% of the Swiss population (Coray and Bartels, 2017). Used every day from colloquial conversations to business meetings, Swiss German in its written form has become more and more popular in recent years with the rise of blogs, messaging applications and social media. However, the variability of the written form is rather large, as orthography is based more on local pronunciations and emerging conventions than on a unique grammar. Even though Swiss German is widespread in Switzerland, there are still few natural language processing (NLP) corpora, studies or tools available (Hollenstein and Aepli, 2014). This lack of resources may be explained by the small pool of speakers (less than one percent of the world population), but also by the many intrinsic difficulties of Swiss German, including the lack of official writing rules, the high variability across different dialects, and the informal context in which texts are commonly written. Furthermore, there is no official top-level domain (TLD) for Swiss German on the Internet, which renders the automatic collection of Swiss German texts more difficult. To foster the development of NLP tools for Swiss German, we gathered the largest corpus of written Swiss German to date by crawling the web using a customized tool. We highlight the difficulties of finding Swiss German on the web and demonstrate in an experimental evaluation how our text corpus can be used to significantly improve an important NLP task: language modeling.
0
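The entry above motivates the corpus with a language modeling evaluation. The sketch below shows the most basic version of that use: estimating a bigram model with add-k smoothing from raw sentences and measuring held-out perplexity; the Swiss German sentences and the smoothing constant are invented for illustration, and the paper's experiments would use far larger data and stronger models.

```python
import math
from collections import Counter

train = ["ich gang hei", "ich gang id schuel", "mir gand id stadt"]   # invented examples
heldout = ["ich gang id stadt"]

unigrams, bigrams = Counter(), Counter()
for sent in train:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))
V = len(set(unigrams) | {"</s>"})               # vocabulary size for smoothing

def logprob(prev, word, k=0.5):
    """Add-k smoothed bigram log-probability."""
    return math.log((bigrams[(prev, word)] + k) / (unigrams[prev] + k * V))

def perplexity(sentences):
    logp, n = 0.0, 0
    for sent in sentences:
        toks = ["<s>"] + sent.split() + ["</s>"]
        for prev, word in zip(toks[:-1], toks[1:]):
            logp += logprob(prev, word)
            n += 1
    return math.exp(-logp / n)

print(f"held-out perplexity: {perplexity(heldout):.1f}")
```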