text: string (lengths 4 to 222k)
label: int64 (values 0 to 4)
The social web and social media networks have received an ever-increasing amount of attention since their emergence 15-20 years ago. Their popularity among billions of users has had a significant effect on the way people consume information in general, and news in particular (Newman et al., 2016). This development is accompanied by a number of challenges, which have resulted in various NLP tasks that deal with information quality (Derczynski and Bontcheva, 2014; Dale, 2017; Saquete et al., 2020). Due to the data-driven nature of these tasks, they are often evaluated under the umbrella of (un)shared tasks, on topics such as rumour detection or verification (Derczynski et al., 2017; Gorrell et al., 2019), offensive language and hate speech detection (Zampieri et al., 2019; Basile et al., 2019; Struß et al., 2019; Waseem et al., 2017; Fišer et al., 2018; Roberts et al., 2019; Akiwowo et al., 2020), or fake news and fact-checking (Hanselowski et al., 2018; Thorne et al., 2019; Mihaylova et al., 2019). Several shared tasks concentrate on stance (Mohammad et al., 2016) and hyperpartisan news detection (Kiesel et al., 2019), which predict either the stance of the author towards the topic of a news piece, or whether or not they exhibit allegiance to a particular party or cause. We argue that transparency and de-centralisation (i.e., moving away from a single, objective "truth" and a single institution, organisation or algorithm that decides on this) are essential in the analysis and dissemination of online information (Rehm, 2018). The prediction of political bias was recently examined in the 2019 Hyperpartisan News Detection task (Kiesel et al., 2019), with 42 teams submitting valid runs, resulting in over 30 publications. This task's test/evaluation data comprised English news articles and used labels obtained by Vincent and Mestre (2018), but their five-point scale was binarised, so the challenge was to label articles as being either hyperpartisan or not hyperpartisan. We follow Wich et al. (2020) in claiming that, in order to better understand online abuse and hate speech, biases in data sets and trained classifiers should be made transparent, as what can be considered hateful or abusive depends on many factors (relating to both sender and recipient), including race (Vidgen et al., 2020; Davidson et al., 2019), gender (Brooke, 2019; Clarke and Grieve, 2017), and political orientation (Vidgen and Derczynski, 2021; Jiang et al., 2020). This paper contributes to the detection of online abuse by attempting to uncover political bias in content. We describe the creation of a new data set of German news articles labeled for political bias. For annotation, we adopt the semi-supervised strategy of Kiesel et al. (2019), who label (English) articles according to their publisher. In addition to opening up this line of research to a new language, we use a more fine-grained set of labels. We argue that, in addition to knowing whether content is hyperpartisan, the direction of bias (i.e., left-wing or right-wing) is important for end user transparency and overall credibility assessment. As our labels are not just about hyperpartisanism as a binary feature, we refer to this task as political bias classification. We apply and evaluate various classification models on the data set. We also provide suggestions for improving performance on this challenging task. The rest of this paper is structured as follows. Section 2 discusses related work on bias and hyperpartisanism.
Section 3 describes the data set and provides basic statistics. Section 4 explains the methods we apply to the 2019 Hyperpartisan News Detection task data (for evaluation and benchmarking purposes) and to our own data set. Sections 5 and 6 evaluate and discuss the results. Section 7 sums up our main findings.
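Since the annotation strategy assigns each article the bias label of its publisher, the distant-labeling step can be sketched in a few lines; the publisher names and label values below are purely illustrative and not taken from the actual data set.

```python
# Minimal sketch of publisher-based distant labeling (hypothetical publishers and labels).
PUBLISHER_BIAS = {
    "example-left-outlet.de": "left",
    "example-centre-outlet.de": "centre",
    "example-right-outlet.de": "right",
}

def label_article(article: dict) -> dict:
    """Attach the publisher's bias label to an article (semi-supervised labeling)."""
    return {**article, "bias": PUBLISHER_BIAS.get(article["publisher"])}

articles = [
    {"id": 1, "publisher": "example-left-outlet.de", "text": "..."},
    {"id": 2, "publisher": "example-right-outlet.de", "text": "..."},
]
print([label_article(a) for a in articles])
```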
0
Open-domain question answering (QA) is a longstanding, unsolved problem. The central challenge is to automate every step of QA system construction, including gathering large databases and answering questions against these databases. While there has been significant work on large-scale information extraction (IE) from unstructured text (Banko et al., 2007; Hoffmann et al., 2010; Riedel et al., 2010), the problem of answering questions with the noisy knowledge bases that IE systems produce has received less attention. In this paper, we present an approach for learning to map questions to formal queries over a large, open-domain database of extracted facts (Fader et al., 2011). Our system learns from a large, noisy question-paraphrase corpus, where question clusters have a common but unknown query and can span a diverse set of topics. Table 1 shows example paraphrase clusters for a set of factual questions. Such data provides a strong signal for learning about lexical variation, but there are a number of challenges. Given that the data is community-authored, it will inevitably be incomplete and contain incorrectly tagged paraphrases, non-factual questions, and other sources of noise. Our core contribution is a new learning approach that scalably sifts through this paraphrase noise, learning to answer a broad class of factual questions. We focus on answering open-domain questions that can be answered with single-relation queries, e.g. all of the paraphrases of "Who wrote Winnie the Pooh?" and "What cures a hangover?" in Table 1. The algorithm answers such questions by mapping them to executable queries over a tuple store containing relations such as authored(milne, winnie-the-pooh) and treat(bloody-mary, hangover-symptoms). The approach automatically induces lexical structures, which are combined to build queries for unseen questions. It learns lexical equivalences for relations (e.g., wrote, authored, and creator), entities (e.g., Winnie the Pooh or Pooh Bear), and question templates (e.g., Who r the e books? and Who is the r of e?). Crucially, the approach does not require any explicit labeling of the questions in our paraphrase corpus. Instead, we use 16 seed question templates and string-matching to find high-quality queries for a small subset of the questions. The algorithm uses learned word alignments to aggressively generalize the seeds, producing a large set of possible lexical equivalences. We then learn a linear ranking model to filter the learned lexical equivalences, keeping only those that are likely to answer questions well in practice. Experimental results on 18 million paraphrase pairs gathered from WikiAnswers demonstrate the effectiveness of the overall approach. We performed an end-to-end evaluation against a database of 15 million facts automatically extracted from general web text (Fader et al., 2011). On known-answerable questions, the approach achieved 42% recall, with 77% precision, more than quadrupling the recall over a baseline system. In sum, we make the following contributions:
• We introduce PARALEX, an end-to-end open-domain question answering system.
• We describe scalable learning algorithms that induce general question templates and lexical variants of entities and relations.
These algorithms require no manual annotation and can be applied to large, noisy databases of relational triples.
• We evaluate PARALEX on the end task of answering questions from WikiAnswers using a database of web extractions, and show that it outperforms baseline systems.
• We release our learned lexicon and question-paraphrase dataset to the research community, available at http://openie.cs.washington.edu.
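The single-relation setup described above can be illustrated with a toy sketch: a small tuple store, a lexicon of relation and entity equivalences, and one question template that maps a question to a query with a free variable. All data and the template are invented for illustration and are far simpler than the learned PARALEX lexicon.

```python
import re

# Toy sketch of answering single-relation questions over a tuple store:
# a lexicon maps surface phrases to relations/entities, a template maps a
# question shape to a query with one free variable. Data are illustrative.
TRIPLES = [("authored", "milne", "winnie-the-pooh"),
           ("treat", "bloody-mary", "hangover-symptoms")]

RELATION_LEX = {"wrote": "authored", "authored": "authored", "cures": "treat"}
ENTITY_LEX = {"winnie the pooh": "winnie-the-pooh", "pooh bear": "winnie-the-pooh",
              "a hangover": "hangover-symptoms"}

# Template: "who/what R E ?"  ->  query r(?x, e)
TEMPLATE = re.compile(r"^(?:who|what) (\w+) (.+)\?$", re.IGNORECASE)

def answer(question: str):
    m = TEMPLATE.match(question.strip())
    if not m:
        return None
    rel = RELATION_LEX.get(m.group(1).lower())
    ent = ENTITY_LEX.get(m.group(2).lower())
    if rel is None or ent is None:
        return None
    # Execute the query r(?x, ent) against the tuple store.
    return [s for (r, s, o) in TRIPLES if r == rel and o == ent]

print(answer("Who wrote Winnie the Pooh?"))   # ['milne']
print(answer("What cures a hangover?"))       # ['bloody-mary']
```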
0
The internal consistency of the annotation in a treebank is crucial in order to provide reliable training and testing data for parsers and linguistic research. Treebank annotation, consisting of syntactic structure with words as the terminals, is by its nature more complex and thus more prone to error than other annotation tasks, such as part-of-speech tagging. Recent work has therefore focused on the importance of detecting errors in the treebank (Green and Manning, 2010), and on methods for finding such errors automatically, e.g. (Dickinson and Meurers, 2003b; Boyd et al., 2007; Kato and Matsubara, 2010). We present here a new approach to this problem that builds upon Dickinson and Meurers (2003b) by integrating the perspective on treebank consistency checking and search in Kulick and Bies (2010). The approach in Dickinson and Meurers (2003b) has certain limitations and complications that are inherent in examining only strings of words. To overcome these problems, we recast the search as one for inconsistently used elementary trees in a Tree Adjoining Grammar-based form of the treebank. This allows consistency checking to be based on structural locality instead of n-grams, resulting in improved precision in finding inconsistent treebank annotation and allowing for the correction of such inconsistencies in future work.
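The string-based consistency checking of Dickinson and Meurers (2003b) that this work builds on can be sketched as follows: identical word n-grams that receive different annotations in different sentences are flagged as potential inconsistencies. The toy example uses POS labels for brevity; the treebank case compares syntactic annotations, and the closing comment hints at why purely string-based variation can be misleading, which is what motivates the structural approach.

```python
from collections import defaultdict

# Sketch of variation-n-gram style inconsistency detection on toy tagged data:
# the same word sequence annotated differently in different sentences is flagged.
tagged_sentences = [
    [("to", "TO"), ("walk", "VB"), ("the", "DT"), ("dog", "NN")],
    [("they", "PRP"), ("walk", "VBP"), ("the", "DT"), ("dog", "NN")],
]

def variation_ngrams(sentences, n=1):
    seen = defaultdict(set)  # word n-gram -> set of label n-grams observed for it
    for sent in sentences:
        for i in range(len(sent) - n + 1):
            words = tuple(w for w, _ in sent[i:i + n])
            labels = tuple(t for _, t in sent[i:i + n])
            seen[words].add(labels)
    return {words: labels for words, labels in seen.items() if len(labels) > 1}

print(variation_ngrams(tagged_sentences, n=1))
# {('walk',): {('VB',), ('VBP',)}} -- here the variation is legitimate ambiguity,
# exactly the kind of case the structural (TAG-based) approach aims to handle better.
```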
0
Paraphrases are semantically equivalent expressions in the same language. Because "equivalence" is the most fundamental semantic relationship, techniques for generating and recognizing paraphrases play an important role in a wide range of natural language processing tasks (Madnani and Dorr, 2010). In the last decade, the automatic acquisition of knowledge about paraphrases from corpora has been drawing the attention of many researchers. Typically, the acquired knowledge is simply represented as pairs of semantically equivalent sub-sentential expressions, as in (1).

(1) a. look like ⇔ resemble
    b. control system ⇔ controller

The challenge in acquiring paraphrases is to ensure good coverage of the targeted classes of paraphrases along with a low proportion of incorrect pairs. However, no matter what type of resource has been used, it has proven difficult to acquire paraphrase pairs with both high recall and high precision. Among various types of corpora, monolingual corpora can be considered the best source for high-coverage paraphrase acquisition, because there is far more monolingual than bilingual text available. Most methods that exploit monolingual corpora rely on the Distributional Hypothesis (Harris, 1968): expressions that appear in similar contexts are expected to have similar meaning. However, if one uses purely distributional criteria, it is difficult to distinguish real paraphrases from pairs of expressions that are related in other ways, such as antonyms and cousin words. In contrast, since the work of Bannard and Callison-Burch (2005), bilingual parallel corpora have been acknowledged as a good source of high-quality paraphrases: paraphrases are obtained by putting together expressions that receive the same translation in the other language (the pivot language). Because translation expresses a specific meaning more directly than context does in the aforementioned approach, pairs of expressions acquired in this manner tend to be correct paraphrases. However, the coverage problem remains: there is much less bilingual parallel than monolingual text available. Our objective in this paper is to obtain paraphrases that have high quality (like those extracted from bilingual parallel corpora via pivoting) but can be generated in large quantity (like those extracted from monolingual corpora via contextual similarity). To achieve this, we propose a method that exploits general patterns underlying paraphrases and uses both bilingual parallel and monolingual sources of information. Given a relatively high-quality set of paraphrases obtained from a bilingual parallel corpus, a set of paraphrase patterns is first induced. Then, appropriate instances of such patterns, i.e., potential paraphrases, are harvested from a monolingual corpus. After reviewing existing methods in Section 2, our method is presented in Section 3. Section 4 describes our experiments in acquiring paraphrases and presents statistics summarizing the coverage of our method. Section 5 describes a human evaluation of the quality of the acquired paraphrases. Finally, Section 6 concludes this paper.
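The pivot step of Bannard and Callison-Burch (2005) mentioned above can be sketched directly: two same-language phrases are scored as paraphrases by marginalizing over shared translations, p(e2 | e1) = Σ_f p(e2 | f) p(f | e1). The phrase-table probabilities below are invented for illustration.

```python
# Sketch of pivot-based paraphrase scoring: p(e2 | e1) = sum_f p(e2 | f) * p(f | e1).
# Toy phrase-translation probabilities (invented numbers), English <-> French pivot.
p_f_given_e = {("look like", "ressembler"): 0.6, ("look like", "avoir l'air"): 0.4,
               ("resemble", "ressembler"): 0.9, ("resemble", "avoir l'air"): 0.1}
p_e_given_f = {("ressembler", "look like"): 0.5, ("ressembler", "resemble"): 0.5,
               ("avoir l'air", "look like"): 0.8, ("avoir l'air", "resemble"): 0.2}

def paraphrase_prob(e1: str, e2: str) -> float:
    # Marginalize over every pivot phrase f that e1 translates to.
    total = 0.0
    for (e, f), p_fe in p_f_given_e.items():
        if e == e1:
            total += p_e_given_f.get((f, e2), 0.0) * p_fe
    return total

print(paraphrase_prob("look like", "resemble"))  # 0.6*0.5 + 0.4*0.2 = 0.38
```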
0
Building systems that can naturally and meaningfully converse with humans has been a central goal of artificial intelligence since the formulation of the Turing test (Turing, 1950). Research on one type of such systems, sometimes referred to as non-task-oriented dialogue systems, goes back to the mid-60s with Weizenbaum's famous program ELIZA: a rule-based system mimicking a Rogerian psychotherapist by persistently either rephrasing statements or asking questions (Weizenbaum, 1966).

Figure 1: Example where word-overlap scores fail for dialogue evaluation; although the model response is reasonable, it has no words in common with the reference response, and thus would be given low scores by metrics such as BLEU. Speaker A: "Hey, what do you want to do tonight?" Speaker B: "Why don't we go see a movie?" Model response: "Nah, let's do something active." Reference response: "Yeah, the film about Turing looks great!"

Recently, there has been a surge of interest towards building large-scale non-task-oriented dialogue systems using neural networks (Sordoni et al., 2015b; Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016a; Li et al., 2015). These models are trained in an end-to-end manner to optimize a single objective, usually the likelihood of generating the responses from a fixed corpus. Such models have already had a substantial impact in industry, including Google's Smart Reply system (Kannan et al., 2016) and Microsoft's Xiaoice chatbot (Markoff and Mozur, 2015), which has over 20 million users. One of the challenges when developing such systems is to have a good way of measuring progress, in this case the performance of the chatbot. The Turing test provides one solution to the evaluation of dialogue systems, but there are limitations with its original formulation. The test requires live human interactions, which is expensive and difficult to scale up. Furthermore, the test requires carefully designing the instructions to the human interlocutors, in order to balance their behaviour and expectations so that different systems may be ranked accurately by performance. Although unavoidable, these instructions introduce bias into the evaluation measure. The more common approach of having humans evaluate the quality of dialogue system responses, rather than distinguish them from human responses, induces similar drawbacks in terms of time, expense, and lack of scalability. In the case of chatbots designed for specific conversation domains, it may also be difficult to find sufficient human evaluators with appropriate background in the topic (Lowe et al., 2015). Despite advances in neural network-based models, evaluating the quality of dialogue responses automatically remains a challenging and understudied problem in the non-task-oriented setting. The most widely used metric for evaluating such dialogue systems is BLEU (Papineni et al., 2002), a metric measuring word overlap that was originally developed for machine translation. However, it has been shown that BLEU and other word-overlap metrics are biased and correlate poorly with human judgements of response quality (Liu et al., 2016). There are many obvious cases where these metrics fail, as they are often incapable of considering the semantic similarity between responses (see Figure 1).
Despite this, many researchers still use BLEU to evaluate their dialogue models (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2015; Galley et al., 2015; Li et al., 2016a), as there are few alternatives available that correlate with human judgements. While human evaluation should always be used to evaluate dialogue models, it is often too expensive and time-consuming to do this for every model specification (for example, for every combination of model hyperparameters). Therefore, having an accurate model that can evaluate dialogue response quality automatically (what could be considered an automatic Turing test) is critical in the quest for building human-like dialogue agents. To make progress towards this goal, we make the simplifying assumption that a 'good' chatbot is one whose responses are scored highly on appropriateness by human evaluators. We believe this is sufficient for making progress, as current dialogue systems often generate inappropriate responses. We also find empirically that asking evaluators for other metrics results either in low inter-annotator agreement or in scores that are highly correlated with appropriateness (see supp. material). Thus, we collect a dataset of appropriateness scores for various dialogue responses, and we use this dataset to train an automatic dialogue evaluation model (ADEM). The model is trained in a semi-supervised manner using a hierarchical recurrent neural network (RNN) to predict human scores.

Table 1: Statistics of the dialogue response evaluation dataset. Each example is in the form (context, model response, reference response, human score).

We show that ADEM scores correlate significantly with human judgement at both the utterance level and the system level. We also show that ADEM can often generalize to evaluating new models, whose responses were unseen during training, making ADEM a strong first step towards effective automatic dialogue response evaluation.
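The failure mode illustrated in Figure 1 is easy to reproduce with a few lines of code: a pure word-overlap score (here unigram precision, the core of BLEU-1 without clipping or the brevity penalty) gives the perfectly reasonable model response a score of zero because it shares no tokens with the single reference response.

```python
# Sketch: unigram precision (the core of BLEU-1, ignoring clipping and the brevity
# penalty) gives the reasonable response from Figure 1 a score of 0 against the reference.
def unigram_precision(candidate: str, reference: str) -> float:
    strip = lambda s: s.lower().replace(",", "").replace(".", "").replace("!", "")
    cand = strip(candidate).split()
    ref = set(strip(reference).split())
    if not cand:
        return 0.0
    return sum(tok in ref for tok in cand) / len(cand)

model_response = "Nah, let's do something active."
reference_response = "Yeah, the film about Turing looks great!"
print(unigram_precision(model_response, reference_response))  # 0.0
```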
0
Constructing high-quality and large-scale corpora has always been a fundamental research area in the field of Chinese natural language processing. In recent years, the rapid development in the fields of machine translation (MT), phonetic recognition (PR), information retrieval (IR), web text mining, etc., has been demanding more Chinese corpora of higher quality and larger scale. Ensuring the consistency of part-of-speech (POS) tagging plays an important role in constructing high-quality Chinese corpora. In particular, we focus on consistency checking of the POS tagging of multi-category words, which consist of the same Chinese characters and are near-synonymous, but have different grammatical functions. No matter how many different POS tags a multi-category word may take, ensuring consistency of POS tagging means assigning the multi-category word the same POS tag whenever it appears in similar contexts. Novel approaches and techniques have been proposed for automatic rule-based and statistics-based POS tagging, and the state-of-the-art approaches achieve tagging precisions of 89% and 96%, respectively. A great portion of the words appearing in Chinese corpora are multi-category words. We have studied the text data from the 2M-word Chinese corpus published by Peking University, and statistics show that multi-category words cover 11% of the words, while the percentage of occurrences of multi-category words is as high as 47%. When checking the POS tags, human experts may have disagreements or make mistakes in some cases. After analyzing 1,042 sentences containing one particular multi-category word extracted from the 2M-word Chinese corpus of Peking University, we found 15 incorrect tags for this word, which accounts for around 1.3%. So far in the field of POS tagging, most work has focused on novel algorithms or techniques for POS tagging itself. Only a limited number of studies have focused on consistency checking of POS tagging. Xing (Xing, 1999) analyzed the inconsistency phenomena of word segmentation (WS) and POS tagging. Qu and Chen (Qu and Chen, 2003) improved corpus quality by obtaining POS tagging knowledge from processed corpora, preprocessing, and checking consistency with methods based on rules and statistics. Qian and Zheng (Qian and Zheng, 2003; Qian and Zheng, 2004) introduced a rule-based consistency check method that obtained POS tagging knowledge automatically from processed corpora with machine learning (ML) and rough set (RS) methods. For real corpora, Du and Zheng (Du and Zheng, 2001) proposed a rule-based consistency check method and strategy to identify the inconsistency phenomena of POS tagging. However, the algorithms and techniques for automatic consistency checking of POS tagging proposed in (Qu and Chen, 2003; Qian and Zheng, 2003; Qian and Zheng, 2004; Du and Zheng, 2001) still have some shortcomings. For example, the assignment of POS tags for inconsistent taggings that are not included in the instance set has to be conducted manually. In this paper, we propose a novel classification-based method to check the consistency of POS tagging. Compared to Zhang et al. (Zhang et al., 2004), the proposed method fully considers the mutual relations of the POS tags in a POS tagging sequence, adopting transition and emission probabilities to describe the mutual dependencies and a k-NN algorithm to weigh the similarity. We evaluated our proposed algorithm on our 1.5M-word corpus.
In the open test, our method achieved a precision of 85.24% and a recall of 85.84%. The rest of the paper is organized as follows. Section 2 introduces the context vector model of POS tagging sequences. Section 3 describes the proposed classification-based consistency check algorithm. Section 4 discusses the experimental results. Finally, concluding remarks are given in Section 5.
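A rough sketch of the general idea, not the paper's exact feature or probability definitions: each occurrence of a multi-category word is represented by a vector describing its POS context, and an occurrence whose assigned tag disagrees with the majority tag of its k nearest neighbours is flagged for re-checking.

```python
from collections import Counter

# Rough sketch of k-NN consistency checking (illustrative only, not the paper's
# exact model): each occurrence of a multi-category word is represented by a
# vector describing its POS context, and an occurrence whose assigned tag
# disagrees with the majority tag of its k nearest neighbours is flagged.
def knn_suspicious(occurrences, k=3):
    """occurrences: list of (context_vector, assigned_tag); returns indices to re-check."""
    flagged = []
    for i, (vec_i, tag_i) in enumerate(occurrences):
        neighbours = sorted(
            (sum((a - b) ** 2 for a, b in zip(vec_i, vec_j)), tag_j)
            for j, (vec_j, tag_j) in enumerate(occurrences) if j != i)[:k]
        majority_tag = Counter(tag for _, tag in neighbours).most_common(1)[0][0]
        if majority_tag != tag_i:
            flagged.append(i)
    return flagged

# Toy data: context vectors (e.g., counts of neighbouring tags) and assigned tags.
occurrences = [([1, 0, 2], "v"), ([1, 0, 2], "v"), ([1, 1, 1], "v"), ([1, 0, 2], "n")]
print(knn_suspicious(occurrences))  # [3] -- same context as the "v" occurrences, tagged "n"
```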
0
The web is full of customers' opinions on various products. Automatic extraction, processing and summarization of such opinions are very useful for future users. Opinions about products are often expressed using evaluative words and phrases that have a certain positive or negative sentiment. Therefore, important features in the qualitative classification of opinions about a particular entity are the opinion words and expressions used in the domain. The problem is that it is impossible to compile a list of opinion expressions that is equally applicable to all domains, as some opinion phrases are used only in a specific domain while others are context-oriented [Lu et al., 2011]. Indeed, sentiment lexicons adapted to a particular domain or topic have been shown to improve task performance in a number of applications, including opinion retrieval [Jijkoun et al., 2010] and expression-level sentiment classification [Choi and Cardie, 2009]. In addition, there are several studies on context-dependent opinion expressions [Lu et al., 2011]. The number of different domains is very large, and recent studies have focused on cross-domain approaches to bridge the gap between domains [Pan et al., 2010]. On the other hand, there are different subject fields that have similar sentiment lexicons. For example, «breathtaking» is an opinion word in the entertainment domain (movies, books, games etc.), but a non-opinion word in the politics domain. Conversely, some words («evil», «treachery» etc.) have strong sentiment in the politics domain but are neutral in the entertainment domain; these words do not express any opinion about a film, game or book. Thus we suppose that different domains can be separated into clusters (for example: entertainment, digital goods, politics, traveling etc.) where domains of the same cluster have similar sentiment lexicons. In this paper we focus on the problem of constructing a domain-specific sentiment lexicon in Russian, which can be utilized for various similar domains. We present a new supervised method for domain-specific opinion word extraction. We train this method in one domain and then utilize it in two others. Then we combine the extracted word lists to construct a general list of opinion words typical of this domain cluster. Our approach is based on several text collections, which can be automatically formed for many subject areas. The set of text collections includes a collection of product reviews with author evaluation scores, a text collection of product descriptions, and a contrast corpus (for example, a general news collection). For each word in the review collection we calculate various statistical features using the aforementioned collections and then apply machine learning algorithms for term classification. To evaluate the effectiveness of the proposed method we conduct experiments on data sets in three different domains: movies, books and computer games. The results show that our approach can identify new opinion words specific to the given domain (for example, "fabricated" in the movie domain). For further evaluation of the lexicon quality, we manually labeled the extracted word lists, and our method proved to be effective in constructing a qualitative list of domain-dependent sentiment lexicon. The results also demonstrate the advantage of combining multiple lists of opinion words over using any single list. The remainder of this article is organized as follows.
Section 2 describes the state of the art in opinion word extraction, Section 3 describes our approach in the movie domain, and Section 4 applies our approach to two other domains and combines the opinion word vocabularies for all three domains.
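A minimal sketch of the feature-based classification step described above, with a deliberately simplified feature set (log-ratios of a word's counts in the review collection versus the contrast corpus and the product descriptions); the actual method uses many more statistical features and larger labeled seed sets. All counts below are invented. Requires scikit-learn.

```python
import math
from sklearn.linear_model import LogisticRegression

# Sketch of the feature-based opinion-word classifier: simple log-ratio features
# computed from the three collections, then a supervised classifier over seed words.
def features(word, review_freq, contrast_freq, description_freq):
    # Log-ratios of smoothed counts across the collections (relative frequencies in practice).
    return [math.log((review_freq.get(word, 0) + 1) / (contrast_freq.get(word, 0) + 1)),
            math.log((review_freq.get(word, 0) + 1) / (description_freq.get(word, 0) + 1))]

# Toy corpus statistics and a few labelled seed words (1 = opinion word, 0 = not).
review_freq = {"fabricated": 40, "great": 300, "camera": 150, "boring": 120}
contrast_freq = {"fabricated": 5, "great": 80, "camera": 10, "boring": 4}
description_freq = {"fabricated": 0, "great": 20, "camera": 200, "boring": 1}

seeds = [("great", 1), ("boring", 1), ("camera", 0)]
X = [features(w, review_freq, contrast_freq, description_freq) for w, _ in seeds]
y = [label for _, label in seeds]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("fabricated", review_freq, contrast_freq, description_freq)]))
```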
0
Phrase translation tables play an important role in the process of building machine translation systems. The quality of the translation table, which identifies the relations between words or phrases in the source language and those in the target language, is crucial for the quality of the output of most machine translation systems. Currently, the most widely used state-of-the-art tool to generate phrase translation tables is GIZA++ (Och and Ney, 2003), which trains the ubiquitous IBM models (Brown et al., 1993) and the HMM introduced by (Vogel et al., 1996), in combination with the Moses toolkit (Koehn et al., 2007). MGIZA++, a multi-threaded word aligner based on GIZA++, was proposed by (Gao and Vogel, 2008). In this paper, we investigate a different approach to the production of phrase translation tables: the sampling-based approach (Lardilleux and Lepage, 2009b). This approach is implemented in a free open-source tool called Anymalign. Being in line with the associative alignment trend illustrated by (Gale and Church, 1991; Melamed, 2000; Moore, 2005), it is much simpler than the models implemented in MGIZA++, which are in line with the estimating trend illustrated by (Brown et al., 1991; Och and Ney, 2003; Liang et al., 2006). In addition, it is capable of aligning multiple languages simultaneously; but we will not use this feature here, as we restrict ourselves to bilingual experiments in this paper. In sampling-based alignment, only those sequences of words sharing the exact same distribution (i.e., they appear in exactly the same sentences of the corpus) are considered for alignment. The key idea is to make more words share the same distribution by artificially reducing their frequency in multiple random subcorpora obtained by sampling. Indeed, the smaller a subcorpus, the less frequent its words, and the more likely they are to share the same distribution; hence the higher the proportion of words aligned in this subcorpus. In practice, the majority of these words turn out to be hapaxes, that is, words that occur only once in the input corpus. Hapaxes have been shown to safely align across languages (Lardilleux and Lepage, 2009a). The subcorpus selection process is guided by a probability distribution which ensures a proper coverage of the input parallel corpus:

EQUATION

where k denotes the size (number of sentences) of a subcorpus and n the size of the complete input corpus. Note that this function is very close to 1/k^2: it gives much more credit to small subcorpora, which happen to be the most productive (Lardilleux and Lepage, 2009b). Once the size of a subcorpus has been chosen according to this distribution, its sentences are randomly selected from the complete input corpus according to a uniform distribution. Then, from each subcorpus, sequences of words that share the same distribution are extracted to constitute alignments, along with the number of times they were aligned. Eventually, the list of alignments is turned into a full-fledged translation table by calculating various features for each alignment. In the following, we use two translation probabilities and two lexical weights as proposed by (Koehn et al., 2003), as well as the commonly used phrase penalty, for a total of five features. One important feature of the sampling-based alignment method is that it is implemented as an anytime algorithm: the number of random subcorpora to be processed is not set in advance, so the alignment process can be interrupted at any moment.
In contrast to many approaches, after a very short amount of time, quality is no longer a matter of time, but quantity is: the longer the aligner runs (i.e. the more subcorpora processed), the more alignments are produced, and the more reliable their associated translation probabilities become, as they are calculated on the basis of the number of times each alignment was obtained. This is possible because high-frequency alignments are quickly output with a fairly good estimation of their translation probabilities. As time goes by, their estimation is refined, while less frequent alignments are output in addition. Intuitively, since the sampling-based alignment process can be interrupted without sacrificing the quality of alignments, it should be possible to allot more processing time to n-grams of similar lengths in both languages and less time to very different lengths. For instance, a source bigram is much less likely to be aligned with a target 9-gram than with a bigram or a trigram. The experiments reported in this paper make use of the anytime feature of Anymalign and of the possibility of allotting time freely. This paper is organized as follows: Section 2 describes a preliminary experiment on the sampling-based alignment approach implemented in the Anymalign baseline and provides the experimental results from which the problem is defined. In Section 3, we propose a variant in order to improve its performance on statistical machine translation tasks. Section 4 introduces a standard normal distribution of time to bias the distribution of n-grams in phrase translation tables. Section 5 describes the effects of pruning on the translation quality. Section 6 presents the merging of two aligners' phrase translation tables. Finally, in Section 7, conclusions and possible directions for future work are presented.
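The core sampling loop can be sketched as follows, under two simplifications: only single words are aligned (the real tool aligns word sequences) and subcorpus sizes are drawn from a plain 1/k^2 distribution, standing in for the distribution described above.

```python
import random
from collections import defaultdict

# Sketch of the sampling-based alignment idea: repeatedly draw a small random
# subcorpus (small sizes strongly preferred, mimicking the ~1/k^2 behaviour
# mentioned above) and align source/target words that occur in exactly the
# same sentences of that subcorpus. Counts accumulate over subcorpora.
def sample_size(n):
    weights = [1.0 / (k * k) for k in range(1, n + 1)]
    return random.choices(range(1, n + 1), weights=weights)[0]

def align(parallel_corpus, num_subcorpora=1000, seed=0):
    random.seed(seed)
    counts = defaultdict(int)
    n = len(parallel_corpus)
    for _ in range(num_subcorpora):
        sub = random.sample(parallel_corpus, sample_size(n))
        src_dist, tgt_dist = defaultdict(set), defaultdict(set)
        for i, (src, tgt) in enumerate(sub):
            for w in src.split():
                src_dist[w].add(i)
            for w in tgt.split():
                tgt_dist[w].add(i)
        for sw, s_sents in src_dist.items():
            for tw, t_sents in tgt_dist.items():
                if s_sents == t_sents:        # identical distribution in the subcorpus
                    counts[(sw, tw)] += 1
    return counts

corpus = [("the cat sleeps", "le chat dort"),
          ("the dog sleeps", "le chien dort"),
          ("a cat", "un chat")]
counts = align(corpus)
print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])
```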
0
Neural network based architectures are increasingly being used for capturing the semantics of natural language (Pennington et al., 2014). We put them to use for the alignment of sentences in monolingual corpora. Sentence alignment can be formally defined as a mapping of sentences from one document to the other such that a sentence pair belongs to the mapping iff both sentences convey the same semantics in their respective texts. The mapping can be many-to-many, as a sentence in one document could be split into multiple sentences in the other to convey the same information. It is to be noted that this task is different from paraphrase identification, because here we are not just considering the similarity between two individual sentences but also the context, in the sense that we make use of the order in which the sentences occur in the documents. Text alignment in machine translation (MT) tasks differs considerably from sentence alignment in monolingual corpora, as MT tasks deal with bilingual corpora which exhibit a very strong level of alignment. But two comparable documents in a monolingual corpus, such as two articles written about a common entity or two newspaper reports about an event, use widely divergent forms to express the same information content. They may contain paraphrases, alternate wording, changes of sentence and paragraph order, etc. As a result, surface-based techniques which rely on comparing sentence lengths, sentence ordering, etc. are less likely to be useful for monolingual sentence alignment, as opposed to their effectiveness in the alignment of bilingual corpora. Sentence alignment finds its use in applications such as plagiarism detection (Clough et al., 2002), information retrieval and question answering (Marsi and Krahmer, 2005). It can also be used to generate training data for tasks such as text summarization.
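A toy sketch of similarity-plus-order-based alignment: pairs are scored with a similarity function (simple token overlap here; a neural sentence encoder as motivated above would be used in practice) and only roughly order-preserving pairs above a threshold are kept. The threshold and window are arbitrary illustrative choices.

```python
# Toy sketch of monolingual sentence alignment: score sentence pairs with a
# similarity function and keep pairs above a threshold, preferring alignments
# that roughly respect sentence order (the "context" constraint).
def similarity(s1: str, s2: str) -> float:
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    return len(t1 & t2) / len(t1 | t2) if t1 | t2 else 0.0

def align_sentences(doc_a, doc_b, threshold=0.3, window=2):
    pairs = []
    for i, sa in enumerate(doc_a):
        for j, sb in enumerate(doc_b):
            # Order constraint: only consider roughly order-preserving pairs.
            if abs(i - j) <= window and similarity(sa, sb) >= threshold:
                pairs.append((i, j))
    return pairs

doc_a = ["The storm hit the coast on Monday.", "Thousands lost power."]
doc_b = ["On Monday a powerful storm hit the coast.",
         "Officials said thousands of homes lost power."]
print(align_sentences(doc_a, doc_b))  # [(0, 0), (1, 1)]
```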
0
Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a semantic formalism encoding the meaning of a sentence as a rooted, directed graph. AMR uses a graph to represent meaning, where nodes (such as "boy", "want-01") represent concepts, and edges (such as "ARG0", "ARG1") represent relations between concepts. Encoding many semantic phenomena into a graph structure, AMR is useful for NLP tasks such as machine translation (Jones et al., 2012; Tamchyna et al., 2015), question answering (Mitra and Baral, 2015), summarization (Takase et al., 2016) and event detection (Li et al., 2015). AMR-to-text generation is challenging, as function words and syntactic structures are abstracted away, making an AMR graph correspond to multiple realizations. Despite much literature so far on text-to-AMR parsing (Flanigan et al., 2014; Wang et al., 2015; Peng et al., 2015; Vanderwende et al., 2015; Pust et al., 2015; Artzi et al., 2015; Groschwitz et al., 2015; Goodman et al., 2016; Zhou et al., 2016; Peng et al., 2017), there has been little work on AMR-to-text generation (Flanigan et al., 2016; Song et al., 2016; Pourdamghani et al., 2016).

Figure 1: Graph-to-string derivation (an AMR graph for "the boy wants to go" is rewritten step by step into the output string).

Flanigan et al. (2016) transform a given AMR graph into a spanning tree, before translating it to a sentence using a tree-to-string transducer. Their method leverages existing machine translation techniques, capturing hierarchical correspondences between the spanning tree and the surface string. However, it suffers from error propagation, since the output is constrained given a spanning tree due to the projective correspondence between them. Information loss in the graph-to-tree transformation step cannot be recovered. Song et al. (2016) directly generate sentences using graph-fragment-to-string rules. They cast the task of finding a sequence of disjoint rules to transduce an AMR graph into a sentence as a traveling salesman problem, using local features and a language model to rank candidate sentences. However, their method does not learn hierarchical structural correspondences between AMR graphs and strings. We propose to leverage the advantages of hierarchical rules without suffering from graph-to-tree errors by directly learning graph-to-string rules. As shown in Figure 1, we learn a synchronous node replacement grammar (NRG) from a corpus of aligned AMR and sentence pairs. At test time, we apply a graph transducer to collapse input AMR graphs and generate output strings according to the learned grammar. Our system makes use of a log-linear model with real-valued features, tuned using MERT (Och, 2003), and beam search decoding. It gives a BLEU score of 25.62 on LDC2015E86, which is the state of the art on this dataset.
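The derivation in Figure 1 can be mimicked with a hand-written sketch of node replacement: each concept node is rewritten together with a string template that wraps the strings produced for the nodes it dominates. The rules below are written by hand for this one example and ignore re-entrancy; they are not the learned synchronous grammar.

```python
# Rough sketch of a node-replacement derivation in the spirit of Figure 1:
# a node is rewritten together with a string template that wraps the strings
# already produced for the nodes it dominates. Hand-written illustration only.
def realize(node, graph, rules):
    """Recursively realize an AMR-like node as a string using simple rules."""
    concept = graph[node]["concept"]
    children = {role: realize(tgt, graph, rules) for role, tgt in graph[node].get("edges", [])}
    return rules[concept](children)

# Toy AMR for "the boy wants to go": want-01 has ARG0 boy and ARG1 go-01
# (the re-entrant ARG0 of go-01 is ignored in this simplified sketch).
graph = {
    "w": {"concept": "want-01", "edges": [("ARG0", "b"), ("ARG1", "g")]},
    "g": {"concept": "go-01", "edges": []},
    "b": {"concept": "boy", "edges": []},
}

rules = {
    "boy": lambda ch: "the boy",
    "go-01": lambda ch: "go",
    "want-01": lambda ch: f"{ch['ARG0']} wants to {ch['ARG1']}",
}

print(realize("w", graph, rules))  # the boy wants to go
```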
0
Word segmentation has been a long-standing challenge for the Chinese NLP community. It has received steady attention over the past two decades. Previous studies show that joint solutions usually lead to improvements in accuracy over pipelined systems by exploiting POS information to help word segmentation and avoiding error propagation. However, traditional joint approaches usually involve a great number of features, which gives rise to four limitations. First, the size of the resulting models is too large for practical use due to the storage and computing constraints of certain real-world applications. Second, the number of parameters is so large that the trained model is apt to overfit the training corpus. Third, a longer training time is required. Last but not least, decoding by dynamic programming might be intractable, since the decoder faces a large search space. The choice of features, therefore, is a critical success factor for these systems. Most of the state-of-the-art systems address their tasks by applying linear statistical models to features carefully optimized for the tasks. This approach is effective because researchers can incorporate a large body of linguistic knowledge into the models. However, the approach does not scale well when it is used to perform more complex joint tasks, for example, the task of joint word segmentation, POS tagging, parsing, and semantic role labeling. A challenge for such a joint model is the large combined search space, which makes engineering effective task-specific features and structured learning of parameters very hard. Instead, we use multilayer neural networks to discover useful features from the input sentences. There are two main contributions in this paper. (1) We describe a perceptron-style algorithm for training the neural networks, which not only speeds up the training of the networks with negligible loss in performance, but can also be implemented more easily; (2) We show that the tasks of Chinese word segmentation and POS tagging can be effectively performed by deep learning. Our networks achieved close to state-of-the-art performance by transferring unsupervised internal representations of Chinese characters into the supervised models. Section 2 presents the general architecture of neural networks and our perceptron-style training algorithm for tagging. Section 3 describes how to leverage large unlabeled data to obtain more useful character embeddings, and reports the experimental results of our systems. Section 4 presents a brief overview of related work. The conclusions are given in Section 5.
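The joint tagging formulation can be sketched compactly: every character receives a combined boundary-plus-POS tag such as B-NN or I-NN, and a simple perceptron update is applied when the predicted tag differs from the gold tag. The sketch uses sparse character-window features for brevity, whereas the paper trains a neural network over character embeddings with a perceptron-style objective.

```python
from collections import defaultdict

# Compact sketch of joint word segmentation and POS tagging as character
# tagging: each character gets a combined boundary+POS tag such as "B-NR" or
# "I-NR". A plain perceptron update is shown over window features; the paper
# instead trains a neural network over character embeddings.
def char_features(chars, i):
    left = chars[i - 1] if i > 0 else "<s>"
    right = chars[i + 1] if i + 1 < len(chars) else "</s>"
    return [f"c={chars[i]}", f"l={left}", f"r={right}", f"lc={left}{chars[i]}"]

def predict(weights, feats, tags):
    return max(tags, key=lambda t: sum(weights[(t, f)] for f in feats))

def train(examples, tags, epochs=5):
    weights = defaultdict(float)
    for _ in range(epochs):
        for chars, gold_tags in examples:
            for i, gold in enumerate(gold_tags):
                feats = char_features(chars, i)
                pred = predict(weights, feats, tags)
                if pred != gold:                      # perceptron update
                    for f in feats:
                        weights[(gold, f)] += 1.0
                        weights[(pred, f)] -= 1.0
    return weights

# Toy training data: "我爱北京" segmented and tagged as 我/PN 爱/VV 北京/NR.
examples = [(list("我爱北京"), ["B-PN", "B-VV", "B-NR", "I-NR"])]
tags = ["B-PN", "B-VV", "B-NR", "I-NR"]
w = train(examples, tags)
print([predict(w, char_features(list("我爱北京"), i), tags) for i in range(4)])
```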
0
How can we automatically infer the sentiment of an author towards an entity based only on the text of their news article? This task can be seen as a part of complete document understanding, with potential uses in detecting journalistic bias and in collecting articles that express certain viewpoints towards entities. There are no readily useful datasets or effective solutions for author sentiment inference in the news domain. Sentiment analysis solutions have covered a wide range of domains, including movie reviews (Pang et al., 2008; Singh et al., 2013; Socher et al., 2011), product reviews (Dave et al., 2003; Turney, 2002; Fang and Zhan, 2015), and social media (Abbasi et al., 2008; Pak and Paroubek, 2010). In the news domain, the closest is inferring the sentiment expressed by one entity (or a group) towards another entity (or a group) mentioned in a news article (Choi et al., 2016). However, this does not necessarily cover the sentiment expressed by the author towards a specific entity. To address this gap, we introduce PerSenT, a crowdsourced dataset of sentiment annotations on news articles about people. For each article, annotators judge what the author's sentiment is towards the main (target) entity of the article. The annotations also include similar judgments on paragraphs within the article. Our experiments with multiple strong classification models show that this is a difficult task that introduces multiple unmet challenges. The code and dataset are released at https://stonybrooknlp.github.io/PerSenT/.

Figure 1: A positive (truncated) document towards the target entity Lisa Pratt. A bold green/red word is lexically positive/negative towards the entity. A green/red unbold word is positive/negative towards other entities in the document. The solid green line means the author's view towards the entity is semantically positive on the span. The target entity and all her mentions are in italics.

The task we propose is difficult mainly because most information contained in a news article is likely irrelevant for the purposes of inferring the author's sentiment. This is a key difference compared to previously studied sentiment problems in domains such as product or movie reviews, where most information is likely relevant for sentiment inference. On the one hand, a single global document-level representation requires careful aggregation of information related to the target entity that is relevant for sentiment inference. On the other, it is also not easy to make local decisions (say at a paragraph level) and aggregate them: in many cases paragraph-level decisions can be noisy due to discourse gaps, and furthermore not all paragraph-level decisions should contribute equally to the final decision. Indeed, neither approach works satisfactorily. For instance, fine-tuning a pre-trained BERT (Devlin et al., 2018) base model yields an F1 of only 0.48 on one test set and 0.42 on the more difficult test set for the document-level task. Paragraph-level prediction using BERT is even more difficult, with 0.43 and 0.40 F1 values for the two test sets respectively (see Section 3.4 for more details). Motivated by the nature of the task, which requires focusing on the target entity and aggregating information across the discourse, we also benchmark models that use entity-oriented representations (Recurrent Entity Network) and discourse-based representations.
However, we find that their performance is worse than the pre-trained BERT classifier. We conduct further analysis to show the nature of the challenges introduced by this news-domain dataset. One of the main challenges in this task is that, as shown in Figure 1, not all paragraphs necessarily convey the author's sentiment towards the target entity. Thus, making a single decision from a representation of the entire document can be ineffective. Furthermore, our analysis shows that prediction in documents with many unique entities is more difficult. We also find that it is easier to predict document-level sentiment when there are many paragraphs that convey the same sentiment. Last, we also provide a qualitative error analysis that indicates the main categories of inference errors. In summary, this work introduces a new sentiment inference task in the news domain that poses new challenges for document-level sentiment inference, along with a dataset that will support further research in this area.
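A minimal sketch of the document-level BERT baseline mentioned above, under the assumption that the target entity and the document are passed to the classifier as a sentence pair with three sentiment classes; the field names and toy example are ours, not the released PerSenT code.

```python
# Minimal sketch of the document-level BERT baseline (assumed setup, not the
# released PerSenT code): the target entity and the document are passed as a
# sentence pair to a pre-trained BERT classifier with 3 sentiment classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

examples = [  # (target entity, document text, label) -- toy data
    ("Lisa Pratt", "Lisa Pratt was praised for her groundbreaking work ...", 2),  # positive
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for entity, text, label in examples:
    batch = tokenizer(entity, text, truncation=True, max_length=512, return_tensors="pt")
    out = model(**batch, labels=torch.tensor([label]))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```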
0
The Paris 7 treebank, also known as the French Treebank (FTB), is the largest available resource of syntactically and morpho-syntactically annotated French text (Abeillé et al., 2003). It is the result of a supervised annotation project on articles from the newspaper Le Monde that has been running for more than a decade. Almost all part-of-speech tagging methods for French use this data set, whether for training or for evaluation, e.g. (Crabbé and Candito, 2008; Denis and Sagot, 2010). The quality of the corpus annotation is therefore crucial. Like most morpho-syntactically annotated corpora, e.g. the Penn Treebank (Marcus et al., 1993) for English, the FTB was built semi-automatically. An automatic tagger is first applied to the whole set of texts. The outputs are then manually corrected for any errors made by the tool. Despite this last step, it is almost certain that some existing errors remain uncorrected and that new errors are introduced (humans not being infallible). Several studies illustrate this issue by describing some of the recurrent annotation errors in the FTB, such as missing tags or empty XML elements (Arun and Keller, 2005; Green et al., 2011). In this study, we present a series of experiments on the automatic correction of the FTB. We detail the different errors we encountered as well as the solutions we apply. Two methods are used. The first consists in identifying words without a tag and assigning them the tag of a corresponding form observed in the corpus. The second method uses n-gram variation to detect and correct annotation anomalies. The evaluation of the corpus correction is carried out extrinsically, by studying the impact of the level of correction on the performance of several part-of-speech tagging methods. The rest of this article is organized as follows. Section 2 presents the French Treebank corpus used in this study. Section 3 is devoted to the description of the proposed method. We then describe our experimental results in Section 4 before presenting work related to ours. Section 6 concludes this study and gives some perspectives for future work.
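The first correction method can be sketched in a few lines: tokens left without a tag receive the tag most frequently observed for the same word form elsewhere in the corpus. The toy (word, tag) list below is illustrative and does not reflect the actual FTB format.

```python
from collections import Counter, defaultdict

# Sketch of the first correction method: assign to untagged tokens the tag most
# frequently observed for the same word form elsewhere in the corpus.
# Toy (word, tag) corpus; None marks a missing tag.
corpus = [("le", "DET"), ("chat", "NC"), ("dort", "V"),
          ("le", "DET"), ("chien", "NC"), ("dort", None)]

observed = defaultdict(Counter)
for word, tag in corpus:
    if tag is not None:
        observed[word][tag] += 1

corrected = [(w, t if t is not None else
              (observed[w].most_common(1)[0][0] if observed[w] else None))
             for w, t in corpus]
print(corrected)  # ('dort', None) becomes ('dort', 'V')
```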
0
In this short paper, we introduce a method to score the polarization of different corpora with respect to a given topic. The method is intended to support studies where two different corpora are compared (e.g., news sources inspired by different political positions or social communities characterized by different viewpoints) to investigate whether they convey implicit attitudes towards a given topic. This corpus-wise comparison (the main peculiarity of our method with respect to the large body of work proposed to study and correct bias in NLP models; we refer to (Garrido-Muñoz et al., 2021) for a detailed survey on bias in NLP models) is based on a new measure that we introduce, the Sliced Word Embedding Association Test (SWEAT). SWEAT is an extension of the Word Embedding Association Test (WEAT) proposed by Caliskan et al. (2017), which measures the comparative polarization for a pair of topical wordsets (e.g., insects and flowers) against a pair of attribute wordsets (e.g., pleasant and unpleasant) in a single-corpus distributional model (e.g., 1950 American newspaper articles). In this context, with polarization we refer to the phenomenon by which two communities have opposite attitudes towards some topic. With SWEAT we extend this approach by measuring the relative polarization for a single topical wordset (the topic), using a pair of stable attribute wordsets deemed to have opposite valence (the two poles), in a pair of aligned distributional models representing the semantics of two different corpora. We explain the rationale behind SWEAT with an example. Suppose that we want to investigate whether two different Italian news sources, e.g., La Repubblica (known to be closer to centre-left political positions) and Il Giornale (known to be closer to centre-right political positions), hold different and opposite viewpoints about a topic, e.g., "Berlusconi" (a reference centre-right Italian politician in the recent past). We can collect a news corpus from La Repubblica and one from Il Giornale and train two different distributional models in such a way that they are aligned (Hamilton et al., 2016b; Carlo et al., 2019; Cassani et al., 2021). We expect that some words have stable meanings while others change across corpora, reflecting the different viewpoints. We can then select a set of words describing the "Berlusconi" topic, whose representations are expected to differ across corpora, and two wordsets having respectively positive and negative valence (the two poles), whose representations are expected to be stable across corpora. The main idea behind SWEAT is the following: if the two corpora hold polarized views about the topic, the "Berlusconi" wordset will be associated more strongly with the positive rather than with the negative pole in one corpus (Il Giornale), while the opposite association will hold in the other corpus (La Repubblica). SWEAT measures this difference and reports effect size and significance. Contributions. We introduce SWEAT, a novel statistical measure to study relative polarization in distributional representations. We additionally introduce a lexicon selection pipeline and easy-to-use code to create visualizations. We believe our measure can be useful for different use cases in the computational social science field. We share a repository with an easy-to-use implementation of our measure (https://github.com/vinid/SWEAT).

2 Background: WEAT
Caliskan et al. (2017) introduce the Word Embedding Association Test (WEAT) to test whether distributional representations exhibit the same implicit biases detected in social science studies through behaviorally measured word associations (Greenwald et al., 1998). The WEAT compares the relative associations of a pair of target concepts X and Y (e.g., Science and Arts) to a pair of attribute concepts A and B (e.g., Male and Female) in a distributional vector space E; X, Y, A, and B are all sets that contain representative words for the concept. The statistical measure is based on the following formula:

S(X, Y, A, B) = \sum_{x \in X} s(x, A, B) - \sum_{y \in Y} s(y, A, B)

The value s(w, A, B) is computed as:

s(w, A, B) = \frac{1}{|A|} \sum_{a \in A} \cos(E(w), E(a)) - \frac{1}{|B|} \sum_{b \in B} \cos(E(w), E(b))

The effect size is defined as:

d = \frac{\mathrm{mean}_{x \in X}\, s(x, A, B) - \mathrm{mean}_{y \in Y}\, s(y, A, B)}{\mathrm{std}_{w \in X \cup Y}\, s(w, A, B)}

Significance is computed through a permutation test (Dwass, 1957) over the possible equal-size partitions of the union of the target wordsets, P[X \cup Y] = \{(X_i, Y_i)\}_i. The p-value is computed as the rate of scores, over all possible permutations, that are higher than the tested one:

P_i[S(X_i, Y_i, A, B) > S(X, Y, A, B)]

Depending on the sign of the score, the association is either X ∼ A, Y ∼ B for positive scores or X ∼ B, Y ∼ A for negative ones, where ∼ indicates semantic association.
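A direct implementation of the WEAT score, effect size and permutation test defined above might look like the following; any word-vector lookup can play the role of E, and random vectors are used here only so that the snippet runs on its own.

```python
import itertools
import numpy as np

# WEAT as defined above: S(X, Y, A, B), effect size d, and a permutation p-value.
def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def s(w, A, B, E):
    return np.mean([cos(E[w], E[a]) for a in A]) - np.mean([cos(E[w], E[b]) for b in B])

def weat(X, Y, A, B, E):
    score = sum(s(x, A, B, E) for x in X) - sum(s(y, A, B, E) for y in Y)
    assoc = [s(w, A, B, E) for w in X + Y]
    d = (np.mean([s(x, A, B, E) for x in X]) - np.mean([s(y, A, B, E) for y in Y])) / np.std(assoc)
    # Permutation test over equal-size partitions of X ∪ Y.
    union, higher, total = X + Y, 0, 0
    for Xi in itertools.combinations(union, len(X)):
        Yi = [w for w in union if w not in Xi]
        total += 1
        if sum(s(x, A, B, E) for x in Xi) - sum(s(y, A, B, E) for y in Yi) > score:
            higher += 1
    return score, d, higher / total

rng = np.random.default_rng(0)
vocab = ["science", "math", "art", "poetry", "male", "man", "female", "woman"]
E = {w: rng.normal(size=50) for w in vocab}  # stand-in for real embeddings
print(weat(["science", "math"], ["art", "poetry"], ["male", "man"], ["female", "woman"], E))
```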
0
On 26 September 2013, at a press conference celebrating its 15th anniversary, Google announced its new algorithm, dubbed "Hummingbird". This new algorithm moves away from the keyword-query logic of information retrieval (IR) and opens up to natural language (NL) queries. This is a major change for the search giant, whose stated objective is to be able to handle longer and more complex queries while taking into account the meaning of words in their context. To take Danny Sullivan's example, for the query "What is the closest place to my home where I can buy an iPhone 5S?", Google should be able to take into account and link the notions "place close to my home", "buy" and "iPhone 5S", that is, the desired object, the type of action performed on the object (buying it) and the geographical scope (close to where the user, "my home", is located). The objective is therefore twofold: (a) to understand the request as a whole in order to give it a more accurate "meaning", and (b) to capitalize on information other than that appearing on the search pages. More broadly, the first point aims, through the expressed request, at interpreting the user's information need. Thus, in the example cited, this would amount to understanding that the request concerns the telephony domain, but also that the need extends to an intention to buy a device. The second point concerns the place of residence, if the user has already provided this information. Queries previously submitted to the search engine can also help to "contextualize" the request, in particular with regard to the user's interests. This algorithm reflects an awareness on the part of the developers of information retrieval systems (IRS) of the need to process queries more effectively, with a better contextualization of the information need. In this respect, several avenues of improvement are being pursued: the first aim to better define the information need, the users' goals and the tasks underlying its fulfilment; the second concern a better contextualization of the user's context and environment.
0
Social media has become an important real-time information source, especially during emergencies, natural disasters and other hot events. According to a new Pew Research Center survey, social media has surpassed traditional news platforms (such as TV and radio) as a news source for Americans: about two-thirds of American adults (68%) get news via social media. Among all major social media sites, Twitter is still the site Americans most commonly use for news, with 71% of Twitter's users getting their news from Twitter. However, it can often be daunting to catch up with the most recent content due to the high volume and velocity of tweets. Hence, social summarization, which aims to acquire the most representative and concise information from massive numbers of tweets when a hot event happens, is particularly urgent. In recent years, many large-scale summarization datasets have been proposed, such as New York Times (Sandhaus, 2008), Gigaword (Napoles et al., 2012), NEWSROOM (Grusky et al., 2018) and CNN/DAILYMAIL (Nallapati et al., 2016). However, most of these datasets focus on formal document summarization. Social media text has many characteristics that differ from formal documents: 1) Short. The length of a tweet is limited to 140 characters, which is much shorter than a formal document. 2) Informal. Tweets usually contain informal expressions such as abbreviations, typos, special symbols and so on, which make tweets more difficult to deal with. 3) Social signals. There are different kinds of social signals on social media, such as hashtags, URLs and emojis. 4) Potential relations. Tweets are generated by users and hence have potential connections through user relationships. Because of these characteristics, traditional summarization methods often do not perform well on social media.

Figure 1: Diagram of the process for creating the TWEETSUM dataset.

Some social media summarization datasets do exist (Hu et al., 2015; Li et al., 2016; P.V.S. et al., 2018; Duan et al., 2012; Cao et al., 2017; Nguyen et al., 2018). However, these datasets only consider the text on social media and ignore the potential social signals in the social network. In a social context, the interactions between friends are obviously different from those between strangers. This phenomenon demonstrates that social relationships can affect user behavior patterns and consequently affect the content of the tweets users post. This inspires us to consider integrating social-relation-relevant signals when analyzing social information. In this paper, we construct an event-oriented large-scale dataset with user relations for social summarization, called TWEETSUM. It contains 12 real-world hot events with a total of 44,034 tweets and 11,240 users. In summary, this paper provides the following contributions: (1) We construct an event-oriented social media summarization dataset, TWEETSUM, which contains social signals. To our knowledge, it is the first summarization dataset that contains user-relationship-relevant social signals, such as hashtags and user profiles; (2) We create expert summaries for each social event and verify the existence of sociological theories in real data, including social consistency and contagion; (3) We evaluate the performance of typical extractive summarization models on our TWEETSUM dataset to provide benchmarks and validate the effectiveness of the dataset.
0
Online reviews have become a rich source of information for people who want to know more about real-world entities before making purchasing decisions (BrightLocal, 2019). Reviews contain diverse information, ranging from general sentiments and customer experiences to features and attributes of an entity. Table 1 shows examples of different types of information found in reviews. Since consuming a large number of reviews can be cumbersome, text mining tools and algorithms are popularly used to uncover and aggregate customer sentiments expressed in opinions and experiences, providing a summary of how the entities are perceived by customers. However, existing mining tools largely ignore information about unique features and attributes of the reviewed entity. Such information tends to be sparse compared to expressions about usage, experience and opinions. We observe that in domains such as hotel reviews, sentences with unique features can be as few as 5% of all sentences in the reviews. In a public dataset (Reviews, 2021), for example, "rooftop bar" of Table 1 appears in only 3,026 of 8,211,545 sentences, and the attribute is so rare that it exists in only 197 of 3,945 TripAdvisor hotels. Nevertheless, such information is of great interest to users and can be further useful for many downstream applications such as ranking reviews, creating concise entity summaries and answering questions about the entities. In this work, we focus on mining sentences that describe unique information about entities from their reviews. We call these unique sentences salient facts and denote this task as Salient Fact Extraction. Although scarce, salient facts exhibit at least one of two characteristics: (a) they mention attributes rarely used to describe other entities (example 1 in Table 1), or (b) they convey unique, detailed information (e.g. numeric or categorical) about a common attribute (example 2 in Table 1). Due to the scarcity of salient facts in reviews, collecting a labeled dataset to train a supervised model is extremely inefficient and cost-prohibitive.

Table 1: Different types of information in hotel reviews. A salient fact mentions attributes distinctive to the hotel or provides uncommon descriptions for common attributes.
1. There is a rooftop bar. (Salient Fact)
2. The hotel gives 90% discount for seniors. (Salient Fact)
3. The price is cheap. (Sentiment)
4. We stayed 3 nights here. (Usage Experience)
5. Choose other hotels instead. (Suggestion)

Although there is a rich body of research on extracting tips and informative and helpful sentences from reviews (Li et al., 2020; Novgorodov et al., 2019; Negi and Buitelaar, 2015; Guy et al., 2017a; Chen et al., 2014; Hua et al., 2019; Zhang et al., 2019; Gao et al., 2018), these approaches have several limitations for extracting salient facts. Firstly, informativeness and saliency are related but have subtle differences. Not all informative sentences describe unique information about an entity. Secondly, due to the scarcity of salient facts, collecting labeled training data to train supervised techniques (the common approach for finding informative reviews) can be expensive and time-consuming. To address the scarcity problem, we propose a novel unsupervised extractor for identifying salient sentences in a zero-label setting where abundant unlabeled reviews are available. A naive approach is to refer to the distributional patterns of salient sentences in a review corpus.
We projected all the sentences in a corpus onto a t-SNE plot (Hinton and Roweis, 2002) and found that salient sentences tend to appear as border points on the graph. However, we observed that not all border points are salient facts. Many sentences mentioning named entities or unique personal stories also appear as border points. Such non-informative sentences make the distributional patterns noisy and the extraction challenging.

Based on these distributional patterns, we propose a novel system, ZL (Zero Label)-Distiller, which uses two Transformer-based models to capture unique and informative distributional patterns and extract salient facts. It uses a Transformer-based entity prediction model to identify the most distinctive sentences for an entity, and another Transformer-based model to filter out non-informative sentences, so that informative sentences are kept. The former measures how distinctive a review sentence is to the corresponding entity but not to others. The latter masks entity names in all sentences and drops those sentences that are likely personal stories. To the best of our knowledge, this is the first work to capture distributional patterns of all sentences for mining useful review sentences.

Contributions. In summary, our contributions are fourfold. (1) We formulate a novel task that extracts entity-specific information (denoted as salient facts) from online reviews. (2) To deal with the scarcity of salient facts, we present an unsupervised method, ZL-Distiller, which relies on distributional patterns instead of human annotations. (3) We show that ZL-Distiller leads to new state-of-the-art performance when used independently or combined with supervised models on 3 domains (Hotel, Product and Restaurant). (4) We demonstrate that ZL-Distiller benefits downstream applications, including question answering and entity summarization, by removing non-informative sentences from the pipeline.
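To make the entity-prediction idea above concrete, the following Python sketch scores each review sentence by how confidently a simple classifier can attribute it to its own entity. A TF-IDF plus logistic-regression model stands in for the Transformer-based entity predictor described here, and the toy corpus, entity names and scoring rule are illustrative assumptions rather than the actual ZL-Distiller setup.

# Simplified sketch of the entity-prediction idea: score each review
# sentence by how confidently a classifier can attribute it to its own
# entity; distinctive (salient) sentences should receive higher scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy corpus: (sentence, entity_id) pairs.
corpus = [
    ("There is a rooftop bar.", "hotel_a"),
    ("The rooms were clean and the staff was friendly.", "hotel_a"),
    ("Breakfast was included in the price.", "hotel_b"),
    ("The hotel gives a 90% discount for seniors.", "hotel_b"),
]
sentences, entities = zip(*corpus)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(sentences)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, entities)

# Salience score: probability mass the classifier assigns to the sentence's
# own entity.
probs = clf.predict_proba(X)
entity_index = {e: i for i, e in enumerate(clf.classes_)}
for (sent, ent), p in zip(corpus, probs):
    print(f"{p[entity_index[ent]]:.2f}  {ent}  {sent}")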
0
Natural Language Processing and Understanding (NLP/NLU) have recently received a great deal of research attention. Several language models have been trained on huge corpora (Peters et al., 2018; Devlin et al., 2018), and benchmarks have shown how such methods achieve enhanced performance (Devlin et al., 2018). However, compared to people, most end-to-end trained methods perform disappointingly on common sense. For instance, people immediately recognize that an apple can be placed in a fridge but a TV cannot; for trained systems, recognizing this difference is much harder. Accordingly, it is essential to be able to evaluate how well a system recognizes common sense (Davis, 2017). Recent datasets have tested common sense through tasks such as co-reference resolution (Ortiz, 2015) or subsequent event prediction (Zellers et al., 2018). They consider a method to be equipped with common sense if it can offer a correct answer without any additional knowledge being added to the input.

SemEval, the International Workshop on Semantic Evaluation, developed from SensEval to evaluate methods for semantic analysis. The 14th edition of this workshop, SemEval-2020, provided 12 tasks. We participated in Task 4 - Commonsense Validation and Explanation (Wang et al., 2020), which offers three subtasks with datasets in English. The main goal of this task is to recognize which statements make sense. Further information about Task 4 and the datasets is given in Section 3.

This paper describes our model for participating in SemEval-2020 (Task 4 - Subtask A). Our team built an ensemble system that, given two English statements with similar wording, identifies which one makes sense and which one does not. We used four different state-of-the-art pre-trained models (BERT (Devlin et al., 2018), ALBERT (Lan et al., 2019), RoBERTa (Liu et al., 2019), and XLNet), combined their outputs, and applied majority voting over the models' predictions to choose the final output. Our baseline model scored 89.1 accuracy, while our improved model showed significant gains over the baseline, scoring 96.2, which is 0.8 away from the first-ranked model. Our baseline model ranked 17th out of 41 teams, while TeamJUST would rank fourth.

The paper is structured as follows: related work is provided in Section 2. A description of Task 4 and the datasets is presented in Section 3. The architecture of our approach is introduced in Section 4. The detailed experiments are provided in Section 5. Results and analysis are presented in Section 6. Finally, the conclusion is given in Section 7.
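As an illustration of the voting step described above, the sketch below combines binary predictions from several models by majority vote. The per-model predictions are placeholder values; producing them would require fine-tuning BERT, ALBERT, RoBERTa and XLNet on the task data, which is not shown here.

# Minimal sketch of majority voting over per-model binary predictions.
import numpy as np

# predictions[m][i] = 0 or 1: which of the two statements model m judges
# to be the nonsensical one for example i (toy values for illustration).
predictions = np.array([
    [0, 1, 1, 0],   # BERT
    [0, 1, 0, 0],   # ALBERT
    [1, 1, 1, 0],   # RoBERTa
    [0, 0, 1, 0],   # XLNet
])

votes = predictions.sum(axis=0)                       # models voting "1"
# Strict majority required; a 2-2 tie defaults to label 0 in this sketch.
ensemble = (votes > predictions.shape[0] / 2).astype(int)
print(ensemble)  # -> [0 1 1 0]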
0
Recurrent neural network language models (LMs) can learn to predict upcoming words with remarkably low perplexity (Mikolov et al., 2010; Jozefowicz et al., 2016; Radford et al., 2019) . This overall success has motivated targeted paradigms that measure whether the LM's predictions reflect a correct analysis of sentence structure. One such evaluation strategy compares the probability assigned by the LM to a minimal pair of sentences differing only in grammaticality (Linzen et al., 2016) . In the following example, the LM is expected to assign a higher probability to the sentence when the verb agrees in number with the subject (1a) than when it does not (1b):(1) a. The author laughs.b. *The author laugh.RNN LMs have been shown to favor the grammat-ical variant in the vast majority of cases sampled at random from a corpus (Linzen et al., 2016) , but their accuracy decreases in the presence of distracting nouns intervening between the head of the subject and the verb, especially when those nouns are in relative clauses (Marvin and Linzen, 2018 ).Can we hope to address these deficits by training larger and larger networks on larger and larger corpora, relying on the "unreasonable effectiveness" of massive datasets (Halevy et al., 2009) and computational power? 1 Or would architectural advances be necessary to improve our LMs' syntactic representations (Kuncoro et al., 2018) ? This paper takes a first step towards addressing this question. We train 125 RNN LMs with long short-term memory (LSTM, Hochreiter and Schmidhuber, 1997) units, systematically varying the size of the training corpus and the dimensionality of the models' hidden layer, and track the relationship between these parameters and the performance of the models on agreement dependencies in a range of syntactic constructions (Marvin and Linzen, 2018) . We also compare our RNNs' accuracy to that of GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) , Transformer-based LMs trained on very large corpora. We find that model capacity does not consistently improve performance beyond a minimum threshold. Increased corpus size likewise has a moderate and inconsistent effect on accuracy. We estimate that even if training data yielded consistent improvements, an unreasonable amount of data would be required to match human accuracy. We conclude that reliable and data-efficient learning of syntax is likely to require external supervision signals or a stronger inductive bias than that provided by RNNs and Transformers.
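The minimal-pair evaluation itself is straightforward to reproduce with an off-the-shelf LM. The sketch below scores both variants of example (1) with GPT-2 via the Hugging Face transformers library and checks that the grammatical variant receives the higher log probability; GPT-2 is used here only as a convenient stand-in for the LSTM, GPT and BERT models compared in this paper.

# Score a minimal pair with a pre-trained LM and compare log probabilities.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # The model shifts labels internally; loss is the mean NLL per token.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)   # total log prob of the continuation

good = "The author laughs."
bad = "The author laugh."
print(sentence_logprob(good) > sentence_logprob(bad))  # expected: True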
0
In recent times, with rapid digitisation, people are increasingly using social media and various other online forums for interpersonal communication (Riehm et al., 2020). However, these platforms also come with their own share of drawbacks, such as the propagation of fake news (Waszak et al., 2018) and cyberbullying (Whittaker and Kowalski, 2015), to list a few.

Comments that are offensive and often degrading, whether targeted at an individual or at a community as a whole, are categorised as abusive comments. These comments often have negative effects on the mental well-being of people (O'Reilly et al., 2018), with an apparent relation between the time spent on social media and increasing levels of depression (Karim et al., 2020). There is a pressing need for moderation on these websites, which motivates the creation of a system that can classify abusive comments into one of many categories. Such a system could also be useful in identifying and filtering out vitriolic content.

A major challenge in this task is that most of the available data contains a mixture of languages (Lin et al., 2021), with people often transliterating from their native language into English. This poses a hurdle, as most resources available for abusive language detection are pretrained on English text.

Tamil is a Dravidian classical language used by the Tamil people of South Asia. Tamil is an official language of Tamil Nadu, Sri Lanka, Singapore, and the Union Territory of Puducherry in India. Tamil is one of the world's longest-surviving classical languages. Malayalam is Tamil's closest significant cousin; the two began splitting during the 9th century AD (Anita and Subalalitha, 2019a,b; Subalalitha and Poovammal, 2018; Subalalitha, 2019; Srinivasan and Subalalitha, 2019; Narasimhan et al., 2018; Sakuntharaj and Mahesan, 2021; Thavareesan and Mahesan, 2019, 2020a,b, 2021).

Task A of Abusive Comment Detection in Tamil-ACL 2022 (Priyadharshini et al.) involves the classification of purely Tamil text, whereas Task B deals with the classification of code-mixed Tamil-English text into the 8 categories listed in Table 1. Our approach for Task B was to create embeddings for each data record and then pass them to various classifiers. Three types of features were employed: embeddings from a multilingual BERT model that produces language-agnostic representations, TF-IDF vectors, and a combination of both.

The remainder of this paper is organised as follows. Section 2 is dedicated to related work obtained from the literature survey. Section 3 proceeds to describe the dataset used. Section 4 covers the details of the preprocessing steps, outlines the feature extraction process and describes the model employed for this task. Section 5 summarises the results and Section 6 concludes the paper.

Table 1: Categories of comments with examples and train/dev counts; for instance, "None-of-the-above" covers comments that do not belong in any of the other categories (e.g., "Bala kumar wat ur asking.? 1st olunga kealviya kealunga."), and another category covers comments indicating contempt against men.

Identification and classification of offensive content in a fast and effective manner is very important for the moderation of online platforms (Priyadharshini et al., 2021; Kumaresan et al., 2021; Chakravarthi, 2020; Chakravarthi and Muralidaran, 2021; Sampath et al., 2022; Ravikiran et al., 2022; Chakravarthi et al., 2022; Bharathi et al., 2022). We explored various models to achieve this.
Ravishankar et al. (Ravishankar and Raghunathan, 2017) proposed three different approaches to classify Tamil tweets based on syntactic patterns. These include a tweet weight model, TF-IDF and Domain-Specific Tags (DST), and they used a Tamil dictionary (Agarathi). The authors collected tweets from 100 movies, which amounted to 7,000 tweets. They also proposed further feature extraction models, including TF-IDF, adjective rules and negation rules, which could be passed into classifiers.

Alison P. Ribeiro et al. (Ribeiro and Silva, 2019) presented a model to identify hate speech against women and immigrants. It consisted of pre-trained word embeddings using FastText and GloVe, which they passed through a CNN.

Younes Samih et al. (Modha et al., 2021) modelled an architecture with Support Vector Machines (SVM) and Deep Neural Networks (DNNs) for the task of identifying hate speech and offensive content. They experimented with four different approaches and combined them into an ensemble. The first used FastText, the second an FFNN architecture with four hidden layers, and the third pretrained word embeddings created with the Mazajak method, which were then passed into a CNN layer and a BiLSTM layer. Their final approach used BERT. They combined these into an ensemble, which performed well on the given dataset.

Anna Glazkova et al. (Glazkova et al., 2021) presented models for the HASOC 2021 task, which focused on detecting offensive, profane and hateful content in tweets in six languages. Their models include pretrained BERT, RoBERTa and LaBSE. Although the performance of the models was similar on the English datasets, LaBSE outperformed the others on the Hindi and Marathi datasets.

Shervin Malmasi et al. (Malmasi and Zampieri, 2017) used a corpus of 14.5k English tweets and modelled an approach to classify them as hate speech or non-hate speech. Their model uses character n-grams, word n-grams and word skip-grams for feature extraction, which are passed to a linear SVM classifier.

Burnap and Williams (Burnap and Williams, 2014) used unigram and bigram feature extraction techniques along with part-of-speech (POS) tagging; they also used the Stanford Lexical Parser, together with a context-free lexical parsing model, to extract typed dependencies within the tweet text. These features were then passed to classifiers such as Bayesian Logistic Regression, Random Forest decision trees and Support Vector Machines.

Aswathi Saravanaraj et al. proposed an approach for the automatic identification of cyberbullying words and rumours. They modelled a Naive Bayes and a Random Forest approach, which obtained greater accuracy than pre-existing models.

From the literature survey performed, it is inferred that approaches involving feature extraction using TF-IDF deliver good results and that transformer models like LaBSE work best for Indian language datasets, with particularly high accuracy for the Tamil language. The SVM classifier works well for hate/abusive language recognition. Although various innovative models have been experimented with in the studies discussed above, a model involving TF-IDF feature extraction, LaBSE and an SVM is a novel approach to this task.

The following preprocessing steps were applied to the dataset:

1. Conversion to lowercase: Words that differ only in capitalisation may be considered to be different words and, to prevent that from happening, the dataset was standardised by converting the text to lowercase.

2. Removal of punctuation: Since the model involves creating a corpus of the most frequently occurring words in every category, punctuation marks are removed.
The list of punctuation marks from the string library was used in this process.

3. Removal of extra unwanted characters: The dataset contained a significant number of lines with noise, including emojis and iOS flags, which were filtered out using RegEx.

4. Removal of stop words: Stop words are words in a language that are used in abundance as part of the grammatical structure but do not necessarily add to the meaning of the sentence as a whole. These include prepositions, pronouns and articles, among others. To achieve this, a curated list of Tamil-English stop words was created and used. It includes words such as "r" (are), "ur" (your), "nee" (translates to "you" in Tamil) and "intha" (translates to "this" in Tamil).
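A minimal sketch of the feature combination described above is given below: TF-IDF vectors and LaBSE sentence embeddings are concatenated and passed to an SVM. The toy texts, labels and hyperparameters are placeholders, and the LaBSE checkpoint name assumes the sentence-transformers release of the model; the actual system's preprocessing and tuning are as described in the surrounding sections.

# Sketch: concatenate TF-IDF features with LaBSE embeddings, train an SVM.
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sentence_transformers import SentenceTransformer

train_texts = ["ur comment is rubbish", "nalla irukku, vaalthukkal"]   # toy examples
train_labels = ["Abusive", "None-of-the-above"]

tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_tfidf = tfidf.fit_transform(train_texts)

labse = SentenceTransformer("sentence-transformers/LaBSE")
X_labse = labse.encode(train_texts)                  # dense embeddings

X = hstack([X_tfidf, csr_matrix(X_labse)])           # combined feature matrix
clf = LinearSVC()
clf.fit(X, train_labels)

test_text = ["intha comment romba mosam"]
test_vec = hstack([tfidf.transform(test_text), csr_matrix(labse.encode(test_text))])
print(clf.predict(test_vec))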
0
The RTE challenges (Dagan et al., 2006) aim to automatically determine whether an entailment relation obtains between a naturally occurring text sentence (T) and a hypothesis sentence (H). The RTE corpus (Bar Haim et al., 2006; Giampiccolo et al., 2007 Giampiccolo et al., , 2008 Bentivogli et al., 2009) , which is currently the only available resource of textual entailments, marks entailment candidates as valid/invalid. 1 Example 1• T: The head of the Italian opposition, Romano Prodi, was the last president of the EC.• H: Romano Prodi is a former president of the EC. 2• Entailment: ValidThis categorization contains no indication of the linguistic processes that underlie entailment. In the lack of a gold standard of inferential phenomena, entailment systems can be compared based on their performance, but their inferential processes are not directly accessible for analysis. The goal of this work is to elucidate some central inferential processes underlying entailments in the RTE corpus. By doing that, we aim to advance the possibility of creating a benchmark for modeling entailment recognition. We presume that this goal is to be achieved incrementally by modeling increasingly complex semantic phenomena. To this end, we employ a standard model-theoretic approach to entailment in order to combine gold standard annotations with a computational framework. The model contains a formally defined interpreted lexicon, which specifies the inventory of symbols and semantic operators that are supported, and an informally defined annotation scheme that instructs annotators how to bind words and constructions from a given T-H pair to entries in the interpreted lexicon. Our choice to focus on the semantic phenomena of restrictive, intersective and appositive modification is driven by their predominance in the RTE datasets, the ability to annotate them with high consistency and the possibility to capture their various syntactic expressions by a limited set of concepts.However, currently we are only at the first stages of implementing the theoretical semantic model using an annotation platform combined with a theorem prover. In the course of the development of this model we adopted a narrower annotation scheme by which modification phenomena were annotated in all valid entailment pairs from RTE 1-4 without accounting for the way in which the annotated phenomena contribute to the inference being made. This work allowed us to perform data analysis and to further learn about the phenomena of interest as part of the development of the semantic model.The structure of this paper is as follows. Section 3 reviews some related methods used in Bos et al. (2004) and MacCartney and Manning (2007) . In Section 4 we introduce the formal semantic model on which we rely and use it for analyzing some illustrative textual entailments. Section 5 points out a challenge in applying this model to parts of the RTE data and describes our first-stage annotation scheme. We elaborate on the methods employed in applying this scheme to the datasets of RTE 1-4, and present some quantitative data on the targeted phenomena and inter-annotator agreement. Section 6 concludes.
0
Machine translation (MT) output errors are varied, and have been discussed and classified by many researchers. Flanagan (1994) classifies and ranks errors into three levels according to improvability and intelligibility. Elliot et al. (2004) identify fluency- and adequacy-related errors for automatic MT evaluation. Vilar et al. (2006) make a comprehensive classification of SMT output errors. Farrús et al. (2010) present a linguistically based evaluation of a variety of SMT output errors. Popović and Burchardt (2011) attempt to provide methods for automatic error analysis of MT output that overcome weaknesses of automatic evaluation metrics.

This paper introduces three particular types of errors made by the online English-to-Japanese translation service of Google Language Tools, a cutting-edge SMT system, and demonstrates that grammatical knowledge is required to prevent such errors.
0
Recently, spoken dialogue systems have become popular in various applications, such as speech assistants in smartphones and smart speakers, information guide systems in public places, and humanoid robots. There has been a variety of studies on developing spoken dialogue systems, and the systems are roughly grouped into two categories, task-oriented and non-task-oriented, depending on whether or not the dialogue has a goal. Although task-oriented dialogue systems (Zue et al., 2000; Kawanami et al., 2007) are important as practical applications, e.g., ticket vending and information guidance, the role of non-task-oriented systems is increasing for more advanced human-computer interaction (HCI), including voice chat.

There have been many studies related to non-task-oriented dialogue systems. Nakano et al. (2006) tried to incorporate both task-oriented and non-task-oriented dialogue functions into a humanoid robot using a multi-expert model. Dybala et al. (2010) proposed an evaluation method for subjective features of human-computer interaction using chatbots. Yu et al. (2016) proposed a set of conversational strategies to handle possible system breakdowns. Although these studies enhance the performance of dialogue systems, an important role is still missing from the viewpoint of system expressivity. Specifically, the systems cannot perceive or express para-linguistic information such as emotions, which is completely different from our daily communication.

Several studies have been presented in which emotions were taken into consideration in spoken dialogue systems. MMDAgent (Lee et al., 2013) is a well-known open-source dialogue system toolkit in which emotional speech synthesis based on hidden Markov models (HMMs) (Yoshimura et al., 1999) is incorporated, and style modeling and style interpolation techniques can be used to provide expressive speech (Nose and Kobayashi, 2011). Su et al. (2014) combined situation and emotion detection with a spoken dialogue system for health care to provide warmer feedback from the system. Kase et al. (2015) developed a scenario-based dialogue system in which emotion estimation and emotional speech synthesis were incorporated. However, the use of emotional speech synthesis has not been investigated in a non-task-oriented dialogue system, and the effect of emotions on the dialogue is still unclear.

In this study, we develop a simple Japanese non-task-oriented expressive dialogue system with text-based emotion detection and emotional speech synthesis. We then conduct a dialogue experiment in which participants chat with the system, evaluate the performance in terms of multiple subjective measures such as richness and pleasantness of the conversation, and analyze the results. We also examine the change in the pitch variation of the users during the dialogue to investigate the acoustic effect of the system's expressivity on the users' utterances.

Figure 1: Overview of our non-task-oriented dialogue system with system-driven/user-cooperative emotional speech synthesis. The system utterance or the user utterance is used for the emotion labeling in the case of (a) the system-driven or (b) the user-cooperative system, respectively.

Figure 1 shows the flow of the dialogue system constructed for the experiment in Section 5. The speech input is decoded into text using a speech recognizer, Julius (Lee and Kawahara, 2009).
In the dialogue management part, system responses are generated by combining example-based and typical rulebased (Weizenbaum, 1966) response generation methods. First, query matching for the example-based response generation is applied to the text using a dialogue example database that is constructed in advance. Specifically, the decoded text is converted to a vector using a bag of words, and cosine similarity is calculated between the text and the questions in the database. If the similarity score is larger than or equal to a predetermined threshold, the answer corresponding to the question having highest similarity is adopted as the system utterance. Otherwise, the system utterance is generated by applying the prepared rules to the decoded text, i.e., the user utterance. For the rule-based response generation, nouns (e.g., baseball, pasta) and subjective words (e.g., like, dislike, happy) are extracted from the user utterance and are used for the response generation based on the rules. Table 1 shows an example of the dialogue between a user and the system where the system responses are generated using both example-and rule-based methods.
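The example-based matching step described above can be sketched as follows: the user utterance is converted to a bag-of-words vector, compared with the questions in the example database by cosine similarity, and the paired answer is returned only if the best score reaches the threshold; otherwise a rule-based fallback is used. The database entries, threshold value and fallback rule here are made up for illustration and are not the system's actual resources.

# Example-based response selection with a rule-based fallback.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

database = [
    ("Do you like baseball?", "Yes, I often watch baseball games."),
    ("What did you eat today?", "I had pasta for lunch."),
]
questions = [q for q, _ in database]

vectorizer = CountVectorizer()
Q = vectorizer.fit_transform(questions)

def rule_based_response(user_utterance):
    # Placeholder for the rule-based generation that reacts to nouns and
    # subjective words extracted from the user utterance.
    return "Tell me more about that."

def respond(user_utterance, threshold=0.5):
    u = vectorizer.transform([user_utterance])
    sims = cosine_similarity(u, Q)[0]
    best = sims.argmax()
    if sims[best] >= threshold:
        return database[best][1]          # example-based answer
    return rule_based_response(user_utterance)

print(respond("Do you like baseball?"))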
0
As COVID-19-an infectious disease caused by a coronavirus-led the world to a pandemic, a large number of scientific articles appeared in journals and other venues. In a span of five months, PubMed alone indexed over 60,000 articles matching coronavirus related search terms such as SARS-CoV-2 or COVID-19. This volume of published material can be overwhelming. There is a need for effective search algorithms and question answering systems to find relevant information and answers. In response to this need, an international challenge-TREC COVID Search (Roberts et al., 2020; -was organised by several institutions, such as NIST and Allen Institute for AI, where research groups and tech companies developed systems that searched over scientific literature on coronavirus. Through an iterative setup organised in different rounds, participants are presented with several topics. The evaluations measure the effectiveness of these systems in finding the relevant articles containing answers to the questions in the topics.We propose a method that improves the systems developed for the TREC-COVID challenge by adopting a novel hybrid neural end-to-end approach for ranking of search results. Our method combines a traditional inverted index and word-matching retrieval with a neural indexing component based on BERT architecture (Devlin et al., 2019) . Our neural indexer leverages the Siamese network training framework (Reimers and Gurevych, 2019) finetuned on an auxiliary task (unrelated to literature retrieval) to produce universal sentence embeddings. This means that neural indexing can be performed offline for the entire document collection and does not need to be retrained on additional queries. This allows for incorporating the neural component for the entire retrieval process, contrasting with the typical multi-stage neural re-ranking approaches (Li et al., 2020; Zhang et al., 2020; Liu et al., 2017; Wang et al., 2011) .Our method is competitive with the top systems presented in TREC COVID 2 . It improves as corpus size increases despite not being trained on additional data which is a useful property in pandemic information retrieval.
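A hedged sketch of this kind of hybrid ranking is shown below: a BM25 word-matching score is linearly combined with a cosine similarity over precomputed sentence embeddings. The corpus, the query, the MiniLM encoder used as a stand-in for the Siamese-trained model, and the mixing weight alpha are all illustrative assumptions rather than the submitted system's settings.

# Hybrid ranking: normalized BM25 score mixed with dense cosine similarity.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = [
    "Remdesivir shows modest benefit in hospitalized COVID-19 patients.",
    "SARS-CoV-2 is transmitted primarily through respiratory droplets.",
]
query = "what drugs are effective against COVID-19"

bm25 = BM25Okapi([d.lower().split() for d in docs])
bm25_scores = np.array(bm25.get_scores(query.lower().split()))

encoder = SentenceTransformer("all-MiniLM-L6-v2")        # stand-in encoder
doc_emb = encoder.encode(docs, convert_to_tensor=True)   # can be built offline
query_emb = encoder.encode(query, convert_to_tensor=True)
dense_scores = util.cos_sim(query_emb, doc_emb)[0].cpu().numpy()

alpha = 0.5   # assumed mixing weight
hybrid = alpha * (bm25_scores / (bm25_scores.max() + 1e-9)) + (1 - alpha) * dense_scores
print(np.argsort(-hybrid))   # document ranking, best first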
0
Question answering is the task of providing natural language answers to natural language questions using an information retrieval engine. Due to the unrestricted nature of the problem, shallow and statistical methods are paramount.Spoken dialogue systems address the problem of accessing information from a structured database (such as time table information) or controlling appliances by voice. Due to the fact that the scope of the application defined by the back-end, the domain of the system is well-defined. Therefore, in the presence of vague, ill-defined or misrecognized input from the user, dialogue management, relying on the domain restrictions as given by the application, can interactively request more information from the user until the users' intent has been determined. In this paper, we are interested in generation of information seeking questions in interactive question-answering systems.We implemented a system that combines features of question answering systems with those of spoken dialogue systems. We integrated the following two features in an interactive restricted domain question answering system: (1) As in question answering systems, the system draws its knowledge from a database of unstructured text. (2) As in spoken dialogue systems, the system can interactively query for more information in the case of vague or ill-defined user queries.Restricted domain question answering systems can be deployed in interactive problem solving solutions, for example, software trouble shooting. In these scenarios, interactivity becomes a necessity. This is because it is highly unlikely that all facts relevant to retrieving the appropriate response are stated in the query. For example, in the software trouble shooting task described in [5] , a frequent system generated information seeking question is for the version of the software. Therefore, there is a need to inquire additional problem relevant information from the user, depending on the interaction history and the problem to be solved.In this paper, we specifically address the problem of how to generate information seeking questions in the case of ambiguous, vague or ill-defined user questions. We assume that the decision of whether an information seeking question is needed is made outside of the module described here. More formally, the problem we address can be described as follows: Given 1. A representation of the previous interaction history, consisting of user and system utterances, and retrieval results from the IR subsystem, 2. A decision for a information seeking question Produce An information seeking question.Problems of this kind have appeared traditionally in task oriented spoken dialogue systems, where missing information needs to be prompted. However, in the case of spoken dialogue systems, question generation is typically not a substantial problem: the fact that the back-end is well-structured allows for simple template-based generation in many cases. For example, missing values for database queries or remote method invocations can be queried that way. (But see also Oh and Rudnicky [7] or Walker et al [12] for more elaborated approaches to generation for spoken dialogue systems).In our case, however, a template-based approach is unrealistic. This is due to the unstructured back-end application. Unlike as spoken dialogue systems, we cannot make assumptions over what kind of questions to ask as this is determined by the result set of articles as returned by the information retrieval engine. 
Existing interactive question-answering systems (see Section 7.1 for a more detailed description) either use canned text on dialogue cards [5], break the dialogue representation down into frames and then apply techniques from spoken dialogue systems [8], or make simplifying assumptions to the extent that generation essentially becomes equivalent to template-based generation.

For the reasons discussed above, we propose an example-based approach to generation. More specifically, we use an existing dialogue corpus to retrieve appropriate questions and modify them to fit the situation at hand. We describe two algorithms for instance-based natural language question generation that first select appropriate candidates from the corpus, then modify the candidates to fit the situation at hand, and finally re-rank them. This is an example of a memory-based learning approach, which in turn is a kind of case-based reasoning. To the best of our knowledge, this is the first work addressing the problem of example-based generation of information seeking questions in the absence of a structured back-end application.
0
In the present age of digital revolution with proliferating numbers of internet-connected devices, we are facing an exponential rise in the volume of available information. Users are constantly facing the problem of deciding what to read and what to skip. Text summarization provides a practical solution to this problem, causing a resurgence in research in this field.Given a topic of interest, a standard search often yields a large number of documents. Many of them are not of the user's interest. Rather than going through the entire result-set, the reader may read a gist of a document, produced via summarization tools, and then decide whether to fully read the document or not, thus saving a substantial amount of time. According to Jones (2007) , a summary can be defined as "a reductive transformation of source text to summary text through content reduction by selection and/or generalization on what is important in the source". Summarization based on content reduction by selection is referred to as extraction (identifying and including the important sentences in the final summary), whereas a summary involving content reduction by generalization is called abstraction (reproducing the most informative content in a new way).The present paper focuses on extraction-based single-document summarization. We formulate the task as a graph-based optimization problem, where vertices represent the sentences and edges the connections between sentences. Textual entailment (Giampiccolo et al., 2007) is employed to estimate the degree of connectivity between sentences, and subsequently to assign a weight to each vertex of the graph. Then, the Weighted Minimum Vertex Cover, a classical graph algorithm, is used to find the minimal set of vertices (that is -sentences) that forms a cover. The idea is that such cover of well-connected vertices would correspond to a cover of the salient content of the document.The rest of the paper is organized as follows: In Section 2, we discuss related work and describe the WMVC algorithm. In Section 3, we propose a novel summarization method, and in Section 4, experiments and results are presented. Finally, in Section 5, we conclude and outline future research directions.
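To make the formulation concrete, the sketch below builds a sentence graph, connects sentences whose pairwise similarity exceeds a threshold, assigns each vertex a weight derived from its connectivity, and extracts an approximate weighted minimum vertex cover as the summary. Cosine similarity over TF-IDF vectors is used here as a rough stand-in for the textual entailment scores used in this paper, and the threshold and weighting scheme are assumptions for illustration.

# Extractive summary via an approximate weighted minimum vertex cover.
import networkx as nx
from networkx.algorithms.approximation import min_weighted_vertex_cover
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The company announced record profits this quarter.",
    "Profits reached a record high, the company said.",
    "The CEO will retire next year.",
    "A successor for the retiring CEO has not been named.",
]

X = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(X)

G = nx.Graph()
for i in range(len(sentences)):
    # Lower weight means a vertex is cheaper to include in the cover;
    # here well-connected sentences are favoured for the summary.
    G.add_node(i, weight=1.0 / (1.0 + sim[i].sum()))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        if sim[i, j] > 0.2:          # assumed connection threshold
            G.add_edge(i, j)

cover = min_weighted_vertex_cover(G, weight="weight")
print([sentences[i] for i in sorted(cover)])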
0
In the past few years, data-driven response generation (Vinyals and Le, 2015; Shang et al., 2015; Vougiouklis et al., 2016) has achieved impressive performance, drawing continuously increasing attention from academia and industry. Conventionally, with the guidance of maximum likelihood estimation (MLE), neural dialogue models are expected to maximize the probability of generating the corresponding reference given any query. Unfortunately, due to the many-to-one phenomenon (see Table 1), a characteristic of the dialogue task (Csáky et al., 2019), these models are prone to produce safe but generic responses (e.g., I don't know (Li et al., 2016)), which is an obstacle to deploying generative dialogue systems widely. The code and preprocessed data are available at https://github.com/Yiwei98/dialogue-negative-distillation.

Table 1: The many-to-one phenomenon in DailyDialog. All of the five queries have the same I don't know-like responses. The corresponding source entropy (Csáky et al., 2019) scores are much higher than the median score (0.92) of the whole training set. This phenomenon leads to the generic response problem.

Some researchers tried to redesign the training objective, replacing MLE with objectives that encourage diverse responses, such as MMI (Li et al., 2016), AdaLabel (Wang et al., 2021), and IAT. Besides, several studies (Kulikov et al., 2019; Holtzman et al., 2020) proposed more advanced decoding strategies to alleviate the problem of generic responses. Indeed, the above methods boost the diversity of responses by reminding the model what should be said.

However, inspired by negative training (Kim et al., 2019; Ma et al., 2021), we argue that it is also necessary to tell the dialogue model what not to say. To alleviate the problem of generic responses, He and Glass (2020) negatively update the parameters when identifying high-frequency responses. Li et al. (2020a) punish the behavior of generating repetitive or high-frequency tokens by using the unlikelihood objective (Welleck et al., 2020).

Although the negative-training based methods enhance the diversity of responses, there still exist two drawbacks. First, they regard high-frequency tokens or utterances as negative candidates. However, the high-frequency response problem is only a sub-problem of the generic response problem (He and Glass, 2020). This means that responses that are low-frequency but generic will escape punishment. Even worse, we have observed that some generic responses followed by a low-frequency but meaningless subsequence can avoid being identified as high-frequency, which inevitably sacrifices the fluency of responses (see Analysis). Second, these methods ignore the implicit negative knowledge in neural networks that characterizes negative candidates at multiple levels. We contend that it is more effective to conduct negative training with richer information (e.g., hierarchical representations).

To tackle the above problems and further improve the diversity of responses, we propose a novel negative training paradigm called Negative Distillation (ND). Knowledge distillation (KD) (Hinton et al., 2015; Jiao et al., 2020) takes the teacher as a positive role model and induces the student to imitate it.
Differing from that, we train the teacher as a negative role model and remind the student to get rid of those bad behaviors.Specifically, we first collect a negative training set by using a filtering method called Source Entropy (Csáky et al., 2019) . This filtering method can retrieve all many-to-one cases of the raw dataset. Note that the "one" is usually a generic response. Then, we train a dialogue model on the above subset as the negative teacher. Given queries, the negative teacher can provide a set of negative candidates (i.e., generic and dull responses) that the student is prone to generate, which avoids the first drawback mentioned before. Therefore, the student obtains query-wise bad behaviors for Negative Distillation. To conduct the negative update holistically, we design two negative objectives, including soft unlikelihood loss on the prediction layer and reverse square error on the intermediate layer. In this way, the negative distillation fully exploits multilevel negative knowledge to force the student to generate non-generic responses.Our contributions are summarized as follows:• We propose a novel and effective negative training paradigm called Negative Distillation. It constructs query-wise generic responses as the negative candidates.• We design two negative objectives to utilize multi-level information to further boost the performance of negative distillation.• We perform extensive experiments and detailed analysis to verify the effectiveness of the negative distillation framework and the superiority compared with previous negative training methods.
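The two negative objectives can be illustrated with the following PyTorch sketch: a soft unlikelihood penalty that discourages the student from placing probability where the negative teacher is confident, and a reversed mean-squared error that pushes the student's hidden states away from the teacher's. This is a simplified rendering of the idea rather than the exact loss functions of this work, and the tensors stand in for real model outputs.

# Illustrative negative-distillation objectives (simplified).
import torch
import torch.nn.functional as F

def soft_unlikelihood_loss(student_logits, teacher_logits):
    # student_logits, teacher_logits: (batch, seq_len, vocab)
    p_student = F.softmax(student_logits, dim=-1)
    p_teacher = F.softmax(teacher_logits, dim=-1).detach()
    # Penalize student probability mass in proportion to teacher confidence.
    return -(p_teacher * torch.log(1.0 - p_student + 1e-8)).sum(-1).mean()

def reverse_square_error(student_hidden, teacher_hidden):
    # The opposite of the usual distillation MSE: move AWAY from the
    # negative teacher's intermediate representations (weighted in practice).
    return -F.mse_loss(student_hidden, teacher_hidden.detach())

# Toy usage with random tensors standing in for model outputs.
s_logits = torch.randn(2, 5, 100, requires_grad=True)
t_logits = torch.randn(2, 5, 100)
s_hidden = torch.randn(2, 5, 64, requires_grad=True)
t_hidden = torch.randn(2, 5, 64)
loss = soft_unlikelihood_loss(s_logits, t_logits) + 0.1 * reverse_square_error(s_hidden, t_hidden)
loss.backward()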
0
Opinions are commonly expressed in many kinds of written and spoken text, such as blogs, reviews, news articles, and conversation. Recently, there has been a surge in opinion analysis (sentiment analysis) research (Liu, 2012; Pang and Lee, 2008). While most past research has mainly addressed explicit opinion expressions, there is some work on implicit opinions expressed via implicatures. Deng and Wiebe (2014) showed how sentiments toward one entity may be propagated to other entities via opinion implicature rules. Consider "The bill would curb skyrocketing health care costs." Note that curbing the costs is bad for the object costs, since the costs are reduced. We can reason that the writer is positive toward the event curb, since the event is bad for the object health care costs, toward which the writer expresses an explicit negative sentiment (skyrocketing). We can reason from there that the writer is positive toward the bill, since it is the agent of the positive event.

These implicature rules involve events that positively or negatively affect the object. Such events are called malefactive and benefactive, or, for ease of writing, goodFor (gf) and badFor (bf) (hereafter gfbf). A list of gfbf events and their polarities (gf or bf) is necessary to develop a fully automatic opinion inference system. On first thought, one might think that we only need lists of gfbf words. However, it turns out that gfbf terms may be ambiguous: a single word may have both gf and bf meanings.

Thus, in this work, we take a sense-level approach to acquiring gfbf lexicon knowledge, leading us to employ lexical resources with fine-grained sense rather than word representations. For that, we adopt an automatic bootstrapping method which disambiguates gfbf polarity at the sense level utilizing WordNet, a widely used lexical resource. Starting from a seed set manually generated from FrameNet, a rich lexicon in which words are organized by semantic frames, we explore how gfbf terms are organized in WordNet via semantic relations and expand the seed set based on those semantic relations.

The expanded lexicon is evaluated in two ways. First, the lexicon is evaluated against a corpus that has been annotated with gfbf information at the word level. Second, samples from the expanded lexicon are manually annotated at the sense level, which gives some idea of the prevalence of gfbf lexical ambiguity and provides a basis for sense-level evaluation. We also conduct an agreement study. The results show that the expanded lexicon covers more than half of the gfbf instances in the gfbf corpus, and that the system's accuracy, as measured against the sense-level gold standard, is substantially higher than the baseline. In addition, in the agreement study, the annotators achieve good agreement, providing evidence that the annotation task is feasible and that the concept of gfbf gives us a natural coarse-grained grouping of senses.
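One bootstrapping step over WordNet can be sketched as follows: starting from seed senses with known gf/bf polarity, the label is propagated along semantic relations. Which relations to follow, and whether polarity is preserved or flipped along each of them, are assumptions made for illustration, and the two seed senses are toy examples rather than the FrameNet-derived seed set used in this work.

# Propagate gf/bf labels along WordNet relations (requires nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

seeds = {
    wn.synset("help.v.01"): "gf",
    wn.synset("harm.v.01"): "bf",
}

def expand_once(labeled):
    new = dict(labeled)
    for synset, polarity in labeled.items():
        # Assume hyponyms and "also see" senses keep the polarity.
        for related in synset.hyponyms() + synset.also_sees():
            new.setdefault(related, polarity)
        # Assume antonymous senses flip it.
        for lemma in synset.lemmas():
            for ant in lemma.antonyms():
                new.setdefault(ant.synset(), "bf" if polarity == "gf" else "gf")
    return new

expanded = expand_once(seeds)
for synset, polarity in expanded.items():
    print(polarity, synset.name())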
0
Temporal entity extraction and normalization is an important aspect of Natural Language Processing (Alonso et al., 2011; Campos et al., 2014). There has been a substantial body of work on the task, and there exist numerous well-performing, publicly available models for identifying and normalizing temporal entities (Strötgen and Gertz, 2010; Chang and Manning, 2012; Zhong and Cambria, 2018).

However, there is a growing number of NLP applications which require extraction of only the relevant subset of time entities that are useful for solving specific problems within a larger body of text. Examples of such tasks include understanding search queries ("Find me all emails sent by April between May 11th and May 21st") and goal-oriented dialogue systems ("Deliver George Orwell's 1984 by next week.", "Send the "FY 2020 Budget" to Watson Monday morning."). Using general temporal entity extraction models for these tasks is insufficient, since they fail to disambiguate between general date-time entities and the entities necessary to solve the task.

In this paper, we address the task of recognizing the date-time entities required by an AI scheduling assistant for correctly scheduling meetings. Cortana from Microsoft Scheduler, Clara from Clara Labs and Amy from X.ai are examples of such email-based digital assistants for scheduling meetings. For such systems, a user organizing the meeting adds the digital assistant as a recipient in an email with other attendees and delegates the task of scheduling to the digital assistant in natural language. For the assistant to correctly schedule the meeting, it must correctly extract the date-time entities expressed by the user in the email to indicate the times they want the meeting scheduled, as well as the times that do not work for them. The verbose nature of emails often exacerbates the difficulty of identifying relevant date-time entities, since the number of distractors (i.e., valid date-time entities not pertinent to the task) tends to increase (e.g., in Fig. 1, "today" serves as a distractor entity).

To this end, we present SHERLOCK: ScHeduling Entity Recovery by LOoking at Contextual Knowledge, a novel model for detecting relevant date-time entities in the context of scheduling, as well as identifying the entities associated with a negation constraint. SHERLOCK comprises 3 modules for identifying the relevant entities as well as the negation constraints associated with them:

Date-Time Extractor: a high-recall date-time entity extractor to identify all date-time entities in an email.

Entity Relevance Scorer: a neural model to classify each of the extracted entities as being relevant to scheduling or not by considering the context presented in the email.

Negation Detector: a negation module to identify whether there exists a negation constraint associated with each of the extracted relevant entities.

Figure 1: The 3 modules of SHERLOCK. First, a high-recall rule-based extractor generates the potential entities. The neural module then takes the email and the entities and generates scores for each entity. Only the relevant entities are passed to the final negation module to detect times to schedule and times to avoid.

Fig. 1 illustrates each module: the entity extractor extracts "today", "next week", "Wednesday" and "May". Each of these entities is scored by the neural module, and only "next week" and "Wednesday" are identified as being relevant to scheduling. Finally, the negation module identifies that "Wednesday" has a negation constraint.
While SHERLOCK focuses on the task of scheduling, we believe that a similar approach can be used to tackle the problem of extracting relevant datetime entities from documents for other tasks.The contributions of this paper are as follows:Task specific date-time extractor: A novel method for combining conventional high recall rule-based model with a novel neural model for incorporating contextual information to identify relevant date-time entities for the task at hand.Identifying negation constraints for temporal entities: A heuristic negation module that helps identify negation constraints associated with time entities in the context of scheduling meetings. To the best of our knowledge, prior to this work, negation constraints associated with time-entity extraction have not been studied before. We first present our proposed method for extracting time entities relevant to the task of scheduling a meeting in ( §2). Next, we describe our approach for identifying negation constraints associated with extracted entities in ( §3). In ( §4), we describe our experimental setup and baselines. We discuss the results in ( §5) and show that SHERLOCK helps improve performance both on the task of identifying relevant entities as well as identifying negation constraints. We then present the related work in ( §6), and finally conclude in ( §7).
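A rough sketch of the three-stage pipeline is given below, with deliberately simple stand-ins for each module: a small regular expression plays the high-recall rule-based extractor, a keyword heuristic plays the neural relevance scorer, and a cue-word window check plays the negation detector. The cue lists, window size and example email are assumptions for illustration only; SHERLOCK's actual extractor, neural scorer and negation module are described in the following sections.

# Toy three-stage pipeline: extract candidates, keep relevant ones, flag negations.
import re

DATE_PATTERN = re.compile(
    r"\b(today|tomorrow|next week|monday|tuesday|wednesday|thursday|friday)\b",
    re.IGNORECASE)
NEGATION_CUES = {"not", "can't", "cannot", "won't", "avoid", "except"}
SCHEDULING_CUES = {"meet", "meeting", "schedule", "call", "sync"}

def extract_candidates(email):
    # Stand-in for the high-recall rule-based date-time extractor.
    return [m.group(0) for m in DATE_PATTERN.finditer(email)]

def is_relevant(entity, email):
    # Stand-in for the neural relevance scorer: keep the entity if its
    # sentence also mentions scheduling or a negated constraint.
    for sentence in email.lower().split("."):
        if entity.lower() in sentence:
            return bool((SCHEDULING_CUES | NEGATION_CUES) & set(sentence.split()))
    return False

def has_negation(entity, email, window=3):
    # Flag the entity if a negation cue appears within a small token window.
    tokens = email.lower().split()
    first = entity.lower().split()[0]
    for i, tok in enumerate(tokens):
        if first in tok and NEGATION_CUES & set(tokens[max(0, i - window): i + window + 1]):
            return True
    return False

email = "I sent the report today. Can we schedule a call next week? Wednesday does not work for me."
for entity in extract_candidates(email):
    if is_relevant(entity, email):
        print(entity, "avoid" if has_negation(entity, email) else "schedule")
# -> next week schedule / Wednesday avoid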
0
The problems raised when translating into richer morphology languages are well known and are being continuously studied (Popovic and Ney, 2004; Koehn and Hoang, 2007; de Gispert and Mariño, 2008; Toutanova et al., 2008; Clifton and Sarkar, 2011; Bojar and Tamchyna, 2011) .When translating from English into Spanish, inflected words make the lexicon to be very large causing a significant data sparsity problem. In addition, system output is limited only to inflected phrases available in the parallel training corpus (Bojar and Tamchyna, 2011) . Hence, phrase-based SMT systems cannot generate proper inflections unless they have learned them from the appropriate phrases.That would require to have a parallel corpus containing all possible word inflections for all phrases available, which it is an unfeasible task.Different approaches to address the morphology into SMT may be summarized in four, not mutually exclusive, categories: i) factored models (Koehn and Hoang, 2007) , enriched input models (Avramidis and Koehn, 2008; Ueffing and Ney, 2003) , segmented translation (Virpioja et al., 2007; de Gispert et al., 2009; Green and DeNero, 2012) and morphology generation (Toutanova et al., 2008; de Gispert and Mariño, 2008; Bojar and Tamchyna, 2011) .Whereas segmented translation is intended for agglutinative languages, translation into Spanish has been classically addressed either by factored models (Koehn and Hoang, 2007) , enriched input scheme (Ueffing and Ney, 2003) or target language simplification plus a morphology generation as an independent step (de Gispert and Mariño, 2008 ). This latter approach has also been used to translate to other rich morphology languages such as Czech (Bojar and Tamchyna, 2011) .The problem of morphology sparsity becomes crucial when addressing translations out-of-domain. Under that scenario, there is a high presence of previously unseen inflected forms even though their lemma could have been learned with the training material. A typical scenario out-of-domain is based on weblog translations, which contain material based on chat, SMS or social networks text, where it is frequent the use of second person of the verbs. However, second person verb forms are scarcely populated within the typical training material (e.g. Europarl, News and United Nations). That is due to the following reasons: i) text from formal acts converts the second person (tú) subject into usted formal form, which uses third person inflections and ii) text from news is mainly depicted in a descriptive language relegating second person to textual citations of dialogs that are a minority over all the text.Some recent domain-adaptation work (Haddow and Koehn, 2012) has dealt implicitly with this problem using the OpenSubtitles 1 bilingual corpus that contains plenty of dialogs and therefore second person inflected Spanish forms. However, their study found drawbacks in the use of an additional corpus as training material: the improvement of the quality of the out-of-domain translations worsened the quality of in-domain translations. On the other hand, the use of an additional corpus to train specific inflected-forms language generator has not yet been addressed.This paper presents our findings on tackling the problem to inflect out-of-domain verbs. We built a SMT system from English into simplified morphology Spanish in order to inflect the verbs as an independent postprocessing step. 
This strategy has previously been applied to translate from English into Spanish with an N-gram based decoder (de Gispert and Mariño, 2008), but without dealing with out-of-domain data and without using a factored system (Koehn and Hoang, 2007). We analyze the most convenient features (deep vs. shallow) to perform this task, as well as the impact of the aforementioned strategy when using different training material and different test sets. The main reason to focus the study only on verbs is their strong impact on translation quality (Ueffing and Ney, 2003; de Gispert and Mariño, 2008).

In Section 2 we describe the architecture of the simplification plus generation strategy. In Section 3 we detail the design of the generation system. In Section 4 we detail the experiments performed, and we discuss them in Section 5. Finally, in Section 6 we present the main conclusions and lines of future work.

1 www.opensubtitles.org
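As a rough illustration of morphology generation as a separate classification step, the sketch below predicts the person/number inflection class of a Spanish verb from simple source-side features and then looks the surface form up in a toy paradigm table. The feature set, label inventory and paradigm are invented for illustration and do not correspond to the classifier design used in this work.

# Morphology generation as classification: predict an inflection class,
# then map (lemma, class) to a surface form via a paradigm lookup.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

train = [
    ({"subj": "you", "tense": "present"}, "2s"),
    ({"subj": "you", "tense": "past"},    "2s"),
    ({"subj": "he",  "tense": "present"}, "3s"),
    ({"subj": "we",  "tense": "present"}, "1p"),
]
feats, labels = zip(*train)

vec = DictVectorizer()
X = vec.fit_transform(feats)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Toy present-tense paradigm for the lemma "comer".
paradigm = {"1s": "como", "2s": "comes", "3s": "come", "1p": "comemos"}

test = {"subj": "you", "tense": "present"}
label = clf.predict(vec.transform([test]))[0]
print(paradigm[label])   # likely "comes" on this toy data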
0
Users prefer incremental dialogue systems to their non-incremental counterparts (Aist et al., 2007) . For a syntactic parser to contribute to an incremental dialogue system or any other incremental NLP application, it also needs to work incrementally. However, parsers usually operate on whole sentences only and few parsers exist that are capable of incremental parsing or are even optimized for it. This paper focuses on using a parser as part of an incremental pipeline that requires timely response to natural language input. In such a scenario, delay imposed by a parser's lookahead is more severe than delay caused by parsing speed since the parsing speed is capped by the user's input speed. Depending on the input method, the maximum typing speed varies between 0.75 seconds per word (qwerty keyboard) and 6 seconds per word (mobile 12-key multi-tap) (Arif and Stuerzlinger, 2009) and is usually lower if the sentence has to be phrased while typing.In such a scenario, the objective of the parser is to yield high quality results and produce them as soon as they are needed by a subsequent component. It is rarely known beforehand when the next word will be available for processing. Therefore, in an incremental pipeline a) computation should continue until the next word occurs if this might contribute to a better result, and b) a new word should be included immediately to avoid delays. A parser which works pull-based, i.e. processes one prefix until it is deemed finished and then pulls the next word, is insufficient under conditions, since it would require to determine the time used for processing before the processing can even start. Either the estimated processing time will be too short, violating a), or it will be too long, violating b). In contrast, push-based architectures can meet both requirements since the processing of the prefix can be interrupted when new input is available. Beuck et al. (2011) showed that Weighted Constraint Dependency Grammar-based parsers are capable of producing high-quality incremental results but neglected the processing time needed for each increment. In this paper, we will use jwcdg 1 , a reimplementation of the WCDG parser written in Java. jwcdg uses a transformation-based algorithm. It comes equipped with a strong anytime capability, causing the quality of the results to depend on the processing time jwcdg is allowed to consume. We will show that jwcdg can produce high quality results even if only granted fairly low amounts of processing time.Dependency parsing assigns every word to another word or NIL as its regent and the resulting edges are annotated with a label. If dependency analyses are used to describe the syntactic structure of sentence prefixes, different amounts of prediction can be provided. The interesting cases are those where either the regent or a dependent is not yet part of the sentence prefix.If the regent of a word w is not yet available, the parser can make a prediction about where and how w should be attached. One possibility is to simply state that the regent of w lies somewhere in the future without giving any additional information. This can be modelled by attaching w to a generic nonspec node (Daum, 2004) . Beuck et al. (2011) call this minimal prediction.However, it is usually possible to predict more: The existence of upcoming words can be anticipated and w can then be attached to one of these words. 
Of course, most of the time it will not be possible to predict exact words, but abstract pseudo-words that stand for a certain type, such as nouns or verbs, can be used instead. Beuck et al. (2011) call these pseudo-words virtual nodes and the approach of using virtual nodes structural prediction (because the virtual nodes accommodate crucial aspects of the upcoming structure of the sentence). A virtual node can be included in an analysis to represent words that are expected to appear in later increments. As an example, the prefix "Peter drives a red" can be analyzed using structural prediction by attaching "Peter" as the subject (Subj) of "drives", and "a" as the determiner (Det) and "red" as an adjective (Adj) of a virtual noun, which is itself attached as the object (Obja) of "drives". Minimal prediction leads to disconnected analyses, while structural prediction allows for connected analyses which resemble the structure of whole sentences. In this case, the analysis includes the information that the regent of "red" is the object of "drives", which is missing in the analysis using minimal prediction.

The key difference between non-incremental and incremental parsing is the uncertainty about the continuation of the sentence. If a prediction about upcoming structure is being made, there is no guarantee that this prediction will be accurate. Using a beam of possible prefixes is one way of dealing with this uncertainty. Alternatively, a parser can restrict itself to analyses that will remain valid however the sentence continues. This allows for monotonic extensions even without a beam, since the analysis of a prefix will not be incompatible with the continuation of the sentence. This approach is used by MaltParser (Nivre et al., 2007).

A transformation-based parser, finally, can also deal with non-monotonic extensions. In contrast to beam search, only a single analysis is generated for each prefix, and there is no guarantee that the analysis of a prefix p_n is a monotonic extension of that of p_(n-1). Because the analysis of p_(n-1) is only used to initialize the transformation process, the search space is not restricted by the results that were obtained for former prefixes, although they still guide the analysis.
0
This paper describes work that aims to improve upon previous approaches to identifying relationships between named objects in text (e.g., people, organisations, locations). Figure 1 contains several example sentences from the ACE 2005 corpus that contain relations and Figure 2 summarises the relations occurring in these sentences. So, for example, sentence 1 contains an employment relation between Lebron James and Nike, sentence 2 contains a sports-affiliation relation between Stig Toefting and Bolton and sentence 4 contains a business relation between Martha Stewart (she) and the board of directors (of Martha Stewart Living Omnimedia).Possible applications include identifying companies taking part in mergers/acquisitions from business newswire, which could be inserted into a corporate intelligence database. In the biomedical domain, we may want to identify relationships between genes and proteins from biomedical publications, e.g. Hirschman et al. (2004) , to help scientists keep up-to-date on the literature. Or, we may want to identify disease and treatment relations in publications and textbooks, which can be used to help formalise medical knowledge and assist general practitioners in diagnosis, treatment and prognosis (Rosario and Hearst, 2004) . Another application scenario involves building networks of relationships from text collections that indicate the important entities in a domain and can be used to visualise interactions. The networks could provide an alternative to searching when interacting with a document collection. This could prove beneficial, for example, in investigative journalism. It might also be used for social science research using techniques from social network analysis (Marsden and Lin, 1982) . In previ-ous work, relations have been used for automatic text summarisation as a conceptual representation of sentence content in a sentence extraction framework (Filatova and Hatzivassiloglou, 2004) .In the next section, we motivate and introduce the relation discovery task, which addresses some of the shortcomings of conventional approaches to relation extraction (i.e. supervised learning or rule engineering) by applying minimally supervised methods. 1 A critical part of the relation discovery task is grouping entity pairs by their relation type. This is a clustering task and requires a robust conceptual representation of relation semantics and a measure of similarity between relations. In previous work (Hasegawa et al., 2004; Chen et al., 2005) , the conceptual representation has been limited to term-by-document (TxD) models of relation semantics. The current work introduces a term co-occurrence (TxT) representation for the relation discovery task and shows that it performs significantly better than the TxD representation. We also explore dimensionality reduction techniques, which show a further improvement.Section 3 presents a parameterisation of similarity models for relation discovery. For the purposes of the current work, this consists of the semantic representation for terms (i.e. how a term's context is modelled), dimensionality reduction technique (e.g. singular value decomposition, latent Dirichlet allocation), and the measure used to compute similarity.We also build on the evaluation paradigm for relation discovery with a detailed, controlled experimental setup. 
Section 4 describes the experiment design, which compares the various system configurations across six different subsets of the relation extraction data from the automatic content extraction (ACE) evaluation. Finally, Section 5 presents results and statistical analysis.
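A minimal sketch of the kind of pipeline described here: represent each relation instance by summing term co-occurrence (TxT) vectors estimated from a corpus, reduce dimensionality, and compare instances with cosine similarity before clustering. The toy corpus and contexts below are placeholders, not ACE data, and the exact representation details differ from the paper's configuration.

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus used to estimate sentence-level term co-occurrence (TxT) vectors.
corpus = [
    "nike signed an endorsement deal with the player",
    "the player agreed an endorsement contract",
    "the striker transferred to the club",
    "the midfielder joined the club squad",
]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

# C[i, j] = number of sentences in which term i co-occurs with term j.
C = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    ws = s.split()
    for a in ws:
        for b in ws:
            if a != b:
                C[idx[a], idx[b]] += 1

# A relation instance is represented by summing the TxT vectors of the terms
# in its intervening context (placeholder contexts below).
contexts = ["signed endorsement deal", "agreed endorsement contract",
            "transferred to club", "joined club squad"]
R = np.array([sum(C[idx[w]] for w in ctx.split()) for ctx in contexts])

R_red = TruncatedSVD(n_components=2).fit_transform(R)    # dimensionality reduction
print(np.round(cosine_similarity(R_red), 2))              # similarity for clustering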
0
This paper presents the PROMT systems submitted for the Shared Translation Task of WMT16. We participated in seven language pairs with three different types of systems: English-Russian, Russian-English, English-German (Rule-based systems); Finnish-English, Turkish-English (Statistical systems); English-Spanish, English-Portuguese (Hybrid systems). The paper is organized as follows. In Section 1, we briefly outline the three types of our systems and their features. In Section 2, we describe the experimental setups and the training data and present the results. Finally, Section 3 concludes the paper.
0
In the larger context of the TALK project 1 we are developing a multimodal dialogue system for a Music Player application for in-car and in-home use, which should support natural, flexible interaction and collaborative behavior. The system functionalities include playback control, manipulation of playlists, and searching a large MP3 database. We believe that in order to achieve this goal, the system needs to provide advanced adaptive multimodal output.We are conducting Wizard-of-Oz experiments [Bernsen et al., 1998 ] in order to guide the development of our system. On the one hand, the experiments should give us data on how the potential users interact with such an application. But we also need data on the multimodal interaction strategies that the system should employ to achieve the desired naturalness, flexibility and collaboration. We therefore need a setup where the wizard has freedom of choice w.r.t. their response and its realization through single or multiple modalities. This makes it different from previous multimodal experiments, e.g., in the SmartKom project [Türk, 2001] , where the wizard(s) followed a strict script. But what we need is also different in several aspects from taking recordings of straight human-human interactions: the wizard does not hear the user's input directly, but only gets a transcription, parts of which are sometimes randomly deleted (in order to approximate imperfect speech recognition); the user does not hear the wizard's spoken output directly either, as the latter is transcribed and re-synthesized (to produce system-like sounding output). The interactions should thus more realistically approximate an interaction with a system, and thereby contain similar phenomena (cf. [Duran et al., 2001] ).The wizard should be able to present different screen outputs in different context, depending on the search results and other aspects. However, the wizard cannot design screens on the fly, because that would take too long. Therefore, we developed a setup which includes modules that support the wizard by providing automatically calculated screen output options the wizard can select from if s/he want to present some screen output.Outline In this paper we describe our experiment setup and the first experiences with it. In Section 2 we overview the research goals that our setup was designed to address. The actual setup is presented in detail in Section 3. In Section 4 we describe the collected data, and we summarize the lessons we learnt on the basis of interviewing the experiment participants. We briefly discuss possible improvements of the setup and our future plans with the data in Section 5.
0
Mismatched conditions -differences in channel noise between training audio and testing audio -are problematic for computer speech recognition systems. Signal enhancement, mismatch-resistant acoustic features, and architectural compensation within the recognizer are common solutions (Gong, 1995) . The human auditory system implements all three of these solutions by 1.) enhancing the speech signal via the filtering of the head, outer ear, and basilar membrane, 2.) extracting prominent, noise-resistant information from the speech signal, and 3.) implementing dereverberation and noise reduction mechanisms within the cellular architecture of the brain.Commercial speech recognizers must inevitably deal with mismatched conditions. Such mismatches may include additive channel noise or loss of frequency information. Both of these events occur in the telephone channel. Telephone-band speech recognition (8 KHz) is a difficult task (Bourlard, 1996; Karray and Martin, 2003) . Both Gaussian systems (Chigier, 1991; Moreno and Stern, 1994) and non-Gaussian systems (Hasegawa-Johnson et al., 2004) trained on telephone-band speech are not as accurate as systems trained on wide band speech (16 KHz) (Halberstadt and Glass, 1998) . This may indicate that a speech recognition system should compensate for channel anomalies before the decoding phase.The distinctive features [silence, continuant, sonorant, syllabic, and consonantal] are binary valued phonetic descriptors (Stevens, 1999) . For example, a sound can either be produced in the nucleus of a syllable ([+syllabic]) or not ([−syllabic] ). The vowel /ae/ is a [+syllabic] sound and the consonant /d/ is a [−syllabic] sound. A transition between the two sounds, as in the word "add," is a [+−syllabic] landmark. Detection and recognition of acoustic-phonetic landmarks as a front-end to an HMM-based speech recognition system improves both phone and word recognition accuracy on telephone-band speech (Borys and Hasegawa-Johnson, 2005; Borys, 2008) . Landmark-based systems generalize accurately to noisy and mismatched conditions (Kirchhoff, 1999; Juneja and Espy-Wilson, 2004) .Models of the auditory periphery have been used for denoising/enhancing speech (Hunt and Lefebvre, 1989; Virag, 1999) , speech recognition in clean (Cohen, 1989; Hunt and Lefebvre, 1989; Ghitza, 1994; Ying et al., 2012) and noisy conditions (Kim et al., 1999) , and emotion recognition (Ying et al., 2012) . When applied as a front-end, models of the auditory periphery improve speech recognition accuracy (Cohen, 1989; Hunt and Lefebvre, 1989; Ghitza, 1994; Virag, 1999) , however, such systems fail to achieve human performance. Current auditory models primarily mimic the cochlea and auditory nerve, both ignoring the effects of head-related filtering and failing to account for neural processing in the brainstem. Neurologists have proposed that the processing in auditory brainstem nuclei, such as the cochlear nucleus and lateral lemniscus, may improve the robustness of human speech recognition to changes in environment (Ehret and Romand, 1997; Winer and Schreiner, 2005; Schnupp et al., 2011) .Both landmark detection and auditory modeling improve recognition accuracy when used as front-ends for speech recognition systems operating in mismatched conditions. This paper proposes an approach that unifies the two methods.
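A toy illustration of how a distinctive-feature landmark such as [+−syllabic] can be located at a transition between phones. The feature table is hand-specified for the example only and is not the detector used in the cited landmark-based systems.

# Mark syllabic landmarks at transitions between phones.
SYLLABIC = {"ae": True, "d": False}   # +/- syllabic, illustrative values only

def syllabic_landmarks(phones):
    """Return indices where the syllabic feature flips, e.g. a [+-syllabic]
    landmark between a vowel and a following consonant."""
    landmarks = []
    for i in range(1, len(phones)):
        prev, cur = SYLLABIC[phones[i - 1]], SYLLABIC[phones[i]]
        if prev != cur:
            kind = "+-syllabic" if prev else "-+syllabic"
            landmarks.append((i, kind))
    return landmarks

# The word "add" (/ae d/) contains one [+-syllabic] landmark.
print(syllabic_landmarks(["ae", "d"]))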
0
Aspect-Based Sentiment Analysis (ABSA) (Pontiki et al., 2014) has attracted much attention from researchers in recent years. In ABSA, aspect (or opinion target) extraction and opinion term extraction are two fundamental tasks. The aspect is the word or phrase in a review referring to the object towards which users show attitudes, while opinion terms are the words or phrases expressing users' attitudes (Wu et al., 2020). For example, in the sentence "The dim sum is delicious.", the phrase "dim sum" is an aspect and the word "delicious" is an opinion term. See the upper part of Table 1 for more examples. (Table 1. Review: "Soooo great! The food is delicious and inexpensive, and the environment is in a nice. The only problem is that the soup and dessert are ordinary." Aspect-opinion pairs: food: [delicious, inexpensive] (one-to-many); environment: [nice] (one-to-one); soup, dessert: [ordinary] (many-to-one). The upper part is a restaurant review and the lower part shows the corresponding aspect-opinion pairs; extracted aspects and opinion terms are marked in red and blue, respectively.) Plenty of work based on neural networks has been done on both aspect and opinion term extraction (Liu et al., 2015; Poria et al., 2016; Xu et al., 2018); moreover, some studies combine these two tasks into a multi-task structure to extract aspects and opinion terms simultaneously (Wang et al., 2016, 2017; Li and Lam, 2017; Dai and Song, 2019). However, one critical deficiency in the research mentioned above is that it ignores the relation between aspects and opinion terms, which led to the introduction of Target-oriented Opinion Words (or Terms) Extraction (TOWE) (Fan et al., 2019) for extracting the corresponding opinion terms of a given opinion target. Subsequently, Aspect-Opinion Pair Extraction (AOPE) (Chen et al., 2020) and Pair-wise Aspect and Opinion Terms Extraction (PAOTE) have emerged, which both aim at extracting aspects and opinion terms in pairs. AOPE and PAOTE are exactly the same task, only named differently. In the following, we use AOPE to denote this task for simplicity. It can be considered that AOPE contains aspect and opinion term extraction and TOWE. Since aspect extraction has been fully studied and satisfactory results have been obtained, TOWE, which aims at mining the relation between aspects and opinion terms, is the key to the AOPE task. As shown in the lower part of Table 1, the relational structure of the aspect-opinion pairs within a sentence can be complicated, including one-to-one, one-to-many, and many-to-one. The challenge of TOWE is to learn accurate representations of the given opinion target, and only a few works focus on this task. For instance, Fan et al. (2019) propose an Inward-Outward LSTM to pass target information to the left context and the right context of the target respectively, and then they combine the left, right, and global context to encode the sentence. Recently, SDRN (Chen et al., 2020) and SpanMlt both adopt a pre-trained language model to learn contextual representations for AOPE. In SDRN, a double-channel recurrent network and a synchronization unit are applied to extract aspects, opinion terms and their relevancy. In SpanMlt, the terms are extracted under annotated span boundaries with contextual representations, and then the relations between every two span combinations are identified.
However, apart from hyper-parameters in the pre-trained language model, these two methods introduce many other hyper-parameters (e.g., the hidden size, thresholds and recurrent steps in SDRN, and the span length, top k spans and the balanced factor of different tasks in SpanMlt). Some of these hyper-parameters have a significant impact on the model performance. Motivated by the previous work and to address the challenges mentioned above, we propose a Target-Specified sequence labeling method based on Multi-head Self-Attention (Vaswani et al., 2017) (TSMSA). The sentence is first processed in the format "[SEP] Aspect [SEP]" (e.g., "The [SEP] food [SEP] is delicious."), which is inspired by Soares et al. (2019), who utilized a special symbol "[SEP]" to label all entities and output their corresponding representations. Then we develop a sequence labeling model based on multi-head self-attention to identify the corresponding opinion terms. By using the special symbol and the self-attention mechanism, TSMSA is capable of capturing the information of the specific aspect. To improve the performance of our model, we apply pre-trained language models like BERT (Devlin et al., 2019), which contain a multi-head self-attention module, as the encoder. As a case study, we integrate aspect and opinion term extraction, and TOWE, into a multi-task architecture named MT-TSMSA to validate the effectiveness of our method on the AOPE task. In addition, apart from hyper-parameters in the pre-trained language model, we only need to adjust the balanced factor of different tasks in MT-TSMSA. In summary, our main contributions are as follows: • We propose a target-specified sequence labeling method with a multi-head self-attention mechanism to perform TOWE, which generates target-specific context representations for different targets in the same review with the special symbol and multi-head self-attention. Pre-trained language models can be conveniently applied to improve the performance. • For our TSMSA and MT-TSMSA, only a small number of hyper-parameters need to be adjusted when using pre-trained language models. Compared to the existing models for TOWE and AOPE, we alleviate the trade-off issue between a model's complexity and performance. Extensive experiments validate that our TSMSA can achieve the best performance on TOWE, and MT-TSMSA performs quite competitively on AOPE. The rest of this paper is organized as follows. Section 2 introduces the existing studies on TOWE and AOPE, respectively. Section 3 details the proposed TSMSA and MT-TSMSA. Section 4 presents our experimental results and discussions. Finally, we draw conclusions in Section 5.
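A minimal sketch of the two ingredients described above: the "[SEP] Aspect [SEP]" input formatting, and a sequence labeler built around multi-head self-attention. The tagger below is a generic PyTorch module with illustrative sizes, not the released TSMSA model; in practice a pre-trained encoder such as BERT would replace the toy embedding layer.

import torch
import torch.nn as nn

def mark_aspect(tokens, start, end):
    """Insert [SEP] around the aspect span tokens[start:end]."""
    return tokens[:start] + ["[SEP]"] + tokens[start:end] + ["[SEP]"] + tokens[end:]

print(mark_aspect(["The", "food", "is", "delicious", "."], 1, 2))
# -> ['The', '[SEP]', 'food', '[SEP]', 'is', 'delicious', '.']

class TargetSpecifiedTagger(nn.Module):
    def __init__(self, vocab_size, dim=64, heads=4, num_tags=3):  # B/I/O tags
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, num_tags)

    def forward(self, token_ids):                 # (batch, seq)
        x = self.embed(token_ids)                 # (batch, seq, dim)
        h, _ = self.attn(x, x, x)                 # self-attention over the sequence
        return self.out(h)                        # per-token tag scores

model = TargetSpecifiedTagger(vocab_size=100)
scores = model(torch.randint(0, 100, (2, 7)))
print(scores.shape)                               # torch.Size([2, 7, 3])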
0
Corpora annotated with part-of-speech tags and syntactic structure are crucial for the development and evaluation of automatic tools for syntactic analysis, as well as for empirical research in syntax. For Swedish, annotated corpora have been available for quite a number of years. The venerable MAMBA treebank (Teleman, 1974) was created in the 1970s. It has formed the basis for a number of Swedish constituency and dependency treebanks such as Talbanken05 (Nivre et al., 2006), the more recent Swedish Treebank, and the Swedish part of the multilingual Universal Dependency Treebank (de Marneffe et al., 2014). The Stockholm-Umeå Corpus (SUC) (Ejerhed et al., 1992), with manually checked part-of-speech tags and base forms for roughly a million tokens, has been a de facto standard for Swedish part-of-speech tagging. The Swedish Treebank uses the SUC part-of-speech tags together with the automatically converted syntactic structures from MAMBA (Nivre et al., 2008). In our project Koala, we develop new annotation tools to be used for the multi-billion token corpora of Korp, the corpus query infrastructure at Språkbanken. Part of our effort lies in the evaluation of these annotation tools. For a number of reasons, the corpora mentioned above and their annotation schemata are not suitable as our gold standard. First, the texts in the corpora are quite dated, and do not reflect the text types available in Korp. Secondly, the MAMBA annotation would require several complex conversion heuristics to be used as a conventional constituency or dependency treebank. Due to technical limitations in the 1970s, attachment in MAMBA is underspecified in some cases, most notably in clause coordination, and its annotation does not have explicit phrase categories. On the other hand, its set of grammatical function categories is very fine-grained, and we consider some of its more semantic/pragmatic distinctions hard to apply. For the Swedish Treebank we further note that the part-of-speech tags and the syntactic categories were designed in separate projects, and there are several cases of redundancy, where grammatical function distinctions are also reflected in the set of part-of-speech tags. In this paper, we describe the design of the syntactic layer, and to some extent the part-of-speech layer, of the new Koala multi-genre annotated Swedish corpus. In designing the annotation guidelines, we have aimed to address the above-mentioned shortcomings: First, the part-of-speech, phrase, and function categories have received clearly separated roles. Secondly, we use a syntactic annotation format that is less restrictive than MAMBA's. Thirdly, the annotation model has been designed with deterministic conversion into other formalisms in mind. Finally, the corpus consists of material from several genres. The texts have been collected from public-domain sources, so that the corpus can be made freely available. With the data release, we will also supply scripts for conversion to other standards.
0
The current call for cost-effective, accessible and user-friendly health care services, together with recent advances in interactive technologies, has triggered an enormous interest in digital medical applications. Many such services are provided online, e.g. ordering medicines, making doctor appointments, accessing medical records (Turgiss et al., 2011). Self-service healthcare is actively promoted. Interactive health screening kiosks are deployed where people can measure their vision, blood pressure, weight and body mass index, receive an overall health assessment, and access a database of local doctors (Bluth, 2009). Health care providers are sometimes replaced by virtual conversational agents (DeVault et al., 2014). Of chief importance is that the quality of technology-enhanced and technology-mediated services is not significantly lower than that of conventional in-person patient-provider encounters, but that they adopt a user-centred approach to achieve high effectiveness, relevance and quality. For successful designs and innovations, attention needs to be paid not only to technical possibilities but also very much to the social interactive environment in which these innovations may be placed. Consequently, it is important to understand how well a technical solution fits in with the activities and needs of the users in a proposed setting. Systematic and comprehensive interaction analysis and dialogue modelling methods are often used for obtaining a satisfactory degree of understanding of human interactive behaviour for the subsequent specification of mechanisms of human dialogue that need to be incorporated into a system. A multi-disciplinary analysis of user behavioural, physiological and functional data is required, with processes and results that are understandable by medical and non-medical experts, for staying close to the reality of doctors and patients, and for developing products that are well accepted by their users. The data analysis often involves annotation with dialogue act information. Annotation schemes have been constructed that are useful both for empirically-based studies of interactive and task-related phenomena, and for data-driven design of interactive systems. A number of studies have proposed the use of a dialogue act taxonomy tailored to the medical domain (Sandvik et al., 2002; Miller and Nelson, 2005; Chang et al., 2013; Bolioli and others, 2019). Most of them are based on the RIAS scheme (Chang et al., 2013; Miller and Nelson, 2005; Bolioli and others, 2019), which has proved efficient for the analysis of various kinds of medical encounters 1, but which cannot be directly used for building a dialogue system or its components. The widely used domain-independent ISO 24617-2 dialogue act taxonomy, on the other hand, needs some adaptation to the medical domain, but is well suited for computational modelling and for dialogue system design. This study tests the assumption that the two schemes are in this sense complementary, and when combined together in a sensible way provide a unified model that supports the quantitative and qualitative analysis of observed behaviour in natural interactive medical settings, while also being useful for quality assessment of interactive and task-related performance of medical professionals, including technology-enhanced and technology-mediated interactions. Moreover, the combined taxonomy can facilitate user-based interactive data collection (real or simulated), as well as the design of conversational medical applications.
The paper is structured as follows. Section 2 specifies the use cases and discusses the related work performed in the analysis and modelling of medical encounters. Section 3 introduces the RIAS and ISO 24617-2 taxonomies. Section 4 presents annotation experiments performed to assess the compatibility of concepts defined in both taxonomies. We specify the corpus data and discuss the obtained results. Section 5 defines a mapping between the RIAS and ISO 24617-2 taxonomies, and proposes extensions to ISO 24617-2 in order to make it powerful and accurate, as required for the use cases of analysing and modelling medical interactions. Finally, Section 6 summarizes our findings and outlines directions for future research and development.
0
Syntactic parsing of natural languages is a complex task, partly because of the richness and volume of the information about words and syntactic constructions that has to be taken into account. It is nevertheless essential to have this information available, in the form of resources such as lexicons and grammars, while trying to minimize the gaps and errors present in these resources. Using these resources at a large scale in parsers is therefore very valuable (van Noord, 2004), and in particular studying the cases that lead to a parse failure: as the saying goes, we learn from our mistakes. We propose a probabilistic model for spotting the forms that are potential sources of errors, based on a corpus of sentences submitted for parsing. To make the best use of the forms detected by the model, and in particular to be able to identify the source of an error, a visualization environment has been set up. The whole approach was tested on the parsing results produced for several corpora of several hundred thousand sentences and two distinct parsing systems, namely FRMG and SXLFG.
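A minimal sketch of the kind of statistic such an approach can build on, in the spirit of the error-mining work cited above (van Noord, 2004): a "suspicion" score per form, i.e., the rate at which sentences containing that form fail to parse. The probabilistic model described here is more elaborate; the toy data below is made up.

from collections import Counter

def suspicion_scores(parsed):
    """parsed: list of (tokens, parse_succeeded) pairs."""
    total, failed = Counter(), Counter()
    for tokens, ok in parsed:
        for form in set(tokens):
            total[form] += 1
            if not ok:
                failed[form] += 1
    return {f: failed[f] / total[f] for f in total}

results = [
    (["le", "chat", "dort"], True),
    (["le", "chat", "clamse"], False),     # unknown verb causes a failure
    (["un", "chien", "clamse"], False),
]
for form, score in sorted(suspicion_scores(results).items(),
                          key=lambda kv: -kv[1])[:3]:
    print(form, round(score, 2))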
0
Enriching vector models of word meaning so they can represent multiple word senses per word type seems to offer the potential to improve many language understanding tasks. Most traditional embedding models associate each word type with a single embedding (e.g., Bengio et al. (2006)). Thus the embedding for homonymous words like bank (with senses including 'sloping land' and 'financial institution') is forced to represent some uneasy central tendency between the various meanings. More fine-grained embeddings that represent more natural regions in semantic space could thus improve language understanding. Early research pointed out that embeddings could model aspects of word sense (Kintsch, 2001), and recent research has proposed a number of models that represent each word type by different senses, each sense associated with a sense-specific embedding (Kintsch, 2001; Reisinger and Mooney, 2010; Neelakantan et al., 2014; Huang et al., 2012; Chen et al., 2014; Pina and Johansson, 2014; Wu and Giles, 2015; Liu et al., 2015). Such sense-specific embeddings have shown improved performance on simple artificial tasks like matching human word similarity judgments, such as WS353 (Rubenstein and Goodenough, 1965) or MC30 (Huang et al., 2012). Incorporating multi-sense word embeddings into general NLP tasks requires a pipelined architecture that addresses three major steps: 1. Sense-specific representation learning: learn word-sense-specific embeddings from a large corpus, either unsupervised or aided by external resources like WordNet. 2. Sense induction: given a text unit (a phrase, sentence, document, etc.), infer word senses for its tokens and associate them with corresponding sense-specific embeddings. 3. Representation acquisition for phrases or sentences: learn representations for text units given sense-specific embeddings and pass them to machine learning classifiers. Most existing work on multi-sense embeddings emphasizes the first step by learning sense-specific embeddings, but does not explore the next two steps. These are important steps, however, since it isn't clear how existing multi-sense embeddings can be incorporated into and benefit real-world NLU tasks. We propose a pipelined architecture to address all three steps and apply it to a variety of NLP tasks: part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness. We find: • Multi-sense embeddings give improved performance in some tasks (e.g., semantic similarity for words and sentences, semantic relation identification, part-of-speech tagging), but not others (e.g., sentiment analysis, named entity extraction). In our analysis we offer some suggested explanations for these differences. • Some of the improvements for multi-sense embeddings are no longer visible when using more sophisticated neural models like LSTMs, which have more flexibility in filtering away the informational chaff from the wheat. • It is important to carefully compare against embeddings of the same dimensionality. • When doing so, the most straightforward way to yield better performance on these tasks is just to increase embedding dimensionality. After describing related work, we introduce the new unsupervised sense-learning model in Section 3, give our sense-induction algorithm in Section 4, and then in the following sections evaluate its performance for word similarity and various NLP tasks.
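A minimal sketch of the second step (sense induction): for a token, choose the sense whose embedding is most similar, by cosine similarity, to the averaged embeddings of the surrounding context words. All embeddings below are random placeholders, and this is only one simple way to instantiate the step, not necessarily the paper's algorithm.

import numpy as np

rng = np.random.default_rng(0)
word_emb = {w: rng.normal(size=8) for w in ["river", "money", "deposit", "slope"]}
sense_emb = {"bank_1": rng.normal(size=8),    # 'sloping land'
             "bank_2": rng.normal(size=8)}    # 'financial institution'

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def induce_sense(context_words, senses):
    ctx = np.mean([word_emb[w] for w in context_words], axis=0)
    return max(senses, key=lambda s: cos(senses[s], ctx))

print(induce_sense(["money", "deposit"], sense_emb))
print(induce_sense(["river", "slope"], sense_emb))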
0
For statistical machine translation (SMT), a crucial issue is how to build a translation model that extracts as much accurate and generative translation knowledge as possible. The existing SMT models have made much progress. However, they still suffer from the bad performance of unnatural or even unreadable translations, especially when the sentences become complicated. We think the deep reason is that those models only extract translation information on the lexical or syntactic level, but fail to give an overall understanding of source sentences on the semantic level of discourse. In order to solve this problem, (Gong et al., 2011; Xiao et al., 2011; Wong and Kit, 2012) build discourse-based translation models to ensure lexical coherence or consistency. Although some lexicons can be translated better by their models, the overall structure still remains unnatural. Marcu et al. (2000) design a discourse structure transferring module, but leave much work to do, especially on how to integrate this module into SMT and how to automatically analyze the structures. Those reasons urge us to seek a new translation framework under the idea of "translation with overall understanding". Rhetorical structure theory (RST) (Mann and Thompson, 1988) provides us with a good perspective and inspiration to build such a framework. Generally, an RST tree can explicitly show the minimal spans with semantic functional integrity, which are called elementary discourse units (edus) (Marcu et al., 2000), and it also depicts the hierarchical relations among edus. Furthermore, since different languages' edus are usually equivalent on the semantic level, it is intuitive to create a new framework based on RST by directly mapping the source edus to target ones. Taking Chinese-to-English translation as an example, our translation framework works in the following steps: 1) Source RST-tree acquisition: a source sentence is parsed into an RST tree; 2) Rule extraction: translation rules are extracted from the source tree and the target string via bilingual word alignment; 3) RST-based translation: the source RST tree is translated into a target sentence with the extracted translation rules. Experiments on Chinese-to-English sentence-level discourses demonstrate that this method achieves significant improvements.
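A schematic sketch of step 3: translate each edu, then compose the translations bottom-up according to the RST relation at each internal node. The composition patterns, the toy lexicon, and the tiny tree below are illustrative stand-ins for the translation rules the framework would actually extract.

RULES = {  # hypothetical composition patterns per RST relation
    "CONDITION": "{nucleus} if {satellite}",
    "ELABORATION": "{nucleus}, {satellite}",
}

def translate_edu(edu):
    toy_lexicon = {"如果 下雨": "it rains", "我们 留在 家里": "we stay home"}
    return toy_lexicon.get(edu, edu)

def translate(node):
    if isinstance(node, str):                     # a leaf is an edu
        return translate_edu(node)
    relation, nucleus, satellite = node
    return RULES[relation].format(nucleus=translate(nucleus),
                                  satellite=translate(satellite))

tree = ("CONDITION", "我们 留在 家里", "如果 下雨")
print(translate(tree))                            # -> "we stay home if it rains"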
0
With the proliferation of microblogging and its wide influence on how information is shared and digested, the study of microblog sites has gained interest in recent NLP research. Several approaches have been proposed to enable a deep understanding of information on Twitter. An emerging approach is to use semantic annotation techniques, for instance by mapping Twitter information snippets to canonical entities in a knowledge base or to Wikipedia (Meij et al., 2012; Guo et al., 2013), or by revisiting NLP tasks in the Twitter domain (Owoputi et al., 2013; Ritter et al., 2011). Much of the existing work focuses on annotating a single Twitter message (tweet). However, information in Twitter is rarely digested in isolation, but rather in a collective manner, with the adoption of special mechanisms such as hashtags. When put together, the unprecedentedly massive adoption of a hashtag within a short time period can lead to bursts and often reflects trending social attention. (Figure 1 shows an example tweet: "Hard to believe anyone can do worse than Russia in #Sochi. Brazil seems to be trying pretty hard though! sportingnews.com".) Understanding the meaning of trending hashtags offers a valuable opportunity for various applications and studies, such as viral marketing, social behavior analysis, recommendation, etc. Unfortunately, the task of hashtag annotation has been largely unexplored so far. In this paper, we study the problem of annotating trending hashtags on Twitter with entities derived from Wikipedia. Instead of establishing a static semantic connection between hashtags and entities, we are interested in dynamically linking the hashtags to entities that are closest to the underlying topics during burst time periods of the hashtags. For instance, while '#sochi' refers to a city in Russia, during February 2014 the hashtag was used to report on the 2014 Winter Olympics (cf. Figure 1). Hence, it should be linked more to Wikipedia pages related to the event than to the location. Compared to traditional domains of text (e.g., news articles), annotating hashtags poses additional challenges. Hashtags' surface forms are very ad hoc, as they are chosen not in favor of text quality, but by the dynamics of attention of the large crowd. In addition, the evolution of the semantics of hashtags (e.g., in the case of '#sochi') makes them more ambiguous. Furthermore, a hashtag can encode multiple topics at once. For example, in March 2014, '#oscar' referred to the 86th Academy Awards, but at the same time also to the Trial of Oscar Pistorius. Sometimes it is difficult even for humans to understand a trending hashtag without knowledge about what was happening with the related entities in the real world. In this work, we propose a novel solution to these challenges by leveraging temporal knowledge about entity dynamics derived from Wikipedia. We hypothesize that a trending hashtag is associated with an increase in public attention to certain entities, and that this can also be observed on Wikipedia. As in Figure 1, we can identify 2014 Winter Olympics as a prominent entity for '#sochi' during February 2014 by observing the change of user attention to the entity, for instance via the page view statistics of Wikipedia articles. We exploit both Wikipedia edits and page views for annotation. We also propose a novel learning method, inspired by the information spreading nature of social media such as Twitter, to suggest the optimal annotations without the need for human labeling.
In summary:• We are the first to combine the Wikipedia edit history and page view statistics to overcome the temporal ambiguity of Twitter hashtags.• We propose a novel and efficient learning algorithm based on influence maximization to automatically annotate hashtags. The idea is generalizable to other social media sites that have a similar information spreading nature.• We conduct thorough experiments on a realworld dataset and show that our system can outperform competitive baselines by 17-28%.
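A minimal sketch of the page-view signal described above: rank candidate Wikipedia entities for a hashtag by the relative increase of their page views during the hashtag's burst period compared to a baseline window. All counts below are made-up placeholders, and the actual system combines this signal with Wikipedia edits and an influence-maximization-based learning step.

# entity -> (baseline daily views, burst-period daily views)
page_views = {
    "Sochi": (12000, 30000),
    "2014_Winter_Olympics": (3000, 250000),
    "Russia": (80000, 110000),
}

def burst_score(baseline, burst):
    return (burst - baseline) / max(baseline, 1)   # relative increase

ranking = sorted(page_views.items(),
                 key=lambda kv: burst_score(*kv[1]), reverse=True)
for entity, (base, burst) in ranking:
    print(entity, round(burst_score(base, burst), 1))
# '2014_Winter_Olympics' tops the ranking for '#sochi' in February 2014.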
0
Examining historical legal texts offers insight into the development of legal thinking and the practice of law. In order to facilitate computer processing, older legal texts are typically scanned from paper, microfilm or other physical media and then converted to text using Optical Character Recognition (OCR), which introduces numerous errors. Many of these errors can be corrected automatically using spelling and grammar correcting systems. However, the names of people, places and other proper names cannot be corrected easily, making the study of lawyers, judges and other people unreliable (Hamdi et al., 2020) . One use of reliable names is inferring personal biases and connections that may affect outcomes (Clarke, 2018) . In order to address this problem, names need to be accurately identified in the text and then corrected and standardized in a process often called Named Entity Disambiguation or NED (Yamada et al., 2016) . Nonetheless, extracting accurate names is only part of the solution. In the future, organization names must also be extracted, and the respective roles must be identified.The process of computationally extracting names from text is more formally called Named Entity Recognition (NER) and has been a difficult problem for many decades (Nadeau and Sekine, 2007) . Furthermore, extracting names in legal text provides many domain-specific challenges (Bikel et al., 1999) .This paper describes a two-pronged approach for extracting the names of lawyers arguing cases: (i) Extract the lawyer names from text using our ensemble model based on a neural network and a state machine.(ii) Identify and correct transcription errors to uniquely identify lawyers.Our system for extracting the names of lawyers in legal text uses a transformer-based neural network (Vaswani et al., 2017) feeding a finite state machine. After extraction, the identified names are subjected to several heuristic rules to identify errors, misspelled names and name variations in order to attempt to uniquely identify the lawyers named. When errors cannot be corrected automatically, such as in names with alternative spellings, the extent of the errors is quantified.In order to develop, train and test this system, we used legal cases from the Harvard Caselaw Access Project (Harvard University, 2018) . This project includes the complete text of decisions from United States courts dating back to the 1700s, with over 40 million pages of text spanning over 360 years. In our analysis, we only focused on cases from 1900 to 2010 in jurisdictions that were states as of 1900. Thus, Alaska and Hawaii were not considered. Because states and courts often have different reporting styles that have varied substantially over the years, we segmented most of our analysis by state and then by decade.In a typical case text in the United States, lawyers are only identified in the header section on the first page using a relatively standardized format, usually called the "party names". They are rarely mentioned by name in the decision text. A typical party names text would read as follows:David P. Sutton, Asst. Corp. Counsel, with whom Charles T. Duncan, Corp. Counsel, Hubert B. Pair, Principal Asst. Corp. Counsel, and Richard W. Barton, Asst. Corp. Counsel, were on the brief, for appellant.The text is usually a single complex sentence where all principal people, firms and their roles are identified in a mostly standardized and stylized format. 
Because of the sentence's complexity, the text is sometimes difficult for non-lawyers and automated systems to decipher. The parsing problem is compounded because the style standards and norms vary by location and over time. In addition to containing spelling and transcription errors, words and names are sometimes given as initials, nicknames or abbreviations. All of these variations confound automated systems. Thus, a solution to the problem can be divided into two parts: (i) extract the names from the text and (ii) standardize the names to identify individuals. One solution to the first part of extracting names is documented by Dozier et al. (2010), who describe their work at Westlaw (now part of Thomson Reuters) in 2000 identifying entities in US case law using Bayesian modeling (Dozier and Haschart, 2000). Their process extracts more than just names and involves parsing words in part by using a finite state machine that is specially tailored for each jurisdiction. More recently, Wang et al. (2020) propose a solution based on a neural network architecture that performs well across various domains, including legal text. In addition, Leitner et al. (2019) have developed a very promising system to perform NER in German legal texts that was built and trained on their own dataset (Leitner et al., 2020). This dataset was also used by Bourgonje et al. (2020) in their NER work based on BERT. These approaches apply generically to the entire legal text and are not focused on the grammatically challenging party names text. In any case, the lack of a similar dataset for English prevents us from trying these approaches. Our system differs from previous attempts in that it is an ensemble composed of a transformer-based neural network and a state machine rather than a single architecture. The state machine allows the inclusion of pre-established knowledge of the syntax and style of the party names text, thereby increasing accuracy. This increased the accuracy by 10% compared with the state-of-the-art transformer-based FLERT model (Schweter and Akbik, 2021).
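A toy sketch of the state-machine side of such an ensemble: walk through a party-names sentence token by token, collecting capitalized name tokens and flushing a candidate name whenever a role word or connective is reached. The word lists and pattern below are heavily simplified and hypothetical; the actual system pairs a richer state machine with a transformer-based NER model.

import re

ROLE_WORDS = {"Asst.", "Corp.", "Counsel,", "Counsel", "Principal"}
STOP_WORDS = {"with", "whom", "and", "were", "on", "the", "brief,", "for", "appellant."}

def extract_names(text):
    names, current = [], []
    for tok in text.split():
        if tok in ROLE_WORDS or tok in STOP_WORDS:
            if len(current) >= 2:                 # require at least first + last name
                names.append(" ".join(current))
            current = []
        elif re.match(r"^[A-Z][a-zA-Z.']*,?$", tok):
            current.append(tok.rstrip(","))
        else:
            current = []
    if len(current) >= 2:
        names.append(" ".join(current))
    return names

header = ("David P. Sutton, Asst. Corp. Counsel, with whom Charles T. Duncan, "
          "Corp. Counsel, Hubert B. Pair, Principal Asst. Corp. Counsel, and "
          "Richard W. Barton, Asst. Corp. Counsel, were on the brief, for appellant.")
print(extract_names(header))
# -> ['David P. Sutton', 'Charles T. Duncan', 'Hubert B. Pair', 'Richard W. Barton']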
0
Topic models are probabilistic graphical models meant to capture the semantic associations underlying corpora. Since the introduction of latent Dirichlet allocation (LDA) (Blei et al., 2003), these models have been extended to account for more complex distributions over topics, such as adding supervision (Blei and McAuliffe, 2007), non-parametric priors (Blei et al., 2004; Teh et al., 2006), topic correlations (Li and McCallum, 2006; Mimno et al., 2007), and sparsity (Williamson et al., 2010; Eisenstein et al., 2011). While much research has focused on modeling distributions over topics, less focus has been given to the makeup of the topics themselves. This emphasis leads us to find two problems with LDA and its variants mentioned above: (1) independently generated topics and (2) overparameterized models. Independent Topics: In the models above, the topics are modeled as independent draws from a single underlying distribution, typically a Dirichlet. This violates the topic modeling community's intuition that these distributions over words are often related. As an example, consider a corpus that supports two related topics, baseball and hockey. These topics likely overlap in their allocation of mass to high-probability words (e.g. team, season, game, players), even though the two topics are unlikely to appear in the same documents. When topics are generated independently, the model does not provide a way to capture this sharing between related topics. Many extensions to LDA have addressed a related issue, LDA's inability to model topic correlation, 1 by changing the distributions over topics (Li and McCallum, 2006; Mimno et al., 2007; Paisley et al., 2011). Yet, none of these change the underlying structure of the topics' distributions over words. Topics are most often parameterized as multinomial distributions over words: increasing the number of topics means learning new multinomials over large vocabularies, resulting in models consisting of millions of parameters. This issue was partially addressed in SAGE (Eisenstein et al., 2011) by encouraging sparsity in the topics, which are parameterized by their difference in log-frequencies from a fixed background distribution. Yet the problem of overparameterization is also tied to the number of topics, and though SAGE reduces the number of non-zero parameters, it still requires a vocabulary-sized parameter vector for each topic. We present the Shared Components Topic Model (SCTM), which addresses both of these issues by generating each topic as a normalized product of a smaller number of underlying components. Rather than learning each new topic from scratch, we model a set of underlying component distributions that constrain topic formation. Each topic can then be viewed as a combination of these underlying components, where in a model such as LDA we would say that components and topics stand in a one-to-one relationship. The key advantages of the SCTM are that it can learn and share structure between overlapping topics (e.g. baseball and hockey) and that it can represent the same number of topics in a much more compact representation, with far fewer parameters. Because the topics are products of components, we present a new training algorithm for the significantly more complex product case, which relies on a Contrastive Divergence (CD) objective.
Since SCTM topics, which are products of distributions, could be represented directly by distributions as in LDA, our goal is not necessarily to learn better topics, but to learn models that are substantially smaller in size and generalize better to unseen data. Experiments on two corpora show that our model uses fewer underlying multinomials and still achieves lower perplexity than LDA, which suggests that these constraints could lead to better topics.
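A minimal numpy sketch of the central construction: a topic formed as the normalized element-wise product of a subset of shared component distributions over the vocabulary. The components and the component-to-topic assignments below are toy values chosen by hand, not parameters learned with contrastive divergence.

import numpy as np

vocab = ["team", "season", "game", "bat", "puck", "ice"]
components = np.array([
    [0.30, 0.30, 0.30, 0.04, 0.03, 0.03],   # shared "team sports" component
    [0.10, 0.10, 0.20, 0.55, 0.03, 0.02],   # baseball-specific component
    [0.10, 0.10, 0.20, 0.02, 0.28, 0.30],   # hockey-specific component
])

def product_topic(component_ids):
    """Multiply the selected components element-wise and renormalize."""
    prod = np.prod(components[component_ids], axis=0)
    return prod / prod.sum()

baseball = product_topic([0, 1])   # shares the first component with hockey
hockey = product_topic([0, 2])
print(dict(zip(vocab, np.round(baseball, 3))))
print(dict(zip(vocab, np.round(hockey, 3))))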
0
A standard evaluation setup for supervised machine learning (ML) tasks assumes an evaluation metric which compares a gold label to a classifier prediction. This setup assumes that the task has clearly defined and unambiguous labels and, in most cases, that an instance can be assigned few labels. These assumptions, however, do not hold for natural language generation (NLG) tasks like machine translation (MT) (Bahdanau et al., 2015; Johnson et al., 2017) and text summarization (Rush et al., 2015; Tan et al., 2017), where we do not predict a single discrete label but generate natural language text. Thus, the set of labels for NLG is neither clearly defined nor finite. Yet, the standard evaluation protocols for NLG still predominantly follow the described default paradigm: (1) evaluation datasets come with human-created reference texts and (2) evaluation metrics, e.g., BLEU (Papineni et al., 2002) or METEOR (Lavie and Agarwal, 2007) for MT and ROUGE (Lin and Hovy, 2003) for summarization, count the exact "label" (i.e., n-gram) matches between reference and system-generated text. In other words, established NLG evaluation compares semantically ambiguous labels from an unbounded set (i.e., natural language texts) via hard symbolic matching (i.e., string overlap). The first remedy is to replace the hard symbolic comparison of natural language "labels" with a soft comparison of texts' meaning, using semantic vector space representations. Recently, a number of MT evaluation methods appeared focusing on semantic comparison of reference and system translations (Shimanaka et al., 2018; Clark et al., 2019; Zhao et al., 2019). While these correlate better than n-gram overlap metrics with human assessments, they do not address inherent limitations stemming from the need for reference translations, namely: (1) references are expensive to obtain; (2) they assume a single correct solution and bias the evaluation, both automatic and human (Dreyer and Marcu, 2012; Fomicheva and Specia, 2016); and (3) they limit MT evaluation to language pairs with available parallel data. Reliable reference-free evaluation metrics, directly measuring the (semantic) correspondence between the source language text and system translation, would remove the need for human references and allow for unlimited MT evaluations: any monolingual corpus could be used for evaluating MT systems. However, the proposals of reference-free MT evaluation metrics have been few and far between and have required either non-negligible supervision (i.e., human translation quality labels) (Specia et al., 2010) or language-specific preprocessing like semantic parsing (Lo et al., 2014; Lo, 2019), both hindering the wide applicability of the proposed metrics. Moreover, they have also typically exhibited performance levels well below those of standard reference-based metrics (Ma et al., 2019). In this work, we comparatively evaluate a number of reference-free MT evaluation metrics that build on the most recent developments in multilingual representation learning, namely cross-lingual contextualized embeddings (Devlin et al., 2019) and cross-lingual sentence encoders (Artetxe and Schwenk, 2019). We investigate two types of cross-lingual reference-free metrics: (1) Soft token-level alignment metrics find the optimal soft alignment between source sentence and system translation using Word Mover's Distance (WMD) (Kusner et al., 2015). Zhao et al.
(2019) recently demonstrated that WMD operating on BERT representations (Devlin et al., 2019) substantially outperforms baseline MT evaluation metrics in the reference-based setting. In this work, we investigate whether WMD can yield comparable success in the reference-free (i.e., cross-lingual) setup; (2) Sentence-level similarity metrics measure the similarity between sentence representations of the source sentence and system translation using cosine similarity.Our analysis yields several interesting findings. (i) We show that, unlike in the monolingual reference-based setup, metrics that operate on contextualized representations generally do not outperform symbolic matching metrics like BLEU, which operate in the reference-based environment. (ii) We identify two reasons for this failure: (a) firstly, cross-lingual semantic mismatch, especially for multi-lingual BERT (M-BERT), which construes a shared multilingual space in an unsupervised fashion, without any direct bilingual signal; (b) secondly, the inability of the state-of-the-art crosslingual metrics based on multilingual encoders to adequately capture and punish "translationese", i.e., literal word-by-word translations of the source sentence-as translationese is an especially persistent property of MT systems, this problem is particularly troubling in our context of referencefree MT evaluation. (iii) We show that by executing an additional weakly-supervised cross-lingual re-mapping step, we can to some extent alleviate both previous issues. (iv) Finally, we show that the combination of cross-lingual reference-free metrics and language modeling on the target side (which is able to detect "translationese"), surpasses the performance of reference-based baselines.Beyond designating a viable prospect of webscale domain-agnostic MT evaluation, our findings indicate that the challenging task of reference-free MT evaluation is able to expose an important limitation of current state-of-the-art multilingual encoders, i.e., the failure to properly represent corrupt input, that may go unnoticed in simpler evaluation setups such as zero-shot cross-lingual text classification or measuring cross-lingual text similarity not involving "adversarial" conditions. We believe this is a promising direction for nuanced, fine-grained evaluation of cross-lingual representations, extending the recent benchmarks which focus on zeroshot transfer scenarios (Hu et al., 2020) .
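A minimal sketch of the second family of metrics described above: score a system translation by the cosine similarity between multilingual sentence embeddings of the source sentence and the translation. The encode() function below is a deterministic placeholder so the sketch runs standalone; in practice it would be a real multilingual encoder such as LASER or a multilingual sentence-transformer model.

import numpy as np

def encode(sentence):
    # Placeholder pseudo-embedding; swap in a multilingual sentence encoder here.
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    v = rng.normal(size=16)
    return v / np.linalg.norm(v)

def reference_free_score(source, translation):
    s, t = encode(source), encode(translation)
    return float(s @ t)           # cosine similarity (vectors are unit length)

print(reference_free_score("Das Haus ist klein.", "The house is small."))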
0
Word Sense Disambiguation (WSD) is an indispensable component of language understanding (Navigli, 2009); hence, it has been one of the most studied long-standing problems in lexical semantics. Currently, the dominant WSD paradigm is the supervised approach, which highly relies on sense-annotated data. Similarly to many other supervised tasks, the amount of labeled (sense-annotated) data for WSD highly determines downstream performance. One of the factors that make WSD a challenging problem is that creating sense-annotated data is an expensive and arduous process, i.e., the so-called knowledge-acquisition bottleneck (Gale et al., 1992). Moreover, WSD research often focuses on the English language. While datasets for other languages exist (Petrolito and Bond, 2014a), these are generally automatically generated (Delli Bovi et al., 2017; Pasini et al., 2018; Scarlini et al., 2020a; Barba et al., 2020) or not large enough for training supervised WSD models (Navigli et al., 2013a; Moro and Navigli, 2015). 1 However, recent contextualized embeddings have proven highly effective in English WSD (Peters et al., 2018; Loureiro and Jorge, 2019; Vial et al., 2019; Loureiro et al., 2021), as well as in capturing high-level linguistic knowledge that can be shared or transferred across different languages (Conneau et al., 2020; Cao et al., 2020). Therefore, cross-lingual transfer has opened new opportunities to circumvent the knowledge-acquisition bottleneck for less-resourced languages. In this paper, we aim at investigating this opportunity. To this end, we build upon recent research on cross-lingual transfer to compute contextualized sense embeddings and verify whether semantic distinctions in the English language are transferable to other languages. The contributions are threefold: (1) We adapt existing datasets to build a unified benchmark for cross-lingual WSD based on WordNet; (2) we test the effectiveness of contextualized embeddings for cross-lingual transfer in the context of WSD; and (3) we establish relevant and simple baselines for future work in cross-lingual WSD. 2
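A minimal sketch of one simple way cross-lingual transfer with contextualized sense embeddings can work, assuming a nearest-neighbour baseline: average the contextual embeddings of English sense-annotated occurrences to build a sense inventory, then disambiguate a target-language occurrence by cosine similarity. contextual_embedding() is a placeholder for a multilingual contextualized encoder, and the sense keys are illustrative.

import numpy as np

def contextual_embedding(sentence, target_word):
    # Placeholder: swap in a multilingual contextualized encoder (e.g. M-BERT).
    rng = np.random.default_rng(abs(hash((sentence, target_word))) % (2**32))
    v = rng.normal(size=16)
    return v / np.linalg.norm(v)

english_annotations = {               # sense -> [(sentence, target word), ...]
    "bank%riverside": [("He sat on the bank of the river.", "bank")],
    "bank%finance":   [("She opened an account at the bank.", "bank")],
}
sense_vecs = {s: np.mean([contextual_embedding(sent, w) for sent, w in exs], axis=0)
              for s, exs in english_annotations.items()}

def disambiguate(sentence, word):
    ctx = contextual_embedding(sentence, word)
    return max(sense_vecs, key=lambda s: float(ctx @ sense_vecs[s]))

# Disambiguating an Italian occurrence against the English-built sense inventory.
print(disambiguate("Ha aperto un conto in banca.", "banca"))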
0
Statistical Machine Translation (SMT) is useful for building a machine translator between a pair of languages that follow similar word orders. However, SMT does not work well for distant language pairs such as English and Japanese, since English is an SVO language and Japanese is an SOV language. Some existing methods try to solve this word-order problem in language-independent ways. They usually parse input sentences and learn a reordering decision at each node of the parse trees. For example, Yamada and Knight (2001), Quirk et al. (2005), Xia and McCord (2004), and Li et al. (2007) proposed such methods. Other methods tackle this problem in language-dependent ways (Katz-Brown and Collins, 2008; Collins et al., 2005; Nguyen and Shimazu, 2006). Recently, Xu et al. (2009) and Hong et al. (2009) proposed rule-based preprocessing methods for SOV languages. These methods parse input sentences and reorder the words using a set of handcrafted rules to get SOV-like sentences. If we could completely reorder the words in input sentences by preprocessing to match the word order of the target language, we would be able to greatly reduce the computational cost of SMT systems. In this paper, we introduce a single reordering rule: Head Finalization. We simply move syntactic heads to the end of the corresponding syntactic constituents (e.g., phrases and clauses). We use only this reordering rule, and we do not have to consider part-of-speech tags or rule weights because the powerful Enju parser allows us to implement the rule at a general level. Why do we think this works? The reason is simple: Japanese is a typical head-final language. That is, a syntactic head word comes after non-head (dependent) words. SOV is just one aspect of head-final languages. In order to implement this idea, we need a parser that outputs syntactic heads. Enju is such a parser from the University of Tokyo (http://www-tsujii.is.s.u-tokyo.ac.jp/enju). We discuss other parsers in Section 5. There is another kind of head: semantic heads. Hong et al. (2009) used the Stanford parser (de Marneffe et al., 2006), which outputs semantic head-based dependencies; Xu et al. (2009) also used the same representation. The use of syntactic heads and the number of dependents are essential for the simplicity of Head Finalization (see Discussion). Our method simply checks whether a tree node is a syntactic head. We do not have to consider what we are moving and how to move it. On the other hand, Xu et al. had to introduce dozens of weighted rules, probably because they used the semantic head-based dependency representation without a restriction on the number of dependents. The major difference between our method and the above conventional methods, other than its simplicity, is that our method moves not only verbs and adjectives but also functional words such as prepositions. Figure 1 shows Enju's XML output for the simple sentence "John hit a ball." The tag <cons> indicates a non-terminal node and <tok> indicates a terminal node or a word (token). Each node has a unique id. Head information is given by the node's head attribute. For instance, node c0's head is node c3, and c3 is a VP, or verb phrase. Thus, Enju treats not only words but also non-terminal nodes as heads.
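A minimal sketch of the reordering rule applied to an Enju-style tree: within each constituent, move the head child (named by the parent's head attribute) to the last position, then read off the leaves. The XML below is a simplified hand-made fragment that follows the format described above, not verbatim Enju output, and the real parser output contains additional attributes.

import xml.etree.ElementTree as ET

XML = """
<cons id="c0" head="c3">
  <cons id="c1" head="t0"><tok id="t0">John</tok></cons>
  <cons id="c3" head="t1">
    <tok id="t1">hit</tok>
    <cons id="c4" head="t3">
      <tok id="t2">a</tok>
      <tok id="t3">ball</tok>
    </cons>
  </cons>
</cons>
"""

def head_finalize(node):
    children = list(node)
    head_id = node.get("head")
    for child in children:
        head_finalize(child)
    # Move the head child to the last position within this constituent.
    head_children = [c for c in children if c.get("id") == head_id]
    if head_children:
        head = head_children[0]
        node.remove(head)
        node.append(head)

def leaves(node):
    return [t.text for t in node.iter("tok")]

root = ET.fromstring(XML)
head_finalize(root)
print(" ".join(leaves(root)))   # -> "John a ball hit" (SOV-like order)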
0
Research in machine translation has focused broadly on two main goals, improving word choice and improving word order in translation output. Current machine translation metrics rely upon indirect methods for measuring the quality of the word order, and their ability to capture the quality of word order is poor (Birch et al., 2010). There are currently two main approaches to evaluating reordering. The first is exemplified by the BLEU score (Papineni et al., 2002), which counts the number of matching n-grams between the reference and the hypothesis. Word order is captured by the proportion of longer n-grams which match. This method does not consider the position of matching words, and only captures ordering differences if there is an exact match between the words in the translation and the reference. Another approach is taken by two other commonly used metrics, METEOR (Banerjee and Lavie, 2005) and TER (Snover et al., 2006). They both search for an alignment between the translation and the reference, and from this they calculate a penalty based on the number of differences in order between the two sentences. When block moves are allowed the search space is very large, and matching stems and synonyms introduces errors. Importantly, none of these metrics capture the distance by which words are out of order. Also, they conflate reordering performance with the quality of the lexical items in the translation, making it difficult to tease apart the impact of changes. More sophisticated metrics, such as the RTE metric (Padó et al., 2009), use higher-level syntactic or semantic analysis to determine the grammaticality of the output. These approaches require annotation and can be very slow to run. For most research, shallow metrics are more appropriate. We introduce a novel shallow metric, the Lexical Reordering Score (LRscore), which explicitly measures the quality of word order in machine translations and interpolates it with a lexical metric. This results in a simple, decomposable metric which makes it easy for researchers to pinpoint the effect of their changes. In this paper we show that the LRscore is more consistent with human judgements than other metrics for five out of eight different language pairs. We also apply the LRscore during Minimum Error Rate Training (MERT) to see whether information on reordering allows the translation model to produce better reorderings. We show that humans prefer the output of systems trained with the LRscore 52.5% of the time, as compared to 43.9% when training with the BLEU score. Furthermore, training with the LRscore does not result in lower BLEU scores. The rest of the paper proceeds as follows. Section 2 describes the reordering and lexical metrics that are used and how they are combined. Section 3 presents the experiments on consistency with human judgements and describes how to train the language-independent parameter of the LRscore. Section 4 reports the results of the experiments on MERT. Finally we discuss related work and conclude.
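One plausible instantiation of such a metric, sketched below: compute a Kendall's-tau-style permutation score from a word alignment between reference and hypothesis, and interpolate it with a lexical score. The interpolation weight and the lexical component are placeholders, and the released LRscore may differ in its exact definitions.

from itertools import combinations

def kendall_tau_score(permutation):
    """1.0 for monotone order; lower as more word pairs are out of order."""
    n = len(permutation)
    if n < 2:
        return 1.0
    discordant = sum(1 for i, j in combinations(range(n), 2)
                     if permutation[i] > permutation[j])
    return 1.0 - discordant / (n * (n - 1) / 2)

def lr_score(permutation, lexical_score, alpha=0.5):
    # alpha balances the reordering component against the lexical component.
    return alpha * kendall_tau_score(permutation) + (1 - alpha) * lexical_score

# Hypothesis words aligned to reference positions [2, 0, 1, 3]:
print(lr_score([2, 0, 1, 3], lexical_score=0.4))   # -> 0.533...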
0
In the past few years, there has been growing interest in mining opinions in the user-generated content (UGC) on the Web, e.g., customer reviews, forum posts, and blogs. One major focus is sentiment classification and opinion mining (e.g., Pang et al 2002; Turney 2002; Hu and Liu 2004; Wilson et al 2004; Kim and Hovy 2004; Popescu and Etzioni 2005). However, these studies mainly center on direct opinions or sentiments expressed on entities. Little study has been done on comparisons, which represent another type of opinion-bearing text. Comparisons are related to but are also quite different from direct opinions. For example, a typical direct opinion sentence is "the picture quality of Camera X is great", while a typical comparative sentence is "the picture quality of Camera X is better than that of Camera Y." We can see that comparisons use different language constructs from direct opinions. A comparison typically expresses a comparative opinion on two or more entities with regard to their shared features or attributes, e.g., "picture quality". Although direct opinions are most common in UGC, comparisons are also widely used (about 10% of the sentences), especially in forum discussions where users often ask questions such as "X vs. Y" (X and Y are competing products). Discussions are then centered on comparisons. Jindal and Liu (2006) proposed a technique to identify comparative sentences from reviews and forum posts, and to extract entities, comparative words, and entity features that are being compared. For example, in the sentence "Camera X has longer battery life than Camera Y", the technique extracts "Camera X" and "Camera Y" as entities, "longer" as the comparative word and "battery life" as the attribute of the cameras being compared. However, the technique does not find which entity is preferred by the author. For this example, clearly "Camera X" is the preferred camera with respect to the "battery life" of the cameras. This paper aims to solve this problem, which is useful in many applications because the preferred entity is the key piece of information in a comparative opinion. For example, a potential customer clearly wants to buy the product that is better or preferred. In this work, we treat a sentence as the basic information unit. Our objective is thus to identify the preferred entity in each comparative sentence. A useful observation about comparative sentences is that in each such sentence there is usually a comparative word (e.g., "better", "worse" and -er words) or a superlative word (e.g., "best", "worst" and -est words). The entities being compared often appear on the two sides of the comparative word. A superlative sentence may only have one entity, e.g., "Camera X is the best". For simplicity, we use comparative words (sentences) to mean both comparative words (sentences) and superlative words (sentences). Clearly, the preferred entity in a comparative sentence is mainly determined by the comparative word in the sentence. Some comparative words explicitly indicate user preferences, e.g., "better", "worse", and "best". We call such words opinionated comparative words.
For example, in the sentence, "the picture quality of Camera X is better than that of Camera Y", Camera X is preferred due to the opinionated comparative word "better".However, many comparative words are not opinionated, or their opinion orientations (i.e., positive or negative) depend on the context and/or the application domain. For instance, the word "longer" is not opinionated as it is normally used to express that the length of some feature of an entity is greater than the length of the same feature of another entity. However, in a particular context, it can express a desired (or positive) or undesired (or negative) state. For example, in the sentence, "the battery life of Camera X is longer than Camera Y", "longer" clearly expresses a desired state for "battery life" (although this is an objective sentence with no explicit opinion). "Camera X" is thus preferred with regard to "battery life" of the cameras. The opinion in this sentence is called an implicit opinion. We also say that "longer" is positive in this context. We know this because of our existing domain knowledge. However, "longer" may also be used to express an undesirable state in a different context, e.g., "Program X's execution time is longer than Program Y". longer" is clearly negative here. "Program Y" is thus preferred. We call comparative words such as "longer" and "smaller" context-dependent opinion comparatives.Sentences with opinionated words (e.g., "better", and "worse") are usually easy to handle. Then the key to solve our problem is to identify the opinion orientations (positive or negative) of context-dependent comparative words. To this end, two questions need to be answered: (1) what is a context and (2) how to use the context to help determine the opinion orientation of a comparative word?The simple answer to question (1) is the whole sentence. However, a whole sentence as context is too complex because it may contain too much irrelevant information, which can confuse the system. Intuitively, we want to use the smallest context that can determine the orientation of the comparative word. Obviously, the comparative word itself must be involved. We thus conjecture that the context should consist of the entity feature being compared and the comparative word. Our experimental results show that this context definition works quite well.To answer the second question, we need external information or knowledge because there is no way that a computer program can solve the problem by analyzing the sentence itself. In this paper, we propose to use the external information in customer reviews on the Web to help solve the problem. There are a large number of such reviews on almost any product or service. These reviews can be readily downloaded from many sites. In our work, we use reviews from epinions.com. Each review in epinions.com has separate Pros and Cons (which is also the case in most other review sites). Thus, positive and negative opinions are known as they are separated by reviewers. However, they cannot be used directly because Pros and Cons seldom contain comparative words. We need to deal with this problem. Essentially, the proposed method computes whether the comparative word and the feature are more associated in Pros or in Cons. If they are more associated in Pros (or Cons) than Cons (or Pros), then the comparative word is likely to be positive (or negative) for the feature. A new association measure is also proposed to suit our purpose. 
Our experimental results show that the method achieves high precision and recall.
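The Pros/Cons decision rule can be illustrated with a generic PMI-style association score; the paper proposes its own association measure and additional handling for the vocabulary mismatch between comparative forms and Pros/Cons phrases, which this sketch omits.

```python
import math
from collections import Counter

def association(pairs, word, feature, smoothing=1.0):
    """PMI-style association between a comparative word and a feature within one
    side (Pros or Cons). `pairs` is a list of (word, feature) co-occurrences."""
    pair_counts = Counter(pairs)
    word_counts = Counter(w for w, _ in pairs)
    feat_counts = Counter(f for _, f in pairs)
    n = len(pairs)
    p_joint = (pair_counts[(word, feature)] + smoothing) / (n + smoothing)
    p_word = (word_counts[word] + smoothing) / (n + smoothing)
    p_feat = (feat_counts[feature] + smoothing) / (n + smoothing)
    return math.log(p_joint / (p_word * p_feat))

def orientation(word, feature, pros_pairs, cons_pairs):
    """Positive if (word, feature) is more associated with Pros than with Cons."""
    score = association(pros_pairs, word, feature) - association(cons_pairs, word, feature)
    return "positive" if score > 0 else "negative"
```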
0
Textual entailment is the task of automatically determining whether a natural language hypothesis can be inferred from a given piece of natural language text. The RTE challenges (Bentivogli et al., 2009; Bentivogli et al., 2011) have spurred considerable research in textual entailment over newswire data. This, along with the availability of large-scale datasets labeled with entailment information (Bowman et al., 2015) , has resulted in a variety of approaches for textual entailment recognition. A variation of this task, dubbed textual entailment search, has been the focus of RTE-5 and subsequent challenges, where the goal is to find all sentences in a corpus that entail a given hypothesis. The mindshare created by those challenges and the availability of the datasets has spurred many creative solutions to this problem. However, the evaluations have been restricted primarily to these datasets, which are in the newswire domain. Thus, much of the existing state-of-the-art research has focused on solutions that are effective in this domain.It is easy to see though, that entailment search has potential applications in other domains too. For instance, in the clinical domain we imagine entailment search can be applied for clinical trial matching as one example. Inclusion criteria for a clinical trial (for e.g., patient is a smoker) become the hypotheses, and the patient's electronic health records are the text for entailment search. Clearly, an effective textual entailment search system could possibly one day fully automate clinical trial matching.Developing an entailment system that works well in the clinical domain and, thus, automates this matching process, requires lots of labeled data, which is extremely scant in the clinical domain. Generating such a dataset is tedious and costly, primarily because it requires medical domain expertise. Moreover, there are always privacy concerns in releasing such a dataset to the community. Taking this into consideration, we investigate the problem of textual entailment in a low-resource setting.We begin by creating a dataset in the clinical domain, and a supervised entailment system that is competitive on multiple domains -newswire as well as clinical. We then present our work on selftraining and active learning to address the lack of a large-scale labeled dataset. Our self-training system results in significant gains in performance on clinical (+13% F-score) and on newswire (+15% F-score) data. Further, we show that active learning with uncertainty sampling reduces the number of required annotations for the entailment search task by more than 90% in both domains.
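The active learning component can be summarized by a least-confidence uncertainty sampling rule; the classifier probabilities and annotation budget below are placeholders rather than the paper's exact configuration.

```python
import numpy as np

def uncertainty_sampling(probabilities, budget):
    """Select the `budget` unlabeled examples whose predicted class distribution
    is least confident, i.e. whose top-class probability is lowest."""
    confidence = probabilities.max(axis=1)      # confidence of the top class
    return np.argsort(confidence)[:budget]      # least confident first

# Example: entail / not-entail probabilities for five candidate (text, hypothesis) pairs.
probs = np.array([[0.98, 0.02], [0.55, 0.45], [0.70, 0.30], [0.51, 0.49], [0.90, 0.10]])
print(uncertainty_sampling(probs, budget=2))    # indices of the two most uncertain pairs
```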
0
Text categorization is the task of assigning a text to one or more of a set of predefined categories (in the remainder of this paper, we use the terms "text" and "document" synonymously). As with most other natural language processing applications, representational factors are decisive for the performance of the categorization. The incomparably most common representational scheme in text categorization is the Bag-of-Words (BoW) approach, in which a text is represented as a vector of word weights t = (w_1, ..., w_n), where w_i is the weight of word i in the text. The weights are usually tf×idf values: tf_i is simply the frequency of word i in the document, and idf is the inverse document frequency, given by N/n_i, where N is the total number of documents in the data and n_i is the number of documents in which word i occurs. The most common version of the tf×idf formula is w_i = tf_i × log(N/n_i) (Baeza-Yates and Ribeiro-Neto, 1999). The BoW representation ignores all semantic or conceptual information; it simply looks at the surface word forms. There have been attempts at deriving more sophisticated representations for text categorization, including the use of n-grams or phrases (Lewis, 1992; Dumais et al., 1998), or augmenting the standard BoW approach with synonym clusters or latent dimensions (Baker and McCallum, 1998; Cai and Hofmann, 2003). However, none of the more elaborate representations manage to significantly outperform the standard BoW approach (Sebastiani, 2002). In addition, they are typically more expensive to compute. What interests us in this paper is the difference between using standard BoW and more elaborate, concept-based representations. Since text categorization is normally cast as a problem concerning the content of the text (Dumais et al., 1998), one might assume that looking beyond the mere surface word forms should be beneficial for the text representations. We believe that, even though BoW representations are superior in most text categorization tasks, concept-based schemes do provide important information, and that they can be used as a supplement to the BoW representations. Our goal is therefore to investigate whether there are specific categories in a standard text categorization collection for which using concept-based representations is more appropriate, and whether combinations of word-based and concept-based representations can be used to improve the categorization performance. In order to do this, we introduce a new method for producing concept-based representations for natural language data. The method is efficient, fast and scalable, and requires no external resources. We use the method to create concept-based representations for a standard text categorization problem, and we use the representations as input to a Support Vector Machine classifier. The categorization results are compared to those reached using standard BoW representations, and we also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.
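As a concrete reference point for the BoW baseline, the sketch below computes the w_i = tf_i × log(N/n_i) weighting described above; it is a minimal illustration, not the exact preprocessing used in the experiments.

```python
import math
from collections import Counter

def tfidf_vectors(documents):
    """Build BoW tf-idf vectors, w_i = tf_i * log(N / n_i), for a list of
    tokenized documents. Returns one {word: weight} dict per document."""
    n_docs = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log(n_docs / doc_freq[w]) for w in tf})
    return vectors

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
print(tfidf_vectors(docs)[0])   # "the" occurs in every document and so gets weight 0
```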
0
Hate speech detection models play an important role in online content moderation and enable scientific analyses of online hate more generally. This has motivated much research in NLP and the social sciences. However, even state-of-the-art models exhibit substantial weaknesses (see Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018; Mishra et al., 2020, for reviews). So far, hate speech detection models have primarily been evaluated by measuring held-out performance on a small set of widely-used hate speech datasets (particularly Waseem and Hovy, 2016; Founta et al., 2018), but recent work has highlighted the limitations of this evaluation paradigm. Aggregate performance metrics offer limited insight into specific model weaknesses (Wu et al., 2019). Further, if there are systematic gaps and biases in training data, models may perform deceptively well on corresponding held-out test sets by learning simple decision rules rather than encoding a more generalisable understanding of the task (e.g. Niven and Kao, 2019; Geva et al., 2019; Shah et al., 2020). The latter issue is particularly relevant to hate speech detection since current hate speech datasets vary in data source, sampling strategy and annotation process (Vidgen and Derczynski, 2020; Poletto et al., 2020), and are known to exhibit annotator biases (Waseem, 2016; Waseem et al., 2018) as well as topic and author biases (Wiegand et al., 2019; Nejadgholi and Kiritchenko, 2020). Correspondingly, models trained on such datasets have been shown to be overly sensitive to lexical features such as group identifiers, and to generalise poorly to other datasets (Nejadgholi and Kiritchenko, 2020; Samory et al., 2020). Therefore, held-out performance on current hate speech datasets is an incomplete and potentially misleading measure of model quality. To enable more targeted diagnostic insights, we introduce HATECHECK, a suite of functional tests for hate speech detection models. Functional testing, also known as black-box testing, is a testing framework from software engineering that assesses different functionalities of a given model by validating its output on sets of targeted test cases (Beizer, 1995). Ribeiro et al. (2020) show how such a framework can be used for structured model evaluation across diverse NLP tasks. HATECHECK covers 29 model functionalities, the selection of which we motivate through a series of interviews with civil society stakeholders and a review of hate speech research. Each functionality is tested by a separate functional test. We create 18 functional tests corresponding to distinct expressions of hate. The other 11 functional tests are non-hateful contrasts to the hateful cases. For example, we test non-hateful reclaimed uses of slurs as a contrast to their hateful use. Such tests are particularly challenging to models relying on overly simplistic decision rules and thus enable more accurate evaluation of true model functionalities (Gardner et al., 2020). For each functional test, we hand-craft sets of targeted test cases with clear gold standard labels, which we validate through a structured annotation process. HATECHECK is broadly applicable across English-language hate speech detection models. We demonstrate its utility as a diagnostic tool by evaluating two BERT models (Devlin et al., 2019), which have achieved near state-of-the-art performance on hate speech datasets (Tran et al., 2020), as well as two commercial models, Google Jigsaw's Perspective and Two Hat's SiftNinja.

When tested with HATECHECK, all models appear overly sensitive to specific keywords such as slurs. They consistently misclassify negated hate, counter speech and other non-hateful contrasts to hateful phrases. Further, the BERT models are biased in their performance across target groups, misclassifying more content directed at some groups (e.g. women) than at others. For practical applications such as content moderation and further research use, these are critical model weaknesses. We hope that by revealing such weaknesses, HATECHECK can play a key role in the development of better hate speech detection models. We draw on previous definitions of hate speech (Warner and Hirschberg, 2012) as well as recent typologies of abusive content to define hate speech as abuse that is targeted at a protected group or at its members for being a part of that group. We define protected groups based on age, disability, gender identity, familial status, pregnancy, race, national or ethnic origins, religion, sex or sexual orientation, which broadly reflects international legal consensus (particularly the UK's 2010 Equality Act, the US 1964 Civil Rights Act and the EU's Charter of Fundamental Rights). Based on these definitions, we approach hate speech detection as the binary classification of content as either hateful or non-hateful. Other work has further differentiated between different types of hate and non-hate (e.g., Founta et al., 2018; Salminen et al., 2018), but such taxonomies can be collapsed into a binary distinction and are thus compatible with HATECHECK. Content Warning: This article contains examples of hateful and abusive language. All examples are taken from HATECHECK to illustrate its composition. Examples are quoted verbatim, except for hateful slurs and profanity, for which the first vowel is replaced with an asterisk.
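Evaluating a model against a functional test suite amounts to checking predictions against gold labels per functionality; the test cases, functionality names and keyword model below are invented for illustration and are not taken from HATECHECK.

```python
def evaluate_functional_tests(test_suite, predict):
    """test_suite: {functionality_name: [(text, gold_label), ...]}
    predict: callable mapping a text to 'hateful' or 'non-hateful'.
    Returns per-functionality accuracy."""
    results = {}
    for functionality, cases in test_suite.items():
        correct = sum(1 for text, gold in cases if predict(text) == gold)
        results[functionality] = correct / len(cases)
    return results

# Illustrative (made-up) cases: a hateful expression and a non-hateful contrast.
suite = {
    "negated_hate": [("I would never say that group X is bad", "non-hateful")],
    "explicit_derogation": [("Group X is disgusting", "hateful")],
}
keyword_model = lambda text: "hateful" if "group x" in text.lower() else "non-hateful"
print(evaluate_functional_tests(suite, keyword_model))  # the keyword rule fails the contrast case
```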
0
In contrast to English, some vowels in languages such as Arabic and Hebrew are not part of the alphabet, and diacritics are used for vowel specification. In addition to indicating vowels, diacritics can also represent other features such as case marking and phonological gemination in Arabic. Not including diacritics in the written text in such languages increases the number of possible meanings as well as pronunciations. Humans rely on the surrounding context and their previous knowledge to infer the meanings and/or pronunciations of words. Computational models, on the other hand, are inherently limited in dealing with missing diacritics, which pose a challenge for such models due to the increased ambiguity. Diacritic restoration (or diacritization) is the process of restoring these missing diacritics for every character in the written text. It can specify pronunciation and can be viewed as a relaxed variant of word sense disambiguation. For example, the Arabic word Elm can mean "flag" or "knowledge", but the meaning as well as the pronunciation is specified when the word is diacritized (Ealamu means "flag" while Eilomo means "knowledge"). As an illustrative example in English, if we omit the vowels in the word pn, the word can be read as pan, pin, pun, or pen; each of these variants has a different pronunciation and meaning if it composes a valid word in the language. State-of-the-art diacritic restoration models have reached decent performance over the years using recurrent or convolutional neural networks, in terms of accuracy (Zalmout and Habash, 2017; Orife, 2018) and/or efficiency (Orife, 2018); yet there is still room for further improvement. Most of these models are built on character-level information, which helps generalize the model to unseen data, but presumably loses some useful information at the word level. Since word-level resources are insufficient to be relied upon for training diacritic restoration models, we integrate additional linguistic information that considers word morphology as well as word relationships within a sentence to partially compensate for this loss. In this paper, we improve the performance of diacritic restoration by building a multitask learning model (i.e., joint modeling). Multitask learning refers to models that learn more than one task at the same time, and has recently been shown to provide good solutions for a number of NLP tasks (Hashimoto et al., 2016; Kendall et al., 2018). The use of a multitask learning approach provides an end-to-end solution, in contrast to generating the linguistic features for diacritic restoration as a preprocessing step. In addition, it alleviates the reliance on other computational and/or data resources to generate these features. Furthermore, the proposed model is flexible, such that a task can be added or removed depending on data availability. This makes the model adaptable to other languages and dialects. We consider the following auxiliary tasks to boost the performance of diacritic restoration: word segmentation, part-of-speech (POS) tagging, and syntactic diacritization. We use Arabic as a case study for our approach since it has sufficient data resources for the tasks that we consider in our joint modeling. The contributions of this paper are twofold: (1) we investigate the benefits of automatically learning related tasks to boost the performance of diacritic restoration; and (2) in doing so, we devise a state-of-the-art model for Arabic diacritic restoration as well as a framework for improving diacritic restoration in other languages that include diacritics.
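The multitask setup can be sketched as a shared character encoder with one output head per task. The PyTorch sketch below is schematic: the layer sizes, the BiLSTM encoder and the per-character treatment of the auxiliary tasks are simplifying assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultitaskDiacritizer(nn.Module):
    """Shared character encoder with one classification head per task."""
    def __init__(self, n_chars, n_diacritics, n_pos, n_seg, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.heads = nn.ModuleDict({
            "diacritics": nn.Linear(2 * hidden, n_diacritics),   # main task
            "pos": nn.Linear(2 * hidden, n_pos),                 # auxiliary task
            "segmentation": nn.Linear(2 * hidden, n_seg),        # auxiliary task
        })

    def forward(self, char_ids):
        states, _ = self.encoder(self.embed(char_ids))
        return {task: head(states) for task, head in self.heads.items()}

model = MultitaskDiacritizer(n_chars=60, n_diacritics=15, n_pos=20, n_seg=4)
outputs = model(torch.randint(0, 60, (2, 30)))          # batch of 2 sequences, 30 characters
print({task: logits.shape for task, logits in outputs.items()})
# The training loss would be a (weighted) sum of per-task cross-entropy losses.
```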
0
As one of the key techniques in web information processing, text classification has been studied for a long time (Aas and Eikvil, 1999; Wang and Li, 2011; Yang and Pedersen, 1997). A growing number of machine learning techniques have been applied to text classification and some of them have proven to be successful (Miao and Kamel, 2011; Sebastiani, 2002). In the machine learning approach, the learning process is an instance of supervised or semi-supervised learning, because a classifier can be built automatically by learning from adequately pre-labeled training documents and then used to classify unseen documents (Feldman and Sanger, 2007). However, the task of manually labeling a large number of documents is time-consuming and even impractical. Given a general classification task, people usually construct training data in two ways. One is augmenting a small number of labeled documents with large amounts of unlabeled ones to guide the learning model iteratively, so that the new classifier can label the unlabeled documents (Jiang, 2009; Nigam et al., 2000). In such studies, the bootstrapping technique is often used to label the unlabeled documents and refine the initial classifier (Gliozzo et al., 2009; Ko and Seo, 2009). The other is collecting training corpora from the Web. Such works use the class name and its associated terms to collect training corpora iteratively (Huang et al., 2005). Cheng (2009) and Day et al. (2009) first sample the Web with a set of given class names, and then query the keywords manually populated from each class via search engines to retrieve quality training documents. Huang et al. (2004) proposed the LiveClassifier system, which also makes use of search engines to automatically construct a training classifier. Although the above methods have reached encouraging performance, they have some limitations in organizing training corpora. For the first method, although some algorithms use only a small set of labeled documents, these still require much time and effort for complicated categories. The second method depends on several external resources, which greatly limits its flexibility and reliability: manually chosen keywords or terms for a class vary from person to person, and different search engines may return different results, with various types of noise contained in the search results (Huang et al., 2004). Motivated by these issues, we design a new system to automatically acquire training corpora. Given a class hierarchy, our basic idea is to collect corpora merely based on class names. First, we crawl webpages starting from several selected websites and identify the navigation bars of these websites. Then each navigational item in the navigation bars is matched against the class names. The valid subpages reached from a navigational item are labeled with the matched class name. After extracting the contents of these subpages, the initial candidate corpora are constructed. Finally, a clustering algorithm is used to remove noise from the corpora. In the remainder of the paper, we denote the automatically constructed corpora as ACC. The main contributions of this paper are: (1) an automatic system for constructing classification corpora is built; it is a new way to collect large-scale, high-quality corpora, and it is completely adaptive to any kind of class hierarchy; (2) to improve the ACC quality, text-clustering-based automatic noise filtering approaches are proposed and analyzed; (3) the proposed system and methods are evaluated on large-scale standard corpora and encouraging results are reached. The remainder of this paper is organized as follows. Section 2 describes the architecture of the automatic corpora construction system; Sections 3 and 4 present experimental settings and results, respectively. The paper closes with conclusions and future work in Section 5.
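The two core steps, matching navigational items against class names and filtering noisy documents, can be roughly illustrated as follows. The substring matching rule and the centroid-similarity filter are simplifications, not the system's actual matching and clustering procedures.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_nav_items(nav_items, class_names):
    """Label a navigational item with a class whose name it contains (or vice versa)."""
    labels = {}
    for item in nav_items:
        for cls in class_names:
            if cls.lower() in item.lower() or item.lower() in cls.lower():
                labels[item] = cls
    return labels

def filter_noise(documents, keep_ratio=0.8):
    """Keep the documents closest to the tf-idf centroid of their candidate class,
    a simplified stand-in for clustering-based noise removal."""
    tfidf = TfidfVectorizer().fit_transform(documents)
    centroid = np.asarray(tfidf.mean(axis=0))
    sims = cosine_similarity(tfidf, centroid).ravel()
    cutoff = sorted(sims, reverse=True)[int(keep_ratio * len(documents)) - 1]
    return [doc for doc, s in zip(documents, sims) if s >= cutoff]
```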
0
In Complex Word Identification (CWI), the goal is to find which words in a given text may challenge the members of a given target audience. It is part of the usual Lexical Simplification pipeline, which is illustrated in Figure 1 . As shown by the results obtained by (Paetzold and Specia, 2013) and (Shardlow, 2014) , ignoring the step of Complex Word Identification in Lexical Simplification can lead simplifiers to neglect challenging words, as well as to replace simple words with inappropriate alternatives.Various strategies have been devised to address CWI and most of them are very simple in nature. For example, to identify complex words, the lexical simplifier for the medical domain in (Elhadad and Sutaria, 2007) uses a Lexicon-Based approach that exploits the UMLS (Bodenreider, 2004) database: if a medical expression is among the technical terms registered in UMLS, then it is complex. The complexity identifier for the lexical simplifier in (Keskisärkkä, 2012) , for Swedish, uses a threshold over word frequencies to distinguish complex from simple words. Recently, however, more sophisticated approaches have been used. (Shardlow, 2013) presents a CWI benchmarking that compares the performance of a Threshold-Based strategy, a Support Vector Machine (SVM) model trained over various features, and a "simplify everything" baseline. (Shardlow, 2013 )'s SVM model has shown promising results, but CWI approaches do not tend to explore Machine Learning techniques and, in particular, their combination. As an effort to fill this gap, in this paper we describe our contributions to the Complex Word Identification task of SemEval 2016. We introduce two systems, SV000gg-Hard and SV000gg-Soft, both of which use straightforward Ensemble Methods to combine different predictions for CWI. These come from a variety of models, ranging from simple Lexicon-Based approaches to more elaborate Machine Learning classifiers.
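The two systems combine the outputs of many binary CWI classifiers by voting; the sketch below illustrates hard (majority) and soft (score-averaging) voting in a generic form. The performance weighting and the underlying classifiers of SV000gg are not reproduced here.

```python
import numpy as np

def hard_vote(predictions):
    """predictions: (n_classifiers, n_words) array of 0/1 labels.
    A word is complex if more than half of the classifiers say so."""
    return (predictions.mean(axis=0) > 0.5).astype(int)

def soft_vote(probabilities, threshold=0.5):
    """probabilities: (n_classifiers, n_words) array of P(complex).
    Average the scores, then threshold."""
    return (probabilities.mean(axis=0) >= threshold).astype(int)

preds = np.array([[1, 0, 1], [1, 0, 0], [0, 1, 1]])              # three classifiers, three words
probs = np.array([[0.9, 0.2, 0.6], [0.7, 0.4, 0.3], [0.4, 0.8, 0.7]])
print(hard_vote(preds), soft_vote(probs))                        # both yield [1 0 1] here
```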
0
The goal of sentiment analysis is to determine the attitude or emotional state held by the author of a piece of text. Automatic sentiment classification that can quickly garner user sentiment is useful for applications ranging from product marketing to measuring public opinion. The volume and availability of short-text user content make automated sentiment analysis systems highly attractive for companies and organizations, despite potential complications arising from their short length and specialized use of language. The popularity of Twitter as a social media platform on which people can readily express their thoughts, feelings, and opinions, coupled with the openness of the platform, provides a large amount of publicly accessible data ripe for analysis; it is a well-established domain for sentiment analysis that reflects real-world attitudes (Pak and Paroubek, 2010; Bollen et al., 2011). In this paper, we look into Twitter sentiment analysis (TSA) as a suitable, core instance of general short-text sentiment analysis (Thelwall et al., 2010, 2012; Kiritchenko et al., 2014; Dos Santos and Gatti, 2014), and encourage the methods and practices presented to be applied across other domains. Building a TSA model that can automatically determine the sentiment of a tweet has received significant attention over the past several years. However, since most state-of-the-art TSA models use machine learning to tune their parameters, their performance, and their relevance to a real-world implementation setting, is highly dependent on the dataset on which they are trained. TSA dataset construction has, unfortunately, received less attention than TSA model design. Many commonly used TSA datasets make assumptions that do not hold in a real-world implementation setting. For example, it is common practice for studies to discard tweets on which there is high annotator disagreement. While some argue that this is done to remove noise resulting from poor annotator quality, this argument does not hold when considering that these datasets present high rates of unanimous annotator agreement. This suggests that the problem is not poor annotators, but, rather, difficult data that does not fall into the established categories of sentiment. Consider the sample tweets in Table 1, drawn from our dataset: one with unanimous agreement on an OBJECTIVE label, one with 60% agreement, and one with complete disagreement. We observe that, as the amount of disagreement across annotations increases, so too does the uncertainty about what the tweet's gold standard label really should be. Though the issues we raise may seem obvious, the absence of their proper treatment in the existing literature suggests the need to systematically consider their implications in sentiment analysis. In this paper, we propose the inclusion of a COMPLICATED class of sentiment to indicate that the text does not fall into the established categories of sentiment. We offer insights into the differences between tweets that receive different levels of inter-annotator agreement, providing empirical evidence that tweets with differing levels of agreement are qualitatively different from each other. Our claims are supported by empirical analysis of a new TSA dataset, the McGill Twitter Sentiment Analysis dataset (MTSA), which we release publicly with this work. The dataset contains 7,026 tweets across five different topic-domains, annotated with 5x coverage.
We release this dataset with the raw annotation results, and hope that researchers and organizations will be able to analyze it and build models that can be applied in real-world sentiment analysis settings. Annotator disagreement information has proven useful in other areas of sentiment analysis (Wilson et al., 2005).

2 Current Problems in TSA

The field of Twitter Sentiment Analysis (TSA) has seen considerable productive work over the past several years, and several large reviews and surveys have been written to highlight the trends and progress of the field, its datasets, and the methods used for building automatic TSA systems (Saif et al., 2013; Medhat et al., 2014; Martínez-Cámara et al., 2014; Giachanou and Crestani, 2016). There are a variety of methods for constructing TSA datasets along a variety of domains, ranging from very specific (e.g., OMD (Shamma et al., 2009)) to general (e.g., SemEval 2013-2014 (Nakov et al., 2016)). While there is the popular Stanford Twitter corpus, constructed with noisy labellings (Go et al., 2009), the more common method of constructing TSA datasets relies on manual annotation (usually crowd-sourced) of tweet sentiment to establish gold-standard labellings according to a pre-defined set of possible label categories (often POSITIVE, NEGATIVE, and NEUTRAL) (Shamma et al., 2009; Speriosu et al., 2011; Thelwall et al., 2012; Saif et al., 2013; Nakov et al., 2016; Rosenthal et al., 2017). One of the earliest manually annotated TSA datasets, the Obama-McCain Debate (OMD) dataset (Shamma et al., 2009), was released with the specific annotator votes for each tweet, rather than a final label assignment. Nonetheless, most work on this dataset filters out tweets with less than two-thirds agreement (Speriosu et al., 2011; Saif et al., 2013) (Table 2). Unfortunately, many later dataset releases have not followed the example of the OMD; the designers of such datasets have opted instead to release only the resultant labelling according to a motivated (but constraining) label-assignment schema, often removing tweets with high inter-annotator disagreement from the final dataset release (Saif et al., 2013; Nakov et al., 2016; Rosenthal et al., 2017). The assumptions and implications resulting from such design choices should be carefully considered by researchers before deciding how to construct or analyze sentiment analysis datasets. Indeed, a current limitation in the field is the lack of attention paid to label-assignment schemes, which ultimately determine the gold-standard labellings of samples. We argue that researchers should consider whether or not the choices made during dataset construction adequately reflect a situation in which automatic sentiment analysis systems would be used in real-world settings.
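A label-assignment scheme that keeps low-agreement tweets instead of discarding them can be expressed as a simple rule; the agreement threshold and label names below are illustrative and not necessarily the exact MTSA scheme.

```python
from collections import Counter

def assign_gold_label(annotations, agreement_threshold=0.6):
    """Majority label if agreement meets the threshold, otherwise COMPLICATED."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(annotations)
    return label if agreement >= agreement_threshold else "COMPLICATED"

print(assign_gold_label(["OBJECTIVE"] * 5))                                            # unanimous
print(assign_gold_label(["POSITIVE", "POSITIVE", "POSITIVE", "NEGATIVE", "NEUTRAL"]))  # 60% agreement
print(assign_gold_label(["POSITIVE", "NEGATIVE", "NEUTRAL", "OBJECTIVE", "POSITIVE"])) # COMPLICATED
```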
0
Bio-events are building blocks of bio-networks depicting profound biological phenomena. Automatically extracting bio-events may assist researchers facing the challenge of a growing amount of biomedical information in textual form. A bio-event carries rich semantic information about biochemical reactions between entities and is therefore informative for studying associations between bio-concepts, e.g., gene and phenotype (Li et al., 2013). A number of methods have been proposed for the automated extraction of biomedical events, including rule-based (Cohen et al., 2009; Kilicoglu and Bergler, 2011; Bui and Sloot, 2011) and machine learning-based (Miwa et al., 2012; Hakala et al., 2013; Munkhdalai et al., 2015) methods. Bui et al. (2013) presented a rule-based method for bio-event extraction using a dictionary and patterns generated automatically from annotated events. TEES (Björne and Salakoski, 2013) is an SVM-based text mining system for the extraction of events and relations from natural language texts; it obtains good performance on several tasks in BioNLP-ST 2013 (Nédellec et al., 2013). As protein-protein interactions (PPI) are a major type of biomedical event, a series of methods concentrate on them (Papanikolaou et al., 2015). Kernel-based methods are widely used for the relation extraction task and obtain good results by leveraging lexical and syntactic information (Airola et al., 2008; Miwa et al., 2009; Li et al., 2015b). Peng et al. (2015) proposed the Extended Dependency Graph (EDG) and evaluated it with two kernels on several PPI datasets, obtaining good improvements in F-value. We previously used a set of basic features, including word embeddings, with a classifier for the BioNLP 2013 Genia dataset; the result was comparable to the state-of-the-art solution (Li et al., 2015a). The system is built with flexibility in mind and is designed to tackle more types of bio-events. In this paper, we introduce LitWay, which is based on the previous infrastructure and uses a machine learning-based method in combination with syntactic rules. The system is tested on a completely different task, the SeeDev task of BioNLP-ST 2016. It achieves the best result among all participants with an F-score of 43.2% (recall and precision are 44.8% and 41.7%, respectively).
0
The success of neural methods in numerous subfields of NLP lead to recent development of neural 'end-to-end' (e2e) architectures in natural language generation (NLG) (Dušek et al., 2020) , where a direct mapping from meaning representations (MRs) to text is learned. While recent neural approaches mostly map flat inputs to texts without representing discourse level information explicitly within MRs, Balakrishnan et al. (2019) argues that discourse relations should be reintroduced into neural generation, echoing what has been long argued in more traditional approaches to natural language processing where discourse relations play one of the central roles in natural language text understanding and generation (Mann and Thompson, 1988; Reiter and Dale, 2000; Lascarides and Asher, 2007) .To study whether discourse relations are beneficial for neural NLG, Stevens-Guille et al. (2020) proposed the Methodius corpus, which was developed as an experiment in recreating the classic rulebased NLG system Methodius (Isard, 2016) using a neural generator. In their corpus, the meaning representation (MR) of a text is a tree that encodes the overall discourse structure of the texts plus facts related by discourse relations therein. They were concerned with whether explicit encoding of discourse relations improves the quality of generated texts by LSTM recurrent neural networks (Hochreiter and Schmidhuber, 1997) . However, they left open the question whether discourse relations are helpful for pre-trained transformer-based (Vaswani et al., 2017) language models (Lewis et al., 2020; Raffel et al., 2019) , which have recently shown remarkable performance on NLG tasks. In this work, we address that question using the T5-Large implementation of Wolf et al. (2019) .A particularly attractive quality of pre-trained models is their ability to generalize from limited data. For example, Peng et al. (2020) proposed to fine-tune a model pre-trained on a large NLG corpus using a small amount of labeled data from a specific domain to adapt the model to generate texts in that domain. In a similar vein, when the labeled data is limited, Arun et al. (2020) suggest to use a large pre-trained model with self-training and knowledge distillation to smaller, faster models. Kale and Rastogi (2020) argue that pre-trained language models make it possible to transform a sequence of semantically correct, but (possibly) ungrammatical template-based texts into a natural sounding, felicitous text of English. They find that template-based textual input is beneficial to use with pre-trained language models when the model needs to generalize from relatively few examples.Given these considerations, we cannot answer the question whether it is helpful to include discourse relations in the input to a pre-trained model for NLG without considering the form of the input, the size of the training data, and the extent to which the test data goes beyond what has been seen in training. As such, we conduct experiments using several versions of the Methodius corpus, where these versions possess one or more of the following properties: (a) discourse relations included in the MR; (b) discourse relations excluded from the MR; (c) tree-structured MR (a hierarchically structured representation of the meaning); (d) flat, textual MR (i.e., non hierarchically structured). We are furthermore concerned with how the linguistic knowledge encoded in pre-trained language models interacts with the different versions of the corpus. 
We want to be able to scrutinize the structure of the outputs, i.e., texts, too since our intention is to check the models' capabilities in realizing particular phenomena. For these purposes, we conduct experiments using the following setup: (1) Use various portions of the labeled data. (2) Train zero-shot models (with respect to certain discourse-related phenomena) together with various few-shot models (with respect to the same phenomena). (3) Test various aspects of generated texts, both with respect to discourse structure congruence and correctness (factual information).
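The contrast between MR variants can be made concrete by how a tree-structured MR is linearized for a text-to-text model such as T5. The bracket notation, relation name and facts below are invented for illustration and do not reproduce the Methodius MR format.

```python
def linearize(mr, include_discourse=True):
    """Flatten a tree-structured MR into a string for a text-to-text model."""
    if isinstance(mr, dict):                     # internal node: a discourse relation
        children = " ".join(linearize(c, include_discourse) for c in mr["children"])
        if include_discourse:
            return f"[{mr['relation']} {children} ]"
        return children                          # drop the relation label, keep the facts
    return f"[FACT {mr} ]"                       # leaf node: an atomic fact

mr = {"relation": "CONTRAST",
      "children": ["amphora | period | archaic", "statue | period | classical"]}
print(linearize(mr, include_discourse=True))     # hierarchical MR with the relation marked
print(linearize(mr, include_discourse=False))    # flat MR with the relation removed
```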
0
Recently, conversational recommender systems (CRS) (Chen et al., 2019; Sun and Zhang, 2018; Li et al., 2018; Zhang et al., 2018b; Liao et al., 2019) have become an emerging research topic; they aim to provide high-quality recommendations to users through natural language conversations. Generally, a CRS is composed of a recommender component and a dialog component, which make suitable recommendations and generate proper responses, respectively. To develop an effective CRS, high-quality datasets are crucial for learning the model parameters. Existing CRS datasets roughly fall into two main categories, namely attribute-based user simulation (Sun and Zhang, 2018; Lei et al., 2020; Zhang et al., 2018b) and chit-chat based goal completion (Li et al., 2018; Chen et al., 2019). These datasets usually assume that a user has clear, immediate requests when interacting with the system. They lack the proactive guidance (or transitions) from non-recommendation scenarios to the desired recommendation scenario. Indeed, it has become increasingly important that recommendations can be naturally triggered according to the conversation context (Kang et al., 2019). This issue has been explored to some extent by the DuRecDial dataset, which has characterized the goal-planning process by constructing a goal sequence. However, it mainly focuses on type switch or coverage for dialog sub-tasks (e.g., non-recommendation, recommendation and question answering). Explicit semantic transitions that lead up to the recommendation have not been well studied or discussed in the DuRecDial dataset. Besides, most existing CRS datasets (Li et al., 2018) mainly rely on human annotators to create user profiles or generate the conversations. It is difficult to capture rich, complicated cases from real-world applications with a limited number of human annotators, since the generated conversations mainly reflect the characteristics (e.g., interests) of the annotators or predefined identities. To tackle the above problems, we construct a new CRS dataset named Recommendation through Topic-Guided Dialog (TG-ReDial). It consists of 10,000 two-party dialogues between a seeker and a recommender in the movie domain. There are two new features in our dataset. First, we explicitly create a topic thread to guide the entire content flow of each conversation. Starting with a non-recommendation topic, the topic thread naturally guides the user to the recommendation scenario through a sequence of evolving topics. Our dataset enforces natural transitions towards recommendation through chit-chat conversations. Second, our dataset has been created in a semi-automatic way involving reasonable and controllable human annotation effort. The key idea is to align user identities in conversations with real users from a popular movie review website. In this way, the recommended movies, the created topic threads and the recommendation reasons are mined or generated based on real-world data. The major role of the human annotators is to revise, polish or rewrite the conversation data when necessary. Therefore, we do not rely on human annotators to create personalized user profiles as in previous studies (Li et al., 2018), making our conversation data closely resemble real-world cases.

Figure 1: An illustrative example of the TG-ReDial dataset. We utilize real data to construct the recommended movies, topic threads, user profiles and utterances. Other user-related information (e.g., historical interaction records) is also available in our dataset.
Figure 1 presents an illustrative example for our TG-ReDial dataset.Based on the TG-ReDial dataset, we study a new task of topic-guided conversational recommendation, which can be decomposed into three sub-tasks, namely item recommendation, topic prediction, and response generation. Topic prediction aims to create the topic thread that leads to the final recommendation; item recommendation provides suitable items that meet the user needs; and response generation produces proper reply in natural language. In our approach, the recommender module utilizes both historical interaction and dialog text for deriving accurate user preference, which are modeled by sequential recommendation model SASRec (Kang and McAuley, 2018) and pre-trained language model BERT (Devlin et al., 2019) , respectively. The dialog module consists of a topic prediction model and a response generation model. The topic prediction model integrates three kinds of useful data (i.e., historical utterances, historical topics and user profile) to predict the next topic. The response generation model is implemented based on GPT-2 (Radford et al., 2019) to produce responses for guiding users or giving persuasive recommendation. To validate the effectiveness of our approach, we conduct extensive experiments on TG-ReDial dataset to compare our approach with competitive baseline models.Our main contributions are summarized as follows:(1) We release a new dataset TG-ReDial for conversational recommender systems. It emphasizes natural topic transitions that leads to the final recommendation. Our dataset is created in a semi-automatic way, and hence human annotation is more reasonable and controllable.(2) Based on TG-ReDial, we present the task of topic-guided conversational recommendation, consisting of item recommendation, topic prediction and response generation. We further develop an effective solution to leverage multiple kinds of data signals based on Transformer and its variants BERT and GPT-2.
0
Mapping documents or sentences to vectors (Le and Mikolov, 2014; Pagliardini et al., 2018) is the foundation of various natural language processing (NLP) tasks, such as text classification (Kim, 2014) , paraphrase detection (Socher et al., 2011) , natural language inference (Bowman et al., 2015) , question answering (Zhou et al., 2015) , etc. The most straightforward approach to sentence representation uses the bag-of-words model that represents a sentence as a bag of its constituent words, disregarding grammar and even the word order but keeping multiplicity. Another similar approach is called Glove (Pennington et al., 2014) , which takes the average of word vectors of the constituent words of a sentence. These approaches are typically efficient to train, while they ignore the sequential characteristic of the text. To account for the word order, Skip-Thought (Kiros et al., 2015) learns sentence representation in an unsupervised way inspired by the skip-gram. It aims to predict the neighboring sentences or phrases for a given sentence. However, the training process of the Skip-Thought is very slow, which motivates the FastSent (Kenter et al., 2016) to speed up the training by representing a sentence as a sum of its constituent word vectors. Although FastSent is faster than Skip-Thought in training, it sacrifices the order of words in a sentence, which is important in language models, such as the n-gram feature. For example, Gupta et al. (2019) utilize the bi-gram and even tri-gram to train their embedding model.As discussed above, most existing works of sentence representation require pre-trained word vectors as the input or initialization. Sentence representation is taken as a downstream task of word representation. However, when human beings read a sentence or an article, their eyes in fact receive a series of text images which are then passed to the brain for recognition and understanding. Hence, a natural way of word representation is to use visual shapes of the words or characters as features directly (Shimada et al., 2016; Su and Lee, 2017; Liu et al., 2017; Sun et al., 2019; Liu and Yin, 2020) . For example, Su and Lee (2017) and Shimada, Kotani, and Iyatomi (2016) take Chinese and Japanese characters as images and apply a subsequent convolutional autoencoder to take those images as input and then output lowdimensional character embeddings. Liu and Yin (2020) extract both the forward and backward n-gram features from the text's pixel embedding.We propose to learn the sentence embeddings using the non-pretrained word images but fold them into sentence images as input. Our model utilizes multiple bi-gram, tri-gram, quat-gram even five-gram embeddings of both forward and backward orders of words. Current research in NLP tends to use deep and complex models, which make the performance compromise to the model complexity. However, the proposed model has a lightweight structure as shown in Figure 1 . In detail, we render words or characters of a sentence into images and then fold them into a 3-dimensional sentence tensor X ∈ R w×h×l , where w ×h is the size of the word or character image and l is the length of the sentence. Each slice X i ∈ R w×h corresponds to a word or a character image. 
Furthermore, we propose to fully exploit the language feature (i.e., the word order in a sentence) with two distinctive strategies: (1) extracting multiple n-gram features with several 3-dimensional convolutional kernels of different sizes (n is the number of words covered by the kernel); (2) learning both the forward and backward semantic information of the sentence with bi-directional convolutions, as shown in Figure 1. We name the proposed model 3D-ConvLM (3-dimensional Convolutional Language Model). We choose multiple values of n to integrate multiple n-gram convolutional kernels. Taking the demo in Figure 1 as an example, we use bi-gram, tri-gram, and quad-gram information together. Moreover, this n-gram information is constructed from both the normal text order and the reverse order through bi-directional convolutions. A subsequent 1-dimensional max-over-time pooling is applied to each channel of the 2-dimensional feature map output by each n-gram model. After pooling, feature maps from each n-gram model of the two directions are concatenated as the output feature of the convolutional layer. Finally, three fully connected (FC) layers are used for text embedding learning. The contributions of our work are three-fold: (1) we propose to represent a sentence or an article with a video-like 3-dimensional tensor, where each frame of the tensor represents one word in the sentence or article, which provides an alternative view for approaching NLP with computer vision techniques; (2) we use a 3-dimensional convolutional kernel to learn n-gram features from the text tensor; (3) we propose to use bi-directional convolutions to extract semantic information in both the text's forward and backward orders. The proposed 3D-ConvLM extracts and integrates multiple n-gram features during forward and backward convolutions, which further increases the flexibility of the input text information. We evaluate 3D-ConvLM on text classification and sentence matching, and study the difference between traditional Chinese and simplified Chinese under the proposed framework.

2 Related Works

For sentence matching, the model (1) transforms the sentence pair (S_A, S_B) into 3-dimensional tensors X_A and X_B; (2) applies multiple bi-directional n-gram detectors to X_A and X_B separately, outputting feature maps A and B; and (3) imposes 1-dimensional max-over-time pooling on A and B, outputting A' and B', respectively. The SICK dataset (Marelli et al., 2014) contains about 10,000 English sentence pairs; each pair was annotated for relatedness (SICK-R) and entailment (SICK-E) by means of crowdsourcing. Samples in STS14 (Agirre et al., 2014) are labeled in a similar way to SICK-R; the collection contains 36,000 sentence pairs. We run the baselines with SentEval (Conneau and Kiela, 2018), a sentence embedding evaluation toolkit.
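The core operation, an n-gram detector realized as a 3-dimensional convolution over the stacked word images followed by max-over-time pooling, can be sketched as follows. Kernel sizes, channel counts and the weight sharing between the two directions are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class NGram3DConv(nn.Module):
    """One bi-directional n-gram detector over a 'video-like' sentence tensor of
    shape (batch, 1, length, height, width)."""
    def __init__(self, n=2, channels=16, img_h=24, img_w=24):
        super().__init__()
        # The kernel spans n word images in depth and each full image in height/width.
        self.conv = nn.Conv3d(1, channels, kernel_size=(n, img_h, img_w))

    def forward(self, x):
        forward_feats = self.conv(x)                          # (B, C, L-n+1, 1, 1)
        backward_feats = self.conv(torch.flip(x, dims=[2]))   # reversed word order
        feats = torch.cat([forward_feats, backward_feats], dim=1).squeeze(-1).squeeze(-1)
        return feats.max(dim=2).values                        # max-over-time pooling -> (B, 2C)

x = torch.randn(4, 1, 20, 24, 24)         # 4 sentences, each 20 word images of 24x24 pixels
bigram, trigram = NGram3DConv(n=2), NGram3DConv(n=3)
features = torch.cat([bigram(x), trigram(x)], dim=1)   # concatenate multiple n-gram features
print(features.shape)                     # torch.Size([4, 64])
```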
0
For speech recognition, the detection of speech affects recognition performance. A robust word boundary detection method in the presence of variable-level noise is necessary and is studied in this paper. Depending on the characteristics of speech, a variety of parameters have been proposed for boundary detection. They include the time energy (the magnitude in the time domain), the zero crossing rate (ZCR) [1] and pitch information [2]. These parameters usually fail to detect word boundaries when the signal-to-noise ratio (SNR) is low. Another parameter concerning the frequency domain has also been proposed recently. Based on the frequency energy, the time-frequency (TF) parameter [3], which sums the energy in the time domain and the frequency energy, was presented. The TF-based algorithm may work well for fixed-level background noise. However, its detection performance degrades for background noise of various levels. For this problem, some modified TF parameters have been proposed [4]. In [5], the idea of using Wavelet transform features as speech detection features was proposed. In this paper, we present a new Low-band Wavelet Energy (LWE) parameter which separates speech from noise in the Wavelet transform domain. Computation of the LWE parameter is easier than that of the modified TF parameters, and it is shown in the experiment section that better detection performance is achieved. After the features for detection have been extracted, the next step is to determine thresholds and decision rules. Many decision methods based on computational intelligence techniques have been proposed, such as fuzzy neural networks (FNNs) [4] and neural networks (NNs) [6]. Generalization performance may be poor when FNNs and NNs are over-trained. To cope with the low generalization ability problem, a new learning method, the Support Vector Machine (SVM), has been proposed [7, 8]. SVM is a useful learning method whose formulation is based on the principle of structural risk minimization. Instead of minimizing an objective function based on training error, SVM attempts to minimize a bound on the generalization error. SVM has gained wide acceptance due to its high generalization ability for a wide range of applications. For this reason, this paper uses an SVM as the detector. The rest of the paper is organized as follows. Section II introduces the derivation and analysis of the LWE and ZCR parameters. Section III describes the SVM detector. Experiments on word boundary detection for noisy speech recognition are presented in Section IV. Finally, Section V draws conclusions.
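Once the LWE and ZCR features are extracted per frame, the SVM detector reduces to a standard binary classifier; the scikit-learn sketch below uses synthetic feature values as placeholders rather than real measurements.

```python
import numpy as np
from sklearn.svm import SVC

# Each frame is described by two features: low-band wavelet energy and ZCR.
X_train = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.05, 0.8], [0.7, 0.25], [0.15, 0.75]])
y_train = np.array([1, 1, 0, 0, 1, 0])           # 1 = speech frame, 0 = noise frame

detector = SVC(kernel="rbf", C=1.0, gamma="scale")
detector.fit(X_train, y_train)

frames = np.array([[0.85, 0.22], [0.12, 0.78]])  # new frames to classify
labels = detector.predict(frames)                # word boundaries follow from 0/1 transitions
print(labels)
```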
0
Fill-in-the-blank is a popular style of exercise used for evaluating the proficiency of language learners, from homework to official tests such as TOEIC and TOEFL. As shown in Figure 1, a quiz is composed of 4 parts: (1) the sentence, (2) the blank to fill in, (3) the correct answer, and (4) the distractors (incorrect options). Figure 1 shows the example sentence "Each side, government and opposition, is _____ the other for the political crisis, and for the violence." However, it is not easy to come up with appropriate distractors without rich experience in language education. There are two major requirements that distractors should satisfy: reliability and validity (Alderson et al., 1995). First, distractors should be reliable; they must be exclusive against the answer, and none of the distractors can replace the answer, to avoid allowing multiple correct answers in one quiz. Second, distractors should be valid; they must discriminate learners' proficiency adequately. There are previous studies on distractor generation for automatic fill-in-the-blank quiz generation (Mitkov et al., 2006). Hoshino and Nakagawa (2005) randomly selected distractors from words in the same document. Sumita et al. (2005) used an English thesaurus to generate distractors. Liu et al. (2005) collected distractor candidates that are close to the answer in terms of word frequency, and ranked them by an association/collocation measure between the candidate and the surrounding words in a given context. Dahlmeier and Ng (2011) generated candidates for collocation error correction for English as a Second Language (ESL) writing, using a paraphrasing technique with native language (L1) pivoting. This method takes a sentence containing a collocation error as input, translates it into the L1, and then translates it back to English to generate correction candidates. Although the purpose is different, the technique is also applicable to distractor generation. To the best of our knowledge, there have been no studies that fully employ actual errors made by ESL learners for distractor generation. In this paper, we propose automated distractor generation methods using a large-scale ESL corpus with a discriminative model. We focus on semantically confusing distractors that measure learners' competence to distinguish word senses and select an appropriate word. We especially target verbs, because verbs are difficult for language learners to use correctly (Leacock et al., 2010). Our models are trained on error patterns extracted from an ESL corpus, and can generate exclusive distractors that take the context of a given sentence into consideration. We conduct a human evaluation using 3 native and 23 non-native speakers of English. The results show that 98.3% of distractors generated by our methods are reliable. Furthermore, the non-native speakers' performance on quizzes generated by our method has a correlation coefficient of about 0.76 with their TOEIC scores, which shows that distractors generated by our methods satisfy validity. The contributions of this paper are twofold: (1) we present methods for generating reliable and valid distractors, and (2) we demonstrate the effectiveness of an ESL corpus and discriminative models for distractor generation.
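At a high level, the approach selects distractors from verbs that learners actually confused with the target and rejects any candidate that would also fit the context; the sketch below illustrates that pipeline with invented error counts and a placeholder acceptability model standing in for the discriminative classifier.

```python
def generate_distractors(sentence, answer, error_patterns, fits_context, k=3):
    """Rank distractor candidates for a blanked verb.
    `error_patterns` maps a correct verb to verbs that ESL learners confused it with
    (with counts); `fits_context` returns True when a candidate is acceptable in the
    sentence and must therefore be rejected to keep the quiz reliable."""
    candidates = sorted(error_patterns.get(answer, {}).items(),
                        key=lambda kv: kv[1], reverse=True)
    distractors = [verb for verb, _ in candidates
                   if not fits_context(sentence, verb)]       # enforce exclusiveness
    return distractors[:k]

# The verb and counts below are invented for illustration.
patterns = {"blaming": {"accusing": 12, "charging": 7, "criticizing": 5, "saying": 2}}
never_fits = lambda sentence, verb: False                     # placeholder classifier
print(generate_distractors("Each side, government and opposition, is _____ the other ...",
                           "blaming", patterns, never_fits))
```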
0
Topic segmentation is a fundamental NLP task that has received considerable attention in recent years (Barrow et al., 2020; Glavas and Somasundaran, 2020; Lukasik et al., 2020) . It can reveal important aspects of a document semantic structure by splitting the document into topical-coherent textual units. Taking the Wikipedia article in Table 1 as an example, without the section marks, a reliable topic segmenter should be able to detect the correct boundaries within the text and chunk this article into the topical-coherent units T1, T2 and T3. The results of topic segmentation can further benefit other key downstream NLP tasks such as document summarization (Mitra et al., 1997; Riedl and Biemann, 2012a; Xiao and Carenini, 2019) , question answering (Oh et al., 2007; Diefenbach et al., 2018) , machine reading (van Dijk, 1981; Preface: Marcus is a city in Cherokee County, Iowa, United States.[T1] History: S1: The first building in Marcus was erected in 1871. S2: Marcus was incorporated on May 15, 1882.[T2] Geography: S3: Marcus is located at (42.822892, -95.804894) . S4: According to the United States Census Bureau, the city has a total area of 1.54 square miles, all land.[T3] Demographics: S5: As of the census of 2010, there were 1,117 people, 494 households, and 310 families residing in the city. ... ... Saha et al., 2019) and dialogue modeling (Xu et al., 2020; .A wide variety of techniques have been proposed for topic segmentation. Early unsupervised models exploit word statistic overlaps (Hearst, 1997; Galley et al., 2003) , Bayesian contexts (Eisenstein and Barzilay, 2008) or semantic relatedness graphs (Glavaš et al., 2016) to measure the lexical or semantic cohesion between the sentences or paragraphs and infer the segment boundaries from them. More recently, several works have framed topic segmentation as neural supervised learning, because of the remarkable success achieved by such models in most NLP tasks (Wang et al., , 2017 Sehikh et al., 2017; Koshorek et al., 2018; Arnold et al., 2019) . Despite minor architectural differences, most of these neural solutions adopt Recurrent Neural Network (Schuster and Paliwal, 1997) and its variants (RNNs) as their main framework. On the one hand, RNNs are appropriate because topic segmentation can be modelled as a sequence labeling task where each sentence is either the end of a segment or not. On the other hand, this choice makes these neural models limited in how to model the context. Because some sophisticated RNNs (eg., LSTM, GRU) are able to preserve long-distance information (Lipton et al., 2015; Sehikh et al., 2017; Wang et al., 2018) , which can largely help language models. But for topic segmentation, it is critical to supervise the model to focus more on the local context.As illustrated in Table 1 , the prediction of the segment boundary between T1 and T2 hardly depends on the content in T3. Bringing in excessive long-distance signals may cause unnecessary noise and hurt performance. Moreover, text coherence has strong relation with topic segmentation (Wang et al., 2017; Glavas and Somasundaran, 2020) . For instance, in Table 1 , sentence pairs from the same segment (like <S1, S2> or <S3, S4>) are more coherent than sentence pairs across segments (like S2 and S3). 
Arguably, with a proper way of modeling the coherence between adjacent sentences, a topic segmenter can be further enhanced. In this paper, we propose to enhance a state-of-the-art (SOTA) topic segmenter (Koshorek et al., 2018) based on a hierarchical attention BiLSTM network to better model the local context of a sentence in two complementary ways. First, we add a coherence-related auxiliary task to make our model learn more informative hidden states for all the sentences in a document. More specifically, we refine the objective of our model to encourage smaller coherence for sentences from different segments and larger coherence for sentences from the same segment. Secondly, we enhance context modeling by utilizing restricted self-attention (Wang et al., 2018), which enables our model to pay attention to the local context and make better use of the information from the closer neighbors of each sentence (i.e., with respect to a window of explicitly fixed size k). Our empirical results show (1) that our proposed context modeling strategy significantly improves the performance of the SOTA neural segmenter on three datasets, (2) that the enhanced segmenter is more robust in a domain transfer setting when applied to four challenging real-world test sets sampled differently from the training data, and (3) that our context modeling strategy is also effective for segmenters trained on other challenging languages (e.g., German and Chinese), rather than just English.
0
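To make the local-context idea from the topic-segmentation introduction above concrete, here is a minimal numpy sketch of restricted self-attention over sentence encodings, where each sentence may only attend to neighbors within a window of k sentences. The dimensions and masking constant are illustrative; the actual model additionally uses a hierarchical BiLSTM encoder and a coherence-related auxiliary loss, which are not reproduced here.

```python
# A minimal numpy sketch of restricted (windowed) self-attention over sentence
# vectors: each sentence may only attend to neighbors within a fixed window k.
# This illustrates the idea only, not the authors' exact architecture.
import numpy as np

def restricted_self_attention(H, k):
    """H: (n_sentences, d) sentence encodings; k: window size."""
    n, d = H.shape
    scores = H @ H.T / np.sqrt(d)                 # pairwise attention scores
    # Mask out positions farther than k sentences away.
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) > k
    scores[mask] = -1e9
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ H                            # locally contextualized sentences

H = np.random.randn(6, 8)                          # 6 sentences, 8-d encodings
print(restricted_self_attention(H, k=1).shape)     # (6, 8)
```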
The adoption of XML as a standard for the storage, retrieval and delivery of information has meant that many enterprises have large corpora in this format. Very often, information components in these corpora require translation. Normally, such enterprises have enjoyed all of the benefits of XML on the information creation side, but very often fail to maximize all the benefits that XML-based translation can provide. The separation of form and content that is inherent in the concept of XML makes XML documents easier to localize than traditional proprietary text processing or composition systems. Nevertheless, decisions made during the creation of the XML structure and the authoring of documents can have a significant effect on the ease with which the source language text can be localized into other languages. The difficulties introduced into XML documents through inappropriate use of syntactical tools can have a profound effect on translatability and cost. It may even require complete re-authoring of documents in order to make them translatable. This is worth noting, as a very high proportion of XML documents are candidates for translation into other languages. A key concept in the treatment of translatable text within XML documents is that of the "text unit". A text unit is defined as the content of an XML element, or the subdivision thereof into recognizable sentences that are linguistically complete as far as translation is concerned.
0
The ability to spot deception is an issue in many important venues: in police, security, border crossing, customs, and asylum interviews; in congressional hearings; in financial reporting; in legal depositions; in human resource evaluation; and in predatory communications, including Internet scams, identity theft, and fraud. The need for rapid, reliable deception detection in these high stakes venues calls for the development of computational applications that can distinguish true from false claims. Our ability to test such applications is, however, hampered by a basic issue: the ground truth problem. To be able to recognize the lie, the researcher must not only identify distinctive behavior when someone is lying but must also ascertain whether the statement being made is true or not. The prevailing method for handling the ground truth problem is the controlled experiment, where truth and lies can be managed. While controlled laboratory experiments have yielded important insights into deceptive behavior, ethical and proprietary issues have put limits on the extent to which controlled experiments can model deception in the "real world". High stakes deception cannot be simulated in the laboratory without serious ethics violations. Hence the motivation to lie is weak, since subjects have no personal loss or gain at stake. Motivation is further compromised when the lies are sanctioned by the experimenter, who directs and condones the lying behavior (Stiff et al., 1994). With respect to the studies themselves, replication of laboratory deception research is rarely done due to differences in the data sets and subjects used by different research groups. The result, as Vrij (2008) points out, is a lack of generalizability across studies. We believe that many of the issues holding back deception research could be resolved through the construction of standardized corpora that would provide a base for expanding deception studies, comparing different approaches, and testing new methods. As a first step towards standardization, we offer a set of practical guidelines for building corpora that are customized for studies of high stakes deception. The guidelines are based on our experiences in creating a corpus of real world language data that we used for testing the deception detection approach described in Fitzpatrick and Bachenko (2010). We hope that our experience will encourage other researchers to build and contribute corpora with the goal of establishing a shared resource that passes the test of ecological validity. Section 2 of the paper describes the data collection initiative we are engaged in, Section 3 describes the methods used to corroborate the claims in the data, and Section 4 concludes our account and covers lessons learned. We should point out that the ethical considerations that govern our data collection are subject to the United States Code of Federal Regulations (CFRs) for the protection of human subjects and may differ in some respects from those in other countries.
0
Roll call votes are official records of how politicians vote on bills (potential laws) in the United States House of Representatives and Senate. Reliable prediction of these votes, using historical voting records and the text of bills, can be used to forecast political decisions on key issues, which can be informative for the electorate and other political stakeholders. Prior work has used politicians' voting records as a means to study their ideological stances (Poole and Rosenthal, 1985; Clinton et al., 2004), as well as roll call votes combined with the text of the corresponding bills to predict votes on newly drafted bills (Gerrish and Blei, 2012; Kraft et al., 2016; Kornilova et al., 2018). However, these approaches fail to make good predictions for the votes of politicians whose records are not established, such as new candidates for office - a time when this information can be most useful for the electorate. We hypothesize that additional sources of knowledge about new politicians can predict their future votes. In this work we explore two sources of additional knowledge about politicians to better predict roll call votes: news article text about the politicians, and Freebase (Bollacker et al., 2008), which is a manually curated knowledge base (KB). Relevant news articles may contain words that are indicative of a politician's stance with respect to specific issues. A KB such as Freebase is likely to contain rich information such as events a congressperson attends, people they are related to, and personal details such as schools they were educated at; this information may be correlated with a politician's stance on specific issues (Sunshine Hillygus, 2005; Duckitt and Sibley, 2010; Kraut and Lewis, 1975). Information in a KB is likely to be more restricted, but more reliable, than information extracted from news articles. We integrate these sources of information into the embedding-based prediction model proposed in Kraft et al. (2016). We experiment with two representations for news articles: as the mean of the embeddings of the words in the article, and as a bag of words. To represent information in KBs, we first capture the KB relations using Universal Schema (US) (Riedel et al., 2013), and then construct relation embeddings using a neural network. We evaluate the proposed approaches on multiple sessions of Congress under two settings: (1) with only politicians that are observed at training time, which is similar to the setting of prior work (Kraft et al., 2016; Kornilova et al., 2018), and (2) where a subset of politicians' voting patterns are never observed at training time, representing the new-candidate-for-office setting. We establish a new state-of-the-art under the evaluation framework used by most prior work (Setting 1). We also show that our approach outperforms a state-of-the-art model in our evaluation framework (Setting 2). Compared to the previous state-of-the-art model for roll call prediction, in Setting (1) our best approach gives an improvement in accuracy of 1.0% (an error reduction of 8.7%), and in Setting (2) our best approach gives an improvement in accuracy of 33.4% (an error reduction of 66.7%). Additionally, in Setting (2), augmentation via the KB gives 4.2% more accurate predictions than augmentation from news text. Code to reproduce our experiments can be found at https://github.com/ronakzala/universal-schema-bloomberg/.
0
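The embedding-based prediction described above reduces to a toy sketch: a politician's vote on a bill is scored by a dot product between the politician's representation and the bill-text representation, and for an unseen politician the representation is backed off to auxiliary sources (here, the mean of word vectors from news text). All vectors, names and vocabulary below are invented placeholders, not the paper's trained model.

```python
# A toy sketch of embedding-based roll call prediction with a news-text backoff
# for unseen politicians. Everything here is randomly initialized for illustration.
import numpy as np

rng = np.random.default_rng(0)
word_vecs = {w: rng.normal(size=16) for w in
             ["healthcare", "tax", "budget", "reform", "education"]}

def text_embedding(words):
    """Mean of word vectors; used for both bill text and news articles."""
    return np.mean([word_vecs[w] for w in words if w in word_vecs], axis=0)

# Politician representation: learned embedding if seen in training,
# otherwise backed off to an embedding of news text about them.
trained_politicians = {"rep_a": rng.normal(size=16)}
news_about = {"new_candidate_b": ["education", "reform", "budget"]}

def politician_embedding(name):
    if name in trained_politicians:
        return trained_politicians[name]
    return text_embedding(news_about[name])

def predict_vote(name, bill_words):
    score = politician_embedding(name) @ text_embedding(bill_words)
    return "yea" if score > 0 else "nay"

print(predict_vote("rep_a", ["healthcare", "reform"]))
print(predict_vote("new_candidate_b", ["education", "budget"]))
```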
Several efforts are being made to incorporate syntactic analysis into phrase-based statistical translation (PBT) (Och, 2002; Koehn et al., 2003), which represents the state of the art in terms of robustness in modeling local word reordering and efficiency in decoding. Syntactic analysis is meant to improve on some of the pitfalls of PBT. Translation option selection: candidate phrases for translation are selected as consecutive n-grams. This may fail to consider certain syntactic phrases if their component words are far apart. Phrase reordering: especially for languages with different word order, e.g. subject-verb-object (SVO) and subject-object-verb (SOV) languages, long distance reordering is a problem. This has been addressed with a distance-based distortion model (Och, 2002; Koehn et al., 2003), lexicalized phrase reordering (Tillmann, 2004; Koehn et al., 2005; Al-Onaizan and Papineni, 2006), a hierarchical phrase reordering model (Galley and Manning, 2008), or by reordering the nodes in a dependency tree (Xu et al., 2009). Movement of translations of fertile words: a word with fertility higher than one can be translated into several words that do not occur consecutively. For example, the Italian sentence "Lui partirà domani" translates into German as "Er wird morgen abreisen". The Italian word "partirà" (meaning "will leave") translates into "wird abreisen" in German, but the infinitive "abreisen" goes to the end of the sentence with a movement that might be quite long. Reordering of phrases is necessary because of different word order typologies of languages: constituent word order like SOV for Hindi vs. SVO for English; order of modifiers like noun-adjective for French and Italian vs. adjective-noun in English. Xu et al. (2009) tackle this issue by introducing a reordering approach based on manual rules that are applied to the parse tree produced by a dependency parser. However, the splitting phenomenon mentioned above requires more elaborate solutions than simple reordering grammatical rules. Several schemes have been proposed for improving PBMT systems based on dependency trees. Our approach extends basic PBT as described in (Koehn et al., 2003) with the following differences: we perform tree-to-string translation. The dependency tree of the source language sentence allows identifying syntactically meaningful phrases as translation options, instead of n-grams. However, these phrases are then still looked up in a Phrase Translation Table (PT) quite similarly to PBT. Thus we avoid the sparseness problem that other methods based on treelets suffer from (Quirk et al., 2005). Reordering of phrases is carried out by traversing the dependency tree and selecting as options phrases that are children of each head. Hence a far away but logically connected portion of a phrase can be included in the reordering. Phrase combination is performed by combining the translations of a node with those of its head. Hence only phrases that have a syntactic relation are connected. The Language Model (LM) is still consulted to ensure that the combination is proper, and the overall score of each translation is carried along.
When all the links in the parse tree have been reduced, the root node contains candidate translations for the whole sentence. Alternative visit orderings of the tree may produce different translations, so the final translation is the one with the highest score. Some of the benefits of our approach include: 1) reordering is based on syntactic phrases rather than arbitrary chunks; 2) computing the future cost estimation can be avoided, since the risk of choosing an easier n-gram is mitigated by the fact that phrases are chosen according to the dependency tree; 3) since we are translating from tree to string, we can directly exploit the standard phrase tables produced by PBT tools such as GIZA++ (Och and Ney, 2000) and Moses (Koehn, 2007); 4) integration with the parser: decoding can be performed incrementally while a dependency Shift/Reduce parser builds the parse tree (Attardi, 2006).
0
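The tree-to-string combination described in the preceding introduction can be illustrated with a toy example built around the "Lui partirà domani" sentence. The phrase table, dependency tree, and language-model score below are hypothetical stand-ins; the point is only that inserting a dependent's translation at any position inside its head's phrase lets the fertile translation "wird ... abreisen" wrap around "morgen".

```python
# A toy sketch of combining a head's phrase-table translation with its
# dependents' translations, scored by a stand-in language model.
phrase_table = {"Lui": "Er", "partirà": "wird abreisen", "domani": "morgen"}
tree = {"partirà": ["Lui", "domani"]}              # head -> dependents
root = "partirà"

def lm_score(tokens):
    # Stand-in for a real language model: reward the German verb bracket
    # "Er wird morgen abreisen"; everything else gets a low score.
    return 1.0 if " ".join(tokens) == "Er wird morgen abreisen" else 0.1

def translate(node):
    """Combine the node's phrase-table translation with its dependents'."""
    candidates = [phrase_table[node].split()]
    for dep in tree.get(node, []):
        dep_toks = translate(dep)
        expanded = []
        for cand in candidates:
            # Insert the dependent's translation at every position, so a
            # fertile head like "wird abreisen" can wrap around "morgen".
            for cut in range(len(cand) + 1):
                expanded.append(cand[:cut] + dep_toks + cand[cut:])
        candidates = expanded
    return max(candidates, key=lm_score)           # LM picks the best combination

print(" ".join(translate(root)))                   # Er wird morgen abreisen
```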
Not all words are equally important to the meaning of a spoken message. Identifying the importance of words is useful for a variety of tasks including text classification and summarization (Hong and Nenkova, 2014; Yih et al., 2007). Considering the relative importance of words can also be valuable when evaluating the quality of the output of an automatic speech recognition (ASR) system for specific tasks, such as caption generation for Deaf and Hard of Hearing (DHH) participants in spoken meetings (Kafle and Huenerfauth, 2017). As described by Berke et al. (2018), interlocutors may submit audio of individual utterances through a mobile device to a remote ASR system, with the text output appearing on an app for DHH users. With ASR being applied to new tasks such as this, it is increasingly important to evaluate ASR output effectively. Traditional Word Error Rate (WER)-based evaluation assumes that all word transcription errors equally impact the quality of the ASR output for a user. However, this is less helpful for various applications (McCowan et al., 2004; Morris et al., 2004). In particular, Kafle and Huenerfauth (2017) found that metrics with differential weighting of errors based on word importance correlate better with human judgment than WER does for the automatic captioning task. However, prior models based on text features for word importance identification (Sheikh et al., 2016) face challenges when applied to conversational speech: • Difference from Formal Texts: Unlike formal texts, conversational transcripts may lack capitalization or punctuation, use informal grammatical structures, or contain disfluencies (e.g. incomplete words or edits, hesitations, repetitions), filler words, or more frequent out-of-vocabulary (and invented) words (McKeown et al., 2005). • Availability and Reliability: Text transcripts of spoken conversations require a human transcriptionist or an ASR system, but ASR transcription is not always reliable or even feasible, especially for noisy environments, nonstandard language use, or low-resource languages. While spoken messages include prosodic cues that focus a listener's attention on the most important parts of the message (Frazier et al., 2006), such information may be omitted from a text transcript, as in Figure 1, in which the speaker pauses after "right" (suggesting a boundary) and uses rising intonation on "from" (suggesting a question). Moreover, there are application scenarios where transcripts of spoken messages are not always available or fully reliable. In such cases, models based on a speech signal (without a text transcript) might be preferred. With this motivation, we investigate modeling acoustic-prosodic cues for predicting the importance of words to the meaning of a spoken dialogue. Our goal is to explore the versatility of speech-based (text-independent) features for word importance modeling. In this work, we frame the task of word importance prediction as sequence labeling and utilize a bi-directional Long Short-Term Memory (LSTM)-based neural architecture for context modeling on speech.
0
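Framing word importance prediction as sequence labeling over acoustic-prosodic features, as the introduction above describes, can be sketched in a few lines of PyTorch. The feature dimension, hidden size and binary label set are assumptions for illustration, not the authors' configuration.

```python
# A minimal PyTorch sketch of sequence labeling over per-word acoustic-prosodic
# feature vectors (e.g., pitch, energy, duration). Feature extraction and data
# are hypothetical; this is not the authors' code.
import torch
import torch.nn as nn

class ProsodicImportanceTagger(nn.Module):
    def __init__(self, n_feats=8, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, n_words, n_feats)
        h, _ = self.lstm(x)
        return self.out(h)                 # (batch, n_words, n_classes) logits

model = ProsodicImportanceTagger()
feats = torch.randn(1, 12, 8)              # one utterance, 12 word-aligned frames
print(model(feats).shape)                  # torch.Size([1, 12, 2])
```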
In the family of categorial grammars, combinatory categorial grammar (CCG) has received by far the most attention in the computational linguistics literature. There exist algorithms for both mildly context-sensitive (e.g., Kuhlmann and Satta, 2014) and context-free (typically CKY; Cocke and Schwartz, 1970; Kasami, 1966; Younger, 1967) CCG parsing, and there has been much research on statistical CCG parsers (e.g., Clark and Curran, 2007; Lewis et al., 2016; Stanojević and Steedman, 2020). Another member of the categorial family, Lambek categorial grammar (LCG), has been less well-explored: LCG work has been primarily theoretical or focused on non-statistical parsing. The recent lack of attention is likely due to two notable results: (1) LCG is weakly context-free equivalent (Pentus, 1997); and (2) LCG parsing is NP-complete (Pentus, 2006; Savateev, 2012). However, neither of these issues is necessarily practically relevant. Moreover, LCG presents a number of advantages and interesting properties. For example, LCG provides even greater syntax-semantics transparency than is the case for most CCG parsers because it does not invoke non-categorial rules, maintaining a consistent parsing framework. LCG's rules together define a calculus over syntactic categories that is a subset of linear logic (Girard, 1987). LCG, like CCG or LTAG, is a highly lexicalized formalism: lexical categories encode substantial syntactic information, and as a result are themselves complex and structured. Despite this, the inner structure of the categories has not been strongly considered in parsers beyond evaluating the category for compatibility with a grammatical rule. In this paper, we present the first statistical LCG parser. Unlike past parsers for CCG or LTAG, our parser explicitly incorporates structural aspects of the grammar. We base our system on proof nets, a graphical method for representing linear logic proofs that abstracts over irrelevant aspects, such as the order of application of logical rules (Girard, 1987; Roorda, 1992). This corresponds to the problem of spurious ambiguity, making proof nets an attractive choice for representing derivations. Our work has two primary contributions. First, we introduce a self-attention-based LCG parsing model that incorporates proof net structure in multiple ways. We find that minding proof net structure enables us to define a model that is differentiable through this categorial structure down to the atomic categories of the grammar, improving parsing accuracy and coverage on an English LCG corpus. Second, proof net constraints allow us to define novel grammatico-structural loss functions that can be used as training objectives. This enables us to train a parser without ground-truth derivations that has high coverage and even frequently includes the correct parse among the parses that it finds. Our analysis shows that all of our components contribute to the parser's performance, but that planarity information is especially important. [Figure 1: The rules of the associative Lambek calculus without product and allowing empty premises.]
0
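One structural property mentioned above, planarity, is easy to illustrate: in an LCG proof net, the axiom links matching atomic-category occurrences laid out in linear order must not cross. The indexing scheme below is illustrative and not the paper's representation; the function only checks the non-crossing condition.

```python
# A small sketch of the planarity constraint on proof-net axiom links:
# links over a linear order of atomic-category positions must not cross.
def is_planar(links):
    """links: list of (i, j) pairs over positions 0..n-1, each position used once."""
    for a, b in links:
        for c, d in links:
            lo1, hi1 = sorted((a, b))
            lo2, hi2 = sorted((c, d))
            # Two links cross iff exactly one endpoint of one lies strictly
            # between the endpoints of the other.
            if lo1 < lo2 < hi1 < hi2:
                return False
    return True

print(is_planar([(0, 3), (1, 2)]))   # True  (nested links)
print(is_planar([(0, 2), (1, 3)]))   # False (crossing links)
```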
Writing high quality texts in a foreign language requires years of study and a deep comprehension of the language. With a society that is becoming more and more international, the ability to express ideas in English has become the basis of fruitful communication and collaboration.In this work, we propose a tool to provide nonnative speakers of English with help in their translation or writing process. Instead of relying on manually created dictionaries, many existing tools leverage parallel bilingual corpora, using a concordancer to provide translation suggestions together with their contexts. Notable examples relevant to this demonstration are linguee.com and tradooit.com. Given a word or a phrase in a foreign language, these systems present example sentences containing the query in the source language as well as the target language, showing the correct usage of the word/phrase, and at the same time providing translation candidates.Many applications rely on direct word-to-text matching and are therefore prone to missing semantically similar contexts that, although similar and relevant, do not share any words with the query. Instead of matching words directly, we propose a system that employs crosslingually constrained vector representations (embeddings) of words and phrases to retrieve English sentences that are similar to a given phrase or word in a different language (query). These vector representations not only allow for efficient crosslingual lookups in databases consisting of millions of sentences, but can also be employed to visualize intralingual and interlingual semantic relationships between phrases.
0
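The lookup described in the introduction above reduces to nearest-neighbor search in a shared embedding space. The sketch below uses toy four-dimensional vectors as stand-ins for crosslingually constrained phrase and sentence embeddings; any such embedding model could supply the vectors.

```python
# A minimal sketch of retrieving English sentences similar to a foreign-language
# query by cosine similarity in a shared (crosslingual) embedding space.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve(query_vec, sentence_vecs, sentences, top_k=3):
    scores = [cosine(query_vec, v) for v in sentence_vecs]
    order = np.argsort(scores)[::-1][:top_k]
    return [(sentences[i], scores[i]) for i in order]

# Toy data: pretend these 4-d vectors came from a shared embedding space.
sentences = ["The contract was terminated.", "He cancelled the meeting."]
sentence_vecs = [np.array([0.9, 0.1, 0.0, 0.2]), np.array([0.1, 0.8, 0.3, 0.0])]
query_vec = np.array([0.85, 0.15, 0.05, 0.1])   # e.g., for a German query phrase
print(retrieve(query_vec, sentence_vecs, sentences, top_k=1))
```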
Modeling and characterizing human expertise is a major bottleneck in advancing image-based application systems. We propose a framework for integrating experts' eye movements and verbal narrations as they examine and describe images in order to understand images semantically. Eye movements can act as pointers to important image regions, while the co-captured descriptions provide conceptual labels associated with those regions. Although successful when applied to scenic images in controlled experiments, many multimodal integration techniques do not transfer directly to scenarios requiring domain-specific expertise. Our approach is inspired by Yu and Ballard (2004), who combine NLP methods with eye movements to generate linguistic descriptions of videos, and Forsyth et al. (2009), who use image features to match words to the corresponding pictures. We expand here on earlier work (Vaidyanathan et al., 2015) exploring multimodal integration in medical image annotation. Because an exact temporal match between the visual and verbal modalities cannot be assumed (Griffin, 2013), our framework integrates the two modalities without enforcing strict temporal correspondence. We use a bitext word alignment algorithm, originally developed for word alignment in machine translation, to align an expert's fixations on an image with the words in that expert's description of that image. The resulting alignments are then used to annotate image regions with corresponding conceptual labels, which in turn may aid image labeling and captioning applications. In this paper we discuss the parameters of our framework and their effects on alignment accuracy.
0
Part-of-speech (POS) tagging and chunking have been essential components of Natural Language Processing (NLP) techniques that target learner English, such as grammatical error correction and automated essay scoring. In addition, they are frequently used to extract linguistic features relevant to the given task. For example, in the CoNLL-2014 Shared Task (Ng et al., 2014), 10 of the 12 teams used POS-tagging, chunking, or both to extract features for grammatical error correction. They have also been used for linguistic analysis of learner English, particularly in corpus-based studies. Aarts and Granger (1998) explored characteristic POS patterns in learner English. Nagata and Whittaker (2013) demonstrated that POS sequences obtained by POS-tagging can be used to distinguish between mother tongue interferences effectively. The heavy dependence on POS-tagging and chunking suggests that failures could degrade the performance of NLP systems and linguistic analyses (Han et al., 2006; Sukkarieh and Blackmore, 2009). For example, failure to recognize noun phrases in a sentence could lead to failure in correcting related errors in article use and noun number. More importantly, such failures make it more difficult to simply count the number of POSs and chunks, thereby causing inaccurate estimates of their distributions. Note that such estimates are often employed in linguistic analysis, including the above-mentioned studies. Despite its importance in related tasks, we also note that few studies have focused on performance evaluations of POS-tagging and chunking. Only a few studies, including Nagata et al. (2011), Berzak et al. (2016) and Sakaguchi et al. (2012), have reported the performance of POS taggers on learner English and found a performance gap between native and learner English. However, none of those studies described the root causes of POS-tagging and chunking errors in detail. Detailed investigations would certainly improve performance, which, in turn, would improve related tasks. Furthermore, to the best of our knowledge, no study has reported chunking performance when applied to learner English. 1 Unknown words are a major cause of POS-tagging and chunking failures (Manning, 2011). In learner English, spelling errors, which occur frequently, are a major source of unknown words. Spell checkers (e.g., Aspell) are used to correct spelling errors prior to POS-tagging and chunking. However, their effectiveness remains unclear. Thus, we evaluate the extent to which spelling errors in learner English affect POS-tagging and chunking performance. More precisely, we analyze the performance of POS-tagging/chunking to determine (1) the extent to which performance is reduced due to spelling errors, (2) what types of spelling errors impact the performance, and (3) the effect of correcting spelling errors using a spell checker. Our analysis demonstrates that employing a spell checker is not a required preliminary step of POS-tagging and chunking for NLP analysis of learner English.
0
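The comparison implied above, tagging learner text with and without a spell-correction step, can be sketched in a few lines. The "spell checker" below is difflib's closest-match lookup over a tiny word list (a crude stand-in for Aspell), and the tagger is NLTK's default English POS tagger; the word list and learner sentence are invented examples.

```python
# Tag a learner sentence before and after a toy spell-correction step and
# inspect how the tags change.
# Requires: pip install nltk; nltk.download('averaged_perceptron_tagger')
import difflib
import nltk

word_list = ["many", "people", "believe", "that", "education", "is", "important"]

def spell_correct(tokens):
    corrected = []
    for tok in tokens:
        match = difflib.get_close_matches(tok.lower(), word_list, n=1, cutoff=0.8)
        corrected.append(match[0] if match else tok)
    return corrected

learner_tokens = ["Many", "peaple", "beleive", "that", "educasion", "is", "important"]
print(nltk.pos_tag(learner_tokens))                  # tags over misspelled forms
print(nltk.pos_tag(spell_correct(learner_tokens)))   # tags after spell correction
```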
Implementation of MT systems usually relies on two very separate teams. - A linguistic engineering team, usually composed of a project manager, several engineers and one or several computational linguists. Its mission is to configure the MT engine and produce for each source file an "MT-engine output" (i.e. an engine-translated target file or a corresponding translation memory that can be reapplied to the source file). - A post-editing team, usually composed of a project manager and several linguists. Its mission is to edit the MT-engine output and to produce final target files. In most cases: - The first team is located within a large MLV (multi-language vendor) or end-client. - The second team is an ad-hoc team created either within the MLV or end-client, or most frequently within a contracted SLV (single-language vendor). - Communication between the teams is very limited and infrequent, usually through project managers and not at the team member level. Whereas numerous papers and studies have focused on the tasks associated with MT engine configuration, we will concentrate here on the management of the post-editing team, whose success is equally crucial to the quality and timeliness of the overall project.
0
Probabilistic part-of-speech taggers have proven to be successful in English part-of-speech labelling [Church, 1988; DeRose, 1988; de Marcken, 1990; Meteer et al., 1991, etc.]. Such stochastic models perform very well given adequate amounts of training data representative of operational data. Instead of merely stating what is possible, as a non-stochastic rule-based model does, probabilistic models predict the likelihood of an event. In determining the part of speech of a highly ambiguous word in context, or in determining the part of speech of an unknown word, they have proven quite effective for English. By contrast, rule-based morphological analyzers employing a hand-crafted lexicon and a hand-crafted connectivity matrix are the traditional approach to Japanese word segmentation and part-of-speech labelling [Aizawa and Ebara, 1973]. Such algorithms have already achieved 90-95% accuracy in word segmentation and 90-95% accuracy in part-of-speech labelling (given correct word segmentation). The potential advantage of a rule-based approach is the ability of a human to code rules that cover events that are rare, and therefore may be inadequately represented in most training sets. Furthermore, it is commonly assumed that large training sets are not required. A third approach combines a rule-based part-of-speech tagger with a set of correction templates automatically derived from a training corpus [Brill, 1992]. We faced the challenge of processing Japanese text, where neither spaces nor any other delimiters mark the beginning and end of words. We had at our disposal the following: - A rule-based Japanese morphological processor (JUMAN) from Kyoto University. - A context-free grammar of Japanese based on part-of-speech labels distinct from those produced by JUMAN. - A probabilistic part-of-speech tagger (POST) [Meteer et al., 1991], which assumed a single sequence of words as input. - Limited human resources for creating training data. This presented us with four issues: 1) how to reduce the cost of modifying the rule-based morphological analyzer to produce the parts of speech needed by the grammar, 2) how to apply probabilistic modeling to Japanese, e.g., to improve accuracy to ~97%, which is typical of results in English, 3) how to deal with unknown words, where JUMAN typically makes no prediction regarding part of speech, and 4) how to estimate probabilities for low frequency phenomena. Here we report on an example-based technique for correcting systematic errors in word segmentation and part-of-speech labelling in Japanese text. Rather than using handcrafted rules, the algorithm employs example data, drawing generalizations during training. In motivation, it is similar to one of the goals of Brill (1992).
0
In task-oriented dialogues, the computer system communicates with the user in the form of a conversation and accomplishes various tasks such as hotel booking, flight reservation and retailing. In this process, the system needs to accurately convert the desired information, a.k.a. the meaning representation, to a natural utterance and convey it to the users (Table 1). The quality of the response directly impacts the user's impression of the system. Thus, there are numerous previous studies in the area of natural language generation (NLG) for task-oriented dialogues, ranging from template-based models (Cheyer and Guzzoni, 2014; Langkilde and Knight, 1998) to corpus-based methods (Dušek and Jurčíček, 2016; Tran and Nguyen, 2017; Wen et al., 2015). However, one issue yet to be solved is that the system responses often lack the fluency and naturalness of human dialogs. In many cases, the system responses are not natural, violating inherent human language usage patterns. [Table 1: Example of generated utterances from a meaning representation input. With adversarial training: "Wildwood is an Indian restaurant in the riverside area near Raja Indian Cuisine. It is not family friendly." Without adversarial training (w/o adv.): "Wildwood is a restaurant providing Indian food. It is located in the riverside. It is near Raja Indian Cuisine." Our model learns to put two pieces of location information in one sentence via adversarial training.] For instance, in the last row of Table 1, two pieces of location information for the same entity restaurant should not be stated in two separate sentences. In another example, in Table 4, the positive review child friendly and the negative review low rating should not appear in the same sentence connected by the conjunction and. These nuances in language usage do impact the user's impression of the dialogue system, making the system response rigid and less natural. To solve this problem, several methods use reinforcement learning (RL) to boost the naturalness of generated responses (Ranzato et al., 2015; Li et al., 2016). However, the Monte-Carlo sampling process in RL is known to have high variance, which can make the training process unstable. Li et al. (2015) propose to use maximum mutual information (MMI) to boost the diversity of language, but this criterion makes exact decoding intractable. On the other hand, adversarial training for natural language generation has shown to be promising, as the system needs to produce responses indiscernible from human utterances (Rajeswar et al., 2017; Wu et al., 2017; Nie et al., 2018). Apart from the generator, there is a discriminator network which aims to distinguish system responses from human results. The generator is trained to fool the discriminator, resulting in a min-max game between the two components which boosts the quality of generated utterances (Goodfellow et al., 2014). Due to the discreteness of language, most previous work on adversarial training in NLG applies reinforcement learning, suffering from the high-variance problem (Yu et al., 2017; Li et al., 2017; Ke et al., 2019). In this work, we apply adversarial training to utterance generation in task-oriented dialogues and propose the model AdvNLG. Instead of using RL, we follow Yang et al. (2018) and leverage the Straight-Through Gumbel-Softmax estimator (Jang et al., 2016) for gradient computation. In the forward pass, the generator uses the argmax operation on the vocabulary distribution to select an utterance and sends it to the discriminator.
But during backpropagation, the Gumbel-Softmax distribution is used to let gradients flow back to the generator. We also find that pretraining the generator for a warm start is very helpful for improving the performance.To evaluate our model, we conduct experiments on public datasets E2ENLG (Novikova et al., 2017) and RNN-LG (Wen et al., 2016) . Our model achieves strong performance and obtains new state-of-the-art results on four datasets. For example, in Restaurant dataset, it improves the best result by 3.6% in BLEU. Human evaluation corroborates the effectiveness of our model, showing that the adversarial training against human responses can make the generated language more accurate and natural.
0
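The gradient trick described at the end of the preceding introduction, hard (one-hot) tokens in the forward pass and soft Gumbel-Softmax gradients in the backward pass, can be sketched with PyTorch's built-in estimator. The generator logits and the discriminator here are stand-in modules with illustrative shapes, not the AdvNLG architecture.

```python
# A sketch of the Straight-Through Gumbel-Softmax trick used to pass gradients
# from a discriminator back to a generator through discrete word choices.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 1000, 5
logits = torch.randn(seq_len, vocab_size, requires_grad=True)   # generator outputs

# hard=True: one-hot (argmax-like) sample in the forward pass,
# soft Gumbel-Softmax gradients in the backward pass.
one_hot_tokens = F.gumbel_softmax(logits, tau=1.0, hard=True)    # (seq_len, vocab)

# A stand-in "discriminator" consuming embedded tokens.
embedding = torch.nn.Linear(vocab_size, 32, bias=False)
disc_score = embedding(one_hot_tokens).mean()
disc_score.backward()
print(logits.grad.shape)       # gradients flow back to the generator's logits
```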
Most information retrieval models (Salton et al., 1975; Fuhr, 1992; Ponte and Croft, 1998; Fang and Zhai, 2005) compute relevance scores based on the matching of terms in queries and documents. Since various terms can be used to describe the same concept, it is unlikely that a user will use a query term that is exactly the same term as used in relevant documents. Clearly, such vocabulary gaps make the retrieval performance non-optimal. Query expansion (Voorhees, 1994; Mandala et al., 1999a; Fang and Zhai, 2006; Qiu and Frei, 1993) is a commonly used strategy to bridge the vocabulary gaps by expanding original queries with related terms. Expanded terms are often selected from either co-occurrence-based thesauri (Qiu and Frei, 1993; Jing and Croft, 1994; Peat and Willett, 1991; Smeaton and van Rijsbergen, 1983; Fang and Zhai, 2006), hand-crafted thesauri (Voorhees, 1994; Liu et al., 2004), or both (Mandala et al., 1999b). Intuitively, compared with co-occurrence-based thesauri, hand-crafted thesauri, such as WordNet, could provide more reliable terms for query expansion. However, previous studies failed to show any significant gain in retrieval performance when queries are expanded with terms selected from WordNet (Voorhees, 1994; Stairmand, 1997). Although some researchers have shown that combining terms from both types of resources is effective, the benefit of query expansion using only manually created lexical resources remains unclear. The main challenge is how to assign appropriate weights to the expanded terms. In this paper, we re-examine the problem of query expansion using lexical resources with the recently proposed axiomatic approaches (Fang and Zhai, 2006). The major advantage of axiomatic approaches to query expansion is that they provide guidance on how to weight related terms based on a given term similarity function. In our previous study, a co-occurrence-based term similarity function was proposed and studied. In this paper, we study several term similarity functions that exploit various information from two lexical resources, i.e., WordNet and the dependency thesaurus constructed by Lin (Lin, 1998), and then incorporate these similarity functions into the axiomatic retrieval framework. We conduct empirical experiments over several TREC standard collections to systematically evaluate the effectiveness of query expansion based on these similarity functions. Experiment results show that all the similarity functions improve the retrieval performance, although the performance improvement varies for different functions. We find that the most effective way to utilize the information from WordNet is to compute the term similarity based on the overlap of synset definitions. Using this similarity function in query expansion can significantly improve the retrieval performance. According to the retrieval performance, the proposed similarity function is significantly better than a simple mutual-information-based similarity function, while it is comparable to the function proposed in (Fang and Zhai, 2006). Furthermore, we show that the retrieval performance can be further improved if the proposed similarity function is combined with the similarity function derived from co-occurrence-based resources. The main contribution of this paper is to re-examine the problem of query expansion using lexical resources with a new approach.
Unlike previous studies, we are able to show that query expansion using only manually created lexical resources can significantly improve the retrieval performance. The rest of the paper is organized as follows. We discuss the related work in Section 2, and briefly review the studies of query expansion using axiomatic approaches in Section 3. We then present our study of using lexical resources, such as WordNet, for query expansion in Section 4, and discuss experiment results in Section 5. Finally, we conclude in Section 6.
0
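The most effective similarity signal reported above, overlap between WordNet synset definitions, can be approximated in a few lines with NLTK. The Jaccard weighting below is an illustrative choice; the paper's exact definition-overlap formula may differ.

```python
# A hedged sketch of definition-overlap term similarity: compare the word
# overlap between the glosses of two terms' WordNet synsets.
# Requires: pip install nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def definition_overlap(w1, w2):
    defs1 = {tok for s in wn.synsets(w1) for tok in s.definition().lower().split()}
    defs2 = {tok for s in wn.synsets(w2) for tok in s.definition().lower().split()}
    if not defs1 or not defs2:
        return 0.0
    return len(defs1 & defs2) / len(defs1 | defs2)   # Jaccard overlap of glosses

print(definition_overlap("car", "automobile"))
print(definition_overlap("car", "banana"))
```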
Point-of-view, or perspective, affects many aspects of language. This paper presents ProSPer, a dataset for probing how humans and neural language models track spatial perspective in text. In recent years, neural network language understanding has been probed in a variety of syntactic and semantic tasks. 1 [Footnote 1: Including subject-verb agreement (Linzen et al., 2016; Giulianelli et al., 2018; Gulordava et al., 2018; Lin et al., 2019), question formation (Jumelet et al., 2019; McCoy et al., 2020), filler-gap dependencies (Wilcox et al., 2018), anaphora (Jumelet et al., 2019), category membership (Ettinger, 2020), and negative polarity items (Jumelet and Hupkes, 2018).] We propose a probe task for one of the most complex aspects of language: relative spatial language, or spatial perspective. We measure the ability to infer spatial perspective using a come/go prediction task: infer a missing motion verb from a passage of text (Figure 1). [Figure 1 example: Rick changed the subject. "I heard that you were having some furniture delivered this afternoon," he said to Aunt Emily. "I thought I'd ___ by and see if you needed any help." Options: (1) go (2) come.] This task combines aspects of previous probe tasks (long-distance dependencies, co-reference resolution), but also poses new challenges: (1) ranking the importance of individuals in a discourse, (2) reasoning over ambiguity, and (3) inferring spatial relations. This makes it challenging for any language user: as our behavioral data shows, human performance is not perfect. However, the task may be particularly hard for language models since they lack access to grounded information, which has been hypothesized to be important for spatial language acquisition (Glenberg and Gallese, 2012). This paper explores human and neural network language model understanding of perspective. In Sections 2-3, we motivate and present our task and dataset. In Section 4, we measure human performance on ProSPer and find accuracy rates of 77-88%. In Section 5, we evaluate pre-trained neural language models and show that the BERT family (Devlin et al., 2019) achieves human-like accuracy. In Section 6, we explore differences between human and model behavior. Drawing on psycholinguistic work on perspective (Harris, 2012), we outline three perspective inference strategies. Our evidence supports a genre frequency bias: humans perform best in conversation-like contexts, while RoBERTa excels in written genres, reflecting the language each encounters most during learning. This paper contributes to the understanding of both neural network and human language capabilities. From a cognitive science perspective, our findings contribute to two open debates: the role of grounded information in language acquisition (Section 5) and the existence of cognitive biases in perspective inference (Section 6). From an applied perspective, our results motivate greater use of conversational data in applications where perspectival language is important, such as in navigation, story generation, and human-robot interaction. Our contributions are: • ProSPer: a novel dataset for probing understanding of spatial perspectival language. • Novel human behavioral data showing that humans achieve around 77-88% accuracy. • A comparison of neural language models, showing that RoBERTa's accuracy is human-like. • A fine-grained error analysis guided by previous psycholinguistic work, revealing a genre frequency bias for humans and RoBERTa.
0
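The come/go probe itself is straightforward to run against a masked language model. The sketch below masks the motion verb in a Figure-1-style passage and compares the probabilities BERT assigns to "come" and "go"; the model name and the shortened passage are examples, not the paper's evaluation setup.

```python
# Compare masked-LM probabilities for "come" vs. "go" at the blank position.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = 'He said to Aunt Emily, "I thought I\'d [MASK] by and see if you needed any help."'
inputs = tok(text, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]
probs = logits.softmax(-1)
for verb in ("come", "go"):
    print(verb, float(probs[tok.convert_tokens_to_ids(verb)]))
```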
The goal of having natural language versions of formal, computer-generated mathematical texts has been driven by the increasing quantity of formal mathematics being produced by a variety of projects. The purposes of these collections vary, from pedagogical uses (Constable, 1996) and proving sophisticated theorems (Cederquist et al., 1997) to formalizing foundational theories (Huang et al., 1994) and using theorem proving to verify code and hardware (O'Leary et al., 1994; Liu et al., 1999). For all of these purposes, some of the individuals wishing to understand the proofs will not be familiar with the system used to produce the proofs and its specialized syntax. The domain of formal mathematics has a definite need for natural language versions of its objects. The necessity of automatic generation of these texts is clear not only from the large number of formal proofs being produced, but also from the technical expertise required to understand the proofs and transform them to natural language reliably. We focus on producing full, static proofs, such as would be found in textbooks or research publications. Though this prevents the degree of customization available from interactive systems (such as Fiedler (2001)), it allows the application of existing natural language search and summarization tools over the collected proof texts of a formal library. One of the most pervasive and complex proof techniques common to almost every domain of mathematics is proof by induction. Induction is used in proofs from number theory to code verification. It is often the first sophisticated proof technique taught in introductory logic courses but is used in the most complicated proofs in both mathematics and computer science. An ability to express induction clearly is central to any effective tool for generating texts from formal proofs. In this paper, we will lay out some of the complications involved in expressing induction in texts and our proposed solutions. Our focus will be on planning the texts to express this wide-ranging technique. Our examination of induction will be driven by a corpus of human-produced proof texts which employ induction, as well as a commitment to ensuring the validity of the formal proof within the informal proof text. We will describe an approach currently being used to generate texts from formal proofs and how this system is expanded to handle induction. Finally, we will discuss how our observations about induction may apply to producing texts employing other sophisticated proof techniques such as diagonalization.
0
The replacement of words with a representative synonymous expression dramatically enhances text analysis systems. We developed a text mining system called TAKMI (Nasukawa, 2001) which can find valuable patterns and rules in text that indicate trends and significant features about specific topics, using not only word frequency but also predicate-argument pairs that indicate dependencies among terms. The dependency information helps to distinguish between sentences by their meaning. Here are some examples of sentences from a PC call center's logs, along with the extracted dependency pairs: "customer broke a tp" (customer...break, break...tp); "end user broke a ThinkPad" (end user...break, break...ThinkPad). In these examples, "customer" and "end user", and "tp" and "ThinkPad", can be assumed to have the same meaning in terms of this analysis for the call center's operations. Thus, these two sentences have the same meaning, but the differences in expressions prevent us from recognizing their identity. The variety of synonymous expressions causes a lack of consistency in expressions. Other examples of synonymous expressions are: customer = cu = cus = cust = end user = user = eu; Windows95 = Win95 = w95. One way to address this problem is by assigning canonical forms to synonymous expressions and variations of inconsistent expressions. The goal of this paper is to find those synonymous expressions and variations of inconsistent expressions that can be replaced with a canonical form for text analysis. We call this operation "term aggregation". Term aggregation is different from general synonym finding. For instance, "customer" and "end user" may not be synonyms in general, but we recognize these words as "customer" in the context of a manufacturer's call center logs. Thus, the words we want to aggregate may not be synonyms, but their roles in the sentences are the same in the target domain from the mining perspective. Yet, we can perform term aggregation using the same methods as in synonym finding, such as using word feature similarities. There are several approaches for the automatic extraction of synonymous expressions, such as using word context features, but the results of such approaches tend to contain some antonymous expressions as noise. For instance, a system may extract "agent" as a synonymous expression for "customer", since they share the same feature of being human, and since both words appear as subjects of the same predicates, such as "talk", "watch", and "ask". In general, it is difficult to distinguish synonymous expressions from antonymous expressions based on their context. However, if we have a coherent corpus, one in which the use of expressions is consistent for the same meaning, the words extracted from that corpus are guaranteed to have different meanings from each other. Figure 1 illustrates the idea of such coherent corpora. Words with similar contexts within incoherent corpora consist of various expressions including synonyms and antonyms, as in the left hand side of this figure, because of the use of synonymous expressions as in the upper right box of the figure.
In contrast, words with similar contexts within each coherent corpus do not contain synonymous expressions, as in the lower right box of the figure.By using the information about non-synonymous expressions with similar contexts, we can deduce the synonymous expressions from the words with similar contexts within incoherent corpora by removing the non-synonymous expressions.In this paper, we use a set of textual data written by the same author as a coherent corpus. Our assumption is that one person tends to use one expression to represent one meaning. For example, "user" for "customer" and "agt" for "agent" as in Figure 1 .Our method has three steps: extraction of synonymous expression candidates, extraction of noise candidates, and re-evaluation with these candidates. In order to evaluate the performance of our method, we conducted some experiments on extracting term aggregation sets. The experimental results indicate that our method leads to better precision than the basic synonym extraction approach, though the recall rates are slightly reduced.The rest of this paper is organized as follows. First we describe the personal stylistic variations in each author's text in Section 2, and in Section 3 we will give an overview of our system. We will present the experimental results and discussion in Section 4. We review related work in Section 5 and we consider future work and conclude the paper in Section 6.
0
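The three-step procedure described above (candidate extraction, noise extraction from coherent per-author corpora, and re-evaluation) can be summarized as a small filtering function. The candidate pairs and per-author similarity lists below are toy placeholders for the context-feature similarities the system actually computes.

```python
# A compact sketch of term aggregation by noise filtering: pairs that also look
# similar within a single author's (coherent) text are assumed not to be
# synonymous, and are removed from the candidate list.
def term_aggregation(candidates, per_author_similar):
    noise = set()
    for author_pairs in per_author_similar.values():
        noise |= {frozenset(p) for p in author_pairs}   # same author, so not synonyms
    return [p for p in candidates if frozenset(p) not in noise]

candidates = [("customer", "end user"), ("customer", "agent"), ("tp", "ThinkPad")]
per_author_similar = {
    "author_A": [("customer", "agent")],   # one author uses both -> different roles
}
print(term_aggregation(candidates, per_author_similar))
# -> [('customer', 'end user'), ('tp', 'ThinkPad')]
```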
Parsing accuracy is important. Parsing accuracy has been shown to have a significant effect on downstream applications such as textual entailment (Yuret et al., 2010) and machine translation (Neubig and Duh, 2014), and most work on parsing evaluates accuracy to some extent. However, one element that is equally, or perhaps even more, important from the view of downstream applications is parser robustness, or the ability to return at least some parse regardless of the input. Every failed parse is a sentence for which downstream applications have no chance of even performing processing in the normal way, and application developers must perform special checks that detect these sentences and either give up entirely, or fall back to some alternative processing scheme. Among the various methods for phrase-structure parsing, the probabilistic context-free grammar with latent annotations (PCFG-LA; Matsuzaki et al., 2005; Petrov et al., 2006) framework is among the most popular for several reasons. The first is that it boasts competitive accuracy, both in intrinsic measures such as F1-score on the Penn Treebank (Marcus et al., 1993), and extrinsic measures (it achieved the highest textual entailment and machine translation accuracy in the papers cited above). The second is the availability of easy-to-use tools, most notably the Berkeley Parser, 2 but also including Egret, 3 and BUBS Parser. 4 However, from the point of view of robustness, existing tools for PCFG-LA parsing leave something to be desired; to our knowledge, all existing tools produce a certain number of failed parses when run on large data sets. In this paper, we introduce Ckylark, 1 a new PCFG-LA parser specifically designed for robustness (1: http://github.com/odashi/ckylark). Specifically, Ckylark makes the following contributions: • Based on our analysis of three reasons why conventional PCFG-LA parsing models fail (Section 2), Ckylark implements three improvements over the conventional PCFG-LA parsing method to remedy these problems (Section 3). • An experimental evaluation (Section 4) shows that Ckylark achieves competitive accuracy with other PCFG-LA parsers, and can robustly parse large datasets where other parsers fail. • Ckylark is implemented in C++, and released under the LGPL license, allowing for free research or commercial use. It is also available in library format, which means that it can be incorporated directly into other programs.
0
Pretraining deep Transformer models (Vaswani et al., 2017) with language modeling and fine-tuning these models on downstream tasks have led to great success in recent years (Devlin et al., 2018; Yang et al., 2019), and have even enabled researchers to design models that outperform human baselines in the GLUE benchmark (Wang et al., 2018). Although these models are empirically powerful in many natural language understanding (NLU) tasks, they often require a massive number of parameters, making them hard to use for memory-constrained applications (e.g., edge devices). [Figure 1: A two-dimensional visualization of a token embedding vector v with its two approximations v' and v''. Vector v' has a larger Euclidean distance error than v'', but its direction is more similar to the reference vector; our experiments show that v' generally provides a better approximation of the original token than v''.] Therefore, there have been efforts to compress BERT-like models while preserving comparable performance with the original model. Many of these compression methods are based on knowledge distillation (Hinton et al., 2015), which helps the compressed model (the student) perform close to the original model on different NLU tasks. However, these approaches often need high computation resources due to, e.g., the necessity of retraining the expensive language modeling on a huge corpus, or the use of expensive augmentation techniques to make the distillation work effectively (Jiao et al., 2019). Moreover, compression techniques that rely on training/fine-tuning language models are becoming less feasible due to the ever-increasing cost for current state-of-the-art architectures with hundreds of millions of parameters (He et al., 2020; Raffel et al., 2019; Brown et al., 2020). More recently, there have been efforts to compress Transformer-based models for more resource-constrained scenarios (Mao et al., 2020) by using offline methods such as matrix factorization (Winata et al., 2019; Lan et al., 2019), weight pruning (Li et al., 2016; Han et al., 2015), and weight quantization (Zhou et al., 2016; Hubara et al., 2016). This paper focuses on token embedding matrix compression, as it is one of the largest matrices in BERT-based architectures. We specifically question the effectiveness of current low-rank matrix factorization methods in recent literature (Lan et al., 2019) by comparing them with the performance of a linear AutoEncoder over different compression ratios (the number of parameters in the original embedding matrix over the sum of the parameters in the factorized matrices). We define a new loss objective which is not only dependent on the commonly used Mean Absolute Error (MAE) or Mean Squared Error (MSE) loss between the input embeddings and the AutoEncoder reconstruction, but is also sensitive to the noise in the reconstructed embeddings' "direction" (measured by cosine distance). The intuition behind the importance of embedding vector direction is presented in Figure 1. In the following sections we show that cosine distance indeed plays a more critical role than MAE/MSE (Figure 3), as measured by the perplexity of the entire model in language modeling. In Section 4, we demonstrate that our compression algorithm is superior or competitive to the Singular Value Decomposition (SVD) baseline over several natural language understanding tasks from the GLUE benchmark (Wang et al., 2018), as well as the SQuAD dataset (Rajpurkar et al., 2016) for question answering.
We also compare our performance with the SVD-based compression over different compression ratios, and specifically show that our model performs consistently better at higher compression ratios. Our contributions can be summarized as follows: • We demonstrate the importance of direction (measured by cosine distance) in token embedding compression. • We leverage the AutoEncoder architecture to explore various multi-objective optimization functions. • We outperform the SVD-based baseline in terms of perplexity and over various downstream tasks.
0
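A minimal version of the compression setup discussed above is a linear AutoEncoder over the token embedding matrix trained with a weighted sum of MSE and cosine distance. The dimensions, rank, weighting factor alpha and training schedule below are illustrative assumptions, not the paper's tuned values.

```python
# A minimal PyTorch sketch: linear AutoEncoder over a (stand-in) embedding
# matrix with a direction-aware reconstruction loss (MSE + cosine distance).
import torch
import torch.nn as nn

vocab, dim, rank = 30522, 768, 128           # e.g., a BERT-base-like embedding matrix
E = torch.randn(vocab, dim)                   # pretrained token embeddings (stand-in)

encoder = nn.Linear(dim, rank, bias=False)
decoder = nn.Linear(rank, dim, bias=False)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

alpha = 0.5
for step in range(100):
    idx = torch.randint(0, vocab, (1024,))
    x = E[idx]
    x_hat = decoder(encoder(x))
    mse = nn.functional.mse_loss(x_hat, x)
    cos = 1 - nn.functional.cosine_similarity(x_hat, x, dim=-1).mean()
    loss = alpha * mse + (1 - alpha) * cos    # direction-aware reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```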
In this first participation in the French-English translation task at WMT, our goal was to build a standard phrase-based statistical machine translation system and study the impact of French morphological variations at different stages of training and decoding. Many strategies have been proposed to integrate morphology information in SMT, including factored translation models, adding a translation dictionary containing inflected forms to the training data (Schwenk et al., 2008), entirely replacing surface forms by representations built on lemmas and POS tags (Popović and Ney, 2004) or on morphemes learned in an unsupervised manner (Virpojia et al., 2007), and using Porter stems and even 4-letter prefixes for word alignment (Watanabe et al., 2006). In non-European languages, such as Arabic, heavy effort has been put into identifying appropriate input representations to improve SMT quality (e.g., Sadat and Habash (2006)). As a first step toward using morphology information in our French-English SMT system, this submission focused on studying the impact of different input representations for French based on the POS tags and lemmatization provided by the TreeTagger tool (Schmid, 1994). [* The author was partially funded by GALE DARPA Contract No. HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency.] In the WMT09 French-English data sets, we observe that more than half of the words that are unknown to the translation lexicon actually occur in the training data under different inflected forms. We show that combining a lemma backoff strategy at decoding time and improving alignments by generalizing across verb surface forms improves OOV rates and translation quality.
0
With subtle changes in word choice, a writer can influence a reader's perspective on a matter in many ways (Thomas et al., 2006; Recasens et al., 2013). For example, Table 1 shows how the verbs claimed and said, although reasonable paraphrases for one another in the given sentence, have very different implications. Saying claimed casts doubt on the certainty of the underlying proposition and might implicitly bias a reader's interpretation of the sentence. That is, such linguistic cues (e.g., hedges, implicatives, intensifiers) can induce subtle biases through implied sentiment and presupposed facts about the entities and events with which they interact (Rashkin et al., 2015). When models of language are trained on large web corpora that consist of text written by many people, distributional patterns might lead the lexical representations of these seemingly innocuous words to encode broader information about the opinions, preferences, and topics with which they co-occur.

[Table 1: Generations from a language model (GPT-2) when prompted with a linguistically biased sentence and with one edited to be neutral. Biased prompt: In a speech on June 9, 2005, Bush claimed that the "Patriot Act" had been used to bring charges against more than 400 suspects, more than half of whom had been convicted. Generation: William Graff, a former Texas primary voter who was also shot on his gogo days, was shot and killed at one point in the fight between Bush and the two terrorists, which Bush called executive order had taken "adrenaline." Neutral prompt: In a speech on June 9, 2005, Bush said that the "Patriot Act" had been used to bring charges against more than 400 suspects, more than half of whom had been convicted. Generation: "This agreement done are out of a domestic legal order," Bush said in referring to the presidential Domestic Violence policy and the president's new domestic violence policy; Roe v. Wade. "The president is calling on everyone..]

Although studies have shown that humans recognise these framing effects in written text (Recasens et al., 2013; Pavalanathan et al., 2018), it remains to be seen whether language models trained on large corpora respond to, or even recognise, such linguistic cues. In this work, we investigate the extent to which generative language models following the GPT-2 architecture (124M-1.5B parameters) (Radford et al., 2019; Brown et al., 2020) respond to such framing effects. We compare the generations that models produce when given linguistically biased prompts to those produced when given minimally different neutral prompts. We measure the distributional changes in the two sets of generations, as well as analyse the frequency of words from specific style lexicons, such as hedges, assertives, and subjective terms. We also investigate the differences in the civility of the text generated from the two sets of prompts, as measured by the PERSPECTIVE API 1, a tool used to detect rude or hateful speech. To understand the topical differences, we compare the frequency of the references made by models to specific entities and events. Overall, we find that linguistically biased prompts lead to generations with increased use of linguistically biased words (e.g., hedges, implicatives), and heightened sentiment and polarity. Anecdotally, we see that the named entities and events referred to are also more polarised.
Interestingly, we see no significant trends in model size, but observe that even the smallest model we test (124M parameters) is sufficiently capable of differentiating the subtly biased vs. the neutral prompts.
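As a minimal sketch of the lexicon-frequency comparison used in this analysis, the snippet below counts how often generations contain words from small style word lists; the hedge and assertive lists are tiny illustrative samples, not the full lexicons used in the study.

```python
# Compare lexicon hit rates between generations from biased and neutral prompts.
from collections import Counter

HEDGES = {"may", "might", "possibly", "reportedly", "claimed"}       # illustrative
ASSERTIVES = {"said", "stated", "confirmed", "announced"}            # illustrative

def lexicon_rates(generations):
    counts, total = Counter(), 0
    for text in generations:
        tokens = text.lower().split()
        total += len(tokens)
        counts["hedge"] += sum(t in HEDGES for t in tokens)
        counts["assertive"] += sum(t in ASSERTIVES for t in tokens)
    return {k: v / max(total, 1) for k, v in counts.items()}

biased = lexicon_rates(["Bush claimed the act may have been misused ..."])
neutral = lexicon_rates(["Bush said the act was used ..."])
print(biased, neutral)
```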
0
The European Union has 24 official languages. The European Parliament and the European Commission state that multilingualism is both "an asset and a shared commitment" (European Commission, 2008; European Parliament, 2009). Languages are not only a means of conveying information; they are constitutive parts of our cultures and identities, and thus one of the key pillars of Europe's rich cultural heritage. At the same time, languages can also create barriers, and overcoming language barriers is one of the main challenges European citizens, public servants and businesses are facing in cross-language communication and trade. The European Commission has the largest translation service in the world, the Directorate-General for Translation, which translates EU legislation into all 24 official EU languages. In 2018, the Commission's Directorate-General for Translation translated some 2,255,000 pages (European Commission, 2019, p. 8). To meet the high demand for translation, the Commission developed its own machine translation (MT) system called eTranslation (formerly known as MT@EC; for more details see https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/eTranslation), which was launched in 2013. In 2014, the European Commission started the Connecting Europe Facility (CEF) Programme. eTranslation aims to facilitate multilingual communication and the exchange of documents and other linguistic content in Europe, between national public administrations on the one hand and between these administrations and EU and CEF-affiliated country citizens and businesses (European Commission, 2018) on the other hand. However, in order to make eTranslation work in the various domains and language pairs relevant to public services and administrations across Europe, corresponding language resources were, and still are, needed. In order to address this need, the European Language Resource Coordination (ELRC) was launched in April 2015 to collect language data relevant for public services in EU Member and CEF-affiliated States (Lösch et al., 2018). While ELRC has been, and continues to be, very successful, having delivered more than 1,400 language resources and tools covering all official EU languages, it also uncovered a number of important obstacles impeding the collection of language data produced by public services. In order to address these obstacles and to make data collection sustainable, ELRC carried out a Europe-wide study. The results of the investigations are presented in detail in the ELRC White Paper, Sustainable Language Data Sharing to Support Language Equality in Multilingual Europe. Why Language Data Matters, including Country Profiles for each country (ELRC, 2019). To our knowledge, this is the first pan-European study covering 29 EU Member States and CEF-affiliated countries on obstacles to language data sharing and recommendations on how to overcome them. Section 2 of the paper provides further background to ELRC, some comparison with other data collection initiatives, as well as details on the methodology of the investigation. Section 3 provides the main findings of the study, detailing in particular the obstacles found that currently prevent the sustainable sharing of language data, specifically in the context of public services in Europe. Section 4 presents recommendations on how to address the obstacles, including corresponding suggestions at the European/national policy level and at the organisational or institutional level. Section 5 summarises and concludes.
0
The amount of digital resources is growing day by day, and this raises new challenges in the management process, in particular in the access and reuse of such resources. To this aim we need systems that offer solutions to effectively store digital resources (text, images, video, etc.) and allow us to give them an interpretation with respect to a given semantics. Our contribution in this direction regards the definition and implementation of a system that permits a user to manage resources (mainly textual resources, but also media resources) and easily annotate them. We called this system HEI (Hunter Events Interface). Today, in the research areas of natural language (NL) processing and temporal reasoning, there are numerous tools that help NL text analysis and facilitate applications in many different sectors. Many research groups have made some of their own tools (parsers, name detectors, etc.) available on the Web, providing, in particular in the field of temporal representation and reasoning, little support for the integration of such services. This work was born with the goal of developing a methodology that integrates services concerning annotation, discovery, connectivity, and temporal reasoning over events in natural language texts provided by different developers, and also offers services to annotate media. Through the use of such annotations, we can define multimedia streams that coherently synchronise media elements with a synthetic voice delivering the textual content, and retrieve information to respond to queries as in dialog systems. For the annotation of natural language texts we have chosen CSWL (Cultural Story Web Language) as the reference semantic formalism, and according to CSWL we have established the following annotation approach, supported by the HEI architecture:
1. for each text in NL, the system applies n basic services Sn that automatically annotate the text with labels significant for the CSWL semantics (the basic services Sn are available on the Web or are part of an internal HEI repository), producing m annotated entities (see section 4.1);
2. some services (algorithms operating on temporal, grammatical, semantic or pragmatic properties emerging from the NL text) are applied to extend the m entities previously annotated (see section 4.2);
3. we can monitor the annotations made so far and interactively complete them.
In the literature there are some systems that allow browsing and text annotation (Hou et al., 2015; de Boer et al., 2015; Vossen et al., 2016). In (Hou et al., 2015; Vossen et al., 2016) the authors presented the NewsMiner and NewsReader systems respectively, and both are focused on the interpretation of news using an event-based formalism. In (de Boer et al., 2015), in particular, the focus is on historical events. Our system is also based on an event formalism, but in addition to events it takes into account fluents and complex events. Moreover, the HEI system allows users to evaluate the temporal consistency of a text, and can be expanded with the addition of new annotation services. In this paper we introduce the CSWL formalism (section 2), the architecture of the HEI system (section 3) and some services implemented in the system (section 4).
0
In autoregressive Neural Machine Translation (NMT), a decoder generates one token at a time, and each output token depends on the output tokens generated so far. The decoder's prediction of the end of the sentence determines the length of the output sentence. This prediction is sometimes made too early, before all of the input information has been translated, causing a so-called under-translation. The Transformer uses sinusoidal positional encoding to incorporate token position information into its encoder and decoder (Vaswani et al., 2017). There are some previous studies on controlling the output length in the Transformer. Takase and Okazaki (2019) proposed two variants of length-aware positional encodings, called length-ratio positional encoding (LRPE) and length-difference positional encoding (LDPE), to control the output length based on given length constraints in automatic summarization. Lakew et al. (2019) applied LDPE and LRPE to NMT. They trained an NMT model using output length constraints based on LDPE and LRPE along with special tokens representing length-ratio classes between input and output sentences, while they used the input sentence length at inference time. However, the length of an input sentence is not a reliable estimator of the output length, because the actual output length varies with the content of the input. Using length constraints in the decoder is a promising approach to the under-translation problem. We propose an NMT method based on LRPE and LDPE with a BERT-based output length prediction. The proposed method adds noise to the output length constraints during training to improve its robustness against possible length variances in the translation. In our experiments with an English-to-Japanese dataset, the BERT-based output length prediction outperformed the use of the input length, and the proposed method, including noise injection into the training-time length constraints, improved the translation performance in BLEU for short sentences.
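For intuition, here is a sketch of a length-difference style positional encoding, where each position encodes how many tokens remain before the length constraint is reached; the exact formulation of LDPE (and of LRPE) in Takase and Okazaki (2019) may differ in detail, so treat this as an approximation of the idea rather than the published definition.

```python
import numpy as np

def length_difference_pe(target_len: int, d_model: int) -> np.ndarray:
    """Sketch of an LDPE-style encoding: sinusoids over the remaining length."""
    pe = np.zeros((target_len, d_model))
    for pos in range(target_len):
        remaining = target_len - pos            # tokens left until the constraint
        for i in range(0, d_model, 2):
            div = 10000 ** (i / d_model)
            pe[pos, i] = np.sin(remaining / div)
            if i + 1 < d_model:
                pe[pos, i + 1] = np.cos(remaining / div)
    return pe

# toy usage: encodings for a 20-token target with 512-dimensional embeddings
print(length_difference_pe(20, 512).shape)  # (20, 512)
```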
0
One of the important requirements for developing a practical natural language processing system is a morphological analyzer that can automatically assign the correct POS (part-of-speech) tag to each word with time and space efficiency. For non-segmenting languages such as Japanese, Korean, Chinese and Thai, an additional task is needed in morphological analysis, i.e., segmenting an input sentence into the right words (Nobesawa et al., 1994; Seung-Shik Kang et al., 1994). However, there is another problematic aspect, called implicit spelling errors, that should be solved at the morphological processing level. Implicit spelling errors are spelling errors that result in other valid, meaningful words. This work attempts to provide a robust morphological analyzer by using a gradual refinement module for weeding out the many possible alternatives and/or the erroneous chains of words caused by these three non-trivial problems: word boundary ambiguity, POS tagging ambiguity and implicit spelling errors. Many researchers have used a corpus-based approach to POS tagging, such as the trigram model (Charniak, 1993) and the feature structure tagger (Kemp, 1994); to word segmentation, such as Dbigram (Nobesawa et al., 1994); to both POS tagging and word segmentation (Nagata, 1994); and to spelling error detection as well as correction (Araki et al., 1994; Kawtrakul et al., 1995(b)). Even though a corpus-based approach exhibits seemingly high average accuracy, it requires a large amount of training and validation data (Franz, 1995). Instead of using a corpus-based approach, a new simple hybrid technique which incorporates heuristic, syntactic and semantic knowledge is proposed for the Thai morphological analyzer. It consists of word-boundary preference, syntactic coarse rules and semantic strength measurement. To implement this technique, a three-stage approach is adopted in the gradual refinement module: preference-based pruning, syntactic-based pruning and semantic-based pruning. Each stage gradually weeds out word boundary ambiguities, tag ambiguities and implicit spelling errors. Our preliminary experiment shows that the proposed model works time-efficiently and increases the accuracy of word boundary and tagging disambiguation as well as of implicit spelling error correction. In the following sections, we begin by reviewing the three non-trivial problems of Thai morphological analysis. An overview of the gradual refinement module is then given. We then show the algorithm, with examples, for pruning erroneous word chains prior to parsing. Finally, the results of applying this algorithm are presented.
0
Vocabulary is a challenging aspect for language learners to master. Extended word knowledge, such as word polarity and position, is not widely available in traditional dictionaries. Thus, for most language learners, it is very difficult to have a good command of such lexical phenomena.Current linguistics software programs use large corpus data to advance language learning. The use of corpora exposes learners to authentic contextual clues and lets them discover patterns or collocations of words from contextual clues (Partington, 1998) . However, a huge amount of data can be overwhelming and time-consuming (Yeh et al., 2007) for language learners to induce rules or patterns. On the other hand, some lexical phenomena seem unable to be comprehended 1 http://glance-it.herokuapp.com/ fast and directly in plain text format (Koo, 2006) . For example, in the British National Corpus (2007), "however" seems more negative than "but". Also, compared with "but", "however" appears more frequently at the beginning of a sentence.With this in mind, we proposed GLANCE 1 , a text visualization tool, which presents corpus data using charts and graphs to help language learners understand the lexical phenomena of a word quickly and intuitively. In this paper, we focused on five types of lexical phenomena: polarity, position, POS, form and discipline, which will be detailed in the Section 3. Given a single query word, the GLANCE system shows graphical representations of its lexical phenomena sequentially within a single web page.Additionally we believe that the use of graphics also facilitates the understanding of the differences between two words. Taking this into consideration, we introduce a comparison mode to help learners differentiate two words at a glance. Allowing two word input, GLANCE draws the individual representative graphs for both words and presents these graphs in a twocolumn view. The display of parallel graphs depicts the distinctions between the two words clearly.
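As an illustration of the kind of statistic behind the position view, the sketch below computes how often a word occurs sentence-initially in a tokenized corpus; the corpus format and function name are assumptions for this example, not part of GLANCE itself.

```python
# Fraction of a word's occurrences that are sentence-initial.
def initial_position_rate(word, sentences):
    """sentences: list of token lists, one per sentence."""
    hits = [s for s in sentences if word.lower() in (t.lower() for t in s)]
    if not hits:
        return 0.0
    initial = sum(1 for s in hits if s[0].lower() == word.lower())
    return initial / len(hits)

# toy usage
sents = [["However", ",", "it", "rained"], ["It", "rained", ",", "but", "we", "left"]]
print(initial_position_rate("however", sents))  # 1.0
print(initial_position_rate("but", sents))      # 0.0
```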
0
With the rapid growth of electronic documents and the great development of the network in China, more and more people are using the Internet, on which, however, English is the most popular language in use. It is difficult for most people in China to use English fluently, so they would like to use Chinese to express their queries and retrieve the relevant English documents. This situation motivates research in Cross-Language Information Retrieval (CLIR). There are two approaches to CLIR: one is query translation; the other is translating original-language documents into destination-language equivalents. Obviously, the latter is a very expensive task, since there are so many documents in a collection and there is not yet a reliable machine translation system that can process them automatically. Most researchers are inclined to choose the query translation approach [Oard (1996)]. Methods for query translation have focused on three areas: the employment of machine translation techniques, dictionary-based translation [Hull & Grefenstette (1996); Ballesteros & Croft (1996)], and parallel or comparable corpora for generating a translation model [Davis & Dunning (1995); Sheridan & Ballerini (1996); Nie, Jian-Yun et al. (1999)]. The machine translation (MT) method faces many obstacles that prevent its employment in CLIR, such as the need for deep syntactic and semantic analysis, user queries consisting of only one or two words, and the arduous task of building an MT system. Dictionary-based query translation is the most popular method because it is easy to perform. The main reasons for the great drops in CLIR effectiveness with this method are ambiguities caused by more than one translation of a query term and failures to translate phrases during query translation. Previous studies [Hull & Grefenstette (1996); Ballesteros & Croft (1996)] have shown that automatic word-by-word (WBW) query translation via a machine-readable dictionary (MRD) results in a 40-60% loss in effectiveness below that of monolingual retrieval. With regard to the parallel corpora translation method, the critique most often raised concerns the availability of reliable parallel text corpora. An alternative is to make use of comparable corpora, because they are easier to obtain and there are more and more bilingual and even multilingual documents on the Internet. By analyzing a document collection, an associated word list can be derived, which is often used to expand the query in monolingual information retrieval [Qiu & Frei (1993); Jing & Croft (1994)]. In this paper, a new query translation method is presented, combining the dictionary-based method with comparable corpora analysis. The ambiguity problem and the loss of phrase information are attacked in dictionary-based Chinese-English Cross-Language Information Retrieval (CECLIR). The remainder of this paper is organized as follows: Section 1 gives a method to calculate the mutual information matrices of Chinese-English comparable corpora. Section 2 develops a scheme to select the translations of the Chinese query terms and introduces how compositional phrases can be kept in our method. Section 3 presents a set of preliminary experiments on comparable corpora to evaluate our query translation method and gives some explanations. (This research was supported by the National Science Fund of China for Distinguished Young Scholars under contract 69983009.)
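To make the disambiguation idea concrete, the sketch below selects among the dictionary translations of one query term by scoring their co-occurrence with translations of the other query terms; the mutual information matrix is a hypothetical input here, and the actual selection scheme is the one developed in Section 2.

```python
# Choose the dictionary translation that best fits the rest of the query.
def choose_translation(candidates, context_translations, mi):
    """candidates: possible English translations of one Chinese query term;
    context_translations: translations of the other query terms;
    mi: dict mapping (word_a, word_b) -> mutual information from comparable corpora."""
    def score(cand):
        return sum(mi.get((cand, ctx), 0.0) for ctx in context_translations)
    return max(candidates, key=score)

# toy usage with made-up scores
mi = {("bank", "interest"): 2.1, ("shore", "interest"): 0.1}
print(choose_translation(["bank", "shore"], ["interest", "rate"], mi))  # bank
```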
0
Quality Estimation (QE) for Machine Translation (MT) is the task of evaluating the quality of the output of an MT system without relying on reference translations. The WMT 2013 QE Shared Task defined four different tasks covering both word- and sentence-level QE. In this work we describe the Fondazione Bruno Kessler (FBK) and University of Edinburgh approach and system setup for our participation in the shared task. We developed models for two sentence-level tasks: Task 1.1: Scoring and ranking for post-editing effort, and Task 1.3: Predicting post-editing time. The first task aims at predicting the Human-mediated Translation Edit Rate (HTER) (Snover et al., 2006) between a suggestion generated by a machine translation system and its manually post-edited version. The data set contains 2,754 English-Spanish sentence pairs post-edited by one translator (2,254 for training and 500 for test). We participated only in the scoring mode of this task. The second task requires predicting the time, in seconds, that was required to post-edit a translation given by a machine translation system. Participants are provided with 1,087 English-Spanish sentence pairs, source and suggestion, along with their respective post-edited sentences and post-editing times in seconds (803 data points for training and 284 for test). For both tasks we applied supervised learning methods and made use of information about word alignments, n-best diversity scores, word posterior probabilities, pseudo-references, and back-translation to train our models. In the remainder of this paper we describe the features designed for our participation (Section 2), the learning methods used to build our models (Section 3), the experiments that led to our submitted systems (Section 4), and we briefly conclude on our experience in this evaluation task (Section 5).
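As a schematic example of the sentence-level setup, the sketch below fits a regressor on a few hand-picked feature values to predict post-editing time; the feature names, the toy values, and the choice of an SVR are placeholders standing in for the actual feature set and learners described in Sections 2 and 3.

```python
# Toy sentence-level QE regressor: features -> post-editing time in seconds.
import numpy as np
from sklearn.svm import SVR

# hypothetical features per sentence: [mean word posterior, source length, n-best diversity]
X_train = np.array([[0.80, 12, 0.65],
                    [0.40, 30, 0.31]])
y_train = np.array([14.2, 95.0])          # observed post-editing times (seconds)

model = SVR(kernel="rbf")
model.fit(X_train, y_train)
print(model.predict(np.array([[0.70, 18, 0.50]])))  # predicted time for a new sentence
```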
0
At first sight, tokenization is not only boring but also trivial. Humans have few problems with this task for at least two reasons: (1) They are experts at pattern-finding (see, for example, Tomasello, 2003). Thus, whether the form "your" in an English Facebook post is to be read as one unit (the possessive determiner) or as two (a common misspelling of "you're") usually causes few problems due to the highly disambiguating grammatical context. (2) They are happy to accept meaningful units without having to determine the exact number of units. While most tokenization guidelines force us to treat "ice cream" as two tokens and "ice-cream" as one token, there often is no difference to native speakers, though it is possible to predict the spelling to some extent based on linguistic context, frequency, etc. (cf. Sanchez-Stockhammer, in preparation). However, given the layered approach typically taken by NLP pipelines, no analysis of the grammatical context is available at the time when tokenization takes place, since tokenization is one of the first steps in an NLP text processing pipeline, often only preceded by sentence splitting. 1 However, tokenization is not fully independent of sentence splitting due to the ambiguity of some punctuation marks, most notoriously the baseline dot, which can for instance occur as (1) a period/full stop marking the end of a sentence, (2) a marker of abbreviated forms, (3) a decimal mark separating the integer from the fractional part of a number, (4) a separator of host name, subdomain, domain and top-level domain in Internet addresses, or (5) part of a so-called horizontal ellipsis ("..."). When all these restrictions are in place, tokenization immediately becomes more challenging as a task, also for humans. Thus, whether the string "No." should be treated as one token or as two is impossible to decide out of context, since it could be a short answer to a question ("Would you like to join us for lunch?" - "No.") or it could be an abbreviation for "number" ("No. 6"). In the former case, tokenization should identify two tokens, in the latter only one. Thus the challenge for any tokenizer is to make use of the linguistic context to disambiguate potentially ambiguous forms even though no higher-level grammatical analysis (i. e. PoS-tagging, lemmatization or even syntactic or semantic analysis) is available. In a way, some of the work done by these high-level tools is thus duplicated in the tokenizer, e. g. identifying numbers, identifying punctuation, identifying proper names (in English) or nouns in general (in German) based on capitalization, where necessary for the tokenization. Of course, an extremely large proportion of tokenization is indeed straightforward. A simple split on white space and common punctuation marks will result in an average F1-score of 96.73 on the test data set used for the present task (cf. Section 4). However, the amount of work that is required to get closer to 100% is inversely proportional to the effect size of the improvements that can be achieved, which means that the bulk of this paper is devoted to the remaining 3.27%. The EmpiriST 2015 shared task on automatic linguistic annotation of computer-mediated communication / social media (Beißwenger et al., 2016) consists of two subtasks that deal with NLP for web and social media texts: (1) tokenization and (2) part-of-speech tagging. We participated in the first subtask and developed a rule-based tokenizer that implements the EmpiriST 2015 tokenization guidelines (EmpiriST team, 2015).
Our system, SoMaJo, won the shared task and is freely available from PyPI, the Python Package Index. 2
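As a toy illustration of the context-sensitive decisions discussed above, the snippet below shows one way a rule could disambiguate a final dot using the following token; SoMaJo's actual rule set is far more extensive, and the abbreviation list and function name here are stand-ins for illustration only.

```python
# Decide whether a final dot belongs to the token ("No. 6") or is sentence punctuation.
ABBREVIATIONS = {"No.", "Dr.", "z.B."}   # tiny illustrative list

def split_final_period(token, next_token):
    # keep the dot attached when a known abbreviation is followed by a digit, e.g. "No. 6"
    if token in ABBREVIATIONS and next_token is not None and next_token[0].isdigit():
        return [token]
    # otherwise treat a trailing dot as separate sentence punctuation
    if token.endswith("."):
        return [token[:-1], "."]
    return [token]

print(split_final_period("No.", "6"))    # ['No.']
print(split_final_period("No.", None))   # ['No', '.']
```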
0
Machine translation is a complex task which requires diverse linguistic knowledge. The seemingly straightforward translation of the English pronoun it into German requires knowledge at the syntactic, discourse and world knowledge levels for proper pronoun coreference resolution (cr). The German third person pronoun can have three genders, determined by its antecedent: masculine (er), feminine (sie) and neuter (es). Previous work (Hardmeier and Federico, 2010; Miculicich Werlen and Popescu-Belis, 2017; Müller et al., 2018) proposed evaluation methods for pronoun translation. This has been of special interest for context-aware nmt models that are capable of using discourse-level information. Despite promising results (Bawden et al., 2018; Müller et al., 2018; Lopes et al., 2020), the question remains: Are transformers (Vaswani et al., 2017) truly learning this task, or are they exploiting simple heuristics to make a coreference prediction? To empirically answer this question, we extend ContraPro (Müller et al., 2018), a contrastive challenge set for automatic English→German pronoun translation evaluation, by making small adversarial changes in the contextual sentences. Our adversarial attacks on ContraPro show that context-aware transformer nmt models can easily be misled by simple and unimportant changes to the input. However, interpreting the results obtained from adversarial attacks can be difficult. The results indicate that nmt uses brittle heuristics to solve cr, but it is not clear what those heuristics are. In general, it is challenging to design attacks based on modifying ContraPro that can test specific phenomena that may be of interest.

[Table 1: A hypothetical cr pipeline that sequentially resolves and translates a pronoun. Steps 1-3 operate on the sentence "The cat and the actor were hungry. It (?) was hungrier.", resolving the pronoun step by step, and the final translation step chooses among "Er / Sie / Es" in "Der Schauspieler und die Katze waren hungrig. Er / Sie / Es war hungriger."]

For this reason, we propose an independent set of templates for coreferential pronoun translation evaluation to systematically investigate which heuristics are being used. Inspired by previous work on cr (Raghunathan et al., 2010; Lee et al., 2011), we create a number of templates tailored to evaluating the specific steps of an idealized cr pipeline. We call this collection Contracat, Contrastive Coreference Analytical Templates. The templates are constructed in a completely controlled manner, enabling us to easily create a large number of coherent test examples and provide strong conclusions about the cr capabilities of nmt. The procedure we used in creating the templates can be adapted to many language pairs with little effort. Our results suggest that transformer models do not learn each step of a hypothetical cr pipeline. We also present a simple data augmentation approach specifically tailored to pronoun translation. The experimental results show that this approach improves scores and robustness on some of our metrics, but it does not fundamentally change the way cr is being handled by nmt. We publicly release Contracat and the adversarial modifications to ContraPro 1.
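A simplified sketch of how such templates can be instantiated is shown below: an English frame is paired with a correct and several contrastive German targets derived from the gender of the intended antecedent. The noun, article, and pronoun tables are illustrative and are not the released Contracat template set.

```python
# Instantiate one contrastive example from a template and a chosen antecedent.
GENDER = {"Katze": "f", "Schauspieler": "m", "Auto": "n"}
ARTICLE = {"Katze": "die", "Schauspieler": "der", "Auto": "das"}
ENGLISH = {"Katze": "cat", "Schauspieler": "actor", "Auto": "car"}
PRONOUN = {"m": "Er", "f": "Sie", "n": "Es"}

def make_pair(noun_a, noun_b, antecedent):
    """Return the English source, the correct German target, and contrastive targets."""
    src = f"The {ENGLISH[noun_a]} and the {ENGLISH[noun_b]} were hungry. It was hungrier."
    ctx = f"{ARTICLE[noun_a].capitalize()} {noun_a} und {ARTICLE[noun_b]} {noun_b} waren hungrig."
    correct = ctx + f" {PRONOUN[GENDER[antecedent]]} war hungriger."
    wrong = [ctx + f" {p} war hungriger."
             for g, p in PRONOUN.items() if g != GENDER[antecedent]]
    return src, correct, wrong

src, ok, bad = make_pair("Katze", "Schauspieler", antecedent="Katze")
print(src, ok, bad, sep="\n")
```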
0
Several early studies in large-scale text processing (Liakata and Pulman, 2002; Gildea and Palmer, 2002; Schubert, 2002) showed that having access to a sentence's syntax enabled credible, automated semantic analysis. These studies suggest that the use of increasingly sophisticated linguistic analysis tools could enable an explosion in available symbolic knowledge. Nonetheless, much of the subsequent work in extraction has remained averse to the use of the linguistic deep structure of text; this decision is typically justified by a desire to keep the extraction system as computationally lightweight as possible. The acquisition of background knowledge is not an activity that needs to occur online; we argue that as long as the extractor will finish in a reasonable period of time, the speed of such a system is an issue of secondary importance. Accuracy and usefulness of knowledge should be of paramount concern, especially as the increase in available computational power makes such "heavy" processing less of an issue. The system explored in this paper is designed for Open Knowledge Extraction: the conversion of arbitrary input sentences into general world knowledge represented in a logical form possibly usable for inference. Results show the feasibility of extraction via the use of sophisticated natural language processing as applied to web texts.

[Figure 1: Abbreviated instructions for categorical judging. 1. e.g., A grand-jury may say a proposition; 2. TRUE BUT TOO SPECIFIC TO BE USEFUL, e.g., Bunker walls may be decorated with seashells; 3. TRUE BUT TOO GENERAL TO BE USEFUL, e.g., A person can be nearest an entity; 4. SEEMS FALSE, e.g., A square can be round; 5. SOMETHING IS OBVIOUSLY MISSING, e.g., A person may ask; 6. HARD TO JUDGE, e.g., Supervision can be with a company.]

Evaluation: Extraction quality was determined through manual assessment of verbalized propositions drawn randomly from the results. Initial evaluation was done using the method proposed in Schubert and Tong (2003), in which judges were asked to label propositions according to their category of acceptability; abbreviated instructions may be seen in Figure 1. 7 Under this framework, category one corresponds to a strict assessment of acceptability, while an assignment to any of the categories between one and three may be interpreted as a weaker level of acceptance. As seen in Table 2, average acceptability was judged to be roughly 50 to 60%, with associated Kappa scores signalling fair (0.28) to moderate (0.48) agreement. Judgement categories at this level of specificity are useful both for system analysis at the development stage, as well as for training judges to recognize the disparate ways in which a proposition may not be acceptable. However, due to the rates of agreement observed, evaluation moved to the use of a five-point sliding scale (Figure 2). This scale allows for only a single axis of comparison, thus collapsing the various ways in which a proposition may or may not be flawed into a single, general notion of acceptability. The authors judged 480 propositions sampled randomly from amongst bins corresponding to frequency of support (i.e., the number of times a given proposition was extracted). 60 propositions were sampled from each of 8 such ranges. 8 As seen in Figure 3, propositions that were extracted at least twice were judged to be more acceptable than those extracted only once. While this is to be expected, it is striking that as frequency of support increased further, the level of judged acceptability remained roughly the same.
0
A great deal of linguistic information exists online in the form of academic publications. ODIN, the Online Database of INterlinear text (Lewis and Xia, 2010), was created upon this premise, a resource which makes available language data for approximately 1,500 languages, including linguistic glosses and resource-rich language translations (Lewis et al., 2015). The data targeted by ODIN is Interlinear Glossed Text, or IGT, a semi-structured data format for the presentation of linguistic examples, as shown in Figure 1. This data has been shown to have interesting characteristics, making some NLP analysis possible for resource-poor languages (Lewis and Xia, 2008; Georgi et al., 2014; Georgi et al., 2015). All the IGT instances in ODIN, including the one in Figure 1, were extracted from linguistic articles that were distributed electronically as Portable Document Format (PDF) documents, a format developed by Adobe Systems for the purposes of standardizing document display and typesetting across platforms. As this format was primarily designed to "communicate visual material between different computer applications and systems" (Warnock, 1991), re-extracting the displayed text for automated systems was not an intended aspect of the format design. In the PDF specification, text is represented as glyphs of a specified font in a "Character Space" coordinate system, embedded within a "Text Space" renderer (Bienz et al., 1997). One of the downsides of this document structure is that the internal structure of a PDF file gives text as glyph-coordinate pairs, and guarantees only the positioning of text rendered on the page. It does not guarantee that the order of the encoded text resembles the order in which it is intended to be read. Consequently, extracting text from PDF documents is not a straightforward task. Whitespace within a PDF may be purely a function of layout, as in a document with multiple columns, or it may be meant to provide a cue to meaningful structural deviations in the text, such as inline examples or floating tables. We present in this paper a system that consumes the extracted text-coordinate information from an off-the-shelf PDF-to-text converter, but reanalyzes this output to perform block detection, respacing, and tabular data analysis. The output format of this system is more human-readable than the verbose XML format produced by the off-the-shelf converters, while making the important positional information available to downstream processes.
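A highly simplified sketch of the respacing idea follows: text fragments with page coordinates are bucketed into lines by vertical position and sorted left to right. Real PDF output needs tolerance handling, multi-column detection, and font-size awareness, so this is only a starting point, not the system's actual block detection.

```python
# Group (text, x, y) glyph fragments from a PDF-to-text converter into reading-order lines.
from collections import defaultdict

def group_into_lines(glyphs, y_tolerance=2.0):
    """glyphs: list of (text, x, y) tuples in page coordinates."""
    rows = defaultdict(list)
    for text, x, y in glyphs:
        key = round(y / y_tolerance)          # bucket nearby baselines together
        rows[key].append((x, text))
    lines = []
    for key in sorted(rows, reverse=True):    # PDF y coordinates usually grow upward
        lines.append(" ".join(t for _, t in sorted(rows[key])))
    return lines

print(group_into_lines([("gloss", 120, 700), ("The", 72, 700), ("morpheme", 72, 688)]))
# ['The gloss', 'morpheme']
```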
0
Many students studying at KIT are from abroad. While they usually speak English fluently, they are often not sufficiently proficient in German in order to follow the content of German lectures. Since most lectures at KIT are given in German, this means that students from abroad struggle with the language barrier during their stay at KIT, most prominently in their academic studies. Universities in English speaking countries do not suffer from this problem, hence they have a higher percentage of foreign students. One of the goals of KIT's internationalization strategy is to become more attractive for students from abroad, among others by lowering the language barrier while at the same time still teaching in German due to considerations of cultural and scientific diversity. In this the use of human interpreters for translating lectures at KIT is not a feasible option, as unlike the European Parliament, KIT does not have the financial resources for employing sufficient amounts of human interpreters. Instead, we have started to offer an automatic simultaneous translation system at several of KIT's lecture halls (Cho et al., 2013) . The system translates the German lectures into English in real time and displays the results of the automatic speech recognition and machine translation as subtitles on a web page. The web page can then be viewed on the personal devices of the students, e.g., smart phones or laptop computers. Translating university lectures is a challenging task. There exist different constraints that have to be met in order to build a system that is of actual use to the students. The two key aspects are the quality of the output of the system itself, as well as the time it takes to create it. Translations have to be output in a timely fashion without large delays. Our system first transcribes the audio by the use of a speech recognition system and then automatically translates the transcribed text. In order for the students to access the transcription and translation using their own electronic device we have implemented this system as a web-service. This way, the students can use their laptops, tablets or smart phones to access our service. In each supported lecture hall, the system is integrated into the PA. Due to this seamless integration, there is no need for the lecturer to carry an additional microphone. The system is automatically started and stopped at predefined times. We first introduced this system during summer term 2012. Since its inauguration, we offer this service to the students on a regular basis. It is installed in our main lecture hall (Audimax) and in multiple other lecture halls. With the system being available in more lecture halls, we are able to offer this service to a wider audience. In this paper we now present the results of a user study on how the students actually benefit from the system, and which aspects of the systems should be improved with the highest priority. We carried out the study over the course of two terms. The rest of the paper is organized as follows: In the next chapter we give a brief overview of work done in this field. In Section 3 we describe the evaluation procedure. Following that, we present the results in Section 4. We conclude in Section 5 with an outlook where we show next possible steps to improve our system.
0
Performing semantic inference is important for natural language applications such as Question Answering (QA), Information Extraction, and Discourse Analysis. One of the missing links for semantic inference is the availability of commonsense knowledge in computers. In this paper, we focus on acquiring knowledge about causal relations between events. Previous work on causal rule acquisition targeted simple rules, each of whose heads is represented by a single literal or n-ary predicate: for example, Girju (2003) collected causal rules between nouns (e.g., hunger ⇒ headache); and Pantel et al. (2007) acquired causal rules between verbs (e.g., Y announced the arrest of X ⇒ X is charged by Y). However, humans perform more complicated inferences to predict outcomes of an event. Let us consider the following example: Google acquires Android Inc. The acquisition will enhance Google's competition in mobile phones. The first sentence mentions an acquisition event with the verb acquire. Starting with its deverbal noun acquisition, the second sentence describes the possible outcome of the acquisition event. Referring to events explained in the preceding sentences, deverbal nouns often provide good clues for identifying cause-effect relations. However, acquiring the following causal rule from the above example is of no use:

acquire(X, Android) ⇒ compete-in(X, mobile phones) (1)

Even though we generalize the causal relation by replacing the company name Google with a variable X, it is unlikely that we can reuse the causal knowledge that if a company acquires Android, the company will compete in mobile phones. Having said that, the following rule may be too generic:

acquire(X, Y) ⇒ compete-in(X, Z) (2)

This rule only expresses that if a company acquires something, the company will compete in some area. This causality may be supported by a lot of activities in the real world, but it does not provide a good hint for predicting the value of Z. In contrast, inducing the following causal rule would be preferable in terms of reusability and predictability:

acquire(X, Y) ∧ specialized-in(Y, Z) ⇒ compete-in(X, Z) (3)

Here, we complemented a predicate specialized-in(Y, Z) as a constraint, i.e., as a premise in the head (left-hand side) of the rule, even though this is not explicit in the original text. Humans accept the causal relation mentioned in the above text because we have prior knowledge about Rule 3 and the truth of the predicate specialized-in(Android, mobile phone). This paper presents a novel approach for inducing causal rules like Rule 3 from sentences with deverbal nouns (as in the above example). The contributions of this paper are twofold: 1. We focus on verbs and their deverbal nouns that co-refer to the same events. The use of deverbal nouns was not explored in previous work on causal knowledge acquisition. We investigate the advantage of this approach empirically. 2. We present a method for generalizing and constraining causal relations by making use of relation instances acquired automatically from a large corpus. Previous work replaced the same mention (string) in a pattern with a variable to induce an inference rule (template). In contrast, this work unveils hidden predicates and variables that are not stated explicitly in text, but are crucial for explaining causal relations. This part is very challenging because we need to combine pieces of predicates obtained from different texts.
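A toy rendering of how a rule like Rule 3 could be applied is shown below: the conclusion is generated only when the added premise specialized-in(Y, Z) is supported by background relation instances. The relation store and matching code are illustrative only and are not the induction method itself.

```python
# Apply Rule 3: acquire(X, Y) ∧ specialized-in(Y, Z) ⇒ compete-in(X, Z)
BACKGROUND = {("specialized-in", "Android Inc.", "mobile phones")}  # hypothetical relation store

def apply_rule(event, background=BACKGROUND):
    """event: ('acquire', X, Y); yields ('compete-in', X, Z) for every supported Z."""
    pred, x, y = event
    if pred != "acquire":
        return []
    return [("compete-in", x, z)
            for rel, arg1, z in background
            if rel == "specialized-in" and arg1 == y]

print(apply_rule(("acquire", "Google", "Android Inc.")))
# [('compete-in', 'Google', 'mobile phones')]
```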
0
In the following we will describe an experimental machine translation system currently under development at the University of Stuttgart, with the following goals in mind:
• flexibility regarding the level of transfer
• incorporation and exploitation of contextual and non-linguistic information in the transfer system
• variable depth of semantic and pragmatic analysis required for translation
The core of the system is a bidirectional transfer system using LFG grammars for the source and target languages (currently German and French), based on the approach suggested by Kaplan et al. (1989). This core system produces a functional structure of the input sentences together with a shallow semantic analysis (f-s-structures). In order to include contextual, intersentential information, a Contextual Resolver using Discourse Representation Theory (DRT; Kamp (1981)) was added. The system combines a linguistic transfer system with (modules of) a general-purpose text understanding system in order to study to what extent translation is possible on the basis of the linguistic information provided by the core system (the f-s-structures), and to develop the control mechanisms for cases where this is not possible. For these cases we developed the concept of what we will call contextual constraints, which are the interface between the transfer system and the contextual resolver. The general methodology we followed in developing the system is to keep its modules as independent from the application in MT as possible. This means, for instance, that we use the same grammars in monolingual applications (e.g., in an NL interface to databases) as in MT. An important consequence is that we do not want to introduce into the grammar of one language distinctions or structures relevant only to the target language. Our research interest is centered around notoriously difficult problems such as the translation of tense and anaphors. Since German and French differ in their tense systems as well as in their gender systems, a simple transfer scheme at the level of f-s-structures is not possible. This will be discussed in greater detail after a short description of the overall architecture of the system and the concepts underlying it. (The research reported here is supported by the German Science Foundation (SFB 340, Project B3).)
0
Recent years have seen increasing research and commercial activity in the area of Spoken Language Translation (SLT) for mission-critical applications. In the health care area, for instance, such products as Converser, S-MINDS (www.fluentialinc.com), and Med-SLT (Bouillon et al., 2005) are coming into use. For military applications, products like Phraselator (www.phraselator.com) and S-MINDS (www.fluentialinc.com) have been deployed. However, the demand for real-time translation is by no means restricted to these areas: it is clear in numerous other areas not yet extensively addressed: emergency services, law enforcement, and others. Ideally, a system produced for one such domain (e.g., health care) could be easily ported to other domains. However, porting has in practice proven difficult. This paper will comment on the sources of this difficulty and briefly present an approach to rapid inter-domain portability that we believe is promising. Three aspects of our approach will be discussed: (1) large general-purpose lexicons for automatic speech recognition (ASR) and machine translation (MT), made reliable and usable through interactive facilities for monitoring and correcting errors; (2) easily modifiable facilities for instant translation of frequent phrases; and (3) quickly modifiable custom glossaries. As preliminary support for our approach, we apply our current SLT system, now optimized for the health care domain, to sample utterances from the military, emergency service, and law enforcement domains. With respect to the principal source of the porting problems affecting most SLT systems to date: most systems have relied upon statistical approaches for both ASR and MT (Karat and Nahamoo, 2007; Koehn, 2008), so each new domain has required extensive and high-quality in-domain corpora for best results, and the difficulty of obtaining them has limited these systems' portability. The need for in-domain corpora can be eliminated through the use of a quite general corpus (or collection of corpora) for statistical training; but because large corpora give rise to quickly increasing perplexity and error rates, most SLT systems have been designed for specialized domains. By contrast, breadth of coverage has been a central design goal of our SLT systems. Before any optimization for a specific domain, we "give our systems a liberal arts education" by incorporating very broad-coverage ASR and MT technology. (We presently employ rule-based rather than statistical MT components, but this choice is not essential.) For example, our MT lexicons for English<>Spanish translation in the health care area contain roughly 350,000 words in each direction, of which only a small percentage are specifically health care terms. Our translation grammars (presently licensed from a commercial source, and further developed with our collaboration) are similarly designed to cover the structures of wide-ranging general texts and spoken discourse. To deal with the errors that inevitably follow as coverage grows, we provide a set of facilities that enable users from both sides of the language barrier to interactively monitor and correct such errors. We have described these interactive techniques in (Dillinger and Seligman, 2004; Zong and Seligman, 2005; Dillinger and Seligman, 2006).
With users thus integrated into the speech translation loop, automatically translated spoken conversations can range widely with acceptable accuracy (Seligman, 2000) . Users can move among domains with relative freedom, even in advance of lexical or other domain specialization, because most domains are already covered to some degree. After a quick summary of our approach (in Section 2), we will demonstrate this flexibility (in Section 3).While our system's facilities for monitoring and correction of ASR and MT are vital for accuracy and confidence in wide-ranging conversations, they can be time consuming. Further, interactivity demands a minimum degree of computer and print literacy, which some patients may lack. To address these issues, we have developed a facility called Translation Shortcuts™, through which prepared translations of frequent or especially useful phrases in the current domain can be instantly executed by searching or browsing. The facility is described in (Seligman and Dillinger, 2006) . After a quick description of the Translation Shortcuts facility (Section 4), this paper will emphasize the contribution of the Translation Shortcuts facility to domain portability, showing how a domain-specific set of Shortcuts can be composed and integrated into the system very quickly (Section 5).Finally, while the extensive lexical resources already built into the system provide the most significant boost to domain portability in our system, it will always be desirable to add specialized lexical items or specialized meanings of existing ones. Section 6 will briefly present our system's glossary import facility, through which lexical items can be added or updated very quickly. Our concluding remarks appear in Section 7.
0
A zero pronoun (ZP) is a gap in a sentence that is found when a phonetically null form is used to refer to a real-world entity. An anaphoric zero pronoun (AZP) is a ZP that corefers with one or more preceding mentions in the associated text. Below is an example taken from the Chinese Treebank (CTB), where the ZP (denoted as *pro*) refers to 俄罗斯 (Russia). [俄罗斯] 作为米洛舍夫维奇一贯的支持者, *pro* 曾经提出调停这场政治危机。 ([Russia] is a consistent supporter of Milošević, *pro* has proposed to mediate the political crisis.) As we can see, ZPs lack grammatical attributes that are useful for overt pronoun resolution, such as number and gender. This makes ZP resolution more challenging than overt pronoun resolution. Automatic ZP resolution is typically composed of two steps. The first step, AZP identification, involves extracting ZPs that are anaphoric. The second step, AZP resolution, aims to identify an antecedent of an AZP. State-of-the-art ZP resolvers have tackled both of these steps in a supervised manner, training one classifier for AZP identification and another for AZP resolution (e.g., Zhao and Ng (2007), Kong and Zhou (2010)). More recently, Chen and Ng (2014b; 2015) have proposed unsupervised probabilistic AZP resolution models (henceforth the CN14 model and the CN15 model, respectively) that rival their supervised counterparts in performance. An appealing aspect of these unsupervised models is that their language-independent generative process enables them to be applied to languages where data annotated with ZP links are not readily available. Though achieving state-of-the-art performance, these models have several weaknesses. First, a lot of manual effort needs to be spent on engineering the features for generative probabilistic models, as these models are sensitive to the choice of features. For instance, having features that are (partially) dependent on each other could harm model performance. Second, in the absence of labeled data, it is difficult, though not impossible, for these models to profitably employ lexical features (e.g., word pairs, syntactic patterns involving words), as determining which lexical features are useful and how to combine the potentially large number of lexical features in an unsupervised manner is a very challenging task. In fact, the unsupervised models proposed by Chen and Ng (2014b; 2015) are unlexicalized, presumably owing to the aforementioned reasons. Unfortunately, as shown in previous work (e.g., Zhao and Ng (2007), Chen and Ng (2013)), the use of lexical features contributed significantly to the performance of state-of-the-art supervised AZP resolvers. Finally, owing to the lack of labeled data, the model parameters are learned to maximize data likelihood, which may not correlate well with the desired evaluation measure (i.e., F-score). Hence, while unsupervised resolvers have achieved state-of-the-art performance, these weaknesses together suggest that it is very challenging to scale these models up so that they can achieve the next level of performance. Our goal in this paper is to improve the state of the art in AZP resolution.
Motivated by the aforementioned weaknesses, we propose a novel approach to AZP resolution using deep neural networks, which we believe has three key advantages over competing unsupervised counterparts.First, deep neural networks are particularly good at discovering hidden structures from the input data and learning task-specific representations via successive transformations of the input vectors, where different layers of a network correspond to different levels of abstractions that are useful for the target task. For the task of AZP resolution, this is desirable. Traditionally, it is difficult to correctly resolve an AZP if its context is lexically different from its antecedent's context. This is especially the case for unsupervised resolvers. In contrast, a deep network can handle difficult cases like this via learning representations that make lexically different contexts look similar.Second, we train our deep network in a supervised manner. 1 In particular, motivated by recent successes of applying the mention-ranking model (Denis and Baldridge, 2008) to entity coreference resolution (e.g., Chang et al. (2013) , Durrett and Klein (2013), Clark and Manning (2015) , Martschat and Strube (2015) , Wiseman et al. (2015) ), we propose to employ a ranking-based deep network, which is trained to assign the highest probability to the correct antecedent of an AZP given a set of candidate antecedents. This contrasts with existing supervised AZP resolvers, all of which are classification-based. Optimizing this objective function is better than maximizing data likelihood, as the former is more tightly coupled with the desired evaluation metric (F-score) than the latter.Finally, given that our network is trained in a supervised manner, we can extensively employ lex-ical features and use them in combination with other types of features that have been shown to be useful for AZP resolution. However, rather than employing words directly as features, we employ word embeddings trained in an unsupervised manner. The goal of the deep network will then be to take these task-independent word embeddings as input and convert them into embeddings that would work best for AZP resolution via supervised learning. We call our approach an embedding matching approach because the underlying deep network attempts to compare the embedding learned for an AZP with the embedding learned for each of its antecedents.To our knowledge, this is the first approach to AZP resolution based on deep networks. When evaluated on the Chinese portion of the OntoNotes 5.0 corpus, our embedding matching approach to AZP resolution outperforms the CN15 model, achieving state-of-the-art results.The rest of the paper is organized as follows. Section 2 overviews related work on zero pronoun resolution for Chinese and other languages. Section 3 describes our embedding matching approach, specifically the network architecture and the way we train and apply the network. We present our evaluation results in Section 4 and our conclusions in Section 5.
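A minimal sketch of this ranking setup is given below: one network encodes the zero pronoun's context, another encodes each candidate antecedent, and a softmax over the matching scores is trained against the gold antecedent. The layer sizes, the dot-product scorer, and the random inputs are illustrative assumptions, not the exact architecture described in Section 3.

```python
import torch
import torch.nn as nn

class EmbeddingMatcher(nn.Module):
    """Score candidate antecedents against an AZP context representation."""
    def __init__(self, dim=100, hidden=256):
        super().__init__()
        self.zp_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
        self.cand_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())

    def forward(self, zp_context, candidates):
        zp = self.zp_net(zp_context)        # (hidden,)
        cands = self.cand_net(candidates)   # (n_candidates, hidden)
        return cands @ zp                   # one matching score per candidate

model = EmbeddingMatcher()
scores = model(torch.randn(100), torch.randn(5, 100))
# ranking objective: push probability mass toward the gold antecedent (index 2 here)
loss = nn.functional.cross_entropy(scores.unsqueeze(0), torch.tensor([2]))
```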
0
Many recent tools for morphological analysis use statistical approaches such as neural networks (Cotterell et al., 2018). These approaches can profitably use huge amounts of training data, which makes them ideal for high-resource languages. But if there is little training data available, statistical methods can struggle to learn an accurate model. And when they learn the wrong model, it is difficult to diagnose the error, because the model is a black box. This paper presents a new approach to morphological analysis that produces human-understandable models and works even if only a few training words are given. The user gives a list of inflected words together with each word's morphosyntactic features and standard form (lemma):

standard  inflected  features
woman     women      Pl;Nom
baby      babies'    Pl;Gen
dog       dogs'      Pl;Gen
cat       cat's      Sg;Gen
lorry     lorries    Pl;Nom

Our tool then proposes, for each feature, the affixes and morphological rules which mark that feature. For the example above it suggests the following:

feature  context  morpheme
Gen      Sg       +'s*
Gen      Pl       +'*
Pl       Gen      +s*
Pl       Nom      +a+ → +e+
Pl                +y* → +ies*
Sg                +y*, +a+, ∅

Here, +'s* represents the suffix 's, and +a+ represents an infix a. 1 The table shows that both 's and an apostrophe can mark the genitive case; the second column means that the genitive was marked by 's only in singular nouns, and by an apostrophe only in plural nouns. An s suffix marks plural, and because of the tiny input data it was only seen in genitive nouns. Plural can be marked by an inner vowel change from a to e (indicated by the arrow), or by changing a final y to ies, in which case the singular form is marked by a or y. The tool also segments the input words into morphemes consistent with the table of rules (the inferred stem is marked in bold). Morphological segmentation has been used in applications such as speech recognition (Rajagopal Narasimhan et al., 2014) and information retrieval (Turunen and Kurimo, 2008); for a more detailed overview we refer the reader to Ruokolainen et al. (2016) or Hammarström and Borin (2011). Unsupervised learning (Harris, 1955; Creutz and Lagus, 2007; Goldsmith, 2001; Johnson, 2008; Poon et al., 2009) relies on unannotated text, and is perhaps the most popular approach, because of the large amount of unannotated text available for many languages, but it can suffer from low accuracy (Hammarström and Borin, 2011). One way to improve accuracy is to exploit semantic information (Sakakini et al., 2017; Vulić et al., 2017; Schone and Jurafsky, 2000; Soricut and Och, 2015). Another is minimally-supervised learning (Monson et al., 2007; Kohonen et al., 2010; Ruokolainen et al., 2013; Grönroos et al., 2014; Sirts and Goldwater, 2013; Ahlberg et al., 2014), which combines a large amount of unannotated text with a small amount of annotated text, and potentially provides high accuracy at a low cost. Silfverberg and Hulden (2017) observe that in the Universal Dependencies project (Nivre et al., 2017), each word is annotated with its lemma and features, but not segmented. They study the problem of finding a segmentation for these words. Our segmentation algorithm solves the same problem, and can be used in their setting. We improve on their solution by using a constraint solver to achieve high accuracy even with limited data, and using a precise language for expressing affixes and rules, which allows us to use the resulting model to precisely analyse new words. Luo et al. (2017) use Integer Linear Programming for unsupervised modelling of morphological families.
They use ILP as a component of a larger training algorithm, so unlike our work they do not attempt to find a globally optimal solution. ILP has been used in other NLP applications outside of morphology (Berant et al., 2011; Roth and Yih, 2005; Clarke and Lapata, 2008) .
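To illustrate how an inferred rule such as +y* → +ies* can be applied to relate a standard form to its inflected form, here is a small sketch; the rule encoding below is a simplification of the tool's actual rule language, introduced only for this example.

```python
# Apply a simple suffix-rewrite rule (old_suffix -> new_suffix) to a lemma.
def apply_suffix_rule(standard, rule):
    old, new = rule                     # e.g. ("y", "ies")
    if standard.endswith(old):
        return standard[: len(standard) - len(old)] + new
    return standard

print(apply_suffix_rule("lorry", ("y", "ies")))   # lorries  (Pl;Nom)
print(apply_suffix_rule("cat", ("", "'s")))       # cat's    (Sg;Gen)
```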
0