In recent years, with the advent of large training corpora and pretraining techniques, chatbot models have evolved considerably in the open domain (Bao et al., 2020; Roller et al., 2021). Current chatbots achieve surprising results in generating fluent, engaging, and informative responses, but still occasionally generate responses that contradict the dialogue history when interacting with humans (Li et al., 2021b). Such contradictions are often jarring and severely disrupt communication. It is therefore essential to reduce contradiction for chatbots in multi-turn dialogues.

Previous work (Li et al., 2016) proposes to use the reinforcement learning (RL) paradigm to mitigate the gap between the training objective and the contradiction-avoidance objective. However, RL-based methods are prone to degradation in deep neural networks (Parisotto et al., 2020), leading the decoder to generate responses that deviate from human language (Lewis et al., 2017; Kottur et al., 2017). Another method (Li et al., 2020) aims to address dialogue logical contradictions via unlikelihood training (Welleck et al., 2019). While it reduces the probability of labeled contradictory responses, it generalizes poorly to different conversation scenarios given the limited coverage of labeled contradiction data.

Figure 1: The similarity between the correct and the contradictory response is 0.9315 in the Blenderbot embedding space.

We argue that one of the reasons behind contradiction is that the model lacks the ability to clearly identify contradictory behavior. As shown in Figure 1, the large pretrained chatbot Blenderbot (Roller et al., 2021) still assigns high similarity to the correct and the contradictory response in its embedding space. Chatbots are likely to produce contradictions when probed with unusual conversations during inference (Roller et al., 2021), while they are commonly trained to mimic human context-response pairs under the teacher-forcing algorithm (Williams and Zipser, 1989). Without being exposed to incorrect and contradictory context-response pairs, chatbots fail to learn to discriminate contradictory response patterns directly, which hurts their robustness in avoiding contradiction.

To tackle this challenging issue, we propose a novel method to Mitigate Contradiction via Contrastive Learning, namely MCCL. Our method explicitly perceives the difference between a self-contradiction negative example and a semantically aligned positive example. Instead of relying on well-labeled contradiction examples (Li et al., 2020), we generate a self-contradiction negative example with a learnable latent noise. To capture contradictory behavior, we employ the policy gradient method to reward the latent noise based on feedback from a pre-trained critic. Furthermore, we construct an additional positive example by adding a small perturbation. The positive example has semantics aligned with the original context, which contributes to training stability and robustness.

Overall, our contributions are summarized as follows: 1) To mitigate contradictions in dialogue, we propose a novel method named MCCL, which contrasts the target response with negative examples to make chatbot models discriminate and refrain from contradictory response patterns. 2) Experimental results show that our method outperforms baselines in automatic metrics and manual evaluation, especially in contradiction score.
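As an illustration of the kind of contrastive objective involved, the following minimal PyTorch sketch contrasts a target response representation with a semantically aligned positive and a self-contradiction negative; the encoder outputs, dimensions, and temperature are placeholder assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negative, temperature=0.1):
    """InfoNCE-style loss: pull the anchor (target response representation)
    towards the semantically aligned positive and push it away from the
    self-contradiction negative. All inputs are (batch, dim) tensors."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    pos_sim = (anchor * positive).sum(-1) / temperature   # (batch,)
    neg_sim = (anchor * negative).sum(-1) / temperature   # (batch,)
    logits = torch.stack([pos_sim, neg_sim], dim=-1)      # (batch, 2)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)

# Random embeddings standing in for encoder outputs.
h_resp = torch.randn(4, 256)                   # target response representation
h_pos = h_resp + 0.01 * torch.randn(4, 256)    # small perturbation -> positive
h_neg = torch.randn(4, 256)                    # self-contradiction negative (here random)
print(contrastive_loss(h_resp, h_pos, h_neg).item())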
The Expedition project is devoted to the fast "ramp-up" of machine translation systems from less studied, so-called "low-density" languages into English. One of the components that must be acquired and built during this process is a morphological analyzer for the source low-density language. Since we expect that the source-language informant will not be well versed in computational linguistics in general, or in recent approaches to building morphological analyzers (e.g., [Koskenniemi, 1983], [Antworth, 1990], [Karttunen, 1994]) and the operation of state-of-the-art finite state tools (e.g., [Karttunen, 1993], [Karttunen and Beesley, 1992], [Karttunen et al., 1996]) in particular, the generation of the morphological analyzer component has to be accomplished semi-automatically. The user must be guided through a procedure that elicits the knowledge required for the morphological analyzer. This is accomplished using the elicitation component of Expedition, the Boas system. As this task is not easy, we expect that the development of the morphological analyzer will be an iterative process, whereby the human informant will revise and/or refine the information previously elicited based on feedback from test runs of the nascent analyzer.

The work reported in this paper describes the use of machine learning in the process of building and refining morphological analyzers. The main use of machine learning in our current approach is in the automatic learning of formal rewrite or replace rules for morphographemic changes from examples provided by the informant. Accounting for such phenomena is perhaps one of the more complicated aspects of building an analyzer, and by automating it we expect to gain a certain improvement in productivity.

There have been a number of studies on inducing morphographemic rules from a list of inflected words and a root word list. Johnson [1984] presents a scheme for inducing phonological rules from surface data, mainly in the context of studying certain aspects of language acquisition. The premise is that languages have a finite number of alternations to be handled by morphographemic rules and a fixed number of contexts in which they appear; so, given enough data, phonological rewrite rules can be generated to account for the data. Rules are ordered by some notion of "surfaciness", and at each stage the most surfacy rule -- the rule with the most transparent context -- is selected. Golding and Thompson [1985] describe an approach for inducing rules of English word formation from a given corpus of root forms and the corresponding inflected forms. The procedure described there generates a sequence of transformation rules (not in the sense used in transformation-based learning [Brill, 1995]), each specifying how to perform a particular inflection. More recently, Theron and Cloete [1997] have presented a scheme for obtaining two-level morphology rules from a set of aligned segmented and surface pairs. They use the notion of string edit sequences, assuming that only insertions and deletions are applied to a root form to get the inflected form. They determine the root form associated with an inflected form (and consequently the suffixes and prefixes) by exhaustively matching against all root words.
The motivation is that "real" suffixes and prefixes will appear often enough in the corpus of inflected forms, so that, once frequently occurring suffixes and prefixes are identified, one can determine the segmentation for a given inflected word by choosing the segmentation with the most frequently occurring affix segments and considering the remainder to be the root. While this procedure seems reasonable for a small root word list, the potential for "noisy" or incorrect alignments is quite high when the corpus of inflected forms is large and the procedure is not given any prior knowledge of possible segmentations. As a result, selecting the "correct" segmentation automatically becomes quite nontrivial. An additional complication is that allomorphs show up as distinct affixes and their counts in segmentations are not accumulated, which might lead to actual segmentations being missed due to fragmentation. The rule induction is not via a learning scheme: aligned pairs are compressed into a special data structure and traversals over this data structure generate morphographemic rules. Theron and Cloete have experimented with pluralization in Afrikaans, and the resulting system has shown about 94% accuracy on unseen words.

Goldsmith [1998] has used an unsupervised learning method based on the minimum description length principle to learn the "morphology" of a number of languages. What is learned is a set of "root" words and affixes, and common inflectional pattern classes. The system requires just a corpus of words in a language. In the absence of any root word list to use as scaffolding, the shortest forms that appear frequently are assumed to be roots, and observed surface forms are then either generated by concatenative affixation of suffixes or by rewrite rules (some of which may not make sense but are necessary to account for the data: for instance, a rule like "insert a word-final y after the root eas" is used to generate easy). Since the system has no notion of what the roots and their part-of-speech values really are, and of what morphological information is encoded by the affixes, these need to be retrofitted manually by a human (if one is building a morphological analyzer) who would have to weed through a large number of noisy rules. We feel that this approach, while quite novel, can be used to build real-world morphological analyzers only after substantial modifications are made.

This paper is organized as follows: The next section very briefly describes the Boas project, of which this work is a part. The subsequent sections describe the details of the approach, the morphological analyzer architecture, and the induction of morphographemic rules, along with explanatory examples. Finally, we provide some conclusions and ideas for future work.

Boas [Nirenburg, 1998; Nirenburg and Raskin, 1998] is a semi-automatic knowledge elicitation system that guides a team of two people through the process of developing the static knowledge sources for a moderate-quality, broad-coverage MT system from any "low-density" language into English. Boas contains knowledge about human language and means of realization of its phenomena in a number of specific languages and is, thus, a kind of "linguist in the box" that helps nonprofessional acquirers with the task.
In the spirit of the goal-driven, "demand-side" approach to computational applications of language processing [Nirenburg, 1996], the process of acquiring this knowledge has been split into two steps: (i) acquiring the descriptive, declarative knowledge about a language and (ii) deriving operational knowledge (content for the processing engines) from this descriptive knowledge. A typical elicitation interaction screen of Boas is shown in Figure 1.

An important property that we strive to achieve for these descriptive and operational pieces of information, be they elicited from human informants or acquired via machine learning, is that they should be transparent and human-readable, and where necessary human-maintainable and extendable, contrary to the opaque and uninterpretable representations acquired by various statistical learning paradigms.

Before proceeding any further, we would also like to state the aims and limitations of our approach. Our main goal is to significantly expedite the development of a morphological analyzer. It is clear that for inflectional languages, where each root word can be associated with a finite number of word forms, one can, with a lot of work, generate a list of word forms with associated morphological features encoded and use this as a lookup table to analyze word forms in input texts. This is, however, something we would like to avoid, as it is time consuming, expensive, and error-prone. We would prefer to capture general morphophonological and morphographemic phenomena, and lexicon abstractions (say, as inflectional paradigms), using an example-driven technique, and essentially reduce the acquisition process to one of just assigning root or citation forms to one of these lexicon abstractions, with the automatic generation process to be described doing the rest of the work. This process will still be imperfect, as we expect human informants to err in making their paradigm abstractions and to overlook details or exceptions. So, the whole process will be an iterative one, with convergence to a wide-coverage analyzer coming slowly at the beginning (while morphological phenomena and lexicon abstractions are being defined and tested), but speeding up significantly once wholesale root form acquisition starts. Since the generation of the operational content (data files to be used by the morphological analyzer engine) from the elicited descriptions is expected to take only a few minutes, feedback on operational performance can be provided very fast. There are also ways to utilize a partially acquired morphological analyzer to aid in the acquisition of open-class root or citation forms. Human languages have many diverse morphological phenomena, and it is not our intent at this point to have a universal architecture that can accommodate any and all phenomena. Rather, we propose a modular and extensible architecture that can accommodate additional functionality in future incarnations of Boas. We also intend to limit morphological processing to single tokens and to deal with multi-token phenomena, such as partial or full word reduplication, with additional machinery that we do not discuss here.

In this paper we concentrate on operational content in the context of building a morphological analyzer. To determine this content, we integrate the information provided by the informant with automatically derived information.
The whole process is an iterative one, as illustrated in Figure 2, whereby the information elicited is transformed into the operational data required by the generic morphological analyzer engine (we currently use the XRCE finite state tools as our target environment [Karttunen et al., 1996]) and the resulting analyzer is tested on a test corpus, independently elicited from the human informant or compiled from any on-line resources for the language in question. Any discrepancies between the output of the analyzer and the test corpus are then analyzed, and potential sources of errors are given as feedback to the elicitation process. Currently, this feedback is limited to morphographemic processes. The box in Figure 2 labeled Morphological Analyzer Generation is the main component, which takes in the information elicited and generates a series of regular expressions describing the morphological lexicon and the morphographemic rules. The morphographemic rules, describing changes in spelling as a result of affixation operations, are induced from the examples provided, using transformation-based learning [Brill, 1995; Satta and Henderson, 1997]. The result is an ordered set of contextual replace or rewrite rules, much like those used in phonology. We then use error-tolerant finite state recognition [Oflazer, 1996] to perform a "reverse spelling correction" that identifies the erroneous words the finite state analyzer accepts that are (very) close to the correct words in the test corpus that it rejects. The resulting pairs are then aligned, and the resulting mismatches are identified and logged for feedback purposes.

We adopt the general approach advocated by Karttunen [1994] and build the morphological analyzer as the combination of several finite state transducers, some of which are constructed directly from the elicited information while others are constructed from the output of the machine learning stage. Since the combination of the transducers is computed at compile time, there are no run-time overheads. The basic architecture of the morphological analyzer is depicted in Figure 3. The analyzer consists of the union of transducers, each of which implements the morphological analysis process for one paradigm. Each such transducer is the composition of a number of components, described below (from bottom to top):

1. The bottom component is an ordered sequence of morphographemic rules that are learned via transformation-based learning from the examples for the inflectional paradigm provided by the human informant. The rules are then composed into one finite state transducer [Kaplan and Kay, 1994].

2. The root and morpheme lexicon contains the root words and the affixes. We currently assume that all affixation is concatenative and that the lexicon is described by a regular expression of the sort [ Affixes ]* [ Roots ] [ Suffixes ]*. (We currently assume that we have at most one prefix and at most one suffix, but this is not a fundamental limitation. On the other hand, elicitation of complex morphotactics for an agglutinative language like Turkish or Finnish requires more sophisticated elicitation machinery.)

3. The morpheme to surfacy feature mapping essentially maps morphemes to feature names but retains some encoding of the surface morpheme. Thus, allomorphs that encode the same feature would be mapped to different "surfacy" features.

4. The lexical and surfacy constraints specify any conditions needed to constrain the possibly overgenerating morphotactics of the root and morpheme lexicon. These constraints can be encoded using the root morphemes and the surfacy features generated by the previous mapping. The use of surfacy features enables reference to zero morphemes, which otherwise could not have been used. For instance, if in some paradigm a certain prefix does not co-occur with a certain suffix, or always occurs with some other suffix, or if a certain root/lemma of that paradigm has exceptional behavior with respect to one or more of the affixes, or if the allomorph that goes with a certain root depends on the properties of the root, these are encoded at this level as a finite state constraint.

5. The surfacy feature to feature mapping module maps the surfacy representation of the affixes to symbolic feature names; as a result, no surface information remains except for the lemma or the root word. Thus, for instance, allomorphs that encode the same feature and map to different surfacy features now map to the same feature symbol.

6. The feature constraints specify any constraints among the symbolic features. This is an alternative to the functionality provided by the lexical and surfacy constraints for constraining morphotactics, but at this level one refers to and constrains features as opposed to surfacy features. This may provide a more natural or convenient abstraction, especially for languages with long-distance morphotactic constraints.

These six finite state transducers are composed to yield the transducer for the paradigm, and the union of the resulting transducers produces one (possibly large) transducer for morphological analysis, where surface strings applied at the lower side produce all possible analyses at the upper side.

The Boas environment elicits morphological information by asking the informant a series of questions about the paradigms of inflection. A paradigm abstracts together lemmas (or root words) that essentially behave the same with respect to inflection, and captures information about the morphological features encoded and the forms realizing these features, from which additional information can be extracted. It is assumed that all lemmas that belong to the same paradigm take the same set of inflectional affixes. It is expected that the roots and/or the affixes may undergo systematic or idiosyncratic morphographemic changes. It is also assumed that certain lemmas in a given paradigm may behave in some exceptional way (for instance, contrary to all other lemmas, a given lemma may not have one of the inflected forms). A paradigm description also provides the full inflectional patterns for one characteristic or distinguished lemma belonging to the paradigm, and additional examples for any other lemmas whose inflectional forms undergo nonstandard morphographemic changes. If necessary, any lexical and feature constraints can be encoded. Currently, the provisions we have for such constraints are limited to writing regular expressions (albeit at a much higher level), but capturing such constraints using a more natural language (e.g., [Ranta, 1998]) can be stipulated for future versions.

The information elicited from the human informant is captured as a text file. The root word and the inflection examples for the distinguished lemma are processed with an alignment algorithm to determine how the given root word aligns with each inflected form so that the edit distance is minimum.
Once such alignments are performed, the segments in the inflected form that are before and after the root alignment points are considered to be the prefixes and suffixes of the paradigm. These are then associated with the features given with the inflected form. Let us provide a simple example from a Russian verb inflection paradigm, for which the information about the distinguished lemma is provided by the informant (upper-case characters and the single quote symbol encode specific Russian characters; the transliteration is not conventional). The alignment produces the following suffix-feature pairs for the suffix lexicon and the morpheme to feature mapping transduction:

+at' -> +Inf
+u -> +Pres1sg
+eS -> +Pres2sg
+et -> +Pres3sg
+em -> +Pres1pl
+ete -> +Pres2pl
+ut -> +Pres3pl
+al -> +PastMsg
+ala -> +PastFsg
+alo -> +PastNsg
+ali -> +PastPl
+' -> +Impsg
+'te -> +Imppl

We then produce the following segmentations to be used by the learning stage discussed in the next section. It should be noted that we (can) use the lemma form as the morphological stem, so that the analysis we generate will have the lemma; thus, some of the rules learned later will need to deal with this.

(rezat'+at', rezat')   (rezat'+eS, reZeS)    (rezat'+et, reZet)
(rezat'+em, reZem)     (rezat'+ete, reZete)  (rezat'+ut, reZut)
(rezat'+al, rezal)     (rezat'+ala, rezala)  (rezat'+alo, rezalo)
(rezat'+ali, rezali)   (rezat'+', reZ')      (rezat'+'te, reZ'te)

The lemma and suffix information elicited and extracted as summarized above are used to construct regular expressions for the lexicon component of each paradigm. (The result of this process is a script for the XRCE finite state tool xfst; large-scale lexicons can be compiled more efficiently with the XRCE tool lexc, but we currently do not generate lexc scripts, though it is trivial to do so.) Example segmentations like those above are fed into the learning module to induce morphographemic rules.

Rules from Examples. The preprocessing stage yields a list of pairs of segmented lexical forms and surface forms. The segmented forms contain the roots/lemmas and affixes, with affix boundaries marked by the + symbol. This list is then processed with a transformation-based learning approach [Brill, 1995; Satta and Henderson, 1997], as illustrated in Figure 4. The basic idea is that we consider the list of segmented words as our input and find transformation rules (expressed as contextual rewrite rules) to incrementally transform it into the list of surface forms. The transformation we choose at every iteration is the one that brings the list of segmented forms closest to the list of surface forms.

The first step in the learning process is an initial alignment of the pairs using a standard dynamic programming scheme. The only constraints in the alignment are that a + in the segmented lexical form is always aligned with an empty string on the surface side (notated by a 0), and that a consonant (vowel) on one side is aligned with a consonant (vowel) or 0 on the other side. The alignment is also constrained to correspond to the minimum edit distance between the original lexical and surface forms (we choose one alignment if there are multiple legitimate alignments). From this point on, we will use a simple example from English to clarify our points: we assume that we have the pairs (un+happy+est, unhappiest) and (shop+ed, shopped) in our example base. We align these and determine the total number of "errors" in the segmented forms that we have to fix to make all of them match the corresponding surface forms.
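To make the constrained alignment step concrete, here is a small illustrative implementation of a minimum-edit-distance alignment under the constraints just described; the treatment of y as a vowel and the tie-breaking behaviour are our own simplifications, not the Boas code.

VOWELS = set("aeiouy")   # 'y' treated as a vowel so the English example aligns as in the text

def compatible(a, b):
    """Diagonal alignment constraint: '+' may only align with the empty symbol
    (handled by the gap moves below); otherwise vowels align with vowels and
    consonants with consonants."""
    if a == "+":
        return False
    return (a in VOWELS) == (b in VOWELS)

def align(lex, surf):
    """Toy dynamic-programming alignment of a segmented lexical form with a
    surface form, inserting the empty symbol '0' where needed and minimising
    the number of mismatching aligned pairs."""
    n, m = len(lex), len(surf)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m and compatible(lex[i], surf[j]):
                c = cost[i][j] + (lex[i] != surf[j])
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1] = c
                    back[i + 1][j + 1] = (i, j, lex[i], surf[j])
            if i < n:  # lexical symbol (e.g. '+') aligned with surface '0'
                c = cost[i][j] + 1
                if c < cost[i + 1][j]:
                    cost[i + 1][j] = c
                    back[i + 1][j] = (i, j, lex[i], "0")
            if j < m:  # surface symbol aligned with lexical '0'
                c = cost[i][j] + 1
                if c < cost[i][j + 1]:
                    cost[i][j + 1] = c
                    back[i][j + 1] = (i, j, "0", surf[j])
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj, a, b = back[i][j]
        pairs.append((a, b))
        i, j = pi, pj
    return list(reversed(pairs))

print(align("shop+ed", "shopped"))
print(align("un+happy+est", "unhappiest"))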
The initial alignment produces the aligned pairs

un+happy+est    shop0+ed
un0happi0est    shopp0ed

with a total of 5 errors. From each segmented pair we generate rewrite rules of the form

u -> l || LeftContext _ RightContext ;

where u(pper) is a symbol in the segmented form and l(ower) is a symbol in the surface form. (We use the XRCE finite state tools regular expression syntax [Karttunen et al., 1996]; for the sake of readability, we ignore the escape symbol (%) that should precede any special characters, e.g. +, used in these rules.) Rules are generated only from those aligned symbol pairs which are different. LeftContext and RightContext are simple regular expressions describing contexts on the segmented side (up to some small length), taking into account also the word boundaries. For instance, from the first aligned-pair example, this procedure would generate rules such as

y -> i || p _
y -> i || p _ +

depending on the amount of left and right context allowed. The # symbol denotes a word boundary, to capture any word-initial and word-final phenomena. The segmentation rules (+ -> 0) require at least some minimal left or right context (usually longer than the minimal context for other rules, for more accurate segmentation decisions). We also disallow contexts that consist only of a morpheme boundary, as such contexts are usually not informative. It should also be noted that these are rules that transform a segmented form into a surface form (contrary to what may be expected for analysis). This lets us capture situations where multiple segmented forms map to the same surface form, which would be the case when the language has morphological ambiguity; thus, in a reverse look-up, a given surface form may be interpreted in multiple ways if applicable. (The learning procedure may, however, fail to fix all errors if among the examples there are cases where the same segmented form maps to two different surface forms, i.e., generation ambiguity.)

Since we have many examples of aligned pairs, it is likely that a given rule will be generated from many pairs. For instance, if the pairs (stop+ed, stopped) and (trip+ed, tripped) were also in the list, the gemination rule

0 -> p || p _ + e d

(along with certain others) would also be generated from these examples. We count how many times a rule is generated and associate this number with the rule as its promise, meaning that it promises to fix this many "errors" if it is selected to apply to the current list of segmented forms.

Generalizing Rules. If information regarding phoneme/grapheme classes beyond the consonant and vowel classes, such as SIBILANTS = {s, x, z}, LABIALS = {b, m, ...}, HIGHVOWELS = {u, i, ...}, etc., is available, it is possible to generate more general rules. Such rules can cover more cases, and the number of rules induced will typically be smaller and cover more unseen cases. For instance, in addition to a rule like 0 -> p || p _ + e, rules such as

0 -> p || CONSONANTS _ + e
0 -> p || p _ + VOWELS
0 -> p || CONSONANTS _ + VOWELS

can be generated, where symbols such as CONSONANTS or LABIALS stand for regular expressions denoting the union of the relevant symbols in the alphabet. The promise scores of the generalized rules are found by adding the promise scores of the original rules generating them.
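The rule-generation and promise-counting step described above can be illustrated with the following toy code; the context window, the inclusion of inserted 0 symbols in contexts, and the example pairs are simplifying assumptions rather than the actual implementation.

from collections import Counter

def candidate_rules(aligned_pairs_list, max_ctx=2):
    """Generate candidate contextual rewrite rules 'u -> l || LC _ RC' from
    aligned (lexical, surface) symbol pairs and count how many mismatches each
    rule would fix (its 'promise'). Contexts come from the segmented side,
    with '#' marking word boundaries."""
    promise = Counter()
    for pairs in aligned_pairs_list:
        lex_side = [a for a, _ in pairs]
        padded = ["#"] + lex_side + ["#"]
        for i, (u, l) in enumerate(pairs):
            if u == l:
                continue
            k = i + 1  # position of this symbol in the padded lexical side
            for lc in range(max_ctx + 1):
                for rc in range(max_ctx + 1):
                    if lc == 0 and rc == 0:
                        continue  # require at least some context
                    left = " ".join(padded[max(0, k - lc):k])
                    right = " ".join(padded[k + 1:k + 1 + rc])
                    promise[(u, l, left, right)] += 1
    return promise

# Aligned pairs for (stop+ed, stopped) and (trip+ed, tripped), as in the text.
stop = [('s','s'),('t','t'),('o','o'),('p','p'),('0','p'),('+','0'),('e','e'),('d','d')]
trip = [('t','t'),('r','r'),('i','i'),('p','p'),('0','p'),('+','0'),('e','e'),('d','d')]
for (u, l, left, right), count in candidate_rules([stop, trip]).most_common(5):
    print(f"{u} -> {l} || {left} _ {right}   (promise {count})")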
It should also be noted that generalization substantially increases the number of candidate rules to be considered during each iteration, but this is hardly a serious issue, as the number of examples per paradigm will be quite small. The rules learned in the process would be the most general set of rules that do not conflict with the evidence in the examples.

Selecting Rules. At each iteration, all the rules along with their promise scores are generated from the current state of the example pairs. The rules generated are then ranked based on their promise scores, with the top rule having the highest promise. Among rules with the same promise score, we rank more general rules higher, with generality based on context subsumption. However, all the segmentation rules go to the bottom of the list, though within this group rules are still ranked by decreasing promise and context generality. The reasoning for treating the segmentation rules separately and later in the process is that affixation boundaries constitute contexts for any morphographemic changes and should not be eliminated if there are any (more) morphographemic phenomena to process. Starting with the top-ranked rule, we test each rule on the segmented component of the pairs using the finite state engine, to see to what extent the segmented forms are "fixed". The first rule that fixes as many "errors" as it promises to fix gets selected and is added, in order, to the list of rules generated. The complete procedure for rule learning can now be summarized as follows: for each candidate rule, in ranked order,

- apply the rule and re-align the resulting segmented forms with the corresponding surface forms to see how many "errors" have been fixed;
- if the number fixed is equal to what the rule promised to fix, select this rule;
- commit the changes performed by the rule and save the alignments;
- reduce the error count by the promise score of the selected rule.

This procedure eventually generates an ordered sequence of two groups of rewrite rules. The first group of rules handles any morphographemic phenomena in the given set of examples, and the second group handles segmentation. All these rules are composed in the order generated to construct the Morphographemic Rules transducer at the bottom of each paradigm (see Figure 3).

Once the Morphographemic Rules transducers are compiled and composed with the lexicon transducer that is generated automatically from the elicited information, we obtain the analyzer as the union of the individual transducers for each paradigm. It is now possible to test this transducer against a test corpus and see whether there are any surface forms in the test corpus that are not recognized by the generated analyzer. Our intention is to identify and provide feedback about any minor problems that are due to a lack of examples covering certain morphographemic phenomena, or to an error in associating a given lemma with a paradigm.

Our approach here is as follows: we use the resulting morphological analyzer with an error-tolerant finite state recognizer engine [Oflazer, 1996]. For any (correct) word in the test corpus that is not recognized, we try to find words recognized by the analyzer that are (very) close to the rejected word by error-tolerant recognition, performing essentially a reverse spelling correction. If the rejection is due to a small number (1 or 2) of errors, the erroneous words recognized by the recognizer are aligned with the corresponding correct words from the test corpus.
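The selection loop can be outlined in code as follows; the helpers for candidate generation, rule application and error counting are assumed placeholders standing in for the finite state machinery described above, so this is a sketch rather than the actual implementation.

def select_rules(segmented, surface, generate_candidates, apply_rule, count_errors):
    """Greedy selection loop following the description above. generate_candidates
    returns (rule, promise) pairs for the current state; apply_rule rewrites one
    segmented form; count_errors compares segmented and surface lists."""
    selected = []
    error = count_errors(segmented, surface)
    while error > 0:
        # rank candidates by promise (segmentation rules would be pushed last)
        candidates = sorted(generate_candidates(segmented, surface),
                            key=lambda rule_promise: -rule_promise[1])
        chosen = None
        for rule, promise in candidates:
            trial = [apply_rule(rule, form) for form in segmented]
            fixed = error - count_errors(trial, surface)
            if fixed == promise:          # the rule fixes exactly what it promised
                chosen = (rule, promise, trial)
                break
        if chosen is None:                # no consistent rule left: stop
            break
        rule, promise, segmented = chosen
        selected.append(rule)
        error -= promise
    return selected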
These aligned pairs can then be analyzed to see what the problems may be.

Performance Issues. The process of generating a morphological analyzer once the descriptive data is given is very fast. Each paradigm can be processed within seconds on a fast workstation, including the few dozen iterations of rule learning from the examples. A new version of the analyzer can be generated within minutes and tested very rapidly on any test data. Thus, none of the processes described in this paper constitutes a bottleneck in the elicitation process.

We have presented the highlights of our approach for automatically generating finite state morphological analyzers from information elicited from human informants. Our approach uses transformation-based learning to induce morphographemic rules from examples and combines these rules with the elicited lexicon information to compile the morphological analyzer. There are other opportunities for using machine learning in this process. For instance, one of the important issues in wholesale acquisition of open-class items is determining which paradigm a given lemma or root word belongs to. From the examples given during the acquisition phase it is possible to induce a classifier that can perform this selection to aid the informant.

We believe that this approach to machine learning of a natural language processor, which involves a human informant in an elicit-generate-test loop and uses scaffolding provided by the human informant during machine learning, is a very viable approach that avoids the noise and opaqueness of other induction schemes. Our current work involves using similar principles to induce (light) syntactic parsers in the Boas framework.
Semantic analyses often go beyond tree-structured representations, assigning multiple semantic heads to nodes, with some semantic formalisms even tolerating directed cycles (for example, HPSG predicate-argument structures (Pollard and Sag, 1994)). At the same time, syntactic parsing is a mature field with efficient, highly optimised decoding and learning algorithms for tree-structured representations. We present tree approximation algorithms that, in combination with a state-of-the-art syntactic parser, achieve competitive performance in semantic digraph parsing.

We investigate two kinds of tree approximation algorithms that we refer to as pruning algorithms and packing algorithms. Our pruning algorithms simply remove and reverse edges until the graph is a tree; edge reversals are then undone as a postprocessing step. Our packing algorithms, on the other hand, carry out two bijective graph transformations to pack structural information into new edge labels, making it possible to reconstruct most of the structural complexity as a postprocessing step. Specifically, we present a packing algorithm that consists of two fully bijective graph transformations, in addition to a further transformation that incurs only a small information loss.

We carry out experiments across three semantic annotations of the Wall Street Journal section of the Penn Treebank (Marcus et al., 1993), corresponding to simplified versions of the semantic formalisms minimal recursion semantics (MRS) (Copestake et al., 2005), Enju-style predicate-argument structures (Miyao and Tsujii, 2003), and Prague-style tectogrammar semantics (Böhmová et al., 2003). We show that pruning and packing algorithms lead to state-of-the-art performance across these semantic formalisms using an off-the-shelf syntactic dependency parser.

Sagae and Tsujii (2008) present a pruning algorithm in their paper on transition-based parsing of directed acyclic graphs (DAGs), which discards the edges of longest span entering nodes. They apply the dependency parser described in Sagae and Tsujii (2007) to the tree representations. We note that this algorithm is not sufficient to produce trees in our case, where the input graphs are not necessarily acyclic. It corresponds roughly to our LONGEST-EDGE baseline, which removes the longest edge in cycles, in addition to flow reversal. Sagae and Tsujii (2008) also present a shift-reduce automaton approach to parsing DAGs. In their paper, they report a labeled F1-score of 88.7% on the PAS dataset (see Section 3), while we obtain 89.1%; however, the results are not directly comparable due to different data splits.

The organizers of the Broad-coverage Semantic Dependency Parsing shared task at SemEval-2014 also presented a pruning-based baseline system. They eliminate re-entrancies in the graph by removing dependencies to nodes with multiple incoming edges; of these edges, they keep the shortest. They incorporate all singleton nodes by attaching each to the immediately following node, or to a virtual root in case the singleton is sentence-final. Finally, they integrate fragments by subordinating remaining nodes with in-degree 0 to the root node. They apply the parser described in Bohnet (2010), also used below, to the resulting trees.
This system obtained a labeled F1-score of 54.7% on the PAS dataset. The performance of their pruning algorithm was also considerably lower than that of our algorithms on the other datasets considered below.
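As an illustration of the pruning idea (not the exact LONGEST-EDGE baseline or the packing transformations evaluated here), the following sketch keeps one incoming edge per node and breaks remaining cycles, using linear distance as a stand-in for span length.

def prune_to_tree(edges):
    """Simplified pruning: keep only the shortest incoming edge for every
    dependent, then break remaining cycles by removing the longest edge on
    each cycle. Edges are (head, dependent, label) triples."""
    kept = {}
    for h, d, lab in edges:
        if d not in kept or abs(h - d) < abs(kept[d][0] - d):
            kept[d] = (h, lab)

    def find_cycle():
        for start in list(kept):
            seen, node = [], start
            while node in kept and node not in seen:
                seen.append(node)
                node = kept[node][0]
            if node in seen:              # the head chain returned to a visited node
                return seen[seen.index(node):]
        return None

    cycle = find_cycle()
    while cycle:
        longest = max(cycle, key=lambda d: abs(kept[d][0] - d))
        del kept[longest]                 # that dependent becomes a root/fragment
        cycle = find_cycle()
    return [(h, d, lab) for d, (h, lab) in kept.items()]

edges = [(2, 1, "ARG1"), (2, 3, "ARG2"), (4, 3, "ARG1"), (3, 2, "MOD")]
print(prune_to_tree(edges))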
Among the full range of translation aid tools, the translation memory system (SMT, from the French "système de mémoire de traduction") is certainly the most popular tool among professional translators. As (Planas, 2000) explains, this success is due to two types of redundancy that translators frequently encounter in their work and that are handled naturally (at least in theory) by a translation memory. First, translators often have to translate a text that is close to another text already translated in the past (for example, a new version of a technical manual); the author calls this inter-document redundancy. Second, a single text may contain many repetitive passages, a phenomenon that (Planas, 2000) calls intra-document redundancy.

One of the rare studies presenting a mechanism for exploiting intra-document redundancy is that of (Brown, 2005), in the context of an example-based translation system that can be seen as an extension of an SMT (Planas and Furuse, 2000). Brown shows how, by taking previous translations into account, it is possible to favour certain segments suggested by the SMT at the expense of other, less relevant fragments. The translation setting in which the user finds himself at the moment of querying the memory is the context. This contextual information can thus reduce the amount of material proposed to the user, which addresses a potential weakness of SMTs: without a filter, they can flood the translator with fragments, rendering them unusable. Wishing to exploit this contextual approach, RALI and Lingua Technologies Inc. are currently collaborating to develop a context-sensitive third-generation translation memory (MT3G), that is, a system capable of recycling translations at the sub-sentential level while taking into account the translation setting in which the user is working. In this study, we present a first attempt at integrating this contextual information, which amounts to detecting the translation topic in which the translator is working, in order to improve the performance and ergonomics of our system. We regard this approach as a first step towards taking context into account in the sense of (Brown, 2005).

In Section 2 we present the architecture of our SMT and the framework in which we are developing it. We then turn (Section 3) to the non-trivial problem of evaluating such a system, a problem addressed by several authors, notably (Simard and Langlais, 2001) and (Planas, 2000). In Section 4 we present the mechanism we have put in place to take the translation context into account when querying the memory. We show that this approach, although perfectible, already yields improvements likely to increase the productivity of a professional translator. We conclude in Section 5 and list the extensions of our approach on which we are currently working.
The University of Edinburgh participated in the WMT19 Shared Task on News Translation in six language directions: English-Gujarati (EN↔GU), English-Chinese (EN↔ZH), German-English (DE→EN) and English-Czech (EN→CS). All our systems are neural machine translation (NMT) systems trained in constrained data conditions with the Marian toolkit (https://marian-nmt.io). The different language pairs pose very different challenges, due to the characteristics of the languages involved and, arguably more importantly, due to the amount of training data available.

Pre-processing. For EN↔ZH, we investigate character-level pre-processing for Chinese compared with subword segmentation. For EN→CS, we show that it is possible in high-resource settings to simplify pre-processing by removing steps.

Exploiting non-parallel resources. For all language directions, we create additional, synthetic parallel training data. For the high-resource language pairs, we look at ways of effectively using large quantities of back-translated data. For example, for DE→EN we investigated the most effective way of combining genuine parallel data with larger quantities of synthetic parallel data, and for CS→EN we filter back-translated data by re-scoring translations using the MT model for the opposite direction. The challenge for our low-resource pair, EN↔GU, is producing sufficiently good models for back-translation, which we achieve by training semi-supervised MT models with cross-lingual language model pre-training (Lample and Conneau, 2019). We use the same technique to translate additional data from a related language, Hindi.

In all experiments, we test state-of-the-art training techniques, including using ultra-large mini-batches for DE→EN and EN↔ZH, implemented as optimiser delay.

Results summary. Automatic evaluation results for all final systems on the WMT19 test set are summarised in Table 1. Throughout the paper, BLEU is calculated using SACREBLEU (Post, 2018) unless otherwise indicated. A selection of our final models is available to download.
People can discriminate between typical (e.g., A cop arrested a thief) and atypical events (e.g., A thief arrested a cop) and exploit this ability in online sentence processing to anticipate the upcoming linguistic input. Brains have been claimed to be "prediction machines" (Clark, 2013), and psycholinguistic research has shown that a crucial ingredient of such predictive ability is the knowledge about events and their typical participants stored in human semantic memory, also referred to as Generalized Event Knowledge (GEK) by McRae and Matsuki (2009). To give an example, if we were asked to think about things that are played with a guitar, we would quickly and more or less unanimously think of words such as song, piece or riff.

Computational models of predicate-argument typicality, generally referred to as thematic fit in the psycholinguistic literature (McRae et al., 1998), extract typical arguments from parsed corpora. However, GEK is not just a store of relations between words: the fact that this knowledge is generalized, that is, based on an abstract representation of what is typical, allows us to easily classify new argument combinations as typical or atypical. Furthermore, psycholinguistic studies (Bicknell et al., 2010; Matsuki et al., 2011) have shown that humans are able to combine and dynamically update their expectations during sentence processing: for example, their expectations given the sequence "The barber cut the ..." differ from those given "The lumberjack cut the ...", since the integration of the knowledge "cued" by the agent argument with the verb leads to the activation of different event scenarios. In distributional semantics, sophisticated models of the GEK have been proposed to make predictions about upcoming arguments by integrating the cues coming from the verb and the previously realized arguments in the sentence (Lenci, 2011; Chersoni et al., 2019). Since such knowledge is acquired from both first-hand and linguistic experience (McRae and Matsuki, 2009), an important assumption of this literature is that, at least for its "linguistic subset", the GEK can be modeled with distributional information extracted from corpora (Chersoni et al., 2021).

Language models are trained to make predictions given a context, and thus they can also be viewed as models of GEK. This approach is promising if one considers the success of recent Transformer-based language models (henceforth TLMs), which are trained on huge corpora and contain a massive number of parameters. Even if these models receive extensive training and have been shown to capture linguistic properties (Jawahar et al., 2019; Goldberg, 2019), it is not obvious whether they acquire the aspects of GEK that have been modeled explicitly in previous approaches. To the best of our knowledge, Transformers have never been tested on dynamic thematic fit modeling, nor has their performance been compared with traditional distributional models. Our current work addresses this issue:

1. we propose a methodology to adapt TLMs to the dynamic estimation of thematic fit, using a dataset that contains several types of argument combinations differing in their typicality;

2. we present a comprehensive evaluation of various TLMs on this task, performed by comparing them to a strong distributional baseline;

3. we conduct further analyses aimed at identifying the potential limitations of TLMs as models of GEK.

Our results are relevant for researchers interested in assessing the linguistic abilities of TLMs, as well as for those working on applications involving TLMs, such as text generation.
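One simple way to probe a TLM for such dynamic expectations, shown below purely as an illustration (the model and candidate fillers are arbitrary choices, not the setup evaluated in this work), is to score candidate patient fillers with a fill-mask query:

from transformers import pipeline

# The model name and candidate fillers are illustrative assumptions.
fill = pipeline("fill-mask", model="bert-base-uncased")

for agent in ["barber", "lumberjack"]:
    sentence = f"The {agent} cut the [MASK]."
    # restrict scoring to a handful of candidate patient fillers
    for prediction in fill(sentence, targets=["hair", "tree", "paper"]):
        print(agent, prediction["token_str"], round(prediction["score"], 4))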
Twitter has become one of the most widely used social media platforms, with users (as of March 2013) posting approximately 400 million tweets per day (Wickre, 2013). This public data serves as a potential source for a multitude of information needs, but the sheer volume of tweets is a bottleneck in identifying relevant content (Becker et al., 2011). De Choudhury et al. (2012) showed that the user type of a Twitter account is an important indicator when sifting through Twitter data. Knowledge of a tweet's origin has potential implications for the nature of the content to an end user (e.g., credibility, location, etc.). Also, certain types of events are more likely to be reported by individual persons (e.g., local events), whereas organizations generally report events that are of interest to a wider audience.

The first part of our research focuses on user type classification in Twitter. De Choudhury et al. (2012) addressed this problem by examining meta-information derived from the Twitter API. In contrast, the goal of our work is to classify tweets based solely on their textual content. We highlight several reasons why this can be advantageous. One reason is that people frequently share content from other sources, but the shared content often appears in their Twitter timeline as if it were their own. Consequently, a tweet that was posted by an individual may have originated from an organization. Moreover, meta-information can sometimes be infeasible to obtain given the API rate limits, and there are times when profile information for a user account is unavailable or ambiguous (e.g., users often leave their profile information blank or write vague entries). Therefore, we believe there is value in being able to infer the type of user who authored a tweet based solely on its textual content. Potentially, our methods for user type classification based on textual content can also be combined with methods that examine user profile data or other meta-data, since they are complementary sources of information.

In this paper, we present a classifier that tries to determine whether a tweet originated from an organization or a person, using a rich, linguistically motivated feature set. We design features to recognize linguistic characteristics, including sentiment and emotion expressions, informal language use, tweet style, and similarity with news headlines. We evaluate our classifier on both English and Spanish Twitter data and find that it achieves an 89% F1-score for identifying tweets that originate from organizations in English and an 87% F1-score for Spanish.

The second contribution of this paper is to demonstrate that user type classification can improve event recognition in Twitter. We conduct a study of event recognition for civil unrest events and disease outbreak events. Based on statistics from manually annotated tweets, we found that organization tweets are much more likely to mention these events than person tweets. We then investigate several approaches to incorporating user type information into event recognition models. Our best results are produced by training separate event classifiers for tweets from different user types. We show that user type information consistently improves event recognition performance for both civil unrest and disease outbreak events, and for both English and Spanish tweets.
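As a purely illustrative baseline (not the feature set, data, or classifier used in this work), a text-only tweet classifier of this kind can be set up as follows:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Toy data standing in for annotated tweets (1 = organization, 0 = person).
tweets = ["Breaking: city council approves new transit budget",
          "omg I can't believe it's monday again lol",
          "Our report on the disease outbreak is now available online",
          "so tired... coffee isn't even helping today"]
labels = [1, 0, 1, 0]

# Character n-grams are a crude stand-in for the richer linguistic features
# (sentiment, informality, headline similarity) described in the text.
clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LogisticRegression(max_iter=1000))
clf.fit(tweets, labels)
print("F1 on the toy training data:", f1_score(labels, clf.predict(tweets)))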
Sentiment analysis of English texts has become a large and active research area, with many commercial applications, but the barrier of language limits the ability to assess the sentiment of most of the world's population.

Although several well-regarded sentiment lexicons are available for English (Esuli and Sebastiani, 2006; Liu, 2010), the same is not true for most of the world's languages. Indeed, our literature search identified only 12 publicly available sentiment lexicons for only 5 non-English languages (Mandarin Chinese, German, Arabic, Japanese and Italian). No doubt we missed some, but it is clear that such resources are not widely available for most important languages.

In this paper, we strive to produce a comprehensive set of sentiment lexicons for the world's major languages. We make the following contributions:

• New Sentiment Analysis Resources - We have generated sentiment lexicons for 136 major languages via graph propagation, which are now publicly available. We validate our work against other publicly available, human-annotated sentiment lexicons. Indeed, our lexicons have a polarity agreement of 95.7% with these published lexicons, plus an overall coverage of 45.2%.

• Large-Scale Language Knowledge Graph Analysis - We have created a massive, comprehensive knowledge graph of 7 million vocabulary words from 136 languages with over 131 million semantic inter-language links, which proves valuable when aligning definitions across languages.

• Extrinsic Evaluation - We elucidate the sentiment consistency of entities reported in different language editions of Wikipedia using our propagated lexicons. In particular, we pick 30 languages and compute sentiment scores for 2,000 distinct historical figures. Each language pair exhibits a Spearman sentiment correlation of at least 0.14, with an average correlation of 0.28 over all pairs.

The rest of this paper is organized as follows. We review related work in Section 2. In Section 3, we describe our resource processing and design decisions. Section 4 discusses graph propagation methods to identify sentiment polarity across languages. Section 5 evaluates our results against each available human-annotated lexicon. Finally, in Section 6 we present our extrinsic evaluation of sentiment consistency in Wikipedia, prior to our conclusions.
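The general idea of propagating polarity over a multilingual word graph can be sketched as follows; the seed words, edge set and update rule are illustrative assumptions, not the method used to build the released lexicons.

from collections import defaultdict

def propagate(edges, seeds, iterations=10, decay=0.8):
    """Minimal label-propagation sketch: nodes are words, edges are
    semantic/translation links, and seed polarities (+1/-1) spread to their
    neighbours, attenuated by a decay factor."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    scores = dict(seeds)                   # word -> polarity in [-1, 1]
    for _ in range(iterations):
        updates = {}
        for node, neighbours in graph.items():
            if node in seeds:              # keep seed polarities fixed
                continue
            vals = [scores[n] for n in neighbours if n in scores]
            if vals:
                updates[node] = decay * sum(vals) / len(vals)
        scores.update(updates)
    return scores

edges = [("good", "bueno"), ("bueno", "agradable"), ("bad", "malo"), ("malo", "feo")]
seeds = {"good": 1.0, "bad": -1.0}
print(propagate(edges, seeds))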
Machine learning models using deep neural architectures have seen tremendous performance improvements over the last few years. The advent of models such as LSTMs (Hochreiter and Schmidhuber, 1997) and, more recently, attention-based models such as Transformers (Vaswani et al., 2017) has allowed some language technologies to reach near-human levels of performance. However, this performance has come at the cost of the interpretability of these models: high levels of non-linearity make it a near impossible task for a human to comprehend how these models operate.

Understanding how non-interpretable black-box models make their predictions has become an active area of research in recent years (Jumelet and Hupkes, 2018; Samek et al., 2019; Linzen et al., 2019; Tenney et al., 2019; Ettinger, 2020, i.a.). One popular interpretability approach makes use of feature attribution methods, which explain a model prediction in terms of the contributions of the input features. For instance, a feature attribution method for a sentiment analysis task can tell the modeller how much each of the input words contributed to the decision on a particular sentence.

Multiple approaches exist for assigning contributions to the input features. Some are based on local model approximations (Ribeiro et al., 2016), others on gradient-based information (Simonyan et al., 2014; Sundararajan et al., 2017), and yet others consider perturbation-based methods (Lundberg and Lee, 2017) that leverage concepts from game theory such as Shapley values (Shapley, 1953). Of these approaches, the Shapley-based attribution methods are computationally the most expensive, but they are better at explaining more complex model dynamics involving feature interactions. This makes them well suited for explaining the behaviour of current NLP models on a more linguistic level.

In this work, we therefore focus our efforts on that last category of attribution methods, in particular on a method known as Contextual Decomposition (CD, Murdoch et al., 2018), which provides a polynomial approach towards approximating Shapley values. This method has been shown to work well on recurrent models without attention (Jumelet et al., 2019; Saphra and Lopez, 2020), but has not yet been used to provide insights into the linguistic capacities of attention-based models. Here, to investigate the extent to which this method is also applicable to attention-based models, we extend it with the operations required to deal with attention, and we compare two different recurrent models: a multi-layered LSTM model (similar to Jumelet et al., 2019) and a Single Headed Attention RNN (SHA-RNN, Merity, 2019). We focus on the task of language modelling and aim to discover simultaneously whether attribution methods like CD are applicable when attention is used, as well as how the attention mechanism influences the resulting feature attributions, focusing in particular on whether these attributions are in line with human intuitions. Following, i.a., Jumelet et al. (2019), Lakretz et al. (2019) and Giulianelli et al. (2018), we focus on how the models process long-distance subject-verb relationships across a number of different syntactic constructions. To broaden our scope, we include two different languages: English and Dutch.

Through our experiments we find that, while both English and Dutch language models produce similar results, our attention and non-attention models behave differently.
These differences manifest themselves in incorrect attributions for the subjects in sentences with a plural subject-verb pair: when a singular verb is used, a higher attribution is given to a plural subject than to a singular one.

Our main contributions to the field thus lie in two dimensions: on the one hand, we compare attention and non-attention models with regard to their explainability; on the other hand, we perform our analysis in two languages, namely Dutch and English, to see whether the patterns hold across languages.
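For reference, the quantity that such perturbation-based methods approximate can be computed exactly for a toy model with a handful of features; the value function below is an invented example used only to illustrate a feature interaction, not any model analysed in this work.

from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values for a small feature set, by enumerating coalitions.
    value_fn maps a frozenset of features to a model output."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy "model": output is 1.0 only if both the subject and the verb are present,
# plus a small bonus for an adverb, mimicking a feature interaction.
def value_fn(s):
    return 1.0 * ("subject" in s and "verb" in s) + 0.2 * ("adverb" in s)

print(shapley_values(["subject", "verb", "adverb"], value_fn))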
In recent times, we have seen how the Internet has revolutionized the field of education through Massive Open Online Courses (MOOCs). Universities are incorporating MOOCs as a part of their regular coursework. Since most of these courses are in English, students are expected to know the language before they are admitted to the university. In order to provide proof of English proficiency, students take exams such as TOEFL (Test Of English as a Foreign Language), IELTS (International English Language Testing System), etc. In addition, students are required to take the GRE (Graduate Record Examination) at some universities. All these tests require students to expand their vocabulary.

Students use several materials and applications in order to prepare for these tests. Among the several techniques known to be effective for acquiring vocabulary, flashcard applications are the most popular. We believe the benefits of flashcard applications can be further amplified by incorporating techniques from cognitive science. One such technique, supported by experimental results, is the Testing Effect, also referred to as Test-Enhanced Learning. This phenomenon suggests that taking a memory test not only assesses what one knows, but also enhances later retention (Roediger and Karpicke, 2006).

In this paper, we start by briefly discussing the Testing Effect and other key works that influenced the development of the automatic short answer grading algorithm implemented in V for Vocab for acquiring vocabulary. Next, we give an overview of the application along with the methodology we use to collect data. In the subsequent section, we describe our automatic short answer grading algorithm and present evaluation results for variants of this algorithm on popular word similarity datasets such as RG65, WS353, SimLex-999 and SimVerb-3500. To conclude, we present a discussion that provides fodder for future work on this application.
Manually annotated corpora and treebanks are the primary tools that we have for developing and evaluating models and theories for natural language processing. Given their importance for testing our hypotheses, it is imperative that they be of the best quality possible. However, manual annotation is tedious and error-prone, especially if many annotators are involved. It is therefore desirable to have automatic means for detecting errors and inconsistencies in the annotation.

Automatic methods for error detection in treebanks have been developed in the DECCA project (http://www.decca.osu.edu) for several different annotation types, for example part-of-speech (Dickinson and Meurers, 2003a), constituency syntax (Dickinson and Meurers, 2003b), and dependency syntax (Boyd et al., 2008). These algorithms work on the assumption that two data points that appear in identical contexts should be labeled in the same way. While the data points in question, or nuclei, can be single tokens, spans of tokens, or edges between two tokens, context is usually modeled as n-grams over the surrounding tokens. A nucleus that occurs multiple times in identical contexts but is labeled differently shows variation and is considered a potential error.

Natural language is ambiguous, and variation found by an algorithm may be a genuine ambiguity rather than an annotation error. Although we can support an annotator in finding inconsistencies in a treebank, these inconsistencies still need to be judged by humans. In this paper, we present a tool that allows a user to run automatic error detection on a corpus annotated with part-of-speech or dependency syntax. The tool provides the user with a graphical interface to browse the variation nuclei found by the algorithm and inspect their label distributions. The user can always switch between high-level aggregate views and the actual sentences containing the potential error in order to decide whether a particular annotation is incorrect or not. The interface thus brings together the output of the error detection algorithm with direct access to the corpus data. This speeds up the process of tracking down inconsistencies and errors in the annotation considerably compared to working with the raw output of the original DECCA tools. Several options allow the user to fine-tune the behavior of the algorithm. The tool is part of ICARUS (Gärtner et al., 2013), a general search and exploration tool.
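The variation-nucleus idea for single tokens can be sketched as follows; this is a toy illustration with a one-token context window, not the DECCA implementation.

from collections import defaultdict

def variation_nuclei(tagged_sentences, context=1):
    """Collect single-token nuclei together with their surrounding token
    context; a nucleus whose identical context occurs with more than one label
    is flagged as a potential error."""
    index = defaultdict(set)
    for sent in tagged_sentences:                      # sent = [(token, tag), ...]
        tokens = [t for t, _ in sent]
        padded = ["<s>"] * context + tokens + ["</s>"] * context
        for i, (token, tag) in enumerate(sent):
            left = tuple(padded[i:i + context])
            right = tuple(padded[i + context + 1:i + 1 + 2 * context])
            index[(left, token, right)].add(tag)
    return {key: tags for key, tags in index.items() if len(tags) > 1}

corpus = [[("the", "DT"), ("back", "NN"), ("door", "NN")],
          [("the", "DT"), ("back", "JJ"), ("door", "NN")]]
print(variation_nuclei(corpus))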
0
The increasing amount of documents in electronic form makes imperative the need for document content classification and semantic labelling. Keyphrase extraction contributes to this goal by the identification of important and discriminative concepts expressed as keyphrases. Keyphrases, as reduced document content representations, may find applications in document retrieval, classification and summarisation (D'Avanzo and Magnini, 2005). The literature distinguishes between two principal processes: keyphrase extraction and keyphrase assignment. In the case of keyphrase assignment, suitable keyphrases from an existing knowledge resource, such as a controlled vocabulary or a thesaurus, are assigned to documents based on classification of their content. In keyphrase extraction, the phrases are mined from the document itself. Supervised approaches to the problem of keyphrase extraction include the Naive Bayes-based KEA algorithms (Gordon et al., 1999; Medelyan and Witten, 2006), the decision tree-based and genetic algorithm-based GenEx (Turney, 1999), and the probabilistic KL divergence-based language model (Tomokiyo and Hurst, 2003). Research in keyphrase extraction proposes the detection of keyphrases based on various statistics-based or pattern-based features. Statistical measures investigated focus primarily on keyphrase frequency measures, whereas pattern features include noun phrase pattern filtering, identification of the keyphrase head and respective frequencies (Barker and Cornacchia, 2000), document section position of the keyphrase (e.g., (Medelyan and Witten, 2006)) and keyphrase coherence (Turney, 2003). In this paper, we present an unsupervised approach which combines pattern-based morphosyntactic rules with a statistical measure, the C-value measure (Frantzi et al., 2000), which originates from research in the field of automatic term recognition and was initially designed for specialised domain terminology acquisition.
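As a reference point for the statistical component, here is a minimal sketch of the C-value measure of Frantzi et al. (2000), which rewards frequent, longer candidate phrases and discounts candidates that mostly occur nested inside longer candidates. The toy frequencies are made up, and variants of the measure treat single-word terms differently (under the plain log2 length factor used here they simply score zero).

```python
import math
from collections import defaultdict

def c_value(freq):
    """C-value (Frantzi et al., 2000) for multi-word candidate phrases.
    `freq` maps a tuple of tokens (a candidate phrase) to its corpus frequency."""
    cands = list(freq)
    nested_in = defaultdict(list)  # candidate -> longer candidates containing it
    for a in cands:
        for b in cands:
            if len(b) > len(a) and any(b[i:i + len(a)] == a for i in range(len(b) - len(a) + 1)):
                nested_in[a].append(b)
    scores = {}
    for a in cands:
        longer = nested_in[a]
        if not longer:   # a never appears nested inside a longer candidate
            scores[a] = math.log2(len(a)) * freq[a]
        else:            # discount by the average frequency of its containers
            scores[a] = math.log2(len(a)) * (freq[a] - sum(freq[b] for b in longer) / len(longer))
    return scores

freq = {("soft", "contact", "lens"): 5, ("contact", "lens"): 9, ("adaptive", "contact", "lens"): 2}
for phrase, score in sorted(c_value(freq).items(), key=lambda kv: -kv[1]):
    print(" ".join(phrase), round(score, 2))
```

In the approach described above, candidates would first be filtered by the morphosyntactic patterns before this statistical ranking is applied.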
0
Over the last years, corpus based approaches have gained significant importance in the field of natural language processing (NLP). Large corpora for many different languages are currently being collected all over the world. In order to use this amount of data for training and testing purposes of NLP systems, corpora have to be annotated in various ways by adding, for example, prosodic, syntactic, or dialogue act information. This annotation assumes an underlying coding scheme. The way such schemes are designed depends on the task, the domain, and the linguistic phenomena on which developers focus. The author's own style and scientific background also has its effects on the scheme. So far, standardisation in the field of dialogue acts is missing and reusability of annotated corpora in various projects is complicated. On the other hand, reusability is needed to reduce the costs of corpus production and annotation time. The participating sites of the EU sponsored project MATE (Multi level Annotation Tools Engineering) reviewed the world-wide approaches, available schemes [Klein et al. 1998], and tools on spoken dialogue annotation [Isard et al. 1998]. The project builds its own workbench of integrated tools to support annotation, evaluation, statistical analysis and mapping between different formats. MATE also aims to develop a preliminary form of standard concerning annotation schemes on various levels to support the reusability of corpora and schemes. In this paper we focus on the level of dialogue acts. We outline the results of the comparison of the reviewed coding schemes based on [Klein et al. 1998] and discuss best practice techniques for annotation of mass data on the level of dialogue acts. These techniques are considered as a first step towards a standard on the level of dialogue acts. Plenty of research has been done in the field of annotation schemes and many schemes for different purposes exist. Not all of these schemes can be annotated reliably and are suitable for reuse. In the following we state guidelines we have developed for selecting the most appropriate schemes and present the results of our scheme comparison according to these guidelines. Firstly, it is important for us that there is a coding book provided for a scheme. Without the definition of a tag set, decision trees, and annotation examples, a scheme is hard to apply. Also, the scheme has to show that it is easy to handle, which means it should have been successfully used by a reasonable number of annotators with different levels of expertise. For reusability reasons, language, task, and domain independence is required. Additionally, it is crucial that the scheme has been applied to large corpora. The annotation of mass data is the best indicator for the usability of a scheme. Finally, it was judged positively if schemes directly proved their reliability by providing a numerical evaluation of inter-coder agreement, e.g. the κ-value [Carletta 1996]. Information about schemes was collected from the world wide web, from recent proceedings and through personal contact. We compared 16 different schemes, developed in the UK, Sweden, the US, Japan, the Netherlands, and Germany. Most of these schemes were applied to English language data. Only three of the reviewed schemes were annotated in corpora of more than one language, and thus indicate some language independence. A drawback in reusing schemes for different purposes is tailoring them to a certain domain or task.
Nevertheless, most of the ongoing projects in corpus annotation look at two-agent, task-oriented dialogues, in which the participants collaborate to solve some problem. These facts are also reflected in the observed schemes, which were all designed for a certain task and/or used in a specific domain. With regard to the evaluation guidelines stated above we can positively mention that all schemes provide coding books. Also, all schemes were applied to corpora of reasonable size (10 K - 16 MB data). In 14 cases expert annotators were employed to apply the schemes, which leads to the assumption that these schemes are rather difficult to use. The inter-coder agreement, given by 10 of the schemes, shows intermediate to good results. The comparison of tag sets was performed differently with regard to higher and lower order categories. The definition of higher order categories was mainly driven by the linguistic, e.g. [Sacks and Schegloff 1973], and/or philosophical theories, e.g. [Searle 1969], the schemes were based on, whereas definitions and descriptions of lower order categories were influenced by the underlying task the scheme was designed for, e.g. information retrieval, and the domain of the corpus the scheme was applied to, e.g. conversation between children. The only higher order aspect that was implicitly or explicitly covered in all schemes was forward and backward looking functionality. This means that a certain dialogue segment is related to a previous dialogue part, like a "RESPONSE", or to the following dialogue part, like a "CLAIM" that forces a reaction from the dialogue partner. On the level of lower order tags we could see tags
• with nearly equivalent definitions, e.g. the dialogue act "REQUEST" definition in D. Traum's scheme: "The speaker aims to get the hearer to perform some action." [Traum 1996] compared to the dialogue act "RA" definition in S. compared to the dialogue act "REQUEST" definition in the VERBMOBIL scheme: "If you realise that the speaker requests some action from the hearer [...] you use the dialogue act REQUEST" [Alexandersson et al. 1998];
• which broadly seem to cover the same feature with slightly different description facets, e.g. the dialogue act "OPEN-OPTION" definition in the DAMSL scheme: "It suggests a course of action but puts no obligation to the listener." [Allen and Core 1997] compared with the examples above;
• which differ completely from the rest, e.g. the dialogue act "UPDATE" definition in the LINLIN scheme: "where users provide information to the system" [Dahlbäck and Jönsson 1998] -- addressed to human-machine dialogues.
Especially the last group can be interpreted as highly task or domain dependent. There are several possibilities how standardisation on the level of dialogue acts can be achieved. One possibility is to develop a single, very general scheme. Our impression is that such a new scheme which has not proven usability is not going to be accepted by researchers who want to look at certain phenomena of interest. The CHAT scheme used in the CHILDES system [MacWhinney], for example, distinguishes 67 different dialogue acts -- it is very unlikely that a general scheme would fit all of their requirements concerning children's conversation. Another possibility is to provide a set of coding schemes for several purposes. These already existing coding schemes must hold the condition that they have proven reliability in mass data annotation.
As there cannot exist a scheme for every purpose, this approach only serves developers of new schemes who want to get an idea how to proceed. With regard to the problem of standardisation this solution is very unsatisfactory, as mapping between schemes is often impossible if schemes do not have a common ground, like the SLSA scheme that models feedback and own communication management [Nivre et al. 1998], and the ALPARON scheme [van Vark and de Vreught 1996] with the primary objective to analyse the previously mentioned dialogues to model information transfer. The Discourse Resource Initiative (DRI) group provided input on a third possibility: developing best practice methods for scheme design, documentation and annotation. We can classify the existing schemes in two categories: multi-dimensional and single-dimensional schemes. Multi-dimensional schemes are based on the assumption that an utterance covers several different orthogonal aspects, called dimensions. Each dimension can be labeled. DAMSL [Allen and Core 1997], for instance, is a scheme that implements a four dimensional hierarchy. These dimensions are tailored to two-agent, task-oriented, problem-solving dialogues. Suggested dimensions are
• Communicative Status, which records whether an utterance is intelligible and whether it was successfully completed,
• Information Level, which represents the semantic content of an utterance on an abstract level,
• Forward Looking Function, which describes how an utterance constrains the future beliefs and actions of the participants, and affects the discourse, and
• Backward Looking Function, which characterizes how an utterance relates to the previous discourse.
Single-dimensional schemes consist of one single list of possible labels. Their labels belong basically to what is called Forward and Backward Looking Functions in DAMSL. Apart from DAMSL, all observed schemes belong to this category. Comparing both categories, the multi-dimensional approach is more linguistically motivated, presenting a clear modeling of theoretical distinctions; but annotation experiments have shown that it takes more effort to apply multi-dimensional schemes than single-dimensional ones. On the other hand, although a single-dimensional scheme is easier to annotate, it is hard to judge from outside what kind of phenomena such a scheme tries to model, as dimensions are merged -- a major disadvantage if reusability is considered. An example for a dialogue act that merges DAMSL's backward and forward looking function is the "CHECK" move in the Map Task scheme. "A CHECK move requests the partner to confirm information that the speaker has some reason to believe, but is not entirely sure about" [Carletta et al. 1996]. This reflects the forward looking aspect of such a dialogue act. "However, CHECK moves are almost always about some information which the speaker has been told" [Carletta et al. 1996] -- a description that models the backward looking functionality of a dialogue act. Our suggestion to tackle the problem of what kind of scheme is most appropriate is to use single- and multi-dimensional schemes in parallel. The developer of a new scheme is asked to think precisely about what kind of phenomena will be explored and what kind of tags are needed for this purpose. These tags have to be classified with regard to the dimension they belong to. The theoretical multi-dimensional scheme will then be applied to some test corpora. The example annotation shows which tags are less used than others and which tag combinations often occur together.
Based on this information the scheme designer can derive a flattened single-dimensional version of the multi-dimensional scheme. The flattened or merged scheme is used for mass data annotation. A mapping mechanism has to be provided to convert a corpus from its surface structure, annotated using the single-dimensional scheme, to the internal structure, annotated using the multi-dimensional scheme. The multi-dimensional scheme can easily be reused and extended by adding a new dimension. Furthermore, the corpus annotated with the multi-dimensional scheme is no longer task dependent. Each coding scheme should provide a coding book to be applicable. Such a document is needed to help other researchers to understand why a tag set was designed in the way it is. Therefore the introduction part of a coding book should state the purpose, i.e. task and domain, the scheme is designed for, the kind of information that has been labeled with regard to the scheme's purpose, and the theory the scheme is based on. For detailed information about a tag, a tag set definition is required. Following [Carletta 1998], such a definition should be mutually exclusive and unambiguous, so that the annotator finds it easy to classify a dialogue segment as a certain dialogue act. Also, the definition should be intention-based and hence easy to understand and to remember, so that the annotator does not have to consult the coding book permanently even after using the scheme for quite a while. We suggest that a coding book should contain a decision tree that aims to give an overview of all possible tags and how they are related to each other. Additionally, the decision tree has to be supplemented by rules that help to navigate through the tree. For each node in the tree there should be a question which states the condition that has to be fulfilled in order to go to a lower layer in the tree. If no further condition holds, the current node (or leaf) in the tree represents the most appropriate tag. As an example of a subtree plus decision rules see Figure 1, taken from [Alexandersson et al. 1998]:
if the segment is used to open a dialogue by greeting a dialogue partner, then label with GREET;
else if the segment is used to close a dialogue by saying good-bye to a dialogue partner, then label with BYE;
else if the segment contains the introduction of the speaker, i.e. name, title, associated company etc., label with INTRODUCE;
else if the segment is used to perform an action of politeness, like asking about the partner's good health or formulating compliments, label with POLITENESS_FORMULA;
else if the segment is used to express gratitude towards the dialogue partner, label with THANK;
else if the segment is used to gain dialogue time by thinking aloud or using certain formulas, label with DELIBERATE;
else if the segment is used to signal understanding (i.e. acknowledging intact communication), label with BACKCHANNEL.
Examples should complement the scheme's description. These examples should present ordinary but also problematic and more difficult cases of annotation. The difficulties should be briefly explained. Experience has shown that for new coders tag set definitions are most important to get an understanding of schemes. Annotation examples serve as a starting point to get a feeling for annotation, but to manage the annotation task, decision trees are used until coders are experienced enough to perform annotation without using a coding book.
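Read operationally, the rules in Figure 1 form an ordered cascade of tests. A minimal sketch of such a rule cascade is given below; the boolean feature names are placeholders invented for the example, not part of the VERBMOBIL coding book.

```python
def label_dialogue_segment(segment_features):
    """Assign a dialogue act tag to a segment by walking through ordered
    decision rules, in the spirit of the subtree in Figure 1. The first rule
    whose condition holds wins; otherwise the segment stays unlabeled here."""
    rules = [
        ("opens_with_greeting", "GREET"),
        ("closes_with_goodbye", "BYE"),
        ("introduces_speaker", "INTRODUCE"),
        ("expresses_politeness", "POLITENESS_FORMULA"),
        ("expresses_gratitude", "THANK"),
        ("fills_time_thinking_aloud", "DELIBERATE"),
        ("signals_understanding", "BACKCHANNEL"),
    ]
    for feature, tag in rules:
        if segment_features.get(feature, False):
            return tag
    return "UNLABELED"  # none of the conditions held; consult the rest of the tree

print(label_dialogue_segment({"expresses_gratitude": True}))  # THANK
```

In a real coding book the conditions are questions answered by the human annotator rather than precomputed features, but the control flow is the same.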
This shows how important these three components of a coding book are in order to give new annotators or other scientists the best support to understand and apply a coding scheme. To interpret the evaluation results of inter-coder agreement in the right way, the coding procedure that was used for annotation should be mentioned. Such a coding procedure covers, for example, how segmentation of a corpus is performed, whether multiple tagging is allowed and, if so, whether it is unlimited or only certain combinations of tags are disallowed, whether look-ahead is permitted, etc. For further information on coding procedures we want to refer to [Dybkjær et al. 1998], and for good examples of coding books see, for example, [Carletta et al. 1996], [Alexandersson et al. 1998], or [Thymé-Gobbel and Levin 1998]. Another criterion which is important to increase the effectiveness of annotation is using a user-friendly annotation tool. Such a tool also guarantees consistency, as typing errors are avoided, and hence improves the quality of annotated corpora. This issue is addressed by the MATE workbench. Other, already existing tools are the ALEMBIC Workbench by [The Mitre Corporation], NB by [Flammia], or FRINGE used in the FESTIVAL system by [The Centre for Speech Technology Research]. The approach in MATE is to reuse the DAMSL scheme as an example for an internal multi-dimensional scheme and a variant of the SWBD-DAMSL scheme [Jurafsky et al. 1997] as its example flattened surface counterpart. SWBD-DAMSL was derived from the original DAMSL scheme using the techniques described above. Unfortunately some additional tags were added, so that an exact mapping from one scheme to the other is not possible any more. For this reason the MATE SWBD-DAMSL variant omits these additional tags. MATE uses XML, a widely accepted interchange and storage format for structured textual data, to represent the schemes and the annotated corpora. Stylesheets (a subset of XSL [The W3Cb]) are used as a mapping mechanism between corpora annotated with the surface scheme and corpora annotated with the internal scheme. The choice in MATE to use the W3C (World Wide Web Consortium) proposals is because XML is the latest, most flexible data exchange format currently available and strongly supported by industry. XSL supplements XML insofar as it realises the formatting of an XML document. The facilities for dialogue act annotation are embedded in the MATE workbench. The workbench is currently being implemented in Java 1.2 as a platform-independent approach. This makes the distribution process of the workbench easier and supports the wide spreading of MATE's ideas of best practice in annotation. Projects which are related to MATE's aim to develop a preliminary form of standard concerning annotation schemes are the DRI, which was started as an effort to assemble discourse resources to support discourse research and application. The goal of this initiative is to develop a standard for semantic/pragmatic and discourse features of annotated corpora [Carletta et al. 1997]. Another project, LE-EAGLES, also has the goal to provide preliminary guidelines for the representation or annotation of dialogue resources for language engineering [Leech et al. 1998]. These guidelines cover the areas of orthographic transcription, morpho-syntactic, syntactic, prosodic, and pragmatic annotation.
LE-EAGLES describes the most used schemes, markup languages, and systems for annotation rather than proposing standards. Having reviewed a large number of currently available coding schemes for dialogue acts, we presented a methodology for tackling the standardisation problem. We outlined best practice for scheme and coding book design which hopefully will lead to a better understanding and reusability of schemes and corpora annotated using the proposed method. Our approach is currently being implemented in the MATE workbench and will be tested and enhanced in the remaining time of the project. It will be applied to the CSELT, Danish Dialogue System, CHILDES, MAPTASK and VERBMOBIL corpora to help make it as task, domain and language independent as possible. Inadequacies during the testing phase which are related to the internal scheme we use will be discussed with the members of the DRI to further improve the scheme or its flattened variant.

ACKNOWLEDGMENT
I would like to thank the members of the DRI for stimulating and fruitful discussions and suggestions, and Norbert Reithinger, Jan Alexandersson, and Michael Kipp who gave valuable feedback on this paper.
0
Great strides have been made in Speech-to-Speech (S2S) translation systems that facilitate cross-lingual spoken communication [1] [2] [3]. While these systems [3] [4] [5] already fulfill an important role, their widespread adoption requires broad domain coverage and unrestricted dialog capability. To achieve this, S2S systems need to be transformed from passive conduits of information to active participants in cross-lingual dialogs by detecting key causes of communication failures and recovering from them in a user-friendly manner. Such active participation by the system will not only maximize translation success, but also improve the user's perception of the system. The bulk of research exploring S2S systems has focused on maximizing the performance of the constituent automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS) components in order to improve the rate of success of cross-lingual information transfer. There have also been several attempts at joint optimization of ASR and MT, as well as MT and TTS [6] [7] [8]. Comparatively little effort has been invested in the exploration of approaches that attempt to detect errors made by these components, and the interactive resolution of these errors with the goal of improving translation / concept transfer accuracy.

Disclaimer: This paper is based upon work supported by the DARPA BOLT Program. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

Our previous work presented a novel methodology for assessing the severity of various types of errors in our English/Iraqi S2S system [9]. These error types can be broadly categorized into: (1) out-of-vocabulary concepts; (2) sense ambiguities due to homographs, and (3) ASR errors caused by mispronunciations, homophones, etc. Several approaches, including implicit confirmation of ASR output with barge-in and back-translation [10], have been explored for preventing such errors from causing communication failures or stalling the conversation. However, these approaches put the entire burden of error detection, localization, and recovery on the user. In fact, the user is required to infer the potential cause of the error and determine an alternate way to convey the same concept, which is clearly impractical for the broad population of users. To address the critical limitation of S2S systems described above, we present novel techniques for: (1) automatically detecting potential error types, (2) localizing the error span(s) in spoken input, and (3) interactively resolving errors by engaging in a clarification dialog with the user. Our system is capable of detecting a variety of error types that impact S2S systems, including out-of-vocabulary (OOV) named entities and terms, word sense ambiguities, homophones, mispronunciations, incomplete input, and idioms. Another contribution of this paper is the set of novel strategies for overcoming these errors. For example, we describe an innovative approach for cross-lingual transfer of OOV named entities (NE) by splicing corresponding audio segments from the input utterance into the translation output. For handling word sense ambiguities, we propose a novel constrained MT decoding technique that accounts for the user's intended sense based on the outcome of the clarification dialog. A key consideration for making the system an active participant is deciding how much the system should talk, i.e.
the number of clarification turns allowed to resolve potential errors. With that consideration, we present an effective strategy for prioritizing the different error types for resolution, and also describe a flexible architecture for storing, prioritizing, and resolving these error types. We created rules for pronoun expansion (e.g. "his" → "her", "their", etc.) and verb expansion (e.g. "give her a piece of my mind" → "gave her a piece of my mind"), being conservative to avoid explosion and creation of nonsense variants.
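To make the expansion step concrete, below is a minimal sketch of such a conservative rule-based variant generator; the particular substitution pairs and the one-substitution-at-a-time restriction are assumptions for illustration, not the rule set used in the system.

```python
# Illustrative expansion rules only; the actual rules in the paper are not reproduced here.
PRONOUN_RULES = {"his": ["her", "their"], "him": ["her", "them"]}
VERB_RULES = {"give": ["gave", "gives"], "go": ["went", "goes"]}

def expand_phrase(tokens, max_variants=10):
    """Generate conservative lexical variants of a phrase by swapping one
    pronoun or verb at a time, capping the total to avoid a combinatorial
    explosion and nonsense variants."""
    variants = []
    for i, tok in enumerate(tokens):
        for alt in PRONOUN_RULES.get(tok, []) + VERB_RULES.get(tok, []):
            variants.append(tokens[:i] + [alt] + tokens[i + 1:])
            if len(variants) >= max_variants:
                return variants
    return variants

for variant in expand_phrase("give her a piece of his mind".split()):
    print(" ".join(variant))
```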
0
Natural language understanding seeks to create models that read and comprehend text. A common strategy for assessing the language understanding capabilities of comprehension models is to demonstrate that they can answer questions about documents they read, akin to how reading comprehension is tested in children when they are learning to read. After reading a document, a reader usually can not reproduce the entire text from memory, but often can answer questions about underlying narrative elements of the document: the salient entities, events, places, and the relations between them. Thus, testing understanding requires the creation of questions that examine high-level abstractions instead of just facts occurring in one sentence at a time. Unfortunately, superficial questions about a document may often be answered successfully (by both humans and machines) using shallow pattern matching strategies or guessing based on global salience. In the following section, we survey existing QA datasets, showing that they are either too small or answerable by shallow heuristics (Section 2). On the other hand, questions which are not about the surface form of the text, but rather about the underlying narrative, require the formation of more abstract representations about the events and relations expressed in the course of the document. Answering such questions requires that readers integrate information which may be distributed across several statements throughout the document, and generate a cogent answer on the basis of this integrated information. That is, they test that the reader comprehends language, not just that it can pattern match. We present a new task and dataset, which we call NarrativeQA, which will test and reward artificial agents approaching this level of competence (Section 3), and make available online. 1 The dataset consists of stories, which are books and movie scripts, with human written questions and answers based solely on human-generated abstractive summaries. For the RC tasks, questions may be answered using just the summaries or the full story text. We give a short example of a sample movie script from this dataset in Figure 1.

Figure 1 (example question-answer pair). Title: Ghostbusters II. Question: How is Oscar related to Dana? Answer: her son. Summary snippet: ". . . Peter's former girlfriend Dana Barrett has had a son, Oscar. . ." Script excerpt: DANA (setting the wheel brakes on the buggy): Thank you, Frank. I'll get the hang of this eventually. She continues digging in her purse while Frank leans over the buggy and makes funny faces at the baby, OSCAR, a very cute nine-month old boy. Hiya, Oscar. What do you say, slugger? That's a good-looking kid you got there, Ms. Barrett. The snippets here were extracted by humans from summaries and the full text of movie scripts or books, respectively, and are not provided to the model as supervision or at test time. Instead, the model will need to read the full text and locate salient snippets based solely on the question and its reading of the document in order to generate the answer.

Fictional stories have a number of advantages as a domain (Schank and Abelson, 1977). First, they are largely self-contained: beyond the basic fundamental vocabulary of English, all of the information about salient entities and concepts required to understand the narrative is present in the document, with the expectation that a reasonably competent language user would be able to understand it.
Second, story summaries are abstractive and generally written by independent authors who know the work only as readers.
0
Many existing approaches to text generation rely on recurrent neural networks trained using likelihood on sequences of words or characters. However, such models often fail to capture overall structure and coherency in multi-sentence or long-form text (Holtzman et al., 2018). To rectify this, prior work has proposed losses which encourage overall coherency or other desired behavior (Li et al., 2016; Zhang and Lapata, 2017). However, most of these approaches rely on manually provided definitions of what constitutes a good or suitable structure, thereby limiting their applicability. In this paper we propose a method for English poetry generation that directly learns higher-level rhyming constraints as part of a generator without requiring strong manual intervention. Prior works on poetry generation (Oliveira, 2017; Ghazvininejad et al., 2018) have focused mostly on ad-hoc decoding procedures to generate reasonable poetry, often relying on pruning from a set of candidate outputs to encourage desired behavior such as the presence of explicitly-defined rhyming patterns. We propose an adversarial approach to poetry generation that, by adding structure and inductive bias into the discriminator, is able to learn rhyming constraints directly from data without prior knowledge. The role of the discriminator is to try to distinguish between generated and real poems during training. We propose to add inductive bias via the choice of discriminator architecture: we require the discriminator to reason about poems through pairwise comparisons between line ending words. These learned word comparisons form a similarity matrix for the poem within the discriminator's architecture. Finally, the discriminator evaluates the poem through a 2D convolutional classifier applied directly to this matrix. This final convolution is naturally biased to identify spatial patterns across word comparisons, which, in turn, biases learned word comparisons to pick up rhyming since rhymes are typically the most salient spatial patterns. Recent work by Lau et al. (2018) proposes a quatrain generation method that relies on specific domain knowledge about the dataset to train a classifier for learning the notion of rhyming: that a line ending word always rhymes with exactly one more ending word in the poem. This limits the applicability of their method to other forms of poetry with different rhyming patterns. They train the classifier along with a language model in a multi-task setup. Further, at generation time, they heavily rely on rejection sampling to produce quatrains which satisfy any valid rhyming pattern. In contrast, we find that generators trained using our structured adversary produce samples that satisfy rhyming constraints with much higher frequency. We propose a structured discriminator to learn a poetry generator in a generative adversarial setup. Similarities between pairs of end-of-line words are obtained by computing cosine similarity between their corresponding representations, produced by a learned character-level LSTM encoder. The discriminator operates on the resulting matrix S representing pair-wise similarities of end words. The proposed discriminator learns to identify rhyming word pairs as well as rhyming constraints present in the dataset without being provided phonetic information in advance. Our main contributions are as follows: We introduce a novel structured discriminator to learn a poetry generation model in a generative adversarial setup.
We show that the discriminator induces an accurate rhyming metric and the generator learns natural rhyming patterns without being provided with phonetic information. We successfully demonstrate the applicability of our proposed approach on two datasets with different structural rhyming constraints. Our poem generation model learned with the structured discriminator is more sample-efficient compared to prior work: many fewer generation attempts are required in order to obtain a valid sample which obeys the rhyming constraints of the corresponding poetry dataset.
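The structured discriminator described above can be sketched compactly. The following is a minimal, hedged PyTorch sketch of its shape (character-level LSTM encodings of line-ending words, a cosine similarity matrix S, and a 2D convolution over S); all sizes and the four-line poem assumption are illustrative choices, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredDiscriminator(nn.Module):
    """Scores a poem from pairwise cosine similarities of its line-ending words."""
    def __init__(self, vocab_size=128, char_emb=32, hidden=64, n_lines=4):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, char_emb)
        self.encoder = nn.LSTM(char_emb, hidden, batch_first=True)
        self.conv = nn.Conv2d(1, 8, kernel_size=2)
        self.classify = nn.Linear(8 * (n_lines - 1) ** 2, 1)

    def encode_word(self, char_ids):                 # (batch, chars)
        _, (h, _) = self.encoder(self.char_emb(char_ids))
        return h[-1]                                  # final hidden state per word

    def forward(self, end_word_chars):                # (batch, n_lines, chars)
        b, n, c = end_word_chars.shape
        reps = self.encode_word(end_word_chars.view(b * n, c)).view(b, n, -1)
        reps = F.normalize(reps, dim=-1)
        sim = torch.bmm(reps, reps.transpose(1, 2))           # (batch, n, n) cosine matrix S
        feats = F.relu(self.conv(sim.unsqueeze(1)))            # 2D conv over S
        return torch.sigmoid(self.classify(feats.flatten(1)))  # real/fake score

disc = StructuredDiscriminator()
fake_batch = torch.randint(0, 128, (2, 4, 10))  # 2 poems, 4 line-ending words, 10 chars each
print(disc(fake_batch).shape)                    # torch.Size([2, 1])
```

The convolution only ever sees the similarity matrix, which is what pushes the learned word comparisons toward rhyme-like patterns rather than arbitrary lexical features.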
0
With the translation industry growing exponentially, more hope is vested in the use of machine translation (MT) to increase translators' productivity (Rinsche and Portera-Zanotti, 2009). Though post-editing MT has proven to increase productivity and even quality for certain text types (Tatsumi, 2010), research on the usability of post-editing for more general texts is rather limited. The research presented in this paper is a pilot study conducted as part of the ROBOT-project 1, a project designed to gain insight into the differences between human translation and the post-editing of machine translation. The process and product of translation are the two main areas of interest of the project, and results of student translators and professional translators shall be compared. In this paper, the translation quality assessment approach developed for the project is presented and tested on translations by student translators. This fine-grained, two-step approach not only allows for the analysis and comparison of translation problems for different methods of translation (such as human translation and post-editing), but can also be used as an evaluation method for different text types and goals. As such, it is a useful tool, both for researchers and for people concerned with the evaluation of translation quality in general.
0
Emojis have become crucial components of written language. Emojis were initially designed to express emotions or feelings, e.g., for a smiley face, and they have grown to be a large family of over 2,000 icons over the years which can express not only emotions but a wide range of objects or actions, e.g., for a gift and for celebrations. Compared to words, emojis have the merit of preserving information more densely. For example, carries the same meaning as the phrase "laughing with tears in eyes". Additionally, the byte-level encoding of subtle linguistic expressions makes it easier to discriminate complicated feelings, e.g., the bond between and is clearly weaker than their phrasal explanations "laughing with tears in eyes" and "crying loudly" due to the similarity between "tear" and "crying". These characteristics of emojis aid in accurate summarization of text, thus benefiting natural language understanding (NLU) tasks. Felbo et al. (2017) define the emoji prediction task by finding the most appropriate emoji(s) summarizing a piece of text. They also show with experiments that language representations learned on the emoji prediction task can boost the performance of emotion recognition, sentiment analysis, and sarcasm detection tasks. Consequently, using emoji prediction as a bridge to solve other natural language processing (NLP) tasks appears to be effective and promising. However, the emoji prediction task is yet far from being well established. First and foremost, as a classification task, there is not a set of labels agreed upon by previous research. To the best of our knowledge, all the existing papers on emoji prediction use either a handcrafted emoji set (Felbo et al., 2017) or the most frequent emojis in their individual datasets (Barbieri et al., 2018c,b) . Handcrafted emoji sets are usually limited in size and topics (usually limited to emotional emojis), while frequency-based emoji sets are dataset-specific. The lack of a standard label set makes it difficult to evaluate and compare emoji prediction models, hampering the research on emoji prediction and its interactions with other NLP tasks. To solve this problem, we use an emoji list from the unicode office 1 as the label set for the emoji prediction task. This emoji list includes 1,467 emojis in total, ordered by the median frequency of their use from multiple resources. We believe using this emoji list is good for standardizing the task since it is open to all researchers and is not influenced by how we sample the data.The second problem with emoji prediction is that existing labeled datasets are either too small in scale or not publicly available. This often results from the policy of social media platforms on using their data and the constantly changing nature of posts on these platforms, e.g., post deletion and edits. To address the problem of data unavailability or expiration, we annotate the PAN-19 Celebrity Profiling corpus (Wiegmann et al., 2019) , a tweetbased corpus, which is large and available to all researchers. We provide three types of annotations in this paper. Existing emoji prediction datasets are almost all annotated on the passage-level under the multi-class classification setting, which means each record contains exactly one tweet and one emoji. While we also release this type of annotation, we additionally provide passage-level multi-label and aspect-level multi-class classification annotations. 
Annotations for the passage-level multi-label classification setting are similar to the multi-class setting, but with possibly multiple emojis in each record (i.e., a tweet could be associated with multiple emojis). We introduce aspect-level labels to the emoji prediction task to enable a finer-grained analysis of the functions of emojis in tweets. Each emoji in these annotations points to a span of its corresponding text instead of the entire tweet. Text fractions associated with different emojis in the same tweet may overlap with each other.Given the large size of our dataset, all three types of annotations are generated automatically using heuristics or with the help of a Transformer-based model. The assumption underlying the passagelevel annotations is that the text fully covers the meanings of emojis in a tweet. Thus we extract the emojis appearing in the text as passage-level labels, as (Felbo et al., 2017) do. Under the multiclass classification setting, a record is duplicated and assigned different emojis if it contains multiple emojis. The aspect-level annotations are created based on passage-level multi-class classification labels. Since the attention maps in a Transformerbased model reflect the interrelations of each word pair, we are able to evaluate the contribution of each word to a predicted emoji under the multi-class classification setting. We then combine the labels based on tweets to form the aspect-level multi-class annotations for the dataset. We will introduce the annotation methods in more detail in Section 3.2.The contributions of this paper are three-fold. First, we provide a large emoji list to be used as a label set for the emoji prediction task. These emojis are all frequently-used and meaningful, benefiting further research on the emoji prediction task and its connections to other NLP tasks. Second, we introduce a data annotation method based on the self-attention mechanism in Transformer networks (Vaswani et al., 2017) . The method is designed specifically for annotating aspect-based labels and can potentially be used on any NLP task. Third, we provide three types of annotations for emoji prediction based on a publicly available tweet dataset. Besides the commonly used tweet-level 2 multi-class classification labels, our annotations include passage-level multi-label and aspect-level multi-class classification labels for better understanding of the linguistic roles of emojis.We release a carefully curated (both manually and automatically) emoji prediction dataset based on the 64 top-ranked emojis in our emoji list. 3
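As a rough illustration of how attention maps could yield aspect-level labels, the sketch below pools a Transformer's attention from a summary position over the remaining tokens and keeps the highest-scoring ones as the span for a predicted emoji. The pooling over layers and heads, the relative threshold, and the fake attention tensor are all assumptions for this example; they are not the exact heuristics used to build the dataset described above.

```python
import numpy as np

def aspect_span_from_attention(attn, tokens, threshold=0.5):
    """Given an attention tensor of shape (layers, heads, seq, seq) for a tweet
    whose first position is a [CLS]-style summary token, pool the attention that
    this position pays to each remaining token and keep tokens above a relative
    threshold. Mean pooling and relative thresholding are assumptions."""
    cls_to_tokens = attn.mean(axis=(0, 1))[0, 1:]        # drop attention to position 0 itself
    scores = cls_to_tokens / (cls_to_tokens.max() + 1e-9)
    return [tok for tok, score in zip(tokens, scores) if score >= threshold]

rng = np.random.default_rng(0)
attn = rng.random((12, 12, 6, 6))      # fake attention for a 6-position input ([CLS] + 5 tokens)
attn /= attn.sum(-1, keepdims=True)    # normalize rows like softmaxed attention
print(aspect_span_from_attention(attn, ["happy", "birthday", "to", "my", "bestie"]))
```

With real model attention, the surviving tokens form the text fraction associated with the emoji under the multi-class passage-level prediction.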
0
Cross-lingual text representations have become extremely popular in NLP, since they promise universal text processing in multiple human languages with labeled training data only in a single one. They go back at least to the work of Klementiev et al. (2012) , and have seen an exploding number of contributions in recent years. Recent cross-lingual models provide representations for about 100 languages and vary in their training objectives. In offline learning, cross-lingual representations are obtained by projecting independently trained monolingual representations into a shared representational space using bilingual lexical resources (Faruqui and Dyer, 2014; Artetxe et al., 2017) . In joint learning , the cross-lingual representations are learned directly, for example as a byproduct of large-scale machine translation (Artetxe and Schwenk, 2018) .As parallel data is scarce for less frequent language pairs, the multilingual BERT model (mBERT) simply trains the BERT architecture (Devlin et al., 2019) on multilingual input from Wikipedia. The cross-lingual signal is thus only learned implicitly because mBERT uses the same representational space independent of the input language. This naive approach yields surprisingly high scores for cross-lingual downstream tasks, but the transfer does not work equally well for all languages. Pires et al. (2019) show that the performance differences between languages are gradual and that the representational similarity between languages seem to correlate with typological features. These relationships between languages remain opaque in cross-lingual representations and pose a challenge for the evaluation of their adequacy. Evaluations in down-stream tasks are an unreliable approximation because they can often be solved without accounting for deep linguistic knowledge or for interdependencies between subgroups of languages (Liang et al., 2020) .While more language-agnostic representations can be beneficial to improve the average performance in task-oriented settings and to smooth the performance differences between high-and low-resource languages (Libovickỳ et al., 2019; Zhao et al., 2020a) , linguists are more interested in the representational differences between languages. The field of computational historical linguistics, for example, examines subtle semantic and syntactic cues to infer phylogenetic relations between languages (Rama and Borin, 2015; Jäger, 2014) . Important aspects are the diachronic stability of word meaning (Pagel et al., 2007; Holman et al., 2008) and the analysis of structural properties for inferring deep language relationships (Greenhill et al., 2010; Wichmann and Saunders, 2007) .Traditionally, these phenomena have been approximated using hand-selected word lists and typological databases. Common ancestors for languages are typically inferred based on cases of shared word meaning and surface form overlap and it can be assumed that these core properties are also captured in large-scale cross-lingual representations to a certain extent. For example, find that phylogenetic relations between languages can be reconstructed from cross-lingual representations if the training objective optimizes monolingual semantic constraints for each language separately as in the multilingual MUSE model (Conneau et al., 2017) . MUSE is restricted to only 29 frequent languages, however. 
While mBERT is a powerful cross-lingual model covering an order of magnitude more languages (104), a better understanding of the type of signal captured in its representations is needed to assess its applicability as a testbed for cross-lingual or historical linguistic hypotheses. Our analysis quantifies the representational similarity across languages in mBERT and disentangles it along genetic, geographic, and structural factors.In general, the urge to improve the interpretability of internal neural representations has become a major research field in recent years. Whereas dense representations of images can be projected back to pixels to facilitate visual inspection, interpreting the linguistic information captured in dense representation of languages is more complex (Alishahi et al., 2019; Conneau and Kiela, 2018) . Diagnostic classifiers , representational stability analysis (Abnar et al., 2019) and indirect visualization techniques (Belinkov and Glass, 2019) are only a few examples for newly developed probing techniques. They are used to examine whether the representations capture part-of-speech information (Zhang and Bowman, 2018) , syntactic agreement (Giulianelli et al., 2018) , speech features (Chrupała et al., 2017) , and cognitive cues (Wehbe et al., 2014) . However, the majority of these interpretability studies focus solely on English. Krasnowska-Kieraś and Wróblewska (2019) perform a contrastive analysis of the syntactic interpretability of English and Polish representations and Eger et al. (2020) probe representations in three lower-resource languages. Cross-lingual interpretability research for multiple languages focuses on the ability to transfer representational knowledge across languages for zero-shot semantics (Pires et al., 2019) and for syntactic phenomena (Dhar and Bisazza, 2018) . In this work, we contribute to the nascent field of typological and comparative linguistic interpretability of language representations at scale (Kudugunta et al., 2019) and analyze representations for more than 100 languages.Our contributions: We probe the representations of one of the current most popular cross-lingual models (mBERT) and find that mBERT lacks information to perform well on cross-lingual semantic retrieval, but can indeed be used to accurately infer a phylogenetic language tree for 100 languages. Our results indicate that the quality of the induced tree depends on the inference algorithm and might also be the effect of several conflated signals. In order to better disentangle phylogenetic, geographic, and structural factors, we go beyond simple tree comparison and probe language distances inferred from cross-lingual representations by means of multiple regression. We find phylogenetic similarity to be the strongest and structural similarity to be the weakest signal in our experiments. The phylogenetic signal is present across all layers of mBERT. Our analysis not only contributes to a better interpretation and understanding of mBERT, but may also help explain its cross-lingual behavior in downstream tasks (Pires et al., 2019) . 1
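As a concrete, simplified version of this kind of probing, the sketch below builds a hierarchical tree over languages from per-language vectors (e.g. mean-pooled mBERT sentence representations). The toy random vectors, the cosine distance, and the average-linkage clustering are assumptions made for illustration; the tree-inference algorithms and distance measures used in the analysis above may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

def language_tree(lang_vectors):
    """Agglomeratively cluster languages from their averaged representations.
    Cosine distance and average linkage are choices made for this sketch only."""
    names = list(lang_vectors)
    matrix = np.stack([lang_vectors[n] for n in names])
    dists = pdist(matrix, metric="cosine")           # condensed pairwise distances
    return names, linkage(dists, method="average")   # (n_languages - 1, 4) linkage matrix

# toy vectors standing in for mean-pooled mBERT representations per language
rng = np.random.default_rng(1)
vecs = {lang: rng.normal(size=16) for lang in ["en", "de", "nl", "fi", "et"]}
names, Z = language_tree(vecs)
print(names)
print(Z)  # each row merges two clusters; scipy.cluster.hierarchy.dendrogram can plot the tree
```

Comparing the induced tree against a gold phylogeny, or regressing the pairwise distances on genetic, geographic, and structural predictors, then quantifies which signal dominates.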
0
Quality Estimation (QE) for Machine Translation (MT) (Blatz et al., 2004; Quirk, 2004; Specia et al., 2009) aims at providing quality scores or labels to MT output when translation references are not available. Sentence-level QE is usually conducted using human produced direct assessments (DA) (Graham et al., 2013) or post-edits. The latter allow deriving token-level quality indicators such as good and bad tags (Fonseca et al., 2019). Token-level QE is particularly useful for applications such as source pre-editing or focused MT post-editing, but requires high-quality fine-grained annotated data for supervised learning. Furthermore, token-level quality indicators can be seen as explanations for sentence-level scores, whether given by humans or automatically produced. However, the explainability of QE model decisions is obscured by contemporary approaches relying on large data-driven neural-based models, making use of pretrained contextual language models (LM) such as BERT (Devlin et al., 2019) and XLM (Conneau and Lample, 2019), albeit showing a steady performance increase as reported in the QE shared tasks (Fonseca et al., 2019). Yet, the QE layers and architectures are rarely investigated, neither for performance nor for interpretability purposes, and the center of attention is mainly on large pretrained models and generating additional (synthetic) training corpora. In this paper, we present a novel QE architecture which encompasses a metric-to-input attention mechanism allowing for several extensions of the usual QE approach. First, since sentence-level QE scores are usually obtained with surface-level MT metrics computed between translation outputs and human produced references or post-edits, such as HTER (Snover et al., 2006), we propose to make use of several metrics simultaneously in order to model translation errors at various granularities, i.e. at the character, token, and phrase levels. Second, we design a metric embeddings model which represents metrics in their own space through a dedicated set of learnable parameters, allowing for straightforward extensions of the number and type of metrics. Third, by employing an attention mechanism between metric embeddings and bilingual input representations, the metric-to-input attention weights indicate where each metric focuses given an input sequence, increasing the interpretability of the QE components. We conduct a set of experiments on the Eval4NLP 2021 shared task dataset (Fomicheva et al., 2021) using only the training data along with sentence-level scores officially released for the tasks (illustrated in Figure 1). In addition, we produce a large synthetic corpus for QE pretraining using publicly available resources.

Figure 1: Samples of source sentences, automatic translations and human post-editions, along with direct assessment (DA) scores, taken from the Eval4NLP 2021 shared task Estonian-English validation set, representing high and low quality translations. Additional metrics are presented, namely chrF, TER and BLEU, to illustrate variations related to metric granularity. Green and red colors mark tokens annotated with classes 0 and 1 respectively. High-quality example: Source: Religioon pakub vaimu puhastamiseks teatud vahendeid. MT: Religion offers certain means of cleansing the spirit. PE: Religion offers certain means of cleansing the spirit. Sentence-level scores: DA 0.905 - chrF 1.0 - TER 0.0 - BLEU 1.0. Low-quality example: Source: Tänu Uku kalastamiskirele pääseb Õnne 13 maja põlengust. MT: Thanks to the breath of fresh fishing, 13 houses are escaped from contempt. PE: Thanks to Uku's passion for fishing, the house at Õnne 13 is saved from fire. Sentence-level scores: DA 0.132 - chrF 0.366 - TER 0.667 - BLEU 0.0.

The contributions of our work are the following: (i) a novel QE architecture using metric embeddings and attention-based interpretable neural components allowing for unsupervised token-level quality indicators, (ii) an extensible framework designed for unrestricted sentence-level QE scores or labels where new metrics can be added through fine-tuning, (iii) reproducibility guaranteed by the use of publicly available datasets, tools, and models, and (iv) word and sentence-level QE results on par with or outperforming top-ranked approaches based on the official Eval4NLP 2021 shared task results. The remainder of this paper is organized as follows. In Section 2, we introduce some background in QE based on contextual language models, followed in Section 3 by the detailed implementation of the proposed model using metric embeddings and attention. In Section 4, the experimental setup is presented, including the data and tools used, as well as the training procedure of our models. Section 5 contains the results obtained in our experiments along with their analysis and interpretation. A comparison of our method and results with previous work is made in Section 6. Finally, we conclude and suggest future research directions in Section 7.
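To picture the metric-to-input attention idea, the following is a minimal PyTorch sketch in which learnable metric embeddings act as queries over the token representations of a source-MT pair; the dimensions, the single attention layer, and the linear scoring head are assumptions for illustration, not the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class MetricAttentionQE(nn.Module):
    """Metric embeddings attend over bilingual token representations; the
    attention weights double as per-metric token-level quality indicators."""
    def __init__(self, n_metrics=4, d_model=768, n_heads=8):
        super().__init__()
        self.metric_emb = nn.Embedding(n_metrics, d_model)  # e.g. DA, chrF, TER, BLEU
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)

    def forward(self, token_reprs):
        """token_reprs: (batch, seq_len, d_model) outputs of a bilingual encoder."""
        b = token_reprs.size(0)
        metrics = self.metric_emb.weight.unsqueeze(0).expand(b, -1, -1)  # (batch, n_metrics, d)
        ctx, weights = self.attn(metrics, token_reprs, token_reprs)      # metric-to-input attention
        return self.score(ctx).squeeze(-1), weights  # per-metric sentence scores, attention over tokens

model = MetricAttentionQE()
hidden = torch.randn(2, 30, 768)      # stand-in for XLM-R token representations
scores, attn = model(hidden)
print(scores.shape, attn.shape)       # torch.Size([2, 4]) torch.Size([2, 4, 30])
```

Adding a new metric then only requires a new row in the embedding table and fine-tuning, which is what makes the framework extensible.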
0
An event is something that occurs in a certain place at a certain time (Pustejovsky et al., 2003). Understanding events plays a major role in various natural language processing tasks such as information extraction (Humphreys et al., 1997), question answering (Narayanan and Harabagiu, 2004), textual entailment (Haghighi et al., 2005), event coreference (Choubey and Huang, 2018) and contradiction detection (De Marneffe et al., 2008). There has been a significant amount of work on automatic processing of events in text, including systems for event extraction, event coreference resolution, and temporal relation detection (Araki, 2018; Ning et al., 2017). However, events are not atomic entities: they often have complex internal structure that can be expressed in a variety of ways (Huttunen et al., 2002; Bejan and Harabagiu, 2008; Hovy et al., 2013). One of the unsolved problems related to event understanding is the detection of subevents, also referred to as event hierarchy construction. As described by Glavaš and Šnajder (2014a), there have been efforts that have focused on detecting temporal and spatial subevent containment individually. However, it is clear that subevent detection requires both simultaneously. The subevent relationship is defined in terms of a pair (e1, e2), where e1 and e2 are events: event e2 is a subevent of event e1 if e2 is spatiotemporally contained by e1. More precisely, we say that an event e1 is a parent event of event e2, and e2 is a child event of e1, if (1) e1 is a collector event that contains a complex sequence of activities; (2) e2 is one of these activities; and (3) e2 is spatially and temporally contained within e1 (i.e., e2 occurs at the same time and same place as e1) (Hovy et al., 2013; Glavaš and Šnajder, 2014b). This subevent relationship is independent of other types of relationships, e.g., a causal relationship between the events. Example 1 illustrates a text expression of a complex event hierarchy. Figure 1 shows a corresponding graphical representation of the hierarchy.

Example 1: Excerpt from the HiEve corpus (Glavaš et al., 2014a). Events are in bold and given a numerical subscript for reference. In all the examples the identified events are gold annotations, but for clarity not all annotations are included. "Egyptian police have said that five protesters were killed_1 when they were attacked_2 by an armed group near the Defense Ministry building in Cairo. The statement said that early this morning, the armed group attacked_3 the demonstrators who have for days been staging their protest_4 against the military government. . . . Police said that the attack_5 on Wednesday wounded_6 at least 50 protesters."

In Figure 1, we see that killed_1 and wounded_6 are explicitly annotated as subevents of attacked_3, while that event in turn is a subevent of protest_4. Events attacked_2 and attack_5 are explicitly indicated as coreferent with attacked_3. These relationships induce the implicit subevent relations shown by dashed lines. In this work we propose a pairwise model that leverages new discourse and narrative features to significantly improve subevent relation detection. We evaluate our model on two corpora, namely, the HiEve corpus (Glavaš et al., 2014a) and the Intelligence Community (IC) corpus 1 (Hovy et al., 2013). We build on feature sets proposed in previous work, but propose several important discourse- and narrative-level features.
We show that our model outperforms current systems on the subevent detection task by a significant margin. An error analysis reveals why these features are important and provides further details on why the subevent detection task is difficult. We begin the paper by discussing prior work on the subevent detection task (§2). Then we introduce our model and the feature set (§3). Following that, we describe the corpora (§4.1) we used and the experimental setup (§4.2). We then present the evaluation metrics and the performance of our model (§4.3), as well as compare our model's performance to previous work (§5). Finally, we show an extensive error analysis (§6) and conclude with a list of contributions (§7).
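To make the pairwise formulation concrete, the sketch below trains a toy pairwise classifier over hand-written features for ordered event pairs; the feature names and the tiny training set are invented for illustration and are not the feature set or data used in the paper.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each instance is an ordered event pair (e1, e2); the features below are
# illustrative stand-ins for lexical, discourse, and narrative cues.
train_pairs = [
    {"same_sentence": True,  "e1_is_collector_noun": True,  "shared_participants": 2, "e1_before_e2": True},
    {"same_sentence": False, "e1_is_collector_noun": False, "shared_participants": 0, "e1_before_e2": False},
]
train_labels = ["parent-child", "no-relation"]

model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression(max_iter=1000))
model.fit(train_pairs, train_labels)

test_pair = {"same_sentence": True, "e1_is_collector_noun": True, "shared_participants": 1, "e1_before_e2": True}
print(model.predict([test_pair])[0])
```

In practice the classifier is applied to every candidate pair in a document, after which the pairwise decisions have to be reconciled into a consistent hierarchy.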
0
Paraphrase generation, namely rewriting a sentence using different words and/or syntax while preserving its meaning (Bhagat and Hovy, 2013) , is an important technique in natural language processing, that has been widely used in various downstream tasks including question answering (Fader et al., 2014a; McCann et al., 2018) , summarization (Rush et al., 2015) , data augmentation (Yu et al., 2018) and adversarial learning (Iyyer et al., 2018) . However, not all paraphrases are equally useful. For most real-world applications, paraphrases which are too similar to the original sentence are of limited value, while those with high linguistic diversity, i.e. with large syntactic/lexical differences between the paraphrase and the original sentence, are more beneficial to the robustness and accuracy of automatic text evaluation and classification, and can avoid the blandness caused by repetitive patterns (Qian et al., 2019) . The quality of paraphrases is often evaluated using three dimensions, where high quality paraphrases are those with high semantic similarity as well as high lexical and/or syntactic diversity (McCarthy et al., 2009) .Generating high quality paraphrases can be challenging (for both humans and automatic models) since it is increasingly difficult to preserve meaning with increasing linguistic diversity. Indeed, when examining the quality of paraphrases among paraphrase generation datasets, one can find a wide range of paraphrase qualities, where the area of high quality is often very sparse (see Figure 1 ). This in turn results in scarcity of supervised data for high-quality paraphrase generation.A recent approach aiming to produce high quality paraphrases is controlled paraphrase generation, which exposes control mechanisms that can be manipulated to produce diversity. While the controlled generation approaches have yielded impressive results, they require providing the model with very specific information regarding the target sentence, such as its parse tree (Iyyer et al., 2018) or the list of keywords it needs to contain (Zeng et al., 2019) . However, for most downstream applications, the important property of the paraphrase is its overall quality, rather than its specific syntactic or lexical form. The over-specificity of existing controlbased methods not only complicates their usage and limits their scalability, but also hinders their coverage. Thus, it would be desirable to develop a paraphrase generation model, which uses a simple mechanism for directly controlling paraphrase quality, while avoiding unnecessary complications associated with fine-grained controls.In this paper we propose QCPG, a Quality Controlled Paraphrase Generation model, that given an input sentence and quality constraints, represented by a three dimensional vector of semantic similarity, and syntactic and lexical distances, produces a target sentence that conforms to the quality constraints.Our constraints are much simpler than previously suggested ones, such as parse trees or keyword lists, and leave the model the freedom to choose how to attain the desired quality levels.Enabling the direct control of the three quality dimensions, allows flexibility with respect to the specific requirements of the task at hand, and opens a range of generation possibilities: paraphrases of various flavors (e.g. syntactically vs. lexically diverse), quasi-paraphrases (with lower semantic similarity), and even non-paraphrases which may be useful for downstream tasks (e.g. 
hard negative examples of sentences that are linguistically similar but have different meanings (Reimers and Gurevych, 2020)). Our results show that the QCPG model indeed enables controlling paraphrase quality along the three quality dimensions. Furthermore, even though the training data is of mixed quality and exhibits scarcity in the high-quality area (see Figure 1), our model is able to learn high-quality paraphrasing behavior, i.e. it increases the linguistic diversity of the generated paraphrases without decreasing the semantic similarity compared to the uncontrolled baseline.
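To make the control mechanism concrete, the following is a minimal sketch of how a three-dimensional quality vector could be quantized into control tokens and prepended to the encoder input of a seq2seq paraphraser. The bucket size, the token names, and the token-level Jaccard proxy used for lexical distance are illustrative assumptions, not the exact recipe of QCPG.

```python
# Minimal sketch (not the paper's exact recipe): turn a 3-dimensional quality
# vector (semantic similarity, syntactic distance, lexical distance) into
# control tokens prepended to the source sentence of a seq2seq paraphraser.
# The bucket size, token names, and the token-level Jaccard proxy used for
# lexical distance are illustrative assumptions.

def lexical_distance(src: str, tgt: str) -> float:
    """Proxy for lexical diversity: 1 - Jaccard overlap of lower-cased tokens."""
    a, b = set(src.lower().split()), set(tgt.lower().split())
    return 1.0 - len(a & b) / max(len(a | b), 1)

def bucket(value: float, step: int = 5) -> int:
    """Quantize a score in [0, 1] into buckets of `step` percentage points."""
    return int(round(100 * value / step) * step)

def control_prefix(sem_sim: float, syn_dist: float, lex_dist: float) -> str:
    """Encode the target quality vector as control tokens."""
    return (f"<sem_{bucket(sem_sim)}> "
            f"<syn_{bucket(syn_dist)}> "
            f"<lex_{bucket(lex_dist)}>")

# At training time, the quality vector is measured on each (source, target) pair;
# at inference time, it is set to the desired quality constraints.
source = "The committee approved the proposal yesterday."
target = "Yesterday, the proposal was approved by the committee."
prefix = control_prefix(sem_sim=0.95,
                        syn_dist=0.60,
                        lex_dist=lexical_distance(source, target))
model_input = f"{prefix} {source}"
print(model_input)
```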
0
Prosody refers to the suprasegmental features of natural speech, such as rhythm and intonation, since it normally extends over more than one phoneme segment. Speakers use prosody to convey paralinguistic information such as emphasis, intention, attitude, and emotion. Humans listening to speech with natural prosody are able to understand the content with low cognitive load and high accuracy. However, most modern ASR systems only use an acous-tic model and a language model. Acoustic information in ASR is represented by spectral features that are usually extracted over a window length of a few tens of milliseconds. They miss useful information contained in the prosody of the speech that may help recognition.Recently a lot of research has been done in automatic annotation of prosodic events (Wightman and Ostendorf, 1994; Sridhar et al., 2008; Ananthakrishnan and Narayanan, 2008; Jeon and Liu, 2009) . They used acoustic and lexical-syntactic cues to annotate prosodic events with a variety of machine learning approaches and achieved good performance. There are also many studies using prosodic information for various spoken language understanding tasks. However, research using prosodic knowledge for speech recognition is still quite limited. In this study, we investigate leveraging prosodic information for recognition in an n-best rescoring framework.Previous studies showed that prosodic events, such as pitch-accent, are closely related with acoustic prosodic cues and lexical structure of utterance. The pitch-accent pattern given acoustic signal is strongly correlated with lexical items, such as syllable identity and canonical stress pattern. Therefore as a first study, we focus on pitch-accent in this paper. We develop two separate pitch-accent detection models, using acoustic (observation model) and lexical information (expectation model) respectively, and propose a scoring method for the correlation of pitch-accent patterns between the two models for recognition hypotheses. The n-best list is rescored using the pitch-accent matching scores combined with the other scores from the ASR system (acoustic and language model scores). We show that our method yields a word error rate (WER) reduction of about 3.64% and 2.07% relatively on two baseline ASR systems, one being a state-of-the-art recognizer for the broadcast news domain. The fact that it holds across different baseline systems suggests the possibility that prosody can be used to help improve speech recognition performance.The remainder of this paper is organized as follows. In the next section, we review previous work briefly. Section 3 explains the models and features for pitch-accent detection. We provide details of our n-best rescoring approach in Section 4. Section 5 describes our corpus and baseline ASR setup. Section 6 presents our experiments and results. The last section gives a brief summary along with future directions.
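A minimal sketch of the rescoring step described above: each n-best hypothesis receives a pitch-accent match score that measures the agreement between the acoustic (observation) model and the lexical (expectation) model, and this score is interpolated with the ASR acoustic and language model scores. The interpolation weights, the field names, and the exact form of the match score are illustrative assumptions.

```python
# Minimal sketch of n-best rescoring with a prosody (pitch-accent) match score.
# Weights and the exact form of the match score are illustrative assumptions;
# in practice they would be tuned on held-out data.

def accent_match_score(acoustic_probs, lexical_probs):
    """Agreement between the acoustic observation model and the lexical
    expectation model: average per-syllable probability that both models
    make the same pitch-accent decision."""
    score = 0.0
    for p_acc, p_lex in zip(acoustic_probs, lexical_probs):
        score += p_acc * p_lex + (1 - p_acc) * (1 - p_lex)
    return score / max(len(acoustic_probs), 1)

def rescore_nbest(nbest, w_am=1.0, w_lm=0.8, w_prosody=0.5):
    """Each hypothesis is a dict with ASR scores and per-syllable accent
    probabilities from the two prosody models; return the best one."""
    def total(h):
        return (w_am * h["am_score"] + w_lm * h["lm_score"]
                + w_prosody * accent_match_score(h["acoustic_accent"],
                                                 h["lexical_accent"]))
    return max(nbest, key=total)

nbest = [
    {"words": "prosody helps recognition", "am_score": -120.3, "lm_score": -35.1,
     "acoustic_accent": [0.9, 0.2, 0.8], "lexical_accent": [0.8, 0.3, 0.7]},
    {"words": "prosody helps wreck ignition", "am_score": -119.8, "lm_score": -41.0,
     "acoustic_accent": [0.9, 0.2, 0.6, 0.4], "lexical_accent": [0.2, 0.9, 0.1, 0.8]},
]
print(rescore_nbest(nbest)["words"])
```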
0
Quality Estimation (QE) aims to predict the quality of the output of Machine Translation (MT) systems when no gold-standard translations are available. It can make MT useful in real-world applications by informing end-users on the translation quality. We focus on sentence-level QE, usually formulated as a regression task where quality is required to be predicted on an continuous scale, e.g. 0-100.The high performances achieved in the most recent shared task on sentence-level QE have been attributed to the use of strong pretrained language models, namely BERT (Devlin et al., 2018) and its multilingual variants, especially XLM-Roberta (Conneau et al., 2020a) . These models have an extremely large number of parameters and, since they are required at training and inference time, they are very disk and RAM-hungry, also making inference slow. This poses challenges for real-time inference, and prohibits deployment on client machines with limited resources.Making models based on pre-trained representations smaller and more usable in practice is an active area of research. One approach is Knowledge Distillation (KD), aiming to extract knowledge from a top-performing large model (the teacher) into a smaller (in terms of memory print, computational power and prediction latency) yet wellperforming model (the student) (Hinton et al., 2015; Gou et al., 2020) . KD techniques have been used to make BERT and similar models smaller. For example, DistilBERT (Sanh et al., 2019) and TinyBERT (Jiao et al., 2020) follow the same general architecture as the teacher BERT, but with a reduced number of layers. However, these student models are also based on Transformers and, as such, they still have too large memory and disk footprints. For instance, the number of parameters in the multilingual DistilBERT-based TransQuest model for QE (Ranasinghe et al., 2020) is 135M.In this paper, we propose to distill the QE model directly, where the student architecture can be completely different from that of the teacher. Namely, we distill a large and powerful QE model based on XLM-Roberta into a small RNN-based model. Existing work along these lines has applied KD mainly to classification tasks (Tang et al., 2019; Sun et al., 2019) . We instead explore this approach in the context of regression. In contrast to classification, where KD provides useful information on the output distribution of incorrect classes, for regression the teacher predictions are point-based estimates, and as such have the same properties as gold labels. Therefore, it is not obvious whether teacher-student learning can be beneficial. The few existing works on KD for regression (Chen et al., 2017; Takamoto et al., 2020) use the teacher loss to minimise the impact of noise in the teacher predictions on the student training. However, this approach requires access to gold labelled examples to train the student, which in our case are very limited in number.Our approach allows for much larger unlabelled student training datasets, built only from source-MT pairs and labelled by the teacher model. We study the performance of student models under different training data regimes: standard training with gold labels, training with teacher predictions on the same data, training with teacher predictions on augmented in-domain and out-of-domain data, as well as augmented data filtered based on uncertainty of teacher predictions. 
Interestingly, we find that (i) training with teacher predictions results in better performance than training with gold labels; and (ii) student models trained with augmented data perform competitively with DistilBERT-based TransQuest predictors with 8x fewer parameters.
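A minimal sketch of the distillation regime described above: the teacher labels unlabelled source-MT pairs, and a small student regressor is trained on these pseudo-labels with a plain MSE objective. The linear student, the random features, and the synthetic teacher are stand-ins for the RNN student, its sentence encodings, and the XLM-R-based teacher; they are assumptions for illustration only.

```python
# Minimal sketch of distilling a sentence-level QE regressor: a large teacher
# labels unlabelled source-MT pairs with quality scores, and a small student is
# trained on these pseudo-labels with a plain MSE objective. The linear student
# and the random features stand in for the RNN student and its sentence
# encodings; the real teacher would be the XLM-R-based QE model.
import numpy as np

rng = np.random.default_rng(0)
TEACHER_W = rng.normal(size=32)          # hidden parameters of the toy "teacher"

def teacher_predict(features):
    """Stand-in for the large teacher model's quality predictions."""
    return features @ TEACHER_W + rng.normal(scale=0.1, size=len(features))

# Unlabelled augmented data: encodings of source-MT pairs (no gold labels used).
X_student = rng.normal(size=(5000, 32))
y_pseudo = teacher_predict(X_student)    # teacher predictions become the labels

# Train the student by gradient descent on the MSE to the pseudo-labels.
w = np.zeros(32)
learning_rate = 0.05
for _ in range(300):
    residual = X_student @ w - y_pseudo
    grad = X_student.T @ residual / len(X_student)
    w -= learning_rate * grad

mse = float(np.mean((X_student @ w - y_pseudo) ** 2))
print("student-vs-teacher MSE:", round(mse, 4))
```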
0
Grammatical and linguistic acceptability is an extensive area of research that has been studied for generations by theoretical linguists (e.g. Chomsky, 1957), and lately by cognitive and computational linguists (e.g. Keller, 2000; Lau et al., 2020; Warstadt et al., 2019). [Footnote 1: SwedishGlue (Swe. SuperLim) is a collection of datasets for training and/or evaluating language models for a range of Natural Language Understanding (NLU) tasks.] Acceptability of sentences is defined as "the extent to which a sentence is permissible or acceptable to native speakers of the language" (Lau et al., 2015, p. 1618), and there have been different approaches to studying it. Most work views acceptability as a binary phenomenon: the sentence is either acceptable/grammatical or not (e.g. Warstadt et al., 2019). Lau et al. (2014) show that the phenomenon is in fact gradient and is dependent on a larger context than just one sentence. While most experiments are theoretically driven, the practical value of this research has also been underlined, especially with respect to language learning and error detection (Wagner et al., 2009; Heilman et al., 2014; Daudaravicius et al., 2016). Datasets for acceptability judgments require linguistic samples that are unacceptable, which requires a source of so-called negative examples. Previously, such samples have been either manually constructed, artificially generated through machine translation (Lau et al., 2020), prepared by automatically distorting acceptable samples, e.g. by deleting or inserting words or inflections (Wagner et al., 2009), or collected from theoretical linguistics books (Warstadt et al., 2019). Using samples produced by language learners has not been mentioned in connection with acceptability and grammaticality studies. However, there are obvious benefits to getting authentic errors that automatic systems may meet in real life. Another benefit of reusing samples from learner corpora is that they often contain not only corrections, but also labels describing the corrections. The major benefit, though, is that the (un)acceptability judgments come from experts, i.e. teachers, assessors or trained assistants, and are therefore reliable.
0
Tables convey important information in a concise manner. This is true in many domains, scientific documents being one of them. Truth verification tasks in the past (e.g. the SemEval-2019 Fact Checking Task) have focused on written text without considering tables. The current shared task (Wang et al., 2021) focuses on tables written in the English language. It requires participants to develop systems to:
• predict the veracity of textual claims (statement verification)
• identify the table cells forming relevant evidence for the claim (evidence finding)
The shared task comprised two sub-tasks:
1. Subtask A: Table Statement Support
0
Statistical machine translation (SMT) systems are considered one of the most popular approaches to machine translation (MT). However, SMT can suffer from grammatically incorrect output with erroneous syntactic and semantic structure for the language pair on which it is applied. It has been observed that grammatical errors not only weaken fluency, but in some cases may even completely change the meaning of a sentence. In morphologically rich languages, grammatical accuracy is of particular importance, as the interpretation of syntactic relations depends heavily on the morphological agreement within sentences. Morphological errors create serious problems in the context of translating sentiment-related components from the source to the target language. In this paper, we handle these errors by focusing on the roles of the sentiment holder, the sentiment expression and the corresponding objects, and their relations with each other at the clause level. A common error that occurs during translation using SMT is that the relations among the holders, the associated sentiment expressions and their corresponding objects in a sentence (in the case of complex and compound sentences) may interchange. In the following example, the position of the sentiment expression has been changed in the target language during translation. Similar instances are found if an interchange occurs for other sentiment components such as the holder or the object. Example 1: Source: In 1905, <holder>Calcutta</holder> <expression_1>protested</expression_1> <object_1>the partition of Bengal</object_1> and <expression_2>boycotted</expression_2> <object_2>all the British Goods</object_2>. Target: 1905 sale, <holder>Calcutta</holder> <object_1>bongo vongo-r</object_1> <expression_2>boykot korechilo</expression_2> ebong <object_2>ssmosto british samogri</object_2> <expression_1>protibad janiyechilo</expression_1>. Thus, the entire semantics of the sentence has been changed even though the sentence is considered grammatically correct. Another major challenge is to develop a sentiment-phrase-aligned system between a resource-rich language, English, and a resource-constrained language, Bengali. The scarcity of state-of-the-art sentiment-aligned translation systems motivates us to perform this task. To the best of our knowledge, no previous work has considered a sentiment-aligned approach for English-Bengali translation. In our approach, sentiment expressions, sentiment holders and the corresponding objects of the holders are used to improve the phrase alignment of the SMT system during the training stage. Sentiment information is also used in the automatic post-editing of the SMT output after the decoding phase. SMT is based on a mathematical model and is reliable and cost-effective in many applications; this is one of the main reasons to choose SMT for our English-Bengali translation task. For automatic post-editing, we mark the phrases that contain a sentiment expression, a holder and their corresponding object. After translating the marked-up sentences, we then restructure the output according to the sentiment relations between the sentiment holder and the sentiment expression. Our approach involves the following steps:
• We first identify phrases which contain sentiment holders, sentiment expressions and their corresponding objects.
• We align these phrases using the word alignment provided by GIZA++.
• The aligned phrases are incorporated into the PB-SMT phrase table.
• Finally, the automatic post-editing has been carried out using the positional information of sentiment components. The rest of the paper is organized in the following manner. Section 2 briefly elaborates the related work. Section 3 provides an overview of the dataset used in our experiments. The proposed system is described in Section 4 while Section 5 provides the system setup for the various experiments. Section 6 includes the experiments and results obtained. Finally, Section 7 concludes and provides avenues for further work.
0
Spoken language translation (SLT) connects automatic speech recognition (ASR) and machine translation (MT) by translating recognized spoken language into a target language. In general, the speech translation process is divided into two separate parts. First, an ASR system provides an automatic transcription of spoken words. Then, the recognized words are translated by a machine translation system.However, a difficult part of SLT is the interface between the ASR system and the MT system, due to the mismatch between the output of the ASR system and the expected input of the MT system. A standard MT system expects grammatically correct written language as input, because it is usually trained on written bilingual text with punctuation marks and case information. In contrast, the output of an ASR system is automatically transcribed natural speech containing recognition errors. Thus, the expected input of the MT system does not match the actual ASR output. Furthermore, ASR systems recognize sequences of words and do not provide punctuation marks or case information.In this paper, we describe how the inconsistency between the ASR output and the SMT input is solved by replacing the source language data of a bilingual training corpus with automatically transcribed text. In a first approach, we keep the target language including case information and punctuation, because our goal is to improve the translation quality directly in an SLT task. On this new corpus, we train a sta-tistical machine translation (SMT) system and use the system to translate the recognized speech into another language. Furthermore, case information and punctuation are restored during the translation process.As a second approach, we built a bilingual training corpus with ASR output as source language data and the corresponding manual transcription with case information and punctuation marks as target language data. In the next step, an SMT system is trained on this corpus. Before translating the recognized speech into the target language, the ASR output is translated into manual transcription. Thus, the postprocessing of the ASR output is modelled as machine translation and we are able to translate the postprocessed ASR output with a standard translation system which is trained on written bilingual text.On the English-French SLT task from IWSLT 2012, we show that our presented approaches improve the translation quality by up to 0.9 BLEU and 0.9 TER.The paper is organized as follows. In the next section, we give a short overview of related work. In Section 3, we describe the usage of automatically transcribed text in the training process of an SMT system. Finally, we discuss the experimental results in Section 5, followed by a conclusion.
0
The naive Bayes classifier has been one of the core frameworks in information retrieval research for many years. Recently, naive Bayes has emerged as a research topic in itself because it sometimes achieves good performance on various tasks compared to more complex learning algorithms, in spite of its incorrect independence assumptions. Similarly, naive Bayes is also an attractive approach to the text classification task because it is simple enough to be practically implemented even with a great number of features. This simplicity allows us to easily integrate text classification and filtering modules with existing information retrieval systems, because the frequency-related information stored in general text retrieval systems is all the information required for naive Bayes learning. No further complex generalization processes are required, unlike other machine learning methods such as SVMs or boosting. Moreover, incremental adaptation using a small number of new training documents can be performed by just adding or updating frequencies. Several earlier works have extensively studied naive Bayes text classification (Lewis, 1992; Lewis, 1998; McCallum and Nigam, 1998). However, their pure naive Bayes classifiers treated a document as a binary feature vector, and so they cannot utilize the term frequencies in a document, resulting in poor performance. For that reason, the unigram language model classifier (or multinomial naive Bayes text classifier) has been referred to as an alternative and promising form of naive Bayes by a number of researchers (McCallum and Nigam, 1998; Dumais et al., 1998; Yang and Liu, 1999; Nigam et al., 2000). Although unigram language model classifiers usually outperform pure naive Bayes, they have also given disappointing results compared to many other statistical learning methods such as nearest neighbor classifiers (Yang and Chute, 1994), support vector machines (Joachims, 1998), and boosting (Schapire and Singer, 2000). In the real world, an operational text classification system is usually placed in an environment where the amount of human-annotated training documents is small despite there being hundreds of thousands of classes. Moreover, re-training of the text classifiers is required, since small numbers of new training documents are continuously provided. In this environment, naive Bayes is probably a more appropriate model for practical systems than other, more complex learning models. Therefore, more intensive studies of the naive Bayes text classification model are required. In this paper, we revisit the naive Bayes framework and propose a Poisson naive Bayes model for text classification with a statistical feature weighting method. Feature weighting has many advantages compared to previous feature selection approaches, especially when new training examples are continuously provided. Our new model assumes that a document is generated by a multivariate Poisson model, whose parameters are estimated by weighted averaging of the normalized and smoothed term frequencies over all the training documents. Under this assumption, we have tested the feature weighting approach with three measures: information gain, the χ²-statistic, and a newly introduced probability ratio. With the proposed model and feature weighting techniques, we can obtain much better performance without losing the simplicity and efficiency of the naive Bayes model. The remainder of this paper is organized as follows.
The next section briefly presents a naive Bayes framework for text classification. Section 3 describes our new naive Bayes model and the proposed technique, and the experimental results are presented in Section 4. Finally, we conclude the paper by suggesting possible directions for future work in Section 5.
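A minimal sketch of classification under a multivariate Poisson assumption with per-feature weights, as described above. The parameter estimation, the smoothing constant, and the weight values are simplified assumptions rather than the paper's exact estimators (which use information gain, the chi-square statistic, or the probability ratio).

```python
# Minimal sketch of classification under a multivariate Poisson naive Bayes
# model with feature weighting. Each class c has a rate lambda[c][w] per term w;
# a document with term frequencies tf[w] is scored by the weighted Poisson
# log-likelihood plus the class prior. The smoothing constant and the feature
# weights are illustrative assumptions.
import math
from collections import Counter

def train(docs_by_class, alpha=0.01):
    """Estimate per-class Poisson rates as smoothed average term frequencies."""
    rates, priors, vocab = {}, {}, set()
    total_docs = sum(len(d) for d in docs_by_class.values())
    for c, docs in docs_by_class.items():
        counts = Counter()
        for doc in docs:
            counts.update(doc.split())
        vocab |= set(counts)
        rates[c] = {w: (counts[w] + alpha) / len(docs) for w in counts}
        priors[c] = len(docs) / total_docs
    return rates, priors, vocab, alpha

def score(doc, c, rates, priors, vocab, alpha, weights=None):
    tf = Counter(doc.split())
    logp = math.log(priors[c])
    for w in vocab:
        lam = rates[c].get(w, alpha)
        k = tf[w]
        weight = 1.0 if weights is None else weights.get(w, 1.0)
        # Poisson log-pmf: k*log(lam) - lam - log(k!)
        logp += weight * (k * math.log(lam) - lam - math.lgamma(k + 1))
    return logp

docs_by_class = {
    "sports": ["match goal team", "team win goal"],
    "finance": ["stock market price", "price rise market"],
}
rates, priors, vocab, alpha = train(docs_by_class)
test = "goal for the team"
best = max(docs_by_class, key=lambda c: score(test, c, rates, priors, vocab, alpha))
print(best)   # -> sports
```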
0
Grammatical formalisms such as HPSG [Pollard and Sag, 1987; Pollard and Sag, 1992] and LFG [Kaplan and Bresnan, 1982] employ feature descriptions [Kasper and Rounds, 1986; Smolka, 1992] as the primary means for stating linguistic theories. However, the descriptive machinery employed by these formalisms easily exceeds the descriptive machinery available in feature logic [Smolka, 1992]. Furthermore, the descriptive machinery employed by both HPSG and LFG is difficult (if not impossible) to state in feature-based formalisms such as ALE [Carpenter, 1993], TFS [Zajac, 1992] and CUF [Dörre and Dorna, 1993], which augment feature logic with a type system. One such expressive device, employed both within LFG [Kaplan and Bresnan, 1982] and HPSG but unavailable in feature logic, is that of set descriptions. Although various researchers have studied set descriptions (with different semantics) [Rounds, 1988; Pollard and Moshier, 1990], two issues remain unaddressed. Firstly, there has not been any work on consistency checking techniques for feature terms augmented with set descriptions. Secondly, for applications within grammatical theories such as the HPSG formalism, set descriptions alone are not enough, since descriptions involving set union are also needed. Thus, to adequately address the knowledge representation needs of current linguistic theories one needs to provide set descriptions as well as mechanisms to manipulate them. In the HPSG grammar formalism [Pollard and Sag, 1987], set descriptions are employed for the modelling of so-called semantic indices ([Pollard and Sag, 1987], p. 104). The attribute INDS in the example in (1) is a multi-valued attribute whose value models a set consisting of (at most) 2 objects. However, multi-valued attributes cannot be described within feature logic [Kasper and Rounds, 1986; Smolka, 1992]. [The attribute-value matrix in (1) is garbled in the source; from the recoverable fragments, its INDS attribute contains two indices whose names are sandy and kim.] A further complication arises since, to be able to deal with anaphoric dependencies, we think that set memberships will be needed to resolve pronoun dependencies. Equally, set unions may be called for to incrementally construct discourse referents. Thus a set-valued extension to feature logic is insufficient on its own. Similarly, set-valued subcategorisation frames (see (2)) have been considered as a possibility within the HPSG formalism. [The attribute-value matrix in (2) is partly garbled in the source; it describes the lexical entry for believes, whose SYN|LOC|SUBCAT value is a set containing an element with SYN|LOC|HEAD|CAT v.] But once set-valued subcategorisation frames are employed, a set-valued analog of the HPSG subcategorisation principle is also needed. In section 2 we show that the set-valued analog of the subcategorisation principle can be adequately described by employing a disjoint union operation over set descriptions, as available within the logic described in this paper.
0
Words, syntactic groups, clauses, sentences, paragraphs, etc. usually form the basis of the analysis and processing of natural language text. However, texts in electronic form are just sequences of characters, including letters of the alphabet, numbers, punctuation, special symbols, whitespace, etc. The identification of word and sentence boundaries is therefore essential for any further processing of an electronic text. Tokenisation or word segmentation may be defined as the process of breaking up the sequence of characters in a text at the word boundaries (see, for example, Palmer, 2000) . Tokenisation may therefore be regarded as a core technology in natural language processing.Since disjunctive orthography is our focus, we distinguish between an orthographic word, that is a unit of text bounded by whitespace, but not containing whitespace, and a linguistic word, that is a sequence of orthographic words that together functions as a member of a word category such as, for example, nouns, pronouns, verbs and adverbs (Kosch, 2006) . Therefore, tokenisation may also be described as the process of identifying linguistic words, henceforth referred to as tokens.While the Bantu languages are all agglutinative and exhibit significant inherent structural similarity, they differ substantially in terms of their orthography. The reasons for this difference are both historical and phonological. A detailed discussion of this aspect falls outside the scope of this article, but the interested reader is referred to Cole (1955 ), Van Wyk (1958 & 1967 and Krüger (2006) .Setswana, Northern Sotho and Southern Sotho form the Sotho group belonging to the South-Eastern zone of Bantu languages. These languages are characterised by a disjunctive (also referred to as semi-conjunctive) orthography, affecting mainly the word category of verbs (Krüger, 2006:12-28) . In particular, verbal prefixal morphemes are usually written disjunctively, while suffixal morphemes follow a conjunctive writing style. For this reason Setswana tokenisation cannot be based solely on whitespace, as is the case in many alphabetic, segmented languages, including the conjunctively written Nguni group of South African Bantu languages, which includes Zulu, Xhosa, Swati and Ndebele.The following research question arises: Can the development and application of a precise tokeniser and morphological analyser for Setswana resolve the issue of disjunctive orthography? If so, subsequent levels of processing could exploit the inherent structural similarities between the Bantu languages (Dixon and Aikhenvald, 2002:8) and allow a uniform approach.The structure of the paper is as follows: The introduction states and contextualises the research question. The following section discusses tokenisation in the context of the South African Bantu languages. Since the morphological structure of the Setswana verb is central to the tokenisation problem, the next section comprises a brief exposition thereof. The paper then proceeds to discuss the finite-state computational approach that is followed. This entails the combination of two tokeniser transducers and a finite-state (rulebased) morphological analyser. The penultimate section concerns a discussion of the computational results and insights gained. Possibilities for future work conclude the paper.
0
Reordering remains one of the greatest challenges in Statistical Machine Translation (SMT) research as the key contextual information may span across multiple translation units. 1 Unfortunately, previous approaches fall short in capturing such cross-unit contextual information that could be critical in reordering. For example, state-of-the-art translation models, such as Hiero (Chiang, 2005) or Moses (Koehn et al., 2007) , are good at capturing local reordering within the confine of a translation unit, but their formulation is approximately a simple unigram model over derivation (a sequence of the application of translation units) with some aid from target language models. Moving to a higher order formulation (say to a bigram model) is highly impractical for several reasons: 1) it has to deal with a severe sparsity issue as the size of the unigram model is already huge; and 2) it has to deal with a spurious ambiguity issue which allows multiple derivations of a sentence pair to have radically different model scores.In this paper, we develop "Anchor Graph" (AG) where we use a graph structure to capture global contexts that are crucial for translation. To circumvent the sparsity issue, we design our model to rely only on contexts from a set of selected translation units, particularly those that appear frequently with important reordering patterns. We refer to the units in this special set as anchors where they act as vertices in the graph. To address the spurious ambiguity issue, we insist on computing the model score for every anchors in the derivation, including those that appear inside larger translation units, as such our AG model gives the same score to the derivations that share the same reordering pattern.In AG model, the actual reordering is modeled by the edges, or more specifically, by the edges' labels where different reordering around the anchors would correspond to a different label. As detailed later, we consider two distinct set of labels, namely dominance and precedence, reflecting the two dominant views about reordering in literature, i.e. the first one that views reordering as a linear operation over a sequence and the second one that views reordering as a recursive operation over nodes in a tree structure The former is prevalent in phrase-based context, while the latter in hierarchical phrase-based and syntax-based context. More concretely, the dominance looks at the anchors' relative positions in the translated sentence, while the precedence looks at the anchors' relative positions in a latent structure, induced via a novel synchronous grammar: Anchorcentric, Lexicalized Synchronous Grammar.From these two sets of labels, we develop two probabilistic models, namely the dominance and the orientation models. As the edges of AG link pairs of anchors that may appear in multiple translation units, our AG models are able to capture high order contextual information that is previously absent. Furthermore, the parameters of these models are estimated in an unsupervised manner without linguistic supervision. More importantly, our experimental results demonstrate the efficacy of our proposed AGbased models, which we integrate into a state-of-theart syntax-based translation system, in a large scale Chinese-to-English translation task. We would like to emphasize that although we use a syntax-based translation system in our experiments, in principle, our approach is applicable to other translation models as it is agnostic to the translation units.
0
Minimum Bayes Risk (MBR) is a theoretically elegant decision rule that has been used for single-system decoding and system combination in machine translation (MT). MBR arose in Bayes decision theory (Duda et al., 2000) and has since been applied to speech recognition (Goel and Byrne, 2000) and machine translation (Kumar and Byrne, 2004). The idea is to choose hypotheses that minimize Bayes Risk as opposed to those that maximize posterior probability. This enables the use of task-specific loss functions (e.g. BLEU). However, the definition of Bayes Risk depends critically on the posterior probability of hypotheses. In single-system decoding, one could approximate this probability using model scores. However, for system combination, the various systems have incompatible scores. In practice, MT designers resort to a uniform probability, but the result is that the chosen hypothesis no longer has anything to do with Bayes Risk. This hypothesis can be seen as a consensus of multiple hypotheses, and in practice the consensus translation is often good, but it cannot be accurately thought of as MBR. Here, we propose a method that better achieves MBR in system combination settings. The insight is to generalize the loss function in the MBR equation and allow it to be parameterized. The parameters are then tuned on a small development set so that the loss function is converted to one that gives low Bayes Risk under the assumption of uniform posteriors. We will show that a small bitext is sufficient for tuning this generalized loss, and that it vastly outperforms the conventional MBR approach in system combination. In the following, we first review MBR and explain the difficulty of applying it to system combination (Section 2). Then, we propose our Generalized MBR (Section 3) and evaluate it on the NTCIR Patent Translation tasks (Section 4). Finally, we conclude in Section 5.
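A minimal sketch of the conventional MBR decision rule over a pooled hypothesis list with uniform posteriors, which is the system-combination baseline discussed above. A simple unigram-overlap loss stands in for 1 - BLEU; the hypotheses and the loss function are illustrative assumptions.

```python
# Minimal sketch of MBR decoding over a pooled n-best list with uniform
# posteriors, as typically done in system combination:
#   y* = argmin_y  sum_{y'} P(y') * L(y, y')
# A simple unigram-overlap loss stands in for 1 - BLEU; real systems would use
# sentence-level BLEU or another task-specific loss.
from collections import Counter

def loss(hyp: str, ref: str) -> float:
    """1 - unigram F1 between a hypothesis and a pseudo-reference."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 1.0
    p = overlap / sum(h.values())
    rec = overlap / sum(r.values())
    return 1.0 - 2 * p * rec / (p + rec)

def mbr_decode(hypotheses):
    """Pick the hypothesis with minimum expected loss under uniform posteriors."""
    posterior = 1.0 / len(hypotheses)
    def expected_loss(y):
        return sum(posterior * loss(y, y_prime) for y_prime in hypotheses)
    return min(hypotheses, key=expected_loss)

pooled = [
    "the patent covers a new engine design",
    "the patent covers the new engine design",
    "a patent covering new engine designs",
]
print(mbr_decode(pooled))
```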
0
Traditional written corpora for linguistics research are created primarily from printed text, such as newspaper articles and books. With the growth of the World Wide Web as an information resource, it is increasingly being used as training data in Natural Language Processing (NLP) tasks.There are many advantages to creating a corpus from web data rather than printed text. All web data is already in electronic form and therefore readable by computers, whereas not all printed data is available electronically. The vast amount of text available on the web is a major advantage, with Keller and Lapata (2003) estimating that over 98 billion words were indexed by Google in 2003.The performance of NLP systems tends to improve with increasing amount of training data. Banko and Brill (2001) showed that for contextsensitive spelling correction, increasing the training data size increases the accuracy, for up to 1 billion words in their experiments.To date, most NLP tasks that have utilised web data have accessed it through search engines, using only the hit counts or examining a limited number of results pages. The tasks are reduced to determining n-gram probabilities which are then estimated by hit counts from search engine queries. This method only gathers information from the hit counts but does not require the computationally expensive downloading of actual text for analysis. Unfortunately search engines were not designed for NLP research and the reported hit counts are subject to uncontrolled variations and approximations (Nakov and Hearst, 2005) . Volk (2002) proposed a linguistic search engine to extract word relationships more accurately.We created a 10 billion word topic-diverse Web Corpus by spidering websites from a set of seed URLs. The seed set is selected from the Open Directory to ensure that a diverse range of topics is included in the corpus. A process of text cleaning transforms the HTML text into a form useable by most NLP systems -tokenised words, one sentence per line. Text filtering removes unwanted text from the corpus, such as non-English sentences and most lines of text that are not grammatical sentences. We compare the vocabulary of the Web Corpus with newswire.Our Web Corpus is evaluated on two NLP tasks. Context-sensitive spelling correction is a disambiguation problem, where the correction word in a confusion set (e.g. {their, they're}) needs to be selected for a given context. Thesaurus extraction is a similarity task, where synonyms of a target word are extracted from a corpus of unlabelled text. Our evaluation demonstrates that web text can be used for the same tasks as search engine hit counts and newspaper text. However, there is a much larger quantity of freely available web text to exploit.
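A minimal sketch of the kind of evaluation task mentioned above: context-sensitive spelling correction by comparing corpus counts of each candidate in its surrounding context. The toy corpus, the trigram context, and the add-one smoothing are illustrative; in the setting described here the counts would come from the Web Corpus rather than search-engine hit counts.

```python
# Minimal sketch of context-sensitive spelling correction from corpus counts:
# for a confusion set like {"their", "they're"}, pick the candidate whose
# surrounding trigram is most frequent in the corpus. The toy corpus and the
# add-one smoothing are illustrative only.
from collections import Counter

corpus = (
    "they collected their results before the deadline . "
    "they're happy with their results . "
    "the students said they're ready ."
).split()

trigram_counts = Counter(zip(corpus, corpus[1:], corpus[2:]))

def choose(left: str, right: str, confusion_set) -> str:
    """Pick the candidate c maximizing count(left, c, right), add-one smoothed."""
    return max(confusion_set,
               key=lambda c: trigram_counts[(left, c, right)] + 1)

print(choose("with", "results", {"their", "they're"}))   # -> their
print(choose("said", "ready", {"their", "they're"}))     # -> they're
```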
0
In recent years, advances in the field of Natural Language Processing (NLP) have revolutionized the way machines are used to interpret humanwritten text. With the rapid accumulation of publicly available documents, from newspaper articles to social media posts, machine learning methods designed to automate data analysis are urgently needed. A problem that has been relevant since the dawn of NLP is the automatic summary extraction from a large corpus of text. The development of a consistent and time-efficient method of extractive summarization can assist journalists in their day to day tasks, as well as provide better tools for information retrieval.Summaries need to be as brief as possible but must also capture the important elements of a text. This turns out to be a challenging task for any algorithm to carry out, since there is a virtually infi-nite number of documents that can exist, and each one of them can refer to a unique concept. Natural language is tricky for a computer to model; the absence or presence of a single word can shift the meaning of a whole sentence or even of a whole chapter. On the other hand, some words do not add any value to a sentence, the meaning is still the same even if we ignore them. To make matters even more complex, a word can be crucial for one article but of little importance to another.Human brains have evolved to effectively detect complex patterns in text, to focus on the most important bits of a text while ignoring those that are less important. For a machine, the importance of word or a sentence is not obvious, as it needs to be programmed with a built-in way to assess it in any given context. For the purposes of summary extraction, an automatic summarizer needs to be able to compare words, or sentences via computational means, and announce those with the highest scores as the most relevant for a given document. The representation, aka. the method by which these similarity scores are assigned, is of critical importance to any summary extraction task.When the representation is selected, the next step is training the model, that is, feeding the sentences represented as numerical sequences, to a machine learning procedure. If the representation and the dataset are suitable for the goal we are trying to accomplish, we can expect that the model will be able to predict which words or sentences are more important to a given document. Summing up all the sentences that the model considers to be important, results in a summary of the input text. Topics can be viewed as semantic groups that refer to a particular portion of reality. A document can refer to one or more distinct topics, which humans can often easily distinguish. For example, the words "fishing", "boat" , "waves" , have something in common; they are all affiliated with the sea. We can think of Sea as one topic, which contains these three words. However, topics are not always that identifiable and there can be broader or narrower topics. Resuming the previous example, alternatively, there can exist a topic on fishing , another one on boats and another one on ocean waves. Each one of them contains a number of words that are directly tied to that concept.As demonstrated, there is no unique way to infer topics from an input document. 
It depends on the representation, i.e. the way that we measure the similarity scores between two words. It only makes sense that if two words are similar, they have a high chance of belonging to the same topic. This statement derives from the distributional hypothesis in linguistics, which proposes that words that occur in similar contexts tend to have similar meanings (Harris, 1954). However, we have to keep in mind that one word can also belong to one or more topics, and that the number of topics in a document is also not known.
0
Rooted in Structural Linguistics (de Saussure, 1966; Harris, 1951), Distributional Semantic Models (DSMs, see e.g. Baroni and Lenci, 2010) characterize the meaning of lexical units by the contexts they appear in, cf. (Wittgenstein, 1963; Firth, 1957). Using the duality of form and contexts, forms can be compared along their contexts (Miller and Charles, 1991), giving rise to the field of Statistical Semantics. A data-driven, unsupervised approach to representing word meaning is attractive as there is no need for the laborious creation of lexical resources. Further, these approaches naturally adapt to the domain or even the language at hand. Desirable, in general, is a model that provides a firm basis for a wider range of (semantic) tasks, as opposed to specialised solutions on a per-task basis. While most approaches to distributional semantics rely on dense vector representations, the reasons for this seem more technical than well-justified. To de-bias the discussion, I propose a competitive graph-based formulation. Since all representations have advantages and disadvantages, I will discuss some ways to fruitfully combine graphs and vectors in the future. Vector space models have a long tradition in Information Retrieval (Salton et al., 1975), and heavily influence the way we think about representing documents and terms today. The core idea is to represent each document with a bag-of-words vector of |V| dimensions with vocabulary V, counting how often each word appears in the respective document. Queries, which are in fact very short documents, can be matched to documents by appropriately comparing their vectors. Since V is large, the representation is sparse: most entries in the vectors are zero. Note, however, that zeros are not stored in today's indexing approaches. When Deerwester et al. (1990) introduced Latent Semantic Analysis (LSA), its major feature was to reduce the dimensionality of vectors, utilising the entirety of all documents for characterising words by the documents they appear in, and vice versa. Dimensionality reduction approaches like Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) map distributionally similar words (those occurring in similar contexts) to similar lower-dimensional vectors, where the dimensionality typically ranges from 200 to 10'000. Such a representation is dense: there are virtually no zero entries in these vectors. A range of more recent models, such as Latent Dirichlet Allocation (LDA), are characterised in the same way; variants are distinguished by the notion of context (document vs. window-based vs. structured by grammatical dependencies) and the mechanism for dimensionality reduction. With the advent of neural embeddings such as word2vec (Mikolov et al., 2013), a series of works showed modest but significant advances in semantic tasks over previous approaches. Levy and Goldberg (2014b), however, showed that there is no substantial representational advance in neural embeddings, as they approximate matrix factorisation as used in LSA. The advantage of word2vec is rather its efficient and scalable implementation, which enables the processing of larger text collections.
Improvements on task performance can mostly be attributed to better tuning of hyperparameters 1 -which however overfits the DSM to a task at hand, and defies the premise of unsupervised systems of not needing (hyper)supervision.But there is a problem with all of these approaches: the fallacy of dimensionality 2 , following from a simplification that we should not apply without being aware of its consequences: there is no 'appropriate number' of dimensions in natural language, because natural language follows a scale-free distribution on all levels (e.g. (Zipf, 1949; Steyvers and Tenenbaum, 2005; Mukherjee et al., 2008) , inter al.). Thus, a representation with a fixed number of dimensions introduces a granularity -'major' dimensions encode the most important distinctions while 'minor' distinctions in the data cannot be modelled if the granularity is too coarse. This is why the recommended number of dimensions depends on the task, the dataset's size and even the domain. In principle, there are two conclusions from studies that vary the number of dimensions to optimise some sort of a score: (a) in one type of study, there is a sweet spot in the number of dimensions, typically between 50 and 2000. This means that the dimension is indeed task-dependent, (b) the 'optimal' number of dimensions is the highest number tested, indicating that it probably would have been better to keep a sparse representation. Interestingly, the most frequent reason researchers state, if asked why they did not use a sparse representation, is a technical one: many machine learning and statistical libraries do not natively operate on sparse representations, thus run out of memory when trying to represent all those zeros.
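A minimal sketch of the sparse-versus-dense contrast discussed above: a sparse word-by-word co-occurrence matrix is built from a toy corpus and then reduced to dense low-dimensional vectors with truncated SVD, LSA-style. The corpus, the window size, and the choice of k = 2 dimensions are illustrative only.

```python
# Minimal sketch of the sparse-vs-dense contrast discussed above: build a sparse
# word-by-word co-occurrence matrix from a toy corpus, then obtain dense
# low-dimensional vectors via truncated SVD (LSA-style). The corpus, window
# size and k=2 dimensions are illustrative assumptions.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import svds

sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog played",
]
tokens = [s.split() for s in sentences]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Sparse symmetric co-occurrence counts within a +/-2 token window.
window = 2
cooc = lil_matrix((len(vocab), len(vocab)), dtype=np.float64)
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                cooc[idx[w], idx[sent[j]]] += 1.0

# Dense representation: keep only the top-k singular dimensions.
k = 2
u, s, _ = svds(cooc.tocsr(), k=k)
dense = u * s          # each row is a k-dimensional word vector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print("cat ~ dog :", round(cosine(dense[idx["cat"]], dense[idx["dog"]]), 3))
print("cat ~ mat :", round(cosine(dense[idx["cat"]], dense[idx["mat"]]), 3))
```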
0
Deep neural networks have achieved remarkable successes in natural language processing recently. Although neural models have demonstrated performance superior to humans on some tasks, e.g. reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.) , it still lacks the ability of discrete reasoning, resulting in low accuracy on math reasoning. Thus, it is hard for pure neural network approaches to tackle the task of solving math word problems (MWPs) , which requires a model to be capable of natural language understanding and discrete reasoning. MWP solving aims to automatically answer a math word problem by understanding the textual description of the problem and reasoning out the underlying answer. A typical MWP is a short story that describes a partial state of the world and poses a question about an unknown quantity or multiple unknown quantities. To solve an MWP, the relevant quantities need to be identified from the text. Furthermore, the correct operators along with their computation order among these quantities need to be determined. Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs. Inspired by the recent amazing progress on neural semantic parsing (Liang et al., 2017a) and reading comprehension , we address this problem by neural-symbolic computing.Recently, many researchers (Wang et al., 2017; Huang et al., 2018; Wang et al., 2018b Wang et al., , 2019 Xie and Sun, 2019; Chiang and Chen, 2019) , inspired by an encoder-decoder framework (Cho et al., 2014) , apply neural networks to solve MWPs by learning the mapping function between problems and their corresponding equations, and achieve remarkable successes. The encoder uses a neural network to represent a problem as a real-valued vector, and the decoder uses another neural network to generate an equation or expression token by token. The main difference among previous methods is the way to decode expressions or equations. However, they only follow the encoder-decoder paradigm but lacking the ability to explicitly incorporate essential math symbolic constraints (e.g. commonsense constants, formulation regularization), leading to unexplainable and unreasonable predictions. Besides, most of them only focus on arithmetic MWPs without any unknown, preventing them from generalizing to various types of MWPs, such as equation set problems.To address the above issues, we propose a novel Neural-Symbolic Solver (NS-Solver), which explicitly and seamlessly incorporates different levels of symbolic constraints by auxiliary learning tasks. Our NS-Solver consists of three main components, a problem reader to encode the math word problems into vector representations, a programmer to generate the symbolic grounded equations, which are executed to produce answers, and a symbolic executor to obtain final results. In addition to the supervised training objective between generated symbolic grounded equations and groundtruth equations, our solver is also optimized by four novel auxiliary objectives that enforce four levels of problem understanding and symbolic reasoning. First, we apply number prediction task to predict both the number quantity and number location in the problem in a self-supervised manner. Second, we deploy commonsense constant prediction task to predict what prior commonsense knowledge (e.g. how many legs a chicken has) is required for our solver. 
Third, we propose a program consistency checker to compute the semantic loss between the predicted program and the ground-truth equation to ensure reasonable equation mapping. Finally, we also propose a novel duality exploiting task that exploits the quasi-duality between symbolic grounded equation generation and the problem's part-of-speech generation to enhance the understanding ability of our solver. There are some key advantages to our solution. First of all, the above four auxiliary tasks can produce additional training signals, which improves data efficiency in training and makes our solver more robust. Second, using the predicted constants to constrain the target symbolic table can greatly reduce the search space, which means that our solver can generate correct symbolic grounded equations more easily and more accurately. Third, the auxiliary tasks have been shown to help reduce the domain gap between seen and unseen MWPs (Sun et al., 2020), thus improving the reasoning ability of our solver. Besides, beyond the current large-scale high-quality MWP benchmark that only includes one type of problem, we also construct a large-scale, challenging Chinese MWP dataset, CM17K, which contains 4 types of MWPs (arithmetic MWPs, one-unknown linear MWPs, one-unknown non-linear MWPs, and equation set problems) with more than 17K samples, to provide a more realistic and challenging benchmark for developing a universal and scalable math solver. Extensive experiments on the public Math23K and our proposed CM17K demonstrate the superiority of our NS-Solver compared to state-of-the-art methods in predicting final results while ensuring intermediate equation rationality.
0
The success of statistical approaches to Machine Translation (MT) can be attributed to the IBM models (Brown et al., 1993) that characterize wordlevel alignments in parallel corpora. Parameters of these alignment models are learnt in an unsupervised manner using the EM algorithm over sentence-level aligned parallel corpora. While the ease of automatically aligning sentences at the word-level with tools like GIZA++ (Och and Ney, 2003) has enabled fast development of statistical machine translation (SMT) systems for various language pairs, the quality of alignment is typically quite low for language pairs that diverge from the independence assumptions made by the generative models. Also, an immense amount of parallel data enables better estimation of the model parameters, but a large number of language pairs still lack parallel data.Two directions of research have been pursued for improving generative word alignment. The first is to relax or update the independence assumptions based on more information, usually syntactic, from the language pairs (Cherry and Lin, 2006) . The second is to use extra annotation, typically word-level human alignment for some sentence pairs, in conjunction with the parallel data to learn alignment in a semi-supervised manner. Our research is in the direction of the latter, and aims to reduce the effort involved in hand-generation of word alignments by using active learning strategies for careful selection of word pairs to seek alignment.Active learning for MT has not yet been explored to its full potential. Much of the literature has explored one task -selecting sentences to translate and add to the training corpus (Haffari et al., 2009) . In this paper we explore active learning for word alignment, where the input to the active learner is a sentence pair (s J 1 , t I 1 ), present in two different languages S = {s * } and T = {t * }, and the annotation elicited from human is a set of links {(j, i) : j = 0 • • • J; i = 0 • • • I}. Unlike previous approaches, our work does not require elicitation of full alignment for the sentence pair, which could be effortintensive. We use standard active learning query strategies to selectively elicit partial alignment information. This partial alignment information is then fed into a semi-supervised word aligner which per-forms an improved word alignment over the entire parallel corpus.Rest of the paper is organized as follows. We present related work in Section 2. Section 3 gives an overview of unsupervised word alignment models and its semi-supervised improvisation. Section 4 details our active learning framework with discussion of the link selection strategies in Section 5. Experiments in Section 6 have shown that our selection strategies reduce alignment error rates significantly over baseline. We conclude with discussion on future work.
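A minimal sketch of one standard query strategy that fits the setting described above: rank candidate word-pair links by the uncertainty of the current aligner's posterior and elicit the most uncertain ones from the human annotator. The entropy-based score and the data format are illustrative assumptions, not necessarily the selection strategies used in the paper.

```python
# Minimal sketch of uncertainty-based link selection for active learning of
# word alignment: given posterior link probabilities from the current aligner,
# query the human about the links the model is least sure about (posterior
# closest to 0.5, i.e. highest binary entropy). The scoring and the flat list
# of candidates are illustrative assumptions.
import math

def link_entropy(p: float) -> float:
    """Binary entropy of a link posterior; maximal at p = 0.5."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_queries(candidates, budget: int):
    """candidates: list of (sentence_id, src_pos, tgt_pos, posterior)."""
    return sorted(candidates,
                  key=lambda c: link_entropy(c[3]),
                  reverse=True)[:budget]

candidates = [
    (0, 1, 2, 0.97),   # aligner is confident: not worth asking
    (0, 3, 4, 0.52),   # very uncertain: good query
    (1, 0, 0, 0.48),   # very uncertain: good query
    (1, 2, 5, 0.05),   # confidently unaligned: not worth asking
]
for query in select_queries(candidates, budget=2):
    print("ask annotator about link", query[:3])
```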
0
Medical dialogue systems, which have gained increasing attention, aim to communicate with patients to enquire about diseases beyond their selfreported and make an automatic diagnosis (Wei et al., 2018; Lin et al., 2019) . It has the potential to substantially automate the diagnostic process while also lowering the cost of gathering information from patients (Kao et al., 2018) . In addition, preliminary diagnosis findings that are generated by a medical dialogue system may help doctors make a diagnosis more quickly. Because of these advantages, researchers work on addressing sub-problems in a medical dialogue system, such as natural language understanding (Lin et al., 2019; Shi et al., 2020) .However, the dialogue system for medical diagnosis, on the other hand, has specific require- ments for dialogue reasoning in the context of medical knowledge. The diagnosis elicited by the dialogue system should be associated with the underlying medical condition and coherent with medical knowledge. In the absence of medical knowledge, traditional generative dialogue models frequently use neural sequence modelling (Sutskever et al., 2014; Vaswani et al., 2017) and cannot be directly applied to the medical dialogue scenario.Recently, transformer-based language models (LMs) (Devlin et al., 2019; Radford et al., 2019; Song et al., 2019) are fine-tuned for medical dialogue tasks. collected a MedDialog dataset and fine-tuned various transformer-based LMs which includes a vanilla transformer (Vaswani et al., 2017) , GPT (Radford et al., 2019) and BERT-GPT Lewis et al., 2020) for medical dialogue generation task. , in another study, presented a CovidDialog dataset and then train dialogue generation models based on Transformer, GPT-based model, and BART (Lewis et al., 2020) and BERT-GPT for medical dialogue generation tasks. These LMs are trained on huge corpus but may not provide a good representation of specific domains (Müller et al., 2020) and need an adequate amount of task-specific data (Dou et al., 2019) in order to establish correlations between diseases and symptoms (see Figure 1 ). Instead of using publicly available models, we can pre-train a model that emphasizes domain-specificity. On the other hand, pre-training is time-intensive and computationally costly, making it unavailable for most users. Furthermore, while it is possible to inject domain-specific knowledge into LMs during pretraining, this method of acquiring knowledge can be expensive and inefficient. For instance, pretraining data must contain many occurrences of the words "Panadol" and "headache" occurring together for the model to learn that "Panadol" can treat headaches. What other options do we have to make the model an expert in its field besides this one? The knowledge graph (KG), also known as an ontology, was a good solution in the early stages of research. SNOMED-CT (Bodenreider, 2008) , in the medical field, and HowNet (Dong et al., 2010) , in the field of Chinese conception, are two examples of KGs developed as knowledge was distilled into a structured form. If KG can be incorporated into the LM, it will provide domain knowledge to the computational method, enhancing its effectiveness on domain-specific tasks while significantly lowering the expense of pre-training. To address the limitations mentioned above, this article describes a method for incorporating domain-specific external knowledge into transformer-based LMs for medical dialogue generation tasks. 
Our contributions are as follows:• We presented a new method that incorporates medical knowledge to transformer-based language models;• The proposed method first injects knowledge from a medical knowledge graph into an utterance. Next, the embedding layer transforms the utterance tree into an embedding that is fed to the masked self-attention of a transformer, followed by the decoder to generate the response.• To evaluate the performance of the proposed method, we performed both automatic and human evaluations. Our results demonstrated that incorporating medical knowledge improves the performance compared to several state-of-the-art baselines on the MedDialog dataset.
0
The purpose of an information retrieval (IR) system is to retrieve the documents relevant to a user's information need expressed in the form of a query. Many information needs are event-oriented, while at the same time there exists an abundance of event-centered texts (e.g., breaking news, police reports) that could satisfy these needs. Furthermore, event-oriented information needs often involve structure that cannot easily be expressed with keyword-based queries (e.g., "What are the countries that President Bush has visited and in which has his visit triggered protests?"). Traditional IR models (Salton et al., 1975; Robertson and Jones, 1976; Ponte and Croft, 1998) rely on shallow unstructured representations of documents and queries, making no use of syntactic, semantic, or discourse-level information. On the other hand, models utilizing structured event-based representations have not yet proven useful in IR. However, significant advances in event extraction have been achieved in the last decade as the result of standardization efforts (Pustejovsky et al., 2003) and shared evaluation tasks (Verhagen et al., 2010), renewing the interest in structured event-based text representations. In this paper we present a novel retrieval model that relies on a structured event-based representation of text and addresses event-centered queries. We define an event-oriented query as a query referring to one or more real-world events, possibly including their participants, the circumstances under which the events occurred, and the temporal relations between the events. We account for such queries by structuring both documents and queries into event graphs (Glavaš and Šnajder, 2013b). The event graphs are built from individual event mentions extracted from text, capturing their protagonists, times, locations, and temporal relations. To measure the query-document similarity, we compare the corresponding event graphs using graph kernels (Borgwardt, 2007). Experimental results on two news story collections show significant improvements over state-of-the-art keyword-based models. We also show that our models are especially suitable for retrieval from collections containing topically similar documents.
0
Many algorithms in speech and language processing can be viewed as instances of dynamic programming (DP) (Bellman, 1957). The basic idea of DP is to solve a bigger problem by divide-and-conquer, while reusing the solutions of overlapping subproblems to avoid recalculation. The simplest such example is the Fibonacci series, where each F(n) is used twice (if cached). The correctness of a DP algorithm is ensured by the optimal substructure property, which informally says that an optimal solution must contain optimal subsolutions for subproblems. We will formalize this property as an algebraic concept of monotonicity in Section 2.

search space \ ordering           | topological-order  | best-first
graph + semirings (2)             | Viterbi (3.1)      | Dijkstra/A* (3.2)
hypergraph + weight functions (4) | Gen. Viterbi (5.1) | Knuth/A* (5.2)

Table 1: The structure of this paper: a two-dimensional classification of dynamic programming algorithms, based on search space (rows) and propagation ordering (columns). Corresponding section numbers are in parentheses.

This report surveys a two-dimensional classification of DP algorithms (see Table 1): we first study two types of search spaces (rows): the semiring framework (Mohri, 2002) when the underlying representation is a directed graph as in finite-state machines, and the hypergraph framework (Gallo et al., 1993) when the search space is hierarchically branching as in context-free grammars; then, under each of these frameworks, we study two important types of DP algorithms (columns) with contrasting orders of visiting nodes: the Viterbi-style topological-order algorithms (Viterbi, 1967), and the Dijkstra-Knuth-style best-first algorithms (Dijkstra, 1959; Knuth, 1977). This survey focuses on optimization problems where one aims to find the best solution of a problem (e.g. shortest path or highest-probability derivation), but other problems will also be discussed.
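To ground the semiring view, here is a minimal sketch of a single topological-order "shortest distance" routine that is reused unchanged under two different semirings: the tropical semiring for shortest paths and a max-times semiring for Viterbi-style best derivations. The graph encoding and the tiny Semiring class are illustrative choices, not the survey's formal machinery.

```python
# One topological-order DP recurrence, parameterized by a semiring.
# The graph encoding and Semiring wrapper are illustrative, not from the survey.
from collections import defaultdict

class Semiring:
    def __init__(self, plus, times, zero, one):
        self.plus, self.times, self.zero, self.one = plus, times, zero, one

TROPICAL = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)   # shortest path
VITERBI = Semiring(max, lambda a, b: a * b, 0.0, 1.0)             # best probability

def shortest_distance(edges, topo_order, source, sr):
    """edges: node -> list of (next_node, weight); nodes visited in topological order."""
    d = defaultdict(lambda: sr.zero)
    d[source] = sr.one
    for u in topo_order:
        for v, w in edges.get(u, []):
            d[v] = sr.plus(d[v], sr.times(d[u], w))
    return dict(d)

edges_cost = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.0), ("t", 5.0)], "b": [("t", 1.0)]}
edges_prob = {"s": [("a", 0.6), ("b", 0.4)], "a": [("b", 0.5), ("t", 0.5)], "b": [("t", 1.0)]}
print(shortest_distance(edges_cost, ["s", "a", "b", "t"], "s", TROPICAL)["t"])  # 3.0
print(shortest_distance(edges_prob, ["s", "a", "b", "t"], "s", VITERBI)["t"])   # 0.4
```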
0
Bilingual data (including bilingual sentences and bilingual terms) are critical resources for building many applications, such as machine translation (Brown, 1993) and cross-language information retrieval (Nie et al., 1999). However, most existing bilingual data sets are (i) not adequate for their intended uses, (ii) not up-to-date, and (iii) applicable only to limited domains. Because it's very hard and expensive to create a large-scale bilingual dataset with human effort, recently many researchers have turned to automatically mining them from the Web.

If the content of a web page is written in two languages, we call the page a Bilingual Web Page. Many such pages exist in non-English web sites. Most of them have a primary language (usually a non-English language) and a secondary language (usually English). The content in the secondary language is often the translation of some primary language text in the page.

Since bilingual web pages are very common in non-English web sites, mining bilingual data from them should be an important task. However, as far as we know, there is no publication available on mining bilingual sentences directly from bilingual web pages. Most existing methods for mining bilingual sentences from the Web, such as (Nie et al., 1999; Resnik and Smith, 2003; Shi et al., 2006), try to mine parallel web documents within bilingual web sites first and then extract bilingual sentences from the mined parallel documents using sentence alignment methods.

As to mining term translations from bilingual web pages, Cao et al. (2007) and Lin et al. (2008) proposed two different methods to extract term translations based on the observation that authors of many bilingual web pages, especially those whose primary language is Chinese, Japanese or Korean, sometimes annotate terms with their English translations inside a pair of parentheses, like "c1c2...cn (e1e2...em)" (c1c2...cn is a primary-language term and e1e2...em is its English translation).

Actually, in addition to the parenthesis pattern, there is another interesting phenomenon: in many bilingual web pages, bilingual data appear collectively and follow similar surface patterns. Figure 1 shows an excerpt of a page which introduces different kinds of dogs. The page provides a list of dog names in both English and Chinese. Note that those bilingual names do not follow the parenthesis pattern. However, most of them are identically formatted as: "{Number}。{English name}{Chinese name}{EndOfLine}". One exceptional pair ("1.Alaskan Malamute 啊拉斯加雪橇犬") differs only slightly. Furthermore, there are also many pages containing consistently formatted bilingual sentences (see Figure 2). The page (http://cul.beelink.com/20060205/2021119.shtml) lists the (claimed) 200 most common oral sentences in English and their Chinese translations to facilitate English learning.

Figure 2: Consistently formatted sentence translation pairs

People create such web pages for various reasons. Some online stores list their products in two languages to make them understandable to foreigners. Some pages aim to help readers with foreign language learning. And in some pages where foreign names or technical terms are mentioned, the authors provide the translations for disambiguation. For easy reference, from now on we will call pages which contain many consistently formatted translation pairs Collective Bilingual Pages.

According to our estimation, at least tens of millions of collective bilingual pages exist in Chinese web sites.
Most importantly, each such page usually contains a large amount of bilingual data. This shows the great potential of bilingual data mining. However, the mining task is not straightforward, for the following reasons: 1) The patterns vary in different pages, so it's impossible to mine the translation pairs using predefined templates; 2) Some pages contain consistently formatted texts in two languages but they are not translation pairs; 3) Not all translations in a collective bilingual page necessarily follow an exactly consistent format. As shown in Figure 1, the ten translation pairs are supposed to follow the same pattern; however, due to typos, the pattern of the first pair is slightly different.

Because of these difficulties, simply using a classifier to extract translation pairs from adjacent bilingual texts in a collective bilingual page may not achieve satisfactory results. Therefore, in this paper, we propose a pattern-based approach: learning patterns adaptively from collective bilingual pages instead of using the parenthesis pattern, then using the learned patterns to extract translation pairs from the corresponding web pages. Specifically, our approach contains four steps: 1) Preprocessing: parse the web page into a DOM tree and segment the inner text of each node into snippets; 2) Seed mining: identify potential translation pairs (seeds) using an alignment model which takes both translation and transliteration into consideration; 3) Pattern learning: learn generalized patterns with the identified seeds; 4) Pattern-based mining: extract all bilingual data in the page using the learned patterns.

Let us take mining bilingual data from the text shown in Figure 1 as an example. Our method identifies "Boxer 拳师" and "Eskimo Dog 爱斯基摩犬" as two potential translation pairs based on a dictionary and a transliteration model (Step 2 above). Then we learn a generalized pattern that both pairs follow, "{BulletNumber}{Punctuation}{English term}{Chinese term}{EndOfLine}" (Step 3 above). Finally, we apply it to match against the entire text and get all translation pairs following the pattern (Step 4 above).

The remainder of this paper is organized as follows. In Section 2, we list some related work. The overview of our mining approach is presented in Section 3. In Section 4, we give a detailed introduction to each of the four modules in our mining approach. The experimental results are reported in Section 5, followed by our conclusion and some future work in Section 6.

Please note that in this paper we describe our method using example bilingual web pages in English and Chinese; however, the method can be applied to extract bilingual data from web pages written in any other pair of languages, such as Japanese and English, Korean and English, etc.
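As an illustration of steps 3 and 4, the following sketch abstracts each line of a page into a coarse pattern, keeps the patterns that cover known seed pairs, and then extracts every line matching one of those patterns. The character-class generalization, the regexes and the toy page are illustrative simplifications of the DOM-based snippets and alignment-model seeds used in the actual system.

```python
# Rough sketch of "seed -> generalized pattern -> extraction" on a toy page.
# The abstraction rules and the example lines are illustrative assumptions.
import re

def abstract_line(line):
    """Coarse pattern: Latin chunks -> {English}, digits -> {Number}, CJK -> {Chinese}."""
    pattern = re.sub(r"[A-Za-z][A-Za-z '\-]*", "{English}", line)
    pattern = re.sub(r"\d+", "{Number}", pattern)
    pattern = re.sub(r"[\u4e00-\u9fff]+", "{Chinese}", pattern)
    return pattern.strip()

def learn_and_extract(lines, seed_pairs):
    # Learn the pattern(s) of lines that contain a known seed translation pair.
    seed_patterns = {abstract_line(l) for l in lines
                     if any(en in l and zh in l for en, zh in seed_pairs)}
    extracted = []
    for l in lines:
        if abstract_line(l) in seed_patterns:
            en = " ".join(re.findall(r"[A-Za-z][A-Za-z '\-]*", l)).strip()
            zh = "".join(re.findall(r"[\u4e00-\u9fff]+", l))
            extracted.append((en, zh))
    return extracted

page = ["1。Boxer 拳师", "2。Eskimo Dog 爱斯基摩犬", "3。Golden Retriever 金毛寻回犬"]
print(learn_and_extract(page, seed_pairs=[("Boxer", "拳师")]))
# [('Boxer', '拳师'), ('Eskimo Dog', '爱斯基摩犬'), ('Golden Retriever', '金毛寻回犬')]
```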
0
Vowel distinction exists due to their being different vowel inventories and phonetic features across languages. There are new and similar vowels when comparing two vowel systems of languages. Similar vowels represent the vowels sharing certain phonetic features and phonology status within two vowel systems. While non-native speakers acquire similar vowels, they can get information from their mother tongue. New vowels refer to the ones that do not have counterparts in the mother vowel system, and language learners will develop a new category while perceiving the new vowels (Flege, 1987) . In Speech Learning Model (SLM) Flege (1995) proposed that nonnative speakers are likely to make equivalences between native (L1) and non-native (L2) systems. For instance, if the sounds in L2 have similar counterparts in learners' phonological system, the non-native sounds will be merged to the L1 category. Due to the impact of native pronunciation, language learners may produce L2 sounds with a strong foreign accent. If the L2 sound is totally new to learners, it is predicted to be easily acquired.However, there is no agreement on whether the similarity between language systems can prevent or assist the acquisition of L2 phones. Bohn and Flege (1992) explored the influence of L1 experience upon the production of new and similar English vowels by German speakers. They reported that the similar English vowels produced by the experienced L2 speakers did not have stronger accent than the ones by inexperienced L2 speakers. This observation was in line with the hypothesis of SLM. However, the accent rating result for the new vowel /ae/ did not clearly support the model, and the acoustic results for new vowel demonstrated that the experienced learners could articulate /ae/ in the same way as English natives did, unlike the inexperienced learners.With regard to the non-native sound acquisition, various factors influence the accent degree in the production of non-native speakers. For instance, it has been documented that age of learning (AOL, Piske et al., 2001; Uzal et al., 2015) and proficiency have a significant negative correlation in language acquisition. Moreover, a short length of residence (LOR; Flege and Fletcher, 1992) is also found to have more accent while articulating new sounds. Other factor includes the amount or the use of L1 and L2 (Uzal et al., 2015) .In addition, previous researches have also unveiled the weightings of learning variables on the L2 sound acquisition by L2 learners. Uzal et al. (2015) examined the perceived accents of Turkish children born in Finnish and revealed that AOL interrelated with language use factors. AOL was found to be the major indicator of perceived accent, followed by home use of L1 and L2 use.In terms of L2 acquisition, non-native learners may acquire more than one non-native languages. Generally, languages acquired after the first language (L1) are usually called second languages (L2). However, consensus has not yet been reached on the definition of the term 'the third language (L3)' (De Angelis, 2007) . According to De Angelis (2007), L2 or L3 languages can be defined on the basis of the sequence of time when they are learned by non-native speakers. That means that L2 is learned prior to L3. Thus, the term L3 usually refers to the acquisition of any languages beyond L2. Hammarberg (2001) proposed that L2 refers to the language that has already been acquired, while L3 denotes languages that are currently being learned. 
Empirical researches on the crosslinguistic effects on L3 acquisition mainly focused on the following two dimensions: typology and L2 status (Cenoz, 2001) . Typology refers to crosslanguage distance/similarity, whereas L2 status is relative to the second language that a person knows.In a language environment like Hong Kong where people use "Two literacy and three languages", a multilingual acquisition study has significance and importance. According to the data from the census conducted by Census and Statistics Department of Hong Kong Special Administrative Government in 2011, the number of Indian and Pakistan students younger than 15 years old was 5767 and 7148, respectively. Indian-Pakistani students composed 54% of the Asian student (non-native) population of the same age. A majority of the primary and secondary schools (only the schools funded by the Government, not including international schools) at Hong Kong use Cantonese as the medium of instruction in teaching Chinese to non-native speakers. Although some research has been done on the accent analysis of vowel articulation by non-native speakers for other languages, there is still a lack of studies investigating the accent ratings of vowels in Hong Kong Cantonese by non-Chinese speaking (NCS) students. As Pakistani students make up a large portion of the NCS population at local schools, the present project chooses them as target subjects to reveal how Pakistani students produce new and similar vowels in Hong Kong Cantonese.Syllable is the basic unit of Hong Kong Cantonese. Like Mandarin, Cantonese is a tonal language and each syllable consists of finals, tones and optional initials. In the phonological system of Hong Kong Cantonese, there are seven long vowels /a, i, u, y, ɛ, ɔ, oe/ (Shi et al., 2015) . For Urdu (Oxford Urdu English Dictionary, 2013), there exist eight long vowels (/i, e, ɛ, ae, a, ɔ, o, u/) and 3 short vowels /ɪ, ə, ʊ/, and the universal short vowels mostly appear in checked syllables just like in Cantonese.As introduced by Roach (2004) , modern standard English contains a 12-vowel system, including 5 long vowels /i, ɛ, u, ɔ, ɑ/ and 7 short vowels /ɪ, e, ae, ə, ʌ, ʊ, ɒ/. Urdu and English vowel systems have no vowels /y/ and /oe/ like in Cantonese. The Cantonese long vowel /ɛ/ pertains to mid-high front vowels (Zee, 1999) , whereas Urdu long vowel /ɛ/ belongs to mid-low front vowels (Ohala, 1999) , and in English, /ɛ/ is a midcentral vowel. Likewise, the Cantonese long vowel /ɔ/ is pronounced as mid-high back vowels, while the Urdu long vowel /ɔ/ pertains to mid-low back vowels, and English also has a long vowel /ɔ/, which belongs to mid-back vowel. As there is no /y/ and /oe/ in Urdu or English; for Urdu speakers who were L3 learners of Hong Kong Cantonese, /y/ and /oe/ were still treated as new vowels in the current study.Regarding the development of Hong Kong Cantonese vowels by native children, Cheung, (1990) indicated that native children mastered all vowels and diphthongs by age 3;0 (years;months). The ranked order of vowel accuracy was proposed as: /ɛ/ > /ɔ/ > /a/ = /i/ > /ɐ/> /oe/, /ɵ/ > /u /> /y/ for children aged 24 to 27 months (Stokes and Wong, 2002) . Conducting a population study of 1,726 children ages 2;4 to12;4, To et al. 
(2013) found that /i, a, u, ɔ, ɛ/ were acquired by age 2;6, whereas /y/ and /oe/ were acquired by age 3;0 and age 4;0, respectively.

Based on the comparison of vowels in Hong Kong Cantonese, Urdu and English, and on SLM and L3 acquisition theories, the current project aims to investigate the acquisition of the similar vowels /ɛ, ɔ/ and the new vowels /y, oe/ in Hong Kong Cantonese articulated by Urdu speakers from local secondary schools. This group of subjects was regarded as L3 learners at the time of the experiment, as they had learnt English prior to Cantonese. As an indicator of the accuracy of articulation by non-native speakers, accent rating results can be obtained through the auditory evaluation of native listeners. Moreover, the current study aims to examine how learning factors affect the degree of perceived accent in the vowels produced by non-native learners. Taking SLM and L3 acquisition as its theoretical foundation, the current project adopts accent rating and statistical methods to explore how Pakistani students in secondary school acquire new and similar vowels in Hong Kong Cantonese after a certain amount of Cantonese learning. The hypothesis is that non-native speakers may have different degrees of difficulty pronouncing different vowel types (such as similar and new) due to the effect of the L1 and L2 vowel systems across languages. Moreover, learning variables also affect the accent of new and similar vowels in Hong Kong Cantonese produced by non-native speakers. Thus, this study explores the factors that influence the production of the new and similar vowels by having native speakers listen to the Pakistani children speaking Cantonese and by correlating learning factors with accent ratings. The results of this project test whether SLM can account for the sound acquisition of L3, which extends SLM to the field of L3/multilingual research. Thus, the findings from the study would contribute to L3/multilingual acquisition studies and serve as a reference point for Chinese language teaching targeted at NCS students in Hong Kong.
0
Event trigger extraction, as defined the Automatic Content Extraction multilingual evaluation benchmark (ACE2005) (Walker, 2006) , is a subtask of event extraction which requires systems to detect and label the lexical instantiation of an event, known as a trigger. As an example, in the sentence "John traveled to NYC for a meeting", trav- eled is a trigger of a Movement-Transport event.Trigger detection is typically the first step in extracting the structured information about an event (e.g. the time, place, and participant arguments; distinguishing between past, habitual, and future events). This definition of the task restricts it to events that can be triggered explicitly by actual words and makes it context-vulnerable: the same event might be expressed by different triggers and a specific trigger can represent different event types depending on the context. Event trigger extraction is challenging as it involves understanding the context in order to be able to identify the event that the trigger refers to. Figure 1 shows two examples where context plays a crucial role in disambiguating the word sense of leaving, which is a trigger for a Movement-Transport event in the first sentence and for an End-Position event in the second sentence.Due to the complexity of the task and the difficulty in constructing a standard annotation scheme, there exists limited labeled data, for only a few languages. The earliest work has focused mainly on English, for which there are relatively many annotated sentences, and relies extensively on language-specific linguistic tools to extract the lexical and syntactic features that need to be computed as a pre-requisite for the task (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013) .Simply generating annotated corpora for each language of interest is not only costly and timeconsuming, it is also not necessarily guaranteed to address the great NLP divide, where performance depends on the language, the ability to generate language-specific features, and the quality tools (in this case, syntactic parsers) available for each language. In an attempt to reduce the great NLP divide, we observe a tendency of practitioners drifting away from linguistic features and more towards continuous distributed features that can be obtained without hand-engineering, based simply on publicly available corpora. Recently, approaches have tried to overcome the limitation of traditional lexical features, which can suffer from the data sparsity problem and inability to fully capture the semantics of the words, by making use of sequential modeling methods including variants of Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN), and/or Conditional Random Fields (CRF). (Chen et al., 2015; Nguyen et al., 2016; Sha et al., 2018; Liu et al., 2018b) .Existing approaches which take into consideration the cross-lingual aspect of event trigger extraction tend to either take inspiration from machine translation, distant supervision or multitasking. Machine translation is used by Liu et al. (2018a) to project monolingual text to parallel multilingual texts to model the confidence of clues provided by other languages. 
However, this approach suffers from error propagation of machine translation.Another approach relies on multilingual embeddings, which can be pre-trained beforehand on large monolingual corpora, using no explicit parallel data, and bridging the gap between different languages by learning a way to align them into a shared vector space. The ability of these models to represent a common representation of words across languages makes them attractive to numerous downstream NLP applications. Multilingual Unsupervised and Supervised Embeddings (MUSE) is a framework for training cross-lingual embeddings in an unsupervised manner, which leads to competitive results, even compared to supervised approaches (Conneau et al., 2017) . However, there is no prior work leveraging this kind of representation for cross-lingual event trigger extraction.More recently, BERT, a deep bidirectional representation which jointly conditions on both left and right context (Devlin et al., 2019) , was proposed, which unlike MUSE, provides contextualized word embeddings, and has been shown to achieve state-of-the-art performance on many NLP tasks. In particular, (Yang et al., 2019) propose a method based on BERT for enhancing event trigger and argument extraction by generating more labeled data. However, it has not been applied in the context of cross-lingual transfer learning.In this paper, we investigate the possibility of automatically learning effective features from data while relying on zero language-specific linguistic resources. Moreover, we explore the application of multilingual embeddings to the event trigger extraction task in a direct transfer of annotation scheme where ground truth is only needed for one language and can be used to predict labels in other languages and other boosted and joint multilingual schemes. We perform a systematic comparison between training using monolingual versus multilingual embeddings and the difference in gain on performance with respect to different train/test language pairs. We evaluate our framework using two embedding approaches: typebased unsupervised embeddings (MUSE) and contextualized embeddings (BERT). For the latter, we demonstrate that our proposed model achieves a better (or comparable) performance for all languages compared to some benchmarks for event extraction on the ACE2005 dataset.The main contributions of the paper can be summarized as follows:(1) We apply different state-of-the-art NN architectures for sequence tagging on trigger extraction and compare them to feature-based baselines and multilingual projection based models.(2) We achieve a better performance using contextualized word representation learning in event trigger extraction backed up with both quantitative and qualitative analysis.(3) We evaluate the effectiveness of a multilingual approach using zero-shot transfer learning, targeted cross-lingual and joint multilingual training schemes.(4) We investigate event trigger extraction performance when using Arabic.
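The direct-transfer setting described above can be pictured with a very small sketch: a tagger is trained over frozen, pre-aligned embeddings for one language and then applied unchanged to another language that lives in the same vector space. The random "aligned" vectors, the tiny BiLSTM architecture and the single training example are all illustrative assumptions; they stand in for MUSE/BERT representations and the actual trigger-label inventory.

```python
# Minimal sketch of zero-shot cross-lingual transfer over aligned embeddings.
# Vectors, data and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

EMB_DIM, HIDDEN, N_TAGS = 300, 64, 3          # toy tags: O, B-Movement, B-EndPosition
emb_en = {w: torch.randn(EMB_DIM) for w in ["john", "traveled", "to", "nyc"]}
emb_es = {w: torch.randn(EMB_DIM) for w in ["juan", "viajó", "a", "nyc"]}  # assumed same space

class TriggerTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(EMB_DIM, HIDDEN, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * HIDDEN, N_TAGS)

    def forward(self, vectors):                # vectors: (1, seq_len, EMB_DIM)
        h, _ = self.lstm(vectors)
        return self.out(h)                     # (1, seq_len, N_TAGS)

def encode(sentence, table):
    return torch.stack([table[w] for w in sentence]).unsqueeze(0)

model = TriggerTagger()
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy English training example: "john traveled to nyc" -> O B-Movement O O
x, y = encode(["john", "traveled", "to", "nyc"], emb_en), torch.tensor([[0, 1, 0, 0]])
for _ in range(50):
    optim.zero_grad()
    loss = loss_fn(model(x).view(-1, N_TAGS), y.view(-1))
    loss.backward()
    optim.step()

# Zero-shot prediction on a "Spanish" sentence in the shared embedding space.
print(model(encode(["juan", "viajó", "a", "nyc"], emb_es)).argmax(-1))
```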
0
The task of relation extraction (RE) deals with identifying whether any pre-defined semantic relation holds between a pair of entity mentions in the given sentence. Pure relation extraction techniques (Zhou et al., 2005; Jiang and Zhai, 2007; Bunescu and Mooney, 2005; Qian et al., 2008) assume that, for a sentence, gold-standard entity mentions (i.e. boundaries as well as types) in it are known. In contrast, end-to-end relation extraction deals with plain sentences without assuming any knowledge of entity mentions in them. The task of end-to-end relation extraction consists of three sub-tasks: i) identifying boundaries of entity mentions, ii) identifying entity types of these mentions and iii) identifying the appropriate semantic relation for each pair of mentions. The first two sub-tasks correspond to the Entity Detection and Tracking task defined by the Automatic Content Extraction (ACE) program (Doddington et al., 2004) and the third sub-task corresponds to the Relation Detection and Characterization (RDC) task. The ACE standard defined 7 entity types: PER (person), ORG (organization), LOC (location), GPE (geopolitical entity), FAC (facility), VEH (vehicle) and WEA (weapon). It also defined 7 coarse-level relation types: EMP-ORG (employment), PER-SOC (personal/social), PHYS (physical), GPE-AFF (GPE affiliation), OTHER-AFF (PER/ORG affiliation), ART (agent-artifact) and DISC (discourse).

Traditionally, the three sub-tasks of end-to-end relation extraction are carried out serially in a "pipeline" fashion. In this case, the errors in any sub-task affect subsequent sub-tasks. Another disadvantage of this "pipeline" approach is that it allows only one-way information flow, i.e. the knowledge about entities is used for identifying relations but not vice versa. Hence, to overcome this problem, several approaches (Roth and Yih, 2004; Roth and Yih, 2002; Singh et al., 2013) were proposed which carried out these sub-tasks jointly rather than in a "pipeline" manner.

Mention | Boundaries | Entity Type
His | (0, 0) | PER
sister | (1, 1) | PER
Mary Jones | (2, 3) | PER
United Kingdom | (7, 8) | GPE

Table 1: Expected output of an end-to-end relation extraction system for entity mentions

We propose a new approach which combines the powers of Neural Networks and Markov Logic Networks to jointly address all three sub-tasks of end-to-end relation extraction. We design the "All Word Pairs" neural network model (AWP-NN), which reduces the solution of these three sub-tasks to predicting an appropriate label for each word pair in a given sentence. The end-to-end relation extraction output can then be constructed easily from these labels of word pairs. Moreover, as a separate prediction is made for each word pair, there may be some inconsistencies among the labels. We address this problem by refining the predictions of AWP-NN using inference in Markov Logic Networks, so that some of the inconsistencies in word pair labels can be removed at the sentence level. The specific contributions of this work are: i) modelling the boundary detection problem by introducing a special relation type WEM and ii) a single, joint neural network model for all three sub-tasks of end-to-end relation extraction. The paper is organized as follows. Section 2 provides a detailed problem definition. Section 3 describes our AWP-NN model in detail, followed by Section 4, which describes how the predictions of the AWP-NN model are revised using inference in MLNs. Section 5 provides experimental results and analysis. Finally, we conclude in Section 6 with a short note on future work.
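The reduction to word-pair labels can be illustrated with a few lines of code: given gold mention spans and relations, every word pair receives either the special within-mention label WEM, a relation label inherited from the mentions the two words belong to, or NONE. The exact label inventory, the toy spans (taken loosely from Table 1) and the relation assignments are illustrative assumptions, not the paper's encoding.

```python
# Sketch of deriving "all word pairs" labels from gold mentions and relations.
# Spans loosely follow Table 1; the relation labels chosen here are illustrative.
def word_pair_labels(n_words, mentions, relations):
    """mentions: {mention_id: (start, end, entity_type)} with inclusive word spans;
    relations: {(mention_id, mention_id): relation_type}."""
    word2mention = {}
    for mid, (start, end, _) in mentions.items():
        for i in range(start, end + 1):
            word2mention[i] = mid

    labels = {}
    for i in range(n_words):
        for j in range(i + 1, n_words):
            mi, mj = word2mention.get(i), word2mention.get(j)
            if mi is not None and mi == mj:
                labels[(i, j)] = "WEM"                      # same entity mention
            elif mi is not None and mj is not None:
                rel = relations.get((mi, mj)) or relations.get((mj, mi))
                labels[(i, j)] = rel if rel else "NONE"
            else:
                labels[(i, j)] = "NONE"
    return labels

# Toy 10-word sentence whose mention spans follow Table 1 above.
mentions = {"m1": (0, 0, "PER"), "m2": (1, 1, "PER"),
            "m3": (2, 3, "PER"), "m4": (7, 8, "GPE")}
relations = {("m2", "m1"): "PER-SOC", ("m3", "m4"): "PHYS"}
pairs = word_pair_labels(10, mentions, relations)
print(pairs[(2, 3)], pairs[(0, 1)], pairs[(3, 8)])   # WEM PER-SOC PHYS
```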
0
Many natural language phenomena are inherently not context-free, or call for structural descriptions that cannot be produced by a context-free grammar (Chomsky, 1957; Shieber, 1985; Savitch et al., 1987) . Examples are extraposition, cross-serial dependencies and WH-inversion. However, relaxing the context-freeness assumption comes at the peril of combinatorial explosion.This work aims to transcend two limitations associated with probabilistic context-free grammars. First in the sense of the representations produced by the parser, which allow constituents with gaps in their yields (see figure 1). Building on , we parse with a mildly context-sensitive grammar (LCFRS) that can be read off directly from a treebank annotated with discontinuous constituents.Secondly, the statistical dependencies in our generative model are derived from arbitrarily large frag- ments from the corpus. We employ a Data-Oriented Parsing (DOP) model: a probabilistic tree-substitution grammar that employs probabilities derived from the frequencies of all connected fragments in the treebank (Bod, 1992; Bansal and Klein, 2010; Sangati and Zuidema, 2011) . We generalize the DOP model to support discontinuity. This allows us to model complex constructions such as NP kann man VVINF.
0
Bilingual Word Embeddings are useful for crosslingual tasks such as cross-lingual transfer learning or machine translation. Mapping based BWE approaches rely only on a cheap bilingual signal, in the form of a seed lexicon, and monolingual data to train monolingual word embeddings (MWEs) for each language, which makes them easily applicable in low-resource scenarios (Mikolov et al., 2013b; Xing et al., 2015; Artetxe et al., 2016) . It was shown that BWEs can be built using a small seed lexicon (Artetxe et al., 2017) or without any word pairs (Lample et al., 2018a; Artetxe et al., 2018) relying on the assumption of isomorphic MWE spaces. Recent approaches showed that BWEs can be built without the mapping step. Lample et al. (2018b) built FASTTEXT embeddings (Bojanowski et al., 2017) on the concatenated source and target language corpora exploiting the shared character n-grams in them. Similarly, the shared source and target language subword tokens are used as a cheap cross-lingual signal in Devlin et al. (2019) ; Conneau and Lample (2019) . Furthermore, the advantages of mapping and jointly training the MWEs and BWEs were combined in Wang et al. (2020) for even better BWEs.While these approaches already try to minimize the amount of bilingual signal needed for cross-lingual applications, they still require a larger amount of monolingual data to train semantically rich word embeddings (Adams et al., 2017) . This becomes a problem when one of the two languages does not have sufficient monolingual data available (Artetxe et al., 2020) . In this case, training a good embedding space can be infeasible which means mapping based approaches are not able to build useful BWEs (Michel et al., 2020) .In this paper we introduce a new approach to building BWEs when one of the languages only has limited available monolingual data. Instead of using mapping or joint approaches, this paper takes the middle ground by making use of the MWEs of a resource rich language and training the low resource language embeddings on top of it. For this, a bilingual seed lexicon is used to initialize the representation of target language words by taking the pre-trained vectors of their source pairs prior to target side training, which acts as an informed starting point to shape the vector space during the process. We randomly initialize the representations of all non-lexicon target words and run Continuous Bag-of-Words (CBOW) and skipgram (SG) training procedures to generate target embeddings with both WORD2VEC (Mikolov et al., 2013a) and FASTTEXT (Bojanowski et al., 2017) . Our approach ensures that the source language MWE space is intact, so that the data deficit on the target side does not result in lowered source embedding quality. The improved monolingual word embeddings for the target language outperform embeddings trained solely on monolingual data for semantic tasks such as word-similarity prediction. We study low-resource settings for English-German and English-Hiligaynon, where previous approaches have failed (Michel et al., 2020) , as well as English-Macedonian.
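A bare-bones version of the initialization step reads as follows: before CBOW or skip-gram training on the low-resource corpus, target words that appear in the seed lexicon start from a copy of their source pair's pre-trained vector, while all other target words are initialized randomly. The tiny vocabularies and the uniform initialization range are illustrative; handing the resulting matrix to an actual WORD2VEC or FASTTEXT run is left abstract here.

```python
# Sketch: informed initialization of target-language embeddings from a seed lexicon.
# Vocabularies, vectors and the init range are illustrative assumptions.
import numpy as np

def init_target_embeddings(target_vocab, source_vectors, lexicon, dim, seed=0):
    """lexicon: dict target_word -> source_word; source_vectors: dict word -> np.ndarray."""
    rng = np.random.default_rng(seed)
    target = {}
    for word in target_vocab:
        src = lexicon.get(word)
        if src is not None and src in source_vectors:
            target[word] = source_vectors[src].copy()          # informed starting point
        else:
            target[word] = rng.uniform(-0.5 / dim, 0.5 / dim, size=dim)
    return target

dim = 4
source_vectors = {"house": np.array([0.1, 0.2, 0.3, 0.4]),
                  "dog": np.array([0.5, 0.1, 0.0, 0.2])}
lexicon = {"haus": "house", "hund": "dog"}
init = init_target_embeddings(["haus", "hund", "katze"], source_vectors, lexicon, dim)
print(init["haus"])   # equals the pre-trained "house" vector
# These vectors would then seed a CBOW or skip-gram run on the low-resource corpus,
# while the source-language space itself stays untouched.
```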
0
Intelligent tutoring systems help students improve learning compared to reading textbooks, though not quite as much as human tutors (Anderson et al., 1995) . The specific properties of human-human dialogue that help students learn are still being studied, but the proposed features important for learning include allowing students to explain their actions (Chi et al., 1994) , adapting tutorial feedback to the learner's level, and engagement/affect. Some tutorial dialogue systems use NLP techniques to analyze student responses to "why" questions. (Aleven et al., 2001; Jordan et al., 2006) . However, for remediation they revert to scripted dialogue, relying on shortanswer questions and canned feedback. The resulting dialogue may be redundant in ways detrimental to student understanding (Jordan et al., 2005) and allows for only limited adaptivity (Jordan, 2004) .We demonstrate two tutorial dialogue systems that use techniques from task-oriented dialogue systems to improve the interaction. The systems are built using the Information State Update approach (Larsson and Traum, 2000) for dialogue management and generic components for deep natural language understanding and generation. Tutorial feedback is generated adaptively based on the student model, and the interpretation is used to process explanations and to differentiate between student queries and hedged answers phrased as questions. The systems are intended for testing hypotheses about tutoring. By comparing student learning gains between versions of the same system using different tutoring strategies, as well as between the systems and human tutors, we can test hypotheses about the role of factors such as free natural language input, adaptivity and student affect.
0
A common sequence-labeling task in natural language processing involves assigning a part-ofspeech (POS) tag to each word in the input text. Previous authors have used numerous HMM-based models (Banko and Moore, 2004; Collins, 2002; Lee et al., 2000; Thede and Harper, 1999) and other types of networks including maximum entropy models (Ratnaparkhi, 1996) , conditional Markov models (Klein and Manning, 2002; McCallum et al., 2000) , conditional random fields (CRF) (Lafferty et al., 2001) , and cyclic dependency networks (Toutanova et al., 2003) . All of these models make use of varying amounts of contextual information. In this paper, we present a new model which remains within the well understood framework of Dynamic Bayesian Networks (DBNs), and we show that it produces state-of-the-art results when applied to the POS-tagging task. This new model is conditionally-structured and, through the use of virtual evidence (Pearl, 1988; Bilmes, 2004) , resolves the explaining-away problems (often described as label or observation bias) inherent in the CMM.This paper is organized as follows. In section 2 we discuss the differences between a hidden Markov model (HMM) and the corresponding conditional Markov model (CMM). In section 3 we describe our observed-child model (OCM), introducing the notion of virtual evidence, and providing an information-theoretic foundation for the use of negative training data. In section 4 we discuss our experiments and results, including a comparison of three simple first-order models and state-of-the-art results from our feature-rich second-order OCM.For clarity, the comparisons and derivations in sections 2 and 3 are done for first-order models using a single binary feature. The same ideas are then generalized to a higher order model with more features (including adjacent words).
0
Commercial search engines use query associations in a variety of ways, including the recommendation of related queries in Bing, 'something different' in Google, and 'also try' and related concepts in Yahoo. Mining techniques to extract such query associations generally fall into four categories: (a) clustering queries by their co-clicked url patterns (Wen et al., 2001; Baeza-Yates et al., 2004) ; (b) leveraging co-occurrences of sequential queries in web search query sessions (Zhang and Nasraoui, 2006; Boldi et al., 2009) ; (c) pattern-based extraction over lexicosyntactic structures of individual queries (Paşca and Durme, 2008; Jain and Pantel, 2009) ; and (d) distributional similarity techniques over news or web corpora (Agirre et al., 2009; . These techniques operate at the surface level, associating one surface context (e.g., queries) to another.In this paper, we focus instead on associating surface contexts with entities that refer to a particular entry in a knowledge base such as Freebase, IMDB, Amazon's product catalog, or The Library of Congress. Whereas the former models might associate the string "Ronaldinho" with the strings "AC Milan" or "Lionel Messi", our goal is to associate "Ronaldinho" with, for example, the Wikipedia entity page "wiki/AC Milan" or the Freebase entity "en/lionel mess". Or for the query string "ice fishing", we aim to recommend products in a commercial catalog, such as jigs or lures.The benefits and potential applications are large. By knowing the entity identifiers associated with a query (instead of strings), one can greatly improve both the presentation of search results as well as the click-through experience. For example, consider when the associated entity is a product. Not only can we present the product name to the web user, but we can also display the image, price, and reviews associated with the entity identifier. Once the entity is clicked, instead of issuing a simple web search query, we can now directly show a product page for the exact product; or we can even perform actions directly on the entity, such as buying the entity on Amazon.com, retrieving the product's oper-ating manual, or even polling your social network for friends that own the product. This is a big step towards a richer semantic search experience.In this paper, we define the association between a query string q and an entity id e as the probability that e is relevant given the query q, P (e|q). Following Baeza-Yates et al. 2004, we model relevance as the likelihood that a user would click on e given q, events which can be observed in large query-click graphs. Due to the extreme sparsity of query click graphs (Baeza-Yates, 2004), we propose several smoothing models that extend the click graph with query synonyms and then use the synonym click probabilities as a background model. We demonstrate the effectiveness of our smoothing models, via a large-scale empirical study over realworld data, which significantly reduce model errors. We further apply our models to the task of queryproduct recommendation. Queries in session logs are annotated using our association probabilities and recommendations are obtained by modeling sessionlevel query-product co-occurrences in the annotated sessions. Finally, we demonstrate that our models affect 9% of general web queries with 94% recommendation precision.
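The flavor of the smoothing models can be conveyed with a toy estimator: the maximum-likelihood click probability for a query is interpolated with a background distribution pooled over the query's synonyms. The click counts, synonym sets and the single interpolation weight below are illustrative placeholders rather than the models evaluated in the paper.

```python
# Sketch: P(e|q) from clicks, smoothed with a synonym-pooled background model.
# Counts, synonyms and the interpolation weight are illustrative assumptions.
from collections import Counter

clicks = {                       # query -> Counter of clicked entity ids
    "ronaldinho": Counter({"wiki/AC_Milan": 8, "en/lionel_messi": 2}),
    "ronaldinho gaucho": Counter({"wiki/AC_Milan": 1}),
}
synonyms = {"ronaldinho": ["ronaldinho gaucho"],
            "ronaldinho gaucho": ["ronaldinho"]}

def p_entity_given_query(entity, query, lam=0.7):
    fg = clicks.get(query, Counter())
    fg_total = sum(fg.values())
    p_fg = fg[entity] / fg_total if fg_total else 0.0

    bg = Counter(fg)                                   # pool clicks over synonyms
    for syn in synonyms.get(query, []):
        bg.update(clicks.get(syn, Counter()))
    bg_total = sum(bg.values())
    p_bg = bg[entity] / bg_total if bg_total else 0.0
    return lam * p_fg + (1 - lam) * p_bg

print(p_entity_given_query("wiki/AC_Milan", "ronaldinho gaucho"))
```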
0
The main contribution of this paper is the presentation of a conceptualized and implemented workflow for the study of relations between entities mentioned in text. The workflow has been realized for multiple, diverse but structurally similar research questions from Humanities and Social Sciences, although this paper focuses on one from literary studies in particular. We see this workflow as exemplary for research involving Natural Language Processing (NLP) and Digital Humanities (DH), in which operationalization and modularization of complex research questions often has to be a first step. It is important to realize that this modularization can not be guided by NLP standards alone -the interests of the respective humanities discipline need to be considered, and practical considerations regarding timely availability of analyses as well: If a large portion of the funding period is spent with developing, adapting and fine-tuning NLP tools, the analysis of the results (with often leads to new adaptation requests) risks being missed out.Our workflow combines clearly defined tasks for which we follow the relatively strict NLP paradigm (annotation guidelines, gold standard, evaluation) with elements that are more directly related to specific Humanities research questions (that often are not defined as strictly). The final module of this workflow consists in the manual exploration and assessment of the resulting social networks by literary scholars with respect to their research questions and areas. In order to enable scholars to explore the resulting relations, we make use of interactive visualization, which can also show developments and changes over time.More generally, this workflow is the result of ongoing work on the modularization and standardization of Humanities research questions. The need for modularization is obvious for computer scientists (and computational linguists), as they are often consciously restricting their tasks to clearly defined problems (e.g., dependency parsing). However, this opposes typical Humanities research style, which involves the consideration of different perspectives, contexts and information sources -ignoring the big picture would be a nogo in literary studies. This makes research questions seemingly unique and incomparable to others, which in turn leaves little room for standards applied across research questions.Our ultimate goal is to develop methodology that supports the work of humanities scholars on their research questions. This in turn makes interpretability of the results of NLP-tools an important constraint, which sometimes goes against the tendency of NLP research to produce methods that are solely judged on their prediction performance. However, we intentionally do not focus on tool development: The appropriate use of tools and adequate interpretation of their results is of utmost importance if these form the basis of hermeneutical interpretations. To that end, scholars need to understand fundamental concepts of quantitative analysis and/or machine learning.The trade-off between interpretability and pre-diction performance has also been discussed in other projects, e.g. in Bögel et al. (2015) . 
In our project we follow two strategies: (i) Offering visualization and inspection tools as well as a close feedback loop and (ii) integrating humanities scholars early into the development cycle, such that they are involved in the relevant decisions.Parzival We will use Parzival as an example in this paper, because it involves a number of DHrelated challenges. The text is an Arthurian grail novel and has been written between 1200 and 1210 CE by Wolfram von Eschenbach in Middle High German. The text comprises of 25k lines and is divided into 16 books. The story of the books mainly follows the two knights Parzivâl and Gâwân and their interaction with other characters. One of the key characteristics of Parzival is a large inventory of characters that have complex genealogical patterns and familial relations. This led to an ongoing discussion about the social relations in Parzival (Bertau, 1983; Delabar, 1990; Schmidt, 1986; Sutter, 2003) , which are much more complex than in other Arthurian romances (Erec, Iwein). The systematic comparison of the social/spatial relations in different narrations of a similar story is one of our goals. With that in mind, we investigate various operationalization options for these networks.
0
For a long time, speech has been the only modality for input and output in telephone-based information systems. Speech is often considered to be the most natural form of input for such systems, since people have always used speech as the primary means of communication. Moreover, to use a speech-only system a simple telephone suffices and no additional devices are required. Obviously, in situations where both hands and eyes are busy, speech is definitely preferable over other modalities like pen/mouse. However, speech-only interfaces have also shown a number of shortcomings that result in less effective and less efficient dialogues. The aim of the research described in this paper is to assess the extent to which multimodal input/output can help to improve effectiveness, efficiency and user satisfaction of information systems in comparison with unimodal systems. This paper describes how, within the framework of the MATIS 1 (Multimodal Access to Transaction and Information Services) project we developed a prototype of a multimodal railway information system by extending a speech-only version in such a way that it supports screen output and point-and-click actions of the user as input. This system is a typical example of a simple application that can be implemented using a slotfilling paradigm and may stand model for various other form filling applications. First, a number of problems are described that arise in speech-only interfaces. Then we briefly describe the architecture of the speech-only railway information system. Next, we describe in more detail how we added multimodality to this version of the system and explain why we think this may help to solve the shortcomings of speech-only systems. We conclude this paper by discussing several open issues that we intend to solve by means of user tests with the multimodal system.
0
The goal of the Recognizing Textual Entailment (RTE) task is, given a pair of sentences, to determine whether a Hypothesis sentence can be inferred from a Text sentence. The majority of work in RTE is focused on finding a generic solution to the task. That is, creating a system that uses the same algorithm to return a yes or no answer for all textual entailment pairs. A generic approach never works well for every single entailment pair: there are entailment pairs that are recognized poorly by all the generic systems.Some approaches consequently propose a component-based model. In this framework, a generic system would have additional special components that take care of special subclasses of entailment pairs. Such a component is involved when a pair of its subclass is recognized. Vanderwende and Dolan (2005) , and subsequently Vanderwende et al. (2006) , divide all the entailment pairs according to whether categorization could be accurately predicted based solely on syntactic cues. Related to this, Akhmatova and Dras (2007) present an entailment type where the relationship expressed in the Hypothesis is encoded in a syntactic construction in the Text. Vanderwende et al. (2006) note that what they term is-a relationships are a particular problem in their approach. Observing that this encompasses hypernymy relations, and that there has been a fair amount of recent work on hypernymy acquisition, where ontologies containing hypernymy relations are extended with corpus-derived additions, we propose a HYPERNYMY ENTAILMENT TYPE to look at in this paper. In this type, the Hypothesis states a hypernymy relationship between elements of the Text: for example, This was seen as a betrayal by the EZLN and other political groups implies that EZLN is a political group. This subtype is of particular relevance to Question Answering (QA): in the RTE-2 dataset, 1 for example, all is-a Hypotheses were drawn from QA data.In this paper we take the hypernymy acquisition work of Snow et al. (2005) as a starting point, and then investigate how to adapt it to an entailment context. We see this as an investigation of a more general approach, where work in a separate area of NLP can be adapted to define a related entailment subclass.Section 2 of the paper discusses the relevant work from the areas of component-based RTE and hypernymy extraction. Section 3 defines the hypernymy entailment type and expands on the main idea of the paper. Section 4 describes the experimental set-up and the results; and Section 5 concludes the work.2 Related Work 2.1 Component-based RTE Vanderwende et al. (2006) use an approach based on logical forms, which they generate by the NLPwin parser. Nodes in the resulting syntactic dependency graphs for Text and Hypothesis are then heuristically aligned; then syntax-based heuristics are applied to detect false entailments. As noted above, is-a relations fared particularly badly. In our approach, we do not use such a heavy duty representation for the task, using instead the techniques of hypernym acquisition described in Section 2.2. Cabrio et al. (2008) proposed what they call a combined specialized entailment engine. They have created a general framework, based on distance between T and H (they measure the cost of the editing operations such as insertion, deletion and substitution, which are required to transform the text T into the hypothesis H) and several modular entailment engines, each of which is able to deal with an aspect of language variability such as negation or modal verbs. 
Akhmatova and Dras (2007) built a specific component from a subset of entailment pairs that are poorly recognized by generic systems participating in an RTE Challenge. These are the entailment pairs where a specific syntactic construction in the Text encodes a semantic relationship between its elements that is explicitly shown in the Hypothesis, as in example (1):(1) Text: Japan's Kyodo news agency said the US could be ready to set up a liaison office-the lowest level of diplomatic representation-in Pyongyang if it abandons its nuclear program. Hypothesis: Kyodo news agency is based in Japan.The entailment pairs share a set of similar features: they have a very high word overlap regardless of being a true or false entailments, for example. High word overlap is one of the features for an RTE system for the majority of the entailment pair types, which presumably hints at true, but this is not useful in our case. Akhmatova and Dras (2007) described a two-fold probabilistic approach to recognizing entailment, that in its turn was based on the well-known noisy channel model from Statistical Machine Translation (Brown et al., 1990) . In the work of this paper, by contrast, we look at only identifying a hypernymy-related Text, so the problem reduces to one of classification over the Text.
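As a concrete, if crude, picture of this entailment subtype, the sketch below harvests candidate (hyponym, hypernym) pairs from the Text with two Hearst-style surface patterns and checks whether the Hypothesis asserts an "X is a Y" claim covered by them. The regexes and the loose string matching are illustrative stand-ins for the dependency-path features and classifiers of the hypernymy-acquisition work adapted in this paper.

```python
# Sketch: pattern-based hypernymy check for the hypernymy entailment type.
# The surface patterns and matching heuristics are illustrative assumptions.
import re

TEXT_PATTERNS = [
    r"(?P<hyper>[\w ]+?) such as (?P<hypo>[\w ]+)",
    r"(?P<hypo>[\w ]+?) and other (?P<hyper>[\w ]+)",
]
HYPOTHESIS_PATTERN = r"^(?P<hypo>[\w ]+?) (?:is|are) (?:a |an )?(?P<hyper>[\w ]+?)\.?$"

def candidate_pairs(text):
    pairs = set()
    for pat in TEXT_PATTERNS:
        for m in re.finditer(pat, text, flags=re.IGNORECASE):
            pairs.add((m.group("hypo").strip().lower(), m.group("hyper").strip().lower()))
    return pairs

def entails_hypernymy(text, hypothesis):
    m = re.match(HYPOTHESIS_PATTERN, hypothesis.strip(), flags=re.IGNORECASE)
    if not m:
        return False                              # not a hypernymy-type Hypothesis
    hypo = m.group("hypo").strip().lower()
    hyper = m.group("hyper").strip().lower().rstrip("s")
    for cand_hypo, cand_hyper in candidate_pairs(text):
        # Crude matching: allow extra left context on the hyponym and plural hypernyms.
        if cand_hypo.endswith(hypo) and cand_hyper.rstrip("s") == hyper:
            return True
    return False

text = "This was seen as a betrayal by the EZLN and other political groups."
print(entails_hypernymy(text, "EZLN is a political group"))   # True
```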
0
Polish, one of the West-Slavic languages [1], due to its complex inflection and free word order, poses a challenge for statistical machine translation (SMT). Polish grammar is quite complex: seven cases, three genders, animate and inanimate nouns, adjectives that agree with nouns in gender, case and number, and a lot of words borrowed from other languages which are often inflected similarly to those of Polish origin. These properties cause problems in establishing vocabularies of manageable sizes for translation to/from other languages, and lead to sparseness of data for statistical model training. Despite there being ca. 60 million Polish speakers worldwide, the number of publicly available resources for the preparation of SMT systems is rather limited, thus progress in that domain is slower than for other languages. In this paper, we describe our efforts in preparing a Polish-to-English SMT system for the TED task, part of the MT optional track of the IWSLT 2012 evaluation campaign. The remainder of the paper is structured as follows. Section 2 describes the Polish data preparation, Section 3 deals with English, Section 4 with the training of the translation and language models, and Section 5 presents our results. Finally, the paper concludes with a discussion of encountered issues and future perspectives in Sections 6 and 7.
0
New Dimensions in Testimonies (NDT) is a dialogue system that allows for two-way communication with a person who is not available for conversation in real time: a large set of statements is prepared in advance, and users access these statements through natural conversation that mimics face-to-face interaction (Artstein et al., 2014). Users interact with a recording of Holocaust survivor Pinchas Gutter. The system listens to their questions, then selects and plays back Mr. Gutter's responses from a collection of video clips. The system has been deployed at the Illinois Holocaust Museum and Education Center in Skokie since March 2015 (Traum et al., 2015a), where a museum docent relays questions from a large group audience to the system. The system was also installed for a few months at the U.S. Holocaust Memorial Museum in Washington, DC, and was demonstrated at other locations by our collaborators from the USC Shoah Foundation (SFI). The installation proved to be a successful teaching aid: student gains were reported in interest in historical topics, critical thinking, and knowledge of issues going on in the world. The NDT system provided an engaging and emotional experience (Traum et al., 2015b). However, we discovered a number of issues with system maintenance and support: system installation was a delicate process, there were issues maintaining the third-party system dependencies, the system user interface tended to overwhelm and confuse the operators, and reliable data collection proved to be a challenge. The lessons we learned from the deployment of the NDT system led us back to the drawing board. We created a new version of the NDT system that we call Alfred. Our goal was to make the system easier to install and maintain. We looked to simplify and streamline the user interface, create a better data collection and archiving mechanism, develop support for multiple survivor databases, and optimize the system for better performance. This paper describes the initial NDT system architecture and compares Alfred's design to it. We enumerate the challenges encountered in the initial deployment and discuss how we addressed each challenge in Alfred.
0
Evaluating text difficulty, or text readability, is an important topic in natural language processing and applied linguistics (Zamanian and Heydari, 2012; Pitler and Nenkova, 2008; Fulcher, 1997) . A key challenge of text difficulty evaluation is that linguistic difficulty arises from both vocabulary and grammar (Richards and Schmidt, 2013) . However, most existing tools either do not sufficiently take the impact of grammatical difficulty into account (Smith III et al., 2014; Sheehan et al., 2014) , or use traditional syntactic features, which differ from what language students actually learn, to estimate grammatical complexity (Schwarm and Ostendorf, 2005; Heilman et al., 2008; François and Fairon, 2012) . In fact, language courses introduce grammar constructs together with vocabulary, and grammar constructs vary in frequency and difficulty just like vocabulary (Blyth, 1997; Manzanares and López, 2008; Waara, 2004) . Ideally, we would like to have better ways of estimating the grammatical complexity of a sentence.To make progress in this direction, we introduce grammatical templates as an important feature in text difficulty evaluation. These templates are what language teachers and linguists have identified as the most important units of grammatical understanding at different levels, and what students actually learn in language lessons. We also demonstrate that grammatical templates can be automatically extracted from the dependency-based parse tree of a sentence.To evaluate, we compare the difficulty prediction accuracy of grammatical templates with existing readability features in Japanese language placement tests and textbooks. Our results show that grammatical template features slightly outperform existing readability features. Moreover, adding grammatical template features into existing readability features significantly improves the accuracy by 7.4%. We also propose a multilevel linear classification algorithm using only 5 grammatical features. We demonstrate that this simple and human-understandable algorithm effectively predicts the difficulty level of Japanese texts with 87.7% accuracy.
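To give a feel for how grammatical templates can be matched against a dependency parse, the toy sketch below represents tokens as plain dicts and a template as a set of attribute constraints. The two templates, their difficulty labels and the hand-written parse are invented placeholders; a real system would consume the output of a dependency parser and a curated template inventory drawn from language lessons.

```python
# Toy sketch: match grammatical templates against a dependency-parsed sentence.
# Templates, levels and the hand-written parse are illustrative assumptions.
def matches(template, tokens):
    """Every (attribute, value) constraint must be satisfied by at least one token."""
    return all(any(tok.get(attr) == val for tok in tokens)
               for attr, val in template["constraints"])

TEMPLATES = [
    {"name": "passive", "level": "intermediate",
     "constraints": [("deprel", "aux"), ("form", "れる")]},
    {"name": "te-form request", "level": "beginner",
     "constraints": [("form", "て"), ("form", "ください")]},
]

def extract_templates(tokens):
    return [t["name"] for t in TEMPLATES if matches(t, tokens)]

# Toy parse of 窓を開けてください ("please open the window").
sentence = [
    {"form": "窓", "pos": "NOUN", "deprel": "obj"},
    {"form": "を", "pos": "ADP", "deprel": "case"},
    {"form": "開け", "pos": "VERB", "deprel": "root"},
    {"form": "て", "pos": "SCONJ", "deprel": "mark"},
    {"form": "ください", "pos": "AUX", "deprel": "aux"},
]
print(extract_templates(sentence))   # ['te-form request']
```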
0
Text summarization is a major NLP task, whose aim is "to present the main ideas in a document in less space" (Radev et al., 2002) . A specific type of summarization is the extractive summarization, which consists of selecting a subset of the original sentences of a document for verbatim inclusion in the summary; in contrast, the abstractive summarization consists of creating an abstract from scratch, by detecting the most important information in the document, appropriately encoding it, and, finally, rendering it using natural language generation techniques. Both extractive and abstractive approaches have been attempted over the past five decades since research on summarization started, with extractive approaches emerging as particularly suitable for large-scale applications. A survey on summarization methods can be found, for instance, in Das & Martins (2007) .The recent years have seen a growing interest in applying summarization to online content, and particularly to user-generated content, such as electronic mail messages, advertisements, and blogs, as well as to news articles. In addition, Wikipedia, the user-generated Internet encyclopedia, 1 is another online resource that is arguably very appealing from a summarization point of view. It currently contains over 3.6 million articles in English, and 18 million overall. Its comprehensive coverage makes it an important source of information used every day by a very broad audience. Since its content is continuously enriched, each article becomes more and more complex. Having an automatic means to produce article abstracts is therefore highly desirable for many readers.The Simple English Wikipedia initiative 2 has recently been launched as an effort to improve the readability of Wikipedia articles and to provide shorter versions "presenting only the basic information". It resulted into the manual creation of shorter versions for almost 70'000 articles from the Ordinary English Wikipedia. Through the use of simpler words and simpler syntactic structures, these versions become accessible to a much broader audience, e.g., non-native speakers, children, and poor-literacy readers. The ultimate goal of our work is to create a text simplification system which can be used, in particular, to automatically generate simpler Wikipedia articles.In this paper, we present the first steps we have taken in this direction, by implementing a domain-specific extractive summarization system. Given a collection of manually created summaries in a given domain -in our specific scenario, these are Simple English Wikipedia articles from a given Wikipedia category -in the development stage we infer a special kind of "template" representing the most important information that should be included in the summary. Then, given a new document -in our case, an Ordinary English Wikipedia article from the same category -we match the "template" against it to detect the relevant sentences to select for the output. The summaries created in this way will later be fed into the simplification system proper, which will transform them so that they obey predefined constraints on length, lexical choice, and syntactic structure. In what follows, we motivate our approach ( §2), describe the system ( §3), present experimental results ( §4), and provide concluding remarks ( §5).
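A minimal rendering of the extractive step might look as follows: term weights are derived from existing Simple English Wikipedia summaries of a category (the "template"), and the sentences of a new Ordinary English Wikipedia article are scored and selected against them. The tokenizer, stop-word list and frequency-based scoring are deliberately simplistic illustrative choices, not the system's actual template representation.

```python
# Sketch: score article sentences against a term "template" learned from
# in-category Simple Wikipedia summaries. All components are toy placeholders.
import re
from collections import Counter

STOP = {"the", "a", "an", "is", "was", "of", "in", "and", "it", "on", "to"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]

def build_template(simple_summaries):
    """Term weights = document frequency across the category's simple summaries."""
    df = Counter()
    for summary in simple_summaries:
        df.update(set(tokenize(summary)))
    return df

def summarize(article_sentences, template, k=2):
    scored = [(sum(template[w] for w in set(tokenize(s))), i, s)
              for i, s in enumerate(article_sentences)]
    top = sorted(scored, reverse=True)[:k]
    return [s for _, i, s in sorted(top, key=lambda x: x[1])]   # keep original order

simple = ["Paris is the capital city of France.",
          "Berlin is the capital city of Germany."]
article = ["Paris is the capital and largest city of France.",
           "Its metro system opened in 1900.",
           "The city hosted the Olympic Games twice."]
print(summarize(article, build_template(simple), k=1))
```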
0
The Arabic language is characterized by diglossia (Ferguson, 1959) : two linguistic variants live side by side: a standard written form and a large variety of spoken dialects. While dialects differ from one region to another, the written variety, called Modern Standard Arabic (MSA), is generally the same. MSA, the official language for Arabic countries, is used for written communication as well as in formal spoken communications. Spoken varieties, generally used in informal daily discussions, are increasingly being used for informal written communication on the web. Such unstandardized varieties differ from MSA with respect to phonology, morphology, syntax and the lexicon. Unlike MSA which has an important number of NLP resources and tools, Arabic dialects are less-resourced. In this paper, we focus on the Tunisian Arabic dialect (TUN) . It is the spoken language of twelve million speakers living mainly in Tunisia. TUN is the result of interactions and influences of a number of languages including Arabic, Berber and French (Mejri et al., 2009) .In this paper, we focus on the development of a part-of-speech (POS) tagger for TUN. There are two main options when developing such a tool for TUN. The first one is to build a corpus of TUN, which involves recording, transcribing and manually POS tagging. In order to have a state-of-theart POS tagger one also needs to develop a lexicon. The second option is to convert TUN into an approximate form of MSA, that we will call pseudo MSA, and use an existing MSA POS tagger. We intentionally do not use the verb translate to describe the process of transforming a TUN text into a pseudo MSA text. The reason being that we are not translating between two natural languages: pseudo MSA is not meant to be read by humans. Its only purpose is to be close enough to MSA so that running it through NLP tools would give good results. The annotation produced is then projected back on the TUN text. More technically, the conversion process focuses on morphological and lexical aspects; it is based on morphological analyzers and generators for TUN and MSA as well as a TUN-MSA dictionaries which are themselves partly automatically produced using the morphological analyzers and generators. Besides producing a POS tagger for TUN, we aim at proposing a general methodology for developing NLP tools for dialects of Arabic.The rest of the paper is organized as follows: we present, in section 2, phonological, lexical and morphosyntactic variations between TUN and MSA. We then discuss related works and existing POS taggers of Arabic dialects in section 3. Section 4 reviews the tools and resources used in this work. In section 5, we describe in detail our approach to tag TUN texts. Finally, Section 6 presents results evaluating our approach under several conditions.
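As a rough illustration of the conversion-based pipeline described above, the sketch below maps TUN tokens to pseudo MSA with a toy lexicon, tags the pseudo MSA with a stub tagger, and projects the tags back. The lexicon entries and the stub tagger are invented placeholders; the real system relies on morphological analyzers and generators and on TUN-MSA dictionaries.

```python
# Illustrative sketch of the pipeline described above: convert Tunisian Arabic
# tokens to pseudo MSA via a (toy) translation lexicon, tag the pseudo MSA
# with an MSA POS tagger, and project the tags back onto the TUN tokens.

TUN_TO_MSA = {"باش": "سوف", "نمشي": "أذهب"}   # toy TUN -> MSA lexicon entries (illustrative)

def stub_msa_tagger(tokens):
    """Stand-in for an off-the-shelf MSA POS tagger."""
    toy_tags = {"سوف": "PART_FUT", "أذهب": "VERB"}
    return [toy_tags.get(t, "NOUN") for t in tokens]

def tag_tun(tun_tokens):
    # 1) word-by-word conversion to pseudo MSA (unknown words kept as-is)
    pseudo_msa = [TUN_TO_MSA.get(t, t) for t in tun_tokens]
    # 2) run the MSA tagger on the pseudo MSA sentence
    msa_tags = stub_msa_tagger(pseudo_msa)
    # 3) project the tags back; the toy mapping here is 1-to-1, so projection is positional
    return list(zip(tun_tokens, msa_tags))

print(tag_tun(["باش", "نمشي"]))
```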
0
Recently, the capability of large-scale pre-trained models has been verified in open-domain dialogue generation, including Meena (Adiwardana et al., 2020), Blender (Roller et al., 2021), and PLATO-2 (Bao et al., 2020). Without introducing explicit knowledge in the learning process, substantive knowledge is implicitly embedded into parameters from the training corpus. However, these models are found to suffer from knowledge hallucinations (Roller et al., 2021; Marcus, 2020), producing plausible statements with factual errors. To boost generation accuracy, there is a trend to leverage external knowledge in addition to the parameters of large-scale pre-trained models (Guu et al., 2020). In knowledge-grounded conversation, several datasets have been collected through crowdsourcing (Dinan et al., 2019; Gopalakrishnan et al., 2019; Komeili et al., 2021). Given that manual annotation is expensive and time-consuming, it is not feasible to annotate the corresponding knowledge for each response on a large scale. Therefore, it is desirable to develop knowledge-grounded dialogue generation models without reliance on explicit knowledge labels. Some attempts have been made to learn the unsupervised retrieval of external knowledge based on semantic similarity (Ghazvininejad et al., 2018; Dinan et al., 2019). However, there exists a one-to-many phenomenon in knowledge-grounded conversation (Kim et al., 2019), where multiple knowledge elements can be appropriate for replying to a given context. The prior top-1 knowledge selection employed by these approaches (Ghazvininejad et al., 2018; Dinan et al., 2019) has difficulty hitting the knowledge contained in the target response, deteriorating the learning of knowledge utilization. As an improvement, PostKS (Lian et al., 2019) and KnowledGPT (Zhao et al., 2020) rely on the target response to identify the grounded knowledge. However, involving posterior knowledge selection inevitably causes a discrepancy between the training and inference stages (Zhao et al., 2019). In this paper, we propose an unsupervised approach for end-to-end knowledge-grounded conversation modeling, namely PLATO-KAG (Knowledge-Augmented Generation). As shown in Figure 1, given each dialogue context, the top-k relevant knowledge elements are selected for the subsequent response generation. Then, the model learns to generate the target response grounded on each of the selected knowledge elements. The generation probability can in turn provide a backpropagating signal for the preceding knowledge selection. These two components of knowledge selection and response generation are optimized jointly.
[Figure 1: An overview of joint training in PLATO-KAG. For each dialogue context, the top-k relevant knowledge elements are selected and employed in response generation. The generation probability can reflect the quality of the preceding knowledge selection. These two components of knowledge selection θ and response generation φ are optimized jointly in an unsupervised manner.]
Two essential ingredients contribute to the performance of PLATO-KAG: top-k knowledge selection and balanced joint training. Firstly, in comparison to the conventional top-1 selection, top-k selection remarkably increases the chance to hit the grounded knowledge and improves the effectiveness of prior knowledge selection. Without the interlude of posterior knowledge selection, we manage to avoid the discrepancy between the training and inference stages. Secondly, considering the differences between knowledge selection and response generation, balanced training is further designed for their effective joint optimization. To evaluate the performance of the proposed method, comprehensive experiments have been carried out on two publicly available datasets. Experimental results demonstrate that our method achieves better performance as compared with other state-of-the-art unsupervised approaches.
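A schematic numpy sketch of the joint objective described above: dot-product selection scores define a distribution over knowledge candidates, the top-k are kept and renormalized, and the grounded-generation log-likelihoods are marginalized under those selection probabilities, so generation quality can backpropagate into selection. The embeddings and the generation scorer are random stand-ins, not the actual transformer encoders or decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_knowledge, k = 16, 8, 3

context_emb = rng.normal(size=dim)                    # stand-in for the context encoder output
knowledge_embs = rng.normal(size=(n_knowledge, dim))  # stand-ins for knowledge encoder outputs

def grounded_loglik(knowledge_id):
    """Stub for log p_phi(response | context, knowledge); a real system would
    score the target response with the knowledge-grounded generator."""
    return float(rng.normal(loc=-20.0, scale=2.0))

# Knowledge selection p_theta(z | context) via dot-product scores.
scores = knowledge_embs @ context_emb
probs = np.exp(scores - scores.max())
probs /= probs.sum()

# Keep the top-k candidates and renormalize their selection probabilities.
topk = np.argsort(-probs)[:k]
sel = probs[topk] / probs[topk].sum()

# Marginal log-likelihood over the selected knowledge: log sum_z p(z|c) p(r|c,z).
gen_logliks = np.array([grounded_loglik(int(z)) for z in topk])
marginal_loglik = np.log(np.sum(sel * np.exp(gen_logliks)))
print(topk, marginal_loglik)   # training would maximize this jointly for theta and phi
```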
0
Dialogue Acts (DAs) are the functions of utterances in dialogue-based interaction (Austin, 1975). A DA represents the meaning of an utterance at the level of illocutionary force, and hence, constitutes the basic unit of linguistic communication (Searle, 1969). DA classification is an important task in Natural Language Understanding, with applications in question answering, conversational agents, speech recognition, etc. Examples of DAs can be found in Table 1. [Table 1: A snippet of a conversation sample from the SwDA Corpus. Each utterance has a corresponding dialogue act label.] Here we have a conversation of 7 utterances between two speakers. Each utterance has a corresponding label such as Question or Backchannel. Early work in this field made use of statistical machine learning methods and approached the task as either a structured prediction or text classification problem (Stolcke et al., 2000; Ang et al., 2005; Zimmermann, 2009; Surendran and Levow, 2006). Many recent studies have proposed deep learning models for the DA classification task with promising results (Lee and Dernoncourt, 2016; Khanpour et al., 2016; Vu, 2017). However, most of these approaches treat the task as a text classification problem, treating each utterance in isolation, rendering them unable to leverage the conversation-level contextual dependence among utterances. Knowing the text and/or the DA labels of the previous utterances can assist in predicting the current DA state. For instance, in Table 1, the Answer or Statement dialog acts often follow Question type utterances. This work draws from recent advances in NLP such as self-attention, hierarchical deep learning models, and contextual dependencies to produce a dialogue act classification model that is effective across multiple domains. Specifically, we propose a hierarchical deep neural network to model different levels of utterance and dialogue act semantics, achieving state-of-the-art performance on the Switchboard Dialogue Act Corpus. We demonstrate how performance can improve by leveraging context at different levels of the model: previous labels for sequence prediction (using a CRF), conversation-level context with self-attention for utterance representation learning, and character embeddings at the word-level. Finally, we explore different ways to learn effective utterance representations, which serve as the building blocks of our hierarchical architecture for DA classification.
0
Computational models of language understanding must recognize narrative structure because many types of natural language texts are narratively structured, e.g. news, reviews, film scripts, conversations, and personal blogs (Polanyi, 1989; Jurafsky et al., 2014; Bell, 2005; Gordon et al., 2011a). Human understanding of narrative is driven by reasoning about causal relations between the events and states in the story (Ger- 1993; Graesser et al., 1994; Lehnert, 1981; Goyal et al., 2010). Thus previous work has aimed to learn a knowledge base of semantic relations between events from text (Chklovski and Pantel, 2004; Gordon et al., 2011a; Chambers and Jurafsky, 2008; Balasubramanian et al., 2013; Pichotta and Mooney, 2014; Do et al., 2011), with the long-term aim of using this knowledge for understanding. Some of this work explicitly models causality; other work characterizes the semantic relations more loosely as "events that tend to co-occur". Related work points out that causality is granular in nature, and that humans flexibly move back and forth between different levels of granularity of causal knowledge (Hobbs, 1985). Thus methods are needed to learn causal relations and reason about them at different levels of granularity (Mulkar-Mehta et al., 2011). One limitation of prior work is that it has primarily focused on newswire, and thus has only learned relations about newsworthy topics, and likely the most frequent, highly common (coarse-grained) news events. But news articles are not the only resource for learning about relations between events. Much of the content on social media in personal blogs is written by ordinary people about their daily lives (Burton et al., 2009), and these blogs contain a large variety of everyday events (Gordon et al., 2012). Film scene descriptions are also action-rich and told in fine-grained detail (Beamer and Girju, 2009; Hu et al., 2013). Moreover, both of these genres typically report events in temporal order, which is a primary cue to causality. In this position paper, we claim that knowledge about fine-grained causal relations between everyday events is often not available in news, and can be better learned from other narrative genres. For example, Figure 1 shows a part of a personal narrative written in a blog about a camping trip (Burton et al., 2009). [Figure 1: "We packed all our things on the night before Thu (24 Jul) except for frozen food. We brought a lot of things along. We woke up early on Thu and JS started packing the frozen marinatinated food inside the small cooler... In the end, we decided the best place to set up the tent was the squarish ground that's located on the right. Prior to setting up our tent, we placed a tarp on the ground. In this way, the underneaths of the tent would be kept clean. After that, we set the tent up."] The major event in this story is camping, which is contingent upon several finer-grained events, such as packing things the night before, waking up in the morning, packing frozen food, and later on at the campground, placing a tarp and setting up the tent. Similarly, film scene descriptions, such as the one shown in Figure 2, typically contain fine-grained causality. In this scene from Lord of the Rings, grabbing leads to spilling, and pushing leads to stumbling and falling. We show that unsupervised methods for modeling causality can learn fine-grained event relations from personal narratives and film scenes, even when the corpus is relatively small compared to those that have been used for newswire.
We learn high-quality causal relations, with over 80% judged as causal by humans. We claim that these fine-grained causal relations are much closer in spirit to those motivating earlier work on scripts (Lehnert, 1981; Schank et al., 1977; Wilensky, 1982; de Jong, 1979) , and we show that the causal knowledge we learn is not found in causal knowledge bases learned from news. Section 2 first summarizes previous work on learning causal knowledge. We then present our experiments and results on modeling event causality in blogs and film scenes in Section 3. Conclusions and future directions are discussed in Section 4.
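As a small illustration of the kind of unsupervised statistic such methods can compute from temporally ordered event sequences, the sketch below scores ordered adjacent event pairs with a PMI-style measure, loosely in the spirit of the causal potential of Beamer and Girju (2009) cited above. The toy narratives are invented.

```python
import math
from collections import Counter

# Toy event sequences (one list per narrative, events in reported temporal order).
narratives = [
    ["pack", "wake_up", "drive", "place_tarp", "set_up_tent"],
    ["pack", "drive", "place_tarp", "set_up_tent", "cook"],
    ["wake_up", "pack", "drive", "set_up_tent"],
]

unigrams = Counter(e for seq in narratives for e in seq)
ordered_pairs = Counter((a, b) for seq in narratives for a, b in zip(seq, seq[1:]))
n_events = sum(unigrams.values())
n_pairs = sum(ordered_pairs.values())

def causal_score(a, b):
    """PMI of the ordered pair (a before b); order-sensitive co-occurrence is
    used as a weak cue for a potentially causal/contingent relation."""
    if ordered_pairs[(a, b)] == 0:
        return float("-inf")
    p_ab = ordered_pairs[(a, b)] / n_pairs
    p_a, p_b = unigrams[a] / n_events, unigrams[b] / n_events
    return math.log(p_ab / (p_a * p_b))

for (a, b), _ in ordered_pairs.most_common(3):
    print(f"{a} -> {b}: {causal_score(a, b):.2f}")
```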
0
We present an algorithm for identifying noun phrase antecedents of personal pronouns, demonstrative pronouns, reflexive pronouns, and omitted pronouns (zero pronouns) in Spanish. The algorithm identifies both intrasentential and intersentential antecedents and is applied to the syntactic analysis generated by the slot unification parser (SUP) (Ferr~ndez, Palomar, and Moreno 1998b) . It also combines different forms of knowledge by distinguishing between constraints and preferences. Whereas constraints are used as combinations of several kinds of knowledge (lexical, morphological, and syntactic), preferences are defined as a combination of heuristic rules extracted from a study of different corpora.We present the following main contributions in this paper:• an algorithm for anaphora resolution in Spanish texts that uses different kinds of knowledge• an exhaustive study of the importance of each kind of knowledge in Spanish anaphora resolution• a proposal concerning syntactic conditions on NP-pronoun noncoreference in Spanish that can be evaluated on a partial parse tree• a proposal regarding preferences that are appropriate for resolving anaphora in Spanish and that could easily be extended to other languages• a blind test of the algorithm• a comparison with other approaches to anaphora resolution that we have applied to Spanish texts using the same blind testIn Section 2, we show the classification scheme we used to identify the different types of anaphora that we would be resolving. In Section 3, we present the algorithm and discuss its main properties. In Section 4, we evaluate the algorithm. In Section 5, we compare our algorithm with several other approaches to anaphora resolution. Finally, we present our conclusions.
0
Text generation is a central task in the interaction between an intelligent system and its users (a conversational agent's reply, text summarization, article generation, etc.). During this interaction, it is desirable to control the generated text so that it respects constraints imposed by the context. One may, for instance, want to act on the length of a generated sentence, its register, its politeness, its polarity, and other characteristics that can be decorrelated, at least in part, from the semantics. We group these under the umbrella term "style". Transforming texts to modify their style and thus solve this problem is an active area of research, which notably relies on task-specific models and data (Pitler, 2010) (Shardlow, 2014) (Xu et al., 2012). The data used are aligned pairs of sentences in the original style and in the "target" style. Take the example of sentence compression: one may want to go automatically from the source sentence They also, by law, have to be held in Beirut to a compressed, so-called target, sentence: They have to be held in Beirut. However, data in the form of such aligned pairs are sometimes too scarce to learn the task directly, and creating them is costly. We place ourselves here in a setting that both unifies style-transfer problems and requires weaker supervision. Instead of aligned sentences in two different styles, the proposed method only requires a set of sentences and indicators of their style. These indicators can be tied to the origin of the sentences (for example, the year of writing) or computed (such as sentence length). Our model uses an indicator as a signal to modify the generated text so that it matches a desired value. To this end, we first introduce neural sentence generation models (Section 2), and then propose a style-transfer method (Section 3).
0
Previous work on Chinese Semantic Role Labeling (SRL) mainly focused on how to implement SRL methods which are successful on English. Similar to English, parsing is a standard pre-processing for Chinese SRL. Many features are extracted to represent constituents in the input parses (Sun and Jurafsky, 2004; Xue, 2008; Ding and Chang, 2008) . By using these features, semantic classifiers are trained to predict whether a constituent fills a semantic role. Developing features that capture the right kind of information encoded in the input parses has been shown crucial to advancing the state-of-the-art. Though there has been some work on feature design in Chinese SRL, information encoded in the syntactic trees is not fully exploited and requires more research effort. In this paper, we propose a set of additional features, some of which are designed to better capture structural information of sub-trees in a given parse. With help of these new features, our system achieves 93.49 F-measure with hand-crafted parses. Comparison with the best reported results, 92.0 (Xue, 2008) , shows that these features yield a significant improvement of the state-of-the-art.We further analyze the effect of syntactic parsing in Chinese SRL. The main effect of parsing in SRL is two-fold. First, grouping words into constituents, parsing helps to find argument candidates. Second, parsers provide semantic classifiers plenty of syntactic information, not to only recognize arguments from all candidate constituents but also to classify their detailed semantic types. We empirically analyze each effect in turn. We also give some preliminary linguistic explanations for the phenomena.
0
In the biomedical domain, the vast amount of data and the great variety of induced features are two major bottlenecks for further natural language processing on the biomedical literature. In this paper, we investigate the biomedical named entity recognition (NER) problem. This problem is particularly important because it is a necessary pre-processing step in many applications.This paper addresses two main issues that arise from biomedical NER.Traditional approaches that depend on the maximum likelihood training method are slow even with large-scale optimization methods such as L-BFGS. This problem worsens with the sheer volume and growth rate of the biomedical literature. In this paper, we propose the use of an online training method that greatly reduces training time.Large Memory Space: The total number of features used to extract named entities from documents is very large. To extract biomedical named entities, we often need to use extra features in addition to those used in general-purpose domains, such as prefix, suffix, punctuation, and more orthographic features. We need a correspondingly large memory space for processing, exacerbating the first issue. We propose to alleviate this problem by employing a cascaded approach that divides the NER task into a segmentation task and a classification task.The overall approach is the online cascaded approach, which is described in the remaining sections of this paper: Section 2 describes the general model that is used to address the above issues. We address the issue of long training time in Section 3. The issue of large memory space is addressed in Section 4. Experimental results and analysis are presented in Section 5. We discuss related work in Section 6 and conclude with Section 7.
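A bare-bones sketch of the online-training idea referred to above, using a perceptron-style update that processes one example at a time rather than batch maximum-likelihood training with L-BFGS; the actual online method, the feature set, and the cascaded segmentation/classification split in the paper may differ. The tiny training set and features are invented.

```python
from collections import defaultdict

def token_features(tokens, i):
    """Toy orthographic features of the kind used for biomedical NER."""
    w = tokens[i]
    return {f"word={w.lower()}", f"suffix3={w[-3:].lower()}",
            "has_digit" if any(c.isdigit() for c in w) else "no_digit",
            "init_cap" if w[0].isupper() else "lower"}

def predict(weights, tokens, i, labels):
    return max(labels, key=lambda y: sum(weights[(y, f)] for f in token_features(tokens, i)))

def train_online(data, labels, epochs=5):
    """Perceptron-style online training: update weights one example at a time,
    only when the current prediction is wrong."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for tokens, gold in data:
            for i, y_gold in enumerate(gold):
                y_hat = predict(weights, tokens, i, labels)
                if y_hat != y_gold:
                    for f in token_features(tokens, i):
                        weights[(y_gold, f)] += 1.0
                        weights[(y_hat, f)] -= 1.0
    return weights

data = [(["IL-2", "gene", "expression"], ["B-DNA", "I-DNA", "O"]),
        (["the", "IL-2", "promoter"], ["O", "B-DNA", "I-DNA"])]
w = train_online(data, labels=["B-DNA", "I-DNA", "O"])
print([predict(w, ["IL-2", "enhancer"], i, ["B-DNA", "I-DNA", "O"]) for i in range(2)])
```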
0
In modern business, contact centers are becoming more and more important for improving customer satisfaction. Such contact centers typically have quality analysts who mine calls to gain insight into how to improve business productivity (Takeuchi et al., 2007; Subramaniam et al., 2009) . To enable them to handle the massive number of calls, automatic summarization has been utilized and shown to successfully reduce costs (Byrd et al., 2008) . However, one of the problems in current call summarization is that a domain ontology is required for understanding operator/caller utterances, which makes it difficult to port one summarization system from domain to domain.This paper describes a novel automatic summarization method for contact center dialogues without the costly process of creating domain on-tologies. More specifically, given contact center dialogues categorized into multiple domains, we create a particular type of hidden Markov model (HMM) called Class Speaker HMM (CSHMM) to model operator/caller utterance sequences. The CSHMM learns to distinguish sequences of individual domains and common sequences in all domains at the same time. This approach makes it possible to accurately distinguish utterances specific to a certain domain and thereby has the potential to generate accurate extractive summaries.In Section 2, we review recent work on automatic summarization, including its application to contact center dialogues. In Section 3, we describe the CSHMM. In Section 4, we describe our automatic summarization method in detail. In Section 5, we describe the experiment we performed to verify our method and present the results. In Section 6, we summarize and mention future work.
0
Persuasion is a primary goal of argumentation (O'Keefe, 2006) . It is often carried out in the form of a debate or discussion, where debaters argue to persuade others to take certain stances on controversial topics. Several studies have examined persuasiveness in debates by probing the main factors for establishing persuasion, particularly regarding the role of linguistic features of debaters' arguments (Zhang et al., 2016) , the interaction between debaters (Tan et al., 2016) , and the personal characteristics of debaters (Durmus and Cardie, 2018) .While the impact of debaters' characteristics on persuasiveness has been observed in online debates, the exploitation of these characteristics for predicting persuasiveness has been done based on explicit characteristics-related information in users' profiles or on questionnaires. For example, Lukin et al. (2017a) performed a personality trait test for selected people and asked them for their stances on specific topics to estimate their beliefs. Also, Durmus and Cardie (2018) used the information in users' profiles in an online forum, where their stances on controversial topics are explicitly stated, as a proxy of their beliefs. Such a means of exploitation limits the applicability of predicting persuasiveness, as the characteristics of debaters are usually not explicitly available in online debates, and it is not practicable to survey every debater.The paper at hand studies how the characteristics of debaters can be modeled automatically and utilized successfully for predicting persuasiveness. To this end, we propose a new approach of various features that capture the beliefs, interests, and personality traits of debaters on the subreddit "ChangeMyView" based on the debaters' previous activity on the Reddit.com platform.We apply this approach to the tasks of predicting argument persuasiveness and predicting debater's resistance to persuasion. Our experiments show that incorporating debater characteristics improves the prediction effectiveness of the two tasks over previous approaches which rely primarily on linguistic features. Interestingly, personality traits alone were the most predictive feature for resistance to persuasion, outperforming the linguistic features of the post itself.The contribution of this paper is three-fold:1. A large-scale corpus of argumentative and general discussions mined from Reddit.com. 1 2. Features that capture the beliefs, interests, and personality traits of debaters based on their posting history.3. A characteristics-based approach that tackles two persuasiveness tasks with improved effectiveness over previous approaches. 2
0
Many existing dialogue systems adopt a two-phase approach to satisfying a user's request for information: query construction followed by solution construction and presentation, with the former concerned with the acquisition of the preferences and restrictions in a user's information needs, and the latter concerned with presenting solutions that satisfy those needs (e.g., Abella, et al., 1996; Litman, et al., 1998; Chu-Carroll, 2000) . When we look at human-human information-seeking dialogues, however, we observe that a two-phase approach can lead to problems such as delayed identification of over-constrained problems (see example below).In this dialogue between a travel agent (A) and a customer (C) (SRI transcripts, 1992) , the overconstraining attribute-value pair -flight United 1117 -occurred in turn C2, but was not detected until turn A11. Instead, the travel agent continues to collect more constraints to complete the information need specification only to find during the solution phase that the problem is overconstrained. Worse yet, if the human agent or information system cannot offer the user informed assistance, such as suggesting which constraint(s) to relax when an over-constrained situation is finally detected, the user is forced to adopt an inefficient trial-and-error strategy of selecting constraints to relax.We present a low-level dialogue generation model that uses a Constraint-Based Problem-Solver (CBPS) to support cooperative mixedinitiative information-seeking dialogue. (We refer to the dialogue generation model as low-level since it does not address surface generation.) Use of the CBPS enables a dialogue system to 1) incrementally interleave query construction with solution construction 2) immediately detect underconstrained and over-constrained information requests, and 3) provide cooperative responses when these types of problems are detected. The model has been implemented in COMIX, a prototype system for providing airline flight information. In addition, we present a system evaluation designed to evaluate COMIX's performance when users make over-constrained requests.
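The sketch below illustrates the incremental constraint-checking behavior described above: each attribute-value constraint is applied as soon as it arrives, so an over-constrained request (empty solution set) is detected immediately and the offending constraint can be offered for relaxation. The flight table and attributes are made up and are not the COMIX knowledge base.

```python
FLIGHTS = [  # toy database standing in for the flight information source
    {"airline": "United", "flight": "1117", "origin": "SFO", "dest": "BOS", "class": "coach"},
    {"airline": "United", "flight": "2034", "origin": "SFO", "dest": "BOS", "class": "first"},
    {"airline": "Delta",  "flight": "310",  "origin": "SFO", "dest": "JFK", "class": "coach"},
]

def add_constraint(solutions, constraints, attr, value):
    """Apply one constraint incrementally and report the problem state."""
    constraints = constraints + [(attr, value)]
    solutions = [f for f in solutions if f[attr] == value]
    if not solutions:
        state = f"over-constrained: consider relaxing {attr}={value}"
    elif len(solutions) > 1:
        state = "under-constrained: ask for another preference"
    else:
        state = "unique solution found"
    return solutions, constraints, state

sols, cons = FLIGHTS, []
for attr, value in [("origin", "SFO"), ("dest", "BOS"), ("flight", "310")]:
    sols, cons, state = add_constraint(sols, cons, attr, value)
    print(f"after {attr}={value}: {len(sols)} candidate(s) -> {state}")
```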
0
It has usually been assumed that the semantics of temporal expressions is directly related to the linear dimensional conception of time familiar from high-school physics - that is, to a model based on the number-line. However, there are good reasons for suspecting that such a conception is not the one that our linguistic categories are most directly related to. When-clauses provide an example of the mismatch between linguistic temporal categories and a semantics based on such an assumption. Consider the following examples: (1) When they built the 39th Street bridge... (a) ...a local architect drew up the plans. (b) ...they used the best materials. (c) ...they solved most of their traffic problems. To map the temporal relations expressed in these examples onto linear time, and to try to express the semantics of when in terms of points or intervals (possibly associated with events), would appear to imply either that when is multiply ambiguous, allowing these points or intervals to be temporally related in at least three different ways, or that the relation expressed between main and when-clauses is one of "approximate coincidence". However, neither of these tactics explains the peculiarity of utterances like the following: (2) #When my car broke down, the sun set. The oddity of this statement seems to arise because the when-clause predicates something more than mere temporal coincidence, that is, some contingent relation such as a causal link between the two events. Of course, our knowledge of the world does not easily support such a link. This aspect of the sentence's meaning must stem from the sense-meaning of when, because parallel utterances using just after, at approximately the same time as, and the like, which predicate purely temporal coincidence, are perfectly felicitous. We shall claim that the different temporal relations conveyed in examples (1) do not arise from any sense-ambiguity of when, or from any "fuzziness" in the relation that it expresses between the times referred to in the clauses it conjoins, but from the fact that the meaning of when is not primarily temporal at all. We shall argue that when has a single sense-meaning reflecting its role of establishing a temporal focus. The apparent diversity of meanings arises from the nature of this referent and the organisation of events and states of affairs in episodic memory under a relation we shall call "contingency", a term related to such notions as causality, rather than temporal sequentiality. This contingent, non-temporal relation also determines the ontology of the elementary propositions denoting events and states of which episodic memory is composed, and it is to these that we turn first.
0
Obtaining the definition is the first step toward understanding a new terminology. The lack of precise terminology definition poses great challenges in scientific communication and collaboration (Oke, 2006; Cimino et al., 1994) , which further hinders new discovery. This problem becomes even more severe in emerging research topics (Baig, 2020; Baines et al., 2020) , such as COVID-19, where curated definitions could be imprecise and do not scale to rapidly proposed terminologies. Neural text generation (Bowman et al., 2016; Vaswani et al., 2017; Sutskever et al., 2014; Song et al., 2020b) could be a plausible solution to this problem by generating definition text based on the terminology text. Encouraging results by neural text generation have been observed on related tasks, such as paraphrase generation (Li et al., 2020) , description generation (Cheng et al., 2020) , synonym generation (Gupta et al., 2015) and data augmentation (Malandrakis et al., 2019) . However, it remains unclear how to generate definition, which comprises concise text in the input space (i.e., terminology) and longer text in the output space (i.e., definition). Moreover, the absence of large-scale terminology definition datasets impedes the progress towards developing definition generation models.Despite these challenges, scientific terminologies often form a directed acyclic graph (DAG), which could be helpful in definition generation. Each DAG organizes related terminologies from general ones to specific ones with different granu-larity levels (Figure 1 ). These DAGs have proved to be useful in assisting disease, cell type and function classification (Wang et al., 2020b; Song et al., 2020a; Wang et al., 2015) by exploiting the principle that nearby terms on the graph are semantically similar (Altshuler et al., 2000) . Likewise, terminologies that are closer on this DAG should acquire similar definitions. Moreover, placing a new terminology in an existing DAG requires considerably less expert efforts than curating the definition, further motivating us to generate the definition using the DAG.In this paper, we collectively advance definition generation in the biomedical domain through introducing a terminology definition dataset Graphine and a novel graph-aware text generation model Graphex. Graphine encompasses 2,010,648 terminology definition pairs encapsulated in 227 DAGs. These DAGs are collected from three major biomedical ontology databases (Smith et al., 2007; Noy et al., 2009; Jupp et al., 2015) . All definitions are curated by domain experts. Our graph-aware text generation model Graphex utilizes the graph structure to assist definition generation based on the observation that nearby terminologies exhibit semantically similar definitions.Our human and automatic evaluations demonstrate the substantial improvement of our method on definition generation in comparison to existing text generation methods that do not consider the graph structure. In addition to definition generation, we illustrate how Graphine opens up new avenues for investigating other tasks, including domain-specific language model pretraining, graph representation learning and a novel task of sentence granularity prediction. Finally, we present case studies of a failed generation by our method, pinpointing directions for future improvement. To the best of our knowledge, Graphine and Graphex build up the first large-scale benchmark for terminology definition generation, and can be broadly applied to a variety of tasks.
0
of all public-facing documents, while sociocultural pressure has also influenced private businesses to invest in translation services. But the mounting demand for translation services presents challenges as well as opportunities for Welsh Linguistic Service Providers (hereafter LSPs). LSPs need to balance expenditure (on staff and equipment) with the capacity to deal with existing demands for services. Technology provides one answer to this challenge, as the work of a single translator can be extended. A report by Bangor University's Language Technology Unit (Prys et al., 2009) found that using various kinds of translation technology could raise the economic productivity of the Welsh translation industry by 40% and could also prevent the undercutting of translation services by foreign providers leveraging new technology (2009: 23). The uptake of translation technology in Wales has been slow however, with various surveys (Prys et al., 2009 and Andrews 2010) reporting percentages of Welsh translators using translation environment technology as low as 49% and 50%, compared to the figures of 82% (Lagoudaki, 2006) and 65% (EU Commission, 2017) reported at the international level and in the UK respectively. [Footnote 2: ...from 54 countries. The author does not provide information on the linguistic backgrounds of respondents, but does mention that the survey had to be completed in English, which could mean that results were biased towards "English-speaking professionals" (Lagoudaki, 2006: 6).] While low adoption rates for new technology may seem inevitable in the context of a lesser-resourced language, the Welsh Government has made the expansion of such tools an important part of its strategy to reach a million Welsh speakers by 2050 (Welsh Government, 2019: 34). One tool which the Welsh Government has to promote specialist training and skills in the private sector is the Knowledge Transfer Partnership, or KTP. KTPs involve a partnership in which a university works together with a private business in order to transfer academic knowledge relating to a specific field. The project described in this paper involved a KTP between a Welsh University and a North Wales LSP, Cymen Cyf.
0
In the context of goal-oriented dialogue systems, intent classification (IC) is the process of classifying a user's utterance into an intent, such as Book-Flight or AddToPlaylist, referring to the user's goal. While slot filling (SF) is the process of identifying and classifying certain tokens in the utterance into their corresponding labels, in a manner akin to named entity recognition (NER). However, in contrast to NER, typical slots are particular to the domain of the dialogue, such as music or travel. As a reference point, we list intent and slot label annotations for an example utterance from the SNIPS dataset with the AddToPlaylist IC in Figure 1 As of late, most state-of-the-art IC/SF models are based on feed-forward, convolutional, or recurrent neural networks (Hakkani-Tür et al., 2016; Goo et al., 2018; Gupta et al., 2019) . These neural models offer substantial gains in performance, but they often require a large number of labeled examples (on the order of hundreds) per intent class and slot-label to achieve these gains. The relative scarcity of large-scale datasets annotated with intents and slots prohibits the use of neural IC/SF models in many promising domains, such as medical consultation, where it is difficult to obtain large quantities of annotated dialogues.Accordingly, we propose the task of few-shot IC/SF, catering to domain adaptation in low resource scenarios, where there are only a handful of annotated examples available per intent and slot in the target domain. To the best of our knowledge, this work is the first to apply the few-shot learning framework to a joint sentence classification and sequence labeling task. In the NLP literature, fewshot learning often refers to a low resource, cross lingual setting where there is limited data available in the target language. We emphasize that our definition of few-shot IC/SF is distinct in that we limit the amount of data available per target class rather than target language.Few-shot IC/SF builds on a large body of existing few-shot classification work. Drawing inspiration from computer vision, we experiment with two prominent few shot image classification approaches, prototypical networks and model agnostic meta learning (MAML). Both these methods seek to decrease over-fitting and improve generalization on small datasets, albeit via different mechanisms. Prototypical networks learns class specific representations, called prototypes, and performs inference by assigning the class label associated with the prototype closest to an input embedding. Whereas MAML modifies the learning objective to optimize for pre-training representations that transfer well when fine-tuned on a small number of labeled examples.For benchmarking purposes, we establish fewshot splits for three publicly available IC/SF datasets: ATIS (Hemphill et al., 1990) , SNIPS (Coucke et al., 2018) , and TOP . Empirically, prototypical networks yields substantial improvements on this benchmark over the popular "fine-tuning" approach (Goyal et al., 2018; Schuster et al., 2018) , where representations are pre-trained on a large, "source" dataset and then fine-tuned on a smaller, "target" dataset. Despite performing worse on intent classification, MAML also achieves gains over "fine-tuning" on the slot filling task. Orthogonally, we experiment with the use of two pre-trained language models, BERT and ELMO, as well as joint training on multiple datasets. 
These experiments show that the use of pre-trained, contextual representations is complementary to both methods, while prototypical networks are uniquely able to leverage joint training to consistently boost slot filling performance. In summary, our primary contributions are fourfold: (1) formulating IC/SF as a few-shot learning task; (2) establishing few-shot splits for the ATIS, SNIPS, and TOP datasets (few-shot split intent assignments are given in Section A.1); (3) showing that MAML and prototypical networks can outperform the popular "fine-tuning" domain adaptation framework; and (4) evaluating the complementarity of contextual embeddings and joint training with MAML and prototypical networks.
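A compact numpy sketch of the prototypical-networks inference step mentioned above: class prototypes are the mean embeddings of the few support utterances per intent, and a query is assigned to the nearest prototype. The encoder here is a random placeholder, so the printed prediction is arbitrary; the sketch only shows the mechanics.

```python
import numpy as np

DIM = 32

def encode(utterance):
    """Placeholder utterance encoder; a real system would use ELMo/BERT features.
    Deterministic per string within a run, but otherwise meaningless."""
    seed = abs(hash(utterance)) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=DIM)

# Few-shot support set: a handful of labeled utterances per intent class.
support = {
    "BookFlight": ["book me a flight to boston", "i need a plane ticket tomorrow"],
    "AddToPlaylist": ["add this song to my workout playlist", "put daft punk on my list"],
}

# Prototype = mean embedding of the support examples of each class.
prototypes = {intent: np.mean([encode(u) for u in utts], axis=0)
              for intent, utts in support.items()}

def classify(query):
    """Assign the intent whose prototype is closest in squared Euclidean distance."""
    q = encode(query)
    return min(prototypes, key=lambda c: float(np.sum((q - prototypes[c]) ** 2)))

print(classify("reserve a flight to denver"))
```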
0
Vector space models and distributional information have been a steadily increasing, integral part of lexical semantic research over the past 20 years. On the one hand, vector space models (see Turney and Pantel (2010) and Erk (2012) for two recent surveys) have been exploited in psycholinguistic (Lund and Burgess, 1996) and computational linguistic research (Schütze, 1992) to explore the notion of "similarity" between a set of target objects within a geometric setting. On the other hand, the distributional hypothesis (Firth, 1957; Harris, 1968) has been exploited to determine co-occurrence features for vector space models that best describe the words, phrases, sentences, etc. of interest.While the emergence of vector space models is increasingly pervasive within data-intensive lexical semantics, and even though useful features have been identified in general terms: 1 when it comes to a specific semantic phenomenon, we need to explore the relevant distributional features in order to investigate the respective phenomenon. Our research is interested in the meaning of German compounds. More specifically, we aim to predict the degrees of compositionality of German noun-noun compounds (e.g., Feuerwerk 'fire works') with regard to the meanings of their constituents (e.g., Feuer 'fire' and Werk 'opus'). This prediction uses vector space models, and our goal is to identify salient features that determine the degree of compositionality of the compounds by relying on the distributional similarities between the compounds and their constituents.In this vein, we systematically explore windowbased and syntax-based contextual clues. Since the targets in our vector space models are all nouns (i.e., the compound nouns, the modifier nouns, and the head nouns), our hypothesis is that adjectives and verbs are expected to provide salient distributional properties, as adjective/verb meaning and noun meaning are in a strong interdependent relationship. Even more, we expect adjectives and verbs that are syntactically bound to the nouns under consideration (syntax-based, i.e., attributive adjectives and subcategorising verbs) to outperform those that "just" appear in the window contexts of the nouns (window-based). In order to investigate this first hypothesis, we compare window-based and syntaxbased distributional features across parts-of-speech.Concerning a more specific aspect of compound meaning, we are interested in the contributions of the modifier noun versus head noun properties with regard to the meaning of the noun-noun compounds. While there has been prior psycholinguistic research on the constituent contributions (e.g., Gagné and Spalding (2009; 2011) ), computational linguistics has not yet paid much attention to this issue, as far as we know. Our hypothesis is that the distributional properties of the head constituents are more salient than the distributional properties of the modifier constituents in predicting the degree of compositionality of the compounds. In order to assess this second hypothesis, we compare the vector space similarities between the compounds and their modifier constituents with those of the compounds and their head constituents, with regard to the overall most successful features.The paper is organised as follows. Section 2 introduces the compound data that is relevant for this paper, i.e., the noun-noun compounds and the compositionality ratings. Section 3 performs and discusses the vector space experiments to explore our hypotheses, and Section 4 describes related work.
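A small sketch of the prediction setup described above: the degree of compositionality of a compound is approximated by the cosine similarity between its vector and those of its constituents, which also allows comparing head-based and modifier-based similarities. The vectors are random stand-ins for the co-occurrence-based distributional vectors used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 50
# Stand-in distributional vectors (rows would normally come from a co-occurrence matrix).
vectors = {w: rng.normal(size=DIM) for w in ["Feuerwerk", "Feuer", "Werk"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def compositionality(compound, modifier, head):
    """Predict the degree of compositionality from compound-constituent similarities."""
    c = vectors[compound]
    mod_sim = cosine(c, vectors[modifier])
    head_sim = cosine(c, vectors[head])
    return {
        "modifier_similarity": mod_sim,
        "head_similarity": head_sim,
        # a simple combined predictor; the paper also compares the constituents separately
        "mean_similarity": (mod_sim + head_sim) / 2,
    }

print(compositionality("Feuerwerk", modifier="Feuer", head="Werk"))
```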
0
Sarcasm detection is the computational task of predicting sarcasm in text. Past approaches in sarcasm detection rely on designing classifiers with specific features (to capture sentiment changes or incorporate context about the author, environment, etc.) Wallace et al., 2014; Rajadesingan et al., 2015; Bamman and Smith, 2015) , or model conversations using the sequence labeling-based approach by Joshi et al. (2016c) . Approaches, in addition to this statistical classifier-based paradigm are: deep learning-based approaches as in the case of Silvio Amir et al. (2016) or rule-based approaches such as Riloff et al. (2013; Khattri et al. (2015) .This work employs a machine learning technique that, to the best of our knowledge, has not been used for computational sarcasm. Specifically, we introduce a topic model for extraction of sarcasm-prevalent topics and as a result, for sarcasm detection. Our model based on a supervised version of the Latent Dirichlet Allocation (LDA) model (Blei et al., 2003) is able to discover clusters of words that correspond to sarcastic topics. The goal of this work is to discover sarcasm-prevalent topics based on sentiment distribution within text, and use these topics to improve sarcasm detection. The key idea of the model is that (a) some topics are more likely to be sarcastic than others, and (b) sarcastic tweets are likely to have a different distribution of positive-negative words as compared to literal positive or negative tweets. Hence, distribution of sentiment in a tweet is the central component of our model. Our sarcasm topic model is learned on tweets that are labeled with three sentiment labels: literal positive, literal negative and sarcastic. In order to extract sarcasm-prevalent topics, the model uses three latent variables: a topic variable to indicate words that are prevalent in sarcastic discussions, a sentiment variable for sentiment-bearing words related to a topic, and a switch variable that switches between the two kinds of words (topic and sentiment-bearing words). Using a dataset of 166,955 tweets, our model is able to discover words corresponding to topics that are found in our corpus of positive, negative and sarcastic tweets. We evaluate our model in two steps: a qualitative evaluation that ascertains sarcasm-prevalent topics based on the ones extracted, and a quantitative evaluation that evaluates sub-components of the model. We also demonstrate how it can be used for sarcasm detection. To do so, we compare our model with two prior work, and observe a significant improvement of around 25% in the F-score.The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 presents our motivation for using topic models for automatic sarcasm detection. Section 4 describes the design rationale and structure of our model. Section 5 describes the dataset and the experiment setup. Section 6 discusses the results in three steps: qualitative results, quantitative results and application of our topic model to sarcasm detection. Section 7 concludes the paper and points to future work.
0
Distributed representations of words (Bengio et al., 2003; Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017) and sentences (Kiros et al., 2015; Conneau et al., 2017; Reimers and Gurevych, 2019; Gao et al., 2021) have been shown to be extremely useful in transfer learning to many NLP tasks. Therefore, how we evaluate the quality of embedding models plays an essential role. Among the many evaluation methods, the word and sentence similarity task has gradually become the de facto intrinsic evaluation method. Figure 1 shows examples from word and sentence similarity datasets. [Figure 1: Word and sentence pairs with human-annotated similarity scores from the WS-353 and STS-B datasets (scaled to range 0 (lowest) to 10 (highest)).] In general, the datasets consist of pairs of words (w_1, w_2) (or sentences) and human-annotated similarity scores S_h. To evaluate an embedding model φ(·), we first extract embeddings for (w_1, w_2): (e_1, e_2) = (φ(w_1), φ(w_2)). Then, a similarity measure is applied to compute a predicted score S_p = sim(e_1, e_2), where cosine similarity is almost always adopted as sim. Finally, the correlation between S_h and S_p is computed, and a higher correlation suggests good alignment with human annotations and a better embedding model. Many studies, especially those targeting information retrieval via semantic search and clustering (Reimers and Gurevych, 2019; Su et al., 2021), have used the similarity task as the only or main evaluation method (Tissier et al., 2017; Mu et al., 2018; Arora et al., 2017; Li et al., 2020; Gao et al., 2021). We observe a number of issues in word and sentence similarity tasks, ranging from dataset collection to the evaluation paradigm, and consider that focusing too much on similarity tasks would negatively impact the development of future embedding models. The significant concerns are summarized as follows, and they generally apply to both word and sentence similarity tasks. First, the definition of similarity is too vague. There exist complicated relationships between sampled data pairs, and almost all relations contribute to the similarity score, which is challenging for non-expert annotators. Second, the similarity evaluation tasks are not directly relevant to downstream tasks. We believe this is because of the data discrepancy between them, and because the properties evaluated by similarity tasks are not the ones important to downstream applications. Third, the evaluation paradigm can be tricked with simple post-processing methods, making it unfair to benchmark different models. Inspired by Spreading-Activation Theory (Collins and Loftus, 1975), we propose to evaluate embedding models as a retrieval task, and name it EvalRank, to address the above issues (code available at https://github.com/BinWang28/EvalRank-Embedding-Evaluation). While similarity tasks measure the distance between similarity pairs from all similarity levels, EvalRank only considers highly similar pairs from a local perspective. Our main contributions can be summarized as follows: (1) We point out three significant problems with using word and sentence similarity tasks as the de facto evaluation method, through analysis or experimental verification.
The study provides valuable insights into embedding evaluation methods. (2) We propose a new intrinsic evaluation method, EvalRank, that aligns better with the properties required by various downstream tasks. (3) We conduct extensive experiments with 60+ models and 10 downstream tasks to certify the effectiveness of our evaluation method. The practical evaluation toolkit is released for future benchmarking purposes.
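The sketch below contrasts, under simplifying assumptions, the standard similarity protocol (Spearman correlation between human scores and cosine similarities) with a simple retrieval-style check in which each item must rank its highly similar partner above distractors (MRR, hits@k). It is an illustrative approximation and not the exact EvalRank procedure; the embeddings and human scores are random placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_pairs, dim = 50, 64
e1 = rng.normal(size=(n_pairs, dim))              # embeddings of the first items
e2 = e1 + 0.5 * rng.normal(size=(n_pairs, dim))   # noisy embeddings of their partners
human_scores = rng.uniform(0, 10, size=n_pairs)   # stand-in human similarity ratings

def cos(a, b):
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

# 1) Standard similarity evaluation: correlation of cosine scores with human scores.
rho, _ = spearmanr(human_scores, cos(e1, e2))

# 2) Retrieval-style evaluation: for each e1[i], rank all e2 by cosine similarity
#    and check where the true partner e2[i] lands (MRR and hits@5).
sims = e1 @ e2.T / (np.linalg.norm(e1, axis=1, keepdims=True) * np.linalg.norm(e2, axis=1))
diag = sims[np.arange(n_pairs), np.arange(n_pairs)][:, None]
ranks = 1 + (sims > diag).sum(axis=1)
mrr = float(np.mean(1.0 / ranks))
hits_at_5 = float(np.mean(ranks <= 5))

print(f"spearman={rho:.3f}  MRR={mrr:.3f}  hits@5={hits_at_5:.3f}")
```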
0
Extracting knowledge from unstructured text has been a long-standing goal of NLP and AI. The advent of the World Wide Web further increases its importance and urgency by making available an astronomical number of online documents containing virtually unlimited amount of knowledge (Craven et al., 1999) . A salient example domain is biomedical literature: the PubMed 1 online repository contains over 18 million abstracts on biomedical research, with more than two thousand new abstracts added each day; the abstracts are written in grammatical English, which enables the use of advanced NLP tools such as syntactic and semantic parsers.Traditionally, research on knowledge extraction from text is primarily pursued in the field of information extraction with a rather confined goal of extracting instances for flat relational schemas with no nested structures (e.g, recognizing protein names and protein-protein interaction (PPI)). This restriction mainly stems from limitations in available resources and algorithms. The BioNLP'09 Shared Task (Kim et al., 2009) is one of the first that faced squarely information needs that are complex and highly structured. It aims to extract nested bio-molecular events from research abstracts, where an event may have variable number of arguments and may contain other events as arguments. Such nested events are ubiquitous in biomedical literature and can effectively represent complex biomedical knowledge and subsequently support reasoning and automated discovery. The task has generated much interest, with twenty-four teams having submitted their results. The top system by UTurku (Bjorne et al., 2009) attained the state-of-the-art F1 of 52.0%.The nested event structures make this task particularly attractive for applying joint inference. By allowing information to propagate among events and arguments, joint inference can facilitate mutual disambiguation and potentially lead to substantial gain in predictive accuracy. However, joint inference is underexplored for this task. Most participants either reduced the task to classification (e.g., by using SVM), or used heuristics to combine manual rules and statistics. The previous best joint approach was Riedel et al. (2009) . While competitive, it still lags UTurku by more than 7 points in F1.In this paper, we present the first joint approach that achieves state-of-the-art results for bio-event extraction. Like Riedel et al. (2009) , our system is based on Markov logic, but we adopted a novel formulation that models dependency edges in argument paths and jointly predicts them along with events and arguments. By expanding the scope of joint inference to include individual argument edges, our system can leverage fine-grained correlations to make learning more effective. On the development set, by merely adding a few joint inference formulas to a simple logistic regression model, our system raised F1 from 28% to 54%, already tying UTurku.We also presented a heuristic method to fix errors in syntactic parsing by leveraging available semantic information from task input, and showed that this in turn led to substantial performance gain in the task. Overall, our final system reduced F1 error by more than 10% compared to Riedel et al. (2009) .We begin by describing the shared task and related work. We then introduce Markov logic and our Markov Logic Network (MLN) for joint bio-event extraction. Finally, we present our experimental results and conclude.
0
Discourse structures for texts represent relational semantic structures that convey causal, topical, argumentative relations inter alia or more generally coherence relations. Following (Muller et al., 2012; Li et al., 2014; Morey et al., 2018) , we represent them as dependency structures or graphs containing a set of nodes that represent discourse units (DUs), or instances of propositional content, and a set of labelled arcs that represent coherent relations between DUs. For dialogues with multiple interlocutors, extraction of their discourse structures could provide useful semantic information to the "downstream" models used, for example, in the production of intelligent meeting man-agers or the analysis of user interactions in online fora. However, despite considerable efforts on computational discourse-analysis (Duverle and Prendinger, 2009; Joty et al., 2013; Ji and Eisenstein, 2014; Surdeanu et al., 2015; Yoshida et al., 2014; Li et al., 2016) , we are still a long way from usable discourse models, especially for dialogue. The problem of extracting full discourse structures is difficult: standard supervised models struggle to capture the sparse long distance attachments, even when relatively large annotated corpora are available. In addition, the annotation process is time consuming and often fraught with errors and disagreements, even among expert annotators. This motivated us to explore a weak supervision approach, data programming (Ratner et al., 2016) , in which we exploit expert linguistic knowledge in a more compact and consistent rule-based form.In our study, we restrict the structure learning problem to predicting edges or attachments between DU pairs in the dependency graph. After training a supervised deep learning algorithm to predict attachments on the STAC corpus 1 , we then constructed a weakly supervised learning system in which we used 10% of the corpus as a development set. Experts on discourse structure wrote a set of attachment rules, Labeling Functions (LFs), with reference to this development set. Although the whole of the STAC corpus is annotated, we treated the remainder of the corpus as unseen/unannotated data in order to simulate the conditions in which the snorkel framework is meant to be used, i.e. where there is a large amount of unlabeled data but where it is only feasible to hand label a relatively small portion of it. Accordingly, we applied the completed LFs to our "unseen" training set, 80% of the corpus, and used the final 10% as our test set.After applying the LFs to the unannotated data and training the generative model, the F1 score for attachment was 4 points higher than that for the supervised method, showing that hybrid learning architectures combining expert linguistic conceptual knowledge with data-driven techniques can be highly competitive with standard learning approaches.
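A toy sketch of the data-programming setup described above: a few rule-based labeling functions vote on whether a pair of discourse units is attached, and the votes are combined, here with a simple majority instead of snorkel's generative model. The DU fields and the rules themselves are illustrative, not the actual expert-written LFs.

```python
ABSTAIN, NOT_ATTACHED, ATTACHED = -1, 0, 1

# A candidate pair of discourse units (DUs) from a multi-party chat.
# Fields are illustrative: distance in DUs, speakers, and surface cues.
pair = {"distance": 1, "speaker_u": "A", "speaker_v": "B",
        "u_is_question": True, "v_text": "yes, I'll trade you wheat"}

def lf_adjacent(p):
    """Adjacent DUs are very often attached."""
    return ATTACHED if p["distance"] == 1 else ABSTAIN

def lf_question_answer(p):
    """A question by one speaker followed by another speaker's reply suggests attachment."""
    if p["u_is_question"] and p["speaker_u"] != p["speaker_v"]:
        return ATTACHED
    return ABSTAIN

def lf_long_distance(p):
    """Attachments rarely span many intervening DUs."""
    return NOT_ATTACHED if p["distance"] > 10 else ABSTAIN

LABELING_FUNCTIONS = [lf_adjacent, lf_question_answer, lf_long_distance]

def vote(p):
    votes = [v for v in (lf(p) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return ATTACHED if sum(v == ATTACHED for v in votes) >= len(votes) / 2 else NOT_ATTACHED

print(vote(pair))   # snorkel would instead learn LF accuracies with a generative model
```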
0
There is a long linguistic tradition of frame and role annotation for verbal predications, rooted in verb sense classifications on the one hand (e.g. Levin 1993), and the concept of semantic roles (also called thematic or case roles, Fillmore 1968) on the other. In a frame-based framework, verb categories and semantic roles are seen as interdependent, and predications are annotated for both, usually involving both valency-bound arguments and free (adverbial) satellites of a given verb. Two crucial resources in the area are FrameNet (Baker et al. 1998 , Ruppenhofer et al. 2010 and PropBank (Palmer et al. 2005) .The former is more lexicographical in its conception and focuses on a one-by-one exhaustive description of individual frames, the latter offers exhaustive proposition annotation of running corpus sentences, with an eye on applications such as Machine Learning (ML).For Danish, both a FrameNet and a frame tagger (DanGram) have been published (Bick 2011) , but unlike some work on larger languages, e.g. the German Salsa corpus (Rehbein et al. 2012) , these Danish tools addressed only verbal frames, largely ignoring nominal predications. The work presented here strives to resolve this problem in a three-pronged fashion, with automatic corpus annotation based on (a) systematic derivation of noun frames from verb frames, (b) lexicographical treatment of argument-carrying nouns and (c) free role-mapping rules based on semantic noun classes and syntactic triggers.
0
Linguistic analyses have often been computationally performed around the static notion of words or word categorization methods (e.g. LIWC) where the context of words is not taken into account. Previous work on politeness (Danescu-Niculescu-Mizil et al., 2013) and gender (Bamman et al., 2014) have focused on comparing the use of non-contextual broad word categories such as personal pronouns across different categories/groups. For example, Bamman et al. (2014) found that women use more pronoun words than men. Danescu-Niculescu-Mizil et al. (2013) found that requests which contain a hedge word (e.g. think) are more likely to be perceived as polite than impolite. However, words often occur in many different contexts and thus analyzing them statically hides cues which can potentially enrich our understanding of the phenomenon being studied. As opposed to static word embeddings which provide the same representation for a word regardless of its context (i.a. Mikolov et al., 2013a; Mikolov et al., 2013b; Pennington et al., 2014) , the BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018) models provide methods for extracting pre-trained contextualized word representations. By leveraging contextualized representations, linguistic theories can be validated and enriched.Given a dataset annotated for a downstream task, we build a model which automatically discovers fine-grained context patterns of words. Discovering such patterns provides insight into the phenomenon the task attempts to model. We use pre-trained BERT embeddings and exploit the fact that words which occur in similar contexts tend to have similar representations. Our model takes as input contextualized representations for a given word and splits them into different clusters based on the context in which the word appeared and the labels of the items the word belongs to. By doing so, we are able to identify the different contexts in which the word appears and how they correlate with the categories of the task being studied.We use politeness as case study of a linguistic phenomenon. Existing computational work on politeness developed feature-based (Danescu-Niculescu-Mizil et al., 2013) and neural (Aubakirova & Bansal, 2016 , Niu & Bansal, 2018 models which detect if a natural language request (e.g. Can you please tell me how to do that?) is polite or impolite. Danescu-Niculescu-Mizil et al. (2013) developed a computational tool driven by existing theories in the literature on politeness (Brown & Levinson, 1987) . These theories highlighted linguistic constructions that speakers use to reduce the burden on the addressee by sounding indirect (e.g. Could you please [...] ). Danescu-Niculescu-Mizil et al. (2013) showed further that, for some words, their position in the request plays a role in whether the request will be perceived as polite or impolite. For example, even though please is considered polite, they show that requests starting with please (e.g. Please explain this to me.) are more likely to be perceived as impolite while requests with please in the middle (e.g. Could you please help me out?) are more likely to be perceived as polite. We take an exploratory approach that is driven to discover fine-grained word patterns that encode the surrounding context as opposed to simply the position of the word. The proposed model automatically discovers patterns that have already been discussed in the literature in addition to multiple novel ones. 
For example, the model discovers fine-grained context patterns of the word please, one of which includes occurrences of please in the middle of impolite requests (e.g. Would you please stop?), signaling that our model encodes not only the position of the word but also its surrounding context. We validate the proposed model by showing that features based on the fine-grained patterns that it discovers outperform current feature-based politeness models. Uncovering contextual information on how words are used can enhance our understanding of the task/phenomenon being studied. It can also help in enriching the linguistic theories associated with the task. Striving for informativeness and interpretability can make models more useful in downstream applications.
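A minimal sketch of the kind of pipeline described above, clustering contextualized occurrences of a target word: the checkpoint name, the toy requests and the number of clusters are illustrative choices, not the paper's configuration.

```python
# Sketch: cluster occurrences of a target word by their BERT contextual embeddings.
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence, word):
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]      # (seq_len, 768)
    word_id = tokenizer(word, add_special_tokens=False)["input_ids"][0]
    pos = enc["input_ids"][0].tolist().index(word_id)   # first occurrence of the word
    return hidden[pos]

requests = ["Please explain this to me.",
            "Could you please help me out?",
            "Would you please stop?"]
vectors = torch.stack([embed_word(s, "please") for s in requests]).numpy()
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
print(clusters)   # occurrences grouped by their surrounding context
```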
0
Speech-to-text translation (ST) has been traditionally approached with cascade architectures consisting of a pipeline of two sub-components (Stentiford and Steer, 1988; Waibel et al., 1991): an automatic speech recognition (ASR) model, which transforms the audio input into a textual representation, and a machine translation (MT) model, which projects the transcript into the target language. A more recent approach consists of directly translating speech into target text using a single model (Bérard et al., 2016; Weiss et al., 2017). This direct solution has interesting advantages (Sperber and Paulik, 2020): i) it can better exploit audio information (e.g. prosody) during the translation phase, ii) it has lower latency, and iii) it is not affected by error propagation. Thanks to these advantages, the initially huge performance gap with cascade systems has gradually closed (Ansari et al., 2020), motivating research towards further improvements. Direct ST models are fed with features extracted from the audio at high frequency (usually every 10ms). This, on average, makes the resulting input sequence of vectors ∼10 times longer than the corresponding text, leading to an intrinsically redundant (i.e. long and repetitive) representation. For this reason, it is not possible to process speech data with a vanilla Transformer encoder (Vaswani et al., 2017), whose self-attention layers have quadratic memory complexity with respect to the input length. State-of-the-art architectures tackle the problem by collapsing adjacent vectors in a fixed way, i.e. by mapping a predefined number of vectors (usually 4) into a single one, either using strided convolutional layers (Bérard et al., 2018; Di Gangi et al., 2019; Wang et al., 2020a) or by stacking them (Sak et al., 2015). As a positive side effect, these length reduction solutions lower input redundancy. As a negative side effect, they disregard the variability over time of the amount of linguistic and phonetic information in audio signals (e.g. due to pauses and speaking rate variations) by giving equal weight to all features. In doing this, relevant features are penalized and considered equally important to the irrelevant ones, resulting in an information loss. Recently, Salesky et al. (2019) obtained considerable translation quality gains by collapsing consecutive vectors with the same phonetic content instead of compressing them in a fixed way. Zhang et al. (2020) also showed that selecting a small percentage (∼16%) of input time steps based on their informativeness improves ST quality. On the downside, these approaches respectively require adding a model that performs phoneme classification and a pre-trained adaptive feature selection layer on top of an ASR encoder, losing the compactness of direct solutions at the risk of error propagation. In direct ST, Liu et al. (2020) and Gaido et al. (2021) addressed the problem with a transcript/phoneme-based compression leveraging Connectionist Temporal Classification (CTC - Graves et al. 2006). However, since these methods are applied to the representation encoded by Transformer layers, the initial content-unaware downsampling of the input is still required for memory reasons, at the risk of losing important information. To avoid initial fixed compression, we propose Speechformer: the first Transformer-based architecture that processes the full audio content while maintaining the original dimensions of the input sequence.
Inspired by recent work on reducing the memory complexity of the attention mechanism (Wang et al., 2020b) , we introduce a novel attention layer -the ConvAttention -whose memory requirements are reduced by means of convolutional layers. As the benefits of avoiding the initial lossy compression might be outweighed by the increased redundancy of the encoded audio features, we aggregate the high-level representation of the input sequence in a linguistically informed way, as in (Liu et al., 2020; Gaido et al., 2021) . In other words, we collapse vectors representing the same linguistic atomic content (words, sub-words, pauses) into a single element, since they express the same linguistic information. The usage of the ConvAttention and of the linguistically motivated compression produces a considerably shorter, yet informative, sequence that fits the memory requirements of vanilla Transformer encoder layers. Experiments on three language directions (en→de/es/nl) show that the proposed architecture outperforms a state-of-the-art ST model by up to 0.8 BLEU points on the standard MuST-C corpus and obtains significantly larger gains (up to 4.0 BLEU) in a low resource setting where the amount of training data is reduced to 100 hours.
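The sketch below illustrates the general idea of reducing attention memory with convolutions, in the spirit of the ConvAttention: keys and values are shortened by a strided 1D convolution before standard scaled dot-product attention. It is a single-head simplification with placeholder sizes, not the authors' exact layer.

```python
# Convolution-compressed attention: the (L x L) attention memory becomes (L x L/stride).
import math
import torch
import torch.nn as nn

class CompressedAttention(nn.Module):
    def __init__(self, d_model=256, stride=4, kernel=5):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_conv = nn.Conv1d(d_model, 2 * d_model, kernel,
                                 stride=stride, padding=kernel // 2)
        self.out = nn.Linear(d_model, d_model)
        self.scale = math.sqrt(d_model)

    def forward(self, x):                                # x: (batch, seq_len, d_model)
        q = self.q_proj(x)
        k, v = self.kv_conv(x.transpose(1, 2)).chunk(2, dim=1)
        k, v = k.transpose(1, 2), v.transpose(1, 2)      # (batch, ~seq_len/stride, d_model)
        attn = torch.softmax(q @ k.transpose(1, 2) / self.scale, dim=-1)
        return self.out(attn @ v)

x = torch.randn(2, 1000, 256)                  # e.g. 10s of audio at 10ms frames
print(CompressedAttention()(x).shape)          # torch.Size([2, 1000, 256])
```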
0
Large pre-trained language models (LMs) are continuously pushing the state of the art across various NLP tasks. The established procedure performs self-supervised pre-training on a large text corpus and subsequently fine-tunes the model on a specific target task (Devlin et al., 2019; Liu et al., 2019b). The same procedure has also been applied to adapter-based training strategies, which achieve on-par task performance to full model fine-tuning while being considerably more parameter efficient (Houlsby et al., 2019) and faster to train (Rücklé et al., 2021). Besides being more efficient, adapters are also highly modular, enabling a wider range of transfer learning techniques (Pfeiffer et al., 2020b, 2021a; Üstün et al., 2020; Vidoni et al., 2020; Rust et al., 2021; Ansell et al., 2021). Extending upon the established two-step learning procedure, incorporating intermediate stages of knowledge transfer can yield further gains for fully fine-tuned models. For instance, Phang et al. (2018) sequentially fine-tune a pre-trained language model on a compatible intermediate task before target task fine-tuning. It has been shown that this is most effective for low-resource target tasks; however, not all task combinations are beneficial and many yield decreased performance (Phang et al., 2018; Wang et al., 2019a; Pruksachatkun et al., 2020). The abundance of diverse labeled datasets as well as the continuous development of new pre-trained LMs calls for methods that efficiently identify intermediate datasets that benefit the target task. So far, it is unclear how adapter-based approaches behave with intermediate fine-tuning. In the first part of this work, we thus establish that this setup results in similar gains for adapters, as has been shown for full model fine-tuning (Phang et al., 2018; Pruksachatkun et al., 2020; Gururangan et al., 2020). Focusing on a low-resource target task setup, we find that only a subset of intermediate adapters yield positive gains, while others hurt the performance considerably (see Table 1 and Figure 2). Our results demonstrate that it is necessary to obtain methods that efficiently identify beneficial intermediately trained adapters. In the second part, we leverage the transfer results from part one to automatically rank and identify beneficial intermediate tasks. With the rise of large publicly accessible repositories for NLP models (Wolf et al., 2020; Pfeiffer et al., 2020a), the chances of finding pre-trained models that yield positive transfer gains are high. However, it is infeasible to brute-force the identification of the best intermediate task. Existing approaches have focused on beneficial task selection for multi-task learning (Bingel and Søgaard, 2017), full fine-tuning of intermediate and target transformer-based LMs for NLP tasks (Vu et al., 2020), adapter-based models for vision tasks (Puigcerver et al., 2021) and unsupervised approaches for zero-shot transfer for community question answering. Each of these works requires different types of data, such as intermediate task data and/or intermediate model weights, which, depending on the scenario, are potentially not accessible.
In this work we thus aim to address the efficiency aspect of transfer learning in NLP from multiple different angles, resulting in the following contributions: 1) We focus on adapter-based transfer learning, which is considerably more parameter-efficient (Houlsby et al., 2019) and computationally efficient (Rücklé et al., 2021) than full model fine-tuning, while achieving on-par performance; 2) We evaluate sequential fine-tuning of adapter-based approaches on a diverse set of 42 intermediate and 11 target tasks (i.e. classification, multiple choice, question answering, and sequence tagging); 3) We identify the best intermediate task for transfer learning, without the need for computationally expensive, explicit training on all potential candidates. We compare different selection techniques, consolidating previously proposed and new methods; 4) We provide a thorough analysis of the different techniques, available data scenarios, and task and model types, thus presenting deeper insights into the best approach for each respective setting; 5) We provide computational cost estimates, enabling informed decision making for trade-offs between expense and downstream task performance.
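As an example of the kind of lightweight selection technique compared in this setting, the sketch below ranks candidate intermediate tasks by the cosine similarity between average sentence embeddings of small task samples. The encoder checkpoint and the toy task data are placeholders, and this is only one of several possible ranking heuristics, not the specific method proposed in the paper.

```python
# Rank candidate intermediate tasks by similarity of their average data embeddings
# to the (low-resource) target task.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def task_embedding(sentences):
    return encoder.encode(sentences).mean(axis=0)

def rank_intermediate_tasks(target_sents, candidates):
    t = task_embedding(target_sents)
    scores = {}
    for name, sents in candidates.items():
        c = task_embedding(sents)
        scores[name] = float(np.dot(t, c) / (np.linalg.norm(t) * np.linalg.norm(c)))
    return sorted(scores.items(), key=lambda kv: -kv[1])

target = ["Does the premise entail the hypothesis?", "A man is playing a guitar."]
candidates = {"nli_task": ["The cat sat on the mat.", "Two dogs run in a park."],
              "qa_task": ["Who wrote Hamlet?", "When did the war end?"]}
print(rank_intermediate_tasks(target, candidates))
```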
0
Named entity recognition (NER) is a fundamental task in information extraction, and the ability to detect mentions of domain-relevant entities such as chemicals and proteins is required for the analysis of texts in specialized domains such as biomedicine. Although a wealth of manually annotated corpora and dedicated NER methods have been introduced for the analysis of English biomedical and clinical texts (e.g. (Leaman and Lu, 2016; Crichton et al., 2017; Weber et al., 2019) ), there has been comparatively little work on these basic resources for other languages, including Spanish.The PharmaCoNER task focuses on pharmacological compound mentions in Spanish clinical texts, promoting the development of biomedical text mining tools for non-English data . Track 1 involves the recognition and classification of entity mentions into upper-level ontological categories (chemical, protein, etc.) , and Track 2 the normalization (grounding) of these mentions to identifiers in external resources. We participate in Track 1.We participate in the PharmaCoNER task using a collection of tools developed for English as well as out-of-domain multilingual models. In particular, we use a freely available NER toolkit, NERsuite, tailored for English biomedical literature and a multilingual neural model, BERT, pretrained on general domain Wikipedia articles. Thus, the emphasis of this work is on analyzing how well such tools can be adapted to new languages and domains with minimal effort. We cast the task as sequence labeling using a conventional in-outbegin (IOB) representation of the data for learning and prediction. The used tools are described in detail in Section 3.
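The sketch below shows the IOB encoding used to cast mention detection as sequence labeling; the Spanish example sentence and the NORMALIZABLES label are illustrative, not actual corpus data.

```python
# Convert entity spans into the IOB tags used for sequence labeling.
def to_iob(tokens, spans):
    """spans: list of (start_token, end_token_exclusive, entity_type)."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags

tokens = ["Se", "administró", "ácido", "acetilsalicílico", "al", "paciente", "."]
spans = [(2, 4, "NORMALIZABLES")]          # "ácido acetilsalicílico"
print(list(zip(tokens, to_iob(tokens, spans))))
# [('Se', 'O'), ('administró', 'O'), ('ácido', 'B-NORMALIZABLES'),
#  ('acetilsalicílico', 'I-NORMALIZABLES'), ('al', 'O'), ('paciente', 'O'), ('.', 'O')]
```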
0
Currently a major part of cutting-edge research in MT revolves around the statistical machine translation (SMT) paradigm. SMT has been inspired by the use of statistical methods to create language models for a number of applications including speech recognition. A number of different translation models of increasing complexity and translation accuracy have been developed (Brown et al., 1993) . Today, several packages for developing statistical language models are available for free use, including SRI (Stolke et al., 2011) , thus supporting research into statistical methods. A main reason for the widespread adoption of SMT is that it is directly amenable to new language pairs using the same algorithms. An integrated framework (MOSES) has been developed for the creation of SMT systems . The more recent developments of SMT are summarised by Koehn (2010) . One particular advance in SMT has been the integration of syntactically motivated phrases in order to establish correspondences between source language (SL) and target language (TL) (Koehn et al., 2003) . Recently SMT has been enhanced by using different levels of abstraction e.g. word, lemma or part-of-speech (PoS), in fac-tored SMT models so as to improve SMT performance .The drawback of SMT is that SL-to-TL parallel corpora of the order of millions of tokens are required to extract meaningful models for translation. Such corpora are hard to obtain, particularly for less resourced languages. For this reason, SMT researchers are increasingly investigating the extraction of information from monolingual corpora, including lexica (Koehn & Knight, 2002 & Klementiev et al., 2012 , restructuring (Nuhn et al., 2012) and topic-specific information (Su et al., 2011) .As an alternative to pure SMT, the use of less specialised but more readily available resources has been proposed. Even if such approaches do not provide a translation quality as high as SMT, their ability to develop MT systems with very limited resources confers to them an important advantage. Carbonell et al. (2006) have proposed an MT method that requires no parallel text, but relies on a full-form bilingual dictionary and a decoder using long-range context. Other systems using low-cost resources include METIS (Dologlou et al., 2003) and METIS-II (Markantonatou et al., 2009) , which are based only on large monolingual corpora to translate SL texts.Another recent trend in MT has been towards hybrid MT systems, which combine characteristics from multiple MT paradigms. The idea is that by fusing characteristics from different paradigms, a better translation performance can be attained (Wu et al., 2005) . In the present article, the PRESEMT hybrid MT method using predominantly monolingual corpora (Sofianopoulos et al., 2012 & Tambouratzis et al., 2013 is extended by integrating n-gram information to improve the translation accuracy. The focus of the article is on how to extract, as comprehensively as possible, information from monolingual corpora by combining multiple models, to allow a higher quality translation.A review of the base MT system is performed in section 2. The TL language model is then detailed, allowing new work to be presented in section 3. More specifically, via an error analysis, ngram based extensions are proposed to augment the language model. Experiments are presented in section 4 and discussed in section 5.
0
The functionality of systems that extract information from texts can be specified quite simply: the input is a stream of texts and the output is some representation of the information to be extracted. In the message understanding research promoted by ARPA through its Human Language Technology initiative, the form of this output has been templates (feature-structures), with complex path-names (slots) and various constraints on fillers. The design of these templates, especially considered as concrete data structures, has been determined to some degree at least by considerations having to do with automatic scoring. Beyond that, it has not been made clear what principles have driven or should drive the design of these output forms; but it has become clear that serious defects in the form of the output can undermine the utility of an information extraction system. If the output is unusable, or not easily usable, the breadth and reliability of coverage of the natural language analysis component will be of little value. As part of the DASH research project on Data Access for Situation Handling, we are attempting to elucidate principles of template design and to compile these, with examples, in a manual for template designers. Our methodology has included detailed critical analysis of the templates from a variety of information extraction tasks (MUC-4, MUC-5, Tipster-1, the Warbreaker Message Handling [WBMH] tasks), together with the creation of templates for the TREC topic descriptions and narratives. The design of templates, or more generally, abstract data structures, as output forms for automatic information extraction systems must be sensitive to three different but interacting considerations: (1) the template as a representational device; (2) the template as generated from input; (3) the template as input to further processing, by humans or programs or both. The central consideration in our research is that of the template as a representational device. The problem of template design is a special case of the general problem of knowledge representation. In particular, it is the problem of representing, within a constrained formalism, essential facts about situations in a way that can mediate between texts that describe those situations and a variety of applications that involve reasoning about them. What facts about a situation are essential is determined by a semantic model of the domain, which is in turn motivated by the particular information requirements of the analytical purposes which the extracted information is to serve. This specification could, in principle, be done without any detailed thought given to the nature of the texts from which information is to be extracted; thus it could include information requirements that simply could not be met by the input stream. It might also abstract from information readily transduced from the input stream. Conversely, the domain specification may reveal cases where one must extract information that is not important to the end user in order to disambiguate or otherwise explicate important informational content. Again, the domain model could be specified without any detailed thought given to the design of the concrete syntax of the template. In this latter regard, crucial considerations include intelligibility and 'browsability', together with the utility of the template fills as input to further processing. We here report some results of a program of research aimed at uncovering the underlying principles of template design.
0
According to the World Health Organization (WHO), 20% of children and adolescents in the world have mental disorders or problems (WHO, 2014). Suicide ranks as the second leading cause of death in the 15-29 age group, and every 40 seconds a person dies by suicide in the world. The WHO pointed to early identification and intervention as a key factor in ensuring that people receive the care they need (WHO, 2014). Mental health problems have a strong impact on our society and require the use of new techniques for their study, prevention, and intervention. In this context, text mining tools are emerging as a powerful channel to study and detect the mental state of writers (Calvo and Mac Kim, 2013; Bedi et al., 2014, 2015; De Choudhury et al., 2013a,b; Coppersmith et al., 2015). In particular, there is a growing interest in the study and detection of suicidal ideation in texts coming from social networks. In this line, Tong et al. (2014) and O'Dea et al. (2015) developed automatic detection systems to identify suicidal thoughts in tweets, and Homan et al. (2014) studied the network structure of users with suicidal ideation in a forum. Furthermore, the CLPsych 2016 shared task proposed the triage of posts, based on urgency, from a peer-support mental health forum (for a more exhaustive review see (Calvo et al., 2017)). In the present article, we build an automatic post triage system and compete in the CLPsych 2017 shared task. The automatic detection of suicidal ideation in social networks and forums provides a powerful tool for early intervention in serious situations. Additionally, these techniques allow tracking the prevalence of different suicide risk factors among the population (Jashinsky et al., 2014; Fodeh et al., 2017), which provides valuable information that can be capitalized on for the design of prevention plans. The CLPsych 2017 shared task involves the triage of posts from an Australian mental health forum, Reachout.com, which provides a peer-support online space for adolescents and young adults. Reachout.com offers a space to read about other people's experiences and talk anonymously. Additionally, the forum has trained moderators who intervene in delicate situations, such as when a user is expressing suicidal ideation. There is an escalation process to follow when forum members might be at risk of harm. As the number of forum members increases, reading all posts becomes impossible; thus an automatic triage that efficiently guides moderators' attention to the most urgent posts is essential. The CLPsych 2017 Shared Task consists of labeling each forum post with one of four triage levels: crisis, red, amber and green (in decreasing priority). A crisis label indicates that the author is at risk, so moderators should prioritize this post above all others, while a green label indicates that the post does not require the attention of any moderator. See Milne et al. (2016) for a detailed description of the annotation process and the ethical considerations. The CLPsych 2017 Shared Task dataset consists of 157,963 posts written between July 2012 and March 2017 (see Table 1). Among these posts, 1,188 were labeled by 3 annotators in order to train the model (training set), and 400 were selected to form the testing set. Posts in the training set were written between April 2015 and June 2016, while posts in the test set were written between August 2016 and March 2017. Fifteen teams took part in the CLPsych 2017 shared task, with unlimited submissions per group.
Each post of the dataset contains the text of the subject and the body, structured in XML format. Additional metadata is also provided, such as board, thread, post time, or whether the post was written by a moderator.

Table 1: Training dataset and extra unlabeled dataset statistics. Crisis, red, amber and green are the four triage levels and reflect a decreasing priority of required moderator intervention/response. We had access to the test dataset only after the competition had finished.

        crisis   red   amber   green    total
train       40   137     296     715     1188
test        42    48      94     216      400
extra        -     -       -       -   156375

The official metrics of the task are:
• Macro-averaged f-score: the average f1-score among the crisis, red and amber labels.
• F-score for flagged vs. non-flagged: the average f1-score among flagged (crisis + red + amber) and non-flagged (green) labels. This is considered by the task organizers as the most important metric, given that it measures the system's capability to identify posts that need moderators' attention.
• F-score for urgent vs. non-urgent: the average f1-score among urgent (crisis + red) and non-urgent (amber + green) labels.
The official measures are the f-scores, as accuracy is known to be less sensitive to misclassification of elements in the minority class in highly unbalanced datasets. In this paper, we also analyze the f-score for crisis vs. non-crisis, which measures the system's capability to identify the most serious cases. This competition is a new version of the CLPsych 2016 Shared Task, which had the same goal but a smaller dataset. The different approaches used in the 2016 competition involved a huge variety of features, such as N-grams, lexicon-based features, word embeddings, and metadata. Most of the models extracted features from the content of posts, but only a few authors took advantage of features extracted from the context of the posts, such as n-grams of previous posts of the thread, or previous author's posts (Malmasi et al., 2016; Cohan et al., 2016). In the present work, we extract and test a large variety of new features from both the body of the posts and the context in which the posts occur, such as: (1) authors' history, (2) adjacent posts, and (3) the authors' interaction network. We hypothesize that the contextual features will be useful to capture new elements that allow building a better profile of the author of the posts. This idea is grounded in Van Orden et al.'s (2010) observation that suicidal behavior tends to persist over the lifetime, and also in the studies of De Choudhury et al. (2013b) and Homan et al. (2014), which show that interaction patterns carry valuable information about the underlying mental state of the users.
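The official metrics reduce to (macro-averaged) F1 scores over different groupings of the four triage labels; the sketch below shows one way to compute them with scikit-learn on made-up toy predictions.

```python
# Compute the CLPsych-style F1 metrics from label vectors.
from sklearn.metrics import f1_score

y_true = ["green", "amber", "red", "crisis", "green", "amber"]
y_pred = ["green", "amber", "amber", "red", "green", "green"]

# Macro-averaged f-score over the three flagged classes only (crisis, red, amber).
macro_f = f1_score(y_true, y_pred, labels=["crisis", "red", "amber"], average="macro")

# Flagged vs. non-flagged: collapse crisis/red/amber into a single positive class.
flag = lambda ys: ["flagged" if y != "green" else "green" for y in ys]
flagged_f = f1_score(flag(y_true), flag(y_pred), average="macro")

# Urgent vs. non-urgent: crisis/red against amber/green.
urgent = lambda ys: ["urgent" if y in ("crisis", "red") else "non-urgent" for y in ys]
urgent_f = f1_score(urgent(y_true), urgent(y_pred), average="macro")

print(round(macro_f, 3), round(flagged_f, 3), round(urgent_f, 3))
```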
0
Figure 1: Examples of disambiguating information provided by images for the prepositional phrase attachment of the sentence Mary eats spaghetti with a friend (Gokcen et al., 2018). The proposed models achieve state-of-the-art results on multilingual induction datasets, even without help from linguistic knowledge or pretrained image encoders. Experiments show several specific benefits attributable to the use of visual information in induction. First, as a proxy for semantics, the co-occurrences between objects in images and referring words and expressions, such as the word spaghetti and the plate of spaghetti in Figure 1, provide clues to the induction model about the syntactic categories of such linguistic units, which may complement the distributional cues from word collocation that normal grammar inducers rely on solely for induction. Also, pictures may help disambiguate different syntactic relations: induction models are not able to resolve many prepositional phrase attachment ambiguities with only text - for example, in Figure 1, there is little information in the text of Mary eats spaghetti with a friend for the induction models to induce a high attachment structure where a friend is a companion - and images may provide information to resolve these ambiguities. Finally, images may provide grounding information for unknown words when their syntactic properties are not clearly indicated by sentential context.
0
Dialogue systems based on deep neural networks (DNNs) have been widely studied. Although these dialogue systems can generate fluent responses, they often generate dull responses such as "yes, that's right" and lack engagingness as a conversation partner (Jiang and de Rijke, 2018) . To develop an engaging dialogue system, it is necessary to generate a variety of responses not to bore users.However, dialogue systems that are capable of generating diverse responses are difficult to automatically evaluate. A commonly used evaluation metric is BLEU (Papineni et al., 2002) used in machine translation, which measures the degree of n-gram agreement with the reference response. However, due to the diversity of responses, i.e., the one-to-many nature of dialogue (Zhao et al., 2017) , which means the existence of multiple appropriate responses to an utterance, methods that compare the response to reference responses are not appropriate. Therefore, there is a need for evaluation methods that do not use reference responses, and one of them is supervised evaluation. It trains DNNs using human evaluations of responses generated by humans and models (Zhao et al., 2020; Ghazarian et al., 2019) . DNN-based evaluations correlate to some extent with human evaluations.We aim to develop a dialogue system that is more engaging as a conversational partner by combining independently studied response generation and response evaluation models into a single dialogue system. Specifically, we propose a generator-evaluator model in which multiple responses are generated by the generation model, evaluated by the evaluation model, and the response with the highest evaluation score is selected. By generating multiple responses, we can obtain diverse responses. This can be enabled by the response evaluator that does not require reference responses.Our methods of generating multiple responses include a method with multiple decoding schemes and a method that uses a model that can generate responses with a specified Dialogue Act (DA). Generating responses by specifying various DAs leads to a variety of responses.To evaluate the proposed method, we conducted human evaluation by crowdsourcing to compare the outputs of the proposed system and a baseline system. The evaluation results show that the proposed system outputs better responses, and indicate the effectiveness of the proposed method.We target Japanese dialogue systems and construct datasets of Japanese dialogues.
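A minimal sketch of the generate-then-evaluate loop described above: several candidate responses are sampled from the generator and the one scored highest by the evaluator is returned. The two model paths are placeholders for a trained response generator and response evaluator, not specific released checkpoints, and the decoding settings are illustrative.

```python
# Generate multiple candidates, score them with the evaluator, return the best one.
import torch
from transformers import (AutoModelForSeq2SeqLM, AutoModelForSequenceClassification,
                          AutoTokenizer)

gen_tok = AutoTokenizer.from_pretrained("path/to/response-generator")      # placeholder
generator = AutoModelForSeq2SeqLM.from_pretrained("path/to/response-generator")
ev_tok = AutoTokenizer.from_pretrained("path/to/response-evaluator")       # placeholder
evaluator = AutoModelForSequenceClassification.from_pretrained("path/to/response-evaluator")

def respond(context, n_candidates=8):
    inputs = gen_tok(context, return_tensors="pt")
    outputs = generator.generate(**inputs, do_sample=True, top_p=0.9,
                                 num_return_sequences=n_candidates, max_new_tokens=40)
    candidates = gen_tok.batch_decode(outputs, skip_special_tokens=True)
    enc = ev_tok([context] * len(candidates), candidates,
                 return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        scores = evaluator(**enc).logits[:, -1]   # assumes the last label is "good response"
    return candidates[int(scores.argmax())]
```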
0
Paraphrase generation is an important and challenging task in the field of Natural Language Processing (NLP), which can be applied in a variety of applications such as information retrieval (Yan et al., 2016) , question answering (Fader et al., 2014; Yin et al., 2015) , machine translation (Cho et al., 2014) , and so on.Traditionally, paraphrase generation is usually implemented using ruled-based models (Fader et al., 2014; Zhao et al., 2009) , lexicon-based methods (Bolshakov and Gelbukh, 2004; Kauchak and Barzilay, 2006) , grammar-based methods (Narayan et al., 2016) , statistical machine translation-based methods (Quirk et al., 2004; Zhao et al., 2008) . With the rapid development of deep learning techniques, neural methods have shown great power in paraphrase generation and achieve state-of-the-art results (Gupta et al., 2018; Yang et al., 2019a) . Neural paraphrase generation models usually follow the encoder-decoder paradigm. Given a sentence X, these models generate the paraphrase Y by directly modeling P (Y |X) through a deep neural network. However, deep neural networks are sensitive to domains in general (Stahlberg, 2020) , while existing mainstream paraphrase corpora only cover a few specific domains, such as image caption (Lin et al., 2014) and questions (Fader et al., 2013) . Highquality paraphrases for general domains are difficult to obtain in practice, which greatly restricts the application of these seq2seq models.Benefiting from the rapid development of machine translation technologies, pivot-based methods (Guo et al., 2019; Wieting et al., 2017) have been proposed for paraphrase generation. Formally speaking, pivot-based methods generate the paraphrase by following P (Y |X) = P (Z|X)P (Y |Z), where Z denotes the pivot of X. Existing pivot-based methods all choose Z as representations in a different language, therefore the quality of the generated paraphrases largely depends on the pre-existing machine translation system.Choosing language as pivot has some disadvantages, for example: (1) the pipeline translations may incur semantic shift (Guo et al., 2019) , and (2) machine translation systems are sensitive to domain, and the quality of translating out-of-domain sentences can not be guaranteed.In this paper, we explore the feasibility of using different pivots for pivot-based paraphrasing models, including syntactic representation (Universal Dependencies (McDonald et al., 2013) , UD), semantic representation (Abstract Meaning Representation (Banarescu et al., 2013) , AMR), and latent semantic representation (LSR). Compared with choosing other languages as pivot, choosing syntactic or semantic as pivot is a more direct way, and is less likely to incur semantic shift. Apart from pipeline pivot-based generation, we also investigate how much an end-to-end pivot-based model, which can produce paraphrases in a single step with the help of pivot, affects the quality of paraphrases. In the end-to-end framework, the model directly learns the paraphrasing probability P (Y |X) from text distribution P (X) and P (Y ), pivot distribution P (Z), as well as parallel text-pivot distribution P (Z|X) and P (Y |Z).We conduct experiments on two benchmarks of paraphrasing tasks: Parabank and Quora datasets. We compared in detail the pros and cons of models using different pivots in terms of fidelity, fluency, diversity and so on in the experiments. The results show that using the AMR as the pivot can also produce high-quality paraphrases. 
Besides, the end-to-end framework can reduce the semantic shift when language is the pivot. In sum, the prime contributions of this paper are as follows:
• We explore using syntactic and semantic representations as pivots for pivot-based paraphrasing models, which is a more direct way and less likely to incur semantic shift.
• We also investigate applying an end-to-end paraphrasing model instead of the pipeline framework.
• We conduct experiments on two paraphrasing datasets to investigate in detail the pros and cons of models using different pivots.
• We find that models taking AMR as the pivot can generate better paraphrases compared with taking UD or language as the pivot. The end-to-end framework can also reduce the semantic changes when language is used as the pivot. Besides, several unsupervised pivot-based methods can generate paraphrases as good as the supervised encoder-decoder method, indicating that parallel samples may not be essential to generate high-quality paraphrases.
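A sketch of the pipeline form of the pivot decomposition P(Y|X) = P(Z|X)P(Y|Z), with a linearized semantic representation as the pivot: one seq2seq model parses the sentence into the pivot and a second realizes a paraphrase from it. The model paths are placeholders; no specific text-to-AMR or AMR-to-text checkpoints are implied.

```python
# Two-stage pivot paraphrasing: sentence -> pivot (P(Z|X)), pivot -> paraphrase (P(Y|Z)).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def load(path):
    return AutoTokenizer.from_pretrained(path), AutoModelForSeq2SeqLM.from_pretrained(path)

parse_tok, parser = load("path/to/text-to-amr")       # placeholder checkpoint
gen_tok, generator = load("path/to/amr-to-text")      # placeholder checkpoint

def paraphrase(sentence):
    pivot_ids = parser.generate(**parse_tok(sentence, return_tensors="pt"),
                                max_new_tokens=128)
    pivot = parse_tok.batch_decode(pivot_ids, skip_special_tokens=True)[0]
    out_ids = generator.generate(**gen_tok(pivot, return_tensors="pt"),
                                 do_sample=True, top_p=0.9, max_new_tokens=64)
    return gen_tok.batch_decode(out_ids, skip_special_tokens=True)[0]
```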
0
Text, no matter the length, can potentially convey an emotional meaning. As the availability of digitized documents has increased over the past decade, so the ability and need to classify this data by its affective content has increased. This in turn has generated a large amount of interest in the field of Sentiment Analysis.Typical approaches to Sentiment Analysis tend to focus on the binary classification problem of valence: whether a text has a positive or negative sentiment associated with it. The task of classifying text by its valence has been applied successfully across varying datasets, from product reviews (Blitzer et al., 2007) and online debates (Mukherjee and Liu, 2012) , even spanning as far as the sentiment communicated through patient discourse (Smith and Lee, 2012) . While numerous works concentrate on the binary-classification task, the next logical task in sentiment analysis, emotion classification, can sometimes be overlooked, for numerous reasons.Emotion classification provides a more complex problem than the polarity based sentiment analysis task. While both suffer from the subtleties that the implicit nature of language holds, one of the central reasons for its complexity is that there are a greater number of categories, emotions, in which to undertake classification. Additionally, there is no fixed number of categories, as varying theories of emotion have been proposed, each detailing a slightly different subset of emotions. This paper will provide a general approach to emotion classification, which utilises the lexical semantics of words and their combinations in order to classify a text. We will experiment with our proposed method on the SemEval 2007 Affective Task, proposed by Strapparava and Mihalcea (2007) . The task offered an interesting challenge for sentiment analysis, as little data was given for training, so supervised machine learning approaches that are common to text classification on the whole, were discouraged. This therefore encouraged competing systems to consider the syntax and semantics of language when crafting their approaches to classification. The task was split into two tracks, one for traditional valence classification, and one for emotion classification. Our system experiments with the latter track.The corpus that was compiled for the Affective Task consisted of general news headlines obtained from websites such as Google News and CNN. Whilst a corpus of headlines is not typical for sentiment analysis, this domain was chosen for the task in hand due to the salience of the emotions that are conveyed through the use of only a few thought provoking words. It is usual for sentiment analysis to be carried out on large document sets, where documents may consist of numerous paragraphs, but in the case of this task, sentiment analysis focused on the sentence level.The headlines provided in the corpus were annotated by six independent annotators. Six different emotions that correspond with those proposed in Ekman (1982) were used as the category labels. These six emotions were anger, disgust, fear, joy, sadness and surprise. For each emotional category, the headline was annotated on a fine-grained scale between 0 and 100, dependent upon how strongly an annotator felt that a particular emotion was expressed. For the coarse-grained evaluations of systems, each emotion was mapped to a 0/1 classification, where 0=[0,50] and 1= [50, 100] .The dataset that was released consisted of two sections, a trial set and a test set. 
The trial set consisted of 250 headlines, and the test set, used for evaluating the systems, consisted of 1,000 annotated headlines. A central part of our approach to emotion classification was the use of an appropriate lexicon. Whilst a number of lexica for sentiment analysis exist, such as SentiWordNet (Esuli and Sebastiani, 2006) and AFINN (Hansen et al., 2011), as is the case with most approaches to sentiment analysis, valence is focused on, and emotions unfortunately are not considered. Therefore, in our approach to emotion classification, we use the optional lexicon of emotion-bearing unigrams, WordNet-Affect, provided by the task organisers. This lexicon presents a mapping from emotional terms to the relevant emotional categories that were used to annotate the headlines in the affective task. The WordNet-Affect dictionary alone would not suffice in a classification task from a specific genre of texts, namely headlines. WordNet-Affect contains hypernymic words associated with basic emotional concepts, but does not contain some of the more general emotion-causing lexical items that are associated with headlines, such as war. Due to this, expansion of the lexicon with further emotion-bearing concepts was required. Alongside the expansion of the lexicon, another phenomenon in sentences needed to be taken into account: contextual valence shifters. For example, consider the sentence from the trial data set 'Budapest calm after night of violent protests'. A basic bag-of-words approach to this may view the words (violent, protests) as fear, anger or sadness, whereas the only word that suggests joy is (calm). With a uniform scoring system in place, this headline would be incorrectly classified. To overcome this shortcoming in bag-of-words approaches to classification, sentence-level valence shifters (Polanyi and Zaenen, 2006) are implemented. These influential lexical items act by altering the valence of words around them. The combination of calm after suggests a change in valence of the sentence, and so the phrase night of violent protests is shifted from a negative to a positive valence. To apply this valence shifting technology to emotion classification, we must build upon the hypothesis proposed by Ortony et al. (1988) that emotions are rooted in either a positive or negative valence, and that most words have the capability to shift valence under certain contexts. In the case of this task, we assume only joy to be associated with a positive valence, and the emotions of anger, fear, disgust, sadness and surprise stem from a negative valence. In doing this, we are able to make fine-grained emotional classifications on the headlines. In order to implement the contextual valence shifters, a relevant parser was required that could adequately capture the functionality of valence shifting lexical entities. Combinatory Categorial Grammar (Steedman, 2000) takes advantage of the surface syntax as an interface to the underlying compositional semantics of a language, and therefore is suitable for discovering valence shifting terms. To integrate the CCG formalism into our system, Clark and Curran's (Clark and Curran, 2004) implementation of the parser was used.
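The toy sketch below illustrates the scoring idea: lexicon hits vote for emotions, and a contextual valence shifter redirects negatively rooted evidence towards joy. The miniature lexicon and the single shifter phrase stand in for WordNet-Affect and the CCG-based shifting machinery, and are not the actual resources used.

```python
# Toy lexicon-based emotion scoring with one contextual valence shifter.
LEXICON = {"violent": {"anger": 1.0, "fear": 0.8},
           "protests": {"anger": 0.6},
           "calm": {"joy": 1.0}}
NEGATIVE = {"anger", "fear", "disgust", "sadness", "surprise"}
SHIFTERS = {"calm after"}   # phrases assumed to flip the valence of what follows

def classify(headline):
    tokens = headline.lower().split()
    scores = {e: 0.0 for e in NEGATIVE | {"joy"}}
    shift = any(s in headline.lower() for s in SHIFTERS)
    for tok in tokens:
        for emotion, weight in LEXICON.get(tok, {}).items():
            if shift and emotion in NEGATIVE:
                scores["joy"] += weight     # shifted: negative evidence now supports joy
            else:
                scores[emotion] += weight
    return max(scores, key=scores.get)

print(classify("Budapest calm after night of violent protests"))   # joy
```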
0
As China plays an increasingly important role in the world, learning Chinese as a foreign language is becoming a growing trend, which brings opportunities as well as challenges. Due to the variety of its grammar and the flexibility of its expressions, Chinese Grammatical Error Diagnosis (CGED) poses a serious challenge to both foreign learners and NLP researchers. Unlike inflectional languages such as English, which follow grammatical rules strictly (e.g. subject-verb agreement, strict tenses and voices), Chinese, as an isolating language, has no morphological changes. Various characters are arranged in a sentence to represent meanings as well as the tense and the voice. These features make it easy for beginners to make mistakes in speaking or writing. Thus it is necessary to build an automatic grammatical error detection system to help them learn Chinese better and faster. In the NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis (CGED), four types of errors are defined: 'M' for missing word errors, 'R' for redundant word errors, 'S' for word selection errors and 'W' for word ordering errors. Some typical examples of the errors are shown in Table 1. Different from the two previous editions of the CGED shared task, each input sentence contains at least one of the defined error types. What's more, there can be multiple errors at a single offset of a sentence, which means we can no longer treat it as a simple multi-class classification problem. As a result, we cannot simply rely on existing error detection systems but must seek a new solution. In order to address the problem, we regard it as a sequence multi-labeling problem and split it into multiple sequence labeling problems which only label 0 or 1. To avoid feature engineering, for each error type except 'W', we trained a Bi-LSTM based neural network model, sharing word embeddings and POS tag embeddings. We treat the word ordering error as a special kind of word selection error. They are trained together and separated during the testing phase. Experiments show that joint training is better than separate training. More details are described in the rest of the paper. This paper is organized as follows: Section 2 briefly introduces some previous work in this area. Section 3 describes the Bi-LSTM neural network model we propose for this task. Section 4 demonstrates the data analysis and some interesting findings. Section 5 shows the evaluation results. Section 6 concludes this paper and outlines future work.
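A sketch of what one of the per-error-type binary taggers could look like: a Bi-LSTM over concatenated word and POS-tag embeddings with a per-token sigmoid output (1 means the position is part of an error of the given type). The embedding and hidden sizes are illustrative, not those of the submitted system.

```python
# Bi-LSTM binary sequence tagger over shared word and POS-tag embeddings.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, pos_size, word_dim=100, pos_dim=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)   # shared across error types
        self.pos_emb = nn.Embedding(pos_size, pos_dim)       # shared across error types
        self.lstm = nn.LSTM(word_dim + pos_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, words, pos):            # both: (batch, seq_len)
        x = torch.cat([self.word_emb(words), self.pos_emb(pos)], dim=-1)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h)).squeeze(-1)   # (batch, seq_len) in [0, 1]

model = BiLSTMTagger(vocab_size=20000, pos_size=40)
words, pos = torch.randint(0, 20000, (2, 12)), torch.randint(0, 40, (2, 12))
print(model(words, pos).shape)                # torch.Size([2, 12])
```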
0
Text classification is one of the most fundamental tasks in natural language processing (NLP). In real-world scenarios, labeling massive texts is time-consuming and expensive, especially in some specific areas that need domain experts to participate. Weakly-supervised text classification (WTC) has received much attention in recent years because it can substantially reduce the workload of annotating massive data. Among the existing methods, the mainstream form is keyword-driven (Agichtein and Gravano, 2000; Riloff et al., 2003; Kuipers et al., 2006; Meng et al., 2018, 2019, 2020; Mekala and Shang, 2020; Wang et al., 2021; Shen et al., 2021), where the users need only to provide some keywords for each class. Such class-relevant keywords are then used to generate pseudo-labels for unlabeled texts.
Figure 1: (a) Existing methods do not consider the correlation among keywords, which will generate a wrong pseudo-label for text B. (b) Our method exploits the correlation among keywords by a GNN over a keyword graph, and converts the task of assigning pseudo-labels to unlabeled texts into annotating subgraphs, which leads to much better performance.
Keyword-driven methods usually follow an iterative process: generating pseudo-labels using keywords, building a text classifier, and updating the keywords or self-training the classifier. Among them, the most critical step is generating pseudo-labels. Most existing methods generate pseudo-labels by counting keywords, with which the pseudo-label of a text is determined by the category having the most keywords in the text. However, one major drawback of these existing methods is that they treat keywords independently and thus ignore their correlation. Actually, such correlation is important for the WTC task if properly exploited, as a keyword may imply different categories when it co-occurs in texts with other different keywords. As shown in Fig. 1, suppose the users provide keywords "windows" and "microsoft" for class "computer" and "car" for class "traffic". When "windows" and "microsoft" appear in text A, "windows" means an operating system, and text A should be given a pseudo-label of "computer". However, when "windows" meets "car" in text B, "windows" means the windows of a car, and text B should be given the pseudo-label "traffic". With previous simple keyword counting, text A can get a correct pseudo-label, but text B cannot. Therefore, treating keywords independently is problematic. In this paper, we solve the above problem with a novel iterative framework called ClassKG (the abbreviation of Classification with Keyword Graph), where the keyword-keyword relationships are exploited by a GNN on a keyword graph. In our framework, the task of assigning pseudo-labels to texts using keywords is transformed into annotating keyword subgraphs. Specifically, we first construct a keyword graph G with all provided keywords as nodes, where each keyword node updates itself via its neighbors. With G, any unlabeled text T corresponds to a subgraph G_T of G, and assigning a pseudo-label to T is converted to annotating the subgraph G_T. To accurately annotate subgraphs, we adopt a paradigm of first self-supervised training and then finetuning. The keyword information is propagated and incorporated contextually during keyword interaction. We design a self-supervised pretext task that is relevant to the downstream task, with which the finetuning procedure is able to generate more accurate pseudo-labels for unlabeled texts.
Texts that contain no keywords are ignored. With the pseudo-labels, we train a text classifier to classify all the unlabeled texts. Based on the classification results, we re-extract the keywords, which are used in the next iteration. Furthermore, we notice that some existing methods employ simple TF-IDF-like schemes for re-extracting keywords, which makes the extracted keywords have low coverage and discrimination over the unlabeled texts. Therefore, we develop an improved keyword extraction algorithm that can extract more discriminative keywords to cover more unlabeled texts, with which more accurate pseudo-labels can be inferred. In summary, our contributions are as follows:
• We propose a new framework, ClassKG, for weakly supervised text classification, where the correlation among different keywords is exploited via a GNN over a keyword graph, and the task of assigning pseudo-labels to unlabeled texts is transformed into annotating keyword subgraphs on the keyword graph.
• We design a self-supervised training task on the keyword graph, which is relevant to the downstream task and thus can effectively improve the accuracy of subgraph annotating.
• We conduct extensive experiments on both long text and short text benchmarks. Results show that our method substantially outperforms the existing ones.
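The toy sketch below illustrates the keyword-graph view with the Figure 1 example: keywords are nodes, co-occurrence within a text adds edges, and each unlabeled text induces the keyword subgraph that a GNN-based annotator would then label. The construction details are simplified assumptions, not ClassKG's exact graph-building procedure.

```python
# Build a keyword co-occurrence graph and extract the subgraph induced by each text.
from itertools import combinations
import networkx as nx

keywords = {"windows": "computer", "microsoft": "computer", "car": "traffic"}
texts = ["microsoft released a new windows update",
         "the car windows were covered in frost"]

G = nx.Graph()
G.add_nodes_from(keywords)
for text in texts:
    present = [k for k in keywords if k in text.split()]
    G.add_edges_from(combinations(present, 2))       # co-occurrence edges

def text_subgraph(text):
    """The subgraph G_T induced by the keywords occurring in text T."""
    return G.subgraph([k for k in keywords if k in text.split()])

for text in texts:
    print(text, "->", list(text_subgraph(text).edges()))
```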
0
Many of the world's languages have a rich morphology, i.e., they make use of surface variations of lemmata in order to express certain properties, like the tense or mood of a verb. This makes a variety of natural language processing tasks more challenging, as it increases the number of words in a language drastically; a problem morphological analysis and generation help to mitigate. However, a big issue when developing methods for morphological processing is that for many morphologically rich languages, only few or no relevant training data are available, making it impossible to train state-of-the-art machine learning models (e.g., (Faruqui et al., 2016; Kann and Schütze, 2016b; Aharoni et al., 2016; Zhou and Neubig, 2017)). This is the motivation for the CoNLL-SIGMORPHON-2017 shared task on universal morphological reinflection (Cotterell et al., 2017a), which animates the development of systems for as many as 52 different languages in 6 different low-resource settings for morphological reinflection: to generate an inflected form, given a target morphological tag and either the lemma (task 1) or a partial paradigm (task 2). An example is (use, V;3;SG;PRS) → uses. In this paper, we describe the LMU system for the shared task. Since which method performs best depends on the language and the amount of resources available for training, our approach consists of a modular system. For most medium- and high-resource, as well as some low-resource settings, we make use of the state-of-the-art encoder-decoder (Cho et al., 2014a; Sutskever et al., 2014; Bahdanau et al., 2015) network MED (Kann and Schütze, 2016b), while extending the training data in several ways. Whenever the given data are not sufficient, we make use of the baseline system, which can be trained on fewer instances. While we submit solutions for every language and setting, our main focus is on task 2 of the shared task, and the main contributions of this paper correspondingly address a multi-source input setting: (i) We develop CIS ("choice of important sources"), a novel algorithm for selecting the most appropriate source form for a target tag from a partially given paradigm, which is based on edit trees (Chrupała, 2008). (ii) We propose to cast the task of multi-source morphological reinflection as a domain adaptation problem. By finetuning on forms from a partial paradigm, we improve the performance of a neural sequence-to-sequence model for most shared task languages. Our final methods, averaged over languages, outperform the official baseline by 7.0%, 18.5%, and 16.5% for task 1 and 8.7%, 10.1%, and 10.3% for task 2 for the low-, medium-, and high-resource settings, respectively. Furthermore, our submitted system, a combination of our methods with the baseline system, surpasses the baseline's accuracy on test for both tasks as well as all languages and settings. Differences in performance are between 8.69% (task 1 low) and 17.94% (task 1 medium).
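As a rough illustration of the edit-script idea underlying edit trees, the sketch below represents an inflection as a suffix substitution extracted from an example pair and reapplies it to a new form. This one-level prefix/suffix view is a deliberate simplification of full edit trees and of the CIS algorithm, used only to make the building block concrete.

```python
# Extract a simple suffix-substitution edit script from a (source, target) pair
# and apply it to a new source form.
import os

def edit_script(source, target):
    prefix = os.path.commonprefix([source, target])
    return (source[len(prefix):], target[len(prefix):])   # (suffix to drop, suffix to add)

def apply_script(source, script):
    drop, add = script
    assert source.endswith(drop)
    return source[: len(source) - len(drop)] + add

script = edit_script("use", "uses")       # ('', 's')
print(apply_script("close", script))      # closes
print(edit_script("fly", "flies"))        # ('y', 'ies')
print(apply_script("try", ("y", "ies")))  # tries
```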
0
Chinese sentences are composed of strings of characters without blanks to mark words. However, the basic unit for sentence parsing and understanding is the word. Therefore the first step of processing Chinese sentences is to identify the words (i.e. segment the character strings of the sentences into word strings). Most of the current Chinese natural language processing systems include a processor for word identification. Many word segmentation techniques have also been developed. Usually they use a lexicon with a large set of entries to match input sentences [2, 10, 12, 13, 14, 21]. Very often there are many possible different successful matchings. Therefore the major focus for word identification has been on the resolution of ambiguities. However, many other important aspects, such as what should be done, in what depth, and what is considered to be a correct identification, were totally ignored. High identification rates are claimed to be achieved, but none of them were measured on an equal basis. There is no agreement on to what extent words are considered to be correctly identified. For instance, compounds occur very often in Chinese text, but none of the existing systems except ours pays much attention to identifying them. Proper names are another type of words which cannot be listed exhaustively in the lexicon. Therefore simple matching algorithms cannot successfully identify either compounds or proper names. In this paper, we would like to raise the problems and the difficulties in identifying words and suggest possible solutions.
0
Aligned corpora have proved very useful in many tasks, including statistical machine translation, bilingual lexicography (Daille, Gaussier and Lange 1993), and word sense disambiguation (Gale, Church and Yarowsky 1992; Chen, Ker, Sheng, and Chang 1997). The statistical approach to machine translation (SMT) can be understood as a word-by-word model consisting of two sub-models: a language model for generating a source text segment S and a translation model for mapping S to its translation T. Brown et al. (1993) also recommend using a bilingual corpus to train the parameters of Pr(S|T), the translation probability (TP) in the translation model. In the context of SMT, Brown et al. (1993) present a series of five models of Pr(S|T) for word alignment. The authors propose using an adaptive Expectation-Maximization (EM) algorithm to estimate parameters for lexical translation probability (LTP) and distortion probability (DP), two factors in the TP, from an aligned bitext. The EM algorithm iterates between two phases to estimate LTP and DP until both functions converge. Church (1993) observes that reliably distinguishing sentence boundaries for a noisy bitext obtained from an OCR device is quite difficult. Dagan, Church and Gale (1993) recommend aligning words directly without the preprocessing phase of sentence alignment. They propose using char_align to produce a rough character-level alignment first. The rough alignment provides a basis for estimating the translation probability based on position, as well as limiting the range of target words being considered for each source word. Char_align (Church 1993) is based on the observation that there are many instances of cognates among the languages in the Indo-European family (Figure 1 shows a dotplot of an alignment, displaying only the likely dots which lie within a short distance from the diagonal). However, Fung and Church (1994) point out that such a constraint does not exist between languages across language groups such as Chinese and English. The authors propose a K-vec approach which is based on a K-way partition of the bilingual corpus. Fung and McKeown (1994) propose using a similar measure based on Dynamic Time Warping (DTW) between occurrence recency sequences to improve on the K-vec method. The char_align, K-vec and DTW approaches rely on a dynamic programming strategy to reach a rough alignment. As Chen (1993) points out, dynamic programming is particularly susceptible to deletions occurring in one of the two languages. Thus, dynamic programming based sentence alignment algorithms rely on paragraph anchors (Brown et al. 1991) or lexical information, such as cognates (Simard 1992), to maintain a high accuracy rate. These methods are not robust with respect to non-literal translations and large deletions (Simard 1996). This paper presents a new approach based on image processing (IP) techniques, which is immune to such predicaments.
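A minimal sketch of EM estimation of lexical translation probabilities in the style of IBM Model 1 (distortion is ignored): the E-step distributes each source word's count over the target words of its sentence pair in proportion to the current probabilities, and the M-step renormalizes per target word. The three sentence pairs are toy data, not an aligned bitext from the paper.

```python
# IBM Model 1-style EM for lexical translation probabilities t(s|w).
from collections import defaultdict

bitext = [("the house".split(), "das haus".split()),
          ("the book".split(), "das buch".split()),
          ("a book".split(), "ein buch".split())]

t = defaultdict(lambda: 0.25)                     # uniform initialization

for _ in range(50):                               # EM iterations
    count, total = defaultdict(float), defaultdict(float)
    for src, tgt in bitext:                       # E-step: expected alignment counts
        for s in src:
            norm = sum(t[(s, w)] for w in tgt)
            for w in tgt:
                c = t[(s, w)] / norm
                count[(s, w)] += c
                total[w] += c
    for (s, w), c in count.items():               # M-step: re-estimate t(s|w)
        t[(s, w)] = c / total[w]

print(round(t[("house", "haus")], 2))             # grows towards 1 as EM iterates
print(round(t[("the", "das")], 2))                # "the" increasingly aligns with "das"
```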
0
With the growing popularity of Twitter, sentiment analysis of tweets has drawn the attention of several researchers from both academia and industry in recent times. Unlike other regular texts, sentiment analysis on Twitter text poses plenty of challenges because of characteristics such as (i) under-specificity due to the character limit, (ii) free-form writing such as user-defined hashtags, mentions, and emoticons, and (iii) noisy text due to short forms, long forms, multilingual and transliterated text, and misspellings. Researchers try to address these problems by adopting various methods like task-specific representation learning (Singh et al., 2020; Pham and Le, 2018; Fu et al., 2018; Tang et al., 2016; Kim, 2014), incorporating additional information such as hashtags (Alfina et al., 2017; Qadir and Riloff, 2014), relationships between users (Zhao et al., 2017), multi-source information (Zhou and Huang, 2017), ensembling (Al-Twairesh and Al-Negheimish, 2019; Araque et al., 2017; Wang et al., 2014), etc.

This paper proposes a novel approach to handle the above issues using a heterogeneous multi-layer network representation of a tweet. A multi-layer network is a network formed by connecting different layers of networks; for example, a heterogeneous multi-layer network can be formed by connecting layers of networks of mentions, hashtags, and keywords. Multi-layer networks have been shown to provide promising performance in other tasks like community detection and clustering (Hanteer and Rossi, 2019; Luo et al., 2020), node classification (Li et al., 2018; Zitnik and Leskovec, 2017; Ghorbani et al., 2019), and representation learning in graphs (Cen et al., 2019; Ni et al., 2018). A tweet or a collection of tweets can be represented by a multi-layer network. An advantage of using a network-based representation is that a network can be expanded by adding nodes or shrunk by removing nodes. The motivations for using a multi-layer network in this paper are as follows. (i) The semantic relations between keywords, hashtags, and mentions can be captured by applying an effective network embedding method. (ii) Noise and under-specificity can be reduced by expanding the network with related nodes or by shrinking the network after removing unrelated nodes. Further, co-occurring keywords, hashtags, and mentions often share semantic relationships (Wang et al., 2016; Weston et al., 2014; Qadir and Riloff, 2013; Wang et al., 2011).

This paper has four major contributions. First, it transforms a tweet into a multi-layer network. Second, it proposes a centrality-aware random walk over the multi-layer network. Third, it generates multiple representations of a tweet using the proposed centrality-aware random walk and builds an early-fusion based neural sentiment classifier. Fourth, it also addresses under-specificity and noisy text for sentiment classification by expanding or shrinking the network representing the tweets. Since sentiment classification is a domain-dependent task (Karamibekr and Ghorbani, 2012), we evaluate the proposed method over datasets from different domains. In extensive experimental evaluations, the proposed method is found to outperform its counterparts in the majority of cases. To the best of our knowledge, this study is the first of its kind to investigate the sentiment classification task by transforming tweets into a heterogeneous multi-layer network. The rest of the paper is organized as follows.
Section 2 reviews the literature related to this study. Section 3 presents the proposed framework. The experimental setup is described in Section 4. The results and observations are analyzed in Section 5. Finally, Section 6 concludes the paper.
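To make the tweet-as-network idea above concrete, here is a minimal sketch that assigns the tokens of one tweet to three layers (keywords, hashtags, mentions), links co-occurring tokens, and runs a degree-biased random walk; the layer definitions, the co-occurrence edges, and the use of degree as the centrality measure are illustrative assumptions rather than the exact construction proposed in this paper.

```python
import random
from collections import defaultdict

def build_multilayer_graph(tokens):
    """Assign each token to a layer and connect every co-occurring pair."""
    layer = {t: ("hashtag" if t.startswith("#")
                 else "mention" if t.startswith("@")
                 else "keyword") for t in tokens}
    adj = defaultdict(set)
    for a in tokens:
        for b in tokens:
            if a != b:
                adj[a].add(b)
    return layer, adj

def centrality_biased_walk(adj, start, length=10, seed=0):
    """Random walk that prefers neighbors with higher degree (a simple
    stand-in for whatever centrality measure is actually used)."""
    rng = random.Random(seed)
    walk, node = [start], start
    for _ in range(length - 1):
        neighbors = list(adj[node])
        if not neighbors:
            break
        weights = [len(adj[n]) for n in neighbors]   # degree as centrality proxy
        node = rng.choices(neighbors, weights=weights, k=1)[0]
        walk.append(node)
    return walk

tokens = ["@united", "delayed", "flight", "#angry", "#travel"]   # invented tweet
layer, adj = build_multilayer_graph(tokens)
print(centrality_biased_walk(adj, "#angry", length=5))
```

Walks generated this way could then be fed to a standard embedding method, and the expand/shrink operations mentioned above amount to adding or deleting nodes in `adj` before walking.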
0
Syntactically annotated treebanks are a great resource of linguistic information that is hardly, or not at all, available in flat text corpora. Retrieving this information requires specialized tools. Some of the best-known tools for querying treebanks include TigerSEARCH (Lezius, 2002), TGrep2 (Rohde, 2001), MonaSearch (Maryns and Kepser, 2009), and NetGraph (Mírovský, 2006). All these tools offer great power when querying a single annotation layer with nodes labeled by "flat" feature records. However, most of the existing systems are poorly equipped for applications to structurally complex treebanks, involving for example multiple interconnected annotation layers, multi-lingual parallel annotations with node-to-node alignments, or annotations where nodes are labeled by attributes with complex values such as lists or nested attribute-value structures. The Prague Dependency Treebank 2.0 (Hajič and others, 2006), PDT 2.0 for short, is a good example of a treebank with multiple annotation layers and richly structured attribute values. NetGraph was the tool traditionally used for querying PDT, but it does not directly support cross-layer queries unless the layers are merged together at the cost of losing some structural information. The presented system attempts to combine and extend features of the existing query tools and to resolve the limitations mentioned above. We are grateful to an anonymous referee for pointing us to ANNIS2 (Zeldes and others, 2009), another system that targets annotation on multiple levels.
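As a small illustration of why flat feature records are limiting, the sketch below represents a node with a nested attribute value and a link to a node on another annotation layer, and runs a toy cross-layer query; the attribute names and layers are invented for illustration and do not reflect PDT 2.0's actual schema or the query language of the presented system.

```python
# Toy higher-layer node with a nested attribute-value structure and a
# pointer to its counterpart on a lower annotation layer (both invented).
t_node = {
    "id": "t1",
    "functor": "PRED",
    "gram": {"tense": "past", "negation": "neg0"},   # nested attribute values
    "lower_ref": "a3",                               # cross-layer link
}
a_nodes = {"a3": {"id": "a3", "form": "ran", "afun": "Pred"}}

def cross_layer_query(t_nodes, a_nodes, functor, afun):
    """Find higher-layer nodes with a given functor whose linked
    lower-layer node carries a given analytical function."""
    return [t for t in t_nodes
            if t["functor"] == functor
            and a_nodes.get(t["lower_ref"], {}).get("afun") == afun]

print(cross_layer_query([t_node], a_nodes, "PRED", "Pred"))
```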
0
The popularity of speech synthesis as a topic in natural language processing has significantly increased after the publication of results by DeepMind (van den Oord et al., 2016), Baidu (Arik et al., 2017) and Google (Wang et al., 2017; Shen et al., 2018), demonstrating the ability to create natural sounding speech with neural methods. While the new methods themselves are mostly language agnostic, extending these text-to-speech (TTS) systems to a particular language requires a specialized speech corpus for that language. Training these currently mainstream speech synthesis methods from scratch to adequate quality requires about 30 hours of good quality audio recordings from a single speaker in a noiseless environment, together with an accurate transcription of these recordings. For less resourced languages, obtaining such a corpus is the major obstacle to the development of TTS solutions. In this paper an unsupervised approach to such corpus creation is presented, using ASR and speaker segmentation as its main components.
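A minimal sketch of the kind of pipeline implied here: segment long recordings by speaker, transcribe each segment with an ASR system, and keep material from the speaker with the most usable, confidently transcribed segments; the `diarize` and `transcribe` callables are hypothetical stand-ins for whatever speaker segmentation and ASR components are actually used.

```python
from collections import defaultdict

def build_tts_corpus(recordings, diarize, transcribe, min_confidence=0.9):
    """Collect (audio_segment, transcript) pairs suitable for TTS training.

    `diarize(audio)` is assumed to yield (segment, speaker_id) pairs and
    `transcribe(segment)` to return (text, confidence); both are placeholders.
    """
    corpus = defaultdict(list)            # speaker_id -> list of (segment, text)
    for audio in recordings:
        for segment, speaker_id in diarize(audio):
            text, confidence = transcribe(segment)
            if confidence >= min_confidence and text.strip():
                corpus[speaker_id].append((segment, text))
    if not corpus:
        return []
    # Keep the single speaker with the most usable material.
    best_speaker = max(corpus, key=lambda s: len(corpus[s]))
    return corpus[best_speaker]
```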
0
Humor is an important ingredient of human communication, and every automatic system aiming at emulating human intelligence will eventually have to develop capabilities to recognize and generate humorous content. In the artificial intelligence community, research on humor has been progressing slowly but steadily. As an effort to boost research and spur new ideas in this challenging area, we created a competitive task for automatically assessing humor in edited news headlines.

Like other AI tasks, automatic humor recognition depends on labeled data. Nearly all existing humor datasets are annotated to study the binary task of whether a piece of text is funny (Mihalcea and Strapparava, 2005; Kiddon and Brun, 2011; Bertero and Fung, 2016; Raz, 2012; Filatova, 2012; Zhang and Liu, 2014; Reyes et al., 2012; Barbieri and Saggion, 2014). Such categorical data does not capture the non-binary character of humor, which makes it difficult to develop models that can predict a level of funniness. Humor occurs in various intensities, and certain jokes are much funnier than others, including the supposedly funniest joke in the world (Wiseman, 2011). A system's ability to assess the degree of humor makes it useful in various applications, such as humor generation, where such a system can be used in a generate-and-test scheme to generate many potentially humorous texts and rank them by funniness, for example, to automatically fill in the blanks in Mad Libs® for humorous effect (Hossain et al., 2017; Garimella et al., 2020).

For our SemEval task, we provided a dataset that contains news headlines with short edits applied to them to make them humorous (see Table 1). This dataset was annotated as described in Hossain et al. (2019) using Amazon Mechanical Turk, where qualified human workers edited headlines to make them funny and the quality of humor in these headlines was assessed by a separate set of qualified human judges on a 0-3 funniness scale (see Figure 1). This method of quantifying humor enables the development of systems for automatically estimating the degree of humor in text.

Table 1: Edited headlines from our dataset and their funniness rating. We report the mean of the estimated ratings from the top 20 ranked participating systems (Est.) and its difference from the true rating (Err.).

Our task comprises two subtasks:

• Subtask 1: Estimate the funniness of an edited headline on a 0-3 humor scale.

• Subtask 2: Given two edited versions of the same headline, determine which one is funnier.

Inviting multiple participants to a shared task contrasts with most current work on computational humor, which consists of standalone projects, each exploring a different genre or type of humor. Such projects typically involve gathering new humor data and applying machine learning to solve a particular problem. Repeated attempts at the same problem are rare, hindering incremental progress, which emphasizes the need for unified, shared humor tasks.

Recently, competitive humor tasks including shared data have been posed to the research community. One example is #HashtagWars (Potash et al., 2017), a SemEval task from 2017 that attracted eight distinct teams, where the focus was on ranking the funniness of tweets from a television show. The HAHA competition (Chiruzzo et al., 2019) had 18 participants who detected and rated humor in Spanish language tweets. There were 10 entries in a SemEval task from 2017 that looked at the automatic detection, location, and interpretation of puns (Miller et al., 2017).
Finally, a related SemEval 2018 task involved irony detection in tweets (Van Hee et al., 2018).

Ours is the largest shared humor task to date in terms of participation. More than 300 participants signed up, 86 teams participated in the development phase, and 48 and 31 teams participated, respectively, in the two subtasks of the evaluation phase. By creating an intense focus on the same humor task from so many points of view, we were able to clearly understand how well these systems work as a function of different dimensions of humor, including which type of humor appears easiest to rate automatically.
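To illustrate how the two subtasks relate, the sketch below evaluates a funniness regressor with RMSE in the spirit of Subtask 1 and reuses its scores to pick the funnier of two edited headlines for Subtask 2; the scorer, the example headlines, and the exact output labels are placeholders rather than the official evaluation setup.

```python
import math

def rmse(predicted, gold):
    """Root mean squared error on the 0-3 funniness scale."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(predicted, gold)) / len(gold))

def funnier(headline_a, headline_b, score):
    """Decide Subtask 2 with a Subtask 1 scorer: return the higher-scoring headline."""
    return "A" if score(headline_a) >= score(headline_b) else "B"

# Placeholder scorer; a real system would regress from the edited headline text.
toy_score = len
print(rmse([1.2, 0.4], [1.0, 0.8]))                                    # ~0.316
print(funnier("Bears Hold Town Meeting", "Bears Hold a Bake Sale", toy_score))
```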
0
Consider the following example from the Trains corpus (Heeman and Allen 1995).

Example 1 (d93-13.3 utt63): um it'll be there it'll get to Dansville at three a.m. and then you wanna do you take tho- want to take those back to Elmira so engine E two with three boxcars will be back in Elmira at six a.m. is that what you wanna do

In order to understand what the speaker was trying to say, the reader probably segmented the above into a number of sentence-like segments, utterances, as follows: um it'll be there it'll get to Dansville at three a.m. and then you wanna do you take tho- want to take those back to Elmira so engine E two with three boxcars will be back in Elmira at six a.m. is that what you wanna do

As illustrated above, understanding a speaker's turn necessitates segmenting it into individual utterance units. However, there is no consensus as to how to define an utterance unit (Traum and Heeman 1997). The manner in which speakers break their speech into intonational phrases undoubtedly plays a major role in its definition. Intonational phrase endings are signaled through variations in the pitch contour, segmental lengthening, and pauses. Beach (1991) demonstrated that hearers can use intonational information early on in sentence processing to help resolve syntactic ambiguities. Bear and Price (1990) showed that a parser can use automatically extracted intonational phrasing to reduce ambiguity and improve efficiency. Ostendorf, Wightman, and Veilleux (1993) used hand-labeled intonational phrasing to do syntactic disambiguation and achieved performance comparable to that of human listeners. Due to their significance, we will focus on the task of detecting intonational phrase boundaries.

The on-line nature of spoken dialogue forces conversants to sometimes start speaking before they are sure of what they want to say. Hence, the speaker might need to go back and repeat or modify what she just said. Of course there are many different reasons why speakers make repairs; but whatever the reason, speech repairs are a normal occurrence in spoken dialogue. In the Trains corpus, 23% of speaker turns contain at least one repair and 54% of turns with at least 10 words contain a repair.

Fortunately for the hearer, speech repairs tend to have a standard form. As illustrated by the following example, they can be divided into three intervals, or stretches of speech: the reparandum, editing term, and alteration.

Example 2 (d92a-2.1 utt29): that's the one with the bananas I mean that's taking the bananas

The reparandum is the stretch of speech that the speaker is replacing, and can end with a word fragment, where the speaker interrupts herself during the middle of a word. The end of the reparandum is the interruption point and is often accompanied by a disruption in the intonational contour. This can be optionally followed by the editing term, which can consist of filled pauses, such as um or uh, or cue phrases, such as I mean, well, or let's see. Reparanda and editing terms account for 10% of the words in the Trains corpus. The last part is the alteration, which is the speech that the speaker intends as the replacement for the reparandum. In order for the hearer to determine the intended utterance, he must detect the repair and determine the extent of the reparandum and editing term. We refer to this latter process as correcting the speech repair. In the example above, the speaker's intended utterance is that's the one that's taking the bananas.
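A minimal sketch of the repair structure just described: a repair is represented by its reparandum, optional editing term, and alteration, and correcting it amounts to removing the reparandum and the editing term; the representation is illustrative only, not the annotation scheme or correction algorithm developed later in the paper.

```python
from dataclasses import dataclass

@dataclass
class SpeechRepair:
    reparandum: list      # words being replaced (may end in a word fragment)
    editing_term: list    # e.g. ["I", "mean"], ["um"], or empty
    alteration: list      # the speaker's intended replacement

def correct(words_before, repair, words_after):
    """Recover the intended utterance by removing the reparandum and editing term."""
    return words_before + repair.alteration + words_after

# Example 2 recast in this representation.
r = SpeechRepair(reparandum=["with", "the", "bananas"],
                 editing_term=["I", "mean"],
                 alteration=["that's", "taking", "the", "bananas"])
print(" ".join(correct(["that's", "the", "one"], r, [])))
# -> that's the one that's taking the bananas
```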
Hearers seem to be able to effortlessly understand speech with repairs in it, even when multiple repairs occur in a row. In laboratory experiments, Martin and Strange (1968) found that attending to speech repairs and the content of an utterance are mutually inhibitory, and Bard and Lickley (1997) found that subjects have difficulty remembering the actual words in the reparandum. Listeners must be resolving repairs very early on in processing the speech. Earlier work by Lickley and colleagues (Lickley, Shillcock, and Bard 1991; Lickley and Bard 1992) strongly suggests that there are prosodic cues across the interruption point that hearers make use of in detecting repairs. However, little progress has been made in detecting speech repairs based solely on acoustic cues (cf. Bear, Dowding, and Shriberg 1992; Nakatani and Hirschberg 1994; O'Shaughnessy 1994; Shriberg, Bates, and Stolcke 1997).

Psycholinguistic work on speech repairs and on the implications that they pose for theories of speech production (e.g., Levelt 1983; Blackmer and Mitton 1991; Shriberg 1994) has come up with a number of classification systems. Categories are based on how the reparandum and alteration differ, for instance whether the alteration repeats the reparandum, makes it more appropriate, or fixes an error in the reparandum. Such an analysis can shed light on where in the production system the error and its repair originated. Our concern, however, is with computationally resolving repairs. The relevant features are those that the hearer has access to and can make use of in detecting and correcting a repair. Following loosely in the footsteps of the work of Hindle (1983), we divide them into the following categories: fresh starts, modification repairs, and abridged repairs.

Fresh starts occur where the speaker abandons the current utterance and starts again, where the abandonment seems to be acoustically signaled either in the editing term or at the onset of the alteration. Example 3 illustrates a fresh start where the speaker abandons the partial utterance I need to send, and replaces it by the question how many boxcars can one engine take.

Example 3 (d93-14.3 utt2): I need to send let's see how many boxcars can one engine take

For fresh starts, there can sometimes be little or even no correlation between the reparandum and alteration. Although it is usually easy to determine the reparandum onset, initial discourse markers and preceding intonational phrases can prove problematic.

The second type are modification repairs, which comprise the remainder of repairs with a nonempty reparandum. The example below illustrates this type of repair.

Example 4 (d92a-1.2 utt40): you can carry them both on [reparandum] tow both on the same engine [alteration], with the interruption point (ip) falling between the reparandum and the alteration.

Modification repairs tend to have strong word correspondences between the reparandum and alteration, which can help the hearer determine the reparandum onset as well as signal that a repair occurred. In the example above, there are word matches on the instances of both and on, and a replacement of the verb carry by tow. Modification repairs can in fact consist solely of the reparandum being repeated by the alteration. The third type are the abridged repairs.
These repairs consist of an editing term, but with no reparandum, as the following example illustrates.

Example 5 (d93-14.3 utt42): we need to um manage to get the bananas to Dansville more quickly, where um is the editing term and the interruption point (ip) immediately precedes it.

For these repairs, the hearer has to determine that an editing term occurred, which can be difficult for phrases such as let's see or well since they can also have a sentential interpretation. The hearer also has to determine that the reparandum is empty. As the example above illustrates, this is not necessarily a trivial task because of the spurious word correspondences between need to and manage to.

Phrases such as so, now, firstly, moreover, and anyways can be used as discourse markers (Schiffrin 1987). Discourse markers are conjectured to give the hearer information about the discourse structure, and so aid the hearer in understanding how the new speech or text relates to what was previously said and in resolving anaphoric references (Hirschberg and Litman 1993). Although discourse markers such as firstly and moreover are not commonly used in spoken dialogue (Brown and Yule 1983), many other markers are employed. These markers are used to achieve a variety of effects, such as signaling an acknowledgment or acceptance, holding a turn, stalling for time, signaling a speech repair, or signaling an interruption in the discourse structure or the return from one.

Although Schiffrin defines discourse markers as bracketing units of speech, she explicitly avoids defining what the unit is. We feel that utterance units are the building blocks of spoken dialogue and that discourse markers operate at this level to relate the current utterance to the discourse context or to signal a repair in an utterance. In the following example, and then helps signal that the upcoming speech is adding new information, while so helps indicate that a summary is about to be made.

Example 6 (d92a-1.2 utt47): and then while at Dansville take the three boxcars so that's total of five

The tasks of identifying intonational phrases and discourse markers and detecting and correcting speech repairs are highly intertwined, and the solution to each task depends on the solution to the others. First, intonational phrase boundaries and the interruption points of speech repairs share a number of features that can be used to identify them: there is often a pause at these events as well as lengthening of the final syllable before them. Even word correspondences, traditionally associated with speech repairs, can cross phrase boundaries (indicated with "%"), as the following example shows.

Example 7 (d93-8.3 utt73): that's all you need % you only need one boxcar %

Second, the reparandum onset for repairs, especially fresh starts, often occurs at the onset of an intonational phrase, and reparanda usually do not span phrase boundaries. Third, deciding if filled pauses and cue phrases should be treated as abridged repairs can only be done by taking into account whether they are midutterance or not (cf. Shriberg and Lickley 1993), which is associated with intonational phrasing.

Discourse markers tend to be used at utterance boundaries, and hence have strong interactions with intonational phrasing. In fact, Hirschberg and Litman (1993) found that discourse markers tend to occur at the beginning of intonational phrases, while sentential usages tend to occur midphrase.
Example 8 below illustrates so being used midutterance as a subordinating conjunction, not as a discourse marker.

Example 8 (d93-15.2 utt9): it takes an hour to load them % just so you know %

Now consider the third turn of the following example, in which the system is not using no as a quantifier to mean that there are not any oranges available, but as a discourse marker signaling that the user misrecognized oranges as orange juice.

Example 9 (d93-11.1 utt109-111)
system: so so we have three boxcars of oranges at Corning
user: three boxcars of orange juice at Corning
system: no um oranges

The discourse marker interpretation is facilitated by the phrase boundary between no and oranges, especially since the determiner reading of no would be very unlikely to have a phrase boundary separating it from the noun it modifies. Likewise, the recognition of no as a discourse marker makes it more likely that there will be a phrase boundary following it.

Discourse markers are often used in the editing term to help signal that a repair occurred, and can be used to help determine if it is a fresh start (cf. Hindle 1983; Levelt 1983), as the following example illustrates.

Example 10 (d92a-1.3 utt75): we have the orange juice in two [reparandum] oh [editing term] how many did we need [alteration]

Realizing that oh is being used as a discourse marker helps facilitate the detection of the repair, and vice versa. This holds even if the discourse marker is not part of the editing term, but is the first word of the alteration. Table 1 (frequency of discourse markers in the editing term of speech repairs and as the alteration onset) shows the frequency with which discourse markers co-occur with speech repairs. We see that a discourse marker is either part of the editing term or is the alteration onset for 40% of fresh starts and 14% of modification repairs. Discourse markers also play a role in determining the onset for fresh starts, since they are often utterance initial.

Not only are the tasks of identifying intonational phrases and discourse markers and resolving speech repairs intertwined, but these tasks are also intertwined with identifying the lexical category or part of speech (POS) of each word, and with the speech recognition problem of predicting the next word given the previous context. Just as POS taggers for text take advantage of sentence boundaries, it is natural to assume that tagging spontaneous speech would benefit from modeling intonational phrases and speech repairs. This is especially true for repairs, since their occurrence disrupts the local context that is needed to determine the POS tags (Hindle 1983). In the example below, both instances of load are being used as verbs; however, since the second instance follows a preposition, it could easily be mistaken for a noun.

Example 11 (d93-12.4 utt44): by the time we load in load the bananas

However, by realizing that the second instance of load is being used in a repair and corresponds to the first instance of load, its POS tag becomes obvious. Conversely, since repairs disrupt the local syntactic context, this disruption, as captured by the POS tags, can be used as evidence that a repair occurred, as shown by the following example.

Example 12 (d93-13.1 utt90): I can run trains on the in the opposite direction

Here we have a preposition following a determiner, an event that only happens across the interruption point of a speech repair.
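The sketch below illustrates the two cues just discussed: word correspondences between a candidate reparandum and alteration, and an implausible POS bigram (such as a determiner immediately followed by a preposition) as a hint of an interruption point; both the correspondence score and the list of implausible bigrams are toy stand-ins for the statistical model introduced later, and the tag names are invented.

```python
def correspondence_score(reparandum, alteration):
    """Fraction of reparandum words that reappear in the alteration --
    a crude cue for modification repairs."""
    if not reparandum:
        return 0.0
    alteration_words = {w.lower() for w in alteration}
    return sum(w.lower() in alteration_words for w in reparandum) / len(reparandum)

# Toy POS bigrams that rarely occur except across an interruption point.
IMPLAUSIBLE_BIGRAMS = {("DET", "PREP"), ("PREP", "PREP")}   # illustrative only

def candidate_interruption_points(pos_tags):
    """Flag positions whose POS bigram suggests a possible interruption point."""
    return [i for i in range(len(pos_tags) - 1)
            if (pos_tags[i], pos_tags[i + 1]) in IMPLAUSIBLE_BIGRAMS]

print(correspondence_score(["carry", "them", "both", "on"],
                           ["tow", "both", "on", "the", "same", "engine"]))    # 0.5
print(candidate_interruption_points(
    ["PRO", "MD", "VB", "NNS", "PREP", "DET", "PREP", "DET", "ADJ", "NN"]))    # [5]
```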
Just as there are interactions with POS tagging, the same holds for the speech recognition problem of predicting the next word given the previous context. For the lexical context can run trains on the, it would be very unlikely that the word in would be next. It is only by modeling the occurrence of repairs and their word correspondences that we can account for the speaker's words.

There are also interactions with intonational phrasing. In the example below, after asking the question what time do we have to get done by, the speaker refines this to be whether they have to be done by two p.m. The result, however, is that there is a repetition of the word by, but separated by a phrase boundary.

Example 13 (d93-18.1 utt58): what time do we have to get done by % by two p.m. %

By modeling the intonational phrases, POS taggers and speech recognition language models would be expecting a POS tag and word that can introduce a new phrase.

In this paper, we address the problem of modeling speakers' utterances in spoken dialogue, which involves identifying intonational phrases and discourse markers and detecting and correcting speech repairs. We propose that these tasks can be done using local context and early in the processing stream. Hearers are able to resolve speech repairs and intonational phrase boundaries very early on, and hence there must be enough cues in the local context to make this feasible. We redefine the speech recognition problem so that it includes the resolution of speech repairs and identification of intonational phrases, discourse markers, and POS tags, which results in a statistical language model that is sensitive to speakers' utterances. Since all tasks are being resolved in the same model, we can account for the interactions between the tasks in a framework that can compare alternative hypotheses for the speaker's turn. Not only does this allow us to model the speaker's utterance, but it also results in an improved language model, evidenced by both improved POS tagging and better estimation of the probability of the next word. Furthermore, speech repairs and phrase boundaries have acoustic correlates, such as pauses between words. By resolving speech repairs and identifying intonational phrases during speech recognition, these acoustic cues, which otherwise would be treated as noise, can give evidence as to the occurrence of these events, and further improve speech recognition results.

Figure: Map used by the system in collecting the Trains corpus.

Resolving the speaker's utterances early on will not only help a speech recognizer determine what was said, but it will also help later processing, such as syntactic and semantic analysis. The literature (e.g., Bear and Price 1990; Ostendorf, Wightman, and Veilleux 1993) already indicates the usefulness of intonational information for syntactic processing. Resolving speech repairs will further simplify syntactic and semantic understanding of spontaneous speech, since it will remove the apparent ill-formedness that speech repairs cause. This will also make it easier for these processes to cope with the added syntactic and semantic variance that spoken dialogue seems to license.

We next describe the Trains corpus and the annotation of speech repairs, intonational phrases, discourse markers, and POS tags. We then introduce a language model that incorporates POS tagging and discourse marker identification.
We then augment it with speech repair and intonational phrase detection, repair correction, and silence information, and give a sample run of the model. We then evaluate the model by analyzing the effects that each component of the model has on the other components. Finally, we compare our work with previous work and present the conclusions and future work.
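Schematically, the redefined speech recognition problem sketched above replaces the usual search for the best word sequence with a joint search over the words, their POS tags, and the utterance-level events (intonational boundaries, discourse marker usages, and repair tags); the factorization below is a generic left-to-right decomposition written for illustration, not the exact equations of the model introduced in the following sections.

```latex
(\hat{W},\hat{T},\hat{E}) \;=\; \arg\max_{W,T,E}\; P(A \mid W)\,P(W,T,E),
\qquad
P(W,T,E) \;=\; \prod_{i} P(E_i \mid C_{i-1})\,P(T_i \mid E_i, C_{i-1})\,P(W_i \mid T_i, E_i, C_{i-1})
```

where W is the word sequence, T the POS tags, E the per-word utterance events, A the acoustic signal, and C_{i-1} the preceding context of words, tags, and events.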
0
Improvements in data-driven parsing approaches, coupled with the development of treebanks that serve as training data, have resulted in accurate parsers for several languages. However, portability across domains remains a challenge: parsers trained using a treebank for a specific domain generally perform comparatively poorly in other domains. In English, the most widely used training set for parsers comes from the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993), and constituent parsers trained on this set are now capable of labeled bracketing precision and recall of over 90% (Charniak and Johnson, 2005; Huang, 2008) on WSJ testing sentences. When applied without adaptation to the Brown portion of the Penn Treebank, however, an absolute drop of over 5% in precision and recall is typically observed (McClosky et al., 2006b). In pipelined NLP applications that include a parser, this drop often results in severely degraded results downstream.

We present experiments with a simple self-training approach to semi-supervised parser domain adaptation that produce results contradicting the commonly held assumption that improved parser accuracy cannot be obtained by self-training a generative parser without reranking (Charniak, 1997; Steedman et al., 2003; McClosky et al., 2006b, 2008). We compare this simple self-training approach to the self-training with reranking approach proposed by McClosky et al. (2006b), and show that although McClosky et al.'s approach produces better labeled bracketing precision and recall on out-of-domain sentences, a higher F-score on syntactic parses may not lead to an overall improvement in results obtained in NLP applications that include parsing, contrary to our expectations. This is evidenced by results obtained when different adaptation approaches are applied to a parser that serves as a component in a semantic role labeling (SRL) system. This is, to our knowledge, the first attempt to quantify the benefits of semi-supervised parser domain adaptation in semantic role labeling, a task in which parsing accuracy is crucial.
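A minimal sketch of the simple self-training loop evaluated here: train on the source treebank, parse raw target-domain sentences, add the automatic parses to the training data, and retrain; the `train_fn` and parser interfaces, and the single-iteration default, are placeholders rather than the exact experimental configuration used in this paper.

```python
def self_train(train_fn, labeled_source, unlabeled_target, iterations=1):
    """Generic self-training for domain adaptation of a generative parser.

    `train_fn(treebank)` is assumed to return a parser object with a
    .parse(sentence) method; both are hypothetical stand-ins for a real parser API.
    """
    training_data = list(labeled_source)
    parser = train_fn(training_data)
    for _ in range(iterations):
        # Parse raw target-domain text and treat the output as (noisy) training trees.
        auto_parsed = [parser.parse(sentence) for sentence in unlabeled_target]
        training_data = list(labeled_source) + auto_parsed
        parser = train_fn(training_data)
    return parser
```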
0